using AI



 

"The real question is not whether machines think but whether men do."

-B.F. Skinner 

 

There are lies, damned lies, and AIs.


As I was grading essays for the class final, two submissions caught my attention. They weren't direct copies of each other, but they were similar enough for my spidey senses to tingle. One thing that threw me was that these two students weren't buddies: they didn't sit together, and they didn't mingle during class breaks (my class was one night a week for three hours, so we had sanity breaks). The plagiarism detector didn't flag anything untoward. Ultimately, after reviewing university policies, I passed them both.

A few months later, the media sparkled with articles about something called "ChatGPT," the latest in artificial intelligence, aka AI. I tried it out and was duly impressed. The text was crisp, the thoughts were well-organized. In fact, the writing was shockingly good. "Our robot overlords have arrived!" I informed my wife.

In addition to writing papers and reports on correlative allocations, groundwater sustainability, and hydraulic conductivities, I also write restaurant reviews for neighborhood newsletters, something I've done for more than 30 years now. I decided to employ ChatGPT to write a first draft of a review of a sushi restaurant I had just visited. I gave ChatGPT a link to the restaurant, some input on style and mood, and set it off to conjure its magic. Within seconds, it returned a well-written and well-organized review. "Darn!" I darned as I read the text, feeling somewhat replaced. But then...

The artificial intelligence folks call them "hallucinations," which is a polite way of saying "making shit up." ChatGPT wrote about how fresh the fish was and how the salty roe added umami to the California rolls. However, there was a problem. The sushi restaurant I visited was vegan: there was neither fish nor roe!

I used ChatGPT a few more times for my restaurant reviews. One thing I noted was that it had a particular writing style. Mind you, it was a good style (as long as you could set aside the truthfulness issues), but a style nonetheless. And then it occurred to me: Those two students who submitted final essays to me had used ChatGPT to write their essays. The style was similar, although the content varied with some overlap. I shook my fist at the sky: "Goldarnit, I've been GPT'd!"

But buyer beware.

I am currently working on a book that includes a number of topics that I am aware of but not an expert on. So I tried ChatGPT once again in the hopes it had improved. It hadn't. I read an article that suggested asking your favorite AI not to lie to you. It's odd to me to treat a research tool as a serial philanderer you are married to: "DON'T YOU DARE LIE TO ME AGAIN!!!" I did start asking it to provide a list of academic references for whatever it told me. But sadly, even here, it made stuff up.

I spent an hour looking for one reference ChatGPT offered as evidence of its truthfulness before concluding that it did not exist. The authors existed, the publisher existed, but the book did not. I searched academic databases, I searched the general internet, I inspected curricula vitae, and I searched publisher catalogs. It. Did. Not. Exist.

In a recent graduate class, I had a student submit his section for the class project. I nodded in approval as it was well written and well organized. As I merged it into the paper for the class project (something we planned to turn into a peer-reviewed journal article for publication), I sought to flesh out the citations to the references the student included. Two of the three didn't exist.

"I'm 97.8% sure you used AI to write your section," I wrote to him. "There are no consequences here except, perhaps, a learning experience. Did you use AI?"

"How did you know?" the student replied. He had run his text through an anti-AI AI checker in an attempt to avoid detection.

These days, I ask ChatGPT to provide references along with links to them. However, the linked references often don't support ChatGPT's conclusions, or have nothing to do with them at all.

At this stage in AI's development, it's key to fact-check everything AI tells you unless you know for a fact it is accurate. The Economist noted that AI is most powerful in the hands of someone who is already an expert. I still use AI, but only in ways where I can verify everything, and mostly for brainstorming. For example, I recently wrote a chapter in a book on evaporation. I broadly know from my schooling what affects evaporation, but I asked ChatGPT to give me its thoughts. It helpfully offered a few considerations I hadn't thought of and even included some real references.

Never, ever, just copy and paste what your favorite AI tells you unless you independently know that it is correct. Check (and read) the references (something you should already be doing).

Also, be careful about using AIs to check how much of your writing was done by an AI. A recent study showed that the more you submit your text to an AI checker, the more it shows up as AI generated, even if the text doesn't change. That's because the AI checker is an AI and it's absorbing your content, just as the Borg in Star Trek ingests species into The Collective. A student recently confirmed this: an AI checker scored her original writing as 25 percent AI-generated, and after she reworked the text, the next check came back at 40 percent.

Perhaps at some point in the future, the AIs will become great researchers. Until then, it's up to you to manage your AI use. Just remember: there are lies, there are damned lies, and there are AIs!
