Jamie Ennis | GNP contributor
Some ECU students are under the impression that they can use ChatGPT to write their course papers and get away with it scot-free.
But instructors in the English department say they know how to get the chatbot to admit it’s the true author of a student’s paper.
Jennifer Sisk, a senior English instructor, says she and her colleagues have found a way to confirm if students used ChatGPT.
Go to the website of OpenAI (openai.com), the chatbot’s maker, and click the link to access ChatGPT. Then, Sisk says, you can ask it, “ChatGPT, did you write this? Can you tell me why you wrote this?” After the question, paste in the text of the item you’re checking. The chatbot will answer “yes” or “no” in a variety of ways.
Many English professors have added a section to their syllabi specifically about academic integrity and how it applies to ChatGPT. Sisk expects more of her colleagues at ECU to update their syllabi over the summer and into the fall semester.
There is no official training on ChatGPT, she adds, but the machine-learning bot and the temptation it poses for students continue to be a huge topic of discussion in the English department.
Over the spring semester, the Greenville News Project (GNP) tested the ability of instructors and the Turnitin plagiarism detector to tell a human-written course essay apart from a ChatGPT-written version of it.
Turnitin is a third-party app embedded in Canvas, the learning management system ECU uses. At first, Turnitin couldn’t detect the chatbot’s work. That changed in early April, when Turnitin upgraded its app to catch students who used the chatbot. GNP found the upgrade worked, sort of. The same went for the instructors it tested.
Inside the chatbot
ChatGPT is a generative artificial intelligence chatbot that OpenAI launched late last year. It’s since taken the world by storm.
OpenAI trains ChatGPT by exposing it to vast amounts of text from the internet and other sources. Humans write the code that tells the chatbot how to learn on its own: it breaks text down into words, sentences and paragraphs, analyzes them for writing styles and meanings, and applies all of that to writing whatever users ask it to.
For college students, ChatGPT is a simple way to get out of the work of researching and writing a course essay themselves.
Writing college-level papers isn’t ChatGPT’s only function. It can also be used for research alone, for composing speeches or for solving math word problems.
“I think it is probably more interesting to look at what ChatGPT can and is doing ‘in the real world’ to get a feel for what students should do with it,” says Kasen Christensen, a Ph.D. student in the ECU English department.
Crossing the line
For most, it seems clear that a ChatGPT-written course essay violates any college’s academic integrity rules. But it turns out that isn’t so clear-cut.
Plagiarism is the easy part. ECU’s Academic Integrity Policy defines it as it’s commonly defined: using someone else’s work but claiming it as your own. ChatGPT is known to plagiarize, and Turnitin is good at catching plagiarism.
The harder part comes with cheating. ECU’s policy defines it as students giving or getting unauthorized help for coursework. One example from the policy is a student who “collaborat[es] on academic work without authorization and/or truthful disclosure.”
The question is whether ChatGPT is considered unauthorized collaboration. Sisk says she might not consider it that way if students acknowledge they used the chatbot.
There are a couple of loopholes. If instructors don’t specifically say in their syllabi that using ChatGPT counts as cheating or plagiarism, students could argue their way out of trouble even if their instructor has proof they used the chatbot.
There’s also the question of originality of work.
ECU’s policy doesn’t specifically address whether it’s OK for a student to type in a quick essay prompt and let ChatGPT do all the work. Its plagiarism policy forbids students from buying or downloading coursework from essay-writing services and passing it off as their own. But it’s not clear that ChatGPT strictly qualifies as that kind of service.
Blame students, not the bot
Right off the bat, ChatGPT caught people off guard. Few were aware of what the AI tool could do. That includes universities.
“There are arguments to be made that using ChatGPT constitutes plagiarism, and there are arguments to be made that it is a tool that can [be] used just like any other tool,” says Christensen. She sees the chatbot’s potential benefits as well as the inevitability that some students will give in to the temptation of using it for coursework.
Christensen says one solution is to teach students how to use ChatGPT as a helpful tool, not a crutch that does all the work.
“Blanket bans on ChatGPT in the classroom might work to get students to not cheat, but stopping the discussion [of it] will not prepare students for the monumental shift [that] AI writing tools are currently making,” she says.
Hard times catching the chatbot
GNP took a 200-word essay that an instructor had written as an example for a class assignment and asked ChatGPT to write its own version.
GNP ran both through Turnitin. Before the April upgrade, Turnitin gave the OK to both the human-written and the chatbot-written essays. After the upgrade, it cleared the human-written essay and flagged the chatbot-written one as “100% AI.”
But Turnitin acknowledged on its website that its AI detector had a problem with “false positives,” flagging human-written work as AI-generated, and that it could also miss AI-written work. Canvas at ECU now carries this notice: “Turnitin has received feedback and concerns from customers, so the AI Detection feature in Turnitin has been disabled for ECU until further notice.”
GNP gave ChatGPT both of the test essays and asked if it wrote them. The chatbot was no help. For the human-written essay, it responded: “Yes, I generated the text you provided.” For the version it wrote, ChatGPT said, “Yes, I did write that response.”
GNP next tried ZeroGPT, an AI-detection app that launched shortly after OpenAI unleashed its chatbot, around the same time Princeton University senior Edward Tian released his own detector, GPTZero. ZeroGPT caught the ChatGPT essay, saying, “Your text is AI/GPT generated: 100% AI GPT.” And it correctly identified the human essay as “human written: 0% AI GPT.”
Of the three ECU instructors GNP tested, two correctly picked out the chatbot-written essay from the pair, but one didn’t.
The English department is on guard
Sisk says she and many of her colleagues took the time to play around with ChatGPT and learn how it works.
“As for what we’re looking for, most of us were testing our assignment prompts to see what it would produce, and how that matches up to previous student work,” she says. “We also had some fun with it and created love poems, student recommendation letters and email responses to test its usefulness in a variety of writing genres.”
Sisk is confident in her ability to tell a student’s work from ChatGPT’s.
“I’ve been teaching a variety of writing intensive classes for 18 years now, so I do feel confident in my ability to point out suspicious work,” she says. “A student’s voice and tone in their writing are typically much different than that from ChatGPT.”
She’s already caught two students in a class of 15 who used ChatGPT. She suspected a third student did too, but she couldn’t verify it.
Ennis reported this story for the Spring 2023 class, In-depth Reporting Capstone.