OPINION | ChatGPT threatens academic honesty

Thamidul Alam, Staff Writer


In November 2022, OpenAI launched ChatGPT, the newest iteration of the artificial intelligence systems known as large language models. These systems use deep neural networks, a subset of artificial intelligence loosely modeled on how the human brain learns. The networks are trained on vast quantities of text to predict the next word in a sequence. With a seemingly endless amount of training data, these language models can respond to almost any question.

While the model can benefit academia, some may be tempted to use it to avoid doing any work. The program has a simple interface and answers questions based on the information it was trained on. Students can easily insert an essay prompt, generate a paper that addresses it in a professional style and tone, and submit the work as their own.

ChatGPT evades conventional plagiarism checks because it produces original responses rather than direct transcriptions or poor paraphrasing. In essence, the system summarizes in its own words whatever information it can draw on about a given topic.

The Newcomb-Tulane College Code of Academic Conduct establishes that any form of academic cheating, plagiarism or unauthorized collaboration is grounds for an investigation by the university. Depending on the severity of the violation, the current punishments range from warning notices to expulsion and rescission of degrees awarded. While the honor code does not differentiate between aid from a person and aid from artificial intelligence, the university makes it clear that any aid not specifically cleared by course instructors will result in penalties.

School systems across the nation are already aware of the academic dishonesty ChatGPT can facilitate. New York City's Department of Education went as far as to ban the software from its public school network. Even the creators acknowledged these shortcomings, with OpenAI guest researcher Scott Aaronson noting that the company will make it easier for plagiarism detectors to identify whether ChatGPT was used.

Princeton University computer scientist Edward Tian is developing software to precisely detect ChatGPT in written text. GPTZero, as Tian has named it, works by charting how unpredictable a text's vocabulary is and how much its complexity varies from sentence to sentence — variation that AI models struggle to replicate. Human writing tends to fluctuate; AI writing tends toward an even distribution. While it may still be in its beta version, GPTZero's detection is highly accurate, its major limitation being long wait times.

Turnitin, a plagiarism detector widely used by universities, is also testing a detection system. The software works similarly to ChatGPT, but instead of generating text from its training data, it compares submitted writing to AI-generated text. According to the company's vice president of AI, Eric Wang, the detector focuses on the way AI writing selects the most probable next word based on the preceding text.

In terms of artificial intelligence, this is only the tip of the iceberg. Language models will only become more complex, and while the difference between AI writing and human writing may be evident to some now, it will become increasingly blurred.

Once perfected, programs such as ChatGPT could revolutionize how information is spread globally. In its current state, however, the program hinders the promotion of academic honesty on college campuses. When students rely on such programs, their future contributions to their fields of study are diminished. If students decide to use artificial intelligence to author their essays and take their exams, then ChatGPT, not the student, should be awarded the degree.
