The influence of artificial intelligence is more consequential than ever. Its widespread presence is reshaping jobs, schools and daily life as we once knew it. But when a tool this powerful makes its way to a college campus, it raises the question: is its use ethical, and if so, to what extent?
In an August update to the Tulane University community, President Michael Fitts outlined Tulane’s plan to “harness the exciting promise AI holds.” This initiative was furthered by the establishment of the Connolly Alexander Institute for Data Science, the Center for Community-Engaged Artificial Intelligence and the Jurist Center for Artificial Intelligence. According to Fitts, all three resources were consciously designed to help students better understand and leverage artificial intelligence in the context of data. Charles Mignot, director of Tulane’s French language program, also said he views AI as a powerful learning tool with unmatched potential, specifically for language acquisition and retention. During a trial writing assignment Mignot conducted with his students, ChatGPT was able to identify proper French verb forms and provide meaningful feedback.
On the surface, students using AI as a resource seems harmless. ChatGPT, an intuitive and widely used platform, offers a seemingly objective way to access almost any information one may need. But while ChatGPT itself is difficult to judge on ethical grounds, how it is used carries real ethical implications.
Regarding those implications, Mignot said he believes the use of AI is “ethical as long as it’s not replacing the goal of the assignments,” which are meant for students to show “reflection of [their] thought process.” In most classes, beyond checking answers, proofreading essays and summarizing readings, ChatGPT’s value for students who use it ethically is relatively limited. So, in practice, student use of ChatGPT often oversteps those ethical boundaries, and ChatGPT becomes a tool to cheat rather than a resource for learning.
Because reliable detection technology is limited, Tulane has no uniform measures in place to identify the use of ChatGPT on assignments. Professors therefore cannot hold students accountable for dishonest use, and many are not yet clear on where ChatGPT use crosses the line into a breach of academic integrity.
As a result, it is up to students to use the tool responsibly. By allowing students this autonomy, however, Tulane violates its duty to both maintain a level playing field and challenge students to think critically.
Perhaps more consequential for society, ChatGPT is capable of reproducing only one homogeneous way of thinking. Professor Shuhua Sun teaches students the importance of having diverse perspectives within an organization. “Without diversity,” he said, “issues like disconnection from the needs of a diverse world, succumbing to groupthink, and limited innovation” may arise.
When students writing a paper, for example, defer to ChatGPT for inspiration to avoid the dreaded writer’s block everyone has faced, they lose the opportunity to exercise their own creativity. There is no longer a need to create, because ChatGPT provides a jumping-off point, the perfect prompt for building a prototypical essay. A student may see this as an ethical way to succeed in a class, but at a macro level, AI does society a disservice by reducing the unique opinions shared within it.
Over time, Tulane must carefully implement AI-related policies so that individuality does not disappear as today’s college students emerge as tomorrow’s leaders. An indifferent computer program, like a calculator, can never replicate the curiosity and inspiration students possess. But through targeted efforts, Tulane can embrace AI to further cultivate the knowledge students seek during their time on campus.