Can machines think?
Proposed by British computer scientist Alan Turing, it is arguably the most famous opening line of an academic paper and a question that continues to be debated 70 years after it was written. Titled “Computing Machinery and Intelligence,” Turing’s paper laid the philosophical foundation for what many today believe, for better or worse, will dramatically change society: artificial intelligence.
On these grounds, the prompt Tulane University professor and author Walter Isaacson gave to students of his “Digital Revolution” class made perfect sense: “Describe the development of artificial intelligence from Turing to large language model (LLM) chatbots.”
It was a seemingly straightforward assignment, complicated by one condition.
“Take that prompt,” said Isaacson, pointing to the projector, “and put it into any chatbot.”
“Make it so it comes up to about 3,000 words,” he said. Most chatbots give a maximum response of about 800 words. “I want you to put where you used your intuition … where can you be creative?”
Isaacson encouraged students to make their papers as unique as possible, as long as they stayed informative. The grade would be divided between the quality of the final paper and how students explained their thought process.
Hands shot up in a packed Herbert Hall lecture room. Could students use multiple chatbots? How many iterations would it take? How would they explain how they got their final product?
“Use your judgment,” Isaacson replied again and again, with one exception. He asked students to track down and cite their sources, which most chatbots do not provide outright. “It is a history class,” said Isaacson, “not a class on how to cheat and write a paper.” He promised extra credit to any student who spotted a hallucination — a poorly understood failure mode in which a chatbot generates incorrect information and presents it as if it were true.
It was the first stage of a semester-long class experiment, where students took 15 minutes at the end of each class to share their progress on their final papers.
“Isaacson kind of left it open to us,” said William Bai, a senior studying cell and molecular biology.
“One of the essences of the digital age is it’s about collaboration and peer-to-peer sharing,” said Isaacson, an acclaimed biographer who spent years profiling tech titans like Elon Musk and Steve Jobs.
Generative AI explodes
Turing’s seminal work judged a machine’s intelligence by its ability to hold a conversation indistinguishable from a human’s. He believed machines could “think” if they passed this test. Some researchers argue that large language models — the form of generative AI that powers modern chatbots — have now passed what is today dubbed the “Turing test.”
Trained on massive amounts of text from the internet — hundreds of billions of words — LLMs learn the statistical patterns of language. By predicting which words are likely to follow a user’s prompt, they can generate human-like conversation.
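Production LLMs are deep neural networks with billions of parameters, but the underlying idea — count which words tend to follow which, then sample likely continuations — can be sketched in a few lines. The toy bigram model below is purely illustrative (the corpus, the counts and the generate function are stand-ins, not anything a real chatbot uses):

```python
import random
from collections import Counter, defaultdict

# Toy illustration of the statistical idea behind language models:
# learn which words tend to follow which, then sample likely next words.
# Real LLMs use neural networks trained on hundreds of billions of words.
corpus = ("can machines think . machines can learn patterns . "
          "machines can generate text . patterns help machines think .").split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8):
    """Extend a prompt by repeatedly sampling a likely next word."""
    out = [start]
    for _ in range(length):
        counts = follows[out[-1]]
        if not counts:
            break
        words, weights = zip(*counts.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("machines"))  # e.g. "machines can generate text . patterns ..."
```

Scaled up by many orders of magnitude, with a neural network standing in for the word-pair counts, the same predict-the-next-word loop is what lets a chatbot hold a conversation.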
When Microsoft-backed OpenAI launched ChatGPT in November of 2022, educators around the country sounded the alarm. Students drooled at a new frontier of cheating, while English teachers prepared for unemployment.
Despite many high schools banning ChatGPT, software designed to detect chatbot use has proven unreliable. Since ChatGPT’s launch, a horde of new chatbots has reached the public: Musk has Grok, Google has Gemini and Meta has Llama.
Outside big tech, startups like Anthropic and Perplexity have also joined the scene. Their influence is growing too — evidence of chatbot use has seeped into the abstracts of academic journals, and author Rie Kudan won Japan’s highest literary award after using a chatbot to write a portion of her book.
As generative AI’s potential to shift the dynamics of higher education and writing grows, only one in five provosts around the country say they have published a policy governing the use of AI. In-class exams and spoken-word assignments are possible avenues to address increasing chatbot use, but questions remain about whether to dam its inevitable surge or open the floodgates.
“This is a moment of experimentation,” said Tulane Provost Robin Forman. “We’re trying to learn what works best and what doesn’t.” In addition to establishing committees for AI in education and AI in research, Forman has encouraged faculty to play around with chatbots while doing so himself.
His goal, he said, is to find the right balance between teaching students to use generative AI and maintaining the core values of a liberal education, such as developing critical thinking skills.
“They will be graduating into a world where in almost every industry, that will be either a requirement or an expectation, or at least an advantage,” Forman said.
Learning the ropes
William Bai had read many of Isaacson’s books since high school. For him, taking the class was a no-brainer.
In his final paper, he used ChatGPT to build an outline, then turned to Gemini to write in-depth summaries of each section.
“I thought that Gemini was better at explaining things more clearly,” Bai said. “It was also able to give citations for everything that I was mentioning.”
Bai is set on attending medical school but felt it was essential to become familiar with the technology.
“If you’re not at the table, you’re on the menu,” he said, reflecting an attitude that has fueled his hometown of Silicon Valley. “You have to be part of the people who are driving the development of AI, especially the integration of AI into medicine.”
Bai wants to continue learning about how AI can replace the roles of physicians, but also where it can help automate some of the grunt work like processing medical records.
Meanwhile, sophomore Keona Patel used the opportunity to criticize existing LLMs.
“I think it’s cool that [Isaacson is] giving us free rein to use AI,” she said. But she was quick to point out what she believed were its potential pitfalls. LLMs are typically trained on a wide range of data, much of which contains historical biases. As AI becomes more personalized, she believes chatbots can harm marginalized groups by replicating the same biases they are trained on.
“Not understanding the range of humanity and how it exists can make it exploitative or not make it the most efficient tool for certain people,” Patel said.
Top computer scientists’ attempts at addressing this pitfall have so far proved unsuccessful. When Google launched its image-generation AI tool in February, many users reported bizarre depictions of the U.S. founding fathers or Nazi soldiers as people of color, and Google faced backlash for an overcorrection of bias.
“It was, I believe, well-intentioned,” said Forman, “but it just shows how hard the problem is.”
For her final paper, Patel will use Google’s NotebookLM — a relatively obscure AI tool that allows users to input sources manually before asking the chatbot questions. She plans on launching an educational technology company one day and thinks the project will aid her pursuit.
“I’d love to do something the way that Khan Academy is breaking into creating accessible access to education across the world,” she said.
Patel was in the crowd when Sal Khan, the founder of Khan Academy and a New Orleans native, discussed AI with Isaacson during the New Orleans Book Festival at Tulane University in March.
Last year, Khan Academy launched Khanmigo, an AI-powered educational tool built upon ChatGPT designed as a personal tutor for K-12 students. In McAlister Auditorium, Khan showed how Khanmigo could guide students through math problems, papers and book reports, even posing as literary characters like Jay Gatsby with whom students could chat.
Both Forman and Khan recognize the high potential for generative AI to provide students with real-time feedback. Instead of waiting for a professor to grade their paper or practice problems, students could engage in dialogue with chatbots.
“Imagine,” Khan said, “if the university had the resources, so that every paper you wrote, you were assigned a grad student that just hung out with you and was just there for you while you’re writing the paper.”
He also said if students are required to write some papers through a program similar to Khanmigo, it may reduce cheating.
“If you submit it through the AI, the AI is going to say, oh, yeah, here’s the paper,” Khan said. “And by the way, we worked on it together for four hours. Yeah, this is if you want, here’s a whole transcript of us working together on it. Here’s a summary.”
There could even be a database that gives an entire history of a student’s high school or college work done with an AI, he said, used for future admissions or hiring processes.
Patel was skeptical. “Instead of submitting a resume or an application to a university, you’d submit your database and how that summarizes what this person is capable of,” she said. “If you don’t take into account systemic disadvantages, then it’s going to be very clearly skewed towards people.”
Beyond personalized learning, Khan also discussed the potential to free up educators’ time. Khan believes AI could address teacher shortages by automating tasks like grading and lesson planning, allowing for increased teacher-student interaction. In higher education, this may translate to professors dedicating more energy to research.
“If we can automate the things that can be automated to free up people’s time, to better support the work of the university, I think that’s a win-win,” said Forman.
Eight million subscribers later, Khan is still making educational videos. He finds chatbots useful in clarifying certain edge cases for students.
“I can find 50 articles on a topic,” he said, “but none of them answer my question.”
He sometimes enters into a Socratic dialogue with Khanmigo that helps him zero in on concepts much quicker, like the difference between a homogeneous mixture and a solution for a high school biology lesson.
Back in Herbert Hall, Turing’s question — “Can machines think?” — morphed into a new one: Will machines replace human thinking or enhance it?
Just the beginning
“Every generation gets a new technology,” Isaacson said in his office, pointing to his father’s slide rule, then to his Macintosh, Apple’s first mass-market personal computer with a graphical user interface.
The printing press and later the internet allowed for the widespread dissemination of information. Among numerous societal impacts, this decentralization coupled with less grunt work has leveled the playing field between individuals and larger institutions, allowing culture, research and innovation to flourish.
However, the recent rise of certain technologies, including what’s in most Americans’ pockets, is seen by some as a direct detriment to attention spans, public discourse and mental health. Generative AI may take both aspects a step further. Today, Martin Luther’s “95 Theses” is not only accessible in seconds, it can also be summarized in seconds, then turned into rap lyrics and spewed out to sound like Drake.
A former math teacher who wields dual Ivy League degrees in the subject, Forman remembers when the calculator first appeared in classrooms 40 years ago, allowing teachers to write more complex problems for students.
“We could go more deeply because we weren’t constrained by the human ability to do arithmetic,” he said.
Chatbots can be used in a similar vein to write papers, rescuing students from the drudgery of grammar and endless web searching. While writing his final paper, senior Connor Hogan said he copied and pasted an outline created with ChatGPT into another chatbot, Anthropic’s Claude, which wrote 1,600 words based on the outline. From there, Hogan essentially “injected steroids” into certain sections, asking for an additional 200 words about the future of AI in the workforce, or for background on the first computer programmer, Ada Lovelace.
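Hogan’s approach amounts to what practitioners call prompt chaining: feeding one model’s output into the next prompt. Below is a minimal sketch of that loop, where ask() is a hypothetical placeholder for a real chatbot API call — the model names and prompts are illustrative, not his exact ones:

```python
def ask(model: str, prompt: str) -> str:
    """Hypothetical stand-in for a chatbot API call; returns canned text."""
    return f"[{model}'s response to: {prompt[:48]}...]"

# Step 1: get an outline from one model.
outline = ask("chatgpt", "Outline a 3,000-word history of AI from Turing to LLMs.")

# Step 2: have a second model expand the outline into a draft.
draft = ask("claude", f"Write about 1,600 words following this outline:\n{outline}")

# Step 3: targeted expansions ("steroids") for sections that feel thin.
for topic in ("the future of AI in the workforce",
              "Ada Lovelace, the first computer programmer"):
    draft += "\n" + ask("claude", f"Add roughly 200 words on {topic}.")

print(draft)
```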
The essay was long enough, but he was not satisfied.
“While you’re writing, you’re thinking about the argument that you’re trying to convince the reader of, and [keeping] them engaged.” But similar to how a bodybuilder looks after years of injecting steroids, Hogan felt the strategy yielded more filler than substance. “It’s like show-muscles,” he said. “The machine is just processing your request, pumping some more words.”
So he went back to the outline and used the chatbot as a co-pilot to guide him through an explanation of each decade of AI development, shifting his attention to the way he structured each successive prompt.
“[It] was just so impressive at putting what looks like unstructured thoughts into something that’s really organized and coherent,” he said.
The value of calculators in the classroom is still debated, and though not a direct comparison to generative AI, Forman believes it may provide a cautionary tale. “I talk to people all the time who haven’t multiplied or divided any numbers in years … if you’re getting to rely on a tool, you’ve lost the ability to do that work yourself.”
As Hogan and others found out, chatbots are nowhere close to writing a coherent, well-structured paper on their own. Using a calculator requires at least knowing when and how to add, subtract, multiply or divide; to use ChatGPT to write a paper, “one must know what a good paper looks like,” said Forman.
Chatbots also fabricate or misattribute sources. One student caught a chatbot citing a fake Harvard Business Review article, a catch presumably worthy of extra credit.
When late April came and the class was asked whether they had learned more about the evolution of AI using chatbots than they would have through traditional methods, only a few hands went up. The number doubled when students were asked whether the exercise had helped them develop useful techniques for writing a paper.
Though the value of knowledge versus skill depends on who’s asked, no one could dispute the diversity of papers turned in.
A portion of the class used chatbots to explore areas they were most interested in, diving into the role of women in AI or more technical aspects like transformer architecture. Others chose creative routes, experimenting with poetry, framing it as a Western novel or cosplaying different figures in the digital revolution, like Steve Jobs or the Unabomber. One student wrote the essay as if it were ChatGPT telling a bedtime story to a robot child 3,000 years in the future; another duo collaborated to write an SNL skit.
“At first it was honestly really bad. It wasn’t funny at all, super dry humor,” said Ellianna Bryan, a senior studying marketing. Then Bryan and Lucie Stern, another senior, prompted the chatbot to assign famous cast members to major historical figures; for example, Kenan Thompson played Alan Turing. After countless rewrites, the script emerged as almost Lorne Michaels-worthy.
“They would bring in interviewers from each point in history and ask them what the state of the digital technology world looks like,” Bryan said. “They were just bickering back and forth the entire time … it was really funny.”
Isaacson plans to create a database of his students’ work to look back on for future research.
In the SNL skit, when cast members were asked whether or not AI would render human workers obsolete, the chatbot generated the following: “The group falls silent, pondering the implications.”
As LLMs are trained with ever more processing power and scour the internet for more data, they’ll require less input and oversight from the user. Among countless existential questions, whether they will buttress or erode the pillars of liberal education will unfold in the same fashion as Isaacson’s paper assignment: like an experiment.
“Now we get to play with it,” said Isaacson. “Because that’s the core of this question with AI. Will it be our partner or will it try to replace us?”