Since the release of ChatGPT and other generative artificial intelligence tools, colleges and universities around the world have fretted that their students will lose the ability to code, write, and think critically. Many fear that students will rely on ChatGPT as a crutch, passing off AI-generated writing as their own.
In light of this fear, the Haverford Libraries hosted a discussion last week about AI in the College’s classrooms to understand how Fords are actually using the new technology. The panel included students, professors, and an administrator.
“Right now, the AI discourse is pretty polarized and heated,” Research & Instruction Librarian Semyon Khokhlov, who moderated last week’s panel, says. “But this thinking doesn’t get us closer to evaluating the usefulness of AI. Instead, we should be asking, like any tool, how useful is this?”
Panelist Pranav Rane ’25, a computer science and physics double major, finds ChatGPT “extremely beneficial.” He says he uses the software to review concepts he learned in class and help him explore potential topics for his senior thesis. “My academic goal is to learn more about the world around me, and to that extent, I feel that AI has helped a lot more than hurt,” he says.
Fellow panelist and political science and French double major Patrick Kelly ’25 utilizes AI as an advanced search engine for his senior thesis. He says he also uses the software to give feedback on graduate school applications.
Still, neither Rane nor Kelly blindly believes what the software produces. “I err on the side of caution with anything it spits out,” Kelly says. “The sources it spits out might get the year wrong or may not exist at all.”
Those AI-generated falsehoods, called hallucinations, are not uncommon in ChatGPT and other models, prompting Kelly to constantly evaluate AI writing against reality.
One way to forestall these hallucinations, the panelists say, is to change how they prompt ChatGPT. “You do have to be careful, and over time, I’ve learned what questions to ask,” Rane says. “In the back of my mind, I know the questions I shouldn’t ask and I know the examples I should.”
Despite students finding ways to employ ChatGPT as a helpful tool, administrators and professors have reformed their syllabi and departments to prevent plagiarism and cheating. Associate Provost and Associate Professor of Psychology Benjamin Le hasn’t been in the classroom since AI became mainstream, but he does advise professors on its use. In his role, he has observed the impact that ChatGPT has had on academic departments across disciplines.
“It has allowed our faculty to be much more intentional about our learning goals,” Le says. “Every department has learning goals, and we should revisit them now that these tools are available.”
Le views generative AI to be a “great opportunity for us to be reflective” and advises professors on creating policies that are fair to both the students and the professors. His biggest advice? “Be super, super clear about the use cases [of ChatGPT] on any particular assignment.”
A recent Honor Code trial involving AI plagiarism brought new clarity to Le and other administrators on how to address the new issue. “If they’re using ChatGPT to cheat, it’s just like any other form of academic dishonesty, as if they’re copying a paragraph from a journal article,” Le explains.
Associate Professor of Computer Science Sara Mathieson, who also participated in the panel discussion, has been a staunch opponent of AI usage in introductory computer science classes, citing concerns about student learning. “For introductory computer science classes, we’ve always said it’s not allowed,” Mathieson says. “Our motivation is that we thought students were going to use it to solve problems, and we wanted people to go through the experience of working it out on their own.”
Mathieson inputs her own assignments into ChatGPT as part of her vetting process. She’s found that AI-generated code is typically not very intelligible, and the exercise gives her insight into what such code looks like should it appear in a student’s work.
“I play offense and try and make things a little more robust in academic dishonesty,” she says.
Humanities professors, Mathieson adds, have also been changing their assignments in response to AI. She says they often run their assignment prompts through ChatGPT to see what AI-generated responses look like. For both STEM and humanities classes, she concludes, “The harder it is to get something sensible out of ChatGPT, the easier it is to detect student usage.”
However, Mathieson has recently started allowing AI tools in her upper-level classes as long as students use citations. After informally surveying her class, she learned students aren’t using ChatGPT for entire assignments like she once feared, but instead are using it to reinforce learning just like Rane does. “I don’t think students are trying to get a short solution to their assignments,” she says.
One area where both professors and students use ChatGPT is drafting emails. Rane, who is currently reaching out to labs as he applies for doctoral programs, has found value in seeing examples of ChatGPT-written emails. He says, “I’ll take some parts from Chat, but over the course of time, I’m learning how to craft these emails in a respectful way.”
Le is a big proponent of AI-assisted emails and encourages other professors to embrace it as well. “When I talk to faculty skeptical of AI, I ask, ‘How many times have you gotten the overly casual email that’s annoyed you?’ With AI, you wouldn’t have that because they would craft professional emails.”
Even Professor Mathieson uses ChatGPT to write emails, joking, “I no longer know how to spell. Some things I have to look up every single time.”
At its core, the discussion over ChatGPT usage in colleges and universities questions the value of education itself. If AI can write and code, what’s the point of college in the first place?
Kelly believes college cannot be reduced to learning skills that AI might replace. “There’s a very human element of education that will always be there. In college, you’re learning to be a person,” he says. “And being a person requires having the communication abilities, the technical abilities … points I don’t think ChatGPT will ever reach.”
For more about AI’s role at Haverford, read “ChatGPT Doesn’t Have to Ruin College” by Tyler Austin Harper in The Atlantic.