Since the release of ChatGPT in November 2022, the Honor and Discipline Committee has seen an uptick in cases involving artificial intelligence (AI) usage.
The large language model-based chatbot ChatGPT, alongside other similar AI programs released in the last year, has upended cheating policies at peer institutions as faculty and administrators struggle to identify instances when students are using the software and wrestle with larger questions about the role of writing in educational practices.
According to an annual Honor and Discipline Committee report, the committee heard three cases in which students were accused of using artificial intelligence in a manner that violated the Honor Code during the 2022–23 academic year. Though the committee could not provide the precise number of AI-related cases it has heard this semester, Sam Sidders ’25, student co-chair of the committee, estimated that such cases now make up between a fifth and a third of all cases the committee has seen thus far.
“My impression of ChatGPT use is that it has now become a part of the mix of our regular plagiarism cases,” said Student Co-Chair Harper Treschuk ’26. She noted, however, that the majority of cases heard by the committee pertain to cheating on exams without the use of AI, not plagiarism.
During a hearing, the accused student and their professor have the opportunity to make their cases by presenting evidence to the Honor and Discipline Committee, which is composed of elected members from each class year as well as non-voting faculty members. A student is found responsible for breaking the Honor Code by a three-fourths majority vote; if a student is found responsible, the committee holds a second student vote to determine the appropriate sanction.
When the committee first started seeing cases about AI last year, Sidders told the Record, it was especially difficult to rule whether accused students had violated the Honor Code because there were few to no policies about AI usage in faculty members’ syllabi at the time and because the committee often struggled to determine whether AI had been used.
Part of the issue, Sidders explained, is that AI tools like ChatGPT can be employed for many different parts of the writing process, and vague policies on syllabi make it difficult to determine whether a student has violated any rules.
“It’s sort of unclear what you can use AI for,” she said. “There’s a difference between someone using it to craft an outline and someone pretty clearly using it to write a whole paper.”
While Sidders acknowledged that many professors adequately updated their syllabi for the fall semester, she said that the committee still often struggles to prove that AI tools have been used at all. Cases must be pre-approved by the committee’s faculty chair, Associate Professor of Philosophy Justin Shaddock, and he has often determined that there is not enough evidence even to hold a hearing, she said. Sidders added that she believes there are many potential cases that faculty and the committee are missing altogether.
When the committee does have to discern whether AI has been used in a student’s work, it often tries to find “silver bullet moments” — passages in submitted prose where an AI model made a glaring error, like a falsified quote or an inaccurate plot summary of a text, that a student likely wouldn’t write themselves, Sidders said.
Students on the committee can only review evidence presented by the faculty member or accused student. Often in hearings, Treschuk said, faculty will highlight these suspected AI “hallucinations” in the student’s work and, in rare cases, present the findings of AI-detecting software to support their case as well.
While the committee now has more experience judging cases relating to AI, guidance on how to determine whether a student has used the software is still not a part of committee training for new members, Treschuk said. “That’s something that I honestly hope could be a part of training going forward.”
AI usage has also been central to the work of the College’s Ad Hoc Committee on Academic Integrity, which was started by Dean of the College Gretchen Long at the end of the 2022–23 academic year following the rise of AI and concerns raised by faculty to Long about sanctions determined by the Honor and Discipline Committee.
“[AI] brought the problem to a whole other level of magnitude, both in detection, prevalence, and capacity,” Professor of Biology Luana Maroja, member of the Ad Hoc Committee on Academic Integrity, told the Record.
Though the committee is composed of only faculty members, including Shaddock and Long, its members have met with students from the Honor and Discipline Committee to learn about their practices and baseline sanctions, Maroja said.
During the next few months, the ad hoc committee plans to hold a series of lunches to gauge faculty perspectives about the current state of the Honor Code, both with respect to AI usage and more generally. The committee is still determining how best to structure meetings with the student body to receive its input as well.