
Oberlin College announced that the 2025–26 academic year has been designated the “Year of AI Exploration,” according to a message from its president, Carmen Twillie Ambar. As part of the initiative, all faculty and staff will receive access to enterprise versions of ChatGPT and Google Gemini this fall, with students gaining access in the spring. These premium versions provide stronger privacy protections, unlimited access to higher-speed models, and expanded capabilities compared to the free versions.
The initiative’s goals include fostering campuswide exploration of artificial intelligence; addressing its academic, ethical, privacy, and environmental implications; and developing clear policies and procedures around its use, according to the letter.
“AI is here,” Ambar wrote. “To ignore it would be to do so at our peril. We should understand its constraints and its challenges, absolutely; and then we must determine how to harness this technology so that it helps Oberlin produce the best critical thinkers, creative problem-solvers, and generous and skilled collaborators.”
Duke launched a similar pilot program with OpenAI in June, offering undergraduates unlimited access to its GPT-4o model. Duke also launched an internal platform, DukeGPT, which gives students access to AI tools the university built using GPT-5, OpenAI’s most advanced model to date.
As part of the initiative at Oberlin, the school’s Center for Information Technology will run guided, four-week AI workshops to help participants experiment with different tools and build practical skills, according to the Oberlin Review.
Faculty, staff, and students will spend the year trying out new AI features, including those built into existing campus software. Oberlin also plans to launch a shared online hub that will centralize AI resources, track ongoing projects, support policy development, and make it easy for community members to share what they learn. Throughout the year, expert-led lectures and discussions on AI will complement this hands-on work, while faculty committees tackle broader issues such as policy, curriculum updates, academic integrity, general education, and AI literacy, according to the Oberlin Review.
Faculty governance groups at Oberlin have been tasked with reviewing policies and curricula to ensure clear guidelines for AI use in coursework and research. In particular, that work will focus on academic integrity. The review will also consider how AI literacy and discipline-specific competencies fit into general education requirements.
Duke has taken a similar stance on academic integrity in its AI rollout. The university reminds undergraduates to uphold the Duke Community Standard when using AI tools and to follow faculty instructions for their coursework. Student leaders have also shared recommendations to help peers use platforms like GPT-4o, urging students to weigh intellectual property concerns, data inaccuracies, biases, inequities, and ethical dilemmas when using AI. “We aim to create a thoughtful environment for AI use, ensuring it enhances learning rather than replacing essential skills like critical thinking and problem-solving,” the guideline reads. “Think of AI as a supportive resource rather than a definitive answer provider.”