As generative AI tools continue to reshape the educational landscape, many university students are eager to learn how to use these technologies responsibly. But according to a recent article by Tim Requarth, director of graduate science writing at NYU and research assistant professor of neuroscience, faculty are lagging behind—often unsure how to adapt traditional teaching methods to meet this rapidly evolving reality.
In large-scale surveys, over 70% of students expressed a desire for formal instruction in AI use, yet only a third of instructors had integrated AI into their teaching. Meanwhile, students are already using these tools widely, with 40–60% reporting weekly use for writing tasks, but often without a clear understanding of best practices.
"These tools are here to stay," one student noted. "Instructors may as well get on board and help us determine how to use them in a reliable and appropriate way."
Requarth emphasizes that while AI can enhance efficiency, unregulated use may impair deeper learning. Drawing on studies in cognitive science and education, he argues that like other assistive technologies—such as GPS or autopilot—AI can diminish skill development if not paired with intentional guidance. For example, one recent study found that students using AI tutoring showed improved practice performance but ultimately retained less knowledge when tested without it.
Despite these concerns, Requarth does not advocate banning AI. Rather, he calls for thoughtful integration, with clear course-wide policies and assignment-specific guidelines. In his own graduate-level writing course, students must submit “AI use statements” with every assignment, disclosing the model used and the nature of the assistance. Certain uses, such as grammar checking or structural comparison, are allowed, while tasks that involve core scientific reasoning or research design are off-limits.
“If you want to build intellectual muscle, you need to do the heavy lifting yourself,” Requarth tells his students, likening AI use to bringing a forklift to the gym.
While his approach works well with motivated graduate students, Requarth acknowledges that conditions vary widely. In some undergraduate settings, instructors have described AI tools as contributing to alarming skill gaps, with students at risk of graduating without essential writing abilities.
Ultimately, he argues, educators must rethink not just policies but the purpose of higher education in an AI-saturated world. Clear rules may offer short-term help, but long-term solutions will demand a shift in pedagogy—one that redefines what skills matter, how they’re taught, and what learning looks like in the age of intelligent machines.
“AI poses a profound challenge,” Requarth writes, “but it’s also an opportunity to build an education system that’s more resilient, more relevant, and better prepared for the future.”
