Many college instructors face a familiar issue: students can discuss the “main idea” of readings but struggle with specific passages, argument development, or in-depth text engagement. This surface-level knowledge often stems not from actual reading but from AI-generated summaries. Tools like ChatGPT and NotebookLM provide quick overviews of complex texts, which can aid review or clarification but also allow students to avoid the mental effort reading requires. Rather than monitoring or limiting AI use, a simpler approach was tested: redesigning the reading environment to make text engagement more visible and accountable.
Reading has long been seen as an active, interpretive practice essential for discussion, analysis, and writing. However, instructors report increasing difficulty in keeping students focused on assigned texts, a problem exacerbated in AI-heavy classrooms. Students can access summaries and explanations without engaging with the original text’s language or structure. The issue is not merely AI usage but the invisibility of reading itself. If students can engage in class using secondhand text representations, they have little reason to delve deeply into the texts. The goal was to make reading a visible activity again.
The intervention took place in a first-year writing program at an American branch campus in the Middle East, where all students undertake two writing courses focused on critical reading, analysis, and argumentation. Small class sizes (sixteen students) allow instructors to monitor reading practices. Courses cover themes like identity, inequality, or taste and distinction, with readings ranging from scholarly articles to essays. Noticing reliance on AI-generated summaries, the focus shifted to redesigning reading conditions rather than students’ tool use. Although implemented in a writing course, the problem of limited student engagement with readings is widespread in college courses, so transferability was a key consideration.
At the semester’s start, each student received a printed booklet containing all course readings. Instead of PDFs or links, students worked with a physical packet throughout the term. They annotated the readings by hand, highlighting key ideas, underlining claims, and writing questions or comments in the margins. This shift reframed reading as a tangible activity, one less easily interrupted or displaced by screen-based tasks. Research suggests handwriting fosters selective attention and deeper processing, a pattern observed in practice. The booklet became a cumulative record of engagement, frequently referenced during discussions and assignments.
To reinforce active reading, the booklet was paired with an accountability mechanism. Before each class, students uploaded photos of their annotated pages to the university’s learning management system. These submissions were not graded but served as evidence of preparation, much as arriving with notes signals readiness for discussion. Small class sizes made reviewing submissions manageable, and the practice quickly established that reading was expected and visible. The impact was noticeable: students arrived prepared to reference specific passages, ask text-based questions, and support claims with evidence. Because annotations had to be handwritten on the page, relying solely on AI-generated summaries became difficult.
Throughout the semester, students reported reading more carefully and with greater focus. Many noted that printed texts encouraged slowing down and paying attention to language and structure, unlike screen reading. Class discussions became more text-grounded, with students referring to page numbers, quoting directly, and building on each other’s observations instead of relying on general summaries. Written assignments showed clearer textual evidence use and more precise source integration. Reading became a visible component of intellectual work, not just a preparatory task.
The intervention required compiling course readings into a printed booklet, a process coordinated with library staff to ensure all materials were covered under existing licenses or course reserves. Because the readings were already approved for instructional use, consolidating them into a single packet introduced no new copyright issues. Instructors considering a similar approach should consult institutional guidelines, but assembling readings into one format is often both manageable and compliant.
This intervention does not ban AI tools or view them as inherently problematic. Instead, it creates conditions where reading cannot be outsourced. By making text interaction visible and tangible, the booklet restores reading as a site of interpretation, judgment, and meaning-making, forms of engagement that AI-generated summaries cannot replace. The aim is not to revert to print for its own sake but to design pedagogy intentionally. Changing the material conditions of learning alters what students attend to and what they can avoid.
Although developed for a first-year writing course, the intervention is adaptable to any course where reading is vital. It requires no new technology, minimal ongoing labor, and little explanation to students. As generative AI reshapes student engagement with academic work, instructors may not need complex technological solutions. Sometimes, a low-tech redesign suffices to return responsibility for meaning-making to students.

María Pía Gómez-Laich, PhD, an Associate Teaching Professor of English at Carnegie Mellon University in Qatar, teaches and researches academic writing and genre-based pedagogy in higher education.
Original Source: facultyfocus.com
