Scholarly Teacher Advises Colleges on Strategies for Ethical AI Use

Todd Zakrajsek, who leads the ITLC Lilly Conferences on College and University Teaching, argues that the biggest challenge in higher education is guiding students to use AI ethically rather than catching them in academic misconduct. Teaching conferences have been consumed by discussions of how to redesign assignments to prevent AI use and how to detect when students rely on it. The emphasis has often fallen on students’ “inappropriate” behavior, while solutions to the underlying issues go overlooked. Here are four suggestions for encouraging students to produce original work, adaptable to a range of institutional and classroom settings.

First, consider the long history of research on academic misconduct. The factors that lead students to cheat, whether by purchasing term papers, copying without citing, or leaning on tools like ChatGPT, have remained consistent over time. Research by Don McCabe and others has shown that academic misconduct increases when students perceive cheating as common, feel pressure to excel, find the work meaningless, or believe they are unlikely to be caught. Addressing these factors, for example by requiring intermediate drafts or technology-free, in-class work phases, won’t eliminate AI misuse, but it can reduce the temptation toward dishonest behavior.

Second, define what constitutes appropriate AI use and communicate those expectations explicitly. While some assignments should be entirely human-driven, in an evolving world some AI use is reasonable. Faculty have long relied on tools such as spell checkers and online search, both early forms of AI; Zakrajsek himself used ChatGPT to locate a specific Don McCabe study while writing his article. It is fair for students to use AI in similar ways, but they need guidance on acceptable levels of use, since faculty thresholds vary. Clear expectations prevent misunderstandings: a past incident at Southern Oregon University, where students misread the collaboration rules on a take-home exam, shows how costly ambiguity can be. Concrete instructions on AI use, such as allowing AI for editing but not for drafting a paper, help students stay within bounds.

Third, require an AI disclosure statement for written work. The statement should explain whether and how AI was used, which in itself discourages inappropriate use. The University of North Carolina at Chapel Hill models this transparency in the other direction: faculty there must disclose to students any AI use in coursework, explaining its role and its limitations.

Finally, occasionally verify the authenticity of student work. AI checkers are not reliable enough to support an accusation of cheating; current detection tools often lack accuracy and disproportionately flag non-native speakers and neurodivergent learners. Still, telling students that periodic checks occur can deter inappropriate AI use, much as random customs checks in many countries deter misbehavior even though most travelers are never searched. The goal is to encourage integrity while acknowledging the limitations of detection technology.

Original Source: scholarlyteacher.com
