I’m starting a series of articles about my experiences with Claude Cowork and the broader realm of AI agents. Before delving into security settings, privacy policies, and other considerations, there’s something crucial to say about AI discussions: there’s no rush to dive in. The implicit pressure to quickly adopt these tools, connect apps, and allow access is immense. The fast-paced tech industry may seem to reward haste, but exploring AI doesn’t mean you must instantly share access to your files or email. It’s perfectly acceptable, even necessary, to remain in a learning phase. On Episode 613 of the Teaching in Higher Ed podcast, Marc Watkins, Assistant Director of Academic Innovation at the University of Mississippi, highlights the importance of nurturing both skepticism and curiosity in the era of AI.
Sam Illingworth, a Full Professor of Creative Pedagogies in Edinburgh, advocates a more deliberate approach in his Slow AI newsletter, emphasizing the importance of discerning when to use AI and when to avoid it. He argues that most AI advice misses the point because it doesn’t question what might be lost. If you feel behind for not integrating AI with your calendar or email, heed Sam’s advice to slow down. Serious security and privacy concerns demand careful attention. He also warns about the risks when organizations roll out AI tools prematurely and call the resulting chaos “adoption.”
Leon Furze, an international consultant, author, and speaker, has developed a comprehensive series on AI ethics. His Teaching AI Ethics project, intended for educators and students, addresses issues like bias, environmental impact, copyright, privacy, human labor, and power. Available as a free ebook, it probes who powers these systems, the environmental costs, and whose work was used without permission. Furze advises against completely avoiding AI due to ethical concerns, urging engagement and experimentation to gain firsthand experience.
Emily M. Bender and Alex Hanna, co-hosts of Mystery AI Hype Theater 3000 and authors of The AI Con, push back against exaggerated claims of AI’s inevitability and power. In Episode 576 of Teaching in Higher Ed, Emily contrasts AI enthusiasts and skeptics, noting that both camps share a belief in AI’s significant impact. Meredith Whittaker, president of Signal, warns about the dangers of granting AI tools extensive access to personal data. Kate Crawford, co-founder of the AI Now Institute, examines the economic forces behind data collection. Kashmir Hill of the New York Times reports on privacy issues, and her book and TED talk offer further insight into the implications of these technologies.
In upcoming posts, I will discuss my considerations regarding what access to grant Claude Cowork and what to keep private, including the implications for personal and professional privacy. It’s essential to remember that decisions about AI adoption don’t have to follow anyone else’s timeline. Given how rapidly AI capabilities are evolving, it’s wise to proceed cautiously, and you’re encouraged to keep some aspects of your life separate from AI. Maha Bali, a Professor of Practice at the American University in Cairo, advises against AI-shaming and emphasizes thoughtful decision-making in education during this AI era. Some professionals, like James Lang (discussed on Episode 529), must remain both curious and skeptical, since their roles require them to engage with AI developments.
Original Source: teachinginhighered.com
