This entry is part of a series discussing my experiences with AI agents, especially Claude Cowork. Before delving deeper, it is crucial to address some fundamental issues. The previous post emphasized the importance of taking things slowly and not rushing into using these tools. This time, the focus is on recognizing actual risks to make a well-informed decision when adopting these technologies. It is essential to understand that there is no need to hurry.
I am not a security specialist, but I have extensively read and contemplated these topics, aiming to share insights from a beginner’s perspective. Let’s consider what could potentially go wrong. This discussion isn’t exhaustive but highlights issues overlooked in conversations about AI tools. Email serves as a significant risk example when using AI that integrates with accounts, as access can cascade. If someone gains access to your email, they could reset passwords for other accounts, potentially locking you out entirely.
For instance, a recent Daring Fireball post by Dave, about a phishing attempt documented by Matt Mullenweg, illustrates how even tech-savvy individuals could be duped if rushed. Email account breaches are a common method for identity theft and account takeovers. Before granting AI tools access to your email, consider the level of access, such as read-only versus full permissions.
There is also a slower, less dramatic risk involving personal data collected by apps, websites, and AI tools. Data brokers gather and sell this information, often without users’ awareness. Many AI tools rely on data as part of their business model. When using free tools, it’s vital to question what data you are exchanging. Anthropic, behind Claude, updated its privacy policy in 2025, now using consumer conversations for model training unless users opt out.
This pattern is prevalent across AI tools. Stanford’s Human-Centered Artificial Intelligence advises caution with AI chatbots. Institutions like Ohio University provide guidance on AI use. Additionally, copyright concerns arise as publishers, like Routledge, license academic content for AI training without authors’ consent. The Authors Guild asserts that AI training rights are not automatically included in publishing agreements.
Other risks include prompt injection, where hidden instructions in documents or webpages could lead AI to unintended actions. Data breaches at AI companies could expose stored conversations. Deleting these conversations can mitigate some risks. Surveillance creep is another concern, as AI tools connected to personal accounts create detailed profiles of users, which may be accessed by others through various means.
While not all risks are covered, these points reflect considerations for determining personal risk profiles with AI tools. For further exploration, resources like the Electronic Frontier Foundation provide more information on digital risks.
Original Source: teachinginhighered.com
