AI Poisoning - What It Is, Why It Matters, and What Your Business Should Know
- Campfire
You've probably heard a lot about AI over the past year. Most of it focuses on what AI can do for your business. This article is about something different: what can happen when AI tools are deliberately tampered with, and why it's worth understanding before it affects you.
This is also the topic for security awareness training (SAT) this month. We'd encourage everyone who uses AI tools at work to complete it.
What Is AI Poisoning?
AI poisoning is when someone corrupts the data that an AI tool learns from or references, so that it produces wrong, misleading, or harmful outputs without anyone realizing it.
Most AI tools work by processing large amounts of data to generate responses, summaries, or recommendations. If the underlying data has been manipulated, the tool's outputs can be manipulated as well. The people using the tool see a normal-looking result. The problem is invisible until something goes wrong.
Think of it like contaminating a water supply. The water still comes out of the tap. It still looks clear. You don't know anything is wrong until someone gets sick.
A Real-World Example
Consider a finance team using an AI assistant to review and summarize vendor invoices before approval. The tool has been trained on historical payment data and flags unusual patterns, recommending whether to approve or hold each invoice.
Now imagine that an attacker has fed manipulated data into the system, adjusting what "normal" looks like. The tool starts approving invoices it should flag. Nothing looks unusual in the summary. The payments go through. By the time anyone notices, the money is gone.
This is not a hypothetical. Variants of this attack have been documented across financial services, healthcare, and professional services organizations. The tools look fine. The outputs look fine. The damage is real.
Why the Numbers Are Worth Paying Attention To
AI-related fraud is estimated to have surged by over 1,200% in a single year.
Numbers like that are worth pausing on. The speed of AI adoption has outpaced the security practices around it. Many businesses are using AI tools that access sensitive data without fully understanding what those tools do with it, where that data goes, or whether the underlying models can be tampered with.
This is not a reason to avoid AI. It is a reason to be deliberate about it.
What Makes a Business Vulnerable
A few patterns tend to show up when AI poisoning or AI-related fraud affects a business:
Using AI tools that weren't vetted before deployment. Free or low-cost AI tools often have opaque data practices. If you don't know what data the tool was trained on or who has access to it, you don't know what you're trusting.
Giving AI tools broad access. The more an AI tool can see and act on, the larger the potential impact if something goes wrong. A tool that can read emails, access files, and send messages on your behalf can cause a lot of damage if compromised.
Removing humans from the loop. Agentic AI tools can take actions without approval. When a human reviews outputs before anything happens, there is a natural checkpoint. When actions are fully automated, that checkpoint disappears.
Not keeping track of which AI tools are in use. Individual team members often install AI tools without IT's knowledge. Each one is a potential entry point.
What You Can Do Right Now
You don't need to be a technical expert to take sensible steps here.
Complete this month's SAT. Your team's security awareness training for April covers AI poisoning in detail, with practical examples and guidance for recognizing suspicious AI behavior. It takes about ten minutes and is one of the most relevant modules we've run this year.
Know what AI tools your team is using. Ask around. You may be surprised by what's already running in your business.
Apply the principle of least access. If an AI tool doesn't need access to your email, don't give it access. Limit what each tool can see and do.
Keep a human in the loop for consequential decisions. Any action involving money, client data, or system changes should have a person reviewing the AI's recommendation before it's acted on.
Ask us before you commit. If someone on your team is considering a new AI tool, a quick conversation with Campfire before deployment is a lot easier than trying to fix a problem after the fact.
The Bigger Picture
AI is genuinely useful, and we're not suggesting you step back from it. The businesses that get the most out of AI are the ones that adopt it thoughtfully: understanding what each tool does, limiting its access to what it actually needs, and keeping people involved in decisions that matter.
The goal isn't to avoid AI. It's to use it in a way that doesn't create new problems while it's solving old ones.