When speed and volume replace judgment, clarity, and accountability.
We need to talk about AI slop (sometimes called workslop).
Definition: AI slop is content generated by artificial intelligence that looks polished on the surface but is shallow, inaccurate, or misleading underneath. It’s the slide deck that repeats clichés without substance. The sales email full of hallucinated statistics. The internal memo that confuses more than it clarifies.
It’s not obviously bad work — that’s what makes it dangerous. It’s plausible, professional-sounding, and therefore more likely to slip into circulation unchecked. And every time it does, it slows teams down, erodes trust, and quietly taxes productivity.
According to a joint Stanford/BetterUp study (based on a survey of U.S. desk workers), workslop already costs businesses an average of USD 186 per employee per month in wasted effort — the equivalent of over USD 2.2 million annually for a 1,000-person firm (Axios, 2025).
When AI slop hits reality
Legal filings with fake citations
– In the U.S., lawyers were fined after submitting court documents with 55 fabricated citations produced by AI (Reuters, 2025).
– The UK High Court has warned lawyers to stop misusing AI after similar incidents (Guardian, 2025).
Customer service failures
– Air Canada’s chatbot promised refunds outside policy, leaving the company liable (EvidentlyAI, 2025).
– A Chevrolet dealership’s bot was tricked into “selling” a $76,000 SUV for $1 (Prompt Security, 2025).
Engineering errors
– AI coding assistants have been implicated in deleting production data and introducing bugs that humans then spend hours or days repairing (CIO.com, 2025).
In each case, the damage wasn’t caused by malicious intent — but by trusting polished output without proper review.
Why leadership is the biggest source of AI slop
In the Stanford/BetterUp workslop survey, ~16% of respondents reported receiving workslop from managers or senior leaders. When low-quality AI output flows downward, it signals a tacit preference for speed over rigor.
The main drivers:
- Top-down pressure without standards → Leaders mandate "use AI" but don't define review processes.
- Skipping quality checks → Executives forward AI-drafted memos or decks assuming they're "good enough."
- Lack of AI literacy → Many leaders don't fully grasp hallucination risks or prompt design basics.
- Rewarding volume over impact → If the metric is "deliverables," AI will churn them out — slop included.
The hidden cost of slop
| Dimension | Cost |
| --- | --- |
| Productivity | ~USD 186 per employee per month wasted cleaning up slop (Axios, 2025) |
| Time | ~2 hours per incident fixing AI-generated mistakes (AllWork, 2025) |
| Morale | 50% of workers report feeling annoyed, and 38% confused, when receiving AI slop (Fast Company, 2025) |
| Reputation | Legal sanctions, misleading marketing, brand damage (Air Canada, UK High Court) |
This isn’t a “future risk.” It’s an active drag on productivity today.
How leaders can stop the slop
- Define clear guardrails: Standards for prompts, tone, citations, and "no publish without review" rules.
- Institute review loops: Senior output must always be human-checked before cascading to teams or clients.
- Train leadership in AI literacy: Not just the tools, but the risks — hallucinations, bias, overconfidence.
- Measure clarity, not just volume: Reward communication that informs, not just content that exists.
- Encourage AI humility: Share examples of AI failures openly — to normalise review, not shortcut it.
Final word
AI doesn’t just create faster. It creates plausible. And plausible is often the most dangerous.
The real productivity threat in 2025 isn’t that employees refuse to adopt AI — it’s that leaders are adopting it uncritically, flooding their organisations with polished but empty output.
If you want AI to accelerate your business rather than bury it, start by setting a higher bar for yourself as a leader. Don’t just ask, “Did AI make this faster?” Ask, “Did AI make this clearer, truer, and more valuable?”
Have you seen AI slop creeping into your organisation?
What’s the most surprising example you’ve run into?