Let me be clear from the start: I’m not anti-AI. I use it. I appreciate what it can do when deployed intelligently. I’m tech-forward and believe artificial intelligence has legitimate applications that can make businesses more efficient and effective.
But we’ve already hit the tipping point of overuse, and the consequences are starting to show up everywhere.
Does It Really Make You More Productive?
A recent Harvard Business Review study revealed something startling: even as more companies implement AI-driven processes, 95% see no increase in productivity. Think about that. Companies are investing heavily in AI tools, reorganizing workflows around them, and training employees to use them – yet productivity stays flat or even declines. Factor in the cost of that investment, and many of these companies are losing money.
The culprit? What researchers call “workslop” – AI-generated content that fills pages but says nothing meaningful. Or worse, says inaccurate things with complete confidence.
Deloitte just had to refund a significant portion of a $440,000 government contract because their report contained numerous AI-fabricated citations. Someone – likely multiple people – reviewed that report before delivery. Nobody caught the fabrications because nobody actually vetted the AI’s work.
That’s the core problem we’re facing: people are treating AI as a replacement for human judgment rather than a tool that requires human oversight.
AI as Your Brilliant Intern
From the start, I’ve advocated thinking about AI the way I think about a brilliant intern: high IQ, multiple degrees, access to vast amounts of information – but zero street smarts and no real-world experience.
You wouldn’t hand a critical project to an intern and assume the output was correct without review. Yet that’s exactly what’s happening with AI across industries. People are accepting AI-generated work as a finished product rather than a first draft that requires human refinement and verification.
The intern needs supervision. They need context. They need someone with judgment and experience to review their work, identify the gaps, catch the errors, and add the nuance that only comes from actually understanding the problem you’re trying to solve.
AI is no different. It’s a powerful tool for research, initial drafts, data analysis, and pattern recognition. But it requires human intelligence to deploy artificial intelligence effectively.
AI Suspicion Is Real
I’m seeing another dynamic emerging that should concern every business leader: AI suspicion.
Watch any remarkable video on social media and within seconds, you’ll see comments saying “That’s AI.” Sometimes they’re right. Sometimes they’re wrong. But the suspicion is real and growing.
I recently saw a post from an executive coach whose daughter wrote a college paper entirely herself. It scored 70% AI-generated on the professor’s detection tool. The professor’s policy? Anything over 50% gets a zero.
Her daughter spent two days trying to “dumb down” her writing to game the detector. Let that sink in – a college student learning to write worse to avoid being falsely accused of using AI.
This is what happens when we outsource evaluation to algorithms. The detection tools don’t actually detect AI – they detect patterns associated with AI. In practice, that means clear, well-structured writing gets flagged while actual AI-generated garbage slips through because it was prompted to “sound more human.” Remember: AI is an information aggregator, first and foremost. If the detector’s model of typical college writing is junk, then the student who turns in something that’s not junk gets penalized.
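To see why pattern-matching punishes polish, here is a deliberately naive sketch in Python. It is not how any real detector works (commercial tools use statistical signals such as perplexity and token probabilities); the toy metric below scores only sentence-length uniformity, which is enough to show the failure mode: it measures a pattern, not authorship, so tidy human prose scores as “more AI” than choppy prose.

```python
import statistics

def naive_ai_score(text: str) -> float:
    """Toy 'AI detector': returns a 0-1 score based purely on how
    uniform the sentence lengths are. Illustrative only; real
    detectors use very different (but similarly indirect) signals."""
    # Crude sentence split on terminal punctuation.
    normalized = text.replace("!", ".").replace("?", ".")
    sentences = [s.strip() for s in normalized.split(".") if s.strip()]
    if len(sentences) < 2:
        return 0.0
    lengths = [len(s.split()) for s in sentences]
    # Coefficient of variation: low spread = "suspiciously even" writing.
    spread = statistics.pstdev(lengths) / statistics.mean(lengths)
    return max(0.0, 1.0 - spread)

# Polished human prose with even, well-edited sentences...
polished = ("The committee reviewed the budget in detail. "
            "Each department submitted projections on time. "
            "The final numbers matched earlier estimates closely.")

# ...versus uneven, choppy writing.
choppy = ("Budget stuff happened. The committee, after a very long and "
          "frankly exhausting series of meetings, finally signed off. Done.")

print(naive_ai_score(polished) > naive_ai_score(choppy))  # True: polish is penalized
```

The point of the sketch is the incentive it creates: under any detector that rewards irregularity as “human,” the rational move for a student or employee is exactly what that coach’s daughter did – write worse on purpose.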
The same dynamic is playing out in business. Authentic human work is getting dismissed as AI while AI-generated content is accepted as authentic. We’re creating a world where people are incentivized to produce lower-quality work to avoid suspicion.
What Your Empowered Buyers Actually Want
Here’s what this means for business relationships: your customers and prospects are developing sophisticated detection skills for AI-generated content. They can tell when they’re reading generic, AI-produced responses to their inquiries. They can hear the lag in AI phone calls. They recognize template-based outreach that’s been “personalized” by inserting their name and company.
And increasingly, they’re making buying decisions based on whether they feel like they’re dealing with a real person who understands their situation or an automated system that’s trying to appear human.
Today’s empowered buyers – especially younger, tech-savvy decision-makers – have grown up surrounded by AI and automation. They’re not impressed by it. They’re suspicious of it. What impresses them is authentic human expertise, genuine understanding of their challenges, and real intellectual curiosity about their business. Gen Z, in particular, likes the “adults in the room” feel. AI doesn’t give them that.
The generational dynamics here matter enormously. Millennials and Gen Z buyers can smell AI-generated content from a mile away. They value authenticity precisely because it’s becoming rare.
The Coming Correction
I predict that by 2027, we’ll see AI move back to where it belongs: a valuable tool used strategically, not a core process to be blindly relied upon.
Companies will realize that:
- AI without human oversight creates legal and professional liability
- Audiences increasingly distrust AI-generated content
- Productivity gains require thoughtful implementation, not wholesale replacement
- Authenticity is becoming a competitive advantage in a world saturated with AI content
This doesn’t mean abandoning AI. It means being authentically intelligent about how we deploy artificial intelligence.
How to Use AI Intelligently
Here’s what that looks like in practice:
Automate research, not relationship building. Use AI to gather data, identify patterns, and surface insights. Use humans to build connections, understand context, and make judgments.
Treat AI output as a first draft, not a finished product. This is HUGE. Every AI-generated piece of content should be reviewed, refined, and verified by someone with actual expertise in the subject matter.
Maintain human touchpoints where they matter most. Customer communication, sales outreach, problem-solving, and relationship building should remain primarily human activities, supported (not replaced) by AI tools.
Be transparent about AI use. When you use AI for certain functions, be upfront about it. Trying to pass off AI-generated content as human-created erodes trust when people figure it out – and they usually do.
Invest in human skills that AI can’t replicate. Judgment, empathy, creativity, strategic thinking, and genuine relationship building are becoming more valuable, not less, in an AI-saturated world.
The Bottom Line
AI isn’t going away, and it shouldn’t. Used properly, it’s a powerful tool that can eliminate grunt work, surface insights, and improve efficiency. But we need to be authentically intelligent about how we use artificial intelligence.
That means recognizing its limitations, maintaining human oversight, and understanding that in a world increasingly filled with AI-generated content, authentic human expertise and genuine connection are becoming differentiators rather than commodities.
The companies that win in the next decade won’t be the ones that automated everything. They’ll be the ones that thoughtfully combined AI capabilities with human judgment to create experiences that feel genuinely helpful rather than artificially intelligent.
We don’t need less AI. We need more wisdom about how to use it.
That’s not retreating from technology. That’s being smart about it.