Used well, AI can help your team do exceptional work faster. So why are we spending valuable time “de-AI-ifying” everything we produce?
Our Dirty Little Secret
A client admitted something to me recently: every time she uses AI, she spends 5 to 15 minutes removing anything that “sounds like” ChatGPT. On a typical day, that adds up to about 40 minutes of superficial edits to otherwise good work.
And honestly, she’s not wrong to do it. There’s a growing body of research showing that people judge work more harshly when they believe it was created by AI.
But for someone who coaches teams to work smarter, faster, and kinder… this is a conundrum. In the last 60 days, I’ve worked with six teams that are spending time not just editing for quality, but editing to hide their tools.
Which got me wondering.
We tell our teams to embrace new technologies. We buy the licenses, run the training, and talk about working smarter, not harder. But we’ve created a culture where using the most powerful new tool in a generation comes with a side of shame.
Why?
What’s the Real Cost of Fixing Work That Isn’t Broken?
Here’s my worry. That shame carries a cost.
When teams feel ashamed of their tools, the consequences ripple outward. It’s not just about a few minutes spent rephrasing an email.
- Time Lost. Editing for appearances means less time spent on deep thinking, customer calls, or solving actual problems. It’s a tax on the very productivity that AI was supposed to help us achieve.
- Good Work Devalued. What happens when a sharp insight gets dismissed just because it has too many em dashes? Or when a smart, human-led analysis is treated as suspect because it uses a well-worn AI phrase?
- Innovation Stalls. Teams afraid to experiment with AI—discouraged, even subtly, from integrating it—fall behind. The tools are here to stay. Shame is a powerful brake on innovation.
It reminds me of how “hours in the office” used to be a proxy for productivity. It was a terrible measure, but it was easy. Judging work on whether it “sounds like AI” is the new lazy proxy. It’s easier than assessing the quality of the thinking.
This whole thing gets downright ridiculous when you consider that people are now using AI tools like Grammarly’s “humanizer” to make their writing sound more human.
It’s a Messy Question…
Because let’s be honest: a lot of people using AI are producing indefensible work.
Generic, inaccurate, soulless fluff that lacks a point of view. And after seeing enough of it, we’ve developed a Pavlovian response.
The moment something sounds like it was written by AI—with its perfectly balanced sentences and love of words like “delve” and “tapestry”—our guard goes up. We assume the thinking is as shallow as the prose is polished.
But in my humble opinion, humans were producing epic amounts of “workslop” long before ChatGPT came along. We’ve all received rambling, poorly structured emails and read presentations that said absolutely nothing.
The problem isn’t new. The tool just makes it faster to produce.
Why I Haven’t Picked a Side (Yet)
I sit at the intersection of all of this:
- As an executive, I want my team at Persimmon to use AI to be faster and more innovative than our competitors.
- As a consultant, I’m accountable for providing value that AI can’t replicate—a real, expert point of view that clients trust.
- As a marketer, I know people crave authentic content written by humans—but competitors who create content at scale are winning the search rankings game.
- As someone who helps teams move faster, I see how these conflicting attitudes are creating execution drag.
And I’ve been personally burned. I hired a consultant who delivered a pure-AI workflow and then sent me the invoice.
So I get why the skepticism is there. I’m aware of the risks of using AI—and appearing to do so. And, like many of you, I’m also tired of making superficial edits to work I built and stand behind.
What I’m Telling Teams Right Now
Look, this isn’t an easy problem to solve. But here’s what I’m telling my clients:
- Keep editing public-facing work. The stakes are too high to risk perception problems with customers, potential customers, or high-stakes leaders. I’m not going to pretend otherwise.
- Agree on your internal standards. As a team—or as an executive, for the whole organization—decide how you will use AI. If the rule is “it’s okay to use it as long as a human is responsible for the final output,” then mean it. Normalize it. Agree that you will assess work on its quality, its point of view, and its accuracy—not on whether it sounds like a human or a robot wrote it. Stop judging each other and get back to work.
- Identify the “human-only” zones. People crave genuine connection. Some work requires a human touch, full stop. You wouldn’t use AI to write an email responding to a tragedy. You wouldn’t use it to generate your core convictions. Every job has aspects of work that AI shouldn’t touch, or where the final draft needs to be 100% human-authored for the sake of authenticity. Name them.
- Recognize where expertise comes from. AI provides certain dimensions of expertise. It can synthesize vast amounts of information in seconds. But it can’t replace your critical eye, your knowledge of a client’s political landscape, or your unique point of view. When I use AI for a client deliverable, it’s often a four-hour “conversation” where I am the one supplying the critical thinking, shaping the output, and filtering it through years of experience. The AI is an assistant, not the expert.
If You Only Do One Thing…
This week, have a conversation with your team about AI. About culture, not policy.
Create a simple, explicit team agreement. It doesn’t need to be a formal document. It needs to be a set of substantive agreements about how AI will be used—and, just as importantly, the attitudes that are and aren’t okay regarding its use by others. Make it safe to talk about the tools you’re using to get the work done.
Then, if you want your team to embrace AI, you have to stand by the agreement. Don’t judge your team—even silently—for AI-generated prose if the content is solid and your team can defend it effectively.