Sara Gallagher

Is AI Rotting Your PMO’s Brain?

The Internet Has Decided: AI Is Destroying Our Brains

The proof? A study that’s suddenly everywhere…and that almost no one has actually read.

Researchers at MIT’s Media Lab asked undergrads to write short essays in 20 minutes. Some used a search engine, some used ChatGPT, and some went without. The researchers then evaluated the final products using a variety of methods.

The actual results:

  1. Brain engagement was lowest when using ChatGPT.
  2. People who used LLMs had a hard time quoting their own essays later.
  3. LLM-assisted writing looked more alike across people (a narrower range of ideas).
  4. Results were BEST when writing without AI first—then bringing in AI to polish.

What people heard:

AI is making us stupid. Making us lazy. Eroding our brains.

But let’s be real.

Most PMs aren’t turning in freshman essays under a stopwatch. We’re navigating boardrooms, sponsors, trade‑offs, and politics. And there are other problems with the study:

  • It was small: just 54 people.
  • They were mostly undergraduates, with little to no professional experience.
  • There was no task-stage breakdown. The study didn’t split writing into subtasks (idea generation, drafting, revision), which limits insight into where AI helps or hurts.
  • They only had twenty minutes. Researchers reported that because of the time limit, people instructed to use AI were mostly copy/pasting.
  • MOST HAD NEVER USED AI BEFORE!

The study is flawed, but it might have a point

I’ve used AI almost hourly since it became available to the public. I use it to plan vacations. I use it to manage my ADHD. I use it to create deliverables. And I spend just as much time editing what AI helped create.

Here’s what I’ve noticed.

In the hands of the inexperienced, AI can absolutely make you dumber. Which is exactly why you should start using it.

The real problem isn’t AI—it’s how we’re using it. We treat it like a shortcut when it’s really a skill. We treat the first draft like a final product. We ask too little of it and expect too much in return.

The result: Work that “sounds right” but says nothing new.

Six AI Mistakes That Erode Our Brains…And Our Output

If you’re frustrated by what you’re getting back from ChatGPT—and secretly cringe when someone says they are a ‘super-user’…here’s what might be going wrong.

  1. You didn’t gather your own thoughts first.

Unguided, AI will default to obvious answers. If you want the deliverable to reflect your unique thinking, you have to engage your brain first.

What to Do Instead: Start the AI session by laying out the assignment (e.g., “I need to draft an executive presentation about an upcoming go/no-go decision”). Then ask AI to interview you until it has enough detail to write a draft.

  2. You didn’t define what “good” looks like.

Left alone, AI might give you six sentences where one would suffice. Or one weak option when what you need is three strong ones.

What to Do Instead: Build a “guidelines” section into your prompt. Bullet out maximum word and sentence counts, tone, audience, and goals.

  3. You didn’t ask AI to argue with itself.

Every good decision lives at the intersection of tension. So why aren’t we asking for both sides?

What to Do Instead: After AI gives you an answer, ask: “What would make this fail?” or “What’s the strongest counterpoint?”

  4. You asked it to do too much at once.

If you ask for a polished five-page product, you’re sidelining your own thinking.

What to Do Instead: Break it down. Ask for an outline first. Then review it. Then build sections in layers.

  5. You didn’t state the red flags.

You know your battle scars. That word that triggers your sponsor. That missing data point that always causes delays.

What to Do Instead: Spell them out in the prompt. AI can only work with what you give it.

  6. You believed the citations without checking.

It doesn’t matter that there’s a link. Check the link. Then check the link that link points to.

(Ask me sometime about the study that wasn’t.)


What Smart PMs Are Doing

Smart PMs don’t treat AI like a supercomputer. They treat it like a sparring partner. Here’s what they do:

  • Role-play critical audiences. One PM I know uses AI to act like a skeptical CFO before big presentations. It helps her sharpen the logic.
  • Argue with themselves. Instead of just asking for pros, they ask for risks, failure points, and opposing views.
  • Set constraints. Instead of asking for a business case, they ask for a 2-page case for HR projects with non-financial ROI.
  • Tailor to the audience. They don’t use the same output for execs and teams. They feed the audience context first.
  • Check for risk. Smart PMs ask, “What compliance or reputational risks am I missing?”


What Smart PMOs Are Doing

Smart PMOs don’t just set AI policies. They establish principles that shape expectations—with PMs and their stakeholders—about how AI should be used.

Not sure where to start? Don’t overthink it. Here are a few to consider:

  1. Use AI ethically and in alignment with our values. Every chat should meet ethical standards, client commitments/MSAs, and applicable laws.
  2. You are accountable for your work. AI-assisted work must be accurate, insightful, and professional—and you should be able to explain both the content and the reasoning behind anything you create.
  3. Understand how AI works before relying on it. You should have a working knowledge of what AI tools can and can’t do—including how they generate responses, practices for improving accuracy and helpfulness of responses, and the risks of inaccurate or biased outputs.
  4. Protect privacy and comply with data agreements. All AI usage must safeguard company and employee privacy and security, and comply with any relevant MSAs, NDAs, or AI usage agreements.
  5. Minimize the data you share with AI tools. Use the “minimum viable input” approach: Enter only the least amount of identifying or limited-use information necessary to complete your task—even when using enterprise AI tools.
  6. Use AI to enhance—not replace—human judgment. AI tools can improve quality and efficiency, but they can’t replace lived experience. Stay involved in the creation process from end to end.

So is AI making us dumber?

My take? Not if we use it right.

The study shows what happens when novices use AI under pressure: they copy, paste, and skip the thinking.

But in the hands of a skilled PM, AI isn’t a crutch. It’s a cognitive multiplier. It sharpens judgment by forcing us to slow down, shape, and supervise the work.

AI won’t make you dumber. But outsourcing your thinking will.

Until next time,
Sara