Sara Gallagher

Can PMOs Make Prioritization Fair Without Making It Complicated?

There’s a moment every PMO dreads:
Three solid projects. One open slot. No consensus.

So what happens? I hear it all the time from PMO leaders:

“It feels like the loudest voice wins.”

Usually, that voice is a charismatic executive. Or someone who simply knows how to write a proposal that makes everything look urgent and strategic. Either way, it leaves everyone wondering:

Why do we even have a prioritization process if it’s all just influence theater?

To fix it, PMOs often reach for structure: scoring criteria, decision matrices, weighted averages, and cross-functional review committees. The idea is to inject fairness and objectivity into the process. Unfortunately, it often backfires.

When Prioritization Gets Political

A client of mine once spent weeks building a detailed scoring matrix to prioritize projects. Five-point scale. Cross-functional committee. Weighted averages. The whole thing.

Each committee member rated projects individually. No definitions, just gut feel. Scores were averaged, projects were ranked, and decisions were made.

But here’s what happened:

  • Scores were based on wildly different assumptions
  • Averages masked major disagreement
  • Influential leaders learned to game the scoring
  • Committee membership became a political football

The process, intended to create trust, ended up eroding it. And it took forever.

Objectivity Is an Optical Illusion

We talk about objectivity like it’s the holy grail. But in portfolio prioritization, it’s more like a magic trick: impressive on the surface, illusory underneath.

The truth is, objectivity isn’t how business decisions get made. Not the big ones, anyway. Strategy is part bet, part instinct, part competitive move. Even with data, there’s interpretation. Even with definitions, there’s judgment.

So instead of chasing objectivity (and pretending it’s possible), I steer clients toward three better targets:

  1. Apples-to-apples comparisons—so we’re at least arguing on the same field.
  2. A clear understanding of strategy—so we know what “good” looks like right now.
  3. A fair, transparent process—so even when it’s messy, people can trust how the call got made.

You won’t get perfect alignment. But you’ll get decisions you can explain—and defend.

Why the System Breaks Down

The problem isn’t scoring itself. It’s how we use it.

Scoring can create a veneer of objectivity that disguises a fundamentally subjective process. Instead of debating the merits of a project, we argue about numbers. Instead of surfacing disagreement, we average it out.
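A toy illustration of how averaging buries the signal (the projects and scores below are invented):

```python
from statistics import mean

# Invented committee scores on a 1-5 scale
scores = {
    "Project A": [3, 3, 3, 3, 3],  # genuine consensus
    "Project B": [1, 5, 1, 5, 3],  # deep disagreement
}

for name, s in scores.items():
    print(f"{name}: average={mean(s):.1f}, min={min(s)}, max={max(s)}")

# Both projects average 3.0, so a ranked list treats them as identical --
# but Project B's committee fundamentally disagrees about its value.
```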

And when PMOs aren’t clear about how scores are defined, who gets to vote, or how to interpret the results, people will do what people do: lobby offline, vote tactically, or try to out-propose the competition.

Ironically, the very tools meant to level the playing field can entrench the power dynamics they were meant to disrupt.

Bonus Tip: Even the best prioritization decision can create frustration if it’s communicated poorly. How leaders explain what was chosen, what was deferred, and why has a huge impact on trust. This article from Harvard Business Review on communication fail points is worth a read.

What “Good” Looks Like Without a Scorecard

One of my clients—a financial institution with about 1,000 employees—doesn’t use a scoring matrix.

Instead, five members of the executive team meet once a year to set strategy, review proposals, and make the call in the room. For everything that isn’t a major initiative, their VPs manage an iterative backlog based on what they can take on outside the “big rocks.”

It works because the inputs are strong, the conversation is holistic and strategic, and the rationale is transparent. Here’s how:

  • Proposals hit the essentials (problem, measurable outcome, time-to-first-value, dependencies, known risks, etc.)
  • Strategy guardrails are explicit for the cycle, so everyone knows what’s important right now
  • Executives are held to collective KPIs—reducing the incentive to advocate for one business area over another
  • Rationale is recorded and communicated to leaders—including why some initiatives weren’t approved or highly prioritized
  • Executives leave room for VPs to pursue other priorities—especially the ones that couldn’t be anticipated but have high upside potential

If You Do Use Scoring, Make It Useful

Plenty of PMOs need a lightweight score to make sense of volume. The trick is to keep the math in service of the conversation.

Start by making the scale mean the same thing to everyone. If you’re using 1-5, write a sentence to anchor each end and the middle. For strategic fit, for example:

  • 5 = Direct Impact. This project delivers on one or more strategic pillars’ outcomes this year; if we don’t do it, we likely miss the pillar.
  • 3 = Indirect Impact. This project enables the teams doing the pillar work or removes meaningful friction, but by itself, it won’t move the metric this year.
  • 1 = Tangential Impact. Good work, but neither required nor enabling for a strategic pillar this year.

Next, expose disagreement on purpose. Have people score silently, then put the spread on the screen—min, max, median. Invite outliers to speak first. The question isn’t “What’s the right number?” It’s “What are you seeing that the rest of us aren’t?” Often, the missing fact or conflicting assumption shows up in thirty seconds if you make room for it.
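If it helps to make that spread view concrete, here’s a minimal sketch (the project names, scores, and the outlier threshold are all invented):

```python
from statistics import median

# Invented silent-scoring results on a 1-5 strategic-fit scale
votes = {
    "Data platform": [2, 5, 3, 5, 1],
    "Branch refresh": [3, 3, 4, 3, 3],
}

for project, scores in votes.items():
    spread = max(scores) - min(scores)
    flag = "  <- invite outliers to speak first" if spread >= 3 else ""
    print(f"{project}: min={min(scores)} max={max(scores)} "
          f"median={median(scores)}{flag}")
```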

Finally, moderate for assumptions, not advocacy. When debate stalls, name the fork: “If assumption X holds, this is a 5. If assumption Y holds, it’s a 2. Which future are we betting on, and how/when will we know if we were right?”

Bottom line: Let scores start and guide the debate, not end it. Make the call in plain language, record the trade-off you chose, and communicate the result transparently.

Where Prioritization Scoring Breaks (And How to Fix It)

Too much precision, not enough clarity. As the list of criteria grows, people optimize for points rather than outcomes. Cap the list and make two of them dominant; everything else is a tie‑breaker.

Fuzzy strategy. If people are scoring against the strategy in their heads, the spread becomes politics. Publish a one‑page note with explicit bets and anti‑bets for the cycle—or a short prioritization charter if that’s all you can manage right now.

The single‑number trap. Leaders love one score; averages love to hide disagreement. If you must deliver a number, pair it with the spread, the key assumption, and the fork you’re choosing (“if X then high, if Y then low”). Ask leaders to pick the future, not the average.

Gaming and proposal pageantry. When style wins, everyone learns the wrong lesson. Hold proposals to the same “score‑ready” standard (the PMO can validate completeness) and label anything incomplete as No‑Score until the facts show up.

Capacity denial. Theoretical greenlights don’t move work. Show a real capacity ledger by team, set WIP limits, and force a swap when you add something.
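Here’s a minimal sketch of that swap rule, assuming a simple per-team ledger (the teams, limits, and project names are hypothetical):

```python
# Hypothetical capacity ledger: active projects per team, with WIP limits
wip_limits = {"Data": 3, "Ops": 2}
active = {"Data": ["A", "B", "C"], "Ops": ["D"]}

def start_project(team: str, project: str) -> None:
    """A team at its WIP limit must swap something out, not just add."""
    if len(active[team]) >= wip_limits[team]:
        # In practice, leaders choose what to pause; this picks the oldest.
        paused = active[team].pop(0)
        print(f"{team}: pausing {paused} to start {project}")
    active[team].append(project)

start_project("Data", "E")  # Data is at its limit -> forces a swap
start_project("Ops", "F")   # Ops has headroom -> no swap needed
```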

Interruptions and reprioritization. Emergencies happen; thrash doesn’t have to. Keep a protected fast‑track lane for true must‑dos and agree on a short list of triggers that justify mid‑flight changes. Otherwise, hold the line.

Power in the room. Sponsors can tilt the table without trying. Keep the decision body small, require pre‑reads, start with outliers, rotate who speaks first, and empower the facilitator to timebox and call for the decision.

If You Only Do One Thing

Ask this tomorrow:

What would this look like if it were fair and easy?

Then redesign one move accordingly. Maybe it’s writing score anchors. Maybe it’s displaying score ranges. Maybe it’s publishing a “what good looks like” proposal example. Fair and easy is a design choice.


Until next time,
Sara