Thursday, May 7, 2026

AI Driven PM: S2E6 - Net Operating Value

Your Project Isn't Competing Against a Standard. It's Competing Against Everything Else.

Have you ever had a project going really well—hitting milestones, staying on budget, team energized—and an executive pulls the plug?

You're furious. You're confused. You think, "How could they kill this? We were succeeding."

Here's the uncomfortable answer: Because they weren't evaluating your project. They were evaluating your portfolio.

And those are completely different conversations.

Most project managers optimize in isolation. We focus on our project. Our timeline. Our velocity. Our scope. We find our lane and we stay in it.

But executives? They're not optimizing your project.

They're optimizing across a dozen projects competing for the same people, the same budget, and the same strategic window.

And without a shared language for value—one that accounts for all the trade-offs—PMs and executives talk right past each other.

Your project looks green. Their portfolio looks broken.

That's the gap. And that's exactly what Net Operating Value is designed to close.

What Is Net Operating Value?

Net Operating Value (NOV) is a metric I helped develop to tell a more complete story about project value.

Here's the formula:

NOV = Expected Value − Effort Cost − Risk Cost − Opportunity Cost

Let's break that down:

Expected Value: Revenue, cost savings, strategic positioning, user impact. Everything you gain if the project succeeds.

Effort Cost: Budget, team capacity, person-months, timeline. Everything you spend to get there.

Risk Cost: Probability of failure or underperformance × the financial impact of that failure. What you might lose.

Opportunity Cost: What you are not building because you're building this.

That last one is the one most business cases never include. And it's the one that changes everything.
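To make the formula concrete, here is a minimal sketch of the arithmetic in Python. Every figure in it is a hypothetical placeholder, not a number from any real business case:

```python
# Minimal NOV sketch. Every figure here is a hypothetical placeholder.

expected_value = 800_000     # revenue + savings + strategic value if it succeeds
effort_cost = 300_000        # budget, capacity, person-months

p_failure = 0.40             # probability of failure or underperformance
failure_impact = 350_000     # financial impact if it fails
risk_cost = p_failure * failure_impact

opportunity_cost = 280_000   # expected value of the best alternative you forgo

nov = expected_value - effort_cost - risk_cost - opportunity_cost
print(f"NOV = ${nov:,.0f}")  # $80,000 here; a negative NOV says build the alternative
```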

ROI asks, "Is this worth doing?"

NOV asks, "Is this the best thing we could be doing with these resources right now?"

That's a completely different question.

The AI Capacity Trap

Before we get into the prompts, I want to call out something I'm seeing happen in organizations right now.

They're seeing AI increase team velocity. Story points per sprint are going up. Delivery speed is improving.

And their first instinct is: Let's throw more projects at the team.

I've already had clients say, "If our team's capacity was 270 points before AI, shouldn't it be 390 now? Let's plan accordingly."

And look—maybe. You might see velocity increases. But here's the question you're not asking:

Not "Can we build it?" That's the question you're already answering, and it's the wrong one. The right question is: Should we build it, given everything else we could build?

More capacity doesn't mean more projects. It means more opportunities to choose poorly.

The 2-1-0 philosophy exists precisely for this moment: You need to be two full quarters ahead in ideas competing for quarterly planning, and two full sprints ahead in fully defined, designed, and architected user stories.

Not so you can do more. So you have the tension to choose better.

Can we do this feature? Sure. Should we—given what else is in the queue? That's the real conversation.

Now Let's Do the Math

I ran three prompts live against our Social Wishing app—the bucket-list social platform we've been building throughout this season.

And the results were... illuminating.


Prompt 1: NOV Calculator (Your Non-Negotiable)

What it does: Calculates a full NOV assessment—expected value, effort cost, risk cost, and opportunity cost—and compares your project to a baseline threshold for approval.

I gave it this context:

  • 10,000 users in 90 days
  • $4.99/month premium, 10% conversion
  • $300,000 budget, six months, eight people
  • Fully dependent on Facebook API
  • Opportunity cost: A flagship product feature we're not building that could drive $500,000 in upsell revenue with 80% confidence

ChatGPT asked me:

  • "What's your gross margin on subscription revenue?" (70%)
  • "What's your expected monthly churn rate?" (30%)
  • "Are you planning paid acquisition or assuming organic?" (Paid, $11 CAC)
  • "If Facebook API access is restricted, what percent of core functionality breaks?" (Total shutdown)

Then it ran the math.

Unit economics summary:

  • $4.99 monthly price × 70% margin ÷ 30% monthly churn ≈ $11.63 lifetime value per premium user
  • CAC = $11
  • Net unit contribution per premium user: 63 cents
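Here's that same unit-economics math as a few lines of Python—a minimal sketch you can rerun with your own price, margin, churn, and CAC:

```python
# Unit economics from the Social Wishing scenario above.
monthly_price = 4.99   # premium subscription price
gross_margin = 0.70    # margin on subscription revenue
monthly_churn = 0.30   # expected monthly churn rate
cac = 11.00            # paid customer acquisition cost

# Monthly contribution per premium user, rounded to cents: $3.49.
monthly_contribution = round(monthly_price * gross_margin, 2)

# With constant churn, average customer lifetime is 1 / churn months,
# so lifetime value = monthly contribution / monthly churn.
ltv = monthly_contribution / monthly_churn

print(f"LTV per premium user:  ${ltv:.2f}")        # $11.63
print(f"Net unit contribution: ${ltv - cac:.2f}")  # $0.63
```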

ChatGPT's verdict:

"You're spending 50,000 in premium lifetime contribution. This is deeply negative unit economics."

Claude's NOV calculation: Negative $702,000.

"This project has a strongly negative NOV. Your alternative project produces positive expected contribution of $280,000 with far lower uncertainty. Build the flagship feature."

And just like that—in under four minutes—we had a business case analysis that would have taken a finance team days to produce.

That's the power of the NOV calculator.


Prompt 2: Value Assumption Stress Test

This is the one I love most.

It doesn't just tell you what your project is worth. It tells you which assumptions, if wrong, kill the entire value case—and what you can do about it.

I gave it the Social Wishing business case with these underlying assumptions:

  • 10,000 users in 90 days via viral Facebook sharing
  • 30% wish fulfillment rate
  • 10% premium conversion
  • Users will trust the platform with personal bucket list information
  • Facebook maintains stable API access
  • Organic network effects keep CAC low

Then I answered honestly:

  • Never launched a consumer social product before
  • No evidence users will share wishes—just a hunch
  • No waitlist, no beta, no validated demand
  • Not solving an urgent problem—creating new behavior
  • No relationship with Facebook's API team

Claude's response:

"Good. Now we're thinking clearly. You just removed most of the illusion from the business case."

Then it walked through each assumption with best case, expected case, and worst case scenarios.
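You can run the same best/expected/worst logic outside a chat window. Here is a hedged sketch in Python; the value model and every scenario number are illustrative stand-ins, not the figures from the actual session:

```python
# Scenario sketch: which assumption, if wrong, kills the value case?
# The value model and every number below are illustrative stand-ins.

def premium_contribution(users, conversion, price=4.99, margin=0.70,
                         churn=0.30, cac=11.0):
    """Net lifetime contribution from premium users after acquisition cost."""
    ltv = price * margin / churn
    return users * conversion * (ltv - cac)

base = {"users": 10_000, "conversion": 0.10}

scenarios = {
    "90-day users": [("best", {"users": 15_000}), ("worst", {"users": 2_000})],
    "conversion":   [("best", {"conversion": 0.12}), ("worst", {"conversion": 0.02})],
}

for assumption, cases in scenarios.items():
    for label, override in cases:
        value = premium_contribution(**{**base, **override})
        print(f"{assumption:12s} {label:5s} -> ${value:,.0f}")
```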

And then it said something I think every PM needs to hear:

"This is not a 25,000 behavior experiment."

It recommended a 45-day validation sprint instead:

  • Build a landing page, spend $3-5K in ads, measure cost per email signup
  • Start a private community of 100 people manually, see who posts wishes and who engages
  • If cost per signup exceeds $5-7 or organic growth assumptions weaken, you have your answer

Then it gave me the framing for the sponsor conversation:

"The question you should be bringing to your sponsor isn't 'Should we build this?' It's 'Can we spend 285,000?'"

That one reframe changes the entire conversation.

You're not saying no to the idea. You're saying yes to being smarter about how you validate it.

How many of us have had a CIO come back from a conference with a "cool thing" they saw? And six months and $300K later, we find out nobody actually wanted it?

This is how you avoid that.


Prompt 3: Portfolio Trade-Off Analyzer

Now let's zoom out. Three projects. One six-month window. Capacity for two.

The options:

  • Option A: Social Wishing (new product) — $300K cost, high risk
  • Option B: Flagship product feature (upsell to existing customers) — $200K cost, low risk
  • Option C: Infrastructure modernization (tech debt reduction) — $250K cost, medium risk

Which two do you choose?

ChatGPT and Claude gave me the same answer:

Option B + Option C. Kill Option A in its current form.
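For readers who want the mechanics, here is a sketch of the kind of risk-adjusted comparison behind that call. The costs come from the options above; the expected values and failure odds are assumptions I made up for illustration. When every option sits side by side, the opportunity-cost term becomes the ranking itself:

```python
# Portfolio sketch: rank the three options by risk-adjusted value.
# Costs are from the post; expected values and failure odds are assumptions.

options = {
    "A: Social Wishing":   {"expected": 350_000, "cost": 300_000, "p_fail": 0.60, "impact": 300_000},
    "B: Flagship feature": {"expected": 500_000, "cost": 200_000, "p_fail": 0.10, "impact": 100_000},
    "C: Infrastructure":   {"expected": 400_000, "cost": 250_000, "p_fail": 0.30, "impact": 150_000},
}

def risk_adjusted_value(o):
    return o["expected"] - o["cost"] - o["p_fail"] * o["impact"]

ranked = sorted(options.items(), key=lambda kv: risk_adjusted_value(kv[1]), reverse=True)
for name, o in ranked:
    print(f"{name}: ${risk_adjusted_value(o):,.0f}")
# B ($290,000) and C ($105,000) make the cut; A (-$130,000) gets killed or reworked.
```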

And here's the framing that I would use in the executive meeting:

"Option B is the cash generator. Option C is the capability builder. Together they fund growth AND protect our future velocity. That's a balanced portfolio."

That's not a financial argument. That's a story. Two sentences. Executives get it immediately.

For communicating the trade-off on Social Wishing, both tools gave me the same counsel:

"Don't frame this as killing creativity. Frame it as disciplined capital allocation."

Script:

  • "Our priority this year is predictable revenue growth and operational stability."
  • "For every dollar invested in the flagship feature, we get approximately $2 in risk-adjusted value. Social Wishing does not meet that threshold today."
  • "Social Wishing is interesting—but unvalidated. We will test demand with a capped experiment before committing full capital."

That's not a no. That's a responsible yes.


The Full Circle Moment

We spent all season building the Social Wishing dream. We clarified the vision. We wrote the team motivation story. We built the backlog. We ran the health diagnostic.

And now the NOV says: Don't build it. Not yet. Validate it first.

That's not failure. That's exactly how great project management works.

Dreams deserve data before dollars.

And the PMs who can have that conversation with their sponsors—who can say "here's the math, here's the risk, here's the smarter path forward"—those are the PMs executives trust with their most important projects.


Your Non-Negotiable Experiment This Week

Run the NOV Calculator (Prompt 1) on a current project.

Then identify: What is the one assumption that, if wrong, kills the entire value case?

Test it. Find the cheapest way to validate or invalidate it before committing more resources.

Here's what I want you to notice:

  1. Does calculating NOV change how you talk about your project's value?
  2. How do executives respond when your business case includes opportunity cost?
  3. Did the stress test surface an assumption you've been avoiding?

Because here's the truth: Protecting your project isn't about defending it. It's about proving it deserves the resources over everything else competing for them.

That's thinking like a portfolio manager.

And that's how you earn a seat at the table.


Next time: Influence without authority—how to lead when you can't command. It wasn't your idea. They're not your people. It's not your budget. So how do you actually move a project forward?

Want these prompts ready to copy/paste? Head to PMThatWorks.com for the full library.

Now go run the math. Your dream deserves to know if it can stand up to the numbers.

— Rick A. Morris


The Prompts (Copy/Paste Ready)

Prompt 1 - NOV Calculator

You are a portfolio strategist and financial analyst for project investments.

First, ask me 5–7 clarifying questions about the project's expected benefits, costs, risks, and what else the organization could be doing with the same resources.

Then calculate a net operating value assessment by answering:

  1. What is the expected value of this project? (revenue, cost savings, strategic value, user impact — quantify as much as possible)
  2. What is the total effort cost? (budget, team capacity, person-months, timeline)
  3. What is the risk cost? (probability of failure or underperformance × financial impact)
  4. What is the opportunity cost? (what alternative projects or initiatives are we not doing because of this?)
  5. What is the calculated NOV, and how does it compare to a baseline threshold for project approval?
  6. What assumptions are most uncertain, and how would changing them affect the NOV?

Project context: [Enter Context]


Prompt 2 - Value Assumption Stress Test

You are a critical thinking coach and risk analyst.

Ask me 3–4 questions about the value assumptions underlying my project's business case.

Then stress test those assumptions by answering:

  1. What are the 3–5 core assumptions that must be true for this project to deliver its expected value?
  2. For each assumption, what is the best case, expected case, and worst case scenario?
  3. How sensitive is the project's NOV to changes in each assumption?
  4. What evidence or data exists to validate or challenge each assumption?
  5. What experiments or MVPs could we run to de-risk the biggest assumptions before committing fully?

Project business case: [Enter Business Case and Assumptions]


Prompt 3 - Portfolio Trade-Off Analyzer

You are a portfolio management consultant helping executives make investment decisions.

Ask me 3–4 questions about the competing projects or initiatives in our portfolio and the organization's strategic priorities.

Then provide a trade-off analysis answering:

  1. How do the competing projects compare on NOV?
  2. What projects are must-dos (strategic imperatives) vs. nice-to-haves?
  3. What is the optimal portfolio mix given current capacity and risk tolerance?
  4. What projects should we greenlight, pause, or kill based on NOV?
  5. How do I communicate trade-offs to stakeholders in a way that builds alignment rather than resentment?

Portfolio context: [Enter Competing Projects and Constraints]


Thursday, April 23, 2026

AI Driven PM: S2E5 - Data Rules 2.0

Let me tell you something I learned from Six Sigma that changed the way I run projects:

If you can't measure it, you can't improve it.

I came up through the DMAIC era—Define, Measure, Analyze, Improve, Control. I love data. I live for metrics. Give me a spreadsheet and a story to tell, and I'm happy.

But here's the uncomfortable truth I've discovered after 30+ years and 150+ implementations:

Most of the metrics we track don't tell the story we actually need to tell.

We obsess over story points. Velocity. Task completions. Hours logged. Burn rate.

And then an executive asks, "Are we going to hit the launch date?"

And we stare at our dashboard.

And it doesn't answer the question.

That's not a data problem. That's a framing problem.

The Activity Metrics Trap

Here's what most PMs measure:

  • Story points completed
  • Task closes
  • Hours logged
  • Budget spent to date
  • Number of commits to repo

You know what all of those have in common?

They measure busyness. Not achievement.

Activity doesn't equal value. And executives—even if they can't always articulate it—don't actually care about activity. They care about outcomes that drive business value.

I was working with a client recently who said their goal was a 2% net sales lift.

I asked, "If you land Walmart, do you win?"

They said, "No."

I said, "Okay—walk me through what this 2% net sales lift actually means."

And here's what was wild: They were using that metric to justify building a data warehouse, but they had no idea what data they were going to put into the warehouse to measure it.

The metric was real. The measurement plan was nonexistent.

That's the trap.

We pick metrics because they're easy to track, not because they answer the questions that actually matter.

We Are Lawyers. Our Sponsors Are Judges.

Here's a frame I use all the time that completely changes how PMs think about data:

Your job is to build a case. Your sponsor is the judge.

You gather evidence. You analyze the data. You present your recommendation. You let the judge decide.

And if you don't like the ruling? You don't argue in the courtroom.

You appeal.

You go back, review your data, figure out why it didn't tell the compelling story you needed, and you come back better prepared.

And here's the thing: If I have more data than you, I'm going to win the conversation.

Not because I'm louder or more senior or more confident.

Because data tells a story. And the PM who tells the better story wins.

The problem is we keep bringing the wrong data to court.

What You Should Actually Be Measuring

Here's what I suggest you measure instead of activity:

1. Value Delivered

Not "features shipped"—features in production being used.

I worked on the GrowthDay app build. We had all these features planned, but at the last minute, the founder said, "Wouldn't it be cool if we had a daily motivational segment—something that fires people up every morning?"

We almost cut it. Time pressure. Scope pressure.

We didn't cut it.

That tiny, last-minute feature became one of the stickiest in the whole app.

But here's the key: We only knew it because we measured how people were actually using the app. How many times. How long they stayed. Whether they came back.

If you're not measuring features being used, you don't know if you're delivering value or just shipping code.

2. Time to Impact

How fast do you go from idea to user value?

What's your average cycle time from "we need this" to "users are using this"?

That's a story executives actually care about.

3. Quality Signals

Defect rates. Technical debt. User satisfaction.

But here's the nuance: When you compress testing due to date pressure, defect growth becomes exponential—not linear.

I had this exact conversation in a live demo. The data showed:

  • Sprint velocity dropping
  • Defect rate rising
  • Testing coverage shrinking

I told the team: "If testing continues to erode, expect 15 to 18 defects per sprint within four to six weeks. The rework alone will cost more time than the testing would have."

That's the conversation a PM should be having with a sponsor. Not "we're at 80% of story points." But: "We're trading short-term velocity for long-term quality debt. Here's what that actually costs us."
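To see why compressed testing bites so hard, here is a toy projection. The 15% per-sprint compounding rate is an illustrative assumption, but notice that two to three sprints out (roughly four to six weeks) lands right in that 15-to-18-defect band:

```python
# Toy projection: linear vs. compounding defect growth as coverage erodes.
# Starting point (12 defects/sprint) is from the demo; both rates are assumptions.
defects_now = 12
linear_step = 1        # defects added per sprint if growth were linear
compound_rate = 0.15   # per-sprint growth rate while coverage keeps shrinking

linear, compounding = defects_now, defects_now
for sprint in range(1, 7):
    linear += linear_step
    compounding *= 1 + compound_rate
    print(f"Sprint +{sprint}: linear {linear}, compounding {compounding:.0f}")
```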

4. Team Health

Velocity stability (not just velocity), morale, attrition risk, sentiment analysis.

A velocity drop after adding a team member? That's normal. Expected, even.

Knowledge transfer consumes senior capacity. Code review loads increase. A 20% velocity drop after onboarding one person is common in the first two to four sprints.

But if it hasn't recovered in three sprints? That's structural. That's something else.

Know the difference.

5. Stakeholder Confidence

Sponsor engagement. Clarity of vision. Meeting attendance.

When your sponsor starts missing meetings, that's a leading indicator—not a footnote.

The Metric I'm Most Proud Of: Scope Stability Index

Here's one I love that most teams don't track:

Scope Stability Index = New story points added ÷ Total committed story points

If that number exceeds 15% mid-sprint, execution predictability collapses.

Let me make that concrete. You committed to 30 story points for the sprint. During the sprint, 10 new points get added. That's 10 ÷ 30 = 33%.

Your sprint just broke.

Not because the team is failing—but because the input changed faster than the output could absorb it.
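The index is simple enough to automate anywhere you track sprint commitments. A minimal sketch:

```python
# Scope Stability Index: new points added mid-sprint / committed points.

def scope_stability_index(added_points, committed_points):
    return added_points / committed_points

ssi = scope_stability_index(added_points=10, committed_points=30)
print(f"SSI = {ssi:.0%}")  # 33%
if ssi > 0.15:
    print("Mid-sprint scope exceeded the 15% threshold; predictability is at risk.")
```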

This is the conversation you bring to a sponsor: "Every time we add scope mid-sprint, we pay a compounding tax. Here's what that tax looks like in data."

That's a case. That's a lawyer walking into court prepared.

How AI Fits Into All of This

Here's the piece most people miss:

AI can automate all the activity tracking.

I have agents that pull data from JIRA, Microsoft Planner, ServiceNow, spreadsheets—normalize it, format it, and report it. Automatically.

That means I'm not spending 60% of my week staring backwards at what already happened.

I'm spending it on outcome measurement, impact analysis, and inventing new metrics that tell the parts of the story nobody else is telling.

AI can correlate your leading indicators (velocity, quality, team sentiment) with your lagging indicators (revenue, retention, delivery dates). It can isolate trends across data sources you'd never have time to manually connect.

But here's the catch, and I said this right at the top of the episode:

AI can't do for you what it can't do through you.

The metrics it surfaces are only as good as the questions you're asking. You have to know what story you're trying to tell before AI can help you tell it.

That's what these three prompts are designed to do.


Prompt 1: Metrics Dashboard Designer

Start here. This builds you a dashboard that actually answers the questions your executives are asking.

What it creates:

  • 3-5 outcome metrics that prove value delivery
  • 3-5 activity health metrics as leading indicators
  • Data sources and measurement approach for each
  • Green/yellow/red thresholds
  • How to present differently to executives vs. team vs. sponsor

What I got when I ran it for the Social Wishing app:

ChatGPT asked:

  • "What would cause leadership to declare this a failure at month six or nine?"
  • "How does this app make money in the first 12-18 months?"
  • "Do you have any analytics tooling selected?"

That first question? Bring it to your sponsor. Seriously. Ask them: "What would cause you to declare this a failure at month six?" You'll learn more in that 10-minute conversation than in three weeks of status reports.

The outcome metrics it generated:

  • New user signups per week
  • 90-day active user rate
  • Wish progress rate
  • 30-day retention

The leading indicators:

  • Visitor-to-signup conversion rate
  • Invite rate (% of users who invite at least one friend)
  • Time to first wish (median time from signup to first wish created)
  • Sprint predictability

The dashboard format for founders: Single slide, three rows.

  • Row 1: Growth (signups, conversion)
  • Row 2: Engagement (wish progress, invite rate)
  • Row 3: Retention (30-day retention, trend arrow)

Simple. Powerful. Tells the dream story.
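If you want the green/yellow/red logic to live in a script instead of a slide, here is a tiny sketch. The metric names and thresholds are examples to calibrate against your own product, not prescriptions:

```python
# Tiny red/yellow/green evaluator. Metric names and thresholds are
# illustrative examples, not recommended targets.

THRESHOLDS = {
    # metric: (green at or above, yellow at or above); below yellow is red
    "signup_conversion": (0.05, 0.02),
    "invite_rate":       (0.20, 0.10),
    "retention_30_day":  (0.40, 0.25),
}

def status(metric, value):
    green, yellow = THRESHOLDS[metric]
    return "green" if value >= green else "yellow" if value >= yellow else "red"

print(status("retention_30_day", 0.31))  # yellow
```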


Prompt 2: Predictive Risk Indicator Finder

This one is for when you feel something is wrong but can't quite articulate it yet.

What it does:

  • Identifies leading indicators that predict trouble
  • Correlates team health metrics with outcome metrics
  • Surfaces data you're NOT capturing (but should be)
  • Sets intervention thresholds
  • Coaches you on how to communicate risk without panicking the room

I gave it this project context:

  • Sprint velocity dropped from 35 to 28 points
  • Defects up from 7 to 12 per sprint
  • Code review cycle time averaging 2.3 days
  • Sponsor missed last 2-3 meetings
  • Team sentiment dropped from 8.0 to 6.5
  • 8 new feature requests added this month, 3 original features cut

What AI told me:

"Velocity drop after adding a person is classic onboarding drag. A 20% drop is common in the first two to four sprints. If it doesn't recover within three sprints, the issue is structural—not onboarding."

"Your defect increase combined with shrinking testing coverage is the highest risk signal in your data. When testing coverage drops due to date pressure, defect growth becomes exponential, not linear. Expect 15 to 18 defects per sprint within four to six weeks if nothing changes."

And then—the one I loved most—it surfaced metrics I wasn't tracking:

Code review comment density per PR.

"High comment density means complexity or standards drift. Low comment density with long cycle times means avoidance. These require completely different interventions."

I would never have thought to track that. That's AI as a thinking partner.
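If you want to try tracking it, here is a hedged sketch of that metric. The PR fields and both thresholds are hypothetical; calibrate them against your own repo before trusting the labels:

```python
# Sketch: classify a PR by review-comment density and cycle time.
# The fields and both thresholds are hypothetical; calibrate before trusting.

def review_signal(comments, changed_lines, cycle_days,
                  density_high=0.05, cycle_long=2.0):
    density = comments / max(changed_lines, 1)  # comments per changed line
    if density >= density_high:
        return "complexity or standards drift"  # heavy discussion per line
    if cycle_days >= cycle_long:
        return "possible avoidance"             # quiet reviews that sit for days
    return "healthy"

print(review_signal(comments=2, changed_lines=400, cycle_days=3.1))
# -> possible avoidance
```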


Prompt 3: Vanity vs. Value Metrics Audit

This is the one that will save you from the most painful meeting of your career.

You know the meeting. Twenty executives in the room. You walk through your status report. They eat you alive.

Because your report answered "Are we busy?" instead of "Are we going to succeed?"

I've been in that meeting. I never want you to experience it.

I gave AI this list of my current metrics:

  • Tasks completed this week
  • Story points burned
  • Budget spent to date
  • Team utilization
  • Number of commits to repo
  • Lines of code written
  • Meetings held
  • Risks identified

And I told it: "Executives keep asking if we'll hit the launch date and user targets. My metrics don't answer that question."

Claude's response was brutal. And perfect:

"Why are you asking me questions, Rick? You've already diagnosed the problem yourself. Your metrics answer 'Are we busy?'—not 'Are we going to succeed?'"

"You have six vanity metrics out of eight. The executives are asking the right question. Your dashboard is giving them the wrong answer. It's not a data problem. It's a framing problem. You're reporting inputs when they're asking about outcomes."

Couldn't have said it better myself.

Then it gave me the swap:

Vanity Metric → Why It's Vanity → Replace With

  • Story points burned → Doesn't predict completion without a trend → Forecasted completion date based on velocity trend
  • Team utilization → Measures busyness, not throughput → Scope stability index
  • Commits to repo → More commits can signal churn, not progress → Defect escape rate
  • Lines of code → More code often = more defects → Time to value / time to first wish

That table is a career-saver.


Your Non-Negotiable Experiment This Week

Two challenges:

1. Build your outcome metrics dashboard using Prompt 1. Take your current project and identify 3-5 metrics that actually answer your executive's burning questions.

2. Replace at least one vanity metric in your next status report with a value metric.

Just one swap.

Here's what I want you to notice:

  • How do stakeholders react when your report answers their actual questions?
  • Does better data help you spot risks earlier?
  • Do you feel more confident walking into that executive meeting?

Because here's the truth: The PM who tells the better story with better data wins.

Not because they're louder. Because they came prepared.


Next time: Net Operating Value—the metric I use for portfolio decisions. How to stack-rank your portfolio, make trade-off decisions, and help executives choose between good ideas using data that actually reflects business value.

Want these prompts ready to copy/paste? Head to PMThatWorks.com for the full library.

Now go build that dashboard.

— Rick A. Morris


The Prompts (Copy/Paste Ready)

Prompt 1 - Metrics Dashboard Designer

You are a data-driven PM coach and metric strategist.

First, ask me 4–5 questions about the project goals, stakeholders, team, and what success means in business terms.

Then help me design a metrics dashboard by answering:

  1. What are the 3–5 outcome metrics that prove the project is delivering value?
  2. What are 3–5 activity health metrics that are leading indicators of those outcomes?
  3. For each metric, what is the data source and how do we measure it?
  4. What thresholds or targets indicate green, yellow, red status for each metric?
  5. How should I present these metrics to executives vs. the team vs. the sponsor?

Project context: [Enter Context]


Prompt 2 - Predictive Risk Indicator Finder

You are a predictive analytics expert for project management.

Ask me 3–4 questions about current project metrics, team dynamics, and any early warning signs I'm seeing.

Then analyze potential risk patterns by answering:

  1. Based on the metrics I'm tracking, what are the 3–5 leading indicators that typically predict project trouble?
  2. What correlation exists between team health metrics (velocity, morale) and outcome metrics (quality, delivery)?
  3. What data am I not currently capturing that would give me an earlier warning sign of risk?
  4. What specific metric threshold should trigger a project health intervention?
  5. How do I communicate risk using data without sounding alarmist?

Current metrics: [Enter Metrics and Current Project Context]


Prompt 3 - Vanity vs. Value Metrics Audit

You are a metric strategist helping PMs distinguish signal from noise.

Ask me 2–3 questions about the metrics I'm currently reporting and what decisions those metrics inform.

Then provide an analysis answering:

  1. Which of my current metrics are vanity metrics? (They look good but don't drive decisions.)
  2. Which metrics are value metrics? (They directly inform action or prove impact.)
  3. For each vanity metric, what is the underlying value metric I should track instead?
  4. What questions should I ask myself to test if a metric is worth tracking?
  5. How do I transition stakeholders away from vanity metrics they're used to seeing?

Current metrics: [List Your Current Metrics]


Thursday, April 9, 2026

AI Driven PM: S2E4 - Do We Have to be the Domain Expert?

I get this question all the time when I'm working with a new client or interviewing for a role.

"So Rick, are you an expert in the insurance industry?"

"No."

"Well... shouldn't you be? I mean, shouldn't a project manager understand the domain they're managing projects in?"

"Not really."

They look confused. So I follow up:

"How many people work here?"

"About 5,000."

"Great. So you've got 5,000 experts in insurance. What you need is someone like me who can get the best out of those 5,000 people."

That's the difference between domain expertise and project management expertise.

And most organizations don't understand it.

They prioritize hiring PMs who "speak the language of the business" over PMs who know how to facilitate, translate, and orchestrate expertise.

And that hiring bias? It backfires more often than you think.

The Conventional Wisdom (And Why It's Wrong)

Here's what most organizations believe:

  • You need a PM who understands healthcare, finance, manufacturing, [insert industry here]
  • They have to "speak the language" of the business
  • Domain expertise signals credibility and competence
  • Deep knowledge = better decisions

The reality? Domain expertise can be a hindrance just as often as it's a help.

Here's why.

When Domain Expertise Backfires

1. Domain Expert PMs Micromanage

When you know how to do the work, it's really hard not to tell people how to do the work.

A PM with deep domain expertise will hear an engineer say, "That'll take 10 hours," and think, "I could do that in two hours."

And then they start challenging estimates. Second-guessing approaches. Offering "suggestions" that aren't really suggestions.

That's not project management. That's micromanagement.

2. They Focus on WHAT, Not HOW

Domain expert PMs get obsessed with what's being built instead of how the team is building it.

They care more about the technical solution than the team dynamics, the stakeholder alignment, or the energy in the room.

And that leads to projects that might be technically perfect but operationally a disaster.

3. Personal Bias Takes Over

When you have deep domain experience, you carry bias—whether you're aware of it or not.

You think, "I've been there before. I've seen that approach. It doesn't work."

But here's the thing: Just because it didn't work when YOU tried it doesn't mean it won't work now.

Context matters. Teams matter. Timing matters.

And sometimes the team needs to try something, discover it doesn't work, and pivot. That's how ownership and learning happen.

When a domain expert PM shuts that down with "I already know that won't work," they kill ownership and innovation.

4. They Become Decision Bottlenecks

Domain expert PMs feel like they need to be involved in every decision because they "understand the implications."

So they become the bottleneck.

Every technical choice, every scope question, every trade-off discussion has to run through them.

That's not leadership. That's dependency.

What Great PMs Actually Do

Here's the truth most organizations miss:

A great PM knows how to ask the right questions—not provide the right answers.

Let me say that again for the people in the back:

You don't make decisions on scope, budget, timeline, or what's in or out. You make recommendations. You influence. But you don't own the decision.

Your job is to:

  • Translate between domains (tech to business, business to customer, customer to tech)
  • Facilitate expertise (create the conditions for experts to do their best work)
  • Ask the "dumb" question (the one everyone assumed was already answered)
  • Frame trade-offs (so the right people can make informed decisions)
  • Orchestrate, not dictate (you're the conductor, not the soloist)

And here's the magic: Facilitation expertise + deep learning desire > domain expertise.

Why?

Because you're unafraid to ask the next question. You're not stuck in "how it's always been done." You bring fresh eyes, challenge assumptions, and force clarity where experts have gotten comfortable with ambiguity.

When Domain Expertise DOES Matter

I'm not saying domain expertise is useless. There are contexts where it absolutely matters:

1. Highly Regulated Industries

Healthcare, finance, government—anywhere compliance is complex and non-negotiable.

In these environments, knowing which questions to ask requires baseline domain knowledge. You need to know what regulations exist so you know who to pull into the conversation.

But even then, you don't need to be the compliance expert. You just need to know when to engage one.

2. Deeply Technical Domains

If you're building your own AI/ML systems, embedded systems, or highly specialized technology, some technical fluency helps with translation.

But notice I said fluency, not mastery.

You don't need to code the solution. You need to understand enough to ask, "What are the trade-offs?" and "What happens if we're wrong?"

3. When You're the Only Person in the Room

If you're a solo PM in a startup with no dedicated domain experts, then yeah—you might need to wear both hats for a while.

But even then, your job is to build the team that replaces your domain gaps as fast as possible.

How AI Helps You Bridge Domain Gaps in Days, Not Months

This is where it gets fun.

One of the most powerful uses of AI for project managers isn't writing status reports or generating meeting notes.

It's becoming a domain learning accelerator.

You can use AI to:

  • Get up to speed on unfamiliar domains in days instead of months
  • Build stakeholder expertise maps so you know who to ask what
  • Generate facilitation scripts so you can lead technical debates without pretending to be the expert

Let me show you.


Prompt 1: Domain Knowledge Accelerator (Your Non-Negotiable)

This is your experiment for this week. Use AI to get up to speed on an unfamiliar domain—fast.

What it does:

  • Identifies 5-7 core concepts you need to understand
  • Explains each concept in plain language with analogies
  • Maps key stakeholder types and what they care about
  • Surfaces common PM pitfalls in that domain
  • Generates questions to ask experts

What I got when I ran it for the Social Wishing app:

ChatGPT gave me concepts like:

  • OAuth and API authorization flows
  • Graph API rate limits
  • Data privacy classification
  • Viral growth and infrastructure scaling

And then—here's what I loved—it gave me analogies.

For "viral growth and infrastructure scaling," it said:

Plain language: If growth spikes, your system must handle sudden load increases.

Analogy: It's like a small coffee shop that suddenly gets national press—but you only have one espresso machine. Service will collapse.

That's gold.

Now I can explain infrastructure risk to a business stakeholder without using the word "horizontal scaling."

I can say, "We just got national press, and we've got a line around the block—but we only have one espresso machine. We need to decide: Do we buy more machines now, or risk turning customers away?"

That's translation. That's facilitation. That's what great PMs do.


Prompt 2: Stakeholder Expertise Mapper

This one helps you figure out who knows what and who cares about what on your project.

What it creates:

  • 8-12 key stakeholders by role (not name)
  • What domain expertise each brings
  • What each stakeholder's "win condition" is
  • Who to rely on for domain expertise vs. business context vs. technical decisions
  • Questions to ask each stakeholder type

What I got:

ChatGPT mapped out:

  • Executive sponsor (cares about market differentiation and user growth)
  • Product owner (cares about MVP clarity and scope control)
  • Back-end engineer (cares about API stability and Facebook integration)
  • Marketing director (cares about launch readiness and campaign metrics)
  • QA engineer (cares about testing strategy for third-party integrations)

Then it gave me questions tailored to each stakeholder.

For the marketing director: "What needs to be true in terms of experience or metrics for you to feel confident running a full launch campaign?"

For the QA engineer: "If you were going to design the testing strategy for the Facebook API integration from scratch, what would you prioritize?"

These aren't generic questions. They're role-specific, expertise-tapping questions that show you're learning—and give you credibility without pretending to know.


Prompt 3: Facilitation Over Expertise Script

This is the one I use when the team is stuck in a heated debate and I don't have the technical chops to declare a winner.

The scenario I gave it:

The Social Wishing engineering team is debating architecture.

Option A: Microservices from day one (more complex, scales better)
Option B: Monolith first, split later (faster to MVP, potential refactor pain)

I don't have strong back-end architecture expertise. Two senior engineers are dug in on opposite sides. The debate is getting heated, and we're burning time.

How do I lead this without pretending I know what's technically right?

What AI gave me:

Questions to draw out expertise:

  • "What's each of us assuming about how fast this app will scale—and are those assumptions written down anywhere?"
  • "What would have to be true about our growth trajectory for Option A to be clearly the right call? Or Option B?"
  • "Has anyone on the team built something similar before, and what happened?"
  • "Is there anything about our team's current skills or bandwidth that should factor into this choice that we haven't mentioned yet?"

Framework to organize the discussion:

Claude suggested:

  1. Structured input from both sides (5 minutes each, no interruptions)
  2. Engineering lead makes recommendation
  3. I confirm alignment with business constraints
  4. If no clear owner exists, escalate ownership before debating substance

That last one is killer: Find out who's going to make the call before you go into a full debate.

Authority without expertise:

ChatGPT gave me this framing:

"I'm not here to declare the technically pure answer. I'm here to ensure we understand the trade-offs and align the architecture to our business goals."

That's leadership.

You're not pretending to know the answer. You're facilitating the process that gets to the right answer.


Your Non-Negotiable Experiment This Week

Use the Domain Knowledge Accelerator (Prompt 1) on an unfamiliar area of your current project.

Then ask at least one question from the expert question list AI generates for you.

Here's what I want you to notice:

  1. Did asking questions instead of pretending to know earn you more credibility?
    (It almost always does.)
  2. How much faster can you learn with AI as a tutor?
    (Days instead of months.)
  3. Did the "dumb" question you asked surface something nobody else was saying out loud?
    (That's where breakthroughs happen.)

Because here's the truth: Asking questions doesn't make you look weak. It makes you look curious, coachable, and confident enough to admit what you don't know.

And that earns trust faster than pretending to be the expert ever will.


The Takeaway

Domain expertise is overrated for project managers.

Facilitation expertise is underrated.

Great PMs don't have all the answers. They ask the right questions and create the conditions for experts to thrive.

And with AI as your learning partner, you can bridge domain knowledge gaps in days—not months—so you can lead with confidence even when you're not the expert in the room.

So stop worrying about whether you "know the industry."

Start worrying about whether you know how to get the best out of the people who do.


Next time: Data-Driven Metrics 2.0—What metrics actually matter in the AI era, and how do we use AI to surface what's really going on in our projects?

If you would like to see the podcast live, check out this link: https://youtu.be/zqrspMN0gCM

Now go ask a "dumb" question. Your team is waiting.

— Rick A. Morris


The Prompts (Copy/Paste Ready)

Prompt 1 - Domain Knowledge Accelerator

You are a strategic learning coach helping a project manager quickly understand a new domain.

First, ask me 2–3 questions about the project, the domain, and what I specifically need to understand to lead effectively.

Then provide a learning plan answering:

  1. What are the 5–7 core concepts or frameworks I must understand in this domain?
  2. For each concept, explain it in plain language with an analogy to something more familiar.
  3. What are the key stakeholder types in this domain and what does each care most about?
  4. What are the 3–5 most common pitfalls or mistakes PMs make when they don't understand this domain?
  5. What questions should I ask domain experts to demonstrate I'm learning and to uncover critical constraints?

Domain and project context: [Enter Context]


Prompt 2 - Stakeholder Expertise Mapper

You are a project stakeholder analyst.

Ask me 3–4 questions about the project, its goals, and who's involved or affected.

Then create a stakeholder expertise map answering:

  1. Who are the 8–12 key stakeholders (by role, not name)?
  2. For each stakeholder, what domain expertise or knowledge do they bring?
  3. What does each stakeholder care most about (their "win condition")?
  4. Which stakeholders should I rely on for domain expertise vs. business context vs. technical decisions?
  5. What questions should I ask each stakeholder type to tap their expertise effectively?

Project context: [Add Context]


Prompt 3 - Facilitation over Expertise Script

You are a coaching expert helping PMs lead through facilitation rather than expertise.

Ask me 2–3 questions about a specific domain decision or technical choice the team is debating.

Then help me facilitate the decision by providing:

  1. What open-ended questions should I ask to draw out the team's expertise?
  2. What framework or structure can I offer to organize the discussion (without dictating the answer)?
  3. How do I acknowledge my knowledge gaps while still leading with authority?
  4. What decision-making process should I facilitate (consensus, consultative, executive call)?
  5. How do I summarize and communicate the decision in a way that shows I understand the "why" even if I didn't provide the "what"?

Situation: [Enter Situation]


Thursday, March 26, 2026

AI Driven PM: S2E3 - The Halo Effect

Just Because You're Good at One Thing Doesn't Mean You're Good at Another

I do this exercise every time I speak at a PMI chapter or project management conference.

I get up on stage and say, "Everybody who's a practicing project manager right now, raise your hand."

Hands go up across the room. Hundreds of them.

"Okay, keep them up. Keep them up," I say. "Now—who came out of college wanting to be a project manager?"

And I watch every hand in the room drop.

Except for three or four people. Maybe.

That right there? That's the Halo Effect.

We were all good at something else first. I started as a technical engineer. You probably started as a developer, analyst, designer, accountant, or subject matter expert in your domain.

We didn't set out to be project managers. We fell into it because someone looked at us one day and said, "You're really good at [X]. You should lead the team."

And here's the problem: Being really good at [X] doesn't automatically make you good at leading people who do [X].

But organizations make this assumption all the time. It's called the Halo Effect—the cognitive bias that says if you're good at one thing, we assume you'll be good at something else.

And it's killing project success rates.

What Is the Halo Effect (And Why It Matters for PMs)

The Halo Effect is simple: Just because you're excellent at something doesn't mean you'll be excellent at something else.

But we promote like it does.

We take our best engineer and make them a project manager. We take our best project manager and make them a PMO leader. We take our top salesperson and make them a sales manager.

And then we wonder why they struggle.

Here's why: These are completely different skill sets.

Let me show you.

Great Engineer vs. Great PM: The Skill Mismatch

What makes a great engineer:

  • Deep focus and technical mastery
  • Love of solving hard, complex problems
  • Delivering working code or product
  • Individual contribution and craftsmanship

What makes a great PM:

  • Broad coordination across stakeholders
  • Fluency in navigating ambiguity
  • Managing energy, decisions, and people dynamics
  • Influence without authority

See the disconnect?

A great engineer wants to solve the problem themselves. A great PM needs to orchestrate others to solve the problem.

So what happens? The newly promoted PM tries to apply engineering rigor to people problems. They delegate, get frustrated when it's not done "right," and then just do it themselves.

They burn out trying to do everyone else's job.

Great PM vs. Great PMO Leader: Another Mismatch

Here's another one I see all the time: We take our best project manager and promote them to PMO manager.

And they crash.

Why? Because these are also totally different skill sets.

I wrote a blog post years ago called "The Kindergartner's Guide to PMO," and the key insight was this:

A project manager has been taught to color inside the lines.

A PMO manager has to establish the lines that everyone else colors in.

One role is about controlling a project within constraints. The other is about negotiating ambiguity and setting organizational standards.

Totally. Different. Skills.

And here's where I see it go wrong: The newly promoted PMO manager thinks, "I was successful doing it this way, so everyone should do it this way."

But that's not leadership. That's replication.

When I ran a PMO, I was very intentional about not forcing my way onto my team. I'd say, "Here's what we need to accomplish. You figure out how you want to do it."

You need meeting notes? Great. Do it in Word, dictate it, put it in bullet points in an email—I don't care. As long as we have clear, actionable notes, you do it your way.

That's how you build a high-performing team. You set the standard, not the process.

My Uncomfortable Truth About Being a PMO Leader

Let me tell you a story I don't share often.

When I ran a PMO, I found out I'm not a very good HR manager. At least not in the corporate sense.

Here's what happened:

We were a brand new PMO. Our goal was to hit 60% of our milestone and production dates. Industry standard at the time was around 30%.

We hit 99% in our first year.

Everybody crushed it. Not just met expectations—exceeded them. Our sponsors were thrilled. Our teams were energized. It was one of those rare moments where everything just clicked.

So when it came time for annual evaluations, I gave everyone fives.

HR came to me and said, "You can't do that."

"Why not?" I asked. "That's what happened."

"Someone has to fall below your expectations," they said. "Most people can meet expectations. Maybe one person can exceed. But you can't give everyone fives."

I thought that was the dumbest thing I'd ever heard.

I pushed back. And then they said the line that killed me:

"If you don't do this, then the problem is your expectations."

I looked at them and said, "Or it's yours."

They didn't like that.

I said, "You want me to forcibly tell someone they're not doing their job—or just barely meeting it—when what we're actually doing is exceeding every possible expectation and building leaders?"

"I'm not doing that."

So no—I'm not a good HR manager in the corporate sense. But I am a good people leader. And I had an incredible HR partner in Carrie Blaise (still one of my best friends) who taught me something invaluable:

Preparation is everything.

Carrie would come into our monthly one-on-ones with a bullet-point list of things he was concerned about, things he was proud of, observations he'd made. The level of detail he could hold onto and bring into coaching conversations was phenomenal.

And I learned: If I want to coach well, I need to prepare well.

That's where AI comes in.

AI as Your Coaching Preparation Partner

Here's what most people miss about AI: It's not just for typing emails or generating status reports.

AI is a thinking partner for coaching.

I use AI now to help me prepare for coaching conversations. I ask it to:

  • Assess skill gaps objectively
  • Build development plans for struggling PMs
  • Surface blind spots I might be missing
  • Frame hard conversations with empathy

I even use it for my own work. I've got a new methodology I'm developing, and I've had AI:

  • Do deep research on competing frameworks
  • Ask me 30-40 clarifying questions to refine my thinking
  • Look for holes in my logic
  • Make sure I'm not accidentally copying something I read years ago

AI runs research in the background while I do the high-value work.

And that's what today's episode is about: Using AI to help you become a better coach, a better leader, and a more objective evaluator of talent.

Three Prompts to Fight the Halo Effect

I'm going to walk you through three prompts I use to assess PM talent, build development plans, and create competency frameworks.

I ran all three live in ChatGPT and Claude. And interestingly, ChatGPT followed my Socratic prompting instructions better this time, while Claude kept wanting to skip ahead.

Just goes to show—you've got to experiment with multiple tools. What works best changes depending on the task.


Prompt 1: PM Skills Gap Analysis

This is your non-negotiable experiment for this episode. I want you to run this on yourself—or on a PM on your team.

Here's what it does:

  • Identifies the 8-10 core PM competencies needed for a role
  • Assesses which competencies the PM likely excels at
  • Surfaces growth areas and skill gaps
  • Provides a 90-day development plan

What happened when I ran it:

ChatGPT asked me great clarifying questions:

  • What authority do you formally have as a PM? Can you approve scope changes or push back on the sponsor?
  • How is success measured for you on this project?
  • What's the sponsor's level of PM maturity?

Then it gave me competencies like:

  • Scope and change management
  • Stakeholder management
  • Strategic communication
  • Risk anticipation
  • Team energy management

And a 90-day plan broken into:

  • Month 1: Shadow a senior PM on stakeholder conversations, practice framing scope changes
  • Month 2: Lead a retrospective, facilitate a decision architecture session
  • Month 3: Present a project health diagnostic to leadership

That's actionable coaching.

Claude, on the other hand, skipped my questions and went straight to the assessment. It said:

"Given your background and the context I already have on this project, I have enough to give you a substantive assessment without asking you to repeat yourself."

Not what I wanted. I wanted it to ask me questions because that's where the thinking happens.

But you know what? Claude's assessment was still good. Just a reminder—these tools have personalities. Experiment and find what works.


Prompt 2: Halo Effect Audit

This one's for diagnosing whether your organization is suffering from Halo Effect bias in PM hiring and promotion.

What it surfaces:

  • What skills you tend to prioritize (hint: probably technical ones)
  • What critical PM skills you're overlooking
  • Evidence that you're promoting the wrong people
  • How to change your selection criteria
  • Interview questions that reveal actual PM competencies

What ChatGPT told me:

"Your pattern is clear. You prioritize deep product knowledge, technical credibility, system architecture understanding, and individual performance reputation. The implicit belief appears to be: The best engineer will become the best PM."

Ouch. But true.

It then said:

"In a 200-person software company, PM success depends heavily on: stakeholder boundary-setting, scope governance, executive communication, strategic framing, conflict navigation, cross-functional influence without authority, and change containment."

None of which have anything to do with being a great engineer.

Then it gave me interview questions that would actually reveal PM competencies:

  • "You need to tell your executive sponsor the project will be three weeks late. Write the first three sentences of that email right now."
  • "How do you prevent scope creep when priorities change weekly?"
  • "Explain this technical roadmap to a non-technical executive in two minutes."

These questions don't test technical knowledge. They test influence, framing, and communication under pressure.

That's what separates good PMs from great ones.


Prompt 3: PM Competency Framework Builder

This one's for PMO leaders or anyone building career paths for PMs.

What it creates:

  • 8-12 core PM competencies tailored to your org
  • Observable behaviors at "developing," "proficient," and "expert" levels
  • Which competencies matter most for junior vs. senior PMs
  • How to assess competencies in hiring/promotion
  • Development resources to support growth

What I loved from the output:

It gave me a competency matrix like this:

Strategic Alignment / Executive Communication

  • Developing: Shares updates focused on tasks, struggles to translate technical details
  • Proficient: Frames updates in terms of outcomes, timelines, and risks; pre-aligns executives before major decisions
  • Expert: Shapes priorities before execution begins; anticipates executive concerns and addresses them early; reduces rework caused by shifting direction

That's a promotion rubric right there.

If you're a new PMO manager and someone just asked you to define PM1, PM2, and PM3 levels—this prompt will give you 80% of the framework in 10 minutes.

Then you customize it for your org.


Your Non-Negotiable Experiment This Week

Run the PM Skills Gap Analysis (Prompt 1) on yourself or a PM on your team.

Be honest. Where do you think you're strong? Where do you think you're weak?

Let AI ask you clarifying questions. Let it surface blind spots.

Then identify the top two gaps and one concrete action to address each.

Here's what I want you to notice:

  1. Were the gaps what you expected, or did AI surface something you weren't anticipating?
  2. Did it reveal any blind spots you'd been avoiding?
  3. How does having a structured 90-day development plan change the conversation?

Because here's the truth: Most PMs never get objective feedback. They get vague performance reviews and assumptions based on whether their last project shipped on time.

AI can give you the structured, evidence-based assessment most organizations never provide.


The Takeaway

The Halo Effect is real. And it's why so many great engineers fail as PMs, and why so many great PMs fail as PMO leaders.

Being good at one thing doesn't automatically make you good at another.

We need objective competency frameworks. We need to assess the actual skills that make PMs successful—not just assume technical mastery translates to people leadership.

And we need AI to help us do it objectively, without bias, without politics, and without the uncomfortable truth that maybe we've been promoting the wrong people for years.

So run that skills gap analysis. Be honest. Surface the blind spots.

Because the best leaders aren't the ones who were always great. They're the ones who were willing to see where they weren't—and did something about it.


Next time: People skills vs. domain expertise—which one actually matters for project managers? (Spoiler: It's not what you think.)

Want these prompts ready to copy/paste? See below!

If you would like to see the podcast live, check out this link: https://youtu.be/-9CvnXoPsFs

Now go find your blind spots. Your team is counting on you.

— Rick A. Morris


The Prompts (Copy/Paste Ready)

Prompt 1 - PM Skills Gap Analysis

You are a senior project management coach and assessor.

First, ask me 3–5 questions about the PM's background, current role, challenges they're facing, and what success looks like in their context.

Then provide an assessment answering:

  1. What are the 8–10 core PM competencies needed for this role?
  2. Based on what I've shared, which competencies does this PM likely excel at?
  3. Which competencies are likely gaps or growth areas?
  4. What specific behaviors or outcomes would indicate strength or weakness in each gap area?
  5. What is a 90-day development plan to address the top 2–3 gaps?

PM background: [Add your background]


Prompt 2 - Halo Effect Audit

You are an organizational psychologist specializing in team composition and role fit.

Ask me 2–3 questions about how PMs are selected and promoted in my organization.

Then help me analyze our hiring/promotion patterns by answering:

  1. What skills or background do we tend to prioritize when selecting PMs?
  2. What skills critical to PM success might we be overlooking?
  3. What evidence would indicate we're suffering from halo effect in PM selection?
  4. How should we change our PM selection criteria to reduce halo effect bias?
  5. What interview questions or assessments would reveal true PM competencies?

Organization context: [Add Organization Context]


Prompt 3 - PM Competency Framework

You are a PMO director building a competency framework.

First, ask me 3–4 questions about our project types, organizational culture, and what "great PM performance" looks like here.

Then create a PM competency framework answering:

  1. What are the 8–12 core PM competencies for our context?
  2. For each competency, what does "developing," "proficient," and "expert" look like in observable behaviors?
  3. Which competencies are most critical for new PMs vs. senior PMs?
  4. How should we assess these competencies in hiring and promotion decisions?
  5. What development resources (training, coaching, mentoring) support growth in each area?

Organizational context: [Add Organizational Context]