Thursday, April 23, 2026

AI Driven PM: S2E5 - Data Rules 2.0

Let me tell you something I learned from Six Sigma that changed the way I run projects:

If you can't measure it, you can't improve it.

I came up through the DMAIC era—Define, Measure, Analyze, Improve, Control. I love data. I live for metrics. Give me a spreadsheet and a story to tell, and I'm happy.

But here's the uncomfortable truth I've discovered after 30+ years and 150+ implementations:

Most of the metrics we track don't tell the story we actually need to tell.

We obsess over story points. Velocity. Task completions. Hours logged. Burn rate.

And then an executive asks, "Are we going to hit the launch date?"

And we stare at our dashboard.

And it doesn't answer the question.

That's not a data problem. That's a framing problem.

The Activity Metrics Trap

Here's what most PMs measure:

  • Story points completed
  • Tasks closed
  • Hours logged
  • Budget spent to date
  • Number of commits to repo

You know what all of those have in common?

They measure busyness. Not achievement.

Activity doesn't equal value. And executives—even if they can't always articulate it—don't actually care about activity. They care about outcomes that drive business value.

I was working with a client recently who said their goal was a 2% net sales lift.

I asked, "If you land Walmart, do you win?"

They said, "No."

I said, "Okay—walk me through what this 2% net sales lift actually means."

And here's what was wild: They were using that metric to justify building a data warehouse, but they had no idea what data they were going to put into the warehouse to measure it.

The metric was real. The measurement plan was nonexistent.

That's the trap.

We pick metrics because they're easy to track, not because they answer the questions that actually matter.

We Are Lawyers. Our Sponsors Are Judges.

Here's a frame I use all the time that completely changes how PMs think about data:

Your job is to build a case. Your sponsor is the judge.

You gather evidence. You analyze the data. You present your recommendation. You let the judge decide.

And if you don't like the ruling? You don't argue in the courtroom.

You appeal.

You go back, review your data, figure out why it didn't tell the compelling story you needed, and you come back better prepared.

And here's the thing: If I have more data than you, I'm going to win the conversation.

Not because I'm louder or more senior or more confident.

Because data tells a story. And the PM who tells the better story wins.

The problem is we keep bringing the wrong data to court.

What You Should Actually Be Measuring

Here's what I suggest you measure instead of activity:

1. Value Delivered

Not "features shipped"—features in production being used.

I worked on the GrowthDay app build. We had all these features planned, but at the last minute, the founder said, "Wouldn't it be cool if we had a daily motivational segment—something that fires people up every morning?"

We almost cut it. Time pressure. Scope pressure.

We didn't cut it.

That tiny, last-minute feature became one of the stickiest in the whole app.

But here's the key: We only knew it because we measured how people were actually using the app. How many times. How long they stayed. Whether they came back.

If you're not measuring features being used, you don't know if you're delivering value or just shipping code.

2. Time to Impact

How fast do you go from idea to user value?

What's your average cycle time from "we need this" to "users are using this"?

That's a story executives actually care about.

3. Quality Signals

Defect rates. Technical debt. User satisfaction.

But here's the nuance: When you compress testing due to date pressure, defect growth becomes exponential—not linear.

I had this exact conversation in a live demo. The data showed:

  • Sprint velocity dropped
  • Defect rate rising
  • Testing coverage shrinking

I told the team: "If testing continues to erode, expect 15 to 18 defects per sprint within four to six weeks. The rework alone will cost more time than the testing would have."

That's the conversation a PM should be having with a sponsor. Not "we're at 80% of story points." But: "We're trading short-term velocity for long-term quality debt. Here's what that actually costs us."
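The "exponential, not linear" point is easy to show with a toy projection. This is a sketch, not a model: the 15% per-sprint growth rate is an assumption chosen for illustration, but from the same starting point it lands in the same 15-to-18 range over the same window.

```python
# Toy projection of compounding defect growth. The 15% per-sprint
# growth rate is an assumption for illustration, not a measured value.

def project_defects(current: float, growth_rate: float, sprints: int) -> list[float]:
    """Project per-sprint defect counts under compounding growth."""
    projection = []
    for _ in range(sprints):
        current *= (1 + growth_rate)
        projection.append(round(current, 1))
    return projection

# Defects already rose from 7 to 12 per sprint. If testing keeps eroding
# and defects compound at ~15% per sprint, two to three more sprints
# (four to six weeks on a two-week cadence) lands in the 15-18 range.
print(project_defects(12, 0.15, 3))
```

Fit the growth rate to your own defect history; the point is that the curve compounds, so a small per-sprint rate becomes a big number fast.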

4. Team Health

Velocity stability (not just velocity), morale, attrition risk, sentiment analysis.

A velocity drop after adding a team member? That's normal. Expected, even.

Knowledge transfer consumes senior capacity. Code review loads increase. A 20% velocity drop after onboarding one person is common in the first two to four sprints.

But if it hasn't recovered in three sprints? That's structural. That's something else.

Know the difference.
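That distinction can be encoded as a simple check. A sketch, assuming a 5% recovery band and the three-sprint window from the rule of thumb above; both thresholds should be tuned to your team's history.

```python
# Sketch of the onboarding-drag vs. structural diagnosis. The 5% recovery
# band and the three-sprint window are assumptions from the rule of thumb
# above; tune them to your own team's history.

def diagnose_velocity(baseline: float, post_onboarding: list[float]) -> str:
    """Classify a velocity dip after adding a team member."""
    if not post_onboarding:
        return "no data"
    # Recovered: latest sprint is back within 5% of the old baseline.
    if post_onboarding[-1] >= baseline * 0.95:
        return "recovered"
    # Still down after more than three sprints: something structural.
    return "structural" if len(post_onboarding) > 3 else "onboarding drag"

print(diagnose_velocity(35, [28, 29, 30]))      # inside the expected window
print(diagnose_velocity(35, [28, 28, 27, 27]))  # four sprints, no recovery
```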

5. Stakeholder Confidence

Sponsor engagement. Clarity of vision. Meeting attendance.

When your sponsor starts missing meetings, that's a leading indicator—not a footnote.

The Metric I'm Most Proud Of: Scope Stability Index

Here's one I love that most teams don't track:

Scope Stability Index = New story points added ÷ Total committed story points

If that number exceeds 15% mid-sprint, execution predictability collapses.

Let me make that concrete. You committed to 30 story points for the sprint. During the sprint, 10 new points get added. That's 10 ÷ 30 = 33%.

Your sprint just broke.

Not because the team is failing—but because the input changed faster than the output could absorb it.
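The index is trivial to compute once you're tracking mid-sprint additions. A minimal sketch using the formula and 15% threshold above:

```python
# The Scope Stability Index from this post, with its 15% rule of thumb.

def scope_stability_index(added_points: float, committed_points: float) -> float:
    """New story points added mid-sprint / total committed story points."""
    return added_points / committed_points

def sprint_at_risk(added_points: float, committed_points: float,
                   threshold: float = 0.15) -> bool:
    """True when mid-sprint scope churn exceeds the predictability threshold."""
    return scope_stability_index(added_points, committed_points) > threshold

# The worked example: 30 points committed, 10 added mid-sprint.
print(f"{scope_stability_index(10, 30):.0%}")  # 33%
print(sprint_at_risk(10, 30))                  # True
```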

This is the conversation you bring to a sponsor: "Every time we add scope mid-sprint, we pay a compounding tax. Here's what that tax looks like in data."

That's a case. That's a lawyer walking into court prepared.

How AI Fits Into All of This

Here's the piece most people miss:

AI can automate all the activity tracking.

I have agents that pull data from JIRA, Microsoft Planner, ServiceNow, spreadsheets—normalize it, format it, and report it. Automatically.

That means I'm not spending 60% of my week staring backwards at what already happened.

I'm spending it on outcome measurement, impact analysis, and inventing new metrics that tell the parts of the story nobody else is telling.

AI can correlate your leading indicators (velocity, quality, team sentiment) with your lagging indicators (revenue, retention, delivery dates). It can isolate trends across data sources you'd never have time to manually connect.

But here's the catch, and I said this right at the top of the episode:

AI can't do for you what it can't do through you.

The metrics it surfaces are only as good as the questions you're asking. You have to know what story you're trying to tell before AI can help you tell it.

That's what these three prompts are designed to do.


Prompt 1: Metrics Dashboard Designer

Start here. This builds you a dashboard that actually answers the questions your executives are asking.

What it creates:

  • 3-5 outcome metrics that prove value delivery
  • 3-5 activity health metrics as leading indicators
  • Data sources and measurement approach for each
  • Green/yellow/red thresholds
  • How to present differently to executives vs. team vs. sponsor

What I got when I ran it for the Social Wishing app:

ChatGPT asked:

  • "What would cause leadership to declare this a failure at month six or nine?"
  • "How does this app make money in the first 12-18 months?"
  • "Do you have any analytics tooling selected?"

That first question? Bring it to your sponsor. Seriously. Ask them: "What would cause you to declare this a failure at month six?" You'll learn more in that 10-minute conversation than in three weeks of status reports.

The outcome metrics it generated:

  • New user signups per week
  • 90-day active user rate
  • Wish progress rate
  • 30-day retention

The leading indicators:

  • Visitor-to-signup conversion rate
  • Invite rate (% of users who invite at least one friend)
  • Time to first wish (median time from signup to first wish created)
  • Sprint predictability
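Two of those indicators fall straight out of event data if you're logging signups, wishes, and invites. A sketch with a made-up record shape (the signup_at, first_wish_at, and invited_friend fields are assumptions, not the app's real schema):

```python
from datetime import datetime
from statistics import median

# Hypothetical per-user records; real analytics tooling would supply these.
users = [
    {"signup_at": datetime(2026, 4, 1, 9),  "first_wish_at": datetime(2026, 4, 1, 10), "invited_friend": True},
    {"signup_at": datetime(2026, 4, 1, 12), "first_wish_at": datetime(2026, 4, 2, 12), "invited_friend": False},
    {"signup_at": datetime(2026, 4, 2, 8),  "first_wish_at": None,                     "invited_friend": False},
]

def invite_rate(users: list[dict]) -> float:
    """Fraction of users who invited at least one friend."""
    return sum(u["invited_friend"] for u in users) / len(users)

def median_hours_to_first_wish(users: list[dict]) -> float:
    """Median hours from signup to first wish, skipping users with no wish yet."""
    hours = [(u["first_wish_at"] - u["signup_at"]).total_seconds() / 3600
             for u in users if u["first_wish_at"] is not None]
    return median(hours)

print(f"Invite rate: {invite_rate(users):.0%}")
print(f"Median time to first wish: {median_hours_to_first_wish(users)} hours")
```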

The dashboard format for founders: Single slide, three rows.

  • Row 1: Growth (signups, conversion)
  • Row 2: Engagement (wish progress, invite rate)
  • Row 3: Retention (30-day retention, trend arrow)

Simple. Powerful. Tells the dream story.


Prompt 2: Predictive Risk Indicator Finder

This one is for when you feel something is wrong but can't quite articulate it yet.

What it does:

  • Identifies leading indicators that predict trouble
  • Correlates team health metrics with outcome metrics
  • Surfaces data you're NOT capturing (but should be)
  • Sets intervention thresholds
  • Coaches you on how to communicate risk without panicking the room

I gave it this project context:

  • Sprint velocity dropped from 35 to 28 points
  • Defects up from 7 to 12 per sprint
  • Code review cycle time averaging 2.3 days
  • Sponsor missed last 2-3 meetings
  • Team sentiment dropped from 8.0 to 6.5
  • 8 new feature requests added this month, 3 original features cut

What AI told me:

"Velocity drop after adding a person is classic onboarding drag. A 20% drop is common in the first two to four sprints. If it doesn't recover within three sprints, the issue is structural—not onboarding."

"Your defect increase combined with shrinking testing coverage is the highest risk signal in your data. When testing coverage drops due to date pressure, defect growth becomes exponential, not linear. Expect 15 to 18 defects per sprint within four to six weeks if nothing changes."

And then—the one I loved most—it surfaced metrics I wasn't tracking:

Code review comment density per PR.

"High comment density means complexity or standards drift. Low comment density with long cycle times means avoidance. These require completely different interventions."

I would never have thought to track that. That's AI as a thinking partner.
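If you want to track it, the signal is easy to script once you can pull comment counts and cycle times per PR. A sketch with hypothetical thresholds:

```python
# Sketch of the comment-density signal. The 8-comment and 2-day
# thresholds are hypothetical; calibrate them against your repo history.

def review_signal(comments_per_pr: float, cycle_time_days: float,
                  dense: float = 8.0, slow: float = 2.0) -> str:
    """Map review patterns to the two interventions described above."""
    if comments_per_pr >= dense:
        return "complexity or standards drift"
    if cycle_time_days >= slow:
        return "possible avoidance"
    return "healthy"

print(review_signal(comments_per_pr=12, cycle_time_days=1.0))
print(review_signal(comments_per_pr=2,  cycle_time_days=2.3))
```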


Prompt 3: Vanity vs. Value Metrics Audit

This is the one that will save you from the most painful meeting of your career.

You know the meeting. Twenty executives in the room. You walk through your status report. They eat you alive.

Because your report answered "Are we busy?" instead of "Are we going to succeed?"

I've been in that meeting. I never want you to experience it.

I gave AI this list of my current metrics:

  • Tasks completed this week
  • Story points burned
  • Budget spent to date
  • Team utilization
  • Number of commits to repo
  • Lines of code written
  • Meetings held
  • Risks identified

And I told it: "Executives keep asking if we'll hit the launch date and user targets. My metrics don't answer that question."

Claude's response was brutal. And perfect:

"Why are you asking me questions, Rick? You've already diagnosed the problem yourself. Your metrics answer 'Are we busy?'—not 'Are we going to succeed?'"

"You have six vanity metrics out of eight. The executives are asking the right question. Your dashboard is giving them the wrong answer. It's not a data problem. It's a framing problem. You're reporting inputs when they're asking about outcomes."

Couldn't have said it better myself.

Then it gave me the swap:

Vanity Metric → Why It's Vanity → Replace With

  • Story points burned → doesn't predict completion without trend → forecasted completion date based on velocity trend
  • Team utilization → measures busyness, not throughput → scope stability index
  • Commits to repo → more commits can signal churn, not progress → defect escape rate
  • Lines of code → more code often means more defects → time to value / time to first wish

That table is a career-saver.


Your Non-Negotiable Experiment This Week

Two challenges:

1. Build your outcome metrics dashboard using Prompt 1. Take your current project and identify 3-5 metrics that actually answer your executive's burning questions.

2. Replace at least one vanity metric in your next status report with a value metric.

Just one swap.

Here's what I want you to notice:

  • How do stakeholders react when your report answers their actual questions?
  • Does better data help you spot risks earlier?
  • Do you feel more confident walking into that executive meeting?

Because here's the truth: The PM who tells the better story with better data wins.

Not because they're louder. Because they came prepared.


Next time: Net Operating Value—the metric I use for portfolio decisions. How to stack-rank your portfolio, make trade-off decisions, and help executives choose between good ideas using data that actually reflects business value.

Want these prompts ready to copy/paste? Head to PMThatWorks.com for the full library.

Now go build that dashboard.

— Rick A. Morris


The Prompts (Copy/Paste Ready)

Prompt 1 - Metrics Dashboard Designer

You are a data-driven PM coach and metric strategist.

First, ask me 4–5 questions about the project goals, stakeholders, team, and what success means in business terms.

Then help me design a metrics dashboard by answering:

  1. What are the 3–5 outcome metrics that prove the project is delivering value?
  2. What are 3–5 activity health metrics that are leading indicators of those outcomes?
  3. For each metric, what is the data source and how do we measure it?
  4. What thresholds or targets indicate green, yellow, red status for each metric?
  5. How should I present these metrics to executives vs. the team vs. the sponsor?

Project context: [Enter Context]


Prompt 2 - Predictive Risk Indicator Finder

You are a predictive analytics expert for project management.

Ask me 3–4 questions about current project metrics, team dynamics, and any early warning signs I'm seeing.

Then analyze potential risk patterns by answering:

  1. Based on the metrics I'm tracking, what are the 3–5 leading indicators that typically predict project trouble?
  2. What correlation exists between team health metrics (velocity, morale) and outcome metrics (quality, delivery)?
  3. What data am I not currently capturing that would give me an earlier warning sign of risk?
  4. What specific metric threshold should trigger a project health intervention?
  5. How do I communicate risk using data without sounding alarmist?

Current metrics: [Enter Metrics and Current Project Context]


Prompt 3 - Vanity vs. Value Metrics Audit

You are a metric strategist helping PMs distinguish signal from noise.

Ask me 2–3 questions about the metrics I'm currently reporting and what decisions those metrics inform.

Then provide an analysis answering:

  1. Which of my current metrics are vanity metrics? (They look good but don't drive decisions.)
  2. Which metrics are value metrics? (They directly inform action or prove impact.)
  3. For each vanity metric, what is the underlying value metric I should track instead?
  4. What questions should I ask myself to test if a metric is worth tracking?
  5. How do I transition stakeholders away from vanity metrics they're used to seeing?

Current metrics: [List Your Current Metrics]

 

Thursday, April 9, 2026

AI Driven PM: S2E4 - Do We Have to be the Domain Expert?

I get this question all the time when I'm working with a new client or interviewing for a role.

"So Rick, are you an expert in the insurance industry?"

"No."

"Well... shouldn't you be? I mean, shouldn't a project manager understand the domain they're managing projects in?"

"Not really."

They look confused. So I follow up:

"How many people work here?"

"About 5,000."

"Great. So you've got 5,000 experts in insurance. What you need is someone like me who can get the best out of those 5,000 people."

That's the difference between domain expertise and project management expertise.

And most organizations don't understand it.

They prioritize hiring PMs who "speak the language of the business" over PMs who know how to facilitate, translate, and orchestrate expertise.

And that hiring bias? It backfires more often than you think.

The Conventional Wisdom (And Why It's Wrong)

Here's what most organizations believe:

  • You need a PM who understands healthcare, finance, manufacturing, [insert industry here]
  • They have to "speak the language" of the business
  • Domain expertise signals credibility and competence
  • Deep knowledge = better decisions

The reality? Domain expertise can be a hindrance just as often as it's a help.

Here's why.

When Domain Expertise Backfires

1. Domain Expert PMs Micromanage

When you know how to do the work, it's really hard not to tell people how to do the work.

A PM with deep domain expertise will hear an engineer say, "That'll take 10 hours," and think, "I could do that in two hours."

And then they start challenging estimates. Second-guessing approaches. Offering "suggestions" that aren't really suggestions.

That's not project management. That's micromanagement.

2. They Focus on WHAT, Not HOW

Domain expert PMs get obsessed with what's being built instead of how the team is building it.

They care more about the technical solution than the team dynamics, the stakeholder alignment, or the energy in the room.

And that leads to projects that might be technically perfect but operationally a disaster.

3. Personal Bias Takes Over

When you have deep domain experience, you carry bias—whether you're aware of it or not.

You think, "I've been there before. I've seen that approach. It doesn't work."

But here's the thing: Just because it didn't work when YOU tried it doesn't mean it won't work now.

Context matters. Teams matter. Timing matters.

And sometimes the team needs to try something, discover it doesn't work, and pivot. That's how ownership and learning happen.

When a domain expert PM shuts that down with "I already know that won't work," they kill ownership and innovation.

4. They Become Decision Bottlenecks

Domain expert PMs feel like they need to be involved in every decision because they "understand the implications."

So they become the bottleneck.

Every technical choice, every scope question, every trade-off discussion has to run through them.

That's not leadership. That's dependency.

What Great PMs Actually Do

Here's the truth most organizations miss:

A great PM knows how to ask the right questions—not provide the right answers.

Let me say that again for the people in the back:

You don't make decisions on scope, budget, timeline, or what's in or out. You make recommendations. You influence. But you don't own the decision.

Your job is to:

  • Translate between domains (tech to business, business to customer, customer to tech)
  • Facilitate expertise (create the conditions for experts to do their best work)
  • Ask the "dumb" question (the one everyone assumed was already answered)
  • Frame trade-offs (so the right people can make informed decisions)
  • Orchestrate, not dictate (you're the conductor, not the soloist)

And here's the magic: Facilitation expertise + deep learning desire > domain expertise.

Why?

Because you're unafraid to ask the next question. You're not stuck in "how it's always been done." You bring fresh eyes, challenge assumptions, and force clarity where experts have gotten comfortable with ambiguity.

When Domain Expertise DOES Matter

I'm not saying domain expertise is useless. There are contexts where it absolutely matters:

1. Highly Regulated Industries

Healthcare, finance, government—anywhere compliance is complex and non-negotiable.

In these environments, knowing which questions to ask requires baseline domain knowledge. You need to know what regulations exist so you know who to pull into the conversation.

But even then, you don't need to be the compliance expert. You just need to know when to engage one.

2. Deeply Technical Domains

If you're building your own AI/ML systems, embedded systems, or highly specialized technology, some technical fluency helps with translation.

But notice I said fluency, not mastery.

You don't need to code the solution. You need to understand enough to ask, "What are the trade-offs?" and "What happens if we're wrong?"

3. When You're the Only Person in the Room

If you're a solo PM in a startup with no dedicated domain experts, then yeah—you might need to wear both hats for a while.

But even then, your job is to build the team that replaces your domain gaps as fast as possible.

How AI Helps You Bridge Domain Gaps in Days, Not Months

This is where it gets fun.

One of the most powerful uses of AI for project managers isn't writing status reports or generating meeting notes.

It's becoming a domain learning accelerator.

You can use AI to:

  • Get up to speed on unfamiliar domains in days instead of months
  • Build stakeholder expertise maps so you know who to ask what
  • Generate facilitation scripts so you can lead technical debates without pretending to be the expert

Let me show you.


Prompt 1: Domain Knowledge Accelerator (Your Non-Negotiable)

This is your experiment for this week. Use AI to get up to speed on an unfamiliar domain—fast.

What it does:

  • Identifies 5-7 core concepts you need to understand
  • Explains each concept in plain language with analogies
  • Maps key stakeholder types and what they care about
  • Surfaces common PM pitfalls in that domain
  • Generates questions to ask experts

What I got when I ran it for the Social Wishing app:

ChatGPT gave me concepts like:

  • OAuth and API authorization flows
  • Graph API rate limits
  • Data privacy classification
  • Viral growth and infrastructure scaling

And then—here's what I loved—it gave me analogies.

For "viral growth and infrastructure scaling," it said:

Plain language: If growth spikes, your system must handle sudden load increases.

Analogy: It's like a small coffee shop that suddenly gets national press—but you only have one espresso machine. Service will collapse.

That's gold.

Now I can explain infrastructure risk to a business stakeholder without using the word "horizontal scaling."

I can say, "We just got national press, and we've got a line around the block—but we only have one espresso machine. We need to decide: Do we buy more machines now, or risk turning customers away?"

That's translation. That's facilitation. That's what great PMs do.


Prompt 2: Stakeholder Expertise Mapper

This one helps you figure out who knows what and who cares about what on your project.

What it creates:

  • 8-12 key stakeholders by role (not name)
  • What domain expertise each brings
  • What each stakeholder's "win condition" is
  • Who to rely on for domain expertise vs. business context vs. technical decisions
  • Questions to ask each stakeholder type

What I got:

ChatGPT mapped out:

  • Executive sponsor (cares about market differentiation and user growth)
  • Product owner (cares about MVP clarity and scope control)
  • Back-end engineer (cares about API stability and Facebook integration)
  • Marketing director (cares about launch readiness and campaign metrics)
  • QA engineer (cares about testing strategy for third-party integrations)

Then it gave me questions tailored to each stakeholder.

For the marketing director: "What needs to be true in terms of experience or metrics for you to feel confident running a full launch campaign?"

For the QA engineer: "If you were going to design the testing strategy for the Facebook API integration from scratch, what would you prioritize?"

These aren't generic questions. They're role-specific, expertise-tapping questions that show you're learning—and give you credibility without pretending to know.


Prompt 3: Facilitation Over Expertise Script

This is the one I use when the team is stuck in a heated debate and I don't have the technical chops to declare a winner.

The scenario I gave it:

The Social Wishing engineering team is debating architecture.

Option A: Microservices from day one (more complex, scales better)
Option B: Monolith first, split later (faster to MVP, potential refactor pain)

I don't have strong back-end architecture expertise. Two senior engineers are dug in on opposite sides. The debate is getting heated, and we're burning time.

How do I lead this without pretending I know what's technically right?

What AI gave me:

Questions to draw out expertise:

  • "What's each of us assuming about how fast this app will scale—and are those assumptions written down anywhere?"
  • "What would have to be true about our growth trajectory for Option A to be clearly the right call? Or Option B?"
  • "Has anyone on the team built something similar before, and what happened?"
  • "Is there anything about our team's current skills or bandwidth that should factor into this choice that we haven't mentioned yet?"

Framework to organize the discussion:

Claude suggested:

  1. Structured input from both sides (5 minutes each, no interruptions)
  2. Engineering lead makes recommendation
  3. I confirm alignment with business constraints
  4. If no clear owner exists, escalate ownership before debating substance

That last one is killer: Find out who's going to make the call before you go into a full debate.

Authority without expertise:

ChatGPT gave me this framing:

"I'm not here to declare the technically pure answer. I'm here to ensure we understand the trade-offs and align the architecture to our business goals."

That's leadership.

You're not pretending to know the answer. You're facilitating the process that gets to the right answer.


Your Non-Negotiable Experiment This Week

Use the Domain Knowledge Accelerator (Prompt 1) on an unfamiliar area of your current project.

Then ask at least one question from the expert question list AI generates for you.

Here's what I want you to notice:

  1. Did asking questions instead of pretending to know earn you more credibility?
    (It almost always does.)
  2. How much faster can you learn with AI as a tutor?
    (Days instead of months.)
  3. Did the "dumb" question you asked surface something nobody else was saying out loud?
    (That's where breakthroughs happen.)

Because here's the truth: Asking questions doesn't make you look weak. It makes you look curious, coachable, and confident enough to admit what you don't know.

And that earns trust faster than pretending to be the expert ever will.


The Takeaway

Domain expertise is overrated for project managers.

Facilitation expertise is underrated.

Great PMs don't have all the answers. They ask the right questions and create the conditions for experts to thrive.

And with AI as your learning partner, you can bridge domain knowledge gaps in days—not months—so you can lead with confidence even when you're not the expert in the room.

So stop worrying about whether you "know the industry."

Start worrying about whether you know how to get the best out of the people who do.


Next time: Data-Driven Metrics 2.0—What metrics actually matter in the AI era, and how do we use AI to surface what's really going on in our projects?

If you would like to see the podcast live, check out this link: https://youtu.be/zqrspMN0gCM

Now go ask a "dumb" question. Your team is waiting.

— Rick A. Morris


The Prompts (Copy/Paste Ready)

Prompt 1 - Domain Knowledge Accelerator

You are a strategic learning coach helping a project manager quickly understand a new domain.

First, ask me 2–3 questions about the project, the domain, and what I specifically need to understand to lead effectively.

Then provide a learning plan answering:

  1. What are the 5–7 core concepts or frameworks I must understand in this domain?
  2. For each concept, explain it in plain language with an analogy to something more familiar.
  3. What are the key stakeholder types in this domain and what does each care most about?
  4. What are the 3–5 most common pitfalls or mistakes PMs make when they don't understand this domain?
  5. What questions should I ask domain experts to demonstrate I'm learning and to uncover critical constraints?

Domain and project context: [Enter Context]


Prompt 2 - Stakeholder Expertise Mapper

You are a project stakeholder analyst.

Ask me 3–4 questions about the project, its goals, and who's involved or affected.

Then create a stakeholder expertise map answering:

  1. Who are the 8–12 key stakeholders (by role, not name)?
  2. For each stakeholder, what domain expertise or knowledge do they bring?
  3. What does each stakeholder care most about (their "win condition")?
  4. Which stakeholders should I rely on for domain expertise vs. business context vs. technical decisions?
  5. What questions should I ask each stakeholder type to tap their expertise effectively?

Project context: [Add Context]


Prompt 3 - Facilitation over Expertise Script

You are a coaching expert helping PMs lead through facilitation rather than expertise.

Ask me 2–3 questions about a specific domain decision or technical choice the team is debating.

Then help me facilitate the decision by providing:

  1. What open-ended questions should I ask to draw out the team's expertise?
  2. What framework or structure can I offer to organize the discussion (without dictating the answer)?
  3. How do I acknowledge my knowledge gaps while still leading with authority?
  4. What decision-making process should I facilitate (consensus, consultative, executive call)?
  5. How do I summarize and communicate the decision in a way that shows I understand the "why" even if I didn't provide the "what"?

Situation: [Enter Situation]

 

Thursday, March 26, 2026

AI Driven PM: S2E3 - The Halo Effect

Just Because You're Good at One Thing Doesn't Mean You're Good at Another

I do this exercise every time I speak at a PMI chapter or project management conference.

I get up on stage and say, "Everybody who's a practicing project manager right now, raise your hand."

Hands go up across the room. Hundreds of them.

"Okay, keep them up. Keep them up," I say. "Now—who came out of college wanting to be a project manager?"

And I watch every hand in the room drop.

Except for three or four people. Maybe.

That right there? That's the Halo Effect.

We were all good at something else first. I started as a technical engineer. You probably started as a developer, analyst, designer, accountant, or subject matter expert in your domain.

We didn't set out to be project managers. We fell into it because someone looked at us one day and said, "You're really good at [X]. You should lead the team."

And here's the problem: Being really good at [X] doesn't automatically make you good at leading people who do [X].

But organizations make this assumption all the time. It's called the Halo Effect—the cognitive bias that says if you're good at one thing, we assume you'll be good at something else.

And it's killing project success rates.

What Is the Halo Effect (And Why It Matters for PMs)

The Halo Effect is simple: Just because you're excellent at something doesn't mean you'll be excellent at something else.

But we promote like it does.

We take our best engineer and make them a project manager. We take our best project manager and make them a PMO leader. We take our top salesperson and make them a sales manager.

And then we wonder why they struggle.

Here's why: These are completely different skill sets.

Let me show you.

Great Engineer vs. Great PM: The Skill Mismatch

What makes a great engineer:

  • Deep focus and technical mastery
  • Love of solving hard, complex problems
  • Delivering working code or product
  • Individual contribution and craftsmanship

What makes a great PM:

  • Broad coordination across stakeholders
  • Fluency in navigating ambiguity
  • Managing energy, decisions, and people dynamics
  • Influence without authority

See the disconnect?

A great engineer wants to solve the problem themselves. A great PM needs to orchestrate others to solve the problem.

So what happens? The newly promoted PM tries to apply engineering rigor to people problems. They delegate, get frustrated when it's not done "right," and then just do it themselves.

They burn out trying to do everyone else's job.

Great PM vs. Great PMO Leader: Another Mismatch

Here's another one I see all the time: We take our best project manager and promote them to PMO manager.

And they crash.

Why? Because these are also totally different skill sets.

I wrote a blog post years ago called "The Kindergartner's Guide to PMO," and the key insight was this:

A project manager has been taught to color inside the lines.

A PMO manager has to establish the lines that everyone else colors in.

One role is about controlling a project within constraints. The other is about negotiating ambiguity and setting organizational standards.

Totally. Different. Skills.

And here's where I see it go wrong: The newly promoted PMO manager thinks, "I was successful doing it this way, so everyone should do it this way."

But that's not leadership. That's replication.

When I ran a PMO, I was very intentional about not forcing my way onto my team. I'd say, "Here's what we need to accomplish. You figure out how you want to do it."

You need meeting notes? Great. Do it in Word, dictate it, put it in bullet points in an email—I don't care. As long as we have clear, actionable notes, you do it your way.

That's how you build a high-performing team. You set the standard, not the process.

My Uncomfortable Truth About Being a PMO Leader

Let me tell you a story I don't share often.

When I ran a PMO, I found out I'm not a very good HR manager. At least not in the corporate sense.

Here's what happened:

We were a brand new PMO. Our goal was to hit 60% of our milestone and production dates. Industry standard at the time was around 30%.

We hit 99% in our first year.

Everybody crushed it. Not just met expectations—exceeded them. Our sponsors were thrilled. Our teams were energized. It was one of those rare moments where everything just clicked.

So when it came time for annual evaluations, I gave everyone fives.

HR came to me and said, "You can't do that."

"Why not?" I asked. "That's what happened."

"Someone has to fall below your expectations," they said. "Most people can meet expectations. Maybe one person can exceed. But you can't give everyone fives."

I thought that was the dumbest thing I'd ever heard.

I pushed back. And then they said the line that killed me:

"If you don't do this, then the problem is your expectations."

I looked at them and said, "Or it's yours."

They didn't like that.

I said, "You want me to forcibly tell someone they're not doing their job—or just barely meeting it—when what we're actually doing is exceeding every possible expectation and building leaders?"

"I'm not doing that."

So no—I'm not a good HR manager in the corporate sense. But I am a good people leader. And I had an incredible HR partner in Carrie Blaise (still one of my best friends) who taught me something invaluable:

Preparation is everything.

Carrie would come into our monthly one-on-ones with a bullet-point list of things he was concerned about, things he was proud of, observations he'd made. The level of detail he could hold onto and bring into coaching conversations was phenomenal.

And I learned: If I want to coach well, I need to prepare well.

That's where AI comes in.

AI as Your Coaching Preparation Partner

Here's what most people miss about AI: It's not just for typing emails or generating status reports.

AI is a thinking partner for coaching.

I use AI now to help me prepare for coaching conversations. I ask it to:

  • Assess skill gaps objectively
  • Build development plans for struggling PMs
  • Surface blind spots I might be missing
  • Frame hard conversations with empathy

I even use it for my own work. I've got a new methodology I'm developing, and I've had AI:

  • Do deep research on competing frameworks
  • Ask me 30-40 clarifying questions to refine my thinking
  • Look for holes in my logic
  • Make sure I'm not accidentally copying something I read years ago

AI runs research in the background while I do the high-value work.

And that's what today's episode is about: Using AI to help you become a better coach, a better leader, and a more objective evaluator of talent.

Three Prompts to Fight the Halo Effect

I'm going to walk you through three prompts I use to assess PM talent, build development plans, and create competency frameworks.

I ran all three live in ChatGPT and Claude. And interestingly, ChatGPT followed my Socratic prompting instructions better this time, while Claude kept wanting to skip ahead.

Just goes to show—you've got to experiment with multiple tools. What works best changes depending on the task.


Prompt 1: PM Skills Gap Analysis

This is your non-negotiable experiment for this episode. I want you to run this on yourself—or on a PM on your team.

Here's what it does:

  • Identifies the 8-10 core PM competencies needed for a role
  • Assesses which competencies the PM likely excels at
  • Surfaces growth areas and skill gaps
  • Provides a 90-day development plan
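The gap analysis in Prompt 1 boils down to simple arithmetic: rate each competency against what the role demands, then rank the differences. Here's a minimal sketch of that math (every competency name and rating is a made-up example, not the prompt's actual output):

```python
# Hypothetical self-vs-target ratings on a 1-5 scale.
# (self-assessed level, level the role demands)
competencies = {
    "Scope and change management": (2, 5),
    "Stakeholder management": (3, 5),
    "Strategic communication": (4, 4),
    "Risk anticipation": (3, 4),
    "Team energy management": (2, 4),
}

def top_gaps(ratings, n=2):
    """Return the n competencies with the largest target-minus-self gap."""
    gaps = {name: target - self_rating
            for name, (self_rating, target) in ratings.items()}
    return sorted(gaps, key=gaps.get, reverse=True)[:n]

# The two biggest gaps become the focus of the 90-day plan.
print(top_gaps(competencies))
```

The point isn't the code; it's that "identify the top two gaps" is an objective ranking exercise, not a gut feel.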

What happened when I ran it:

ChatGPT asked me great clarifying questions:

  • What authority do you formally have as a PM? Can you approve scope changes or push back on the sponsor?
  • How is success measured for you on this project?
  • What's the sponsor's level of PM maturity?

Then it gave me competencies like:

  • Scope and change management
  • Stakeholder management
  • Strategic communication
  • Risk anticipation
  • Team energy management

And a 90-day plan broken into:

  • Month 1: Shadow a senior PM on stakeholder conversations, practice framing scope changes
  • Month 2: Lead a retrospective, facilitate a decision architecture session
  • Month 3: Present a project health diagnostic to leadership

That's actionable coaching.

Claude, on the other hand, skipped my questions and went straight to the assessment. It said:

"Given your background and the context I already have on this project, I have enough to give you a substantive assessment without asking you to repeat yourself."

Not what I wanted. I wanted it to ask me questions because that's where the thinking happens.

But you know what? Claude's assessment was still good. Just a reminder—these tools have personalities. Experiment and find what works.


Prompt 2: Halo Effect Audit

This one's for diagnosing whether your organization is suffering from Halo Effect bias in PM hiring and promotion.

What it surfaces:

  • What skills you tend to prioritize (hint: probably technical ones)
  • What critical PM skills you're overlooking
  • Evidence that you're promoting the wrong people
  • How to change your selection criteria
  • Interview questions that reveal actual PM competencies

What ChatGPT told me:

"Your pattern is clear. You prioritize deep product knowledge, technical credibility, system architecture understanding, and individual performance reputation. The implicit belief appears to be: The best engineer will become the best PM."

Ouch. But true.

It then said:

"In a 200-person software company, PM success depends heavily on: stakeholder boundary-setting, scope governance, executive communication, strategic framing, conflict navigation, cross-functional influence without authority, and change containment."

None of which have anything to do with being a great engineer.

Then it gave me interview questions that would actually reveal PM competencies:

  • "You need to tell your executive sponsor the project will be three weeks late. Write the first three sentences of that email right now."
  • "How do you prevent scope creep when priorities change weekly?"
  • "Explain this technical roadmap to a non-technical executive in two minutes."

These questions don't test technical knowledge. They test influence, framing, and communication under pressure.

That's what separates good PMs from great ones.


Prompt 3: PM Competency Framework Builder

This one's for PMO leaders or anyone building career paths for PMs.

What it creates:

  • 8-12 core PM competencies tailored to your org
  • Observable behaviors at "developing," "proficient," and "expert" levels
  • Which competencies matter most for junior vs. senior PMs
  • How to assess competencies in hiring/promotion
  • Development resources to support growth

What I loved from the output:

It gave me a competency matrix like this:

Strategic Alignment / Executive Communication

  • Developing: Shares updates focused on tasks, struggles to translate technical details
  • Proficient: Frames updates in terms of outcomes, timelines, and risks; pre-aligns executives before major decisions
  • Expert: Shapes priorities before execution begins; anticipates executive concerns and addresses them early; reduces rework caused by shifting direction

That's a promotion rubric right there.

If you're a new PMO manager and someone just asked you to define PM1, PM2, and PM3 levels—this prompt will give you 80% of the framework in 10 minutes.

Then you customize it for your org.
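A matrix like that translates naturally into data you can reuse in evaluations. As a rough sketch (the behavior text is abridged from the example above, and the score thresholds are my own assumption, not part of the framework):

```python
# One row of the competency matrix, encoded as data.
matrix = {
    "Strategic Alignment / Executive Communication": {
        "developing": "Shares task-focused updates",
        "proficient": "Frames updates as outcomes, timelines, and risks",
        "expert": "Shapes priorities before execution begins",
    },
}

def level_for(score, thresholds=(3, 5)):
    """Map a 1-5 assessment score to a level (thresholds are assumed)."""
    if score < thresholds[0]:
        return "developing"
    if score < thresholds[1]:
        return "proficient"
    return "expert"

print(level_for(4))  # a mid-range score lands at "proficient"
```

Encode all 8-12 competencies this way and you have a consistent rubric for PM1/PM2/PM3 conversations instead of vibes.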


Your Non-Negotiable Experiment This Week

Run the PM Skills Gap Analysis (Prompt 1) on yourself or a PM on your team.

Be honest. Where do you think you're strong? Where do you think you're weak?

Let AI ask you clarifying questions. Let it surface blind spots.

Then identify the top two gaps and one concrete action to address each.

Here's what I want you to notice:

  1. Were the gaps what you expected, or did AI surface something you weren't anticipating?
  2. Did it reveal any blind spots you'd been avoiding?
  3. How does having a structured 90-day development plan change the conversation?

Because here's the truth: Most PMs never get objective feedback. They get vague performance reviews and assumptions based on whether their last project shipped on time.

AI can give you the structured, evidence-based assessment most organizations never provide.


The Takeaway

The Halo Effect is real. And it's why so many great engineers fail as PMs, and why so many great PMs fail as PMO leaders.

Being good at one thing doesn't automatically make you good at another.

We need objective competency frameworks. We need to assess the actual skills that make PMs successful—not just assume technical mastery translates to people leadership.

And we need AI to help us do it objectively, without bias, without politics, and without flinching from the uncomfortable truth that maybe we've been promoting the wrong people for years.

So run that skills gap analysis. Be honest. Surface the blind spots.

Because the best leaders aren't the ones who were always great. They're the ones who were willing to see where they weren't—and did something about it.


Next time: People skills vs. domain expertise—which one actually matters for project managers? (Spoiler: It's not what you think.)

Want these prompts ready to copy/paste? See below!

If you would like to see the podcast live, check out this link: https://youtu.be/-9CvnXoPsFs

Now go find your blind spots. Your team is counting on you.

— Rick A. Morris


The Prompts (Copy/Paste Ready)

Prompt 1 - PM Skills Gap Analysis

You are a senior project management coach and assessor.

First, ask me 3–5 questions about the PM's background, current role, challenges they're facing, and what success looks like in their context.

Then provide an assessment answering:

  1. What are the 8–10 core PM competencies needed for this role?
  2. Based on what I've shared, which competencies does this PM likely excel at?
  3. Which competencies are likely gaps or growth areas?
  4. What specific behaviors or outcomes would indicate strength or weakness in each gap area?
  5. What is a 90-day development plan to address the top 2–3 gaps?

PM background: [Add your background]


Prompt 2 - Halo Effect Audit

You are an organizational psychologist specializing in team composition and role fit.

Ask me 2–3 questions about how PMs are selected and promoted in my organization.

Then help me analyze our hiring/promotion patterns by answering:

  1. What skills or background do we tend to prioritize when selecting PMs?
  2. What skills critical to PM success might we be overlooking?
  3. What evidence would indicate we're suffering from halo effect in PM selection?
  4. How should we change our PM selection criteria to reduce halo effect bias?
  5. What interview questions or assessments would reveal true PM competencies?

Organization context: [Add Organization Context]


Prompt 3 - PM Competency Framework

You are a PMO director building a competency framework.

First, ask me 3–4 questions about our project types, organizational culture, and what "great PM performance" looks like here.

Then create a PM competency framework answering:

  1. What are the 8–12 core PM competencies for our context?
  2. For each competency, what does "developing," "proficient," and "expert" look like in observable behaviors?
  3. Which competencies are most critical for new PMs vs. senior PMs?
  4. How should we assess these competencies in hiring and promotion decisions?
  5. What development resources (training, coaching, mentoring) support growth in each area?

Organizational context: [Add Organizational Context]
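All three prompts end with a bracketed placeholder. If you run them repeatedly (or wire them into a script that calls your AI tool of choice), a tiny helper keeps you from pasting a prompt with the placeholder still in it. This is just an illustrative sketch; the placeholder names match the prompt text above:

```python
def fill_prompt(template: str, context: str, placeholder: str) -> str:
    """Replace the [placeholder] slot in a prompt with real context."""
    slot = f"[{placeholder}]"
    if slot not in template:
        # Fail loudly rather than send a prompt with no context.
        raise ValueError(f"placeholder {slot!r} not found in template")
    return template.replace(slot, context)

# Hypothetical usage with Prompt 1's closing line.
prompt1 = "...PM background: [Add your background]"
filled = fill_prompt(prompt1, "12 years leading ERP rollouts", "Add your background")
print(filled)
```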


 

Thursday, March 12, 2026

AI Driven PM: S2E2 - What Do Project Managers Actually Do?

So AI can write your status reports now.

It can take your meeting minutes. Update your project schedule. Do resource forecasting. Some tools even claim they can manage your risks and predict your delays.

Cool.

So what's left for us?

I'll tell you what's left: Everything that actually matters.

Here's the uncomfortable truth most PMs don't want to hear: Right now, you're probably spending about 60% of your time on coordination and busy work. That means only 40% of your job is focused on creating real value.

You can't avert a risk you didn't see coming if you're busy documenting what happened last week.

You can't clear roadblocks if you're stuck in your third status meeting of the day.

You can't anticipate needs or generate new ideas if you're staring at a screen updating who said what in yesterday's standup.

So if AI can handle status updates, meeting notes, risk logs, timeline adjustments, and all that other stuff—what's our actual job?

Let's talk about it.

We Make Dreams Come True (But What Does That Really Mean?)

I've said it before: Project managers are dream translators. But what does that actually look like in practice?

Here are the four things PMs do that AI will never replace:

1. Sense Making

We translate chaos into clarity for stakeholders.

Think about it: You've got twenty moving pieces, five competing priorities, three sponsors who all want different things, and a team trying to build something nobody's fully defined yet.

Your job is to be the centralized storyteller.

You take an abstract idea and turn it into concrete requirements. You hear a technical problem and frame it as a business value problem so a sponsor can actually make a decision.

AI can summarize. But it can't make sense of the why behind the chaos.

2. Decision Architecture

We design how decisions get made and by whom.

I don't know if anyone's ever said it that way before, but think about how often people try to force project managers to make decisions. That's not what we do.

Nowhere in the PMBOK does it say the project manager makes the decision.

What we do is:

  • Analyze options
  • Present trade-offs
  • Drive the decision to closure
  • Enforce the decision that was made

We don't own the decision. We own the process that gets to the right decision at the right time with the right people in the room.

AI can suggest options. But it can't read the room when a sponsor is about to overrule their entire leadership team because of a personal bias.

3. Energy Management

We keep teams motivated and aligned on what matters.

This isn't scope control. It's energy regulation.

Have you ever walked into a room where people are proactive, enjoying each other, having fun, being creative? You can feel that momentum. That's positive energy driving innovation.

I've also walked into rooms where you literally can't walk in because it's so tense. The air is frigid. People are angry. You can feel the toxicity.

We regulate that energy. We create the conditions where innovation can thrive.

AI can tell you morale is low. But it can't facilitate the conversation that gets the team back on track.

4. Conflict Resolution

We navigate competing priorities and personalities.

I once worked on a project where the CIO, CTO, CFO, and CEO all came to me separately and told me not to listen to the other three—just them.

You can't automate your way out of that.

AI can highlight the conflict. But it can't build the trust or influence the relationships that resolve it.


So here's the mantra for this episode:

AI Handles the Transactional. PMs Handle the Transformational.

AI does the reports, tracking, transcripts, and data analysis.

PMs do the vision, relationships, judgment calls, and influence.

That's the augmentation model. And if you get that balance right, you're not just surviving the AI era—you're thriving in it.

Today's Focus: Use AI to Surface What Actually Needs Your Attention

If AI is going to free up 60% of your time, you need to spend that time on high-value work.

So today, I'm giving you three prompts designed to help you think like a strategist, not a secretary:

  1. Project Health Diagnostic (your non-negotiable experiment)
  2. Decision Architecture Mapper
  3. Energy Audit

I ran all three live in ChatGPT and Claude. And honestly? Claude surprised me with how good its coaching was in this round.

Let me show you.


Prompt 1: Project Health Diagnostic (The Non-Negotiable)

This is your weekly experiment. I want you to run this on a real project.

Here's the prompt:


You are an experienced PMO director reviewing project health.

First, ask me 3–5 clarifying questions about the project's current state, team dynamics, and stakeholder concerns.

Then, using my answers, provide a diagnostic report that answers:

  1. What are the top 3 risks that need PM attention right now?
  2. Which stakeholder relationships need strengthening and why?
  3. What decision is being avoided or delayed that's creating drag?
  4. Where is the team's energy or morale most vulnerable?
  5. What one action would have the highest impact on project momentum this week?

Current project context: [Enter Context Here]


What I Tested It On

I used our fictional Social Wishing app project (the bucket-list social network from Episode 1).

Here's the context I gave it:

  • Month 2 of development
  • 5 engineers, 2 designers, 1 QA
  • Sprint velocity dropped 20% in the last two sprints
  • Daily standups feel perfunctory
  • Sponsor keeps asking about launch date, but MVP scope isn't finalized
  • One senior engineer just gave notice
  • 4 months and $200K left in budget

Both ChatGPT and Claude asked clarifying questions:

ChatGPT asked:

  • Who owns MVP scope decisions?
  • Are stories entering sprint fully defined or being clarified mid-sprint?
  • Has the sponsor agreed on success criteria for launch?

Claude asked:

  • Is the MVP not finalized because of genuine disagreement or because no one's forcing the decision?
  • Do you know why the senior engineer is leaving?
  • When stand-ups feel perfunctory, what's actually happening?

Here's Where Claude Crushed It

ChatGPT gave me a solid diagnostic. Top risks: uncontrolled MVP expansion, architectural fragility, team disengagement. Good stuff.

But Claude gave me coaching.

Check out this section on stakeholder relationships:

"The sponsor—urgently, but carefully.

Core problem here is a shared reality gap. They're operating in a world where their feature ideas are refinements. You're operating in a world where those are scope additions with real cost.

This isn't malicious. Sponsors often genuinely don't see the downstream impact of small additions.

What you need is a direct, structured conversation that makes trade-offs visible: Here's what's in MVP. Here's what each addition costs in time and money. Here's what we must cut to accommodate it.

The goal isn't to win an argument. It's to get them into the same decision-making frame as the rest of the team."

That's the kind of coaching I'd give a junior PM.

It's not just identifying the problem. It's framing how to talk about it in a way that doesn't create defensiveness.

And then it ended with this:

"The good news is none of this is fatal yet. Month 2, budget intact, and a team that's disengaged but not yet departed is recoverable. The window is short, but it's open."

That's empathy. That's perspective. That's what a senior PMO director would say.

Both tools recommended the same highest-impact action: Run a 90-minute MVP reset session with the sponsor and core team.

And both gave me an agenda for it.


Prompt 2: Decision Architecture Mapper

This one's for when decisions aren't getting made—or they're getting made by the wrong people.

Here's the prompt:


You are a project governance consultant.

First, ask me 2–3 questions about how decisions are currently being made on this project.

Then help me design better decision architecture by answering:

  1. What categories of decisions exist on this project? (strategic, tactical, technical, resource, scope)
  2. For each category, who should make the decision, who should be consulted, and who just needs to be informed?
  3. What decision rights are currently unclear or creating bottlenecks?
  4. What decision-making process should I establish for the most critical category?

Project context: [Enter Context Here]


What Happened

I told it:

  • Sponsor requests features mid-sprint and goes directly to developers
  • Senior engineer (who just left) had final say on architecture
  • Sponsor is also acting as product owner

ChatGPT told me I had a "governance vacuum."

Claude built me a full RACI chart and told me:

"The critical shift here is that scope moves to you. With a fixed budget and no senior engineer buffer, ungoverned scope is an existential threat."

Then it gave me the script for the hardest conversation:

"This isn't designing the architecture. It's the single conversation where you tell the sponsor that scope authority is shifting to you."

Again—coaching, not just output.
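A RACI chart is, at bottom, a small table: decision categories down the side, roles across the top, one letter per cell. The "governance vacuum" ChatGPT diagnosed has a mechanical signature you can check for: a decision category with no single Accountable owner. Here's a sketch (the roles and assignments are hypothetical examples, not what either tool produced):

```python
# R=Responsible, A=Accountable, C=Consulted, I=Informed.
raci = {
    "scope":     {"PM": "A", "Sponsor": "C", "Eng lead": "C", "Team": "I"},
    "technical": {"PM": "I", "Sponsor": "I", "Eng lead": "A", "Team": "R"},
    "strategic": {"PM": "C", "Sponsor": "A", "Eng lead": "I", "Team": "I"},
    "resource":  {"PM": "R", "Sponsor": "C", "Eng lead": "C", "Team": "I"},
}

def governance_gaps(chart):
    """Return decision categories without exactly one Accountable owner."""
    return [
        decision for decision, roles in chart.items()
        if list(roles.values()).count("A") != 1
    ]

print(governance_gaps(raci))  # "resource" has no Accountable owner
```

If that function returns anything, you've found exactly where decisions are stalling or getting made by whoever shouts loudest.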


Prompt 3: Energy Audit

This one's for when something feels off with the team, but you can't quite articulate it.

Here's the prompt:


You are a team dynamics coach working with project managers.

Ask me 3–4 questions about team morale, motivation, and alignment with project goals.

Then provide an analysis answering:

  1. Where is team energy highest right now? (What's motivating them?)
  2. Where is energy lowest? (What's draining them?)
  3. What misalignment exists between team priorities and leadership priorities?
  4. What story or narrative could I reinforce to re-energize and re-align the team?
  5. What one conversation should I have with the team this week?

Team context: [Enter Context Here]


What I Learned

Claude asked me:

  • "When the team talks about Social Wishing, what specifically lights them up?"
  • "How did the team react to the senior engineer's resignation?"
  • "Outside of stand-ups, where's the real conversation actually happening?"

That last question is phenomenal. Is it Slack? Hallway conversations? Lunch groups?

Because if the real conversation isn't happening in your ceremonies, you've got a trust problem.

Claude's narrative recommendation:

"We're two months in, and we just learned something expensive but valuable. The project needed better architecture before it could move. We're not behind. We're resetting with clear rules. People still here chose to stay. Now we're going to build the thing we actually believe in—with scope we can defend—and ship something we're proud of."

I would use that exact language in a team meeting.


Your Non-Negotiable Experiment This Week

Run the Project Health Diagnostic on one current project.

Use transcripts from your last few team meetings. Use emails. Use status reports. Give it real context.

Then act on at least one insight it surfaces.

Here's what I want you to notice:

  1. Did AI surface something you were avoiding or hadn't articulated yet?
  2. How much time did you save by having AI organize your thinking?
  3. How much faster could you move if you had this clarity every week?

None of this is about status reports or meeting minutes.

This is about surfacing what actually needs your attention so you can spend your time on the transformational work that only you can do.


The Takeaway

Project managers aren't project secretaries.

We're sense makers. Decision architects. Energy managers. Conflict navigators.

AI should handle the busy work so we can focus on the stuff that actually moves the needle.

So stop spending 60% of your week looking backwards.

Start spending 80% of your week looking forwards.

Because that's where dreams come true.


If you would like to see the podcast live, check out this link: https://youtu.be/3sE6ynNyjIg

Next time: We're talking about the Halo Effect—why we keep promoting the wrong people into project management—and how AI can help you become the transformational leader your team actually needs.

Now go run that diagnostic. Your project is waiting.