All examples for Stakeholder Priority Analyzer

Redirecting Solution Request to Problem Discovery

VP Engineering proposes an AI chatbot, citing competitor parity without explaining the user need. Demonstrates underlying need analysis with hypothesis generation, discovery questions to uncover the real problem (likely onboarding friction), alternative solutions for different scenarios, and a conversation guide for redirecting from a premature solution to productive problem exploration.

Input

What you provide to the skill

VP Engineering wants to build AI chatbot (competitor has one). Won’t explain why users need it. I suspect they need better onboarding.

Output

What the skill generates for you

UNDERLYING NEED ANALYSIS

Hypothesis: The Real Problem

Stated request: AI chatbot (because competitor has one)

Likely underlying problem: Users are struggling to get value from the product quickly. The VP Engineering may have observed users failing to accomplish key tasks, seen support tickets about basic functionality, or noticed poor activation metrics. The AI chatbot is their proposed solution to help users—but the root issue appears to be onboarding friction.

Evidence supporting hypothesis:

  • “Won’t explain why users need it” — suggests no clear user research or problem statement
  • “Competitor has one” — competitive reaction rather than problem-driven request
  • Your suspicion about onboarding — you likely have context (support tickets, churn data, user feedback) pointing to this

Red flags in the request:

  • No user problem articulated
  • Justification is “competitor has one” (competitive parity ≠ user need)
  • An AI chatbot is a heavy engineering investment (months) for unclear benefit

Discovery Questions to Ask

These questions will uncover the real need:

  1. “What user problem would the AI chatbot solve?”

    • Goal: Force articulation of user need
    • Reveals: Whether there’s a real problem or just feature envy
  2. “What are users struggling with today that the chatbot would help with?”

    • Goal: Get specific pain points
    • Reveals: If it’s onboarding, support burden, feature discovery, or something else
  3. “How would we measure success if we built this?”

    • Goal: Define what “working” looks like
    • Reveals: Activation? Retention? Support ticket reduction? If they can’t answer, the problem isn’t clear
  4. “What have you seen from users that suggests they need this?”

    • Goal: Ground the request in evidence
    • Reveals: Data vs. assumption, and what the actual user signal is
  5. “What happens to users today when they get stuck?”

    • Goal: Understand current failure mode
    • Reveals: The actual friction point in the user journey
  6. “If we couldn’t build an AI chatbot, what else might solve this problem?”

    • Goal: Separate solution from problem
    • Reveals: Whether they’re attached to the solution or open to alternatives

Draft Response to VP Engineering

Tone: Collaborative problem-solving, technically credible, not dismissive

Approach: Redirect to problem discovery while showing you take the request seriously


Via Slack/Email:

Hey [VP Engineering],

I want to make sure we scope this AI chatbot right. Before we dig into implementation, I’d like to understand the user problem we’re solving so we build something that actually moves metrics.

A few questions:

  1. What specific user struggles would the chatbot address? (Getting started? Finding features? Troubleshooting issues?)
  2. What signals have you seen that suggest users need this? (Support tickets? Churn feedback? User interviews?)
  3. How would we measure success—activation rate improvement? Support ticket reduction? Something else?

Why I’m asking: An AI chatbot is a significant investment (3-6 months depending on scope). If the underlying problem is user onboarding or activation, there may be faster solutions that move the same metrics in weeks rather than months. I want to make sure we’re solving the right problem, not just matching a competitor feature.

Can we grab 30 min to walk through the user journey and identify where people are getting stuck? I’ll come with some data on support tickets and activation patterns.


Potential Solutions Based on Discovery

If underlying need is: “Users don’t know how to get started”

BETTER SOLUTION: Improved onboarding flow

Why it’s better:

  • Proactive guidance vs. reactive assistance
  • Users complete key actions instead of asking a chatbot how to do them
  • Faster to build (2-4 weeks vs. 3-6 months)
  • Measurable activation improvement
  • No ongoing AI costs or hallucination risks

Specific tactics:

  • Interactive product tour highlighting key features
  • Progressive disclosure (show features as users need them)
  • Checklist-style onboarding (“Complete 3 steps to get started”; sketched below)
  • Contextual tooltips at confusion points
  • Email drip sequence with tutorials

Effort: 2-4 weeks

Value vs. AI chatbot: 5-10x faster, directly addresses activation

Your response: “Based on what we’re seeing, users are getting lost before they ask for help. An improved onboarding flow would guide them to value proactively. We could have this live in 3 weeks and measure activation improvement immediately. The chatbot could be a Phase 2 addition if we still see users needing help after onboarding.”
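For concreteness, here is a minimal sketch of the checklist-style onboarding tactic above. It is illustrative only; the step ids are hypothetical placeholders, not features of any real product.

```typescript
// Minimal checklist-style onboarding tracker. The step ids below
// ("create_project", "invite_teammate", "run_first_report") are
// hypothetical examples.
type StepId = "create_project" | "invite_teammate" | "run_first_report";

const ALL_STEPS: StepId[] = [
  "create_project",
  "invite_teammate",
  "run_first_report",
];

interface OnboardingState {
  completed: Set<StepId>;
}

// Record a completed step without mutating the previous state.
function markComplete(state: OnboardingState, step: StepId): OnboardingState {
  return { completed: new Set(state.completed).add(step) };
}

// Fraction of steps done, for a "Complete 3 steps" progress bar.
function progress(state: OnboardingState): number {
  return state.completed.size / ALL_STEPS.length;
}

// First incomplete step, to highlight next in the UI.
function nextStep(state: OnboardingState): StepId | undefined {
  return ALL_STEPS.find((s) => !state.completed.has(s));
}

let state: OnboardingState = { completed: new Set() };
state = markComplete(state, "create_project");
console.log(nextStep(state)); // "invite_teammate"
console.log(`${Math.round(progress(state) * 100)}% done`); // "33% done"
```

Keeping the steps in one ordered array means the progress bar and the “what’s next” highlight both derive from a single piece of state.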


If underlying need is: “Users can’t find features they need”

BETTER SOLUTION: Command palette / universal search + feature discovery

Why it’s better:

  • Instant results vs. chatbot conversation
  • No AI hallucination risk (returns actual features)
  • Works for power users who know what they want
  • Pairs with progressive feature hints for new users

Effort: 2-3 weeks

Your response: “If users struggle to find features, a command palette gives instant access to anything in the product—no conversation needed. We can ship this in 2-3 weeks and see if it reduces the ‘where is X’ questions. If users still need more help, we add the chatbot layer on top.”
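To make the command-palette approach concrete, here is a minimal sketch of the matching logic, assuming a hypothetical feature registry. A production palette would add ranking and keyboard navigation; the point is that it only ever returns real features.

```typescript
// Minimal command-palette matcher. The feature registry below is a
// hypothetical example, not a real product's feature list.
interface Feature {
  name: string;
  route: string; // where selecting the result takes the user
}

const FEATURES: Feature[] = [
  { name: "Export to CSV", route: "/reports/export" },
  { name: "Invite teammate", route: "/settings/members" },
  { name: "API keys", route: "/settings/api" },
];

// Case-insensitive subsequence match: every query character must appear
// in order in the candidate, so "exp csv" matches "Export to CSV".
function matches(query: string, candidate: string): boolean {
  const q = query.toLowerCase().replace(/\s+/g, "");
  const c = candidate.toLowerCase();
  let i = 0;
  for (const ch of c) {
    if (ch === q[i]) i++;
  }
  return i === q.length;
}

function search(query: string): Feature[] {
  if (!query.trim()) return [];
  return FEATURES.filter((f) => matches(query, f.name));
}

console.log(search("exp csv").map((f) => f.route)); // ["/reports/export"]
```

Because results come from a fixed registry, there is nothing to hallucinate: the palette can only surface features that actually exist.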


If underlying need is: “Support team is overwhelmed”

BETTER SOLUTION: Self-service help center + contextual documentation

Why it’s better:

  • Users get answers without waiting for an AI response
  • No hallucination risk (curated content)
  • Reduces support burden immediately
  • Creates searchable knowledge base

Specific tactics:

  • Embedded help docs at key screens
  • FAQ based on top support tickets
  • Video walkthroughs for complex workflows
  • “?” icon that surfaces relevant docs for current screen

Effort: 2-3 weeks initial, ongoing content investment

Your response: “If the goal is reducing support burden, a contextual help system surfaces the right documentation exactly where users need it. We could analyze top support tickets, create targeted content, and reduce those tickets by 30-50% in a month. The chatbot becomes more valuable once we have quality help content for it to draw from.”
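As a sketch of the “?” icon behavior, here is one way to map the current screen to curated articles. Routes, titles, and URLs are hypothetical placeholders.

```typescript
// Minimal contextual-help lookup: curated articles per screen, with a
// generic fallback. All routes and slugs below are hypothetical.
interface HelpArticle {
  title: string;
  url: string;
}

const HELP_BY_ROUTE: Record<string, HelpArticle[]> = {
  "/reports": [
    { title: "Building your first report", url: "/help/first-report" },
    { title: "Exporting data", url: "/help/export" },
  ],
  "/settings/members": [
    { title: "Inviting teammates", url: "/help/invites" },
  ],
};

// Called when the user clicks the "?" icon on a given screen.
function helpFor(route: string): HelpArticle[] {
  return HELP_BY_ROUTE[route] ?? [{ title: "Help center", url: "/help" }];
}

console.log(helpFor("/reports").map((a) => a.title));
// ["Building your first report", "Exporting data"]
```

Because the mapping is hand-curated from real support tickets, the content stays accurate, and the same articles later become grounding material for a chatbot.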


If AI chatbot is genuinely the right solution:

SCOPE IT DOWN: Build a focused chatbot for a specific use case

When to choose this:

  • Discovery confirms users need conversational help
  • Use case is specific (e.g., “help me configure X”)
  • Other solutions were tried and didn’t fully solve it

Scoped approach:

  • Phase 1: Chatbot for onboarding questions only (1-2 months)
  • Use existing help docs as the knowledge base (reduces hallucination)
  • Measure: activation rate, support tickets, user satisfaction
  • Phase 2: Expand scope based on Phase 1 results

Your response: “Based on discovery, users need conversational help during onboarding. Let’s scope the chatbot to that specific use case first—onboarding questions only, grounded in our help docs. We can ship this in 6-8 weeks, measure impact, and expand if it works.”
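To illustrate the “grounded in our help docs” part, here is a minimal sketch of the retrieval step. The scoring is naive keyword overlap purely for illustration (a real system would likely use embedding search), and the docs and prompt wording are hypothetical.

```typescript
// Sketch of docs-grounded answering: retrieve the most relevant help
// article and instruct the model to answer only from it. Docs below
// are hypothetical placeholders.
interface Doc {
  title: string;
  body: string;
}

const HELP_DOCS: Doc[] = [
  { title: "Getting started", body: "Create a project, then invite your team." },
  { title: "Configuring exports", body: "Open Reports, then choose Export to CSV." },
];

function tokenize(text: string): Set<string> {
  return new Set(text.toLowerCase().match(/[a-z]+/g) ?? []);
}

// Naive relevance score: count of question words that appear in the doc.
function score(question: string, doc: Doc): number {
  const q = tokenize(question);
  const d = tokenize(doc.title + " " + doc.body);
  let overlap = 0;
  for (const word of q) {
    if (d.has(word)) overlap++;
  }
  return overlap;
}

function retrieve(question: string): Doc {
  return HELP_DOCS.reduce((best, doc) =>
    score(question, doc) > score(question, best) ? doc : best
  );
}

// Constraining the model to the retrieved article is what limits
// hallucination in the scoped Phase 1 chatbot.
function buildPrompt(question: string): string {
  const doc = retrieve(question);
  return `Answer using only this article.\n\n${doc.title}\n${doc.body}\n\nQ: ${question}`;
}

console.log(buildPrompt("How do I export a CSV?")); // grounded in "Configuring exports"
```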


Pushback Response Templates

“But the competitor has one!”

Response: “I hear you on competitive parity. Let’s make sure we build something that actually outperforms their chatbot rather than just matching it. What do their users say about it in reviews? If we understand what works and doesn’t work about theirs, we can build something better. And if their users are complaining about basic onboarding—which is common—we might leapfrog them by solving that first.”


“We need to show AI capabilities to the board/investors”

Response: “Totally understand the optics concern. A few thoughts: (1) An AI chatbot that doesn’t improve metrics is worse optics than no chatbot. (2) We could potentially add AI in a more targeted way—like AI-powered search or AI-generated help content—that’s faster to ship and easier to demonstrate value. (3) If we improve activation with better onboarding first, we have a better story: ‘We identified the real problem, solved it, and now we’re adding AI to make it even better.’”


“This will only take a few months”

Response: “A few months is accurate for a basic chatbot. But a chatbot that actually helps users—with a good knowledge base, low hallucination risk, and proper guardrails—is closer to 4-6 months to do well. I want to make sure we’re comparing apples to apples: a basic chatbot in 2 months vs. improved onboarding in 3 weeks that might solve the underlying problem. What if we try the faster solution first and add the chatbot if we still need it?”


“I just want the chatbot, stop questioning my requests”

Response: “I respect your decision and I’ll execute on it. Before I do, I want to share my concern: without a clear user problem, we risk building something that doesn’t move metrics. I’d hate to spend 4 months on this and then find out the real issue was onboarding. Can we at least define success metrics upfront so we know if it’s working? If activation rate or support tickets don’t improve after launch, what would we do differently?”


Recommended Next Steps

  1. Schedule discovery conversation (30 min)

    • Walk through user journey together
    • Look at support ticket themes
    • Review activation/retention data
    • Identify where users actually struggle
  2. Analyze existing data (before meeting)

    • Top 10 support ticket categories
    • Activation funnel drop-off points (see the sketch after this list)
    • Churn survey responses mentioning confusion/difficulty
    • Session recordings of new user struggles
  3. Propose phased approach (in meeting)

    • Phase 1: Quick wins for onboarding (2-4 weeks)
    • Measure impact on activation
    • Phase 2: AI chatbot if still needed (based on Phase 1 learnings)
  4. Define success metrics together

    • What does “solved” look like?
    • Activation rate improvement target?
    • Support ticket reduction target?
    • User satisfaction score?
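To make step 2’s funnel analysis and step 4’s targets concrete, here is a minimal sketch. The stage names and counts are hypothetical, not real data.

```typescript
// Activation-funnel drop-off: find where users get stuck and compute
// the baseline activation rate. All numbers below are hypothetical.
interface Stage {
  name: string;
  users: number; // users who reached this stage in the period
}

const FUNNEL: Stage[] = [
  { name: "Signed up", users: 1000 },
  { name: "Created first project", users: 620 },
  { name: "Invited teammate", users: 310 },
  { name: "Reached activation", users: 240 },
];

// The biggest drop between consecutive stages is the first candidate
// for onboarding work.
for (let i = 1; i < FUNNEL.length; i++) {
  const prev = FUNNEL[i - 1];
  const curr = FUNNEL[i];
  const dropPct = (1 - curr.users / prev.users) * 100;
  console.log(`${prev.name} -> ${curr.name}: ${dropPct.toFixed(1)}% drop`);
}

// Baseline activation rate: the number a success target is set against.
const activation = FUNNEL[FUNNEL.length - 1].users / FUNNEL[0].users;
console.log(`Activation rate: ${(activation * 100).toFixed(1)}%`); // 24.0%
```

With a baseline in hand, the “activation rate improvement target” in step 4 becomes a specific number (e.g., moving a hypothetical 24% toward 30%) rather than a vague goal.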

Key Principles for This Conversation

Don’t position it as “your idea is wrong”:
The VP Engineering cares about users and product success. Honor that intent while redirecting to problem discovery.

Use data to ground the conversation:
Support tickets, activation metrics, and user feedback are objective. “The data shows users struggle with X” is more effective than “I think the problem is Y.”

Make it collaborative:
“Let’s figure out the right solution together” beats “let me tell you why you’re wrong.”

Offer your expertise:
You have a hypothesis (onboarding). Share it as input, not gospel: “I suspect the underlying issue might be onboarding—here’s why. But I could be wrong. Let’s look at the data together.”

Leave room to be wrong:
Maybe the AI chatbot IS the right answer. Discovery will tell you. Go in curious, not with your mind made up.