All examples for PRD Completeness Validator

Partial PRD with Weak Sections Identified

A more detailed PRD that covers 8 sections but has 2 weak areas (Stakeholders and Success Metrics lack specificity). Demonstrates the nuanced Complete/Weak/Missing scoring.

Input

What you provide to the skill

## Problem
40% more support tickets.

## Users
Free tier, Pro tier.

## Solution
AI search.

## Stories
Find articles, filter.

## Metrics
50% adoption in 30 days.

## Tech
ES cluster, APIs.

## Timeline
4 weeks.

## Stakeholders
Product, Engineering, Design.

Output

What the skill generates for you

PRD Completeness Analysis

Overall Score: 7/12 weighted (58.3%)
Breakdown: 6 complete, 2 weak, 4 missing
Status: INCOMPLETE - Requires significant additions before stakeholder review
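
The overall score is simple arithmetic over the per-section states. A minimal sketch that reproduces the numbers above, assuming weights of 1.0 per complete section, 0.5 per weak section, and 0 per missing section (the skill's exact weighting is an assumption here):

```python
# Assumed weights that reproduce the report's numbers.
WEIGHTS = {"complete": 1.0, "weak": 0.5, "missing": 0.0}
counts = {"complete": 6, "weak": 2, "missing": 4}

score = sum(WEIGHTS[s] * n for s, n in counts.items())  # 6*1.0 + 2*0.5 = 7.0
total = sum(counts.values())                            # 12 sections checked

print(f"Overall Score: {score:g}/{total} weighted ({score / total:.1%})")
# Overall Score: 7/12 weighted (58.3%)
```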

Complete Sections (6)

  1. Problem Statement - Clear pain point with quantified support ticket increase
  2. Target Users - Two specific user segments identified (free tier new users, pro tier power users)
  3. Proposed Solution - Specific implementation approach (AI-powered search with relevance ranking)
  4. User Stories - Two user capabilities defined (find articles, filter)
  5. Technical Requirements - Technology stack specified (ES cluster, API endpoints, real-time sync)
  6. Timeline - Launch target defined (4 weeks)

Weak Sections (2)

Stakeholders - WEAK

Current State: “Product, Engineering, Design” mentioned without specifics
Issue: No names, owners, or roles defined - unclear who approves, who builds, who’s accountable
How to Strengthen: Add specific names and RACI clarity

Template:

## Stakeholders
**Decision Maker**: [Name] (Product Lead)
**Accountable**: [Name] (PM owning feature)
**Engineering Owner**: [Name] (Backend), [Name] (Frontend)
**Design Owner**: [Name] (UX/UI)
**Consulted**: [Support team, Analytics team]
**Informed**: [Leadership, Customer Success]

Success Metrics - WEAK

Current State: “50% adoption in 30 days” mentioned
Issue: Missing baseline, unclear what “adoption” means (% of what?), no secondary metrics
How to Strengthen: Define adoption precisely, add baselines and secondary metrics

Template:

## Success Metrics
**Primary Metric**: Search adoption rate
- Baseline: 0% (no search exists)
- Target: 50% of monthly active users perform at least 1 search within 30 days
- Measurement: Analytics event tracking
**Secondary Metrics**:
1. Search Success Rate: 70% of searches result in a click within the top 5 results
2. Support Ticket Reduction: Decrease navigation-related tickets by 40%
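
With "adoption" pinned down this precisely, measurement reduces to a ratio over analytics events. A sketch under an assumed event shape (the identifiers and event names are illustrative, not from the PRD or the skill's output):

```python
from datetime import date, timedelta

def adoption_rate(events: list[dict], launch: date, window_days: int = 30) -> float:
    """Fraction of active users in the window who ran at least one search.

    `events` is a hypothetical analytics export; each event is assumed
    to look like {"user_id": str, "type": str, "ts": date}.
    """
    cutoff = launch + timedelta(days=window_days)
    window = [e for e in events if launch <= e["ts"] < cutoff]
    active = {e["user_id"] for e in window}
    searched = {e["user_id"] for e in window if e["type"] == "search"}
    return len(searched) / len(active) if active else 0.0

# Target from the template: adoption_rate(events, launch) >= 0.50
```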

Missing Sections (4)

1. Edge Cases & Error Handling - MISSING

Issue: No consideration of failure scenarios
Impact: Engineering will make assumptions about error states
Questions to Answer (see the sketch after this list):

  • What happens when search returns zero results?
  • What if the ES cluster is down or slow?
  • How to handle malformed queries?
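
For illustration, one way answers to these questions could translate into handling logic; `client` and the messages are hypothetical stand-ins, not the actual ES client API or anything specified in the PRD:

```python
def search_with_fallback(client, raw_query: str) -> dict:
    """Illustrative handling for the three edge cases listed above."""
    query = raw_query.strip()
    if not query:
        # Malformed/empty query: prompt the user instead of querying.
        return {"results": [], "message": "Enter a search term."}
    try:
        results = client.search(query, timeout=2.0)
    except TimeoutError:
        # Cluster slow: fail fast rather than hang the UI.
        return {"results": [], "message": "Search timed out; please retry."}
    except ConnectionError:
        # Cluster down: degrade to browsing instead of erroring out.
        return {"results": [], "message": "Search is unavailable; browse categories instead."}
    if not results:
        # Zero results: say so explicitly and suggest a recovery path.
        return {"results": [], "message": "No matches; try broader terms."}
    return {"results": results}
```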

2. Dependencies - MISSING

Issue: Technical requirements mention an ES cluster, but there is no dependency tracking
Impact: Launch delays if dependencies aren’t ready

3. Risks & Mitigation - MISSING

Issue: No risk assessment performed
Impact: Surprises during development, no contingency plans

4. Open Questions - MISSING

Issue: No tracking of unresolved decisions
Impact: Implicit decisions made without stakeholder alignment

Recommendations

Before Stakeholder Review (Critical):

  1. Add Edge Cases & Error Handling (45 min) - Prevents engineering assumptions
  2. Strengthen Success Metrics (30 min) - Define baselines, clarify “adoption”
  3. Add Risks & Mitigation (30 min) - Shows thorough planning

Before Sprint Planning:

  4. Document Dependencies (30 min)
  5. Add Open Questions (20 min)
  6. Expand Stakeholders section (15 min)

Estimated Time: 2.5-3 hours
Benefit: Avoid 2-3 review rounds (save 5-8 hours)