All examples for PRD Completeness Validator

Minimal PRD Missing Critical Sections

A bare-bones PRD with only Problem, Solution, and User Stories. Shows how the validator identifies 9 missing sections and provides prioritized templates to reach review-ready status.

Input

What you provide to the skill

## Problem
Users struggle to find content.

## Solution
Add search bar.

## User Stories
As a user, I want to search so I can find content quickly.

Output

What the skill generates for you

PRD Completeness Analysis

Overall Score: 3/12 weighted (25%)
Breakdown: 3 complete, 0 weak, 9 missing
Status: INCOMPLETE - Requires significant additions before stakeholder review
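The weighted score above appears to follow a simple rubric. A minimal sketch, assuming each of the 12 standard sections carries equal weight and complete = 1.0, weak = 0.5, missing = 0.0 (the skill's actual weights are not stated, and the section names below are taken from this example):

```python
def completeness_score(sections):
    """Score a PRD: each section is 'complete', 'weak', or 'missing'.

    Assumed rubric: complete=1.0, weak=0.5, missing=0.0, equal weights.
    """
    weights = {"complete": 1.0, "weak": 0.5, "missing": 0.0}
    earned = sum(weights[status] for status in sections.values())
    total = len(sections)
    return earned, total, round(100 * earned / total)

# The minimal PRD from this example: 3 complete, 9 missing.
sections = {name: "complete" for name in
            ["Problem Statement", "Proposed Solution", "User Stories"]}
sections.update({name: "missing" for name in
                 ["Target Users", "Success Metrics", "Technical Requirements",
                  "Edge Cases", "Dependencies", "Timeline", "Risks",
                  "Stakeholders", "Open Questions"]})

earned, total, pct = completeness_score(sections)
# earned=3.0, total=12, pct=25 -> "3/12 weighted (25%)"
```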

Complete Sections (3)

  1. Problem Statement - Clear user pain identified
  2. Proposed Solution - Basic solution described (search bar)
  3. User Stories - One user story provided with intent

Missing Sections (9)

1. Target Users / Personas - MISSING

Issue: “Users” is too generic - no specific personas or segments defined
Impact: Can’t prioritize features for different user types, design won’t be targeted
Questions to Answer:

  • Who are the primary user segments? (new users, power users, content creators, admins?)
  • What are their search behaviors and expectations?
  • Do different personas need different search experiences?
  • What’s their technical proficiency?

Template:

## Target Users
**Primary Persona**: [Name/Role]
- Search frequency: [X times per day/week]
- Search context: [When and why they search]
- Technical proficiency: [Beginner/Intermediate/Advanced]
- Key needs: [What they're looking for]
**Secondary Persona**: [Name/Role]
- [Same structure]
**Out of Scope Users**: [Who this isn't designed for]

2. Success Metrics - MISSING

Issue: No measurable success criteria defined
Impact: Can’t determine if feature succeeded post-launch, no data-driven iteration
Questions to Answer:

  • What’s the primary metric? (search usage, task completion rate, time saved?)
  • What are current baselines for content findability?
  • What targets indicate success?

Template:

## Success Metrics
**Primary Metric**: Search usage rate
- Current Baseline: 0% (no search exists)
- Target: [X%] of weekly active users use search within 30 days
- Measurement: Analytics event tracking
**Secondary Metrics**:
1. Search Success Rate: [X%] of searches result in click within top 5
2. Time to Find Content: Reduce from [X min] to [Y sec]
3. Zero-Result Rate: Less than [X%] of searches return no results
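The "Measurement: Analytics event tracking" line can be made concrete. A minimal sketch of computing two of the secondary metrics from search events, assuming a hypothetical event schema (`query`, `results`, `clicked_rank`) that is illustrative only, not part of any real analytics API:

```python
# Hypothetical search events; field names are assumptions for illustration.
events = [
    {"query": "onboarding guide", "results": 14, "clicked_rank": 2},
    {"query": "api keys", "results": 6, "clicked_rank": None},  # no click
    {"query": "zzzxq", "results": 0, "clicked_rank": None},     # zero results
    {"query": "billing", "results": 9, "clicked_rank": 1},
]

total = len(events)

# Search Success Rate: searches that led to a click within the top 5 results.
successes = sum(1 for e in events
                if e["clicked_rank"] is not None and e["clicked_rank"] <= 5)

# Zero-Result Rate: searches that returned no results at all.
zero_results = sum(1 for e in events if e["results"] == 0)

success_rate = round(100 * successes / total)     # 50
zero_result_rate = round(100 * zero_results / total)  # 25
```

In practice these would be computed over a reporting window (e.g., weekly) in the team's analytics tool rather than in application code; the point is that each metric in the template maps to a countable event property.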

3. Technical Requirements - MISSING

Issue: No technical specifications for implementation
Impact: Engineering can’t estimate or architect solution

4. Edge Cases & Error Handling - MISSING

Issue: No consideration of failure scenarios
Impact: Engineering will make assumptions, poor UX in error states

5. Dependencies - MISSING

Issue: No identification of external dependencies
Impact: Launch delays if dependencies aren’t ready

6. Timeline / Milestones - MISSING

Issue: No project timeline or milestone breakdown
Impact: Can’t plan resources, unclear launch date

7. Risks & Mitigation - MISSING

Issue: No risk assessment
Impact: Surprises during development, no contingency plans

8. Stakeholders - MISSING

Issue: No identification of stakeholders and roles
Impact: Unclear ownership, missing feedback

9. Open Questions - MISSING

Issue: No log of unresolved questions
Impact: Decisions made implicitly

Recommendations

Before Stakeholder Review (Critical):

  1. Add Success Metrics (30 min) - CRITICAL for proving feature value
  2. Document Edge Cases & Error Handling (45 min) - Prevents engineering surprises
  3. Add Technical Requirements (45 min) - Necessary for estimation
  4. Add Risks & Mitigation (30 min) - Shows thorough planning

Before Sprint Planning:

  5. Create Timeline with Milestones (30 min)
  6. Document Dependencies (20 min)
  7. Define Target Users (30 min)
  8. Identify Stakeholders (15 min)
  9. List Open Questions (15 min)

Estimated Time: 4-5 hours total
Benefit: Avoid 3-4 review rounds (save 6-10 hours)