All examples for Prioritization Framework Facilitator

Small Team ICE Prioritization

Demonstrates ICE framework selection for a small team with limited customer data, scoring 6 features with evidence-based calibration and generating a tiered roadmap with stakeholder communication.

Input

What you provide to the skill

Prioritize 6 features for Q3. Team: 3 engineers. 100 customers. Features: user profiles, dark mode, API, search, mobile app, support docs.

Output

What the skill generates for you

Q3 Feature Prioritization Report

Framework: ICE (Impact, Confidence, Ease)
Date: 2025-12-04
Team: 3 engineers, Q3 planning window (3 months)
Customer Base: 100 customers


Framework Confirmation

ICE is optimal for your situation:
✓ Small team (3 engineers = ~9 person-months capacity in Q3)
✓ Small customer base (100 customers - limited reach data)
✓ Quarterly planning horizon
✓ Need to balance quick wins with strategic features

ICE Formula:

  • Impact (1-10): How much will this move key metrics?
  • Confidence (1-10): How confident are we in the impact estimate?
  • Ease (1-10): How easy is this to build?
  • ICE Score = (Impact × Confidence × Ease) / 100
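
The formula is easy to verify mechanically. Below is a minimal Python sketch using the ratings from this report; the data layout and function name are illustrative, not part of the skill:

```python
# Minimal ICE scoring sketch. Feature names and 1-10 ratings are the
# ones used in this report; the code itself is illustrative.

features = {
    # feature: (impact, confidence, ease)
    "Support Docs":  (7, 9, 9),
    "User Profiles": (6, 7, 7),
    "Search":        (8, 7, 5),
    "Dark Mode":     (3, 8, 8),
    "API":           (9, 6, 3),
    "Mobile App":    (7, 5, 2),
}

def ice_score(impact: int, confidence: int, ease: int) -> float:
    """ICE Score = (Impact x Confidence x Ease) / 100."""
    return impact * confidence * ease / 100

for name, (i, c, e) in sorted(
    features.items(), key=lambda kv: ice_score(*kv[1]), reverse=True
):
    print(f"{name:13s} I={i} C={c} E={e} -> {ice_score(i, c, e):.2f}")
# Support Docs 5.67, User Profiles 2.94, Search 2.80,
# Dark Mode 1.92, API 1.62, Mobile App 0.70
```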

Scoring Session Summary

Scored all 6 features with evidence-based calibration:

Key Calibrations:

  • API Impact: Rated 9/10 (very high) despite complexity - enables integrations and future growth
  • Mobile App Ease: Rated 2/10 (very hard) - requires separate platform, ongoing maintenance
  • Support Docs Impact: Rated 7/10 - reduces support burden and improves customer success
  • Search Ease: Calibrated to 5/10 - depends on data volume and feature scope

Bias Detection Report

No Major Bias Detected

  • Healthy score distribution across features
  • Effort estimates realistic for team size
  • Impact ratings align with business value

⚠️ Minor Calibration: Mobile App

  • Initial confidence may be optimistic given the small customer base (100 customers)
  • Reduced to 5/10 to reflect adoption uncertainty and platform maintenance overhead

Final Prioritized List

TIER 1: MUST DO (Q3 Priorities)

1. Support Docs

  • ICE Score: 5.67
  • Impact: 7 | Confidence: 9 | Ease: 9
  • Evidence: Reduces support load, improves onboarding success
  • Rationale: Quick win with proven ROI. Improves customer success and reduces team burden.
  • Effort estimate: 2-3 weeks (1 engineer)

2. User Profiles

  • ICE Score: 2.94
  • Impact: 6 | Confidence: 7 | Ease: 7
  • Evidence: Enables personalization, improves user experience
  • Rationale: Foundation for future features, moderate impact with reasonable effort
  • Effort estimate: 3-4 weeks (1-2 engineers)

3. Search

  • ICE Score: 2.80
  • Impact: 8 | Confidence: 7 | Ease: 5
  • Evidence: Core usability feature, especially valuable as data grows
  • Rationale: High impact on user experience, moderate complexity
  • Effort estimate: 4-6 weeks (1-2 engineers)

Tier 1 Total Effort: ~8-13 calendar weeks with features staffed in parallel (fits the 13-week quarter for 3 engineers)
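
As a rough sanity check on this claim, the effort estimates above convert to person-weeks as follows. A minimal sketch; the 13-week quarter and the per-feature staffing ranges are taken from the estimates above:

```python
# Rough person-week check for the Tier 1 plan. Effort ranges come from
# the estimates above; a 13-week quarter is assumed.

QUARTER_WEEKS = 13
TEAM_SIZE = 3
capacity = QUARTER_WEEKS * TEAM_SIZE  # ~39 person-weeks available in Q3

tier1 = {
    # feature: (min person-weeks, max person-weeks)
    "Support Docs":  (2 * 1, 3 * 1),  # 2-3 weeks x 1 engineer
    "User Profiles": (3 * 1, 4 * 2),  # 3-4 weeks x 1-2 engineers
    "Search":        (4 * 1, 6 * 2),  # 4-6 weeks x 1-2 engineers
}

low = sum(lo for lo, _ in tier1.values())   # 9 person-weeks
high = sum(hi for _, hi in tier1.values())  # 23 person-weeks
print(f"Tier 1 needs {low}-{high} of {capacity} person-weeks")
assert high <= capacity  # fits, with slack for reviews, bugs, and support
```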


TIER 2: DEFER TO Q4

4. Dark Mode

  • ICE Score: 1.92
  • Impact: 3 | Confidence: 8 | Ease: 8
  • Defer reason: Cosmetic feature, low business impact despite ease of implementation
  • Effort estimate: 2 weeks

5. API

  • ICE Score: 1.62
  • Impact: 9 | Confidence: 6 | Ease: 3
  • Defer reason: High strategic value, but it requires significant effort (6-8 weeks). Better suited for Q4 when the team has capacity.
  • Effort estimate: 6-8 weeks

TIER 3: DEFER TO 2026

6. Mobile App

  • ICE Score: 0.70
  • Impact: 7 | Confidence: 5 | Ease: 2
  • Defer reason: Very high effort (3+ months), uncertain adoption with only 100 customers, requires ongoing platform maintenance
  • Effort estimate: 3+ months initial build, ongoing maintenance
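
For reference, the tier cuts above can be reproduced mechanically from the ICE scores. A minimal sketch, assuming illustrative thresholds of 2.5 and 1.0; the report's actual tiering also weighs capacity and strategic fit, not the score alone:

```python
# Illustrative tiering by ICE score. The cutoffs (2.5 and 1.0) are
# assumptions chosen to reproduce this report's tiers.

scores = {
    "Support Docs": 5.67, "User Profiles": 2.94, "Search": 2.80,
    "Dark Mode": 1.92, "API": 1.62, "Mobile App": 0.70,
}

def tier(score: float) -> str:
    if score >= 2.5:
        return "Tier 1: must do (Q3)"
    if score >= 1.0:
        return "Tier 2: defer to Q4"
    return "Tier 3: defer to 2026"

for name, s in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:13s} {s:4.2f} -> {tier(s)}")
```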

Evidence Summary

Top Priority Evidence (Tier 1):

  • Support Docs: Industry benchmarks show a 20-30% reduction in support tickets and improved onboarding completion rates
  • User Profiles: Foundation for personalization features, enables customer segmentation
  • Search: Critical usability feature as content/data scales, prevents user frustration

Assumptions Documented:

  • Support docs effort: 2-3 weeks assumes templates/CMS exist
  • Search complexity: 4-6 weeks assumes moderate data volume (<1M records)
  • Mobile app adoption: Low confidence (5/10) given small customer base
  • API scope: Could expand significantly based on use cases

Q3 Recommendation

Build in Q3: Features #1-3 (Support Docs, User Profiles, Search)

  • Total effort: 8-13 weeks across 3 engineers
  • Expected outcome: Improved customer success, reduced support load, enhanced core UX
  • Risk mitigation: All three have clear scope and proven value

Sequencing:

  1. Weeks 1-3: Support Docs (1 engineer) + User Profiles kickoff (1 engineer)
  2. Weeks 4-7: User Profiles completion + Search implementation (2 engineers)
  3. Weeks 8-13: Search completion and polish (1-2 engineers)

Defer to Q4: Features #4-5 (Dark Mode, API)

  • Rationale: Dark mode is low impact; API requires dedicated focus when team has capacity

Defer to 2026: Feature #6 (Mobile App)

  • Rationale: Very high effort with uncertain ROI at current scale. Revisit once the customer base reaches 500+.

Stakeholder Communication

To Leadership:
“We prioritized using ICE to maximize Q3 impact with 3 engineers. The top 3 features focus on customer success (support docs) and core UX improvements (user profiles, search). We’re deliberately deferring the API and mobile app until we have capacity and greater customer scale to justify the investment.”

To Engineering:
“Q3 focus is customer success and core UX. Support docs is a quick win (2-3 weeks). User profiles and search are foundational features that set us up for personalization and scalability. All have clear scope and reasonable complexity.”

To Sales/Customer Success:
“Support docs ship first (early Q3), reducing your support burden. User profiles and search follow, improving customer experience. API is planned for Q4 to enable integrations. Mobile app requires significant investment and is on the 2026 roadmap pending demand validation.”

To Customers (for deferred features):
“We heard your requests for API access and mobile apps. These are on our roadmap. We’re prioritizing core platform improvements first: better documentation, personalized experiences, and powerful search. API launches Q4 2025, mobile app targeted for 2026.”


Process Quality Metrics

Distribution Health: 50% must-do, 33% defer Q4, 17% defer 2026 (realistic)
Capacity Fit: Tier 1 effort (8-13 calendar weeks, roughly 9-23 person-weeks) fits within Q3 capacity (3 engineers × 13 weeks ≈ 39 person-weeks)
Evidence Strength: Top features have clear business rationale
Effort Realism: Accounts for full scope including testing, deployment, documentation
Strategic Balance: Mix of quick wins (support docs) and foundational work (profiles, search)

Recommendation: This prioritization balances quick wins with strategic foundations, fits team capacity, and defers high-effort/uncertain-ROI features until appropriate scale.