Adding Depth to a Technical Story
Shows how to elevate a basic technical problem-solving story by adding specific metrics, technical details, and clear decision-making authority.
Input
What you provide to the skill
Our database queries were taking too long during peak hours. I worked with the database team to optimize them. The performance improved and fewer customers complained about timeouts.
Output
What the skill generates for you
STAR Story Evaluation: Insufficient Metrics and Vague Impact
Overall Score: 4/10
Your story has the basic STAR structure, but it lacks quantifiable impact metrics and a clear definition of your role, and it devotes too much attention to setup while rushing through your actual contributions. The result is a vague narrative that makes it difficult for interviewers to understand your technical depth or decision-making authority.
Dimension Scores
Impact Metrics: 2/5
What you said: “Our database queries were taking too long during peak hours. I worked with the database team to optimize them. The performance improved and fewer customers complained about timeouts.”
What’s missing: No baseline query times, no improvement percentages, no definition of “peak hours,” no specifics on timeout rates before/after, no team size mentioned, no optimization timeframe.
Role Clarity: 2/5
What you said: “I worked with the database team to optimize them.”
Problem: “Worked with” is passive collaboration language. Did you identify the issue? Did you propose specific optimizations? What decisions were yours versus the database team’s? The use of “them” (queries) is also unclear—how many queries? Which ones?
Time Balance: 2/5
Current allocation: S=30%, T=10%, A=35%, R=25%
Problem: You spend 40% on setup (Situation + Task) but only 35% on Action—the most critical part where you demonstrate technical competence and decision-making. The Action section needs significantly more detail about what you actually did.
Rewritten Story
Situation:
“At our SaaS platform serving 50,000 daily active users, we saw query response times spike to 8-12 seconds during peak traffic hours (9-11am, 2-4pm), up from our normal 1.5-second baseline. This caused a 15% increase in timeout errors and 47 customer support tickets in just one week.”
Task:
“As backend engineer on-call that week, I needed to identify the root cause and implement optimizations to restore sub-2-second query performance without disrupting the production environment.”
Action:
“I spent the first day profiling our 12 most-trafficked endpoints using New Relic and identified that 3 specific dashboard queries were responsible for 80% of the slowdown. Each query was performing full table scans on our 4.5-million-row events table. I proposed adding composite indexes on (user_id, created_at, event_type)—the exact columns we were filtering on. I collaborated with our 2-person database team to test the indexes in staging, where we confirmed a 73% reduction in query execution time (8.2s to 2.2s). After getting approval from my engineering manager, I deployed the indexes during a low-traffic window at 3am on Tuesday. I monitored query performance for 48 hours post-deployment using CloudWatch dashboards I had set up specifically to track the p95 and p99 response times for these endpoints.”
Result:
“Peak-hour query times dropped from 8-12 seconds to 1.8 seconds (85% improvement). Timeout errors decreased by 92%, and customer complaints about slowness fell to zero within 72 hours. The indexes required only 450MB of additional storage, well within our capacity budget. My manager shared the approach in our weekly engineering all-hands as a case study for data-driven performance optimization.”
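For readers who want to picture the optimization the rewritten Action step describes, here is a minimal sketch. The events table and the (user_id, created_at, event_type) columns come from the story itself; PostgreSQL, the index name, and the connection details are illustrative assumptions, not part of the original example.

```python
# Hypothetical deployment sketch of the composite index described in the
# Action step. CONCURRENTLY keeps the 4.5-million-row events table writable
# while the index builds, matching the low-traffic deployment window in the story.
import psycopg2  # assumes a PostgreSQL backend

# Placeholder DSN; swap in real connection details.
conn = psycopg2.connect("dbname=app user=deploy host=localhost")
# CREATE INDEX CONCURRENTLY cannot run inside a transaction block.
conn.autocommit = True

with conn.cursor() as cur:
    cur.execute(
        """
        CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_events_user_created_type
            ON events (user_id, created_at, event_type);
        """
    )
conn.close()
```

With this column order, the planner can use the index for equality filters on user_id and range filters on created_at, and can check event_type against index entries without touching the table, which is roughly the pattern behind replacing the full table scans the story describes with index scans.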
Before vs. After
Before: 4/10
After (rewritten): 8/10
Key improvements:
- Added 12+ specific metrics (50K users, 8-12s vs 1.5s baseline, 15% timeout increase, 47 tickets, 12 endpoints, 3 queries, 4.5M rows, 73% reduction, 85% improvement, 92% error decrease, 450MB storage)
- Clarified role with “I” for decisions (profiling, proposing indexes, setting up dashboards) and appropriate team collaboration (database team for testing, manager for approval)
- Expanded Action from 35% to 60% of story, adding technical depth about profiling tools, specific optimization approach, testing methodology, and deployment strategy
- Transformed vague “performance improved” into concrete before/after metrics with business impact (zero complaints, recognition from leadership)
About This Skill
Transform weak behavioral interview stories into metric-driven STAR narratives with before/after examples.
More Examples
Transforming a Vague Performance Story
Shows how the skill evaluates a generic story lacking metrics and specificity, then rewrites it with concrete improvements.
Clarifying Team Collaboration Story
Demonstrates how to transform a vague team collaboration story with no metrics or personal ownership into a concrete narrative with specific time savings and role clarity.