ICE Scoring Framework for Weekly Growth Sprint Planning
Use ICE scoring to prioritize growth experiments in weekly sprint planning so the team always works on the highest-impact ideas.
How to Apply
1. Gather ideas from all team members. No filtering at this stage.
2. Each team member independently scores each idea 1-10 on Impact, Confidence, and Ease.
3. Average each idea's scores across team members to get its ICE score, then rank experiments from highest to lowest.
4. Pick the top 2-3 experiments that fit sprint capacity and assign owners.
5. After each sprint, review results and calibrate future scoring.
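The scoring and ranking steps above can be sketched in a few lines. This is a minimal illustration, not a prescribed tool; the idea names and scores are hypothetical, and the ICE score here is the simple average of the three dimensions after averaging each dimension across scorers.

```python
# Minimal sketch of ICE ranking. Each team member scores every idea
# 1-10 on Impact, Confidence, and Ease; we average per dimension,
# then average the three dimensions into one ICE score.

def ice_score(scores):
    """Average each dimension across scorers, then combine into one score."""
    impact = sum(s["impact"] for s in scores) / len(scores)
    confidence = sum(s["confidence"] for s in scores) / len(scores)
    ease = sum(s["ease"] for s in scores) / len(scores)
    return (impact + confidence + ease) / 3

# Illustrative data: two ideas, each scored by two team members.
ideas = {
    "Exit-intent popup": [
        {"impact": 7, "confidence": 6, "ease": 8},
        {"impact": 6, "confidence": 5, "ease": 9},
    ],
    "Referral program": [
        {"impact": 9, "confidence": 4, "ease": 3},
        {"impact": 8, "confidence": 5, "ease": 4},
    ],
}

# Rank experiments from highest to lowest ICE score.
ranked = sorted(ideas, key=lambda name: ice_score(ideas[name]), reverse=True)
for name in ranked:
    print(f"{name}: {ice_score(ideas[name]):.1f}")
```

The top 2-3 names in `ranked` are the candidates for the sprint, subject to capacity and owner assignment.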
Expected Outcomes
- ✓ Data-driven experiment prioritization
- ✓ Reduced analysis paralysis
- ✓ Higher experiment velocity
Ehsan's Insight
ICE scoring for weekly growth sprints has one critical failure mode: Impact scores are almost always wrong because teams estimate impact based on gut feeling rather than measurement. At GrowthHackers, Sean Ellis required every Impact score to be backed by a specific calculation: "If this experiment wins, it will increase [metric] from [current value] to [estimated value], which equals [dollar impact]." Without that sentence, the experiment does not get scored.

The second mistake: Confidence scores are meaningless without specifying what evidence supports them. A "4/5 Confidence" should mean "we have seen this work at 2+ comparable companies and have preliminary data from our own tests."

Teams that enforce these two rules — quantified Impact and evidence-backed Confidence — ship 3x more winning experiments per quarter because they stop wasting cycles on "interesting ideas" with no supporting data.
Ehsan Jahandarpour
AI Growth Strategist & Fractional CMO
Forbes Top 20 Growth Hacker · TEDx Speaker · 716 Academic Citations · Ex-Microsoft · CMO at FirstWave (ASX:FCT) · Forbes Communications Council