ICE Scoring Framework for Product Feature Prioritization
Adapting ICE scoring to prioritize product features and improvements based on their expected impact on key metrics.
How to Apply
1. Score Impact (1-10): the expected improvement to the North Star Metric if the feature succeeds.
2. Score Confidence (1-10): how strong the supporting evidence is, drawing on user research, data analysis, and competitive benchmarks.
3. Score Ease (1-10): the inverse of engineering effort in story points or days, including testing and deployment; less effort means a higher score.
4. Average the three scores for each feature, optionally weighting Impact more heavily.
5. The top-scoring items form the product roadmap. Review monthly.
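The steps above can be sketched in a few lines. The 1-10 scale matches the framework; the feature names, scores, and the 1.5x Impact weight are illustrative assumptions, not prescribed values.

```python
# Minimal ICE scoring sketch: weighted average of Impact, Confidence, Ease
# (each on a 1-10 scale), then rank features descending.

def ice_score(impact, confidence, ease, impact_weight=1.5):
    """Average of the three ICE components, with Impact weighted more heavily."""
    total_weight = impact_weight + 1 + 1
    return (impact * impact_weight + confidence + ease) / total_weight

# Hypothetical backlog for illustration only.
features = [
    {"name": "Onboarding checklist", "impact": 8, "confidence": 7, "ease": 9},
    {"name": "Dark mode",            "impact": 4, "confidence": 9, "ease": 6},
    {"name": "Referral program",     "impact": 9, "confidence": 5, "ease": 3},
]

ranked = sorted(
    features,
    key=lambda f: ice_score(f["impact"], f["confidence"], f["ease"]),
    reverse=True,
)
for f in ranked:
    print(f["name"], round(ice_score(f["impact"], f["confidence"], f["ease"]), 2))
# → Onboarding checklist 8.0, Referral program 6.14, Dark mode 6.0
```

Setting `impact_weight=1.0` recovers the plain unweighted average if the team prefers it.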
Expected Outcomes
- ✓ Evidence-based product roadmap
- ✓ Better engineering resource allocation
- ✓ Higher feature success rate
Ehsan's Insight
Using ICE for product feature prioritization reveals a consistent bias: product managers inflate Ease scores for features they personally want to build. Instagram's growth team solved this by separating ICE scoring into two groups — PMs scored Impact and Confidence, while engineering leads scored Ease independently, without seeing the other scores first. When combined, the rankings shifted dramatically: 40% of "top priority" features dropped to the bottom third. The other fix: time-box Ease strictly. Instead of "how hard is this?" (which is subjective), ask "can a single engineer ship this in under one sprint?" If yes, Ease = 8+. If it requires cross-team coordination, Ease = 3 maximum. This binary approach eliminates the 4-6 range where all the self-deception lives.
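The two fixes described above can be sketched as a small helper: engineering assigns Ease with the time-boxed binary rule, product assigns Impact and Confidence blind, and the scores are only combined afterward. The thresholds (8 and 3) come from the text; the function names and the example scores are my own illustrative assumptions.

```python
# Sketch of the two anti-bias fixes: a binary, time-boxed Ease score
# and role-separated scoring that is combined only after both sides submit.

def binary_ease(ships_solo_in_one_sprint: bool) -> int:
    """Time-boxed Ease: 8 if a single engineer can ship it in under one
    sprint, 3 otherwise (e.g. cross-team coordination required). This
    deliberately skips the 4-6 range where self-deception lives."""
    return 8 if ships_solo_in_one_sprint else 3

def combined_ice(pm_scores: dict, eng_ease: int) -> float:
    """PMs score Impact and Confidence; engineering leads score Ease
    independently, without seeing the PM scores first."""
    return (pm_scores["impact"] + pm_scores["confidence"] + eng_ease) / 3

# Hypothetical feature: high impact, decent evidence, needs cross-team work.
score = combined_ice({"impact": 9, "confidence": 6}, binary_ease(False))
print(score)  # → 6.0
```

Because Ease can only be 8 or 3, a PM cannot quietly nudge a pet feature up the ranking by shading its effort estimate.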
Ehsan Jahandarpour
AI Growth Strategist & Fractional CMO
Forbes Top 20 Growth Hacker · TEDx Speaker · 716 Academic Citations · Ex-Microsoft · CMO at FirstWave (ASX:FCT) · Forbes Communications Council