ICE Scoring Framework: ICE for Conversion Rate Optimization
Applying ICE scoring to prioritize CRO tests across landing pages, signup flows, and checkout processes for maximum revenue impact.
How to Apply
1. Map all conversion points and use analytics data to identify the biggest drop-off stages.
2. For each drop-off point, brainstorm 5-10 improvement hypotheses.
3. Score each hypothesis: Impact = potential uplift × traffic volume; Confidence = strength of the data supporting the hypothesis; Ease = inverse of implementation effort.
4. Test the highest-ICE hypotheses first, and run each test to statistical significance.
5. Implement winning variants, then re-score the remaining hypotheses against the new baseline.
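The scoring and ranking steps above can be sketched in a few lines. The multiplicative formula (Impact × Confidence × Ease, each on a 1-10 scale) is one common ICE convention, not prescribed by this article, and the hypothesis data is purely illustrative:

```python
# Minimal ICE prioritization sketch for CRO hypotheses.
# Formula and 1-10 scales are assumed conventions; data is illustrative.

def ice_score(impact: int, confidence: int, ease: int) -> int:
    """ICE = Impact x Confidence x Ease, each scored 1-10."""
    return impact * confidence * ease

hypotheses = [
    {"name": "Shorten signup form",    "impact": 7, "confidence": 6, "ease": 8},
    {"name": "Redesign checkout page", "impact": 9, "confidence": 4, "ease": 3},
    {"name": "Add trust badges",       "impact": 4, "confidence": 5, "ease": 9},
]

# Rank hypotheses from highest to lowest ICE score.
ranked = sorted(
    hypotheses,
    key=lambda h: ice_score(h["impact"], h["confidence"], h["ease"]),
    reverse=True,
)

for h in ranked:
    print(h["name"], ice_score(h["impact"], h["confidence"], h["ease"]))
```

Note how the high-Impact checkout redesign ranks last here: low Confidence and Ease scores drag a multiplicative ICE score down quickly, which is exactly the behavior step 4 relies on.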
Expected Outcomes
- ✓ Systematic conversion improvements
- ✓ Higher revenue per visitor
- ✓ Data-driven CRO culture
Ehsan's Insight
ICE scoring for CRO experiments has a mathematical problem most teams ignore: Impact should be scored in absolute conversions (or revenue), not percentage lift. At a 10% baseline conversion rate, a 50% relative lift on a page with 100 monthly visitors adds only about 5 conversions, while a 5% lift on a page with 50,000 visitors adds about 250. Yet teams consistently prioritize the high-percentage-lift experiment because "50% improvement" sounds more impressive than "5% improvement." Booking.com runs 25,000+ experiments annually and weights Impact purely by estimated revenue change, ignoring percentage lift entirely.

The other CRO-specific ICE rule: never score Confidence above 3 unless you have A/B test data from a similar experiment. Most CRO "best practices" (bigger buttons, social proof, urgency timers) have been tested so many times that the actual lift is near zero for sophisticated audiences.
Ehsan Jahandarpour
AI Growth Strategist & Fractional CMO
Forbes Top 20 Growth Hacker · TEDx Speaker · 716 Academic Citations · Ex-Microsoft · CMO at FirstWave (ASX:FCT) · Forbes Communications Council