Why Experiment Velocity Matters
The best growth teams don't have better ideas; they test more ideas faster. Companies like Booking.com run 25,000+ experiments per year. Your experiment velocity is one of the strongest predictors of your growth rate.
This playbook shows you how to 10x your experiment velocity without sacrificing quality or statistical rigor.
The Growth Experiment Process
Every experiment follows this cycle:
1. Observe: What does the data tell you about user behavior? Where are the biggest drop-offs?
2. Hypothesize: "If we [change], then [metric] will improve by [amount] because [reason]."
3. Score: Use ICE scoring: Impact (1-10) × Confidence (1-10) × Ease (1-10). Run the highest-scoring experiments first (see the prioritization sketch after this list).
4. Test: Design the minimum viable test. What's the smallest change that tests the hypothesis?
5. Analyze: Wait for statistical significance (95% confidence) before calling a result; a minimal significance check also follows this list. Segment results by user type.
6. Learn: Document findings regardless of outcome. Failed experiments are as valuable as successful ones.
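Here is a minimal sketch of step 3 in Python. The `Experiment` class and the backlog entries are illustrative assumptions, not part of any particular tool:

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    name: str
    impact: int      # 1-10: how much the target metric could move
    confidence: int  # 1-10: how sure we are the change will work
    ease: int        # 1-10: how cheap the test is to build and run

    @property
    def ice(self) -> int:
        # ICE score is the product of the three 1-10 ratings.
        return self.impact * self.confidence * self.ease

backlog = [
    Experiment("Rewrite onboarding email sequence", impact=6, confidence=7, ease=8),
    Experiment("New pricing page layout", impact=8, confidence=5, ease=4),
    Experiment("Change CTA button color", impact=2, confidence=6, ease=10),
]

# Run the highest-scoring experiments first.
for exp in sorted(backlog, key=lambda e: e.ice, reverse=True):
    print(f"{exp.ice:>4}  {exp.name}")
```

Note how the button-color test scores a perfect 10 on ease but still lands at the bottom of the queue, which is exactly the prioritization behavior you want.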
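And a minimal significance check for step 5: a two-sided two-proportion z-test using only the standard library. The function name and the conversion numbers are illustrative:

```python
from statistics import NormalDist

def significant(conv_a: int, n_a: int, conv_b: int, n_b: int,
                alpha: float = 0.05) -> bool:
    """Two-sided two-proportion z-test: did variant B convert
    differently from variant A at the given significance level?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_value < alpha

# 95% confidence means alpha = 0.05. Illustrative numbers:
# 4.8% vs 5.6% conversion on 10,000 users per arm is significant.
print(significant(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000))  # True
```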
Where to Experiment
Acquisition experiments: Landing page headlines, ad copy, channel mix, referral incentives, pricing page layout.
Activation experiments: Onboarding flow, empty states, first-run experience, tooltip copy, tutorial design.
Retention experiments: Email sequences, push notification timing, feature discovery, usage nudges, check-in campaigns.
Revenue experiments: Pricing tiers, trial length, upgrade CTAs, annual vs monthly, feature gating.
Referral experiments: Share mechanics, incentive structure, referral messaging, social proof placement.
Experiment Infrastructure
You need these tools to run experiments at scale:
Feature flags: LaunchDarkly, Statsig, or Flagsmith for controlling experiment exposure.
A/B testing: Statsig, Optimizely, or a custom solution for running split tests (see the assignment sketch after this list).
Analytics: Amplitude, Mixpanel, or PostHog for measuring experiment impact.
Documentation: A shared experiment tracker (Notion, Airtable, or custom tool) for recording hypotheses, results, and learnings.
AI integration: Use AI to generate test variants, analyze results, and suggest the next experiments based on historical patterns.
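Under the hood, feature-flag and A/B tools commonly assign variants by deterministic hashing, so the same user always sees the same variant without storing an assignment table. A minimal sketch of that idea (the function and experiment key are hypothetical, not a specific vendor's API):

```python
import hashlib

def assign_variant(user_id: str, experiment_key: str,
                   variants: tuple[str, ...] = ("control", "treatment")) -> str:
    """Deterministically bucket a user into a variant: the same
    inputs always produce the same assignment."""
    digest = hashlib.sha256(f"{experiment_key}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Salting the hash with the experiment key keeps assignments
# independent across experiments running on the same users.
print(assign_variant("user-42", "pricing-page-v2"))
```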
Common Experimentation Mistakes
Not reaching statistical significance: The most common mistake. Running too many variants with too little traffic produces meaningless results; the back-of-envelope sample-size estimate at the end of this section shows why.
HiPPO syndrome: Highest Paid Person's Opinion overriding data. Build a culture where data wins regardless of who suggests the hypothesis.
Not documenting learnings: If experiment results live only in someone's head, the organization doesn't learn. Document everything.
Only testing easy things: Changing button colors is easy but rarely impactful. Test fundamental changes: value proposition, pricing, product experience.
Giving up too early: Most experiments fail, and that's expected. A 20-30% win rate means your testing system is working; a much higher win rate suggests you're only testing safe bets.
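To make the first mistake concrete, here is a back-of-envelope power calculation using the standard two-proportion approximation. It's a sketch with illustrative numbers, not a substitute for your testing tool's built-in calculator:

```python
from statistics import NormalDist

def sample_size_per_variant(baseline: float, mde: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate users needed per variant to detect an absolute
    lift of `mde` over a `baseline` conversion rate."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    p = baseline + mde / 2  # average rate across the two arms
    n = 2 * (z_alpha + z_beta) ** 2 * p * (1 - p) / mde ** 2
    return int(n) + 1

# Detecting a 1-point lift on a 5% baseline takes ~8,000 users per
# arm; splitting that traffic across many variants starves them all.
print(sample_size_per_variant(baseline=0.05, mde=0.01))
```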