Pre vs Post Measurement

When you can't reach sample size in a month, ship the change and compare before/after instead of A/B testing

Elena Verna
10 growth tactics that never work


"If you have low volume real estate, that is going to take you eight months to reach some sort of answer. Do you really want to test it for eight months? What's the point of it? My rule of thumb, if we cannot collect the sample size in the month, we shouldn't test it, period." - Elena Verna

What It Is

Pre vs Post Measurement is an alternative to A/B testing for situations where you don't have enough traffic volume to reach statistical significance in a reasonable timeframe. Instead of splitting traffic and waiting months for results, you ship the change and compare performance before and after the launch.

This approach prioritizes velocity and learning over statistical precision, recognizing that perfect data from a slow test is often less valuable than directional data from a fast iteration.

How It Works

The One-Month Rule

Elena's heuristic: If you can't collect sufficient sample size in one month, don't A/B test.

Running tests for 8+ months is paralyzing because:

  • You can't learn fast enough
  • The market may have changed by the time you get results
  • You've blocked that real estate from other improvements
  • The opportunity cost is enormous
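The one-month rule can be made concrete with a standard two-proportion sample-size calculation. The sketch below is illustrative, not from the episode: the function name, traffic numbers, and effect size are assumptions, and it uses the conventional z-values for a two-sided alpha of 0.05 and 80% power.

```python
import math

def days_to_significance(daily_visitors, baseline_rate, mde_relative):
    """Estimate how many days a 50/50 A/B test needs to reach significance.

    Uses the standard two-proportion sample-size formula with z-values
    for two-sided alpha=0.05 (1.96) and power=0.80 (0.84).
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + mde_relative)  # rate if the lift materializes
    z_alpha, z_beta = 1.96, 0.84
    n_per_variant = ((z_alpha + z_beta) ** 2 *
                     (p1 * (1 - p1) + p2 * (1 - p2)) /
                     (p2 - p1) ** 2)
    total_needed = 2 * n_per_variant
    return math.ceil(total_needed / daily_visitors)

# Low-volume real estate: 200 visitors/day, 3% baseline, hoping for a 10% lift.
days = days_to_significance(200, 0.03, 0.10)
print(days, "days to significance")
print("Pre/post instead" if days > 30 else "A/B test is fine")
```

With numbers like these the answer comes out in the hundreds of days, which is exactly the "eight months to reach some sort of answer" situation the rule is designed to avoid.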

Pre vs Post Process

  1. Measure "before" metrics for the relevant area

  2. Ship the change to 100% of users

  3. Measure "after" metrics with multiple checkpoints:

    • 24-hour readout
    • 7-day readout
    • 28-day readout
    • Optional: 1-year retention/extension data
  4. Compare the periods directionally

  5. Roll back if performance clearly drops
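The process above can be sketched as a simple before/after comparison at each checkpoint. This is a minimal illustration, not a prescribed implementation: the metric values and the 10% rollback threshold are made-up assumptions.

```python
# Illustrative pre/post readout: compare a metric's "before" baseline
# against post-launch checkpoints and flag a clear drop for rollback.

ROLLBACK_DROP = 0.10  # assumed threshold: revert if >10% below baseline

def readout(before_rate, after_rates):
    """Return (checkpoint, relative_change, rollback?) per readout window."""
    results = []
    for checkpoint, after in after_rates.items():
        change = (after - before_rate) / before_rate
        results.append((checkpoint, round(change, 3), change < -ROLLBACK_DROP))
    return results

before = 0.042  # example: signup rate in the 28 days before launch
after = {"24h": 0.040, "7d": 0.045, "28d": 0.047}

for checkpoint, change, rollback in readout(before, after):
    print(f"{checkpoint}: {change:+.1%} -> {'ROLLBACK' if rollback else 'keep'}")
```

Checking each window against the same baseline, rather than only day one, guards against reacting to novelty effects in the 24-hour readout.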

When to A/B Test vs Pre/Post

A/B Test when:

  • High traffic real estate
  • Strategic pivots requiring confidence
  • Small percentage changes mean millions of dollars
  • You need to prove causation, not just correlation

Pre/Post when:

  • Low volume areas
  • Can't reach sample size in a month
  • Speed matters more than precision
  • You can roll back if needed
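The two lists above amount to a decision rule. One hedged way to encode it, with the 30-day cutoff taken from Elena's heuristic and the other inputs as illustrative booleans:

```python
def choose_method(days_to_significance, high_stakes, reversible):
    """Pick a measurement approach from the heuristics above.

    days_to_significance: estimated days an A/B test would need.
    high_stakes: strategic pivot or revenue-critical surface.
    reversible: can the change be rolled back quickly?
    """
    if days_to_significance <= 30 or high_stakes:
        return "a/b test"       # volume or stakes justify the test
    if reversible:
        return "pre/post"       # ship, compare before/after, watch readouts
    return "rethink scope"      # too slow to test, too risky to ship blind

print(choose_method(14, False, True))    # fast to significance
print(choose_method(240, False, True))   # low volume, reversible
print(choose_method(240, False, False))  # low volume, irreversible
```

The "rethink scope" branch is my addition for completeness: a change that can't be tested quickly or rolled back probably needs to be broken into smaller, reversible pieces first.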

How to Apply It

  1. Audit your experiment queue

    • How long would each test take to reach significance?
    • Flag anything requiring more than a month
  2. For low-volume tests:

    • Document current metrics clearly
    • Ship the change
    • Set calendar reminders for readouts
    • Be prepared to roll back
  3. Trust your intuition more

    • Data is good, but only with sufficient volume
    • Customer understanding + intuition can fill gaps
    • "People stop relying enough on their elemental intuition"
  4. Use multiple readout points

    • Don't just look at day 1
    • Check at 7, 28 days
    • Come back at 1 year for retention/long-term effects
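To make the readout reminders concrete, the checkpoint dates can be computed from the launch date. A minimal sketch, assuming the windows listed in the process above:

```python
from datetime import date, timedelta

def readout_schedule(launch):
    """Checkpoint dates for the pre/post readout windows."""
    windows = {"24h": 1, "7d": 7, "28d": 28, "1y": 365}
    return {name: launch + timedelta(days=days) for name, days in windows.items()}

# Example launch date; drop these into calendar reminders.
for name, when in readout_schedule(date(2024, 3, 1)).items():
    print(name, when.isoformat())
```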

Common Mistakes

  • Over-testing: Making every initiative an experiment creates paralysis
  • False precision: Taking statistical significance at face value when distributions aren't that clean
  • Ignoring intuition: Data is "a directional data point," not gospel
  • Not rolling back: Pre/post only works if you're willing to revert

The Deeper Problem

"I just think that people, in this age of data, almost stop relying enough on their elemental intuition."

Experimentation culture can become "a disease, like a paralyzing disease" that:

  • Slows down progress
  • Reduces velocity
  • Blocks learnings
  • Creates terrible consequences for output

The goal isn't to eliminate testing—it's to test only where you have the volume and stakes to justify it.

Source

  • Guest: Elena Verna
  • Episode: "10 growth tactics that never work"
  • Key Discussion: (01:05:59) - Too much risk averseness in growth
  • YouTube: Watch on YouTube

Related Frameworks