Experiment Guide

How to Calculate A/B Test Sample Size

A/B test sample size planning is mostly about two decisions: what size of change is worth detecting, and how much certainty you want in the result. This page walks through the practical logic behind that calculation.

Start with the effect that matters

A/B test planning begins with the minimum detectable effect, not with the sample size itself. You first decide what change would actually be meaningful enough to influence a product, growth, or design decision.

Tiny effects require much larger samples to detect. Required sample size grows roughly with the inverse square of the effect, so halving the minimum detectable effect roughly quadruples the traffic you need.
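That scaling is easy to see with a quick sketch. The function below uses a simplified normal-approximation formula for comparing two proportions; the names and numbers are illustrative, not taken from any particular tool.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(baseline, lift, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-proportion test
    (normal approximation, unpooled variance). Illustrative only."""
    p1, p2 = baseline, baseline + lift
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil(z**2 * variance / lift**2)

# Halving the minimum detectable effect roughly quadruples the sample.
n_big = n_per_group(0.10, 0.02)    # detect 10% -> 12%
n_small = n_per_group(0.10, 0.01)  # detect 10% -> 11%
print(n_big, n_small, round(n_small / n_big, 1))
```

The ratio is not exactly 4 because the variance term also shifts slightly with the target rate, but the inverse-square intuition holds for planning purposes.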

Add confidence and power

Confidence level controls how often random variation gets mistaken for a real effect (false positives). Power controls how likely you are to detect a real effect if it exists (avoiding false negatives). Together, they set the sensitivity of the test.

Raising either standard increases the traffic needed; 95% confidence with 80% power is a common default.
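As a sketch of how these knobs combine, here is the standard pooled-variance formula for a two-proportion test. The function name and example numbers are illustrative, not from any specific library or calculator.

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size for a two-proportion z-test,
    pooled-variance form. Illustrative planning sketch."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # confidence
    z_beta = NormalDist().inv_cdf(power)           # power
    p_bar = (p1 + p2) / 2
    num = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p2 - p1) ** 2)

base = sample_size(0.10, 0.12)                # 95% confidence, 80% power
strict = sample_size(0.10, 0.12, power=0.90)  # raise power to 90%
print(base, strict)
```

Raising power from 80% to 90% here increases the per-group requirement by roughly a third, which is why defaults are worth questioning only when missing a real effect would be costly.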

Use a realistic baseline

Baseline conversion rate anchors the calculation. A page that already converts at 2% behaves differently from a page converting at 20%, even when the target lift looks similar in absolute points, because the variance of a conversion rate depends on the rate itself.

That is why A/B test sample size planning works best when it uses recent baseline data rather than rough guesses.

A practical planning sequence

A useful order is to define the decision threshold first, estimate a realistic baseline, choose the minimum detectable effect, and only then look at the sample size. That keeps the experiment grounded in business relevance instead of starting with traffic alone.

It also helps teams avoid designing tests that are technically valid but operationally unrealistic. If runtime will be too long, the earlier assumptions usually need revision before launch.

  • Choose an effect size that would actually change a decision
  • Use recent baseline data from the same funnel step
  • Check runtime before committing to the experiment
  • Revise the plan instead of launching an obviously underpowered test
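The runtime check in the list above is simple arithmetic once the per-group sample size is known. The numbers below are made up for illustration:

```python
from math import ceil

# Hypothetical inputs: per-group sample size from a planning
# calculation, and the daily traffic reaching this funnel step.
n_per_group = 3841
variants = 2
daily_visitors = 500

days = ceil(n_per_group * variants / daily_visitors)
print(f"Estimated runtime: {days} days")
# If this runtime is operationally unrealistic, revisit the minimum
# detectable effect or baseline assumptions before launching.
```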


Frequently Asked Questions

What will I learn on this page?
The practical logic of A/B test sample size planning: choosing a minimum detectable effect that matters, setting confidence and power, anchoring the calculation on a realistic baseline, and checking runtime before launch.
Who is this A/B testing guide for?
This guide is for product teams, growth marketers, analysts, and anyone planning experiments who wants to make better decisions about effect size, traffic, and test design.
What should I do after reading this page?
Use the explanation here to choose realistic assumptions, then move to the calculator or related pages to estimate the traffic needed for your experiment.
What is the best starting point for A/B test sample size planning?
Start with the effect size that would actually matter for a product or business decision, then combine that with baseline conversion rate, confidence level, and power. That order keeps the test focused on meaningful outcomes.
Why can two similar-looking tests need very different samples?
Because the baseline conversion rate, target effect, and traffic quality may differ even if the user interface change looks similar. The statistical setup, not just the creative idea, determines the sample requirement.