Why Most A/B Tests Are Useless
80% of the tests we see in client reports ran for 3 days, with 200 sessions, and declared a winner on a +2% lift. That isn't A/B testing; it's statistical noise. A badly run test is worse than no test at all: it hands you false certainty.
Minimum Requirements for a Valid Test
- 95% statistical significance: at most a 5% chance of declaring a winner when there is no real difference between the variants.
- At least 100 conversions per variant: if your site does 10 conversions a week, a 50/50 traffic split needs 20 weeks to reach that, not 3 days.
- One variable at a time: headline, image, or CTA button. Not all three simultaneously.
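The sketch below walks through this arithmetic. It is a minimal Python illustration, not a production tool: the conversion numbers are hypothetical, it assumes scipy is available, and it uses a standard two-proportion z-test as one common way to check 95% significance.

```python
import math
from scipy.stats import norm

def weeks_needed(conversions_per_week: float, min_per_variant: int = 100) -> int:
    """Weeks required for EACH of two variants to collect min_per_variant
    conversions, assuming traffic is split 50/50."""
    per_variant_weekly = conversions_per_week / 2
    return math.ceil(min_per_variant / per_variant_weekly)

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - norm.cdf(abs(z)))                  # two-sided p-value

# Hypothetical example: a site doing 10 conversions/week.
print(weeks_needed(10))                                # -> 20 weeks

# Hypothetical finished test: 120/4000 vs 150/4000 conversions.
# A +25% relative lift, yet p ~ 0.063: still NOT significant at 95%.
p = two_proportion_z_test(120, 4000, 150, 4000)
print(f"p = {p:.3f}, significant at 95%: {p < 0.05}")
```

Note the second example: even a 25% relative lift on 270 total conversions fails the 95% bar, which is exactly why small sites cannot call winners after 3 days.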
What's Worth Testing
Focus on high-impact elements: the hero headline, the main offer, the primary CTA, contact form length, social proof positioning, and price or pricing structure. Don't waste traffic testing button colour, fonts, or spacing.
Classic Mistakes
- Stopping early: fix the test duration before you start and don't cut it short, however promising the interim results look. The simulation sketch below shows why.
- Running multiple tests simultaneously.
- Ignoring seasonality: run every test for at least 1–2 complete weeks so weekend variation is covered.
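To see what stopping early does to your error rate, here is a minimal simulation sketch. All traffic numbers are made up for illustration: it runs A/A tests (both variants identical, so any "winner" is a false positive) and compares a fixed-horizon test against one where you peek every day and stop the moment p < 0.05.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

def p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0                                   # no conversions yet
    z = (conv_b / n_b - conv_a / n_a) / se
    return 2 * (1 - norm.cdf(abs(z)))

def run_aa_test(days=28, visitors_per_day=200, rate=0.05, peek=False):
    """Simulate one A/A test. Returns True if it (wrongly) finds a winner."""
    conv_a = conv_b = n_a = n_b = 0
    for _ in range(days):
        n_a += visitors_per_day
        n_b += visitors_per_day
        conv_a += rng.binomial(visitors_per_day, rate)
        conv_b += rng.binomial(visitors_per_day, rate)
        if peek and p_value(conv_a, n_a, conv_b, n_b) < 0.05:
            return True                              # stopped early on noise
    return p_value(conv_a, n_a, conv_b, n_b) < 0.05   # final read only

trials = 2000
fixed  = sum(run_aa_test(peek=False) for _ in range(trials)) / trials
peeked = sum(run_aa_test(peek=True)  for _ in range(trials)) / trials
print(f"false positives, fixed horizon: {fixed:.1%}")   # stays near 5%
print(f"false positives, daily peeking: {peeked:.1%}")  # far above 5%
```

In this toy setup the fixed-horizon test stays near the nominal 5% false-positive rate, while daily peeking typically multiplies it several times over. That is the whole argument for setting the duration in advance.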
A well-run A/B test on the hero headline can produce more value than a year of cosmetic optimisations. But a poorly run test can convince you something doesn't work when it actually does.
Want to improve your conversion rate before scaling your ad budget?
We do a paid audit of your page and identify exactly what's blocking conversions.