A/B Test

Statistical tests to compare two variants

What is A/B Testing?

A/B Testing is a statistical method to compare two versions (A and B) and determine which performs better. It's like conducting a scientific experiment with your users!

You divide your audience into two groups: one sees version A (control) and the other sees version B (variant). The system calculates whether the observed difference is statistically significant.

Usage examples:

  • Test two versions of a sales page
  • Compare the effectiveness of two marketing campaigns
  • Evaluate the impact of changes in design or copy
  • Decide between two pricing strategies

Quick Start

  1. Prepare your data in CSV format with a group column (A or B) and an outcome metric
  2. Upload the file to the upload page
  3. Configure the parameters (confidence level, type of test)
  4. Wait for processing (usually 1-2 minutes)
  5. Analyze the statistical results and make data-driven decisions

How to organize your data

Organize your data in a CSV spreadsheet with two columns:

Column 1: Group

Identify which version was shown. Use 'A' for control and 'B' for variant.

Column 2: Conversion/Metrics

Action outcome (1 = converted, 0 = not converted) or a numerical value (time, revenue, etc.)

Example of A/B test spreadsheet:

group,converted
A,1
A,0
B,1
B,1

💡 Tip: Each row represents a user or observation. Use 1 for success (conversion, click, purchase) and 0 for failure.
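
If you want to sanity-check the file before uploading, here is a minimal sketch in Python (pandas is assumed; the file name ab_test.csv is just an example, and the column names match the spreadsheet above):

  import pandas as pd

  # Load the CSV described above; column names are assumed to be
  # exactly "group" and "converted".
  df = pd.read_csv("ab_test.csv")

  # Sample size, number of conversions, and conversion rate per group.
  summary = df.groupby("group")["converted"].agg(["count", "sum", "mean"])
  summary.columns = ["users", "conversions", "conversion_rate"]
  print(summary)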

Test Settings

Confidence Level

Define how certain you want to be that the difference is real (not due to chance).

  • 90%: good confidence for rapid testing
  • 95%: recommended standard (balanced)
  • 99%: high confidence for critical decisions
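
Behind the scenes, a confidence level corresponds to a significance threshold alpha (alpha = 1 - confidence). A rough sketch of the relationship, assuming scipy for the critical z-values of a two-tailed test:

  from scipy.stats import norm

  for confidence in (0.90, 0.95, 0.99):
      # The significance threshold is simply 1 minus the confidence level.
      alpha = 1 - confidence
      # Critical z-value for a two-tailed test at this confidence level.
      z_crit = norm.ppf(1 - alpha / 2)
      print(f"{confidence:.0%} confidence -> alpha = {alpha:.2f}, |z| > {z_crit:.2f}")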

Test Type

Choose the appropriate statistical test for your data:

Z-Test (proportions)

For binary data (0 or 1): conversions, clicks, purchases
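
As a sketch of what this test does (not necessarily how this tool implements it), here is a two-proportion z-test with statsmodels, using illustrative counts:

  from statsmodels.stats.proportion import proportions_ztest

  # Illustrative numbers: conversions and sample sizes for groups A and B.
  conversions = [120, 150]  # successes in A, successes in B
  samples = [1000, 1000]    # users in A, users in B

  z_stat, p_value = proportions_ztest(count=conversions, nobs=samples)
  print(f"z = {z_stat:.2f}, p = {p_value:.4f}")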

T-Test (means)

For continuous numerical values: time, revenue, quantity
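
A comparable sketch for continuous data, using Welch's t-test from scipy (the revenue numbers are illustrative):

  from scipy.stats import ttest_ind

  # Illustrative revenue per user in each group.
  revenue_a = [10.5, 12.0, 9.8, 11.2, 10.9]
  revenue_b = [12.1, 13.4, 11.8, 12.9, 13.0]

  # equal_var=False selects Welch's t-test, which does not assume
  # the two groups have equal variances.
  t_stat, p_value = ttest_ind(revenue_a, revenue_b, equal_var=False)
  print(f"t = {t_stat:.2f}, p = {p_value:.4f}")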

One-Tailed vs. Two-Tailed Test

Define the test hypothesis:

Two-tailed

Tests for a difference in either direction (larger or smaller)

One-tailed

Tests whether B is specifically better than A
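
In code, this choice is usually a single parameter. A sketch with scipy's ttest_ind, reusing illustrative samples (the alternative argument selects the hypothesis):

  from scipy.stats import ttest_ind

  sample_a = [10.5, 12.0, 9.8, 11.2, 10.9]
  sample_b = [12.1, 13.4, 11.8, 12.9, 13.0]

  # Two-tailed: is there a difference in either direction?
  _, p_two = ttest_ind(sample_a, sample_b, equal_var=False)

  # One-tailed: is B's mean specifically greater than A's?
  _, p_one = ttest_ind(sample_b, sample_a, equal_var=False, alternative="greater")

  print(f"two-tailed p = {p_two:.4f}, one-tailed p = {p_one:.4f}")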

Understanding the results

The test returns statistics that help determine if version B is truly better than A or if the difference may just be due to chance.

Key Metrics

P-value

Probability of observing this difference by chance.

p < 0.05 = Significant difference! | p ≥ 0.05 = No significant difference

Conversion Rate

Success rate in each group (A and B).

Example: Group A: 12%, Group B: 15% (a 25% relative improvement for B)
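
The relative improvement is the absolute difference divided by the baseline rate. For the numbers above:

  rate_a = 0.12  # conversion rate in group A
  rate_b = 0.15  # conversion rate in group B

  # Relative lift of B over A: (0.15 - 0.12) / 0.12 = 0.25, i.e. 25%.
  lift = (rate_b - rate_a) / rate_a
  print(f"B is {lift:.0%} better than A")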

Confidence Interval

Range where the true difference likely lies.

If it does not include zero, the difference is statistically significant
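
As a sketch (not necessarily this tool's exact method), here is a 95% Wald confidence interval for the difference between two conversion rates, with the same illustrative counts as the z-test example:

  from math import sqrt
  from scipy.stats import norm

  # Illustrative counts: conversions and users in each group.
  conv_a, n_a = 120, 1000
  conv_b, n_b = 150, 1000

  p_a, p_b = conv_a / n_a, conv_b / n_b
  diff = p_b - p_a

  # Standard error of the difference between two independent proportions.
  se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)

  # 95% confidence gives a two-tailed critical z of about 1.96.
  z = norm.ppf(0.975)
  low, high = diff - z * se, diff + z * se
  print(f"difference = {diff:.3f}, 95% CI = [{low:.3f}, {high:.3f}]")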

Effect Size

Practical magnitude of the difference found.

Small (0.2), Medium (0.5), Large (0.8)
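
For continuous metrics, a common effect-size measure is Cohen's d: the difference in means divided by a pooled standard deviation. A sketch with illustrative samples:

  import statistics

  sample_a = [10.5, 12.0, 9.8, 11.2, 10.9]
  sample_b = [12.1, 13.4, 11.8, 12.9, 13.0]

  mean_a, mean_b = statistics.mean(sample_a), statistics.mean(sample_b)
  var_a, var_b = statistics.variance(sample_a), statistics.variance(sample_b)
  n_a, n_b = len(sample_a), len(sample_b)

  # Pooled standard deviation across both groups.
  pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5

  # Cohen's d: difference in means in units of pooled standard deviation.
  d = (mean_b - mean_a) / pooled_sd
  print(f"Cohen's d = {d:.2f}")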

⚠️ Important: A statistically significant result (p < 0.05) does not guarantee business impact. Always consider the effect size and the practical context of the decision.

Need help? Contact us: contato@grabatus.com