When viewing the reporting for an A/B Test, you can access several levels of information.
The reporting displays the experiment results in a clear and visual way for easy reading and interpretation. Highlighted information and color codes enable you to quickly identify the best-performing variations.
The metrics shown in the new Flagship reporting layout are those chosen during the KPI configuration of the Basic Information step. If you haven't configured any KPIs, none will be displayed in the reporting layout.
Note: The new report is only available for A/B Tests and Progressive Rollout. If you would like to share feedback, please send an e-mail to product.feedback@abtasty.com.
Accessing the reporting analysis
To access the reporting, click Reporting from an A/B Test on the dashboard.
A/B Test information
The new reporting features a summary of information on the experiment:
- The experiment name;
- A toggle for launching or pausing the experiment in the top right corner;
- Date and Context key filters;
- A Reliability Status, which tells you whether your data is ready to be analyzed;
- Your primary and secondary metrics with various tabs (depending on the metrics selected during KPI configuration). For more information, refer to Flagship - A/B Tests.
Results displayed
The experiment results are based on the metrics you configured during the KPI configuration step. If you haven't configured any KPIs, none will be displayed in the reporting.
Note: By default, results are calculated based on the original version of the experiment. If necessary, you can change the reference variation by selecting the one that interests you in the Variation tab of the experiment configuration.
Metrics
The new reporting displays the primary metric of the experiment, followed by the secondary metrics.
The primary metric serves as a point of reference for the entire experiment and enables you to determine which variation takes precedence over the other(s). This is why it appears at the top of the reporting feature: all future decisions will be based on this metric.
Only one of its tabs can be in focus at a time: either Transaction Rate/Conversion Rate or Transaction Total Revenue/Conversion Total Value.
Secondary metrics are displayed one after the other underneath the primary metric. You can choose to display all variations (the default) or one in particular. To do this, select the variation(s) you want to see in the chart from the list of variations to the right of the relevant goal.
You can display both tabs for each secondary metric.
Types of metrics and data
The Transaction and Conversion KPIs each have two tabs, with a third coming soon.
Transaction:
- Transaction Rate
- Transaction Total Revenue
- Average Basket (coming soon)
Conversion:
- Conversion Rate
- Conversion Total Value
- Average Value (coming soon)
Each tab shows basic information such as:
- The Variation name
- The number of unique visitors
- The number of unique conversions
Then, depending on the KPI, metric-specific data is displayed (a sketch of how these figures can be computed follows the lists below):
Transaction Rate and Conversion Rate:
- Transaction Rate / Conversion Rate, the number of unique transactions/conversions divided by the number of unique visitors
- Uplift, the improvement compared to the reference variation, with a confidence interval calculated using a Bayesian algorithm.
- Chances to win, the probability that you will gain more conversions if you put that variation into production.
Transaction Total Revenue and Conversion Total Value:
- Transaction Total Revenue / Conversion Total Value, indicates how much revenue/value the variation generated
- Revenue Projection / Value Projection, indicates the total revenue/value if all visitors were assigned to the variation
- Uplift, improvement compared to the reference variation. This is based on raw data only and should not be interpreted as a statistically valid result.
- Potential Value / Potential Revenue, the difference between the total revenue/value registered for the experiment and the Revenue Projection / Value Projection.
Average Basket and Average Value:
- Average Basket / Average Value, the sum of all transaction revenue divided by the number of transactions.
- Uplift, improvement made compared to the reference variation. This is based on raw data only and should not be interpreted as a statistically valid result.
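As an illustration, here is a minimal Python sketch of how these figures relate to the raw counts described above. All names (VariationStats and the functions) are hypothetical, and the sign convention for Potential Revenue is an assumption; this is not the Flagship API.

```python
# Hypothetical sketch of the reporting figures; names are illustrative only.
from dataclasses import dataclass

@dataclass
class VariationStats:
    visitors: int         # unique visitors assigned to the variation
    conversions: int      # unique transactions/conversions
    total_revenue: float  # total transaction revenue (or conversion value)

def conversion_rate(v: VariationStats) -> float:
    # Transaction Rate / Conversion Rate: unique conversions / unique visitors
    return v.conversions / v.visitors

def raw_uplift(variation: VariationStats, reference: VariationStats) -> float:
    # Raw relative improvement compared to the reference variation
    ref_rate = conversion_rate(reference)
    return (conversion_rate(variation) - ref_rate) / ref_rate

def revenue_projection(v: VariationStats, total_experiment_visitors: int) -> float:
    # Projected total revenue if all visitors were assigned to this variation
    return (v.total_revenue / v.visitors) * total_experiment_visitors

def potential_revenue(v: VariationStats, experiment_total_revenue: float,
                      total_experiment_visitors: int) -> float:
    # Difference between the Revenue Projection and the revenue registered
    # for the whole experiment; the sign convention here is an assumption.
    return revenue_projection(v, total_experiment_visitors) - experiment_total_revenue

def average_basket(v: VariationStats) -> float:
    # Sum of all transaction revenue divided by the number of transactions
    return v.total_revenue / v.conversions
```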
Chances to win
The Chances to win indicator enables you to quickly identify the leading variation. This information has three levels:
- Green: your experiment is on track; we are 95% sure that it will have benefits.
- Orange: your experiment might be on track; we are 95% sure that even if it has benefits, it may also have side effects.
- Red: your experiment isn't on track at all; we are 95% sure that it will not have any benefits.
Whatever your Chances to win results are, you need to wait two business cycles before analyzing your data and drawing significant conclusions. By default, a business cycle is 7 days (5 weekdays and a weekend), so two cycles amount to 14 consecutive days in total. If you know your own business cycle, feel free to adapt the analysis of your test accordingly.
As soon as your Reliability Status reads "reliable", your data is statistically significant and ready to be analyzed.
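Flagship's exact statistical model is not detailed here, but a standard way to compute such a probability is a Bayesian Beta-Bernoulli model. The sketch below assumes uniform Beta(1, 1) priors and Monte Carlo sampling; the function name and parameters are illustrative, not the actual algorithm.

```python
# Minimal "chances to win" sketch under an assumed Beta-Bernoulli model.
import numpy as np

def chances_to_win(conv_var, visitors_var, conv_ref, visitors_ref, draws=100_000):
    rng = np.random.default_rng(42)
    # Posterior over each conversion rate: Beta(successes + 1, failures + 1)
    samples_var = rng.beta(conv_var + 1, visitors_var - conv_var + 1, draws)
    samples_ref = rng.beta(conv_ref + 1, visitors_ref - conv_ref + 1, draws)
    # Probability that the variation beats the reference
    return (samples_var > samples_ref).mean()

# Example: 120 conversions out of 2,000 visitors vs. 100 out of 2,000
print(f"{chances_to_win(120, 2000, 100, 2000):.1%}")
```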
Uplift data
The Uplift (on Transaction Rate and Conversion Rate goals) enables you to access advanced statistics. This data is based on the Bayesian approach and provides two measurements: confidence interval and improvement gain.
The improvement gain indicator enables you to manage uncertainty related to conversion rate measurements. It indicates what you may really hope to gain by replacing one variation with another.
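Under the same assumed Beta-Bernoulli model as above, a credible interval on the relative uplift can be obtained from posterior samples, as sketched below. The improvement gain could then correspond, for instance, to a conservative bound of that interval; that mapping is an assumption, not a documented detail.

```python
# Sketch of a Bayesian credible interval on the relative uplift
# (assumed model, not necessarily Flagship's exact algorithm).
import numpy as np

def uplift_interval(conv_var, visitors_var, conv_ref, visitors_ref,
                    level=0.95, draws=100_000):
    rng = np.random.default_rng(42)
    samples_var = rng.beta(conv_var + 1, visitors_var - conv_var + 1, draws)
    samples_ref = rng.beta(conv_ref + 1, visitors_ref - conv_ref + 1, draws)
    uplift = (samples_var - samples_ref) / samples_ref  # relative improvement
    lo, hi = np.percentile(uplift, [(1 - level) / 2 * 100, (1 + level) / 2 * 100])
    return lo, hi

lo, hi = uplift_interval(120, 2000, 100, 2000)
print(f"95% credible interval on uplift: [{lo:+.1%}, {hi:+.1%}]")
```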
Last update information
The data displayed on the reporting is updated at a specific frequency. Here is the refresh schedule:
- From 0 to 3 days after the last launch of the experiment: every hour
- From 4 days to 7 days after the last launch of the experiment: every 4 hours
- From 8 days to 14 days after the last launch of the experiment: every 8 hours
- From 15 days to 30 days after the last launch of the experiment: every 24 hours
- From 31 days to 60 days after the last launch of the experiment: every 48 hours
- From 61 days after the last launch of the experiment: every 168 hours (1 week)
Note that during the first 12 hours of the experiment, the data displayed is in real-time.
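For illustration, this schedule can be expressed as a simple lookup. The function below is a hypothetical sketch that transcribes the rules above; it is not part of the Flagship API.

```python
# Hypothetical transcription of the refresh schedule described above.
def refresh_interval_hours(days_since_last_launch: int,
                           hours_since_last_launch: float) -> float:
    if hours_since_last_launch < 12:
        return 0    # real-time during the first 12 hours
    if days_since_last_launch <= 3:
        return 1    # every hour
    if days_since_last_launch <= 7:
        return 4    # every 4 hours
    if days_since_last_launch <= 14:
        return 8    # every 8 hours
    if days_since_last_launch <= 30:
        return 24   # every 24 hours
    if days_since_last_launch <= 60:
        return 48   # every 48 hours
    return 168      # every week
```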