In order to test preezie’s impact you will need to be able to track two things:
- users who saw preezie
- users who clicked on preezie
Because preezie won’t be used by all users, you need to compare those who used it with those who saw it but did not use it. That difference is your measure of preezie’s impact.
A useful analogy is using an AB test to measure the impact of showing Afterpay/Klarna in your checkout:

- Are order values higher when they use it? i.e. compare purchases of those who use it to those who don’t.
- Do people buy more over time when they use it? i.e. compare Afterpay/Klarna transactions per user to those of users who don’t use it.
- Does it improve conversion overall? i.e. when Afterpay/Klarna is seen, does it improve checkout completion?
In this example, the overall impact on the checkout is likely to be significant, as payment options are potentially relevant to a lot of users; however, you will still need to segment the users by those who used the feature to really gauge the impact.
The same applies to preezie: it won’t impact 100% of your users, so you need to analyse the behaviour of only those who engage with it.
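The three-way split described above can be sketched in a few lines. This is a minimal illustration, assuming your analytics can produce per-user flags for exposure and engagement; the field names (`saw_preezie`, `used_preezie`) and the `segment` helper are hypothetical, not part of any preezie API:

```python
# Hypothetical per-user flags derived from your analytics events.
users = [
    {"id": "u1", "saw_preezie": True,  "used_preezie": True},
    {"id": "u2", "saw_preezie": True,  "used_preezie": False},
    {"id": "u3", "saw_preezie": False, "used_preezie": False},
]

def segment(user: dict) -> str:
    """Split users into the three groups the comparison needs."""
    if user["used_preezie"]:
        return "engaged"            # saw preezie and clicked/used it
    if user["saw_preezie"]:
        return "saw_not_engaged"    # saw preezie but never used it
    return "never_saw"              # preezie was never shown to them

groups = {u["id"]: segment(u) for u in users}
```

Comparing metrics across these three groups, rather than test vs control alone, is what isolates preezie’s impact.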
Using a split-traffic AB test tool
Here’s what we recommend if you’re using a traffic split tool:
- Use the tool to load preezie for your chosen % of traffic. This should bucket users at the first point they could have seen preezie, and should happen at the user level so the assignment persists across sessions.
- Ensure preezie is not loaded on any other pages for users who do not see it in your test; this keeps the test cleaner. If this is not possible, you will need to segment the users.
- Measure everything at the user level, not the session level. preezie often increases sessions per user, which dilutes its per-session impact: when users are shown specific recommendations that match their needs, they come back more often to view their product recommendations.
- Ensure you can segment the data by those who use preezie vs those who have never used it; this keeps the test clean so repeat visits aren’t also attributed to preezie.
- Use conversions/goals that you can segment by preezie users only.
  - Note: this will often mean your traffic is no longer an equal split.
  - If this isn’t possible, use goals that are relevant for monitoring the overall performance of the buckets (e.g. bounce rate, page exit rate), but use segmentation in another tool or the raw data to measure impact.
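The user-level bucketing in the first recommendation can be done deterministically by hashing a stable user ID, so the same user always lands in the same bucket across sessions without any server-side state. This is a sketch, not a preezie or AB-tool API; `bucket_user` and the salt value are illustrative:

```python
import hashlib

def bucket_user(user_id: str, test_fraction: float = 0.5,
                salt: str = "preezie-ab-v1") -> str:
    """Deterministically assign a user to 'test' or 'control'.

    Hashing the user ID plus a salt gives the same answer every time,
    so the assignment persists across sessions and devices that share
    the ID. Changing the salt reshuffles users for a new experiment.
    """
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    # Map the first 8 hex digits to a number in [0, 1].
    fraction = int(digest[:8], 16) / 0xFFFFFFFF
    return "test" if fraction < test_fraction else "control"
```

Most split-traffic tools do something equivalent internally; the point is that bucketing must key on the user, not the session.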
For example, if preezie is embedded on a single page then this is straightforward:
| Split traffic | Page trigger | Segment | Conversion rate | Sessions per user | Bounce rate |
|---|---|---|---|---|---|
| 50% Test | myhomepage.com | 40% non-preezie users | 3.2% | 2.4 | 26% |
| | | 10% preezie users | 4.5% | 3.2 | 0% (preezie counts as a significant event) |
| 50% Control | | 50% non-preezie users | 3.1% | 2.6 | 25% |
This means the buckets aren’t an even 50/50 split but more like a 50/10 split; we use the traffic split to compare the control against the 40% bucket of non-preezie users.

Once you can see these two buckets are performing equally, you can compare the 10% of preezie users against the 50% who never saw it and the 40% who saw it but didn’t engage. This tells you the user-level impact of preezie across sessions, e.g. an increase in sessions per user.
We do this because preezie won’t be a tool required by 100% of users; we want to help users who need it without impacting those who don’t.
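Computing the table above from raw data amounts to grouping user-level records by bucket and segment. A minimal sketch, assuming you can export one record per user with their bucket, segment, order count, and session count (all field names here are illustrative):

```python
from collections import defaultdict

# Hypothetical user-level export: one record per user.
users = [
    {"bucket": "test",    "segment": "preezie",     "orders": 1, "sessions": 3},
    {"bucket": "test",    "segment": "non-preezie", "orders": 0, "sessions": 2},
    {"bucket": "control", "segment": "non-preezie", "orders": 1, "sessions": 2},
]

def user_level_metrics(users: list[dict]) -> dict:
    """Per (bucket, segment): user-level conversion rate and sessions per user."""
    groups = defaultdict(lambda: {"users": 0, "converters": 0, "sessions": 0})
    for u in users:
        g = groups[(u["bucket"], u["segment"])]
        g["users"] += 1
        g["converters"] += 1 if u["orders"] > 0 else 0  # user converted at least once
        g["sessions"] += u["sessions"]
    return {
        key: {
            "conversion_rate": g["converters"] / g["users"],
            "sessions_per_user": g["sessions"] / g["users"],
        }
        for key, g in groups.items()
    }

metrics = user_level_metrics(users)
```

Because everything is aggregated per user first, a preezie user who returns across several sessions counts once, which is exactly the user-level measurement the recommendations above call for.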
If preezie is a timed-delay pop-up shown on multiple pages, then you need to ensure you’re segmenting by engaged users:
| Split traffic | Page trigger | Segment | Conversion rate | Sessions per user | Bounce rate |
|---|---|---|---|---|---|
| 50% Test | any | 40% non-preezie users | 3.2% | 2.4 | 26% |
| | | 10% preezie users | 4.5% | 3.2 | 0% (preezie counts as a significant event) |
| 50% Control | any | 50% non-preezie users | 3.1% | 2.6 | 25% |