What you need
In order to test preezie’s impact you will need to be able to track 2 things:
...
Because preezie won’t be used by all users, you need to be able to compare those who use it with those who saw it but did not use it. This difference is your measure of preezie’s impact.
Depending on what goals you have, you can either use a split traffic tool (Google Optimize, VWO, Optimizely etc.) or track at a user level over date periods.
...
Why we need to segment by usage
A simple comparison is an AB test that measures the impact of showing Afterpay/Klarna in your checkout. Although this feature is available to all checkout users, we expect it will only impact a % of them.
...
This is the same for preezie: depending on where it is shown, a % of your users won’t be impacted at all, so you need to analyse the behaviour of only those who do engage with it.
...
How we define usage
A preezie user is one who clicks at least once on a question. You can also track those who ‘complete’ a preezie journey, i.e. those who saw their product recommendations.
To do this you will need to ensure your analytics/testing tool can track preezie views and click events. If you want to track preezie clicks in your Google Analytics (or other event-based tool) then use this guide:
Data Layer for Google Analytics events
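If you manage your own tags, the sketch below shows roughly what those pushes can look like, assuming Google Tag Manager’s dataLayer. The event names (preezie_view, preezie_click, preezie_complete) are placeholders for illustration only — the linked guide has the exact names to use:

```ts
// Minimal sketch of pushing preezie events to Google Tag Manager's dataLayer.
// The event names ('preezie_view', 'preezie_click', 'preezie_complete') are
// illustrative placeholders — use the names from the linked guide.

// GTM exposes window.dataLayer as an array you push event objects onto.
const dataLayer: Record<string, unknown>[] =
  ((window as any).dataLayer = (window as any).dataLayer || []);

// Fire when the preezie widget renders on the page (a "view").
function trackPreezieView(journeyName: string): void {
  dataLayer.push({ event: "preezie_view", preezieJourney: journeyName });
}

// Fire on the first question click — this matches the "preezie user" definition above.
function trackPreezieClick(journeyName: string, questionIndex: number): void {
  dataLayer.push({
    event: "preezie_click",
    preezieJourney: journeyName,
    preezieQuestion: questionIndex,
  });
}

// Fire when the shopper reaches their product recommendations (a "complete").
function trackPreezieComplete(journeyName: string): void {
  dataLayer.push({ event: "preezie_complete", preezieJourney: journeyName });
}
```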
...
Using a split traffic AB test tool
...
If preezie is embedded on a single page then this is straightforward:
Split traffic | Page trigger | Segment | Goal A: User conversion rate (sales per user) | Goal B: Sessions per user | Goal C: Bounce rate
---|---|---|---|---|---
50% Test | myhomepage.com | 40% non-preezie users | 3.2% | 2.4 | 23%
 | | 10% preezie users | 4.5% | 3.2 | 0% (preezie counts as a significant event)
50% Control | myhomepage.com | 50% non-preezie users | 3.1% | 2.6 | 25%
Once you can see that the buckets are gaining a good level of traffic (e.g. use an A/A test to understand how long it takes your website traffic to achieve even conversion rates), you can start to compare the preezie 10% against the 50% who never saw it and the 40% who did see it but didn’t engage. For example:
Control
1000 users / 31 sales = 3.1% conv rate
Test
1000 users / 35 sales = 3.5% (+13% against control)
non-preezie (800 users) @ 3.2% conversion (+3% against control) = 26 sales
preezie (200 users) @ 4.5% conversion (+45% against control) = 9 sales
Even if you account for the +3% increase in non-preezie conversion, the preezie bucket, although smaller, shows at least a +40% increase.
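If it helps, here is a small sketch reproducing the arithmetic above. The 800 / 200 bucket sizes follow from the 40% / 10% traffic shares, and all numbers are illustrative:

```ts
// Reproducing the worked example above: a 1,000-user test bucket that splits
// 80/20 into non-preezie vs preezie users (the 40% / 10% traffic shares).
// All numbers are illustrative.
const control = { users: 1000, sales: 31 };          // 3.1% conv rate
const nonPreezie = { users: 800, convRate: 0.032 };  // saw preezie, didn't engage
const preezie = { users: 200, convRate: 0.045 };     // engaged with preezie

const controlRate = control.sales / control.users;

const nonPreezieSales = Math.round(nonPreezie.users * nonPreezie.convRate); // ≈ 26
const preezieSales = Math.round(preezie.users * preezie.convRate);          // ≈ 9

const testRate = (nonPreezieSales + preezieSales) / (nonPreezie.users + preezie.users);

const upliftVsControl = (rate: number) => (rate / controlRate - 1) * 100;

console.log(`Test bucket: ${(testRate * 100).toFixed(1)}% (+${upliftVsControl(testRate).toFixed(0)}% vs control)`);
console.log(`Non-preezie: +${upliftVsControl(nonPreezie.convRate).toFixed(0)}% vs control`);
console.log(`Preezie:     +${upliftVsControl(preezie.convRate).toFixed(0)}% vs control`);
```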
Tracking user conversion will tell you the user-level impact of preezie across sessions, so you can compare the influence of these preezie-engaged users, e.g. whether sessions per user increases.
...
If you take 10% of your control bucket’s raw numbers (e.g. users, conversions, sessions, bounces) and your preezie bucket’s numbers, and put them into a significance calculation tool, for example:

https://abtestguide.com/bayesian/
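If you would rather run the numbers yourself than use a calculator, below is a minimal sketch of a standard two-proportion z-test (a frequentist alternative to the Bayesian tool linked above); the bucket sizes and sales counts are illustrative:

```ts
// Minimal sketch of a two-proportion z-test for comparing conversion rates
// between two buckets, e.g. a (scaled) control vs preezie users.

// Standard normal CDF via the Abramowitz & Stegun approximation.
function normalCdf(z: number): number {
  const t = 1 / (1 + 0.2316419 * Math.abs(z));
  const d = 0.3989423 * Math.exp((-z * z) / 2);
  let p = d * t * (0.3193815 + t * (-0.3565638 + t * (1.781478 + t * (-1.821256 + t * 1.330274))));
  if (z > 0) p = 1 - p;
  return p;
}

// Two-sided p-value for the difference between two conversion rates.
function twoProportionPValue(usersA: number, salesA: number, usersB: number, salesB: number): number {
  const pA = salesA / usersA;
  const pB = salesB / usersB;
  const pooled = (salesA + salesB) / (usersA + usersB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / usersA + 1 / usersB));
  const z = (pB - pA) / se;
  return 2 * (1 - normalCdf(Math.abs(z)));
}

// Example: control vs preezie users — illustrative counts only.
const pValue = twoProportionPValue(1000, 31, 200, 9);
console.log(`p-value: ${pValue.toFixed(3)}`); // below 0.05 would suggest a significant difference
```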
You can use these methods to understand both:
How preezie performs vs those who didn’t see it (i.e. the 10% / 50% buckets)
How preezie performs vs those who did but didn’t interact (i.e. the 10% / 40% buckets)
...
Tip |
---|
Tip: If your primary goal is something more reflective of impact to users who need help (e.g. new user bounce rate) then you can probably analyse the 50/50% split on the total results as usual. |
...
Here we’ll track those who saw it and those who clicked, and compare against the inverse of both. Our main goal with exit intents is to keep users on the website, so our goals are now:
Split traffic | Page trigger | Segment | Goal A: Exit rate | Goal B: New user pages/session | Goal C: User conversion rate (sales per user)
---|---|---|---|---|---
50% Test | any (shown based on exit intent behaviour) | 25% did not see preezie | 30% | 2.1 | 4.2%
 | | 10% saw preezie and did not click it | 20% | 2.3 | 4.5%
 | | 15% clicked on preezie | 6% | 4.3 | 6.8%
50% Control | any | 50% no preezie loaded | 30% | 2.2 | 4.1%
Here you can compare your control bucket with those who did not see it to ensure the behaviour is the same across buckets (see article on A/A testing).
However, because you have a bucket of users who do not see it, you do not need to use an AB traffic-split tool to get this result. Just run the test until you are comfortable the traffic volumes and differences in goal metrics are significant enough to compare: no preezie vs preezie clicked (25% vs 15%), and preezie seen but not clicked vs preezie clicked (10% vs 15%).
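As a sketch of what that comparison can look like once you export per-segment counts from your analytics tool (segment names, user counts and conversions below are illustrative, chosen to match the rates in the example table):

```ts
// Sketch: comparing preezie segments without a traffic-split tool, assuming you
// can export per-segment user/conversion counts from your analytics tool.
// Segment names and all numbers below are illustrative.
interface Segment { name: string; users: number; conversions: number; }

const segments: Segment[] = [
  { name: "did not see preezie",   users: 5000, conversions: 210 }, // 4.2%
  { name: "saw preezie, no click", users: 2000, conversions: 90 },  // 4.5%
  { name: "clicked preezie",       users: 3000, conversions: 204 }, // 6.8%
];

const rate = (s: Segment) => s.conversions / s.users;

function compare(a: Segment, b: Segment): void {
  const uplift = (rate(b) / rate(a) - 1) * 100;
  console.log(
    `${b.name} (${(rate(b) * 100).toFixed(1)}%) vs ${a.name} (${(rate(a) * 100).toFixed(1)}%): ` +
    `${uplift >= 0 ? "+" : ""}${uplift.toFixed(0)}%`
  );
}

// The two comparisons described above:
compare(segments[0], segments[2]); // no preezie vs preezie clicked
compare(segments[1], segments[2]); // saw-but-no-click vs preezie clicked
```

You can then feed the same per-segment counts into a significance check (like the one sketched earlier) to confirm the differences are not just noise.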
Tip |
---|
Tip: Make sure your primary goals are reflective of the behaviour preezie will drive. For example, you cannot expect same-session conversion rate to be driven immediately by an exit intent pop-up; instead you can expect to keep more users engaged on your website and hopefully convert them at a later date (e.g. via ad retargeting, email incentives etc.). Just like a retail store visit, if they have a positive first experience then they’ll come back! |