To test preezie’s impact you will need to track two things (see the tracking sketch after this list):

  1. users who saw preezie

  2. users who clicked on preezie
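
As a rough sketch of how you might capture both events in the browser (the `#preezie-widget` container ID and the GTM-style `dataLayer` push are assumptions, not preezie’s actual API — swap in your real container and analytics call):

```ts
// Minimal sketch: fire a "saw" event when the widget scrolls into view
// and a "clicked" event on the first interaction with it.
declare global {
  interface Window { dataLayer?: Record<string, unknown>[]; }
}

function track(event: string): void {
  window.dataLayer = window.dataLayer ?? [];
  window.dataLayer.push({ event, timestamp: Date.now() });
}

const widget = document.querySelector("#preezie-widget"); // assumed container ID

if (widget) {
  // "Saw": report once, the first time any part of the widget is visible.
  const observer = new IntersectionObserver((entries) => {
    if (entries.some((e) => e.isIntersecting)) {
      track("preezie_seen");
      observer.disconnect();
    }
  });
  observer.observe(widget);

  // "Clicked": report only the first engagement.
  widget.addEventListener("click", () => track("preezie_clicked"), { once: true });
}

export {};
```

An IntersectionObserver is used so the “saw” event only fires when the widget is actually visible to the user, not merely present in the page.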

Because preezie won’t be used by all users, you need to compare those who use it against those who saw it but did not use it. The difference between these groups is your measure of preezie’s impact.


A useful comparison is an AB test measuring the impact of showing Afterpay/Klarna in your checkout:

In this example, the overall impact on the checkout is likely to be significant, as payment options are potentially relevant to a lot of users; however, you will still need to segment users by whether they used the feature to really gauge the impact.

The same applies to preezie: 100% of your users won’t be impacted, so you need to analyse the behaviour of only those who do engage with it.
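
As a sketch of that segmentation, assuming you can shape your analytics export into per-session records (the `Session` fields below are illustrative, not a preezie schema):

```ts
// Minimal sketch: conversion rate per engagement segment.
interface Session {
  sawPreezie: boolean;
  clickedPreezie: boolean;
  converted: boolean;
}

function conversionRate(sessions: Session[]): number {
  if (sessions.length === 0) return 0;
  return sessions.filter((s) => s.converted).length / sessions.length;
}

function segmentReport(sessions: Session[]): void {
  const engaged = sessions.filter((s) => s.clickedPreezie);
  const sawOnly = sessions.filter((s) => s.sawPreezie && !s.clickedPreezie);
  const neverSaw = sessions.filter((s) => !s.sawPreezie);

  console.log("engaged with preezie:", conversionRate(engaged));
  console.log("saw but didn't engage:", conversionRate(sawOnly));
  console.log("never saw preezie:", conversionRate(neverSaw));
}
```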


Using a split traffic AB test tool

Here’s what we recommend if you’re using a traffic split tool.

For example, if preezie is embedded on a single page then the setup is straightforward:

| Split traffic | Page trigger | Segment | Conversion rate | Sessions per user | Bounce rate |
| --- | --- | --- | --- | --- | --- |
| 50% Test | myhomepage.com | 40% non-preezie users | 3.2% | 2.4 | 26% |
| 50% Test | myhomepage.com | 10% preezie users | 4.5% | 3.2 | 0% (preezie counts as a significant event) |
| 50% Control | myhomepage.com | 50% non-preezie users | 3.1% | 2.6 | 25% |

This means the buckets aren’t an even 50/50 split but effectively a 50/10 split (control vs preezie users); the traffic split is first used to compare the 50% control against the 40% bucket of non-preezie users.

Once you can see that these buckets are performing equally, you can start to compare the 10% of preezie users against the 50% who never saw it and the 40% who saw it but didn’t engage. This will tell you the user-level impact of preezie across sessions, e.g. whether sessions per user increases.
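
A two-proportion z-test is one way to check whether the control and the non-preezie test bucket really are performing equally. The sketch below uses illustrative session counts chosen to match the conversion rates in the table above:

```ts
// Minimal sketch: two-proportion z-test comparing the 50% control bucket
// against the 40% non-preezie test bucket. Counts are illustrative only.
function twoProportionZ(conv1: number, n1: number, conv2: number, n2: number): number {
  const p1 = conv1 / n1;
  const p2 = conv2 / n2;
  const pooled = (conv1 + conv2) / (n1 + n2);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2));
  return (p1 - p2) / se;
}

// Control: 620 conversions from 20,000 sessions (3.1%).
// Test non-preezie bucket: 512 conversions from 16,000 sessions (3.2%).
const z = twoProportionZ(620, 20_000, 512, 16_000);

// |z| < 1.96 means no significant difference at the 95% level, so the two
// buckets can be treated as performing equally.
console.log(z.toFixed(2), Math.abs(z) < 1.96 ? "buckets comparable" : "buckets differ");
```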

We do this because preezie won’t be a tool required by 100% of users: we want to help the users who need it without impacting those who don’t.