AB testing preezie

What you need

To test preezie’s impact you will need to be able to track two things:

  1. users who saw preezie

  2. users who clicked on preezie

Because preezie won’t be used by all users, you need to be able to compare those who use it with those who saw it but did not use it. This difference is your measure of preezie’s impact.

Depending on your goals, you can either use a split-traffic tool (Google Optimize, VWO, Optimizely, etc.) or track at a user level over date periods.


Why we need to segment by usage

A simple comparison is an AB test measuring the impact of showing Afterpay/Klarna in your checkout. Although this feature is available to all checkout users, we expect it will only impact a % of them.

  • Are order values higher when they use it?

    • i.e. Compare purchases of those who use it vs those who don’t

  • Do people buy more over time when they use it?

    • i.e. Compare Afterpay/Klarna transactions per user to those who don’t use it

  • Does it improve conversion overall?

    • i.e. When Afterpay/Klarna is seen does it improve checkout completion?

In this example, the overall impact on the checkout is likely to be significant, as payment options are relevant to lots of users; however, you would still need to segment users by whether they used the feature to really gauge its impact.

The same applies to preezie: depending on where it is shown, a % of your users won’t be impacted at all, so you need to analyse only the behaviour of those who do engage with it.


How we define usage

A preezie user is one who clicks at least once on a question. You can also track those who ‘complete’ a preezie journey, i.e. saw their product recommendations.

To do this you will need to ensure your analytics/testing tool can track preezie views and click events. If you want to track preezie clicks in Google Analytics (or another event-based tool) then use this guide:

https://preezie.atlassian.net/wiki/spaces/PW/pages/122224661
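
As a minimal sketch (not preezie’s official tracking code), both events could be pushed to Google Analytics 4 via gtag.js like this. The ‘.preezie-widget’ selector and the event names are hypothetical, so check your embed code and the guide above for the real ones:

```typescript
// Minimal sketch: forward preezie view/click events to Google Analytics 4.
// Assumes gtag.js is already loaded on the page and that the widget renders
// inside a container matching the hypothetical selector '.preezie-widget'.
declare function gtag(...args: unknown[]): void;

const widget = document.querySelector('.preezie-widget');

if (widget) {
  // "Saw preezie": fire once when the widget is present on the page.
  gtag('event', 'preezie_view');

  // "Used preezie": fire on the first click inside the widget.
  widget.addEventListener('click', () => gtag('event', 'preezie_click'), {
    once: true,
  });
}
```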


Using a split traffic AB test tool

Here’s what we recommend if you’re using a traffic split tool:

  • Use the tool to load preezie for a % of your traffic (e.g. a 50/50 split). This should bucket users when they first could have seen preezie, and should happen at the user level so it persists over sessions (see the sketch after this list)

  • Ensure preezie is not loaded on any other pages for your control bucket of users who do not see it; this keeps the test cleaner. If this is not possible, you will need to segment users by those who have never used preezie

  • Measure everything at a user level, not per session. Preezie often increases sessions per user, which dilutes its per-session impact: users who are shown recommendations that match their needs come back to your website more often to view them

  • Ensure you can segment the data by those who use preezie vs those who have never used it; this keeps the test clean, so repeat visits from preezie users aren’t mixed into the non-user segment

  • Use conversion/goals that you can segment by preezie users only

    • Note: this will often mean your traffic can no longer be an equal split (see the example below)

    • If this isn’t possible, use goals that monitor the overall performance of each bucket (e.g. bounce rate, page exit rate), and use segmentation in another tool or the raw data to measure impact
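
To illustrate the first point, here is a minimal sketch of persistent, user-level bucketing. A split-traffic tool normally does this for you; loadPreezieWidget and the storage key are hypothetical stand-ins:

```typescript
// Minimal sketch of persistent, user-level 50/50 bucketing. Split-testing
// tools handle this for you; this only illustrates the behaviour to look for.
// Assumes localStorage is an acceptable place to persist the assignment.
type Bucket = 'test' | 'control';

declare function loadPreezieWidget(): void; // hypothetical stand-in for your embed code

function getBucket(): Bucket {
  const key = 'preezie_ab_bucket';
  let bucket = localStorage.getItem(key) as Bucket | null;
  if (!bucket) {
    // Assign once, then persist, so the user stays in the same bucket
    // across sessions (measurement is per user, not per session).
    bucket = Math.random() < 0.5 ? 'test' : 'control';
    localStorage.setItem(key, bucket);
  }
  return bucket;
}

// Only the test bucket ever loads preezie; the control never sees it.
if (getBucket() === 'test') {
  loadPreezieWidget();
}
```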


For example: preezie embedded on a page

If preezie is embedded on a single page then this is straightforward:

| Split traffic | Page trigger | Segment | Goal A: User conversion rate (sales per user) | Goal B: Sessions per user | Goal C: Bounce rate |
|---|---|---|---|---|---|
| 50% Test | myhomepage.com | 40% non-preezie users | 3.2% | 2.4 | 23% |
| | | 10% preezie users | 4.5% | 3.2 | 0% (preezie counts as a significant event) |
| | | 50% non-preezie + preezie users (whole test bucket) | | | |
| 50% Control | myhomepage.com | 50% non-preezie users | 3.1% | 2.6 | 25% |

Once you can see the buckets are gaining a good level of traffic (e.g. run an A/A test first to understand how long your website traffic needs to achieve even conversion rates), you can start to compare the preezie 10% against the 50% who never saw it and the 40% who saw it but didn’t engage.

  • Control

    • 1000 users / 31 sales = 3.1% conv rate

  • Test

    • 1000 users / 35 sales = 3.5% (+13% against control)

      • non-preezie: 800 users @ 3.2% conversion (+3% against control) = 26 sales

      • preezie: 200 users @ 4.5% conversion (+45% against control) = 9 sales

Even if you account for the +3% increase in non-preezie conversion, the preezie bucket, although smaller, shows at least a +40% increase.
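
Spelling out the arithmetic (the 1000-user test bucket splits into 800 non-preezie and 200 preezie users, because 40% and 10% of total traffic are 80% and 20% of the test bucket):

```typescript
// Reproducing the worked example above.
const control = { users: 1000, sales: 31 }; // 3.1% conversion

const nonPreezieSales = Math.round(800 * 0.032); // 40% of traffic -> 800 users -> 26 sales
const preezieSales = Math.round(200 * 0.045);    // 10% of traffic -> 200 users -> 9 sales

const testRate = (nonPreezieSales + preezieSales) / 1000; // 35 / 1000 = 3.5%
const controlRate = control.sales / control.users;        // 3.1%

const lift = (rate: number) => ((rate / controlRate - 1) * 100).toFixed(1);
console.log(lift(testRate)); // ~12.9% overall lift against control
console.log(lift(0.045));    // ~45.2% lift for the preezie bucket
```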

Tracking user conversion will tell you the user-level impact of preezie across sessions, so you can compare the influence on preezie-engaged users, e.g. an increase in sessions per user.

How will I know if the result is significant?

If you take 10% of your control bucket’s raw numbers (e.g. users, conversions, sessions, bounces) and your preezie bucket’s numbers, and put them into a significance calculation tool:

https://abtestguide.com/calc/

You can use this method to understand both:

  1. How preezie performs vs those who didn’t see it (i.e. the 10% / 50% buckets)

  2. How preezie performs vs those who did but didn’t interact (i.e. the 10% / 40% buckets)
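
If you want to sanity-check the calculator’s output for either comparison, the statistic behind most AB significance tools is a two-proportion z-test. A minimal sketch (the linked tool’s exact method may differ):

```typescript
// Minimal sketch of a two-proportion z-test, the statistic behind most
// AB significance calculators.
function zScore(usersA: number, convA: number, usersB: number, convB: number): number {
  const pA = convA / usersA;
  const pB = convB / usersB;
  // Pooled rate under the null hypothesis that A and B convert equally.
  const pPooled = (convA + convB) / (usersA + usersB);
  const se = Math.sqrt(pPooled * (1 - pPooled) * (1 / usersA + 1 / usersB));
  return (pB - pA) / se;
}

// Example from above: control 1000 users / 31 sales vs preezie 200 users / 9 sales.
// |z| > 1.96 is roughly significant at the 95% level (two-sided).
console.log(zScore(1000, 31, 200, 9).toFixed(2)); // ~1.01 -> not yet significant, keep running
```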

The bucket comparison above is used because preezie will only be needed by a smaller % of users; we want to help the users who need it but NOT impact those who don’t.

Tip: If your primary goal is something more reflective of the impact on users who need help (e.g. bounce rate), then you can analyse the 50/50 split on the total results as usual.

Although it may take longer to reach significance, the impact on bounce rate should be large enough to show up in the total 50% preezie-shown bucket.


For example: preezie as an exit intent pop up on multiple pages

If preezie is triggered as a pop up on any page when a user looks like they’ll exit the site, then you need to ensure you can segment by engaged users. Again, users can either see it and not interact, or interact in their first session and then revisit because of their preezie recommendations.
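
For context, desktop exit intent is typically detected when the cursor leaves through the top of the viewport. A minimal sketch, with showPreeziePopup as a hypothetical stand-in (preezie’s real trigger logic may differ):

```typescript
// Minimal sketch of desktop exit-intent detection: fire once when the cursor
// leaves through the top of the viewport (heading for the tabs/URL bar).
declare function showPreeziePopup(): void; // hypothetical stand-in for your embed code

let shown = false;

document.addEventListener('mouseout', (event: MouseEvent) => {
  // relatedTarget === null means the cursor left the document entirely.
  if (!shown && event.relatedTarget === null && event.clientY <= 0) {
    shown = true;
    showPreeziePopup();
  }
});
```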

Here we’ll track those who saw it and those who clicked, and compare against the inverse of both. The main goal of an exit intent is to keep users on the website, so our goals are now:

| Split traffic | Page trigger | Segment | Goal A: Exit rate | Goal B: New user pages/session | Goal C: User conversion rate (sales per user) |
|---|---|---|---|---|---|
| 50% Test | any (shown based on exit intent behaviour) | 25% did not see preezie | 30% | 2.1 | 4.2% |
| | | 10% saw preezie and did not click it | 20% | 2.3 | 4.5% |
| | | 15% clicked on preezie | 6% | 4.3 | 6.8% |
| 50% Control | any (never shown) | 50% no preezie loaded | 30% | 2.2 | 4.1% |
Here you can compare your control bucket with those who did not see it to ensure the behaviour is the same across buckets (see article on A/A testing).

However, because you have a bucket of users who do not see it, you do not need an AB traffic-split tool to get this result. Just run the test until you are comfortable the traffic volumes and differences in goal metrics are significant enough to compare: no preezie vs preezie clicked (25% vs 15%) and saw-but-did-not-click vs clicked (10% vs 15%).

Tip: Make sure your primary goals reflect the behaviour preezie will drive. For example, you cannot expect same-session conversion rate to be driven immediately by an exit intent pop up; instead, you can expect to keep more users engaged on your website and hopefully convert them at a later date (e.g. via ad retargeting, email incentives, etc.).

Just like a retail store visit: if users have a positive first experience, they’ll come back!