
To test preezie’s impact you will need to track two things:

  1. users who saw preezie

  2. users who clicked on preezie

Because preezie won’t be used by all users, you need to be able to compare those who use it with those who saw it but did not use it. This difference is your measure of preezie’s impact.
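
For example, two custom analytics events are enough to build every segment used on this page. Below is a minimal sketch in TypeScript, assuming a GTM-style dataLayer; the event names are placeholders to map onto your own analytics setup:

```typescript
// Minimal sketch: two custom events, one for "saw preezie", one for
// "clicked preezie". The event names and dataLayer shape are assumptions --
// adapt them to your analytics tool (GA4, Segment, etc.).

declare global {
  interface Window {
    dataLayer?: Record<string, unknown>[];
  }
}

function pushEvent(event: string): void {
  window.dataLayer = window.dataLayer ?? [];
  window.dataLayer.push({ event, timestamp: Date.now() });
}

// Fire once when the preezie widget first renders for the user.
export function trackPreezieSeen(): void {
  pushEvent("preezie_seen");
}

// Fire on the user's first interaction with the widget.
export function trackPreezieClicked(): void {
  pushEvent("preezie_clicked");
}
```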


Why we need to analyse usage

A useful comparison is an A/B test measuring the impact of showing Afterpay/Klarna in your checkout. Although this feature is available to all checkout users, we expect it will only impact a % of them.

  • Are order values higher when they use it?

    • i.e. Compare purchases of those who use it vs those who don’t

  • Do people buy more over time when they use it?

    • i.e. Compare Afterpay/Klarna transactions per user with those who don’t use it

  • Does it improve conversion overall?

    • i.e. When Afterpay/Klarna is seen does it improve checkout completion?

In this example, the overall impact on the checkout is likely to be significant, as payment options are relevant to lots of users; however, you would still need to segment users by whether they used the feature to really gauge the impact.

The same applies to preezie: depending on where it is shown, a % of your users won’t be impacted at all, so you need to analyse the behaviour of those who do engage with it separately.


Using a split traffic AB test tool

Here’s what we recommend if you’re using a traffic split tool:

  • Use the tool to load preezie for a % of your traffic (e.g. a 50/50 split). This should bucket users when they first could have seen preezie, and it should happen at the user level so the assignment persists across sessions (see the sketch after this list)

  • Ensure preezie is not loaded on any other pages for your control bucket of users, as this keeps the test cleaner. If this is not possible, you will need to segment users by those who have never used preezie

  • Measure everything at a user level, not per session. preezie often increases sessions per user, which dilutes its per-session impact: users who are shown recommendations that match their needs come back to your website more often to view them

  • Ensure you can segment the data by those who use preezie vs those who have never used it; this keeps the test clean, so repeat visits from preezie users aren’t counted in the non-user segment

  • Use conversion goals that you can segment by preezie users only

    • Note: this will often mean your segments are no longer an equal split (see the example below)

    • If this isn’t possible, use goals that monitor overall bucket performance (e.g. bounce rate, page exit rate), and use segmentation in another tool, or the raw data, to measure the impact
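
For illustration, here is a minimal sketch of the user-level bucketing described above, assuming localStorage is available and acceptable under your consent setup; split-testing tools normally handle this for you, and the loader function is hypothetical:

```typescript
// Minimal sketch of user-level bucketing that persists across sessions.
// The point is that assignment happens once, at first exposure, and every
// repeat visit by the same user lands in the same bucket.

type Bucket = "test" | "control";

const BUCKET_KEY = "preezie_ab_bucket"; // hypothetical storage key

export function getBucket(): Bucket {
  const stored = localStorage.getItem(BUCKET_KEY);
  if (stored === "test" || stored === "control") {
    return stored; // repeat visits keep the original assignment
  }
  // First exposure: assign at the user level with a 50/50 split.
  const bucket: Bucket = Math.random() < 0.5 ? "test" : "control";
  localStorage.setItem(BUCKET_KEY, bucket);
  return bucket;
}

// Only the test bucket ever loads preezie, keeping the control bucket clean.
if (getBucket() === "test") {
  // loadPreezie(); // hypothetical: inject your preezie embed snippet here
}
```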


For example: preezie embedded on a page

If preezie is embedded on a single page then this is straightforward:

| Split traffic | Page trigger | Segment | Goal A: User conversion rate (sales per user) | Goal B: Sessions per user | Goal C: Bounce rate |
| --- | --- | --- | --- | --- | --- |
| 50% Test | myhomepage.com | 40% non-preezie users | 3.2% | 2.4 | 23% |
| 50% Test | myhomepage.com | 10% preezie users | 4.5% | 3.2 | 0% (preezie counts as a significant event) |
| 50% Control | myhomepage.com | 50% non-preezie users | 3.1% | 2.6 | 25% |

Once the buckets are gaining a good level of traffic (e.g. use an A/A test to understand how long your website needs to achieve even conversion rates), you can start to compare the preezie 10% against the 50% who never saw it and the 40% who saw it but didn’t engage.

This will tell you the user-level impact of preezie across sessions, e.g. whether sessions per user increases.
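
If your tool only reports session-level metrics, you can compute user-level figures from a raw export. A minimal sketch, assuming a hypothetical per-user export with sessions and orders:

```typescript
// Minimal sketch of user-level aggregation, assuming rows of
// { userId, sessions, orders } exported from your analytics tool.
// Session-level averages would hide the per-user lift described above.

interface UserRow {
  userId: string;
  sessions: number;
  orders: number;
}

function userLevelMetrics(rows: UserRow[]) {
  const users = rows.length;
  const sessionsPerUser = rows.reduce((sum, r) => sum + r.sessions, 0) / users;
  // User conversion rate: share of users with at least one order.
  const conversionRate = rows.filter((r) => r.orders > 0).length / users;
  return { users, sessionsPerUser, conversionRate };
}

// Run once per segment (preezie users, saw-but-didn't-engage, control).
console.log(userLevelMetrics([{ userId: "u1", sessions: 3, orders: 1 }]));
```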

How will I know if the result is significant?

If you take 10% of your control bucket’s raw numbers (e.g. users, conversions, sessions, bounces) along with your preezie bucket’s numbers, you can put them into a significance calculation tool such as:

https://abtestguide.com/calc/
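
If you want to sanity-check the calculator offline, such tools typically run a two-proportion z-test. A minimal sketch with purely illustrative numbers, scaling the control bucket to 10% as described above:

```typescript
// Minimal sketch of the two-proportion z-test behind calculators like the
// one linked above. All figures are illustrative, not real results.

function zScore(usersA: number, convA: number, usersB: number, convB: number): number {
  const pA = convA / usersA;
  const pB = convB / usersB;
  const pooled = (convA + convB) / (usersA + usersB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / usersA + 1 / usersB));
  return (pB - pA) / se;
}

// Control bucket scaled to 10%, per the method above, vs the preezie users:
const z = zScore(
  50_000 * 0.1, // 10% of control users
  1_550 * 0.1,  // 10% of control conversions (the 3.1% rate is unchanged)
  10_000,       // preezie users
  450           // preezie conversions (4.5% rate)
);
console.log(z.toFixed(2), Math.abs(z) > 1.96 ? "significant at ~95%" : "not significant");
```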

You can use this method to understand both:

  1. How preezie performs vs those who didn’t see it (i.e. the 10% / 50% buckets)

  2. How preezie performs vs those who did but didn’t interact (i.e. the 10% / 40% buckets)

The above method is used because preezie will only be needed by a smaller % of users; we want to help the users who need it but NOT impact those who don’t.

Tip: If your primary goal is something more reflective of the impact on users who need help (e.g. new-user bounce rate), then you should probably analyse the 50/50 split on the total results as usual.

Although it may take longer to reach significance, the impact on bounce rate should be large enough to show up in the total 50% preezie-shown bucket.


For example: preezie as an exit-intent pop-up on multiple pages

If preezie is triggered as a pop-up on any page when a user looks like they’ll exit the site, then you need to ensure you can segment by engaged users. Again, users can either see it and not interact, or interact in their first session and then return because of their preezie recommendations.
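
As a rough illustration, a desktop exit-intent trigger often watches for the cursor leaving through the top of the viewport. A minimal sketch; the pop-up function is hypothetical and should hook into your embed code and the tracking events above:

```typescript
// Minimal sketch of a desktop exit-intent trigger: fire once when the cursor
// leaves through the top of the viewport (a common heuristic).

let popupShown = false;

document.addEventListener("mouseout", (e: MouseEvent) => {
  // relatedTarget is null when the cursor leaves the document entirely;
  // clientY <= 0 means it left through the top, i.e. toward the URL bar.
  const leftViewportTop = e.relatedTarget === null && e.clientY <= 0;
  if (leftViewportTop && !popupShown) {
    popupShown = true;
    // showPreeziePopup(); // hypothetical: render the pop-up and fire "preezie_seen"
  }
});
```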

Here we’ll track those who saw it and those who clicked, and compare against the inverse of both:

| Split traffic | Page trigger | Segment | Goal A: Exit rate | Goal B: Sessions per user | Goal C: Bounce rate |
| --- | --- | --- | --- | --- | --- |
| 50% Test | any | 25% non-preezie users | 19% | 2.4 | 23% |
| 50% Test | any | 10% preezie users | 7% | 3.2 | 0% (preezie counts as a significant event) |
| 50% Control | any | 50% non-preezie users | 26% | 2.6 | 25% |
