...
Use the tool to load preezie for your chosen % of traffic. Bucketing should happen when users first could have seen preezie, and should be done at the user level so it persists across sessions (see the sketch after these points)
Ensure preezie is not loaded on any other pages for users who do not see it in your test; this keeps the test cleaner. If this is not possible, you will need to segment the users
Measure everything at the user level, not the session level. preezie often increases sessions per user, which dilutes its session-level impact: users who are shown specific recommendations that match their needs come back more often to view their product recommendations
Ensure you can segment the data by users who have used preezie vs those who have never used it; this keeps the test clean, so repeat visits from preezie users aren't counted as non-preezie traffic
Use conversion goals that you can segment by preezie users only
Note: this will often mean your traffic can no longer be an equal split
If this isn’t possible, use goals that are relevant for monitoring the overall performance of each bucket (e.g. bounce rate, page exit rate), but analyse using preezie segments: use segmentation in another tool, a spreadsheet, or the raw data to measure impact (an analysis sketch follows the example table below)
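As a rough illustration of the user-level bucketing and segmentation described above, the sketch below assigns each visitor to a persistent bucket the first time they could have seen preezie, only loads preezie for test-bucket users, and flags users once they engage with preezie so their repeat visits can be segmented. The storage keys, the 50% split, the `loadPreezie()` callback and the use of `localStorage` as a stand-in for a user-level identifier are all illustrative assumptions, not part of preezie's actual setup.

```typescript
// Minimal sketch: user-level bucketing that persists across sessions.
// Storage keys, split ratio and the loadPreezie() callback are assumptions
// for illustration only; use your testing tool's own bucketing if it has one.

const BUCKET_KEY = "ab_preezie_bucket";
const ENGAGED_KEY = "ab_preezie_engaged";

type Bucket = "test" | "control";

// Assign the bucket once, the first time the user could have seen preezie,
// and persist it so the same user stays in the same bucket on later visits.
function getBucket(splitToTest = 0.5): Bucket {
  const existing = localStorage.getItem(BUCKET_KEY);
  if (existing === "test" || existing === "control") return existing;
  const bucket: Bucket = Math.random() < splitToTest ? "test" : "control";
  localStorage.setItem(BUCKET_KEY, bucket);
  return bucket;
}

// Only test-bucket users ever get the preezie script, on any page,
// so control users are never exposed on a later visit.
function maybeLoadPreezie(loadPreezie: () => void): void {
  if (getBucket() === "test") loadPreezie();
}

// Call this from whatever event fires when a user actually engages with
// preezie, so their repeat visits can be segmented as "preezie users".
function markPreezieEngaged(): void {
  localStorage.setItem(ENGAGED_KEY, "true");
}

// Attach the bucket and engagement flag to analytics events so every
// metric can be segmented at the user level rather than per session.
function analyticsDimensions(): Record<string, string> {
  return {
    bucket: getBucket(),
    preezieUser: localStorage.getItem(ENGAGED_KEY) === "true" ? "yes" : "no",
  };
}
```

In practice a logged-in user ID or a first-party cookie may be a better user key than `localStorage`; the point is simply that the assignment is made once per user and reused on every later session.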
...
For example,
If preezie is embedded on a single page then this is straightforward:
...
| Info |
|---|
| We do this because preezie won’t be a tool required by 100% of users; we want to help users who need it and not impact those who don’t. |
...
If preezie is a timed-delay pop-up shown on multiple pages, then you need to ensure you’re segmenting by engaged users:
| Split traffic | Page trigger | Segment | Conversion rate | Sessions per user | Bounce rate |
|---|---|---|---|---|---|
| 50% Test | any | 40% non-preezie users | 3.2% | 2.4 | 26% |
| 50% Test | any | 10% preezie users | 4.5% | 3.2 | 0% (preezie counts as a significant event) |
| 50% Control | any | 50% non-preezie users | 3.1% | 2.6 | 25% |
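If your testing tool can’t report these segments directly, the same table can be rebuilt from raw per-user data in a spreadsheet or a short script. The sketch below assumes a hypothetical export with one row per user recording their bucket, whether they ever engaged with preezie, their session count, and whether they converted or bounced; the `UserRow` shape is an assumption about your own data, not a preezie export format.

```typescript
// Sketch of rebuilding the table above from a hypothetical per-user export.
// The UserRow shape is an assumed example of your raw data, not a preezie format.

interface UserRow {
  bucket: "test" | "control";
  preezieUser: boolean;   // did this user ever engage with preezie?
  sessions: number;       // total sessions for this user
  converted: boolean;     // did this user convert at least once?
  bounced: boolean;       // did this user only ever bounce?
}

interface SegmentMetrics {
  users: number;
  conversionRate: number;   // converting users / users
  sessionsPerUser: number;  // total sessions / users
  bounceRate: number;       // bounced users / users
}

function summarise(rows: UserRow[]): SegmentMetrics {
  const users = rows.length;
  const converters = rows.filter(r => r.converted).length;
  const sessions = rows.reduce((sum, r) => sum + r.sessions, 0);
  const bounces = rows.filter(r => r.bounced).length;
  return {
    users,
    conversionRate: users ? converters / users : 0,
    sessionsPerUser: users ? sessions / users : 0,
    bounceRate: users ? bounces / users : 0,
  };
}

// Segment exactly as in the table: test non-preezie, test preezie, control.
function segmentReport(rows: UserRow[]) {
  return {
    testNonPreezie: summarise(rows.filter(r => r.bucket === "test" && !r.preezieUser)),
    testPreezie: summarise(rows.filter(r => r.bucket === "test" && r.preezieUser)),
    control: summarise(rows.filter(r => r.bucket === "control")),
  };
}
```

Because each row represents a user rather than a session, the resulting conversion rate, sessions per user and bounce rate are user-level metrics, matching the way the table above is segmented.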