A/B tests are simple: you have a change you want to make; you define what counts as a "conversion"; you randomly show the change to half your users and see whether conversion goes up or down. But what if the effects are more complex?
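The mechanics described above, deterministically splitting users 50/50 and comparing conversion rates between the two groups, can be sketched in a few lines of Python. This is a hypothetical illustration, not code from the experiment discussed in the talk; the function names and the experiment key are made up:

```python
import hashlib
import math

def assign_variant(user_id, experiment="checkout-test"):
    # Hash the user id with the experiment name so each user lands in a
    # stable, effectively random bucket and sees the same variant on
    # every visit. (Hypothetical helper; "checkout-test" is made up.)
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    return "B" if int(digest, 16) % 2 else "A"

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    # Two-proportion z-test: did B's conversion rate differ from A's
    # by more than chance? conv_* = conversions, n_* = users shown.
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se  # |z| > 1.96 roughly means p < 0.05
```

For example, 130 conversions out of 1,000 users on B against 100 out of 1,000 on A gives a z-score above 1.96, so the uplift would be unlikely to be noise.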
I'll talk about an experiment we ran that initially targeted a single measure of conversion but had some knock-on effects, and how we were able to study them. I'll share some very easy things you can do right away to get more information out of your A/B tests, and hopefully convince you there's more to them than deciding on some copy or the colour of a button.
An A/B test is for life, not just for Christmas
A multi-purpose developer/engineer working mostly with Ruby these days, but with experience in C++, C#, Prolog, and an assortment of others, including a little bit of Clojure dabbling.