A/B tests are simple: you have a change you want to make; you define what counts as a "conversion"; you randomly show the change to half your users, and see whether conversions go up or down with the change. But what if the effects are more complex?
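That basic mechanism can be sketched in a few lines of Ruby. This is a minimal illustration, not anything from the talk: `variant_for` is a hypothetical helper that buckets users deterministically (so a returning user always sees the same version), and the significance check is a standard two-proportion z-test on the conversion counts.

```ruby
require "digest"

# Deterministically assign a user to a variant, so the same user
# always sees the same version of the page. (Hypothetical helper.)
def variant_for(user_id, experiment: "new_button")
  bucket = Digest::MD5.hexdigest("#{experiment}:#{user_id}").to_i(16) % 2
  bucket.zero? ? :control : :treatment
end

# Two-proportion z-test: did the conversion rate change between
# control and treatment? |z| > 1.96 is significant at the 5% level.
def z_score(conversions_a, n_a, conversions_b, n_b)
  p_a = conversions_a.fdiv(n_a)
  p_b = conversions_b.fdiv(n_b)
  pooled = (conversions_a + conversions_b).fdiv(n_a + n_b)
  se = Math.sqrt(pooled * (1 - pooled) * (1.0 / n_a + 1.0 / n_b))
  (p_b - p_a) / se
end

# 4.0% vs 5.0% conversion over 5,000 users per arm:
z = z_score(200, 5_000, 250, 5_000)
puts z.round(2)  # ≈ 2.41, so the lift is significant at the 5% level
```

The deterministic hashing matters more than it looks: if users were re-randomised on every visit, the two groups would bleed into each other and the comparison would be meaningless.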
I'll talk about an experiment we ran that initially targeted a single measure of conversion but had some knock-on effects, and how we were able to study them. I'll share some very easy things you can do right away to get more information out of your A/B tests, and hopefully convince you there's more to them than deciding on some copy, or the colour of a button.
A multi-purpose developer/engineer working mostly with Ruby these days, but with experience in C++, C#, Prolog, and an assortment of others, including a little bit of Clojure dabbling.