Hi everyone,
Too many times I've seen people on Facebook happy that their conversion rate jumped from 1.1% to 2% with fewer than 500 visits. The truth is, with only 500 visits and such a low conversion rate, there is no way to be sure their conversion rate actually changed.
This is because of something called the "p-value". In short, the p-value measures how likely it is that you would see a difference as big as the one you observed purely by chance, if there were actually no real difference between the two variants.
In our case, the hypothesis could be: A is performing better than B in our A/B test, or this Facebook ad converts better than that one.
This is, of course, an oversimplification, but in short, the p-value tells you whether you can draw a conclusion from your results or whether you should wait for a bigger sample.
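To make that concrete, here is a minimal sketch of how you could compute this p-value yourself with a pooled two-proportion z-test in Python. The visit and conversion counts below are illustrative assumptions roughly matching the example above, not real data:

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates,
    using a pooled two-proportion z-test."""
    p_a = conv_a / n_a                      # observed conversion rate of A
    p_b = conv_b / n_b                      # observed conversion rate of B
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under "no difference"
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * norm.sf(abs(z))              # two-sided tail probability

# Roughly the example from the intro: ~1% vs 2% with only 500 visits per variant
print(two_proportion_p_value(conv_a=6, n_a=500, conv_b=10, n_b=500))
# ~0.31, far above the usual 0.05 threshold, so you cannot conclude B beats A yet
```

With samples this small, the p-value stays high even though 2% looks twice as good as 1%, which is exactly why you should wait for more visits before declaring a winner.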
I've made this small tool to help you draw conclusions from your A/B tests, and I hope you will find it useful.
This thing took me quite a bit of time to code, so if you think Twitter would like it, do not hesitate to share it; it helps a lot 🙏😉
"People tend to draw conclusions from their A/B tests when actually they should not 📈. You often can't be sure if B is better than A without computing the p-value 🤓. To make this process as easy as possible I've made this 👇 abtestchecker.com"
— Pierre de Wulf (@pierredewulf), 29 Feb 2020
(but do NOT create an account just for this, though)
If you want to read more about p-values, you can read this excellent post on FCC.com.
Do not hesitate to follow me if you don't want to miss my next posts. I write about tech and my bootstrapping journey, and I occasionally write more data-analysis articles.