A/B Split Testing: 5 things that will make or break it



In episode #27, Wilco shares his split-test geekiness with all of you.

Time Stamped Show Notes:
00:25: You better get ready for some marketing geekiness.
00:55: The winning variation converted 83% better than my original version of that page.
02:55: Now, the first tip I want to give you is that you should not start running a split-test.
03:22: First, I just throw something out to see if it even has potential.
04:16: My point here is that I only start split-testing once I know something kind of works.
04:47: Now, the second thing that you need to do is to aim for big differences.
05:37: The main headline is like what really draws people into it.
07:00: You want to make sure that you have enough traffic.
10:30: My point here is that when you are doing a split-test, you need to know what you’re optimizing for, what your end goal is because sometimes, you’re looking at the wrong goals.
11:20: My point here is that you need to make sure that you have the right goal in mind.
11:40: You need to have patience.
13:00: Keep track of all of your results.

Links:
UpViral
UpViral.com/blog
Facebook Ads
Visual Website Optimizer
Google Docs

Show Transcript


You better get ready for some marketing geekiness because today, I'm actually sharing my split-test geekiness with all of you. Hey, it's Wilco de Kreij here. Just this morning, I wrapped up a split-test that I had actually forgotten about, so this split-test had been running for a while. I didn't even look at the stats until this morning, and when I checked them, I noticed that the winning variation, the one I was trying out, actually converted 83% better than my original version of that page. This is on a money page, right? This was a page where I'm actually selling something, so 83% higher conversions, that's actually insane. It's almost twice the amount of sales for the exact same amount of traffic, right?


Obviously, I figured, "Hey, I need to record something about split-testing for my audience because it is so, so important. It can be the best thing in your business." For those who already know me, I'm a bit of a marketing nerd. I actually like doing split-tests, so I'm running split-tests on a lot of things, right? It all starts, obviously, with things like the ads that I'm running, but also with the opt-in pages that I'm running. If I'm using UpViral, for example, I always split-test the opt-in page. I split-test all my sales pages. You have no idea how much I split-test. I do it not just because I like doing split-tests, but also because it is the best way to learn, right?


It's one thing to learn from others and see, "Hey, what are they doing?" but it's another thing to actually try different things out yourself and know, "All right, so I tried out these three things, and of these three, number three actually works best. Now, why is that?" Then I'm diving in and breaking it down, and that's probably been my main way of learning things, just by seeing what works instead of guessing, right?


For that reason, I wanted to talk about split-testing, because there are some things you need to do while running a split-test, but perhaps even more important, there are some things you should absolutely not do. If you do those things, instead of improving your conversions, you might actually be hurting them, or you might not get any results at all. So I wanted to share five tips on what you should and should not be doing when you run a split-test.


Now, the first tip I want to give you is that you should not start out running a split-test, and that sounds strange, right? I'm saying split-testing is the best thing since sliced bread, and now I'm saying you should not be running a split-test. That's right. You should not start out running a split-test. On most of the campaigns I run, I don't immediately start creating a lot of different split-tests, right?


First, I just throw something out to see if it even has potential, because if you're going to split-test everything you create right from the start, it's going to slow the whole process down. Let's say you have a new product or service in mind and you want to bring it to market. You could create multiple sales pages and multiple opt-in pages, make everything perfect, plan it all out, and release it after a couple of months, or you could start a webinar right away, for example. Get people to sign up. Introduce your product even if you don't have a sales page yet. Just get the word out as soon as possible and see if people even respond to your product or service.


That is just an example, but my point here is that I only start split-testing once I know something kind of works, and from there, I start improving. I usually don't start out running a split-test because it just slows the whole process down, and I prefer to move fast. If something isn't even remotely working, then why would I split-test it? If something is not working at all, the best-case scenario of a split-test is choosing between a variation that doesn't work at all and one that almost doesn't work at all. I hope that makes sense. Don't start split-testing right from the start; wait until you have something that you feel is going to work. From there, you can actually refine and improve your winning variation.


Now, the second thing that you need to do is to aim for big differences. Don't split-test to see which color of your button works best, whether it's yellow, green, or blue. At some point, it will make sense to test that, but initially, you always want to go for the biggest differences. Look at your page, whatever you're split-testing, whether it's an opt-in page or a sales page, and ask what change could have the biggest impact.


For me, I have a certain process. I always start with the biggest change first. For example, if I'm split-testing an opt-in page, the biggest change is going to be the hook, right? What is it even about? In my design, that's usually the headline. The main headline is what really draws people in, so I split-test that first, and I don't test anything else until I've found the right hook.


Once one of those titles is the winner, then I go and tweak other things, like maybe the sub-headline, or the text on the call-to-action, or, if the design is a big part of the landing page, maybe a completely different design. I'm not going to make small tweaks; I make it completely different. The reason is that I want big changes, because if you're just going to change the color of your button, that's maybe going to improve your conversion rate by 2%, and the small difference itself is not even the problem, right?


If I could get a 1% or 2% improvement every single day, I'd sign up for that, but the problem is that because the difference is so small, it's going to take a long time before you actually know for sure which one is converting better. Up until that point, the results could swing up and down; there's too much variation. You're just not going to know with significant certainty whether something is working best. The bigger the difference between the things you're testing, the faster you will get results, and that's really what I'm after, right? I want to make fast changes. The bigger the differences, the faster you will have significant data to know which variation actually converts best.
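As a rough sketch of why bigger differences finish faster, here's the standard sample-size approximation for a two-proportion test; the baseline rate and lift numbers below are ones I made up for illustration, not figures from the episode.

```python
# Rough per-variation sample size for detecting a lift with a two-proportion
# test at alpha = 0.05 and 80% power. Illustrative numbers, not the show's.
def sample_size_per_variation(baseline_rate, relative_lift):
    delta = baseline_rate * relative_lift   # absolute difference to detect
    p = baseline_rate + delta / 2           # average of the two rates
    z_sum_sq = (1.96 + 0.84) ** 2           # (z_alpha/2 + z_beta)^2, about 7.84
    return round(2 * z_sum_sq * p * (1 - p) / delta ** 2)

# Detecting a tiny 2% lift on a 5% baseline vs. a big 50% lift:
tiny_change = sample_size_per_variation(0.05, 0.02)
big_change = sample_size_per_variation(0.05, 0.50)
```

With these assumed numbers, the tiny button-color-sized change needs hundreds of thousands of visitors per variation, while the big headline-sized change needs only a couple of thousand, which is exactly the "bigger differences, faster results" effect.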


Thirdly, you want to make sure that you have enough traffic. It isn't always easy, I know. If you don't have a lot of traffic, then do not start a split-test where you're testing six different things; in that case, stick to just two variations. You can test multiple things, but to do that, you need a lot of traffic to make it worthwhile. Obviously, I could go into the math, but for now, I want to keep things simple.


It really comes down to this: the more traffic you have, the faster you will know with significant certainty whether something is actually converting better or not. If you do not have a lot of traffic, that's a problem. If you only have, I don't know, 50 visitors a day, that's going to be a problem, right? It goes so much faster when you have more data, and that's also one of the reasons I spend a lot of money on Facebook ads, for example: basically, I'm buying data. I'm paying to see what works and what doesn't instead of waiting around. Once I know what works, I can tweak that and send more traffic into it with other traffic streams like UpViral, content marketing, and all those other kinds of things. I hope that explains it well. Make sure you have enough traffic before you start split-testing.
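To put rough numbers on the traffic point, here's a back-of-the-envelope duration estimate; the 1,500-visitors-per-variation figure is a hypothetical placeholder, not something from the episode.

```python
# How long a test takes at a given traffic level, assuming you've already
# estimated how many visitors each variation needs (1,500 is a made-up figure).
def days_to_finish(visitors_per_day, variations, needed_per_variation):
    visitors_per_variation_per_day = visitors_per_day / variations
    return needed_per_variation / visitors_per_variation_per_day

# 50 visitors a day: six variations vs. sticking to two.
six_way = days_to_finish(50, 6, 1500)
two_way = days_to_finish(50, 2, 1500)
```

At low traffic, every extra variation stretches the calendar: the six-way test here takes three times as long as the two-way test to collect the same data per variation.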


Number four is maybe the most important one, actually, and the case study I just shared with you, the results I figured out this morning, is super relevant here. When I set up that split-test a while ago, I actually set up multiple goals for it, and I'm running this particular split-test using Visual Website Optimizer, VWO.com.


What that tool allows me to do is set up multiple goals. I can, for example, set up a goal for when they click a button on that page, a goal for when they visit my thank-you page, so I know they actually made a purchase, and a third goal for how much revenue it would actually generate, which also takes into account which package someone bought, whether a monthly package, a yearly package, et cetera.


Now, for this particular campaign, I created multiple goals. One of them was the main goal I was tracking, but I added several more. I remember that when I started this out, I didn't have enough data, right? After the first couple of days the test was running, I didn't have a lot of data, so the easiest thing to look at was the click-out rate, the click rate, because basically, it was a replay page for a webinar. I looked at the stats asking, "Which of these two variations triggered people to click through to the checkout page?" Right?


One of those two variations was the clear winner. Almost twice as many people, initially twice, later about one and a half times as many, clicked the button to the checkout page. If I had been looking at that data alone, I would have picked that one as the winner right away. It was clearly performing better in terms of click-throughs to the checkout page, but after I let the campaign run, I actually noticed that the other page converted way better in sales, right?


My point here is that when you are doing a split-test, you need to know what you're optimizing for, what your end goal is, because sometimes you're looking at the wrong goals. If I had looked at the number of click-outs to my checkout page, the page where people could actually make the purchase, then I would have picked the wrong one, because the page where people did not click through to the checkout as often ended up converting better overall, right?


Even though fewer people clicked on the link, those who did click converted way better, right? My point here is that you need to make sure that you have the right goal in mind. If you don't, and you focus on, for example, click-outs while that's not your end goal, then what seems like a winner could actually be your loser. If I had gone with that, I would have lost the 83% conversion improvement I found this morning, right? It's really important to pick the right goal, and that brings me to tip number five. Actually, I'm going to share six tips because I just remembered a very important one as well.
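The goal-mismatch flip described above can be reproduced with toy numbers; these counts are invented for illustration, not the real campaign's stats.

```python
# Invented counts showing how the click-out winner and the sales winner
# can be different pages, as in the replay-page story above.
results = {
    "original":  {"visitors": 1000, "checkout_clicks": 300, "sales": 12},
    "variation": {"visitors": 1000, "checkout_clicks": 200, "sales": 22},
}

def rate(name, goal):
    return results[name][goal] / results[name]["visitors"]

click_winner = max(results, key=lambda name: rate(name, "checkout_clicks"))
sales_winner = max(results, key=lambda name: rate(name, "sales"))
```

Judged on clicks, "original" wins; judged on the goal that actually matters, sales, "variation" wins, which is why the end goal has to be the one you optimize for.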


Number five is that you need to have patience. If you're getting initial data and you only have a couple of sales, and it's not significant yet, do not stop the test. Sometimes, early on, you might see, "Oh, this is actually converting 40% better," when in reality there's no way you can say anything about the test yet because you haven't had enough patience. You need to be patient and let the split-test run for a while.


For those of you asking how long the test should run, that totally depends on how much traffic you have, what the difference in conversion is between the two variations, and what the conversion rates themselves are. Is it 2% to 3%, or is it 20% to 30%? Things like that. Once again, I'm not going to go into the math right now; let's just say that you need to be patient. If you really want to dive into the stats, you can probably google it, or if you ask me, maybe I'll record another episode on it later, but for now, I want to cover the basics.
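For anyone who does want to dive into the stats, a plain two-proportion z-test is one common way to check significance; the visitor and conversion counts below are invented to show why an early "40% better" reading can be pure noise.

```python
from math import erf, sqrt

# Two-sided z-test on two conversion rates; counts are made-up examples.
def p_value(conversions_a, visitors_a, conversions_b, visitors_b):
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = abs(rate_a - rate_b) / se
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))  # two-sided p-value

early = p_value(5, 100, 7, 100)        # B looks 40% better, but tiny sample
later = p_value(100, 2000, 140, 2000)  # same rates, 20x the data
```

With only 100 visitors per variation, the 5% vs. 7% difference is nowhere near significant; with the same rates at 2,000 visitors each, it is. That's the patience tip in one calculation.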


So far, I've covered five: do not start split-testing right away. Aim for big differences. Have enough traffic, obviously. Test the right conversion; make sure that you're testing the right goal. And number five, make sure you have patience, right? Number six is an important one as well, and something I recently started doing in a more structured way: keep track of all of your results.


I've been doing split-tests for years and years. A lot of the split-tests that I ran three or four years ago, I didn't actually store that data in a good format, so I don't have access to all of my split-tests from back then, which is a shame because it's actually the best learning material you can have, right?


What we've been doing for the last year or so is documenting every split-test that we run so we can actually draw conclusions: "What do we expect to get out of this test? When did it start? How much traffic did it get? What are all the variations, with screenshots and all that, and what conclusion can we draw?" Based on that, we can learn, and learn, and learn. Not just me, but the whole business as well, because my team has access to that too, so that's my sixth tip.
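As a sketch of what such a test log could look like, here's a minimal CSV version; the column names and the example row are my own invention, not a format from the episode or from any tool mentioned in it.

```python
import csv
import io

# Minimal split-test log; fields and the sample row are invented examples.
FIELDS = ["test", "started", "hypothesis", "variations", "goal", "conclusion"]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=FIELDS)
writer.writeheader()
writer.writerow({
    "test": "webinar-replay-page",
    "started": "2017-05-01",
    "hypothesis": "a stronger hook lifts sales",
    "variations": 2,
    "goal": "revenue, not click-outs",
    "conclusion": "variation converted better on sales despite fewer click-outs",
})
log_csv = buffer.getvalue()
```

Whether it lives in a Google Doc, a spreadsheet, or a CSV like this, the point is the same: record the expectation, the setup, and the conclusion so the whole team can learn from every test.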


If you are doing split-tests, create a Google Doc. Create something. It doesn't really matter what it is, but create something where you keep track of all of the split-tests that you are running. I just wanted to share that with all of you. I hope you appreciate it. Let me know if you are running split-tests as well so I know I'm not the only one geeking out on this kind of stuff, and I hope you have an awesome day.