A/B testing is when you compare two or more versions of the same page by looking at the conversion rates and metrics that matter to your business (such as clicks, views, and signups).
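To make that comparison concrete, here's a minimal sketch of how a variant's conversion rate can be tested against the original using a two-proportion z-test. The visitor and signup counts are hypothetical, purely for illustration:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Compare the conversion rates of two page variants (two-proportion z-test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)              # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))          # two-sided p-value
    return p_a, p_b, z, p_value

# Hypothetical numbers: 120 signups from 2,400 visitors vs 150 from 2,380
p_a, p_b, z, p = two_proportion_z_test(120, 2400, 150, 2380)
print(f"A: {p_a:.2%}  B: {p_b:.2%}  z={z:.2f}  p={p:.3f}")
```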
For example, if you change the title on a landing page, you can target all of your landing pages at once, and each page will be treated as a variation within the same group. The group is simply the name or observation title you give to a particular test (example: Landing page title testgroup1). Hopefully you have a much cooler group name, but you get the picture.
A/B tests are great whether you want to test radical ideas for conversion optimization or make small changes, and they’re a great way to get fast results and minimize test time.
If you have a large amount of traffic to your site and want to test key sections on a page, this is where you’d run multivariate tests. A/B testing looks at making singular changes to a whole page whereas multivariate testing looks at changing key sections on a page and how they interact with each other.
That said, multivariate testing is more complicated than A/B testing because there are more layers involved. When you test several key sections at once, the number of possible combinations can quickly become too overwhelming to deal with if you’re not an experienced marketer.
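To see why the combinations pile up so quickly, here's a small sketch that enumerates every combination for a hypothetical multivariate test of three key sections (the section names and variants are made up):

```python
from itertools import product

# Hypothetical key sections and their variants for a multivariate test
headlines = ["Original headline", "Benefit-led headline"]
hero_images = ["Product shot", "Lifestyle photo", "Illustration"]
cta_buttons = ["Start free trial", "Get started"]

combinations = list(product(headlines, hero_images, cta_buttons))
print(f"{len(combinations)} combinations to test")  # 2 * 3 * 2 = 12
for combo in combinations[:3]:
    print(combo)
```

Every new section or variant multiplies the total, and each combination needs its own share of traffic, which is why multivariate testing only makes sense on high-traffic pages.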
Split testing is where you test one element on a page and see how the results for that page differ from the original version. This may look similar to A/B testing… because it’s the same thing.
The terms are used interchangeably; split testing and A/B testing are intrinsically the same.
The difference between A/B testing (aka split testing) and multivariate testing is that the former tests one change at a time whereas the latter tests combinations of multiple changes at once.
How long should A/B tests last?
This is a tricky question because a lot of factors come into play.
These factors include sample size, statistical confidence, seasonality, how representative your sample is, and timing. There’s no universal answer to how long an A/B test should run because… drum roll… it depends on your industry and a host of other factors.
However, that doesn’t mean that running a test for one or two days is enough. Generally, a few weeks to a month is a safe range for a test, assuming the data was collected correctly, the conditions weren’t out of the ordinary, and the test was carried out by experienced marketers.
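As a rough back-of-the-envelope check (not a substitute for a proper duration calculator), you can estimate how long a test needs to run by dividing the required sample size by your daily traffic. The figures below are hypothetical:

```python
import math

def estimated_test_duration(sample_per_variation, variations, daily_visitors):
    """Rough number of days needed to reach the minimum sample size."""
    total_needed = sample_per_variation * variations
    return math.ceil(total_needed / daily_visitors)

# Hypothetical figures: 5,000 visitors per variation, 2 variations, 800 visitors/day
print(estimated_test_duration(5000, 2, 800), "days")  # 13 days
```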
Determining the validity of your tests
Determining the validity of a test can be done in three steps:
- Calculate the minimum sample size: Define the level of confidence you’d like in your test results (e.g. 90-95% is widely considered a solid target) and calculate a sample size based on that number. This gives you the minimum number of visitors each of your variations needs (see the sketch after this list).
- Check for discrepancies in segments: Before completing the test, decide how you’ll segment your visitors. Once you’ve hit the minimum sample size, check each segment for major discrepancies; if there aren’t any, keep the test running.
- Assess your business cycle: As mentioned above, business cycles and seasonality can play a large role in the validity of any optimization tests. Run the test in different cycles and compare how they fare against one another (for example: are visitors and sales the same in Q4, with Christmas and New Year, as they are the rest of the year?).
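As a sketch of step 1, here's one common approximation for the minimum sample size per variation in a two-proportion test, driven by your baseline conversion rate, the smallest lift you care about, your confidence level, and statistical power. The baseline and lift figures are hypothetical:

```python
from statistics import NormalDist

def min_sample_size(baseline_rate, min_effect, alpha=0.05, power=0.8):
    """Visitors needed per variation (standard two-proportion approximation)."""
    p1 = baseline_rate
    p2 = baseline_rate + min_effect                # smallest lift worth detecting
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)           # e.g. 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return int(n) + 1

# Hypothetical: 5% baseline conversion, detect an absolute lift of 1%
print(min_sample_size(0.05, 0.01), "visitors per variation")  # roughly 8,200
```

The trade-off this formula makes explicit: the smaller the lift you want to detect, the more visitors you need, which is also why low-traffic sites struggle to run valid tests.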
Conversion rate optimization is not an easy thing to tick off your checklist.
It’s a perpetual process that will kick your ass many times, but it will also take your business to the next level if you learn to embrace it. This goes for newbies and professionals alike.
Stage 5: Learn and review
If you’re looking to increase the number of people who sign up for a free trial for a product, you might want to set up goals for people who make it to the signup page and people who actually make it across the line and sign up.
In whichever testing platform you use, you should see the running test and some sort of indication as to whether that new variation has improved conversions or not.
Carefully compare the two numbers (the original vs the new variant): look at the percentage of growth as well as the variant’s chance (also expressed as a percentage) of beating the original. If that percentage falls short of the ideal 90-95% goal, keep optimizing and keep running tests until you hit it.
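That "chance to beat the original" figure is typically a Bayesian probability. Here's a minimal Monte Carlo sketch of how such a number can be computed from raw counts using Beta posteriors; the counts are hypothetical:

```python
from random import betavariate

def prob_b_beats_a(conv_a, n_a, conv_b, n_b, draws=100_000):
    """Monte Carlo estimate of the chance variant B truly beats the original,
    sampling from Beta(1 + conversions, 1 + non-conversions) posteriors."""
    wins = 0
    for _ in range(draws):
        rate_a = betavariate(1 + conv_a, 1 + n_a - conv_a)
        rate_b = betavariate(1 + conv_b, 1 + n_b - conv_b)
        if rate_b > rate_a:
            wins += 1
    return wins / draws

# Hypothetical counts: keep testing until this climbs past ~90-95%
print(f"P(B beats A) = {prob_b_beats_a(120, 2400, 150, 2380):.1%}")
```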
If you end up with inconclusive results, here are a few things you can do:
- Segment the data: Individual segments often reveal clearer data than lumped ones. Look at segments like traffic sources, devices, and whatever else makes sense for your business (see the sketch after this list). Sometimes you need to dig even deeper into the numbers to find clarity, especially with A/B tests.
- Don’t test things that don’t matter: Another common cause of inconclusive results is testing things that don’t actually matter to the business. Make sure all of your tests are backed by hypotheses and clearly prioritized before you get itchy fingers and test every single thing on your page.
- Challenge your hypothesis: If you follow a process and still get inconclusive results, it could be time to revise your hypothesis or even scrap it altogether. You could test new variations on the same hypothesis, or go back to the drawing board to better understand the data you collected and form a stronger hypothesis.
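As a toy illustration of the first point, here's a sketch that breaks lumped test results down by segment; the visitor records and segment names are made up:

```python
from collections import defaultdict

# Hypothetical per-visitor records: (segment, variant, converted)
visits = [
    ("mobile", "A", 1), ("mobile", "B", 0), ("desktop", "A", 0),
    ("desktop", "B", 1), ("mobile", "B", 1), ("desktop", "A", 1),
]

# (segment, variant) -> [conversions, visitors]
totals = defaultdict(lambda: [0, 0])
for segment, variant, converted in visits:
    totals[(segment, variant)][0] += converted
    totals[(segment, variant)][1] += 1

for (segment, variant), (conv, n) in sorted(totals.items()):
    print(f"{segment:8s} {variant}: {conv}/{n} = {conv / n:.0%}")
```

A variant that looks flat overall can turn out to win decisively on mobile and lose on desktop; the blended number hides both signals.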