Webvork Blog - ENG

How to split test on Facebook and where you can go wrong

Many beginners in affiliate marketing are surprised when they first hear about split testing on Facebook. It seems like a complex, drawn-out process, but once you look into it, you realize the unfamiliar environment only seems difficult at first.

Today, we will talk about Facebook split testing, tell you how and what to do, and also highlight the points where you can make mistakes and ultimately get incorrect results.

Why is split testing needed and how does it work?

Split tests, also known as A/B tests, are a method of finding the most profitable combination for driving traffic to an offer. With their help, a PPC specialist can test hypotheses and analyze user behavior in practice.

Essentially, to conduct a split test, you take two advertising strategies and compare their effectiveness. The system divides the target audience into two equal groups: the control group sees the standard ad, and the experimental group sees a modified version.

In the process, individual elements of the ads can be changed and tested to see how they affect conversions.

Once all the indicators have come in from the affiliate network, all that’s left is to choose the most effective combination and launch it at full scale.

Here’s an A/B testing algorithm:

1. Define goals and hypotheses;

2. Define a metric to evaluate performance;

3. Test;

4. Collect and validate data;

5. Make changes according to the result.
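Step 4 of the algorithm above, collecting and validating data, usually comes down to comparing conversion rates between the control and the variant. Here is a minimal sketch in Python of how that comparison might look; the function name and the two-proportion z-test with its rough 95%-confidence threshold are illustrative assumptions, not part of any Facebook tooling:

```python
import math

def compare_variants(conv_a, n_a, conv_b, n_b):
    """Compare two ad variants by conversion rate using a
    two-proportion z-test (illustrative sketch, not a Meta API call)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the "no difference" hypothesis
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return p_a, p_b, z

# Example: control got 50 conversions from 5,000 impressions,
# the variant got 80 conversions from 5,000 impressions.
p_a, p_b, z = compare_variants(50, 5000, 80, 5000)
print(f"control: {p_a:.2%}, variant: {p_b:.2%}, z = {z:.2f}")
# As a rule of thumb, |z| > 1.96 corresponds to ~95% confidence
# that the difference is real rather than noise.
```

If |z| stays below the threshold, the honest conclusion is "no detectable difference yet": keep the test running rather than declaring a winner.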

A/B testing should be a constant tool in your kit. There is no absolutely perfect ad, so it’s crucial to keep looking for ways to increase the effectiveness of your advertising: form hypotheses and test the changes they suggest.

Split test on Facebook

There are several ways to run tests on Facebook. They all depend on which variable you’re trying to test and how exactly you create your A/B test.

However, if you want to save time and money, it is important to quickly identify the key variables. You can’t test them all at once, so narrow the choice down to specific goals.

There are several elements worth paying attention to when testing; let’s discuss them below.

Landing pages

The visual and text content of the landing page can, depending on how it’s built, either increase the retention rate or drive up the bounce rate. Test whether the landing page repels your target audience: create several landing page variants and see which one converts better.

Audiences

This variable checks how well your different campaign approaches can satisfy the target audience. Segment your audience to avoid high overlap and get more accurate data output. This will help answer the question of whether it’s time to change the current set of ads.

Bidding strategies

Instead of sticking to one bidding strategy for every ad, it’s better to create several options and then keep the best-performing one.

Advertising copy

Visuals drive conversions together with the copy. Here, you can vary the headline, ad copy, value proposition, and CTA. For example, try alternating long texts with short ones, changing the tone of voice, and so on.

Advertising placements

Poorly chosen placements can ruin your conversion rate, so instead of relying on auto placement, test and monitor where your ads appear, how they display, and what results they generate.

Demographics

Demographic testing helps you focus on your current audience. You can test two ad creatives split by gender to see how they perform for male and female audiences. The same goes for other demographic factors: age, employment, interests, income level, and education.

Visual ad content

Creative visuals are critical to understanding how people interact with your ads. Here, we look for what attracts more clicks: ads with images, videos, or carousels. Try different creatives for the same audience segment to see which one performs better.

How to run an A/B test on Facebook?

When it comes to Facebook, you work in the Meta Ads Manager dashboard, which uses an existing ad campaign or ad set as the template for a test.

How to set up a test:

1. Go to the main page where you can find available campaigns, ad sets, and ads;

2. Select the campaigns or ads that will be split tested. In the toolbar at the top, click the A/B Test button;

3. Select a variable: Creative, Audience, Placement, or Custom. Creative will allow you to edit the ad, and the rest will allow you to make changes to the ad set;

4. Enter the test name and the performance metric, then set the start and end dates of the test. You can also have the test stop early if a winner is determined sooner.

After this, the test will start and run according to the specified parameters. If you want to look at the results of this and all previous tests in the future, they are available in the Experiments tab.

In addition, split tests can be run from the Experiments section by creating or duplicating combinations to compare them and find the more profitable one, or while creating a new ad campaign (so you can test it right away).

Split testing mistakes

It’s not enough to simply launch a test at random and wait for the results: using this tool correctly takes patience, an understanding of the metrics, and a solid command of Ads Manager.

We’ve identified several mistakes that affiliate marketers make when running A/B tests on Facebook.

Unrealistic goals and hypotheses

A hypothesis is a question that a test can answer. It becomes unrealistic if you pick the wrong question to test, so make sure the goal is achievable.

Thinking that one test is enough

Unfortunately, this is simply not true. Skimping on tests seems cheaper at first, but in reality you will end up paying much more for it in lost profits. A/B testing is an iterative process: each test adds to what you know, and that new knowledge carries over into the next test, and so on.

Testing incorrect variables

There’s no need to test multiple variables at the same time. Each of them will give more valuable results in their own test. You need to understand which elements have the greatest impact on performance and conversions and start there. For example, it’s much more useful to test CTA variations rather than the color of a character’s jacket on a banner.

Wrong timing

Timing is important everywhere, including A/B tests. Running a campaign that is too short or too long will spoil the results or prevent you from collecting objective data. Facebook recommends testing for at least 7 and no more than 30 days, but this also depends on the specifics.

Incorrect campaign structure

When split testing, it’s best to place each variation in a separate ad set. If you throw them all into one set, Facebook will start auto-optimizing delivery between them, and the results will be inaccurate.

Wrong budget

Here, simply use the right formula to calculate the budget: multiply the cost per conversion by the number of ad variants and by 100.

If one ad variation significantly outperforms the others, you can stop the test before reaching the full conversion count. As a rule, you need at least 50 conversions per ad, and 100 conversions per variant to ensure reliable results.
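Putting the budget formula and the conversion thresholds above into one quick back-of-the-envelope calculator might look like this; the function and parameter names are hypothetical, and the rule of thumb is the one from this article, not an official Facebook formula:

```python
def split_test_budget(cost_per_conversion, num_variants,
                      conversions_per_variant=100):
    """Estimate a split-test budget from the rule of thumb:
    cost per conversion x number of ad variants x conversions needed.
    Use 100 conversions per variant for reliable results,
    or 50 as a bare minimum."""
    return cost_per_conversion * num_variants * conversions_per_variant

# Example: $4 per conversion, 3 ad variants.
print(split_test_budget(4, 3))      # $1200 for 100 conversions per variant
print(split_test_budget(4, 3, 50))  # $600 at the 50-conversion minimum
```

The gap between the two numbers shows why "wrong budget" is on this list: plan for the 100-conversion figure and treat the 50-conversion version as the floor, not the target.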

Conclusion

The skill of running split tests is extremely important for an affiliate marketer who doesn’t want to drain budgets searching blindly for profitable combinations. Moreover, if you don’t know how to run them correctly, you can get wrong results and lose your budget anyway.

Plus, even if the ads you’re testing are already good, you can continue to perfect them by changing or improving some variables, so good creatives need A/B tests just as much as bad ones.

Experimenting is part of the job for everyone who drives traffic, and the more you learn to use the tools, the higher the conversion rate your ads will bring.