Google Ads lets advertisers create experiments for ad campaigns, helping them understand the impact of planned changes and improve overall campaign performance. This is a useful way to split test a change, such as a different bid strategy, against your existing settings and determine the best performer, particularly when there is a good volume of data to provide reliable results.
As with any testing process, you’ll need to set a clear hypothesis covering why you’re running the experiment and what the outcome may mean for your business objectives. This hypothesis will help you determine whether your experiment was successful and what action to take as a result.
If you decide to run an experiment, test just one variable at a time. Testing more than one variable makes it difficult to identify which element drove the better outcome, and it also requires a much larger volume of data before results can be assessed. Before a test begins, you should also pick one or two metrics to gauge its success, such as total sales or cost per action, so that when it’s time to end the test, those metrics will indicate the winner.
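To make the winner call concrete, here is a minimal sketch of how a cost-per-action comparison works once a test ends. This is generic arithmetic, not a Google Ads feature, and the spend and conversion figures are hypothetical placeholders.

```python
# Minimal sketch: compare a base and an experiment campaign on cost per action (CPA).
# All figures below are hypothetical placeholders, not real campaign data.

def cost_per_action(spend: float, conversions: int) -> float:
    """CPA = total spend / number of conversions."""
    if conversions == 0:
        # No conversions yet: CPA is undefined, so treat it as the worst case.
        return float("inf")
    return spend / conversions

base_cpa = cost_per_action(spend=500.0, conversions=25)        # 20.0
experiment_cpa = cost_per_action(spend=500.0, conversions=40)  # 12.5

# Lower CPA wins on this metric.
winner = "experiment" if experiment_cpa < base_cpa else "base"
print(f"Base CPA: {base_cpa:.2f}, experiment CPA: {experiment_cpa:.2f} -> winner: {winner}")
```

If you chose two metrics up front, the same comparison is simply repeated for the second one, with the hypothesis deciding which metric takes priority if they disagree.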
You build an experiment from the base (existing) campaign, and the experiment uses the same settings apart from the key variable being tested. During the test, avoid making changes to the base campaign unless you have sync turned on, so that any adjustments apply to both the original and the test campaign.
Once the campaign and experiment have been running for the required time – usually 30 days – you can analyze the results and choose the winner of the experiment, so that the best outcome can be implemented in your future campaigns. If you’ve been testing two different bid strategies, for example, you can either update the original campaign or convert the experiment into a new campaign. Updating the original campaign ports all the changes over to your current campaign, which preserves that campaign’s history. Consider converting to a new campaign if you’re testing a new campaign structure; if you want to preserve findings from a learning period, update your original campaign instead.
One thing to remember with automated bid strategies is that they can take 7-14 days to learn a new strategy, so you may not see reliable results until after this initial learning period; factor this into your test duration.
At the end of any test, compare the results between the original and the test campaign to see which performed best against the main objective, and decide how reliable the outcome is. Ads should be shown to searchers in a true A/B split over the experiment period, and although more data gives a clearer outcome, a minimum of around 1,000 impressions or 100 clicks should still provide a good comparison.
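As a rough way to judge how reliable the outcome is once you have that minimum volume, you could run a standard two-proportion z-test on the click rates of the two campaigns. This is a generic statistical sketch, not a Google Ads feature, and the click and impression counts are hypothetical.

```python
import math

def two_proportion_p_value(clicks_a: int, impressions_a: int,
                           clicks_b: int, impressions_b: int) -> float:
    """Two-sided p-value for a difference in click-through rate between
    two campaigns, using the pooled two-proportion z-test."""
    rate_a = clicks_a / impressions_a
    rate_b = clicks_b / impressions_b
    pooled = (clicks_a + clicks_b) / (impressions_a + impressions_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / impressions_a + 1 / impressions_b))
    if se == 0:
        return 1.0  # no variation at all: no evidence of a difference
    z = (rate_a - rate_b) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical split: 120 clicks / 2,000 impressions vs 90 clicks / 2,000.
p = two_proportion_p_value(120, 2000, 90, 2000)
print(f"p-value: {p:.3f}")
```

A p-value below a threshold such as 0.05 would suggest the difference is unlikely to be random noise; at the 1,000-impression scale mentioned above, small differences will often not reach significance, which is exactly why more data gives a clearer outcome.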
It’s also recommended to keep records of your experiments: as you continue testing in your account, good records of past tests make it easier to prioritise what to test next. What’s more, standardising insights from past experiments helps you tap into them for your next campaign, and track and benchmark the value of your efforts.
If you want to know more about experiments and how they can be used to improve your Google Ads campaign performance, please get in touch and we can advise you on the best approach. We have been experts in Google Ads / AdWords advertising since 2002 and can use our experience to help your advertising get the best possible results.