How to create a product data experiment that improves sales performance on Google Shopping
Running A/B tests can easily double your impressions and clicks on Google Shopping.
And you can set up a successful experiment in under five minutes...
Running experiments is the most reliable way to improve the performance of your product data marketing. By rolling out successful tests and cancelling unsuccessful ones, you steadily improve the performance of your feeds across key marketing channels over time.
Over the years, and across many thousands of experiments, we have helped customers achieve an average increase of 79% in impressions and 109% in clicks on Google Shopping. In this article, we demonstrate how we create those successful experiments, in less than five minutes, using the Experiments module in the Intelligent Reach platform.
How is the Data Connector used to introduce performance data to advanced data feed experiments?
When a client is onboarded to the Intelligent Reach platform, we try to bring in as many data sources as we can, so that we can run experiments and innovate around the data. We always aim to include product margins, as well as prices and real-time stock quantities, which are essential for creating local ads and integrating with marketplaces. We also try to bring in detailed performance data, which we can collect from Google and eBay with our own data connector.
NB. With our new open data connector, customers will be able to bring in any kind of performance data from any source, including Google Analytics.
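As a rough illustration of what the data connector makes possible, here is a minimal sketch of joining a product feed with externally collected performance data, keyed by product ID. The field names and SKUs are invented for the example, not the platform's actual schema.

```python
# Hypothetical sketch: enriching a product feed with performance data
# pulled from an external source, keyed by product ID.

feed = {
    "SKU-001": {"title": "Skinny Jeans", "price": 39.99, "stock": 12},
    "SKU-002": {"title": "Slim Fit Jeans", "price": 44.99, "stock": 0},
}

performance = {
    "SKU-001": {"impressions": 5400, "clicks": 157, "cost": 33.0, "revenue": 270.0},
}

def enrich(feed, performance):
    """Attach performance metrics to each feed item, defaulting to zeros
    for products with no recorded activity on the channel."""
    empty = {"impressions": 0, "clicks": 0, "cost": 0.0, "revenue": 0.0}
    return {
        sku: {**attrs, **performance.get(sku, empty)}
        for sku, attrs in feed.items()
    }

enriched = enrich(feed, performance)
```

Once the two sources are merged like this, any product can be analysed on both its feed attributes and its channel performance at the same time.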
How to identify and analyse under-performing products?
Before we run any experiments, we analyse the inbound data. We do that in a couple of different ways:
First, because of the data connector, we can assess any individual product all the way down to the individual size, to see how well it's performing. For example, if a product was clicked 157 times, and those clicks resulted in 11 orders at a cost of £33 and generated £270 of revenue, that would be a cost per order of £3 and a ROAS (return on ad spend) of roughly £8.18. Which is another way of saying, every pound spent on this product is returning about £8.18 of revenue. This gives us an immediate efficiency metric that we can use to determine the success of any experiments we perform on this one product. But how do we do that for 22,000 products? What if we have 220,000 products in 10 different markets, and 10 different channels in each of those markets?
The answer is that you just can't. It isn’t possible to manage the performance of product data on a product-by-product basis. Changes must be managed at scale. And to do that you need to know how each change is going to affect the overall performance of your campaigns. You need to start running experiments…
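The per-product efficiency calculation from the worked example above can be sketched in a few lines. The numbers match the example (157 clicks, 11 orders, £33 spend, £270 revenue); the function and field names are illustrative.

```python
# Illustrative per-product efficiency metrics, as in the worked example.

def efficiency(clicks, orders, cost, revenue):
    return {
        "cost_per_order": cost / orders if orders else None,
        "conversion_rate": orders / clicks if clicks else None,
        "roas": revenue / cost if cost else None,
    }

m = efficiency(clicks=157, orders=11, cost=33.0, revenue=270.0)
# m["cost_per_order"] == 3.0 and m["roas"] is roughly 8.18
```

The calculation is trivial for one product; the point of the experiments framework is to apply it across hundreds of thousands of products at once.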
What kind of experiments should we run?
The platform allows us to display all the products in a target group to understand why they are not visible on a specific channel. Say, for example, we have invisible products on Google Shopping, and we notice that they come from the category ‘skinny jeans’ and have the category name in their product title. The hypothesis might be that people aren’t searching on ‘skinny jeans’ and that we should change the Title.
We can create an experiment to include any type of data, from any feed source. In this instance, the solution might be as simple as changing the Title from 'skinny jeans' to 'slim fit jeans'. Alternatively, we could rebuild the Title entirely from scratch, using attributes that we know people are searching for. That could include brand, base name, size, colour and, perhaps, material.
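Rebuilding a title from attributes, as described above, might look something like this sketch. The attribute names, template order, and example product are assumptions for illustration.

```python
# A minimal sketch of rebuilding a product title from searched-for
# attributes. Attribute names and ordering are illustrative assumptions.

def build_title(product, template=("brand", "base_name", "colour", "material", "size")):
    """Concatenate whichever template attributes are present, in order."""
    parts = [str(product[attr]) for attr in template if product.get(attr)]
    return " ".join(parts)

jeans = {
    "brand": "Acme",
    "base_name": "Slim Fit Jeans",
    "colour": "Indigo",
    "material": "Stretch Denim",
    "size": "32W 30L",
}
title = build_title(jeans)
# "Acme Slim Fit Jeans Indigo Stretch Denim 32W 30L"
```

Because missing attributes are simply skipped, the same template can be applied safely across an entire category.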
How to set up and manage an experiment
Once we have a hypothesis that we want to test, we are ready to create an experiment. In this instance, we believe product title is the main problem, so we select Product Title Optimisation as the experiment type. Assuming performance data is available, via the Data Connector module, we select A/B test to create two equally performing groups of products. This creates an experiment that is perfectly balanced, from the start, and removes any accidental bias. If we want to have more than two variants at the same time, we would select MVT (Multivariate Test). We then set the start and end dates and make the experiment live.
NB. An experiment like this can be created in just a few minutes because the platform is doing most of the work…
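One simple way to create two "equally performing" groups, as the A/B setup above requires, is a greedy partition: rank products by the chosen baseline metric, then always assign the next product to the lighter group. This is an illustrative approach under assumed field names, not the platform's actual splitting algorithm.

```python
# Greedy balanced split: sort by the baseline metric, always add the
# next product to whichever group currently has the smaller total.
# An illustrative sketch, not the platform's actual algorithm.

def balanced_split(products, metric="clicks"):
    ranked = sorted(products, key=lambda p: p[metric], reverse=True)
    groups, totals = ([], []), [0, 0]
    for product in ranked:
        i = 0 if totals[0] <= totals[1] else 1
        groups[i].append(product)
        totals[i] += product[metric]
    return groups

products = [
    {"sku": "A", "clicks": 400},
    {"sku": "B", "clicks": 310},
    {"sku": "C", "clicks": 300},
    {"sku": "D", "clicks": 200},
]
group_a, group_b = balanced_split(products)
# group totals: 600 clicks vs 610 clicks
```

Starting both groups from a near-identical baseline is what removes accidental bias: any divergence during the test period can then be attributed to the change being tested.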
While the experiment is running, the platform automatically records every key metric, including impressions, clicks, sales, revenue, order value and ROAS, for every variation of every product.
We can set the reporting metrics to anything we like, at any time. As this is an experiment to improve visibility and clicks, we might set the report to show impressions and clicks for the two groups at the beginning. Because all metrics are recorded, we can pivot to another data set at any time. For example, if we want to see the impact of the experiment on sales performance, we could switch the reporting to show revenue and ROAS for the period.
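Because the raw metrics are all recorded, derived metrics can be computed for any period after the fact. A hedged sketch of that idea, with invented record and field names: aggregate per-group totals, then project out whichever metrics the report needs.

```python
# Sketch of pivotable reporting: raw metrics are aggregated per group,
# and derived metrics (CTR, ROAS) are computed on demand.
from collections import defaultdict

def group_report(records, metrics):
    totals = defaultdict(lambda: defaultdict(float))
    for r in records:
        for field in ("impressions", "clicks", "cost", "revenue"):
            totals[r["group"]][field] += r[field]
    report = {}
    for group, t in totals.items():
        derived = {
            "ctr": t["clicks"] / t["impressions"] if t["impressions"] else 0.0,
            "roas": t["revenue"] / t["cost"] if t["cost"] else 0.0,
            **t,
        }
        report[group] = {m: derived[m] for m in metrics}
    return report

records = [
    {"group": "A", "impressions": 1000, "clicks": 50, "cost": 20.0, "revenue": 150.0},
    {"group": "B", "impressions": 1000, "clicks": 80, "cost": 25.0, "revenue": 240.0},
]
visibility = group_report(records, ["ctr"])          # start with visibility
sales = group_report(records, ["revenue", "roas"])   # pivot to sales later
```

The same recorded data answers both questions; only the projection changes.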
How to apply metrics at the beginning of a product data feed experiment?
Reporting metrics allow us to judge the success of an experiment. For a visibility test we might select impressions, clicks and CTR, because we are hoping the experiment will generate traffic. Other experiments might have more to do with ramping up sales, or ROAS, and might require a different set of metrics.
The platform records every available metric, for every experiment, so you can re-run a report with any metrics at any time.
If our product title experiment is a success, for example, and our invisible products start generating impressions, we might want to switch the reporting metrics to sales, turnover and ROAS.
Click on reporting at any time to drill down into the performance data of an experiment. If it has been successful, we will probably want to make the change permanent. If it hasn't, then it's time for another experiment.
How to roll out a successful experiment to make a permanent change
If an experiment is not successful, the test period will expire (or the experiment can be cancelled manually) and the feeds stay as they were. More importantly, when an experiment is successful - in this example we had 12% more orders, 18% more sales and 50% more ROAS - we make it permanent. To do this, we simply click on the 'Make Permanent' button to roll out the change.
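The success figures quoted above are simple relative uplifts of the test group over the control. A small helper, with invented example numbers chosen to reproduce the 12% / 18% / 50% result:

```python
# Percentage uplift of the test group over the control, per metric.
# The control/test figures below are invented for illustration.

def uplift(control, test):
    return {
        metric: round((test[metric] - control[metric]) / control[metric] * 100)
        for metric in control
    }

control = {"orders": 100, "sales": 5000.0, "roas": 6.0}
test = {"orders": 112, "sales": 5900.0, "roas": 9.0}
result = uplift(control, test)
# {'orders': 12, 'sales': 18, 'roas': 50}
```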
The Award-Winning Platform, Loved by Our Customers...
Get in Touch
We are passionate about high quality product data. So passionate, in fact, that we offer a completely free feed review to highlight the ways you could improve your feed as quickly and as easily as possible.