How do you choose which observational audiences to apply to your search campaigns?
The usual method is to consider the industry or topics most relevant to the product you’re advertising, and to pick audiences related to those.
That also seems to be the main method used for Google’s ‘audiences ideas’ feature.
But there’s a lot missing from this approach…
First, if we’re looking for high-quality (high-converting) audiences, are we right to assume that’s what we’ll find among users that Google has marked with a relevant interest? Are they more likely to convert than a random user whom we’re targeting by the intent expressed through their search?
There are a few things that could count against that assumption…
Are relevant audiences most likely to win?
- We have reason to think that the audience member has spent some time browsing around the relevant topic… and their purchases-per-action ratio probably falls as the strength of the ‘audience signal’ they are broadcasting to Google increases.
To put this another way: assume the user is in the market for a single purchase (usually the best-case scenario)… then the more that user browses/watches/clicks around the topic, the less strongly we can tie each online action to a likely purchase (a rough numeric sketch follows this list)
- Or of course, they may already have made whatever purchase they were browsing for. Doing so would not remove them from the in-market or affinity audience in question
- The user who has been browsing around the topic either long-term (Affinity) or short-term (In-Market) may well be more discerning than the average user who shows the relevant – search-defined – intent. That too could easily downweight their conversion rate
- Finally – Google’s audience allocation is fairly hit and miss (see what audiences Google has earmarked you for under adssettings.google.com/)
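To make the dilution point from the first bullet concrete, here is a minimal sketch with made-up numbers: if a user is in the market for at most one purchase, then the more topic-related actions they take, the weaker the link between any single action and an eventual purchase.

```python
# Rough illustration (made-up numbers) of the dilution argument above: a user
# who will make at most one purchase spreads that purchase signal across every
# topic-related action they take.

def purchase_signal_per_action(expected_purchases: float, topic_actions: int) -> float:
    """Expected purchases attributable to each individual topic-related action."""
    return expected_purchases / topic_actions

for actions in (1, 5, 20, 100):
    strength = purchase_signal_per_action(expected_purchases=1.0, topic_actions=actions)
    print(f"{actions:>3} topic actions -> {strength:.3f} expected purchases per action")
```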
In fact, the winning audiences are often not the most straightforwardly relevant ones. In-Market: Travel, for example, often trounces average performance even in unrelated campaigns, thanks to its correlation with disposable income.
Are the most likely to win, the most useful to add?
Next, consider whether ‘winning’ audiences are what you should actually be looking for.
If what we want is usable performance patterns, aren’t underperforming segments just as important to isolate as overperforming ones (if not more so)?
In optimisation generally, more of our attention goes towards identifying and cutting waste than towards pushing the already-successful elements harder.
So don’t neglect to add the less-promising audiences too, in the hope of finding those profit-draining segments.
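As a sketch of what that waste-hunting might look like in practice (the audience names and figures below are entirely hypothetical), you could pull an audience report and flag segments whose cost per conversion sits well above the campaign average:

```python
# Hypothetical audience report rows: (audience name, cost, conversions).
rows = [
    ("In-Market: Travel", 420.0, 21),
    ("Affinity: Home Decor Enthusiasts", 380.0, 9),
    ("In-Market: Hair Care Products", 510.0, 17),
]

total_cost = sum(cost for _, cost, _ in rows)
total_conv = sum(conv for _, _, conv in rows)
avg_cpa = total_cost / total_conv  # campaign-average cost per conversion

# Flag segments whose CPA is well above the average; these are candidates for
# bid-downs or exclusion, rather than 'winners' to push harder.
for name, cost, conv in rows:
    cpa = cost / conv if conv else float("inf")
    if cpa > avg_cpa * 1.3:  # 30% worse than average -- an arbitrary threshold
        print(f"Review / exclude: {name} (CPA {cpa:.2f} vs average {avg_cpa:.2f})")
```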
The audience pay-off
It is also worth taking a step back to consider why you’re looking for patterns among audiences in the first place.
If you are using Smart Bidding, the algorithms already have access to in-market and affinity audiences as a signal, whether you’ve added them to your campaign/ad group or not, and will attempt to weight bids accordingly. (Though NB your remarketing and customer match audiences do give the algorithms genuinely new signals to work with once added).
If you are using manual bidding (or maximise clicks) then you can of course adjust bids up and down based on the performance patterns that emerge.
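As a rough sketch of how that might work under manual bidding (the conversion-rate figures are illustrative, and scaling the modifier by relative conversion rate is just one reasonable heuristic), you could derive an audience bid adjustment like this:

```python
# One simple heuristic (not the only option) for turning observed audience
# conversion rates into manual-bidding adjustments: scale the bid modifier by
# the audience's conversion rate relative to the campaign average, clamped to
# a sensible range. All numbers here are illustrative.

def bid_adjustment(audience_cvr: float, campaign_cvr: float,
                   floor: float = -0.50, cap: float = 0.50) -> float:
    """Return a bid modifier, e.g. 0.20 for +20%, clamped to [floor, cap]."""
    raw = (audience_cvr / campaign_cvr) - 1.0
    return max(floor, min(cap, raw))

print(f"{bid_adjustment(audience_cvr=0.050, campaign_cvr=0.030):+.2f}")  # +0.50 (hits the cap)
print(f"{bid_adjustment(audience_cvr=0.018, campaign_cvr=0.030):+.2f}")  # -0.40
```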
If you’re using neither… then these performance patterns only gain practical value once you act on them… and acting on them means either the ruthless action of excluding the underperforming audiences you find, or the more extreme action of creating campaigns that target only those audiences that have proven to outperform the average.
More likely to click vs more likely to convert
At play here is the principle of ‘more likely to click vs more likely to buy (having clicked)’…
It comes up in other areas too.
In this old blog post I gave the example of the advertiser selling hair curlers.
A common move for that advertiser would be to target only the female demographic (exactly what the client was suggesting when I came across this scenario).
That move may be intuitive… but provided our keyword targeting is doing its job, it may also be counterproductive…
If men are searching on ‘buy hair curlers’, then those men are telling us that they are interested in buying a hair curler. At that point, the fact that they are men is no longer a disqualifying indicator.
A randomly selected woman may be far more likely to search, to click and to buy a hair curler, than a randomly selected man…
But once you switch to comparing men and women within the set of users who have clicked – the picture looks very different, with the less likely segment often winning on conversion rate (as it did in this scenario).
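A quick worked example (with made-up numbers) shows how these two measures can point in opposite directions: women here are far more likely to search and to click, but among users who have already clicked on a purchase-intent query, men convert at a higher rate.

```python
# Made-up figures for the hair-curler scenario: click-through behaviour and
# conversion-among-clickers can easily diverge between segments.
segments = {
    #          searches, clicks, purchases  (illustrative only)
    "women": dict(searches=10_000, clicks=800, purchases=40),
    "men":   dict(searches=1_000,  clicks=60,  purchases=4),
}

for name, s in segments.items():
    ctr = s["clicks"] / s["searches"]       # likelihood of clicking
    cvr = s["purchases"] / s["clicks"]      # conversion rate among clickers
    print(f"{name}: {ctr:.1%} click rate, {cvr:.1%} conversion rate per click")
```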
Finally, remember that there can also be a difference in performance levels at the next step along the profit calculation… Conversions themselves may be more or less valuable depending on which user segment they come from.
In my course unit on audiences and demographics, I make this point in relation to age range targeting… which often comes into play for B2B lead generation, when guarding against clicks (and ‘conversions’) from job seekers as opposed to genuine prospects.
A user from the 18-24 year-old age range may be relatively likely both to click and to send a contact form (tracked as a goal)… but be disproportionately likely to be sending that form for unprofitable reasons.
There is now also the relatively new Conversion Value Rules feature, which begins to address this exact scenario: conversion values can be weighted up or down based on the location, device type or audience the converter belongs to.
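A minimal sketch of the weighting logic this enables (not the Google Ads interface itself; the segments and multipliers below are hypothetical):

```python
# Hypothetical value multipliers by converter segment -- e.g. down-weight form
# fills from an age range dominated by job seekers, up-weight a high-value audience.
VALUE_MULTIPLIERS = {
    ("age_range", "18-24"): 0.5,
    ("audience", "Customer Match: existing clients"): 1.25,
}

def adjusted_value(base_value: float, converter_segments: list[tuple[str, str]]) -> float:
    """Scale a conversion's recorded value by the multipliers its segments match."""
    value = base_value
    for segment in converter_segments:
        value *= VALUE_MULTIPLIERS.get(segment, 1.0)
    return value

print(adjusted_value(100.0, [("age_range", "18-24")]))                            # 50.0
print(adjusted_value(100.0, [("audience", "Customer Match: existing clients")]))  # 125.0
```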