Three 'A's
Two concepts have been central to the recent evolution of Google Ads: aggregation and automation.
We are encouraged to give increasingly free rein to the algorithm, to let it determine and act on the important distinctions within our activity.
At its extreme, this approach gives rise to the Hagakure Method, with its minimal segmentation, and maximal faith in the algorithm.
As well as being explicitly endorsed by Google, aggregation and automation are baked into every corner of the platform: from the changes to text ads and match types, through the latest campaign types, to the selection of metrics and even the latest interface design.
But there is a third ‘A’, and it is a competing virtue in PPC: ‘articulation’.
Not the ‘verbal expression’ kind (though that’s useful too) but – as the Oxford dictionary puts it – ‘the state of having a joint or connection that allows movement’.
In our accounts, it’s about how nimbly we can adjust specific parts of our activity – isolating them for analysis, targeting, bidding or state changes.
The extent of this ability runs counter to aggregation, while our willingness to use it cuts against our commitment to automation.
When we opt for a portfolio bid strategy (as opposed to a campaign-level one), we’re setting ourselves up for more aggregation and less articulation.
When we settle for campaign level targets instead of ad group level targets – again, we’re choosing aggregation over articulation.
Structurally aggregated accounts have less articulation by their nature.
So far so uncontroversial.
The crucial debate starts with the question of to what extent articulation and – in contrast – aggregation are, in fact, beneficial in our PPC efforts.
Pros and Cons
The pro-aggregation camp will argue that, with Smart Bidding, it’s a bad idea to make fine-tuning changes to our activity. Performance disparities should be left for the algorithm to identify and act on (that’s its job) – and the more we try to cook the same meal ourselves, the less effectively the algorithm can work.
This is the line that Google takes – in its actions as well as its messaging.
And certainly there is a risk of disrupting the algorithm’s process with too many manual changes.
Automate with Caution
So the question largely comes down to how good a job the algorithm does (for us) without intervention, and how much benefit we can add by ‘interfering’.
At present there are still plenty of reasons to question the competence of automation in Google Ads.
- PMax often works well enough to pacify the mob who are always ready to bemoan their loss of control (guilty). But I haven’t heard many argue that it works better than the old combination of search and Smart Shopping.
- The move to RSAs did not produce any performance benefit vs ETAs. (The often-cited increase in volume is doubtful for reasons we won’t go into here, but even if accurate, it would be one among many cases of Google enforcing a trade of quality for quantity.)
- Most importantly, Google’s Smart Bidding strategies often make bad decisions. I don’t imagine it’s easy to configure an algorithm that always makes optimal moves in such a complex system… but the fact is, it sometimes gets things badly wrong.
Usually we’re left to infer those poor decisions from the outputs we can see reflected in our changing metrics, but in Google’s explicit recommendations we sometimes see them writ large:
Look closely at the expected weekly increase in conversion value and cost from this recommended change (I can assure you the campaign did not have a target ROAS under 1).
If Google is recommending a change like this, we can be fairly sure it would enact it left to its own devices. (So please, don’t enable automatically applied recommendations when it comes to target adjustment.)
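The arithmetic behind that judgement is simple enough to sketch. The figures below are purely illustrative (not taken from the recommendation in question), but they show the sanity check worth running on any target-adjustment card: the ROAS implied by the *incremental* traffic is just the expected extra conversion value divided by the expected extra cost.

```python
# Hypothetical sanity check on a target-adjustment recommendation.
# delta_value and delta_cost stand for the "expected weekly increase"
# figures shown on the card (illustrative numbers, not real data).

def incremental_roas(delta_value: float, delta_cost: float) -> float:
    """ROAS implied by the recommendation's incremental traffic."""
    return delta_value / delta_cost

# A card promising +£500 conversion value for +£1,000 extra cost
# implies an incremental ROAS of 0.5 -- well below any sane target.
implied = incremental_roas(500, 1000)
target_roas = 3.0  # assumed campaign target, for illustration only
print(implied, implied < target_roas)
```

If the implied incremental ROAS sits below the campaign’s actual target, the recommendation is asking you to buy traffic the strategy is supposed to be avoiding.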
Hold the line
We should not be too quick to point to any ‘above-CPA / under-ROAS’ activity as a failure on the part of the algorithm. As Brad Geddes explains with typical clarity in this video, target strategies are in the business of ‘averaging’ to a certain CPA. Above-CPA traffic is just as important a factor in that effort as below-CPA traffic.
But then… What kind of CPA does it end up averaging in practice?
If it is achieving our target CPA or ROAS – then the portion of activity that sits above the average is entirely forgivable…
But if it’s falling short of our target – long term – then it’s fair to question any particularly inefficient portion of activity that the algorithm is not dealing with on its own initiative.
And very often, it does fail to meet our target CPA, while continuing to spend our budget quite happily.
Again, that’s understandable. CPA/ROAS targets never promised to be ceilings or floors – but it does mean, in turn, that feeding the algorithm the desired target alone is not enough.
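The ‘averaging’ point is easy to make concrete. In the toy sketch below (all figures hypothetical), one segment converts at £20 and another at £40; whether the £40 traffic is ‘forgivable’ depends entirely on whether the blend lands on target.

```python
# Toy illustration of target-CPA "averaging" (all figures hypothetical).
# Each segment is (cost, conversions); the bid strategy is judged on
# the blended CPA, not on any individual segment.

def cpa(cost: float, conversions: float) -> float:
    return cost / conversions

def blended_cpa(segments: list[tuple[float, float]]) -> float:
    total_cost = sum(cost for cost, _ in segments)
    total_convs = sum(convs for _, convs in segments)
    return total_cost / total_convs

segments = [(600, 30), (400, 10)]   # CPA £20 and CPA £40 respectively
print(blended_cpa(segments))        # blend of £25: fine against a £25
                                    # target, a long-term miss against £20
```

The same £40 segment is unremarkable under a £25 target and a legitimate grievance under a £20 one – which is exactly why the blend, sustained over time, is the thing to judge.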
We have to do more… and that means taking a step back into active management.
But be ready to step in
Most obviously, there’s ‘target tweaking’ – raising or lowering targets to guide the algorithm towards greater expansion or more conservative bidding, according to performance at the time.
And that kind of ‘guidance’ can be better directed by focussing it more precisely on the areas that are lagging…
For example: yes, mobile traffic is a signal in Smart Bidding’s toolkit, but if mobile consistently produces a worse CPA than desktop (with the average CPA also being unacceptable), then the algorithm needs a shove in that particular area.
The same logic applies to ad groups, hence the importance of ad group-level targets, and any other unit of our activity subject to differential performance levels.
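One simple way to size that ‘shove’ – a heuristic of my own framing here, not Google’s logic – follows from the fact that, under Target CPA, a device bid adjustment scales the CPA target applied to that device. The gap between the device’s actual CPA and the target then suggests an adjustment (illustrative numbers below):

```python
# Hypothetical heuristic for nudging a lagging device (not Google's
# own logic). Under Target CPA, a device bid adjustment scales the
# target on that device, so the performance gap suggests a size.

def device_adjustment(device_cpa: float, account_target: float) -> float:
    """Fractional adjustment, e.g. -0.25 == a -25% device bid adjustment."""
    return account_target / device_cpa - 1.0

# Mobile converting at £40 against a £30 target (illustrative figures):
adj = device_adjustment(40.0, 30.0)
print(f"{adj:+.0%}")  # a -25% adjustment tightens the mobile target
```

The same calculation applies to an underperforming ad group against its own target – the point is simply that the correction is aimed at the lagging unit rather than the whole campaign.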
All of the above is part of the algorithm’s job – and yes, it has access to a multitude of signals and real-time auction bidding – but if (as is often the case) we can see that it is not using those many fancy signals to do the job we need it to do, then it is our responsibility to make those changes ourselves.
The better articulated our accounts (and the more surgical our practices) the more precisely we can direct those changes.
Trust Issues
Google does not trust us to fine-tune our activity.
I had a striking view of that fact (recounted in the article linked above) when a Google product manager told me directly that they had experimented with RSA asset-level data and found that ‘advertisers were making sub-optimal decisions based on that data’ – so they decided not to roll it out.
Performance Max – the favourite child of aggregation and automation – again starkly illustrates Google’s reluctance to give advertisers the tools to fine-tune their activity.
But there are reasons to use the capabilities we still have. And until the algorithm is perfectly suited to its aims, and those aims are perfectly aligned with ours, there always will be.
The first of those two may happen, though it hasn’t yet.
The second will not.