I have mixed feelings about audits.
On one hand there’s something refreshing about opening up an account and seeing that alien landscape…
A completely fresh set of campaigns, created by different minds, with different aims, making different assumptions…
It can be galling, bemusing or edifying, but in any case it’s more interesting than opening up your own accounts yet again.
On the other hand it can also be a bit like looking up a very tall mountain you’ve been asked to climb.
The very first thing I check when I’m opening an account for an audit, is just how many active campaigns we’re dealing with here…
If I scroll to the bottom of the page and I’m still seeing green dots to the left, then I’ll be looking with trepidation at the little number at the bottom right to see how much more of the mountain sits below the waterline.
These days I tend to be auditing high-spend accounts, which magnifies both the good and the bad above… and the audits themselves are often quite hefty.
(The last one was a 42-pager, and rounded off with a summary of recommendations which looked like this…)
Still – varied in style and quality as the accounts are – some issues are far more common than others.
Here’s a look through five of the most commonly encountered…
Some metrics are better than others for guiding you to a profitable outcome.
There’s a nice quote on this, along the lines of ‘you get what you measure’… (origin disputed / wisdom undisputed)
With that in mind, it makes sense to measure something as close to your ultimate success as possible, as accurately and comprehensively as possible…
My audits usually kick off with a section titled ‘measurement’, outlining the ways in which this could be done better.
Skipping over the possibility that conversions aren’t being tracked at all (if that’s the case, you know what to do)…
The recommendation I make most often is to apply conversion values where none are in place.
This would typically be for a lead generation advertiser, where the tracking often stops at ‘enquiry’, with no clear way to ascribe an accurate value to each lead as it comes in…
But there’s usually some way to differentiate those enquiry conversions
At the most basic level, the advertiser might be tracking these different routes to (potential) enquiry:
• Click to call
• Click to email
• Contact form submission
We can then set different conversion values for each enquiry type, using the best estimates we can muster, for the respective rates and values of ultimate conversion… (e.g. $20, $40, $160 respectively)
This gives the account manager access to the ‘conversion value’ metric in addition to conversions… and a more precise evaluation of return on ad spend, both for manual analysis and for smart bidding algorithms to use as their guiding light.
Remember – accuracy is not essential here. If you can differentiate value per conversion in a way that’s even slightly better than having a completely flat value across all conversions, then you can align the algorithm’s aims better with your own.
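The arithmetic behind those estimates can be sketched very simply: the expected value of an enquiry is its estimated lead-to-sale rate multiplied by the estimated average sale value. All the rates and deal values below are hypothetical placeholders, chosen only to reproduce the example figures above; you’d substitute numbers from your own CRM data or best guesses.

```python
# Minimal sketch: expected value of one enquiry = close rate x avg sale value.
# All figures below are hypothetical placeholders, not real benchmarks.

def estimate_conversion_value(lead_to_sale_rate, avg_sale_value):
    """Expected value of a single enquiry of this type."""
    return lead_to_sale_rate * avg_sale_value

enquiry_types = {
    # enquiry type: (estimated lead-to-sale rate, estimated avg sale value)
    "click_to_call":   (0.02, 1000),   # -> $20
    "click_to_email":  (0.04, 1000),   # -> $40
    "form_submission": (0.08, 2000),   # -> $160
}

for name, (rate, value) in enquiry_types.items():
    print(f"{name}: ${estimate_conversion_value(rate, value):.0f}")
```

Even if every input is a rough guess, the relative spread between enquiry types is what gives the algorithm something useful to optimise towards.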
If you can segment those enquiries in the process of signup, then you can go a few steps further, and assign differentiated values based on ‘lead score’.
This may simply be based on the selection in a dropdown on the enquiry form.
Or it may be more complex, taking other interactions into account (or even using offline conversions reimported, based on actual lead progress…)
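In its simplest form, that lead-score approach is just a lookup table from score bucket to conversion value. The buckets and values here are purely illustrative, as is the fallback behaviour for unrecognised buckets:

```python
# Hypothetical mapping from a 'lead score' bucket (e.g. a dropdown choice
# on the enquiry form, or a CRM stage) to a conversion value.
# Buckets and values are illustrative only.

LEAD_SCORE_VALUES = {
    "low": 10,
    "medium": 50,
    "high": 250,
}

def value_for_lead(score_bucket, default=10):
    """Return the conversion value to record for a given lead-score bucket.

    Falls back to the lowest value for unrecognised buckets, so a mis-keyed
    score never inflates the reported value.
    """
    return LEAD_SCORE_VALUES.get(score_bucket, default)
```

The same table could later be replaced by values reimported from actual lead progress, without changing how the rest of the measurement setup works.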
Conversion values are a useful signal for optimisation – and while more granular differentiation is better, even roughly-assigned buckets with heavily estimated values can be a long stride forward.
The measurement section of my audits also often sees this sub-heading:
- Add Micro Conversions
Sometimes there are desirable user interactions that don’t quite make the cut as ultimate goals, but do indicate a more-than-averagely successful session. e.g. a newsletter signup or brochure download.
It’s usually worth recording these as goals (especially if you’re lacking in solid conversion data) as extra signals of traffic quality, to provide a further layer of data for analysing campaign performance, and potentially to build relevant audiences for retargeting.
And if you’re concerned about diluting your ‘proper’ conversion value with these, then just keep them out of your general conversion column by turning them into ‘secondary’ goals.
It’s nice to find a bit of low-hanging fruit…
Spend wasted on traffic from outside your geo targets used to be one of the first things we’d check, before Google changed the way the locations report worked in late 2020.
Gone now are the joys of simply toggling to ‘user location’ to see a clear list of stats relating to the countries in which your clickers clicked.
Now we have ‘targeted locations’ and ‘matched locations’ (with more elusive and less distinct definitions than might be expected).
Targeted locations are simply the locations you have explicitly targeted ✅.
Matched locations is a list showing to which of your targeted locations each user (impression/click…) has been assigned. This could be on the basis of their actual presence in that location, or their deemed interest in it.
Matched location potentially goes more granular than targeted, so if you’re targeting e.g. Wales, within Matched Locations you can zoom in to see which specific location in Wales the user has been assigned to (down to county, city and sometimes postcode).
Again, that assignment can be either through presence or interest, but what Matched Locations never shows is a location outside your targets.
Those users who sit on the other side of the world (whose clicks we do NOT want to buy) are sucked into one of your targeted locations for the purposes of the Matched Locations report. (This was a particularly sneaky and under-appreciated change in the way location reporting works.)
The good news is, you still see clicks and spend from outside of your targeted countries in Google Ads…
But you now have to head to the Reports section to find it, and create a table or chart with ‘user location – country’ (or ‘city’/’region’) as the dimension.
The results of this report have been a common feature in my recent audits.
And the best remedy for extraneous clicks is (as it always was…) the location option to target ‘people in or regularly in’ your targeted locations (which, at the time of writing, has just arrived as an option in Performance Max).
Now we’re getting towards general optimisation… but an aspect of optimisation that is both hugely consequential and underused.
The key point here is: spend more on what’s working; less on what isn’t.
This includes adjusting your CPA/ROAS targets on one hand, to allow more expansive bidding (favouring volume over efficiency) where you are hitting or exceeding your targets, and more conservative bidding where spend is under-rewarded…
And on the other hand, reallocating budgets to favour high-ROAS spend over low-ROAS spend.
It’s a simple idea, and it’s generally well understood (though the vexed question of how hands-on we should be with smart bidding campaigns makes it a little less straightforward…) but audits often uncover some clear-cut cases where action is needed on this principle.
A related point to the last. Lost Impression Share is a key indicator of how much capacity a campaign has for expansion…
Identifying a high-performing campaign is one thing… but it only makes sense to push that campaign harder to bring in ‘more of the same’, if there is ‘more of the same’ to be had.
If there is a substantial value for impression share lost to budget, then you can expect a budget increase to bear some fruit.
If there is impression share lost to rank, then you can expect an increase in bid aggression to have some effect.
So remember to check IS / lost IS as a gauge for what you can usefully do to expand a campaign.
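As a back-of-envelope gauge (the figures here are hypothetical), you can estimate the headroom directly from the two numbers: if your impression share is 40% and IS lost to budget is 35%, then roughly 35/40 more impressions are available from budget alone, all else being equal.

```python
# Back-of-envelope headroom estimate from impression share figures.
# Figures are hypothetical examples.

def headroom_multiplier(impression_share, is_lost_to_budget):
    """How much extra volume (as a multiple of current impressions) a bigger
    budget could plausibly unlock, all else being equal."""
    return is_lost_to_budget / impression_share

extra = headroom_multiplier(0.40, 0.35)
# 0.875 -> up to ~87% more impressions available from budget alone
```

The same arithmetic applies to IS lost to rank, substituting bid aggression for budget as the lever.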
(Less intuitively, IS lost to budget is also an indication of how much you can refine an inefficient campaign without lowering its spend. Watch this short video on taking advantage of impression share lost to budget.)
Maximise Clicks is pretty good at doing what it promises to do.
The trouble is, clicks aren’t often what you should really be looking to maximise with your Google Ads activity… and the pursuit of maximum clicks quickly runs counter to those things that we actually value…
Because this strategy will (as promised) maximise clicks, regardless of where those clicks came from – and crappy clicks are often cheaper clicks.
For example, we often see poorly-converting traffic from countries outside of our geo targets (if we haven’t refined our location settings – see point 2 above…) Those clicks are often cheaper than average, but not often cheap enough to be worthwhile.
Under Maximise Clicks – the algorithm would have every incentive to go after those clicks preferentially, over the more expensive clicks that are of real value to us.
So – if you have conversion data to work with, Maximise Clicks is almost never a good idea.
Some PPCers use Maximise Clicks as the starting point… Grab some clicks, kickstart conversion data and then either move to manual once they have some evidence for where to raise or lower bids or, more often, move onto a conversion-based smart-bidding strategy.
This is similar to my preferred approach (Manual first, then test a conversion/value based strategy with a view to moving over…) but when you start with Maximise Clicks, you have to accept that no click will be favoured above any other click in phase one, except on the basis of CPC… ‘the cheaper the better’.
To my mind, there are always reasons to favour some clicks above others, which is why I very rarely use Maximise Clicks, and usually recommend a change when I see it in place.
Alongside the behemoth, the worst type of account you can come across in audit mode is one that’s excellently set up.
(Don’t get me wrong… In any other mode, I love a good Google Ads setup, but when it’s your job to dig for gold, discovering that you’re in an already well-worked mine does nothing to raise one’s spirits.)
But – whether it’s your own account or another’s, there is always something you can do to improve performance.
The actions you take – or recommend – won’t always improve performance, but you can identify actions that are more likely to cause an improvement than not. Make a habit of that, and you’ll win over the long term.
And this applies whether you have good data or not.
The more (and better) data you have, the more confident you can be about a greater range of decisions.
But absent good data, you have best practices, informed estimates, ‘percentage plays’ based on your knowledge of PPC, experiences with other accounts, your understanding of the expected baselines, the market, the audience, the product etc.
No account is perfectly set up, and – while it pays to be cautious when things are going well (especially with smart bidding) – there’s always something you can do to eke out some incremental value.