Crazy Bids, Smart Bidding: Under the Hood of Google’s Wildest CPCs

The days of debating whether Smart Bidding is worth using are long behind us.

But while it is now the norm for almost all of us, some inherent issues remain. One of the clearest and most dramatic is ‘wild bids’: clicks with obscene-looking CPCs.

Some bid strategies make these far more likely than others, with Targetless Maximise strategies being the most prone to going ‘all-in’ when they get excited about a particular auction. But wild bidding still occurs with sensible CPA/ROAS targets in place.

The issue isn’t just a general tendency towards inflated CPCs; it’s the appearance of extreme outliers, where we can see (e.g. in the search term report) particular clicks with eye-wateringly high CPCs.

When we see CPCs like this, one obvious reaction is to rein them in by, for example, putting our tCPA campaign into a portfolio in order to add a maximum CPC…

But we know there is a balance to strike here. Low CPCs don’t always equate to low CPAs. Some cheap clicks are cheap for a reason, and likewise expensive ones.


More than this, I have increasingly noticed that with some of the most eyebrow-raisingly high bids, Google actually seems almost preternaturally accurate in its prediction of high value.

A couple of examples.

Exhibit 1:

Note how the CPCs of these terms compare to the average. Then note how high the conversion rates are compared to the average. In most cases, these clicks converted. (That’s an extraordinary thing to say about this set of ultra-high-CPC clicks against the backdrop of a 7% overall conversion rate.)

In the balance of high conversion rate vs high CPC, the very highest-CPC clicks achieved better-than-average CPA. Taken as a whole, the outlying high-CPC search terms that made it into this table roughly matched the average CPA for the whole data set.

Exhibit 2:

This table (from a different campaign, but ordered by cost rather than CPC) gives a clearer comparison between wild bidding and more sober activity. We can identify wild bids in the three highlighted rows.

In the first of these, the two clicks averaged a CPC well over 10 times the all-data average, but achieved a conversion rate of 50% (again more than 10x the average), with CPA balancing out around the average level. In the context of the other evidence, it seems highly likely that one of these clicks had a CPC in the normal range while the other was even higher than the blended average suggests, and that this ‘crazy’ bid is the one that converted.
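To see why a blended average can hide one much higher bid, here is a quick sketch with purely hypothetical numbers (the £2 average CPC and £25 blended CPC are illustrative, not taken from the table):

```python
# Hypothetical illustration: how a blended CPC across two clicks can hide one 'wild' bid.
avg_cpc = 2.00       # assumed campaign-average CPC (illustrative)
blended_cpc = 25.00  # assumed blended CPC of the two clicks (illustrative)
clicks = 2

total_cost = blended_cpc * clicks       # £50 spent across both clicks
normal_click = avg_cpc                  # suppose one click was in the normal range
wild_click = total_cost - normal_click  # the remainder sits on the other click

print(f"Implied CPC of the 'wild' click: £{wild_click:.2f}")  # -> £48.00
```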

In the two other examples (both with absurd-looking CPCs), one converted and one didn’t. Again, an extraordinarily high success rate, but also clear evidence that, obviously, this approach of going ‘all in’ on certain auctions doesn’t always work.

What this ultra-high success rate suggests to me is that Google has at least some signals of auction quality that are extremely strong.

There must be some pattern or patterns (e.g. of user behaviour preceding the auction) capable of giving Google conversion confidence of at least ~50%, even in the context of a campaign whose average conversion rate is under 5%.

And although these snapshots are anecdotal evidence, even on their own they’re pretty compelling to my eye. Both the confidence required to bid as high as observed here, and the conversion rates to justify that confidence, seem extraordinary… (The reason I was prompted to write this blog post is that I’ve seen several other instances of the same thing.)

It is at least some degree of evidence that Smart Bidding might know what it’s doing, even when a cursory glance might suggest otherwise.

But understanding these dynamics can only get us so far. Sometimes it is worth quantifying something like this, which we can usefully do by comparing CPC to ROAS to see whether high bids are – in practice – as harmful as they seem.

The level at which I think it makes most sense to run that comparison is within a given campaign. Are your cheaper clicks in that campaign more or less efficient than your more expensive clicks? What is the correlation?

Since by default it’s not straightforward to see the cost of individual clicks (to separate those that did and did not convert), we’ll need some third variable by which to compare CPC with ROAS.

And one information-rich way to do this (as in the examples chosen above) is by search term. Are our more expensive search terms producing a better or worse CPA than our less expensive search terms?
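If you want to sanity-check this before reaching for a tool, a minimal sketch of the analysis in Python (pandas) might look like the following. The column names (‘Search term’, ‘Clicks’, ‘Cost’, ‘Conv. value’) are assumptions based on a typical Google Ads search term report export, so rename them to match your own download; Spearman rank correlation is my choice here, since CPC outliers are exactly what we’re studying and rank correlation stops them dominating the result.

```python
import pandas as pd
from scipy.stats import spearmanr

# Load a search term report export (column names assumed; adjust to your file).
df = pd.read_csv("search_term_report.csv")

# Drop zero-click / zero-cost rows to avoid divide-by-zero noise.
df = df[(df["Clicks"] > 0) & (df["Cost"] > 0)]

# Per-term CPC and ROAS.
df["CPC"] = df["Cost"] / df["Clicks"]
df["ROAS"] = df["Conv. value"] / df["Cost"]

# Rank correlation: are expensive terms more or less efficient than cheap ones?
rho, p = spearmanr(df["CPC"], df["ROAS"])
print(f"Spearman correlation, CPC vs ROAS: {rho:.2f} (p={p:.3f})")
```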

When it comes to running correlational analysis, I’ll happily outsource the task to an AI companion. (Manus and Genspark are my current favourites for this kind of analysis; extraordinarily, it is now quicker to create a dedicated SaaS tool than it would be to run the analysis by conventional means 🤯.)

Here is one I knocked up for the purpose:

Simply input a download of your search term report and have at it.

I added a filter to address the brand-term skew that blights so many analyses, and in my first test case found that brand terms generated higher ROAS with lower CPCs, while non-brand had the opposite pattern.
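Continuing the sketch above, the brand split amounts to something like this. The brand markers are hypothetical placeholders; substitute your own brand terms:

```python
# Hypothetical brand markers - substitute your own brand terms.
BRAND_MARKERS = ["acme", "acme shoes"]

def is_brand(term: str) -> bool:
    term = term.lower()
    return any(marker in term for marker in BRAND_MARKERS)

df["Segment"] = df["Search term"].map(lambda t: "Brand" if is_brand(t) else "Non-brand")

# Run the same CPC-vs-ROAS comparison within each segment separately.
for segment, seg in df.groupby("Segment"):
    rho, p = spearmanr(seg["CPC"], seg["ROAS"])
    print(f"{segment}: n={len(seg)}, Spearman rho={rho:.2f} (p={p:.3f})")
```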

The output will differ (which is the point…), but what can we conclude from it?

Setting aside valid arguments about the primacy of ROAS, high CPCs in themselves are not a problem. It’s the relationship between cost per click and value per click that we should care about.

The next time you spot a jaw-dropping CPC, before reaching for the CPC cap, ask: Was it worth it?

That’s the question that really matters – and it’s one you can answer with this little tool.

Run the numbers. If high bids are dragging your ROAS down, cut them. But if they’re pulling their weight, let Google cook.

(Want help interpreting your results? That’s what the Level Up community is for 👇🏽)
