When you run ad tests with enough data, certain patterns emerge repeatedly.
Two of them are worth examining in some depth. Both are slightly unintuitive, and both hide insights just under the surface.
When conversion rate and CPC move together
We might intuitively compare ads on CTR and CvR, on the assumption that CPC shouldn't differ meaningfully between them. But CPC is a variable in its own right.
I have seen a growing number of cases where one ad wins comfortably on every key metric except CPC, which then undercuts the advantage of its higher conversions per impression.
This pattern is common because of how smart bidding works.
Max and Target strategies will bid more when they determine a higher probability of conversion from a click. So an ad that genuinely drives more conversions per impression will attract higher bids – because smart bidding is correctly recognising its value.
In principle, this could come close to balancing out (as it does above). If smart bidding is right to bid more aggressively on a better-converting ad, that conversion advantage should show up in CPA or ROAS, and the higher CPC becomes justifiable.
But it doesn’t always play out that way in practice. Smart bidding can overshoot, bidding up beyond what the conversion advantage can actually support.
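To make the arithmetic concrete, here's a minimal sketch with invented numbers (none of these figures come from a real account). CPA is simply CPC divided by CvR, so a CvR advantage and a CPC premium can cancel exactly – or the premium can overshoot:

```python
# Hypothetical figures only: ad B converts better per click, but smart
# bidding has pushed its CPC up. CPA = CPC / CvR.

def cpa(cpc, cvr):
    """Cost per acquisition given cost per click and conversion rate."""
    return cpc / cvr

# Ad A: the baseline.
cpa_a = cpa(cpc=1.00, cvr=0.04)        # roughly 25

# Ad B, fairly bid: CPC up 25%, CvR up 25% -> CPA roughly unchanged.
cpa_b_fair = cpa(cpc=1.25, cvr=0.05)   # roughly 25

# Ad B, overbid: CPC up 50% against the same 25% CvR gain.
cpa_b_over = cpa(cpc=1.50, cvr=0.05)   # roughly 30 - the "better" ad
                                       # now costs more per conversion

print(cpa_a, cpa_b_fair, cpa_b_over)
```

In the fair case the higher CPC is fully justified by the conversion advantage; in the overbid case the same ad, still genuinely better per impression, ends up the more expensive way to buy a conversion.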
This all means comparing ads isn’t as straightforward as it might seem. You’re relying not just on one ad being more effective, but also on smart bidding not getting overexcited about that advantage and hiking bids beyond what it can sustain.
I’ve seen some commentators respond to this pattern by concluding that when Google can see you enjoying a decent return, it reckons it can get away with charging you more per click — as if it’s doing so nefariously.
I think this is the wrong way to look at it.
Yes, Google can see that it can afford to charge a higher CPC, but the relevant point is that – based on your success – you can also afford to bid more aggressively to win the best clicks – and those clicks tend to live in higher-CPC auctions.
It’s not a case of being charged more for the same clicks because you can afford it. It’s a case of bidding more aggressively to win clicks that are genuinely more valuable, which themselves tend to come at a higher price.
So to the takeaway: when comparing ads, don’t treat CPC as a constant. An ad that looks like the winner on CvR may be carrying a CPC premium that erodes the advantage. Always check both together.
When CTR isn't telling you what you think
When you see significant differences in CTR between ads, more often than not, segmenting by top versus other reveals the real story.
Ads appearing in top positions tend to have substantially higher CTR – so far so obvious.
What’s less appreciated is how much the proportion of top placements can vary between different ads running in the same ad group.
When that proportion differs meaningfully, aggregate CTR becomes almost unreadable as a direct measure of ad quality.
The example below illustrates this starkly.
Comparing the two ads on aggregate CTR, you’d naturally conclude that the second ad has the better CTR. But segment by ‘top versus other’ and a very different picture emerges: the first ad actually has a higher CTR both in ‘top’ positions and in ‘other’ positions.
The entire difference in aggregate CTR is explained by the fact that the second ad has a higher proportion of its impressions coming from ‘top’ positions (over 60% versus 40%).
Which raises the question of what’s actually driving the difference in position mix (and if one of these ads was paused, would the remaining ad retain its current mix or assume the positions currently occupied by the other?).
The obvious candidate – that the first ad wins more top positions because of a higher CTR – is ruled out by the data here.
Some difference in how the two ads interact at auction must be behind it, but what specifically is hard to say from the outside.
If you’ve seen this pattern and have a view on what’s behind it, I’d be curious to hear it.
What is clear is the segmentation principle itself: don’t draw conclusions from aggregate CTR differences between ads without first checking the top/other split.
The aggregate result can be dominated entirely by position mix, and position mix can vary for reasons that have nothing to do with which ad is actually performing better.
The common thread
Both of these patterns point to the same underlying issue: the metrics that appear most directly relevant in an ad test – CTR, CvR, CPC – can each be shaped by factors that sit one level beneath the surface.
CPC moves because smart bidding responds to conversion probability.
Aggregate CTR moves because position mix varies.
Neither number means quite what it appears to mean until you’ve accounted for what’s underneath it.
The answer isn’t to distrust the data. It’s to look one level deeper before drawing conclusions…
