Paid Search optimisation is largely about choosing what – and whom – to spend your money targeting.
It’s not always a binary decision, of course. You will prioritise some targets over others with differential bidding. But it is, in the end, a matter of deciding which searches are worth using your ad budget to compete for.
This is often seen as a process of ‘cutting out the waste’, where ‘waste’ is the traffic that lacks the potential to enhance your bottom line, and ‘cutting’ means excluding it from your set of targets.
- By adding negatives, we exclude users who reveal themselves to have incompatible intent
- By targeting select locations, we exclude billions of users that we can’t profitably sell to
- With audience targeting or exclusion, we exclude users we believe to be outside our target market
At each step, we reduce our target from the set of all searches towards (we hope) the set of searches most likely to lead to profitable interactions with our company.

With each set of refinements, you reduce your target towards a match with the desired traffic. Ideally, you would not cut out any of the good stuff as you refine, but in practice you’ll usually throw out some of the baby with the bathwater, which is why the reductions are shown slightly misaligned with the golden zone.
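To make the ‘reduction’ idea concrete, here is a minimal sketch of targeting as a stack of filters. Everything in it is hypothetical (the searches, the filter rules, the audience labels) and it is plainly not Google Ads API code; it just shows how each layer cuts unwanted traffic while usually cutting a little of the good traffic too.

```python
# A toy model of layered qualification as set reduction.
# All data and rules are invented for illustration only.

searches = [
    {"query": "buy hair curlers",    "location": "UK", "audience": "beauty",  "converts": True},
    {"query": "buy hair curlers",    "location": "US", "audience": "beauty",  "converts": True},
    {"query": "hair curlers repair", "location": "UK", "audience": "beauty",  "converts": False},
    {"query": "free hair curlers",   "location": "UK", "audience": "bargain", "converts": False},
    {"query": "buy hair curlers",    "location": "UK", "audience": "unknown", "converts": True},
]

NEGATIVES = {"free", "repair"}    # exclude incompatible intent
TARGET_LOCATIONS = {"UK"}         # exclude locations we can't serve profitably
TARGET_AUDIENCES = {"beauty"}     # exclude users assumed to be outside the market

def passes(search):
    """True if the search survives all three layers of qualification."""
    return (
        not (NEGATIVES & set(search["query"].split()))
        and search["location"] in TARGET_LOCATIONS
        and search["audience"] in TARGET_AUDIENCES
    )

kept = [s for s in searches if passes(s)]
lost_good = [s for s in searches if s["converts"] and not passes(s)]

print("searches kept after all layers:", len(kept))
print("converting searches cut by the layers:", len(lost_good))  # the baby in the bathwater
```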
What's the problem?
So far, so obvious…
But PPC managers also need to consider carefully when it is beneficial to add further layers of qualification, and when it isn’t…
For example, take the following assumption:
We are targeting a women’s product, so it makes sense to target only women under demographics.
This is intuitive on the surface, and yet, if our other layers of qualification (tight keyword targeting, location, etc.) are already doing their job, it is likely to be counterproductive.
If men are searching on ‘buy hair curlers’, then those men are telling us that they are interested in buying a hair curler. At that point, the fact that they are men is no longer a disqualifying indicator.
Note that this reasoning only works if you start with the most definitive markers of intent (e.g. search term / cart abandoner) and work your way outwards.
Working the other way round doesn’t hold: however closely a user seems to match your ideal customer avatar, if their search term is off, they’re not ripe for targeting with paid search. Top-of-funnel display / YouTube / Facebook Ads, maybe… but not search.
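As a back-of-the-envelope illustration, with every number below invented purely to show the shape of the argument, here is what a blanket ‘women only’ demographic layer could cost once the search term has already qualified the click:

```python
# Toy arithmetic with hypothetical numbers, not real account data.
# Assume the keyword 'buy hair curlers' already signals purchase intent.

clicks = 1000          # hypothetical clicks on the qualifying search
male_share = 0.20      # hypothetical share of those clicks from men
cvr_female = 0.050     # hypothetical conversion rate for women
cvr_male = 0.045       # hypothetical conversion rate for men (barely lower)

conversions_all = clicks * ((1 - male_share) * cvr_female + male_share * cvr_male)
conversions_women_only = clicks * (1 - male_share) * cvr_female

print(round(conversions_all))                            # ~49
print(round(conversions_women_only))                     # ~40
print(round(conversions_all - conversions_women_only))   # ~9 conversions given away
```

With numbers like these, the demographic exclusion gives away roughly nine conversions to avoid clicks that the search term had already qualified; the layer would only pay for itself if male clicks genuinely converted far worse.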
Layering with audiences
Again with audiences, stop and think before whittling down your target with In-Market and Affinity audiences related to your product.
Your keywords (and negatives) should already have reduced your target to those users expressing the right kind of interest… With that refinement in place, it often doesn’t help to reduce your target further using a less reliable indicator of the same intent.
Test the effect of those on-topic audiences in Observation mode… But you will often find that audiences unrelated to the product topic show stronger performance correlations.
This is not to say that there aren’t times when user-based targeting can further ‘qualify the click’…
e.g. a click from an 18–24-year-old on ‘website design companies’ might be disproportionately likely to come from a job-seeker rather than a potential client.
But unless you are aware of specific correlations like this (and provided you are targeting search terms that truly qualify the click), there is no need to get ahead of the data and aim for the avatar with your audience and demographic targeting.
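One way to let the data speak before excluding anything: add the audiences in Observation mode, export the segment report, and compare performance. Below is a minimal sketch of that comparison; the CSV file name and its column headings are hypothetical stand-ins, not an actual Google Ads export format.

```python
# Sketch: rank observed audience segments by conversion rate before deciding
# which (if any) deserve bid adjustments or exclusion.
# 'audience_observation_report.csv' and its columns are hypothetical.

import csv
from collections import defaultdict

stats = defaultdict(lambda: {"clicks": 0, "conversions": 0.0})

with open("audience_observation_report.csv", newline="") as f:
    for row in csv.DictReader(f):
        segment = row["audience_segment"]
        stats[segment]["clicks"] += int(row["clicks"])
        stats[segment]["conversions"] += float(row["conversions"])

# Highest conversion rate first; segments with few clicks deserve scepticism.
for segment, s in sorted(stats.items(),
                         key=lambda kv: kv[1]["conversions"] / max(kv[1]["clicks"], 1),
                         reverse=True):
    cvr = s["conversions"] / max(s["clicks"], 1)
    print(f"{segment}: {s['clicks']} clicks, conversion rate {cvr:.2%}")
```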
Layering with smart bidding
With Smart Bidding, the question of optimising by reduction becomes trickier still.
Naturally, we want to use our knowledge and judgement at least to define the field of play for our campaigns.
But in principle, if you trust the algorithm (or want to give it a fair shot), it makes sense to give it a bigger pool of raw material to work from… More broad match keywords; unrestricted audiences; a larger starting set from which to analyse and cherry-pick… And this is exactly what Google has been recommending for the last two years or so.
For us PPC pros, this goes hard against our instincts, and against what we’ve done historically, but it’s a logic we would do well to acknowledge.
“It’s gut-churning to wait it out. But exerting more control (via enhanced granularity) only constricts the incoming data and prolongs the learning phase.” (Amanda Evans, Machine Learning for Paid Ads)
So again, our instinct for categorisation, control and refinement (and it’s a strong instinct with many of us PPCers…) is one that we should remember to question every now and again.
Sometimes a wider net in the right place is actually the better tool for optimisation.