At the time of writing, multiple research teams around the world have reached a tantalising position: they believe they have found a safe, effective vaccine for SARS-CoV-2, the virus that causes COVID-19.
Trusting the rigorous processes that have brought them this far, these teams continue their vital work of proving beyond reasonable doubt that their candidate vaccine is both effective and safe. Only once this work is complete can the declaration be made and (most) governments approve the vaccine for use.
All of the vaccines continue along this accepted path, apart from one…
On the 11th of August, after a successful trial on just 76 people, a vaccine developed at the Gamaleya Research Institute in Moscow was declared safe and effective, and approved for use in Russia.
Why have those responsible apparently jumped the gun?
Why declare a vaccine safe and effective after a trial of 76 people, while every other project follows the generally accepted procedure of awaiting results from phase 3 trials involving thousands?
It’s not because they don’t know what they’re doing…
And it’s not necessarily a different conclusion about the probability of success that led to their different decision… it’s a calculation made by different actors, with different weightings on the costs and benefits of success, failure and inaction.
Let’s say the decision makers calculate a 90% probability of a wholly successful rollout, leading to an earlier end to the pandemic and most of the world taking their hats off (and in some cases, their wallets out…), versus a 10% probability of an ignominious retraction, causing damage to public health, public confidence, and the reputation of those responsible.
If those making the decision believe that they and/or Russia have a disproportionate amount to gain from being first to produce an effective vaccine – then the risk of backfire may well be one that is worth taking for them, while it isn’t for other teams – or other countries – working on other vaccines.
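To make that concrete, here’s a minimal sketch of the expected-value arithmetic, with entirely hypothetical payoff numbers: the same 90/10 odds can justify opposite decisions once each actor weights the outcomes differently.

```python
# Hypothetical expected-value calculation: same probabilities,
# different payoff weightings, opposite decisions.

def expected_value(p_success, benefit, cost_of_failure):
    """Expected payoff of acting early rather than waiting."""
    return p_success * benefit - (1 - p_success) * cost_of_failure

p_success = 0.90

# Illustrative payoffs on an arbitrary scale: an actor who prizes
# being first sees a far larger upside from the same outcome.
first_mover = expected_value(p_success, benefit=100, cost_of_failure=500)
cautious_team = expected_value(p_success, benefit=20, cost_of_failure=500)

print(f"First mover:   {first_mover:+.1f}")    # +40.0 -> acting early pays
print(f"Cautious team: {cautious_team:+.1f}")  # -32.0 -> better to wait
```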
In this case I am implying – fairly or not – that the main motivating factor for taking a shortcut may be political… but it’s easy to imagine other factors that could shift the cost/benefit (and risk) analysis dramatically.
Imagine, for example, that COVID-19 had a 20% mortality rate. I think most of us would, in that case, be willing to accept a quicker, riskier vaccine rollout – cutting at least the sharpest corners to get there… It would be foolish to stick inflexibly to the same ultra-high confidence requirement when the cost of inaction was so high.
So it is with PPC…
In Google Ads, we’re continually evaluating keywords, ads, and other segments of traffic, and deciding to upweight, downweight or… just wait.
How much data do you need before making sensible decisions?
As with Sputnik V vs the Oxford vaccine, the answer depends on the context.
The more badly and quickly you need an improvement, the more risk of getting it wrong you can tolerate… and the lower your confidence threshold should be.
While there’s no hard and fast rule for calibrating the appropriate confidence level, there is a formula to tell you exactly how confident you (mathematically) should be about the superiority of one segment of traffic over another.
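For the curious: the standard Bayesian form of that formula – and my assumption about what calculators of this kind compute under the hood – treats each segment’s true conversion rate as a Beta distribution built from its clicks n and conversions c (with a uniform prior), and asks how often one rate beats the other:

$$
P(p_B > p_A \mid \text{data}), \qquad p_A \sim \mathrm{Beta}(1 + c_A,\ 1 + n_A - c_A), \quad p_B \sim \mathrm{Beta}(1 + c_B,\ 1 + n_B - c_B)
$$

Evaluating that probability by hand is unpleasant, which is why free tools exist…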
Let’s say you’re comparing ad A vs ad B for conversion rate. They only have, say, 200 and 250 clicks respectively. Ad A has 5 conversions, while ad B has 7.
You can see that ad B has a higher conversion rate so far, but how confident are you that it will prove to have a genuinely better conversion rate in the long run? (test your intuitions by guessing how likely, as a percentage, ad B is to prove the better ad before you read on for the answer…).
That’s where the formula comes in…
There are several free tools for using it. The best I’ve found is at abtestcalculator.com.
If it’s conversion rate that you’re testing, put each ad’s clicks into the ‘participants’ field and its conversions into the second field, then see the results.
So in this case, it’s 58% likely that ad B genuinely sees a better conversion rate in the long term. Good to know!
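If you’d rather reproduce the number yourself than trust a web tool, here’s a minimal sketch of the Monte Carlo version of the calculation above. It assumes uniform Beta(1, 1) priors; I’m not privy to abtestcalculator.com’s exact internals, so expect a figure in the same neighbourhood as its 58% rather than an identical match.

```python
import numpy as np

rng = np.random.default_rng(42)

# Observed data: clicks and conversions for each ad.
clicks_a, conv_a = 200, 5
clicks_b, conv_b = 250, 7

# Posterior belief about each ad's true conversion rate,
# assuming a uniform Beta(1, 1) prior on each.
n = 1_000_000
rate_a = rng.beta(1 + conv_a, 1 + clicks_a - conv_a, size=n)
rate_b = rng.beta(1 + conv_b, 1 + clicks_b - conv_b, size=n)

# Probability that ad B's true rate beats ad A's: the fraction of
# simulated worlds in which B comes out ahead.
p_b_better = (rate_b > rate_a).mean()
print(f"P(ad B beats ad A) ≈ {p_b_better:.1%}")
```

With a million draws the simulation noise is negligible; any remaining difference from the tool comes down to its choice of prior or method.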
This probability output can give you a solid, well-founded confidence level to plug into your decisions…
Then you just need to be sensitive to how badly your account needs an improvement in the metric you’re testing… and be more Oxford, or more Moscow, as the situation requires.
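To close the loop, here’s a sketch of how that probability might feed a decision rule: the threshold is the knob you turn depending on how urgently the account needs a win (the values below are illustrative, not recommendations).

```python
def decide(p_better, threshold):
    """Act on the challenger only once confidence clears the bar."""
    return "switch to B" if p_better >= threshold else "keep testing"

p_better = 0.58  # output of the calculation above

print(decide(p_better, threshold=0.95))  # 'more Oxford': keep testing
print(decide(p_better, threshold=0.55))  # 'more Moscow': switch to B
```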