The 2 critical factors behind B2B sales forecast confidence
Regular readers will recall that I am no great fan of the default approach taken by so many CRM vendors, in which individual opportunity forecast probabilities are based on applying the same percentage to every opportunity that has reached a given stage in the sales process.
Many CRM users simply accept the default “out of the box” percentages without questioning them or validating them against actual outcomes - or they are confused about whether the percentage measures progress through the process or the probability of winning.
It’s no wonder that sales forecasts are often so wildly inaccurate. But there is a better way of thinking about this…
The problem is particularly acute in high-value, relatively low-volume, complex B2B sales environments with a high dependency on new business, where the outcome of one or a few individual opportunities can make a very significant difference to whether or not you hit your overall revenue target.
I wrote at some length about the various factors that influence the probability that any individual opportunity will close here, and I firmly believe that investing in a sales analytics solution can help to identify the underlying patterns and make sense of the complexities.
But let’s assume that you’re not yet ready to commit to sales analytics, but that you still want to do a (much) better job of forecasting the outcome of individual opportunities than you are today. Here are the two critical factors that you need to assess when deciding whether an opportunity is likely to close at the predicted time.
But there’s an essential point here - and I cannot stress this strongly enough - you have to make separate judgements about these two factors before coming up with an overall confidence level:
1: Will the prospect actually do anything?
One of the biggest reasons why forecasted opportunities fail to close is that the prospect failed to make a decision. In most cases, this is because they had an insufficiently compelling reason to act, and were happy to delay their decision. Or - in a variation on this theme - their attention was diverted towards other, more pressing priorities.
Sometimes, they will make an explicit decision to abandon their current buying decision process - but more often, the decision will simply be deferred until later. And in either case, there is no guarantee that they will communicate their decision to you - they may simply become (much) less accessible.
There’s a fundamental principle here - decisions to take action tend to have a momentum to them. The prospect tends to be highly engaged. There is an obvious reason why they should act, and - just as important - there are significant costs and painful consequences associated with sticking with the status quo. They have concluded that the longer they delay, the worse it will get.
Taking all factors into account, you need to make a realistic (and that normally means conservative) assessment of the chances that each prospect will make a commitment and place an order with anyone in the targeted timeframe. That’s the first half of the equation.
2: What are the chances they will choose you?
There’s another simple principle at play here: if you have failed to compellingly differentiate your solution, and if the prospect struggles to distinguish between your solution and their other shortlisted options, they will choose the option that costs the least or appears to offer the lowest risk.
If - in the eyes of your prospect - you are either a clear market leader or a clear cost leader, you are probably in a reasonably good position. But if you are neither, you need to go out of your way to get your prospect to acknowledge the compelling advantages of your solution and your organisation.
If your differentiation is weak - if your sales person truly has no idea which option their prospect actually prefers - then calling it a “50/50” deal almost inevitably overstates your chances. More on this in just a moment…
Doing the maths
Assuming that you have accurately assessed both of the above factors, in very simple terms your probability of winning the deal by the targeted date is the product of the two percentages. Let’s assume that there is a strong (75%) chance the prospect will actually do something, but that you are one of three similar options (33%). The projected confidence is 25%.
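The calculation above can be sketched as a small helper - a minimal illustration, with hypothetical names rather than any real CRM field or API, and with the “no 50/50” rule (introduced below) built in:

```python
def deal_confidence(p_do_anything: float, p_choose_you: float) -> float:
    """Combine the two separate judgements into an overall win probability.

    Both inputs are probabilities in [0, 1]. A flat 0.5 is rejected to
    discourage lazy "coin toss" assessments - pick 0.45 or 0.55 instead.
    """
    for name, p in (("do anything", p_do_anything), ("choose you", p_choose_you)):
        if not 0.0 <= p <= 1.0:
            raise ValueError(f"'{name}' rating must be between 0 and 1, got {p}")
        if p == 0.5:
            raise ValueError(f"'{name}' rating of exactly 50% is banned")
    # The overall confidence is simply the product of the two factors.
    return p_do_anything * p_choose_you

# Worked example from the text: 75% chance they act, 1-in-3 they choose you.
print(round(deal_confidence(0.75, 1 / 3), 2))  # → 0.25
```

Treating the product as the overall confidence assumes the two judgements are independent, which is a simplification - but a useful one for forecast reviews.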
When clients go through this exercise, the result is usually a lower but much more realistic percentage than they used to get when they simply asked sales people to assess their chances of closing. But there are a few other wrinkles that can further improve the process.
Banning “50/50” judgements
There’s one probability number you should bar your sales people from using in either their “do anything” or “choose you” ratings: 50%. Not because it’s impossible that 50% could (very rarely) be an accurate assessment - but because accepting “50/50” ratings encourages lazy thinking on the part of your sales people.
My strong recommendation is that you should push back, and force your sales people to choose a number that is marginally more or less positive - 55% or 45% - rather than describing their chances as simply a coin toss. If it genuinely is a coin toss, they clearly haven’t been doing their job properly.
The final strategy that I recommend is to get your sales people to come up with their initial assessments of the “do anything”, “choose you” and “overall” probabilities, and then to review with you the evidence that led them to their conclusions. You’ll quickly get a sense of whether they are relying on gut feel or on a deep understanding of the prospect’s actual situation.
It would not be unusual for you to disagree with their assessment of certain deals. It is often helpful to review how the sales person's previous similar opportunities actually panned out. If you cannot come to a consensus, I recommend that you record your confidence as a separate field to the sales person’s confidence, together with a short comment explaining the differences - and use this as a learning opportunity.
These strategies - separating the two critical factors, banning 50/50 judgements and implementing collaborative deal reviews - have been proven to dramatically improve forecast accuracy, particularly in high-value, low-volume new business environments. Why not try implementing them in your next forecast reviews?
I'd be very interested to get your feedback...