How to avoid false signals from automated lead scoring

Posted by Bob Apollo on Sun 6-Jul-2014

Two years ago, I chaired a discussion group at a digital marketing conference on the subject of lead scoring and nurturing - and emerged thoroughly depressed (if unsurprised) by what I heard.

The participants were all apparently experienced B2B marketers, but their implementation of lead scoring had been almost universally unsatisfactory. I’m not convinced that things have improved very much in the meantime.

Here’s the fundamental problem: in many of the complex B2B sales environments I get involved in, the correlation between the way leads are scored and subsequent success in converting those leads to closed business is weak at best.

As a result, most salespeople have learned to distrust the lead score and choose to ignore it. Needless to say, this does nothing for the relationship between marketing and sales - or for the success of either organisation.

You can't score leads without involving sales

There are a number of reasons why this situation seems so prevalent, and they can all be addressed given enough determination, common sense and goodwill. The most inexcusable reason is that the sales team haven’t been involved in jointly developing and agreeing the quality criteria upon which lead scoring is based.

I’m a hardliner about this: any lead scoring scheme in complex B2B sales environments that hasn't been carefully, thoughtfully and jointly developed between sales and marketing might as well be abandoned for all the good it is doing. I have a similar perspective on any lead scoring scheme that hasn’t been jointly reviewed and if necessary tuned in the past 3 months.

The review must include an analysis of the correlation between the lead score and the subsequent sales outcome. Put simply, if there is not a strong correlation, you’re doing it wrong, and discrediting the idea of generating a lead score in the first place.
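That correlation check doesn’t require sophisticated tooling. A minimal sketch of the idea in Python - the scores, outcomes and the "weak" threshold below are my own illustrative assumptions, not benchmarks:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences of numbers."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Historical leads: (lead score at hand-off, outcome: 1 = closed-won, 0 = lost)
history = [(92, 1), (88, 0), (75, 1), (70, 0), (55, 0), (40, 0), (95, 1), (60, 1)]
scores = [s for s, _ in history]
outcomes = [o for _, o in history]

r = pearson(scores, outcomes)
# A weak correlation (say, below ~0.3) suggests the scoring model is not
# predicting sales outcomes and needs to be jointly re-tuned with sales.
```

With a binary won/lost outcome this is the point-biserial correlation; the point is not statistical precision but making the score-versus-outcome question a routine part of the quarterly review.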

Where else does lead scoring go wrong? (These other factors are usually flushed out by a regular evaluation of correlation in any case.)

The risk of raw activity metrics

Giving exaggerated importance to activity levels is the most common factor. Here’s why: in complex, high value B2B environments, your decision makers are typically too busy to make multiple repeat visits to your website, browse the entire site, download every piece of collateral or watch every video.

Anyone with that amount of time on his or her hands is either a student, a competitor, or a hapless junior employee who has been sent on a research mission without any clear idea of context or the underlying need. In fact, beyond a certain point, activity level starts to become an increasingly strong contra-indicator of lead quality.

A careful analysis of your sales successes will typically reveal that certain value pieces and other materials are disproportionately important to the buying decision process in winning situations. If you aren’t uncovering this in your win/loss analyses, you should be. And if you’re not conducting systematic win/loss analyses on a regular basis - looking at the end-to-end customer journey and acting on the lessons learned - you’re travelling blind, and you shouldn’t be surprised if you hit a wall or fall into a ditch.

So - what your prospects do is far more important than how often they do it. Activity has a place in lead scoring, but only if it prioritises quality over quantity of interaction.
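One way of putting that principle into practice is to weight each interaction by the type of content consumed, and cap the total contribution of activity so that sheer volume can never masquerade as buying intent. The weights and cap below are illustrative assumptions, not recommendations:

```python
# Illustrative content weights: what a prospect consumes matters far more
# than how often they visit. These values are assumptions, not benchmarks.
CONTENT_WEIGHTS = {
    "pricing_page": 15,
    "case_study": 10,
    "roi_calculator": 12,
    "blog_post": 2,
    "careers_page": 0,  # often a job-seeker, not a buyer
}

ACTIVITY_CAP = 30  # beyond this point, more clicks add nothing

def activity_score(interactions):
    """Score a list of content-type strings, capped so that a high volume
    of low-value activity cannot outrank a few high-value interactions."""
    raw = sum(CONTENT_WEIGHTS.get(kind, 1) for kind in interactions)
    return min(raw, ACTIVITY_CAP)

# A student bingeing 40 blog posts scores no higher than a busy decision
# maker who read two case studies and the pricing page.
binge = activity_score(["blog_post"] * 40)
focused = activity_score(["case_study", "case_study", "pricing_page"])
```

The cap is the crude embodiment of the contra-indicator point above: past a certain level, extra activity tells you nothing good about lead quality.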

Profiling your contact

Much more important is what you know about the contact. What organisation do they work for? Is that organisation on your target prospect list? What about their role? Is it one of the roles you have chosen to target? And what about their interests or needs? Do they reflect the action drivers you are really good at addressing?

Please don’t use the excuse that asking for that information will put people off from filling in the form. If you have a high-value, B2B-focused solution, you want to engage with people who are serious about learning. In my experience, those people will not be dissuaded from answering a few intelligently chosen questions that seem relevant to their mission.
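Those fit questions translate naturally into an explicit profile score, kept separate from the activity score. A sketch, with entirely hypothetical target accounts, roles and action drivers:

```python
# Hypothetical examples - substitute your own target lists and drivers.
TARGET_ACCOUNTS = {"acme corp", "globex", "initech"}
TARGET_ROLES = {"vp sales", "sales director", "cro"}
ACTION_DRIVERS = {"forecast accuracy", "pipeline quality"}

def profile_score(contact):
    """Score a contact dict on organisation fit, role fit, and whether
    their stated needs match the action drivers you address well."""
    score = 0
    if contact.get("company", "").lower() in TARGET_ACCOUNTS:
        score += 20
    if contact.get("role", "").lower() in TARGET_ROLES:
        score += 20
    stated = {need.lower() for need in contact.get("needs", [])}
    score += 10 * len(stated & ACTION_DRIVERS)
    return score

lead = {"company": "Acme Corp", "role": "VP Sales",
        "needs": ["Forecast accuracy", "CRM migration"]}
# 20 (target account) + 20 (target role) + 10 (one matching driver) = 50
```

Keeping profile fit and activity as separate components makes the review conversation with sales much easier: you can see which half of the model is failing.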

The power of progressive profiling

And anyway, you can use progressive profiling to incrementally capture new pieces of information every time your prospect interacts with you (if you’re not using progressive profiling today, you should - it’s an invaluable tool).
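Mechanically, progressive profiling amounts to asking for the highest-priority fields you don’t yet hold, a couple at a time, on each form visit. The field names and their ordering below are my own example:

```python
# Profile fields in descending order of value to qualification (illustrative).
PROFILE_FIELDS = ["company", "role", "team_size", "current_tools", "timeline"]

def next_questions(known, per_form=2):
    """Given the fields already captured for a contact, return the next
    unanswered fields to ask for on this form visit."""
    missing = [field for field in PROFILE_FIELDS if field not in known]
    return missing[:per_form]

# A returning visitor who has already given their company is asked for
# their role and team size; later visits work down the remaining list.
profile = {"company": "Acme Corp"}
next_up = next_questions(profile)
```

Marketing automation platforms offer this behaviour out of the box; the sketch is only meant to show how little is being asked of the prospect at each step.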

I’d also recommend that you seriously consider adding a layer of phone- and web-research-based qualification to what you can capture automatically, before you pass a lead to an expensive sales team as “marketing qualified”. You don’t need a big team - depending on your volumes, you can start with a single person.

Keep the outcome clearly in mind

This additional layer of intelligent human qualification can make a remarkable difference to the percentage of “Marketing Qualified Leads” that end up being accepted by sales and converting into customers. In fact, in most cases, it’s worth diverting money from running campaigns to the qualification layer.

That, of course, assumes that your marketing team are incentivised not by activity levels, but by revenue-based outcomes. And if they are not being measured, motivated and incentivised based on outcomes, how can they credibly call themselves modern B2B marketers?
