It’s one of those James Bond things – how are we using supportive data in learning?

When we take a look at compliance training, we often try to “justify” the learning to the reluctant user by listing all of the empirical stuff that provides the context for the business case. “Data protection is important for us at Compuglobal Hypermeganet because in <insert recent year> there were <insert massive figure> breaches of data for our industry resulting in <insert inordinately large amount of money> in fines.” And yeah, it serves a purpose, to a point. Examples like this are an attempt at what we like to describe as a “war story” – using the worst case scenario to illustrate what a breach in compliance means.

But just trotting out the figures is a cop-out for any self-respecting instructional designer, and a massive disservice to what good e-learning should set out to do. Every single fact and figure should have to fight to prove its validity, rather than just being lumped in as a pseudo-supportive argument.

Here at Saffron, we like to make a point about the “need-to-know” factor of data or statistics in our learning. Does everyone in your organisation really need to know how many cases were reported to the regulator in the past five years, or how many of those reports led to prosecution? It’s unlikely, because in most cases the training has been commissioned so that, once you’ve completed it, you can make the right decisions and avoid breaking the law. The focus of the training isn’t past performance, but future decisions.

Say you’re a small telecoms company that receives 250,000 calls from your 50,000 customers per year. Last year, your staff logged 5,000 suspicious calls, where the caller sounded like they were fraudulently trying to obtain personal account details. But a recent court case has revealed that a private detective firm actually made over 10,000 fraudulent calls, with a 50% success rate.

Some might just trot out this information as a bullet point list and leave the learner to it. But with just a little thought, we can extract our “need-to-know” number: a single figure that summarises the implications of the entire example and positions it in a way that’s relevant to the learner.

From the stats, we can infer that 1 in every 25 calls made to the business is potentially fraudulent (total calls divided by fraudulent calls: 250,000 ÷ 10,000 = 25). This gives the learner context without going into overwhelming detail, and gets rid of the bits that aren’t really worthwhile. The learner can then relate this number to their everyday working life: if they take 50 calls a day, they can see that, on average, two of them could be fraudulent. With this information, the issue becomes much, much more apparent and resonates with the individual.
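If you wanted to sanity-check that arithmetic, a few lines of back-of-the-envelope Python do the job. This is just a minimal sketch: the figures are the illustrative ones from the example above, and the variable names are ours.

```python
# Illustrative figures from the telecoms example above
total_calls_per_year = 250_000
fraudulent_calls_per_year = 10_000  # the figure from the court case, not the 5,000 logged internally

# The "need-to-know" number: roughly 1 in every N calls is potentially fraudulent
one_in_n = total_calls_per_year // fraudulent_calls_per_year  # 25

# Relate it to an individual's working day
calls_per_day = 50
fraudulent_per_day = calls_per_day / one_in_n  # 2.0

print(f"Roughly 1 in {one_in_n} calls is potentially fraudulent.")
print(f"Taking {calls_per_day} calls a day, about {fraudulent_per_day:.0f} of them could be fraudulent.")
```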

These “need-to-know” numbers provide context, whilst also arming the learner with knowledge they can use in their daily routines. They aren’t hard to stumble across – you can often reach them by delving a little deeper into the data. They make the learning digestible and relevant, and they can often be brought to life with supporting graphics or scenarios. So think about it next time you’re writing and ask the obvious question – what do learners need to know?