Saying goodbye… to that button and assessment

I’ve had a number of conversations about these subjects in public before, and even more arguments about them behind closed doors, so it seems like quite a good time to put this one out there.

Many times in the past year or so I’ve talked about the intention to drop the next button. In fact, given the amount I’ve talked about it, you’d think I’d been doing it for ages, but the reality is that it’s actually quite difficult. Within the NHS we are bound by regulators to achieve target pass levels for statutory and mandatory training compliance, and for this reason we do have a reliance on “traditional” e-learning (the sort where you have to “pass” somehow). So with this in mind, I need to ensure that any e-learning we produce in these areas is traceable, monitorable and reportable. As much as I agree with (and promote) the use of other, more informal methodologies and concepts, in this area we have to provide training which meets these fixed (and dated) criteria.

So, producing content in this way does take longer. One project we’ve been working on is a piece about Information Governance (or Data Protection, for anyone who’s not familiar with the term). We’ve worked really hard to move away from a click-next presentation towards a learner-driven experience. Under the concept we’ve worked to, the learner is completely in control of what they learn.

[Screenshot: the Information Governance module]
By this, I don’t just mean that the learner has a main menu which allows them to choose which bit of the training to do first, or the order in which they do it (well actually, they do get this ability), but that the learner can also choose specific areas to study. So once they choose the main section they wish to delve into, they can then choose the sub-section. From there they can interact with the content in this area (whether this be a scenario video or a piece of text content). This interaction is then captured and displayed as progress (in this case, using green indicator dots). The learner can then choose exactly which bits of the content they need to learn, and which they don’t.
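The interaction capture described above could be sketched roughly like this (a minimal illustration only, not our actual implementation; all class, method and section names here are hypothetical):

```python
# Hypothetical sketch: recording learner interactions per sub-section and
# surfacing them as "green dot" progress indicators.

class ProgressTracker:
    def __init__(self, sections):
        # Every sub-section starts incomplete (no green dot yet).
        self.completed = {section: False for section in sections}

    def record_interaction(self, section):
        """Mark a sub-section complete once the learner has engaged with it."""
        self.completed[section] = True

    def indicators(self):
        """Map each sub-section to the indicator colour it should display."""
        return {section: ("green" if done else "grey")
                for section, done in self.completed.items()}

tracker = ProgressTracker(["scenario-video", "text-content"])
tracker.record_interaction("scenario-video")
print(tracker.indicators())  # scenario-video shows green, text-content grey
```

The useful part of this design is that the same completion record drives two things at once: what the learner sees on screen, and (as described below) what they still need to be assessed on.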

What next?

Well, once the learner has covered all of the content they wish to, they can progress to the “assessment” – however, this assessment is aware of which content has already been reviewed, and therefore doesn’t assess the learner on those areas. So the learner is assessed only on the areas they have not studied (thus checking their knowledge of the content they think they already know).
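In code terms, the adaptive assessment above amounts to filtering a question bank against the learner’s viewing record. This is a minimal sketch under my own assumptions (the section names and questions are made up; it is not the real module):

```python
# Hypothetical sketch: build an assessment only from the sections the
# learner has NOT already studied. All names and questions are illustrative.

QUESTION_BANK = {
    "handling-records":  ["Q1", "Q2"],
    "data-sharing":      ["Q3", "Q4"],
    "breach-reporting":  ["Q5", "Q6"],
}

def build_assessment(viewed_sections):
    """Return questions drawn only from unstudied sections."""
    return [question
            for section, questions in QUESTION_BANK.items()
            if section not in viewed_sections
            for question in questions]

# A learner who studied only "data-sharing" is quizzed on everything else:
print(build_assessment({"data-sharing"}))  # -> ["Q1", "Q2", "Q5", "Q6"]

# A learner who studied everything gets no assessment at all:
print(build_assessment(set(QUESTION_BANK)))  # -> []
```

Note the two extremes fall out naturally: study nothing and you face the full assessment; study everything and there is nothing left to assess, which is exactly the trade-off offered to the learner.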

Under this methodology, the learner decides exactly what they do and don’t already know. So, if they already know it all, they can jump straight into the assessment and prove it. Alternatively, if they know nothing about it, they can study it all (and not be assessed), or they can find a balance between the two.

And this is different? Why?

Well, the biggest argument I’ve had with a number of people centres on the use of assessment. Many people have shared the view that you must assess a learner at the end of an e-learning package. When I’ve discussed this with them, I ask how many assessments they undertake at the end of their face-to-face delivery – and they always say none. So why must you assess someone at the end of an e-learning package when you don’t in a face-to-face session?

As yet, no one has been able to answer this question. If you can, or if you have a view, please leave me a comment below!

E-learning, for me, is one delivery method for education, much like face-to-face is another. Both should be approached in the same way (with the same thought processes, although the actual delivery may differ) – neither should be treated differently from the other. Why should you assess your learners at the end of one if you wouldn’t at the end of the other?

If you’re that worried about needing to prove that your learners have learnt something from your e-learning, then you need to spend more time developing educationally sound e-learning content!