The adoption of a new e-training program has brought to light performance gaps in our orientation program and has raised questions about the usefulness of an e-learning system and whether we should return to a more classical training approach in this area. The importance of the information conveyed in this training, such as fire evacuation points and health coverage options, combined with the need to create a positive first impression for new employees, makes this program a high-priority item. Because it affects workplace safety as well as company liability regarding the mandated OH&S training in these fields, it is critical that we ensure the program is at maximum effectiveness and that we are able to distinguish training gaps caused by program delivery from those caused by other variables.
As employee training and employee orientation are roles of the HR department, responsibility for evaluating the effectiveness of this training's delivery should clearly fall under the same department. The institution and evaluation of new training systems should always be considered a high priority, but such evaluations are generally carried out as part of the delivery model rather than as an afterthought, as in this case.
Common Methods for E-learning Evaluation
Recent writing in the area of e-learning supports a five-facet evaluation of an e-training program and its effectiveness, though it should be noted that this is still an emerging field. These evaluation areas can be considered common industry practice:
1. Reaction
2. Learning
3. Behavior
4. Results
5. Return on investment (1) (2)
These areas of focus are considered in the context of several factors, such as the learner, technology, instruction, instructor, institution, and community (1), to create a rich matrix for evaluating program effectiveness and delivery. ROI can be difficult to calculate and so has been grouped with Results in a number of methods of analysis. E-learning's upside seems to outweigh its drawbacks, but more research needs to be done on concrete costs and benefits as well as the contexts for the transfer of knowledge. “Proving connections between e-learning and the outcomes – benefits and drawbacks – would assist in the use of evaluation methods” (2), and evaluation should be considered “at every stage of the e-learning process” (1).
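Neither source prescribes a formula, but training ROI is conventionally expressed as net benefits over costs, as a percentage. The following is a minimal sketch using entirely hypothetical figures, purely to illustrate the calculation:

```python
def training_roi(benefits: float, costs: float) -> float:
    """Return ROI as a percentage: (benefits - costs) / costs * 100."""
    if costs <= 0:
        raise ValueError("costs must be positive")
    return (benefits - costs) / costs * 100

# Hypothetical figures: $120,000 in measured program benefits against
# $80,000 in development and delivery costs.
roi = training_roi(120_000, 80_000)
print(f"{roi:.1f}%")  # 50.0%
```

The hard part in practice is not the arithmetic but quantifying the benefits, which is why several analysis methods fold ROI into the Results level instead of reporting it separately.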
As this field develops further, we will undoubtedly see more tools emerge to aid in this evaluation. While the benefits of e-learning are well documented for the right context, the drawbacks highlighted may have implications for us as well: e-learning programs may rely heavily on self-discipline, may have a high initial cost, can be difficult to support, and may not be suitable for all types of training or all types of learners (2). We should keep these limitations in mind when evaluating this program and when considering further programs of this nature.
Application of Current Practices
The first and probably most obvious lesson to take from these writings is that evaluation of an e-learning program should take place at various points throughout the delivery cycle. Pre-course, formative, and summative evaluations (1) should have been conducted before this point; this in itself is a lesson we can carry forward to future implementations. From an evaluative approach, we can first look at the learners: their skills, attitudes, and preferences as they relate to this e-learning program. Many of these variables are tied to the reaction level of evaluation, but a preset attitude toward a training program carries much weight in the conscious and subconscious effort individuals put forth. Such unintentional or deliberate sabotage, as well as extraordinary effort toward a program, will greatly affect results.
From this we can see that attitudes toward the technology, instruction, instructor, institution, etc. will need to be scanned for negative presets or prior negative experiences with training. An additional factor to take from these resources is that the learners' skill sets and resources may not be sufficient for this type of training to work. An evaluation of employees' computer literacy, comfort with computers, and computer/internet availability is therefore in order. We cannot assume that all individuals have access to a computer and the internet, or the appropriate skills and comfort level for this type of training. Gaps in performance may arise from deficiencies in any of these categories.
Evaluation in terms of knowledge transfer should be broken into the categories of reaction (as mentioned above), learning – cognitive knowledge transfer – and behavioral change. Objective gaps in these areas are easier to analyze when the barriers are properly identified and grouped. Finally, we can learn that evaluating this program in purely “almighty dollar” terms may not be as meaningful as a blended evaluation that takes into account the increase in program availability this will create, the ability for staff to review the material at any time, and the opportunity to make a dynamic first impression on our staff that may translate into a shift in culture. From these points and more, we can see that there is much to be gained in evaluating our training systems: not only in terms of due diligence for current operating procedures, nor simply in terms of ROI, but also in terms of how we can improve and aid our staff in their learning.
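To make the idea of a blended evaluation concrete, one simple approach is a weighted average of ratings across the five evaluation levels, so that ROI is one input rather than the sole measure. The sources do not prescribe any weights; the weights and ratings below are hypothetical, chosen only to illustrate the mechanics:

```python
# Hypothetical weights across the five evaluation levels; they sum to 1.0.
WEIGHTS = {
    "reaction": 0.15,
    "learning": 0.25,
    "behavior": 0.25,
    "results": 0.20,
    "roi": 0.15,
}

def blended_score(ratings: dict) -> float:
    """Weighted average of per-level ratings, each normalized to [0, 1]."""
    return sum(WEIGHTS[level] * ratings[level] for level in WEIGHTS)

# Illustrative ratings for a program that is well received but shows a
# modest ROI; the blend keeps the weak ROI from dominating the picture.
ratings = {"reaction": 0.9, "learning": 0.7, "behavior": 0.6,
           "results": 0.5, "roi": 0.4}
print(round(blended_score(ratings), 2))
```

How the levels are rated and weighted would need to be agreed on up front; the point is only that a single-number ROI view and a blended view can lead to different conclusions about the same program.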
1. Mungania, Peni and Hatcher, Tim. A Systemic, Flexible, and Multidimensional Model for Evaluating E-Learning Programs. Performance Improvement, 2004, pp. 33-39.
2. Kathawala, Yunus and Wilgen, Andreas. E-Learning: Evaluation from an Organization's Perspective. Training & Management Development Methods, 2004, pp. 5.01-5.13.
3. Womble, Joy. E-learning: The Relationship Among Learner Satisfaction, Self-efficacy, and Usefulness. The Business Review, Cambridge, 2008, pp. 182-188.