In simple terms, training refers to the imparting of particular skills, abilities and knowledge to an employee. A formal definition of training is: "any attempt to improve current or future employee performance by increasing an employee's ability to perform through learning, usually by changing the employee's attitude or increasing his or her skills and knowledge." The need for training is determined by the employee's performance deficiency, computed as follows:
Training need = Standard performance – Actual performance (Schuler et al., 1989).
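As an illustration only (the figures below are invented), the deficiency calculation can be sketched in a few lines of Python:

```python
# Illustrative sketch of the performance-deficiency calculation
# (Schuler et al., 1989); the numbers used are invented examples.

def training_need(standard_performance: float, actual_performance: float) -> float:
    """Training need = standard performance - actual performance.

    A positive result indicates a deficiency that training may address;
    zero or a negative result means no gap on this measure.
    """
    return standard_performance - actual_performance

print(training_need(100.0, 85.0))  # 15.0: a deficiency of 15 units
```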
Needs assessment, or needs analysis, is the process of determining the organisation's training requirements; it seeks to answer the question of whether the organisation's needs, goals and problems can be met or resolved by training. Within this context, needs assessment is a three-step process that includes organisational analysis (e.g., which organisational goals can be attained through personnel training? Where is training needed in the organisation?), job analysis (e.g., what must the trainee learn in order to perform the job effectively? What will the training cover?) and individual analysis (e.g., which people need training, and for what?).
A training needs analysis (TNA) is the process of identifying the areas where both individuals and groups in an organisation would benefit from training in order to become more effective at achieving their own objectives and the objectives of the organisation. Training needs exist where there is a gap between the knowledge, skills and attributes required and those employees already possess.
The gap is identified through the process of training needs analysis (TNA). Employers can either train to fill the existing or future requirements of the organisation, or recruit and buy in particular skills, knowledge and experience. A training need exists when there is a gap between what is required of a person to perform their work competently and what they actually know. A "training needs assessment", or "training needs analysis", is the method of determining whether a training need exists and, if it does, what training is required to fill the gap.
TNA plays a critical role in planning the use of available training and development resources. Critically, it ensures that money is spent on essential training and development that will help drive the business forward to meet its objectives. In the same way, it can help highlight occasions where training might not be appropriate but which require alternative action, such as recruitment or contracting out work. Sometimes training is not really planned at all; rather than being a proactive process, training tends to be much more reactive. A training plan should prevent the confusion and ultimate inefficiency that tend to result from these ad hoc approaches, because a training plan should cover the whole organisation and should be consistent. That consistency starts with agreement on what the organisation is trying to achieve and what the priorities are at the moment.
Thus, conducting systematic needs assessment is a crucial initial step to training design and development and can substantially influence the overall effectiveness of training programs (Goldstein & Ford, 2002; McGehee & Thayer, 1961; Sleezer, 1993; Zemke, 1994). Specifically, a systematic needs assessment can guide and serve as the basis for the design, development, delivery, and evaluation of the training program; it can be used to specify a number of key features for the implementation (input) and evaluation (outcomes) of training programs. Consequently, the presence and comprehensiveness of needs assessment should be related to the overall effectiveness of training because it provides the mechanism whereby the questions central to successful training programs can be answered.
In the design and development of training programs, systematic attempts to assess the training needs of the organization, identify the job requirements to be trained, and identify who needs training and the kind of training to be delivered should result in more effective training. Thus, the research objective here was to determine the relationship between needs assessment and training outcomes. A product of the needs assessment is the specification of the training objectives that, in turn, identifies or specifies the skills and tasks to be trained. A number of typologies have been offered for categorizing skills and tasks (e.g., Gagné, Briggs, & Wager, 1992; Rasmussen, 1986; Schneider & Shiffrin, 1977).
Without this coherence training usually consists of individuals attending courses without the realization that it might be cheaper and more effective to engage a trainer to provide training within the organisation; or that money is being spent on training without any clear idea of what that training is trying to achieve. Different people will often be attending courses on the same subject unnecessarily, or going to courses in areas they are already competent at (but enjoy) or are interested in rather than in areas that are required in order for the organisation to meet its current objectives.
A TNA involves five basic steps:
1. Identifying the objectives of the organisation
2. Appointing a training coordinator
3. Gathering information about the skills and abilities of the individuals that are needed now and will be needed in the future
4. Analyzing that information
5. Identifying the gaps that exist between the current situation and what is/will be required
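Steps 3–5 above amount to comparing the skills required with the skills currently held. As a minimal sketch (the staff names and skill labels are hypothetical, not from the text), the gap can be computed as a set difference:

```python
# Hypothetical TNA sketch: steps 3-5 as a set comparison.
# Staff names and skill labels are invented for illustration.

# Step 3: gather the skills required (now and in future) and those held.
required_skills = {"patient triage", "appointment software", "data protection"}
current_skills = {
    "Alice": {"patient triage", "appointment software"},
    "Bob": {"appointment software"},
}

# Steps 4-5: analyse the information and identify each person's gap.
gaps = {name: required_skills - held for name, held in current_skills.items()}

for name in sorted(gaps):
    print(name, "->", sorted(gaps[name]))
```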
Identifying the objectives of the organisation
The first step in the TNA is to ensure that the organisation has clear, focused business objectives. Top management should agree these so that a clear idea of what the organisation is trying to achieve is understood by everyone in it. Based on these objectives it is possible to assess which areas of the overall business plan take priority at the moment, and which areas link with other areas.
Appointing a training coordinator
Even in the smallest organisation it is worth appointing a training coordinator. The reason is that training needs to be part of someone's job; if it is not, it tends to get left out, and that is one of the reasons why training is so often poorly planned and implemented. Often the training coordinator can also be the training administrator (the person who books the courses or organises the training sessions).
Time and money should be available to plan training for people according to their needs, and some involvement in the process is essential from the senior managers or owners.
Gathering the information

Once we are clear on our objectives and have a training coordinator appointed, we can begin to gather knowledge about what needs to be done, what is being done and how well the people involved are doing it.
We can divide the knowledge required into three areas:
1. What do people need to do in order that the objective is achieved?
2. What skills and knowledge do people already possess?
3. What skills and knowledge may be required in the future to continue to achieve future objectives?
Not all information will relate to training. For example, an objective to improve the décor of the patient’s waiting area might well be important, but if an outside contractor is likely to be involved in providing the design and overseeing the implementation then no training is needed. However, sometimes a decision to involve outside help may well identify the need for training in order that the job can be properly commissioned. Thus information can be gathered from CVs or application forms, job descriptions, staff and patient surveys, appraisal forms, interviews and self-assessment forms. All of these will form a rich base from which an analysis can be made.
Analysing the information
This stage in the TNA is both stimulating and challenging. Analytical skills are required, and time to carry out the analysis is essential. If the organisation is to benefit from the effective use of resource in training then the person carrying out the TNA must be free to carry out a full and proper analysis. If not, the whole process is self-defeating and often the process is blamed for failure rather than the lack of resource to carry it out properly.
The analysis should be answering the basic questions:
1. What gaps exist in both knowledge and ability of the current people in the organization to carry out their jobs now?
2. What gaps exist in both knowledge and ability of the current people in the organization to carry out their jobs in the future?
Thus the TNA allows for both current gaps relating to current job descriptions and possible gaps assuming some form of future development. The analysis leads naturally into the final stage of the TNA.
Identifying the gaps
The whole point of the TNA is the actual planning and implementation of relevant training for the people in the organisation. Training plans are the documents the organisation uses to plan everyone's training, costed out and budgeted for. They are working documents (in other words, they keep changing as events and circumstances change) and form the core of investment in the development of the organisation's people.
They are also the end result of the TNA, as it is hardly worth investing all that time and effort in identifying training needs if nothing happens as a result. TNA is quite simply a way of identifying the existing gaps in either knowledge or ability of the people in the organization to carry out the tasks that enable them to do their jobs. The process assumes that the jobs people carry out have been defined in order that the business objectives of the organization will be achieved. Thus a training need analysis will ensure people are better able to do their jobs because they have improved their knowledge and their skills in relevant areas.
A performance gap is a behavioural area not performed to standard when measuring task performance. Some performance gaps are quite easy to measure. For example, if the standard is to dig a ditch 2 feet wide and 2 feet deep, but it is only dug 1½ feet deep, then there is a performance gap between what the depth is and what it should be. If the ditch-digger does not know how to dig a ditch two feet deep, then there is a training problem. If the ditch-digger knows how, but did not do it, then it is some other type of performance problem besides training.
In a performance analysis for the present, you subtract the present behavior (B) from the standards (S) to measure the performance gap (G). This measurement, S – B = G, becomes the span that must be bridged in order to reach the objective. To plan for future requirements (visioning), you determine where you are now (the present behavior (B)) and where you want to go (the future standard (S)). Again, the difference between B and S is the performance gap (G).
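The S – B = G measurement can be sketched directly. The present-gap figures below come from the ditch example in the text; the future-standard figure is an invented illustration of the visioning case:

```python
# Sketch of the performance-gap measurement G = S - B.

def performance_gap(standard: float, behaviour: float) -> float:
    """The gap G is the span to be bridged to reach the standard S
    from the present (or projected) behaviour B."""
    return standard - behaviour

# Present: the ditch example -- standard 2.0 ft deep, actual 1.5 ft.
print(performance_gap(2.0, 1.5))  # 0.5

# Future (visioning): an assumed future standard of 3.0 ft measured
# against today's behaviour of 2.0 ft.
print(performance_gap(3.0, 2.0))  # 1.0
```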
The significance of needs assessment can be better understood by looking at the consequences of an inadequate or absent needs assessment. Failure to conduct a needs assessment can contribute to:
• Loss of business
• Constraints on business development
• Higher labour turnover
• Poorer-quality applicants
• Increased overtime working
• Higher rates of pay, overtime premiums and supplements
• Greater pressure and stress on management and staff to provide cover
• Pressure on job evaluation schemes, grading structures, payment systems and career structures
• Additional retention costs in the form of flexible working time, job sharing, part-time working, shift working, etc.
• Undermined career paths and structures
• The need for job redesign and revision of job specifications
• Higher training costs (Darling, 1993).
Not all gaps in your competences can be met by training; some may result from problems in the workplace or elsewhere. Examples of these include:
• Lack of equipment or equipment that is inadequate for the task
• Poor working environment, e.g. an unsuitable room
• Failures in communication between members of the practice team
• Too many, or too few tasks allocated in a specific time frame
• Unclear job descriptions or lines of accountability
• Changes in working practices or timing that do not take into account individuals' personal circumstances.
Approaches to needs identification
Training and development needs can be identified at different levels:
TNA at an organisational level concentrates on assessing needs against business strategy and goals. In many ways, this level is the most important because it starts with an assessment of the organisation's strategic direction. One important decision is whether training is the appropriate means by which specific organisational objectives are going to be achieved, or whether some other form of intervention would be more appropriate. One way of implementing an organisational TNA is through a SWOT analysis:
Weaknesses can be managed with training interventions, while strengths can be consolidated with continued training and maintenance of the status quo. Opportunities need to be balanced against costs; training needs should be factored into these costs while the skills required to drive the business forward can be identified. Key threats can be minimized by identifying areas where training interventions could improve the performance of employees and ultimately, of the organisation.
Bespoke solutions for specific departments or teams need to be assessed. There may be unique technological or product development required. Line manager views should be sought to identify these needs and a SWOT analysis can be scaled to assess these needs.
Occupational levels are closely associated with individual needs. Line management can identify issues to be tackled that are associated with those specific jobs or occupations. Individual needs can then be linked to the competence of individual employees within their roles. Methods for analysing the needs of individuals include:
• Appraisal and performance review
• Self-assessment or self-appraisal
• Subordinate appraisal
• Peer appraisal
• Assessment centres
• Client/customer feedback
• Competency assessments
• Reviews against occupational standards including National Vocational Qualifications (NVQs)
Training needs can be benchmarked against external standards. TNA plays a vital role in the learning cycle. Businesses should directly involve employees and ensure that managers are responsible for their teams, so that individual and departmental/team needs are equally met and factored into targets for individuals.
Occupational standards like NVQs can be used as means of assessing training needs. Most standards will communicate explicit statements regarding standards of expected performance. These can be used to specify what an individual is required to be able to do and at what level. To identify actual training needs it would be necessary to match the standard against each individual, in order to identify the gap to be filled through appropriate training and development.
Distinguishing between training needs and wants
A distinction needs to be made between what is needed to achieve a specified successful outcome from the organisation’s point of view and what an employee wants. Managing any conflict of interest is key to ensuring that the individual is motivated and that the business needs are met. A common problem that occurs when performance appraisal systems are introduced is the raising of employee expectations of the outcomes arising from the system.
Prioritisation of training needs
Training needs must be prioritised if they are to be aligned with business strategy. There may be levels of training that are required to comply with statutory legislation such as health and safety. In these cases these basic standards must be prioritised over and above other training needs. Managing employee expectations of training must also be factored into prioritising the sequence of training.
There may be some training and development initiatives, which, although accorded a low priority from the organisational viewpoint, might raise motivational levels among employees. Importantly, training needs must always be managed within the constraints of organisational budgets. A cost/benefit analysis is a good way of measuring the return on investment of any training prior to implementation.
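A minimal sketch of such a pre-implementation cost/benefit check, with invented figures and a simple percentage-ROI formula assumed here (the text does not prescribe a particular formula):

```python
# Hedged sketch of a pre-implementation cost/benefit check.
# The ROI formula and the figures are illustrative assumptions only.

def training_roi(expected_benefits: float, total_costs: float) -> float:
    """ROI as a percentage: (benefits - costs) / costs * 100."""
    if total_costs <= 0:
        raise ValueError("total_costs must be positive")
    return (expected_benefits - total_costs) / total_costs * 100.0

# E.g. a course costing 10,000 expected to return 15,000 in benefits:
print(training_roi(15_000.0, 10_000.0))  # 50.0 (% return)
```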
The design phase ensures the systematic development of the training programme. This process is driven by the products of the analysis phase and ends in a model or blueprint of the training programme for future development.
This model or blueprint will contain five key outputs:
• Entry behaviors
• Learning objectives
• Learning steps (performance steps)
• Performance test
• Structure and sequence program outline
The entry behaviors describe what a learner must know before entering the training programme. Just as a college requires certain standards to be met in order to enrol, a training programme should require a base level of knowledge, skills, and attitudes (KSA). The learning objectives tell what tasks the learners will be able to perform after the training, the learning steps tell how to perform the tasks, and the performance test tells how well the tasks must be performed.
Finally, the learning objectives are sequenced in an orderly fashion that provides the best opportunity for learning, such as arranging the learning objective from easy to hard or in the order they are performed on the job.
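The five design outputs above could be captured in a single blueprint record. The field names and example values below are my own invention, not taken from the source:

```python
# Hypothetical container for the design phase's five key outputs.
# Field names and example values are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class TrainingBlueprint:
    entry_behaviors: list      # prerequisite KSAs required before enrolment
    learning_objectives: list  # tasks learners can perform after training
    learning_steps: dict       # task -> how it is performed
    performance_tests: dict    # task -> how well it must be performed
    sequence: list = field(default_factory=list)  # e.g. easy-to-hard order

bp = TrainingBlueprint(
    entry_behaviors=["basic till operation"],
    learning_objectives=["process a refund"],
    learning_steps={"process a refund": ["open till", "verify receipt"]},
    performance_tests={"process a refund": "complete in under 2 minutes"},
    sequence=["process a refund"],
)
print(bp.learning_objectives)  # ['process a refund']
```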
A multitude of training methods are used to train employees. The most commonly used methods fall into two broad categories: on-the-job methods and off-the-job methods. On-the-job methods are applied in the workplace, while the employee is actually working; off-the-job methods are used away from the workplace.
Training techniques and delivery systems comprise the media for imparting skills and knowledge to employees, that is, the means employed in the training methods. Among the most commonly used techniques are lectures, films, audio, case studies, role-playing, simulations, etc. The next question in designing training is to decide on the level of learning. The inputs passed on to trainees in training programmes are education, skills and the like.
In addition there are three basic levels at which these inputs can be taught. At the lowest level, the employee or the potential employee must acquire fundamental knowledge. This means developing a basic understanding of a field and becoming acquainted with the language, concepts and relationships involved in it. The goal of the next level is skill development or acquiring the ability to perform in a particular skill area. The highest level aims at increased operational proficiency. This involves additional experience and improving skills that have already been developed. All the inputs of training can be offered at the three levels. How effectively they are learned depends on several principles of learning.
Training programmes are more likely to be effective when they incorporate the following principles of learning: employee motivation, recognition of individual differences, practice opportunities, reinforcement, knowledge of results (feedback), goals, schedules of learning, meaningfulness of material, and transfer of learning (Schuler et al., 1989; Hinrichs, 1976).
What is learnt in training must be transferred to the job. The traditional approach to transfer has been to maximise the identical elements between the training situation and the actual job. This may be possible for training skills such as operating a cash register, but not for teaching leadership or conceptual skills. Often, what is learned in the training session meets resistance back on the job. Techniques for overcoming this resistance include creating positive expectations on the part of the trainee's supervisor, creating opportunities to implement new behaviour on the job, and ensuring that the behaviour is reinforced when it occurs.
Implementation of the training programme
Once the training programme has been designed, it needs to be implemented. Implementation is beset with certain problems. Programme implementation involves action on the following lines:
• Deciding the location and organising training and other facilities
• Scheduling the training programme
• Conducting the programme
• Monitoring the progress of trainees
Evaluation of the programme
Evaluation is an integral part of most training programmes. Evaluation tools and methodologies help determine the effectiveness of instructional interventions. Despite its importance, there is evidence that evaluations of training programs are often inconsistent or missing (Carnevale & Schulz, 1990; Holcomb, 1993; McMahon & Carter, 1990; Rossi et al., 1979). Possible explanations for inadequate evaluations include: Insufficient budget allocated; insufficient time allocated; lack of expertise; blind trust in training solutions; or lack of methods and tools (see, for example, McEvoy & Buller, 1990).
Part of the explanation may be that the task of evaluation is complex in itself. Evaluating training interventions with regard to learning, transfer, and organizational impact involves a number of complexity factors. These complexity factors are associated with the dynamic and ongoing interactions of the various dimensions and attributes of organizational and training goals, trainees, training situations, and instructional technologies. Evaluation goals involve multiple purposes at different levels.
These purposes include evaluation of student learning, evaluation of instructional materials, transfer of training, return on investment, and so on. Attaining these multiple purposes may require the collaboration of different people in different parts of an organization. Commonly used approaches to educational evaluation have their roots in systematic approaches to the design of training.
They are typified by the instructional system development (ISD) methodologies, which emerged in the USA in the 1950s and 1960s and are represented in the works of Gagné and Briggs (1974), Goldstein (1993), and Mager (1962). Evaluation is traditionally represented as the final stage in a systematic approach with the purpose being to improve interventions (formative evaluation) or make a judgment about worth and effectiveness (summative evaluation) (Gustafson & Branch, 1997). More recent ISD models incorporate evaluation throughout the process (see, for example, Tennyson, 1999).
Various frameworks for the evaluation of training programs have been proposed under the influence of these two approaches. The most influential framework has come from Kirkpatrick (Carnevale & Schulz, 1990; Dixon, 1996; Gordon, 1991; Phillips, 1991, 1997). Kirkpatrick's work generated a great deal of subsequent work (Bramley, 1996; Hamblin, 1974; Warr et al., 1978). Kirkpatrick's model (1959) follows the goal-based evaluation approach and is based on four simple questions that translate into four levels of evaluation.
These four levels are widely known as reaction, learning, behavior, and results. On the other hand, under the systems approach, the most influential models include: Context, Input, Process, Product (CIPP) Model (Worthen & Sanders, 1987); Training Validation System (TVS) Approach (Fitz-Enz, 1994); and Input, Process, Output, Outcome (IPO) Model (Bushnell, 1990). For evaluations to have a substantive and pervasive impact on the development of training programs, internal resources and personnel such as training designers, trainers, training managers, and chief personnel will need to become increasingly involved as program evaluators.
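Kirkpatrick's four levels can be written down as an ordered enumeration. The one-line glosses in the comments are paraphrases for illustration, not Kirkpatrick's own wording:

```python
# Kirkpatrick's (1959) four evaluation levels as an ordered enumeration.
# The comment glosses are paraphrased, not quoted.

from enum import IntEnum

class KirkpatrickLevel(IntEnum):
    REACTION = 1  # did trainees respond favourably to the programme?
    LEARNING = 2  # did they acquire the intended knowledge and skills?
    BEHAVIOR = 3  # did their behaviour on the job change?
    RESULTS = 4   # did targeted organisational outcomes improve?

for level in KirkpatrickLevel:
    print(level.value, level.name.title())
```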
While using external evaluation specialists has validity advantages, time and budget constraints make this option highly impractical in most cases. Thus, the mentality that evaluation is strictly the province of experts often results in there being no evaluation at all. These considerations make a case for the convenience and cost-effectiveness of internal evaluations. However, the obvious concern is whether the internal team possesses the expertise required to conduct the evaluation, and, if it does, how the bias of internal evaluators can be minimized. Therefore, just as automated expert systems are being developed to guide the design of instructional programs (Spector et al., 1993), such systems might also be created for instructional evaluations.
A lack of evaluation expertise among training designers, pressures for increased productivity, and the need to standardize the evaluation process to ensure the effectiveness of training products are some of the elements that may motivate supporting an organization's evaluation with technology. Such systems might also help minimize the potential bias of internal evaluators.
Different approaches to evaluation of training discussed herein indicate that the activities involved in evaluation of training are complex and not always well structured. Since evaluation activities in training situations involve multiple goals associated with multiple levels, evaluation should perhaps be viewed as a collaborative activity between training designers, training managers, trainers, floor managers, and possibly others.
Training activities are designed, considerable costs notwithstanding, to impart specific skills, abilities and knowledge to employees. Training is typically confined to shop-floor workers. A programme of training is important as it lends stability and flexibility to an organisation, besides contributing to its capacity to grow. Accidents, scrap and damage to machinery and equipment can be avoided or minimized. The training process involves several steps: 1. defining organisational objectives and strategies; 2. assessing training needs; 3. establishing training goals; 4. devising the training programme; 5. implementing the programme; and 6. evaluating the results. Training is thus an essential part of any organisation and a highly technical and professional issue, which an able training administrator must properly manage.
1. Bramley, P. (1996). Evaluating training effectiveness. Maidenhead: McGraw-Hill.
2. Bushnell, D. S. (March, 1990). Input, process, output: A model for evaluating training. Training and Development Journal, 44(3), 41-43.
3. Carnevale, A. P., & Schulz, E. R. (July, 1990). Return on investment: Accounting for training. Training and Development Journal, 44(7), S1-S32.
4. Dixon, N. M. (1996). New routes to evaluation. Training and Development, 50(5), 82-86.
5. Darling, P. (1993). Training for profit. Maidenhead: McGraw-Hill Training Series, p. 5.
6. Fitz-Enz, J. (July, 1994). Yes…you can weigh training’s value. Training, 31(7), 54-58.
7. Gagné, R., & Briggs, L. J. (1974). Principles of instructional design. New York: Holt, Rinehart & Winston.
8. Gagné, R. M., Briggs, L. J., & Wager, W. W. (1992). Principles of instructional design. New York: Harcourt Brace Jovanovich.
9. Goldstein, I. (1993). Training in organizations: Needs assessment, development, and evaluation. Monterey, CA: Brooks-Cole.
10. Goldstein, I. L., & Ford, J. K. (2002). Training in organizations: Needs assessment, development, and evaluation (4th ed.). Belmont, CA: Wadsworth.
11. Gordon, J. (August, 1991). Measuring the “goodness” of training. Training, 28(8), 19-25.
12. Gustafson, K. L., & Branch, R. B. (1997). Survey of instructional development models (3rd ed.). Syracuse, NY:
13. Hamblin, A. C. (1974). Evaluation and control of training. Maidenhead: McGraw-Hill.
14. Hinrichs, J. R. (1976). Personnel training. In Handbook of industrial/organizational psychology. Chicago: Rand McNally, p. 856.
15. Holcomb, J. (1993). Make training worth every penny. Del Mar, CA: Wharton.
16. Kirkpatrick, D. L. (1959). Techniques for evaluating training programs. Journal of the American Society of Training Directors, 13, 3-26.
17. McGehee, W., & Thayer, P. W. (1961). Training in business and industry. New York: Wiley.
18. Mager, R. F. (1962). Preparing objectives for programmed instruction. San Francisco, CA: Fearon Publishers.
19. McEvoy, G. M., & Buller, P. F. (August, 1990). Five uneasy pieces in the training evaluation puzzle. Training and Development Journal, 44(8), 39-42.
20. McMahon, F. A., & Carter, E. M. A. (1990). The great training robbery. New York: The Falmer Press.
21. Phillips, J. J. (1991). Handbook of training evaluation and measurement methods (2nd ed.). Houston, TX: Gulf.
22. Phillips, J. J. (July, 1997). A rational approach to evaluating training programs including calculating ROI. Journal of Lending and Credit Risk Management, 79(11), 43-50.
23. Rasmussen, J. (1986). Information processing and human–machine interaction: An approach to cognitive engineering. New York: Elsevier.
24. Rossi, P. H., Freeman, H. E., & Wright, S. R. (1979). Evaluation: A systematic approach. Beverly Hills, CA: Sage.
25. Schuler, R. S., et al. (1989). Effective personnel management (3rd ed.). New York: West Publishing.
26. Schneider, W., & Shiffrin, R. M. (1977). Controlled and automatic human information processing: I. Detection, search, and attention. Psychological Review, 84, 1–66.
27. Sleezer, C. M. (1993). Training needs assessment at work: A dynamic process. Human Resource Development Quarterly, 4, 247–264.
28. Spector, J. M., Polson, M. C., & Muraida, D. J. (Eds.) (1993). Automating instructional design: Concepts and issues. Englewood Cliffs, NJ: Educational Technology Publications.
29. Tennyson, R. D. (1999). Instructional development and ISD4 methodology. Performance Improvement, 38(6), 19-27.
30. Warr, P., Bird, M., & Rackham, N. (1978). Evaluation of management training. London: Gower.
31. Worthen, B. R., & Sanders, J. R. (1987). Educational evaluation. New York: Longman.
32. Zemke, R. E. (1994). Training needs assessment: The broadening focus of a simple construct. In A. Howard (Ed.), Diagnosis for organizational change: Methods and models (pp. 139–151). New York: Guilford Press.