Designing Human Settlements Training in African Countries - Volume 2: Trainer's Tool Kit (HABITAT, 1994, 182 p.)

Training impact evaluation

Learning Emphasis

Organization Focus

The stem of the lotus will tell the depth of the water

- Thai Proverb

Evaluation is how trainers find out what effect their programmes are having on their participants. How evaluation is used, of course, depends on what is to be evaluated. For example, the trainer might want to know how well a programme was received by its participants. In other words, did participants find the programme useful, interesting, educational, exhilarating, or not, and to what extent? Usually, this is done by asking the participants - requesting them to fill out a reaction sheet on the programme at the time of their departure or even sometime later, after returning to the job. The trainer also might want to know what or how much participants have learned. This is done by comparing what participants thought, knew or could do before the training with what they think, know or can do after the training. Such comparisons are often made using before-and-after testing.

Many trainers want to know, or at least they should want to know, if their training programmes are having long-term impact. It is one thing to confirm that learning has taken place. It is another to show management that training has actually resulted in better performance (i.e., producing evidence from evaluation that training has made a favourable impact on the job behaviour of individuals and the organizations they serve). For obvious reasons, this is called impact evaluation.


Training can have an impact on the behaviour of the individuals who have been trained, but that impact cannot be taken for granted. Just as people who like a programme don’t necessarily learn anything from it, people who learn new things - how to think differently or how to do things in different ways - won’t necessarily use what they have learned back on the job. Moreover, if they do begin to behave differently after being trained, factors other than training may account for the change.

Among the evaluation techniques often used to determine the impact of a training programme on the behaviour of training participants are these:

· Comparisons of performance before and after the training,
· Data on performance obtained from supervisors, subordinates, and peers,
· The use of statistical comparisons, and
· Follow-ups over time.
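The first of these techniques, a comparison of performance before and after the training, can be reduced to simple arithmetic. The sketch below is illustrative only; the participants and scores are invented:

```python
# Illustrative before-and-after comparison of performance scores.
# All names and scores are invented for the sake of the example.

before = {"clerk_1": 62, "clerk_2": 55, "clerk_3": 70}
after = {"clerk_1": 71, "clerk_2": 60, "clerk_3": 69}

def average_change(before, after):
    """Mean change in score across participants measured before and after training."""
    changes = [after[person] - before[person] for person in before]
    return sum(changes) / len(changes)

print(round(average_change(before, after), 2))  # 4.33
```

A positive mean by itself does not prove impact; follow-ups over time and statistical comparisons help rule out causes other than the training.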

Training can have an impact on the organization as well. This may show up as lower operating costs, higher work output, and quality changes that are attributable to the training. Impacts of this kind are measured in relation to the work unit, department, or organization rather than in relation to the performance of individual employees. Nevertheless, these impacts on organizational performance come about because of changes in the behaviour of participants - employees who have been trained to serve the organization more efficiently and effectively and who are putting their new knowledge and skills to use on the job.

There are two important reasons why trainers should take the time to evaluate the impact of their programmes. One reason is to find out how successful the training was in achieving programme objectives and to use this information to make needed changes in design and delivery. Another reason is to make a convincing case for the value of the training for the benefit of clients who may be skeptical about training’s potential for improving individual and organizational performance.

If possible, impact evaluation should be carried out by someone other than the person or persons who will be doing the training. The use of an evaluator who is not involved in the training will add credibility and objectivity to the evaluation report prepared for the client. Evaluators can be selected from the staff of an organization’s training unit, the personnel department or some other organizational unit. Or they can be employed from an outside training organization. If problems of availability or cost will not permit the use of a separate evaluator, then the trainer should do it. The point is, evaluation must be carried out by somebody if there is to be conclusive evidence that the training has done what it was designed to do.

Impact evaluation is a logical process, a series of steps that must be carried out rigorously by the evaluator to obtain valid and reliable data about the value of training. Evaluation begins early in the training process. The evaluator serves as a valuable companion for the trainer at the training design stage in reviewing performance data from needs assessments and questioning the completeness of training objectives. What follows are eight essential steps in the impact evaluation process.

Steps in impact evaluation to be completed before the training

1. Collect background information on the training programme and training needs to be addressed. This information includes the reasons why the training is to be conducted, who is to be trained and for what purpose, the design and method of delivery, who will want to know what the evaluation reveals, and what constraints (time and money) influence the scope and content of the evaluation.

2. Specify the training objectives. The objectives of a training programme are established in collaboration with the client and the individual or organization doing the training. Besides offering direction to the trainer and encouragement to the training participants, objectives provide a solid basis for evaluating the programme’s success. A well-written objective is relevant to the organization’s training needs and contains indicators of wanted performance (i.e., measures of the expected training results contained in the objective). In the following example of a well-written, measurable, training objective, the indicator of wanted performance is shown in italic type.

Example: Seventy-five per cent of the clerical staff of the municipality will demonstrate improved typing skills, showing an increase of not less than 10 per cent in the number of words typed per minute with no errors.
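An indicator written this way can be checked mechanically against before-and-after data. The sketch below uses invented figures and simplifies “no errors” to a per-person flag:

```python
# Checking the typing objective: did at least 75 per cent of staff
# raise their error-free typing speed by at least 10 per cent?
# The data below are invented for illustration.

# (words per minute before, words per minute after, error-free after training?)
results = [
    (40, 46, True),   # +15.0 per cent, error-free
    (50, 54, True),   # +8.0 per cent - below the threshold
    (35, 40, True),   # +14.3 per cent, error-free
    (45, 50, True),   # +11.1 per cent, error-free
]

def objective_met(results, required_share=0.75, required_gain=0.10):
    """True if enough participants achieved the required speed gain without errors."""
    achievers = sum(
        1 for wpm_before, wpm_after, error_free in results
        if error_free and (wpm_after - wpm_before) / wpm_before >= required_gain
    )
    return achievers / len(results) >= required_share

print(objective_met(results))  # True: 3 of 4 clerks (75 per cent) met the standard
```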

3. Decide on the questions to be answered by the evaluation. There are many questions that can be asked about a training programme. The evaluator, the trainer, and the client must decide what questions are most important. The rest of the evaluation process will focus on getting answers to these questions. Normally, the most important question is, “To what extent did the programme meet its objectives?” Other questions pertaining to impact might be: “What are programme participants doing differently since the training? What impact has this had on customer satisfaction? What evidence is there that unit performance has improved since the training? Were the benefits greater than the cost of the training?”

4. Choose an evaluation design. At a minimum, an evaluation design, or workplan, should specify who will be evaluated, when and how (e.g., questionnaires, interviews, observation, follow-up techniques and so forth). The methods selected should enable the evaluator to determine with reasonable accuracy if the training did what it was supposed to do, what participants intend to do differently, and in what specific ways the performance of the organization or work unit can be expected to improve.

Steps in impact evaluation to be completed during or after the training

5. Collect data. When these four steps are complete, it is up to the evaluator to perform the various tasks specified in the evaluation design. These might include, for example, developing questionnaires, scheduling interviews, administering surveys, tabulating results, preparing reports and so on.

6. Analyse the data. At a minimum, the evaluator must compile the results of any surveys or interviews conducted. It may be necessary to calculate averages from the combined responses to interview or survey questions and to draw summary conclusions from survey results.
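Compiling averages from combined responses is straightforward arithmetic. In this sketch the questions and ratings are invented, and a 1-to-5 scale (5 = completely successful) is assumed:

```python
# Averaging survey responses question by question.
# Questions and scores are invented; a 1-5 rating scale is assumed.

responses = {
    "Objective A achieved": [5, 4, 4, 3, 5],
    "Objective B achieved": [3, 3, 4, 2, 3],
}

summary = {
    question: sum(scores) / len(scores)
    for question, scores in responses.items()
}

for question, mean in summary.items():
    print(f"{question}: {mean:.1f}")
```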

7. Report the impact. This step should be relatively easy if the training objectives have been clearly defined and the evaluation carefully designed. The report should relate directly to the programme’s objectives and be written simply and concisely so as to be easily understood by decision-makers.

8. Follow-up. This final step is to measure the results actually achieved by the programme (its impact on the training participants and the organization). This is done using questionnaires, interviews, observation and other data-gathering techniques like those used in the end-of-programme evaluation.

Shown on the next couple of pages are two simple self-assessment questionnaires for use by evaluators in determining the impact of training on the behaviour of participants and on the organization.


An important focus for training evaluation is the impact of a particular programme on the behaviour of programme participants (what they are doing differently in relation to the objectives of the training) as well as the organization (what favourable changes - lower cost, increased production, higher quality service, fewer errors - can be attributed to the training). Evaluating training impact can be done using a logical, rigorous eight-step process. It must be carefully planned in conjunction with the development of training objectives and in relation to the results of systematic training needs assessment. While evaluation, ideally, should be carried out by someone other than the trainer, the most important thing is that it gets done and that the results are reported to decision-makers.

Exhibit 1

Post Training Questionnaire

This questionnaire is to be completed by participants at the conclusion of the programme. It is meant to identify each participant’s satisfaction with the learning content and value of the programme, as well as his or her plans to use what was learned to improve work performance. For an adequate evaluation of a particular programme, an evaluator might add more detail to these evaluation questions or include additional questions.

1. The following objectives were stated for this programme. How successful was the programme in achieving these objectives?

(State the objectives below)

                        Completely     Generally     Limited
                        Successful     Successful    Success

A. To improve...           ____           ____         ____

B. To develop...           ____           ____         ____


2. Did this programme meet your needs as a ________________________________?
                                                     (job or position)

   ____ Yes
   ____ Not sure
   ____ No (please explain)


3. Please indicate what you intend to do differently on the job as a result of this programme (be specific).

a. ________________________________________________________________
b. ________________________________________________________________
c. ________________________________________________________________

4. To what extent do you expect to receive support and encouragement from your organization in making use of the new knowledge and skill on your job?

   ____ To a great extent
   ____ To some degree
   ____ Will be discouraged

5. As a result of making use of programme learnings on the job, please estimate any improvements in organizational performance (i.e., fewer complaints, increased output, greater customer satisfaction, better teamwork, lower costs, and so forth) during the next 12 months.


6. Based on your plan to use new learnings and the support and encouragement you expect to receive from the organization, what confidence, expressed as a percentage, do you put on your estimate of performance improvement? (100% = complete confidence; 0% = no confidence.)


Exhibit 2

Follow-up Questionnaire

1. What have you been doing in your position with the organization since the training programme that you were not doing before, something that you feel is attributable in some way to your participation in the programme?


2. What have you stopped doing in your position since the training programme that you feel is attributable in some way to your participation in the programme?


3. What changes in you, your work, or your relations with others were caused in some way by your participation in this programme?


4. How much of what you learned have you been permitted or encouraged to put into practice since returning to the job from the training?

   ____ Nothing of what I learned
   ____ Very little of what I learned
   ____ Some of what I learned
   ____ Most of what I learned
   ____ All of what I learned

Comment _________________________________________________________

5. As a result of this training, what do you estimate to be your increase in personal effectiveness, expressed as a percentage?

_____ %

6. As a result of changes in your thinking and practice based on what you and others learned at the training, what improvements in the organization have taken place (i.e., fewer complaints, increased output, greater customer satisfaction, better teamwork, lower costs and so forth)?


7. Looking back, my opinion of the value of the programme overall is...