Methods for the Evaluation of the Impact of Food and Nutrition Programmes (UNU, 1984, 287 pages)

1. Basic concepts for the design of evaluation during programme implementation
The question "How do we tell if a programme has an effect?" is incomplete without knowing why one needs to know. Common reasons are:
- to decide whether or not to continue the existing programme,
- to redesign the programme if necessary, or
- to decide whether to undertake similar programmes elsewhere.
Those involved in a programme often have different expectations about the purpose and results of an evaluation. It is important that the decisions which will be made using the evaluation findings be clearly understood and agreed on. The evaluators must then tailor not only the design of the evaluation but also the presentation of its results to that purpose. For instance, results presented as if the purpose were to decide on the continuation or termination of a programme are inappropriate if the purpose of the evaluation is to improve the programme. Evaluation cannot be seen in isolation from who asks the question. It is not so much that the principles and practice of evaluating ongoing programmes are unsatisfactory as that the whole decision-making process in nutrition and food aid programmes needs improvement.
The purposes and issues addressed in an evaluation depend on who is asking the questions. The following sequence of basic issues is of particular interest to different audiences:
- Is the intervention performing as expected? (Programme managers, administrators, and funders)
- Is the intervention worth continuing? (Administrators and funders)
- Should it be extended? (Administrators and funders)
- Is it causally linked to improved nutrition? (Researchers, scientists, and others concerned with basic mechanisms of cause and effect)
This sequence begins with considering whether the programme is performing adequately and can progress to seeking to ascribe causality between the intervention and the outcome. It approximates the changing concerns of project managers and administrators, ending with the researchers' concern with causality. Causality, if it can be shown, is also important to all aspects of management, programme design, and policy; it is, however, difficult and expensive to establish, and the more that certainty about causality can be dispensed with, the easier the evaluation becomes. Project management can often, in fact, get by with the knowledge that the beneficiaries are improving, even if it cannot be sure this is due to the programme.
Part of the information needed to address questions such as those given above can be obtained from an evaluation of project design and from process data. Moreover, these data can be used to screen out those projects that are unlikely to have any important effect on outcome and are thus not worth evaluating further. This procedure is set out in subsequent sections. Other decisions required in establishing the purposes of an evaluation centre on the degree of certainty required in linking outcome to programme delivery, and these need to be explained in more detail at this stage.
Different purposes of evaluation demand varying degrees of plausibility or certainty for the conclusions reached from the evaluation. The purposes, in order of increasing need for higher levels of certainty (elaborating from the sequence of questions just given), are:
- to improve the delivery of the programme;
- to decide whether the programme should be continued;
- to decide whether it should be extended or replicated elsewhere;
- to establish that the intervention itself caused the observed improvement in nutrition.
The methodological and data requirements of responding to the differing needs of these purposes for certainty and plausibility entail, in order of increasing expense and difficulty:
- showing that performance and outcomes are adequate relative to expectations, without reference to a comparison group;
- showing that an observed improvement is plausibly attributable to the programme, which requires some form of comparison group;
- establishing with stated probability that the improvement was caused by the programme, which requires randomized allocation of the intervention.
It would be useful to consider how these two lists can be matched; each item in the first list is taken up individually below. (This discussion is summarized in table 1.1.)
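As a purely schematic sketch (nothing of the kind appears in the original text), the matching can be read as choosing, for each purpose, the least expensive design that supplies enough certainty. The purpose names, certainty ranks, and design labels below are illustrative assumptions paraphrased from the two lists above.

```python
# Hypothetical sketch: match each evaluation purpose to the least
# expensive design that supplies the certainty it requires. All
# names and ranks here are illustrative assumptions, not the
# original text's terminology.

# Certainty required by each purpose, in increasing order.
CERTAINTY_REQUIRED = {
    "improve programme delivery": 1,
    "decide on continuation": 2,
    "decide on extension elsewhere": 3,
    "establish causality": 4,
}

# Designs in increasing order of expense and difficulty, paired
# with the level of certainty each can supply.
DESIGNS = [
    ("compare outcomes with expectations, no control group", 2),
    ("compare with a non-randomized comparison group", 3),
    ("randomized allocation of the intervention", 4),
]

def cheapest_adequate_design(purpose: str) -> str:
    """Return the least expensive design whose certainty meets the need."""
    needed = CERTAINTY_REQUIRED[purpose]
    for design, supplied in DESIGNS:
        if supplied >= needed:
            return design
    raise ValueError("no design supplies the required certainty")

if __name__ == "__main__":
    for purpose in CERTAINTY_REQUIRED:
        print(f"{purpose} -> {cheapest_adequate_design(purpose)}")
```

The point of the sketch is simply that the two orderings run in parallel: the less certainty a decision demands, the cheaper the design that suffices.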
The confidence with which the conclusions in each of the above cases are reached can be considerably improved by strong theory relying on good scientific evidence from elsewhere.