Methods for the Evaluation of the Impact of Food and Nutrition Programmes (UNU, 1984, 287 pages)
1. Basic concepts for the design of evaluation during programme implementation
Process evaluation demands formulating implementation and performance objectives against which the programme can be evaluated. For the manager's questions (e.g. is the programme performing as expected?) and usually for the funder-administrator's questions (e.g. is the programme worth continuing, extending, etc.?), the comparison is between the procedures and activities of the programme and some preset standards, generally set out in the programme work-plan. The first prerequisite is, therefore, that the essential activities be stated in objectively measurable units. This is possible even for such an amorphous exercise as curative primary health care (1). Actual performance relative to these standards is ascertained through process evaluation.
A requirement for outcome evaluation is to establish objectives prior to assessment. These must be explicitly formulated as an acceptable difference from a standard, or as a minimum improvement from some baseline. These quantitative standards of achievement should correspond to the implicit objectives of the programme and should be understood and agreed on by those who must use the results of the evaluation. Experience shows that the exercise of making stated and implicit objectives more explicit will often reveal hidden objectives, some of which are even contradictory. This is why a consensus about the programme objectives is one of the necessary first steps to an evaluation.
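The two ways of stating an outcome objective described above (an acceptable difference from a standard, or a minimum improvement from a baseline) can be sketched as simple checks. This is an illustrative sketch only; the function names and the numerical thresholds are assumptions for the example, not drawn from the text.

```python
# Illustrative only: two ways of stating a quantified outcome objective.
# All names and thresholds below are hypothetical examples.

def within_standard(measured, standard, acceptable_deficit):
    """Objective met if the measured value falls no more than
    acceptable_deficit below the reference standard."""
    return measured >= standard - acceptable_deficit

def improved_from_baseline(measured, baseline, min_improvement):
    """Objective met if the measured value exceeds the baseline
    by at least min_improvement."""
    return measured - baseline >= min_improvement

# Hypothetical example: mean weight-for-age z-scores in a target group.
# Standard of 0.0, with a deficit of up to 1.5 z-scores deemed acceptable:
print(within_standard(-1.3, standard=0.0, acceptable_deficit=1.5))   # True
# Baseline of -1.8, requiring an improvement of at least 0.4 z-scores:
print(improved_from_baseline(-1.3, baseline=-1.8, min_improvement=0.4))  # True
```

Either formulation forces the evaluator to commit, in advance, to a number against which the measured outcome will be compared.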
Almost inevitably, programme objectives change as a programme evolves. However, changing the definition of objectives during the evaluation of a single project should be avoided, because the design of the evaluation can rarely accommodate new objectives. For example, a recent review of supplementary feeding programmes discussed whether the more important effects of these programmes were in terms of income distribution, since the supposed objectives of improving child nutrition were seldom reached (8). However, no comparison was made with quite different programmes that might be more efficient in changing income distribution. While this may be a reasonable question in general, changing the objectives of an individual programme requires more fundamental decisions.
Once the underlying outcome is identified conceptually, the next step is to identify the measurable variables related to the outcome of concern. The major portion of this book discusses that step, relating a desired outcome (e.g., improved nutrition) to a measured variable (e.g., anthropometry). Subsequent chapters develop the relationship between the conceptual outcome and the measurements more fully.
Finally, the statistical test used to judge the reality of a measured difference (either between treatment and control groups, or between treatment and a standard) yields a statement that most of the time (usually specified as 95 to 99 per cent of the time) such a measured difference will be detected if the true difference is no smaller than some specified quantity. In designing an evaluation one must further state how often one is willing to miss a true difference larger than a specified magnitude. This statement refers to power analysis (see, for example, ). These steps, namely specifying procedural and impact objectives, translating those objectives into measurable variables, specifying the minimum or maximum acceptable difference in each variable, and performing the power analysis, are prerequisites for any quantified evaluation.
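The power analysis referred to above can be sketched with the standard normal-approximation formula for the sample size per group in a two-group comparison. This is a minimal illustration, not a procedure prescribed by the text; the outcome variable, standard deviation, and minimum difference below are assumed numbers for the example.

```python
# Illustrative power analysis: sample size per group needed so that a true
# difference of at least `delta` is detected with the stated power, using
# the normal-approximation formula n = 2 * (z_a + z_b)^2 * sigma^2 / delta^2.
# All numbers here are assumptions chosen for the example.
from math import ceil

def sample_size_per_group(delta, sigma, z_alpha=1.96, z_beta=0.84):
    """n per group for a two-sided test at 5 per cent significance and
    80 per cent power (z_alpha = 1.96, z_beta = 0.84).

    delta: the minimum true difference one is unwilling to miss
    sigma: the assumed standard deviation of the outcome variable
    """
    return ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

# Hypothetical example: detecting a 0.3 z-score difference in mean
# weight-for-age, assuming a standard deviation of 1.2 z-scores.
n = sample_size_per_group(delta=0.3, sigma=1.2)
print(n)  # 251 children per group
```

Note how the required sample size grows rapidly as the minimum detectable difference shrinks: halving `delta` quadruples `n`, which is why the acceptable difference must be fixed before, not after, the evaluation is designed.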
The sad fact is that the research giving scientific justification to a programme is often so lacking that these steps are impossible. Experiments in the precise setting of the proposed programme may not always be needed (or possible). However, there needs to be a marshalling of the evidence from previous evaluations, experiments, and scientific knowledge, to serve as a basis for designing a relevant evaluation. Unfortunately, this is all too seldom done.