Methods for the Evaluation of the Impact of Food and Nutrition Programmes (UNU, 1984, 287 p.)
14. Built-in evaluation systems for supplementary feeding programmes: why and how
One reason for holding the conference leading to the publication of this document is the general dissatisfaction with the state of the art in the evaluation of food and nutrition programmes. Previous efforts to evaluate such interventions have had several disappointing outcomes. Often, evaluations have been restricted to a review of the process and procedures employed in the delivery of services because of a lack of available data. When impact data have been available, most evaluations have failed to demonstrate nutritional or health impact, or they have produced inconclusive results. Even in the few cases where nutritional and health benefits have been shown, critics have hastened to point out the methodological weaknesses of those evaluations. (A more detailed discussion of the evaluation methodologies used in recent food-aid evaluation programmes and the findings of those evaluations can be found in [1].) Because of those weaknesses (in data collection, measurement, research design and interpretation of results), different approaches to analysis can reveal competing explanations for the observed outcomes or, in many cases, entirely different outcomes.
Traditionally, nutrition interventions have been evaluated using the basic strategies of social science research. Hypotheses are formulated, and an experimental or quasi-experimental design is established and applied to test those hypotheses. The element of this strategy that enables the evaluator to attribute observed changes in nutritional or health status to an intervention is the use of controls: if the participant group fares better than the non-participant group, it is assumed that the programme is the cause. When circumstances preclude identifying a randomly assigned control population to be compared with a treatment group, it is possible to use statistical controls (multivariate techniques such as regression), reflexive controls (comparisons of the treatment group with itself at different points in time), or other analytical techniques to account for or minimize the effects of extraneous factors. (For an example of a mixed strategy using statistical controls to account for differences between a control group and a treatment group, see . For a more general discussion of the array of quasi-experimental techniques, see .)
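The idea of a statistical control described above can be sketched in a few lines. The following is an illustrative example only, not drawn from the document: all data are synthetic, and the variable names (baseline nutritional score, weight outcome) are assumptions chosen for clarity. It shows why a naive comparison of group means misleads when participants are worse off at baseline, and how a regression that includes the baseline covariate recovers the programme effect.

```python
import numpy as np

# Synthetic illustration of a "statistical control" for a
# non-randomised feeding programme (all numbers are made up).
rng = np.random.default_rng(0)
n = 200
treated = rng.integers(0, 2, n)                       # 1 = programme participant
# Participants are enrolled because they are worse off at baseline.
baseline = rng.normal(-1.5, 0.5, n) - 0.3 * treated
# True programme effect is +0.4; outcome also depends on baseline status.
outcome = 0.4 * treated + 0.6 * baseline + rng.normal(0.0, 0.3, n)

# Naive comparison of group means confounds the programme effect
# with the participants' poorer starting point.
naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# Adjusted estimate: ordinary least squares on [1, treated, baseline];
# the coefficient on `treated` is the effect net of baseline differences.
X = np.column_stack([np.ones(n), treated, baseline])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
adjusted = beta[1]

print(f"naive difference: {naive:.2f}, adjusted effect: {adjusted:.2f}")
```

The naive difference understates the effect because the treated group started from a lower baseline; the regression-adjusted coefficient is close to the true effect. The same logic, with richer covariates, underlies the multivariate techniques mentioned above.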
In field settings, implementing these strategies for selecting suitable controls has proven to be a difficult and challenging task. The primary source of this difficulty is the inconstancy of the "real" world and the inability, outside a laboratory, to maintain experimental conditions for a sustained period of time. Specifically, evaluations have faltered for several reasons.
A consequence of these flaws in evaluation is that there remain numerous competing explanations, other than the programme's impact, for changes in the target and/or control groups, whatever the research design or analytic methodology. In the literature, common sources of competing explanations, often called "threats to validity," have been catalogued and illustrated. (For a general discussion, see [4]. For a discussion related specifically to nutrition, see .)
Even more disturbing than evaluations with negative or ambiguous results is the large number of nutrition projects and programmes that are never evaluated at all. Evaluation has been viewed as a threat to programme continuity or as an expense hardly justifiable in light of the need to concentrate resources on service delivery. Rarely has evaluation been viewed as a tool for learning how to achieve greater nutritional impact. The result is that the potential of evaluation as a means of improving project design and implementation has not yet been realized.
In response to the difficulties in carrying out adequate evaluations in the health/nutrition field, a number of chapters in this publication provide valuable guidance. Specifically, the choice and utilization of measures and indicators of nutritional status, as well as the collection and analysis of nutrition-related data, are discussed in considerable depth. Despite the informative nature of these chapters, there remains considerable uncertainty as to whether a "one point in time" evaluation of an ongoing feeding or nutrition programme can yield useful and conclusive results, even if we overcome the difficulties in measuring nutritional progress alluded to above. While it remains unclear that such an approach will ever yield reliable data or a definitive indication of impact, even more disconcerting is the small likelihood that evaluation, as traditionally practiced, will improve project performance enough to justify the time and resources expended. An alternative approach to evaluation must be considered in order to minimize the methodological problems discussed above and, concurrently, to broaden the usefulness of evaluation in the planning and implementation phases of a large-scale intervention.