|Nutrition education for the public. Discussion papers of the FAO Expert Consultation (Rome, Italy, 18-22 September 1995) - FAO Food and Nutrition Paper 62 - (1997)|
|Evaluation of nutrition education programmes: Implications for programme planners and evaluators|
The procedures employed in efficiency assessment (cost-benefit and cost-effectiveness analysis) are often highly technical, and the analysis rests on numerous assumptions (Sønbø Kristiansen, Eggen & Thelle, 1991; Rossi & Freeman, 1993). Nutrition education for the public aiming at changing behaviour has to compete with other programmes for resources. Policy makers and funding agencies (government agencies, United Nations agencies and NGOs) must decide how to allocate funding among these various programmes. In this competition a central question is: which programmes would show the biggest payoff per unit of expenditure?
For decision makers the reference programme is often the one that produces the most impact on the most targets for a given level of expenditure. This simple principle is the foundation for cost-benefit and cost-effectiveness analyses. These analyses provide systematic approaches to resource allocation. From a conceptual point of view, perhaps the most significant value of efficiency analysis is that it forces evaluators and programme personnel to think in a disciplined fashion about both costs and benefits (Rossi & Freeman, 1993).
In cost-benefit analyses the outcomes of nutrition education programmes are expressed in monetary terms:
For example, a cost-benefit analysis would focus on the difference between the money expended on the nutrition education programme and the money savings from reduced expenditure on treating dietary-related diseases (anaemia, goitre, vitamin A-related blindness, etc.), avoided loss of productive capacity, life years gained, quality of life years saved, etc.
In cost-effectiveness analyses the outcome for nutrition education programmes is expressed in substantive terms:
For example, a cost-effectiveness analysis of the same nutrition education programme as above would focus on estimating the money expended per target whose diet was changed.
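The distinction between the two approaches can be sketched with hypothetical figures. All values below (programme cost, treatment savings, number of targets) are illustrative assumptions, not data from the text:

```python
# Hypothetical figures for a nutrition education programme.
programme_cost = 50_000.0     # money expended on the programme
treatment_savings = 80_000.0  # assumed reduced expenditure on dietary-related diseases
targets_changed = 2_000       # assumed number of targets who changed their diet

# Cost-benefit analysis: outcomes expressed in monetary terms.
net_benefit = treatment_savings - programme_cost
benefit_cost_ratio = treatment_savings / programme_cost

# Cost-effectiveness analysis: outcomes expressed in substantive terms
# (money expended per target whose diet was changed).
cost_per_target_changed = programme_cost / targets_changed

print(net_benefit)              # 30000.0
print(benefit_cost_ratio)       # 1.6
print(cost_per_target_changed)  # 25.0
```

Note that the cost-benefit figures require assigning monetary values to health outcomes, while the cost-effectiveness figure does not; this is the source of the controversy discussed below.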
Efficiency analysis can be done in either the planning or design phase of a programme. It is then called ex ante analysis. Ex ante analyses are not based on empirical information, and therefore run the risk of either under- or over-estimating benefits or effectiveness. Most commonly, efficiency analyses of programmes take place after their completion, often as part of their impact evaluation. This is called ex post analysis, where the purpose is to assess whether the costs of the intervention can be justified by the magnitude of net outcomes (Rossi & Freeman, 1993). An important strategy in efficiency analysis is to undertake several different analyses of the same programme, varying the assumptions made; these assumptions are then open to review and checking. This is called sensitivity analysis.
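A sensitivity analysis of this kind can be sketched by recomputing the cost-effectiveness figure under several assumed values, since ex ante estimates are uncertain. The scenario names and all numbers here are illustrative assumptions:

```python
# Sensitivity analysis sketch: vary the assumed number of targets who change
# their diet and recompute cost per target for each scenario.
programme_cost = 50_000.0  # hypothetical programme expenditure

# Low, expected, and high estimates of programme reach (assumed values).
scenarios = {"pessimistic": 1_000, "expected": 2_000, "optimistic": 4_000}

for name, targets in scenarios.items():
    cost_per_target = programme_cost / targets
    print(f"{name}: {cost_per_target:.2f} per target whose diet was changed")
```

If the conclusion (e.g. that the programme is worth funding) holds across the pessimistic and optimistic scenarios alike, it is robust to the assumptions; if it flips, the assumptions themselves need closer scrutiny.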
Cost-benefit analysis is controversial because only a portion of programme inputs and outcomes may reasonably be assigned a monetary value. One must ultimately place a value on human life in order to fully monetise the programme benefits (Zeckhauser, 1975; Sønbø Kristiansen, Eggen & Thelle, 1992; Rossi & Freeman, 1993). Efficiency analysis may be impractical and unwise for several reasons (Rossi & Freeman, 1993):
· The required technical procedures may be beyond the resources of the evaluation programme.
· Political or moral controversies may result from placing economic values on particular input and outcome measures. This may obscure the relevance and minimise the potential utility of an evaluation.
· Efficiency assessment may require taking different costs and outcomes into account, depending on the perspectives and values of sponsors, stakeholders11, targets and evaluators themselves. This may be difficult for at least some of the stakeholders to understand, and may obscure the relevance and utility of evaluations.
11 Stakeholders are individuals or organizations directly or indirectly affected by the implementation and results of intervention programmes (Rossi & Freeman, 1993).
· In some cases, the data needed for undertaking cost-benefit calculations are not fully available. The analytic and conceptual models may be inadequate, and often untested underlying assumptions may lead to faulty, questionable and unreliable results.
There is therefore considerable controversy about converting outcomes into monetary values. Cost-effectiveness analysis is seen as a more appropriate technique than cost-benefit analysis (Rossi & Freeman, 1993). Cost-effectiveness analysis requires monetising only the programme's costs; the benefits are expressed in outcome units.