Characteristics of a built-in evaluation system

As a starting point for describing a built-in evaluation system, we can identify three components of such a system: the data system, the analytic methodology, and the management support structure. Then, having described these basic components, we can offer several principles to guide their design.

 

Components

Data System

The underpinnings of a built-in evaluation system lie in the data collection and recording procedures. Analytic results can be no better (more accurate) than the data used in the computational algorithms. To be effective, a data system should include two types of indicators: impact indicators and process indicators. Measures of impact are needed to determine the degree to which a programme is achieving its goal. Process indicators are needed to ascertain the provision of inputs and their costs, as well as the quality and consistency of the service delivery system. Taken together, the two make it possible to relate project activities to impact. Traditionally, process-oriented data have been the subject of project monitoring systems but have been divorced from any attempt to substantiate or explain the achievement of impact. An example of an impact indicator for a supplementary feeding programme might be the percentage of two-year-olds below 70 per cent of a weight-for-age standard, while a process indicator might be the number of kilograms of the supplement distributed each month. (Note that these indicators are not necessarily directly measurable: they may be computed from simpler data elements stored in the data system. To illustrate, in order to derive the percentage of two-year-olds below 70 per cent of the standard, the age and weight of each child must be ascertained, the weight-for-age score calculated, and the percentage of the standard computed. Only then can the overall percentage below some given level thought to define a malnourished state be calculated. Similarly, the amount of food distributed might be computed by subtracting stocks on hand at month's end from the sum of the stocks on hand at the first of the month plus all shipments received during the month.)
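To make the derivation concrete, the following is a minimal sketch of how the two indicators might be computed from simpler data elements. The record field names and the reference median weight are illustrative assumptions for the example, not values drawn from any actual weight-for-age standard.

    # Impact indicator: percentage of children below a cutoff fraction
    # of the reference median weight for their age.
    def pct_below_cutoff(children, reference_median_kg, cutoff=0.70):
        below = sum(1 for c in children
                    if c["weight_kg"] < cutoff * reference_median_kg)
        return 100.0 * below / len(children)

    # Process indicator: food distributed during the month, derived as
    # opening stock plus shipments received, less closing stock.
    def kg_distributed(stock_start_kg, shipments_kg, stock_end_kg):
        return stock_start_kg + sum(shipments_kg) - stock_end_kg

    two_year_olds = [{"weight_kg": 9.8}, {"weight_kg": 7.9}, {"weight_kg": 11.2}]
    print(pct_below_cutoff(two_year_olds, reference_median_kg=12.2))  # 33.3...
    print(kg_distributed(540.0, [250.0, 100.0], 610.0))               # 280.0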

Analytic Methodology

Data are just "numbers" unless a defined procedure for reviewing the numbers is carried out. Although certain analytic procedures may seem obvious, it is our experience that considerable skill is required to arrive at the proper interpretation of the statistics used to summarize a batch of "numbers." For example, it is intuitively appealing to compare the percentage of two-year-olds below 70 per cent of the standard at two points in time to estimate impact. However, the experienced analyst will look at drop-outs in the intervening period (are the malnourished disappearing from the programme rolls faster than the well-nourished?) as well as new registrants during that period (were the new children entering the programme actually better off?). Seasonality, a change in the economic system, or bad harvests might also account for changes in the nutritional impact indicators over time. Similarly, distribution estimates may be abnormally high due to spoilage or pilferage.
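As an illustration of this kind of caution, the sketch below contrasts the naive before-and-after comparison with one restricted to the cohort of children enrolled at both points in time; a large gap between the two figures hints at selective drop-out or entry rather than impact. The data layout (records keyed by child, with a malnourished flag) is an assumption made for the example.

    def pct_malnourished(records):
        return 100.0 * sum(1 for r in records if r["malnourished"]) / len(records)

    def compare(time1, time2):
        # time1, time2: dicts mapping child id -> record at each point.
        cohort = time1.keys() & time2.keys()  # children present at both points
        naive = (pct_malnourished(list(time2.values()))
                 - pct_malnourished(list(time1.values())))
        matched = (pct_malnourished([time2[i] for i in cohort])
                   - pct_malnourished([time1[i] for i in cohort]))
        return naive, matched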

An analyst must be trained to review the competing explanations for observed changes in outcome measurements (both impact and process) and to accept only those that withstand an effort at discreditation. It is difficult to conceive of a data system that will systematically collect data on all possible alternative explanations in a "real-life" social setting. Thus, the burden of identifying the most plausible competing explanations falls on the local staff who are living in the area and aware of the changing conditions in their communities. Furthermore, a sense of timing must be introduced; that is, the analyst should learn to wait until trends become clear and not draw conclusions precipitously.

Management Support Structure

To be effective, a built-in evaluation system should be supported at the local level with expertise drawn from higher levels of management. Ordinarily, a hierarchical organizational structure - one calling for the supervision of several local distribution centers by someone with higher authority and/or greater responsibility (and often a higher educational background or more thorough training) - oversees an intervention. The supervisory function provided at these higher levels is very important: first, to provide assistance to the manager of each distribution center in the analysis of data, and second, to transfer knowledge derived at one center to the managers of other centers. Our observation of existing programmes suggests that this mid-level management, though existing on paper, is often lacking in practice. (In the developing world, the logistics of moving supervisory personnel from center to center often preclude a viable supervisory activity.) Such a situation would totally disrupt a built-in evaluation system.

We suggest that supervisors be well versed in the principle of "management by exception". In reviewing data collected at distribution centers, supervisors should identify centers with abnormally bad indicators or extraordinarily good ones. The former need extra help; the latter may hold the keys to success. (Incidentally, centers with abnormal indicators at either end of the spectrum are often those making egregious errors in data collection and recording.) By singling out centers performing at the extremes, supervisors can direct their efforts where they can best be used and, simultaneously, gain insight into what project components or community characteristics lend themselves to the attainment of objectives.
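A minimal sketch of such an exception review follows, assuming each center reports a single monthly summary indicator (here, the percentage of two-year-olds below 70 per cent of the standard). The thresholds and center names are illustrative; a working system would set thresholds from programme norms.

    def flag_exceptions(indicators, low, high):
        # indicators: dict mapping center name -> indicator value, where
        # higher values are worse. Returns centers needing extra help
        # and centers worth studying for the keys to their success.
        needs_help = {c for c, v in indicators.items() if v > high}
        doing_well = {c for c, v in indicators.items() if v < low}
        return needs_help, doing_well

    pct_below_70 = {"Center A": 31.0, "Center B": 12.5, "Center C": 4.0}
    needs_help, doing_well = flag_exceptions(pct_below_70, low=5.0, high=25.0)
    # Center A is flagged for assistance and Center C for a closer look,
    # bearing in mind that extreme figures may also signal recording errors.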

 

Principles

We now offer six principles to guide the design of the data component, the analysis component, and, finally, the management structure.

1. Data should be generated routinely at the local level. The very name "built-in evaluation system" suggests that the data collection procedure originate at the point of service delivery, i.e., in targeted supplementary feeding programmes, at the center or clinic performing the targeting function, delivering health and nutrition services, and/or distributing the food. Although it is possible to conceive of a system relying on data generated by a traveling team of survey specialists, such a team would lack familiarity with the context from which the data were drawn. Routine local data collection, on the other hand, is done by those with the greatest knowledge of local conditions and, therefore, the best ability to detect irregularities in the numbers and to interpret the results.

2. The data collected must be used for management of the local center or clinic as well as for evaluation of the programme as a whole. It is common practice in existing programmes, especially ones attempting to target services to the most vulnerable, to weigh children as a prerequisite for inclusion in the intervention. In some cases, the data are used as a diagnostic tool to determine what additional services are required by the recipient. But it is rare indeed that data on individuals are aggregated for a service center or health clinic to determine the overall impact of the package of services provided. In the absence of such a management-oriented activity, data systems tend to deteriorate, particularly as humanitarian interests, political pressures, or the press of time force practitioners to abandon data collection activities for the sake of delivering more service.

Aggregation of the data facilitates better management. For example, if it is discovered that two-year-olds are consistently in greater need of, and more responsive to, food supplements than are children of other ages, good managers may consider altering their targeting criteria. They may choose to concentrate their efforts more on two-year-olds or introduce community outreach to find two-year-olds not yet in the programme. It is reasonable to believe that once managers perceive the usefulness of data for sound management, they will continue to take care to collect the data accurately and completely.
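The sketch below illustrates one such aggregation: tabulating, by age, the percentage of registered children below 70 per cent of the reference median, so that a manager can see at a glance which ages are in greatest need. The record fields are assumptions made for the example.

    from collections import defaultdict

    def need_by_age(children, cutoff=0.70):
        # children: records with age_years, weight_kg, and the reference
        # median weight for that age. Returns, per age, the percentage
        # of children below `cutoff` of the reference median.
        totals = defaultdict(int)
        below = defaultdict(int)
        for c in children:
            totals[c["age_years"]] += 1
            if c["weight_kg"] < cutoff * c["reference_median_kg"]:
                below[c["age_years"]] += 1
        return {age: 100.0 * below[age] / totals[age] for age in totals}

    # If two-year-olds show markedly higher percentages than other ages,
    # the targeting criteria might be revisited along the lines above.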

3. The quantities of data recorded and analysed for any purpose should be kept to a bare minimum, particularly at the initiation of the system. Because data collection and analysis for any purpose are costly, both in time and money, it is important not to overburden the intervention with data-related chores. Too much data can prove to be as harmful as none at all. When food distribution center managers are inundated with data, they are unable to apply any quality control to the collection and recording activities; they are also less likely to look at and analyse what information is available because of the immensity of the task. A system calling for the examination of a few key indicators is, therefore, a workable and appropriate starting point. The design problem, and the creative task, is selecting the proper summary statistics, given the components of the intervention and the skill levels of the managers. It should be anticipated that the initial system design will quickly prove inadequate as questions of interpretation are raised at the local levels. Therefore, provision for expanding the system should be included in the initial design. However, expansion should be dictated by experience in the field and not by the conventional wisdom of outside consultants.

4. Analytic procedures must be well-defined and understood at the local level. The interpretation of changes in selected impact and/or process indicators can be very difficult. For example, a drop in the supply of a food supplement at a center can be due to many factors: spoilage, an unanticipated increase in the number of programme participants, the failure of a shipment to arrive, and so forth. The manager must be sufficiently skilled to recognize that the change in supply is important, to search for the reason, and to take corrective action if needed. At a more complex level, community-level functionaries must carry out not only tasks such as weighing children, but also more complex functions such as understanding the rudimentary implications and meaning of such data. That is, the village-level worker must be able to interpret the growth parameters being collected and impart such knowledge to project participants. Furthermore, workers must be trained to perform simple aggregation of data at the village level. This facilitates the recognition and anticipation of community-wide trends. But of even greater importance, by "well-defined and understood" we mean that the manager must be able not only to carry out the computations, but also to identify and examine multiple explanations of the results. This suggests that data analysis must be an area of intense training. Training should not stop with exercises in filling out forms.
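By way of illustration, the sketch below shows the kind of routine check a manager might run on monthly stock figures to catch an unexplained drop in supply before drawing conclusions from it. The 20 per cent threshold and the data layout are illustrative assumptions.

    def check_supply(monthly_stock_kg, drop_threshold=0.20):
        # monthly_stock_kg: closing stock for each month, oldest first.
        # Flags month-to-month drops larger than the threshold so the
        # manager can search for the reason (spoilage, more participants,
        # a missed shipment, and so forth).
        alerts = []
        for prev, cur in zip(monthly_stock_kg, monthly_stock_kg[1:]):
            if prev > 0 and (prev - cur) / prev > drop_threshold:
                alerts.append((prev, cur))
        return alerts

    print(check_supply([300.0, 310.0, 180.0, 295.0]))  # flags (310.0, 180.0)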

5. The staff implementing the intervention must be committed to goal-oriented management. Too often, programme implementers accept the intuitive argument that feeding hungry people is inherently beneficial and that this is sufficient grounds for carrying out a supplementary feeding programme. Unfortunately, the evidence does not often validate this argument. Therefore, a built-in evaluation system must be based on the notion that a programme, as initially defined, might not reach its stated goals. As a result, managerial talent must be employed at all levels of programme design and implementation to use data to verify and/or facilitate the attainment of goals. This necessitates the abandonment of doctrinaire preconceptions and the substitution for them of an attitude that fosters creativity through recognition that the attainment of goals requires a process of iterative learning and experimentation.

6. The built-in evaluation system must be thought of as a dynamic entity subject to evolutionary growth throughout an intervention. We have already alluded to the need to initiate the system with a minimal set of data and a comprehensible set of analytic procedures. The system will grow as managers at all levels of the organizational hierarchy perceive the need for additional information to interpret the basic indicators. But, more importantly, the system will grow in response to the more sophisticated questions asked by managers once the simpler questions are answered satisfactorily. For example, a system might be designed, first and foremost, to verify that a package of services has had a desired impact. Once that is shown, it is logical to ask which components of the package are "cost-effective." This will require a modification of the built-in evaluation system. In this way, the system will evolve over time in response to the needs of management of the intervention.