New Approaches to New Realities (University of Wisconsin / Universidad de Wisconsin, 1996, 508 p.)
Theme FOUR: Social, Psychological, Economic and Developmental Issues
This paper was prepared by Jeffrey S. Klenk for InterWorks. In addition to the resources listed in the paper, the following people provided significant contributions:
Jean-Chrys Bisetsa - works with the U.S. Agency for International Development in Burundi.
Ken Curtin - works with the International Rescue Committee in New York.
Angelina Muganza - is a Project Manager with the Community Mobilisation Department in Rwanda.
Ron Ockwell - is a consultant working primarily in the field of disaster management.
This paper is a synthesis of the efforts of all of those cited above and as such does not express the viewpoint of any single resource, contributor or organization.
With a growing worldwide demand for a decreasing supply of emergency response resources, donors are expecting an ever greater and more comprehensive level of accountability from emergency response agencies. In response, these agencies are increasingly emphasizing to operational staff the importance of effective monitoring and evaluation activities in their relief programs.
In the past, these functions - monitoring in particular - were often accorded less than top priority. They were carried out with relatively small budgets and with little enthusiasm by the operational relief agencies. As donor attention to resource delivery and end-use has increased, however, agencies are now viewing with utmost seriousness their capacity - or lack thereof - to monitor and evaluate programs effectively and report results. Training in emergency operations is focusing more on available techniques of monitoring and evaluation as well as on staff capacity to develop appropriate program objectives to guide the response.
With this increase in donor demands, it is no longer acceptable for a relief provider simply to channel resources to local implementing partners and then claim that its responsibility for those resources has ended. While all may agree that the relief provider's physical control over the resources has ended, donors increasingly insist that the relief provider's responsibility continues through end-use. Thus, it is essential for effective monitoring and evaluation systems - systems which can examine resource use, assess outcomes and/or achievements, and recommend needed changes - to be designed into the response plan from the outset.
This paper is an attempt to outline current trends in monitoring and evaluation with particular regard to emergency settlement program design and delivery. It is important from the outset to define the differences between these two activities which are so often classed together.
For the purposes of this paper, monitoring is a function carried out essentially to advise decision-makers of actual or impending deviations from plan in the emergency response program; it is performed to ensure that any events or actions which could cause the program to diverge from planned objectives are fully understood and communicated to those in a position to take corrective action if needed. Generally, the monitoring staff is concerned about compliance with approved policies and procedures, and is closely involved with the program on a day-to-day basis. The monitoring function is an ongoing activity which continues as long as emergency resources are programmed.
Evaluation is an activity carried out to determine the actual value of the program itself. The initial design and fundamental assumptions of the designers are necessarily the purview of the evaluator. While the evaluators, unlike the monitors, may not necessarily be connected with the program on a day-to-day basis, they must, as well, understand the operational environment in which they are working. Evaluations have traditionally been carried out as end-of-program activities (also known as terminal or retrospective activities) to ascertain final impact and to measure actual achievement of planned objectives. Some organizations as well carry out mid-term evaluations to ensure the relevance and effectiveness of the program activities to date.
A number of organizations increasingly stress the importance of ongoing evaluation (i.e., throughout the life of the program) - particularly for social impact programs - to ascertain changes in social awareness, local capacities, and coping mechanisms. In such programs, observable increases in the capacities of the affected population are considered the best determinant of the actual value of the emergency response. Some agencies recommend as well that long-term1 evaluations be carried out years after the response has ended to assess impact and sustainability. Such long-term evaluations may be useful to assess whether the emergency response has actually contributed to a change in behavior - especially in long-term emergency preparedness - of the affected population. In such terminal or long-term evaluations, evaluators should seek to identify lessons learned which can guide future responses.
1 In UNDP parlance: ex post
Each of the activities - monitoring and evaluation - depends upon the findings of the other. The monitoring staff produces information which is consequently used by the evaluator. As well, the findings of the evaluation are fed back into the program design process, in turn, generating subsequent activities for the monitoring staff to follow.
Where the two activities are, perhaps, most closely related is the emphasis each has on determining impact and accountability. Each activity stresses the importance of ascertaining beneficiary impact to ensure that the emergency response program is, in fact, able to achieve its stated goals of addressing the needs of the affected population. Each activity is, as well, concerned with accountability, i.e., ensuring that the relief provider can document that resources have in fact been used in accordance with donor intent.
1. Monitoring and evaluation are basic functions essential to the effective management of emergency response programs and to the achievement of program objectives. These inter-related functions should be planned from the outset of the response to ensure timely adaptation to changing needs and to minimize unintended negative effects.
Monitoring is carried out to (1) detect changes in emergency conditions which may impact achievement of planned program objectives, (2) ascertain operational problems or program constraints, and (3) ensure that resources are used in accordance with the approved plan. Evaluation is performed to determine the actual value of the emergency response, to recommend improvements in future program design, and, in some cases, to transfer knowledge to local nationals. Both activities should be viewed as essential management activities and designed from the outset as integral parts of the overall emergency response.
2. Programmatic or operational problems identified by monitoring and evaluation activities must be followed by the appropriate corrective action.
Monitoring and evaluation activities are intended to detect needed changes in the emergency response. However, the mere capacity to identify problems is insufficient; corrective actions must be taken to improve the present - or similar future - programs. Indeed, in the absence of the political will and/or programmatic capacity needed to take corrective action, monitoring and evaluation activities can be considered an unfortunate waste of valuable emergency program resources.
To ensure that decision-makers can in fact take advantage of monitoring or evaluation findings, the establishment of a clear, understandable reporting system to facilitate the use of information is essential. Information fed into the reporting system should satisfy the necessary and sufficient criteria; i.e., it should be both necessary and sufficient to inform needed decisions.
3. Monitoring and evaluation systems in an emergency settlement should, to the extent possible, make use of the community-management capacities of the affected population. Self-monitoring by the population should be encouraged wherever feasible.
Self-monitoring by emergency program participants is often the most effective - and most inexpensive - means of ensuring that information concerning the quality of emergency goods and services delivery is made known to the relief provider. The design of a program making use of local capacities implies that program developers are both knowledgeable about the affected population and willing to include the affected population in the design of the monitoring program. To be sure, monitors should always seek information from a wide range of sources: local authorities, community leaders, other emergency response agency staff, etc. Monitoring by a motivated community, however, will often provide the best source of information concerning the actual functioning of planned services.
4. Regular on-site visits by monitors and ad hoc visits by supervisory staff - e.g., to the emergency settlement, the warehouse, the distribution center, or the beneficiary's household - to observe conditions first-hand and gather information are essential.
While field reports from operational staff and/or implementing partners in the emergency settlement do, in fact, serve as an important monitoring tool, visits by staff from agencies channeling funds or in-kind resources should be planned as a regular part of the monitoring program. Both planned visits and random checks should be carried out.
5. The primary purpose of ongoing evaluation is to ensure that program objectives are sound and achievable, and to ascertain needed modifications in those objectives. The primary purpose of a terminal or retrospective evaluation is to educate: to determine the ultimate effects of and draw lessons from the emergency response.
The cliché of reinventing the wheel applies only too well to the field of emergency management. Emergency response agencies should ensure that lessons learned are available to agency, government and community leaders for future emergency response programs. Emergency response agencies must be prepared to spend the resources needed to develop, maintain, and make use of institutional memory. Documentation and staff training are the keys to this process.
6. Evaluation of an emergency response should include an assessment of the degree to which the processes of participation by and empowerment of the affected population have been supported by the response. Such an assessment should be considered complementary to the evaluators usual focus on quantifying the progress made in achieving program objectives.
As such, both qualitative and quantitative indicators of progress should be included in the evaluation design. The evaluation must make an effort to understand the changes in capacity - both increases and decreases - of the emergency settlement population brought about by the response. Consideration should be given to the type of participation (economic, political, cultural, etc.) as well as to the degree of participation (control of or involvement in the planning process, implementing an already planned response, etc.)
Planning for monitoring and evaluation
Effective monitoring and evaluation systems must necessarily serve decision-making. To do so, it is essential for emergency respondents to consider the components of those systems from the outset of the response: essential material and human resources, appropriate information gathering methodologies, and effective mechanisms for the feedback of findings and recommendations to decision-makers should be planned in conjunction with the design of the emergency response. These efforts can often reduce the costs of corrective action and other forms of damage control needed later in the response.
Design and communication of program objectives
The writing of clear program objectives is accepted as one of the most critical and difficult program design tasks. The objectives will serve as the guide points by which the importance of the monitors' findings can be judged and by which program evaluators can measure program results. It is understood that clear objectives for a relief program designed to serve the needs of an emergency-affected population should include information on population, location, and duration; i.e., the objectives should clarify - at a minimum - who the targeted population is, where the program is to be carried out, and how long the particular intervention is expected to last.
More and more, emergency respondents are accepting that the long-term value of the intervention lies in the degree to which it strengthens the capacity of the affected population to take care of its own needs. In such cases, the writing of program objectives takes on a somewhat softer edge, with concepts like coping mechanisms, capacity levels, and degrees of participation resisting hard measure or quantification. Evaluations which seek primarily to ascertain the numbers of bags of cornmeal distributed or the numbers of pit latrines constructed may not offer much information concerning the response's social impact on the emergency settlement population. It is, then, incumbent upon planners to develop new indicators of progress which can ascertain changes in beneficiary capacity, and to design the response program's objectives accordingly.
The aims of the program must, as well, be clearly communicated to all concerned with program design and implementation. This includes the affected population in the emergency settlement as well as the resident or host population. If those potentially affected by the program understand its intent, then members of that population will become, in effect, monitors of the program. They will know when and if the response is no longer meeting planned goals and will, if encouraged, find a way to communicate those deviations to implementing agency staff.
Ideally, program design and communication of its aims are carried out in close collaboration with representatives of the affected population. In so doing, the emergency response agency maximizes the potential for partnership and credibility.
Planning the M&E methodology
The success of the methodology employed by monitors and evaluators will often hinge upon the number and type of information sources. Generally, a successful methodology will require that monitors and evaluators interview a significant number and a wide variety of people involved in the emergency response. The Rapid Rural Appraisal concept of triangulation applies - information must be gleaned from a variety of sources, using a variety of techniques, by a multi-disciplinary team or group.
Sub-contractors, donors, local and national authorities, other agency staff are all obvious sources of information. Even more importantly, however, the interviewer must ensure that the viewpoints of the affected population are heard and recorded. Monitors and evaluators must make a concerted effort to interview the affected population and ensure that a cross section of that population is included in the interviewing process - women and men, young and elderly, healthy and disabled/vulnerable, low and high income, skilled and unskilled, etc.
To be sure, interviewing beneficiaries can be a difficult and time-consuming process. Communications are often constrained by language differences and/or a lack of cultural awareness. Nonetheless, there will inevitably be some beneficiaries who will have a perspective on the response that differs significantly from that of the official emergency respondents - and sometimes even from that of their own leaders. It is incumbent on the interviewers to develop as complete a picture as possible of the emergency response.
Emergency response staff may view monitoring and evaluation activities as intrusive or as a threat to their reputation or jobs. Monitors and evaluators should strive to counter such attitudes by demonstrating that their purpose is to highlight and reinforce the strengths of the program as well as to identify any existing or potential obstacles to an improved implementation. They must talk to as many program staff and other interested parties as possible to create an atmosphere of transparency. In a program evaluation, one often useful method of limiting staff misgivings is to gather the entire staff together at the start of the process to outline the intended scope of work and then to meet again with the entire staff after the evaluation to review preliminary observations.
Monitors and evaluators can minimize suspicions that tend to develop by assuring interviewees that confidentiality will be maintained. If the intent is to obtain frank feedback on a program, then interviews should be conducted in private - i.e., in the absence of project managers or donor representatives.
Monitoring indicators and information sources
Monitoring is more than the mere tracking of emergency program resources. Rather, the monitoring system should generate the data by which outcomes and effects of programmed activities can be determined and operational problems resolved. It is the function of the monitoring system to alert management when there is a likelihood of the emergency response veering off-track.
Each agency will determine, according to its mandate, information needs, and resources, the indicators to be examined by the monitoring staff. The choice of indicators should be based on the program objectives, to ensure that management receives the information it needs to act. Some of the more critical indicators to be followed in an emergency operation and possible sources of information concerning those indicators include:
· Logistics: The well-being of the emergency settlement often depends upon the proper functioning of the logistics system. While program monitors are rarely trained logisticians, it is essential that they at least be made aware of the need to watch for critical changes in the operating environment which could hamper the relief operation, e.g., worsening road conditions, increasingly corrupt transporter practices, declining security, loss of storage capacity within the emergency settlement, decreasing availability of fuel or spare parts, increasingly inadequate truck fleets.
Information sources: Interviews with key informants (e.g., commercial transporters, suppliers, vendors, government transport ministries, World Food Program staff, other NGOs involved in the transport of relief goods, etc.)
· Distribution: Distribution system monitors should consider efficacy, efficiency and equity of the response, and answer questions such as the following:
1. Efficacy: Are the items being distributed according to plan? Does the current system ensure that the targeted recipients are receiving goods and services as planned? If not, why not? In the case of food distributions: Is a wet feeding program preferable to a dry feeding program? Should distributions be made directly to women to ensure that children are fed? Would a women-directed distribution system pose an extra, unacceptable burden on their time?
2. Efficiency: Does the current distribution system seek a balance between the aims of minimizing transport costs and maximizing beneficiary access? Is the current level of centralization efficient, or should there be more/fewer distribution points? Are distribution losses minimized? Are there significant leakages into local markets though losses, sales by recipients, or misuse? Are there sufficient/too many staff involved in the distribution?
3. Equity: Is the current system fair for the maximum possible number in the emergency settlement? Are the same distribution units/measures used throughout the settlement? What groups are gaining/losing access because of the current system of distribution?
Food aid monitors should take note of the adequacy of the general ration being distributed to the emergency settlement population. If the general ration (coupled with other food stocks to which the population has access) cannot ensure the minimum needed for survival, then selective feeding programs are unlikely to function as planned, targeted food supplements are likely to be shared by entire households, and the most vulnerable members of the population will face extreme risk of malnutrition.
Information sources: Interviews with key informants (representatives of the emergency settlement's population - women in particular; distribution agency staff; warehouse staff; etc.); visual inspection by the monitors of the actual distribution process; examination of distribution agency reports.
· End-use: Monitoring of end-use of distributed relief goods should generate information on the access of households - and of vulnerable groups in particular - to program resources and on the appropriateness of those resources for the targeted population. Monitors should be trained in basic interview techniques (e.g., RRA or PRA methodologies).
Monitoring of food commodity end-use at the household level would include an assessment of the potential impact of the emergency response on household food security, defined as having physical access, economic access, and longer-term sustainability of access to food. Ideally, that assessment would reveal whether households are currently securing food at the expense of their longer-run capability to do so (FAO, 1992), i.e., whether users must continue to sell assets or borrow against future assets to obtain food.
End-use by the emergency settlement in general can sometimes be monitored by visits to local markets to ascertain the type and amounts of relief items for sale. Substantial amounts being sold might indicate (1) improving overall conditions (i.e., beneficiaries are no longer in need of the goods), (2) the wrong basket of goods is being distributed, or (3) major diversions by corrupt individuals or groups in the settlement.
Information sources: Semi-structured interviews with household members or other agency staff who visit households; visual inspection of local markets and interviews with local sellers (monitors can learn much about the reasons that goods are in the market by talking incognito to sellers; indeed, the actual path of the diversion can sometimes be determined through such talks.)
· Market conditions: Other economic indicators such as local market prices for essential food items, livestock, seeds, etc. are useful for respondents to be able to gauge the changes in economic conditions, understand the changes in magnitude of the emergency, and plan the eventual phase-out of the relief program and the transition to longer-term recovery efforts. Monitors must be trained - and encouraged to take the extra time - to visit markets and record prices.
Information sources: Interviews with household members, local market sellers and buyers, transporters, local authorities.
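The price tracking described above can be reduced to a simple screening routine. The sketch below, in Python, flags commodities whose market price has risen sharply between monitoring visits; the commodity names, prices, and the 25% alert threshold are all hypothetical illustrations, not figures from this paper, and any real system would need to account for seasonal variation and currency fluctuation.

```python
# Minimal sketch: flag large month-on-month price rises in a monitored market.
# Commodities, prices, and the alert threshold are illustrative only.

ALERT_THRESHOLD = 0.25  # flag rises above 25% between visits (assumed value)

def price_alerts(previous, current, threshold=ALERT_THRESHOLD):
    """Return (item, fractional_change) pairs whose price rose more than threshold."""
    alerts = []
    for item, old_price in previous.items():
        new_price = current.get(item)
        if new_price is None or old_price <= 0:
            continue  # item no longer tracked, or no valid baseline
        change = (new_price - old_price) / old_price
        if change > threshold:
            alerts.append((item, round(change, 2)))
    return alerts

last_visit = {"maize (kg)": 40, "beans (kg)": 90, "goat": 1500}
this_visit = {"maize (kg)": 58, "beans (kg)": 95, "goat": 1400}
print(price_alerts(last_visit, this_visit))  # maize rose 45%: [('maize (kg)', 0.45)]
```

A sharp rise flagged this way is only a prompt for follow-up interviews, not a conclusion in itself.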
· Implementing partner performance: Monitors are often asked to report on the progress made by an implementing partner in carrying out agreed tasks. The assessment of a partner's performance requires that the monitor understand his/her own agency's plans and information systems and the partner's agreed roles and responsibilities. To perform such a function, the monitor must possess strong inter-personal, diplomatic skills and have a willingness to listen to the partner's complaints. Skilled monitors are sometimes used to train implementing partner staff in basic administrative systems such as bookkeeping, reporting, warehousing, etc. to help ensure that performance goals can be achieved. A monitor performing this role fosters a spirit of cooperation; in so doing, the monitor performs a positive, developmental function and is not viewed simply as a watchdog.
Information sources: Interviews with implementing partner staff, representatives of the affected population, representatives of donors or response agencies who also may be working with the partner in question. A quick and simple source of information on implementing partner performance is the actual body of reports generated by the partner; the monitor should review these reports prior to visiting the partner's offices.
· Financial: Proper financial monitoring requires a plan, i.e., a budget that has been designed around the needs of the program and approved by the agency's financial authorities. Effective financial monitoring will follow rates of expenditure against specific budget lines over time. The level or intensity of financial monitoring will depend upon agency management needs. Generally - at a minimum - monthly comparisons of actual vs. planned expenditures should be carried out.
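The monthly actual-vs-planned comparison can be sketched as a small routine, shown here in Python. The budget lines and figures are hypothetical; real agency budgets will have more structure (currencies, periods, commitments vs. disbursements), but the core comparison is the same.

```python
# Sketch of the monthly actual-vs-planned expenditure comparison.
# Budget lines and amounts are hypothetical examples.

def expenditure_report(planned, actual):
    """For each budget line, report actual spend and its fraction of plan."""
    report = {}
    for line, budgeted in planned.items():
        spent = actual.get(line, 0)
        report[line] = {
            "planned": budgeted,
            "actual": spent,
            # burn rate > 1.0 means the line is overspent against plan
            "burn_rate": round(spent / budgeted, 2) if budgeted else None,
        }
    return report

plan = {"transport": 10000, "food": 50000, "staff": 8000}
spend = {"transport": 12500, "food": 31000, "staff": 8000}
for line, row in expenditure_report(plan, spend).items():
    print(line, row)  # transport shows a burn rate of 1.25 (overspent)
```

An overspent transport line, for example, may itself be a monitoring finding: worsening road conditions or rising fuel prices often show up in the accounts before they appear in field reports.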
· Security: As emergencies become increasingly complex in cause and effect, monitors must increasingly concern themselves with conditions of insecurity. The monitoring system should be designed to generate and disseminate information on changes in security risks, particularly those risks that threaten the monitors themselves and those that cause bottlenecks in other sectors of the emergency response.
Information sources: Interviews with key informants (e.g., local authorities, peace-keeping troops, recently-arrived displaced persons, journalists, etc.)
Monitors are often asked to perform what could be considered more precisely a situation assessment function in those cases where changes in prevailing emergency conditions are likely to affect directly the capacity of the program to achieve planned objectives.
Examples of emergency conditions that could directly impact the capacity to achieve program objectives - and would therefore be appropriate subject matter for monitors - include:
Mortality rate: In an emergency response program whose objective is to save lives, the crude mortality rate (i.e., of the entire affected population) should be monitored on a weekly basis. Monitors should take note that a sudden drop in the mortality rate might not necessarily imply that conditions are improving. It may be that high mortality rates have declined momentarily simply because the most vulnerable members of the emergency settlement - infants and young children - have died. Given the relatively high susceptibility to malnutrition and communicable disease of small children, another indicator of emergency conditions to monitor is the Under-Fives Mortality Rate, calculated for under-five-year-old children.
Information sources: Community health center records and community health workers. If possible, in the early stages of the emergency, the emergency response agency might consider paying for members of the affected or host population to act as burial attendants or watchers who should be trained to track the number of burials/day.
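The weekly crude mortality rate itself is a simple calculation, conventionally expressed in emergency settings as deaths per 10,000 population per day. The Python sketch below illustrates the arithmetic; the burial count and settlement population are hypothetical figures, and commonly cited alert thresholds (around 1 death/10,000/day for a severe emergency) vary by context and baseline.

```python
# Sketch: crude mortality rate (CMR), expressed - as is conventional in
# emergency settings - as deaths per 10,000 population per day.
# The burial count and population below are hypothetical.

def crude_mortality_rate(deaths, population, days=7):
    """Deaths per 10,000 population per day over the reporting period."""
    if population <= 0 or days <= 0:
        raise ValueError("population and days must be positive")
    return deaths * 10_000 / population / days

# e.g., 21 burials recorded over one week in a settlement of 30,000
cmr = crude_mortality_rate(deaths=21, population=30_000, days=7)
print(cmr)  # 1.0 death/10,000/day
```

The same formula applied to under-five deaths and the under-five population gives the Under-Fives Mortality Rate noted above.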
Health and nutrition indicators: In programs whose objectives are to improve health and nutrition status, emergency program monitors may be asked to monitor:
· Morbidity rates: For serious diseases that are communicable and/or related to nutritional deficiency. Rates are expressed as percentages of the population sampled. Diseases to be monitored include, among others, measles, cholera, diarrhea, meningitis, and acute respiratory infections.
Information sources: Interviews with key informants (community health workers, public health officials); semi-structured interviews with members of the emergency settlement (particularly members of vulnerable groups); examination of community health center records (to determine immunization rates, changes in referral rates for medical cases or severe malnutrition, quantities of Oral Rehydration Treatment distributed, etc.); records of other local implementing agencies (particularly those engaged in weight-for-height (W/H) measurement and health screening.)
Human rights: Complex emergencies may engender changes in prior social hierarchies; traditional leadership structures may evolve or disappear altogether. In programs whose objective is to secure human rights (including women's rights), monitors should observe the impact of these social changes on human rights in the emergency settlement; e.g., does the rise to power of a private militia mean that the relief item distribution system is likely to be hijacked for the benefit of that militia and its followers? Are there basic guarantees in place for the protection of women if the distribution system plans to target women with direct distributions?
Information sources: Private interviews with members of the affected population (in addition to discussions with their leaders); with implementing agency staff; with the local host populations; the media; military or armed faction representatives.
Focus of emergency evaluations
The focus of the evaluation will depend on the original assumptions of the planners of the emergency response and on the objectives conceived for that response. Traditionally, evaluators have sought whenever possible to quantify progress achieved in terms of those objectives. Such quantitative indicators of change would be developed such that both program proponents and informed skeptics could agree whether or not progress has in fact been achieved (Turner, 1976).
Some advocates of quantitative approaches consider plans well-written only if they offer clearly measurable objectives. Conversely, plans which do not provide such measurable objectives are viewed as ill-conceived, the work of subjective, unscientific minds. Others, however, increasingly view a strict adherence to quantitative, measurable objectives as insufficient if the social changes engendered (or ignored) by the emergency response are to be understood.
Increasingly, emergency program planners view the processes of capacity-building, participation, and/or the empowerment of affected populations as the critical aims of the emergency response, at least as important as the efficient delivery of relief commodities or the effective establishment of a functioning distribution system. The actual process of implementation becomes an equally important focus of the response and, consequently, of the evaluation as well. Given this change, it is incumbent upon evaluators to develop the qualitative tools necessary to gauge progress according to these social indicators. The objective monetary values sought by the cost/benefit analyst may not easily be ascribed here; indeed, they may not be appropriate.
Thus, an evaluation of an emergency response today will not only compare the costs and benefits of repairing the settlement's water and sanitation system, it will assess, as well, the degree of emergency settlement community participation in the program, from the outset of the design stage. An evaluation of the food distribution system will focus not only on the size of the ration distributed or the number of cans of vegetable oil lost, it will examine whether, for instance, women in the community have gained or lost control of their lives because of the distribution system adopted.
Increasingly, the evaluator will consider whether a particular response has increased the capacity of the emergency settlement population to deal with its own problems over the long term or has worsened the underlying vulnerabilities of that population. The focus of the program planners - and consequently of the evaluators as well - will more and more center on such process-oriented aims as helping the population to recover traditional coping mechanisms, assisting the population to restore its sense of dignity, or empowering women to gain control over their lives.
Whatever approach is taken by the evaluators, the assessment of value will most likely consider the evaluative criteria of appropriateness and cost-effectiveness, coverage and coherence, and connectedness and impact (ODI, 1995). Questions to be investigated by evaluators according to these criteria might include:
a. Appropriateness and cost-effectiveness of the response
· What were the actual needs of the population? How were they assessed? Were the assessments accurate?
· Which needs of the affected population were actually addressed by the intervention's design?
· To what extent did the actual response(s) meet the needs of the affected population? Were the responses appropriate given (1) needs and (2) available resources?
· How might such a response be better designed in the future to meet those needs?
b. Coverage and coherence of the response
· How were beneficiaries in the affected population identified/targeted?
· What were the principal gaps in the response? Were attempts made to fill those gaps? If not, why not?
· Were the various responses adequately coordinated? What were the results when they were/were not coordinated?
· How might the response be redesigned to ensure coherence among the various actors and responses?
c. Connectedness and impact of the response
· What was the impact of the response on, e.g., mortality and malnutrition rates, security and protection, restoration of coping mechanisms?
· How did the relief effort impact longer-term recovery efforts?
· How might future responses be redesigned to ensure connectedness between the relief program and longer-term recovery needs?
Monitoring via reporting systems
For those unable to visit the emergency settlement, the reporting system is, in effect, the primary monitoring system. In designing the reporting system, the necessary and sufficient rule should be invoked: Is the information to be collected necessary? Do respondents really need this particular information (whose collection, analysis and dissemination always carry a cost)? As well, is the information to be collected sufficient for the effective management of the response? Do respondents need more? If information satisfies the necessary and sufficient rule, then it should - assuming availability of resources - be included in the reporting system.
· SITREPS: Generally, situation reports or SITREPs become the effective monitoring tool for those geographically or organizationally removed from the actual emergency settlement. The frequency of situation reporting will vary with the rate of change of emergency conditions. While each organization maintains its own policies and procedures on situation reports, monitoring via the situation reporting system generally requires information on actual or expected changes in the following categories:
Prevailing emergency events and conditions (deaths, casualties, infrastructure damage)
Security conditions (for agency staff, for beneficiaries, of resources)
Target population (number and composition, vulnerable groups, etc.)
Basic needs of the affected population (medical/food/water, etc.)
Logistical needs and concerns
Other government, agency responses to-date (planned and actual) and coordination issues
Internal management/operational needs
Implementing partner needs
Proposed corrective actions
A narrative describing actual vs. planned levels of output
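The SITREP categories above can be captured in a simple structured template so that successive reports stay comparable. The sketch below is illustrative only; the field names are assumptions, not any agency's standard format.

```python
from dataclasses import dataclass

@dataclass
class SitRep:
    """Minimal situation-report template; field names are illustrative only."""
    period: str                      # reporting period, e.g. "week 2"
    emergency_conditions: str = ""   # deaths, casualties, infrastructure damage
    security: str = ""               # staff, beneficiary, and resource security
    target_population: str = ""      # number, composition, vulnerable groups
    basic_needs: str = ""            # medical / food / water, etc.
    logistics: str = ""              # logistical needs and concerns
    other_responses: str = ""        # other government/agency responses, coordination
    internal_needs: str = ""         # internal management/operational needs
    partner_needs: str = ""          # implementing partner needs
    corrective_actions: str = ""     # proposed corrective actions
    output_narrative: str = ""       # actual vs. planned levels of output

    def changed_sections(self) -> list:
        """Return the names of sections with content, since SITREPs report
        actual or expected *changes* rather than restating everything."""
        return [k for k, v in self.__dict__.items() if k != "period" and v]
```

Because reporting frequency varies with the rate of change of conditions, a template like this also makes it easy to see at a glance which categories changed between reports.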
· Pipeline Analysis Reports: Pipeline analysis is a monitoring activity performed by respondents to predict potential ruptures in the supply and delivery of relief supplies to the emergency settlement. The capacity to produce a pipeline analysis report depends upon the availability (and accuracy) of the following information (by type of resource):
Opening resource balances (financial and material)
Expected arrivals, purchases, borrowings or other increases to resource inventories
Expected deliveries/distributions, loans, losses, or other reductions to resource inventories
Generally, a pipeline analysis should forecast at least three to six months ahead so that emergency program respondents will have adequate time to take action against a potential rupture in the supply line.
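The arithmetic of a pipeline analysis is simple: starting from the opening balance, add expected increases and subtract expected reductions period by period, and flag the first period in which the projected balance goes negative. A minimal sketch, assuming monthly buckets and a single commodity (figures in the example are illustrative):

```python
def pipeline_forecast(opening_balance, arrivals, distributions):
    """Project month-end stock balances for a single commodity.

    arrivals and distributions are equal-length lists of expected monthly
    increases (arrivals, purchases, borrowings) and reductions
    (distributions, loans, losses). Returns (balances, rupture_month),
    where rupture_month is the first 1-based month the balance goes
    negative, or None if the pipeline holds over the forecast horizon.
    """
    balances, rupture_month = [], None
    balance = opening_balance
    for month, (inflow, outflow) in enumerate(zip(arrivals, distributions), start=1):
        balance += inflow - outflow
        balances.append(balance)
        if balance < 0 and rupture_month is None:
            rupture_month = month
    return balances, rupture_month

# Example: 500 MT opening stock, one 300 MT arrival expected in month 3,
# steady distributions of 200 MT/month over a six-month horizon.
balances, rupture = pipeline_forecast(500, [0, 0, 300, 0, 0, 0], [200] * 6)
```

In this example the projected balance first goes negative in month 5, which is exactly the kind of advance warning the three-to-six-month horizon is meant to provide.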
Organizing monitoring and evaluation activities
Setting schedules: determining frequency of monitoring
There is no hard and fast rule concerning the frequency of monitoring visits, although generally the monitoring program should be intensified when the rate of change of emergency conditions increases. Obviously, the availability of resources will play a major role in the decision to send monitors to the emergency settlement more frequently. Generally, agencies should plan to monitor more closely when the program involves large volumes of assistance or when there is a high risk of improper resource usage. As well, when phase-out of the relief response is being considered, the level of monitoring should increase to detect possible critical changes in the operating environment.
Monitoring schedules should be worked out in advance and coordinated with local government authorities as needed. It is also important to explain to local authorities the importance of an occasional unscheduled, random visit to the emergency settlement.
Monitoring and evaluation team composition
Monitors and evaluators should have strong inter-personal and analytical skills. Particularly in complex emergency situations, the need to interact with such diverse groups as local authorities, implementing partner agency staff, beneficiaries, and - increasingly - the military or members of armed factions demands an extraordinary level of diplomacy and tact not heretofore required.
Experience with rapid rural appraisal techniques (particularly semi-structured interviewing skills, the willingness and ability to listen, the ability to foster discussion among participating beneficiary groups, social organization and labor mapping skills, etc.) is increasingly viewed as useful by emergency program planners. Keen observation skills and a deep sense of curiosity are also needed.
Evaluators should be able to apply basic cost/benefit analysis techniques (as well as other basic indicators: present value, rate of return, payback period, etc.) all the while understanding that these measures should be complemented by more qualitative indicators of the social impact of the response. Obviously, monitors and evaluators should have strong writing, numeracy, and accounting skills as well.
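The basic quantitative measures mentioned above - present value and payback period - reduce to a few lines of arithmetic. A minimal sketch; the cashflow figures in the example are illustrative, and (as the text stresses) these measures should complement, not replace, qualitative indicators:

```python
def net_present_value(rate, cashflows):
    """Discount a series of end-of-period cashflows at the given periodic
    rate; cashflows[0] is the initial outlay at time 0, usually negative."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def payback_period(cashflows):
    """Return the first period in which cumulative cashflow turns
    non-negative, or None if the outlay is never recovered."""
    total = 0.0
    for t, cf in enumerate(cashflows):
        total += cf
        if total >= 0:
            return t
    return None

# Example: a 1,000-unit outlay returning 400 per period for four periods.
flows = [-1000, 400, 400, 400, 400]
```

Here the outlay is recovered in period 3, and the net present value at a 10% discount rate is positive, so the intervention passes both screens before any qualitative assessment begins.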
Familiarity with the day-to-day operations of the program is often essential for monitors, who must fully understand expected program outputs in order to judge whether current implementation practices are likely to produce them. Such familiarity is also needed in a process of ongoing evaluation in which evaluators are asked to ascertain changes in local capacities brought about by the program. For terminal or long-term evaluations, this day-to-day familiarity is not essential: ideally, evaluators should be able to work from program documents and interviews to determine whether or not planned objectives were in fact achieved.
The decision to perform an external evaluation (i.e., to contract evaluators from outside the agency) will depend on the contracting agency's intentions. Such a practice is well-advised in situations where managers are too close to (i.e., personally involved with) the program and clearly in need of a fresh perspective. A mid-term evaluator hired from outside the implementing agency can offer new insight and recommendations to move the program along. Outside evaluators are also hired to support and give credence to decisions that managers wish to take but, for political reasons, find difficult to announce or implement.
Organizers of the monitoring and evaluation teams should consider various factors: e.g., leadership, gender, language, ethnic composition of the emergency settlement, experience, religion, ability to function within the operating environment of the emergency settlement, etc. In some cultures, for example, it may not be possible for male members of the evaluation team to address female members of the emergency settlement directly; gender factors, therefore, become critical. It is essential that members of the team be able to speak the local language (through an interpreter if need be).
The required background or skills will depend upon the type of program. Monitoring the progress of a water supply project will most likely demand the inclusion of a hygiene or sanitation specialist on the team. Evaluating the achievements of a community health program will require that a public health nurse or other type of community health specialist be present.
What is always required is a clearly designated and highly skilled leader who can maintain motivation among team members, negotiate a path around the many bottlenecks that may arise, and ensure that the needed report is produced and disseminated with a high degree of team support.
Preparing the monitoring and evaluation teams
Emergency respondents should take the time to ensure that monitoring and evaluation responsibilities are clearly defined and assigned. Specific guidelines or terms of reference should be prepared, preferably for each visit but certainly as a minimum for each sector and/or phase of the emergency response program to be monitored or evaluated.
Reporting structures and schedules should also be worked out and explained to the teams in advance. Monitoring and evaluation teams should - to the extent possible - know in advance who the users of their findings will be and what, if any, are the particular needs of each of those users.
Where training is needed, respondents should ensure that staff with experience in the functional and/or geographic area are included as resource persons. Of particular importance is training in team-building. The increasing difficulties in working in complex emergency situations call for an ability to function together as a team. Basic training in semi-structured interviewing techniques is advised as well.
Material resource issues
Too often, monitoring or evaluation capacity has been reduced because of budgetary limitations. Emergency respondents establishing monitoring and evaluation programs should plan and budget for these activities from the outset. Budget considerations should include:
· Other travel/transport costs (per diem, fuel, spare parts, etc.)
· Communications equipment (primarily for monitors posted in high risk zones)
· Recording equipment (cameras, tape recorders, calculators, etc.)
· Reporting equipment (computers, printers, etc.)
· Other office supplies (notebooks, pens, etc.)
Staff security concerns
The personal security of the monitoring and evaluation staff must be a primary concern of the emergency intervention. Staff should receive training in basic security management issues (e.g., mine awareness, stress management, negotiating under duress, etc.) Respondents should develop adequate evacuation plans for all parts of the country where monitors or evaluators are to be sent. Communications policies and procedures should be established with radio check-in times planned and agreed prior to departure.
Travel routes should be planned, mapped, and filed with supervisors and/or security officers before monitors or evaluators leave for emergency settlement sites. Travel plans should also be communicated to host government counterparts, faction leaders, peace-keeping troops, etc., and all necessary permits obtained before entering potentially hostile or sensitive secured areas. Where a local authority refuses to grant the needed travel permit, emergency respondents should make it clear that such a refusal risks a cutoff in the flow of relief commodities to the area.
Feedback of results and corrective action
Admittedly, adequate feedback of lessons learned from evaluations into future responses is infrequent. The inability of many organizations to develop, maintain, and make use of their own institutional memory has become a sad cliché. What is needed to reverse this usual trend is a system-wide commitment to ongoing staff development where newly hired emergency operations staff are introduced to the lessons gleaned from past responses and more experienced staff receive refresher training courses. For all staff, a review of agency policies and procedures and case study presentations of former emergency response successes and failures are warranted.
Examples of corrective action: A few examples of possible corrective actions to be taken by decision-makers in response to the findings of emergency program monitors follow:
· Increasing, decreasing or shutting off the supply of relief items to the emergency settlement.
· Altering the distribution mechanism to increase equity or target particularly vulnerable groups (e.g., direct distribution to women or household heads instead of indirect distributions to corrupt emergency settlement block leaders; decentralized instead of centralized distributions, etc.)
· Changing the composition of the relief supply basket (e.g., corrugated iron roofing instead of plastic sheeting; micro-nutrient fortified foods instead of unfortified foods, etc.)
· Redefining the actual target group (e.g., vulnerable groups instead of the entire affected population of the emergency settlement.)
· Re-registering the affected population (i.e., conducting a new census.)
· Improving or augmenting the logistics systems to handle new inflows of displaced people.
· Training in-house staff or those of implementing partners in agreed policies and procedures.
· Hiring or firing staff, writing or voiding contracts.
Some of the standards for monitoring and evaluation activities - or concerns about the lack thereof - include:
Interviewing methodology: Standards or guidelines concerning interviewing techniques, team composition, acceptable numbers and types of interviewees would be useful. In effect: can a definition of a rigorous interview be developed?
Site visit frequency: There is no standard at present for the number of times that a particular site should be visited during the emergency response although some observers note that a site should be attended by monitors during the delivery or the distribution of a major supply of relief items. Frequency of visits may also be a function of the type of commodity or program being monitored. Programs which distribute goods with a very short shelf-life, for instance, may need more frequent visits to ensure against spoilage. If the beneficiary population is motivated and integrally involved in monitoring activities, then the need for site visits declines. Without such motivated participation, the relief provider will have to plan more frequent visits to the emergency settlement site.
On-site monitoring presence: Most observers agree that there should be no relief item distributions undertaken if the local authorities or faction leaders do not permit monitors to be on-site during the distribution.
Reporting frequency: Relief providers should be able to present - at a minimum - monthly details of actions taken; resource/commodity arrivals, deliveries, and distributions; and recommended changes.
Mortality rates: In monitoring the magnitude of emergency conditions within the settlement, can standard threshold measurements (conventionally expressed as crude mortality in deaths per 10,000 persons per day) be attached to the following categories?
Normal developing country rate
Relief program (situation under control)
Emergency out of control
Major catastrophe (severe famine)
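One answer to the question above would be a simple classifier over the crude mortality rate. The cut-off values below are assumptions drawn from commonly cited 1990s emergency-epidemiology benchmarks, not a standard endorsed by this paper:

```python
def classify_mortality(cmr):
    """Classify a crude mortality rate (deaths per 10,000 persons per day)
    into the paper's four categories. Cut-off values are illustrative
    assumptions, not an agency standard."""
    if cmr > 2.0:
        return "major catastrophe (severe famine)"
    if cmr > 1.0:
        return "emergency out of control"
    if cmr > 0.5:
        return "relief program (situation under control)"
    return "normal developing country rate"
```

Whatever thresholds an agency adopts, fixing them in advance lets monitors report a category rather than a raw rate, which decision-makers can act on consistently.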
Nutritional status: In monitoring or evaluating nutritional interventions, emergency respondents should ensure that households have access to (from existing household stocks and distributed rations) a minimum of how many kcals per person per day? 1900? 2000?
Water and sanitation indicators: In monitoring the needs of the emergency settlement population (or evaluating the capacity of services), what are the minimum acceptable standards for water and sanitation/hygiene? Are the following acceptable?
Average number of liters of water available/person/day
Number of persons/latrine
Distance from latrines to dwellings: > 6 m
Distance from latrines to drinking water sources: > 30 m
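Once minimum standards are agreed, checking a settlement's observed indicators against them is mechanical. In the sketch below, the two distance limits come from the list above; the water-quantity and persons-per-latrine figures are placeholder assumptions, since the paper leaves those values as open questions:

```python
# (kind, limit): "min" means observed values below the limit fail;
# "max" means observed values above the limit fail.
STANDARDS = {
    "liters_per_person_per_day": ("min", 15.0),   # assumed value
    "persons_per_latrine":       ("max", 20.0),   # assumed value
    "latrine_to_dwelling_m":     ("min", 6.0),    # from the list above
    "latrine_to_water_source_m": ("min", 30.0),   # from the list above
}

def check_settlement(observed):
    """Return the list of indicators that fail their standard."""
    failures = []
    for name, (kind, limit) in STANDARDS.items():
        value = observed[name]
        if (kind == "min" and value < limit) or (kind == "max" and value > limit):
            failures.append(name)
    return failures
```

A monitor's site-visit checklist can then flag only the failing indicators for corrective action rather than reporting every measurement.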
Key Resources for Monitoring and Evaluation
Resources (guidelines, checklists, manuals, computer programs, or case studies) which may be useful to field officers responsible for monitoring or evaluating emergency settlement programs include:
Food and Agriculture Organization. 1992. Approaches to Monitoring Access to Food and Household Food Security. Committee on World Food Security, 17th Session. FAO, Ref. No. CFS: 92/3.
Hakewill, P.A. and Moren, A. 1991. Monitoring and Evaluation of Relief Programmes. Tropical Doctor 21:24-28.
Office of Foreign Disaster Assistance. 1994. Field Operations Guide for Disaster Assessment and Response, Version 2.0. Washington D.C.
Overseas Development Institute. 1995. Multi-Donor Evaluation of Emergency Assistance to Rwanda, Study III, Analytical Framework, Ref 0217, London: ODI.
Oxfam. The Field Directors' Handbook. Oxford: Oxfam Publications.
Turner, Herbert D. 1976. Principles and Methods of Program Evaluation. IDR/Focus, 3.
United Nations Children's Fund. 1986. Assisting in Emergencies. New York: UNICEF.
United Nations High Commissioner for Refugees. 1989. Supplies and Food Aid Field Handbook. Geneva: UNHCR.
United Nations High Commissioner for Refugees. 1982. Handbook for Emergencies. Geneva: UNHCR.
USAID/Office of Foreign Disaster Assistance. Monitoring & Evaluation Manual.
United Nations Development Programme. 1993. UNDP Guidelines for Evaluators. New York: UNDP Central Evaluation Office.
World Food Programme. 1993. Food Aid in Emergencies, Policies and Principles and Operational Procedures for WFP Staff. Rome: WFP.