The Essential Handbook. Radio and HIV/AIDS: Making a Difference. A guide for radio practitioners, health workers and donors (UNAIDS, 2000, 128 p.)
Monitoring educational radio programmes in Eritrea (photograph by Mary Myers)
What is monitoring and why do it?
Monitoring means assessing the progress and appeal of a programme or campaign during its lifetime. During the broadcasting period you will need to monitor the audience's awareness of your radio programme on a regular basis, to check that people are listening to and remain interested in the programmes. You also need to check that your materials or programmes are being broadcast as scheduled and that the reception quality is good enough for the target audience to be able to listen easily. Monitoring will help establish who is listening and when, and what they think of the programmes. It can provide feedback on the production process, and feed forward your audience's reactions and ideas into future programme-making (see Section 4 - Making radio interactive).
Monitoring a soap opera can help you determine which characters are popular and why. Depending on the production schedule, you may be able to tailor future plotlines and character developments accordingly so that the messages and information can be broadcast to maximum effect.
You can also monitor the issue itself - in this case HIV/AIDS and related topics - and update the content of your programming. Keep it contemporary and topical by reacting to news and developments that are of interest to your audience and will hold their attention. You will need to keep track of the changing status of HIV/AIDS and recommended practices. Monitoring can help day-to-day decision making to help bring about changes which are necessary: there is no point in only knowing about the impact of a programme once it is over, when the information cannot be used to improve it.
How do you do it?
There are a variety of methods for monitoring, many of which cost very little. They include:
· listeners' letters: these can be a rich source of qualitative and anecdotal evidence of listeners' views on the content, timing and reception quality of programmes, the characters in a drama series, even about radio presenters. Broadcasters can provide incentives for listeners to write in by running quizzes and competitions, but unprompted opinions are just as valuable. Remember, though, that this method favours the literate, although non-literate people can be encouraged to ask someone else - a school child, for instance - to write on their behalf. This may not always be appropriate for sensitive subjects relating to sex and HIV/AIDS
· listening panels and focus groups: these are groups of people who meet regularly with a facilitator to discuss openly their reactions to the programmes - this method can provide fairly immediate feedback and feedforward that a more formal survey could not provide, and could contribute to a continuous research process. For example, if numbers tuning in (ratings) are falling, focus group discussions can be conducted to find out why
· audience listenership survey: in the initial phase of your campaign or programme this research should be carried out as a random sample survey to find out who is listening to your programme, and whether in fact you are reaching your intended audience and in what numbers. Later on you could simplify the process to monitor members of the target audience only and find out whether ratings are stable, falling or rising
· broadcast monitoring (for programme producers, health organisations or funders who have contracted a radio station to broadcast your programme): this is to ensure that your programme is being aired at the times and frequency agreed with the radio station, and is particularly useful for tracking radio spots which require frequent airing. Ask people with access to a radio to monitor the programmes: give them a monitoring sheet with a list of times at which they should hear your spots and ask them to tick each time they hear one. The sheets can be collected on a regular basis and reviewed. If your programme is not being broadcast as agreed you have evidence to prove it and should talk to the radio staff to find out what the problem is. If all is going to plan then you could thank those responsible! It may be possible to obtain other information from the same monitors, on issues such as audibility and signal clarity
· on-the-street interviews or vox pops: these can be carried out in a systematic way, asking the same questions of a range of people - or of attendees at a clinic or other health facility, for example - to gain quick impressions of people's awareness of and reactions to your programme and the issues involved
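Where monitoring sheets are collected centrally, the tallying described under broadcast monitoring can be sketched in a few lines of code. This is a purely hypothetical illustration - the times shown, and the idea of reporting a compliance percentage, are assumptions for the example, not part of the handbook's method:

```python
# Hypothetical broadcast-monitoring sheet: agreed spot times versus the
# times a monitor actually ticked as heard on air.
scheduled = ["07:30", "12:15", "18:00", "20:45"]   # times agreed with the station
heard = ["07:30", "18:00", "20:45"]                # ticked by the monitor

# Slots where the spot was scheduled but not heard - evidence to raise
# with the radio staff if the pattern persists.
missed = [t for t in scheduled if t not in heard]
compliance = 100 * len(heard) / len(scheduled)

print(f"Spots heard: {len(heard)}/{len(scheduled)} ({compliance:.0f}%)")
print("Missed slots:", missed)
```

In practice the same sheet can carry extra columns for audibility and signal clarity, as the bullet above suggests.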
MONITORING A SOAP OPERA
In New Home, New Life, the BBC radio soap opera from Afghanistan, the writers created a character called Shukria to act as a vehicle for the symptoms of psycho-social trauma and how to deal with them. Shukria turned out to be a strong, finely acted character. But it transpired in one focus group discussion that people were put off by this character. This was disturbing news, since one of the main objectives in the storyline relating to Shukria was to encourage sympathy towards the war-traumatised. If the central character was not eliciting sympathy, the storyline would be unlikely to have the desired effect. So the writers toned down the side of her personality which was causing offence, such as the hysterical outbursts and the strident tone of voice. The consequences of war trauma were still apparent, but portrayed in a way which the audience could identify with, and learn from.
BBC AED (1997)
If you are an NGO and want to check the scheduling of spots, you may find it most convenient to record the output of a radio station on video tape - a single tape gives you six or eight hours of broadcast time. You will need a video recorder and an appropriate cable connected to the output socket of the radio.
Who does the monitoring?
Ordinary members of the community can be asked to be monitors. For example, you could ask individual members of the target audience to keep a diary of their radio listening, or get groups to form listening panels.
Extension and health workers can be involved in collecting information at their place of work or during the course of visits to farms and households; ideally monitoring should be a routine activity rather than a one-off event.
Production staff can also take responsibility by logging phone calls and letters received at the station, and going out into the community to find out if their audience is enjoying their programmes.
What is evaluation?
Evaluation means measuring or assessing change in a systematic way in order to improve decision-making and future practice. In the context of radio it means two things: firstly, assessing the effectiveness of your radio programmes (audience evaluation) and secondly, learning about the radio production process (internal evaluation).
Monitoring in the Health Unlimited Media Health Education Project in Cambodia used vox pops to follow up a formal KAP (knowledge, attitudes and practices) survey. The latter had shown that fear of AIDS and of persons with AIDS (PWAs) had increased markedly amongst women after a major TV and radio campaign. The purpose of the vox pops was to determine why some people were increasingly afraid of AIDS and PWAs.
Respondents aged between 15 and 35 were chosen to correspond with the KAP survey sample group. Interviews were conducted in the market place and the street of a town and district adjacent to the one where the KAP survey had been carried out. Before asking questions the interviewers (a public health consultant, a local doctor and a member of the provincial AIDS Committee) explained the purpose of the interview and requested a few minutes of the person's time. Fifty-one people were interviewed in one day. After recording each respondent's gender, age and occupation, the interviewers asked the following questions
· have you heard of AIDS?
The vox pops revealed that people were confused about AIDS and unsure of what to do about PWAs. The information campaign had increased awareness and therefore concern, but had failed to allay irrational fears and consequent prejudices. The results suggested a need for greater interpersonal contact and education, as well as longer-term media coverage.
Catherine O'Brien (1996)
Audience evaluation

This means measuring or assessing changes in the target audience's knowledge, attitudes and behaviour that come about as a result of a health education radio programme or campaign. Evaluation therefore measures the impact of the programmes on the target audience's lives. Evaluation has the potential to identify both positive and negative outcomes of a programme, and both expected and unexpected impact (see box on page 93).
CAMPAIGN TO INCREASE CONDOM USE BY LONG-DISTANCE LORRY DRIVERS
Positive, expected outcomes - what is planned for and looked for in evaluation: eg the target audience increase their use of condoms (reported and sales outlet information)

Positive, unexpected outcomes - hoped for, and may be identified in evaluation but not always systematically assessed: eg other male listeners apart from lorry drivers report increased condom use

Negative, expected outcomes - sometimes identified as possible outcomes but not always evaluated: eg listeners report less compassion for PWAs after the campaign

Negative, unexpected outcomes - these need to be minimised; they may be identified but are rarely systematically assessed: eg the target audience think condoms are not necessary for regular relationships
Internal evaluation

This means learning about yourselves and your work. It requires you to ask questions about the organisation and operations of your station or project. Are principles such as participation, democracy and equality matched by practice and, if not, why not? Evaluation of this kind can help identify problems and their solutions, which will ultimately contribute to better programme-making. An internal evaluation might review technical, personnel, managerial and financial issues.
Why do we do it?
· to find out whether our campaign or radio programme is working effectively - is the health education content making a positive difference to people's lives?
· to improve the way our project or radio station functions
· to improve communication and relationships between radio project or station personnel and between programme-makers and audience
· to share our experience with others
· to demonstrate value for money
· to report to donors and seek on-going funding
EVALUATION IN ACTION
In the Zimbabwe Male Motivation Project, the radio drama series You Reap What You Sow ran twice a week for six months. It cost $92,000 to produce and reached 41% of men aged 18 to 55 according to a post-project survey of 900 men. Projections from the survey sample to the national population of men aged 18 to 55, who numbered some two million in total, indicated that over 80,000 started to use a family planning method as a result of the radio drama. The cost was US$0.11 per man reached and $1.12 per new family planning user.
Kuseka and Silberman (1990)
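The per-head costs quoted in the Zimbabwe example follow from simple arithmetic on the figures given. A quick sketch (the 82,000 new-user count is an assumption consistent with the quoted "over 80,000", not a figure from the survey report):

```python
# Cost-effectiveness arithmetic for the Zimbabwe Male Motivation Project,
# using the figures quoted in the text.
production_cost = 92_000      # US$ spent producing the drama series
male_population = 2_000_000   # men aged 18-55 (approximate national total)
reach = 0.41                  # proportion who heard the drama (survey result)

men_reached = male_population * reach                 # 820,000 men
cost_per_man_reached = production_cost / men_reached  # ~US$0.11

new_fp_users = 82_000  # assumed; text says "over 80,000" new family planning users
cost_per_new_user = production_cost / new_fp_users    # ~US$1.12

print(f"Cost per man reached: ${cost_per_man_reached:.2f}")
print(f"Cost per new family planning user: ${cost_per_new_user:.2f}")
```

Reproducing this kind of calculation is a simple way to demonstrate value for money to donors, one of the reasons for evaluating listed above.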
It is easier and more efficient to evaluate your programme or campaign if its objectives and indicators are well-defined from the outset and a basis for future evaluation established when carrying out initial research.
What do you want to evaluate?
As with initial research it is essential to assess audience knowledge, attitudes and practices. Behaviour change is a key indicator but measuring changes in knowledge and attitude is also important since these may lead to behavioural changes.
Evaluations are done to find out about the target audience's
· recall of a specific radio programme (spontaneously by the listener and after prompting by the interviewer)
· detailed recall of message, jingle, slogan
· greater knowledge of specific messages
· understanding of specific information and attitude-changing messages (the latter can be harder to gauge than facts, eg assessing attitudes towards PWAs)
· change in attitude
· desire for further information
· change in behaviour: indicators of this could include reported and observed practices, purchase of goods (eg condoms) and use of services (eg attendance at STD clinics)
· possible negative and unexpected impact on the target audience and other listeners
Internal evaluation assesses
· decision-making processes: are they transparent, inclusive, democratic?
· funding sources: are they sustainable, responsive, separated from the message?
· creativity and innovation: are a variety of programme formats being used?
· involvement of audience in programme design: are you meeting expressed needs?
· awareness of health and related issues (eg gender) among production personnel: are they prepared to admit ignorance and prejudice and seek advice?
· nature of relationships with partner organisations: donors, programme providers, material providers, research organisations etc
It doesn't have to be a large-scale survey. For example, in Mali an impact evaluation of a series of tailor-made programmes on the theme of natural regeneration of trees was done with a sample of just 35 respondents.
After the broadcasts a high proportion of respondents within the area reached by Radio Douentza showed increased awareness of how to mark (or visualise) the young tree (2/35 before and 15/35 afterwards), and furthermore were putting the advice into practice by marking the shoots with old cups or calabashes as the programme recommended. There was a jump in the numbers aware of the correct spacing required between trees in fields (9/35 before and 28/35 after), and also an increased awareness that pruning permits do not need to be paid for. Overall, 60% of the sample demonstrated that they had heard our programmes and remembered them in some detail.
Myers et al (1995)
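The before-and-after counts in the Mali example convert readily to percentages, which is how such results are usually reported. A minimal sketch using only the figures quoted above:

```python
# Before/after awareness counts from the Mali evaluation (sample n = 35)
n = 35
counts = {
    "marking young trees": (2, 15),    # (before, after)
    "correct tree spacing": (9, 28),
}

for topic, (before, after) in counts.items():
    # Convert raw counts into percentages of the sample
    print(f"{topic}: {100 * before / n:.0f}% before, {100 * after / n:.0f}% after")
```

Even with a sample this small, presenting both the raw counts and the percentages (as the source text does) lets readers judge the strength of the change for themselves.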
How is evaluation carried out?
Many of the techniques used in initial research are also appropriate for evaluation at the end or midway through a project (see Section 1 - Initial research). A mixture of quantitative and qualitative methods can be used depending on the objectives of the campaign or programme, the indicators you are trying to measure and the time and resources available.
The Mali example demonstrates the value of having baseline data on the 'before' situation against which to measure the 'after' situation. However, the problem remains that the observed or reported changes might have taken place without the campaign, or been caused by other events. In the case cited above the best results were often obtained from villages where a local NGO working on the same issues was not present: this shows that in some cases it was the radio alone which was popularising the recommended techniques, as there were no significant sources of information in these villages other than word of mouth, and these villagers did not have direct contact with the Near East Foundation (NEF) workers. This small survey tells us that behaviour did change in the villages visited by the evaluator, and an assumption can be made that similar behaviour change took place in other villages reached by the broadcasts. We don't know this is the case, but it is a reasonable assumption based on the survey result.
If a baseline survey is not feasible (your programme has already started), then you may be able to compare respondents exposed to the broadcasts with those unable to receive them (the latter are known as a control group). In this way your evaluation can avoid wrongly attributing change to the radio programme, by finding out what would have happened anyway. Data collected previously - for instance on the rate of contraceptive use - can be analysed to see what the trend over time was before the campaign and whether there was any increase after it.
You may be able to add questions onto an official or NGO survey - for example, about people's knowledge, attitudes and practices regarding HIV/AIDS - where the relative influence of the radio can be explored. Don't forget to pretest your evaluation questionnaire to avoid ambiguous wording and iron out other problems.
KAP surveys are often an important part of evaluations and you may well want to be able to extrapolate (generalise) the results to the whole population served by the communication intervention by carrying out a sample survey. In this case certain techniques have been developed which reduce the sample size, and therefore the cost, of carrying out quantitative data collection. Cluster surveys (these require specialist training and analytical skills which may be offered by UN agencies such as UNICEF and WHO) can provide generalisable results by selecting representative sites for research. A minimum of 300 people have to be surveyed to obtain reliable results that can apply to the whole population. However, such data do not often examine why people do or do not change their knowledge, attitudes or behaviour, and because the questionnaires are structured with Yes/No or a limited choice of answers, they cannot reveal or probe unexpected outcomes.
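The "minimum of 300" figure is consistent with the standard margin-of-error calculation for a simple random sample. The formula below is a textbook rule of thumb, not taken from the handbook, and a real cluster survey would also need a design-effect adjustment on top of it:

```python
import math

# Approximate 95% margin of error for a proportion estimated from a sample
# of size n, at the worst case p = 0.5 (standard rule of thumb)
def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

# With 300 respondents, estimates are accurate to within roughly +/-6 points
print(f"n=300: +/-{100 * margin_of_error(300):.1f} percentage points")
```

Quadrupling the sample only halves the margin of error, which is why sample sizes beyond a few hundred quickly stop being worth their cost for this kind of survey.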
Rolling evaluations can be done at specified intervals. For example, a sample of the audience might complete a written questionnaire or take part in a structured interview based on information and messages to be broadcast over the following three months. A similar audience sample (not the same people) then completes the same questionnaire after the broadcast to show what they have learned.
Avoid bias by performing survey work away from crowds, preferably in private houses or courtyards, so that the answers given by one respondent do not influence those of the next person, and people do not compete to come up with answers.
A small, well-chosen number of people may provide information that is just as meaningful as a large statistical survey, especially once you have established that your listenership figures are acceptable and steady or rising. Checklists of issues and topics can guide the interview, and unexpected or negative impacts can be probed to gain greater insight into why they have occurred.
Diary packs can be distributed to representative members of a target audience, and can help overcome gaps left by other forms of evaluation - such as collecting information on women or those living in remote areas. Listeners are asked to record their reactions to radio programmes, what information they felt was most or least useful, and whether they put into practice any of the advice they heard. This can be time consuming, and it is often appropriate to offer modest incentives to the diarists. Literacy is a prerequisite, and non-literate listeners will still have to be accessed face to face. A holistic approach using a combination of data collection methods can be revised as a project team gains experience.
It is always problematic to assess whether knowledge is actually being translated into practice, but this can be measured by figures on the use of services (attendance at clinics), purchase of products (condoms), reduction in cases of a certain disease and so on, demonstrating that people are engaged in health-seeking behaviour. Proxy (substitute) measures of audience interest and change in attitude can be used, such as evidence that listeners are seeking further information by writing and phoning in. Sometimes evidence of the popularity of a programme is manifested in ways you might never have thought of.
DISCOVERING EVALUATION INDICATORS
The Youth Variety Show (YVS) in Kenya, a radio phone-in show for young people on the subject of sexuality and sexual behaviour, was guided by intensive research. This included a national baseline survey of youth and parents - the Kenya Youth Needs Assessment (6,300 interviews) - focus group discussions with more than 350 adolescents and parents in five districts, in-depth interviews with opinion formers and leaders, a review of the legislation and policy environment, content analysis of newspaper coverage of youth issues and, once the programme started, content analysis of letters from youth. During the broadcast of the radio programme, monitoring was carried out: a panel of youth and a separate panel of parents listened to the show, and their critiques were used to improve the content of the next programme. Evaluation was done through a follow-up household survey of adults and adolescents to assess audience exposure to YVS, conducted by Research International, a market research firm that conducts omnibus surveys for the commercial sector several times a year; Johns Hopkins University/Population Communication Services bought some questions as part of this on-going survey. Results showed that 38% of all respondents listened to YVS, rising to 53% among 15-24 year olds. Surveys at clinics showed that increasing numbers of youth attending the clinics had listened to YVS and that, along with friends, YVS was the most important source of referral. Content analysis of letters and the radio listener panel studies corroborated this finding.
The cost of research and evaluation was $37,330 of a total budget of $97,170 - nearly 40% of the total cost of making the programme. Limitations encountered included the rudimentary nature of clinic data management; the expense and labour intensity of data collection; service providers trying to provide good results; the sensitivity of sexual issues; and the intrusive nature of data collection. However, the use of a variety of methods - especially in the initial research, and through pretesting and monitoring - is likely to have provided a firm footing for the impact results.
In Mali an informal indicator of the success of two cassettes of health messages, several concerning HIV and AIDS, recorded in the form of traditional songs by folk singers, was the number of pirated copies believed to have been made and sold on by private individuals. Staff at Radio Douentza, which regularly aired the songs, estimated that about 50% more cassettes were pirated over and above the 3,500 distributed officially.
The success of Radio Gune Yi, a youth programme in Senegal, has led not only to supportive press reports and letters and calls from listeners but to requests from radio stations elsewhere in Africa to buy and broadcast the programme.
Mary Myers (1997)
DIFFERENT FORMS OF EVALUATION
Evaluations of the Afghan radio serial drama New Home, New Life have taken various forms over its lifetime, and together present a much fuller picture of the impact of the programme than any one method could achieve.
For example, anecdotal evidence from interviews revealed personal feelings about the programme: 'I can't go to sleep without hearing New Home, New Life'. Quantitative surveys on listenership (10,000 interviews) revealed that regular listeners were only half as likely as non-listeners to be injured or killed in landmine accidents, as they were more aware of the dangers. A competition was run in which listeners had to write in with the answers to ten educational points featured in the drama, eg What was the basic cause of the spread of cholera in Lower Village? There were over two thousand entries, 90% of which answered eight or more of the ten questions correctly.
A before-and-after survey of 300 families in three Afghan provinces used a random cluster sampling technique, asking 12 questions on key messages due to be featured in broadcasts over the following three months. After the broadcasts the same questions were asked of different people from similar areas. The proportion giving correct answers rose from 45% before the broadcasts to 80% after.
Two examples: At what age does a child need extra food in addition to mother's milk? Before the programmes the correct answer was given by about one third of men and women listeners; afterwards this rose to two thirds. What should you do with a cow's colostrum? In Afghanistan there is a strong tradition that farmers drink this rather than give it to newborn calves, resulting in heavy mortality among calves. But after the broadcasts some five times more men and four times more women responded with the correct answer. Significantly, it was radio alone that conveyed these messages: the chance of them being reinforced by any other source on the ground during the broadcasts was so remote it could be discounted.
Gordon Adam (1995)
Who should evaluate?
The decision to evaluate is usually a joint one made by a programme and its participants, together with a ministry, department, organisation or funding agency. The objectives and expectations need to be clearly agreed by all those concerned.
Project or radio station staff will bring an in-depth knowledge of the programme and, to some extent, the target audience. If they have learnt research skills these can be gradually refined and expanded over time. However, unless there are funds to employ researchers or evaluators on a full-time basis, production personnel may be too busy to allot sufficient time to a full evaluation process, and may be unable or unwilling to be critical of the programme.
Externally commissioned evaluators
International consultants can bring a certain degree of impartiality and highly specialised expertise, and have a broad range of experience to apply. The disadvantages are that they are expensive to hire (fees, per diems and travel), often have to rely on translators and will not always be aware of the difficulties and limitations faced by the programme and its staff. Local consultants, on the other hand, are on the spot and understand the context. They will probably, though not necessarily, speak the appropriate language(s); they are usually cheaper than international consultants and there is a greater possibility of involvement in future evaluations, thereby providing some continuity. Occasionally there may be a problem with bias.
Market research firms
By using recognised professional techniques these companies usually carry out national sample surveys and focus group discussions to a high standard. However they can be expensive and may need careful briefing on how to treat the subject matter, especially the nuances of language and the sensitivities of the respondents involved.
Donor evaluation team
This may be required by the donors. It can contribute to the dissemination of experience and to improving their programmes elsewhere. Care is needed to explain the purpose and methods of the evaluation so that staff do not feel they are being tested or criticised. As with any externally commissioned evaluators, terms of reference should be discussed and negotiated with the programme team so that they are involved in the process from the outset.
Pharmacists and other suppliers of HIV-related products, eg condoms, can be enlisted in data collection as well as providing reinforcement information to their clients. Sophisticated systems for condom distribution, such as those used by PSI in the Ivory Coast and the Ghana Social Marketing Foundation in Ghana, enable them to evaluate condom sales and keep track of where demand is rising and how quickly.
Data can be collected from specialist clinics (eg mother and child, family planning or STD) or other health facilities. This requires effective collaboration with the health authorities at local, provincial and national level. Qualitative data can also be collected at clinics and other target institutions such as schools, but will require the services of programme staff or commissioned evaluators. In this case health workers and teachers, for example, may be key informants.
Listeners themselves can contribute by recording their impressions of programmes and campaigns in diaries (see Qualitative Methods, page 97).
Communicating and using research results
Evaluation reports should be kept short and simple, containing practical recommendations for future programming decisions. Researchers should use language appropriate to the readership, which will include production personnel, donors and partner organisations. For both needs assessments and evaluations, it can be worth being creative in presentations, using visual techniques and samples of audio material. Bring reports alive with photographs and quotations from listeners and others involved. State clearly the implications of initial research for the timing, content and style of the programme without being overly prescriptive. Be careful about assuming that changes in behaviour are caused only by the existence of the programme or campaign, or that a small-scale evaluation applies to a wider group of people.
Training for evaluation
Training is often a major cost in collecting data - training clinic staff to keep records or training interviewers to conduct household surveys, for example. It is generally cheaper to train a smaller number of evaluators to do 20 or more interviews each than to train a larger number to carry out ten or fewer, and the interviewers become more skilled as they conduct more interviews. Use research and academic institutions within the country concerned: students can make willing survey enumerators. Consultants should focus efforts on training local people and building creative capacity; this requires follow-up support and supervision. It takes time and practice to learn the skills and attitudes required to carry out good quality research. Training alone will not produce good research without resources - salaries, transport, fuel and, above all, time to get researchers out talking, listening and observing (see Section 10 - Training and sustainability).
References and Further Reading
Adam, G (1995) Article in COMBROAD, September 1995
Almedom, A, Blumenthal, U and Manderson, L (1997) Hygiene Evaluation Procedures: Approaches and Methods for Assessing Water- and Sanitation-Related Hygiene Practices, International Nutrition Foundation for Developing Countries (INFDC), PO Box 500, Charles Street Station, Boston, MA 01224-0500, USA. Available from London School of Hygiene and Tropical Medicine, Keppel Street, London WC1E 7HT, UK
de Fossard, E (1997) How to Write a Radio Serial Drama for Social Development: A Script-Writer's Manual, Johns Hopkins University School of Public Health, Center for Communication Programs, Baltimore, MD
Feuerstein, M-T (1986) Partners in Evaluation: evaluating development and community programmes with participants, Macmillan, London. Distributors: Macmillan and TALC (Teaching Aids at Low Cost), PO Box 49, St Albans, Hertfordshire AL1 4AX
Health Unlimited (1996) Creative Radio for Development: Workshop and Conference Report, London: Health Unlimited, Prince Consort House, 27-29 Albert Embankment, London SE1 7TS Tel: +44 171 5999 Fax: +44 171 582 5900. e-mail: email@example.com
Institute of Development Studies (IDS) (undated) Participatory Rural Appraisal Topic Packs on Health, and Sexual and Reproductive Health, Institute of Development Studies, The University of Sussex, Brighton, BN1 9RE, UK. e-mail: firstname.lastname@example.org
IIED (various) Participatory Learning and Action (PLA) Notes (previously RRA Notes). Special issues include No 16 on Health and No 31 on Participatory Monitoring and Evaluation. Distributor: Sustainable Agriculture Programme, International Institute for Environment and Development (IIED), 3 Endsleigh Street, London WC1H 0DD, UK. e-mail: email@example.com
Kuseka, I and Silberman, T (1990) Male motivation impact evaluation survey, Harare: Zimbabwe National Family Planning Council
Mikkelsen, B (1995) Methods for Development and Research, Sage, New Delhi. Distributor: Sage Publications, 32 M-Block Market, Greater Kailash-I, New Delhi 110048, India; 6 Bonhill Street, London EC2A 4PU, UK; 2455 Teller Road, Thousand Oaks, California 91320, USA
Mody, B (1991) Designing Messages for Development Communication, Delhi: Sage Publications
Myers, M, Adam, G and Lalanne, L (1995) The Effective Use of Radio for Mitigation of Drought in the Sahel, Cranfield Disaster Preparedness Centre, RMCS Shrivenham, Swindon, UK
Myers, M (1997) Media Monitoring Visit to Senegal and Mali, ICHR Radio Partnership, Geneva. Distributor: ICHR Radio Partnership, Villa de Grand Montfleury, 1290 Geneva, Switzerland
Nichols, P (1991) Social Survey Methods: A Fieldguide for Development Workers, Development Guidelines No.6. OXFAM, Oxford. Distributor: OXFAM, 274 Banbury Road, Oxford OX2 7DZ, UK
O'Brien, C (1996) Pilot Project on Grassroots Reinforcement of Broadcast AIDS Messages in Two Districts of Kampot Province, Cambodia: Health Unlimited
Roberts, P (1996) Abstract from presentation by Johns Hopkins University Center for Communication Programs to the Creative Radio for Development Conference, Birmingham, UK, May 1996