Figure: The Action Research and Planning process
Note: The first four steps in the Action Research and Planning process lend themselves to practice within a workshop setting. The final three steps of the process (E: Experimentation and Redesign; F: Implementation; and G: Evaluation) do not. The format of the following materials changes somewhat to reflect the shift in emphasis.
OVERVIEW
STEP E: EXPERIMENTING AND REDESIGNING
This phase of the Action Research and Planning process is an opportunity to try out your new approaches and strategies in a relatively safe environment. It is the time to: work the bugs out of the system; check for commitment and acceptance; get feedback on what you are doing and how you can do it better; and make adjustments in preparation for a full-blown implementation.
For example, in developing training programs to meet the needs of new staff members, it is often helpful to design the program and test it with a small group of participants. There should be mutual agreement to give and receive feedback to strengthen the training for use with larger audiences.
Another opportunity for experimentation might be the initiation of a new approach to low income housing. Rather than committing totally to the new approach, it would make sense to use it on a trial basis, with a commitment from all concerned to develop good data on the experiment for further decision-making.
Experimentation is an opportunity to:
(a) Assess the desirability of the proposed change;
(b) Correct unforeseen problems before the change becomes fully operational;
(c) Give participants in the effort an opportunity to deal with any unexpected consequences; and,
(d) Train those who will be involved in later implementation. Not only does experimentation provide an opportunity to train organization or community people in important skills and knowledge for later use, it also builds understanding and commitment through their early involvement.
During the experimentation stage, it is important to collect good data about what is happening so there can be a thorough analysis of the results. This analysis addresses such questions as:
· Are we doing what we said we would do as well as how we said we would do it?
· What new information or resources do we need?
· What was the overall reaction to the change? How did we feel personally about the experimentation?
· Should a total implementation be planned; should the effort be scrapped; or is there a new design that would best serve our needs?
· What can we do to make the proposed effort more effective?
This is a time when everyone involved should be consulted for their insights and assistance. If it is a field test of a training program, the trainees need to be heard from - not just the staff. If it is a new approach to working with low income groups involved in a new housing project, those groups need to be brought into the analysis process.
Finally, this stage of AR may involve redesign based upon the results of the analysis. Much of what has gone on previously should be helpful. For example, the results of the earlier force field analysis and development of options can be a good resource in any redesign that might be necessary.
ANALYSING THE EXPERIMENT
Following is a list of questions to ask about the experimental stage that will help in deciding whether a redesign is in order. (A brief sketch of one way to record the answers follows the list.)
(a) Did you meet the objectives you set for yourself? If not, why not?
(b) What went well in the experimentation that should be continued in any final effort?
(c) What did not go well that should be discarded?
(d) What kinds of changes should be considered to strengthen upcoming implementation?
(e) Who was not involved that should have been? What can be done to get them involved in the implementation phase?
(f) What resources were lacking to make the experiment as successful as originally expected? How can they be acquired to support total implementation efforts?
(g) Was the timing right for the experimentation? If not, why not?
(h) Given the results of the experimentation, does it make sense to go ahead with the implementation phase?
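One hypothetical way to keep such a review honest is to write down each answer before making the go/no-go decision. The Python sketch below is purely illustrative and not part of the guide; the field names, the sample answers, and the simple decision rule (proceed only if the objectives were met and the timing was right) are all invented assumptions:

```python
# A minimal record of an experiment review, keyed to the checklist above.
# All field names, sample answers, and the decision rule are invented
# for illustration; a real review would weigh the answers with judgement.
review = {
    "objectives_met":    True,                              # question (a)
    "timing_right":      True,                              # question (g)
    "missing_people":    ["community representatives"],     # question (e)
    "lacking_resources": ["follow-up training materials"],  # question (f)
}

# Invented rule of thumb: proceed only if the objectives were met
# and the timing was judged right; otherwise redesign first.
proceed = review["objectives_met"] and review["timing_right"]
print("Proceed to implementation" if proceed else "Redesign before implementing")
for person in review["missing_people"]:
    print(f"  involve in the implementation phase: {person}")
for resource in review["lacking_resources"]:
    print(f"  acquire before implementing: {resource}")
```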
Half the difficulties of man lie in his desire to answer every question with yes or no. Yes or no may neither of them be the answer. Each side may have in it some yes and some no.
STEP F: IMPLEMENTATION
Implementation is, in theory, the action phase of the action research and planning process. To implement means to carry out, accomplish, fulfill, produce, complete. But some things must be in place prior to implementation - a policy, a program, resources and, above all, decisions.
In reality, implementation relies upon all of the activities we have considered up to this time: building a problem-solving relationship; identifying problems and opportunities; the analysis stage; and, finally, planning a course of action.
If your efforts up to this point have been successful, you should be in a good position to begin implementation. This is not to say there will not be delays, problems and stumbling blocks put in your way. If you remember what was said very early about the recycling nature of action research, you can expect a little backtracking to previous steps.
Some decisions to be made along the way to implementation include:
· Do we need to adjust the mix of resources?
· Will it take more resources to do what we said we wanted to do?
· Should we continue to use our current plan, modify it, or develop a new one?
· When we accomplish our objectives, will we know enough to get out of business - or create new objectives?
If the first four steps in the Action Research and Planning process have been carried out effectively, implementation will be relatively easy.
STEP G: EVALUATING FOR RESULTS
Evaluation is an ongoing process - not something you do at the end of a project or activity. Nevertheless, a final evaluation (summing up) is important and oftentimes a requirement of funding agencies and higher authorities.
Action research and planning as a process has evaluation built into it every step of the way. In many ways, it is a guidance system that keeps us on track and moving from one step in the process to another with reassurance.
A small workbook called the Hip Pocket Guide to Planning and Evaluation has a good set of evaluation criteria. They include:
· Adequacy - Is your plan of action big enough and bold enough to accomplish your objective? Is the objective big enough given the size of the problem? Do you have sufficient resources?
· Effectiveness - Was the plan of action carried out, and has it resulted in the objective being met? To what extent has the objective been met and the problem reduced?
· Efficiency - Could the resources be combined differently or different resources used so that the same activities could be produced at lower costs? How costly is the plan of action compared to the benefits obtained? Would another plan of action accomplish the same objective at lower cost?
· Side Effects - What are the good and bad side effects of the actions you implemented? What unanticipated side effects occurred?
These four evaluation criteria are most effective when they are applied to:
· Resources - people, funds, materials, equipment, time, technology
· Activities - that which is done to carry out goals and objectives (what is done)
· Strategies - the how of the what
· Objectives - a planned and expected result
When the criteria stated earlier are applied to each of these ingredients, they provide an excellent management guidance system for determining whether your efforts are on track and moving toward the objectives that have been established.
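The criteria-by-ingredient idea can be pictured as a simple review grid, with one cell per pairing. The Python sketch below is only an illustration of that pairing; the criteria and ingredients come from the lists above, while the 1-5 scoring scale and the helper names are invented assumptions:

```python
# A review grid pairing each evaluation criterion with each ingredient.
# The criteria and ingredients come from the guide; the 1-5 scale and
# the helper names are invented for illustration.
CRITERIA = ["Adequacy", "Effectiveness", "Efficiency", "Side Effects"]
INGREDIENTS = ["Resources", "Activities", "Strategies", "Objectives"]

def blank_grid():
    """One empty score slot per criterion/ingredient pair."""
    return {(c, i): None for c in CRITERIA for i in INGREDIENTS}

def report(grid):
    """Print every pairing, flagging those not yet reviewed."""
    for c in CRITERIA:
        for i in INGREDIENTS:
            score = grid[(c, i)]
            status = f"{score}/5" if score is not None else "not yet reviewed"
            print(f"{c:>12} x {i:<10}: {status}")

grid = blank_grid()
grid[("Effectiveness", "Objectives")] = 3  # e.g., objective only partly met
report(grid)
```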
Two other issues important to evaluation are:
· Measures: How are you going to measure what it is you decide to do?
· Sources of Information: What different sources do you have available, and how will you tap them?
MEASURES
A measure is the amount of something that exists at a certain time.
The most difficult part of evaluation may be determining what kinds of measures to use for each of the criteria and inputs to problem solving that have just been discussed.
Some things are easy to measure (number of houses built, cost per house), while others are much more difficult (attitudes of building officials toward builders, effectiveness of a community awareness program).
Two important principles to remember about measuring are:
· Design your measurement tools after you know what it is you want to measure. (Just because something is countable doesn't mean you should count it.)
· Be stingy about what you measure - measure only those things that give you the information you need.
SOURCES OF INFORMATION
Measurement data can be abundant, so pick and choose with care.
Example of Measure: The number of Building Liaison Officers who have improved their knowledge of building materials.
Data are the numbers you get when you take the measure.
Example of One Piece of Data: Fifty Building Liaison Officers have improved their knowledge of building materials.
If the objective was to improve the knowledge of 100 Building Liaison Officers within a specific period of time to a certain level, then our evaluation tells us we were only 50% effective.
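The arithmetic behind that judgement is simple enough to spell out. Below is a minimal sketch, assuming the measure is a plain count against a target; the figures are taken from the Building Liaison Officer example above, and the function name is invented:

```python
def effectiveness(actual: int, target: int) -> float:
    """Percentage of the stated objective actually achieved."""
    if target <= 0:
        raise ValueError("target must be a positive count")
    return 100.0 * actual / target

# Figures from the example above: the objective was 100 officers,
# and the data show that 50 improved their knowledge.
print(f"Effectiveness: {effectiveness(50, 100):.0f}%")  # -> Effectiveness: 50%
```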
Data come in many forms and can be obtained through:
· Interviews
· Questionnaires
· Observation
· Ratings (by peers, staff, experts, the community)
· Tests
· Records and Reports
· Statistics
· Documents
· Examination
For the manager and community worker, evaluation is an on-going process - not a one-time end event. It is the guidance system that keeps resources, activities, strategies, and objectives on track.
Evaluation is a time of accounting for specific actions and their consequences; for reviewing plans and making improvements; and for planting the seeds of future challenges.
A SLIGHTLY DIFFERENT PERSPECTIVE
Finally, some ideas are included about evaluation from The Universal Traveler.1 It gives a slightly different perspective about evaluation and additional ideas about carrying out the process.
1 Dan Koberg and Jim Bagnall, The Universal Traveler (Los Altos: William Kaufmann, Inc., 1974), pp. 80-84.
Three Phases of Evaluation: In a most systematic view, an evaluation is a comparison of objectives with results. It initially asks, "What did you hope for and plan to happen?" and then measures those dreams against what actually did happen. From the measurement, the problem solver can discover the quantity and quality of progress and make plans for improvement in the future.
Example: Guide to Evaluation
· Statement of Goals: Objectives described in measurable terms.
· Achievement and Measurement: How far did I go? (the quantitative perspective) How well did I do? (the qualitative dimension) Were there unplanned contingencies? (e.g., unforeseen benefits outside the objectives; unforeseen problems outside my intention; additional objectives discovered late in the process)
· Comparison of Goals with Achievement: Point by point comparison
· Plans for the Future: Review and reinforcement of behavior changes
Progress Chart: If you have tried making a chart relating your defined objectives (tasks) to your available time, you have already found a simple way to keep a running evaluation. Kept up to date, the chart lets you see, at a glance, how far along you are toward meeting your objectives. This method usually works best for quantitative measures, but quality can be added in the form of side notes or comments, as in a journal.
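As a sketch, such a chart can be little more than a table of tasks against time, with notes carrying the qualitative side. The Python below is a hypothetical illustration, not from The Universal Traveler; every task name, date, and note is invented:

```python
from datetime import date

# A running progress chart: objectives (tasks) against available time,
# with journal-style notes for the qualitative dimension.
# All task names, dates, and notes are invented for illustration.
tasks = [
    {"task": "Design pilot training", "due": date(1991, 3, 1),
     "done": True,  "note": "trainee feedback very positive"},
    {"task": "Run field test",        "due": date(1991, 5, 1),
     "done": True,  "note": "attendance lower than hoped"},
    {"task": "Full implementation",   "due": date(1991, 9, 1),
     "done": False, "note": ""},
]

completed = sum(t["done"] for t in tasks)
print(f"At a glance: {completed}/{len(tasks)} objectives met")
for t in tasks:
    mark = "x" if t["done"] else " "
    print(f"[{mark}] {t['task']} (due {t['due']})  {t['note']}")
```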
Who Else Has an Opinion?: Being objective about our own achievements is tough enough without having to depend solely on our own frame of reference. Others, who view the same world with different perceptions, can often open our eyes to a truth or reality that was there all the time but unseen. For outside opinions to be more effective and less hurtful, they must be understood - not just dropped on you without follow-up explanations.
Step Outside for a Minute: Making plans and setting out to achieve them is positive behavior. The relevance, social value and possible negative consequences of the success or failure of such behavior are another matter. If our self-image is good, we know that our intentions are good and that our behavior is the result of good intentions. When those good intentions are challenged - as in an evaluation - we become self-protective and often offensively defensive.
Evaluation calls for stepping outside of our self-image - at least for a moment - to look objectively at what transpired. The purpose is positive: to make plans for improvement. As with any problem, it requires Acceptance. When the evaluator realizes that it is the attainment, not the self, that is being studied, measurement can proceed.