The Courier N° 148 - Nov - Dec 1994 - Dossier: Education - Country Reports: Saint Lucia - St Vincent and The Grenadines (EC Courier, 1994, 104 p.)
by Christian Platteau
The annual reports published by the UNDP (United Nations Development Programme), the World Bank and UNESCO provide statistical indicators which form one of the 'raw materials' of economic, sociological and political analyses, and which today have a considerable impact on the international scene.
The growing importance of these indicators raises the question of how the various agents select and use them, whether in fact they reflect the phenomena analysed and how accurately they do so. Essentially economic in nature at first, they now cover broad aspects of life which come into play with development, such as state of health, natural and human resources, population, military expenditure, balance of the national accounts, communications, employment and unemployment.
Since the purpose of these indicators is to measure development, the concept of development itself must first be defined and its component concepts made operational. The UNDP notes in its 1990 report that the first researchers into quantification of the economy placed the emphasis on the human factor, an emphasis that was somewhat abandoned in later years but has recently reappeared. Although development has become a constant preoccupation of politicians, economists and other social scientists, views diverge widely on its nature and on how to measure and achieve it. Whereas the pioneers of the measurement of production and national income stressed social concerns, after the Second World War economic growth became the central issue. Growth in capital was considered the means of achieving development, and the per capita GNP growth rate became the most important yardstick.
The international organisations dealing with development aid and finance have drawn up classifications of the Third World. The examples given by the UNDP and the World Bank deserve a certain amount of examination, for they show the difficulty in establishing a consensus.
The table classifies 41 developing countries in Africa by two different criteria: gross national product per capita, an exclusively economic criterion used by the World Bank, and the Human Development Index (HDI), a composite indicator devised by the UNDP which takes into account life expectancy, education and income. This dual classification shows the difficulty inherent in any such exercise and how hard it is to embark on a rigorous classification of the countries of the Third World. There is little agreement between the two: the rankings of some countries differ considerably from one classification to the other. Column A of the table shows the differences between the two classifications. A positive figure indicates that the HDI ranking is higher than the GNP ranking; a negative figure indicates the reverse. By grouping together the data from this table, we observe that:
- 15 countries show a difference of between 0 and 5 positions
- 15 countries show a difference of between 6 and 10 positions
- 5 countries show a difference of between 11 and 15 positions
- 3 countries show a difference of between 16 and 20 positions
- 2 countries show a difference of more than 20 positions.
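The calculation behind column A can be sketched in a few lines. The country names and ranks below are purely hypothetical, invented for illustration; only the method (GNP rank minus HDI rank, positive meaning a better HDI ranking) follows the article:

```python
# Hypothetical ranks for four countries (1 = highest-ranked of the group).
# The real table covers 41 sub-Saharan countries; these figures are invented.
gnp_rank = {"Country W": 1, "Country X": 2, "Country Y": 3, "Country Z": 4}
hdi_rank = {"Country W": 3, "Country X": 1, "Country Y": 4, "Country Z": 2}

# Difference = GNP rank minus HDI rank, so a positive figure indicates a
# higher (better) HDI ranking than GNP ranking, as in column A of the table.
difference = {c: gnp_rank[c] - hdi_rank[c] for c in gnp_rank}

for country, d in sorted(difference.items()):
    print(country, d)
```

Grouping the absolute values of these differences into bands (0-5, 6-10, and so on) yields the summary given above.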
In the light of these differences, we refer to Cazorla and Drai (1991), who wrote that the classifications by international organisations result from a distortion between an abstract conception of underdevelopment (stressing its unity) and the divergent situations found in reality. Each country, they say, has its own specific geographical, historical, economic and social characteristics, and it is increasingly difficult today to consider the Third World as a single whole whose distinguishing feature is that it differs from the group of developed countries. But there is more to calling the concept of a Third World into question than just finding differences in conditions. The problems the international organisations have in arriving at a strict definition of the Third World show how difficult it is to find satisfactory criteria by which to define underdevelopment (and development). Which common criteria, Cazorla and Drai ask, should be used for countries with such contrasting levels of development?
In short, the underlying content conveyed by the indicators should be stressed: their function is to express theoretical ideas. Their practical aspect allows measurement and quantification; their theoretical aspect raises the question of their accuracy, their relevance in expressing abstract concepts. The richness and the difficulty of these indicators (per capita GNP and the HDI) lie in making a concept as complex as development tangible enough to be measurable. Their weakness is to reduce development to one or two dimensions and to treat it as a simple entity, possessed to varying degrees by the different nations.
What about education?
Literature on the education sector shows that the controversy over the risks, or the need, of using performance indicators for the education systems of different countries mirrors the long-standing controversy over development indicators. A. Cambier, J.M. De Ketele and Ch. De Pover (1994), authors of current research commissioned by the Administration Générale de la Coopération au Développement (AGCD - Belgium), recognise that indicators often provide only partial information and most of the time mask major disparities between genders, regions, environments (rural/urban) and so on.
However, these disparities are an important component in diagnosing the state of an education system. To take a specific case as an example, one might conclude that, in terms of external effectiveness, raising the national average primary enrolment ratio requires narrowing the gap between the enrolment ratios of boys and girls and, in consequence, specifically increasing the school attendance of girls.
The international tables rarely show the disparity behind the traditional indicators, disregarding the fact that any distribution is described by two kinds of parameter: a measure of central tendency (mean, mode or median) and a measure of dispersion or shape (for example, the variance or the asymmetry of the curve). Too often, the traditional indicators summarise, and thereby conceal, the disparities they cover.
The pupil:teacher ratio, for example, as the work of Jarousse and Mingat (1993) shows, reveals that data collected in the context of experimental work may cover very different realities. An average of 68 pupils per class in the 2nd year of primary school in Togo conceals extreme class sizes of 18 and 116 pupils. The average varies from 77 in towns to 85 in suburban areas, 57 in agglomerated rural environments and 43 in dispersed rural environments. It would be wrong to believe that class size is a variable entirely open to manipulation: it is partly dictated by the number of children attending school within the school's catchment area. The geographical environment thus interacts with class size, a shortage of children in dispersed rural areas keeping classes small even where increasing them would be a wise decision. Although information on the dispersal of pupils may make general tables more unwieldy, it is helpful in intervention projects: it indicates the scale of the deviation from the average and steers data collection towards more down-to-earth levels.
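The point about an average concealing its dispersion can be made concrete with a small sketch. The sample below is invented, chosen only so that it contains the extreme and environment-level figures quoted from Jarousse and Mingat; the real Togolese data set is of course much larger:

```python
import statistics

# Hypothetical sample of 2nd-year primary class sizes, built around the
# figures quoted in the text (extremes of 18 and 116; environment averages
# of 43, 57, 77 and 85). Invented for illustration only.
class_sizes = [18, 43, 57, 68, 77, 85, 116]

# The mean alone is the "traditional indicator"; the range and standard
# deviation are the dispersion information that the tables usually omit.
mean = statistics.mean(class_sizes)
size_range = max(class_sizes) - min(class_sizes)
stdev = statistics.pstdev(class_sizes)

print(f"mean={mean:.1f}, min={min(class_sizes)}, "
      f"max={max(class_sizes)}, range={size_range}, stdev={stdev:.1f}")
```

Two school systems could report the same mean while one of them ranges from 18 to 116 pupils per class; only the second parameter reveals the difference.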
The basic problem consists therefore in finding the right balance between the quantity of information needed to define and monitor indicators which are sufficiently 'refined' to be able to reflect the disparities, the quality of this information, the cost of collecting it and the capacity to analyse all these data.
The definition and choice of relevant indicators is no easy matter and consensus on this subject is seldom reached between education experts.
If we compare the indicators of the three international organisations, the UNDP, the World Bank and UNESCO, we find that the UNDP, which started by using 24 and then 37 indicators for the education sector, was employing 42 by 1992. However, only five of these are to be found in the reports of the past three years. In 1992, the World Bank had been using 14 indicators for the previous three years in its assessment of the education sector in the developing countries. In the same year, UNESCO used 45 indicators which had been identical for the previous three years. However, within this vast range, the only indicator common to the three institutions for the three years concerned (1990 to 1992) is the pupil:teacher ratio in primary schools.
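Finding the indicators shared by all three institutions is, in effect, a set intersection. The indicator names below are hypothetical stand-ins (apart from the pupil:teacher ratio, which the article names); only the method is illustrated:

```python
# Hypothetical (much abridged) indicator sets for each institution.
# Only "pupil:teacher ratio (primary)" comes from the article itself.
undp = {"pupil:teacher ratio (primary)", "adult literacy rate",
        "mean years of schooling"}
world_bank = {"pupil:teacher ratio (primary)", "primary enrolment ratio"}
unesco = {"pupil:teacher ratio (primary)", "adult literacy rate",
          "public expenditure on education"}

# The indicators usable for a three-way comparison are the intersection.
common = undp & world_bank & unesco
print(common)
```

With the real 1990-1992 lists (42, 14 and 45 indicators respectively), this intersection shrinks to a single element, which is precisely the article's point about how little the institutions' measurement frameworks overlap.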
[Table: Comparison of the classifications by the World Bank (by GNP) and the UNDP (by HDI) of the countries of sub-Saharan Africa in 1992]
The same indicator is not always defined in the same way from one source of information to another. In the case of an indicator such as the pupil:teacher ratio in primary schools, the three institutions issue a warning to the reader: other people involved in the education system, such as librarians and supervisors, were apparently taken into account in calculating this indicator.
The question also arises of the usefulness of indicators in a context of rapid change, when the time between the collection of data and their analysis and conversion into indicators frequently runs to several years (the average is often three years).
Many authors stress the importance of monitoring reforms of the educational system effectively. This must necessarily be based on adequate and reliable information comprising, inter alia, relevant indicators from which a diagnosis can be made. Despite the dangers of comparisons, drawing them between countries, but also between regions of the same country, can be helpful. Besides this spatial dimension, comparisons are also essential over time, i.e. between two successive moments of a development process.
The traditional indicators have the merit of existing and of summarising information collected over large areas. They condense the information to make it accessible at a glance to the majority of operators and users. This state of affairs is the result of constantly growing, universal pressure to measure education (Orivel, 1993). The indicators attempt to reconcile two requirements: to be concrete and quantifiable on the one hand and to represent important, decisive aspects of the educational process on the other (Scheerens, 1991). Although it is clear that at the first level there are serious shortcomings, and all the publications recognise this, it is still necessary to question the capacity of the indicators to be 'good', i.e. to provide information which will be decisive for intervention.
Consideration must be given to three factors:
- How general the indicators are
The traditional indicators appear to be very general measures, since they aggregate, among other things, geographical, occupational, cultural, gender, socio-economic and philosophical differences. Yet these same differences turn out to be decisive in diagnostic evaluation. The traditional indicators, which swamp and erase them, therefore stem from too general a level of analysis. It is worth asking whether data collected over wide areas and summarised in a single figure constitute an appropriate level of generalisation where an intervention is involved.
- Knowledge and use of the individual characteristics of the contexts
Without resorting to a case-by-case study, it should be recognised that effective intervention in education presupposes taking account of the individual characteristics of the contexts in which it occurs. The traditional indicators used by the international organisations give one image of reality. There are others which may make the general tables more unwieldy, but which prove decisive when it is necessary to act on the ground. Focusing the decision to intervene on a very general indicator, usually chosen and defined by reference to a standard external to the environment in which intervention is to take place, has the disadvantage of concentrating observation on the variation in that indicator, to the detriment of other, sometimes fundamental, internal fluctuations.
- Lack of information on the process
A final consideration is the interpretation of the traditional indicators. It is not independent of the first consideration, but relates to the meaning of the indices. These are numerical indicators: they reflect a state of affairs, but contribute no information on the processes behind it. They establish and record situations, leaving the reader to interpret the figures and to imagine the underlying processes.