Specific nutrient deficiencies continue to be among the world's major public health problems, especially in underdeveloped countries. In terms of the number of individuals affected and geographical distribution, vitamin A and iron deficiencies are among the most prominent. One serious drawback for the design and implementation of nutrition intervention programmes is the inadequacy of dietary information, which is almost always plagued by inaccuracy. Thus, it is not uncommon to find reports describing high iron intakes in areas where iron-deficiency anaemia occurs with undesirably high prevalence [2, 9, 11], or extremely low vitamin A intakes that are not accompanied by a correspondingly high prevalence of eye lesions [2, 6, 9, 11]. Similar problems arise when trying to establish correlations between the intake of other nutrients and related clinical or biochemical indicators [3, 12, 18, 20, 21].
The accuracy of nutrient information depends on the methods used to collect and handle the data. Some of these methods have been examined [3, 8, 14, 20, 21] but, in general, attempts to reconcile dietary information with biochemical or clinical findings from nutrition surveys are still needed. The present study was designed to evaluate the relative contributions to the inaccuracy of dietary information of two factors: regional differences in the nutrient composition of foods, and differences between nutrient values obtained by calculation and those obtained by direct analysis of foods as eaten.