Malaria Diagnosis: New Perspectives (WHO - OMS, 2000, 57 p.)
5. DIAGNOSTIC PRACTICES
High malaria transmission occurs mostly in Africa south of the Sahara, where P. falciparum predominates and causes an estimated 90% of the deaths attributable to malaria worldwide. High transmission also occurs in other areas of the world (e.g. Papua New Guinea), however, and not all endemic areas in Africa south of the Sahara are characterized by high rates of transmission (see section 5.3). In 1999, it was estimated that there were some 261 million cases of malaria in areas with high transmission (87% of the global total of 300 million) and 870 000 deaths (87% of the global total of >1 million).
In areas with high transmission, malaria occurs frequently and predominantly in young children, communities are familiar with the disease, and access to health care facilities is often difficult. Thus, the great majority of cases are self-treated based on clinical signs and symptoms alone (38). Such practices occur outside the established health care system and patients are often treated - if indeed they are treated - with non-recommended and inadequate regimens.
Most health care providers in these areas also rely on clinical diagnosis, using as their main criterion the presence of fever or a history of fever. While such an approach might at first appear undesirable, it is justified because in these situations the demonstration of parasites by microscopy (or by other means such as RDTs) would be of limited diagnostic help. The majority of the population - including asymptomatic individuals - have parasitaemia most of the time. Thus the detection of malaria parasites does not necessarily mean that they are responsible for the patient's illness, since they may reflect only a coincidental infection (39). In addition to being only marginally useful, laboratory diagnosis is often not possible owing to severe limitations on resources, particularly at the peripheral level of the health care system. Treatment based on clinical diagnosis alone is therefore a justifiable approach to the management of most cases of malaria in areas with high rates of transmission.
Algorithms have been developed to attempt to improve the clinical diagnosis of malaria, and especially to distinguish it from other febrile illnesses. Such algorithms have met with only limited success, owing mainly to the high degree of overlap between the various febrile illnesses. Clinical diagnosis, as currently practised, uses a broad definition of malaria and will result in high sensitivity at the cost of low specificity. Specificity falls further when the prevalence of malaria decreases, such as during seasonal reductions in transmission. However, high sensitivity is given precedence because malaria is a potentially fatal though treatable illness (40). This position is reflected in the algorithms developed for the integrated management of childhood illness: in areas of high malaria risk any child with fever or a recent history of fever will be treated with anti-malarial drugs even if other causes of fever are present (41).
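The dependence of diagnostic usefulness on prevalence can be illustrated with a small worked example of positive predictive value (PPV). The sensitivity, specificity, and prevalence figures below are illustrative assumptions, not values reported in this document:

```python
# Illustrative sketch: how the positive predictive value (PPV) of a
# broad clinical malaria definition falls as prevalence falls.
# All numeric inputs are assumptions chosen for illustration only.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A broad clinical definition: high sensitivity, low specificity (assumed).
sens, spec = 0.95, 0.40

for prev in (0.50, 0.10):
    print(f"prevalence {prev:.0%}: PPV = {ppv(sens, spec, prev):.2f}")
# With these assumed figures, PPV drops from about 0.61 at 50% prevalence
# to about 0.15 at 10% prevalence.
```

This is the arithmetic behind both the seasonal loss of specificity noted above and the later observation that confirmatory testing becomes more attractive where transmission, and hence prevalence, is lower.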
Treatment based on clinical diagnosis alone does result in unnecessary and irrational drug use, though this might be acceptable in the case of drugs such as chloroquine or sulfadoxine-pyrimethamine, which are cheap and safe with few adverse reactions. It has been argued that confirming the clinical diagnosis with microscopy or RDTs might, by reducing drug use, decrease the potential selection of drug-resistant parasites. This remains to be proven, however, because no data yet exist that quantitatively correlate patterns of drug use with the emergence of resistance. In addition, most drug pressure occurs through self-medication in the community, a practice that is difficult to regulate effectively. Thus, whether confirmatory diagnostic tests can decrease the emergence of drug resistance is an issue that needs to be investigated.
In most areas with high rates of transmission, treatment based on clinical diagnosis alone is incorporated in malaria treatment guidelines, drug resistance is still manageable, and chloroquine and sulfadoxine-pyrimethamine remain the drugs of choice. In such situations, there is no immediate need for large-scale use of confirmatory diagnosis. In some circumstances, however, the clinical diagnosis of malaria should be confirmed by microscopy or alternative tests. These circumstances include those set out below.
· In cases of suspected severe malaria, laboratory confirmation can guide initial therapy. In facilities at the central and district levels, microscopy should be the confirmatory diagnostic test of choice. In peripheral locations where microscopy is not available, RDTs might prove particularly useful since they can be performed by health workers with limited training and skills. Compared to blood smears, RDTs provide more timely results for disease management. Theoretically, by measuring circulating antigen, RDTs may also reflect parasite load more accurately. Unlike microscopy, however, currently available RDTs do not yield quantitative results and thus fail to provide a valuable element for prognosis and patient follow-up.
· Where persistence of parasites must be proved to confirm treatment failure (42), microscopy might be preferable because parasite quantification is used to define one type of early treatment failure. If microscopy is not available and RDTs are used, those that detect persistent antigenaemia in spite of parasite clearance should be avoided.
· Private-sector health providers, especially those working in areas of lower transmission, such as cities, might justifiably use RDTs since the lower prevalence of malaria in these areas reduces the predictive value of clinical diagnosis and increases the correlation between parasitaemia and disease. RDTs may be more acceptable than microscopy to these practitioners as well as to their clients, who may be willing to pay for the convenience of on-the-spot diagnosis and treatment.
· Multidrug resistance can reach a level at which drug treatment based on clinical diagnosis alone ceases to be a rational policy. Syndromic management can be justified only as long as the anti-malarial drug used is safe, cheap and effective. The two main drugs used for first- or second-line treatment in Africa south of the Sahara, chloroquine and sulfadoxine-pyrimethamine, fit these criteria. Emergence of resistance to both chloroquine and sulfadoxine-pyrimethamine would dictate the use of alternative drugs (such as quinine, mefloquine and artemisinin and its derivatives) that are substantially more expensive and less safe. Under such circumstances, increased diagnostic specificity is desirable and could be achieved through laboratory testing. There are arguments for increasing the availability of microscopy where it is cost-effective (i.e. when used for the diagnosis of other diseases as well as malaria) but there are locations where microscopy is unreliable and difficult to sustain. In such circumstances, RDTs might justifiably be used if the overall cost (including the costs to the patients and to the health care system) of their use proves lower than that of using a more expensive and less safe drug, and if an impact of test results on drug use can be demonstrated.
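The cost condition in the last point above can be sketched as a simple comparison: presumptively treating every clinical suspect with an expensive alternative drug, versus testing every suspect with an RDT and treating only those who test positive. All unit costs, caseloads, and the test-positivity rate below are hypothetical assumptions, not figures from this document:

```python
# Illustrative cost-comparison sketch. Every number here is an assumption
# chosen for illustration; real decisions would also need to weigh patient
# costs, health-system costs, and whether test results actually change
# prescribing, as the text notes.

def presumptive_cost(n_patients: int, drug_cost: float) -> float:
    """Every clinical suspect receives the expensive drug."""
    return n_patients * drug_cost

def rdt_guided_cost(n_patients: int, rdt_unit_cost: float,
                    positivity_rate: float, drug_cost: float) -> float:
    """Every suspect is tested; only RDT-positives receive the drug."""
    return (n_patients * rdt_unit_cost
            + n_patients * positivity_rate * drug_cost)

n = 1000          # febrile patients per month (assumed)
drug = 2.50       # cost per course of an alternative drug (assumed)
rdt = 0.60        # cost per RDT (assumed)
positivity = 0.30 # fraction of suspects who are RDT-positive (assumed)

print(f"presumptive treatment: {presumptive_cost(n, drug):.2f}")
print(f"RDT-guided treatment:  {rdt_guided_cost(n, rdt, positivity, drug):.2f}")
# With these assumed figures, RDT-guided treatment (1350.00) costs less
# than presumptive treatment (2500.00).
```

The comparison flips whenever the RDT's unit cost exceeds the drug cost avoided per patient tested, which is why the text makes RDT use conditional on overall cost and on a demonstrated impact of test results on drug use.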