Combining Robustness and Flexibility
in Learning Drifting Concepts
Department of Medical Cybernetics and Artificial Intelligence,
University of Vienna, and
Austrian Research Institute for Artificial Intelligence,
Schottengasse 3, A-1010 Vienna, Austria
The paper deals with incremental concept learning from classified examples. In many real-world applications, the target concepts of interest may change over time, and incremental learners should be able to track such changes and adapt to them. The problem is known in the literature as concept drift. The paper presents a new method for learning in such changing environments. In particular, it addresses the problem of learning drifting concepts from noisy data. We present an algorithm that is both robust against noise and quick to recognize and adapt to changes in the target concepts. The method has been implemented in a system named FLORA4, the latest member of a family of learning algorithms. Experiments demonstrate significant improvement over previous results, in both noise-free and noisy situations.
In many real-world domains, the context on which some concepts of interest depend may change, resulting in more or less abrupt and radical changes in the definition of the target concept. Typical examples are weather prediction rules, which may vary radically with the change of seasons. As another example, consider measuring devices or sensors that may alter their characteristics over longer periods of time, resulting in a perceived change of the world and the necessity to modify prediction rules that rely on these measurements. Incremental learning algorithms operating in such environments should be capable of tracking and adapting to such changes. The problem has been termed concept drift and has been recognized in the machine learning literature for quite some time (see, e.g., Schlimmer and Granger, 1986). Recently, the notions of context-dependence and concept drift have received renewed interest from a number of researchers (e.g., Kilander and Jansson, 1993; Salganicoff, 1993a; Turney, 1993; Widmer and Kubat, 1992, 1993).
A difficult problem in incremental learning is distinguishing between `real' concept drift and slight irregularities that are merely due to noise in the training data. Methods designed to react quickly to the first signs of concept drift may be misled into over-reacting to noise, erroneously interpreting it as drift; this leads to unstable behaviour and low predictive accuracy in noisy environments. On the other hand, an incremental learner that is conservative enough to be robust against noise will tend to be slow in recognizing and adapting to genuine changes in the target concept.
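To make this trade-off concrete, the following is a minimal sketch in Python of a simple fixed-window learner. It is not the FLORA4 algorithm; the class name WindowedLearner, the per-class-centroid hypothesis, and all parameter values are illustrative assumptions. Its single parameter, the window size, directly embodies the tension just described: a small window forgets quickly and so tracks drift, while a large window averages out noise but adapts slowly.

import random
from collections import deque
from math import dist

class WindowedLearner:
    """Hypothetical sliding-window learner (illustrative, not FLORA4).

    Only the most recent `window_size` classified examples are kept,
    and the hypothesis (here, simply per-class centroids) is formed
    from them. The window size controls the trade-off between
    robustness against noise and flexibility in tracking drift.
    """

    def __init__(self, window_size=50):
        self.window = deque(maxlen=window_size)  # (features, label) pairs

    def learn(self, x, label):
        # Incrementally store one classified example; examples older
        # than the window are forgotten automatically by the deque.
        self.window.append((tuple(x), label))

    def predict(self, x):
        # Classify by the nearest per-class centroid of the windowed data.
        by_class = {}
        for xi, label in self.window:
            by_class.setdefault(label, []).append(xi)
        if not by_class:
            return None  # no examples seen yet, hence no hypothesis
        centroids = {
            label: tuple(sum(col) / len(col) for col in zip(*points))
            for label, points in by_class.items()
        }
        return min(centroids, key=lambda label: dist(x, centroids[label]))

# A drifting 1-D concept: the class boundary jumps abruptly at t = 1000.
learner = WindowedLearner(window_size=40)
for t in range(2000):
    x = [random.uniform(0.0, 2.0)]
    boundary = 0.5 if t < 1000 else 1.5      # abrupt concept drift
    label = int(x[0] > boundary)
    if random.random() < 0.05:               # 5% class noise
        label = 1 - label
    learner.learn(x, label)

Running the sketch with different window sizes illustrates the dilemma: enlarging the window makes the learner more resistant to the 5% class noise, but proportionally slower to recover after the boundary jumps. This is precisely the tension the paper sets out to resolve.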