INFORMATION THEORY AND HEBBIAN LEARNING
March 25, 1993
This paper investigates four models of neural nets which use unsupervised Hebbian learning to optimise information transfer through the system. Two major approaches to this problem have emerged: the use of weight decay and the use of anti-Hebbian learning. The four models compared are as follows: the first, due to Linsker, uses weight decay; the second, due to Oja, also uses weight decay, though of a different form; the third, due to Földiák, uses anti-Hebbian learning; while the last, due to Plumbley, uses a mixture of weight decay and anti-Hebbian learning.
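The contrast between plain Hebbian learning and the weight-decay approach can be illustrated with Oja's rule, which adds a multiplicative decay term to the Hebbian update. The following is a minimal sketch, not any author's exact formulation; the learning rate, number of samples, and input covariance are illustrative assumptions.

```python
import numpy as np

# Sketch of Oja's rule: a Hebbian update with a weight-decay term.
# All parameter values here are illustrative assumptions.
rng = np.random.default_rng(1)
eta = 0.01                        # learning rate
w = rng.normal(size=2)            # initial weight vector

# Correlated 2-D input: variance is largest along the (1, 1) direction.
cov = np.array([[1.0, 0.8],
                [0.8, 1.0]])
X = rng.multivariate_normal(np.zeros(2), cov, size=5000)

for x in X:
    y = w @ x                     # linear unit: y = w . x
    w += eta * y * (x - y * w)    # Hebbian term x*y minus decay term y^2 * w

# The decay term keeps the weight vector bounded (its length tends to 1),
# and w aligns with the principal eigenvector of the input covariance.
print(np.linalg.norm(w))
```

The decay term `y * y * w` is what prevents the unbounded weight growth that plain Hebbian learning suffers from, while the Hebbian term drives the unit toward the direction of maximum input variance.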
Artificial neural nets are (usually) simulations which attempt to represent information by changing the weights of connections between units ("neurons"). This process is commonly known as learning. In a very precise and practical sense, the information which the net has learned lies in the weights of the connections between neurons.
Neural nets can be broadly categorised into those which are trained by supervised learning and those which learn without supervision. Supervised learning requires prior knowledge of the form and type of knowledge which will be represented. Nets which use supervised learning tend to use one of the error-descent methods, particularly backpropagation (of errors). The major problem with backpropagation is its poor scaling behaviour: any problem which is more than a "toy problem" takes an inordinate length of time to solve using backpropagation. Variations of the error-descent algorithms can ameliorate this problem, but not remove it. The problem is innate in error correction.
An alternative considered herein is based on a proposal by Donald Hebb, who wrote: "When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."
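Hebb's postulate is commonly formalised for a linear unit as the update Δw = η·x·y, where x is the presynaptic input, y the postsynaptic output, and η a learning rate. The sketch below, with illustrative parameter values, shows this rule in its plain form; it also exposes the instability that motivates the stabilising mechanisms compared in this paper.

```python
import numpy as np

# Plain Hebbian learning for a single linear unit y = w . x.
# Learning rate, step count, and input distribution are illustrative.
rng = np.random.default_rng(0)
eta = 0.01                   # learning rate
w = rng.normal(size=2)       # initial weight vector

for _ in range(1000):
    x = rng.normal(size=2)   # presynaptic input sample
    y = w @ x                # postsynaptic activation
    w += eta * y * x         # Hebb: strengthen w in proportion to x * y

# With no stabilising term the weights grow without bound, which is why
# the models examined here add weight decay or anti-Hebbian connections.
print(np.linalg.norm(w))
```

Running this, the weight norm grows roughly exponentially with the number of presentations, since on average each step multiplies w by (I + η·C), where C is the input covariance.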
This report analyses four models of unsupervised learning in terms of their stated properties vis-à-vis information theory. Unsupervised learning in neural nets is generally realised by using a form of Hebbian learning.