Autoassociators are recurrent collateral networks in which
the output feeds back onto itself. As a result, such a network can learn and
reproduce the pattern of inputs presented to it, even when cued with only a very small
sub-fragment of the original memory. In this manner,
autoassociators of this sort are said to be content
addressable.
The learning rule is just a simple Hebb
rule, which changes the weight of a synapse based on the activity of
the pre- and postsynaptic neurons. Interestingly, the architecture of
this network is such that the connection weights will be symmetric
across the diagonal. Also, because of the recurrent architecture, there will be
runaway feedback unless there is some sort of threshold function.
Thus, such a network MUST have a non-linear activation function.
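A minimal sketch of such an autoassociator, assuming a toy network size and +/-1 unit activities (neither is specified in these notes). The Hebb rule produces a symmetric weight matrix automatically, and the sign() non-linearity plays the role of the threshold function that keeps the feedback from running away:

```python
import numpy as np

def train(patterns):
    """Hebbian learning over +/-1 patterns; zero diagonal (no self-synapses)."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)          # Hebb rule: dw_ij = pre_j * post_i
    np.fill_diagonal(W, 0)
    return W / len(patterns)

def recall(W, cue, steps=10):
    """Iterate the recurrent dynamics; sign() is the non-linear threshold."""
    s = cue.astype(float).copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1                # break ties deterministically
    return s

# Store one pattern, then cue with a corrupted fragment of it.
stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1]])
W = train(stored)
assert np.allclose(W, W.T)           # weights are symmetric, as noted above
cue = stored[0].copy()
cue[:3] = 1                          # damage part of the memory
print(recall(W, cue))                # settles back onto the stored pattern
```

Cueing with the damaged fragment recovers the full stored pattern, which is the content-addressability described above.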
So, to make a "few" points...
Hopfield investigated such networks extensively. He found that the units of the system will either be consistent with each other if their connections agree (and the network will fall into a stable state), OR frustrated if the connections are opposed. From this, Hopfield not only showed that these networks will be stable if the weights are symmetric, but he also calculated the number of possible stable states per network. Like the pattern associator, the capacity is determined by the number of synapses per neuron (from 12,000 to 50,000 in the rat), divided by the "sparseness" factor. Again, the sparser the representation, the better.
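Hopfield's stability argument can be illustrated numerically. The sketch below (illustrative sizes and random weights, not anything from these notes) shows that with symmetric weights, each asynchronous update can only lower or preserve the energy E = -1/2 * sum w_ij s_i s_j, so the dynamics must settle into a stable state rather than run away:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
W = rng.normal(size=(n, n))
W = (W + W.T) / 2                    # enforce symmetry, as Hopfield requires
np.fill_diagonal(W, 0)               # no self-connections

def energy(W, s):
    return -0.5 * s @ W @ s

s = rng.choice([-1, 1], size=n).astype(float)
energies = [energy(W, s)]
for _ in range(100):                 # asynchronous updates, one unit at a time
    i = rng.integers(n)
    s[i] = 1 if W[i] @ s >= 0 else -1
    energies.append(energy(W, s))

# Energy never increases: the signature of settling into a stable state.
assert all(e2 <= e1 + 1e-9 for e1, e2 in zip(energies, energies[1:]))
print("settled at energy", energies[-1])
```

With asymmetric weights the same guarantee does not hold, which is why the symmetry produced by the Hebb rule matters.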
Coding in AutoAssociator Networks
Generalisation, completion, and graceful degradation come only with sparse or fully distributed representations. Local and fully distributed networks can only store a number of patterns equal to the number of inputs; sparse networks can store many more. In addition, as one moves from local to fully distributed coding, the amount of information in each pattern increases. Thus, fully distributed networks can hold the most, BUT at the greatest cost to efficiency.
Biological Plausibility and Things to Explain
Comments to: ghollich@yahoo.com |
Last Modified: Sep 20, 1999 |