Lecture 15 - Edmund Rolls
Error Correction

OK, granted that error correction may occur: how? Good question. Clearly a perceptron must calculate the error, but no one yet has any idea how the brain could do this. Within the cerebellum there are "climbing fibres" which seem to carry an individual signal back to the network, but it isn't clear that this is any kind of error signal. In short: we have no idea.
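For concreteness, here is a minimal sketch (my own illustration, not anything from the lecture) of what "calculating the error" means for a perceptron: the error signal is just (target - output), and the delta rule uses it to update the weights. The hard part for biology is not this arithmetic but how such a signal could be delivered back to the right synapses. All names and parameters below are illustrative.

    import numpy as np

    # Minimal perceptron-style error-correction learning.
    # The "error" the brain would somehow have to compute is just
    # (target - output); delivering it to the synapses is what has
    # no known biological mechanism.

    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=3)   # weights for 3 inputs
    lr = 0.1                            # learning rate

    def step(x):
        return 1.0 if x > 0 else 0.0

    # Tiny toy task: learn logical OR of the first two inputs
    # (the third input is a constant bias).
    X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
    t = np.array([0, 1, 1, 1], dtype=float)

    for epoch in range(20):
        for x, target in zip(X, t):
            y = step(w @ x)
            error = target - y          # the error signal
            w += lr * error * x         # delta-rule weight update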

Hippocampus

One notion of the hippocampus is that it serves as an intermediate-term episodic memory. Indeed, it seems to be linked to association of the outputs from the other association cortices. More specifically, the pathway is: Association Cortices --> Dentate Gyrus --> CA3 (a single autoassociation network, with demonstrable recurrent collaterals) --> CA1 (a post-processor which seems to get larger the closer you get to humans, evolutionarily). In all of these nets, LTP has been demonstrated.
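As a toy illustration of what a CA3-style autoassociation network does (a sketch under my own assumptions, not Rolls's actual model; the sizes, sparseness, and covariance-style Hebb rule are all illustrative choices): binary patterns are stored on the recurrent "collateral" weights, and a degraded cue is then completed by iterating the recurrent dynamics.

    import numpy as np

    # Toy autoassociative memory: store sparse binary patterns via a
    # Hebb-like rule on the recurrent weights, then complete a degraded
    # cue by iterating the recurrent dynamics.

    rng = np.random.default_rng(1)
    N, P, a = 200, 10, 0.1              # neurons, patterns, sparseness
    patterns = (rng.random((P, N)) < a).astype(float)

    # Covariance Hebb rule over the recurrent collateral connections.
    W = np.zeros((N, N))
    for p in patterns:
        W += np.outer(p - a, p - a)
    np.fill_diagonal(W, 0.0)            # no self-connections

    def recall(cue, steps=10):
        y = cue.copy()
        for _ in range(steps):
            h = W @ y
            # Keep roughly the a*N most activated neurons firing
            # (a crude stand-in for inhibitory feedback).
            thresh = np.sort(h)[-int(a * N)]
            y = (h >= thresh).astype(float)
        return y

    # Cue pattern 0 with about half of its active neurons deleted.
    cue = patterns[0] * (rng.random(N) < 0.5)
    out = recall(cue)
    print("overlap with stored pattern:", (out @ patterns[0]) / patterns[0].sum())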
     NUMBERS (per CA3 cell, in the rat): 3,600 perforant path inputs; 12,000 recurrent collaterals; 46 mossy fibre inputs. Incidentally, the rat hippocampus has cells which fire when the rat is in a particular place. Humans and primates, on the other hand, have cells which fire when you look at a place. Thus, we don't actually have to be there to make the association.
     So, knowing what we know about the performance of such networks, we can calculate the capacity of the rat hippocampus at about 30,000 unique places and, more importantly, we can model it in a "biologically plausible neural network." When we do, two points come to the fore: first, it has been shown that one needs at least as many back-projections as forward projections to retrieve patterns accurately. Second, one can analyze the multi-layer network in terms of a large single-layer network and find out things about its capacity. Indeed, it does store memories very well. Hooray! Of course, this doesn't explain how long-term memories form.
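For what it's worth, the ~30,000 figure is consistent with the Treves-Rolls capacity estimate for a sparse autoassociative net, if one assumes a firing sparseness of about a = 0.02 (my assumption; the lecture didn't state one):

    % Treves-Rolls estimate for the capacity of a sparse autoassociative net:
    %   p_max ~= k * C / (a * ln(1/a)),  with k roughly 0.2-0.3.
    % C = 12,000 recurrent collaterals per CA3 cell (from the numbers above);
    % a = 0.02 is an assumed sparseness.
    p_{\max} \approx k\,\frac{C}{a\,\ln(1/a)}
             \approx 0.2 \times \frac{12\,000}{0.02 \times \ln 50}
             \approx 3.1 \times 10^{4}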

 

Representation of Shape from Images

Theory of David Marr

You need a theory like this to explain why certain cells seem to respond to certain objects regardless of position. For example, face cells are particularly well documented, but evidence has also recently been found for object cells that respond regardless of color, etc.

Neural Network Theory

An invariant detector is a problem for a single neuron, because as soon as we shift the view, a single neuron loses its similarity to the first pattern. However, in the visual cortex there are at least six layers between the LGN and view independence: LGN --> V1 --> V2 --> V4 (view-dependent, configuration-sensitive combinations of features) --> TEO --> TE, with receptive fields growing larger at each stage. So view independence is not an easy problem for the visual cortex. Still, why is this not a combinatorial explosion? Well, not every combination of features needs to exist.
   Indeed, if one uses a trace learning rule (a Hebbian rule whose postsynaptic term is a temporal average of recent activity), the network will automatically pick out combinations of features that are invariant across views. This works because the series of competitive networks, organised in hierarchical layers, together with the trace learning, helps the network recognize that some things are more similar across time than others. A minimal sketch of the rule follows.
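Here is a sketch of a trace rule in a single competitive layer. The form of the rule (weight change proportional to the presynaptic input times a decaying temporal trace of the postsynaptic activity) follows this literature; the layer sizes, crude winner-take-all competition, and all parameters are illustrative assumptions of mine.

    import numpy as np

    # Trace learning rule for one competitive layer: the postsynaptic
    # "trace" y_bar is a running average over time, so features that
    # co-occur across successive views of the same object get bound
    # onto the same output neurons.
    #   y_bar(t) = (1 - eta) * y(t) + eta * y_bar(t-1)
    #   dw       = alpha * y_bar(t) * x(t)

    rng = np.random.default_rng(2)
    n_in, n_out = 64, 16
    W = rng.random((n_out, n_in))
    W /= np.linalg.norm(W, axis=1, keepdims=True)

    alpha, eta = 0.05, 0.8
    y_bar = np.zeros(n_out)

    def present(x):
        """One time step: competition (winner-take-all) plus trace update."""
        global y_bar, W
        h = W @ x
        y = np.zeros(n_out)
        y[np.argmax(h)] = 1.0                  # crude competition
        y_bar = (1 - eta) * y + eta * y_bar    # temporal trace
        W += alpha * np.outer(y_bar, x)        # trace Hebb rule
        W /= np.linalg.norm(W, axis=1, keepdims=True)  # keep weights bounded

    # Successive transformed views of the same object arrive close together
    # in time, so the trace ties them together; here, jittered versions of
    # a base pattern stand in for "views".
    base = (rng.random(n_in) < 0.2).astype(float)
    for t in range(50):
        view = np.clip(base + 0.1 * rng.normal(size=n_in), 0, 1)
        present(view)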

 


 Comments to: ghollich@yahoo.com

 Last Modified: Sep 20, 1999