Lecture 1 - Kim Plunkett
Outline

History

It all began again with Rumelhart and McClelland's handbook of Parallel Distributed Processing (PDP). However, structured connectionism has been around for a very long time indeed: G. Hinton, Quinn, and others have played with connectionist-type accounts of language and cognition for nearly one hundred years.

Types of Networks

Neural Computation in Networks

Ok, given all these different types of networks and psychology's newfound interest in connectionism, how do these networks actually work? Well, take the XOR problem as an example.

XOR

To neuropsychologists, this diagram should look reasonably comforting, as it seems to show a simplified set of five "neurons" and their connections. As it happens, this network solves the binary XOR problem and exhibits many of the most interesting properties of a connectionist network.

But to see how this works, we need to get at the heart of the network's function, namely, each neuron's ability to sum up all its inputs and produce some level of activation. In most cases, this activation is produced by some sort of threshold function (which, for the interested, is the negative of the bias). So our first step is to calculate each neuron's activation.

Activation Calculation

Ok, so we need some way to "sum up" what is coming into the neuron, and this really isn't too scary: all we do is add in the activity of the input units, each modified by the strength, or weight, of the connection between that input and the neuron. To express this mathematically, we say that the net input to a particular neuron (i) is equal to the weight of the connection from its first input times the activation of that input, plus the weight of the connection from its second input times the activation of that input, and so on, until we have added in all of the inputs. At this point we still aren't done, however, since we would still like to decide whether the neuron fires or not, and so we pass our result through the threshold function.
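The summing-and-thresholding step above can be sketched in a few lines of code. This is a minimal illustration, not the lecture's exact network: the weights and thresholds below are hand-picked assumptions that happen to solve XOR with two inputs, two hidden units, and one output unit (the five "neurons" of the diagram).

```python
def net_input(activations, weights):
    """Sum each input activation times the weight of its connection."""
    return sum(a * w for a, w in zip(activations, weights))

def fires(activations, weights, threshold):
    """A unit outputs 1 if its net input exceeds its threshold, else 0."""
    return 1 if net_input(activations, weights) > threshold else 0

def xor_net(x1, x2):
    # Hidden unit 1 detects "at least one input on" (OR).
    h1 = fires([x1, x2], [1.0, 1.0], threshold=0.5)
    # Hidden unit 2 detects "both inputs on" (AND).
    h2 = fires([x1, x2], [1.0, 1.0], threshold=1.5)
    # Output unit: OR but not AND gives XOR.
    return fires([h1, h2], [1.0, -1.0], threshold=0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

Note that the output unit's negative weight from the AND detector is what lets the network solve a problem no single-layer net can: it carves the input space with two lines rather than one.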


Terms

Auto-Association - A type of fully recurrent network which learns to output its own input. It turns out that such training can create interesting compression algorithms, and a trained network can reproduce the appropriate output even given degraded, noisy, or missing input.

Competitive Learning Network - A hybrid network (feedforward between layers, recurrent within layers) whose hidden layer is organized in a Winner-Take-All fashion. This kind of unsupervised network begins to form structures that look very much like concepts.
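A sketch of the competitive idea, with details assumed rather than taken from the lecture: the hidden unit whose weights best match the input wins the competition, and only the winner's weights are moved toward that input, so each unit gradually comes to "own" one cluster of inputs.

```python
import random

def winner(units, x):
    # Winner-take-all: the unit with the largest net input wins.
    return max(range(len(units)),
               key=lambda k: sum(w * xi for w, xi in zip(units[k], x)))

def learn(units, x, rate=0.5):
    k = winner(units, x)
    # Move only the winning unit's weights toward the input, so it
    # responds even more strongly to similar inputs next time.
    units[k] = [w + rate * (xi - w) for w, xi in zip(units[k], x)]
    return k

random.seed(0)
units = [[random.random() for _ in range(4)] for _ in range(2)]
# Two clusters of binary inputs; after training, each of the two
# units reliably wins for one cluster.
for _ in range(20):
    learn(units, [1, 1, 0, 0])
    learn(units, [0, 0, 1, 1])
```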

Constraint Satisfaction Network - A network with pools of units in which compatible neurons excite each other and incompatible neurons inhibit each other. The network thus has several possible end states, reflecting the different interpretations possible for any one problem. Interestingly, for a connectionist model like this, the more constraints, the more structure, and the more cues in the environment pointing toward a possible solution, the faster and more reliably the network will converge on that solution.
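The settling process can be sketched with a Hopfield-style update rule (an assumption; the definition above gives no particular rule). Four units form two rival "interpretations": within a pair the connections are excitatory, across pairs inhibitory, and a weak cue tips the network into one consistent end state or the other.

```python
# Symmetric weights, no self-connections: units 0-1 support each other,
# units 2-3 support each other, and the two pairs inhibit each other.
W = [[ 0,  1, -1, -1],
     [ 1,  0, -1, -1],
     [-1, -1,  0,  1],
     [-1, -1,  1,  0]]

def settle(state, passes=5):
    # Repeatedly set each unit (+1 on, -1 off) by the sign of its
    # net input until the network reaches a stable interpretation.
    for _ in range(passes):
        for i in range(len(state)):
            net = sum(W[i][j] * state[j] for j in range(len(state)))
            state[i] = 1 if net >= 0 else -1
    return state

print(settle([1, -1, -1, -1]))   # cue favouring the first pair
print(settle([-1, -1, 1, -1]))   # cue favouring the second pair
```

Different cues yield different stable end states, which is exactly the "several possible interpretations" behaviour described above.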

Feedforward - A network in which connections run in only one direction (usually from input to output), as differentiated from a recurrent network.

Pattern Association Network - The classic example of a feedforward net: fully connected, it propagates inputs from an input layer to an output layer and learns to recognize input patterns.

Recurrent Connections - A type of connectivity in which neurons are multiply connected to each other, both forward and backward, as differentiated from feedforward connectivity.

Structured Connectionism - Deals with neural networks whose weights are set a priori by the modeler.

Unsupervised Learning Networks - Networks which learn without any explicit target output.

Winner-Take-All - A cluster of neurons which literally fight for supremacy of activation, such that one neuron "wins out" over the others in the group. Every time a neuron "wins", it becomes a little more likely to win again the next time.


 Comments to: ghollich@yahoo.com

 Last Modified: August 8, 1998