The Pattern Associator

"Look out! A giant safe is about to fall on your head!" While an average person is unlikely to face anything like the previous situation, one of the primary functions of a nervous system is to help us adapt to the environment around us. We must be able to take the wealth of stimuli available to us from the outside world, make some "sense" of it, and behave in such a way that the likelihood of our prospering increases. The ultimate outcome of which is nothing less than the fate of our species. In our world, only the most adaptable survive.

In the quest to be the best, the human nervous system has evolved into a vast array of billions of processing units and the trillions of connections among them, the strengths and structure of which are constantly changing. All this is presumably an attempt to better adapt our behaviors to the milieu in which we live and thrive. And yet, with so many billions of neurons and so many trillions of complex connections to make, it would seem an impossible task, even for mother nature, to "know," at birth, which connections to make and where. In fact, she doesn't. Instead, nature makes an overabundance of connections, strengthens those connections which show themselves to be the most active, and "prunes" back the connections that prove to have no value. The end result is a slightly less complex network, optimized to "do the right thing," and that could be anything from recognizing letters on a printed page to picking out the finer points of a Shakespearean sonnet. It is this phenomenon (the strengthening of advantageous connections and the "pruning back" of less important ones) that pattern associator networks attempt to mimic.

If the IAC networks show a plausible structure for content-addressable memories, then the Pattern Associator networks show a plausible account of how such networks might form.

Definition of the Pattern Associator

Formally, then, pattern associator networks are neural networks which take a given set of input and output patterns and physically change their structure (using some sort of learning rule) so that each input pattern comes to produce its associated output. In other words, these networks "learn" to "recognize" particular incoming stimuli and to associate certain optimum responses with them. In reality, all they are doing is strengthening those connections which lead to the proper responses and weakening those that don't.
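
To make the definition concrete, here is a minimal sketch (not part of DartNet or the original PDP software) of a one-layer associator trained with the error-correcting "delta" rule; the patterns, learning rate, and number of passes are invented purely for illustration.

    # A minimal one-layer pattern associator trained with the delta rule.
    # Everything here (patterns, learning rate, epochs) is made up for the example.
    import numpy as np

    inputs = np.array([[1, 0, 1, 0],      # each row is an input pattern
                       [0, 1, 0, 1],
                       [1, 1, 0, 0]], dtype=float)
    targets = np.array([[1, 0],           # the desired output for each input
                        [0, 1],
                        [1, 1]], dtype=float)

    weights = np.zeros((4, 2))            # one weight per input-output connection
    rate = 0.25                           # learning rate

    for epoch in range(200):
        for x, t in zip(inputs, targets):
            out = x @ weights             # the network's current response
            error = t - out               # how far off each output unit is
            # Strengthen connections that would reduce the error, weaken the rest.
            weights += rate * np.outer(x, error)

    print(np.round(inputs @ weights, 2))  # each row should now match its target

After enough passes, the weight matrix alone carries the associations: presenting any of the trained input patterns reproduces its paired output, which is exactly the "learning to recognize and respond" described above.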

Interestingly, along the way, these networks tend to form "hidden" groups of units which become "specialized" for doing certain things, much like the specialized ganglia within our own brains. For example, networks trained to recognize pictures tend to form edge-detection units and orientation units in much the same way that the occipital lobe has groups of neurons devoted to sensing orientation and other groups apparently devoted to edge detection. Indeed, it has been demonstrated that for the known patterns of neurons seen in the occipital cortex to form, all that is needed is some kind of lateral inhibition and some kind of Hebbian learning rule, similar to (though not exactly like) the ones used by pattern associators. In fact, many of the learning rules are not very much like biological models at all, and that is a problem (see the case for and against the pattern associator).
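
The bare Hebbian rule mentioned above can be written in a couple of lines; the activities and starting weights below are invented for illustration only.

    # The bare Hebbian rule: a connection grows in proportion to how active
    # its two ends are together. All numbers here are invented for illustration.
    import numpy as np

    pre = np.array([1.0, 0.0, 1.0])   # activity of the sending units
    post = 0.8                        # activity of the receiving unit
    w = np.array([0.2, 0.2, 0.2])     # current connection strengths
    w = w + 0.1 * pre * post          # "units that fire together wire together"
    print(w)                          # active connections strengthen; the idle one does not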

Perhaps the finer points of the pattern associator model are best illustrated by an example.

DartNet

DartNet is a free neural network program designed to help interested students familiarize themselves with a simple working neural network. In this case, it is a pattern associator type of network that uses backpropagation as its learning rule. DartNet can be found in the DartNet folder of this PDP primer, and is also available via anonymous FTP from dartvax.dartmouth.edu in the pub/dartnet/ folder.

Start the DartNet application by double-clicking the DartNet icon, or by clicking the DartNet icon once and then choosing "Open" from the "File" menu. Remember that complete help can be found in the "?" (Help) menu under "DartNet Help."

Select "Open Network" from the "File" menu and open the network labeled "3-Layer XOR Network." Then select "Open Pattern Set" from the file menu, and open the pattern set labeled "XOR Pattern Set". You will now have before you, on the left side of the screen: a simple network which has two input units, two hidden units, and one output unit. On the right side of the screen is a pattern set which corresponds to an XOR logic expression.

By selecting "Learn" from the "Session" menu, you can watch as the network changes the weights of its connections in order to produce the proper output, given any two inputs. Once the network has "learned" the proper connections (to a criterion), you can test its output by selecting "Test" from the "Session" menu and entering any two inputs (keeping in mind that it learned using only 0's and 1's).

Just as this simple network learned the XOR problem, more complex networks are capable of learning far more detailed discriminations. With DartNet, you might wish to experiment with different-sized networks and different kinds of connection patterns. Complete information on building your own networks and pattern sets can be found in the "DartNet Help" section of the "?" (Help) menu.

The Case for and Against the Pattern Associator

Although the theory underlying the Pattern Associators is not without some biological plausibility, those who take issue with them generally do so because of the mathematical methods used to create the "learning." Many of these methods are nothing more than mathematical devices, sprung from the imagination with little consideration of biological or even physical limitations.

Disadvantage of Backpropagation

Take backpropagation, for example. It is one of the most common learning rules and the one used by the DartNet program included with this handbook. At its simplest, backpropagation involves taking a network's current performance, calculating the error, and then changing every connection which in some way contributed to that error, in proportion to each connection's responsibility. In a way, this is similar to our judicial system: meting out punishment to those who err and saving the most extreme kinds of punishment for those who err the most. Mathematically, this involves "propagating" the error backwards through the network: hence backpropagation. Unfortunately, as hinted at above, backpropagation is not intended to be biologically realistic. It merely provides a substitute for a process of feedback within neuronal systems which is, at present, not well understood. The strongest opponents of Pattern Associators claim that "cheating" like this robs a network of any validity it might otherwise have possessed. They say that those who use such rules have fallen prey to the "Gee, it looks the same" syndrome. In other words, just because pattern associators appear to act like people doesn't mean that they learn, or even operate, in a manner similar to our own. Quite simply, behavioral similarity does not necessitate structural similarity. (The same could be said of IAC Networks.)
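
As a small illustration of that "apportioning of blame," with numbers invented for the example, here is how the error at a single output unit might be passed back to three hidden units in proportion to the strength of their connections.

    # Apportioning blame for one output unit's error among three hidden units.
    # All of the numbers here are invented for illustration.
    import numpy as np

    hidden_activity = np.array([0.9, 0.2, 0.6])   # what each hidden unit did
    weights_to_out = np.array([0.8, -0.3, 0.1])   # their connections to the output unit
    target, output = 1.0, 0.4                     # what we wanted vs. what we got

    output_error = target - output                # the output unit missed by 0.6
    blame = output_error * weights_to_out         # each hidden unit's share of that error
    # The output unit's own connections are then corrected in proportion to
    # how much each hidden unit contributed to the faulty response.
    weights_to_out += 0.5 * output_error * hidden_activity
    print(np.round(blame, 3), np.round(weights_to_out, 3))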

Pragmatic Advantages

On the other hand, the structural similarities which arise between such networks and our own seem to indicate more than an accidental co-occurrence. Moreover, pattern associators do, in fact, recognize patterns very well and have accomplished a remarkable number of feats even for a computerized "intelligence," and that value should stand on its own regardless of biological plausibility. For example, some networks can read the printed word, others can recognize the human voice, and one, named ALVINN, can even drive a car safely.

Special Case: The Economy

In point of fact, the ability of pattern associators to recognize and respond to patterns, any patterns, should make them invaluable in many diverse situations. One group of economic forecasters has even turned to pattern associators to help pick out trends in world markets. Needless to say, if successful, this could prove very profitable. The forecasters ought to be warned, however: just because a network can recognize patterns doesn't mean it is recognizing the right ones.

Tale of a Tank Killer

Once upon a time, a group of army scientists thought it might be handy to have a pattern associator recognize the difference between hostile enemy tanks and harmless civilian transports. Said experts set up pictures of the two classes and had a rather complex network attempt to "learn" the difference. Surprisingly, the network was able to tell the difference almost instantly. Because the network, though complex, was far simpler than a human brain, the specialists were curious as to how it could make such distinctions when trained officers often had problems. Once they examined the inner structure of the network, the mystery was solved. You see, the pictures of the tanks had been taken in the morning, while the civilian transports had been photographed later in the day. Far from discriminating between the pictured objects, the network was actually keying off the lighting differences in the photographs: just a word of caution to those who would let computers govern their lives.


Onward to the next section: Neural Networks and Chaos theory.

Back to Table of Contents.


Last Modified: Apr 21, 1995.

George Hollich
Lebanon Valley College
G_Hollich@Acad.LVC.edu