General Framework for PDP

According to McClelland, there are eight major aspects of PDP:

  1. A set of processing units.
  2. A state of activation.
  3. An output function.
  4. A pattern of connectivity among the units.
  5. A propagation rule for propagating patterns of activations through the network of connectivities.
  6. An activation rule for combining the inputs impinging on a particular unit and realizing a new level of activation.
  7. A learning rule whereby patterns of connectivity are modified by experience.
  8. An environment within which the system must operate.


Processing Units

Traditionally, these units have been called neurons, with near-total disregard for the complexities of actual organic neurons. Now, after several generations, the "neurons" of connectionist modeling have been renamed: they are now, and hereafter, referred to as processing units. Of course, this is my convention. At other conventions, the processing unit, or unit for short, is often referred to as a formal neuron, and is defined in the same manner.


Formal Neurons

Formal neurons, or processing units, are simple functional units that combine the inputs coming into them in some manner (typically by summing them) and, if that combined input raises their activation above some threshold, fire and send a signal onward.
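As a concrete illustration, here is a minimal sketch of such a unit in code. The weights and threshold are made-up illustrative values, not taken from the text:

```python
# A minimal sketch of a formal neuron: it sums its weighted inputs and
# "fires" (outputs 1) only if that sum exceeds a threshold. The weights
# and threshold below are illustrative values, not from the text.

def formal_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of the inputs exceeds the threshold."""
    net = sum(a * w for a, w in zip(inputs, weights))
    return 1 if net > threshold else 0

# Two active inputs together push the unit past its threshold, so it
# fires; a single active input does not.
print(formal_neuron([1, 1], [0.6, 0.6], threshold=1.0))  # 1
print(formal_neuron([1, 0], [0.6, 0.6], threshold=1.0))  # 0
```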

Formal Neurons are based on the relevant characteristics of Actual Neurons.


Actual Neurons

Within the nervous system, neurons are simply cells. Cells with very special electrochemical properties, to be sure, but cells nonetheless. As such, they possess all the things that one might expect a cell to possess: a lipid-bilayer membrane, a nucleus, cytoplasm, etc. But it is the neuron's talent for electrochemical communication that makes it the basic building block of nervous systems everywhere, from that of the simple snail to the fiercely complex nervous system found in humans.

Connectionist modelers believe that the two most important properties exhibited by these actual neurons are their activation levels and the pattern of connections between the units. Connectionist modelers also believe that it is solely in these connections and activation levels that the essence of behavior originates. Consequently, connectionist models tend to disregard the intimate biological details in favor of the mathematical simplicity of formal neurons.

Activation Levels

At any given time, neurons have a particular level of activation. That is, they exhibit a particular frequency of firing. This frequency is called the activation level of a neuron. In a way, this is similar to the frequency of waves crashing on a beach. In stormy weather, many waves will hit the beach in a short period of time (increasing the beach's "activation"); at calmer times, the waves will come more slowly. The number of waves hitting the beach in any given minute is similar to the activation level of a neuron, and like the frequency of those waves, the activation level of a neuron has limits: there is a maximum and a minimum possible rate of firing. Just as you will never see 100 waves hit the beach in 10 seconds, neurons cannot fire faster than a certain rate.

Neurons also have a particular resting rate of firing and will tend to fall back to this resting rate whenever outside inputs flag, just as there is an average frequency of waves hitting the beach over any given time period. Any force seeking to increase the activation of a neuron must combat this natural tendency toward the lower-energy state.

As in physics, the inputs to a neuron form a vector: each has both a magnitude and a direction (excitatory or inhibitory). Each component of this vector is found by multiplying the input unit's activation level by the strength, or weight, of the connection between that input and the neuron one is studying (see The Math).

The Math

(Math phobics may wish to skip this section)

The total or net input into a formal neuron at any given moment is the sum of all the inputs into it. That is:

The net input into neuron I = (activation of input 1 * weight between input 1 and neuron I)
                            + (activation of input 2 * weight between input 2 and neuron I)
                            + ...
                            + (activation of input n * weight between input n and neuron I)

Put another way:

(1)   net = a_1*w_1 + a_2*w_2 + ... + a_n*w_n

where a_j is the activation of input j, and w_j is the weight of the connection between input j and the neuron under study.
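The net input of equation (1) can be written directly as code. The activations and weights below are made-up numbers for illustration:

```python
# Equation (1) as code: the net input into a neuron is the sum of each
# input unit's activation times the weight of its connection.

def net_input(activations, weights):
    return sum(a * w for a, w in zip(activations, weights))

# Made-up activations and connection weights (one per input unit);
# a negative weight represents an inhibitory connection.
acts = [0.5, 1.0, 0.2]
wts  = [0.4, -0.3, 0.8]
print(net_input(acts, wts))  # 0.5*0.4 + 1.0*(-0.3) + 0.2*0.8, about 0.06
```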

However, a key thing to note is that how the level of activation changes in this neuron depends not only on the total or net input, but also on the neuron's prior level of activation and on the strength of the decay.
In mathematical terms, this can be described by:

(2)   if net > 0:   Δa = net * (max − a) − decay * (a − rest)
(3)   if net <= 0:  Δa = net * (a − min) − decay * (a − rest)

where a is the neuron's current activation, net is the net input from equation (1), max and min are the maximum and minimum possible activation levels, rest is the resting level, and decay is the strength of the decay.
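A minimal sketch of this activation rule in code, assuming the standard interactive-activation form of the update; the parameter values are illustrative, not from the text:

```python
# Sketch of the activation rule: positive net input pushes activation
# toward its maximum, negative net input toward its minimum, and decay
# always pulls activation back toward the resting level.
# MAX, MIN, REST, and DECAY are illustrative values, not from the text.

MAX, MIN, REST, DECAY = 1.0, -0.2, 0.0, 0.1

def delta_a(a, net):
    """Change in activation for a neuron at activation a, given net input."""
    if net > 0:
        return net * (MAX - a) - DECAY * (a - REST)
    return net * (a - MIN) - DECAY * (a - REST)

# At maximum activation, even strong positive input cannot raise
# activation further: only the decay term acts, so the change is negative.
print(delta_a(MAX, net=0.5))   # -0.1
# At rest, any positive input drives activation upward.
print(delta_a(REST, net=0.5))  # 0.5
```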

For example, suppose the neuron is already operating at its maximum activation level. Intuitively, we can see that the neuron cannot fire any faster, so no matter what the net input is, the neuron's activation level can only go down. Represented in mathematical terms, this situation looks like:

(4)   at a = max:   Δa = net * (max − max) − decay * (max − rest) = −decay * (max − rest) < 0

Likewise, if the neuron is at rest, any input whatsoever will drive it away from its resting state. Represented in mathematical terms, this situation looks like:

(5)   at a = rest (net > 0):   Δa = net * (max − rest) − decay * (rest − rest) = net * (max − rest) > 0

Notice that for any given net input and strength of decay, somewhere in between these two extremes, the change in activation is zero. Intuitively, one can picture this by imagining that the two opposing forces are equal and opposite. Mathematically, this works out to be:

(6)   Δa = 0   when   net * (max − a) = decay * (a − rest),   that is, when   a = (net * max + decay * rest) / (net + decay)

The lesson from this is that the larger the net input, the higher the level at which the neuron stabilizes (Δa = 0), and the larger the strength of the decay, the lower the level at which the change in activation is zero.
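Iterating the update rule makes this stabilization concrete. The sketch below (again with illustrative parameter values) steps a neuron from rest under a constant positive net input and checks that it settles where the growth and decay terms balance:

```python
# Repeatedly applying the activation rule with a constant positive net
# input: activation climbs from rest and levels off where the growth
# term net*(MAX - a) equals the decay term DECAY*(a - REST).
# Parameter values are illustrative, not from the text.

MAX, REST, DECAY = 1.0, 0.0, 0.1

def step(a, net):
    return a + net * (MAX - a) - DECAY * (a - REST)

net = 0.4
a = REST
for _ in range(100):
    a = step(a, net)

# Predicted stable level: net*(MAX - a) = DECAY*(a - REST)
# => a = (net*MAX + DECAY*REST) / (net + DECAY)
equilibrium = (net * MAX + DECAY * REST) / (net + DECAY)
print(round(a, 6), round(equilibrium, 6))  # 0.8 0.8
```

Note that a larger net input moves this stable level closer to max, while a larger decay pulls it back toward rest, exactly the lesson stated above.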

This lesson is important if one is to understand how Interactive Activation and Competition works.
This also forms the essence of any PDP Network.


Onward to the next section: The IAC Approach.

Back to Table of Contents.


Last Modified: Sept 11, 1995.

George Hollich
Temple University
GHollich@astro.ocis.temple.edu