Lecture 2 - Peter MacLeod
Simulating Deep Dyslexia
by lesioning a model of reading via semantics

Outline

Connectionism & Neuropsychology - Connectionist modeling lends itself naturally to the simulation of neuropsychological deficits, because one can build a model and then simply remove some connections or units, i.e., "lesion" the model.
     In addition, connectionist nets have a number of very interesting properties that are similar to actual human behaviors.
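
In practice, a "lesion" can be as simple as zeroing out part of a weight matrix. Here is a minimal NumPy sketch; the matrix size matches the model described below, but the function names and lesion rate are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weight matrix: 28 letter units feeding 40 hidden units.
W = rng.normal(0.0, 0.5, size=(28, 40))

def lesion_connections(W, proportion, rng):
    """Zero out a random proportion of the individual connections."""
    keep = rng.random(W.shape) >= proportion
    return W * keep

def lesion_units(W, proportion, rng):
    """Silence whole receiving units by zeroing entire columns of W."""
    W = W.copy()
    dead = rng.random(W.shape[1]) < proportion
    W[:, dead] = 0.0
    return W

W_damaged = lesion_connections(W, 0.2, rng)  # remove ~20% of connections
```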

Deep Dyslexia

Patients with deep dyslexia make both semantic and visual errors.

Semantic: NIGHT --> "sleep"
Visual: SCANDAL --> "sandals"

In addition, patients show part-of-speech effects: nouns are read more successfully than verbs, and content words more successfully than function words.

Hinton, Plaut & Shallice Model

Inputs: 28 position-specific letter units
Hidden: 40 hidden units
Output: 68 semantic feature (sememe) units
Cleanup: 60 cleanup units (recurrently connected with the sememe output units)
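
A minimal NumPy sketch of this architecture, assuming sigmoid units and synchronous settling; only the layer sizes come from the lecture, the weight values and dynamics here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

N_LETTER, N_HIDDEN, N_SEMEME, N_CLEANUP = 28, 40, 68, 60

# Feedforward pathway: letter units -> hidden units -> sememe units.
W_lh = rng.normal(0, 0.1, (N_LETTER, N_HIDDEN))
W_hs = rng.normal(0, 0.1, (N_HIDDEN, N_SEMEME))
# Recurrent cleanup loop: sememe units <-> cleanup units.
W_sc = rng.normal(0, 0.1, (N_SEMEME, N_CLEANUP))
W_cs = rng.normal(0, 0.1, (N_CLEANUP, N_SEMEME))

def settle(letters, n_steps=20):
    """Clamp the letter units; iterate the sememe/cleanup loop to a fixed point."""
    hidden = sigmoid(letters @ W_lh)
    sememes = sigmoid(hidden @ W_hs)
    for _ in range(n_steps):
        cleanup = sigmoid(sememes @ W_sc)
        sememes = sigmoid(hidden @ W_hs + cleanup @ W_cs)
    return sememes
```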

Task: To turn on the correct subset of semantic features for each input word. This is a hard problem because the relation between semantic similarity and visual similarity is purely arbitrary. That is, semantically related words usually do not look alike: CAT looks more like COT than DOG, yet DOG is the semantically related word.

Procedure: Present a grapheme/sememe pairing. First, let the network settle to a stable state, and then backpropagate the error between the settled output and the correct sememe pattern to adjust the weights.
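
One way to realize "settle, then backpropagate" with a modern autograd library is to unroll the settling iterations and let gradients flow back through them. This is only a stand-in for the recurrent backpropagation used in the original work; the layer sizes follow the lecture, while the learning rate, loss, and random patterns are illustrative:

```python
import torch

torch.manual_seed(0)
N_LETTER, N_HIDDEN, N_SEMEME, N_CLEANUP = 28, 40, 68, 60

shapes = [(N_LETTER, N_HIDDEN), (N_HIDDEN, N_SEMEME),
          (N_SEMEME, N_CLEANUP), (N_CLEANUP, N_SEMEME)]
W_lh, W_hs, W_sc, W_cs = [torch.nn.Parameter(0.1 * torch.randn(a, b))
                          for a, b in shapes]
opt = torch.optim.SGD([W_lh, W_hs, W_sc, W_cs], lr=0.1)

def settle(letters, n_steps=10):
    """Forward pass: unrolled settling of the sememe/cleanup loop."""
    hidden = torch.sigmoid(letters @ W_lh)
    sememes = torch.sigmoid(hidden @ W_hs)
    for _ in range(n_steps):
        cleanup = torch.sigmoid(sememes @ W_sc)
        sememes = torch.sigmoid(hidden @ W_hs + cleanup @ W_cs)
    return sememes

def train_step(letters, target_sememes):
    opt.zero_grad()
    loss = torch.nn.functional.binary_cross_entropy(
        settle(letters), target_sememes)
    loss.backward()  # gradients flow back through the unrolled settling
    opt.step()
    return loss.item()

# Toy grapheme/sememe pairing: random binary patterns.
letters = (torch.rand(1, N_LETTER) < 0.2).float()
target = (torch.rand(1, N_SEMEME) < 0.3).float()
print(train_step(letters, target))
```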

Results: In this manner, for example, although COT and CAT might enter sememe space at about the same point, the cleanup units (which encode semantic relationships) pull each word into its own attractor basin in sememe space.

Lesioning: When connections or units are removed, or noise is added, the network produces semantic, visual, mixed, and unrelated errors. The pattern of errors, however, is far from chance. First, there are virtually no unrelated errors. Second, disrupting the grapheme-to-hidden connections yields mostly visual errors (but some semantic ones), while disrupting the hidden-to-sememe connections yields mostly semantic errors (but still some visual ones). Thus, any lesion at all produces both visual and semantic errors. Why? Because lesioning changes the basins of attraction, and those basins are the combined product of all network activity. Moreover, such change increases the likelihood of mixed visual + semantic errors well above chance. This is true of both the model and the patients.
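
How might such errors be scored? A hypothetical classification scheme: map the lesioned network's response to a word, then compare letter overlap and sememe overlap with the target. The overlap measures and thresholds below are illustrative, not those of Hinton, Plaut & Shallice:

```python
import numpy as np

rng = np.random.default_rng(0)

def letter_overlap(w1, w2):
    """Crude visual similarity: shared letters in shared positions."""
    return sum(a == b for a, b in zip(w1, w2)) / max(len(w1), len(w2))

def sememe_overlap(s1, s2):
    """Proportion of shared active sememes (s1, s2 are binary vectors)."""
    return np.minimum(s1, s2).sum() / max(np.maximum(s1, s2).sum(), 1)

def classify_error(target, response, sememes,
                   vis_thresh=0.4, sem_thresh=0.4):
    visual = letter_overlap(target, response) >= vis_thresh
    semantic = sememe_overlap(sememes[target], sememes[response]) >= sem_thresh
    if visual and semantic:
        return "mixed"
    return "visual" if visual else ("semantic" if semantic else "unrelated")

# Toy lexicon with random sememe patterns.
sememes = {w: (rng.random(68) < 0.3).astype(float)
           for w in ["cat", "cot", "dog"]}
print(classify_error("cat", "cot", sememes))  # likely "visual": the words look alike
```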

Note: Even if the connectivity is drastically revised and/or the cleanup units are moved to the hidden layer, the model still produces the same qualitative pattern. What's more, using a different learning algorithm (say, Hebbian learning) still gives the same results!

Conclusions

So what does all this tell us?

 
