Behind the Science  |   November 01, 2013
Applying Neural Network Computer Models to Aphasia Research
Author Affiliations & Notes
  • Gary S. Dell
    University of Illinois at Urbana-Champaign
  • The content of this page is based on selected clips from a video interview conducted at the ASHA Convention.
CREd Library, November 2013, doi:10.1044/cred-ote-bts-006

I'm interested in how thoughts inside the head become transmitted into speech -- in trying to specify what happens when we talk. The way I do that is by making computational models. I try to make computer programs that simulate the processes involved in speaking -- how we have ideas, how we retrieve words, how we say those words, how we get everything in the right order, and produce it.

The way that I test my computer models to see if they're an accurate model of the way the brain works is by looking at the slips of the tongue that the computer model makes. So if the computer model slips in the same way that people do, then I have a theory of how this works.

How can computer models help us understand and treat language disorders?

Let me put it this way: when people first started making computer models of things, they imagined that the mind was like a computer program. The thing is that computers don't really work the way the brain works at all. Computers do things millions and millions of times faster than the brain. The brain does things in different ways.

Within the last 20 to 30 years, a form of computer modeling called neural networks has been devised in which you actually try to simulate the way that neurons operate. That's the kind of model that I use to try to understand how language production works and how pathological language production works. And so, we can take the model, and we can change it and assume that there's something wrong with it. The model is based on a principle of activation spreading within a neural network. When we make that activation spread less efficiently, it starts making a whole lot of errors -- in the way that aphasic individuals make errors. So we can use this to diagnose people -- that is, we can take an individual, gather their errors and then try to make the model behave like them. And in making the model behave like them, we now have a theory of what's wrong, of what component of language is damaged and how it's damaged and things like that.
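The idea of "lesioning" a spreading-activation model can be illustrated with a toy simulation. The sketch below is a hypothetical miniature, not the published model: activation spreads from semantic features to word units and then to phoneme units, noise makes the model slip occasionally, and weakening the connection weights -- the "lesion" -- makes it slip far more often, in both semantic and phonological ways.

```python
import random

def name_picture(sem_weight, phon_weight, trials=1000, noise_sd=0.1, seed=0):
    """Toy two-step spreading-activation naming model (a sketch only).

    Step 1 spreads activation from semantic features to word units;
    step 2 spreads it from the chosen word to its phonemes. Gaussian
    noise occasionally lets a competitor win, producing a slip.
    """
    rng = random.Random(seed)
    counts = {"correct": 0, "semantic": 0, "phonological": 0}
    for _ in range(trials):
        # Step 1: word retrieval. The target 'cat' gets full semantic
        # support; the semantic neighbor 'dog' gets partial support.
        cat = sem_weight + rng.gauss(0, noise_sd)
        dog = 0.5 * sem_weight + rng.gauss(0, noise_sd)
        if dog > cat:
            counts["semantic"] += 1      # said "dog" for the cat picture
            continue
        # Step 2: phoneme retrieval. The target's sounds compete with a
        # phonological neighbor ('mat' shares sounds with 'cat').
        target = phon_weight + rng.gauss(0, noise_sd)
        neighbor = 0.5 * phon_weight + rng.gauss(0, noise_sd)
        if neighbor > target:
            counts["phonological"] += 1  # said "mat"
        else:
            counts["correct"] += 1
    return counts
```

Running it with strong weights (e.g. `name_picture(1.0, 1.0)`) yields almost all correct responses; weakening both weights (e.g. `name_picture(0.1, 0.1)`) makes activation spread less efficiently and errors of both kinds become common -- the toy analogue of turning the normal model into an aphasic one.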

We have a website where people in the communication sciences field can enter properties of somebody they are testing -- the error patterns they would make. So if you were giving somebody a picture naming test, with a picture of a cat, and they say "rat" or something like that instead of cat, and you collect many of these errors, you can enter them into our website, and the website will make the model try to behave like the person. Then what you get out of that is some numbers that say, how strong is this person's semantics, how strong is their phonology, and so on and so forth. Then you can use that, possibly, to develop therapies.
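The kind of fitting the website performs can be sketched in miniature: simulate a toy naming model at many candidate strength settings, and keep the setting whose predicted error pattern is closest to the patient's observed one. Everything below -- the toy model, the numbers, and the function names -- is a hypothetical illustration, not the actual WebFit code.

```python
import random

def simulate(sem_w, phon_w, trials=500, seed=1):
    """Toy two-step naming model: returns the predicted proportions of
    [correct, semantic-error, phonological-error] responses."""
    rng = random.Random(seed)
    counts = [0, 0, 0]
    for _ in range(trials):
        # A weak semantic weight lets a semantic neighbor win word selection.
        if 0.5 * sem_w + rng.gauss(0, 0.1) > sem_w + rng.gauss(0, 0.1):
            counts[1] += 1
        # A weak phonological weight lets a sound-alike win phoneme selection.
        elif 0.5 * phon_w + rng.gauss(0, 0.1) > phon_w + rng.gauss(0, 0.1):
            counts[2] += 1
        else:
            counts[0] += 1
    return [c / trials for c in counts]

def fit_patient(observed, step=0.05):
    """Grid search for the (semantic, phonological) strengths whose
    predicted pattern is closest (least squares) to the observed one."""
    grid = [round(step * i, 2) for i in range(1, int(1 / step) + 1)]
    best, best_err = None, float("inf")
    for s in grid:
        for p in grid:
            pred = simulate(s, p)
            err = sum((a - b) ** 2 for a, b in zip(pred, observed))
            if err < best_err:
                best, best_err = (s, p), err
    return best

# A patient who makes many semantic but few phonological errors should be
# fitted with a weak semantic strength and a stronger phonological one.
s, p = fit_patient([0.60, 0.35, 0.05])
```

The numbers that come out -- here, a semantic strength and a phonological strength -- are the model-based characterization of the patient described above.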

One thing I'll be talking about this afternoon is using this model-based way to characterize individuals as a way to track their recovery as well. Most of the people we're testing are people who have aphasia resulting from a left-hemisphere stroke. But we can use something about the way the model characterizes these people to make predictions about how rapidly they'll recover, or things like that.

What have you learned from your collaborations with scientists in the CSD field?

Before I started working on this, I didn't know anything about communication sciences or clinical applications of the field. I was a psychologist; my technical field is called cognitive science, which studies the mind as a kind of computational system. But then I met up with Myrna Schwartz and Eleanor Saffran from Philadelphia, and Nadine Martin, and they all suggested that we should apply these models to aphasia. And the rest, to me, is history. I've been learning about aphasia, I've been learning about how to take a model of unimpaired speech production and apply it to pathological cases.

What I've learned is that, first of all, aphasic individuals on the surface seem to be quite diverse. Some make this kind of error, some make that kind of error. Some detect their errors easily, some don't detect their errors. They seem very diverse. But, from the perspective of our model, they are maybe less diverse than we thought. We've been able to characterize much of the variation among aphasic individuals with two dimensions.

This is only variation with regard to word retrieval processes, not with regard to actual grammatical processes. But with regard to word retrieval, we've narrowed it down to two dimensions: a phonological dimension and a semantic dimension. And just those two dimensions define a two-dimensional space in which aphasic individuals can sit.

The other thing that I've learned is that -- when I first started working on this, and my colleagues showed me the data, I looked at it and I said, "This is so complicated. These individuals make such complicated errors. I don't know how you could possibly characterize it. I'm just amazed that you can even code these errors in a reliable way." So, I almost said, "This isn't going to work." But my colleagues said, "Oh no, this will work. This will work." And I was amazed that it was working.

What I've learned is that there's much more of a continuity between the errors that you and I would make just talking right now and those that aphasic individuals make. It's not as if their errors are fundamentally different; there are just more of them -- and just a little more complicated. This idea actually was proposed by Sigmund Freud. He said that the paraphasias that aphasic individuals make are not fundamentally different from regular slips of the tongue. That's kind of been what we've discovered, and that's the basis for how this computational neural network model works. We first set it up so it makes slips in a sort of normal way. Then just by tweaking it, we can turn it into a model of how aphasia works.

Perhaps the most important clinical application of these things is then using the model to make recommendations about therapeutic interventions. There are a number of people around the world who are doing this. Neural network models have the property that they can be used to understand learning or change. That is, you can take the neural network and train it, teach it things. So if there is something it doesn't do right, you can give it some experience so it gets better. In this way it can be used also as a model of learning, or re-learning, in therapeutic situations. What I really hope is that these computer models will be used to develop effective therapies.
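One way to picture relearning in such a model: treat a damaged connection as a weight that practice can strengthen. The sketch below is entirely hypothetical -- a single weight driving naming accuracy through a logistic function, with a simple error-driven (delta-rule) update standing in for a therapy trial -- not any published therapy model.

```python
import math

def therapy_simulation(initial_w=0.2, sessions=50, lr=0.5):
    """Sketch of relearning in a damaged network: a single connection
    weight determines naming accuracy via a logistic function, and each
    simulated therapy session applies an error-driven update that nudges
    the weight back toward its pre-damage strength."""
    w = initial_w          # the 'lesioned' (weakened) connection weight
    accuracy = []
    for _ in range(sessions):
        p_correct = 1 / (1 + math.exp(-4 * (w - 0.5)))  # accuracy from weight
        accuracy.append(p_correct)
        w += lr * (1 - p_correct)  # delta rule: practice strengthens the weight
    return accuracy

curve = therapy_simulation()
# Accuracy starts low (damaged weight) and climbs toward ceiling with practice.
```

A curve like this is the toy analogue of tracking recovery: the model's trajectory, not just its starting point, becomes something you can compare against a patient's.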

Further Reading: Referenced Resources
Dell, G.S. (2013). What Freud got right about speech errors: Evidence from aphasia. [Presentation Slides] ASHA Convention.
University of Illinois at Urbana-Champaign Language Production Lab. Aphasia Modeling Project (WebFit). (Available from the Language Production Laboratory website at http://langprod.cogsci.illinois.edu)