Dyslexia Brain Differences Show Up Before Formal Reading Instruction

Thursday, January 28, 2010

Last time, we talked about early behavioral differences between prereading children that predicted future reading impairment. Today, we’re continuing on the theme of early predictive differences, this time in the brain.



The question of how early brain differences arise is a worthwhile one. We want to know whether the dyslexic brain is tackling reading differently from the very beginning or if these brain differences arise after some reading experience, perhaps reflecting compensatory strategies that the children may have developed.

Specht and colleagues (Scandinavian Journal of Psychology, 2009) conducted a brain imaging study on Norwegian children (a good population to study because formal reading instruction starts in second grade in Norway). The basic goal of their experiment was to scan 6-year-olds (before they had learned to read) and see whether they process words differently depending on their risk for dyslexia. Unlike the Lervag study, this one was not longitudinal. Specht and colleagues determined which kids were at risk for dyslexia using a risk index that took into account heredity, language development, and other factors.
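The post doesn't give the formula behind the risk index, but the basic idea of combining several factors into one score can be sketched like this. Everything here is hypothetical: the factor names, the weights, and the 0-1 scaling are my own illustration, not the instrument Specht and colleagues actually used.

```python
# Toy sketch of a multi-factor risk index. The weights and factor names
# are invented for illustration; the actual index combined heredity,
# language development, and other measures in a way not detailed here.

def risk_index(factors, weights):
    """Weighted sum of risk factors, each scaled 0 (no risk) to 1 (high risk)."""
    return sum(weights[name] * value for name, value in factors.items())

weights = {"heredity": 0.5, "language_development": 0.3, "other": 0.2}

# A child with a family history of dyslexia and some language delay:
child = {"heredity": 1.0, "language_development": 0.5, "other": 0.0}
print(round(risk_index(child, weights), 2))  # 0.5*1.0 + 0.3*0.5 = 0.65
```

A cutoff on such a score (say, above 0.5) would then define the "at-risk" group for the scanning study.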

Kids looked at four kinds of stimuli during an fMRI scan: pictures, logos, regular words, and irregular words, while performing a categorization task (“Is this something you can play with?” and similar questions). I won’t spend too much time comparing between conditions because I’m not clear on what characteristics were controlled for across the stimulus types.

The at-risk and not-at-risk groups differed in all conditions, and there were several interesting findings. First, risk index score correlated with increased activation in the angular gyrus when looking at words; this area has been reported to be involved in language and phonological processing.

Our old friend, the visual word form area, also shows up. At a more liberal statistical threshold (p < .001 uncorrected, with a small volume correction), they found that risk index score correlates negatively with left occipitotemporal activation when viewing irregular words.

So what does this mean? For one thing, differences arise early, before formal reading instruction. The two groups did not differ significantly on standardized reading measures at the time of testing (although there was a trend, p < .096, toward a difference in reading scores). So there seems to be something different about how these kids approach words from the very beginning. It would be interesting to know what is driving these differences. I wonder what strategies the kids were using in the scanner to complete the categorization task, especially since they couldn’t read yet. For the word conditions, kids’ accuracy was only 20-30%. What were they doing for the words they couldn’t read? Were the scanner differences driven by the words they could recognize, or by all the words?

I’m particularly puzzled by the VWFA findings. The VWFA is usually thought to develop through expertise with letters. Does this mean that even before reading instruction there is some difference in expertise between kids at risk and kids not at risk? Interesting questions for future investigation.

Specht, K., Hugdahl, K., Ofte, S., Nygård, M., Bjørnerud, A., Plante, E., & Helland, T. (2009). Brain activation on pre-reading tasks reveals at-risk status for dyslexia in 6-year-old children. Scandinavian Journal of Psychology, 50(1), 79-91. PMID: 18826418


Color and Object Naming Speed Predicts Future Risk for Dyslexia

Wednesday, January 20, 2010

An important goal of research on any developmental disorder is early detection. The earlier the detection, the earlier we can start intervention and treatment. Dyslexia is tricky, though. It’s a reading disorder, and by definition it cannot be diagnosed until reading instruction begins. However, we can still look for signs that predict future risk for dyslexia.

One predictor of future dyslexia is rapid automatized naming (RAN) speed. A RAN test consists of naming an array of objects, colors, letters, or symbols as quickly as possible. It makes sense that letter and symbol naming speed (also called alphanumeric RAN) might predict reading skill. Surprisingly, however, speed at naming pictures of objects and color patches also predicts future reading skill. Lervag and Hulme (Psychological Science, 2009) examined this in a longitudinal study of Norwegian schoolchildren.


The idea was to test children before they learned to read and look for test results that predicted future reading skill. Formal reading instruction in Norway begins in second grade, so Lervag and colleagues conducted several tests on first graders. These included RAN and phoneme awareness (tasks like picking a word that begins with a certain sound). They then retested the students in second, third, and fourth grade on these skills as well as on reading fluency. They found that performance on nonalphanumeric RAN in first grade predicted phoneme awareness and reading fluency later on.

Lervag and colleagues investigated this finding in further detail by separating RAN response times into articulation times and pause times between words. They found that pause times were a much better predictor of future reading performance than articulation times. It should be noted, though, that articulation times were much less variable than pause times, with a standard deviation of around 4.5 ms rather than about 16 ms.
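To make the articulation/pause decomposition concrete, here is a minimal sketch. It assumes each named item comes with a speech onset and offset timestamp; those timestamps and the example numbers are invented, and the authors' actual speech-segmentation pipeline may well differ.

```python
# Sketch of splitting a RAN response stream into articulation times
# (speech duration per item) and pause times (silence between items).
# Input format is hypothetical: a list of (onset, offset) pairs in
# seconds, one per named item, in temporal order.

def decompose_ran(segments):
    articulations = [off - on for on, off in segments]
    pauses = [segments[i + 1][0] - segments[i][1]
              for i in range(len(segments) - 1)]
    return articulations, pauses

# Three items spoken at 0.0-0.4 s, 0.9-1.3 s, and 2.0-2.4 s:
arts, pauses = decompose_ran([(0.0, 0.4), (0.9, 1.3), (2.0, 2.4)])
# arts are each ~0.4 s; pauses are ~0.5 s and ~0.7 s
```

In this toy example the articulation times are identical while the pauses vary, mirroring the study's observation that most of the informative variance lives in the pauses.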

These are pretty interesting results, and they raise the obvious question: what is it about rapid color and object naming that predicts future reading skill? I can think of several possibilities.

1. Lervag suggests that reading may tap into the same pathways used for object recognition and naming. Remember the mirror invariance paper from last week that suggested the visual word form area might be involved in both word and object recognition? It could be that both reading and object naming are served by the same brain areas. Both reading and rapid naming involve seeing an object, retrieving its phonological representation, and outputting the phonology via motor routines. Perhaps performance on RAN reflects the strength of the connections between visual, phonological, and speech areas.

2. Attentional focusing could be another factor. Rapid automatized naming requires directing visual attention from one object to the next in a controlled manner, and reading requires the same skill. Perhaps a deficit in attentional focus could be the underlying factor.

3. An even lower level explanation would be visual motor control. Both RAN and reading require controlled eye movements from one item to the next. Deficits in motor control have been reported in dyslexia, but I don’t know of any reported eye movement deficits. If anyone knows more about this, do let me know.

What do you think is the connection between RAN and dyslexia?


Haiti Fundraiser on Other Blog

Tuesday, January 19, 2010

There will be more science articles coming soon, but I just wanted to announce that there's a fundraiser for Haiti on my creative writing blog. Please go take a look!


A Rose From Any Other Direction is Still a Rose, But Its Name is Not

Tuesday, January 12, 2010

Imagine a cup. Now rotate it 180 degrees. What is it now? It’s still a cup. Surprising? Not really. We live in a three dimensional world and take for granted that something flipped around an axis doesn’t change into something else. This generalization across mirror images is called mirror invariance and holds true for most visual stimuli.

But what about the letter b? Draw it on something transparent, turn it around – and lo and behold, it changes into the letter d! Writing systems are one of the few domains where an object’s orientation matters. Our brains have trouble with this, and this difficulty becomes apparent when children learn to read and write. It is well documented that children often flip letters when writing, and sometimes even write entire words spontaneously flipped. It is only after years of practice that they get over this tendency.

Neuroscientist Stanislas Dehaene and colleagues recently found evidence of mirror invariance (and the lack thereof for words) in the brain.


They used a technique known as fMRI repetition suppression. The basic logic of the technique is this: brain regions responding to a repeated stimulus show decreased activation, thought to be due to neuronal adaptation or fatigue. Therefore, by showing someone different visual stimuli and looking for repetition suppression, we can get an idea of which stimuli the brain categorizes as the same (a repeat) and which as different.
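The inference behind repetition suppression can be sketched in a few lines. The activation values below are made up for illustration (think of them as region response estimates, e.g. beta weights); none of them come from the study itself.

```python
# Minimal sketch of the repetition suppression logic: if a region treats
# the prime and the target as "the same," its response to the target is
# lower than when the prime was unrelated. All values are hypothetical.

def suppression_index(novel_response, repeated_response):
    """Positive values indicate repetition suppression."""
    return novel_response - repeated_response

# Hypothetical region responses to a target picture, by prime type:
same_prime = 0.6       # prime was the identical picture
mirror_prime = 0.65    # prime was the mirror image
unrelated_prime = 1.0  # prime was a different picture

identity_rs = suppression_index(unrelated_prime, same_prime)    # 0.4
mirror_rs = suppression_index(unrelated_prime, mirror_prime)    # 0.35
# A clearly positive mirror_rs would suggest the region treats the
# mirror image as a repeat, i.e. it is mirror invariant.
```

The study's key contrast is exactly this: regions with positive mirror suppression for pictures but not for words.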

In this particular study, reported in NeuroImage (49, 2010, 1837-1848), participants saw words and pictures preceded either by a normal or a mirror-reversed image from the same category. (The participants were asked to perform an unrelated size judgment task.) The researchers found that picture and word processing regions both showed repetition suppression to repeated identical images. This was not surprising; it’s the basic repetition suppression effect.
Dehaene and colleagues then looked for areas that showed repetition suppression for words/pictures and their mirror images. They found a region in the left fusiform gyrus that showed mirror repetition suppression for pictures, but no such region that showed mirror repetition suppression for words. Therefore, they found evidence for brain regions that considered pictures equivalent to their mirror images, but no such regions for words.

Surprisingly, the region that showed the strongest mirror invariance effect for objects was an area of the left fusiform known as the visual word form area, which many studies have shown to be active during word processing. Therefore, it’s possible that the very same brain region that processes pictures in a mirror-invariant way knows to behave differently for words. With the low resolution of fMRI, we can’t rule out the possibility that the word network and picture network are in fact separate, but it should be an interesting question to pursue.

Is this lack of mirror invariance specific to writing systems we’ve learned, or does it hold for any script-like stimulus? A follow-up behavioral study suggests the latter. In this study, people saw words, tools, faces, and scripts. Each image was flashed for 200 ms and was preceded by another image of the same category, in either normal or flipped orientation. The task was to say whether the two stimuli were the same, where a mirror image also counted as the same.

As measured by reaction time, people were relatively quick to judge an object and its mirror reflection to be the same, but slower to judge two mirror-image scripts to be the same. This difficulty held even for unfamiliar or false scripts, suggesting that the brain generalizes the importance of orientation to unfamiliar, unlearned scripts.
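The effect described above amounts to a reaction-time cost for mirror judgments on scripts relative to objects. A scoring sketch might look like this; the trial records are invented for illustration, not data from the study.

```python
# Sketch of scoring the behavioral effect: mean reaction time for
# "same" judgments of mirror pairs, split by stimulus category.
# Trial data are hypothetical.

from statistics import mean

trials = [
    # (category, reaction_time_ms) for correct mirror-pair trials
    ("object", 520), ("object", 540),
    ("script", 610), ("script", 650),
]

def mean_rt(trials, category):
    return mean(rt for cat, rt in trials if cat == category)

mirror_cost = mean_rt(trials, "script") - mean_rt(trials, "object")
print(mirror_cost)  # positive -> scripts are harder to match across mirrors
```

A positive cost even for unfamiliar scripts is what supports the claim that the brain generalizes orientation sensitivity to script-like stimuli it has never learned.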

Ideas for future research? The obvious next step would be to repeat this with children. Also, does anyone know of any studies of word-to-object cross-categorical priming? That would help answer the question of whether the word and object networks are the same.



About

Monday, January 11, 2010

Hello, my name is Livia and I'm a graduate student at the Brain and Cognitive Sciences department at MIT. The main purpose of this blog is to force myself to review the scientific literature in my dissertation area. This is also an experiment to see whether it's possible to discuss current neuroimaging research in a way that's specific enough to be useful to people in the field, but general enough so that laypeople can follow along.  We'll see...

If you write fiction, you may also be interested in my writing blog:  A Brain Scientist's Take on Creative Writing.

Email:  liviablackburne [at] gmail [dot] com

