White Matter and Reading Ability

Thursday, October 7, 2010

Accessibility:  Intermediate-Advanced

Hello folks.  Things are pretty busy over here and I might have to review a lot of papers soon, so there's a possibility that entries here will get shorter and a bit more technical.  But we'll see.

Since reading is by nature a multimodal task involving both visual and language regions, it makes sense to look at brain connections in dyslexia. I've written once about white matter in dyslexia, when I blogged Bernard Chang’s PNH study. Today I'll cover two other studies that look at white matter and reading.



As a quick recap, brain tissue is often categorized into gray and white matter. White matter consists mostly of axons, the parts of neurons that send signals to other neurons. White matter tracts therefore carry information between brain regions, and diffusion tensor imaging (DTI) is a technique often used to study them. You can take several measures with DTI, but a common one is fractional anisotropy (FA), a measure of the directionality of water diffusion. You can think of it as a measure of white matter integrity.
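If you like seeing the math, FA is computed from the three eigenvalues of the diffusion tensor estimated at each voxel. Here's a minimal sketch in Python (my own toy illustration, not code from either study):

```python
import numpy as np

def fractional_anisotropy(l1, l2, l3):
    """FA from the three eigenvalues of a voxel's diffusion tensor.
    0 = fully isotropic diffusion (e.g. CSF), 1 = diffusion along a
    single axis (e.g. a tightly packed fiber bundle)."""
    evals = np.array([l1, l2, l3], dtype=float)
    dev = evals - evals.mean()
    return np.sqrt(1.5) * np.sqrt((dev ** 2).sum()) / np.sqrt((evals ** 2).sum())

print(fractional_anisotropy(1.0, 0.95, 0.9))  # nearly isotropic voxel, FA close to 0
print(fractional_anisotropy(1.7, 0.3, 0.2))   # strongly directional voxel, FA near 0.8
```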

In one study, James Andrews and colleagues measured white matter integrity in preterm* and term children. They found a correlation between reading skill and fractional anisotropy in the corpus callosum, the large white matter tract that connects the two hemispheres. They also found a trend toward a correlation between reading skill and fractional anisotropy in the left temporoparietal region, a region often associated with reading. I'm surprised by the corpus callosum finding, and wonder what its role might be in reading. Is the corpus callosum connecting language regions to their right hemisphere homologues? I also wonder if this is something general to the population, or a difference unique to preterm children. I guess we'll have to see if this finding comes up in later studies.

Another DTI study found some more predictable results. Rimrodt and colleagues scanned the brains of children with dyslexia and normal-reading children between the ages of 7 and 16. They found that children with dyslexia had lower FA in the left inferior frontal gyrus and the left temporoparietal region, both areas previously implicated in reading. Interestingly, they also found that FA in some posterior areas involved in visual word processing (including the left fusiform) was correlated with speeded word reading.

*mean gestational age 30.5 weeks

Andrews, J., Ben-Shachar, M., Yeatman, J., Flom, L., Luna, B., & Feldman, H. (2009). Reading performance correlates with white-matter properties in preterm and term children Developmental Medicine & Child Neurology, 52 (6) DOI: 10.1111/j.1469-8749.2009.03456.x

Rimrodt, S., Peterson, D., Denckla, M., Kaufmann, W., & Cutting, L. (2010). White matter microstructural differences linked to left perisylvian language network in children with dyslexia Cortex, 46 (6), 739-749 DOI: 10.1016/j.cortex.2009.07.008


Noise Exclusion Deficits in Dyslexia

Wednesday, August 18, 2010

Accessibility:  Intermediate-Advanced

The human visual system includes two pathways, magnocellular and parvocellular, deriving from two types of retinal ganglion cells that project to different layers of the lateral geniculate nucleus. Generally speaking, the magnocellular pathway is specialized for movement while the parvocellular pathway is specialized for color and detail. Some researchers have found dyslexia to be associated with magnocellular impairment, although the evidence has been mixed.

A paper from Sperling and colleagues argues that magnocellular deficits in dyslexia may actually be a deficit in noise exclusion. The authors tested children with and without dyslexia using stimuli designed to activate the magnocellular or parvocellular pathways. The magnocellular stimulus was a patch with wide bars that alternated rapidly between light and dark. The parvocellular stimulus had thin light and dark bars that did not alternate.



In addition to the two stimulus types, there were high noise and low noise conditions. In the low noise condition, one of the stimuli appeared to the left or right of the fixation mark. In the high noise condition, noise patches appeared on either side of fixation and the stimulus was overlaid onto one of the noise patches. In both cases, the child had to say on which side the stimulus appeared.

The authors calculated contrast thresholds (the amount of contrast needed between the light and dark bars for accurate detection) for both groups of children. They found no difference in the contrast thresholds for the low noise condition. In the high noise condition, dyslexic children had higher contrast thresholds (more difficulty detecting) for both the magnocellular and parvocellular stimuli. In addition, thresholds in the high noise condition were correlated with language measures.
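As a side note on how a contrast threshold is typically obtained (the exact psychophysical procedure in this paper may have differed, so treat this as an illustrative sketch): you measure accuracy at several contrast levels, fit a psychometric function, and read off the contrast that corresponds to a criterion accuracy.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull(contrast, threshold, slope):
    # 2AFC psychometric function: 50% guessing floor, ~82% correct at threshold
    return 0.5 + 0.5 * (1 - np.exp(-(contrast / threshold) ** slope))

# Hypothetical proportion-correct data at six contrast levels
contrasts = np.array([0.01, 0.02, 0.04, 0.08, 0.16, 0.32])
accuracy  = np.array([0.52, 0.58, 0.70, 0.86, 0.95, 0.99])

(threshold, slope), _ = curve_fit(weibull, contrasts, accuracy, p0=[0.05, 2.0])
print(f"estimated contrast threshold: {threshold:.3f}")
```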

These are interesting results. While one study cannot rule out the magnocellular theory of dyslexia, this does open the possibility that many of the results that pointed to a magnocellular deficit were actually cases of noise exclusion deficit.   I do remember one paper about motion perception and dyslexia that can't be explained by noise, so I'll see if I can write about that later.

Another question is, how does noise exclusion lead to dyslexia? It could be that a noise exclusion deficit results in difficulties building phonological categories, which in turn affect reading. The authors also mention that noise exclusion could affect learning in the visual modality by making it harder to extract regularities from different fonts and scripts.


Sperling, A., Lu, Z., Manis, F., & Seidenberg, M. (2005). Deficits in perceptual noise exclusion in developmental dyslexia Nature Neuroscience DOI: 10.1038/nn1474


Sensitivity and Specialization in the Occipitotemporal Region: Differences in Dyslexic Children

Wednesday, August 4, 2010

Accessibility: Advanced/Intermediate

Early research on the role of the occipitotemporal region in reading often focused on characterizing a single region in the mid fusiform, commonly called the visual word form area. Since then, focus has gradually shifted from that single region to the entire length of the occipitotemporal cortex, looking at how sensitivity and tuning change as you move from posterior to anterior regions.



Van der Mark and colleagues used an approach like this to look at dyslexic and control children aged 9-12 years. Eighteen normal-reading and twenty-four dyslexic children performed a phonological lexical decision task in the scanner. Children saw words, pseudohomophones (letter strings that sound like real words but are spelled differently, like “taksi”), pseudowords (pronounceable nonwords), and false fonts. The children were asked to decide whether each stimulus sounded like a real word. For example, the correct response would be “yes” for words and pseudohomophones and “no” for pseudowords and false fonts.

The children with dyslexia did worse on pseudohomophones and pseudowords and performed similarly to the controls on words and false fonts.

The authors report two main findings. First, the control children showed a gradient of print specialization in the occipitotemporal region, with more activation to false fonts in posterior regions and more activation to real letters in anterior regions. The children with dyslexia did not show this gradient.

Second, controls showed more activation for pseudowords and pseudohomophones than for words, while children with dyslexia didn't.

This is a nice study that takes a more nuanced approach to dyslexia brain differences. Brem and colleagues also got similar results with words and false fonts.

By now there's quite a bit of literature on the specialization of the visual word form area. My own struggle, as I'm also doing this type of research, is the question of what it all means. We have all these studies showing brain differences between control and dyslexic children, but what does it mean to have more or less activation? That the brains of dyslexic children process words differently? I could've told you that before we started.

So what would help? Perhaps the next step in dyslexia research, now that we've mapped out the basic differences, is to zoom in as much as we can on the relationships between brain differences and behavioral differences. Perhaps more fine-grained behavioral measures would help, or more intervention studies that look at brain activation before and after training. It may also help to look at functional connectivity and how different brain regions interact. Anyone else have ideas?



van der Mark S, Bucher K, Maurer U, Schulz E, Brem S, Buckelmüller J, Kronbichler M, Loenneker T, Klaver P, Martin E, & Brandeis D (2009). Children with dyslexia lack multiple specializations along the visual word-form (VWF) system. NeuroImage, 47 (4), 1940-9 PMID: 19446640


Reading-Induced Epilepsy

Monday, July 12, 2010

Accessibility:  Intermediate

Just a short entry today.  Clinical research is not my specialty, but I ran across a case study on reading-induced epilepsy.


Seizures began during silent reading with the feeling of no longer being able to understand what she was reading (a- or dyslexia). After looking up from the page, she then continued to see letters and words despite actual disappearance of that image from either visual field (palinopsia). She had a feeling of strangeness. She could then have right hemi-body jerks and secondary generalisation. Seizures usually occurred soon after the onset of reading (less than 10 min). All seizures occurred during silent reading. She had not abandoned reading altogether but had developed a distinct style of reading to try to avoid the onset of seizures, in that she read only for short periods and tended to scan the page diagonally.

Not surprisingly, clinical tests revealed that these seizures started in the occipitotemporal region.



Gavaret, M., Guedj, E., Koessler, L., Trebuchon-Da Fonseca, A., Aubert, S., Mundler, O., Chauvel, P., & Bartolomei, F. (2009). Reading epilepsy from the dominant temporo-occipital region Journal of Neurology, Neurosurgery & Psychiatry, 81 (7), 710-715 DOI: 10.1136/jnnp.2009.175935


fMRI of Letter Processing in Children and Adults

Thursday, July 8, 2010

Accessibility: Intermediate-Advanced

How is letter processing different from word processing? Since letters compose words, many reading models have letter processing earlier in the reading stream, but there is still room for more imaging work.

Turkeltaub and colleagues compared the neural basis of letter processing in children (age 6-11) and adults (age 20-22). The participants were scanned while naming either letters or line drawings out loud. Here are four of their findings.



1. Adults had more activation than children in visual regions. This appeared to be driven mostly by differences in letter naming*. This suggests that object processing might already be relatively adult-like in kids at this age.

2. Areas showing a change in letter processing with age were posterior to regions found in other studies to respond to words. Since visual processing moves from back to front, this fits with a model in which letters are processed before words.

3. The authors found no left hemisphere dominance for letters. This is very different from words, which are heavily left lateralized. It is also different from Cantlon 2010, which did find letter processing to be left lateralized. I wonder if the results here would be different if the authors had used another method to pick their analysis region**.

4. The authors also found that no regions activated more for letters than for objects. This is consistent with what I also find in my data. Objects are more visually complex than letters, so it's not surprising that you get more activation for objects. I should note that Cantlon did find regions that responded more to letters than objects, but Cantlon's object set consisted only of shoes, which as a set are more uniform than line drawings of different objects.

*although there is no interaction between objects and letter naming

**ROI selection based on activation for all tasks.


Turkeltaub PE, Flowers DL, Lyon LG, & Eden GF (2008). Development of ventral stream representations for single letters. Annals of the New York Academy of Sciences, 1145, 13-29 PMID: 19076386


A Meta-Analysis of Dyslexia Brain Imaging Studies

Wednesday, June 23, 2010

Accessibility: Advanced

fMRI experiments, with their small sample sizes, can easily fall victim to variability within the subject pool. This is especially true for patient studies. So it’s nice to step back and look at the big picture once in a while, and see where different studies agree and disagree.



Richlan and colleagues recently did a meta-analysis of dyslexia brain imaging studies. They used an algorithm called Activation Likelihood Estimation (ALE), which models each reported focus of activation as a Gaussian probability distribution. (The software is called GingerALE. Ha.)
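The core idea is simple: blur each reported activation peak into a 3D Gaussian blob, build one "modeled activation" map per study, and combine the maps across studies. Here's a toy sketch of that step (my own simplification of the idea, not GingerALE's actual implementation):

```python
import numpy as np

def ale_map(foci_per_study, shape=(20, 20, 20), sigma=2.0):
    """Toy ALE: each study's peaks become a Gaussian 'modeled activation' (MA)
    map; maps are combined as 1 - prod(1 - MA), the probability that at least
    one study activates a given voxel."""
    grid = np.indices(shape).reshape(3, -1).T      # every voxel's (x, y, z)
    prod_term = np.ones(grid.shape[0])
    for foci in foci_per_study:                    # one list of peaks per study
        ma = np.zeros(grid.shape[0])
        for focus in foci:
            d2 = ((grid - np.array(focus)) ** 2).sum(axis=1)
            ma = np.maximum(ma, np.exp(-d2 / (2 * sigma ** 2)))
        prod_term *= 1 - ma
    return (1 - prod_term).reshape(shape)

# Two hypothetical studies reporting nearby peaks (in voxel coordinates)
ale = ale_map([[(5, 5, 5)], [(6, 5, 5)]])
print(ale[5, 5, 5], ale[15, 15, 15])  # high where the studies agree, near zero elsewhere
```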

Richlan and colleagues picked studies with the following criteria:
1) Uses words, pseudowords, or single letters as stimuli,
2) Uses a reading or reading-related task in the scanner, and
3) Reports group comparisons in a standard stereotactic space.
The included studies used both PET and fMRI.

The take-home message is that people with dyslexia underactivate posterior reading regions and may overactivate frontal regions.

The authors found underactivation in regions associated with the phonological reading pathway (reading by sounding out words), including the superior temporal gyrus and inferior parietal lobule. Interestingly, they found no difference in the angular gyrus, a region that has often been reported to be important to reading.

They also found underactivation in the pathway associated with automatic whole-word reading, including the left fusiform, inferior temporal, and middle temporal regions.

At a less conservative threshold, the authors found that people with dyslexia overactivated the left inferior frontal region. This is typically interpreted as frontal regions being recruited to compensate for posterior reading regions.

They did find one posterior region that was overactivated in people with dyslexia: the left lingual gyrus, a lower-level visual region. Perhaps again a case of compensation.

All in all, a nice summary of dyslexia results. Again, I wonder about the relative variability of dyslexics and controls, and how it affected the results.


Richlan, F., Kronbichler, M., & Wimmer, H. (2009). Functional abnormalities in the dyslexic brain: A quantitative meta-analysis of neuroimaging studies Human Brain Mapping, 30 (10), 3299-3308 DOI: 10.1002/hbm.20752


Evidence Suggesting that Specialized Visual Regions Are Formed by Pruning in Early Childhood

Monday, May 24, 2010

There are quite a few specialized visual regions in the brain. For example, the fusiform face area (FFA) activates for faces, and the visual word form area (VWFA) in the left fusiform is consistently active for words.

How do these specialized cortical regions develop? Is it experience dependent? Do regions have a preexisting preference for certain visual features? (For example, perhaps the visual word form region prefers high contrast stimuli with sharp borders.) Do these regions form by increasing activation to preferred stimuli, or by decreasing activation to nonpreferred stimuli? Cantlon and colleagues investigated these questions in a recent study.



They tested prereading five-year-olds and adults in an fMRI experiment. Participants saw faces, letters, numbers, shoes, and scrambled images and pressed a button if a green border appeared around the picture. There were two interesting findings.

The first concerned the visual word form area. Both adults and children had a specialized brain region in the left fusiform that activated more for letters than objects. However, while adults activated that region more for letters than for numbers, children had equally high activation for letters and numbers.*

These results support a role for both experience and low level visual features in the development of the visual word form area. Note that these children are nonreaders, but they already activate the left fusiform for letters and numbers. So perhaps there’s something hardwired in the left fusiform that prefers symbol-like, high contrast, visual stimuli. But only adults, who have had extensive experience with letters, show differential activation for words and numbers.

The authors then investigated the relationship between activation level and behavior. They tested children on a face matching task and a letter naming task. Contrary to what you might expect, activation in the fusiform face area did not correlate with face matching skill, and activation in the visual word form area did not correlate with letter naming skill.

Rather, skill was negatively correlated with activation to the nonpreferred category. Face matching performance was inversely correlated with FFA activation to shoes. And letter naming was inversely correlated with VWFA activation to faces. This suggests that increased skill in face and letter recognition is associated not with enhancing activation to preferred stimuli, but with pruning back activation to unrelated stimuli. **

*Methodological note: ROI selection took the 10 strongest voxels within a 10 mm radius sphere around the peaks of the all > scrambled contrast.

**Note that not all nonpreferred stimuli show this inverse correlation. In the face area, there was no correlation between face skill and activation to symbols, and in the VWFA, there was no correlation between letter naming skill and activation to shoes. Perhaps these nonpreferred stimuli are too far from the preferred stimulus, so no pruning is needed?



Cantlon JF, Pinel P, Dehaene S, & Pelphrey KA (2010). Cortical Representations of Symbols, Objects, and Faces Are Pruned Back during Early Childhood. Cerebral cortex (New York, N.Y. : 1991) PMID: 20457691


Multimodal Investigation of Reading in Children: More from Brem and Colleagues

Tuesday, May 18, 2010

Accessibility: Advanced

Last time we read an article from Brem and colleagues that compared word processing in adolescents (age 15-17) and adults (age 19-30). In a follow-up paper from 2009, Brem expanded the report to include children (age 9-11).

If you didn't read the last post, it's probably a good idea to do that first. I won't repeat the methodological details or background information here; I'm just gonna make a few quick notes on their results.



The 2006 paper found that adolescents had higher N1 amplitudes than adults. Here, Brem reports that children have an even higher N1 amplitude than adolescents, suggesting a steady decrease in N1 amplitude from age 9 onwards.

For all groups, the N1 amplitude was higher for words than symbols. However, the difference between words and symbols declined with age. At first, I found this counterintuitive. I would have expected the opposite, with kids treating words and symbols similarly and the word/symbol difference getting larger as they matured and became better readers. The kids in this study, however, have already been reading for a few years. Perhaps they're at the stage where they can process the words but are less efficient at doing so, resulting in a higher N1 amplitude for words than symbols.

On the fMRI front, Brem found the same posterior-to-anterior gradient in the fusiform gyrus, with posterior regions being more responsive to symbols and anterior regions being more responsive to words. There didn't seem to be any difference between age groups there.

Brem also reports that a higher signal in the anterior fusiform is correlated with slower reading. (This is the opposite of what was reported in the other paper; perhaps I'm misreading one of them.)

There were some discrepancies between the EEG and fMRI results. The N1 ERP component shows a clear difference between words and symbols, but the fMRI analysis doesn't show differences in the occipitotemporal region, the calculated source of the N1. This could be due to temporal resolution. The N1 component only lasts about 100 ms. EEG has good enough temporal resolution to pick up on the difference, but fMRI may not.


Brem S, Halder P, Bucher K, Summers P, Martin E, & Brandeis D (2009). Tuning of the visual word processing system: distinct developmental ERP and fMRI effects. Human brain mapping, 30 (6), 1833-44 PMID: 19288464


Developmental Changes in Word Processing After Adolescence

Friday, May 14, 2010

When does brain development for reading stop? We often focus on school aged children, but what about the later teen years? To answer this question, Brem and colleagues tested adolescents (age 15-17) and adults (19-31) in a study using fMRI and EEG.



Participants were presented with words and symbol strings and asked to detect repeats. It's an easy task, so it's not surprising that the two groups had equal reading accuracy and speed. However, there were brain differences.

Brem focused on two early ERP components. The P1 component, a positive peak at about 100 ms, is sensitive to low-level stimulus characteristics like luminance and size. Brem found that this component had a higher amplitude for symbol strings than for words in both groups.

The N1 component occurs later (140-220 ms) and is sensitive to higher-level factors like stimulus category. Brem found that the later part of the N1 component was more pronounced for words than symbol strings. Source localization of the N1 found that the early part localized to the temporo-parieto-occipital junction, while the late N1 localized to the left fusiform.

There were differences between the two groups. Adolescents had higher P1 and N1 amplitudes than adults. The N1 latency also became shorter with age for words but not symbol strings.

Brem also used fMRI to look at the spatial organization of the fusiform gyrus*. Posterior fusiform regions responded more to symbol strings than words, while anterior regions responded more to words than symbol strings.

The left fusiform region seems to be related to reading skill. A bigger N1 amplitude was correlated with fewer mistakes on a reading test, and a higher fMRI signal in the anterior fusiform was correlated with faster reading.

It’s interesting that despite similar behavior between groups, brain measures still differ. I do wonder about differences within the adults as well. 19-31 is a pretty big range, so I'd like to see what happens after age 18.

*Using five regions of interest: 6 mm spheres based on Talairach coordinates.


Brem S, Bucher K, Halder P, Summers P, Dietrich T, Martin E, & Brandeis D (2006). Evidence for developmental changes in the visual word processing network beyond adolescence. NeuroImage, 29 (3), 822-37 PMID: 16257546


Brief Introduction to ERP Components

Thursday, May 13, 2010

Accessibility: Basic

EEG (electroencephalography) uses scalp electrodes to measure electrical field potentials that result from brain activity. Many EEG studies focus on event-related potentials (ERPs), patterns of activity that occur in response to a stimulus or cognitive event (hence, "event related").

Usually, an experimenter averages the brain response over many trials to achieve an adequate signal-to-noise ratio. The end result is a waveform representing the average response over trials of a certain type. Peaks and troughs in the waveform are known as components. While the naming of components isn't systematic, they are often named with a letter (P if it's a peak in the positive direction, N if it's in the negative direction) and a number that corresponds either to the approximate time of the peak in milliseconds or to its order of appearance. Commonly studied components include the N400 and P300. To learn more about how ERPs are used in research, take a look at entries with the EEG label.
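To make the averaging step concrete, here is a minimal sketch with simulated data (not from any particular toolbox): a small evoked response buried in larger trial-to-trial noise becomes visible once you average a couple hundred trials.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples = 200, 500                 # 1 s of data at 500 Hz
times = np.linspace(0.0, 1.0, n_samples)

# "True" evoked component: a 5 microvolt bump peaking around 100 ms
true_erp = 5e-6 * np.exp(-((times - 0.1) ** 2) / (2 * 0.02 ** 2))
# Single trials: the same component plus larger background EEG noise
trials = true_erp + rng.normal(0, 10e-6, size=(n_trials, n_samples))

erp = trials.mean(axis=0)                      # noise shrinks by ~1/sqrt(n_trials)
print(f"averaged waveform peaks at {times[np.argmax(erp)] * 1000:.0f} ms")
```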


Letter-sound Training in Children Causes Brain Specialization for Letters

Thursday, April 29, 2010

My research focuses on the left occipitotemporal region. One area in this region, commonly referred to as the visual word form area, has been shown to activate selectively for letters. Presumably, since reading is too recent a phenomenon for a specialized brain region to have evolved, the area develops as a result of experience with words and letters.

To verify this, some studies have trained adults on a new writing system and scanned them pre and post training to see the effects on the occipitotemporal region. The results have been mixed, and complicated by the fact that adults already know a writing system. It would be simpler and more relevant to look at a training effect in children, and that is what Brem and colleagues did. They trained prereading kindergarteners on letters and found that sensitivity to words developed in the occipitotemporal cortex.



The children in this experiment trained on a computerized grapheme-phoneme correspondence game that taught them the sounds associated with individual letters. As a control, they also trained on a nonlinguistic number-knowledge game. The participants did eight weeks on each game, with half the group doing the grapheme training first and the other half doing the number training first. This resulted in a nice within-subject control.

The authors evaluated the children with fMRI and EEG at three time points: 1) before training, 2) after training with the first game, and 3) after training with the second game. During the fMRI and EEG sessions, the children performed a simple modality judgment task. They were presented with spoken or written words, false fonts, or unintelligible speech and simply had to say whether the stimulus was visual or auditory.

After grapheme-phoneme training, kids showed increased activation to words (as compared to false fonts) in the left occipitotemporal region.* The authors then looked more closely along the length of the fusiform gyrus (located in the occipitotemporal region) and found an increase in activation to words in a posterior region (MNI coordinates 46, -78, -12)**. This region is posterior to what is usually reported as the adult visual word form area. It would be interesting to see if the region shifts with age.

The EEG results supported the fMRI findings. One of the ERP components, the N1 peak, was stronger in response to words after training. The source of the N1 localized to the left occipitotemporal region, right cuneus, and posterior cingulate.

This is a nice study because we can see word expertise development in action, in the age group in which it presumably happens in real life. The authors argue, based on previous literature, that it's the visual-phonological mapping that increases specialization in the fusiform, not just visual training. Apparently, previous studies with primarily visual training have not increased activation in the fusiform gyrus, while training adults on phoneme-grapheme mapping did. I haven't looked at those papers recently, but perhaps I'll investigate them next.

* The posterior fusiform, right inferior temporal gyrus, and cuneus showed this effect.
**More specifically, the authors did an ROI analysis where they picked 5 ROIs along the length of the fusiform gyrus. The 4th ROI from the front showed this effect.


Brem S, Bach S, Kucian K, Guttorm TK, Martin E, Lyytinen H, Brandeis D, & Richardson U (2010). Brain sensitivity to print emerges when children learn letter-speech sound correspondences. Proceedings of the National Academy of Sciences of the United States of America PMID: 20395549


Posterior Brain Differences in Children with Dyslexia

Wednesday, April 21, 2010

Accessibility:  Intermediate-Advanced

I realized after the last post that we haven't actually spent much time discussing brain differences between dyslexic and nonimpaired readers. So today, I'm covering an earlier experiment by the Shaywitzes.



In a 2002 paper, Shaywitz and colleagues reported an experiment with 144 children aged 7-17, half dyslexic and half nonimpaired. The children performed several tasks in the scanner, but the paper focuses on two: nonword rhyme (NWR) (Does [PEAT] rhyme with [LEAT]?) and semantic categorization (CAT) (Are [CORN] and [RICE] in the same category?). A line match task was used as a baseline.

During fMRI, the nonimpaired readers showed more activation than dyslexic readers in a large number of left and right hemisphere brain regions.*

The authors also looked at brain regions where reading skill correlated with activation. The left occipitotemporal (OT) region correlated with skill in both tasks, while bilateral parietotemporal regions showed a correlation with skill in the categorization task only.

This isn't the first time activation in the OT has been linked to reading skill. Specht 2009 found that OT activation during a categorization task correlates with reading score even before formal reading instruction. Shaywitz 2004 found activation increases in the left OT region a year after completion of a phonological intervention. Also, this paper reported a negative correlation between reading skill and activation in the right OT gyrus during a categorization task, a correlation that was also reported in Turkeltaub 2003**.

Finally, the authors looked at brain regions where activation correlates with age and found striking differences between dyslexic and nonimpaired readers. Dyslexic readers had many regions that increased in activation with age***. In normal readers, there were few correlations with increasing age, and age correlated negatively with superior frontal and middle frontal regions. One possible explanation is that dyslexics learn to compensate with other brain regions as they grow older. The normal readers, on the other hand, get more efficient in their reading.****

The age result also highlights the variability in the sample. Children with dyslexia change greatly in brain activation as they grow older. You can imagine the variability that this would produce in a random experimental sample of 15 kids. I wonder if there's been much work on relative variability in dyslexic vs. nonimpaired children.

*NWR in left hemisphere sites (inferior frontal gyrus, superior temporal sulcus, superior temporal gyrus, middle temporal gyrus, middle occipital gyrus) and right hemisphere sites (inferior frontal, superior temporal sulcus, middle temporal gyrus, medial orbital). CAT in left (angular gyrus, middle temporal gyrus, middle occipital) and right (middle temporal gyrus, middle occipital).

**Turkeltaub didn't find a positive correlation in left OT, and also used a lower-level task (tall letter detection).

***In NWR, dyslexic readers showed age-related increases in activation in bilateral IFG, basal ganglia, posterior cingulate, cuneus, middle occipital gyri, and left STG.

****Correlations with age in dyslexics and normal readers are also explored in Shaywitz 2007. In that paper, they do report regions in nonimpaired readers that increase activation with age. It might be the same dataset, but I’m not sure.



Shaywitz, B. (2002). Disruption of posterior brain systems for reading in children with developmental dyslexia Biological Psychiatry, 52 (2), 101-110 DOI: 10.1016/S0006-3223(02)01365-3


Phonological Training Changes Brain Activation in Dyslexic Children

Thursday, April 15, 2010

Note: Online Universities has included me in their list of top 50 female science bloggers. It's not actually for this blog, but for my Brain Science and Creative Writing blog. Anyways, check out the list if you get a chance. There are a lot of interesting bloggers.

Accessibility:  Intermediate-Advanced

We’ve looked at the neuroscience of dyslexia and how the dyslexic brain processes words. Our ultimate goal, however, is treatment. Therefore, we’d like to see whether reading interventions cause brain changes in reading-impaired children. In a 2004 paper in Biological Psychiatry, Shaywitz and colleagues investigated this question.



The study focused on kids aged 6-9, divided into three groups. The experimental group consisted of reading-disabled students who went through an eight-month experimental intervention that focused on phonology: letter-sound associations, combining sounds, etc. Another group of reading-impaired children was put in a community intervention control group that participated in a variety of reading interventions, including remedial reading and tutoring. However, there was no specific focus on phonology. A third group, the community control, consisted of normal-reading children.*

All groups improved in their reading measures after 8 months (not surprising, since they continued to attend school). The experimental group showed more improvement than the community intervention group in one reading measure.

Shaywitz and colleagues were interested in brain differences before and after intervention. They scanned the kids pre/post intervention during a letter identification task.** Their main analysis was a second-order comparison: they first determined the pre/post intervention changes within each group, and then compared those changes between groups.
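In spirit, the second-order comparison is just a change score compared across groups, something like the following toy sketch (invented numbers, one region, not their actual voxelwise pipeline):

```python
import numpy as np
from scipy import stats

# Hypothetical per-child activation in one region (say, left IFG), pre and post
experimental_pre  = np.array([0.2, 0.3, 0.1, 0.4, 0.2])
experimental_post = np.array([0.6, 0.7, 0.4, 0.8, 0.5])
community_pre     = np.array([0.3, 0.2, 0.4, 0.1, 0.3])
community_post    = np.array([0.3, 0.3, 0.4, 0.2, 0.3])

# First order: pre/post change within each group. Second order: compare the changes.
exp_change  = experimental_post - experimental_pre
comm_change = community_post - community_pre
t, p = stats.ttest_ind(exp_change, comm_change)
print(f"group difference in pre/post change: t = {t:.2f}, p = {p:.3f}")
```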

Compared to the community intervention control, both the experimental intervention and normal-reading control group showed a greater increase in left inferior frontal gyrus (often involved in phonological processing) activation. The experimental intervention group showed more increase in left middle temporal gyrus activation compared to the community intervention group.

In addition to comparing pre/post intervention differences between groups, Shaywitz also scanned the experimental group a year after finishing the intervention. The group showed continued increases in several left hemisphere areas, including left inferior frontal, superior temporal, and left occipitotemporal regions***. Also, they showed a decrease in right MTG and right caudate activation. This falls in line with the increase in left lateralization we saw in Turkeltaub 2003.

What's the take-home message? This study shows us that phonological intervention results in measurable brain changes in the left inferior frontal gyrus, a phonological region. This is encouraging. However, how does this actually impact reading performance? The experimental group only performed significantly better than the community intervention group on one reading measure, although it looks like they performed slightly better (though not statistically significantly) on other measures. So there is a hint that phonological interventions might be more valuable than other interventions, but we'd have to get more data on this.

The study also shows that brain regions in the experimental group continue to develop during the year after the intervention. When did these changes start – during intervention or afterwards? It's hard to tell because they don't report the changes in the experimental intervention group right after intervention. They only report on the difference in changes between groups.

Also, are these changes jumpstarted by the intervention, or would they have occurred anyway? Unfortunately, we can't answer that question either. While the authors had hoped to also scan the two other groups a year afterwards, they were unable to.

Anyways, it's kinda cool to see brain differences as a result of training. It will be interesting to see in future studies what is going on in more detail.

*Children from the EI group were from Syracuse, NY, while the other two groups were recruited from New Haven.
**The kids heard a letter name and had to choose the correct letter from two options. This task was compared against a baseline of hearing a tone and specifying the position of an asterisk.
***LIFG, STG, left OT, left lingual, and left inferior occipital


Shaywitz BA, Shaywitz SE, Blachman BA, Pugh KR, Fulbright RK, Skudlarski P, Mencl WE, Constable RT, Holahan JM, Marchione KE, Fletcher JM, Lyon GR, & Gore JC (2004). Development of left occipitotemporal systems for skilled reading in children after a phonologically- based intervention. Biological psychiatry, 55 (9), 926-33 PMID: 15110736


The Development of Visual Word Recognition

Tuesday, April 6, 2010

Accessibility:  Intermediate-Advanced

We’ve looked at brain regions and development during word related tasks (word generation, reading and repeating), but we haven’t yet looked at a straight up study of word recognition and development.



What’s the best task to use to study visual word recognition? You can have people read out loud, but that involves processes like speech generation. Likewise, reading sentences or paragraphs requires the reader to process meaning and grammar in addition to the words on the page.

One segment of the field has gravitated towards tasks of single word processing that don’t require reading at all. In this particular study, Turkeltaub and colleagues use a tall letter detection task. The subjects press a button if the word has a tall letter (like d or l). As a control condition, subjects perform the same task on false fonts. Even though you can do this task without reading the words, the assumption is that reading, being highly automatic, will occur anyways. This approach, focusing on the automatic, bottom up process, allows for a more tightly controlled study. However, it also limits the findings to that very thin slice of the reading process.

Turkeltaub and colleagues tested forty-one subjects ranging from 8 to 20 years old. In the whole group, the words > symbols contrast produced activation in the left posterior temporal, left inferior frontal, and right inferior parietal regions.

The authors also looked at correlations between activation and reading ability. The trend here seems to be increasing lateralization with reading skill (more reliance on left hemisphere regions and less reliance on right hemisphere regions).*  Interesting. I wonder how this relates to lateralization of spoken language.

Finally, the authors looked for regions that correlated with other behavioral measures, including phonetic working memory (left intraparietal sulcus and left and right middle frontal gyri), phonological awareness (a left hemisphere network, including posterior STS and ventral inferior frontal), and phonological naming (a bilateral network, including right posterior superior temporal, right middle temporal, and left ventral inferior frontal). Surprisingly (to me at least), there is almost no overlap between the regions for the three measures. This could either mean that these measures involve very different cognitive and neural processes, or that the automatic task used in this experiment was not suited for accurately tapping into these abilities.

*Reading ability correlated positively with activation in left hemisphere frontal and temporal cortical areas, and negatively with right hemisphere posterior regions. There was no correlation in the left fusiform (visual word form area), but there was a negative correlation in the right posterior fusiform.

Turkeltaub, P., Gareau, L., Flowers, D., Zeffiro, T., & Eden, G. (2003). Development of neural mechanisms for reading Nature Neuroscience, 6 (7), 767-773 DOI: 10.1038/nn1065


Rats Who Can't Read Good: A Rodent Model for Dyslexia

Thursday, April 1, 2010

Accessibility:  Intermediate

Dyslexic rats? Really? Well, these rats can’t read, but they’re still used as an animal model for dyslexia.



First, some background. The underlying cause of dyslexia is still under debate, but it’s generally accepted that it involves deficits in auditory and phonological (language sounds) processing, with a possibility of visual deficits as well. Post mortem studies of dyslexic human brains have turned up brain anomalies, including cortical ectopias (nests of neurons in the wrong layer in the cortex) and focal microgyri (micro folding). Researchers have also found abnormalities in the thalamus and cerebellum.

Dyslexia rat models are created by inducing these same abnormalities, usually focal microgyria and molecular layer ectopias, in rats. Interestingly, some of these rats develop deficits in rapid auditory processing, which is important for phonological processing in humans. Introducing microgyria also causes thalamic changes in male rats, similar to dyslexic thalami in humans. The thalamic changes are also associated with auditory perceptual deficits in the males.

Another interesting observation:  boys are more at risk than girls for dyslexia, and the same trend occurs in rats. Young male rats have a higher risk for developing rapid auditory processing deficits from induced cortical malformations. There seems to be something about the male brain that increases risk for language related disorders.

Hmm, anyone want to start Hooked on Phonics for rodents?

[And kudos to anyone who got the Zoolander reference in the title]

Galaburda AM, LoTurco J, Ramus F, Fitch RH, & Rosen GD (2006). From genes to behavior in developmental dyslexia. Nature neuroscience, 9 (10), 1213-7 PMID: 17001339


Dyslexic vs. Nonimpaired Readers: Differences in Brain Development

Wednesday, March 31, 2010

Accessibility: Intermediate/Advanced

Studies comparing normal reading and dyslexic children often take a snapshot approach, comparing brain function at specific ages. However, these studies don’t tell us how these differences fit into the developmental picture. Are dyslexics following the same developmental course as normal readers, just at a different rate? Or do dyslexic brains develop in a completely different way?



Instead of comparing activation at each age, Shaywitz and colleagues compared the way the two groups changed throughout development. They conducted a massive imaging study involving 113 dyslexic children (ages 7-18) and 119 nonimpaired children (ages 7-17). The participants did two tasks: a line match task (Do ///\ and //// match?) and a nonword rhyme task (Do leat and kete rhyme?).

In all the imaging results, the authors looked at the rhyming > line match contrast*. (For an explanation of contrasts and subtraction logic in fMRI, see this post.) Both groups had brain regions that changed in activation with age. However, the regions were different. In normal readers, the left anterior lateral occipital region (close to the visual word form area) became more active with age. In dyslexics, however, a more posterior region of the left occipitotemporal cortex became more active.

Developmental patterns in the front of the brain were also different. Normal readers showed an activation decrease in the right middle frontal/superior frontal region while dyslexic readers showed a decrease in the right superior frontal region.

The authors also looked at asymmetry. In normal readers (but not dyslexic), activity in the anterior lateral occipitotemporal region became increasingly asymmetric with age.

From these results, it appears that dyslexic readers aren’t just delayed versions of normal readers. Different regions are developing in each group, and the two groups are learning to use different brain regions to perform the same task. What does this mean? Different strategies? Compensatory processing? Hrmm…

Addendum: Careful readers might notice that there are some differences between these results and other papers I’ve discussed. Brown 2004 found an increase in left inferior frontal regions with age, but this paper only found it in dyslexic readers. Brown also found decreases in left extrastriate regions, while this group found increases. This could be due to the different tasks or subject variation.

*I'd be curious to see the correlations with age for the two task activations separately rather than just the rhyme > match contrast. It'd be interesting to see whether these correlations are due to changes in rhyming activation, line match, or both.



Shaywitz BA, Skudlarski P, Holahan JM, Marchione KE, Constable RT, Fulbright RK, Zelterman D, Lacadie C, & Shaywitz SE (2007). Age-related changes in reading systems of dyslexic children. Annals of neurology, 61 (4), 363-70 PMID: 17444510


fMRI Subtraction Analysis and Why it Matters

Tuesday, March 30, 2010

Accessibility:  Basic

Let's say we wanted to do an experiment about color processing. We could do the following:

1. Roll someone into the scanner.
2. Show them two colors.
3. Have them press the button corresponding to the color they prefer.
4. Look at the resulting activations, and voila, we have the “color preference area.”

But it’s not that simple. The brain is very active, even when supposedly at rest. While performing the task described, the subject is also breathing, processing ambient noise, thinking about grocery shopping, as well as who knows what else. How do you tell what activation is due to the color judgment, and what is due to other processes?



The traditional fMRI solution is to compare activation with a baseline condition. For our example experiment, we may want a comparison condition where the subject sees the same images, but presses a random button rather than picking a preferred color. We then take the activation from this comparison condition and subtract it from the condition we're interested in. The assumption (and it's an assumption, meaning that it may not always be true) is that we're subtracting out irrelevant brain activation: for example, activation due to seeing colors, pressing buttons, being inside a scanner, and so on.
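In code, the logic really is just a voxelwise subtraction. Here's a toy sketch (real analyses subtract per-condition model estimates and then do statistics across subjects, but the principle is the same):

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (64, 64, 36)                          # one subject's voxel grid

# Hypothetical per-voxel activation estimates for the two conditions
color_choice = rng.normal(0, 1, shape)        # condition of interest
button_only  = rng.normal(0, 1, shape)        # comparison: same images, random button press

# The contrast: whatever survives after subtracting out the shared processes
contrast = color_choice - button_only
print(np.count_nonzero(contrast > 2.5), "voxels above an (arbitrary) threshold")
```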

This is important to keep in mind when evaluating fMRI results. If someone tells you that brain region X is active during a certain task, you always want to ask what the comparison condition was. If region X is active during task Y but the comparison condition is super simple (just lying in the scanner, say), that's not very impressive: lots of other regions will be active in that comparison, and the activation may not be due to task Y specifically.


Development of Modality Tuning During Reading and Repetition

Monday, March 22, 2010

Accessibility Level:  Intermediate/Advanced

Today we're again looking at the theme of increasing specialization in the brain over development. Rather than specialization in terms of spatial extent, as touched on in Brown 2004, Cerebral Cortex, this paper's findings suggest specialization in the processing of sensory modalities.



Church and colleagues tested children (age 7-10) and adults (age 18-35) in a word generation task. During the experiment, participants read words off a screen and repeated words presented aurally. Like the two papers previously discussed here by this group, the authors matched for behavior between children and adults. The authors report several findings.

1. First, most brain regions did not change in activation over time. In well known language areas like the inferior frontal gyrus and superior temporal gyrus, the authors found no difference between children and adults. Also, they found no differences in lateralization (how much one side of the brain was favored over another) between children and adults.

2. The regions that differed between the two groups were mainly extrastriate visual regions, and all regions with differences had greater activation in children than adults. Unlike the Brown 2005 paper, where some frontal regions were found to have greater activation in adults, this paper found no such regions. This could be due to the different task (word generation vs. reading/repeating), or to variation in the participant pools of the two studies.

3. In several visual regions, including a cluster very close to the visual word form area, adults had more activation to the visual presentation than to auditory presentation, while children had similar activation to the two modalities. This suggests that these areas might be more specialized for the visual modality in adults. In other words, the region gets “tuned” to the visual modality as the children mature (However, the interaction between modality and age was not statistically significant).  The authors propose several possible mechanisms responsible for this modality tuning difference. Perhaps the kids are using a different strategy, visualizing more during the auditory task. Or perhaps their brains are just organized differently.

Together with the Brown 2004 paper, this paper presents an interesting story about increasing specialization and efficiency in the maturing brain, in which the immature brain starts out with relatively nonspecialized brain regions and recruits more brain regions to accomplish the tasks at hand. Then, maturation and expertise result in more specialization, finer tuning, and fewer recruited regions.



Church JA, Coalson RS, Lugar HM, Petersen SE, & Schlaggar BL (2008). A developmental fMRI study of reading and repetition reveals changes in phonological and visual mechanisms over age. Cerebral cortex (New York, N.Y. : 1991), 18 (9), 2054-65 PMID: 18245043


Dyslexia and Brain Connectivity: Insights from Periventricular Nodular Heterotopia

Monday, March 15, 2010

Accessibility Level:  Intermediate

One theory of dyslexia is that it stems from abnormal brain connectivity -- that faulty connections between different language areas result in reading difficulty. Now, evidence from another condition offers some support for this theory.

Periventricular nodular heterotopia (PNH) is a neurological condition in which neurons don't migrate to the correct location during brain development. Instead of moving to the cortex where they belong, neurons stay close to the ventricles, the fluid-filled cavities in the center of the brain. This results in tiny nodules of gray matter along the ventricles, hence the name of the condition. People with PNH tend to suffer from adolescent-onset epilepsy, although their intelligence and cognitive functioning are within the average range.



Bernard Chang and colleagues found that a strikingly large proportion of PNH patients had low scores on reading related tests, specifically reading fluency (timed reading) and rapid naming (remember the previous post on rapid naming?). In a smaller proportion of patients they also observed a deficit in processing speed.

In addition to behavioral measures, the authors also used diffusion tensor imaging to measure the integrity of the white matter tracts that link different brain regions. They found that white matter integrity in the PNH patients was correlated with reading fluency.

But wait, PNH has to do with nodules of gray matter near the ventricles. What does that have to do with white matter integrity? It turns out that these gray matter nodules disrupt the nearby white matter tracts. Fiber tracts deviated around the nodules, and no fiber tracts projected into or out of them.

There are certainly limitations to the conclusions we can draw from one study. The study is correlational by nature, so it can't prove whether the connectivity issues cause the reading difficulties. Also, it remains to be seen whether and how PNH patients differ from dyslexic people without PNH. But these are interesting findings. Yet another piece of the puzzle.


Chang, B., Katzir, T., Liu, T., Corriveau, K., Barzillai, M., Apse, K., Bodell, A., Hackney, D., Alsop, D., Wong, S., & Walsh, C. (2007). A structural basis for reading fluency: White matter defects in a genetic brain malformation Neurology, 69 (23), 2146-2154 DOI: 10.1212/01.wnl.0000286365.41070.54


Brain Change Patterns in Developing Children

Monday, March 8, 2010

Accessibility Level: Intermediate-Advanced

What changes in the brain as children mature? Are there patterns in the way the changes occur? Do some regions mature more quickly than others?



Last time, we talked about a paper by Schlaggar et al that examined brain differences between children and adults during a word generation task. A study published in Cerebral Cortex by Brown and colleagues extends that study, looking at changes in more detail.

The authors scanned participants aged 7-32 while they performed a word generation task. Participants were given a word, either visually or aurally, and had to say a response based on an instruction ("opposite," for example). The authors then looked for regions that differed in activation between the youngest (7-8) and oldest (23-32) groups. For more details on this, and how they controlled for performance differences, see the entry on the Schlaggar et al paper.

The authors found many regions that either increased or decreased in activation between ages 8 and 23. That’s not surprising. It’s a lot of years and a lot of development. They did notice a few patterns though.

1. Posterior regions, generally involved in sensory processing, tended to decrease in activation with age. Frontal regions, generally involved in controlling and modifying the activity of the lower level regions, tended to increase with age.

2. The posterior and frontal regions differed not only in direction of change, but in speed of change. The frontal regions became more adult-like first, with the posterior regions maturing later.

The authors propose a model explaining the results. In this model, children first use lower level, sensory regions to perform a task. Because they are unskilled, their brains are less efficient and activate more. Then, as the children mature, frontal control regions develop and kick in, at which point they help fine-tune the posterior regions. The posterior sensory regions then become more efficient, and activation in these regions decreases.

This is an appealing model, and it will be interesting to see whether future developmental studies confirm it.


Brown, T. (2004). Developmental Changes in Human Cerebral Functional Organization for Word Generation Cerebral Cortex, 15 (3), 275-290 DOI: 10.1093/cercor/bhh129


Comparing Child and Adult Brains: How to Account for Performance Differences?

Wednesday, March 3, 2010

In an ideal world, we’d be able to study maturational brain changes by scanning a group of adults, a group of children, and comparing the brain images. Unfortunately, there are complications.



One complication is that these studies usually require doing some kind of task in the scanner, and children usually have lower accuracy and longer reaction times on this task. These differences, especially reaction time differences, can have a significant effect on brain activation. (Activation is averaged over seconds, so the longer your reaction time, the higher the brain activation, simply because you spend more time on the task). So how do we know what differences are due to actual brain maturation and what differences are due to poor performance inside the scanner?

In a 2002 study, Schlaggar and colleagues addressed the issue of performance differences by comparing only those children and adults that performed similarly on the task. They were interested in word processing in children (aged 7-10) and adults (age 18-35). In their experiment, participants saw single words on a screen and had to say a response word based on a cue (for example, a rhyming word, or the opposite word).

Instead of comparing all adults and all children, Schlaggar and colleagues divided the participants into two subgroups. The top-scoring children and lower-scoring adults formed a Performance Matched subgroup, in which the children's performance did not differ significantly from the adults'. The rest formed a Performance Non-matched subgroup, in which there were clear differences in accuracy and response time between adults and children.
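For concreteness, the split works roughly like this (a toy sketch with invented accuracy scores, not the authors' actual matching criteria):

```python
import numpy as np

rng = np.random.default_rng(2)
child_acc = np.sort(rng.normal(0.75, 0.10, 30))   # hypothetical accuracy for 30 children
adult_acc = np.sort(rng.normal(0.90, 0.05, 30))   # and for 30 adults

# Performance Matched: best-performing children with lowest-performing adults
matched_children, matched_adults = child_acc[-15:], adult_acc[:15]
# Performance Non-matched: the remaining participants
nonmatched_children, nonmatched_adults = child_acc[:15], adult_acc[-15:]

print(matched_children.mean(), matched_adults.mean())        # much closer together
print(nonmatched_children.mean(), nonmatched_adults.mean())  # clearly different
```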

Schlaggar and colleagues looked at several regions of interest in the left frontal and left extrastriate regions, all traditional language and word processing areas. When there were differences between age groups, the children almost always had greater activation.

But the interesting results occur when you compare the subgroups. Some regions showed differences in the Performance Non-matched subgroup that disappeared in the Performance Matched subgroup, suggesting that the brain differences there were due to performance differences inside the scanner.

Other regions, however, showed differences between children and adults in both the Performance Matched and the Performance Non-matched subgroups. In these regions, one can safely assume that there's more to the differences than simple in-scanner performance.

What does this tell us? For one thing, it tells us that in-scanner performance differences between children and adults should not be ignored. There were several “developmental” differences here that disappeared as soon as you controlled for in-scanner performance. On the other hand, there do appear to be differences that remain even when you have children and adults that perform equally.

A few things to think about with this study. First, it’s admirable that the authors control for performance, but doing so also introduces opposite selection biases in children and adults. You have to wonder what population of children would perform as well as adults almost twice their age, and conversely, what population of adults would perform at the level of 7-10 year olds. Is it fair to compare these two groups and generalize these comparisons to the entire population?

Second, what does it mean to control for in-scanner performance? If we treat it as a confounding factor and control for it, we’re assuming that performance differences on this word generation task are irrelevant to the process we’re studying. However, that can’t be completely true. We’re interested in word processing differences between children and adults, so if we look for children and adults that perform similarly on a word generation task, we’re filtering out some of the differences that we set out to study.

Third, it might be helpful, as mentioned in BJ Casey’s commentary on the study, to differentiate between differences from maturation alone and differences due to skill level. In a field such as reading, this might be hard to tease apart. While children’s brains mature between ages 7 and 18, they also undergo thousands of hours of reading instruction that introduce changes in the brain. When we study reading acquisition in children, therefore, we should think about what kind of brain changes we’re interested in, and that will affect the comparisons and analyses we do.

Schlaggar BL, Brown TT, Lugar HM, Visscher KM, Miezin FM, & Petersen SE (2002). Functional neuroanatomical differences between adults and school-age children in the processing of single words. Science (New York, N.Y.), 296 (5572), 1476-9 PMID: 12029136

Casey, B. (2002). NEUROSCIENCE: Windows into the Human Brain Science, 296 (5572), 1408-1409 DOI: 10.1126/science.1072684


Is Dyslexia a Visual or Phonological Deficit?

Thursday, February 11, 2010

It's interesting how the public's impression of dyslexia differs from the impressions of researchers in the field. I recently read an article by Vidyasagar and Pammer arguing that dyslexia is a visual deficit. To the general public, this claim seems obvious because most people believe that people with dyslexia see things backwards.



Many dyslexia researchers, however, will find this claim unusual, if not controversial. This is because the field concluded long ago that the "reading backwards" theory is a myth. As we touched on in a previous article, all children will write backwards to some extent, not just dyslexic children.

The current prevailing theory of dyslexia is actually one of poor auditory or phonological processing. People with dyslexia score lower than controls on phonological awareness, and phonological awareness in children predicts later reading skill. In addition, some phonological training studies have succeeded in improving reading performance in children with dyslexia.

Vidyasagar and Pammer propose a different theory.  They argue that dyslexia is due to a deficit in visual attention and the dorsal visual stream.

Human vision is processed in two pathways in the brain. The ventral stream processes object identity, and the dorsal stream processes object location. When we read, the dorsal stream helps us direct our attention smoothly from one word to the next. Vidyasagar argues that a deficit in the dorsal stream is the underlying cause of dyslexia.

They cite some evidence for causality. Studies have found that deficits in coherent motion detection and visual contrast sensitivity, both dorsal stream functions, predict future reading skill.

But what about all the phonological results? Vidyasagar argues that both visual and auditory input are required to develop a good understanding and awareness of phonetics, and that the phonological deficits in dyslexia arise from a lack of high-quality visual input during the period when phonological awareness is maturing. It will be up to researchers to conduct comprehensive studies to test this.

So what do I think? My favorite theory is neither a purely phonological nor a purely visual one, but rather another possibility that Vidyasagar touches on – a deficit in rapid sensory processing. Rapid auditory processing is important for phonological awareness, and rapid visual processing is important for visual attention and visual search. There is evidence that people with dyslexia have deficits in both.

Genetically, the theory makes sense. Genes don’t just code for one function; more often, they code for proteins that show up in multiple systems. You could imagine a gene that codes for some rapid-processing neuron that plays a role in both visual and auditory processing, resulting in the complex disorder we know as dyslexia.

It’s an exciting time to be a brain researcher.

Vidyasagar, T., & Pammer, K. (2010). Dyslexia: a deficit in visuo-spatial attention, not in phonological processing Trends in Cognitive Sciences, 14 (2), 57-63 DOI: 10.1016/j.tics.2009.12.003


Dyslexia Brain Differences Show Up Before Formal Reading Instruction

Thursday, January 28, 2010

Last time, we talked about early behavioral differences between prereading children that predicted future reading impairment. Today, we’re continuing on the theme of early predictive differences, this time in the brain.



The question of how early brain differences arise is a worthwhile one. We want to know whether the dyslexic brain is tackling reading differently from the very beginning or if these brain differences arise after some reading experience, perhaps reflecting compensatory strategies that the children may have developed.

Specht and colleagues (Scandinavian Journal of Psychology 2008) conducted a brain imaging study on Norwegian children (a good population to study because reading instruction starts in second grade in Norway). The basic goal of their experiment was to scan 6-year-olds (before they had learned to read) and see if they process words differently depending on their risk for dyslexia. Unlike the Lervag study, this study was not longitudinal. Specht and colleagues determined which kids were at risk for dyslexia using a risk index that took into account factors like heredity and language development.

Kids looked at four kinds of stimuli during an fMRI scan: pictures, logos, regular words, and irregular words, all while performing a categorization task (“Is this something you can play with?” and similar questions). I won’t spend too much time comparing conditions because I’m not clear on what characteristics were controlled for across the stimulus types.

There were differences between the at-risk and not-at-risk groups in all conditions, and several findings stood out. First, risk index score correlated with increased activation in the angular gyrus when the children looked at words; the angular gyrus has been reported to be involved in language and phonological processing.

Our old friend, the visual word form area, also shows up. At a more liberal statistical threshold (p < .001 uncorrected, with a small volume correction), risk index score correlated negatively with left occipitotemporal activation when the children viewed irregular words.

So what does this mean? For one thing, differences arise early, before formal reading instruction. The two groups did not differ significantly on standardized reading measures at the time of testing (although there was a trend (p < .096) towards a difference in reading scores). So there seems to be something different about how these kids approach words from the very beginning. It would be interesting to know what is driving these differences. I wonder what strategies the kids were using in the scanner to complete the categorization task, especially since they couldn’t read yet. For the word conditions, kids’ accuracy was only 20-30%. What were they doing for the words they couldn’t read? Were the scanner differences driven by the words they could recognize, or by all the words?

I’m particularly puzzled by the VWFA findings. The VWFA is usually thought to develop based on expertise with letters. Does this mean that even before reading instruction there is some difference in expertise between kids at risk and kids not at risk? Interesting questions for future investigation.

Specht K, Hugdahl K, Ofte S, Nygård M, Bjørnerud A, Plante E, & Helland T (2009). Brain activation on pre-reading tasks reveals at-risk status for dyslexia in 6-year-old children. Scandinavian journal of psychology, 50 (1), 79-91 PMID: 18826418


Color and Object Naming Speed Predicts Future Risk for Dyslexia

Wednesday, January 20, 2010

An important goal for any developmental disorder research is early detection. The earlier the detection, the earlier we can start intervention and treatment. Dyslexia is tricky though. It’s a reading disorder, and by definition cannot be diagnosed until reading instruction begins. However, we can still look for signs that predict future risk for dyslexia.

One predictor of future dyslexia is rapid automatized naming (RAN) speed. A RAN test consists of naming an array of objects, colors, letters, or symbols as quickly as possible. It makes sense that letter and symbol naming speed (also called alphanumeric RAN) might predict reading skill. Surprisingly, however, speed at naming pictures of objects and color patches also predicts future reading skill. Lervag and Hulme (Psychological Science 2009) studied this in a longitudinal study of Norwegian schoolchildren.


The idea was to test children before they learned to read and look for test results that predicted future reading skill. Formal reading instruction in Norway begins in second grade, so Lervag and colleagues conducted several tests on first graders. These tests included RAN and phoneme awareness (tasks like picking a word that begins with a certain sound). They then retested the students in second, third, and fourth grade on these skills as well as on reading fluency. They found that performance on nonalphanumeric RAN in first grade predicted phoneme awareness and reading fluency later on.

Lervag and colleagues investigated this finding in further detail by separating the RAN response times into articulation times and pause times between words. They found that pause times were a much better predictor of future reading performance than articulation times. It should be noted, though, that articulation times were much less variable than pause times, with a standard deviation of around 4.5 ms compared with about 16 ms.
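As a rough illustration of that decomposition (the helper function, timestamps, and scores below are all invented; this is not the authors’ analysis code), you could compute the two components from per-item speech onsets and offsets and ask which one tracks a later reading outcome.

```python
# Hypothetical sketch of splitting RAN time into articulation time and pause time.
# All timestamps and reading scores are made up for illustration.
import numpy as np

def decompose_ran(onsets, offsets):
    """Return (total articulation time, total pause time) from per-item speech onsets/offsets."""
    onsets, offsets = np.asarray(onsets), np.asarray(offsets)
    articulation = np.sum(offsets - onsets)          # time spent saying the names
    pauses = np.sum(onsets[1:] - offsets[:-1])       # silent gaps between items
    return articulation, pauses

# made-up naming records for three children: (onsets, offsets) in seconds
children = [
    ([0.0, 1.0, 2.0, 3.0], [0.4, 1.4, 2.4, 3.4]),
    ([0.0, 1.3, 2.7, 4.0], [0.4, 1.7, 3.1, 4.4]),
    ([0.0, 1.8, 3.6, 5.5], [0.5, 2.3, 4.1, 6.0]),
]
grade4_fluency = np.array([110.0, 95.0, 80.0])       # hypothetical later reading scores

articulation, pauses = zip(*(decompose_ran(on, off) for on, off in children))

# With only three fake children this is purely illustrative, but it mirrors the logic:
# correlate each component with the later outcome and compare.
print("pause time vs fluency:        r =", np.corrcoef(pauses, grade4_fluency)[0, 1].round(2))
print("articulation time vs fluency: r =", np.corrcoef(articulation, grade4_fluency)[0, 1].round(2))
```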

These are pretty interesting results, and they raise the obvious question: what is it about rapid color and object naming that predicts future reading skill? I can think of several possibilities.

1. Lervag suggests that reading may be tapping into the same pathways used for object recognition and naming. Remember the mirror invariance paper from last week that suggested the visual word form area might be involved both in word and object recognition? It could be that both reading and object naming are served by the same brain areas.
Both reading and rapid naming involve seeing an object, retrieving its phonological representation, and outputting the phonology via motor routines. Perhaps performance in RAN reflects the strength of the connections between visual, phonological, and speech areas.

2. Attentional focusing could be another factor. Rapid automatized naming requires directing visual attention from one object to the next in a controlled manner, and reading requires the same skill. Perhaps a deficit in attentional focus could be the underlying factor.

3. An even lower level explanation would be visual motor control. Both RAN and reading require controlled eye movements from one item to the next. Deficits in motor control have been reported in dyslexia, but I don’t know of any reported eye movement deficits. If anyone knows more about this, do let me know.

What do you think is the connection between RAN and dyslexia?


Haiti Fundraiser on Other Blog

Tuesday, January 19, 2010

There will be more science articles coming soon, but I just wanted to announce that there's a fundraiser for Haiti on my Creative Writing Blog. Please go take a look!


A Rose From Any Other Direction is Still a Rose, But Its Name is Not

Tuesday, January 12, 2010

Imagine a cup. Now rotate it 180 degrees. What is it now? It’s still a cup. Surprising? Not really. We live in a three dimensional world and take for granted that something flipped around an axis doesn’t change into something else. This generalization across mirror images is called mirror invariance and holds true for most visual stimuli.

But what about the letter b? Draw it on something transparent, turn it around – and lo and behold, it changes into the letter d! Writing systems are one of the few domains where an object’s orientation matters. Our brains have trouble with this, and this difficulty becomes apparent when children learn to read and write. It is well documented that children often flip letters when writing, and sometimes even write entire words spontaneously flipped. It is only after years of practice that they get over this tendency.

Neuroscientist Stanislas Dehaene and colleagues recently found evidence of mirror invariance (and the lack thereof for words) in the brain.


They used a technique known as fMRI repetition suppression. The basic logic of the technique is this: brain regions responding to a repeated stimulus show decreased activation, which is thought to be due to neuronal adaptation or fatigue. Therefore, by showing someone different visual stimuli and looking for repetition suppression, we can get an idea of which stimuli the brain categorizes as the same (a repeat) and which as different.
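Here is a minimal sketch of that logic with fabricated activation values (not data from the study): a positive suppression index means the region responded less when the prime and target were, as far as the region is concerned, the same thing.

```python
# Toy sketch of the repetition-suppression logic; activation values are invented.
import numpy as np

def suppression_index(unprimed, primed):
    """Proportional drop in mean activation for primed relative to unprimed trials."""
    return (np.mean(unprimed) - np.mean(primed)) / np.mean(unprimed)

# hypothetical trial-wise activations from one region
pictures_unprimed      = [1.00, 1.10, 0.95, 1.05]
pictures_mirror_primed = [0.70, 0.75, 0.68, 0.72]   # suppressed: mirror image treated as a repeat
words_unprimed         = [1.00, 0.98, 1.03, 1.01]
words_mirror_primed    = [0.99, 1.02, 0.97, 1.00]   # not suppressed: mirror word treated as new

print("pictures, mirror prime:", round(suppression_index(pictures_unprimed, pictures_mirror_primed), 2))
print("words, mirror prime:   ", round(suppression_index(words_unprimed, words_mirror_primed), 2))
```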

In this particular study, reported in NeuroImage (49, 2010, 1837-1848), participants saw words and pictures that were preceded either by a normal or a mirror-reversed image of the same category. (The participants were asked to perform an unrelated size judgment task.) The researchers found that picture and word processing regions both show repetition suppression for repeated identical images. This was not surprising; it’s the basic repetition suppression effect.
Dehaene and colleagues then looked for areas that showed repetition suppression for words/pictures and their mirror images. They found a region in the left fusiform gyrus that showed mirror repetition suppression for pictures, but no such region that showed mirror repetition suppression for words. Therefore, they found evidence for brain regions that considered pictures equivalent to their mirror images, but no such regions for words.

Surprisingly, the region that showed the strongest mirror invariance effect for objects was an area of the left fusiform known as the Visual Word Form Area, an area that has been shown in many studies to be active during word processing. Therefore, it’s possible that the very same brain region that processes pictures in a mirror-invariant way knows to behave differently for words. With the low resolution of fMRI, we can’t rule out the possibility that the word network and picture network are in fact separate, but it should be an interesting question to pursue.

Is this lack of mirror invariance specific to writing systems we’ve learned, or does it hold for any script-like stimulus? A follow-up behavioral study suggests the latter. In this study, people saw words, tools, faces, and scripts. Each image was flashed for 200 ms and was preceded by another image of the same category, either in normal orientation or flipped. Their task was to say whether the two stimuli were the same, where a mirror image also counted as the same.

As measured by reaction time, people were relatively quick to judge an object and its mirror reflection to be the same, but slower to judge two mirror image scripts to be the same. This difficulty held true even for unfamiliar or false scripts, suggesting that the brain generalized the importance of orientation to unfamiliar, unlearned scripts.
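In code, the comparison boils down to something like this (the reaction times below are invented for illustration, not the study’s data): the extra cost of accepting a mirror image as “the same” is larger for scripts than for objects.

```python
# Minimal sketch of the mirror-judgment cost comparison; all reaction times are made up.
import numpy as np

# hypothetical per-trial reaction times in milliseconds
rt = {
    ("object", "identical"): [520, 540, 510, 535],
    ("object", "mirror"):    [545, 560, 530, 550],
    ("script", "identical"): [525, 545, 515, 540],
    ("script", "mirror"):    [640, 665, 630, 655],
}

for category in ("object", "script"):
    cost = np.mean(rt[(category, "mirror")]) - np.mean(rt[(category, "identical")])
    print(f"{category}: mirror-judgment cost = {cost:.0f} ms")
```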

Ideas for future research? The obvious next step would be to do this with children. Also, does anyone know of any studies of word-to-object cross-categorical priming? That would help answer the question of whether the word network and object networks are the same.

Blog Carnival Coverage:
Psychology Articles Carnival

