Social cognitive deficits in autism spectrum disorder

One of the hallmarks of Autism Spectrum Disorder (ASD) is an impairment in social cognitive skills. This manifests in individuals with ASD having trouble orienting their attention towards people. They also show deficits in orienting their attention in response to social cues from others, such as eye gaze, head turns and pointing gestures.

Understanding the social cognitive impairments associated with ASD has been challenging, in part because studies set in naturalistic settings often reveal the deficit while computer-based lab experiments often don’t.

For example, some naturalistic studies have looked at home movies of infants and found that those later diagnosed with ASD showed less social orienting and were less responsive to cues from others to orient to objects. If their mom was in the room, they would look at her a lot less, and they’d also be less likely to respond when she tried to direct their attention to a toy in the room by looking or pointing at it.

However, people with ASD have been shown to respond to non-naturalistic social cues in the lab. Social orienting has frequently been tested using a variation on Michael Posner’s spatial cueing paradigm. This works as follows:

1. Participants are seated in front of a computer.
2. A cue stimulus – a pair of eyes gazing to either side (or straight ahead), or arrows pointing to either side (or neither) – appears on the screen.
3. Shortly after, a target object appears on one side of the screen or the other, either on the side the cue indicated or on the opposite side.
4. Participants indicate which side the target object appeared on by pressing either a right or a left button.
5. Performance on the task is assessed by measuring how long it takes participants to press the button indicating which side the target appeared on. Most participants, including those with ASD, are as quick with the gaze cue (the eyes) as with the arrow cue. (A rough sketch of this trial structure appears after the figure below.)

Posner cue paradigm

(The left side of the above figure shows a single trial (with “directional eyes”), in which participants first see a fixation cross, then one of four directional or non-directional cue stimuli, after which the target appears either on the side the cue indicated or on the opposite side. Participants then indicate which side the target appeared on by pushing a button. The right side shows the three other cue types, from top to bottom: neutral arrow, directional arrow, neutral eyes.)
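For readers who like to see a task spelled out concretely, here is a minimal sketch of how a trial list for this kind of cueing paradigm might be built. The condition labels, trial counts, and the omission of timing details are my own illustrative assumptions, not the parameters used in the study.

```python
# Minimal sketch of a gaze/arrow spatial-cueing trial list (illustrative only).
import random

CUE_TYPES = ["gaze", "arrow"]                 # social vs. non-social cue
CUE_DIRECTIONS = ["left", "right", "neutral"]
TARGET_SIDES = ["left", "right"]

def make_trials(n_per_cell=10):
    """Cross cue type, cue direction, and target side into a shuffled trial list."""
    trials = []
    for cue in CUE_TYPES:
        for direction in CUE_DIRECTIONS:
            for target in TARGET_SIDES:
                # A trial is "cued" when the cue points at the target's side,
                # "noncued" when it points the other way, "neutral" otherwise.
                if direction == "neutral":
                    validity = "neutral"
                elif direction == target:
                    validity = "cued"
                else:
                    validity = "noncued"
                for _ in range(n_per_cell):
                    trials.append({"cue_type": cue,
                                   "cue_direction": direction,
                                   "target_side": target,
                                   "validity": validity})
    random.shuffle(trials)
    return trials

if __name__ == "__main__":
    trials = make_trials()
    print(len(trials), "trials; first trial:", trials[0])
```

In the real experiment, each trial would of course also involve presenting the cue and target with specific timings and recording the button press and its latency.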

Past studies have shown that people orient faster to cued locations (as in the left side of the above figure) than to noncued locations, a phenomenon known as the facilitation effect. Studies using this task with ASD populations have produced inconsistent results, but most have found them performing comparably to non-ASD populations.
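As a toy illustration of how the facilitation effect is typically quantified – mean reaction time at noncued locations minus mean reaction time at cued locations, computed separately for each cue type – here is a short sketch. The reaction times are made-up numbers, not data from any study.

```python
# Toy computation of the facilitation effect; the RT values are fabricated
# purely for illustration.
from statistics import mean

# reaction times in milliseconds, keyed by (cue_type, validity)
rts = {
    ("gaze", "cued"):     [310, 325, 298, 305],
    ("gaze", "noncued"):  [340, 352, 333, 347],
    ("arrow", "cued"):    [312, 320, 301, 308],
    ("arrow", "noncued"): [345, 350, 338, 342],
}

for cue in ("gaze", "arrow"):
    facilitation = mean(rts[(cue, "noncued")]) - mean(rts[(cue, "cued")])
    print(f"{cue} facilitation effect: {facilitation:.1f} ms")
```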

In this study, researchers used the above-described cueing task to examine the neural mechanisms underlying social orienting in ASD, with the hope that even if there were no behavioral differences, neural activity might reveal that ASD individuals are performing the task differently. Other studies have shown that non-ASD populations treat social and non-social cue stimuli differently. It was hoped that the neural activity measured in this study would shed light on the discrepancies in behavioral results for ASD populations in naturalistic versus computer-based lab settings.

Results
In terms of behavior, both the control and the ASD group showed quicker responses to cued targets for both gaze and arrow cues, with no between-group difference, which is consistent with previous lab studies.

However, neural activation patterns showed significant group differences. The control group showed greater activation for social vs. nonsocial cues in many different brain regions, with gaze (eyeball) cues eliciting increased activity in many frontoparietal areas, supporting the idea that neurotypical brains treat social stimuli differently from non-social stimuli. The ASD group, on the other hand, showed much less difference in neural activation between social and non-social cues. Although these differences are too numerous to cover here, one region of interest, the superior temporal sulcus (STS), stood out. The STS has been associated with the perception of eye gaze, and other work has suggested the region may be involved in understanding the intentions and mental states of others. In this study, ASD individuals showed decreased STS activity in the gaze cue condition (versus controls). These data suggest that the STS may not be sensitive to the social significance of eye gaze in ASD individuals.
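To make the logic of that group difference concrete, here is a hedged sketch of the kind of comparison involved: per-participant contrast estimates (gaze cue minus arrow cue) extracted from an STS region of interest, compared between groups. The numbers are placeholders, and the paper itself ran a full fMRI group analysis rather than this simple test.

```python
# Illustrative group comparison of ROI contrast estimates (gaze > arrow).
# The values are placeholders, not data from Greene et al. (2011).
import numpy as np
from scipy import stats

control_sts = np.array([0.42, 0.35, 0.51, 0.38, 0.47, 0.40])  # hypothetical controls
asd_sts     = np.array([0.12, 0.08, 0.20, 0.15, 0.05, 0.18])  # hypothetical ASD group

t, p = stats.ttest_ind(control_sts, asd_sts)
print(f"Control vs. ASD, STS (gaze > arrow): t = {t:.2f}, p = {p:.4f}")
```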

Implications
The authors point out that although ASD individuals don’t seem to rely on the same neural circuitry to perceive social cues such as eye gaze, they have found a way to use the low-level perceptual information available in social cues to adopt a strategy that allows them to discern that gaze direction conveys meaning about the environment. That being said, ASD individuals mostly don’t do this very well in more naturalistic environments. So, although this strategy might work in a scanner with “cartoon” eyes and no environmental distractions, it’s unlikely that ASD individuals could apply it in a naturalistic environment.

Alternatively, one could frame these results from the perspective of the ASD individual: given the non-naturalistic environment of the scanner, and the fact that the task demands were very simple and not dependent on social cognitive processing, why should non-ASD individuals treat the gaze vs. arrow stimuli differently? Why not just rely on low-level information and thus expend less cognitive energy? It’s a good example of the automaticity of social cognitive processes: give humans a set of cartoon eyeballs to look at and they can’t help but process them as distinct from something non-social.

An additional takeaway from this paper is that even when one finds no behavioral differences between groups, there might be interesting differences in neural activity worth exploring via fMRI or EEG.

References

Greene DJ, Colich N, Iacoboni M, Zaidel E, Bookheimer SY, & Dapretto M (2011). Atypical neural networks for social orienting in autism spectrum disorders. NeuroImage, 56(1), 354-362. PMID: 21334443

Mirror Neurons and Mentalizing

Perhaps few findings in the cognitive sciences have received more press in recent years than the discovery by Rizzolatti and colleagues of mirror neurons in macaque monkeys; that is, neurons that preferentially activate both when a monkey performs some action and when it observes someone else perform the same action. There is evidence that these neurons exist in humans, although most of it is indirect (however, see Keysers & Gazzola, 2010). They’ve quite captivated the public’s attention, these crafty little neurons.

The mirror neuron system is thought to help primates, non-human and human, understand what others are doing by simulating the motor plan of an observed action, and also to allow prediction of the most likely outcome of an observed action. In other words, mirror neurons are sensitive both to actions and to outcomes and, to some extent, to inferring the why behind the what. Many have suggested that they play a significant role in comprehending mental states and in empathic processes. But it’s with regard to these latter claims that the evidence is less clear.

So, how does the brain intuit others’ inherently unobservable mental states in the absence of biological action? Much of the research evidence points to the mentalizing system, also known as the theory-of-mind network, as the neural network tasked with the job (see the meta-analysis by Van Overwalle and Baetens, 2009). Anatomically speaking, these networks are distinct: the mirror neurons are located primarily in the intraparietal sulcus, superior temporal sulcus and the prefrontal cortex, while the mentalizing system constitutes a distinct set of brain regions lying along the cortical midline and in the temporal lobes, including the medial prefrontal cortex (mPFC), temporoparietal junction (TPJ), temporal poles, posterior cingulate cortex (PCC) and posterior STS.

One of the big challenges in this area of research is designing tasks that can effectively disentangle the processing of motor action from mentalizing. This is quite a challenge because it’s difficult to know what kind of mental process participants are applying to any given set of social stimuli. Do participants engage in higher-order abstract mentalizing automatically, even when the stimuli might not necessarily demand it? How can we know what mental process subjects are engaging in? In other words, how might one capture the distinction between perceiving what others are doing vs. obtaining a more abstract representation of why they might be doing it?

UCLA’s Bob Spunt and colleagues (2011) designed a study that attempts to do just that. They had participants observe short video clips of a human performing an action and directed them, in the scanner, to covertly describe each clip in terms of (1) what the actor was doing, (2) why he was doing it, or (3) how he was doing it, or (4) to just passively view the video. Participants were to start their covert description once the video started playing, begin it with the word “he” (e.g. “he is reading”), and press a button once they were done.

(Thanks to the researchers for providing the video)

In the video above, for example, participants might have covertly described that the man is reading (WHAT), that he wants to learn or is bored (WHY), or that he is flipping pages or gripping the book (HOW).

This had the effect of creating three levels of mentalizing “depth” while holding the action component constant. If the mirror neuron network were involved in mentalizing, one would expect to see activation in regions previously suggested to contain mirror neurons increase along with participants’ presumed mentalizing about the actor. If, instead, a separate mentalizing system does this work, that parametric increase should show up in the mentalizing system’s regions instead.
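Here is a rough sketch of that parametric logic in code. The depth ordering (how < what < why), the trial sequence, and the activity values are all my own illustrative assumptions, not the authors’ model or data.

```python
# Illustrative parametric test: does a region's response scale with an assumed
# "mentalizing depth" of the instruction (how < what < why)? All values fabricated.
import numpy as np

depth = {"how": 1, "what": 2, "why": 3}   # assumed ordering of mentalizing demand
conditions = ["how", "what", "why", "how", "why", "what", "why", "how", "what"]
x = np.array([depth[c] for c in conditions], dtype=float)

# hypothetical per-trial response from a mentalizing region (arbitrary units)
y = np.array([0.2, 0.5, 0.9, 0.1, 1.0, 0.6, 0.8, 0.3, 0.4])

slope, intercept = np.polyfit(x, y, 1)
r = np.corrcoef(x, y)[0, 1]
print(f"slope = {slope:.2f}, r = {r:.2f}  (positive slope: activity increases with depth)")
```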

Results
In support of the view that mirror neurons don’t play a significant role in mentalizing, the researchers found no increase in mirror neuron network activity in response to increasing mentalizing demands. They did, however, find increased activation in brain regions associated with mentalizing, including dorsal and ventral medial prefrontal cortex, posterior cingulate cortex, and the temporal poles.

Conclusion
The study provides another piece of support for the position that although the mirror neuron system might be necessary for understanding actions of the body, it is not sufficient to explain the cognitive processes required to infer unobservable mental states.

References
Spunt, R., Satpute, A., & Lieberman, M. (2011). Identifying the What, Why, and How of an Observed Action: An fMRI Study of Mentalizing and Mechanizing during Action Observation. Journal of Cognitive Neuroscience, 23(1), 63-74. DOI: 10.1162/jocn.2010.21446

Keysers, C., & Gazzola, V. (2010). Social Neuroscience: Mirror Neurons Recorded in Humans. Current Biology, 20(8). DOI: 10.1016/j.cub.2010.03.013

Van Overwalle, F., & Baetens, K. (2009). Understanding others’ actions and goals by mirror and mentalizing systems: a meta-analysis. NeuroImage, 48(3), 564-584. PMID: 19524046