Neurophilia

People love pictures of brains. And, as a result, companies have been trying hard to find ways to incorporate MRI data into their sales pitches and business plans. One such company, Johnson O’Connor Research Foundation, has jumped on the bandwagon in a big way, having recently added a brain scan to the standard occupational aptitude test they offer to job seekers (they charge around $700 for the assessment):

The Johnson O’Connor Research Foundation is a nonprofit scientific research and educational organization with two primary commitments: to study human abilities and to provide people with a knowledge of their aptitudes that will help them in making decisions about school and work. Since 1922, hundreds of thousands of people have used our aptitude testing service to learn more about themselves and to derive more satisfaction from their lives.

See the Neurocritic for a spot-on criticism of the “study” upon which their new marketing pitch is based.

Bad neuroscience seems to be appearing in the public media space with increasing frequency. From misleading articles in the mainstream press to the poorly conducted studies that often form the basis for one misconceived business plan or another, fMRI research runs the danger of being victimized by its own success. Part of the problem stems from the general public’s inability to properly interpret neuroscientific data in the context of human psychology studies. Not that they should be blamed: neuropsychology is a complicated discipline, and there is no reason to expect someone lacking an understanding of the basic principles of neural science, or psychology, or both, to parse such data correctly. The problem, however, is that the average reader isn’t neutral toward such data, but tends to be more satisfied by psychological explanations that include neuroscientific data, regardless of whether that data adds any value to the explanation. The mere mention of something vaguely neuroscientific seems to increase the average reader’s satisfaction with a psychological finding, legitimizing it. Even worse, it’s the bad studies that benefit most from this so-called “neurophilia,” the love of brain pictures. That’s according to a study from a research team led by Jeremy Gray at Yale University.

Participants read a series of summaries of psychological findings from one of four categories: either a good or a bad explanation, with or without a meaningless reference to neuroscience. After reading each explanation, participants rated how satisfying they found it. The experiment was run on three different groups: regular undergraduates, undergraduates who had taken an intermediate-level cognitive neuroscience course, and a slightly older group who had either already earned PhDs in neuroscience or were in, or about to enter, graduate neuroscience programs.

The first group of regular undergraduates was able to distinguish between good and bad explanations without neuroscience, but was much more satisfied by bad explanations that included a reference to neural data (the y-axis in the following figures represents self-rated satisfaction):

The cognitive neuroscience students were no more discerning. If anything, they were a bit worse than the regular undergraduates, in that they found good explanations with meaningless neuroscience more satisfying than good ones without:

But the neuroscience PhDs showed the benefits of their training. Not only did the addition of meaningless neuroscience fail to make bad explanations more satisfying to them, they actually found good explanations with meaningless neuroscience to be less satisfying.

As to why non-experts might have been fooled? The authors suggest that non-experts could be falling prey to the “seductive details effect,” whereby “related but logically irrelevant details presented as part of an argument tend to make it more difficult for subjects to encode and later recall the main argument of a text.” In other words, it might not be the neuroscience per se that leads to the increased satisfaction, but some more general property of the neuroscientific information. As to what that property might be, it could be that people are biased toward arguments that possess a reductionist structure: in science, “higher level” arguments about macroscopic phenomena often rest on “lower level” explanations that invoke microscopic mechanisms. Neuroscientific explanations fit the bill here, seeming to provide hard, low-level data in support of higher-level behavioral phenomena. The mere mention of lower-level data, albeit meaningless data, might have made it seem as if the “bad” higher-level explanation was connected to some “larger explanatory system” and was therefore more valid or meaningful. It could simply be that bad explanations, neuroscientific or otherwise, are bolstered by the allure of complex, multilevel explanatory structures. Or it could be that people are easily seduced by fancy jargon like “ventral medial prefrontal connectivity” and “NMDA-type glutamate receptor regions.”

Whatever the proximal mechanisms of the “neurophilia” effect, the public infatuation with all things neural probably won’t be fading any time soon. As such, it’s imperative that scientists, journalists, and others who communicate with the public about brain science be on the lookout for bad neuroscience, and for good neuroscience presented incorrectly, and be quick to issue correctives when it appears.

Go here for the Yale study.