Make mine a triple

In response to a couple of questions about the implications of long-term caffeine intake, I thought I’d throw out a few findings.

I recently wrote about a study that localized the receptors underlying the arousing effects of caffeine (A2a receptors, in cells located in the shell of the nucleus accumbens). It’s only natural, then, to wonder what effect chronic caffeine intake might have on these receptors (and elsewhere in the brain).

That study didn’t look at chronic effects. But back in 1996, Glass et al. found that chronic caffeine consumption increased the global expression of adenosine receptors in the brain, suggesting that this increase compensates for caffeine’s antagonistic effects. Withdrawal from caffeine is, at least in part, likely related to a hypersensitivity to adenosine due to this increased number of adenosine receptors. The headaches that accompany caffeine withdrawal are thought to be related to the fact that adenosine is a known vasodilator: the increased receptor density, combined with the withdrawal of caffeine from the system, leads to a significant drop in blood pressure.

A couple other interesting notes in regards to long-term effects of caffeine:

The Good News
Some case-control studies have shown a lower incidence of Parkinson’s disease in coffee drinkers vs. non-coffee drinkers, although this finding has not always been replicated. The correlation, when it has been found, was strongest in heavy consumers. (This is certainly a finding I would love to be true!) More evidence in support of these findings comes from mouse studies showing that physiological doses of caffeine were able to reduce one of the major toxic factors associated with Parkinson’s (MPTP-induced dopaminergic toxicity). It’s been suggested that caffeine may offer neuroprotective effects in the brain via action at A2a receptors, which are the same receptors responsible for the arousing effect of caffeine (and which are also co-localized with dopamine D2 receptors). Additional support for this idea comes from studies in which mice that had their A2a receptors knocked out showed reduced MPTP-induced injury compared to wild types. How all this might happen on a mechanistic level, however, is not well understood.

The less good (but not totally bad) news:
Unfortunately, it seems that acute doses of caffeine often cause a rise in systolic and diastolic blood pressure, an increase in catecholamine release, and vasodilation (widening of blood vessels). However, some studies have shown this effect occurs primarily in non-regular consumers of caffeine. Many studies have shown either slight increases or no difference in blood pressure for regular users of caffeine. In fact, several large-scale studies have found that heavy, regular use is protective against heart disease. (Yes!) The findings are quite contradictory.

So what to make of all this? Is heavy coffee drinking bad or good for you?
There’s no simple answer to that question. But the paradoxical findings suggest that different individuals have varying levels of risk. And it’s likely that genetics play a significant role.

If one thinks of coffee as a drug, then the notion that the benefits of heavy coffee consumption might outweigh the risks seems very counterintuitive. That is, due to the brain’s propensity to maintain homeostasis, drug taking, legal or illegal, usually involves a significant cost/benefit trade-off between the good (the high, the buzz, relief from psychic or physical pain) and the bad (side effects, withdrawal, expense, long-term effects on health). Yet the evidence on long-term caffeine intake seems to put it in a class of its own.

I once chatted with an extremely energetic and sprightly 93-year-old Italian man from the old country, curious to know the secret to his longevity and good health. “Five espressos a day,” he said. Anecdotes aren’t very informative in an empirical sense, of course, but, nonetheless, the old codger may have been on to something.*

* In addition to the espressos, he also claimed that he smoked a pack a day of American Winstons and was convinced that brand loyalty was one of the other secrets to his good health.

Why caffeine jacks you up

Have you ever wondered why, and exactly where in the brain, coffee (or any caffeinated product, for that matter) is able to exert its arousing effects? Well, wonder no longer, because an international team of researchers from Japan, China, and the US has located the primary neurons upon which caffeine works its magic (Lazarus 2011).

It was previously known that caffeine wakes you up through inhibiting activity at adenosine A2a receptors (adenosine is an inhibitory neuromodulator involved in regulating the sleep-wake cycle). However, it was not known exactly where in the brain the receptors that exerted this effect are located.

How did they do it?
The researchers utilized a method whereby the gene that codes for A2a receptors (A2aRs) is marked such that it can be deleted, but only in specific regions of the brain. Using a rat model, the team applied these gene-deletion strategies and found that when they knocked out A2aRs in the shell of the nucleus accumbens, rats no longer experienced the arousing effects of caffeine.

How does this work?
Adenosine activates A2a receptors in the nucleus accumbens shell, and activation of these receptors inhibits the arousal system. That is, the more adenosine activation there is, the sleepier an organism becomes. Caffeine, which binds to these same receptors and blocks adenosine from exerting its activity there, essentially disinhibits the arousal system, promoting wakefulness. (Amazingly, based on similarities between the brains of mice and men, the area of the human brain in which caffeine acts to counteract fatigue is approximately the size of a pea.)

What does this mean in practical terms? (or, in other words, why should we find this so cool?)

Well, for one, it gives us a more specific mechanistic explanation for the arousing effects of caffeine. It says that in order for caffeine to work, it not only has to be effective as an A2aR antagonist, but that excitatory A2aRs on nucleus accumbens shell neurons must be tonically activated by endogenous adenosine. This is especially important in consideration of individual differences in the subjective effects of caffeine.

What if A2aRs are more densely packed in the shell of your nucleus accumbens than in mine? Might you be more sensitive to the effects of caffeine than me? That certainly seems likely. And the reason one person might over- or underexpress these receptors versus another seems to be related to variation in the gene that produces those receptors (the gene knocked out in the rat study described above). In fact, we already have evidence that this is the case. Past studies have shown that variations in genes coding for A2aRs were associated with greater sensitivity to caffeine and sleep impairment (Retey 2007), and greater anxiety after caffeine (Childs 2008). This study refines the existing model and should inspire, and lead to more accurate interpretation of, future genetics studies.*

*Other significant genes that underlie individual differences in the subjective effects of caffeine include CYP1A2 (cytochrome P-450 enzyme 1A2), which is associated with caffeine metabolism, and those coding for dopamine D2 receptors.

Lazarus M, Shen HY, Cherasse Y, Qu WM, Huang ZL, Bass CE, Winsky-Sommerer R, Semba K, Fredholm BB, Boison D, Hayaishi O, Urade Y, & Chen JF (2011). Arousal Effect of Caffeine Depends on Adenosine A2A Receptors in the Shell of the Nucleus Accumbens. The Journal of neuroscience : the official journal of the Society for Neuroscience, 31 (27), 10067-10075 PMID: 21734299

Childs E, Hohoff C, Deckert J, Xu K, Badner J, de Wit H (2008) Association between ADORA2A and DRD2 polymorphisms and caffeine-induced anxiety. Neuropsychopharmacology. 33:2791– 2800

Retey JV, Adam M, Khatami R, Luhmann UF, Jung HH, Berger W, Landolt HP (2007) A genetic variation in the adenosine A2A receptor gene (ADORA2A) contributes to individual sensitivity to caffeine effects on sleep. Clin Pharmacol Ther. 81:692–698

Thank you, Al Franken!

Speaking of the misuse of research studies for ideological purposes, check out Senator Al Franken (D-Minn) calling out apparent homophobe Tom Minnery, who represents a group of conservative Christian extremists calling themselves “Focus on the Family”, during a Senate hearing on the repeal of the Defense of Marriage Act (DOMA).

In a nutshell, Minnery had misinterpreted a 2010 study by the Department of Health and Human Services in support of his conclusion that …

“… children living with their own married, biological, and/or adoptive mothers and fathers were generally happier and healthier, had better access to health care; less likely to suffer mild or severe emotional problems; did better in school; were protected from physical, emotional, sexual abuse; and almost never live in poverty compared with children in any other family form.”

Franken pointed out that he had read the study, and this is not what it said.

“I checked the study out,” said Franken, “and I would like to enter into the record, if I may, it actually doesn’t say what you said it says. It says that nuclear families — not opposite sex married families — are associated with those positive outcomes. Isn’t it true, Mr. Minnery, that a married same sex couple that has had or adopted kids would fall under the definition of a nuclear family in the study that you cite?”

Minnery responded that he thought nuclear family, as defined in the study, meant one headed by a husband and wife.

“It doesn’t,” Franken responded. “The study defines a nuclear family as one or more children living with two parents who are married to one another and are each biological or adoptive parents to all the children in the family. And I frankly don’t really know how we can trust the rest of your testimony if you are reading studies these ways.”

There was much laughter in the chamber during the exchange.

The authors of the study confirmed (via Politico) that Franken’s interpretation of the study was correct and said the study does not provide evidence that straight couples’ children necessarily fare better than same-sex couples’ kids, as Minnery had so hopefully claimed.

Of course, this won’t change the minds of the religious nutters who go around spouting this nonsense, but it still felt good to watch nonetheless. Minnery and his colleagues should know better than to expect to find empirical evidence to support their claims. Anyhow, why should they need evidence? They’ve got their faith!

Drugging and Driving: Benzos, opioids and antidepressants and increased risk of driving accidents

There are many reasons why one might find it preferable not to drive an automobile: For one, it’s expensive (gas, insurance, repairs, and tickets). It pollutes the environment. And it’s dangerous. Based on data from the Federal Highway Administration, there are over 6 million auto accidents in the United States every year on average, and around 40,000 of those accidents are fatal, many involving drivers under the influence of alcohol.

A new study from Australian researchers provides another reason to hop on the bus or train rather than get behind the wheel. The study looked at the association between driving and taking prescription medications. And the results were not very promising, showing that users of many prescription medications are at increased risk for car accidents.

The researchers performed what is called a meta-study, in which all the research that can be located pertaining to a given topic, and meeting certain criteria of validity and reliability, is combined into a single pool of data in an attempt to achieve maximal statistical power. Two different types of studies were examined:

1. Epidemiological studies. These are studies of patterns of association between prescription drugs and driving accidents based on real-life data coming from a variety of sources. There are several advantages and drawbacks to these kinds of studies and Wikipedia is a good place to get some background. These studies utilize real world data, so one can at least be relatively confident that the data represent natural phenomena. On the other hand, epidemiological studies are correlational; in other words, the data can indicate two variables are related, but can’t definitively tell you about the causal direction of the relationship.

2. Experimental Studies. These are controlled studies that allow researchers to explore causal relationships between variables. Again, wiki is a good place to go for a primer. Experimental studies can explore causality, if they’re designed correctly, but may lack “ecological validity”; that is, they may not represent “real world” conditions.

The goal of this  meta-study was to ascertain whether the data from numerous sources, including epidemiological and experimental studies, converged on the same conclusions.
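To make the pooling idea concrete, here is a minimal sketch of fixed-effect, inverse-variance pooling, a standard way to combine odds ratios from separate studies in a meta-analysis (not necessarily the exact method this paper used). The per-study numbers below are invented for illustration, not taken from the paper.

```python
import math

# Hypothetical per-study results: (odds ratio, 95% CI lower bound, 95% CI upper bound).
studies = [
    (1.6, 1.2, 2.1),
    (1.8, 1.1, 2.9),
    (1.5, 1.0, 2.2),
]

total_weight = 0.0
weighted_sum = 0.0
for odds_ratio, ci_low, ci_high in studies:
    # Work on the log scale, where the sampling distribution is roughly normal.
    log_or = math.log(odds_ratio)
    # Recover the standard error from the CI width: a 95% CI spans +/- 1.96 SE.
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)
    weight = 1.0 / se**2  # precise studies (narrow CIs) count for more
    total_weight += weight
    weighted_sum += weight * log_or

pooled_or = math.exp(weighted_sum / total_weight)
pooled_se = math.sqrt(1.0 / total_weight)
print(f"pooled odds ratio: {pooled_or:.2f} (SE of log OR: {pooled_se:.3f})")
```

The pooled estimate sits between the individual studies but with a tighter standard error than any of them, which is exactly the “maximal statistical power” payoff described above.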

Several classes of prescription drugs were examined:
1. Benzodiazepines: these include drugs such as diazepam, flurazepam, flunitrazepam and nitrazepam. They’re commonly prescribed for generalized anxiety disorder, panic disorder, insomnia, seizures and alcohol withdrawal.
2. Non-benzo hypnotics: these include drugs like pentobarbital. They’re frequently prescribed for insomnia.
3. Antidepressants, which can be divided into two classes: SSRIs and TCAs. SSRIs include drugs like Lexapro, Prozac, and Celexa. TCAs, or tricyclic antidepressants, include drugs like imipramine (Tofranil) and maprotiline (Ludiomil).
4. Anxiolytics (anti-anxiety drugs)
5. Opioids

For those interested in the details, please consult the study. I’ll just be presenting a simplified summary of the findings. But before I get there, a couple of quick thoughts. Meta-studies can often be difficult to interpret. In this study there are many potential confounding variables: a huge variety of drug types, a wide range of doses, the problem that those on medication also have depression, anxiety, or other disorders (making it difficult to parse out the effects of the drug alone), tolerance effects, age and gender effects, the possibility that the epidemiological studies only include the worst cases (only accidents that resulted in injury), and so on. It becomes very difficult to make conclusive or generalizable statements about the findings. Some researchers are opposed to meta-studies for that very reason. That being said, the evidence here does seem to converge on a handful of conclusions. Keeping the limitations in mind, here they are:

1. Benzodiazepine users show 60-80% increased risk of traffic accidents. Drivers responsible for causing an accident are 40% more likely to be positive for benzos than those who are not responsible. Elderly people show decreased risk (versus non-elderly).

2. Benzodiazepine users who also drink alcohol show a 7.7-fold increase in risk for traffic accidents.

The 2- to 3-fold increase in accident risk associated with … long-acting benzodiazepines and zopiclone is equivalent to what has been observed with a blood alcohol concentration of 0.05–0.08 g/dL,[100,101] which is above the legal limits for driving in most countries…

The authors recommend that anyone prescribed benzodiazepines should abstain from driving for the first four weeks of treatment.

3. Anxiolytics seem to impair drivers independent of the drug’s half-life. (A drug’s half-life is the period of time required for the concentration of the drug in the body to be reduced by half.)
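For the curious, the arithmetic behind half-life is simple exponential decay. The sketch below uses rough, illustrative half-lives (about 50 hours for a long-acting drug versus about 3 hours for a short-acting one); consult a pharmacology reference for real values.

```python
def remaining_fraction(hours_elapsed, half_life_hours):
    """Fraction of a dose still in the body, assuming simple exponential decay."""
    return 0.5 ** (hours_elapsed / half_life_hours)

# Illustrative half-lives only, not clinical figures.
for name, t_half in [("long-acting (t1/2 = 50 h)", 50.0),
                     ("short-acting (t1/2 = 3 h)", 3.0)]:
    # Imagine driving to work 8 hours after a bedtime dose.
    left = remaining_fraction(8.0, t_half)
    print(f"{name}: {left:.0%} of the dose remains after 8 h")
```

With a 50-hour half-life, roughly 90% of the dose is still circulating the next morning, while the short-acting drug is mostly gone, which is why half-life matters so much for next-day driving.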

4. Impairment caused by hypnotics tends to be related to the drug’s half-life.

For hypnotic medication, an option for prescribers is to avoid these hypnotics (flurazepam, flunitrazepam, nitrazepam and zopiclone) if patients are engaged in driving. Relatively safer alternatives would be shorter acting hypnotics, such as triazolam, temazepam, zolpidem and zaleplon, which were not found to cause driving impairment, at least in experimental studies (although there is evidence that some of the drugs are associated with increased accident risk)…

5. As far as antidepressants go, no clear distinction emerged between sedative and non-sedative subclasses (according to epidemiological studies). One major confounding variable in the studies examined is depression itself, as cognitive and psychomotor deficits are associated with depression alone. Furthermore, antidepressants might affect driving differently depending on the stage of treatment: the therapeutic effects of antidepressants take one to two weeks to appear, so driving may be even more impaired over this period than with depression alone or after the drug effects kick in.

Sedative antidepressants probably lead to worse driving for the first 3-4 weeks, until tolerance to the sedative effects develops and the depression lifts. This is supported by some experimental evidence (patient groups on sedative/non-sedative antidepressants improved their driving skills after a few weeks). Epidemiological studies suffer from the confound of comparing groups on antidepressants (people with depression) with those not on antidepressants (people who don’t have depression) and are therefore of limited utility.

6. Opioids – There weren’t enough studies of opioids and driving to draw any conclusions.

I wasn’t able to locate data indicating how many people in the US are currently taking the drugs mentioned in this study. What I did find was that antidepressants (many of which are probably sedating) are the most popular prescription drugs for adults aged 20 to 59 in the US. And the most recent annual data (from the CDC) suggest that 48% of Americans took at least one prescription drug in the past month. This suggests that the number of people driving under the influence of cognitively impairing prescription drugs is likely in the millions countrywide. Cause for concern? Perhaps. Prescription drug use is on the rise. And much of the US population lives in geographic regions where there are few alternatives to driving.

Dassanayake T, Michie P, Carter G, & Jones A (2011). Effects of benzodiazepines, antidepressants and opioids on driving: a systematic review and meta-analysis of epidemiological and experimental evidence. Drug safety : an international journal of medical toxicology and drug experience, 34 (2), 125-56 PMID: 21247221

Social cognitive deficits in autism spectrum disorder

One of the hallmarks of Autism Spectrum Disorder (ASD) is an impairment in social cognitive skills. This manifests in individuals with ASD having trouble orienting their attention towards people. Accordingly, they also show deficits orienting their attention in response to social cues from others, such as eye gaze, head turns and pointing gestures.

Understanding the social cognitive impairments associated with ASD has been challenging in that studies set in naturalistic settings often reveal the deficit but lab experiments performed on computers don’t.

For example, some naturalistic studies have looked at home movies of infants and found that those later diagnosed with ASD showed less social orienting and were less responsive to cues from others to orient to objects. For instance, if their mom was in the room, they would look at her a lot less, and they’d also be less likely to respond when their mothers tried to direct their attention to a toy in the room by looking or pointing at it.

However, people with ASD have been shown to respond to non-naturalistic social cues in the lab. Social orienting has frequently been tested using a variation on Michael Posner’s spatial cueing paradigm. This works as follows:

1. Participants are seated in front of a computer
2. A stimulus – a pair of eyes gazing to either side (or straight ahead) or arrows pointing to either side or neither – appears on the screen
3. Shortly after, a stimulus (the target object) appears to one side or the other, either on the side which the eyes or arrows were pointing towards or the opposite side.
4. Participants have to indicate which side the target object appeared on by pressing either a right or left button.
5. Performance on the task is assessed by measuring the amount of time it takes participants to press the button indicating on which side the target appeared. Most participants, including ASD patients, are as quick with the gaze cue (the eyes) as with the arrow cue.

Posner cue paradigm

(The left side of the above figure shows a single trial (with “directional eyes”), in which participants first see a fixation cross, then one of four directional/non-directional stimuli, after which the target appears either on the same side indicated by the cue or the opposite side. Participants need to indicate which side a target stimulus appeared on by pushing a button. The right side shows the three other trial types (from top to bottom): neutral arrow, directional arrow, neutral eyes)

Past studies have shown that people orient faster to cued (like in the left side of the above figure) versus noncued locations, known as the facilitation effect. Previous studies using this task have produced inconsistent results, but most of them have shown ASD populations performing comparably to non-ASD populations.
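To make the measurement concrete, here is a small sketch of how the facilitation effect would be computed from reaction-time data in this paradigm: mean reaction time on invalidly cued trials minus mean reaction time on validly cued trials, separately for each cue type. The trial data below are invented for illustration; they are not from any of the studies discussed.

```python
# Invented reaction times (ms) from a Posner-style cueing task.
# "valid" = target appeared on the cued side; "invalid" = opposite side.
trials = [
    ("gaze", "valid", 310), ("gaze", "valid", 295),
    ("gaze", "invalid", 350), ("gaze", "invalid", 342),
    ("arrow", "valid", 305), ("arrow", "valid", 300),
    ("arrow", "invalid", 348), ("arrow", "invalid", 355),
]

def mean_rt(cue_type, validity):
    """Mean reaction time for one cue type (gaze/arrow) and one validity condition."""
    rts = [rt for cue, val, rt in trials if cue == cue_type and val == validity]
    return sum(rts) / len(rts)

for cue_type in ("gaze", "arrow"):
    # Positive values mean cued targets were detected faster (facilitation).
    facilitation = mean_rt(cue_type, "invalid") - mean_rt(cue_type, "valid")
    print(f"{cue_type} cue: facilitation = {facilitation:.1f} ms")
```

A between-group comparison would then ask whether this facilitation score differs for gaze versus arrow cues, and whether that difference differs between ASD and control groups.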

In this study, researchers used the above-described cue task to examine the neural mechanisms underlying social orienting in ASD, with the hope that if there were no behavioral differences, neural activity might reveal that ASD individuals are performing the task differently. Other studies have shown that non-ASD populations treat social and non-social cue stimuli differently. It was hoped that neural activity revealed in this study would shed light on the discrepancies in behavioral results for ASD populations in lab versus computer settings.

In terms of behavior, both the control and the ASD groups showed quicker responses to cued targets for both gaze and arrow cues, with no between-group difference, which is consistent with previous lab studies.

However, neural activation patterns showed significant group differences. The control group showed greater activation for social vs. nonsocial cues in many different brain regions, with gaze (eyeball) cues eliciting increased activity in many frontoparietal areas, supporting the idea that neurotypical brains treat social stimuli differently from non-social stimuli. The ASD group, on the other hand, showed much less difference in neural activation between social and non-social cues. Although these differences in neural activation are too numerous to cover here, one region of interest, the superior temporal sulcus (STS), stood out. The STS has been shown to be associated with the perception of eye gaze, and other work has suggested the region may be involved in understanding the intentions and mental states of others. In this study, ASD individuals showed decreased STS activity in the gaze-cue condition (versus controls). These data suggest that the STS may not be sensitive to the social significance of eye gaze in ASD individuals.

The authors point out that although ASD individuals don’t seem to rely on the same neural circuitry to perceive social cues such as eye gaze, they have found a way to use the low-level perceptual information available in social cues to adopt a strategy that allows them to discern that gaze direction conveys meaning about the environment. That being said, ASD individuals mostly don’t do this very well in more naturalistic environments. So, although this strategy might work in a scanner with “cartoon” eyes and no environmental distractions, it’s unlikely that ASD individuals could apply it in a naturalistic environment. Alternatively, one could frame these results from the perspective of the ASD individual: given the non-naturalistic environment of the scanner, and the fact that the task demands were very simple and not dependent on social cognitive processing, why should non-ASD individuals treat the gaze vs. arrow stimuli differently? Why not just rely on low-level information and thus expend less cognitive energy? It’s a good example of the automaticity of social cognitive processes. Give humans a set of cartoon eyeballs to look at and they can’t help but process them as distinct from something non-social.

An additional takeaway from this paper is that even when one finds no behavioral differences between groups, there might be interesting differences in neural activity worth exploring via fMRI or EEG.


Greene DJ, Colich N, Iacoboni M, Zaidel E, Bookheimer SY, & Dapretto M (2011). Atypical neural networks for social orienting in autism spectrum disorders. NeuroImage, 56 (1), 354-62 PMID: 21334443

Google crosses the web/brain barrier?

Google, are you reading my mind?

One interesting aspect of having a blog is checking out the search terms that people used to land at one’s site. It’s often difficult to figure out why a particular and seemingly unrelated term might bring someone this way.

But one recent search seems to have transcended the blog and gone straight into my brain-o-sphere, into the existential recess where some of my darker thoughts about grad school are stored:

“PhD meaninglessness”

Google, you know me so well. Now stop it, you’re freaking me out.

Mortality among users of marijuana, cocaine, amphetamine, ecstasy and opioids

While illicit drugs have long been linked to higher mortality rates, the data are wildly variable.

In a paper recently published in the journal Drug and Alcohol Dependence, Danish researchers attempted to establish standardized mortality ratios for cannabis, cocaine, amphetamine, MDMA (ecstasy) and opioids (e.g. heroin)*, while taking into consideration the effects of two intervening variables: injection drug use and psychiatric disorders. (Is the mortality rate of cocaine users mediated by whether they have, for example, clinical depression?)

(*Individuals’ primary drug of choice)

The population they looked at included 20,581 people treated for drug abuse in Denmark over the 10-year period from 1996 to 2006. (These data are correlational and, therefore, the possibility of unidentified moderating variables exerting an effect on death rates is high.)
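Since the paper’s headline numbers are standardized mortality ratios, here is the arithmetic in miniature: observed deaths in the cohort divided by the deaths expected if the cohort died at general-population rates. All inputs below are hypothetical, chosen only to show the calculation, not taken from the paper.

```python
def smr(observed_deaths, person_years, population_rate_per_1000):
    """Standardized mortality ratio: observed deaths / expected deaths."""
    # Expected deaths if the cohort died at the general-population rate.
    expected_deaths = person_years * population_rate_per_1000 / 1000.0
    return observed_deaths / expected_deaths

# Hypothetical cohort: 150 deaths over 25,000 person-years of follow-up,
# where the age/sex-matched general population dies at 1.2 per 1,000 per year.
print(f"SMR = {smr(150, 25_000, 1.2):.1f}")  # an SMR of 5.0 means 5x the expected deaths
```

In practice the expected-death calculation is done within age and sex strata and then summed, but the ratio itself works exactly as above.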

In brief, the results showed the following:

1. Those who injected drugs showed significantly higher mortality rates across the board. (This does conflict with past findings, which found no difference.)

2. Overall, psychiatric illness was not associated with higher mortality rates, with the exception of cocaine/amphetamine users, who, if they presented with psychiatric disorders, did show higher mortality rates.

3. Pot smokers showed a 5x increase in mortality rates (compared to the general population). The researchers suggest that increased mortality among pot smokers could be related to driving accidents, violent injuries and various other types of accidents. (A personal note: based on my own experience, this seems unlikely. Pot smokers tend to drive very conservatively (too slowly, if anything!) and are famously not prone to violence.) What seems more likely to explain pot smokers’ higher mortality rate is that they are also using other drugs. Other studies have borne this out.

4. Cocaine and amphetamine users showed 6x the death rates of the general population. Previous reports on stimulant-abuse-related deaths are highly variable. The variability is likely the result of other factors, including physical conditions, HIV/AIDS, overdose, cardiovascular problems, injuries, accidents, violent deaths and suicides.

5. Opioid users showed increased mortality rates. Findings for both stimulants and opioids are in accordance with studies from other countries. Users of heroin and other opioids showed by far the highest mortality rates of all drugs of abuse.

6. Ecstasy (MDMA) users did not show increased mortality rates. (However, it’s possible that the low number of deaths among MDMA users resulted in low statistical power.)

So what conclusions can be drawn from this report? Stay away from all drugs if you want to increase your chances of staying alive; but, especially, don’t do intravenous heroin. Psychiatric disorders plus drugs of abuse aren’t associated with increased mortality risk except in the case of cocaine/amphetamine. Ecstasy is unlikely to kill you on its own, but that’s not to say it won’t do some long-term damage if abused. Although marijuana users showed higher mortality rates, there’s no good reason to believe this is solely an effect of marijuana rather than other factors. Finally, the population under study here consisted of people seeking treatment, so it’s unknown whether this represents the drug-using population as a whole.

I think it’s pretty clear, given the number of questions and unknowns this study presents, that there is a lot more to learn about drug-related mortality risk.

Arendt M, Munk-Jørgensen P, Sher L, & Jensen SO (2011). Mortality among individuals with cannabis, cocaine, amphetamine, MDMA, and opioid use disorders: a nationwide follow-up study of Danish substance users in treatment. Drug and alcohol dependence, 114 (2-3), 134-9 PMID: 20971585