Tag Archives: neuroscience

Bilingualism for a Healthy Brain

A post originally published at Nautilus discusses emerging research showing that bilingualism may prevent cognitive decline in old age and stave off dementia. Read the full post here.

Parlez-vous français? If you answered yes, then you’re well on your way to enjoying the many benefits of bilingualism. Speaking both English and French, for example, can enrich your cultural experiences in multilingual destinations like Belgium, Morocco, or Egypt, and broaden your access to books, music, and films.

But the benefits of speaking another language aren’t limited to just cultural perks. “Studies have shown that bilingual individuals consistently outperform their monolingual counterparts on tasks involving executive control,” says Ellen Bialystok, a cognitive psychologist at York University. In other words, speaking more than one language can improve your ability to pay attention, plan, solve problems, or switch between tasks (like making sure you don’t miss your freeway exit while attending to your kids in the back seat). You may think it’s just higher intelligence that underlies these benefits, but evidence suggests otherwise. A 2014 study, for example, showed that those who learned a second language, in youth or adulthood, had better executive functions than those who didn’t, even after accounting for childhood IQ …

Read more here.


Brain stimulation holds promise for anorexia

Originally published on the PLOS Neuroscience Community

Anorexia nervosa is a devastating, often fatal disease in which voluntary food restriction severely compromises physical and mental health. Because current treatments, such as psychotherapy, are only modestly effective, many patients fight a lifelong battle and never fully recover. However, recent research into the neural underpinnings of anorexia provides hope that relief may be possible by regulating aberrant neural circuitry. A new study published in PLOS ONE by Jessica McClelland and colleagues from King’s College London offers preliminary support for repetitive transcranial magnetic stimulation (rTMS) as a viable therapy for anorexia.

Hitting reset on dysfunctional brain circuits

Anorexia is characterized by maladaptive cognition and behavior, including abnormal cognitive flexibility, emotional regulation, and habit learning, which have been linked to dysfunction in frontal, limbic, and striatal brain networks. The prefrontal cortex in particular is thought to be a critical hub in this aberrant network, as its hypoactivity may lead to the poor impulse control underlying many symptoms of eating disorders. Restoring prefrontal function therefore holds promise for resetting dysfunctional neural circuits and ultimately ameliorating disease symptoms. Neural function can be modulated noninvasively with brain stimulation techniques such as rTMS, which applies repeated magnetic pulses over the scalp to induce an electric current in nearby neurons and alter cortical excitability. rTMS to the prefrontal cortex is FDA-approved to treat depression, is effective at treating other psychiatric disorders including addiction and schizophrenia, and has shown early but inconsistent efficacy for alleviating symptoms of eating disorders.

To determine whether rTMS may be similarly beneficial for normalizing brain function in anorexia, the researchers tested 49 women with either restrictive or binge/purge subtypes of anorexia. Twenty-one of the women underwent real rTMS to the left dorsolateral prefrontal cortex, while 28 underwent sham stimulation. Before and after treatment, both groups completed a food challenge test to assess their response to enticing foods and gauge their symptoms. They also performed a temporal discounting task, which measures impulsivity and may therefore be sensitive to the altered inhibitory control that occurs in eating disorders.
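The temporal discounting task rests on a simple formal model: a delayed reward’s subjective value decays with delay, most commonly along a hyperbolic curve, V = A / (1 + kD), where a larger discount rate k means steeper discounting and more impulsive choices. A minimal sketch of this standard model (the function names and parameter values are illustrative, not from the study):

```python
def discounted_value(amount, delay, k):
    """Hyperbolic discounting: subjective value of a reward `amount`
    received after `delay` time units, given discount rate k."""
    return amount / (1.0 + k * delay)

def prefers_immediate(immediate, delayed, delay, k):
    """Does an agent with discount rate k take the smaller-sooner reward?"""
    return immediate > discounted_value(delayed, delay, k)

# A steep discounter (large k, more impulsive) takes $10 now over $20 in 30 days;
# a shallow discounter (small k) waits for the larger reward.
print(prefers_immediate(10, 20, 30, k=0.1))   # True
print(prefers_immediate(10, 20, 30, k=0.01))  # False
```

In practice, k is estimated from a participant’s choices between smaller-sooner and larger-later rewards; a treatment that reduces impulsivity should shift choices toward the larger, delayed option.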

A hint of therapeutic promise

The researchers were mainly interested in whether rTMS could attenuate “core” symptoms of anorexia, such as urges to restrict eating, or feelings of fullness and fatness. Core symptoms were reduced after both real and sham treatments, suggesting at least some degree of a placebo effect. However, there was a trend toward stronger symptom attenuation at multiple time-points – up to a day after treatment – for those who received real rather than sham rTMS. A closer look revealed a trend toward lower reports of feeling fat after real rTMS compared to sham. Furthermore, real rTMS was associated with reduced impulsivity on the temporal discounting task, an effect that reached significance only for the restrictive subtype of anorexia.


Core anorexia symptoms were lower at all time-points after real rTMS than sham. (McClelland et al., 2016)

The path to the clinic

Based on these findings, should practitioners begin prescribing brain stimulation to eating disorder patients? Well, not quite yet. While the study’s results are encouraging, the effects were not overwhelmingly strong, and the authors acknowledge the study was possibly underpowered. Before rTMS translates to the clinic, its therapeutic potential needs to be further evaluated, and the target patient population and optimal protocol should be better characterized. Even if these results hold up, it’s still not clear whether the benefits of rTMS are specific to anorexia, perhaps by resetting underactive neural function to normal levels, or whether rTMS similarly modulates healthy brain activity and behavior. For example, would altered impulsivity and emotional responses to food also occur in healthy individuals after rTMS? McClelland admits that “Including healthy controls in this study would have been useful as a comparison group, to see if the rTMS in people with anorexia encourages more ‘normal’ responses,” and her group plans to include a healthy comparison group in future studies.

Furthermore, other brain stimulation tools may be equally or more effective, so comparative studies are needed to determine the ideal therapeutic technique. McClelland explains that

“We selected rTMS because there is more literature on its use and efficacy in other neuro-circuit based psychiatric disorders. Transcranial direct current stimulation (tDCS) is a slightly newer technique and hasn’t yet been investigated in eating disorders as widely as rTMS. This doesn’t mean that tDCS may not be as, if not more, effective than rTMS.”

And because the neural mechanisms of eating disorder subtypes presumably differ, the efficacy of rTMS may depend on the patient’s particular set of disease symptoms and illness duration. Targeting therapies at the individual patient level could be an ideal treatment approach, but will require further research to determine what stimulation protocols are optimal for particular symptom profiles.

According to McClelland, “Our single-session study was experimental and therefore only looked at the short-term, transient effects of rTMS in people with anorexia. We wouldn’t expect a single-session of rTMS to have long lasting therapeutic effects.” Therefore, in a critical step along the path to the clinic, McClelland and colleagues are currently conducting a randomized clinical trial of 20 sessions of either real or placebo rTMS in individuals with anorexia.

Although preliminary, this study offers early hope for the millions of Americans presently suffering from anorexia. Effective relief from an otherwise debilitating, unrelenting disease may be just around the corner.


Bartholdy S et al. (2015). Clinical outcomes and neural correlates of 20 sessions of repetitive transcranial magnetic stimulation in severe and enduring anorexia nervosa (the TIARA study): study protocol for a randomised controlled feasibility trial. Trials. 16:548. doi:10.1186/s13063-015-1069-3

Grall-Bronnec M, Sauvaget A (2014). The use of repetitive transcranial magnetic stimulation for modulating craving and addictive behaviours: a critical literature review of efficacy, technical and methodological considerations. Neurosci Biobehav Rev. 47:592-613. doi:10.1016/j.neubiorev.2014.10.013

Lam RW, Chan P, Wilkins-Ho M, Yatham LN (2008). Repetitive transcranial magnetic stimulation for treatment-resistant depression: a systematic review and meta-analysis. Can J Psychiatry. 53(9):621-631.

McClelland J, Bozhilova N, Campbell I, Schmidt U (2013). A systematic review of the effects of neuromodulation on eating and body weight: evidence from human and animal studies. Eur Eat Disord Rev. 21(6):436-455. doi:10.1002/erv.2256

McClelland J et al. (2016). A Randomised Controlled Trial of Neuronavigated Repetitive Transcranial Magnetic Stimulation (rTMS) in Anorexia Nervosa. PLOS ONE. 11(3): e0148606. doi:10.1371/journal.pone.0148606

Oberndorfer TA, Kaye WH, Simmons AN, Strigo IA, Matthews SC (2011). Demand-specific alteration of medial prefrontal cortex response during an inhibition task in recovered anorexic women. Int J Eat Disord. 44(1):1-8. doi:10.1002/eat.20750

Shi C, Yu X, Cheung EFC, Shum DHK, Chan RCK (2014). Revisiting the therapeutic effect of rTMS on negative symptoms in schizophrenia: A meta-analysis. Psychiatry Res. 215(3):505-513. doi:10.1016/j.psychres.2013.12.019

Steinhausen HC (2002). The outcome of anorexia nervosa in the 20th century. Am J Psychiatry. 159(8):1284-1293. doi:10.1176/appi.ajp.159.8.1284

Zhu Y et al. (2012). Processing of food, body and emotional stimuli in anorexia nervosa: a systematic review and meta-analysis of functional magnetic resonance imaging studies. Eur Eat Disord Rev. 20(6):439-450. doi:10.1002/erv.2197


The supremely intelligent rat-cyborg

Originally published on the PLOS Neuroscience Community

When Deep Blue battled the reigning human chess champion, the world held its breath. Who was smarter … man or machine? A human victory would confirm the superiority of human intelligence, while a victory for Deep Blue would offer great promise for the potential applications of artificial intelligence to benefit mankind. And with the defeat of Garry Kasparov by an algorithm, the debate heated up over what constitutes intelligence and whether computers can possess it. But perhaps the answer to the man-versus-machine debate isn’t so black and white. Perhaps both synthetic and biological systems have unique, complementary strengths that, when merged, could yield an optimally functioning “brain” – a supremely intelligent cyborg, if you will. In their new PLOS ONE paper, Yipeng Yu and colleagues tested this possibility, comparing the problem-solving abilities of rats, computers, and rat-computer “cyborgs.”

Maze-solving rats, computers and cyborgs

Six rats were trained over the course of a week to run a series of unique mazes. The rats were implanted with microelectrodes in their somatosensory cortex and medial forebrain bundle, which releases dopamine to the nucleus accumbens and is a key node of the brain’s reward system. They were enticed to reach the maze target by the fragrance of peanut butter, a sip of water (they were mildly dehydrated) and stimulation of the medial forebrain bundle once they solved the puzzle. After training, the researchers tested the rats on 14 new mazes, monitoring their paths, strategies and time spent solving the mazes.

To compare the performance of the rats to that of a computer, the research team developed a maze-solving algorithm implementing left-hand and right-hand wall-following rules. This algorithm completed the same 14 mazes run by the rats.
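Wall-following is among the simplest complete strategies for simply connected mazes: keep one hand on the wall and you will eventually reach the target. A minimal sketch of a right-hand follower on a grid maze (the maze layout and code are illustrative, not the authors’ implementation; mirroring the turn order gives the left-hand variant):

```python
# '#' = wall, ' ' = open corridor, 'S' = start, 'G' = goal.
MAZE = [
    "#######",
    "#S   ##",
    "### # #",
    "#   # #",
    "# ### #",
    "#    G#",
    "#######",
]

DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]  # North, East, South, West

def solve(maze, max_steps=10_000):
    """Right-hand wall follower: at each cell, try turning right,
    then going straight, then left, then reversing."""
    grid = [list(row) for row in maze]
    start = next((r, c) for r, row in enumerate(grid)
                 for c, ch in enumerate(row) if ch == "S")
    goal = next((r, c) for r, row in enumerate(grid)
                for c, ch in enumerate(row) if ch == "G")
    (r, c), d = start, 1  # begin facing East
    path = [(r, c)]
    for _ in range(max_steps):
        if (r, c) == goal:
            return path
        for turn in (1, 0, 3, 2):  # right, straight, left, back
            nd = (d + turn) % 4
            nr, nc = r + DIRS[nd][0], c + DIRS[nd][1]
            if grid[nr][nc] != "#":
                r, c, d = nr, nc, nd
                path.append((r, c))
                break
    return None  # step budget exhausted (e.g. trapped circling a loop)

path = solve(MAZE)
print(len(path))  # cells visited, revisits included
```

Note the failure mode the cap guards against: in a maze with detached inner walls (loops), a pure wall follower can circle forever, which is one reason a rat-plus-algorithm cyborg can outperform the algorithm alone.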

Rat cyborgs integrated the computational powers of organic and artificial intelligence systems. Rats completed the same set of mazes, but this time with the assistance of the computer algorithm. By stimulating the rats’ left and right somatosensory cortex to prompt them to move left or right, the algorithm intervened when the rats needed help, directing them to traverse unique paths and avoid dead ends and loops.

Intelligent cyborgs

Performance of the rats, the computer, and the rat cyborgs was compared by evaluating the number of steps taken (including revisits to the same location), the number of distinct locations visited, and the total time spent reaching the target. Although the cyborgs and the computer took roughly the same number of steps, the cyborgs took fewer than the rats, a sign of more efficient problem solving. Furthermore, the cyborgs visited fewer locations than the computer or the rats (see Figure), and took less time than the rats to solve the mazes. Across the various maze layouts, the number of steps and locations covered were strongly correlated between solvers (rats and cyborgs, rats and computer, cyborgs and computer). Thus, a maze that was challenging for a rat was similarly challenging for the computer and the rat’s cyborg counterpart.

Rat cyborgs covered fewer maze locations than rats or computers. (Yu et al., 2016)


The ethics of a human cyborg

These findings from Yu and colleagues suggest that optimal intelligence may not reside exclusively in man or machine, but in the integration of the two. By harnessing the speed and logic of artificial computing systems, we may be able to augment the already remarkable cognitive abilities of biological neural systems, including the human brain. The prospect of computer-assisted human intelligence raises obvious concerns over the safety and ethics of their application. Are there conditions under which a human “cyborg” could put humans at risk? Is altering human behavior with a machine tantamount to “playing god” and a dangerous overreach of our powers?

Despite these concerns, such computer-assisted intelligent systems are already available and in surprisingly widespread use. Human brain-computer interfaces have been in use for decades, helping to restore vision, movement, and communication in impaired individuals. Although your brain may not be directly wired to a computer, you likely function as a human cyborg on a daily basis. Many of us rely on GPS to augment our navigation abilities while driving, on a word processor’s spell-checker to enhance our writing, or on a digital planner to organize a busy schedule. Few would argue that these daily uses of computer assistance are unethical. But where does one draw the line between harmless lifestyle enhancement and dangerous mind control? Yu and colleagues’ findings suggest that, at least for now, we need not fear a takeover by super-smart robots; perhaps instead it’s time to embrace the computing abilities of machines as complementary – and beneficial – to our own natural powers of intelligence.


Dobelle WH (2000). Artificial vision for the blind by connecting a television camera to the visual cortex. ASAIO J. 46(1):3–9

Grau CJ et al. (2014). Conscious Brain-to-Brain Communication in Humans Using Non-Invasive Technologies. PLOS ONE. 9(8): e105225. doi:10.1371/journal.pone.0105225

Hochberg LR et al. (2012). Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature. 485(7398):372–375. doi:10.1038/nature11076

Yu Y et al. (2016). Intelligence-Augmented Rat Cyborgs in Maze Solving. PLOS ONE. 11(2): e0147754. doi:10.1371/journal.pone.0147754


A cocktail party in a dish: How neurons filter the chatter

Originally published on the PLOS Neuroscience Community

While dining with a friend at a noisy restaurant, you listen attentively to her entertaining account of last night’s date. Despite the cacophony flooding your auditory system, your brain remarkably filters your friend’s voice from the irrelevant conversations at neighboring tables. This “cocktail party effect,” the ability to attend to select input amidst a distracting background, has fascinated researchers since its characterization in the 1950s. Although psychological and sensory models have offered insight into why human brains are so exquisitely equipped to perform this selective attention, researchers haven’t yet pinned down how neurons process mixed information to respond to the important and suppress the irrelevant. In their new paper published in PLOS Computational Biology, researchers from the University of Tokyo revealed that individual neurons learn to “tune in” to one input while ignoring others, offering an intriguing explanation for how rapid neural plasticity may give rise to the cocktail party effect.

Sending neurons mixed messages

Based on many earlier studies showing that neural networks can learn by changing their activity based on experience, the authors wondered whether neurons could also be trained to distinguish among sensory experiences. To test this idea, they recorded electrical activity from cultured rat cortical neurons. They electrically stimulated the neurons according to two stimulation patterns, to provide two unique hidden sources of input, simulating the cocktail party effect of hearing a mixture of voices. In some conditions both input patterns were activated, while in others one, the other, or neither input pattern was activated. They repeated variations of these stimulations for 100 trials in many samples to track how the neural responses changed over exposure to the stimuli.

Distinct patterns of neural activation simulate the cocktail party effect of hearing multiple speakers. (Isomura et al., 2015)


Learning to discriminate

Over the course of training, neurons altered their likelihood of spiking in response to the input patterns. Roughly half of the neurons increased their response to one source while reducing their response to the other; the remaining half showed the opposite preference. A discrimination index measuring preference for one input over the other showed that this bias increased across all electrodes over the training period. Even neurons exposed to the stimuli only briefly – trained on only a fraction of the trials – still demonstrated a response preference up to a day later, suggesting that neural learning occurred rapidly and was long-lasting. Although first author Takuya Isomura speculates that “this could last several days,” it’s not permanent. “We have confirmed that training with another stimulation pattern could overwrite the neural preference to the past source. That is, even cultures that have learned a pattern set could learn another one.”

Neurons increased their discrimination (DKLi) over the course of training when fully trained (red) and partially trained (white) but not when NMDA receptors were blocked (black). (Isomura et al., 2015)


But how did these cells so efficiently acquire and maintain this source bias, given that biological systems can learn in various ways? Treating the cultures with an NMDA receptor antagonist largely prevented the neurons from developing an input preference, suggesting that learning occurred through NMDA receptor-dependent signaling, which is known to support the long-term synaptic plasticity underlying memory formation. Furthermore, neurons only demonstrated discriminability if there was variance in the balance and frequency of the input patterns. This requirement for variance hinted that the neurons may follow independent component analysis (ICA)-like learning rules.

To better understand these learning dynamics, Isomura’s group examined changes at the neuronal population level. A simple Hebbian learning model predicted that connectivity should increase both within and across neuron groups. Instead, synaptic connectivity increased between neuron groups with the same source preference, but decreased between neuron groups with different source preferences. A modified model of Hebbian learning (including state-dependent plasticity) better accounted for these observations, as it allowed for competition between neurons. As Isomura explains, “state-dependent Hebbian plasticity could facilitate the neural response to the source that effectively stimulates the nearest electrode, while it could depress that to the other source. In the future, using the connection strength estimation, we might be able to predict the neural preference before the stimulations.”
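The reported pattern – strengthened connections within same-preference groups, weakened connections between opposite-preference groups – is exactly what a covariance-style Hebbian rule produces, since it depresses connections between anti-correlated units. A toy sketch of that idea (a generic covariance rule, not the authors’ state-dependent model):

```python
def covariance_hebbian(trials, eta=0.1):
    """Toy covariance-style Hebbian rule:
    delta w_ij = eta * (x_i - mean_i) * (x_j - mean_j), summed over trials.
    Co-active units strengthen their connection; anti-correlated units weaken it."""
    n = len(trials[0])
    means = [sum(t[i] for t in trials) / len(trials) for i in range(n)]
    w = [[0.0] * n for _ in range(n)]
    for t in trials:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += eta * (t[i] - means[i]) * (t[j] - means[j])
    return w

# Units 0 and 1 respond to source A; unit 2 responds to source B.
trials = [(1, 1, 0), (1, 1, 0), (0, 0, 1), (0, 0, 1)]
w = covariance_hebbian(trials)
print(w[0][1] > 0)  # same-preference pair strengthens: True
print(w[0][2] < 0)  # opposite-preference pair weakens: True
```

A plain Hebbian rule (delta w proportional to the raw product of activities) would only ever strengthen connections, which is why the simple model failed to capture the observed competition between groups.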

As the neural networks changed, their internal and free energy decreased, whereas entropy increased. These energy changes did not occur with NMDA receptor blockade, suggesting that they are indeed attributable to learning-related synaptic plasticity. As connections strengthen between a neuronal group and its preferred source, the authors explain, mutual information increases between the neural system’s inputs and outputs, lowering its overall free energy.

How does a discriminating neuron make a discriminating brain?

Although it’s well established that neural activity changes with experience, Isomura and colleagues have shown for the first time that neurons can invoke these learning mechanisms to recognize and discriminate information. Neural networks accomplished this impressive feat by performing unsupervised learning – adhering to ICA and free-energy principles – to self-organize via activity-dependent plasticity.

So how might these findings help you stay engrossed in your friend’s tale of first date mishaps, amidst distraction? There are obvious differences between an integrated brain, which can direct its attention at will to a sound it deems meaningful and important, and a neuronal culture, which (presumably) lacks this power of guided attention. However, in both cases, a brain or neuron must decorrelate a mishmash of inputs. Although speculative, the authors propose that attentional shifts towards important information can only occur if the brain can distinguish sensory input, beginning at the level of discrimination by individual neurons. Further research will help to explain how feedback between attentional and sensory systems orchestrates this elegant goal-directed sensory filtering. Despite the sense that “tuning in” to a friend’s voice is automatic and effortless, studies have shown that this is a learned skill acquired early during life. Like other forms of learning, developing this ability likely relies on the plasticity of neurons adapting and responding to their experiences.

To Isomura, it’s “a fascinating mystery why people can learn faster than machine learning that typically needs huge training. Interestingly, some learning properties (e.g., speed) of culture networks are more similar to machine learning rather than human behavior, while they consist of living cells. Thus, a series of this kind of studies might have a potential to fill the gap.”


Bronkhorst AW (2015). The cocktail-party problem revisited: early processing and selection of multi-talker speech. Atten Percept Psychophys. 77(5):1465–1487. doi: 10.3758/s13414-015-0882-9

Cherry EC (1953). Some Experiments on the Recognition of Speech, with One and with Two Ears. J Acoust Soc Am. 25:975–979. doi:10.1121/1.1907229

Hohwy J (2014). The neural organ explains the mind. Open MIND. Frankfurt am Main: MIND Group

Isomura T, Kotani K, Jimbo Y (2015). Cultured Cortical Neurons Can Perform Blind Source Separation According to the Free-Energy Principle. PLOS Comput Biol. doi:10.1371/journal.pcbi.1004643

Jimbo Y, Tateno T, Robinson HPC (1999). Simultaneous Induction of Pathway-Specific Potentiation and Depression in Networks of Cortical Neurons. Biophys J. 76(2):670–678. doi: 10.1016/S0006-3495(99)77234-6

Plude DJ, Enns JT, Brodeur D (1994). The development of selective attention: a life-span overview. Acta Psychol. 86(2-3):227–272

Tsien JZ, Huerta PT, Tonegawa S (1996). The essential role of hippocampal CA1 NMDA receptor-dependent synaptic plasticity in spatial memory. Cell. 87(7):1327–1338. doi: 10.1016/S0092-8674(00)81827-9

Image credit https://www.flickr.com/photos/dinnerseries


How reliable is resting state fMRI?

Originally published on the PLOS Neuroscience Community

Arguably, no advance has revolutionized neuroscience as much as the invention of functional magnetic resonance imaging (fMRI). Since its appearance in the early 1990s, its popularity has surged; a PubMed search returns nearly 30,000 publications with the term “fMRI” since its first mention in 1993, including 4,404 last year alone. Still today, fMRI stands as one of the best available methods to noninvasively image activity in the living brain with exceptional spatiotemporal resolution. But the quality of any research tool depends foremost on its ability to produce results in a predictable and reliable way. Despite its widespread use and general acceptance of its efficacy and power, neuroscientists have had to interpret fMRI results with a large dose of partially blind faith, given our incomplete grasp of its physiological origins and reliability. In a monumental step toward validating fMRI, in their new PLOS ONE study Ann Choe and colleagues evaluated the reproducibility of resting-state fMRI in weekly scans of the same individual over the course of 3.5 years.

One devoted brain

Although previous studies have reported high reproducibility of fMRI outcomes within individuals, they’ve compared only a few sessions over brief periods of weeks to months. Dr. Choe and her team instead set out to thoroughly characterize resting-state brain activity at an unprecedented time scale. To track patterns of the fMRI signal, one dedicated 40-year-old male offered his brain for regular resting-state fMRI sessions. Over the course of 185 weeks, he participated in 158 scans, occurring roughly on the same day of the week and at the same time of day. For comparison – just in case this particular individual’s brain was not representative of the general population – a group of 20 other participants (22–61 years old) from a prior study served as a reference.

Reproducibility of brain networks and BOLD fluctuations

The researchers identified 14 unique resting-state brain networks. Networks derived from the subject’s individual scans were spatially quite similar to those identified from that subject’s average network map and the multi-subject average map, and these network similarity measures were highly reproducible. Whereas executive function networks were the most reproducible, visual and sensorimotor networks were the least. The relatively low reproducibility of “externally directed” networks could be attributable to the nature of the unrestrained scanning conditions, in which mind-wandering or undirected thoughts could engage an array of sensory experiences. Dr. Choe suspects “that under truly controlled conditions, exteroceptive networks would become more reproducible. Differences in reproducibility in exteroceptive versus interoceptive networks should be seen as an observation that requires follow up study.”

Figure 1. Spatial similarity of weekly fMRI sessions for sensorimotor, visual and executive networks. (Choe et al., 2015)


The basic signal underlying fMRI is the blood oxygen level dependent (BOLD) response, a measure of changes in blood flow and oxygenation thought to reflect vascular and metabolic responses to neural activity. The magnitudes of BOLD fluctuations were similar both across the single subject’s scans and across the group’s scans, although these fluctuations were generally more reliable within-subject. As with the spatial overlap between networks, BOLD signal in executive networks was most reproducible, while that in default mode and sensorimotor networks was least reproducible across the subject’s sessions.

Between-network connectivity

In the brain, no network is an island, but rather, is in constant communication with other regions, near and far. This functional connectivity can be assessed with fMRI by computing correlations in the signal between areas. As might be expected, connectivity was highest between networks involved in related functions, for example between sensorimotor and auditory networks, and between sensorimotor and visual networks. Connectivity between networks was similar in the single subject and multi-subject datasets, and was highly reproducible both across the single subject’s sessions and within the multi-subject dataset.
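Concretely, between-network functional connectivity of this kind is the Pearson correlation between two networks’ signal time courses. A minimal sketch using synthetic time series (not the study’s data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two (e.g. network-averaged BOLD) time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Two "networks" sharing a slow fluctuation, each with its own small
# deterministic jitter, come out strongly functionally connected.
shared = [math.sin(0.1 * t) for t in range(200)]
net_a = [s + 0.1 * ((t * 37 % 101) / 101 - 0.5) for t, s in enumerate(shared)]
net_b = [s + 0.1 * ((t * 53 % 97) / 97 - 0.5) for t, s in enumerate(shared)]
print(round(pearson_r(net_a, net_b), 2))  # close to 1.0
```

Assessing reproducibility then amounts to asking how stable each pairwise r value is from session to session.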

Figure 2. Between network connectivity for single-subject and multi-subject datasets. (Choe et al., 2015)


fMRI over the years

A unique advantage of the study design was the rich temporal information provided by repeated scanning over a multi-year period. This allowed the researchers not only to assess the reproducibility of the BOLD signal, but also to explore how it may change with the passage of years or with seasonal fluctuations. Significant temporal trends were found in spatial similarity for the majority (11 of 14) of networks, in BOLD fluctuations for two networks, and in between-network connectivity for many (29 of 105) network pairs. All but one of these trends were positive, indicating increased stability of the fMRI signal over time. What drives these changes over the years isn’t entirely clear. They could simply reflect habituation to the scanning environment, for example, if the experience becomes increasingly repetitive and familiar with exposure. Alternatively, the authors suggest, they might involve physiological changes in the aging brain, such as synaptic or neuronal pruning. Over the 3.5-year study, the 40-year-old participant indeed showed a decline in gray matter volume; this neural reorganization could feasibly impact the stability of the fMRI signal. However, Dr. Choe cautions that “although three years is a long time, it is certainly not long enough to address the issue of say, an aging brain.”
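A temporal trend of this kind can be quantified as the least-squares slope of a measure against session number, with a positive slope meaning the measure increased over the scanning period. A generic sketch of that computation (toy numbers, not the authors’ exact analysis):

```python
def linear_trend(y):
    """Ordinary least-squares slope of y against session index 0..n-1.
    A positive slope means the measure increased over the scanning period."""
    n = len(y)
    mx = (n - 1) / 2            # mean of the session indices
    my = sum(y) / n
    num = sum((x - mx) * (v - my) for x, v in zip(range(n), y))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den

# Toy weekly spatial-similarity scores drifting slowly upward across 158 scans:
scores = [0.80 + 0.0005 * week for week in range(158)]
print(linear_trend(scores) > 0)  # True: the measure grew more stable
```

The same regression framework extends to the seasonal question: adding a sinusoidal regressor with a one-year period tests for the annual periodicity the authors report.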

Notably, many networks showed annual periodicity in their spatial similarity (9 of 14 networks) and BOLD fluctuations (3 networks). These measures also correlated with the local temperature, linking reliability of the fMRI signal with seasonal patterns. Although speculative, the authors suggest that this may in part relate to circadian or other homeostatic rhythms that regulate brain activity. Dr. Choe and her group “were surprised to discover annual periodicity in rs-fMRI outcome measures. If future studies, in a large number of participants, find significant annual periodicity in rsfMRI outcomes, then it would be prudent to take such temporal structure into consideration, especially when designing studies in chronic conditions, or for extended therapeutic interventions.”

Reason to rest easy?

The findings from Dr. Choe and colleagues’ ambitious study provide convincing evidence that the resting fMRI signal is reproducible over extensive time periods, giving cognitive neuroscientists everywhere reason to breathe a small sigh of relief. Perhaps more importantly, the study characterizes the nuanced patterns of the signal’s spatial and temporal stability, unraveling how it differs across brain networks and how it might be vulnerable to moderators such as aging or environment. This new understanding of fMRI dynamics will be incredibly useful to researchers aiming to optimize their fMRI study design, and holds particularly important implications for longitudinal studies in which aging or seasonal effects may be of concern. According to Dr. Choe,

“The high reproducibility of rs-fMRI network measures supports the candidacy of such measures as potential biomarkers for long-term therapeutic studies.”

One future application her team is currently pursuing is “using rs-fMRI to study brain reorganization in persons with chronic spinal cord injury, having recently reported significantly increased visuo-motor connectivity following recovery. We are interested in whether such measures can be used as biomarkers for prognosis and to help monitor responses to long-term therapy.”


Bandettini PA (2012). Twenty years of functional MRI: The science and the stories. Neuroimage. 62(2):575–588. doi: 10.1016/j.neuroimage.2012.04.026

Chen S, Ross TJ, Zhan W et al (2008). Group independent component analysis reveals consistent resting-state networks across multiple sessions. Brain Research. 1239:141-151. doi: 10.1016/j.brainres.2008.08.028

Choe AS, Belegu V, Yoshida S, Jo et al (2013). Extensive neurological recovery from a complete spinal cord injury: a case report and hypothesis on the role of cortical plasticity. Front Hum Neurosci. 7:290.

Choe AS, Jones CK, Joel SE et al (2015). Reproducibility and Temporal Structure in Weekly Resting-State fMRI over a Period of 3.5 Years. PLOS One. 10(10):e0140134. doi: 10.1371/journal.pone.0140134

Guo CC, Kurth F, Zhou J et al (2012). One-year test–retest reliability of intrinsic connectivity network fMRI in older adults. Neuroimage. 61(4):1471–1483. doi: 10.1016/j.neuroimage.2012.03.027

Hodkinson DJ, O’Daly O, Zunszain PA et al (2014). Circadian and homeostatic modulation of functional connectivity and regional cerebral blood flow in humans under normal entrained conditions. J Cereb Blood Flow Metab. 34:1493–1499. doi: 10.1038/jcbfm.2014.109

Logothetis NK, Pauls J, Augath M, Trinath T, Oeltermann A (2001). Neurophysiological investigation of the basis of the fMRI signal. Nature. 412:150–157. doi: 10.1038/35084005

Logothetis NK (2008). What we can do and what we cannot do with fMRI. Nature. 453:869–878. doi: 10.1038/nature06976

Wisner KM, Atluri G, Lim KO, MacDonald AW (2013). Neurometrics of intrinsic connectivity networks at rest using fMRI: Retest reliability and cross-validation using a meta-level method. Neuroimage. 76(1):236–251. doi: 10.1016/j.neuroimage.2013.02.066


While you were sleeping: Neural reactivations promote motor learning

Originally published on the PLOS Neuroscience Community

Do you recall that moment when you first learned to ride a bike? After days of practice, it finally clicked. Almost effortlessly, your legs cycled in perfect harmony as you maneuvered gracefully around turns and maintained impeccable balance. You may have learned this skill decades ago but it will likely stick with you for the rest of your life. How did your brain accomplish this remarkable feat of transforming a series of forced and foreign actions into an automatic, fluid movement sequence? A new PLOS Biology study by Dr. Dhakshin Ramanathan and colleagues explored the neural substrates of motor memory acquisition, reporting that reactivation of task-related neural activity patterns during sleep promotes motor skill learning in mice.

Sleep enhances motor learning

To evaluate motor learning, the researchers trained mice to perform a task in which they had to reach for a food pellet. Since sleep is known to be important for memory consolidation, the mice were allowed to sleep before the reach task and again afterward, then performed the task a second time so the researchers could assess how sleep affected performance. Accuracy on the reaching task improved over the course of the first training session, whereas the mice responded more quickly after sleeping; hence, “online learning” (during the task) improved accuracy and “offline learning” (during sleep) improved speed. No such changes occurred when the mice were sleep-deprived between task sessions, pinpointing sleep – rather than the mere passage of time – as the source of the performance boost.

Behavioral paradigm (Ramanathan et al., 2015)

Sleep-dependent neural changes

The researchers next explored the neurophysiological basis for this sleep-dependent learning, recording neural activity from the forelimb area of motor cortex. As co-author Dr. Karunesh Ganguly explained,

“Studies had previously studied hippocampal-based memory systems. It remained unclear specifically how the motor system (i.e., procedural memory) processes memories during sleep.”

When the mice performed the task the second time, after sleeping, the onset of neural firing (time-locked to the reach) peaked earlier, and this activity was more strongly modulated by the task. Notably, firing onset did not change after sleep deprivation, confirming that sleep was necessary for this temporal shift in the neural response.

Learning-related reactivation

In the hippocampus and elsewhere in the cortex, resting neurons will fire in a particular sequence matching the temporal pattern of activity during a prior experience, an event known as “replay” that is thought to support formation of memory for that experience. The researchers speculated that reactivation or replay in motor cortex may similarly promote motor learning. Indeed, neural activity patterns identified from the reach task were more prevalent during sleep after the task, showing – as predicted – reactivation of task-related activity after motor learning. When mice performed the reach task on multiple days, the degree of reactivation during sleep correlated with reduced reaction time across days, linking stronger neural reactivation with behavioral improvements.

A) Reactivation of task-related neural activity after learning. B) Reactivation correlates with improvements in reaction time. (Ramanathan et al., 2015)

Since the authors observed neural reactivation during motor learning, they next wondered whether the temporal sequence of this reactivation might be an important element of the memory code. The neural activation pattern during sleep more closely matched the task-related activity pattern after learning than before, although some of the temporal information in the sleep sequence was lost. Although prior studies have shown a role for hippocampal or cortical replay in memory consolidation, Dr. Ganguly raises the important distinction that here, they “did not find evidence of ‘replay’ (i.e., sequences) but ‘reactivation’ (i.e., synchronous bursts).”
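Reactivation of this synchronous kind is commonly quantified by template matching: correlating the population activity vector in each sleep time bin with the mean task-evoked pattern. The sketch below is purely illustrative (the `reactivation_strength` helper and the toy spike counts are assumptions, not the paper's method), but it shows the basic logic:

```python
import numpy as np

def reactivation_strength(task, sleep):
    """Pearson correlation between each sleep bin's population vector and
    the mean task-activation template. Inputs are (n_neurons, n_bins)
    spike-count arrays; returns one r value per sleep bin."""
    template = task.mean(axis=1)
    tz = (template - template.mean()) / template.std()
    sz = (sleep - sleep.mean(axis=0)) / sleep.std(axis=0)  # z-score each bin
    return tz @ sz / tz.size

# Toy data: a fixed activation pattern, noisily "reactivated" in the
# first 100 sleep bins but absent from the last 100
rng = np.random.default_rng(1)
pattern = rng.random(50)
task = pattern[:, None] + 0.1 * rng.standard_normal((50, 200))
sleep = np.concatenate(
    [pattern[:, None] + 0.3 * rng.standard_normal((50, 100)),  # reactivated
     rng.random((50, 100))],                                   # unrelated
    axis=1)
r = reactivation_strength(task, sleep)
```

Bins that reinstate the task pattern yield high correlations while unrelated bins hover near zero, which is how a post-task increase in "reactivation" can be read off the sleep data.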

Learning-related plasticity was evident even at the single neuron level. Those neurons with the highest task-related activity were most strongly reactivated during sleep, and those showing the strongest reactivation also happened to show the most dramatic shift in the onset of their response to reaching. The authors speculate that this increased temporal coupling of neural activity to the task could facilitate binding the neurons into a distributed “movement complex” that aids formation of the motor memory.

Locking to spindles and slow waves for widespread plasticity 

Bursts of high-frequency activity – known as spindles – and slow-wave oscillations have both been implicated in offline learning. If task-related reactivations during sleep are important for memory consolidation, the authors reasoned, they may be temporally linked to spindles or slow waves. After learning the reach task, reactivations were in fact more closely time-locked and phase-locked (i.e., occurred at a particular phase of the cycle) to fast spindles. Reactivations were also more strongly time-locked and shifted their phase-locking to slow oscillations. Thus, during sleep, neural activation patterns related to the motor task were not only more prevalent after training, but their timing was also refined to coincide with particular neural events that may facilitate memory formation. Since spindles may be involved in synchronizing long-range cortical activity, locking task-specific reactivations to spindles could tie them into neuroplastic changes throughout widespread brain networks supporting consolidation.
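Phase-locking of events to an ongoing oscillation is typically quantified with circular statistics. A minimal illustration (not the paper's exact analysis) computes the mean resultant vector of the oscillation phases at which events occur – its length is near 1 when events cluster at one phase and near 0 when they are spread uniformly:

```python
import numpy as np

def phase_locking(phases):
    """Circular statistics on event phases (radians): returns the mean
    resultant vector length (0 = no locking, 1 = perfect locking) and
    the preferred phase."""
    v = np.exp(1j * np.asarray(phases)).mean()
    return np.abs(v), np.angle(v)

rng = np.random.default_rng(2)
# Events clustered around phase 1.0 rad vs. events at uniform phases
R_locked, mu = phase_locking(1.0 + 0.2 * rng.standard_normal(500))
R_uniform, _ = phase_locking(rng.uniform(-np.pi, np.pi, 500))
```

A shift in preferred phase after learning, like the one reported for slow oscillations, would show up as a change in the angle `mu` alongside an increase in the resultant length.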

At a recent talk on sleep-dependent memory consolidation, the speaker compared the neural reorganization that takes place during memory formation to a house renovation. Just as it’s more comfortable and effective for us to check into a hotel while our house is renovated, learning may be more effective when the brain checks out – into the quietude of sleep – while neural reorganization occurs. This may explain why sleep is so important for learning, yet it doesn’t explain how the brain stores new memories during sleep. Past studies have identified neuronal reactivation, coordinated with spindles and slow waves, as critical for forming declarative memories. Dr. Ramanathan and colleagues’ findings suggest that these mechanisms also occur in the motor cortex to support a radically different form of memory – the kind that helped you learn to ride a bike many years ago.

Clarifying the neural dynamics of sleep-dependent learning holds profound implications not just for those of us hoping to learn to play a new instrument or refine our dance moves. It may also help us better understand the remarkable neuroplasticity that underlies rapid motor learning during early development, and hold potential to promote recovery from motor impairments following brain injury. Dr. Ganguly is optimistic about the possible applications of their findings:

“Motor learning is likely an essential process during rehabilitation. Surprisingly little is known about the role of sleep and replay during recovery. With further study, one could imagine using sleep and offline processing to maximize the learning during rehabilitation.”


Ji D, Wilson MA (2007). Coordinated memory replay in the visual cortex and hippocampus during sleep. Nat Neurosci. 10:100-107. doi: 10.1038/nn1825

Ramanathan DS, Gulati T, Ganguly K (2015). Sleep-Dependent Reactivation of Ensembles in Motor Cortex Promotes Skill Consolidation. PLOS Biol. doi: 10.1371/journal.pbio.1002263

Stickgold R (2005). Sleep-dependent memory consolidation. Nature. 437:1272–1278. doi: 10.1038/nature04286

Image credit https://www.flickr.com/photos/echoforsberg/


#PLOS #SfN15 Recap: Hidden variables of behavior

Originally published on the PLOS Neuroscience Community

The Society for Neuroscience meeting is unique in both its breadth and depth. There are sessions on literally everything Neuro, each delving with exquisite detail and nuance into its given topic. While this level of focus is great for those seeking comprehensive coverage of their niche, it can be daunting for those looking for a broader sampling of the field’s cutting edge. The Hidden Variables of Behavior symposium was one of the rare sessions to stray from the single-track convention and elegantly bridge seemingly disparate topics, methodologies and applications, producing a standout session with exceptionally broad appeal. It accomplished this by exploring a theme that is perhaps the unifying motivation underlying nearly all neuroscience research: how does the brain engender behavior? How does neural activity give rise to the thoughts, interactions with our environment, and engagements with others that define our experiences? In an enthralling series of talks by Loren Frank, Mark Schnitzer, Yang Dan and Catherine Dulac, the symposium covered topics ranging from learning and memory to sleep and social behavior. This session had it all.

Rapidly alternating representations of present and past in hippocampal-cortical networks

Loren Frank kicked off the symposium by exploring how the hippocampus supports our ability to remember the past and plan for the future. Hippocampal cells have a remarkable ability to replay past experiences via bursts of high-frequency oscillations known as sharp-wave ripples (SWRs). When an animal traverses a path, its hippocampal neurons fire in a characteristic sequence that codes its trajectory; later, at rest or while sleeping, this spiking sequence repeats, sped up to approximately twenty times the original rate! Disrupting hippocampal ripples impairs sequence learning, indicating that they’re critical for acquiring memories. However, the mechanisms, at both regional and whole-brain levels, by which SWRs help to consolidate memories are unclear.


Much attention has been paid to neurons in hippocampal subregions CA1 and CA3, which are excited by high-speed motion and positively modulated by SWRs. However, Frank’s group identified two new groups of hippocampal neurons – CA2P and CA2N – that are positively and negatively modulated by SWRs, respectively. Notably, the CA2N population has an exceptionally high level of baseline activity and preferentially fires during rest or low-speed motion. Because of their distinct function, these rebellious cells may be crucial for ongoing processing of the current state while maintaining representations of the past and future.

Although some of us (including yours truly) may hold a hippocampo-centric view of memory, Frank reminds us that memory is “not just a hippocampal thing.” Looking to the rest of the brain, his group found that SWRs recruit 35% of prelimbic cortex neurons, including cells that are excited and others that are inhibited by SWRs. Similar to the distinct populations of CA2P and CA2N cells, prefrontal cortex neurons may activate during either high-speed motion or immobility. This balance of excitation and inhibition in the hippocampus and surrounding cortex may promote rapid transitions between representations of the past and future, and facilitate their integration for learning and planning.

Large-scale ensemble neural dynamics underlying learning and long-term associative memory

Mark Schnitzer continued with this theme, presenting intriguing findings on the spatiotemporal properties of the neural adaptations subserving learning. Equally impressive, however, are the advanced imaging tools his lab is developing to explore these issues. Their techniques allow neural recordings in behaving animals at unprecedented spatial depths and extents over long time scales. For instance, their current best is simultaneously recording 1,202 hippocampal cells in a freely moving mouse. Someone give this man the “I-recorded-the-most-neurons” award!

Using these tools, Schnitzer has been exploring hippocampal morphological and physiological changes that contribute to learning. CA1 neurons are a likely target for spatial learning, as they show place-cell activity, preferentially responding to particular regions of an animal’s environment. Surprisingly, dendrites in subregion CA1 are remarkably stable, suggesting that dendritic plasticity is unlikely to be the critical factor underlying learning. However, CA1 spine turnover is relatively rapid – on the order of 8-10 days – in comparison to cortical spines, of which 50% are permanent over a month. Schnitzer explained that although these cells are temporally stochastic in that they sometimes take breaks from their place-coding activity, when they return to the neuronal “spatial ensemble” they always return to encode the same place. What’s more, CA1 spatial representations are refined by learning, becoming both more accurate and reliable in their coding. I’ll be eagerly following Schnitzer’s work to see how their ongoing methodological innovations and applications advance our understanding of the hippocampal dynamics supporting long-term memories.

Neural circuits for sleep control

Yang Dan turned from this fast-paced discussion of rapid neural plasticity, spatial navigation and learning to examine the neural regulation of sleep. Historically, neurons that trigger alertness and waking have been easy to identify, but researchers have struggled to track down “sleep neurons.” Past lesion and c-fos studies have shown that hypothalamic – particularly preoptic – neurons are important for inducing sleep.

Combining optogenetics with electrophysiology, Dan’s lab has expanded upon these findings to pinpoint both the responsible cell types and their specific sleep-inducing effects. In particular, activating GABAergic preoptic cells projecting to the tuberomammillary nucleus (also of the hypothalamus) promotes non-REM sleep initially, and REM sleep later. The midbrain’s ventrolateral periaqueductal gray also promotes sleep, but only the non-REM type. Dan’s findings together suggest that mutual inhibition across these key hypothalamic and brainstem regions regulates transitions across three general brain states of waking, REM sleep and non-REM sleep.

Long-term changes in the representation of social information in the mouse medial amygdala

After all this talk about sleep, my hypothalamic sleep neurons had begun battling the morning’s adenosine antagonists. Fortunately, Catherine Dulac’s captivating talk exploring the bases of social interactions and sex-specific behavior kept me alert and engaged. Two key circuits working in concert to process social information, she explained, are the olfactory and vomeronasal systems. The latter system in particular may act as a switch to promote appropriate (and suppress inappropriate) sex-specific behavior.

Dulac’s research, fusing molecular, genetic and electrophysiological techniques, has identified the medial amygdala as a critical stop along the vomeronasal circuit for mediating sex-specific social signaling in mice. Medial amygdalar encoding of social cues is not only sexually dimorphic, but is also regulated by salient social experiences including mating and co-housing. Furthermore, the efficiency of medial amygdalar signaling also changes after mating in a sex-specific manner, increasing in males but decreasing in females. Together, Dulac’s work has pinpointed the medial amygdala as an indispensable hub within an extensive neural circuit that regulates social behavior and, in turn, is modulated by sexual and social experience.

Every SfN has at least one session that reminds me why I love the brain and re-ignites my passion for Neuroscience. This year, the Hidden Variables of Behavior symposium was it! It may be a year away, but I’m eagerly awaiting #SfN16 for similarly inspiring talks.

For an abbreviated play-by-play, visit my Storified live-tweeting of the symposium’s highlights.


Anderson EB, Grossrubatscher I, Frank L (2015). Dynamic Hippocampal Circuits Support Learning- and Memory-Guided Behaviors. Cold Spring Harb Symp Quant Biol. 79:51–58. doi: 10.1101/sqb.2014.79.024760

Attardo A, Fitzgerald JE, Schnitzer MJ (2015). Impermanence of dendritic spines in live adult CA1 hippocampus. Nature. 523:592–596. doi: 10.1038/nature14467

Bergan JF, Ben-Shaul Y, Dulac C (2014). Sex-specific processing of social cues in the medial amygdala. eLife. 3:e02743. doi: 10.7554/eLife.02743

Brennan PA (2001). The vomeronasal system. Cell Mol Life Sci. 58(4):546–555.

Ego-Stengel V, Wilson MA (2010). Disruption of ripple-associated hippocampal activity during rest impairs spatial learning in the rat. Hippocampus. 20(1):1–10. doi: 10.1002/hipo.20707

Weber F, Chung S, Beier KT, Xu M, Luo L, Dan Y (2015). Control of REM sleep by ventral medulla GABAergic neurons. Nature. 526:435–438. doi: 10.1038/nature14979


That All-Nighter is not without Neuroconsequences

Originally published on the PLOS Neuroscience Community

As you put the finishing touches on your paper, you notice the sun rising and fantasize about crawling in bed. Your vision and hearing are beginning to distort and the words staring back at you from the monitor have lost their meaning. Your brain … well, feels like mush. We’ve all been there. That debilitating brain fog that inevitably sets in after an all-nighter prompts the obvious question: what does sleep deprivation actually do to the brain? Neuroscientists from Norway set out to answer this question in their recent PLOS ONE study, examining how a night forgoing sleep affects brain microstructure. Among their findings, sleep deprivation induced widespread structural alterations throughout the brain. The lead author shares his thoughts on the possible biological causes of these changes, and whether they may be long-lasting.

Inducing sleep deprivation

The researchers assessed a group of 21 healthy young men over the course of a day. The participants underwent diffusion tensor imaging (DTI; a form of MRI that measures water diffusion and can be used to evaluate white matter integrity) when they first awoke, at 7:30 am. They were free to go about their day as normal before returning for a second DTI scan at 9:30 pm. They remained in the lab for monitoring until a final scan at 6:30 am the following morning, for a total period of 23 hours of continued waking. Since we’re now learning that anything and everything can influence brain structure on surprisingly short time-scales, the researchers finely controlled as many confounding factors as possible. The participants were not allowed to exercise or consume alcohol, caffeine or nicotine during the study, or to eat right before the scans. Since DTI measures water diffusion, hydration was evaluated at all sessions and accounted for in their analysis.

Rapid microstructural changes to waking

The researchers were interested in two main questions: how does the brain change after a normal day of wakefulness, and how does it change after sleep deprivation? They focused on three DTI metrics to probe how different features of neuronal tissue may change with waking. Radial diffusivity (RD) measures how readily water diffuses across fibers, whereas axial diffusivity (AD) measures diffusion along the length of a tract. Fractional anisotropy (FA) summarizes the balance between axial and radial diffusivity – it is high when diffusion is much stronger along the tract than across it – and therefore measures how strongly water diffuses along a single direction.
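For concreteness, these three metrics have standard definitions in terms of the diffusion tensor's three eigenvalues (a generic sketch of the textbook formulas, not the authors' processing pipeline):

```python
import numpy as np

def dti_metrics(evals):
    """AD, RD and FA from the three diffusion-tensor eigenvalues
    (standard definitions; eigenvalues typically in mm^2/s)."""
    l1, l2, l3 = sorted(evals, reverse=True)
    ad = l1                    # diffusion along the principal axis
    rd = (l2 + l3) / 2         # mean diffusion perpendicular to it
    md = (l1 + l2 + l3) / 3    # mean diffusivity
    fa = np.sqrt(1.5 * ((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
                 / (l1 ** 2 + l2 ** 2 + l3 ** 2))
    return ad, rd, fa

# Typical white-matter-like eigenvalues (units of 1e-3 mm^2/s)
ad, rd, fa = dti_metrics([1.7, 0.4, 0.3])
```

Note that FA is bounded between 0 (isotropic diffusion, as in cerebrospinal fluid) and 1 (diffusion confined to one axis), which is why a drop in AD relative to RD pulls FA back down.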

From morning to evening, FA increased and this was driven mostly by reduced RD (Figure, left). From the evening to the next morning – after the all-nighter – FA values decreased to levels comparable to the prior morning, and this drop was coupled with a decrease in AD (Figure, right). Thus, over the course of a full day of wakefulness FA fluctuated, temporarily rising but eventually rebounding. In contrast, both RD and AD declined but at different rates, RD dropping by the end of a normal day, and AD dropping later, only after considerable sleep deprivation. These changes were non-specific, occurring throughout the brain, including in the corpus callosum, brainstem, thalamus and frontotemporal and parieto-occipital tracts.

Throughout the brain, FA values increase from morning to evening (left) and decrease from the evening to the next morning after a night without sleep (right). Elvsåshagen et al., 2015.

How bad are the neuroconsequences of sleep deprivation?

Other studies have corroborated these reports that wakefulness alters the brain, including reduced diffusion with increasing time awake, and altered functional connectivity after sleep deprivation. How this plasticity reflects the consequences of waking on the brain, however, isn’t clear. Sleep is known to be essential to tissue repair and is particularly important for promoting lipid integrity to maintain healthy cell membranes and myelination. The question remains, therefore, how detrimental the structural reorganization from sleep deprivation really is. Does the plasticity reported here and elsewhere persist for days, weeks or longer, or can a long night of deep catch-up sleep reverse any detriment that all-nighter caused?

“My hypothesis,” says first author Dr. Torbjørn Elvsåshagen, “would be that the putative effects of one night of sleep deprivation on white matter microstructure are short term and reverse after one to a few nights of normal sleep. However, it could be hypothesized that chronic sleep insufficiency might lead to longer-lasting alterations in brain structure. Consistent with this idea, evidence for an association between impaired sleep and localized cortical thinning was found in obstructive sleep apnea syndrome, idiopathic rapid eye movement sleep behavior disorder, mild cognitive impairment and community-dwelling adults. Whether chronic sleep insufficiency can lead to longer-lasting alterations in white matter structure remains to be clarified.”

Is sleepiness really to blame?

It’s likely that multiple factors contribute to these distinct patterns of change in neuronal tissue. After sleep deprivation, the extent of AD decline correlated with subjective sleepiness ratings, suggesting that microstructural alterations may in fact be attributable to changes in alertness or arousal. This possibility is in line with the finding that changes occurred in both the thalamus and brainstem, regions important for arousal and wakefulness. However, the non-linear changes in FA suggest that some microstructural changes may be less related to sleepiness and more directly driven by circadian effects. FA increased late in the day but – despite fatigue – dropped back after sleep deprivation to the same levels as the day prior. This rebounding may have been due to declining levels of AD and RD reaching equilibrium (recall that FA reflects the balance of AD to RD) or to neuronal features that fluctuate with our circadian rhythms, at least partially independent of our sleep habits. What’s more, other studies have found that seemingly mundane activities, such as juggling or spatial learning, also induce gray and white matter changes within hours, and presumably many as-yet unstudied activities cause similarly rapid plasticity. Given that participants were free to engage in various physical and cognitive activities between the scans, it’s reasonable to assume that some of these behaviors may have also influenced brain structure. Whatever the mechanism, these effects underscore the importance of accounting for time of day in structural neuroimaging studies.

Dr. Elvsåshagen elaborates on these possible factors: “The precise neurobiological substrate for the observed DTI changes after waking remain to be clarified. We cannot rule out the possibility that both activity-independent and activity-dependent processes could contribute to DTI changes after waking. In support of potential activity-dependent white matter alterations, there is interesting evidence from in vitro studies indicating that hours of electrical activity can lead to changes in myelination. To further explore the possibility of activity-dependent white matter alterations, one could examine whether different physical or cognitive tasks lead to task-specific white matter changes.”

Sleepy outliers?

Notably, two of the 21 participants did not show the same rise in FA throughout the day as the others, and showed the smallest change in FA and AD after sleep deprivation. While variability across individuals in terms of brain structure and biological responses to the environment is expected, the remarkable consistency of the study’s other findings raises the possibility that some other factors may explain these outliers. Dr. Elvsåshagen conjectures, “These individuals were also the least tired individuals after sleep deprivation. Although highly speculative, one possible explanation for the lesser changes in these two participants might be a particular resistance to the effects of waking and sleep deprivation.” A follow-up with additional time-points and closer monitoring of activities could help more finely track how the patterns of brain microstructural change shift over periods of waking, and vary across individuals.

Linking diffusion to neurons

How sleep, fatigue, activity or circadian rhythms affect particular neuronal structural properties remains to be seen. RD and AD are thought to depend on myelin and axon integrity, respectively, but DTI metrics in general are sensitive to various other tissue features as well, including cell membrane permeability, axon diameter, tissue perfusion or glial processes. While these properties are difficult to image in living humans, insight from animal studies will help determine how waking impacts specific neuronal characteristics.

Longer-term studies are needed to answer these questions. Dr. Elvsåshagen shared that his team has since replicated their results in a larger sample, and is planning a follow-up study on the effects of waking and sleep deprivation on gray matter structure. Until these outstanding questions are answered, keeping a regular sleep schedule – and avoiding those early morning paper-writing marathons – may be a better option for your brain health.


Bellesi M, Pfister-Genskow M, Maret S, Keles S, Tononi G, Cirelli C (2013). Effects of sleep and wake on oligodendrocytes and their precursors. J Neurosci. 33: 14288–14300. doi: 10.1523/JNEUROSCI.5102-12.2013

Budde MD, Xie M, Cross AH, Song SK (2009). Axial diffusivity is the primary correlate of axonal injury in the experimental autoimmune encephalomyelitis spinal cord: a quantitative pixelwise analysis. J Neurosci. 29: 2805–2813. doi: 10.1523/JNEUROSCI.4605-08.2009

De Havas JA, Parimal S, Soon CS, Chee MW (2012). Sleep deprivation reduces default mode network connectivity and anti-correlation during rest and task performance. NeuroImage. 59: 1745–1751. doi: 10.1016/j.neuroimage.2011.08.026

Driemeyer J, Boyke J, Gaser C, Buchel C, May A (2008). Changes in gray matter induced by learning—revisited. PLOS ONE. 3: e2669. doi: 10.1371/journal.pone.0002669

Elvsåshagen T, Norbom LB, Pedersen PØ, Quraishi SH, Bjørnerud A, Malt UK (2015). Widespread Changes in White Matter Microstructure after a Day of Waking and Sleep Deprivation. PLOS ONE. 10(5): e0127351. doi: 10.1371/journal.pone.0127351

Hinard V, et al. (2012). Key electrophysiological, molecular, and metabolic signatures of sleep and wakefulness revealed in primary cortical cultures. J Neurosci. 32: 12506–12517. doi: 10.1523/JNEUROSCI.2306-12.2012

Hofstetter S, Tavor I, Tzur Moryosef S, Assaf Y (2013). Short-term learning induces white matter plasticity in the fornix. J Neurosci. 33: 12844–12850. doi: 10.1523/JNEUROSCI.4520-12.2013

Jiang C, et al. (2014). Diurnal microstructural variations in healthy adult brain revealed by diffusion tensor imaging. PLOS ONE. 9: e84822. doi: 10.1371/journal.pone.0084822

Joo EY, et al. (2013). Brain Gray Matter Deficits in Patients with Chronic Primary Insomnia. Sleep. 36(7): 999-1007. doi: 10.5665/sleep.2796

Rahayel S, et al. (2015). Patterns of cortical thinning in idiopathic rapid eye movement sleep behavior disorder. Mov Disord. 30(5): 680–687. doi: 10.1002/mds.25820

Sanchez-Espinosa MP, Atienza M, Cantero JL (2014). Sleep deficits in mild cognitive impairment are related to increased levels of plasma amyloid-β and cortical thinning. NeuroImage. 98: 395-404. doi: 10.1016/j.neuroimage.2014.05.027

Song SK, Sun SW, Ramsbottom MJ, Chang C, Russell J, Cross AH (2002). Dysmyelination revealed through MRI as increased radial (but unchanged axial) diffusion of water. NeuroImage. 17: 1429–1436. doi: 10.1006/nimg.2002.1267

Sexton CE, et al. (2014). Accelerated changes in white matter microstructure during aging: a longitudinal diffusion tensor imaging study. J Neurosci. 34(46): 15425–15436. doi: 10.1523/JNEUROSCI.0203-14.2014

Wake H, Lee PR, Fields RD (2011). Control of Local Protein Synthesis and Initial Events in Myelination by Action Potentials. Science. 333(6049): 1647–1651. doi: 10.1126/science.1206998


Astrocytes may Hold the Key to Exercise-Induced Cognitive Enhancement

Originally published on the PLOS Neuroscience Community

Forget expensive pills or exotic miracle supplements. Exercise may be the most effective – not to mention free and accessible – cognitive enhancer on the market. Research in humans has shown that physical activity can improve cognitive function and may help stave off dementia, yet the biological mechanisms underlying these benefits aren’t fully understood. Animal studies have made substantial progress on this front, demonstrating such positive responses to running as enhanced neurogenesis and elevated levels of neural growth factors. However, much of this research has been relatively narrowly focused, with particular attention devoted to neuronal changes and one notable brain region – the hippocampus. The hippocampus is selectively important for certain functions like learning and episodic memory, but exercise improves a range of cognitive processes, many of which depend on other, non-hippocampal brain regions. Therefore, researchers from Princeton University looked beyond the hippocampus and neurons to more thoroughly characterize the neural events that impart cognitive protection from physical activity. In their study recently published in PLOS ONE, Adam Brockett and colleagues report that running enhances performance on various cognitive tasks, improvements that may be mediated by changes in astrocytes, the lesser-appreciated brain cells.

Running selectively boosts some cognitive functions

To manipulate levels of physical activity, rats were divided into a group of runners who were allowed free access to running wheels for 12 days, and another group of sedentary controls. Prior studies have shown that running improves performance on tasks requiring the hippocampus, like learning and memory. Here, the runners and non-runners were subjected to three tests to determine how exercise affects cognitive functions that are not dependent on the hippocampus. An object-in-place task, which tests how well rats remember the location of previously encountered objects, relies on the medial prefrontal cortex, hippocampus and perirhinal cortex. A novel object task, in which rats distinguish novel from familiar objects, selectively depends on the perirhinal cortex. Lastly, a set-shifting task, supported by the orbitofrontal and medial prefrontal cortices, measures attention and cognitive flexibility.

Compared to their non-runner companions, the runners performed better on the object-in-place test and on several measures of the set-shifting task. However, there were no differences between runners and non-runners in performance on the novel object recognition test. Of course, the cognitive benefits of running don’t end here, since many cognitive domains were not assessed in this test battery. But these findings highlight a striking selectivity of the brain-boosting powers of exercise. In particular, they suggest that running may enhance functions that specifically depend on the medial prefrontal and orbitofrontal cortices, along with the hippocampus, but it does not appear to modulate perirhinal-dependent functions.
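The selectivity pattern described above can be sketched as a tiny lookup exercise: which regions are recruited only by tasks that benefited from running? Below is an illustrative Python summary of the mapping in this section; the task and region shorthand and the set logic are mine, not the authors’ analysis.

```python
# Hypothetical shorthand for the task-to-region mapping described above.
TASK_REGIONS = {
    "object_in_place": {"mPFC", "hippocampus", "perirhinal"},
    "novel_object": {"perirhinal"},
    "set_shifting": {"OFC", "mPFC"},
}

# Which tasks improved with running, per the behavioral results above.
RUNNING_IMPROVED = {
    "object_in_place": True,
    "novel_object": False,
    "set_shifting": True,
}

def candidate_regions(task_regions, improved):
    """Regions recruited by tasks that improved with running, minus any
    region required by a task that showed no running benefit."""
    helped = set().union(*(r for t, r in task_regions.items() if improved[t]))
    not_helped = set().union(*(r for t, r in task_regions.items() if not improved[t]))
    return helped - not_helped

print(sorted(candidate_regions(TASK_REGIONS, RUNNING_IMPROVED)))
# → ['OFC', 'hippocampus', 'mPFC']
```

Perirhinal cortex drops out because the one task that depends on it alone showed no running benefit, leaving the medial prefrontal cortex, orbitofrontal cortex and hippocampus as the candidate sites of change.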

Cognitive enhancement is linked to astrocytes

Although behavioral changes provide a window into the underlying neural events, they do not tell the complete mechanistic story. To directly examine how running affects the brain, the researchers assessed changes to both neuronal and non-neuronal brain cells. Running induced widespread neuronal changes, including higher levels of pre- and postsynaptic markers throughout the brain (including in the hippocampus and orbitofrontal, medial prefrontal and perirhinal cortices), and increased density and length of dendritic spines in the medial prefrontal cortex. While these effects suggest that exercise elicits generalized synaptic changes, they do not explain why particular cognitive functions are selectively boosted over others.

The researchers therefore looked for this crucial link to behavior in astrocytes. As Brockett explains, “We hypothesized that all cells likely change as a function of experience. We chose to focus on astrocytes because there is lots of evidence suggesting that astrocytes could be implicated in cognitive behavior. Loss of astrocytes correlate with impairment on a cognitive task and astrocytes connect the majority of neurons to blood vessels. They extend numerous processes that envelop nearby synapses, and gliotransmitters have been implicated directly in LTP-induction.”

Confirming their suspicions, in runners, astrocytes increased in size (Figure, A-B) and showed more contacts with blood vessels (Figure, C-D). But these changes only occurred in the hippocampus, medial prefrontal cortex and orbitofrontal cortex – critically, all regions that support the tasks showing running-related improvement. In contrast, running did not alter astrocytes in the perirhinal cortex, a region necessary for novel object recognition, which did not benefit from running. Thus, while running modified both neurons and astrocytes, the pattern of selective cognitive enhancements corresponded only with changes to astrocytes.

In the hippocampus, medial prefrontal cortex and orbitofrontal cortex, astrocytes were larger and made more contacts with blood vessels for rats who ran than those who did not. Brockett et al., 2015

Implications for the active human

Although the varied and widespread cognitive benefits of exercise have long been appreciated, this study provides some of the first insight into the remarkable selectivity of these enhancements. Follow-up studies will help elucidate why, from both biological and evolutionary perspectives, running would demonstrate such selectivity. Might, for example, attention or task-switching abilities have mattered more than object recognition for the efficiency of both animals and our persistence-hunting, distance-running ancestors? Does running more heavily recruit certain brain regions over others, making them more susceptible to remodeling?

Given the cognitive and neurobiological differences between rats and humans, future research will be important to help extrapolate beyond rodents. Currently it’s unclear how different forms of exercise enjoyed by humans – for instance, swimming, yoga or strength training – uniquely influence distinct cognitive functions. According to Brockett,

“There is a lot of evidence that running has numerous beneficial effects on rodent and human cognitive functioning, but it is likely that aerobic exercise in general is responsible for these effects rather than running per se.”

Perhaps most notably, these findings add to the growing pool of studies underscoring the importance of astrocytes in neural processes that support cognition, and reveal a novel role for these cells in experience-dependent plasticity. As Brockett explains:

“Astrocytes are a unique cell type that haven’t been explored as much as neurons by the field of Neuroscience at large. Few studies have directly examined the role of astrocytes in complex behavior, and this was our first attempt at investigating this question.”


Alaei H et al. (2007). Daily running promotes spatial learning and memory in rats. Pathophysiology. 14:105–8. doi:10.1016/j.pathophys.2007.07.001

Brockett AT, LaMarca EA, Gould E (2015) Physical Exercise Enhances Cognitive Flexibility as Well as Astrocytic and Synaptic Markers in the Medial Prefrontal Cortex. PLoS ONE 10(5): e0124859. doi:10.1371/journal.pone.0124859

Gibbs ME, O’Dowd BS, Hertz E, Hertz L (2006) Astrocytic energy metabolism consolidates memory in young chicks. Neuroscience 141(1): 9-13. doi:10.1016/j.neuroscience.2006.04.038

Henneberger C, Papouin T, Oliet SH, Rusakov DA (2010). Long-term potentiation depends on release of D-serine from astrocytes. Nature. 463:232-6. doi:10.1038/nature08673

Kramer AF, Erickson KI (2007). Capitalizing on cortical plasticity: influence of physical activity on cognition and brain function. Trends Cogn Sci. 11: 342–8. doi:10.1016/j.tics.2007.06.009

Marlatt MW, Potter MC, Lucassen PJ, van Praag H (2012). Running throughout middle-age improves memory function, hippocampal neurogenesis, and BDNF levels in female C57BL/6J mice. Dev Neurobiol. 72:943–52. doi:10.1002/dneu.22009

van Praag H, Kempermann G, Gage FH (1999). Running increases cell proliferation and neurogenesis in the adult mouse dentate gyrus. Nat Neurosci. 2:266–70. doi:10.1038/6368

The Mitochondrial Hypothesis: Is Alzheimer’s a Metabolic Disease?

Originally published on the PLOS Neuroscience Community

Despite decades of research devoted to understanding its origins, Alzheimer’s disease remains a daunting and devastating neurological mystery, ranking as the sixth leading killer of Americans. Countless therapeutic attempts, each designed with fresh anticipation, have repeatedly failed. A common thread across many of these drugs is that they target the defining marker of the disease, amyloid plaques – those nasty extracellular deposits of beta-amyloid protein that invariably present in the Alzheimer’s brain and are thought to be toxic to neurons. Given the frustrating loss of research money, time and effort, many scientists agree it’s time to stop running circles around the amyloid hypothesis and begin seriously considering alternative explanations. One such theory showing increasing promise is the “mitochondrial hypothesis”. Its proponents posit that mitochondrial dysfunction, driven by metabolic abnormalities, lies at the heart of neural degeneration and ultimately leads to classic Alzheimer’s pathology.

Steps by which mitochondrial dysfunction may lead to Alzheimer’s. Based on the model outlined in Swerdlow et al. (2010).

Thank mom for your genetic risk

The first hints at this possibility arose from epidemiological observations about the genetic patterns of Alzheimer’s prevalence, which suggest that genetic influences include more nuanced interactions than the better-known contributions from genes such as ApoE and TOMM40. Although both parents determine genetic risk, your likelihood of getting Alzheimer’s is much higher if the affected parent was your mother, arguing strongly that some maternal element underlies the association. Mitochondrial DNA is a logical suspect, as this subset of DNA is passed down solely from the mother. Many features of Alzheimer’s show this same maternal-dominant inheritance; those whose mother (but not father) had the disease also show reduced glucose metabolism and cognitive function, as well as elevated PIB uptake (a marker of amyloid) and brain atrophy.

Are metabolic enzymes the pathological trigger?

So if mitochondrial dysfunction initiates the Alzheimer’s cascade, what are the steps leading from metabolic disruption to neurodegeneration and, ultimately, dementia? Studies point to cytochrome oxidase – a key enzyme for mitochondrial metabolism that’s encoded by both mitochondrial and nuclear DNA – as a likely trigger for early pathological events. Evidence suggests that the enzyme is dysfunctional in the earliest disease stages; its activity is reduced not just in those with Alzheimer’s, but even in asymptomatic individuals who are at genetic risk for the disease or had a mother with Alzheimer’s. Furthermore, this stunted activity is linked directly to mitochondrial (or maternal) genetic contributions: simply replacing the mitochondrial portion of a normal cell’s cytochrome oxidase DNA with DNA from Alzheimer’s patients is enough to reduce the cell’s cytochrome oxidase activity.
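The cybrid logic here can be caricatured in a few lines of Python: cytochrome oxidase is assembled from subunits encoded by both genomes, so swapping only the mitochondrial contribution can stunt the enzyme. This is a toy sketch with made-up activity values – the function name and the 0.5 penalty are mine, not measurements from any study.

```python
def cytochrome_oxidase_activity(nuclear_ok: bool, mito_ok: bool) -> float:
    """Relative activity of an enzyme encoded by both genomes.
    The 0.5 penalty per damaged contribution is purely illustrative."""
    activity = 1.0
    if not nuclear_ok:
        activity *= 0.5
    if not mito_ok:
        activity *= 0.5  # e.g. mtDNA portion swapped in from an Alzheimer's patient
    return activity

# An otherwise normal cell (intact nuclear DNA) given patient-derived mtDNA
# ends up with reduced cytochrome oxidase activity:
print(cytochrome_oxidase_activity(nuclear_ok=True, mito_ok=False))  # → 0.5
print(cytochrome_oxidase_activity(nuclear_ok=True, mito_ok=True))   # → 1.0
```

The point of the caricature is that the nuclear genome can be held fixed while the mitochondrial contribution alone changes the outcome – which is exactly the inference the cybrid experiments support.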

Bridging metabolism to Alzheimer’s pathology

For the mitochondrial theory to hold water, it must critically account for the classic pathological markers that define Alzheimer’s and have shaped traditional disease models – namely, amyloid plaques, tau tangles and brain atrophy. Indeed, growing evidence is elegantly bridging altered mitochondrial function to these key markers. For instance, disrupting mitochondrial electron transport chain activity (if you’ve forgotten your basic biochemistry, this is essential to cell metabolism) increases phosphorylated tau. What’s more, inhibiting cytochrome oxidase promotes a host of neurotoxic downstream effects including increased oxidative stress, apoptosis and amyloid production. Conversely, there’s also evidence that amyloid disrupts electron transport chain and cytochrome oxidase function, posing a chicken-or-egg conundrum. Amyloid has been found to buddy-up to mitochondria, but which comes first, the amyloid or the mitochondrial dysfunction, isn’t entirely clear. Both events occur early in the disease process, even before individuals show any symptoms of cognitive impairment. Whatever the mechanism, neurons from Alzheimer’s patients show signs of increased mitochondrial degradation. And when a neuron’s “powerhouse” begins to degrade, it cannot possibly support normal cognitive function.

A promising path for progress

It remains to be seen whether metabolic dysfunction is the key to unlocking the mechanisms of Alzheimer’s, and to ultimately developing effective therapeutics. While the current evidence is quite promising, many of the issues underlying the failure of other theories (poor translation of animal findings to humans, the challenge of identifying causal mechanistic pathways, etc.) apply to the mitochondrial hypothesis as well. But at the very least, the proposal lays new ground for neuroscientists to continue progressing after a recent history of frustrating dead-ends. Even if mitochondria don’t hold the answer researchers have been seeking, understanding their contributions to Alzheimer’s pathology can only bring us closer to solving the mystery of this devastating disease.


Crouch PJ et al. (2005). Copper-dependent inhibition of human cytochrome c oxidase by a dimeric conformer of amyloid-beta1-42. J Neurosci. 25(3):672-9. doi: 10.1523/JNEUROSCI.4276-04.2005

Debette S et al. (2009). Association of parental dementia with cognitive and brain MRI measures in middle-aged adults. Neurology. 73(24):2071-8. doi: 10.1212/WNL.0b013e3181c67833

Edland SD et al. (1996). Increased risk of dementia in mothers of Alzheimer’s disease cases: evidence for maternal inheritance. Neurology. 47:254–6. doi: 10.1212/WNL.47.1.254

Hirai K et al. (2001). Mitochondrial abnormalities in Alzheimer’s disease. J Neurosci. 21(9):3017-23

Höglinger GU et al. (2005). The mitochondrial complex I inhibitor rotenone triggers a cerebral tauopathy. J Neurochem. 95(4):930-9. doi: 10.1111/j.1471-4159.2005.03493

Honea RA et al. (2010). Reduced gray matter volume in normal adults with a maternal family history of Alzheimer disease. Neurology. 74(2):113-20. doi: 10.1212/WNL.0b013e3181c918cb

Kish SJ et al. (1992). Brain cytochrome oxidase in Alzheimer’s disease. J Neurochem. 59(2):776-9. doi: 10.1111/j.1471-4159.1992.tb09439

Mosconi L et al (2007). Maternal family history of Alzheimer’s disease predisposes to reduced brain glucose metabolism. Proc Natl Acad Sci. 104(48):19067-72. doi: 10.1073/pnas.0705036104

Mosconi L et al. (2010). Increased fibrillar amyloid-{beta} burden in normal individuals with a family history of late-onset Alzheimer’s. Proc Natl Acad Sci. 107(13):5949-54. doi: 10.1073/pnas.0914141107

Mosconi L et al. (2011). Reduced Mitochondria Cytochrome Oxidase Activity in Adult Children of Mothers with Alzheimer’s Disease. J Alzheimers Dis. 27(3): 483–490. doi: 10.3233/JAD-2011-110866

Roses AD et al (2010). A TOMM40 variable-length polymorphism predicts the age of late-onset Alzheimer’s disease. Pharmacogenomics J. 10(5): 375–84. doi: 10.1038/tpj.2009.69

Swerdlow RH et al. (1997). Cybrids in Alzheimer’s disease: a cellular model of the disease? Neurology. 49(4):918-25. doi: 10.1212/WNL.49.4.918

Swerdlow RH, Burns JM and Khan SM (2010). The Alzheimer’s Disease Mitochondrial Cascade Hypothesis. J Alzheimers Dis. 20 Suppl 2:S265-79. doi: 10.3233/JAD-2010-100339

Swerdlow RH. (2012). Mitochondria and cell bioenergetics: increasingly recognized components and a possible etiologic cause of Alzheimer’s disease. Antioxid Redox Signal. 16(12):1434-55. doi: 10.1089/ars.2011.4149

Valla J et al. (2010). Reduced posterior cingulate mitochondrial activity in expired young adult carriers of the APOE ε4 allele, the major late-onset Alzheimer’s susceptibility gene. J Alzheimers Dis. 22(1):307-13. doi: 10.3233/JAD-2010-100129
