Bilingualism for a Healthy Brain

A post originally published at Nautilus discusses emerging research showing that bilingualism may prevent cognitive decline in old age and stave off dementia. Read the full post here.

Parlez-vous français? If you answered yes, then you’re well on your way to enjoying the many benefits of bilingualism. Speaking both English and French, for example, can enrich your cultural experiences in multilingual destinations like Belgium, Morocco, or Egypt, and broaden your access to books, music, and films.

But the benefits of speaking another language aren’t limited to just cultural perks. “Studies have shown that bilingual individuals consistently outperform their monolingual counterparts on tasks involving executive control,” says Ellen Bialystok, a cognitive psychologist at York University. In other words, speaking more than one language can improve your ability to pay attention, plan, solve problems, or switch between tasks (like making sure you don’t miss your freeway exit while attending to your kids in the back seat). You may think it’s just higher intelligence that underlies these benefits, but evidence suggests otherwise. A 2014 study, for example, showed that those who learned a second language, in youth or adulthood, had better executive functions than those who didn’t, even after accounting for childhood IQ …

Read more here.


Brain stimulation holds promise for anorexia

Originally published on the PLOS Neuroscience Community

Anorexia nervosa is a devastating, often fatal disease in which voluntary food restriction severely compromises physical and mental health. Because current treatments, such as psychotherapy, are only minimally effective, many patients fight a lifelong battle and most never fully recover. However, recent research into the neural underpinnings of anorexia provides hope that relief may be possible by regulating aberrant neural circuitry. A new study published in PLOS ONE by Jessica McClelland and colleagues from King’s College London offers preliminary support for repetitive transcranial magnetic stimulation (rTMS) as a viable therapy for anorexia.

Hitting reset to dysfunctional brain circuits

Anorexia is characterized by maladaptive cognition and behavior – abnormal cognitive flexibility, emotional regulation, and habit learning – which has been linked to dysfunction in frontal, limbic and striatal brain networks. The prefrontal cortex in particular is thought to be a critical hub in this aberrant network, as its hypoactivity may lead to the poor impulse control underlying many symptoms of eating disorders. Restoring function to the prefrontal cortex therefore holds promise for resetting dysfunctional neural circuits and ultimately ameliorating disease symptoms. Neural function can be modulated noninvasively with brain stimulation techniques such as rTMS, which, by applying repeated magnetic pulses over the scalp, induces an electric current in nearby neurons and alters cortical excitability. rTMS to the prefrontal cortex is FDA-approved to treat depression, has shown efficacy for other psychiatric disorders including addiction and schizophrenia, and has shown early but inconsistent promise for alleviating symptoms of eating disorders.

To determine whether rTMS may be similarly beneficial for normalizing brain function in anorexia, the researchers tested 49 women with either the restrictive or the binge/purge subtype of anorexia. Twenty-one of the women underwent real rTMS to the left dorsolateral prefrontal cortex, while 28 underwent sham rTMS. Before and after treatment, both groups completed a food challenge test to assess their response to enticing foods and gauge their symptoms. They also performed a temporal discounting task, which measures impulsivity and may therefore be sensitive to the altered inhibitory control that occurs in eating disorders.
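Temporal discounting tasks like this are commonly modeled with a hyperbolic function, under which a delayed reward’s subjective value falls as V = A / (1 + kD), with the fitted parameter k serving as an impulsivity index (steeper discounting means more impulsive choices). A minimal sketch of that standard model – the parameter values are illustrative, not the study’s actual task settings:

```python
# Hyperbolic discounting: the subjective value of amount A after delay D is
# V = A / (1 + k * D), where k indexes impulsivity. Values are illustrative.

def discounted_value(amount, delay, k):
    """Hyperbolic subjective value of `amount` received after `delay` (days)."""
    return amount / (1 + k * delay)

# A steep discounter (high k, more impulsive) values $100 in 30 days at less
# than $50 today, so would take a smaller immediate reward instead...
assert discounted_value(100, 30, k=0.2) < 50
# ...while a shallow discounter (low k) prefers to wait for the larger reward.
assert discounted_value(100, 30, k=0.01) > 50
```

Comparing choices between smaller-sooner and larger-later rewards lets researchers estimate k per participant, so a treatment-related drop in impulsivity shows up as a lower fitted k.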

A hint of therapeutic promise

The researchers were mainly interested in whether rTMS could attenuate “core” symptoms of anorexia, such as urges to restrict eating, or feelings of fullness and fatness. Core symptoms were reduced after both real and sham treatment, suggesting at least some degree of placebo effect. However, there was a trend toward stronger symptom attenuation at multiple time-points – up to a day after treatment – for those who received real rather than sham rTMS. A closer look revealed a trend toward lower reports of feeling fat after real rTMS compared to sham. Real treatment was also associated with less impulsivity (as assessed with the temporal discounting task), an effect that was significant only for the restrictive subtype of anorexia.


Core anorexia symptoms were lower at all time-points after real rTMS than sham. (McClelland et al., 2016)

The path to the clinic

Based on these findings, should practitioners begin prescribing brain stimulation to eating disorder patients? Well, not quite yet. While the study’s results are encouraging, their effects were not overwhelmingly strong, and the authors acknowledge their study was possibly underpowered. Before rTMS translates to the clinic, its therapeutic potential needs to be further evaluated and the target patient population and optimal protocol should be better characterized. Even if these results hold up, it’s still not clear whether the benefits of rTMS are specific to anorexia, perhaps by resetting underactive neural function to normal levels, or whether rTMS similarly modulates healthy brain activity and behaviors. For example, would altered impulsivity and emotional responses to food also occur in healthy individuals after rTMS? McClelland admits that “Including healthy controls in this study would have been useful as a comparison group, to see if the rTMS in people with anorexia encourages more ‘normal’ responses,” and her group plans to include a healthy comparison group in future studies.

Furthermore, other brain stimulation tools may be equally or more effective, so comparative studies are needed to determine the ideal therapeutic technique. McClelland explains that

“We selected rTMS because there is more literature on its use and efficacy in other neuro-circuit based psychiatric disorders. Transcranial direct current stimulation (tDCS) is a slightly newer technique and hasn’t yet been investigated in eating disorders as widely as rTMS. This doesn’t mean that tDCS may not be as, if not more, effective than rTMS.”

And because the neural mechanisms of eating disorder subtypes presumably differ, the efficacy of rTMS may depend on the patient’s particular set of disease symptoms and illness duration. Targeting therapies at the individual patient level could be an ideal treatment approach, but will require further research to determine what stimulation protocols are optimal for particular symptom profiles.

According to McClelland, “Our single-session study was experimental and therefore only looked at the short-term, transient effects of rTMS in people with anorexia. We wouldn’t expect a single-session of rTMS to have long lasting therapeutic effects.” Therefore, in a critical step along the path to the clinic, McClelland and colleagues are currently doing a randomized clinical trial of 20 sessions of either real or placebo rTMS in individuals with anorexia.

Although preliminary, this study offers early hope for the millions of people currently suffering from anorexia. Effective relief from an otherwise debilitating, unrelenting disease may be just around the corner.


Bartholdy S et al. (2015). Clinical outcomes and neural correlates of 20 sessions of repetitive transcranial magnetic stimulation in severe and enduring anorexia nervosa (the TIARA study): study protocol for a randomised controlled feasibility trial. Trials. 16:548. doi:10.1186/s13063-015-1069-3

Grall-Bronnec M, Sauvaget A (2014). The use of repetitive transcranial magnetic stimulation for modulating craving and addictive behaviours: a critical literature review of efficacy, technical and methodological considerations. Neurosci Biobehav Rev. 47:592-613. doi:10.1016/j.neubiorev.2014.10.013

Lam RW, Chan P, Wilkins-Ho M, Yatham LN (2008). Repetitive transcranial magnetic stimulation for treatment-resistant depression: a systematic review and meta-analysis. Can J Psychiatry. 53(9):621-631.

McClelland J, Bozhilova N, Campbell I, Schmidt U (2013). A systematic review of the effects of neuromodulation on eating and body weight: evidence from human and animal studies. Eur Eat Disord Rev. 21(6):436-455. doi:10.1002/erv.2256

McClelland J et al. (2016). A Randomised Controlled Trial of Neuronavigated Repetitive Transcranial Magnetic Stimulation (rTMS) in Anorexia Nervosa. PLOS ONE. 11(3): e0148606. doi:10.1371/journal.pone.0148606

Oberndorfer TA, Kaye WH, Simmons AN, Strigo IA, Matthews SC (2011). Demand-specific alteration of medial prefrontal cortex response during an inhibition task in recovered anorexic women. Int J Eat Disord. 44(1):1-8. doi:10.1002/eat.20750

Shi C, Yu X, Cheung EFC, Shum DHK, Chan RCK (2014). Revisiting the therapeutic effect of rTMS on negative symptoms in schizophrenia: A meta-analysis. Psychiatry Res. 215(3):505-513. doi:10.1016/j.psychres.2013.12.019

Steinhausen HC (2002). The outcome of anorexia nervosa in the 20th century. Am J Psychiatry. 159(8):1284-1293. doi:10.1176/appi.ajp.159.8.1284

Zhu Y et al. (2012). Processing of food, body and emotional stimuli in anorexia nervosa: a systematic review and meta-analysis of functional magnetic resonance imaging studies. Eur Eat Disord Rev. 20(6):439-450. doi:10.1002/erv.2197


Runner down, in body and soul

Since abandoning my running shoes in 2013, I’ve been blessed with the resolution of most of the debilitating overuse running injuries that plagued me as a shod runner (think torn tendons and fractured feet). However, life has a way of keeping the scorecard even, throwing a host of almost comically improbable accidents and injuries my way. It’s just three months into the year, and I’ve already battled a spontaneous low back strain and a sprained (possibly broken?) fourth toe (neither related to running). My patience and devotion to smart recovery have yet again paid off, as I’ve recently returned to a mostly-normal running routine. But consistent with its “Let Emilie never remain uninjured” rule, life has once more reminded me who’s boss and kicked me back down, literally … with a car door.

While dodging through the crowds of tourists on yesterday’s run through San Francisco’s Fisherman’s Wharf, I felt as though I were racing through a virtual world, navigating the hazards of a video game. I was feeling strong and anticipating a powerful finish to the day’s 11-miler when my gut suddenly exploded in stabbing pain and my vision momentarily went dark. Runner. Down. A child in a car stopped beside the sidewalk I was running along had carelessly swung open his door at the very moment I passed. The door slammed into my lower abdomen with epic force.

My brain went blank. I could do nothing besides grip my belly, utter some profanities and wonder how much time remained before I died of internal bleeding. The sound of the crowds faded, but their stares and horrified dropped jaws further seared my aching body. As the boy’s eyes met mine, his wide in shock and fear, a voice from the car cried out, “It’s not the kid’s fault!” Without apology or concern, the car drove off.

Relieved to discover I was still alive, I moved my hand from my abdomen to find blood seeping through my shorts. I stumbled behind some dumpsters, the only remotely private corner I could find amidst the crowds of tourists, to inspect further. The collision had left a deep, two-inch-long gash along my pubic bone (I’ll spare you the graphic image) and some very bruised abs. Borderline delirious but fueled by adrenaline, I somehow finished – in fact, nearly sprinted, as I fought back tears – the final three miles of the run. As I reflected on the incident on the run home, more painful than the gash in my belly was the disturbing realization that not a single person had offered help or asked if I was okay.

Over my decades of running I’ve been repeatedly impressed by the uniquely compassionate and supportive spirit of the running community. Perhaps the unifying thread is that mutual understanding of the common mental and physical obstacles we face each time we set out for a run. The struggles of another are no different from our own. Even in the fiercest competitions, this spirit shines through, with runners often sacrificing their own comfort and chance for victory to help another through fatigue, discouragement or pain. It was against the backdrop of this exceptionally caring community that the crowd’s response – or lack thereof – to my situation seemed so strikingly ugly. To the rubberneckers who witnessed the accident, I was not a fellow human who needed help, but the barefoot stranger who got doored in a running hit-and-run. I was simply someone’s amusing anecdote from their crazy San Francisco vacation.

I’m committed to tackling this hurdle with the same determination I’ve brought to each past injury. The physical and emotional trauma of yesterday’s accident took this runner down, body and soul. But my love for running is too strong to let a bump in the road keep me off that road. The incident admittedly shook me and undermined my faith in humanity. Yet it’s also been an invaluable reminder of what sets runners apart from the masses. Now more than ever, I’m deeply grateful to be part of such an extraordinary breed whose compassion for their fellow humans runs true and deep.


The supremely intelligent rat-cyborg

Originally published on the PLOS Neuroscience Community

When Deep Blue battled the reigning human chess champion, the world held its breath. Who was smarter … man or machine? A human victory would confirm the superiority of human intelligence, while a victory for Deep Blue would showcase the potential of artificial intelligence to benefit mankind. And with the defeat of Garry Kasparov by an algorithm, the debate heated up over what constitutes intelligence and whether computers can possess it. But perhaps the answer to the man-versus-machine debate isn’t so black and white. Perhaps both synthetic and biological systems have unique, complementary strengths that, when merged, could yield an optimally functioning “brain” – a supremely intelligent cyborg, if you will. In their new PLOS ONE paper, Yipeng Yu and colleagues tested this possibility, comparing the problem-solving abilities of rats, computers and rat-computer “cyborgs.”

Maze-solving rats, computers and cyborgs

Six rats were trained over the course of a week to run a series of unique mazes. The rats were implanted with microelectrodes in their somatosensory cortex and medial forebrain bundle, which releases dopamine to the nucleus accumbens and is a key node of the brain’s reward system. They were enticed to reach the maze target by the fragrance of peanut butter, a sip of water (they were mildly dehydrated) and stimulation of the medial forebrain bundle once they solved the puzzle. After training, the researchers tested the rats on 14 new mazes, monitoring their paths, strategies and time spent solving the mazes.

To compare the performance of the rats to that of a computer, the research team developed a maze-solving algorithm implementing left-hand and right-hand wall-following rules. This algorithm completed the same 14 mazes run by the rats.
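The wall-following rule is simple enough to sketch in a few lines. Below is an illustrative reimplementation of a left-hand wall follower on a textual grid maze – my own assumption about the general approach, not the authors’ actual code:

```python
# A minimal left-hand wall follower, similar in spirit to the algorithm the
# rats were compared against. The maze format and helpers are illustrative.

def solve_maze(grid, start, goal):
    """Follow the left-hand wall until the goal cell is reached.

    grid: list of strings, '#' = wall, ' ' = open corridor.
    Returns the sequence of visited cells (repeats included).
    """
    # Directions ordered clockwise: up, right, down, left (row, col offsets),
    # so "turn left" is heading - 1 and "turn right" is heading + 1 (mod 4).
    dirs = [(-1, 0), (0, 1), (1, 0), (0, -1)]
    r, c = start
    heading = 1  # start facing right
    path = [(r, c)]
    while (r, c) != goal:
        # Prefer turning left, then straight, then right, then doubling back.
        for turn in (-1, 0, 1, 2):
            d = (heading + turn) % 4
            nr, nc = r + dirs[d][0], c + dirs[d][1]
            if grid[nr][nc] != '#':
                heading, r, c = d, nr, nc
                path.append((r, c))
                break
    return path

maze = [
    "#####",
    "#   #",
    "# # #",
    "#   #",
    "#####",
]
route = solve_maze(maze, start=(1, 1), goal=(3, 3))
print(len(route) - 1)  # 4 moves for this layout
```

Wall following reliably solves simply connected mazes, but in mazes with loops it can retrace the same corridors indefinitely – one reason an agent following it alone isn’t always the most efficient solver.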

Rat cyborgs integrated the computational powers of organic and artificial intelligence systems. The rats completed the same set of mazes, but this time with the assistance of the computer algorithm, which intervened when they needed help: by stimulating the left or right somatosensory cortex to prompt a left or right turn, it directed the rats along unexplored paths and away from dead ends and loops.

Intelligent cyborgs

Performance of the rats, the computer and the rat cyborgs was compared on three measures: the number of steps taken, the number of locations visited, and the total time spent reaching the target. Although the cyborgs and the computer took roughly the same number of steps, the cyborgs took fewer than the rats, a sign of more efficient problem solving. Furthermore, the cyborgs visited fewer locations than the computer or the rats (see Figure), and took less time than the rats to solve the mazes. Across the various maze layouts, the number of steps and locations covered were strongly correlated between each pair of agents (rats and cyborgs, rats and computer, cyborgs and computer). Thus, a maze that was challenging for a rat was similarly challenging for the computer and for the rat’s cyborg counterpart.

Rat cyborgs covered fewer maze locations than rats or computers. (Yu et al., 2016)


The ethics of a human cyborg

These findings from Yu and colleagues suggest that optimal intelligence may not reside exclusively in man or machine, but in the integration of the two. By harnessing the speed and logic of artificial computing systems, we may be able to augment the already remarkable cognitive abilities of biological neural systems, including the human brain. The prospect of computer-assisted human intelligence raises obvious concerns over the safety and ethics of their application. Are there conditions under which a human “cyborg” could put humans at risk? Is altering human behavior with a machine tantamount to “playing god” and a dangerous overreach of our powers?

Despite these concerns, such computer-assisted intelligent systems are already available and in surprisingly widespread use. Brain-computer interfaces have been in use for decades, helping to restore vision, movement and communication in impaired individuals. And although your brain may not be directly wired to a computer, you likely function as a human cyborg on a daily basis. Many of us rely on a GPS to augment our navigation abilities while driving, on a word processor’s spell-checker to enhance our writing, or on a digital planner to organize a busy schedule. Few would argue that these daily uses of computer assistance are unethical. But where does one draw the line between harmless lifestyle enhancement and dangerous mind control? Yu and colleagues’ findings suggest that, at least for now, we need not fear a takeover by super-smart robots; perhaps instead it’s time to embrace the computing abilities of machines as complementary – and beneficial – to our own natural powers of intelligence.


Dobelle WH (2000). Artificial vision for the blind by connecting a television camera to the visual cortex. ASAIO J. 46(1):3–9

Grau CJ et al. (2014). Conscious Brain-to-Brain Communication in Humans Using Non-Invasive Technologies. PLOS ONE. 9(8): e105225. doi:10.1371/journal.pone.0105225

Hochberg LR et al. (2012). Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature. 485(7398):372–375. doi:10.1038/nature11076

Yu Y et al. (2016). Intelligence-Augmented Rat Cyborgs in Maze Solving. PLOS ONE. 11(2): e0147754. doi:10.1371/journal.pone.0147754


A cocktail party in a dish: How neurons filter the chatter

Originally published on the PLOS Neuroscience Community

While dining with a friend at a noisy restaurant, you listen attentively to her entertaining account of last night’s date. Despite the cacophony flooding your auditory system, your brain remarkably filters your friend’s voice from the irrelevant conversations at neighboring tables. This “cocktail party effect,” the ability to attend to select input amidst a distracting background, has fascinated researchers since its characterization in the 1950s. Although psychological and sensory models have offered insight into why human brains are so exquisitely equipped to perform this selective attention, researchers haven’t yet pinned down how neurons process mixed information to respond to the important and suppress the irrelevant. In their new paper published in PLOS Computational Biology, researchers from the University of Tokyo revealed that individual neurons learn to “tune in” to one input while ignoring others, offering an intriguing explanation for how rapid neural plasticity may give rise to the cocktail party effect.

Sending neurons mixed messages

Based on many earlier studies showing that neural networks can learn by changing their activity based on experience, the authors wondered whether neurons could also be trained to distinguish among sensory experiences. To test this idea, they recorded electrical activity from cultured rat cortical neurons. They electrically stimulated the neurons according to two stimulation patterns, to provide two unique hidden sources of input, simulating the cocktail party effect of hearing a mixture of voices. In some conditions both input patterns were activated, while in others one, the other, or neither input pattern was activated. They repeated variations of these stimulations for 100 trials in many samples to track how the neural responses changed over exposure to the stimuli.

Distinct patterns of neural activation simulate the cocktail party effect of hearing multiple speakers. (Isomura et al., 2015)


Learning to discriminate

Over the course of training, neurons altered their likelihood of spiking to the input patterns. Roughly half of the neurons increased their response to one source and reduced their response to the other, while the other half increased responsiveness to the other source. A discrimination index used to measure preference for one input over the other showed that this bias increased across all electrodes over the training period. Even neurons exposed to the stimuli only briefly – trained on only a fraction of the trials – still demonstrated a response preference up to a day later, suggesting that neural learning occurred rapidly and was long-lasting. Although first author Takuya Isomura speculates that “this could last several days,” it’s not permanent. “We have confirmed that training with another stimulation pattern could overwrite the neural preference to the past source. That is, even cultures that have learned a pattern set could learn another one.”

Neurons increased their discrimination (DKLi) over the course of training when fully trained (red) and partially trained (white) but not when NMDA receptors were blocked (black). (Isomura et al., 2015)


But how, since biological systems can learn in various ways, did these cells so efficiently acquire and maintain this source bias? Treating the cultures with an NMDA receptor antagonist largely prevented the neurons from developing an input preference, suggesting that learning occurred through NMDA receptor-dependent signaling, which is known to be important for the long-term synaptic plasticity supporting memory formation. Furthermore, neurons only demonstrated discriminability if there was variance in the balance and frequency of the input patterns. This requirement for variance hinted that the neurons may follow independent component analysis (ICA)-like learning rules.

To better understand these learning dynamics, Isomura’s group examined changes at the neuronal population level. A simple Hebbian learning model predicted that connectivity should increase both within and across neuron groups. Instead, synaptic connectivity increased between neuron groups with the same source preference, but decreased between neuron groups with different source preferences. A modified model of Hebbian learning (including state-dependent plasticity) better accounted for these observations, as it allowed for competition between neurons. As Isomura explains, “state-dependent Hebbian plasticity could facilitate the neural response to the source that effectively stimulates the nearest electrode, while it could depress that to the other source. In the future, using the connection strength estimation, we might be able to predict the neural preference before the stimulations.”
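The difference between the two models can be sketched with toy weight-update rules. The updates below are illustrative simplifications I am assuming – normalization standing in for the paper’s state-dependent competition – not the authors’ actual equations:

```python
import numpy as np

# A minimal sketch contrasting a plain Hebbian update with a competitive,
# normalization-based variant, in the spirit of the models the authors
# compared. Rules, learning rate, and inputs are illustrative assumptions.

lr = 0.1  # learning rate

def hebbian_step(w, pre, post):
    # Plain Hebb: co-active pre- and postsynaptic units strengthen their link.
    return w + lr * np.outer(post, pre)

def competitive_step(w, pre, post):
    # Competitive variant: each unit's weight vector is renormalized after
    # the update, so strengthening one input implicitly weakens the others.
    w = w + lr * np.outer(post, pre)
    return w / np.linalg.norm(w, axis=1, keepdims=True)

w_plain = np.full((1, 2), 0.5)    # one neuron, two candidate sources
w_comp = np.full((1, 2), 0.5)
source_a = np.array([1.0, 0.0])   # the neuron repeatedly fires with source A
post = np.array([1.0])
for _ in range(50):
    w_plain = hebbian_step(w_plain, source_a, post)
    w_comp = competitive_step(w_comp, source_a, post)

# Plain Hebb only strengthens: the weight to unpaired source B stays at 0.5.
# Competition both strengthens A and weakens B, mirroring the observed
# decrease in connectivity between groups with different source preferences.
```

Under the plain rule nothing ever weakens, so it cannot reproduce the drop in connectivity between differently tuned neuron groups; the competitive rule does.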

As the neural networks changed, their internal and free energy decreased, whereas entropy increased. These energy changes did not occur with NMDA receptor blockade, suggesting that they are indeed attributable to learning-related synaptic plasticity. As connections strengthen between a neuronal group and its preferred source, the authors explain, mutual information increases between the neural system’s inputs and outputs, lowering its overall free energy.

How does a discriminating neuron make a discriminating brain?

Although it’s well established that neural activity changes with experience, Isomura and colleagues have shown for the first time that neurons can invoke these learning mechanisms to recognize and discriminate information. Neural networks accomplished this impressive feat by performing unsupervised learning – adhering to ICA and free-energy principles – to self-organize via activity-dependent plasticity.

So how might these findings help you stay engrossed in your friend’s tale of first date mishaps, amidst distraction? There are obvious differences between an integrated brain, which can direct its attention at will to a sound it deems meaningful and important, and a neuronal culture, which (presumably) lacks this power of guided attention. However, in both cases, a brain or neuron must decorrelate a mishmash of inputs. Although speculative, the authors propose that attentional shifts towards important information can only occur if the brain can distinguish sensory input, beginning at the level of discrimination by individual neurons. Further research will help to explain how feedback between attentional and sensory systems orchestrates this elegant goal-directed sensory filtering. Despite the sense that “tuning in” to a friend’s voice is automatic and effortless, studies have shown that this is a learned skill acquired early during life. Like other forms of learning, developing this ability likely relies on the plasticity of neurons adapting and responding to their experiences.

To Isomura, it’s “a fascinating mystery why people can learn faster than machine learning that typically needs huge training. Interestingly, some learning properties (e.g., speed) of culture networks are more similar to machine learning rather than human behavior, while they consist of living cells. Thus, a series of this kind of studies might have a potential to fill the gap.”


Bronkhorst AW (2015). The cocktail-party problem revisited: early processing and selection of multi-talker speech. Atten Percept Psychophys. 77(5):1465–1487. doi: 10.3758/s13414-015-0882-9

Cherry EC (1953). Some Experiments on the Recognition of Speech, with One and with Two Ears. J Acoust Soc Am. 25:975–979.

Hohwy J (2014). The neural organ explains the mind. Open MIND. Frankfurt am Main: MIND Group

Isomura T, Kotani K, Jimbo Y (2015). Cultured Cortical Neurons Can Perform Blind Source Separation According to the Free-Energy Principle. PLOS Comput Biol. doi:10.1371/journal.pcbi.1004643

Jimbo Y, Tateno T, Robinson HPC (1999). Simultaneous Induction of Pathway-Specific Potentiation and Depression in Networks of Cortical Neurons. Biophys J. 76(2):670–678. doi: 10.1016/S0006-3495(99)77234-6

Plude DJ, Enns JT, Brodeur D (1994). The development of selective attention: a life-span overview. Acta Psychol. 86(2-3):227–272

Tsien JZ, Huerta PT, Tonegawa S (1996). The essential role of hippocampal CA1 NMDA receptor-dependent synaptic plasticity in spatial memory. Cell. 87(7):1327–1338. doi: 10.1016/S0092-8674(00)81827-9



How reliable is resting state fMRI?

Originally published on the PLOS Neuroscience Community

Arguably, no advance has revolutionized neuroscience as much as the invention of functional magnetic resonance imaging (fMRI). Since its appearance in the early 1990s, its popularity has surged; a PubMed search returns nearly 30,000 publications with the term “fMRI” since its first mention in 1993, including 4,404 last year alone. Still today, fMRI stands as one of the best available methods to noninvasively image activity in the living brain with exceptional spatiotemporal resolution. But the quality of any research tool depends foremost on its ability to produce results in a predictable and reasonable way. Despite its widespread use and general acceptance of its efficacy and power, neuroscientists have had to interpret fMRI results with a large dose of partially blind faith, given our incomplete grasp of its physiological origins and reliability. In a monumental step toward validating fMRI, Ann Choe and colleagues, in their new PLOS ONE study, evaluated the reproducibility of resting-state fMRI in weekly scans of the same individual over the course of 3.5 years.

One devoted brain

Although previous studies have reported high reproducibility of fMRI outcomes within individuals, they compared only a few sessions over brief periods of weeks to months. Dr. Choe and her team instead set out to thoroughly characterize resting-state brain activity at an unprecedented time scale. To track patterns of the fMRI signal, one dedicated 40-year-old male offered his brain for regular resting-state fMRI sessions. Over the course of 185 weeks, he participated in 158 scans, occurring at roughly the same day of the week and time of day. For comparison – just in case this particular individual’s brain was not representative of the general population – a group of 20 other participants (22-61 years old) from a prior study served as a reference.

Reproducibility of brain networks and BOLD fluctuations

The researchers identified 14 unique resting-state brain networks. Networks derived from the subject’s individual scans were spatially quite similar to those identified from that subject’s average network map and from the multi-subject average map, and these network similarity measures were highly reproducible. Whereas executive function networks were the most reproducible, visual and sensorimotor networks were the least. The relatively low reproducibility of “externally directed” networks could be attributable to the nature of the unrestrained scanning conditions, in which mind-wandering or undirected thoughts could engage an array of sensory experiences. Dr. Choe suspects “that under truly controlled conditions, exteroceptive networks would become more reproducible. Differences in reproducibility in exteroceptive versus interoceptive networks should be seen as an observation that requires follow up study.”

Figure 1. Spatial similarity of weekly fMRI sessions for sensorimotor, visual and executive networks. (Choe et al., 2015)


The basic signal underlying fMRI is the blood oxygen level dependent (BOLD) response, a measure of changes in blood flow and oxygenation thought to reflect vascular and metabolic responses to neural activity. The magnitudes of BOLD fluctuations were similar both across the single subject’s scans and across the group’s scans, although these fluctuations were generally more reliable within-subject. Similar to the spatial overlap between networks, BOLD signal in executive networks was most reproducible, while that in default mode and sensorimotor networks was least reproducible across the subject’s sessions.

Between-network connectivity

In the brain, no network is an island, but rather, is in constant communication with other regions, near and far. This functional connectivity can be assessed with fMRI by computing correlations in the signal between areas. As might be expected, connectivity was highest between networks involved in related functions, for example between sensorimotor and auditory networks, and between sensorimotor and visual networks. Connectivity between networks was similar in the single subject and multi-subject datasets, and was highly reproducible both across the single subject’s sessions and within the multi-subject dataset.

Figure 2. Between network connectivity for single-subject and multi-subject datasets. (Choe et al., 2015)

fMRI over the years

A unique advantage of the study design was the rich temporal information provided by repeated scanning over a multi-year period. This allowed the researchers not only to assess the reproducibility of the BOLD signal, but also to explore how it may change over the years or with seasonal fluctuations. Significant temporal trends were found in spatial similarity for the majority (11 of 14) of networks, in BOLD fluctuations for two networks, and in between-network connectivity for many (29 of 105) network pairs. All but one of these trends were positive, indicating increased stability of the fMRI signal over time. What drives these changes over the years isn't entirely clear. It could simply reflect habituation to the scanning environment, as the experience becomes increasingly repetitive and familiar with exposure. Alternatively, the authors suggest, it might involve physiological changes in the aging brain, such as synaptic or neuronal pruning. Over the 3.5-year study, the 40-year-old participant indeed showed a decline in gray matter volume; this neural reorganization could feasibly impact the stability of the fMRI signal. However, Dr. Choe cautions that "although three years is a long time, it is certainly not long enough to address the issue of say, an aging brain."

Notably, many networks showed annual periodicity in their spatial similarity (9 of 14 networks) and BOLD fluctuations (3 networks). These measures also correlated with the local temperature, linking reliability of the fMRI signal with seasonal patterns. Although speculative, the authors suggest that this may in part relate to circadian or other homeostatic rhythms that regulate brain activity. Dr. Choe and her group “were surprised to discover annual periodicity in rs-fMRI outcome measures. If future studies, in a large number of participants, find significant annual periodicity in rsfMRI outcomes, then it would be prudent to take such temporal structure into consideration, especially when designing studies in chronic conditions, or for extended therapeutic interventions.”
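One simple way to quantify annual periodicity in a weekly measure is to regress it on an annual sine and cosine and take the amplitude of the fitted component. A hedged sketch with synthetic weekly data (this is not the authors' analysis, just an illustration of the idea):

```python
import numpy as np

def annual_sine_fit(t_days, y):
    """Least-squares fit of y = a*sin(2*pi*t/365.25) + b*cos(2*pi*t/365.25) + c.

    Returns the amplitude sqrt(a^2 + b^2) of the annual component.
    Simplified sketch of detecting yearly periodicity in a time series.
    """
    w = 2 * np.pi * np.asarray(t_days, dtype=float) / 365.25
    X = np.column_stack([np.sin(w), np.cos(w), np.ones_like(w)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.hypot(coef[0], coef[1]))

# Weekly sessions over ~3.5 years with a seasonal oscillation plus noise.
t = np.arange(0, int(3.5 * 365), 7)
rng = np.random.default_rng(1)
y = 0.3 * np.sin(2 * np.pi * t / 365.25) + 0.05 * rng.standard_normal(t.size)
print(round(annual_sine_fit(t, y), 2))  # recovers the true amplitude, 0.3
```

A full analysis would also test the fitted amplitude against a null distribution before declaring the periodicity significant.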

Reason to rest easy?

The findings from Dr. Choe and colleagues' ambitious study provide convincing evidence that the resting fMRI signal is reproducible over extensive time periods, giving cognitive neuroscientists everywhere reason to breathe a small sigh of relief. Perhaps more importantly, the study characterizes the nuanced patterns of the signal's spatial and temporal stability, unraveling how it differs across brain networks and how it may be vulnerable to moderators such as aging or environment. This new understanding of fMRI dynamics will be invaluable to researchers aiming to optimize their fMRI study designs, and holds particularly important implications for longitudinal studies in which aging or seasonal effects may be of concern. According to Dr. Choe,

“The high reproducibility of rs-fMRI network measures supports the candidacy of such measures as potential biomarkers for long-term therapeutic studies.”

One future application her team is currently pursuing is “using rs-fMRI to study brain reorganization in persons with chronic spinal cord injury, having recently reported significantly increased visuo-motor connectivity following recovery. We are interested in whether such measures can be used as biomarkers for prognosis and to help monitor responses to long-term therapy.”


Bandettini PA (2012). Twenty years of functional MRI: The science and the stories. Neuroimage. 62(2):575–588. doi: 10.1016/j.neuroimage.2012.04.026

Chen S, Ross TJ, Zhan W et al (2008). Group independent component analysis reveals consistent resting-state networks across multiple sessions. Brain Research. 1239:141-151. doi: 10.1016/j.brainres.2008.08.028

Choe AS, Belegu V, Yoshida S, Joel SE et al (2013). Extensive neurological recovery from a complete spinal cord injury: a case report and hypothesis on the role of cortical plasticity. Front Hum Neurosci. 7:290.

Choe AS, Jones CK, Joel SE et al (2015). Reproducibility and Temporal Structure in Weekly Resting-State fMRI over a Period of 3.5 Years. PLOS One. 10(10):e0140134. doi: 10.1371/journal.pone.0140134

Guo CC, Kurth F, Zhou J et al (2012). One-year test–retest reliability of intrinsic connectivity network fMRI in older adults. Neuroimage. 61(4):1471–1483. doi: 10.1016/j.neuroimage.2012.03.027

Hodkinson DJ, O’Daly O, Zunszain PA et al (2013). Circadian and homeostatic modulation of functional connectivity and regional cerebral blood flow in humans under normal entrained conditions. J Cereb Blood Flow Metab. 34:1493–1499. doi: 10.1038/jcbfm.2014.109

Logothetis NK, Pauls J, Augath M, Trinath T, Oeltermann A (2001). Neurophysiological investigation of the basis of the fMRI signal. Nature. 412:150–157. doi: 10.1038/35084005

Logothetis NK (2008). What we can do and what we cannot do with fMRI. Nature. 453:869–878. doi: 10.1038/nature06976

Wisner KM, Atluri G, Lim KO, MacDonald AW (2013). Neurometrics of intrinsic connectivity networks at rest using fMRI: Retest reliability and cross-validation using a meta-level method. Neuroimage. 76(1):236–251. doi: 10.1016/j.neuroimage.2013.02.066

While you were sleeping: Neural reactivations promote motor learning

Originally published on the PLOS Neuroscience Community

Do you recall that moment when you first learned to ride a bike? After days of practice, it finally clicked. Almost effortlessly, your legs cycled in perfect harmony as you maneuvered gracefully around turns and maintained impeccable balance. You may have learned this skill decades ago but it will likely stick with you for the rest of your life. How did your brain accomplish this remarkable feat of transforming a series of forced and foreign actions into an automatic, fluid movement sequence? A new PLOS Biology study by Dr. Dhakshin Ramanathan and colleagues explored the neural substrates of motor memory acquisition, reporting that reactivation of task-related neural activity patterns during sleep promotes motor skill learning in mice.

Sleep enhances motor learning

To evaluate motor learning, the researchers trained mice to perform a task in which they had to reach for a food pellet. Since sleep is known to be important for memory consolidation, the mice were allowed to sleep before and again after the reach task. The mice performed the task a second time after sleeping to assess how sleep affected their performance. Accuracy on the reaching task improved over the course of the first training period, whereas the mice responded more quickly after sleeping; hence, “online learning” (during the task) improved accuracy and “offline learning” (while sleeping) improved speed. No changes occurred when the mice were sleep-deprived between task sessions, pinpointing sleep – rather than the passage of time – as the source of the performance boost.

Behavioral paradigm (Ramanathan et al., 2015)

Behavioral paradigm (Ramanathan et al., 2015)

Sleep-dependent neural changes

The researchers next explored the neurophysiological basis for this sleep-dependent learning, recording neural activity from the forelimb area of motor cortex. As co-author Dr. Karunesh Ganguly explained,

“Studies had previously studied hippocampal-based memory systems. It remained unclear specifically how the motor system (i.e., procedural memory) processes memories during sleep.”

When the mice performed the task the second time, after sleeping, the onset of neural firing (time-locked to the reach) peaked earlier, and this activity was more strongly modulated by the task. Notably, firing onset did not change after sleep deprivation, confirming that sleep was necessary for this temporal shift in the neural response.

Learning-related reactivation

In the hippocampus and elsewhere in the cortex, neurons at rest will fire in a particular sequence matching the temporal pattern of a prior experience, an event known as "replay" that is thought to support formation of a memory for that experience. The researchers speculated that reactivation or replay in motor cortex may similarly promote motor learning. Neural activity patterns identified from the reach task were more prevalent during sleep after the task, showing – as predicted – reactivation of task-related activity after motor learning. When mice performed the reach task on multiple days, the degree of reactivation during sleep correlated with reduced reaction time across days, linking stronger neural reactivation with behavioral improvements.
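As a rough illustration, reactivation can be scored by correlating a task-derived population activity template with population vectors recorded during sleep. The sketch below uses simulated firing-rate patterns; the study's actual template-matching analyses are more sophisticated, so treat every name and number here as an assumption:

```python
import numpy as np

def reactivation_strength(task_template, sleep_epochs):
    """Mean correlation between a task-derived population activity
    template and population vectors from individual sleep epochs.

    A simplified, hypothetical reactivation score (not the paper's
    exact method, which uses ensemble decomposition techniques).
    """
    t = task_template - task_template.mean()
    scores = []
    for epoch in sleep_epochs:
        e = epoch - epoch.mean()
        denom = np.linalg.norm(t) * np.linalg.norm(e)
        scores.append(np.dot(t, e) / denom if denom else 0.0)
    return float(np.mean(scores))

rng = np.random.default_rng(7)
template = rng.standard_normal(50)   # task firing-rate pattern, 50 neurons

# Pre-task sleep: unrelated activity. Post-task sleep: noisy echoes
# of the task pattern, mimicking reactivation.
pre_sleep = [rng.standard_normal(50) for _ in range(100)]
post_sleep = [template + 0.8 * rng.standard_normal(50) for _ in range(100)]

print(reactivation_strength(template, pre_sleep) <
      reactivation_strength(template, post_sleep))  # True
```

The key comparison mirrors the study's finding: task-like patterns are more prevalent in sleep after training than before.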

A) Reactivation of task-related neural activity after learning. B) Reactivation correlates with improvements in reaction time. (Ramanathan et al., 2015)

Since the authors observed neural reactivation during motor learning, they next wondered whether the temporal sequence of this reactivation may be an important element of the memory code. The neural activation pattern during sleep more closely matched the task-related activity pattern after learning than before, although some of the temporal information in the sleep sequence was lost. Although prior studies have shown a role for hippocampal or cortical replay in memory consolidation, Dr. Ganguly raises the important distinction that here, they "did not find evidence of 'replay' (i.e., sequences) but 'reactivation' (i.e., synchronous bursts)."

Learning-related plasticity was evident even at the single neuron level. Those neurons with the highest task-related activity were most strongly reactivated during sleep, and those showing the strongest reactivation also happened to show the most dramatic shift in the onset of their response to reaching. The authors speculate that this increased temporal coupling of neural activity to the task could facilitate binding the neurons into a distributed “movement complex” that aids formation of the motor memory.

Locking to spindles and slow waves for widespread plasticity 

Bursts of high-frequency activity – known as spindles – and slow wave oscillations have both been implicated in offline learning. If task-related reactivations during sleep are important for memory consolidation, the authors reasoned, they may be temporally linked to spindles or slow waves. After learning the reach task, reactivations were in fact more closely time-locked and phase-locked (i.e., occurred at a particular phase of the cycle) to fast spindles. Reactivations were also more strongly time-locked to slow oscillations, and their phase-locking to these oscillations shifted. Thus, during sleep, neural activation patterns related to the motor task were not only more prevalent after training, but their timing was also refined to coincide with particular neural events that may facilitate memory formation. Since spindles may be involved in synchronizing long-range cortical activity, locking task-specific reactivations to spindles could tie them into neuroplastic changes throughout widespread brain networks supporting consolidation.
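Phase-locking of discrete events (such as reactivations) to an ongoing oscillation is commonly quantified as the resultant vector length of the event phases. A simplified sketch using an idealized spindle oscillation; real pipelines first band-pass filter the recorded signal around the spindle band, so this is an illustration of the statistic, not of the study's full analysis:

```python
import numpy as np

def analytic_phase(x):
    """Instantaneous phase via the FFT-based analytic signal
    (the same construction scipy.signal.hilbert uses)."""
    n = x.size
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.angle(np.fft.ifft(spectrum * h))

def phase_locking_value(oscillation, event_indices):
    """Resultant vector length of event phases:
    0 = no phase preference, 1 = perfectly phase-locked."""
    phase = analytic_phase(oscillation)
    return float(np.abs(np.mean(np.exp(1j * phase[event_indices]))))

fs = 1000                                  # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
spindle = np.sin(2 * np.pi * 13 * t)       # idealized 13 Hz "fast spindle"

# Events at oscillation peaks vs. events at random times.
peak_times = (np.arange(130) + 0.25) / 13
locked = np.round(peak_times * fs).astype(int)
random_events = np.random.default_rng(3).integers(0, t.size, 130)

print(phase_locking_value(spindle, locked))         # near 1
print(phase_locking_value(spindle, random_events))  # near 0
```

The contrast between the two values is the kind of evidence behind the claim that reactivations became phase-locked to fast spindles after learning.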

At a recent talk on sleep-dependent memory consolidation, the speaker compared the neural reorganization that takes place during memory formation to a house renovation. Just as it’s more comfortable and effective for us to check into a hotel while our house is renovated, learning may be more effective when the brain checks out – into the quietude of sleep – while neural reorganization occurs. This may explain why sleep is so important for learning, yet it doesn’t explain how the brain stores new memories during sleep. Past studies have identified neuronal reactivation, coordinated with spindles and slow waves, as critical for forming declarative memories. Dr. Ramanathan and colleagues’ findings suggest that these mechanisms also occur in the motor cortex to support a radically different form of memory – the kind that helped you learn to ride a bike many years ago.

Clarifying the neural dynamics of sleep-dependent learning holds profound implications not just for those of us hoping to learn to play a new instrument or refine our dance moves. It may also help us better understand the remarkable neuroplasticity that underlies rapid motor learning during early development, and hold potential to promote recovery from motor impairments following brain injury. Dr. Ganguly is optimistic about the possible applications of their findings:

“Motor learning is likely an essential process during rehabilitation. Surprisingly little is known about the role of sleep and replay during recovery. With further study, one could imagine using sleep and offline processing to maximize the learning during rehabilitation.”


Ji D, Wilson MA (2007). Coordinated memory replay in the visual cortex and hippocampus during sleep. Nat Neurosci. 10:100-107. doi: 10.1038/nn1825

Ramanathan DS, Gulati T, Ganguly K (2015). Sleep-Dependent Reactivation of Ensembles in Motor Cortex Promotes Skill Consolidation. PLOS Biol. doi: 10.1371/journal.pbio.1002263

Stickgold R (2005). Sleep-dependent memory consolidation. Nature. 437:1272–1278. doi: 10.1038/nature04286

#PLOS #SfN15 Recap: Hidden variables of behavior

Originally published on the PLOS Neuroscience Community

The Society for Neuroscience meeting is unique in both its breadth and depth. There are sessions on literally everything Neuro, each delving into its given topic with exquisite detail and nuance. While this level of focus is great for those seeking comprehensive coverage of their niche, it can be daunting for those looking for a broader sampling of the field’s cutting edge. The Hidden Variables of Behavior symposium was one of the rare sessions to stray from the single-track convention and elegantly bridge seemingly disparate topics, methodologies and applications, producing a standout session with exceptionally broad appeal. It accomplished this by exploring a theme that is perhaps the unifying motivation underlying nearly all Neuroscience research: how does the brain engender behavior? How does neural activity give rise to the thoughts, interactions with our environment, and engagements with others that define our experiences? In an enthralling series of talks by Loren Frank, Mark Schnitzer, Yang Dan and Catherine Dulac, the symposium covered topics ranging from learning and memory to sleep and social behavior. This session had it all.

Rapidly alternating representations of present and past in hippocampal-cortical networks

Loren Frank kicked off the symposium by exploring how the hippocampus supports our ability to remember the past and plan for the future. Hippocampal cells have a remarkable ability to replay past experiences via high-frequency oscillations during sharp waves, known as ripples. When an animal traverses a path, its hippocampal neurons will fire in a characteristic sequence that codes its trajectory; later, at rest or while sleeping, this spiking sequence will repeat, sped up to approximately twenty times the original rate! Disrupting hippocampal ripples impairs sequence learning, indicating that they’re critical for acquiring memories. However, the mechanisms, at both regional and whole-brain levels, by which sharp wave ripples (SWRs) help to consolidate memories are unclear.


Much attention has been paid to neurons in hippocampal subregions CA1 and CA3, which are excitable by high-speed motion and positively modulated by SWRs. However, Frank’s group identified a new group of hippocampal neurons – CA2P and CA2N – that are also positively and negatively modulated by SWRs, respectively. Notably, the CA2N population has an exceptionally high level of baseline activity and preferentially fires during rest or low speed motion. Because of their distinct function, these rebellious cells may be crucial for ongoing processing of the current state while maintaining representations of the past and future.

Although some (including yours truly) may hold a hippocampo-centric view of memory, Frank reminds us that memory is “not just a hippocampal thing.” Looking to the rest of the brain, his group found that SWRs recruit 35% of prelimbic regions, including cells that are both excited and inhibited by SWRs. Similar to the distinct populations of CA2P and CA2N cells, prefrontal cortex neurons may activate during either high-speed motion or immobility. This balance of excitation and inhibition in the hippocampus and surrounding cortex may promote rapid transitions between representations of the past and future, and facilitate their integration for learning and planning.

Large-scale ensemble neural dynamics underlying learning and long-term associative memory

Mark Schnitzer continued with this theme, presenting intriguing findings regarding the spatiotemporal properties of neural adaptations subserving learning. Equally impressive, however, are the advanced imaging tools his lab is developing to explore these issues. Their techniques allow neural recordings in behaving animals at unprecedented spatial depths and extents over long time scales. For instance, their current best is recording 1,202 hippocampal cells in a freely moving mouse. Someone give this man the “I-recorded-the-most-neurons” award!

Using these tools, Schnitzer has been exploring hippocampal morphological and physiological changes that contribute to learning. CA1 neurons are a likely target for spatial learning, as they show place-cell activity, preferentially responding to particular regions of an animal’s environment. Surprisingly, dendrites in subregion CA1 are remarkably stable, suggesting that dendritic plasticity is unlikely to be the critical factor underlying learning. However, CA1 spine turnover is relatively rapid – on the order of 8-10 days – in comparison to cortical spines, of which 50% are permanent over a month. Schnitzer explained that although these cells are temporally stochastic in that they sometimes take breaks from their place-coding activity, when they return to the neuronal “spatial ensemble” they always return to encode the same place. What’s more, CA1 spatial representations are refined by learning, becoming both more accurate and reliable in their coding. I’ll be eagerly following Schnitzer’s work to see how their ongoing methodological innovations and applications advance our understanding of the hippocampal dynamics supporting long-term memories.

Neural circuits for sleep control

Yang Dan turned from this fast-paced discussion of rapid neural plasticity, spatial navigation and learning to examine neural regulation of sleep. Historically, neurons that trigger alertness and waking have been easy to identify, but researchers have struggled to track down “sleep neurons.” Past lesion and c-fos studies have shown that hypothalamic – particularly preoptic – neurons are important for inducing sleep.

Combining optogenetics with electrophysiology, Dan’s lab has expanded upon these findings to pinpoint both the responsible cell types and their specific sleep-inducing effects. In particular, activating GABAergic preoptic cells projecting to the tuberomammillary nucleus (also of the hypothalamus) promotes non-REM sleep initially, and REM sleep later. The midbrain’s ventrolateral periaqueductal gray also promotes sleep, but only the non-REM type. Dan’s findings together suggest that mutual inhibition across these key hypothalamic and brainstem regions regulates transitions across three general brain states of waking, REM sleep and non-REM sleep.

Long-term changes in the representation of social information in the mouse medial amygdala

After all this talk about sleep, my hypothalamic sleep neurons had begun battling the morning’s adenosine antagonists. Fortunately, Catherine Dulac’s captivating talk exploring the bases of social interactions and sex-specific behavior kept me alert and engaged. Two key circuits working in concert to process social information, she explained, are the olfactory and vomeronasal systems. The latter system in particular may act as a switch to promote appropriate (and suppress inappropriate) sex-specific behavior.

Dulac’s research, fusing molecular, genetic and electrophysiological techniques, has identified the medial amygdala as a critical stop along the vomeronasal circuit for mediating sex-specific social signaling in mice. Medial amygdalar encoding of social cues is not only sexually dimorphic, but is also regulated by salient social experiences including mating and co-housing. Furthermore, the efficiency of medial amygdalar signaling also changes after mating in a sex-specific manner, increasing in males but decreasing in females. Together, Dulac’s work has pinpointed the medial amygdala as an indispensable hub within an extensive neural circuit that regulates social behavior and, in turn, is modulated by sexual and social experience.

Every SfN has at least one session that reminds me why I love the brain and re-ignites my passion for Neuroscience. This year, the Hidden Variables of Behavior symposium was it! It may be a year away, but I’m eagerly awaiting #SfN16 for similarly inspiring talks.

For an abbreviated play-by-play, visit my Storified live-tweeting of the symposium’s highlights.


Anderson EB, Grossrubatscher I, Frank L (2015). Dynamic Hippocampal Circuits Support Learning- and Memory-Guided Behaviors. Cold Spring Harb Symp Quant Biol. 79:51–58. doi: 10.1101/sqb.2014.79.024760

Attardo A, Fitzgerald JE, Schnitzer MJ (2015). Impermanence of dendritic spines in live adult CA1 hippocampus. Nature. 523:592–596. doi: 10.1038/nature14467

Bergan JF, Ben-Shaul Y, Dulac C (2014). Sex-specific processing of social cues in the medial amygdala. eLife. 3:e02743. doi: 10.7554/eLife.02743

Brennan PA (2001). The vomeronasal system. Cell Mol Life Sci. 58(4):546–555.

Ego-Stengel V, Wilson MA (2010). Disruption of ripple-associated hippocampal activity during rest impairs spatial learning in the rat. Hippocampus. 20(1):1–10. doi: 10.1002/hipo.20707

Weber F, Chung S, Beier KT, Xu M, Luo L, Dan Y (2015). Control of REM sleep by ventral medulla GABAergic neurons. Nature. 526:435–438. doi: 10.1038/nature14979

Embracing sadness

Years ago, I was a chronically injured runner – stress fractures, torn tendons, you name it, I had it. This came to a blissful end about a year ago when I bid adieu to my most frustrating injury, a chronic hamstring tendinopathy that’s haunted me nearly my entire running career. Since then I’ve enjoyed remarkable growth as a runner, witnessing improvements in my strength, endurance and technique. I’ve returned to racing after a several-year hiatus and recently registered for what would be my eleventh marathon and first barefoot marathon!

My mirage of invincibility disintegrated suddenly just 10 days ago. Having completed a particularly hard week including a great 18-mile barefoot run just a few days prior, I was fatigued and sore. Just five miles into an “easy recovery run”, my hamstring seized up and left me limping home. My heart sank, recognizing that all-too-familiar pattern of pain that I’d battled since high school track and cross-country. I had re-strained my hamstring.

Over the years of incessant injuries I developed resilience and adaptability as I learned the invaluable benefits of a positive attitude for healing (and sanity!). This optimism has kept me on the fast track to healing, nurturing my health for optimal rehabilitation. During past injuries I would attack cross-training and strength work, diligently adhere to my physical therapy, and target my diet to heal as efficiently as possible. I have always been convinced that this is the best way – the right way – to approach recovery. The athletes I most admire would never let injury get them down, but attack it head-on with hard work and determination. The first few days after re-injuring my hamstring I was similarly optimistic. This is a minor bump in the road … just some fleeting soreness that a little massage, active release and acupuncture will nip in the bud, I convinced myself.

Yet 10 days of essentially no running (excepting a few very painful failed efforts) later, I can no longer feign positivity. I am in mourning and I am embracing it. This is a sadness that only a runner could understand. I am sad to have lost a defining piece of myself. A source of inspiration, energy, passion and power. My source of life. This admission comes with a heavy dose of embarrassment and guilt for such distress over what is truly a trivial matter. Rationally I’m deeply grateful for my remarkable fortune for my otherwise great health, a job I love, and the most wonderful friends and family. I have tried to deny this melancholy, to convince myself that this sadness is no match for my optimism. But that is a lie. Pretending that I don’t miss those hours alone on the road, that I don’t fear another long struggle with injury, is perhaps even more toxic than the negativity itself.

One day I will heal. One day I will run again. I know this. But for now, I am sad.

That All-Nighter is not without Neuroconsequences

Originally published on the PLOS Neuroscience Community

As you put the finishing touches on your paper, you notice the sun rising and fantasize about crawling in bed. Your vision and hearing are beginning to distort and the words staring back at you from the monitor have lost their meaning. Your brain … well, feels like mush. We’ve all been there. That debilitating brain fog that inevitably sets in after an all-nighter prompts the obvious question: what does sleep deprivation actually do to the brain? Neuroscientists from Norway set out to answer this question in their recent PLOS ONE study, examining how a night forgoing sleep affects brain microstructure. Among their findings, sleep deprivation induced widespread structural alterations throughout the brain. The lead author shares his thoughts on the possible biological causes of these changes, and whether they may be long-lasting.

Inducing sleep deprivation

The researchers assessed a group of 21 healthy young men over the course of a day. The participants underwent diffusion tensor imaging (DTI; a form of MRI that measures water diffusion and can be used to evaluate white matter integrity) when they first awoke, at 7:30 am. They were free to go about their day as normal before returning for a second DTI scan at 9:30 pm. They remained in the lab for monitoring until a final scan at 6:30 am the following morning, for a total of 23 hours of continuous waking. Since we’re now learning that anything and everything can influence brain structure on surprisingly short time-scales, the researchers carefully controlled as many confounding factors as possible. The participants were not allowed to exercise or consume alcohol, caffeine or nicotine during the study, or to eat right before the scans. Since DTI measures water diffusion, hydration was evaluated at all sessions and accounted for in the analysis.

Rapid microstructural changes to waking

The researchers were interested in two main questions: how does the brain change after a normal day of wakefulness, and after sleep deprivation? They focused on three DTI metrics to probe how different features of neuronal tissue may change with waking. Radial diffusivity (RD) measures how water diffuses across fibers, whereas axial diffusivity (AD) measures diffusion along the length of a tract. Fractional anisotropy (FA) reflects the balance of axial to radial diffusivity and therefore measures how strongly water diffuses along a single direction.
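These three metrics derive from the eigenvalues of the diffusion tensor at each voxel. A minimal sketch using the standard formulas (the example eigenvalues are illustrative, not values from the study):

```python
import numpy as np

def dti_metrics(eigenvalues):
    """Compute AD, RD and FA from the three diffusion-tensor
    eigenvalues. Standard textbook formulas; example values below
    are illustrative only."""
    l1, l2, l3 = sorted(eigenvalues, reverse=True)
    ad = l1                         # axial diffusivity: along the fiber
    rd = (l2 + l3) / 2              # radial diffusivity: across the fiber
    md = (l1 + l2 + l3) / 3         # mean diffusivity
    fa = np.sqrt(1.5 * ((l1 - md)**2 + (l2 - md)**2 + (l3 - md)**2)
                 / (l1**2 + l2**2 + l3**2))
    return ad, rd, float(fa)

# Strongly anisotropic tensor (coherent white matter) vs. isotropic tissue.
print(dti_metrics([1.7e-3, 0.3e-3, 0.3e-3]))  # high FA (~0.8)
print(dti_metrics([1.0e-3, 1.0e-3, 1.0e-3]))  # FA = 0
```

Note that FA is high when diffusion along the principal axis dominates diffusion across it, which is why shifts in AD and RD at different rates can move FA in non-obvious ways.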

From morning to evening, FA increased and this was driven mostly by reduced RD (Figure, left). From the evening to the next morning – after the all-nighter – FA values decreased to levels comparable to the prior morning, and this drop was coupled with a decrease in AD (Figure, right). Thus, over the course of a full day of wakefulness FA fluctuated, temporarily rising but eventually rebounding. In contrast, both RD and AD declined but at different rates, RD dropping by the end of a normal day, and AD dropping later, only after considerable sleep deprivation. These changes were non-specific, occurring throughout the brain, including in the corpus callosum, brainstem, thalamus and frontotemporal and parieto-occipital tracts.

Throughout the brain, FA values increase from morning to evening (left) and decrease from the evening to the next morning after a night without sleep (right). Elvsåshagen et al., 2015.

How bad are the neuroconsequences of sleep deprivation?

Other studies have corroborated these reports that wakefulness alters the brain, including reduced diffusion with increasing time awake and altered functional connectivity after sleep deprivation. How this plasticity reflects the consequences of waking on the brain, however, isn’t clear. Sleep is known to be essential for tissue repair and is particularly important for promoting lipid integrity to maintain healthy cell membranes and myelination. The question remains, therefore, how detrimental the structural reorganization from sleep deprivation really is. Does the plasticity reported here and elsewhere persist for days, weeks or longer, or can a long night of deep catch-up sleep reverse any detriment the all-nighter caused?

“My hypothesis,” says first author Dr. Torbjørn Elvsåshagen, “would be that the putative effects of one night of sleep deprivation on white matter microstructure are short term and reverse after one to a few nights of normal sleep. However, it could be hypothesized that chronic sleep insufficiency might lead to longer-lasting alterations in brain structure. Consistent with this idea, evidence for an association between impaired sleep and localized cortical thinning was found in obstructive sleep apnea syndrome, idiopathic rapid eye movement sleep behavior disorder, mild cognitive impairment and community-dwelling adults. Whether chronic sleep insufficiency can lead to longer-lasting alterations in white matter structure remains to be clarified.”

Is sleepiness really to blame?

It’s likely that multiple factors contribute to these distinct patterns of change in neuronal tissue. After sleep deprivation, the extent of AD decline correlated with subjective sleepiness ratings, suggesting that microstructural alterations may in fact be attributable to changes in alertness or arousal. This possibility is in line with the finding that changes occurred in both the thalamus and brainstem, regions important for arousal and wakefulness. However, the non-linear changes in FA suggest that some microstructural changes may be less related to sleepiness and more directly driven by circadian effects. FA increased late in the day, but – despite fatigue – dropped back after sleep deprivation to the same levels as the day prior. This rebounding may have been due to declining levels of AD and RD reaching equilibrium (recall that FA reflects the balance of AD to RD), or to neuronal features that fluctuate with our circadian rhythms, at least partially independent of our sleep habits. What’s more, other studies have found that presumably mundane activities, such as juggling or spatial learning, also induce gray and white matter changes within hours, and presumably many more as-yet-unstudied activities cause similarly rapid plasticity. Given that participants were free to engage in various physical and cognitive activities between the scans, it’s reasonable to assume that some of these behaviors may have also influenced brain structure. Whatever the mechanism, these effects underscore the importance of accounting for time of day in structural neuroimaging studies.

Dr. Elvsåshagen elaborates on these possible factors: “The precise neurobiological substrate for the observed DTI changes after waking remain to be clarified. We cannot rule out the possibility that both activity-independent and activity-dependent processes could contribute to DTI changes after waking. In support of potential activity-dependent white matter alterations, there is interesting evidence from in vitro studies indicating that hours of electrical activity can lead to changes in myelination. To further explore the possibility of activity-dependent white matter alterations, one could examine whether different physical or cognitive tasks lead to task-specific white matter changes.”

Sleepy outliers?

Notably, two of the 21 participants did not show the same rise in FA throughout the day as the others, and showed the smallest change in FA and AD after sleep deprivation. While variability across individuals in terms of brain structure and biological responses to the environment is expected, the remarkable consistency of the study’s other findings raises the possibility that some other factors may explain these outliers. Dr. Elvsåshagen conjectures, “These individuals were also the least tired individuals after sleep deprivation. Although highly speculative, one possible explanation for the lesser changes in these two participants might be a particular resistance to the effects of waking and sleep deprivation.” A follow-up with additional time-points and closer monitoring of activities could help more finely track how the patterns of brain microstructural change shift over periods of waking, and vary across individuals.

Linking diffusion to neurons

How sleep, fatigue, activity or circadian rhythms affect particular neuronal structural properties remains to be seen. RD and AD are thought to depend on myelin and axon integrity, respectively, but DTI metrics in general are sensitive to various other tissue features as well, including cell membrane permeability, axon diameter, tissue perfusion or glial processes. While these properties are difficult to image in living humans, insight from animal studies will help determine how waking impacts specific neuronal characteristics.

Longer-term studies are needed to answer these questions. Dr. Elvsåshagen shared that his team has since replicated these results in a larger sample and is planning a follow-up study on the effects of waking and sleep deprivation on gray matter structure. Until these outstanding questions are answered, keeping a regular sleep schedule – and avoiding those early-morning paper-writing marathons – may be a better option for your brain health.

References
Bellesi M, Pfister-Genskow M, Maret S, Keles S, Tononi G, Cirelli C (2013). Effects of sleep and wake on oligodendrocytes and their precursors. J Neurosci. 33: 14288–14300. doi: 10.1523/JNEUROSCI.5102-12.2013

Budde MD, Xie M, Cross AH, Song SK (2009). Axial diffusivity is the primary correlate of axonal injury in the experimental autoimmune encephalomyelitis spinal cord: a quantitative pixelwise analysis. J Neurosci. 29: 2805–2813. doi: 10.1523/JNEUROSCI.4605-08.2009

De Havas JA, Parimal S, Soon CS, Chee MW (2012). Sleep deprivation reduces default mode network connectivity and anti-correlation during rest and task performance. NeuroImage. 59: 1745–1751. doi: 10.1016/j.neuroimage.2011.08.026

Driemeyer J, Boyke J, Gaser C, Buchel C, May A (2008). Changes in gray matter induced by learning—revisited. PLOS ONE. 3: e2669. doi: 10.1371/journal.pone.0002669

Elvsåshagen T, Norbom LB, Pedersen PØ, Quraishi SH, Bjørnerud A, Malt UK (2015). Widespread Changes in White Matter Microstructure after a Day of Waking and Sleep Deprivation. PLOS ONE. 10(5): e0127351. doi: 10.1371/journal.pone.0127351

Hinard V, et al. (2012). Key electrophysiological, molecular, and metabolic signatures of sleep and wakefulness revealed in primary cortical cultures. J Neurosci. 32: 12506–12517. doi: 10.1523/JNEUROSCI.2306-12.2012

Hofstetter S, Tavor I, Tzur Moryosef S, Assaf Y (2013). Short-term learning induces white matter plasticity in the fornix. J Neurosci. 33: 12844–12850. doi: 10.1523/JNEUROSCI.4520-12.2013

Jiang C, et al. (2014). Diurnal microstructural variations in healthy adult brain revealed by diffusion tensor imaging. PLOS ONE. 9: e84822. doi: 10.1371/journal.pone.0084822

Joo EY, et al. (2013). Brain Gray Matter Deficits in Patients with Chronic Primary Insomnia. Sleep. 36(7): 999-1007. doi: 10.5665/sleep.2796

Rahayel S, et al. (2015). Patterns of cortical thinning in idiopathic rapid eye movement sleep behavior disorder. Mov Disord. 30(5): 680–687. doi: 10.1002/mds.25820

Sanchez-Espinosa MP, Atienza M, Cantero JL (2014). Sleep deficits in mild cognitive impairment are related to increased levels of plasma amyloid-β and cortical thinning. NeuroImage. 98: 395-404. doi: 10.1016/j.neuroimage.2014.05.027

Song SK, Sun SW, Ramsbottom MJ, Chang C, Russell J, Cross AH (2002). Dysmyelination revealed through MRI as increased radial (but unchanged axial) diffusion of water. NeuroImage. 17: 1429–1436. doi: 10.1006/nimg.2002.1267

Sexton CE, et al. (2014). Accelerated changes in white matter microstructure during aging: a longitudinal diffusion tensor imaging study. J Neurosci. 34(46): 15425–15436. doi: 10.1523/JNEUROSCI.0203-14.2014

Wake H, Lee PR, Fields RD (2011). Control of Local Protein Synthesis and Initial Events in Myelination by Action Potentials. Science. 333(6049): 1647–1651. doi: 10.1126/science.1206998
