Blog

The Brain of a Musician: Case Study of Sting
By Jason von Stietz, M.A.
August 27, 2016
Photo Credit: Owen Egan

 

What does musical ability look like in the brain? Researchers at McGill University were presented with a unique opportunity: in an effort to understand how a musician’s brain operates, they were able to perform brain imaging on Sting, winner of multiple Grammy Awards. The study was discussed in a recent article in NeuroscientistNews: 

 

What does the 1960s Beatles hit "Girl" have in common with Astor Piazzolla's evocative tango composition "Libertango"?

 

Probably not much, to the casual listener. But in the mind of one famously eclectic singer-songwriter, the two songs are highly similar. That's one of the surprising findings of an unusual neuroscience study based on brain scans of the musician Sting.

 

The paper, published in the journal Neurocase, uses recently developed imaging-analysis techniques to provide a window into the mind of a masterful musician. It also represents an approach that could offer insights into how gifted individuals find connections between seemingly disparate thoughts or sounds, in fields ranging from arts to politics or science.

 

"These state-of the-art techniques really allowed us to make maps of how Sting's brain organizes music," says lead author Daniel Levitin, a cognitive psychologist at McGill University. "That's important because at the heart of great musicianship is the ability to manipulate in one's mind rich representations of the desired soundscape."

 

Lab tour with a twist

 

The research stemmed from a serendipitous encounter several years ago. Sting had read Levitin's book This Is Your Brain on Music and was coming to Montreal to play a concert. His representatives contacted Levitin and asked if he might take a tour of the lab at McGill. Levitin—whose lab has hosted visits from many popular musicians over the years—readily agreed. And he added a unique twist: "I asked if he also wanted to have his brain scanned. He said 'yes'."

 

So it was that McGill students in the Stewart Biology Building one day found themselves sharing an elevator with the former lead singer of The Police, who has won 16 Grammy Awards, including one in 1982 for the hit single "Don't Stand So Close To Me."

 

Both functional and structural scans were conducted in a single session at the brain imaging unit of McGill's Montreal Neurological Institute, the hot afternoon before a Police concert. A power outage knocked the entire campus off-line for several hours, threatening to cancel the experiment. Because it takes over an hour to reboot the fMRI machine, time grew short. Sting generously agreed to skip his soundcheck in order to do the scan.

 

Levitin then teamed up with Scott Grafton, a leading brain-scan expert at the University of California at Santa Barbara, to use two novel techniques to analyze the scans. The techniques, known as multivoxel pattern analysis and representational dissimilarity analysis, showed which songs Sting found similar to one another and which he found dissimilar—based not on tests or questionnaires, but on activations of brain regions.

 

"At the heart of these methods is the ability to test if patterns of brain activity are more alike for two similar styles of music compared to different styles. This approach has never before been considered in brain imaging experiments of music," notes Scott Grafton.

 

Unexpected connections

 

"Sting's brain scan pointed us to several connections between pieces of music that I know well but had never seen as related before," Levitin says. Piazzola's "Libertango" and the Beatles' "Girl" proved to be two of the most similar. Both are in minor keys and include similar melodic motifs, the paper reveals. Another example: Sting's own "Moon over Bourbon Street" and Booker T. and the MG's "Green Onions," both of which are in the key of F minor, have the same tempo (132 beats per minute) and a swing rhythm.

 

The methods introduced in this paper, Levitin says, "can be used to study all sorts of things: how athletes organize their thoughts about body movements; how writers organize their thoughts about characters; how painters think about color, form and space."
 

 

Read the original article here

Comments (0)
The Myth of the Second Head Trauma Cure
By Jason von Stietz, M.A.

 

According to cartoons and other popular media, the only cure for head-trauma-induced amnesia is a second head trauma. When a cartoon character such as Fred Flintstone loses his memory following a lump on the head, it is only a second lump that brings it back. This idea seems silly to modern scientists and clinicians. So where did this myth come from? The history of this myth was discussed in a recent article in Drexel Now: 

 

While that worked in “The Flintstones” world and many other fictional realms, the medical community knows that like doesn’t cure like when it comes to head trauma.

 

However, a shockingly high proportion of the general public endorses the Flintstones solution, with 38–46 percent believing that a second blow to the head could cure amnesia, according to Drexel’s Mary Spiers. And, believe it or not, that belief was spurred by members of the medical community dating as far back as the early 19th century.

 

Spiers, PhD, associate professor in the College of Arts and Sciences’ Department of Psychology, traced the origins of the double-trauma amnesia cure belief in a paper for Neurology titled, “The Head Trauma Amnesia Cure: The Making of a Medical Myth.”

 

For a long time, scientists worked to figure out why the brain had two hemispheres.

 

“Studying the brain in the past was very challenging for several reasons,” Spiers explained. “There was no way to look into the living brain, as powerful functional imaging now allows us to do. Also, many people, including physicians, philosophers and those in the arts, speculated about the function of the brain, the soul and consciousness, so there were many competing ideas.”

 

At one point, scientists landed on the idea that it was a double organ, like a person’s eyes or ears, two pieces that were redundant — doing the same work.

 

Around the turn of the 19th century, a French scientist named Francois Xavier Bichat decided that the two hemispheres acted in synchrony. One side mirrored the other, and vice versa.

 

As such, he reasoned that an injury to one side of the head would throw off the other, “untouched” side.

 

“[Bichat] seriously proposed the notion that a second blow could restore the wits of someone who had a previous concussion,” Spiers wrote in her paper. “Bichat justified this idea by reasoning that hemispheres that are in balance with each other functioned better, while those out of balance cause perceptual and intellectual confusion.”

 

Bichat never cited any specific cases to back up his theory and, ironically enough, he died of a probable head injury in 1802.

 

“From my reading of Bichat’s work, it seems that he felt that the second trauma amnesia cure was a common occurrence and didn’t need the citation of an individual case,” Spiers said. “This was not unusual at the time, to forgo evidence like that.”

 

Despite the lack of evidence for his claims, Bichat’s ideas continued on after his death and became known as Bichat’s Law of Symmetry. Books in the following decades cited brain asymmetry as the root of different mental health issues.

 

Compounding the symmetry idea was the then-dominant belief that human memories could never be lost. Samuel Taylor Coleridge — a philosopher, not a physician — was credited with popularizing that idea.

 

It wasn’t until the mid-1800s that scientists began to realize that taking a hit to the head might destroy memories completely. A second blow, they realized, wasn’t likely to jump-start the brain but to cause further damage.

 

By this time, however, enough anecdotes about curing amnesia with a second head trauma were floating around from otherwise respectable scientists that the theory invaded the general public’s consciousness. With “no hard and fast lines between scientific and popular writing,” myths like the second trauma amnesia cure circulated out of control, according to Spiers.

 

Even as modern scientists began to fully understand the brain, the theory still stuck with a large portion of the public, resulting in the lumps we continue to see on cartoon characters’ heads.

 

“One of the issues we see in the persistence of this myth is that understanding how the brain forgets, recovers and/or loses information is a complicated matter that is still being studied by brain scientists,” Spiers said. “As individuals, we may have had the experience of a ‘memory jog’ or cue that reminds us of a long-forgotten memory. Because our own experiences serve as powerful evidence to us, this reinforces the myth that all memories are forever stored in the brain and only need some sort of jolt to come back.”

 

But, obviously, that jolt isn’t exactly advisable.

 

“In the case of a traumatic brain injury, learning and memory may be temporarily or permanently impaired due to swelling and injured or destroyed neurons,” Spiers concluded. “Some memories may return as the brain recovers, but a second brain injury is never a good treatment for a first brain injury.”

 

Read the original here

 

Comments (0)
Study Identifies Biomarkers for Alzheimer's Disease
By Jason von Stietz, M.A.
August 15, 2016
Photo Credit: Getty Images

 

Researchers at the University of Wisconsin-Madison have identified biomarkers helpful in predicting the development of Alzheimer’s disease. The researchers analyzed data from 175 older adults, including brain scans, measures of cognitive ability, and genotyping. The study was discussed in a recent article in NeuroscientistNews: 

 

This approach – which statistically analyzes a panel of biomarkers – could help identify people most likely to benefit from drugs or other interventions to slow the progress of the disease. The study was published in the August edition of the journal Brain.

 

"The Alzheimer's Association estimates that if we had a prevention that merely pushed back the clinical symptoms of the disease by five years, it would almost cut in half the number of people projected to develop dementia,'' says Annie Racine, a doctoral candidate and the study's lead author. "That translates into millions of lives and billions of dollars saved."

 

Dr. Sterling Johnson, the study's senior author, says that while brain changes – such as the buildup of beta-amyloid plaques and tangles of another substance called tau – are markers of the disease, not everyone with these brain changes develops Alzheimer's symptoms.

 

"Until now, we haven't had a great way to use the biomarkers to predict who was going to develop clinical symptoms of the disease,'' Johnson says. "Although the new algorithm isn't perfect, now we can say with greater certainty who is at increased risk and more likely to decline."

 

The research team recruited 175 older adults at risk for Alzheimer's disease, and used statistical algorithms to categorize them into four clusters based on different patterns and profiles of pathology in their brains. Then, the researchers analyzed cognitive data from the participants to investigate whether these cluster groups differed on their cognitive abilities measured over as many as 10 years.
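
As a rough illustration of that approach, the sketch below standardizes a hypothetical five-measure biomarker panel and sorts simulated participants into four clusters. The specific biomarkers, the choice of k-means, and all of the values are assumptions for illustration; the paper's exact algorithm and variables may differ.

```python
# Illustrative clustering of a synthetic biomarker panel (not the study's pipeline).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Hypothetical panel, one row per participant: e.g., CSF amyloid-beta, CSF tau,
# hippocampal volume, white matter lesion load, global atrophy.
n_participants = 175
panel = rng.normal(size=(n_participants, 5))

# Standardize each biomarker, then assign each participant to one of four clusters.
X = StandardScaler().fit_transform(panel)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Cluster sizes; downstream, cognitive trajectories would be compared across labels.
print(np.bincount(labels))
```

With real data, the follow-up step would be to test whether cognitive scores collected over the ten years of testing decline at different rates across the four cluster labels.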

 

As it turns out, the biomarker panels were predictive of cognitive decline over time. One cluster in particular stood out. The group that had a biomarker profile consistent with Alzheimer's – abnormal levels of tau and beta-amyloid in their cerebrospinal fluid – showed the steepest decline on cognitive tests of memory and language over the 10 years of testing. About two-thirds of the 22 people sorted into this group were also positive for the APOE4 gene—the greatest known risk factor for sporadic Alzheimer's disease—compared with about one-third in the other clusters.

 

At the other end of the spectrum, the largest group, 76 people, were sorted into a cluster that appears to be made up of healthy aging individuals. They showed normal levels on the five biomarkers and did not decline cognitively over time.

 

In between, there were two clusters that weren't classified as Alzheimer's but that didn't seem to be aging normally either. A group of 32 people showed signs of mixed Alzheimer's and vascular dementia. They had some of the amyloid changes in their cerebrospinal fluid, but also showed lesions in their brains' white matter, indicating scarring from small ischemic events that can be thought of as minor, silent strokes.

 

The other cluster of 45 people had signs of brain atrophy, with brain imaging showing that the hippocampus, the brain's memory center, was significantly smaller than in the other groups. The authors speculate this group could have intrinsic brain vulnerability or could be affected by some other process that differentiates them from healthy aging. Both of the in-between clusters showed non-specific decline on a test of global cognitive functioning, which further differentiates them from the healthy aging cluster.

 

The study participants came from a group of more than 1,800 people enrolled in two studies – the Wisconsin Alzheimer's Disease Research Center (WADRC) study and the Wisconsin Registry for Alzheimer's Prevention (WRAP). Both groups are enriched for people at risk for getting Alzheimer's because about ¾ of participants have a parent with the disease.

 

"This study shows that just having a family history doesn't mean you are going to get this disease," Johnson says. "Some of the people in our studies are on a trajectory for Alzheimer's, but more of them are aging normally, and some are on track to have a different type of brain disease." A comprehensive panel of biomarkers – such as the one evaluated in this study – could help to predict those variable paths, paving the way for early interventions to stop or slow the disease.

 

The authors of the study are affiliated with the WADRC, the Wisconsin Alzheimer's Institute, the Institute on Aging, the Waisman Center, and the Neuroscience and Public Policy program, all at UW-Madison; and the Geriatrics Research Education and Clinical Center at the William S. Middleton Veterans Hospital.

 

Read the original here

Comments (0)
New Imaging Tool Measures Synaptic Density
By Jason von Stietz, M.A.
July 29, 2016
Photo Credit: Finnema et al. (2016). Science Translational Medicine 

 

Previously, researchers have only been able to study the synaptic changes caused by brain disorders through autopsy. Recently, however, researchers from Yale developed a new approach to brain scanning that allows for the measurement of synaptic density in the living brain. The study was discussed in a recent article in Medical Xpress: 

 

The study was published July 20 in Science Translational Medicine.

 

Certain changes in synapses—the junctions between nerve cells in the brain—have been linked with brain disorders. But researchers have only been able to evaluate synaptic changes during autopsies. For their study, the research team set out to develop a method for measuring the number of synapses, or synaptic density, in the living brain.

 

To quantify synapses throughout the brain, professor of radiology and biomedical imaging Richard Carson and his coauthors combined PET scanning technology with biochemistry. They developed a radioactive tracer that, when injected into the body, binds with a key protein that is present in all synapses across the brain. They observed the tracer through PET imaging and then applied mathematical tools to quantify synaptic density.
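
The study's quantification relied on formal tracer kinetic modeling, which is more involved than a short example can show, but the toy calculation below conveys the general idea of turning PET measurements into a regional binding estimate. The region names, time points, values, and the use of a simple late-scan uptake ratio are illustrative assumptions only.

```python
# Toy regional quantification of PET tracer uptake (a simplification, not the study's method).
import numpy as np

# Hypothetical time-activity curves (arbitrary units) sampled across a scan.
frame_times_min = np.array([5, 15, 30, 45, 60, 75, 90])
target_region = np.array([12.0, 10.5, 9.0, 8.2, 7.6, 7.1, 6.8])   # synapse-rich region
reference_region = np.array([9.0, 6.5, 4.8, 4.0, 3.5, 3.2, 3.0])  # low-binding reference

# Late-scan uptake ratio (target / reference) as a crude proxy for binding density.
late = frame_times_min >= 60
ratio = target_region[late].mean() / reference_region[late].mean()
print(f"Late-scan uptake ratio: {ratio:.2f}")
```

Higher ratios would indicate more tracer retained where the synaptic protein is abundant, which is the intuition behind using the tracer signal as a marker of synaptic density.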

 

The researchers used the imaging technique in both baboons and humans. They confirmed that the new method did serve as a marker for synaptic density. It also revealed synaptic loss in three patients with epilepsy compared to healthy individuals.

 

"This is the first time we have synaptic density measurement in live human beings," said Carson, who is senior author on the study. "Up to now any measurement of synaptic density was postmortem."

 

The finding has several potential applications. With this noninvasive method, researchers may be able to follow the progression of many brain disorders, including epilepsy and Alzheimer's disease, by measuring changes in synaptic density over time. Another application may be in assessing how pharmaceuticals slow the loss of neurons. "This opens the door to follow the natural evolution of synaptic density with normal aging and follow how drugs can alter synapses or synapse formation."

 

Carson and his colleagues plan future studies involving PET imaging of synapses to research epilepsy and other brain disorders, including Alzheimer's disease, schizophrenia, depression, and Parkinson's disease. "There are many diseases where neuro-degeneration comes into play," he noted.

 

Read the original article here

Comments (0)
Impact of Baby's Cries on Cognition
By Jason von Stietz, M.A.
July 20, 2016
Photo Credit: Getty Images

 

People often refer to “parental instincts” as an innate drive to care for one’s offspring. However, we know little about the role cognition plays in this instinct. Researchers at the University of Toronto examined the impact of the sound of a baby’s cries on performance during a cognitive task. The study was discussed in a recent article in Neuroscience Stuff:

 

“Parental instinct appears to be hardwired, yet no one talks about how this instinct might include cognition,” says David Haley, co-author and Associate Professor of psychology at U of T Scarborough.

 

“If we simply had an automatic response every time a baby started crying, how would we think about competing concerns in the environment or how best to respond to a baby’s distress?”

 

The study looked at the effect infant vocalizations—in this case audio clips of a baby laughing or crying—had on adults completing a cognitive conflict task. The researchers used the Stroop task, in which participants were asked to rapidly identify the color of a printed word while ignoring the meaning of the word itself. Brain activity was measured using electroencephalography (EEG) during each trial of the cognitive task, which took place immediately after a two-second audio clip of an infant vocalization.
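
The sketch below lays out the trial structure described in that paragraph: an infant sound followed by a Stroop item whose ink color may or may not conflict with the printed word. It is an illustration of the paradigm, not the authors' experiment code, and the specific colors and trial proportions are assumptions.

```python
# Illustrative trial generator for a cry/laugh-primed Stroop task (assumed details).
import random

random.seed(1)
COLORS = ["red", "green", "blue", "yellow"]

def make_trial():
    sound = random.choice(["cry", "laugh"])  # two-second infant audio clip played first
    word = random.choice(COLORS)             # the printed color word
    congruent = random.random() < 0.5
    ink = word if congruent else random.choice([c for c in COLORS if c != word])
    return {"sound": sound, "word": word, "ink": ink,
            "condition": "congruent" if congruent else "incongruent"}

for trial in (make_trial() for _ in range(6)):
    print(f"{trial['sound']:>5} clip -> name the ink color of '{trial['word']}' "
          f"printed in {trial['ink']} ({trial['condition']})")
```

In the experiment itself, EEG was recorded during each Stroop trial, and the comparison of interest was how responses differed when the preceding clip was a cry versus a laugh.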

 

The brain data revealed that the infant cries reduced attention to the task and triggered greater cognitive conflict processing than the infant laughs. Cognitive conflict processing is important because it controls attention—one of the most basic executive functions needed to complete a task or make a decision, notes Haley, who runs U of T’s Parent-Infant Research Lab.

 

“Parents are constantly making a variety of everyday decisions and have competing demands on their attention,” says Joanna Dudek, a graduate student in Haley’s Parent-Infant Research Lab and the lead author of the study.

 

“They may be in the middle of doing chores when the doorbell rings and their child starts to cry. How do they stay calm, cool and collected, and how do they know when to drop what they’re doing and pick up the child?”

 

A baby’s cry has been shown to cause aversion in adults, but it could also create an adaptive response by “switching on” the cognitive control parents use in effectively responding to their child’s emotional needs while also addressing other demands in everyday life, adds Haley.

 

“If an infant’s cry activates cognitive conflict in the brain, it could also be teaching parents how to focus their attention more selectively,” he says.

 

“It’s this cognitive flexibility that allows parents to rapidly switch between responding to their baby’s distress and other competing demands in their lives—which, paradoxically, may mean ignoring the infant momentarily.”

 

The findings add to a growing body of research suggesting that infants occupy a privileged status in our neurobiological programming, one deeply rooted in our evolutionary past. But, as Haley notes, it also reveals an important adaptive cognitive function in the human brain.

 

Read the original article here

Comments (0)
Hypoconnectivity Found in Brains of Those with Intermittent Explosive Disorder
By Jason von Stietz, M.A.
July 15, 2016
Photo Credit: Lee et al., Neuropsychopharmacology

 

Why do those with anger issues tend to misunderstand social situations? Researchers at the University of Chicago compared the brains of those suffering from intermittent explosive disorder (IED) to the brains of healthy controls using diffusion tensor imaging. Findings indicated that the brains of people with IED showed reduced integrity of the white matter connecting the frontal cortex to the parietal lobes. The study was discussed in a recent article in Medical Xpress: 

 

In a new study published in the journal Neuropsychopharmacology, neuroscientists from the University of Chicago show that white matter in a region of the brain called the superior longitudinal fasciculus (SLF) has less integrity and density in people with IED than in healthy individuals and those with other psychiatric disorders. The SLF connects the brain's frontal lobe—responsible for decision-making, emotion and understanding consequences of actions—with the parietal lobe, which processes language and sensory input.

 

"It's like an information superhighway connecting the frontal cortex to the parietal lobes," said Royce Lee, MD, associate professor of psychiatry and behavioral neuroscience at the University of Chicago and lead author of the study. "We think that points to social cognition as an important area to think about for people with anger problems."

 

Lee and his colleagues, including senior author Emil Coccaro, MD, Ellen C. Manning Professor and Chair of Psychiatry and Behavioral Neuroscience at UChicago, used diffusion tensor imaging, a form of magnetic resonance imaging (MRI) that measures the volume and density of white matter connective tissue in the brain. Connectivity is a critical issue because the brains of people with psychiatric disorders usually show very few physical differences from healthy individuals.
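
Diffusion tensor studies of white matter integrity commonly summarize each voxel with fractional anisotropy (FA), computed from the eigenvalues of the fitted diffusion tensor. The sketch below shows that standard calculation on made-up eigenvalues; the article does not spell out which index the authors report, so treat this as background illustration rather than the paper's exact measure.

```python
# Fractional anisotropy from diffusion tensor eigenvalues (illustrative values).
import numpy as np

def fractional_anisotropy(l1, l2, l3):
    """FA from the three eigenvalues of the diffusion tensor (0 = isotropic, 1 = fully anisotropic)."""
    num = np.sqrt((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2)
    den = np.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    return np.sqrt(0.5) * num / den

# Strongly directional diffusion, as in a coherent white matter tract...
print(round(fractional_anisotropy(1.7e-3, 0.3e-3, 0.3e-3), 2))  # ~0.80
# ...versus nearly isotropic diffusion, consistent with lower tract integrity.
print(round(fractional_anisotropy(0.9e-3, 0.8e-3, 0.7e-3), 2))  # ~0.12
```

Lower FA along a tract such as the superior longitudinal fasciculus is the kind of "less integrity" finding the quoted passage describes.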

 

"It's not so much how the brain is structured, but the way these regions are connected to each other," Lee said. "That might be where we're going to see a lot of the problems in psychiatric disorders, so white matter is a natural place to start since that's the brain's natural wiring from one region to another."

 

People with anger issues tend to misunderstand the intentions of other people in social situations. They think others are being hostile when they are not and make the wrong conclusions about their intentions. They also don't take in all the data from a social interaction, such as body language or certain words, and notice only those things that reinforce their belief that the other person is challenging them.

 

Decreased connectivity between regions of the brain that process a social situation could lead to the impaired judgment that escalates to an explosive outburst of anger. The discovery of connectivity deficits in a specific region of the brain like the SLF provides an important starting point for more research on people with IED, as well as those with borderline personality disorder, who share similar social and emotional problems and appear to have the same abnormality in the SLF.

 

"This is another example of tangible deficits in the brains of those with IED that indicate that impulsive aggressive behavior is not simply 'bad behavior' but behavior with a real biological basis that can be studied and treated," Coccaro said.

 

Read the original article here

Comments (0)
Strategy-Based Training Might Help Mild Cognitive Impairment
By Jason von Stietz, M.A.
June 26, 2016
Photo Credit: Neuroscience News

 

Making sense of everyday spoken and written language is an ongoing daily challenge for individuals with mild cognitive impairment (MCI), which is often a preclinical stage of Alzheimer’s disease. Researchers at the University of Texas at Dallas and the University of Illinois at Urbana-Champaign investigated the usefulness of strategy-based reasoning training in improving cognitive ability. The study was discussed in a recent article in Neuroscience News: 

 

“Changes in memory associated with MCI are often disconcerting, but cognitive challenges such as lapses in sound decision-making and judgment can have potentially worse consequences,” said Dr. Sandra Bond Chapman, founder and chief director at the Center for BrainHealth and Dee Wyly Distinguished University Professor in the School of Behavioral and Brain Sciences. “Interventions that mitigate cognitive deterioration without causing side effects may provide an additive, safe option for individuals who are worried about brain and memory changes.”

 

For the study, 50 adults ages 54-94 with amnestic MCI were randomly assigned to either a strategy-based, gist reasoning training group or a new-learning control group. Each group received two hour-long training sessions each week. The gist reasoning group received and practiced strategies on how to absorb and understand complex information, and the new-learning group used an educational approach to teach and discuss facts about how the brain works and what factors influence brain health.

 

Strategies in the gist reasoning training group focused on higher-level brain functions such as strategic attention — the ability to block out distractions and irrelevant details and focus on what is important; integrated reasoning — the ability to synthesize new information by extracting a memorable essence, pearl of wisdom, or take-home message; and innovation — the ability to appreciate diverse perspectives, derive multiple interpretations and generate new ideas to solve problems.

 

Pre- and post-training assessments measured changes in cognitive functions between the two groups. The gist reasoning group improved in executive function (i.e., strategic attention to recall more important items over less-important ones) and memory span (i.e., how many details a person can hold in their memory after one exposure, such as a phone number). The new learning group improved in detail memory (i.e., a person’s ability to remember details from contextual information). Those in the gist reasoning group also saw gains in concept abstraction, or an individual’s ability to process and abstract relationships to find similarities (e.g., how are a car and a train alike).

 

“Our findings support the potential benefit of gist reasoning training as a way to strengthen cognitive domains that have implications for everyday functioning in individuals with MCI,” said Dr. Raksha Mudar, study lead author and assistant professor at the University of Illinois at Urbana-Champaign. “We are excited about these preliminary findings, and we plan to study the long-term benefits and the brain changes associated with gist reasoning training in subsequent clinical trials.”

 

“Extracting sense from written and spoken language is a key daily life challenge for anyone with brain impairment, and this study shows that gist reasoning training significantly enhances this ability in a group of MCI patients,” said Dr. Ian Robertson, T. Boone Pickens Distinguished Scientist at the Center for BrainHealth and co-director of The Global Brain Health Initiative. “This is the first study of its kind and represents a very important development in the growing field of cognitive training for age-related cognitive and neurodegenerative disorders.”

 

“Findings from this study, in addition to our previous Alzheimer’s research, support the potential for cognitive training, and specifically gist reasoning training, to impact cognitive function for those with MCI,” said Audette Rackley, head of special programs at the Center for BrainHealth. “We hope studies like ours will aid in the development of multidimensional treatment options for an ever-growing number of people with concerns about memory in the absence of dementia.”

 

Read the original article here

Comments (0)
Regular Exercise Protects Cognition in Older Adults
By Jason von Stietz, M.A.
June 17, 2016
Photo Credit: Getty Images

 

As baby boomers continue to age, the need for methods of maintaining cognitive abilities in older adults continues to grow. Researchers at the University of Melbourne found that regular exercise was the strongest protective factor against memory loss. Consistent exercise in any form, even something as simple as walking, led to cognitive benefits. The study was discussed in a recent article in Medical Xpress: 

 

University of Melbourne researchers followed 387 Australian women from the Women's Healthy Ageing Project for two decades. The women were aged 45 to 55 when the study began in 1992.

 

The research team made note of their lifestyle factors, including exercise and diet, education, marital and employment status, number of children, mood, physical activity and smoking.

 

The women's hormone levels, cholesterol, height, weight, body mass index and blood pressure were recorded 11 times throughout the study. Hormone replacement therapy was factored in.

 

They were also asked to learn a list of 10 unrelated words and attempt to recall them half an hour later, known as an Episodic Verbal Memory test.

 

When memory loss was measured over 20 years, frequent physical activity, normal blood pressure and high "good" cholesterol were all strongly associated with better recall of the words.

 

Study author Associate Professor Cassandra Szoeke, who leads the Women's Healthy Ageing Project, said that once dementia occurs, it is irreversible, and that in their study more weekly exercise was associated with better memory.

 

"We now know that brain changes associated with dementia take 20 to 30 years to develop," Associate Professor Szoeke said.

 

"The evolution of cognitive decline is slow and steady, so we needed to study people over a long time period. We used a verbal memory test because that's one of the first things to decline when you develop Alzheimer's Disease."

 

Regular exercise of any type, from walking the dog to mountain climbing, emerged as the number one protective factor against memory loss. Associate Professor Szoeke said that the best effects came from cumulative exercise, that is, how much you do and how often over the course of your life.

 

"The message from our study is very simple. Do more physical activity, it doesn't matter what, just move more and more often. It helps your heart, your body and prevents obesity and diabetes and now we know it can help your brain.

 

It could even be something as simple as going for a walk, we weren't restrictive in our study about what type."

 

But the key, she said, was to start as soon as possible.

 

"We expected it was the healthy habits later in life that would make a difference but we were surprised to find that the effect of exercise was cumulative. So every one of those 20 years mattered.

 

"If you don't start at 40, you could miss one or two decades of improvement to your cognition because every bit helps. That said, even once you're 50 you can make up for lost time."

 

Read the original article here

Comments (0)
Depression in Low SES Adolescents Linked to Epigenetics
By Jason von Stietz, M.A.
June 7, 2016
Photo Credit: Getty Images

 

Research findings have long pointed to the link between poverty and depression in high-risk adolescents. Findings by researchers at Duke University identified one mechanism involving epigenetic modification of gene expression. These epigenetic changes were associated with more reactive amygdalae in adolescents, which in turn predicted higher rates of depression symptoms. The study was discussed in a recent article in Medical Xpress: 

 

The results are part of a growing body of work that may lead to biological predictors that could guide individualized depression-prevention strategies.

 

Adolescents growing up in households with lower socioeconomic status were shown to accumulate greater quantities of a chemical tag on a depression-linked gene over the course of two years. These "epigenetic" tags work by altering the activity of genes. The more chemical tags an individual had near a gene called SLC6A4, the more responsive was their amygdala—a brain area that coordinates the body's reactions to threat—to photographs of fearful faces as they underwent functional MRI brain scans. Participants with a more active amygdala were more likely to later report symptoms of depression.
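
A minimal sketch of the two-step association described in that paragraph, run on synthetic data, is shown below. The effect sizes, noise levels, and use of simple linear regression are assumptions made for illustration; this is not the study's analysis pipeline.

```python
# Two-step association on synthetic data: methylation -> amygdala reactivity -> later symptoms.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(7)
n = 132  # sample size reported for the TAOS adolescents

methylation_gain = rng.normal(size=n)                              # tags accrued over two years
amygdala = 0.5 * methylation_gain + rng.normal(scale=0.8, size=n)  # fMRI threat reactivity
depression = 0.4 * amygdala + rng.normal(scale=0.9, size=n)        # later symptom score

step1 = linregress(methylation_gain, amygdala)
step2 = linregress(amygdala, depression)
print(f"methylation -> amygdala reactivity: slope={step1.slope:.2f}, p={step1.pvalue:.3g}")
print(f"amygdala reactivity -> depression:  slope={step2.slope:.2f}, p={step2.pvalue:.3g}")
```

The study's actual analyses additionally involve socioeconomic status, family history, and change over two years of follow-up, so this sketch captures only the logical chain, not the published statistics.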

 

"This is some of the first research to demonstrating that low socioeconomic status can lead to changes in the way genes are expressed, and it maps this out through brain development to the future experience of depression symptoms," said the study's first author Johnna Swartz, a Duke postdoctoral researcher in the lab of Ahmad Hariri, a Duke professor of psychology and neuroscience.

 

Adolescence is rarely an easy time for anyone. But growing up in a family with low socioeconomic status or SES—a metric that incorporates parents' income and education levels—can add chronic stressors such as family discord and chaos, and environmental risks such as poor nutrition and smoking.

 

"These small daily hassles of scraping by are evident in changes that build up and affect children's development," Swartz said.

 

The study included 132 non-Hispanic Caucasian adolescents in the Teen Alcohol Outcomes Study (TAOS) who were between 11 and 15 years old at the outset of the study and came from households that ranged from low to high SES. About half of the participants had a family history of depression.

 

"The biggest risk factor we have currently for depression is a family history of the disorder," said study co-author Douglas Williamson, principal investigator of TAOS and professor of psychiatry and behavioral sciences at Duke. "Our new work reveals one of the mechanisms by which such familial risk may be manifested or expressed in a particular group of vulnerable individuals during adolescence."

 

The group's previous work, published last year in the journal Neuron, had shown that fMRI scan activity of the amygdala could signal who is more likely to experience depression and anxiety in response to stress several years later. That study included healthy college-aged participants of Hariri's ongoing Duke Neurogenetics Study (DNS), which aims to link genes, brain activity, and other biological markers to a risk for mental illness.

 

This study asked whether higher activity in the same brain area could predict depression in the younger, at-risk TAOS participants. Indeed, about one year later, these individuals (now between 14 and 19 years of age) were more likely to report symptoms of depression, especially if they had a family history of the disorder.

 

Swartz said the new study examined a range of socioeconomic status and did not focus specifically on families affected by extreme poverty or neglect. She said the findings suggest that even modestly lower socioeconomic status is associated with biological differences that elevate adolescents' risk for depression.

 

Most of the team's work so far has focused on epigenetic chemical tags near the SLC6A4 gene because it helps control the brain's levels of serotonin, a neurochemical involved in clinical depression and other mood disorders. The more marks present just upstream of this gene, the less likely it is to be active.

 

In 2014, Williamson and Hariri first showed that the presence of marks near the SLC6A4 gene can predict the way a person's amygdala responds to threat. That study included both Williamson's TAOS and Hariri's DNS participants, but had looked at the chemical tags at a single point in time.

 

Looking at the changes in these markers over an extended time is a more powerful way to understand an individual's risk for depression, said Hariri, who is also a member of the Duke Institute for Brain Sciences.

 

The team is now searching the genome for new markers that would predict depression. Ultimately, a panel of markers used in combination will lead to more accurate predictions, Swartz said.

 

They also hope to expand the age ranges of the study to include younger individuals and to continue following the TAOS participants into young adulthood.

 

"As they enter into young adulthood they are going to be experiencing more problems with depression or anxiety—or maybe substance abuse," Hariri said. "The extent to which our measures of their genomes and brains earlier in their lives continue to predict their relative health is something that's very important to know and very exciting for us to study."

 

Read the original article here

Comments (0)
Sleep Patterns Uncovered Using Smartphone Application
By Jason von Stietz, M.A.
May 31, 2016
Photo Credit: Getty Images

 

What is the most powerful influence on our sleep patterns? Circadian rhythms? The needs and demands of society? Why is jet lag so difficult to overcome? Researchers from the University of Michigan studied sleep patterns by employing a free app. The findings were discussed in a recent article in Medical Xpress: 

 

A pioneering study of worldwide sleep patterns combines math modeling, mobile apps and big data to parse the roles society and biology each play in setting sleep schedules.

 

The study, led by University of Michigan mathematicians, used a free smartphone app that reduces jetlag to gather robust sleep data from thousands of people in 100 nations. The researchers examined how age, gender, amount of light and home country affect the amount of shut-eye people around the globe get, when they go to bed, and when they wake up.

 

Among their findings is that cultural pressures can override natural circadian rhythms, with the effects showing up most markedly at bedtime. While morning responsibilities like work, kids and school play a role in wake-time, the researchers say they're not the only factor. Population-level trends agree with what they would expect from current knowledge of the circadian clock.

 

"Across the board, it appears that society governs bedtime and one's internal clock governs wake time, and a later bedtime is linked to a loss of sleep," said Daniel Forger, who holds faculty positions in mathematics at the U-M College of Literature, Science, and the Arts, and in the U-M Medical School's Department of Computational Medicine and Bioinformatics. "At the same time, we found a strong wake-time effect from users' biological clocks—not just their alarm clocks. These findings help to quantify the tug-of-war between solar and social timekeeping."

 

When Forger talks about internal or biological clocks, he's referring to circadian rhythms—fluctuations in bodily functions and behaviors that are tied to the planet's 24-hour day. These rhythms are set by a grain-of-rice-sized cluster of 20,000 neurons behind the eyes. They're regulated by the amount of light, particularly sunlight, our eyes take in.

 

Circadian rhythms have long been thought to be the primary driver of sleep schedules, even since the advent of artificial light and 9-to-5 work schedules. The new research helps to quantify the role that society plays.

 

Here's how Forger and colleague Olivia Walch arrived at their findings. Several years ago, they released an app called Entrain that helps travelers adjust to new time zones. It recommends custom schedules of light and darkness. To use the app, you have to plug in your typical hours of sleep and light exposure, and are given the option of submitting your information anonymously to U-M.

 

The quality of the app's recommendations depended on the accuracy of the users' information, and the researchers say this motivated users to be particularly careful in reporting their lighting history and sleep habits.

 

With information from thousands of people in hand, they then analyzed it for patterns. Any correlations that bubbled up, they put to the test in what amounts to a circadian rhythm simulator. The simulator—a mathematical model—is based on the field's deep knowledge of how light affects the brain's suprachiasmatic nucleus (that's the cluster of neurons behind the eyes that regulates our internal clocks). With the model, the researchers could dial the sun up and down at will to see if the correlations still held in extreme conditions.
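
As a rough illustration of what such a simulator does, the toy model below advances a single internal clock phase with an intrinsic period slightly longer than 24 hours and nudges it toward an external light schedule. The coupling form and all constants are invented for illustration and are far simpler than the published model of the suprachiasmatic nucleus.

```python
# Toy light-entrained circadian phase oscillator (illustrative, not the authors' model).
import math

TAU = 24.2   # assumed intrinsic period of the internal clock, hours
K = 0.1      # assumed strength of light-driven entrainment
DT = 0.1     # Euler integration step, hours

def light(t_hours):
    """Simple light schedule: lights on between 07:00 and 23:00."""
    hour = t_hours % 24.0
    return 1.0 if 7.0 <= hour < 23.0 else 0.0

phase = 0.0  # internal clock phase, radians
n_steps = int(round(10 * 24 / DT))  # simulate ten days
for step in range(n_steps):
    t = step * DT
    sun_phase = 2 * math.pi * (t % 24.0) / 24.0
    dphase = 2 * math.pi / TAU + K * light(t) * math.sin(sun_phase - phase)
    phase = (phase + dphase * DT) % (2 * math.pi)

# After entrainment, the internal phase tracks the 24-hour day even though TAU != 24.
print(f"Internal clock position at the end of day 10: {phase / (2 * math.pi) * 24:.1f} h into its cycle")
```

Dialing the light function up or down in a model like this is the in-silico analogue of the researchers' test of whether the observed correlations still held under extreme lighting conditions.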

 

"In the real world, bedtime doesn't behave how it does in our model universe," Walch said. "What the model is missing is how society affects that."

 

The spread of national averages of sleep duration ranged from a minimum of around 7 hours, 24 minutes of sleep for residents of Singapore and Japan to a maximum of 8 hours, 12 minutes for those in the Netherlands. That's not a huge window, but the researchers say every half hour of sleep makes a big difference in terms of cognitive function and long-term health.

 

The findings, the researchers say, point to an important lever for the sleep-deprived—a set that the Centers for Disease Control and Prevention is concerned about. A recent CDC study found that across the U.S., one in three adults aren't getting the recommended minimum of seven hours. Sleep deprivation, the CDC says, increases the risk of obesity, diabetes, high blood pressure, heart disease, stroke and stress.

 

The U-M researchers also found that:

 

  • Middle-aged men get the least sleep, often getting less than the recommended 7 to 8 hours.
  • Women schedule more sleep than men, about 30 minutes more on average. They go to bed a bit earlier and wake up later. This is most pronounced in ages between 30 and 60.
  • People who spend some time in the sunlight each day tend to go to bed earlier and get more sleep than those who spend most of their time in indoor light.
  • Habits converge as we age. Sleep schedules were more similar among the older-than-55 set than those younger than 30, which could be related to a narrowing window in which older individuals can fall and stay asleep.

 

Sleep is more important than a lot of people realize, the researchers say. Even if you get six hours a night, you're still building up a sleep debt, says Walch, a doctoral student in the mathematics department and a co-author on the paper.

 

"It doesn't take that many days of not getting enough sleep before you're functionally drunk," she said. "Researchers have figured out that being overly tired can have that effect. And what's terrifying at the same time is that people think they're performing tasks way better than they are. Your performance drops off but your perception of your performance doesn't."

 

Aside from the findings themselves, the researchers say the work demonstrates that mobile technology can be a reliable way to gather massive data sets at very low cost.

 

"This is a cool triumph of citizen science," Forger said.

 

Read the original article here

Comments (0)