Duke Research Blog

Following the people and events that make up the research community at Duke.

Author: Sarah Haurin

How the Flu Vaccine Fails

Influenza is ubiquitous. Every fall, we line up to get our flu shots with the hope that we will be protected from the virus that infects 10 to 20 percent of people worldwide each year. But some years, the vaccine is less effective than others.

Every year, CDC scientists engineer a new flu vaccine. By examining phylogenetic relationships, which are based on shared common ancestry and relatedness, researchers identify virus strains to target with a vaccine for the following flu season.

Sometimes, they do a good job predicting which strains will flourish in the upcoming flu season; other times, they pick wrong.

Pekosz’s work has identified why certain flu seasons saw less effective vaccines.

Andrew Pekosz, PhD, is a researcher at Johns Hopkins who examines why we fail to predict strains to target with vaccines. In particular, he examines years when the vaccine was ineffective and the viruses that were most prevalent to identify properties of these strains.

A virus consists of RNA enclosed in a membrane. Vaccines work by targeting membrane proteins that facilitate movement of the viral genome into the host cells it infects. For the flu virus, this protein is hemagglutinin (HA). An additional membrane protein, neuraminidase (NA), allows the virus to release itself from an infected cell and prevents it from returning to cells it has already infected.

The flu vaccine targets proteins on the membrane of the RNA virus. Image courtesy of scienceanimations.com.

Studying the viruses that flourished in the 2014-2015 and 2016-2017 flu seasons, Pekosz and his team have identified mutations to these surface proteins that allowed certain strains to evade the vaccine.

In the 2014-2015 season, a mutation in the HA receptor conferred an advantage to the virus, but only in the presence of the antibodies elicited by the vaccine. In the absence of these antibodies, the mutation was actually detrimental to the virus’s fitness. The strain was present in low numbers at the beginning of the flu season, but the selective pressure of the vaccine pushed it to become the dominant strain by the end.

The 2016-2017 flu season saw a similar pattern of mutation, but in the NA protein. In the mutated viral strain, the epitope, the part of the virus membrane where the antibody binds, was masked. Since the antibodies produced in response to the vaccine could not effectively identify the virus, the vaccine was ineffective against these mutated strains.

With the speed at which the flu virus evolves, and the fact that numerous strains can be active in any given flu season, engineering an effective vaccine is daunting. Pekosz’s findings on how these vaccines have previously failed will likely prove invaluable in combating such a persistent and common public health concern.

Post by undergraduate blogger Sarah Haurin


The Costs of Mental Effort

Every day, we are faced with countless decisions regarding cognitive control, or the process of inhibiting automatic or habitual responses in order to perform better at a task.

Amitai Shenhav, PhD, of Brown University, and his lab are working on understanding the factors that influence this decision-making process. Having a higher level of cognitive control is what allows us to complete hard tasks like a math problem or a dense reading, so we might expect that the optimal strategy is to exert a high level of control at all times.

Shenhav’s lab explores motivation and decision making related to cognitive control.

Experimental performance shows this is not the case: people tend to choose easier over harder tasks, require more money to complete harder tasks, and exert more mental effort as the reward value increases. These behaviors all suggest that the subjects’ default state is not the highest possible level of control.

Shenhav’s research has centered on why we see variation in the level of control. Because cognitive control is a costly process, there must be a limit to how much we can exert. These costs can be understood as tradeoffs between the level of control and other brain functions, as well as the negative affective consequences of difficult tasks, like stress.

To understand how people make decisions about cognitive control in real time, Shenhav has developed an algorithm called the Expected Value of Control (EVC) model, which focuses on how individuals weigh the costs and benefits of increasing control.
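The logic of the EVC model can be sketched in a few lines of code. This is a toy illustration, not Shenhav’s published implementation: the success-probability and cost functions below are invented for demonstration, and the real model is considerably richer.

```python
# Toy sketch of the Expected Value of Control (EVC) idea.
# The payoff, efficacy, and cost functions are illustrative assumptions.

def evc(control, reward=10.0):
    """EVC = P(success | control) * reward - cost(control)."""
    p_success = control / (control + 1.0)  # more control -> higher success odds, saturating
    cost = control ** 2                    # effort costs grow steeply with control
    return p_success * reward - cost

# The agent chooses the control level that maximizes EVC.
levels = [i / 10 for i in range(0, 31)]    # candidate control intensities 0.0 .. 3.0
best = max(levels, key=evc)
print(f"optimal control level: {best:.1f}, EVC = {evc(best):.2f}")
```

Note that the winning choice is an intermediate control level: past a point, the steeply rising effort cost outpaces the expected reward, echoing the observation above that people do not default to maximal control.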

Employing this model has helped Shenhav and his colleagues identify situations in which people are likely to invest a lot of cognitive control. In one study, by varying whether the reward was paired only with a correct response or was given randomly, Shenhav simulated variability in the efficacy of control. They found that people learn fairly quickly whether increasing their effort will increase the likelihood of earning the reward, and they adjust their control accordingly: people invest more effort when they learn that reward tracks their performance than when rewards are distributed independent of performance.

Another study explored how we adjust our strategies following difficult tasks. Experiments with cognitive control often rely on paradigms like the Stroop task, where subjects are asked to identify a target cue (color) while being presented with a distractor (incongruency of the word with its text color). Shenhav found that when subjects face a difficult trial or make a mistake, they adjust by decreasing attention to the distractor.

The Stroop task is a classic experimental design for understanding cognitive control. Successful completion of Stroop task 3 requires overriding your reflex to read the word in cases where the text and its color are mismatched.

A final interesting finding from Shenhav’s work tells us that part of the value of hard work may be in the work itself: people value rewards following a task in a way that scales to the effort they put into the task.

Style Recommendations From Data Scientists

A combination of data science and psychology is behind the recommendations for products we get when shopping online.

At the intersection of social psychology, data science and fashion is Amy Winecoff.

Amy Winecoff uses her background in psychology and neuroscience to improve recommender systems for shopping.

After earning a Ph.D. in psychology and neuroscience here at Duke, Winecoff spent time teaching before moving over to industry.

Today, Winecoff works as a senior data scientist at True Fit, a company that provides tools to retailers to help them decide what products they suggest to their customers.

True Fit’s software relies on collecting data about how clothes fit people who have bought them. With this data on size and type of clothing, True Fit can make size recommendations for a specific consumer looking to buy a certain product.    

In addition to recommendations on size, True Fit is behind many sites’ recommendations of products similar to those you are browsing or have bought.

While these recommender systems have been shown to work well for sites like Netflix, where you may have watched many different movies and shows in the recent past that can be used to make recommendations, Winecoff points out that this can be difficult for something like pants, which people don’t tend to buy in bulk.

To overcome this barrier, True Fit has engineered its system, called the Discovery engine, to parse a single piece of clothing into fifty different traits. With this much information, making recommendations for similar styles can be easier.

However, Winecoff’s background in social psychology has led her to question how well these algorithms make predictions that are in line with human behavior. She argues that understanding how people form their preferences is an integral part of designing a system to make recommendations.

One way Winecoff is testing how true the predictions are to human preferences is by employing psychological studies to gain insight into how to fine-tune mathematically based recommendations.

With a general goal of determining how humans determine similarity in clothes, Winecoff designed an online study where subjects are presented with a piece of clothing and told the garment is out of stock. They are then presented with two options and must pick one to replace the out-of-stock item. By varying one aspect in each of the two choices, like different color, pattern, or skirt length, Winecoff and her colleagues can distinguish which traits are most salient to a person when determining similarity.

Winecoff’s work illustrates the power of combining algorithmic recommendations with insights from social psychology, and shows that science reaches into unexpected places, like influencing your shopping choices.

Post by undergraduate blogger Sarah Haurin

Bias in Brain Research

Despite apparent progress in achieving gender equality, sexism continues to be pervasive — and scientists aren’t immune.  

In a cyber talk delivered to the Duke Institute for Brain Sciences, professor Cordelia Fine of the University of Melbourne highlighted compelling evidence that neuroscientific research is yet another culprit of gender bias.

Fine says the persistent idea of gender essentialism contributes to this stagnation. Gender essentialism describes the idea that men and women are fundamentally different, specifically at a neurological level. This “men are from Mars, women are from Venus” attitude has spread from pop culture into experimental design and interpretation.

However, studies that look for sex differences in behavior tend to find more similarities than differences. One study looked at 106 meta-analyses of psychological differences between men and women. The researchers found that in areas as diverse as temperament, communication styles, and interests, gender had only a small effect, representing statistically small differences between the sexes.

Looking at fMRI data casts further doubt on how pronounced gender differences really are. A meta-analysis of fMRI studies investigating functional differences between men and women found a large reporting bias. Studies finding brain differences across genders were overrepresented compared to those finding similarities.

Of those small sex differences found in the central nervous system, Fine points out how difficult it is to determine their functional significance. One study found no difference between men and women in self-reported emotional experience, but found via fMRI that men exhibited more processing in the prefrontal cortex, or the executive center of the brain, than women. Although subjective experience of emotion was the same between men and women, the researchers reported that men are more cognitive, while women are more emotional.

Fine argues that conclusions like this are biased by gender essentialism. In a study she co-authored, Fine found that gender essentialism correlates with stronger belief in gender stereotypes, in the idea that gender roles are fixed, and in the idea that the current understanding of gender does not need to change.

When scientists allow preconceived notions about gender to bias their interpretation of results, our collective understanding suffers. The best way to overcome these biases is to ensure we are continuing to bring more and more diverse voices to the table, Fine said.

Fine spoke last month as part of the Society for Neuroscience Virtual Conference, “Mitigating Implicit Bias: Tools for the Neuroscientist.” The Duke Institute for Brain Sciences (@DukeBrain) made the conference available to the Duke community.  

Post by undergraduate blogger Sarah Haurin

Nature vs. Nurture and Addiction

Epigenetics involves modifications to DNA that do not change its sequence but only affect which genes are active, or expressed. Photo courtesy of whatisepigenetics.com

The progressive understanding of addiction as a disease rather than a choice has opened the door to better treatment and research, but there are aspects of addiction that make it uniquely difficult to treat.

One exceptional characteristic of addiction is its persistence even in the absence of drug use: during periods of abstinence, symptoms get worse over time, and response to the drug increases.

Researcher Elizabeth Heller, PhD, of the University of Pennsylvania Epigenetics Institute, is interested in understanding why we observe this persistence in symptoms even after drug use, the initial cause of the addiction, is stopped. Heller, who spoke at a Jan. 18 biochemistry seminar, believes the answer lies in epigenetic regulation.

Elizabeth Heller is interested in how changes in gene expression can explain the chronic nature of addiction.

Epigenetic regulation represents the nurture part of “nature vs. nurture.” Without changing the actual sequence of DNA, we have mechanisms in our body to control how and when cells express certain genes. These mechanisms are influenced by changes in our environment, and the process of influencing gene expression without altering the basic genetic code is called epigenetics.

Heller believes that we can understand the persistent nature of the symptoms of drugs of abuse even during abstinence by considering epigenetic changes caused by the drugs themselves.

To investigate the role of epigenetics in addiction, specifically cocaine addiction, Heller and her team have developed a series of tools to bind to DNA and influence expression of the molecules that play a role in epigenetic regulation, which are called transcription factors. They identified the FosB gene, which has been previously implicated as a regulator of drug addiction, as a site for these changes.

Increased expression of the FosB gene has been shown to increase sensitivity to cocaine, meaning individuals expressing this gene respond more than those not expressing it. Heller found that cocaine users show decreased levels of the protein responsible for inhibiting expression of FosB. This suggests cocaine use itself is depleting the protein that could help regulate and attenuate response to cocaine, making it more addictive.

Another gene, Nr4a1, is important in dopamine signaling, the reward pathway that is “hijacked” by drugs of abuse. This gene has been shown to attenuate reward response to cocaine in mice. Mice that underwent epigenetic changes to suppress Nr4a1 showed increased reward response to cocaine. A drug currently used in cancer treatment has been shown to suppress Nr4a1, and Heller has shown that it can consequently reduce cocaine reward behavior in mice.

The identification of genes like FosB and Nr4a1, along with evidence that changes in gene expression are even greater during periods of abstinence than during drug use, represents an exciting leap in our understanding of addiction, and ultimately in finding treatments best suited to such a unique and devastating disease.

Post by undergraduate blogger Sarah Haurin

Drug Homing Method Helps Rethink Parkinson’s

The brain is the body’s most complex organ, and consequently the least understood. In fact, researchers like Michael Tadross, MD, PhD, wonder if the current research methods employed by neuroscientists are telling us as much as we think.

Michael Tadross is using novel approaches to tease out the causes of neuropsychiatric diseases at a cellular level.

Current methods such as gene editing and pharmacology can reveal how certain genes and drugs affect the cells in a given area of the brain, but they’re limited in that they don’t account for differences among different cell types. With his research, Tadross has tried to target specific cell types to better understand mechanisms that cause neuropsychiatric disorders.

To do this, Tadross developed a method to ensure a drug injected into a region of the brain will only affect specific cell types. Tadross genetically engineered the cell type of interest so that a special receptor protein, called HaloTag, is expressed at the cell membrane. Additionally, the drug of interest is altered so that it is tethered to the molecule that binds the HaloTag receptor. By connecting the drug to the HaloTag ligand, and engineering only the cell type of interest to express the HaloTag receptor, Tadross effectively limited the cells affected by the drug to just one type. He calls this method “Drugs Acutely Restricted by Tethering,” or DART.

Tadross has been using the DART method to better understand the mechanisms underlying Parkinson’s disease. Parkinson’s is a neurological disease that affects a region of the brain called the striatum, causing tremors, slow movement, and rigid muscles, among other motor deficits.

Only cells expressing the HaloTag receptor can bind to the AMPA-repressing drug, ensuring virtually perfect cell-type specificity.

Patients with Parkinson’s show decreased levels of the neurotransmitter dopamine in the striatum. Consequently, treatments that involve restoring dopamine levels improve symptoms. For these reasons, Parkinson’s has long been regarded as a disease caused by a deficit in dopamine.

With his technique, Tadross is challenging this assumption. In addition to death of dopaminergic neurons, Parkinson’s is associated with an increase of the strength of synapses, or connections, between neurons that express AMPA receptors, which are the most common excitatory receptors in the brain.

In order to simulate the effects of Parkinson’s, Tadross and his team induced the death of dopaminergic neurons in the striatum of mice. As expected, the mice displayed significant motor impairments consistent with Parkinson’s. However, in addition to inducing the death of these neurons, Tadross engineered the AMPA-expressing cells to produce the HaloTag protein.

Tadross then treated the mouse striatum with a common AMPA receptor blocker tethered to the HaloTag ligand. Amazingly, blocking the activity of these AMPA-expressing neurons, even in the absence of the dopaminergic neurons, reversed the effects of Parkinson’s so that the previously affected mice moved normally.

Tadross’s findings with the Parkinson’s mice exemplify how little we know about cause and effect in the brain. The key to designing effective treatments for neuropsychiatric diseases, and possibly other diseases outside the nervous system, may be in teasing out the relationship of specific types of cells to symptoms and targeting the disease that way.

The ingenious work of researchers like Tadross will undoubtedly help bring us closer to understanding how the brain truly works.

Post by undergraduate blogger Sarah Haurin

 

Aging and Decision-Making

Who makes riskier decisions, the young or the old? And what matters more in our decisions as we age — friends, health or money? The answers might surprise you.

Kendra Seaman works at the Center for the Study of Aging and Human Development and is interested in decision-making across the lifespan.

Duke postdoctoral fellow Kendra Seaman, Ph.D., uses mathematical models and brain imaging to understand how decision-making changes as we age. In a talk to a group of cognitive neuroscientists at Duke, Seaman explained that we have good reason to be concerned with how older people make decisions.

Statistically, older people in the U.S. have more money and also more expenditures, particularly in healthcare. And by 2030, 20 percent of the U.S. population will be over the age of 65.

One key component to decision-making is subjective value, which is a measure of the importance a reward or outcome has to a specific person at a specific point in time. Seaman used a reward of $20 as an example: it would have a much higher subjective value for a broke college student than for a wealthy retiree. Seaman discussed three factors that influence subjective value: reward, cost, and discount rate, or the determination of the value of future rewards.
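The discount-rate component can be made concrete with hyperbolic discounting, a model commonly used in this literature. This is a generic sketch with invented parameter values, not necessarily the exact model Seaman uses.

```python
# Hyperbolic discounting: a common model of how future rewards lose subjective value.
# SV = amount / (1 + k * delay), where k is an individual's discount rate.
# The k values below are invented for illustration.

def subjective_value(amount, delay_days, k):
    return amount / (1.0 + k * delay_days)

# A steep discounter (high k) devalues a delayed $20 far more than a shallow one.
for k in (0.01, 0.1):
    sv = subjective_value(20.0, delay_days=30, k=k)
    print(f"k={k}: $20 in 30 days feels like ${sv:.2f} today")
```

With these numbers, the delayed $20 retains most of its value for the shallow discounter (about $15.38) but collapses to $5.00 for the steep one, which is what “discount rate” captures.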

Brain imaging research has found that subjective value is represented similarly in the medial prefrontal cortex (MPFC) across all ages. Despite this common network, Seaman and her colleagues have found significant differences in decision-making in older individuals.

The first difference comes in the form of reward. Older individuals are likely to be more invested in the outcome of a task if the reward is social or health-related rather than monetary. Consequently, they are more likely to want these health and social rewards sooner and with higher certainty than younger individuals are. Understanding the salience of these rewards is crucial to designing future experiments to identify decision-making differences in older adults.

A preference for positive skew becomes more pronounced with age.

Older individuals also differ in their preferences for something called “skewed risks.” In these tasks, positive skew means a high probability of a small loss and a low probability of a large gain, such as buying a lottery ticket. Negative skew means a low probability of a large loss and a high probability of a small gain, such as undergoing a common medical procedure that has a low chance of harmful complications.
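The distinction can be illustrated with two toy gambles that have the same expected value but opposite skew (the probabilities and payoffs below are invented for illustration):

```python
# Two gambles with equal expected value but opposite skew.
# Positive skew: high chance of a small loss, small chance of a big gain (lottery-like).
# Negative skew: high chance of a small gain, small chance of a big loss (like a
# routine procedure with a rare harmful complication).

def expected_value(outcomes):
    return sum(p * x for p, x in outcomes)

positive_skew = [(0.99, -1.0), (0.01, 99.0)]   # lose $1 almost always, rarely win $99
negative_skew = [(0.99, 1.0), (0.01, -99.0)]   # win $1 almost always, rarely lose $99

print(expected_value(positive_skew))
print(expected_value(negative_skew))
```

Both gambles are worth $0 on average, so any preference between them, like the age-related preference for positive skew, is driven by the shape of the risk rather than its expected payoff.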

Older people tend to prefer positive skew to a greater degree than younger people, and this bias toward positive skew becomes more pronounced with age.

Understanding these tendencies could be vital in explaining why older people fall victim to fraud or choose to undergo risky medical procedures, and could leave us better equipped to motivate an aging population to remain involved in physical and mental activities.

Post by undergraduate blogger Sarah Haurin

Combatting the Opioid Epidemic

The opioid epidemic needs to be combatted in and out of the clinic.

In the U.S., 115 people die from opioids every day, and the number of opioid overdoses increased fivefold from 1999 to 2016. While increased funding for resources like Narcan, the opioid overdose-reversing drug now carried by emergency responders in cities throughout the country, has helped, changes to standard healthcare practices are still sorely needed.

Ashwin A Patkar, MD, medical director of the Duke Addictions Program, spoke to the Duke Center on Addiction and Behavior Change about how opioid addiction is treated.

The weaknesses of the current treatment standards first appear in diagnosis. Heroin and cocaine are currently being contaminated by distributors with fentanyl, an opioid that is 25 to 50 times more potent than heroin and cheaper than either of these drugs. Despite fentanyl’s prevalence in these street drugs, the standard form and interview for addiction patients does not include asking about or testing for the substance.

Patkar has found that 30 percent of opioid addiction patients have fentanyl in their urine and do not disclose it to the doctor. Rather than resulting from the patients’ dishonesty, Patkar believes, in most cases, patients are taking fentanyl without knowing that the drugs they are taking are contaminated.

Because of its potency, fentanyl causes overdoses that may require more Narcan than a standard heroin overdose. Understanding the prevalence of fentanyl in patients is vital both for public health and for educating patients so they can be adequately prepared.

Patkar also pointed out that, despite a lot of research supporting medication-assisted therapy, only 21 percent of addiction treatment facilities in the U.S. offer this type of treatment. Instead, most facilities rely on detoxification, which has high rates of relapse (greater than 85 percent within a year after detox) and comes with its own drawbacks. Detox lowers the patient’s tolerance to the drug, but care providers often neglect to tell the patients this, resulting in a rate of overdose that is three times higher than before detox.

Another common treatment for opioid addiction involves methadone, a controlled substance that helps alleviate symptoms of opioid withdrawal. Because its retention rate is high and its cost of production is low, methadone offers a strong financial incentive. However, methadone is itself addictive, and overdose is possible.

Patkar points to a resource developed by Julie Bruneau as a reference for the Canadian standard of care for opioid use disorder. Rather than recommending detox or methadone as a first line of treatment, Bruneau and her team recommend buprenorphine, with naltrexone as a medication to support abstinence after treatment with buprenorphine.

Buprenorphine is a drug with a similar function to methadone, but with better and safer clinical outcomes. Buprenorphine does not create the same euphoric effect as methadone, and rates of overdose are six times lower than in those prescribed methadone.

In addition to prescribing the right medicine, clinicians need to encourage patients to stick with treatment longer. Despite buprenorphine having good outcomes, patients who stop taking it after only 4 to 12 weeks, even with tapering directed by a doctor, exhibit only an 18 percent rate of successful abstinence.

Patkar closed his talk by reminding the audience that opioid addiction is a brain disease. In order to see a real change in the number of people dying from opioids, we need to focus on treating addiction as a disease; no one would question extended medication-based treatment of diseases like diabetes or heart disease, and the same should be said about addiction. Healthcare providers have a responsibility to treat addiction based on available research and best practices, and patients with opioid addiction deserve a standard of care the same as anyone else.

Post by undergraduate blogger Sarah Haurin

Medicine, Research and HIV

Duke senior Jesse Mangold has had an interest in the intersection of medicine and research since high school. While he took electives in a program called “Science, Medicine, and Research,” it wasn’t until the summer after his first year at Duke that he got to participate in research.

As a member of the inaugural class of Huang fellows, Mangold worked in the lab of Duke assistant professor Christina Meade on the compounding effect of HIV and marijuana use on cognitive abilities like memory and learning.

The following summer, Mangold traveled to Honduras with a group of students to help collect data and meet the overwhelming need for eye care. Mangold and the other students traveled to schools, administered visual exams, and provided free glasses to the children who needed them. Additionally, the students contributed to a growing research project, and for their part, put together an award-winning poster.

Mangold’s (top right) work in Honduras helped provide countless children with the eye care they so sorely needed.

Returning to school as a junior, Mangold wanted to focus on his greatest research interest: the molecular mechanisms of human immunodeficiency virus (HIV). Mangold found a home in the Permar lab, which investigates mechanisms of mother-to-child transmission of viruses including HIV, Zika, and Cytomegalovirus (CMV).

From co-authoring a book chapter to learning laboratory techniques, he was given “the opportunity to fail, but that was important, because I would learn and come back the next week and fail a little bit less,” Mangold said.

In the absence of any treatment, mothers who are HIV-positive transmit the virus to their infants only 30 to 40 percent of the time, suggesting that a component of the maternal immune system provides at least partial protection against transmission.

The immune system functions through the activity of antibodies, or proteins that bind to specific receptors on a microbe and neutralize the threat they pose. The key to an effective HIV vaccine is identifying the most common receptors on the envelope of the virus and engineering a vaccine that can interact with any one of these receptors.

This human T cell (blue) is under attack by HIV (yellow), the virus that causes AIDS. Credit: Seth Pincus, Elizabeth Fischer and Austin Athman, National Institute of Allergy and Infectious Diseases, National Institutes of Health


Mangold is working with Duke postdoctoral associate Ashley Nelson, Ph.D., to understand the immune response conferred on the infants of HIV positive mothers. To do this, they are using a rhesus macaque model. In order to most closely resemble the disease path as it would progress in humans, they are using a virus called SHIV, which is engineered to have the internal structure of simian immunodeficiency virus (SIV) and the viral envelope of HIV; SHIV can thus serve to naturally infect the macaques but provide insight into antibody response that can be generalized to humans.

The study involves infecting 12 female monkeys with the virus, waiting 12 weeks for the infection to proceed, and then treating the monkeys with antiretroviral therapy (ART), currently the most effective treatment for HIV. Following the treatment, the level of virus in the blood, or viral load, drops to undetectable levels. After an additional 12 weeks of treatment and three doses of either a candidate HIV vaccine or a placebo, treatment will be stopped. This design is meant to mirror the gold standard of treatment for women who are HIV-positive and pregnant.

At this point, because the treatment and vaccine are imperfect, some virus will have survived and will “rebound,” or replicate fast and repopulate the blood. The key to this research is to sequence the virus at this stage, to identify the characteristics of the surviving virus that withstood the best available treatment. This surviving virus is also what is passed from mothers on antiretroviral therapy to their infants, so understanding its properties is vital for preventing mother-to-child transmission.

As a Huang fellow, Mangold had the opportunity to present his research on the compounding effect of HIV and marijuana on cognitive function.

Mangold’s role is to examine the difference in viral diversity before treatment commences and after rebound. This research will prove fundamental in engineering better and more effective treatments.

In addition to working with HIV, Mangold will be working on a project looking into a virus that doesn’t receive the same level of attention as HIV: Cytomegalovirus. CMV is the leading congenital cause of hearing loss, and mother-to-child transmission plays an important role in the transmission of this devastating virus.

Mangold and his mentor, pediatric resident Tiziana Coppola, M.D., are authoring a paper that reviews existing literature on CMV to look for a link between the prevalence of CMV in women of child-bearing age and the number of children who suffer CMV-related hearing loss. With this study, Mangold and Coppola hope to identify whether a component of the maternal immune system confers some immunity to the child, which could then be targeted for vaccine development.

After graduation, Mangold will continue his research in the Permar lab during a gap year while applying to MD/PhD programs. He hopes to continue studying at the intersection of medicine and research in the HIV vaccine field.

Post by undergraduate blogger Sarah Haurin

 

Quantifying Sleepiness and How It Relates to Depression

Sleep disturbance is a significant issue for many individuals with depressive illnesses. While most deal with an inability to sleep, or insomnia, about 20 to 30 percent of depressed patients report the opposite problem: hypersomnia, or excessive sleep duration.

David Plante’s work investigates the relationship between depressive disorders and hypersomnolence. Photo courtesy of sleepfoundation.org

Patients who experience hypersomnolence report excessive daytime sleepiness (EDS) and often seem to be sleep-deprived, making the condition difficult to identify and poorly researched.

David Plante’s research focuses on a neglected type of sleep disturbance: hypersomnolence.

David T. Plante, MD, of the University of Wisconsin School of Medicine and Public Health, studies the significance of hypersomnolence in depression. He said the condition is resistant to treatment, often persisting even after depression has been treated, and its role in increasing risk of depression in previously healthy individuals needs to be examined.

One problem in studying daytime sleepiness is quantifying it. Subjective measures include the Epworth sleepiness scale, a quick self-report of how likely you are to fall asleep in a variety of situations. Objective scales are often involved processes, such as the Multiple Sleep Latency Test (MSLT), which requires an individual to attempt to take 4-5 naps, each 2 hours apart, in a lab while EEG records brain activity.
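As a rough illustration of how the subjective measure works, the Epworth scale asks respondents to rate eight everyday situations from 0 (would never doze) to 3 (high chance of dozing) and sums the ratings. Here is a sketch; the example ratings and the cutoff used are illustrative, and clinical use should rely on the validated instrument.

```python
# Sketch of Epworth Sleepiness Scale scoring: eight situations, each rated
# 0 (would never doze) to 3 (high chance of dozing); totals range 0-24.
# Ratings and cutoff below are illustrative, not clinical guidance.

SITUATIONS = [
    "sitting and reading",
    "watching TV",
    "sitting inactive in a public place",
    "as a passenger in a car for an hour",
    "lying down to rest in the afternoon",
    "sitting and talking to someone",
    "sitting quietly after lunch (no alcohol)",
    "in a car, stopped in traffic for a few minutes",
]

def epworth_score(ratings):
    assert len(ratings) == len(SITUATIONS)
    assert all(0 <= r <= 3 for r in ratings)
    return sum(ratings)

ratings = [2, 1, 0, 2, 3, 0, 2, 1]   # one hypothetical respondent
total = epworth_score(ratings)
print(total, "suggests excessive daytime sleepiness" if total > 10
      else "is within the typical range")
```

The appeal of the scale is exactly this simplicity, which is also why objective complements like the MSLT exist.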

The MSLT measures how long it takes a person to fall asleep. Individuals with hypersomnolence will fall asleep faster than other patients, but determining a cutoff for what constitutes healthy and what qualifies as hypersomnolence has made the test an inexact measure. Typical cutoffs of 5-8 minutes provide a decent measure, but further research has cast doubt on this test’s value in studying depression.

The Wisconsin Sleep Cohort Study is an ongoing project, begun in 1988, that follows state employees and includes a sleep study every four years. From this study, Plante has found an interesting and seemingly paradoxical relationship: while higher subjective measures of sleepiness are associated with an increased likelihood of depression, objective measures like the MSLT associate depression with less sleepiness. Plante argues that this paradox does not reflect individuals’ inability to report their own sleepiness, but rather the limitations of the MSLT.

Plante proposed several promising candidates for quantitative measures of excessive daytime sleepiness. One candidate, which is already a tool for studying sleep deprivation, is a ‘psychomotor vigilance task,’ where lapses in reaction time correlate with daytime sleepiness. Another method involves infrared measurements of the dilation of the pupil. Pupils dilate when a person is sleepy, so this somatic reaction could be useful.

High density EEG allowed Plante to identify the role of disturbed slow wave sleep in hypersomnolence.

Another area of interest for Plante is the signs of depressive sleepiness in the brain. Using high density EEG, which covers the whole head of the subject, Plante found that individuals with hypersomnolence experience less of the sleep cycle most associated with restoration, known as slow wave sleep. He identified a potential brain circuitry associated with sleepiness, but emphasized a need for methods like transcranial magnetic stimulation to get a better picture of the relationship between this circuitry and observed sleepiness.

By Sarah Haurin

