Following the people and events that make up the research community at Duke

Students exploring the Innovation Co-Lab

Author: Sarah Haurin

Nature vs. Nurture and Addiction

Epigenetics involves modifications to DNA that do not change its sequence but only affect which genes are active, or expressed. Photo courtesy of whatisepigenetics.com

The progressive understanding of addiction as a disease rather than a choice has opened the door to better treatment and research, but there are aspects of addiction that make it uniquely difficult to treat.

One exceptional characteristic of addiction is its persistence even in the absence of drug use: during periods of abstinence, symptoms worsen over time, and sensitivity to the drug increases.

Researcher Elizabeth Heller, PhD, of the University of Pennsylvania Epigenetics Institute, is interested in understanding why we observe this persistence in symptoms even after drug use, the initial cause of the addiction, is stopped. Heller, who spoke at a Jan. 18 biochemistry seminar, believes the answer lies in epigenetic regulation.

Elizabeth Heller is interested in how changes in gene expression can explain the chronic nature of addiction.

Epigenetic regulation represents the nurture part of “nature vs. nurture.” Without changing the actual sequence of DNA, we have mechanisms in our body to control how and when cells express certain genes. These mechanisms are influenced by changes in our environment, and the process of influencing gene expression without altering the basic genetic code is called epigenetics.

Heller believes that we can understand the persistent nature of the symptoms of drugs of abuse even during abstinence by considering epigenetic changes caused by the drugs themselves.

To investigate the role of epigenetics in addiction, specifically cocaine addiction, Heller and her team have developed a series of tools that bind to DNA and influence the expression of the molecules that play a role in epigenetic regulation, called transcription factors. They identified the FosB gene, which has previously been implicated as a regulator of drug addiction, as a site for these changes.

Increased expression of the FosB gene has been shown to increase sensitivity to cocaine, meaning individuals expressing this gene respond more than those not expressing it. Heller found that cocaine users show decreased levels of the protein responsible for inhibiting expression of FosB. This suggests cocaine use itself is depleting the protein that could help regulate and attenuate response to cocaine, making it more addictive.

Another gene, Nr4a1, is important in dopamine signaling, the reward pathway that is “hijacked” by drugs of abuse.  This gene has been shown to attenuate reward response to cocaine in mice. Mice who underwent epigenetic changes to suppress Nr4a1 showed increased reward response to cocaine. A drug that is currently used in cancer treatment has been shown to suppress Nr4a1 and, consequently, Heller has shown it can reduce cocaine reward behavior in mice.

The identification of genes like FosB and Nr4a1, together with evidence that changes in gene expression are even greater during periods of abstinence than during drug use, marks an exciting leap in our understanding of addiction and, ultimately, in finding treatments best suited to such a unique and devastating disease.

Post by undergraduate blogger Sarah Haurin


Drug Homing Method Helps Rethink Parkinson’s

The brain is the body’s most complex organ, and consequently the least understood. In fact, researchers like Michael Tadross, MD, PhD, wonder if the current research methods employed by neuroscientists are telling us as much as we think.

Michael Tadross is using novel approaches to tease out the causes of neuropsychiatric diseases at a cellular level.

Current methods such as gene editing and pharmacology can reveal how certain genes and drugs affect the cells in a given area of the brain, but they’re limited in that they don’t account for differences among different cell types. With his research, Tadross has tried to target specific cell types to better understand mechanisms that cause neuropsychiatric disorders.

To do this, Tadross developed a method to ensure a drug injected into a region of the brain will only affect specific cell types. Tadross genetically engineered the cell type of interest so that a special receptor protein, called HaloTag, is expressed at the cell membrane. Additionally, the drug of interest is altered so that it is tethered to the molecule that binds with the HaloTag receptor. By connecting the drug to the HaloTag ligand, and engineering only the cell type of interest to express the HaloTag receptor, Tadross effectively limited the cells affected by the drug to just one type. He calls this method “Drugs Acutely Restricted by Tethering,” or DART.

Tadross has been using the DART method to better understand the mechanisms underlying Parkinson’s disease. Parkinson’s is a neurological disease that affects a region of the brain called the striatum, causing tremors, slow movement, and rigid muscles, among other motor deficits.

Only cells expressing the HaloTag receptor can bind to the AMPA-repressing drug, ensuring virtually perfect cell-type specificity.

Patients with Parkinson’s show decreased levels of the neurotransmitter dopamine in the striatum. Consequently, treatments that involve restoring dopamine levels improve symptoms. For these reasons, Parkinson’s has long been regarded as a disease caused by a deficit in dopamine.

With his technique, Tadross is challenging this assumption. In addition to death of dopaminergic neurons, Parkinson’s is associated with an increase of the strength of synapses, or connections, between neurons that express AMPA receptors, which are the most common excitatory receptors in the brain.

In order to simulate the effects of Parkinson’s, Tadross and his team induced the death of dopaminergic neurons in the striatum of mice. As expected, the mice displayed significant motor impairments consistent with Parkinson’s. However, in addition to inducing the death of these neurons, Tadross engineered the AMPA-expressing cells to produce the HaloTag protein.

Tadross then treated the mice’s striatum with a common AMPA receptor blocker tethered to the HaloTag ligand. Amazingly, blocking the activity of these AMPA-expressing neurons, even in the absence of the dopaminergic neurons, reversed the effects of Parkinson’s so that the previously affected mice moved normally.

Tadross’s findings with the Parkinson’s mice exemplify how little we know about cause and effect in the brain. The key to designing effective treatments for neuropsychiatric diseases, and possibly other diseases outside the nervous system, may lie in teasing out the relationship of specific types of cells to symptoms and targeting the disease that way.

The ingenious work of researchers like Tadross will undoubtedly help bring us closer to understanding how the brain truly works.

Post by undergraduate blogger Sarah Haurin


 

Aging and Decision-Making

Who makes riskier decisions, the young or the old? And what matters more in our decisions as we age — friends, health or money? The answers might surprise you.

Kendra Seaman works at the Center for the Study of Aging and Human Development and is interested in decision-making across the lifespan.

Duke postdoctoral fellow Kendra Seaman, Ph.D., uses mathematical models and brain imaging to understand how decision-making changes as we age. In a talk to a group of cognitive neuroscientists at Duke, Seaman explained that we have good reason to be concerned with how older people make decisions.

Statistically, older people in the U.S. have more money and also more expenditures, particularly on healthcare. And by 2030, 20 percent of the U.S. population will be over the age of 65.

One key component to decision-making is subjective value, which is a measure of the importance a reward or outcome has to a specific person at a specific point in time. Seaman used a reward of $20 as an example: it would have a much higher subjective value for a broke college student than for a wealthy retiree. Seaman discussed three factors that influence subjective value: reward, cost, and discount rate, or the determination of the value of future rewards.
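These three factors can be combined in a simple discounting model. As a rough sketch, the hyperbolic form and all the numbers below are illustrative assumptions, not figures from Seaman's talk:

```python
def subjective_value(reward, cost, delay, k=0.1):
    """Hyperbolically discounted subjective value.

    reward, cost: value of the outcome and what it takes to obtain it
    delay: time until the reward arrives (arbitrary units)
    k: discount rate -- a higher k devalues future rewards more steeply
    (all values here are illustrative, not from the talk)
    """
    return (reward - cost) / (1 + k * delay)

# The same $20 is worth less the longer you have to wait for it
print(subjective_value(20, 0, delay=0))   # 20.0
print(subjective_value(20, 0, delay=30))  # 5.0
```

The discount rate `k` is what varies from person to person: Seaman's broke college student would also assign the immediate $20 a higher reward magnitude than the wealthy retiree would.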

Brain imaging research has found that subjective value is represented similarly in the medial prefrontal cortex (MPFC) across all ages. Despite this common network, Seaman and her colleagues have found significant differences in decision-making in older individuals.

The first difference comes in the form of reward. Older individuals are likely to be more invested in the outcome of a task if the reward is social or health-related rather than monetary. Consequently, they are more likely to want these health and social rewards sooner and with higher certainty than younger individuals are. Understanding the salience of these rewards is crucial to designing future experiments to identify decision-making differences in older adults.

A preference for positive skew becomes more pronounced with age.

Older individuals also differ in their preferences for something called “skewed risks.” In these tasks, positive skew means a high probability of a small loss and a low probability of a large gain, such as buying a lottery ticket. Negative skew means a low probability of a large loss and a high probability of a small gain, such as undergoing a common medical procedure that has a low chance of harmful complications.

Older people tend to prefer positive skew to a greater degree than younger people, and this bias toward positive skew becomes more pronounced with age.
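The two skew types can be made concrete with expected values. A minimal sketch, with made-up probabilities and payoffs chosen only to illustrate the shapes:

```python
def expected_value(gamble):
    """Expected value of a gamble given (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in gamble)

# Positive skew: high chance of a small loss, low chance of a large gain
lottery_ticket = [(0.99, -1), (0.01, 100)]
# Negative skew: high chance of a small gain, low chance of a large loss
routine_procedure = [(0.99, 1), (0.01, -100)]

print(round(expected_value(lottery_ticket), 2))     # 0.01
print(round(expected_value(routine_procedure), 2))  # -0.01
```

The two gambles have nearly identical expected values; what differs, and what older adults appear to weigh differently, is the shape of the risk.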

Understanding these tendencies could be vital to explaining why older people fall victim to fraud and choose to undergo risky medical procedures, and could leave us better equipped to motivate an aging population to remain involved in physical and mental activities.

Post by undergraduate blogger Sarah Haurin


Combatting the Opioid Epidemic

The opioid epidemic needs to be combatted in and out of the clinic.

In the U.S., 115 people die from opioids every day. The number of opioid overdoses increased fivefold from 1999 to 2016. While increased funding for resources like Narcan, the opioid overdose-reversing drug now carried by emergency responders in cities throughout the country, has helped, changes to standard healthcare practices are still sorely needed.

Ashwin A. Patkar, MD, medical director of the Duke Addictions Program, spoke to the Duke Center on Addiction and Behavior Change about how opioid addiction is treated.

The weaknesses of the current treatment standards first appear in diagnosis. Distributors are now contaminating heroin and cocaine with fentanyl, an opioid that is 25 to 50 times more potent than heroin and cheaper than either drug. Despite fentanyl’s prevalence in these street drugs, the standard form and interview for addiction patients does not include asking about or testing for the substance.

Patkar has found that 30 percent of opioid addiction patients have fentanyl in their urine and do not disclose it to the doctor. Rather than resulting from the patients’ dishonesty, Patkar believes, in most cases, patients are taking fentanyl without knowing that the drugs they are taking are contaminated.

Because of its potency, fentanyl causes overdoses that may require more Narcan than a standard heroin overdose. Understanding the prevalence of fentanyl in patients is vital both for public health and for educating patients so they can be adequately prepared.

Patkar also pointed out that, despite a lot of research supporting medication-assisted therapy, only 21 percent of addiction treatment facilities in the U.S. offer this type of treatment. Instead, most facilities rely on detoxification, which has high rates of relapse (greater than 85 percent within a year after detox) and comes with its own drawbacks. Detox lowers the patient’s tolerance to the drug, but care providers often neglect to tell the patients this, resulting in a rate of overdose that is three times higher than before detox.

Another common treatment for opioid addiction involves methadone, a controlled substance that helps alleviate symptoms of opioid withdrawal. Because its retention rate is high and its cost of production is low, methadone offers a strong financial incentive. However, methadone is itself addictive, and overdose is possible.

Patkar points to a resource developed by Julie Bruneau as a reference for the Canadian standard of care for opioid use disorder. Rather than recommending detox or methadone as a first line of treatment, Bruneau and her team recommend buprenorphine, with naltrexone as a medication to support abstinence after treatment with buprenorphine.

Buprenorphine serves a similar function to methadone, but with better and safer clinical outcomes. It does not create the same euphoric effect as methadone, and rates of overdose are six times lower than among those prescribed methadone.

In addition to prescribing the right medicine, clinicians need to encourage patients to stick with treatment longer. Despite buprenorphine’s good outcomes, patients who stop taking it after only 4 to 12 weeks, even with tapering directed by a doctor, achieve only an 18 percent rate of successful abstinence.

Patkar closed his talk by reminding the audience that opioid addiction is a brain disease. In order to see a real change in the number of people dying from opioids, we need to focus on treating addiction as a disease; no one would question extended medication-based treatment of diseases like diabetes or heart disease, and the same should be said about addiction. Healthcare providers have a responsibility to treat addiction based on available research and best practices, and patients with opioid addiction deserve the same standard of care as anyone else.

Post by undergraduate blogger Sarah Haurin


Medicine, Research and HIV

Duke senior Jesse Mangold has had an interest in the intersection of medicine and research since high school. While he took electives in a program called “Science, Medicine, and Research,” it wasn’t until the summer after his first year at Duke that he got to participate in research.

As a member of the inaugural class of Huang fellows, Mangold worked in the lab of Duke assistant professor Christina Meade on the compounding effect of HIV and marijuana use on cognitive abilities like memory and learning.

The following summer, Mangold traveled to Honduras with a group of students to help collect data and meet the overwhelming need for eye care. Mangold and the other students traveled to schools, administered visual exams, and provided free glasses to the children who needed them. Additionally, the students contributed to a growing research project and, for their part, put together an award-winning poster.

Mangold’s (top right) work in Honduras helped provide countless children with the eye care they so sorely needed.

Returning to school as a junior, Mangold wanted to focus on his greatest research interest: the molecular mechanisms of human immunodeficiency virus (HIV). Mangold found a home in the Permar lab, which investigates mechanisms of mother-to-child transmission of viruses including HIV, Zika, and Cytomegalovirus (CMV).

From co-authoring a book chapter to learning laboratory techniques, he was given “the opportunity to fail, but that was important, because I would learn and come back the next week and fail a little bit less,” Mangold said.

In the absence of any treatment, mothers who are HIV positive transmit the virus to their infants only 30 to 40 percent of the time, suggesting a component of the maternal immune system that provides at least partial protection against transmission.

The immune system functions through the activity of antibodies, or proteins that bind to specific receptors on a microbe and neutralize the threat they pose. The key to an effective HIV vaccine is identifying the most common receptors on the envelope of the virus and engineering a vaccine that can interact with any one of these receptors.

This human T cell (blue) is under attack by HIV (yellow), the virus that causes AIDS. Credit: Seth Pincus, Elizabeth Fischer and Austin Athman, National Institute of Allergy and Infectious Diseases, National Institutes of Health


Mangold is working with Duke postdoctoral associate Ashley Nelson, Ph.D., to understand the immune response conferred on the infants of HIV positive mothers. To do this, they are using a rhesus macaque model. In order to most closely resemble the disease path as it would progress in humans, they are using a virus called SHIV, which is engineered to have the internal structure of simian immunodeficiency virus (SIV) and the viral envelope of HIV; SHIV can thus serve to naturally infect the macaques but provide insight into antibody response that can be generalized to humans.

The study involves infecting 12 female monkeys with the virus, waiting 12 weeks for the infection to progress, and treating the monkeys with antiretroviral therapy (ART), currently the most effective treatment for HIV. Following the treatment, the level of virus in the blood, or viral load, will drop to undetectable levels. After an additional 12 weeks of treatment and three doses of either a candidate HIV vaccine or a placebo, treatment will be stopped. This design is meant to mirror the gold standard of treatment for women who are HIV-positive and pregnant.

At this point, because the treatment and vaccine are imperfect, some virus will have survived and will “rebound,” or replicate fast and repopulate the blood. The key to this research is to sequence the virus at this stage, to identify the characteristics of the surviving virus that withstood the best available treatment. This surviving virus is also what is passed from mothers on antiretroviral therapy to their infants, so understanding its properties is vital for preventing mother-to-child transmission.

As a Huang fellow, Mangold had the opportunity to present his research on the compounding effect of HIV and marijuana on cognitive function.

Mangold’s role is looking into the difference in viral diversity before treatment commences and after rebound. This research will prove fundamental in engineering better and more effective treatments.

In addition to working with HIV, Mangold will be working on a project looking into a virus that doesn’t receive the same level of attention as HIV: Cytomegalovirus. CMV is the leading congenital cause of hearing loss, and mother-to-child transmission plays an important role in the transmission of this devastating virus.

Mangold and his mentor, pediatric resident Tiziana Coppola, M.D., are authoring a paper that reviews the existing literature on CMV, looking for a link between the prevalence of CMV in women of child-bearing age and the number of children who suffer CMV-related hearing loss. With this study, Mangold and Coppola hope to identify whether a component of the maternal immune system confers some immunity to the child, which could then be targeted for vaccine development.

After graduation, Mangold will continue his research in the Permar lab during a gap year while applying to MD/PhD programs. He hopes to continue studying at the intersection of medicine and research in the HIV vaccine field.

Post by undergraduate blogger Sarah Haurin


 

Quantifying Sleepiness and How It Relates to Depression

Sleep disturbance is a significant issue for many individuals with depressive illnesses. While most individuals deal with an inability to sleep, or insomnia, about 20 to 30 percent of depressed patients report the opposite problem: hypersomnia, or excessive sleep duration.

David Plante’s work investigates the relationship between depressive disorders and hypersomnolence. Photo courtesy of sleepfoundation.org

Patients who experience hypersomnolence report excessive daytime sleepiness (EDS) and often seem to be sleep-deprived, making the condition difficult to identify and poorly researched.

David Plante’s research focuses on a neglected type of sleep disturbance: hypersomnolence.

David T. Plante, MD, of the University of Wisconsin School of Medicine and Public Health, studies the significance of hypersomnolence in depression. He said the condition is resistant to treatment, often persisting even after depression has been treated, and its role in increasing risk of depression in previously healthy individuals needs to be examined.

One problem in studying daytime sleepiness is quantifying it. Subjective measures include the Epworth sleepiness scale, a quick self-report of how likely you are to fall asleep in a variety of situations. Objective measures are often involved processes, such as the Multiple Sleep Latency Test (MSLT), which requires an individual to attempt to take 4-5 naps, each 2 hours apart, in a lab while EEG records brain activity.

The MSLT measures how long it takes a person to fall asleep. Individuals with hypersomnolence fall asleep faster than other patients, but the difficulty of determining a cutoff between healthy sleepiness and hypersomnolence has made the test an inexact measure. Typical cutoffs of 5-8 minutes provide a decent measure, but further research has cast doubt on this test’s value in studying depression.
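The scoring logic itself is simple, which is part of the problem: everything hinges on the cutoff. A sketch, where the 8-minute default and the example latencies are assumptions chosen for illustration:

```python
def mean_sleep_latency(nap_latencies_min):
    """MSLT score: average minutes to fall asleep across the nap trials."""
    return sum(nap_latencies_min) / len(nap_latencies_min)

def suggests_hypersomnolence(nap_latencies_min, cutoff_min=8):
    # The cutoff is contested; values of 5-8 minutes are typical
    return mean_sleep_latency(nap_latencies_min) <= cutoff_min

# Hypothetical patient who falls asleep quickly in each of four naps
print(suggests_hypersomnolence([4, 6, 5, 7]))  # True
```

A patient averaging 5.5 minutes falls below any cutoff in the typical range, but a patient averaging 9 minutes is classified as healthy or hypersomnolent depending entirely on which cutoff the lab chose.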

The Wisconsin Sleep Cohort Study is an ongoing project begun in 1988 that follows state employees and includes a sleep study every four years. From this study, Plante has found an interesting and seemingly paradoxical relationship: while an increase in subjective measures of sleepiness is associated with increased likelihood of depression, objective measures like the MSLT associate depression with less sleepiness. Plante argues that this paradoxical relationship does not reflect an inability of individuals to report their own sleepiness, but rather the limitations of the MSLT.

Plante proposed several promising candidates for quantitative measures of excessive daytime sleepiness. One candidate, which is already a tool for studying sleep deprivation, is a ‘psychomotor vigilance task,’ where lapses in reaction time correlate with daytime sleepiness. Another method involves infrared measurements of the dilation of the pupil. Pupils dilate when a person is sleepy, so this somatic reaction could be useful.
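Counting lapses on a psychomotor vigilance task might look like the following sketch; the 500 ms lapse threshold is the convention commonly used in vigilance research, not a figure from Plante's talk, and the trial data are made up:

```python
def count_lapses(reaction_times_ms, threshold_ms=500):
    """Count PVT lapses: trials slower than the lapse threshold.

    500 ms is the conventional lapse threshold in vigilance research
    (an assumption here, not a value from the talk).
    """
    return sum(rt > threshold_ms for rt in reaction_times_ms)

# Hypothetical session with two slow responses
print(count_lapses([250, 310, 640, 275, 890, 330]))  # 2
```

More lapses over a fixed-length session would indicate greater daytime sleepiness, giving an objective score without the half-day lab protocol the MSLT requires.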

High density EEG allowed Plante to identify the role of disturbed slow wave sleep in hypersomnolence.

Another area of interest for Plante is the signs of depressive sleepiness in the brain. Using high density EEG, which covers the whole head of the subject, Plante found that individuals with hypersomnolence experience less of the sleep cycle most associated with restoration, known as slow wave sleep. He identified a potential brain circuitry associated with sleepiness, but emphasized a need for methods like transcranial magnetic stimulation to get a better picture of the relationship between this circuitry and observed sleepiness.

By Sarah Haurin

Better Butterfly Learners Take Longer to Grow Up

Emilie Snell-Rood studies butterflies to understand the factors that influence plasticity.

The ability of animals to vary their phenotypes, or physical expression of their genes, in different environments is a key element to survival in an ever-changing world.

Emilie Snell-Rood, PhD, of the University of Minnesota, is interested in why this phenomenon of plasticity varies. Some animals’ phenotypes are relatively stable despite varying environmental pressures, while others display a wide range of behaviors.

Researchers have looked into how the costs of plasticity limit its variability. While many biologists expected energetic costs to adequately explain the limits to plasticity, only about 30 percent of studies that have looked for plasticity-related costs have found them.

Butterflies’ learning has provided insight into developmental plasticity.

With her butterfly model, Snell-Rood has worked to understand why these researchers have come up with so few results.

Snell-Rood hypothesized that the life history of an animal, or the timing of major developmental events like weaning, should be of vital importance in the constraints on plasticity, specifically on the type of plasticity involved in learning. Much of learning involves trial and error, which is costly – it requires time, energy, and exposure to potential predators while exploring the environment.

Additionally, behavioral flexibility requires an investment in developing brain tissue to accommodate this learning.

Because of these costs, animals that engage in this kind of learning must forgo reproduction until later in life.

To test the costs of learning, Snell-Rood used butterflies as a subject. Butterflies require developmental plasticity to explore their environments and optimize their food finding strategies. Over time, butterflies get more efficient at landing on the best host plants, using color and other visual cues to find the best food sources.

Studying butterfly families shows that families that are better learners have increased volume in the part of the brain associated with sensory integration. Furthermore, experimentally speeding up an organism’s life history leads to a decline in learning ability.

These results support a tradeoff between an organism’s developmental plasticity and life history. While this strategy is more costly in terms of investment in neural development and energy investment, it provides greater efficacy in adaptation to environment. However, further pressures from resource availability can also influence plasticity.

Looking to the butterfly model, Snell-Rood found that quality nutrition increases egg production as well as areas of the brain associated with plasticity.

Understanding factors that influence an animal’s plasticity is becoming increasingly important. Not only does it allow us to understand the role of plasticity in evolution up to this point, but it allows us to predict how organisms will adapt to novel and changing environments, especially those that are changing because of human influence. For the purposes of conservation, these predictions are vital.

By Sarah Haurin

ECT: Shockingly Safe and Effective

Husain is interested in putting to rest misconceptions about the safety and efficacy of ECT.

Few treatments have proven as controversial and effective as electroconvulsive therapy (ECT), or ‘shock therapy’ in common parlance.

Hippocrates himself saw the therapeutic benefits of inducing seizures in patients with mental illness, observing that convulsions caused by malaria helped attenuate symptoms of mental illness. However, depictions of ECT as a form of medical abuse, as in the infamous scene from One Flew Over the Cuckoo’s Nest, have prevented ECT from becoming a first-line psychiatric treatment.

The Duke Hospital Psychiatry program recently welcomed back Duke Medical School alumnus Mustafa Husain to deliver the 2018 Ewald “Bud” Busse Memorial Lecture, which is held to commemorate a Duke doctor who pioneered the field of geriatric psychiatry.

Husain, from the University of Texas Southwestern, delivered a comprehensive lecture on neuromodulation, a term for the emerging subspecialty of psychiatric medicine that focuses on physiological treatments that are not medication.

The image most people have of ECT is probably the gruesome depiction seen in “One Flew Over the Cuckoo’s Nest.”

Husain began his lecture by stating that ECT is one of the most effective treatments for psychiatric illness. While medication and therapy are helpful for many people with depression, a considerable proportion of patients have what can be categorized as “treatment-resistant depression” (TRD). In one of the largest controlled experiments of ECT, Husain and colleagues showed that 82 percent of TRD patients treated with ECT achieved remission. While this remission rate is impressive, the rate at which remitted individuals relapse into symptoms is also substantial: over 50 percent of remitted individuals will experience relapse.

Husain’s study went on to test whether continuing ECT could successfully prevent relapse in the first six months after acute ECT. He found that continuation ECT worked as well as the current best combination of drugs.

From this study, Husain made an interesting observation – the people who were doing best in the 6 months after ECT were elderly patients. He then set out to study the best form of treatment for these depressed elderly patients.

Typically, ECT involves stimulation of both sides of the brain (bilateral), but this treatment is associated with adverse cognitive effects like memory loss. Using right unilateral ECT effectively decreased cognitive side effects while maintaining an appreciable remission rate.

After the initial treatment, patients were again assigned to either receive continued drug treatment or continued ECT. In contrast to the previous study, however, the treatment for continued ECT was designed based on the individual patients’ ratings from a commonly used depression scaling system.

The results of this study show the potential that ECT has in becoming a more common treatment for major depressive disorder: maintenance ECT showed a lower relapse rate than drug treatment following initial ECT. If psychiatrists become more flexible in their prescription of ECT, adjusting the treatment plan to accommodate the changing needs of the patients, a disorder that is exceedingly difficult to treat could become more manageable.

In addition to discussing ECT, Husain shared his research into other methods of neuromodulation, including Magnetic Seizure Therapy (MST). MST uses magnetic fields to induce seizures in a more localized region of the brain than available via ECT.

Importantly, MST does not cause the cognitive deficits observed in patients who receive ECT. Husain’s preliminary investigation found that a treatment course relying on MST was comparable in efficacy to ECT. While further research is needed, Husain is hopeful in the possibilities that interventional psychiatry can provide for severely depressed patients.

By Sarah Haurin 

Understanding the Link Between ADHD and Binge Eating Could Point to New Treatments

 

Binge eating disorder is the most prevalent eating disorder in the United States. Infographic courtesy of Multi-Service Eating Disorders Association

With more than a third of the adult population of the United States meeting criteria for obesity, doctors are becoming increasingly interested in behaviors that contribute to these rates.

Allan Kaplan is interested in improving treatment of binge eating disorder.

Allan Kaplan, MD, of the University of Toronto, is interested in eating disorders, specifically binge eating disorder, which is observed in about 35 percent of people with obesity.

Binge eating disorder (BED) is a pattern of disordered eating characterized by consumption of a large number of calories in a relatively short period of time. In addition to these binges, patients report lack of control and feelings of self-disgust. Because of these patterns of excessive caloric intake, binge eating disorder and obesity go hand-in-hand, and treatment of the disorder could be instrumental in decreasing rates of obesity and improving overall health.

In addition to the health risks associated with obesity, binge eating disorder is associated with anxiety disorders, affective disorders, substance abuse and attention deficit hyperactivity disorder (ADHD) – in fact, about 30 percent of individuals with binge eating disorder also have a history of ADHD.

Binge eating disorder displays a high comorbidity with mood and affective disorders. Infographic courtesy of American Addiction Centers.

ADHD is characterized by inability to focus, hyperactivity, and impulsivity, and substance abuse involves cravings and patterns of losing control followed by regret. These patterns of mental and physiological symptoms resemble those seen in patients with binge eating disorder. Kaplan and other researchers are linking the neurological patterns observed in these disorders to better understand BED.

Researchers have found that neurological reward pathways become active when a patient with binge eating disorder is presented with a food-related stimulus. Individuals with the eating disorder are more sensitive to food-related rewards than most people. Researchers have also identified a genetic basis — certain genes make individuals more susceptible to reward and thus more likely to engage in binges.

Because patients with ADHD exhibit similar neurological patterns, doctors are looking to drugs already approved by the FDA to treat ADHD as possible treatments for binge eating disorder. The first of these approved drugs, Vyvanse, has not proven much more effective than the traditional form of treatment, cognitive behavioral therapy (CBT), a form of talk therapy that aims to identify and correct dysfunctions in behavior and thought patterns that lead to disordered behaviors.

Another drug, however, proved promising in a study conducted by Kaplan and his colleagues. The ADHD drug methylphenidate, combined with CBT, led to significant clinical outcomes — patients engaged in fewer binges, reported fewer cravings, and saw their body mass index decrease. Kaplan argues that the most effective treatment would reduce binges, treat physiological symptoms like obesity, improve psychological disturbances like low self-esteem, and, of course, be safe. So far, the combination of psychostimulants like methylphenidate and CBT has met these criteria.

Kaplan emphasized a need to make information about binge eating disorder and its treatments more available. Most individuals currently being treated for BED do not obtain treatment knowing they have an eating disorder — they are usually diagnosed only after seeking help with obesity-related health issues or help in weight loss. Making clinicians more familiar with the disorder and its associated behaviors as well as encouraging patients to seek treatment could prove instrumental in combating the current healthcare issue of obesity.

By Sarah Haurin

Mice, motor learning, and making decisions

Advanced imaging techniques allow neuroscientists to better understand how the motor outputs we observe are created in the brain.

Early understandings of the brain viewed it as a black box that takes sensory input and generates a motor response, with the in-between functioning of the brain as a mystery.

Takaki Komiyama is curious about how the brain produces the stereotypical movements characteristic of motor learning.

Takaki Komiyama of the University of California, San Diego is curious about the relationship between sensory input, motor output, and what happens in between. “What fascinates me the most is the flexibility of this dynamic… this flexibility of the relationship between the environment and the brain is the key element of my research,” Komiyama said to an audience of Duke neuroscience researchers.

Komiyama and his lab have designed experiments to watch how the brain changes as mice learn. Specifically, they train mice to complete a lever-pushing task in response to an auditory stimulus and then use an advanced imaging technique to watch the activity of specific populations of neurons.

Komiyama based his experimental design on a hallmark of motor learning: An “expert” mouse will hear the auditory stimulus and produce a motor response that is exactly the same each time. Komiyama’s team was curious about how these reproducible movements are learned.

Focusing on the primary motor cortex, called M1 for short, Komiyama observed many different neuronal firing patterns as the mouse learned the motion of lever-pushing. As the mouse ventured into “expert” territory, usually after about two weeks of training, this variation was replaced by an activity pattern that is the same from trial to trial. In addition to being consistent, this final pattern starts earlier after the stimulus and takes less time to complete than earlier patterns. In other words, during learning, the brain tries out different pathways for the goal action and then converges on the most efficient way of producing the desired response.

Komiyama then turned his focus to M2, the secondary motor cortex, which he observed to be one of the last areas activated during early learning trials but one of the first activated during late trials. To test M2’s role in learning, Komiyama inactivated the region in trained mice and subjected them to the same stimulus-motor response trial.

The mice with an inactivated M2 missed more trials, took longer to initiate movement, and completed the lever pushing less efficiently. Essentially, the mice behaved as if they had never learned the movement, suggesting that M2 is crucial for coordinating learned motor behavior.

In addition to identifying crucial patterns of motor learning, Komiyama and his team are working to understand decision making. After designing a more complex lever-pushing task that required pushing a joystick in different directions depending on the visual stimulus, Komiyama observed that the mice's accuracy plateaued around 60%.

The mice’s internal biases prevented them from achieving better results in the visual stimuli task.

Komiyama hypothesized that this pattern of inaccuracy could be explained by internal biases carried over from the outcomes of previous trials. He designed a statistical model that incorporated that trial history, and with further testing, the model accurately predicted the mice's wrong choices.
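The talk, as reported here, does not specify the form of the model, but the idea — that a choice depends on both the current stimulus and a history term built from previous trials — can be illustrated with a minimal sketch. The logistic form, the recency weighting, and the weights `w_stim` and `w_bias` below are all illustrative assumptions, not Komiyama's actual model:

```python
import math

def choice_prob(stimulus, prev_outcomes, w_stim=2.0, w_bias=1.0):
    """Hypothetical sketch: probability of choosing 'right' given the
    current stimulus (+1 = right cue, -1 = left cue, 0 = ambiguous)
    and a bias term built from previous trials' outcomes.

    prev_outcomes: list of past trial outcomes, most recent last,
    coded +1 (rewarded right-type outcome) or -1 (rewarded left-type).
    """
    n = len(prev_outcomes)
    # Recency-weighted history: the most recent trial gets weight 1,
    # each older trial half the weight of the one after it.
    bias = sum(o * 0.5 ** (n - i) for i, o in enumerate(prev_outcomes, start=1))
    z = w_stim * stimulus + w_bias * bias
    return 1.0 / (1.0 + math.exp(-z))  # logistic link

# With no history, the stimulus alone drives the choice.
p_cue_only = choice_prob(+1, [])
# With an ambiguous stimulus, accumulated history biases the choice,
# producing "wrong" answers on trials where the cue and bias disagree.
p_bias_right = choice_prob(0, [+1, +1])
p_bias_left = choice_prob(0, [-1, -1])
```

Under a model like this, silencing the circuit that carries the history term corresponds to setting `w_bias` to zero, which is one way to read the PPC inactivation result described below: removing the bias signal leaves the stimulus as the only driver of choice, improving accuracy.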

The posterior parietal cortex (PPC) is an area of the brain that has been found to be involved in decision making tasks. Komiyama observed neurons in the PPC that predicted which direction the mice would push the joystick. In addition to being active before the motor response during trials, these neurons were also active in the time between trials.

Seeing this as a neural correlate for internal biases, Komiyama hypothesized that inactivating this region would decrease the influence of bias on the mice’s choices. Sure enough, inactivating the PPC led to more accurate responses in the mice, thus confirming the PPC as a neural source of bias.

By Sarah Haurin

