Duke Research Blog

Following the people and events that make up the research community at Duke.

Author: Sarah Haurin

Combatting the Opioid Epidemic

The opioid epidemic needs to be combatted in and out of the clinic.

In the U.S., 115 people die from opioids every day. The number of opioid overdoses increased fivefold from 1999 to 2016. Increased funding for resources like Narcan — the opioid overdose-reversing drug now carried by emergency responders in cities throughout the country — has helped, but changes to standard healthcare practices are still sorely needed.

Ashwin A. Patkar, MD, medical director of the Duke Addictions Program, spoke to the Duke Center on Addiction and Behavior Change about how opioid addiction is treated.

The weaknesses of the current treatment standards first appear in diagnosis. Heroin and cocaine are currently being contaminated by distributors with fentanyl, an opioid that is 25 to 50 times more potent than heroin and cheaper than either of these drugs. Despite fentanyl’s prevalence in these street drugs, the standard form and interview for addiction patients does not include asking about or testing for the substance.

Patkar has found that 30 percent of opioid addiction patients have fentanyl in their urine and do not disclose it to the doctor. Rather than resulting from the patients’ dishonesty, Patkar believes, in most cases, patients are taking fentanyl without knowing that the drugs they are taking are contaminated.

Because of its potency, fentanyl causes overdoses that may require more Narcan than a standard heroin overdose. Understanding the prevalence of fentanyl in patients is vital both for public health and for educating patients so they can be adequately prepared.

Patkar also pointed out that, despite a lot of research supporting medication-assisted therapy, only 21 percent of addiction treatment facilities in the U.S. offer this type of treatment. Instead, most facilities rely on detoxification, which has high rates of relapse (greater than 85 percent within a year after detox) and comes with its own drawbacks. Detox lowers the patient’s tolerance to the drug, but care providers often neglect to tell the patients this, resulting in a rate of overdose that is three times higher than before detox.

Another common treatment for opioid addiction involves using methadone, a controlled substance that helps alleviate symptoms from opioid withdrawal. Because retention rate is high and cost of production is low, methadone poses a strong financial incentive. However, methadone itself is addictive, and overdose is possible.

Patkar points to a resource developed by Julie Bruneau as a reference for the Canadian standard of care for opioid use disorder. Rather than recommending detox or methadone as a first line of treatment, Bruneau and her team recommend buprenorphine, with naltrexone as a medication to support abstinence after treatment with buprenorphine.

Buprenorphine serves a similar function to methadone, but with better and safer clinical outcomes. Buprenorphine does not create the same euphoric effect as methadone, and overdose rates are roughly one-sixth of those seen in patients prescribed methadone.

In addition to prescribing the right medicine, clinicians need to encourage patients to stick with treatment longer. Despite buprenorphine's good outcomes, patients who stop taking it after only 4 to 12 weeks, even with tapering directed by a doctor, exhibit only an 18 percent rate of successful abstinence.

Patkar closed his talk by reminding the audience that opioid addiction is a brain disease. In order to see a real change in the number of people dying from opioids, we need to focus on treating addiction as a disease; no one would question extended medication-based treatment of diseases like diabetes or heart disease, and the same should be said about addiction. Healthcare providers have a responsibility to treat addiction based on available research and best practices, and patients with opioid addiction deserve a standard of care the same as anyone else.

Post by undergraduate blogger Sarah Haurin


Medicine, Research and HIV

Duke senior Jesse Mangold has had an interest in the intersection of medicine and research since high school. While he took electives in a program called “Science, Medicine, and Research,” it wasn’t until the summer after his first year at Duke that he got to participate in research.

As a member of the inaugural class of Huang fellows, Mangold worked in the lab of Duke assistant professor Christina Meade on the compounding effect of HIV and marijuana use on cognitive abilities like memory and learning.

The following summer, Mangold traveled to Honduras with a group of students to collect data and help meet an overwhelming need for eye care. Mangold and the other students traveled to schools, administered visual exams, and provided free glasses to the children who needed them. Additionally, the students contributed to a growing research project, and for their part, put together an award-winning poster.

Mangold’s (top right) work in Honduras helped provide countless children with the eye care they so sorely needed.

Returning to school as a junior, Mangold wanted to focus on his greatest research interest: the molecular mechanisms of human immunodeficiency virus (HIV). Mangold found a home in the Permar lab, which investigates mechanisms of mother-to-child transmission of viruses including HIV, Zika, and Cytomegalovirus (CMV).

From co-authoring a book chapter to learning laboratory techniques, he was given “the opportunity to fail, but that was important, because I would learn and come back the next week and fail a little bit less,” Mangold said.

In the absence of any treatment, mothers who are HIV positive transmit the virus to their infants only 30 to 40 percent of the time, suggesting a component of the maternal immune system that provides at least partial protection against transmission.

The immune system functions through the activity of antibodies, or proteins that bind to specific receptors on a microbe and neutralize the threat it poses. The key to an effective HIV vaccine is identifying the most common receptors on the envelope of the virus and engineering a vaccine that can interact with any one of these receptors.

This human T cell (blue) is under attack by HIV (yellow), the virus that causes AIDS. Credit: Seth Pincus, Elizabeth Fischer and Austin Athman, National Institute of Allergy and Infectious Diseases, National Institutes of Health


Mangold is working with Duke postdoctoral associate Ashley Nelson, Ph.D., to understand the immune response conferred on the infants of HIV positive mothers. To do this, they are using a rhesus macaque model. In order to most closely resemble the disease path as it would progress in humans, they are using a virus called SHIV, which is engineered to have the internal structure of simian immunodeficiency virus (SIV) and the viral envelope of HIV; SHIV can thus serve to naturally infect the macaques but provide insight into antibody response that can be generalized to humans.

The study involves infecting 12 female monkeys with the virus, waiting 12 weeks for the infection to proceed, and treating the monkeys with antiretroviral therapy (ART), which is currently the most effective treatment for HIV. Following the treatment, the level of virus in the blood, or viral load, will drop to undetectable levels. After an additional 12 weeks of treatment and three doses of either a candidate HIV vaccine or a placebo, treatment will be stopped. This design is meant to mirror the gold-standard of treatment for women who are HIV-positive and pregnant.

At this point, because the treatment and vaccine are imperfect, some virus will have survived and will “rebound,” or replicate fast and repopulate the blood. The key to this research is to sequence the virus at this stage, to identify the characteristics of the surviving virus that withstood the best available treatment. This surviving virus is also what is passed from mothers on antiretroviral therapy to their infants, so understanding its properties is vital for preventing mother-to-child transmission.

As a Huang fellow, Mangold had the opportunity to present his research on the compounding effect of HIV and marijuana on cognitive function.

Mangold’s role is looking into the difference in viral diversity before treatment commences and after rebound. This research will prove fundamental in engineering better and more effective treatments.

In addition to working with HIV, Mangold will be working on a project looking into a virus that doesn’t receive the same level of attention as HIV: Cytomegalovirus. CMV is the leading congenital cause of hearing loss, and mother-to-child transmission plays an important role in the transmission of this devastating virus.

Mangold and his mentor, pediatric resident Tiziana Coppola, M.D., are authoring a paper that reviews existing literature on CMV to determine whether the prevalence of CMV in women of child-bearing age predicts the number of children who suffer CMV-related hearing loss. With this study, Mangold and Coppola hope to identify whether there is a component of the maternal immune system that confers some immunity to the child, which could then be targeted for vaccine development.

After graduation, Mangold will continue his research in the Permar lab during a gap year while applying to MD/PhD programs. He hopes to continue studying at the intersection of medicine and research in the HIV vaccine field.

Post by undergraduate blogger Sarah Haurin


Quantifying Sleepiness and How It Relates to Depression

Sleep disturbance is a significant issue for many individuals with depressive illnesses. While most individuals deal with an inability to sleep, or insomnia, about 20-30% of depressed patients report the opposite problem – hypersomnia, or excessive sleep duration.

David Plante’s work investigates the relationship between depressive disorders and hypersomnolence. Photo courtesy of sleepfoundation.org

Patients who experience hypersomnolence report excessive daytime sleepiness (EDS) and often present as though sleep-deprived despite sleeping long hours, making the condition difficult to identify and poorly researched.

David Plante’s research focuses on a neglected type of sleep disturbance: hypersomnolence.

David T. Plante, MD, of the University of Wisconsin School of Medicine and Public Health, studies the significance of hypersomnolence in depression. He said the condition is resistant to treatment, often persisting even after depression has been treated, and its role in increasing risk of depression in previously healthy individuals needs to be examined.

One problem in studying daytime sleepiness is quantifying it. Subjective measures include the Epworth sleepiness scale, a quick self-report of how likely you are to fall asleep in a variety of situations. Objective measures are often more involved; the Multiple Sleep Latency Test (MSLT), for example, requires an individual to attempt to take 4-5 naps, each 2 hours apart, in a lab while an EEG records brain activity.
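Scoring the Epworth scale is simple enough to sketch. As a rough illustration (clinical cutoffs vary by source, so treat the threshold here as an assumption), the scale sums eight self-ratings of 0-3, and totals above about 10 are commonly read as excessive daytime sleepiness:

```python
def epworth_score(ratings):
    """Total Epworth Sleepiness Scale score from eight 0-3 self-ratings,
    one per everyday situation (reading, watching TV, riding in a car...)."""
    assert len(ratings) == 8 and all(0 <= r <= 3 for r in ratings)
    return sum(ratings)

score = epworth_score([2, 1, 3, 0, 2, 1, 1, 2])  # = 12
# A common (but not universal) rule of thumb: totals above 10 suggest EDS.
print(score, "suggests EDS" if score > 10 else "within normal range")
```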

The MSLT measures how long it takes a person to fall asleep. Individuals with hypersomnolence will fall asleep faster than other patients, but determining a cutoff for what constitutes healthy and what qualifies as hypersomnolence has made the test an inexact measure. Typical cutoffs of 5-8 minutes provide a decent measure, but further research has cast doubt on this test’s value in studying depression.

The Wisconsin Sleep Cohort Study is an ongoing project begun in 1988 that follows state employees and includes a sleep study every four years. From this study, Plante has found an interesting and seemingly paradoxical relationship: while an increase in subjective measures of sleepiness is associated with increased likelihood of depression, objective measures like the MSLT associate depression with less sleepiness. Plante argues that this paradoxical relationship does not represent an inability for individuals to report their own sleepiness, but rather reflects the limitations of the MSLT.

Plante proposed several promising candidates for quantitative measures of excessive daytime sleepiness. One candidate, which is already a tool for studying sleep deprivation, is a ‘psychomotor vigilance task,’ where lapses in reaction time correlate with daytime sleepiness. Another method involves infrared measurements of the dilation of the pupil. Pupils dilate when a person is sleepy, so this somatic reaction could be useful.

High density EEG allowed Plante to identify the role of disturbed slow wave sleep in hypersomnolence.

Another area of interest for Plante is the signs of depressive sleepiness in the brain. Using high density EEG, which covers the whole head of the subject, Plante found that individuals with hypersomnolence experience less of the sleep cycle most associated with restoration, known as slow wave sleep. He identified a potential brain circuitry associated with sleepiness, but emphasized a need for methods like transcranial magnetic stimulation to get a better picture of the relationship between this circuitry and observed sleepiness.

By Sarah Haurin

Better Butterfly Learners Take Longer to Grow Up

Emilie Snell-Rood studies butterflies to understand the factors that influence plasticity.

The ability of animals to vary their phenotypes, or physical expression of their genes, in different environments is a key element to survival in an ever-changing world.

Emilie Snell-Rood, PhD, of the University of Minnesota, is interested in why this phenomenon of plasticity varies. Some animals' phenotypes are relatively stable despite varying environmental pressures, while others display a wide range of behaviors.

Researchers have looked into how the costs of plasticity limit its variability. While many biologists expected energetic costs to explain the limits on plasticity, only about 30 percent of studies that have looked for plasticity-related costs have found them.

Butterflies’ learning has provided insight into developmental plasticity.

With her butterfly model, Snell-Rood has worked to understand why these researchers have found so few results.

Snell-Rood hypothesized that the life history of an animal, or the timing of major developmental events like weaning, should be of vital importance in the constraints on plasticity, specifically on the type of plasticity involved in learning. Much of learning involves trial and error, which is costly – it requires time, energy, and exposure to potential predators while exploring the environment.

Additionally, behavioral flexibility requires an investment in developing brain tissue to accommodate this learning.

Because of these costs, animals that engage in this kind of learning must forgo reproduction until later in life.

To test the costs of learning, Snell-Rood used butterflies as a subject. Butterflies require developmental plasticity to explore their environments and optimize their food finding strategies. Over time, butterflies get more efficient at landing on the best host plants, using color and other visual cues to find the best food sources.

Studying butterfly families shows that families that are better learners have increased volume in the part of the brain associated with sensory integration. Furthermore, experimentally speeding up an organism’s life history leads to a decline in learning ability.

These results support a tradeoff between an organism’s developmental plasticity and life history. While this strategy is more costly in terms of investment in neural development and energy investment, it provides greater efficacy in adaptation to environment. However, further pressures from resource availability can also influence plasticity.

Looking to the butterfly model, Snell-Rood found that quality nutrition increases egg production as well as areas of the brain associated with plasticity.

Understanding factors that influence an animal’s plasticity is becoming increasingly important. Not only does it allow us to understand the role of plasticity in evolution up to this point, but it allows us to predict how organisms will adapt to novel and changing environments, especially those that are changing because of human influence. For the purposes of conservation, these predictions are vital.

By Sarah Haurin

ECT: Shockingly Safe and Effective

Husain is interested in putting to rest misconceptions about the safety and efficacy of ECT.

Few treatments have proven as controversial and effective as electroconvulsive therapy (ECT), or ‘shock therapy’ in common parlance.

Hippocrates himself saw the therapeutic benefits of inducing seizures in patients with mental illness, observing that convulsions caused by malaria helped attenuate symptoms of mental illness. However, depictions of ECT as a form of medical abuse, as in the infamous scene from One Flew Over the Cuckoo’s Nest, have prevented ECT from becoming a first-line psychiatric treatment.

The Duke Hospital Psychiatry program recently welcomed back Duke Medical School alumnus Mustafa Husain to deliver the 2018 Ewald “Bud” Busse Memorial Lecture, which is held to commemorate a Duke doctor who pioneered the field of geriatric psychiatry.

Husain, from the University of Texas Southwestern, delivered a comprehensive lecture on neuromodulation, a term for the emerging subspecialty of psychiatric medicine that focuses on physiological treatments that are not medication.

The image most people have of ECT is probably the gruesome depiction seen in “One Flew Over the Cuckoo’s Nest.”

Husain began his lecture by stating that ECT is one of the most effective treatments for psychiatric illness. While medication and therapy are helpful for many people with depression, a considerable proportion of patients have what can be categorized as "treatment-resistant depression" (TRD). In one of the largest controlled experiments of ECT, Husain and colleagues showed that 82 percent of TRD patients treated with ECT achieved remission. While this remission rate is impressive, the rate of relapse is also substantial: over 50 percent of remitted individuals will experience a return of symptoms.

Husain’s study continued to test whether a continuation of ECT would be a potentially successful therapy to prevent relapse in the first six months after acute ECT. He found that continuation of ECT worked as well as the current best combination of drugs used.

From this study, Husain made an interesting observation – the people who were doing best in the 6 months after ECT were elderly patients. He then set out to study the best form of treatment for these depressed elderly patients.

Typically, ECT involves stimulation of both sides of the brain (bilateral), but this treatment is associated with adverse cognitive effects like memory loss. Using right unilateral ECT effectively decreased cognitive side effects while maintaining an appreciable remission rate.

After the initial treatment, patients were again assigned to either receive continued drug treatment or continued ECT. In contrast to the previous study, however, the treatment for continued ECT was designed based on the individual patients’ ratings from a commonly used depression scaling system.

The results of this study show the potential that ECT has in becoming a more common treatment for major depressive disorder: maintenance ECT showed a lower relapse rate than drug treatment following initial ECT. If psychiatrists become more flexible in their prescription of ECT, adjusting the treatment plan to accommodate the changing needs of the patients, a disorder that is exceedingly difficult to treat could become more manageable.

In addition to discussing ECT, Husain shared his research into other methods of neuromodulation, including Magnetic Seizure Therapy (MST). MST uses magnetic fields to induce seizures in a more localized region of the brain than available via ECT.

Importantly, MST does not cause the cognitive deficits observed in patients who receive ECT. Husain’s preliminary investigation found that a treatment course relying on MST was comparable in efficacy to ECT. While further research is needed, Husain is hopeful in the possibilities that interventional psychiatry can provide for severely depressed patients.

By Sarah Haurin 

Understanding the Link Between ADHD and Binge Eating Could Point to New Treatments

 

Binge eating disorder is the most prevalent eating disorder in the United States. Infographic courtesy of Multi-Service Eating Disorders Association

With more than a third of the adult population of the United States meeting criteria for obesity, doctors are becoming increasingly interested in behaviors that contribute to these rates.

Allan Kaplan is interested in improving treatment of binge eating disorder.

Allan Kaplan, MD, of the University of Toronto, is interested in eating disorders, specifically binge eating disorder, which is observed in about 35 percent of people with obesity.

Binge eating disorder (BED) is a pattern of disordered eating characterized by consumption of a large number of calories in a relatively short period of time. In addition to these binges, patients report lack of control and feelings of self-disgust. Because of these patterns of excessive caloric intake, binge eating disorder and obesity go hand-in-hand, and treatment of the disorder could be instrumental in decreasing rates of obesity and improving overall health.

In addition to the health risks associated with obesity, binge eating disorder is associated with anxiety disorders, affective disorders, substance abuse and attention deficit hyperactivity disorder (ADHD) – in fact, about 30 percent of individuals with binge eating disorder also have a history of ADHD.

Binge eating disorder displays a high comorbidity with mood and affective disorders. Infographic courtesy of American Addiction Centers.

ADHD is characterized by an inability to focus, hyperactivity, and impulsivity, while substance abuse involves cravings and patterns of losing control followed by regret. These patterns of mental and physiological symptoms resemble those seen in patients with binge eating disorder. Kaplan and other researchers are linking the neurological patterns observed in these disorders to better understand BED.

Researchers have found that reward-related neurological pathways become active when a patient with binge eating disorder is presented with a food-related stimulus. Individuals with the eating disorder are more sensitive to food-related rewards than most people. Researchers have also identified a genetic basis — certain genes make individuals more susceptible to reward and thus more likely to engage in binges.

Because patients with ADHD exhibit similar neurological patterns, doctors are looking to drugs already approved by the FDA to treat ADHD as possible treatments for binge eating disorder. The first of these approved drugs, Vyvanse, has proven not much better than the traditional form of treatment, cognitive behavioral therapy, a form of talk therapy that aims to identify and correct dysfunctions in behavior and thought patterns that lead to disordered behaviors.

Another drug, however, proved promising in a study conducted by Kaplan and his colleagues. The ADHD drug methylphenidate, combined with CBT, led to significant clinical outcomes — patients engaged in fewer binges, reported fewer cravings, and their body mass index decreased. Kaplan argues that the most effective treatment would reduce binges, treat physiological symptoms like obesity, improve psychological disturbances like low self-esteem, and, of course, be safe. So far, the combination of psychostimulants like methylphenidate and CBT has met these criteria.

Kaplan emphasized a need to make information about binge eating disorder and its treatments more available. Most individuals currently being treated for BED do not obtain treatment knowing they have an eating disorder — they are usually diagnosed only after seeking help with obesity-related health issues or help in weight loss. Making clinicians more familiar with the disorder and its associated behaviors as well as encouraging patients to seek treatment could prove instrumental in combating the current healthcare issue of obesity.

By Sarah Haurin

Mice, motor learning, and making decisions

Advanced imaging techniques allow neuroscientists to better understand how the motor outputs we observe are created in the brain.

Early understandings of the brain viewed it as a black box that takes sensory input and generates a motor response, with the in-between functioning of the brain as a mystery.

Takaki Komiyama is curious about how the brain produces the stereotypical movements characteristic of motor learning.

Takaki Komiyama of the University of California, San Diego is curious about the relationship between sensory input, motor output, and what happens in between. “What fascinates me the most is the flexibility of this dynamic… this flexibility of the relationship between the environment and the brain is the key element of my research,” Komiyama said to an audience of Duke neuroscience researchers.

Komiyama and his lab have designed experiments to watch how the brain changes as mice learn. Specifically, they train mice to complete a lever-pushing task in response to an auditory stimulus and then use an advanced imaging technique to watch the activity of specific populations of neurons.

Komiyama based his experimental design on a hallmark of motor learning: An “expert” mouse will hear the auditory stimulus and produce a motor response that is exactly the same each time. Komiyama’s team was curious about how these reproducible movements are learned.

Focusing on the primary motor cortex, called M1 for short, Komiyama observed many different neuronal firing patterns as the mouse learned the motion of lever-pushing. As the mouse ventured into “expert” territory, usually after about two weeks of training, this variation was replaced by an activity pattern that is the same from trial to trial. In addition to being consistent, this final pattern starts earlier after the stimulus and takes less time to complete than earlier patterns. In other words, during learning, the brain tries out different pathways for the goal action and then converges on the most efficient way of producing the desired response.

Komiyama then turned his focus to M2, the secondary motor cortex, which he observed to be one of the last areas activated during early learning trials but one of the first activated during late trials. To test M2’s role in learning, Komiyama inactivated the region in trained mice and subjected them to the same stimulus-motor response trial.

The mice with inactivated M2 missed more trials, took longer to initiate movement, and completed the lever push less efficiently. Essentially, the mice behaved as if they had never learned the movement, suggesting that M2 is crucial for coordinating learned motor behavior.

In addition to identifying crucial patterns of motor learning, Komiyama and his team are working to understand decision making. After designing a more complex lever-pushing task that required pushing a joystick in different directions depending on the visual stimulus, Komiyama observed the mice’s accuracy plateaued around 60%.

The mice’s internal biases prevented them from achieving better results in the visual stimuli task.

Komiyama hypothesized that this pattern of inaccuracy could be explained by the mice’s internal biases from previous trials’ outcomes. He designed a statistical model that incorporated the previous trials’ outcomes. With further testing, the model accurately predicted the mice’s wrong choices.
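The talk did not spell out the model's form, so as an illustrative assumption, a history-dependent choice model like this can be sketched as a logistic regression that adds the previous trial's choice as a predictor alongside the stimulus. Fitting it to synthetic trials recovers the simulated bias:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trials(n, w_stim=2.0, w_hist=1.0):
    """Simulate a mouse whose choice depends on the stimulus plus a
    bias toward repeating its previous choice (the 'internal bias')."""
    stim = rng.choice([-1.0, 1.0], size=n)  # correct direction each trial
    prev = np.zeros(n)                      # previous choice: -1, 0, or +1
    choices = np.zeros(n)
    for t in range(n):
        logit = w_stim * stim[t] + w_hist * prev[t]
        p_right = 1.0 / (1.0 + np.exp(-logit))
        choices[t] = 1.0 if rng.random() < p_right else -1.0
        if t + 1 < n:
            prev[t + 1] = choices[t]
    return stim, prev, choices

def fit_logistic(X, y, lr=0.5, steps=5000):
    """Plain gradient-descent logistic regression (labels in {-1, +1})."""
    w = np.zeros(X.shape[1])
    y01 = (y + 1) / 2
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y01) / len(y)
    return w

stim, prev, choices = simulate_trials(5000)
X = np.column_stack([stim, prev])
w_stim_hat, w_hist_hat = fit_logistic(X, choices)
print(w_stim_hat, w_hist_hat)  # a positive history weight reveals the bias
```

The point of the sketch is only that a model with access to trial history can predict errors a stimulus-only model cannot: whenever the history term outweighs the stimulus term, the model predicts the mouse will choose the biased, wrong direction.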

The posterior parietal cortex (PPC) is an area of the brain that has been found to be involved in decision making tasks. Komiyama observed neurons in the PPC that predicted which direction the mice would push the joystick. In addition to being active before the motor response during trials, these neurons were also active in the time between trials.

Seeing this as a neural correlate for internal biases, Komiyama hypothesized that inactivating this region would decrease the influence of bias on the mice’s choices. Sure enough, inactivating the PPC led to more accurate responses in the mice, thus confirming the PPC as a neural source of bias.

 By Sarah Haurin

How A Bat’s Brain Navigates

Most of what we know about the hippocampus, a region of the brain associated with memory formation and spatial representations, comes from research done on rodents. Rat brains have taught us a lot, but researchers in Israel have found an interesting alternative model for understanding how the hippocampus helps mammals navigate: bats.

The Egyptian fruit bat proved the perfect subject for studies of mammalian navigation.

Weizmann Institute neurophysiologist Nachum Ulanovsky, PhD, and his team have looked to bats to understand the nuances of navigation through space. While previous research has identified specific cells in the hippocampus, called place cells, that are active when an animal is located in a specific place, there is not much literature describing how animals actually navigate from point A to point B.

Nachum Ulanovsky

Ulanovsky believes that bats are an ingenious model for studying mammalian navigation. While bats have the same types of hippocampal neurons found in rats, the firing patterns of bats' neurons more closely match those of humans than rats' do.

Ulanovsky sought to test how bats know where they are going. Using GPS tracking equipment, his team found that wild bats that lived in a cave would travel up to 20 kilometers to forage fruit from specific trees. Night after night, these bats followed similar routes past perfectly viable food sources to the same tree over and over again.

The understanding of hippocampal place cells firing at specific locations doesn’t explain the apparent guided travel of the bat night after night, and other explanations like olfactory input do not explain why the bats fly over good food sources to their preferred tree.

The researchers designed an experiment to test how bats encode the 3D information necessary for this navigation. By letting the bats fly around and recording brain activity, Ulanovsky and his team found that the bats' three-dimensional place fields are actually spherical in shape. They also found another type of hippocampal cell that encodes the orientation the bat is facing. These head direction cells operate in a coordinate system that gives the animal a continuous awareness of its orientation as it moves through space.


Ulanovsky found bats relied on memory to navigate toward the goal.

To understand how the bats navigate toward a specific goal, the researchers devised another experiment. They constructed a goal with a landing place and a food incentive. The bat would learn where the goal was and find it. In order to test whether the bats’ ability to find the goal was memory-based, or utilized the hippocampus, the researchers then conducted trials where the goal was hidden from the bats’ view.

To test whether the bats relied on memory, Ulanovsky's team measured the goal-direction angle, or the angle between the bat's head orientation and the goal. After being familiarized with the location of the goal, the bats tended toward a goal-direction angle of zero, meaning they oriented themselves toward the goal even when it was out of sight.
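As an illustration of the measurement (a 2D simplification of the bats' 3D flight, with made-up coordinates), the goal-direction angle is just the signed difference between the bat's heading and the bearing from the bat to the goal:

```python
import numpy as np

def goal_direction_angle(bat_xy, heading_deg, goal_xy):
    """Signed angle (degrees) between the bat's head orientation and the
    direction from the bat to the goal; 0 means the bat faces the goal."""
    to_goal = np.asarray(goal_xy, float) - np.asarray(bat_xy, float)
    goal_bearing = np.degrees(np.arctan2(to_goal[1], to_goal[0]))
    # Wrap the difference into (-180, 180] so left/right turns stay signed.
    return (goal_bearing - heading_deg + 180.0) % 360.0 - 180.0

# Bat at the origin heading due east (0 degrees), goal to the northeast:
print(goal_direction_angle((0, 0), 0.0, (1, 1)))  # 45.0
```

A population of bats that has memorized the goal location should show these angles clustering near zero, which is exactly the signature the team reported.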

Continued research identified cells that encode information about the distance the bat is from the goal, the final piece allowing bats to navigate to a goal successfully. These hippocampal cells selectively fire when the bat is within specific distances of the goal, allowing for an awareness of location over distance.

While Ulanovsky and his team have had remarkable success in identifying new types of cells, as well as new functions of known cells, in the hippocampus, further research in a more natural setting is required.

“If we study only under these very controlled and sterile environments, we may miss the very thing we are trying to understand, which is behavior,” Ulanovsky concluded.

By Sarah Haurin

Dopamine, Drugs, and Depression

The neurotransmitter dopamine plays a major role in mental illnesses like substance abuse disorders and depressive disorders, as well as a more general role in reward and motivational systems of the brain. But there are still certain aspects of dopamine activity in the brain that we don’t know much about.

Nii Antie Addy and his lab are interested in the role of dopamine in substance abuse and mood disorders.

Duke graduate Nii Antie Addy, PhD, and his lab at Yale School of Medicine have been focusing on less-studied aspects of dopamine activity in a specific part of the brain: the ventral tegmental area (VTA).

To understand the mechanisms underlying this association, Addy and his team looked at cue-induced drug-seeking behavior. Using classical conditioning, rats can be trained to pair certain cues with the reward of drug administration. When a rat receives an unexpected reward, dopamine activity increases. After conditioning, dopamine is released in response to the cue more than to the drug itself. Looking at the patterns of dopamine release in rats that are forced to undergo detoxification can thus provide insight into how these cues and neurotransmitter activity relate to relapse of substance abuse.
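This shift of the dopamine response from the reward to the predictive cue is the classic pattern captured by temporal-difference (TD) learning models, where the prediction error plays the role of the phasic dopamine signal. The toy model below is a standard textbook sketch of that idea, not code from Addy's lab; all names and parameter values are hypothetical.

```python
def td_conditioning(n_trials=200, alpha=0.1):
    """Toy temporal-difference account of the dopamine shift from
    reward to cue. One cue precedes a fixed reward (r = 1) on every
    trial; the prediction error (delta) at each event stands in for
    phasic dopamine. Parameters are hypothetical and illustrative."""
    v_cue = 0.0                      # learned value of the cue
    for _ in range(n_trials):
        delta_cue = v_cue            # surprise at the cue (it predicts reward)
        delta_reward = 1.0 - v_cue   # surprise at the reward itself
        v_cue += alpha * delta_reward
    return delta_cue, delta_reward

d_cue, d_reward = td_conditioning()
# After training, the "dopamine" response has moved to the cue:
print(d_cue > 0.9, d_reward < 0.1)  # True True
```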

Rats were taught to self-administer cocaine, with each administration of the drug paired with the cue. After a period of forced detoxification, the rodents continued to try to self-administer the drug, even when the drug was withheld and only the cue was presented. This finding again demonstrates the connection between the cue and drug-seeking behavior.

Studying the activity in the VTA gave additional insights into the regulation of this system. During the period of abstinence, when the rodents were forced to detox, researchers observed an increase in the activity of cholinergic neurons, or neurons that respond to the neurotransmitter acetylcholine.

Using these observations, Addy and his team sought to identify which of the various receptors that respond to acetylcholine can be used to regulate the dopamine system behind drug-seeking behaviors. They discovered that a specific type of acetylcholine receptor, the muscarinic receptor, is involved in more general reward-seeking behaviors and thus may be a target for therapies.

Using Isradipine, a drug already approved by the FDA for treatment of high blood pressure, Addy designed an experiment to test the role of these receptors. He co-opted the drug to act as a calcium antagonist in the VTA, and thus increase dopamine activity, in rodents during their forced detox and before restoring their access to cocaine. The outcome was promising: administration of Isradipine was associated with a decrease in cocaine-seeking behavior when the rodents were then placed in the chamber with the cue.

Understanding the role of cholinergic neurons in regulating dopamine-related mental illnesses like substance abuse also offers insights into depressive and anxiety disorders. If the same pathway implicated in cue-induced drug-seeking were involved in depressive and anxious behaviors, then increasing cholinergic activity should increase pro-depressive behavior.

Addy’s experiment yielded exactly these results, opening up new areas to be further researched to improve the treatment of mood disorders.

Post by Sarah Haurin

 

How We Know Where We Are

The brain is a personalized GPS. It keeps track of where you are in time and space without your conscious awareness.

The hippocampus is a key structure in formation of memories and includes cells that represent a person’s environment.

Daniel Dombeck, PhD, and his team of researchers at Northwestern University have been using a technique Dombeck designed himself to figure out exactly how the brain knows where and when we are. He shared his methods and findings with a group of neurobiology researchers at Duke on Tuesday.

Dombeck and his lab at Northwestern are working to identify exactly how the brain represents spatial environments.

The apparatus used for these experiments was adapted from a virtual reality system. They position a mouse on a ball-like treadmill that it manipulates to navigate through a virtual reality field or maze projected for the mouse to see. Using water as a reward, Dombeck’s team was able to train mice to traverse their virtual fields in a little over a week.

In order to record data about brain activity in their mice as they navigated virtual hallways, Dombeck and his team designed a specialized microscope that could record activity of single cells in the hippocampus, a deep brain structure previously found to be involved in spatial navigation.

The device allows researchers to observe single cells as a mouse navigates through a simulated hallway.

Previous research has identified hippocampal place cells, specialized cells in the hippocampus that encode information about an individual’s current environment. The representations of the environment that these place cells encode are called place fields.

Dombeck and his colleague Mark Sheffield of the University of Chicago were interested in how we encode new environments in the hippocampus.

Sheffield studied the specific neural mechanisms behind place field formation.

After training the mice to navigate in one virtual environment, Sheffield switched the virtual hallway, thus simulating a new environment for the mouse to navigate.

They found that the formation of these new place fields initially uses existing neural networks, and then requires learning to adapt and strengthen the new representations.

After identifying the complex system representing this spatial information, Dombeck and colleagues wondered how the system of representing time compared.

Jim Heys, a colleague of Dombeck, designed a new virtual reality task for the lab mice.

In order to train the mice to rely on an internal representation of passing time, Heys engineered a door-stop task, where a mouse traversing the virtual hallway would encounter an invisible door. If the mouse waited 6 seconds at the door before trying to continue on the track, it would be rewarded with water. After about three months of training the mice, Heys was finally able to collect information about how they encoded the passing of time.

Heys identified cells in the hippocampus that would become active only after a certain amount of time had passed: one cell would be active after 1 second, another after 2 seconds, and so on, until the 6-second wait time was reached. Then the mouse knew it was safe to continue down the hallway.
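The sequence of time cells tiling the delay can be pictured as a simple lookup: each cell "owns" one second of the wait. The sketch below is a deliberately crude model of that idea; real time cells have overlapping, noisy firing fields, and the function name and structure here are hypothetical.

```python
def active_time_cell(elapsed_s, n_cells=6):
    """Toy model of hippocampal time cells tiling a 6-second wait.

    Cell i (0-indexed) is 'active' during second i of the delay.
    Purely illustrative; real time fields overlap and are noisy."""
    if elapsed_s < 0 or elapsed_s >= n_cells:
        return None  # outside the delay period
    return int(elapsed_s)

# During the 6-second wait, a different cell is active each second:
sequence = [active_time_cell(t) for t in range(6)]
print(sequence)  # [0, 1, 2, 3, 4, 5]
```

When the last cell in the sequence has fired, the full delay has elapsed, which is how a chain like this can signal that the wait is over.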

When comparing the cells active in each different task, Dombeck and Heys found that the cells that encode time information are different from the cells that encode spatial information. In other words, the cells that hold information about where we are in time are separate from the ones that tell us where we are in space.

Still, these cells work together to create the built-in GPS we share with animals like mice.

By Sarah Haurin
