Following the people and events that make up the research community at Duke

Students exploring the Innovation Co-Lab

Author: Sarah Haurin

Predictive maps in the brain

How do we represent space in the brain? Neuroscientists have been working to understand this question since the mid-20th century, when researchers like E. C. Tolman started experimenting with rats in mazes. When placed in a maze with a food reward that the rats had been trained to retrieve, the rats consistently chose the shortest path to the reward, even if they hadn’t practiced that path before.

Sam Gershman is interested in how we encode information about our environments.

Over 50 years later, researchers like Sam Gershman, PhD, of Harvard’s Gershman Lab are still working to understand how our brains encode information about space.

Gershman’s research questions center around the concept of a cognitive map, which allows the brain to represent landmarks in space and the distance between them. He spoke at a Center for Cognitive Neuroscience colloquium at Duke on Feb. 7.

Maps are formed via reinforcement learning, which involves predicting and maximizing future reward. When faced with a problem that has multiple steps, an individual can rely on previously learned predictions about the future, a strategy called the successor representation (SR). This suggests that the maps we hold in our brains are predictive rather than retrospective.
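Formally, under a fixed policy with state-transition matrix T and discount factor gamma, the SR matrix is M = (I - gamma*T)^-1, whose entries give the expected discounted number of future visits to each state. A minimal numerical sketch on a four-state linear track (the track, policy, and discount are illustrative assumptions, not a model from the talk):

```python
import numpy as np

# Four-state linear track with a policy that always steps rightward;
# the final state is absorbing. T[i][j] = P(next state j | current state i).
T = np.array([
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 1],  # absorbing end of the track
], dtype=float)

gamma = 0.9  # discount factor: how steeply future occupancy is down-weighted

# Successor representation: M = (I - gamma * T)^-1.
# M[i][j] is the expected discounted number of future visits to state j
# when starting from state i -- a predictive map of upcoming locations.
M = np.linalg.inv(np.eye(4) - gamma * T)

print(np.round(M[0], 3))  # predictive profile from the start of the track
```

From the first state, the map weights upcoming locations by powers of gamma, while the absorbing end state accumulates the remaining discounted occupancy; this is the sense in which each location encodes where the animal is going to be, not just where it is.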

One region implicated in representing physical space is the hippocampus, where place cell activity corresponds to positions in physical space. In one study, Gershman found that as rats move through space, place field activity skews opposite the direction of travel; in other words, activity reflects both where the rodent currently is and where it just was. This pattern suggests the encoding of information that will be useful for future travel through the same terrain. In Gershman’s words, “As you repeatedly traverse the linear track, the locations behind you now become predictive of where you are going to be in the future.”

Activation patterns in place cells correspond to both where the animal is and where the animal just was, pointing to the construction of a predictive map during learning. Graphic courtesy of Stachenfeld et al., 2017.

This idea that cognitive activity during learning reflects the construction of a predictive map is further supported by studies in which rodents encounter novel barriers. After an animal has been trained to retrieve a reward from a particular location, introducing a barrier along the known path leads to increased place cell activity as the animal approaches the barrier; the animal is updating its predictive map to account for the novel obstacle.

This model also explains the context preexposure facilitation effect, seen when animals are introduced to a new environment and subsequently exposed to a mild electrical shock. Animals that spend more time in the new environment before receiving the shock show a stronger fear response upon subsequent exposures to that environment than those that are shocked immediately upon arrival. Gershman attributes this to the time it takes the animal to construct a predictive map of the new environment; if the animal is shocked before it can build that map, it may be less able to generalize the fear response to the new environment.

With this understanding of cognitive maps, Gershman presents a compelling and far-reaching model to explain how we encode information about our environments to aid us in future tasks and decision making.

Brain networks change with age

Graph theory allows researchers to model the structural and functional connection between regions of the brain. Image courtesy of Shu-Hsien Chu et al.

As we age, our bodies change, and these changes extend into our brains and cognition. Although research has identified many changes to the brain with age, like decreases in gray matter volume or delayed recall from memory, researchers like Shivangi Jain, PhD, are interested in a deeper look at how the brain changes with age.

Shivangi Jain uses graph theory to study how the brain changes with age.

As a post-doctoral associate in the David Madden Lab at Duke, Jain is interested in how structural and functional connectivity in the brain change with age. She relies on the increasingly popular method of graph theory, which models the brain as a set of interconnected nodes, or brain regions. Studying the brain in this way allows researchers to link the physical layout of the brain to how its regions interact when they are active. Structural connectivity represents actual anatomical connections between regions of the brain, while functional connectivity refers to correlated activity between brain regions.

Jain’s studies use a series of tasks that test speed, executive function, and memory, each of which declines with age. Using fMRI data, Jain observed a decline in functional connectivity, with functional modules becoming less segregated with age. In terms of structural connectivity, aging was associated with weaker white matter connections and lower global efficiency, a measure based on the path lengths between modules, where shorter paths are more efficient. Thus, the aging brain shows changes at the anatomical, activational, and behavioral levels.
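Global efficiency, one of the measures mentioned above, is defined as the average inverse shortest-path length over all pairs of nodes. A minimal sketch on toy unweighted networks (this is the generic graph-theoretic quantity, not Jain's actual parcellation or analysis pipeline):

```python
import numpy as np
from collections import deque

def global_efficiency(adj):
    """Average of 1/d(i, j) over all ordered node pairs, where d is the
    shortest-path length in an unweighted, undirected adjacency matrix.
    Higher values mean regions are reachable via shorter paths."""
    n = len(adj)
    total = 0.0
    for src in range(n):
        # breadth-first search gives shortest-path lengths from src
        dist = [-1] * n
        dist[src] = 0
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in range(n):
                if adj[u][v] and dist[v] == -1:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(1.0 / d for d in dist if d > 0)
    return total / (n * (n - 1))

# A fully connected 4-node network is maximally efficient (efficiency 1);
# a chain, where signals pass through intermediate nodes, is less so.
full = np.ones((4, 4)) - np.eye(4)
chain = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
print(global_efficiency(full), round(global_efficiency(chain), 3))
```

A decline in white matter connections with age effectively removes edges from the structural graph, lengthening paths and lowering this efficiency score.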

Jain then examined how these network-level changes played a role in the observed behavioral changes. Using statistical modeling, she found that the decline in performance in tasks for executive control could be explained by the observed changes in functional connectivity. Furthermore, Jain found that the changes in structural connectivity caused the change in functional connectivity. Taken together, these results indicate that the physical connections between areas in the brain deteriorate with age, which in turn causes a decrease in functional connectedness and a decline in cognitive ability.

Research like Jain’s can help explain the complicated relationships between brain structure and function, and how these relationships affect behavioral output.

Post by undergraduate blogger Sarah Haurin

The evolution of a tumor

The results of evolution are often awe-inspiring — from the long neck of the giraffe to the majestic colors of a peacock — but evolution does not always create structures of function and beauty.

In the case of cancer, the growth of a population of malignant cells from a single cell reflects a process of evolution too, but with much more harrowing results.

Johannes Reiter uses mathematical models to understand the evolution of cancer

Researchers like Johannes Reiter, PhD, of Stanford University’s Translational Cancer Evolution Laboratory, are examining the path of cancer from a single cell to many metastatic tumors. By using this perspective and simple mathematical models, Reiter interrogates current practices in cancer treatment. He spoke at Duke’s mathematical biology seminar on Jan. 17.

The evolutionary process of cancer begins with a single cell. At each division, a cell acquires a few mutations to its genetic code, most of which are inconsequential. However, if the mutations occur in certain genes called driver genes, the cell lineage can follow a different path of rapid growth. If cells carrying these mutations survive, they continue to divide at a rate faster than normal, and the result is a tumor.

As cells divide, they acquire mutations that can drive abnormal growth and form tumors. Tumors and their metastases can consist of diverse cell populations, complicating treatment plans and patient outcomes. Image courtesy of Reiter Lab

With each additional division, the cell continues to acquire mutations. The result is that a single tumor can consist of a variety of unique cell populations; this diversity is called intratumoral heterogeneity (ITH). As tumors metastasize, or spread to other locations throughout the body, the possibility for diversity grows.

Intratumoral heterogeneity can exist within primary tumors, within metastases, or between metastases. Vogelstein et al., Science, 2013

Reiter describes three flavors of ITH. Intra-primary heterogeneity describes the diversity of cell types within the initial tumor. Intrametastatic heterogeneity describes the diversity of cell types within a single metastasis. Finally, inter-metastatic heterogeneity describes diversity between metastases from the same primary tumor.

For Reiter, inter-metastatic heterogeneity presents a particularly compelling problem. If treatment plans are made based on biopsy of the primary tumor but the metastases differ from each other and from the primary tumor, the efficacy of treatment will be greatly limited.

With this in mind, Reiter developed a mathematical model to predict whether a cell sample collected by biopsy of just the primary tumor would provide adequate information for treatment.

Using genetic sequence data from patients who had at least two untreated metastases and a primary tumor, Reiter found that metastases and primary tumors overwhelmingly share a single driver gene. Reiter said this confirmed that a biopsy of the primary tumor should be sufficient to plan targeted therapies, because the risk of missing driver genes that are functional in the metastases proved to be negligible.
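That consistency check amounts to simple set operations over per-lesion driver-gene calls. A sketch with invented gene sets (the gene names are common cancer drivers used purely as placeholders, not data from Reiter's study):

```python
# Hypothetical driver-gene calls for one patient's lesions.
primary = {"TP53", "KRAS", "SMAD4"}
metastases = [
    {"TP53", "KRAS", "SMAD4"},
    {"TP53", "KRAS", "SMAD4", "APC"},
]

# Drivers shared by the primary tumor and every metastasis: a therapy
# targeting these should act on all lesions.
shared = primary.intersection(*metastases)

# Drivers present in some metastasis but missing from the primary biopsy,
# i.e. what a primary-only biopsy would fail to see.
missed_by_biopsy = set.union(*metastases) - primary

print(sorted(shared), sorted(missed_by_biopsy))
```

In the patient data Reiter analyzed, the analog of `missed_by_biopsy` for functional drivers was overwhelmingly negligible, which is what supports planning targeted therapy from the primary biopsy alone.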

In his next endeavors as a new member of the Canary Center for Cancer Early Detection, Reiter plans to use his knack for mathematical modeling to tackle the problem of identifying cancer while it is still in its most treatable stage.

Post by undergraduate blogger Sarah Haurin

Does aging make our brains less efficient?

We are an aging population. Demographic projections predict the largest population growth will be in the oldest age group; one study predicted a doubling of the number of people aged 65 and over between 2012 and 2050. Understanding aging and prolonging healthy years is thus becoming increasingly important.

Michele Diaz and her team explore the effects of aging on cognition.

For Michele Diaz, PhD, of Pennsylvania State University, understanding aging is most important in the context of cognition. She’s a former Duke faculty member who visited campus recently to update us on her work.

Diaz said the relationship between aging and how we think is much more nuanced than the usual stereotype of a steady cognitive decline with age.

Research has found that change in cognition with age cannot be explained as a simple decline: while older people tend to decline in fluid intelligence, or information processing, they maintain crystallized intelligence, or knowledge.

Diaz’s work explores the relationship between aging and language. Aging in the context of language shows an interesting phenomenon: older people have more diverse vocabularies, but may take longer to produce these words. In other words, as people age, they continue to learn more words but have a more difficult time retrieving them, leading to a more frequent tip-of-the-tongue experience.

In order to understand the brain activation patterns associated with such changes, Diaz conducted a study in which participants of varying ages were asked to name objects depicted in images while undergoing fMRI scanning. As expected, both age groups were less accurate at naming less common objects, and the older adults showed slightly lower naming accuracy than the younger adults.

Additionally, Diaz found that the approach older adults take to solving more difficult tasks may be different from younger adults: in younger adults, less common objects elicited an increase in activation, while older adults showed less activation for these more difficult tasks.

Additionally, an increase in activation was associated with a decrease in accuracy. Taken together, these results show that younger and older adults rely on different regions of the brain when presented with difficult tasks, and that the approach younger adults take is more efficient.

In another study, Diaz and her team explored picture recognition of objects varying in semantic and phonological neighborhood density. Rather than manipulating how common the pictured objects are, this approach looks at networks of words based on whether they sound similar or have similar meanings. Words with denser networks, that is, more similar-sounding or similar-meaning neighbors, should be easier to recognize.

An example of a dense (left) and sparse (right) phonological neighborhood. Words with a greater number of similar sounding or meaning words should be more easily recognized. Image courtesy of Vitevitch, Ercal, and Adagarla, Frontiers in Psychology, 2011.
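The neighbor relation behind neighborhood density can be sketched with a one-edit rule: two words are neighbors if a single substitution, insertion, or deletion turns one into the other. Real psycholinguistic studies compute this over phoneme transcriptions; the spelling-based version and the tiny lexicon below are simplifications for illustration:

```python
def is_neighbor(w1, w2):
    """True if w1 and w2 differ by exactly one substitution,
    insertion, or deletion."""
    if w1 == w2:
        return False
    if len(w1) == len(w2):
        # same length: neighbors iff exactly one position differs
        return sum(a != b for a, b in zip(w1, w2)) == 1
    if abs(len(w1) - len(w2)) == 1:
        # length differs by one: neighbors iff deleting one character
        # from the longer word yields the shorter word
        shorter, longer = sorted((w1, w2), key=len)
        return any(longer[:i] + longer[i + 1:] == shorter
                   for i in range(len(longer)))
    return False

def neighborhood_density(word, lexicon):
    return sum(is_neighbor(word, w) for w in lexicon)

lexicon = ["cat", "bat", "cap", "cot", "at", "dog"]
print(neighborhood_density("cat", lexicon))  # dense neighborhood
print(neighborhood_density("dog", lexicon))  # sparse neighborhood
```

In this toy lexicon "cat" has four neighbors while "dog" has none, mirroring the dense-versus-sparse contrast in the figure above.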

With this framework, Diaz found no age effect on recognition ability for differences in semantic or phonological neighborhood density. These results suggest that adults may experience stability in their ability to process phonological and semantic characteristics as they age.

Teasing out these patterns of decline and stability in cognitive function is just one part of understanding aging. As our population ages, research like Diaz’s will only become more important for improving the care of this growing demographic group.

Post by undergraduate blogger Sarah Haurin

Predicting sleep quality with the brain

Modeling functional connectivity allows researchers to compare brain activation to behavioral outcomes. Image: Chu, Parhi, & Lenglet, Nature, 2018.

For undergraduates, sleep can be as elusive as it is important. For undergraduate researcher Katie Freedy, Trinity ’20, understanding sleep is even more important because she works in Ahmad Hariri’s Lab of Neurogenetics.

After taking a psychopharmacology class while studying abroad in Copenhagen, Freedy became interested in the default mode network, a brain network implicated in autobiographical thought, self-representation and depression. Upon returning to her lab at Duke, Freedy wanted to explore the interaction between brain regions like the default mode network with sleep and depression.

Freedy’s project uses data from the Duke Neurogenetics Study, which collected brain scans and measures of anxiety, depression, and sleep from 1,300 Duke undergraduates. While previous research has found connections between brain connectivity, sleep, and depression, Freedy was interested in a novel approach.

Connectome predictive modeling (CPM) is a statistical technique that uses fMRI data to create models for connections within the brain. In the case of Freedy’s project, the model takes in data on resting state and task-based scans to model intrinsic functional connectivity. Functional connectivity is mapped as a relationship between the activation of two different parts of the brain during a specific task. By looking at both resting state and task-based scans, Freedy’s models can create a broader picture of connectivity.

To build the best model, a procedure is repeated for each subject in which that subject’s data is left out while the model is fit. Once the model is constructed, its validity is tested by taking the held-out subject’s brain scan data and assessing how well the model predicts that subject’s behavioral data. Repeating this for every subject trains the model to make predictions of behavioral data from brain connectivity that are both accurate and generally applicable.
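The leave-one-out validation loop can be sketched as follows. This is a toy illustration with simulated data and ordinary least squares standing in for the real pipeline; actual CPM selects network edges by correlation thresholds, and the subject and feature counts here are arbitrary, not the Duke Neurogenetics data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_edges = 30, 5

# Simulated connectivity features and a behavioral score (e.g. sleep
# quality) that truly depends on two of the edges, plus noise.
X = rng.normal(size=(n_subjects, n_edges))
true_w = np.array([1.0, -0.5, 0.0, 0.0, 0.0])
y = X @ true_w + rng.normal(scale=0.1, size=n_subjects)

predictions = np.empty(n_subjects)
for i in range(n_subjects):
    keep = np.arange(n_subjects) != i                      # leave subject i out
    w, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)  # fit on the rest
    predictions[i] = X[i] @ w                              # predict held-out subject

# Model quality: correlation between predicted and observed scores
r = np.corrcoef(predictions, y)[0, 1]
print(round(r, 3))
```

Because every prediction is made for a subject the model never saw during fitting, the correlation `r` measures generalization rather than in-sample fit.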

Freedy presented the preliminary results from her model this past summer at the BioCORE Symposium as a Summer Neuroscience Program fellow. The preliminary results showed that patterns of brain connectivity were able to predict overall sleep quality. With additional analyses, Freedy is eager to explore which specific patterns of connectivity can predict sleep quality, and how this is mediated by depression.

Freedy presented the preliminary results of her project at Duke’s BioCORE Symposium.

Understanding the links between brain connectivity, sleep, and depression is of particular importance to often sleep-deprived undergraduates.

“Using data from Duke students makes it directly related to our lives and important to those around me,” Freedy says. “With the field of neuroscience, there is so much we still don’t know, so any effort in neuroscience to directly tease out what is happening is important.”

Post by undergraduate blogger Sarah Haurin

These Microbes ‘Eat’ Electrons for Energy

The human body is populated by a greater number of microbes than its own cells. These microbes survive using metabolic pathways that differ drastically from our own.

Arpita Bose’s research explores the metabolism of microorganisms.

Arpita Bose, PhD, of Washington University in St. Louis, is interested in understanding the metabolism of these ubiquitous microorganisms, and putting that knowledge to use to address the energy crisis and other applications.

Photoferrotrophic organisms use light and electrons from the environment as an energy source

One of her lab’s biggest research questions involves understanding photoferrotrophy: using light and electrons from an external source for carbon fixation. Much of the energy humans consume ultimately comes from carbon fixation in phototrophic organisms like plants. Carbon fixation uses energy from light to fuel the production of sugars that we then consume for energy.

Before Bose began her research, scientists had found that some microbes interact with electricity in their environments, even donating electrons to the environment. Bose hypothesized that the reverse could also be true and sought to show that some organisms can also accept electrons from metal oxides in their environments. Using a bacterial strain called Rhodopseudomonas palustris TIE-1 (TIE-1), Bose identified this process called extracellular electron uptake (EEU).

After showing that some microorganisms can take in electrons from their surroundings and identifying a collection of genes that code for this ability, Bose found that this ability was dependent on whether a light source was also present. Without the presence of light, these organisms lost 70% of their ability to take in electrons.   

Because the organisms Bose was studying can rely on light as a source of energy, she hypothesized that this dependence on light for electron uptake could mean the electrons play a role in photosynthesis. In subsequent studies, Bose’s team found that the electrons the microorganisms were taking in were entering their photosystem.

To show that the electrons were playing a role in carbon fixation, Bose and her team looked at the activity of an enzyme called RuBisCo, which plays an integral role in converting carbon dioxide into sugars that can be broken down for energy. They found that RuBisCo was most strongly expressed and active when EEU was occurring, and that, without RuBisCo present, these organisms lost their ability to take in electrons. This finding suggests that organisms like TIE-1 are able to take in electrons from their environment and use them in conjunction with light energy to synthesize molecules for energy sources.  

In addition to broadening our understanding of the great diversity in metabolisms, Bose’s research has profound implications in sustainability. These microbes have the potential to play an integral role in clean energy generation.

Post by undergraduate blogger Sarah Haurin

How the Flu Vaccine Fails

Influenza is ubiquitous. Every fall, we line up to get our flu shots with the hope that we will be protected from the virus that infects 10 to 20 percent of people worldwide each year. But some years, the vaccine is less effective than others.

Every year, CDC scientists engineer a new flu vaccine. By examining phylogenetic relationships, which are based on shared common ancestry and relatedness, researchers identify virus strains to target with a vaccine for the following flu season.

Sometimes, they do a good job predicting which strains will flourish in the upcoming flu season; other times, they pick wrong.

Pekosz’s work has identified why certain flu seasons saw less effective vaccines.

Andrew Pekosz, PhD, is a researcher at Johns Hopkins who examines why we sometimes fail to predict which strains to target with vaccines. In particular, he studies years when the vaccine was ineffective, characterizing the most prevalent viral strains to identify the properties that set them apart.

A flu virus consists of RNA enclosed in a membrane. Vaccines work by targeting membrane proteins that facilitate movement of the viral genome into the host cells it infects. For the flu virus, this protein is hemagglutinin (HA). An additional membrane protein called neuraminidase (NA) allows the virus to release itself from an infected cell and prevents it from returning to cells it has already infected.

The flu vaccine targets proteins on the membrane of the RNA virus. Image courtesy of scienceanimations.com.

Studying the viruses that flourished in the 2014-2015 and 2016-2017 flu seasons, Pekosz and his team have identified mutations to these surface proteins that allowed certain strains to evade the vaccine.

In the 2014-2015 season, a mutation in HA conferred an advantage to the virus, but only in the presence of the antibodies elicited by the vaccine. In the absence of these antibodies, the mutation was actually detrimental to the virus’s fitness. The strain was present in low numbers at the beginning of the flu season, but the selective pressure of the vaccine pushed it to become the dominant strain by the end.

The 2016-2017 flu season saw a similar pattern of mutation, but in the NA protein. In the mutated viral strain, the epitope, the part of the viral surface where the antibody binds, was masked. Since the antibodies produced in response to the vaccine could not effectively recognize the virus, the vaccine was ineffective against these mutated strains.

With the speed at which the flu virus evolves, and the fact that numerous strains can be active in any given flu season, engineering an effective vaccine is daunting. Pekosz’s findings on how these vaccines have previously failed will likely prove invaluable at combating such a persistent and common public health concern.

Post by undergraduate blogger Sarah Haurin


The Costs of Mental Effort

Every day, we are faced with countless decisions regarding cognitive control, or the process of inhibiting automatic or habitual responses in order to perform better at a task.

Amitai Shenhav, PhD, of Brown University, and his lab are working to understand the factors that influence this decision-making process. A higher level of cognitive control is what allows us to complete hard tasks like a math problem or a dense reading, so we might expect that the optimal strategy is to exert a high level of control at all times.

Shenhav’s lab explores motivation and decision making related to cognitive control.

Experimental evidence shows this is not the case: people tend to choose easier over harder tasks, demand more money to complete harder tasks, and exert more mental effort as the reward value increases. These behaviors all suggest that subjects’ default state is not the highest possible level of control.

Shenhav’s research centers on why we see variation in levels of control. Because cognitive control is a costly process, there must be a limit to how much we can exert. These costs can be understood as tradeoffs: exerting more control diverts resources from other brain functions and brings negative affective consequences, like the stress of a difficult task.

To understand how people make decisions about cognitive control in real time, Shenhav has developed an algorithm called the Expected Value of Control (EVC) model, which focuses on how individuals weigh the costs and benefits of increasing control.

Employing this model has helped Shenhav and his colleagues identify situations in which people are likely to invest a lot of cognitive control. In one study, Shenhav simulated variability in the efficacy of control by varying whether the reward was paired only with a correct response or given at random. The team found that people learn fairly quickly whether increasing their effort raises the likelihood of earning the reward, and they adjust their control accordingly: people invest more effort when they learn that reward depends on their performance than when rewards are distributed independently of it.
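The cost-benefit logic of an EVC-style computation can be sketched as choosing the control level that maximizes expected value. The functional forms below (success probability linear in control scaled by efficacy, quadratic effort cost) are assumptions for illustration, not Shenhav's published equations:

```python
def expected_value_of_control(control, reward, efficacy, cost_weight=1.0):
    """Expected payoff of exerting a given level of control (0 to 1).
    Efficacy is how strongly effort actually raises the chance of reward;
    with zero efficacy, reward is a coin flip regardless of effort."""
    p_reward = efficacy * control + (1 - efficacy) * 0.5
    return p_reward * reward - cost_weight * control ** 2

def best_control(reward, efficacy):
    levels = [i / 10 for i in range(11)]  # candidate control levels 0.0..1.0
    return max(levels,
               key=lambda c: expected_value_of_control(c, reward, efficacy))

# When reward tracks performance (efficacy 1), full control pays off;
# when reward is random (efficacy 0), any effort is wasted cost.
print(best_control(reward=10, efficacy=1.0))  # 1.0
print(best_control(reward=10, efficacy=0.0))  # 0.0
```

The sketch reproduces the qualitative finding: optimal invested control rises with learned efficacy, and falls to zero when effort and reward are decoupled.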

Another study explored how we adjust our strategies following difficult tasks. Experiments with cognitive control often rely on paradigms like the Stroop task, where subjects are asked to identify a target cue (color) while being presented with a distractor (incongruency of the word with its text color). Shenhav found that when subjects face a difficult trial or make a mistake, they adjust by decreasing attention to the distractor.

The Stroop task is a classic experimental design for understanding cognitive control. Successful completion of Stroop task 3 requires overriding your reflex to read the word in cases where the text and its color are mismatched.
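The structure of such a trial can be sketched with a small generator; the color set and the incongruent-trial proportion are generic choices, not Shenhav's actual stimuli:

```python
import random

COLORS = ["red", "green", "blue"]

def make_trial(rng, p_incongruent=0.5):
    """One Stroop trial: a color word shown in some ink color.
    The correct response is always the ink color, never the word."""
    word = rng.choice(COLORS)
    if rng.random() < p_incongruent:
        # distractor condition: the word conflicts with its ink color
        ink = rng.choice([c for c in COLORS if c != word])
    else:
        ink = word
    return {"word": word, "ink": ink,
            "congruent": word == ink, "answer": ink}

rng = random.Random(0)  # seeded for reproducibility
trials = [make_trial(rng) for _ in range(10)]
print(sum(t["congruent"] for t in trials), "congruent of", len(trials))
```

Incongruent trials are where cognitive control is needed: naming the ink requires suppressing the automatic response of reading the word.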

A final interesting finding from Shenhav’s work tells us that part of the value of hard work may be in the work itself: people value rewards following a task in a way that scales to the effort they put into the task.

Style Recommendations From Data Scientists

A combination of data science and psychology is behind the recommendations for products we get when shopping online.

At the intersection of social psychology, data science and fashion is Amy Winecoff.

Amy Winecoff uses her background in psychology and neuroscience to improve recommender systems for shopping.

After earning a Ph.D. in psychology and neuroscience here at Duke, Winecoff spent time teaching before moving over to industry.

Today, Winecoff works as a senior data scientist at True Fit, a company that provides tools to retailers to help them decide what products they suggest to their customers.

True Fit’s software relies on collecting data about how clothes fit people who have bought them. With this data on size and type of clothing, True Fit can make size recommendations for a specific consumer looking to buy a certain product.    

In addition to recommendations on size, True Fit is behind many sites’ recommendations of products similar to those you are browsing or have bought.

While these recommender systems have been shown to work well for sites like Netflix, where you may have watched many different movies and shows in the recent past that can be used to make recommendations, Winecoff points out that this can be difficult for something like pants, which people don’t tend to buy in bulk.

To overcome this barrier, True Fit has engineered its system, called the Discovery engine, to parse a single piece of clothing into fifty different traits. With this much information, making recommendations for similar styles can be easier.
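On top of such trait vectors, a "similar styles" suggestion can be as simple as a nearest neighbor under cosine similarity. The five traits and garments below are invented for illustration; True Fit's Discovery engine uses fifty traits and, presumably, a more sophisticated metric:

```python
import math

# Hypothetical trait vectors, each value scaled 0..1:
# [sleeve_length, skirt_length, pattern_busyness, formality, stretch]
items = {
    "navy A-line dress": [0.1, 0.8, 0.2, 0.7, 0.3],
    "black A-line dress": [0.1, 0.75, 0.1, 0.8, 0.3],
    "denim jacket": [0.9, 0.0, 0.3, 0.2, 0.1],
}

def cosine(a, b):
    """Cosine similarity: near 1 when trait profiles point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

def most_similar(query, items):
    # recommend the catalog item whose trait profile best matches the query
    return max((name for name in items if name != query),
               key=lambda name: cosine(items[query], items[name]))

print(most_similar("navy A-line dress", items))  # black A-line dress
```

The open question Winecoff raises is whether distance in trait space actually tracks what shoppers perceive as similar, which is what her behavioral studies are designed to test.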

However, Winecoff’s background in social psychology has led her to question how well these algorithms make predictions that are in line with human behavior. She argues that understanding how people form their preferences is an integral part of designing a system to make recommendations.

One way Winecoff is testing how true the predictions are to human preferences is by running psychological studies to gain insight into how to fine-tune mathematically based recommendations.

With a general goal of determining how humans determine similarity in clothes, Winecoff designed an online study where subjects are presented with a piece of clothing and told the garment is out of stock. They are then presented with two options and must pick one to replace the out-of-stock item. By varying one aspect in each of the two choices, like different color, pattern, or skirt length, Winecoff and her colleagues can distinguish which traits are most salient to a person when determining similarity.

Winecoff’s work illustrates the power of combining algorithmic recommendations with social psychological outcomes, and that science reaches into unexpected places, like influencing your shopping choices.  

Post by undergraduate blogger Sarah Haurin

Bias in Brain Research

Despite apparent progress in achieving gender equality, sexism continues to be pervasive — and scientists aren’t immune.  

In a cyber talk delivered to the Duke Institute for Brain Sciences, professor Cordelia Fine of the University of Melbourne highlighted compelling evidence that neuroscientific research is yet another culprit of gender bias.

Fine says the persistent idea of gender essentialism contributes to this bias. Gender essentialism is the idea that men and women are fundamentally different, specifically at a neurological level. This “men are from Mars, women are from Venus” attitude has spread from pop culture into experimental design and interpretation.

However, studies that look for sex differences in male and female behavior tend to show more similarities than differences. One study looked at 106 meta-analyses about psychological differences between men and women. The researchers found that in areas as diverse as temperament, communication styles, and interests, gender had a small effect, representing statistically small differences between the sexes.
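The "small effect" language refers to standardized effect sizes such as Cohen's d, where values near 0.2 are conventionally called small. A sketch of the computation with invented scores (not data from the meta-analyses):

```python
import math

def cohens_d(group1, group2):
    """Standardized mean difference between two groups:
    (mean1 - mean2) / pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Invented scores on some trait: the group means differ slightly, but the
# difference is small relative to the spread within each group.
men = [10.2, 11.1, 9.8, 10.5, 10.9, 9.5, 10.0, 11.3]
women = [10.0, 10.8, 9.6, 10.4, 10.7, 9.3, 9.9, 11.0]
print(round(cohens_d(men, women), 2))
```

A small d means the two distributions overlap heavily: knowing someone's group tells you little about their individual score, which is the meta-analytic point above.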

Looking at fMRI data casts further doubt on how pronounced gender differences really are. A meta-analysis of fMRI studies investigating functional differences between men and women found a large reporting bias. Studies finding brain differences across genders were overrepresented compared to those finding similarities.

Of those small sex differences found in the central nervous system, Fine points out how difficult it is to determine their functional significance. One study found no difference between men and women in self-reported emotional experience, but found via fMRI that men exhibited more processing in the prefrontal cortex, or the executive center of the brain, than women. Although subjective experience of emotion was the same between men and women, the researchers reported that men are more cognitive, while women are more emotional.

Fine argues that conclusions like this are biased by gender essentialism. In a study she co-authored, Fine found that essentialist thinking correlates with stronger belief in gender stereotypes, with the belief that gender roles are fixed, and with the view that the current understanding of gender does not need to change.

When scientists allow preconceived notions about gender to bias their interpretation of results, our collective understanding suffers. The best way to overcome these biases is to ensure we are continuing to bring more and more diverse voices to the table, Fine said.

Fine spoke last month as part of the Society for Neuroscience Virtual Conference, “Mitigating Implicit Bias: Tools for the Neuroscientist.” The Duke Institute for Brain Sciences (@DukeBrain) made the conference available to the Duke community.  

Post by undergraduate blogger Sarah Haurin

