Duke Research Blog

Following the people and events that make up the research community at Duke.


Innocent Until Proven Guilty? Well, That Depends

This is the last of eight blog posts written by undergraduates in PSY102: Introduction to Cognitive Psychology, Summer Term I 2019.

In the criminal justice system, one might imagine that the more serious a crime is, the more extensive the evidence should be to support a guilty verdict. However, a recent study conducted at Duke University finds that jurors’ assessments of guilt rely less on the type of evidence and more on the severity of the crime.

Mock jurors in the study were more likely to find someone charged with murder guilty than someone charged with robbery.

A still from the movie “Twelve Angry Men” (1954), a tense drama about jury deliberations.

Numerous scholars have looked at how flawed forensic evidence, mistaken eyewitness identifications and defendants’ prior criminal convictions can introduce errors in criminal prosecutions.

But John Pearson, an assistant professor in four Duke departments including neurobiology, and his colleagues in law wanted to know whether the type of crime can also raise the chance of wrongful conviction. Jurors’ moral and emotional responses to various crimes may shape their reasoning about a defendant’s guilt.

The researchers aimed to understand the relationship between crime severity and confidence in guilt by seeing how mock jurors, practicing prosecutors, and other practicing lawyers weigh various types of evidence in order to make a decision on guilt.

John Pearson

Participants in the study read about 30 crime scenarios, each paired with a randomly varied set of evidence types. After reading each scenario, they rated the strength of the case on a 0-100 scale, along with their emotional and moral responses.

It appeared that the more threat or outrage participants felt toward a type of crime, the more likely they were to find the defendant guilty.

The authors also tested how different types of evidence might interact with people’s prior beliefs.

They found that DNA and non-DNA physical evidence had the most influence on participants, with only a small difference between the two: jurors placed very similar, if not identical, weight on these two types of evidence when judging guilt.
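As a toy illustration of this kind of evidence weighting (entirely hypothetical ratings and category names, not the study’s data), one could compare mean case-strength ratings on the study’s 0-100 scale against a no-evidence baseline:

```python
# Hypothetical 0-100 case-strength ratings for scenarios containing
# different evidence types (illustrative only, not the Duke data).
cases = [
    {"evidence": {"dna"},      "rating": 82},
    {"evidence": {"physical"}, "rating": 79},
    {"evidence": {"history"},  "rating": 48},
    {"evidence": set(),        "rating": 41},  # no evidence: baseline
]

def evidence_weight(cases, kind):
    """Mean rating of cases with this evidence, minus the no-evidence baseline."""
    baseline = [c["rating"] for c in cases if not c["evidence"]]
    with_e = [c["rating"] for c in cases if kind in c["evidence"]]
    return sum(with_e) / len(with_e) - sum(baseline) / len(baseline)

print(evidence_weight(cases, "dna"))       # 41.0 -- large boost
print(evidence_weight(cases, "physical"))  # 38.0 -- nearly as large
print(evidence_weight(cases, "history"))   # 7.0  -- much smaller
```

With numbers like these, DNA and non-DNA physical evidence shift ratings by nearly the same amount, while criminal history barely moves them, mirroring the pattern the study reports.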

Pearson attributes jurors’ equal weighting of DNA and non-DNA evidence to the “CSI effect”: jurors tend to give traditional forensic evidence more weight than its reliability warrants, even though DNA evidence is far more reliable than non-DNA evidence. No matter a participant’s professional role, this pattern of similar weighting held across all groups. As the study puts it, “subjects tend to overweight widely used types of forensic evidence, but give much less weight than expected to a defendant’s criminal history.”

Along with these patterns in how evidence type shaped confidence in guilt, the researchers also found a strong link between subjects’ confidence in guilt and the severity of the crime.

Notably for jurors, crime type strongly influenced confidence in guilt. The study showed a positive correlation between personal, emotional, and moral biases and “adjudicative bias,” or the likelihood of conviction.

And while the trend was strongest among jurors, practicing lawyers and prosecutors also showed a crime-type bias that tracked the seriousness of the crime, though a much smaller one.

The study’s results model how ratings of punishment, outrage, and threat depend almost entirely on the crime type and scenario. In other words, regardless of how much evidence was presented, crime type alone made jurors more likely to find someone guilty of that crime.

(Bang) Guilty!

This could mean that regardless of how much evidence, or what type, is presented, innocent people wrongly charged with crimes could be convicted more easily when the offense is more severe.

These findings indicate how easy it is to reach wrongful convictions of severe crimes within the US criminal justice system.

Guest post by Casey M. Chanler

6-Month-Old Brains Are Categorically Brilliant

This is the seventh of eight blog posts written by undergraduates in PSY102: Introduction to Cognitive Psychology, Summer Term I 2019.

Let’s say you visit your grandmother later today and come across a bowl of unknown exotic berries that look and taste similar to a raspberry. Your grandmother tells you that they are called bayberries. How would your mind react to the new word “bayberry”?

Research shows that an adult brain would probably categorize the word “bayberry” into the category of berries, and draw connections between “bayberry” and other related berry names.

But how do you think an infant would deal with a word like “bayberry”? Would he or she categorize the word the same way you would?

Elika Bergelson, a developmental psychologist at Duke University, provided some possible answers for this question in a study published in the Proceedings of the National Academy of Sciences.

Six-month-old infants were shown two objects on a screen simultaneously while a speaker labeled one of them (e.g., “Look at the dog!”).

The thing on the right is a shoe, sweetie. We’re not sure about that other thing…

The two objects were either semantically related or unrelated. For example, the words nose and mouth are semantically related, since both refer to body parts, while the words nose and boots are semantically unrelated.

As the babies were presented with these objects, their eye movements were tracked. The longer a baby stared at an object, the more confident he or she was presumed to be that the object matched the label. This served as an indicator of how well the baby understood which object the label referred to.

If the infants categorized words into semantically related groups, then they’d be more likely to confuse objects that are related. This means that the infants would perform better at choosing the correct object when the objects are unrelated.

The results suggest that infants approach words much as adults do. The babies correctly identified the labeled object more often when the two objects were unrelated than when they were related, indicating that babies mentally organize words into semantically related groups. When encountering two unrelated objects, babies can quickly distinguish between them because the objects do not belong to the same mental category.
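The preferential-looking measure can be sketched as a toy calculation (hypothetical looking times, not Bergelson’s data): the share of time spent on the labeled object should be higher for unrelated pairs than for related ones.

```python
# Each trial: pair type, milliseconds looking at the labeled (target)
# object, and milliseconds looking at the other (distractor) object.
trials = [
    {"pair": "unrelated", "target_ms": 3200, "distractor_ms": 1100},
    {"pair": "unrelated", "target_ms": 2900, "distractor_ms": 1400},
    {"pair": "related",   "target_ms": 2100, "distractor_ms": 1900},
    {"pair": "related",   "target_ms": 1800, "distractor_ms": 2000},
]

def target_preference(trials, pair_type):
    """Mean proportion of looking time spent on the labeled object."""
    props = [t["target_ms"] / (t["target_ms"] + t["distractor_ms"])
             for t in trials if t["pair"] == pair_type]
    return sum(props) / len(props)

# A clearer target preference for unrelated pairs suggests words are
# stored in semantically related clusters that blur related objects.
print(target_preference(trials, "unrelated"))
print(target_preference(trials, "related"))
```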

Elika Bergelson

However, when the two objects are related, the infants often confuse them with each other because they belong to the same or closely related categories — while 6-month-olds have developed a general categorization of nouns, their categories remain broad and unrefined, which causes the boundaries between objects in the same category to be unclear.

So what do all these results mean? Well, back to the bayberry example, it means that a 6-month-old will place the word “bayberry” into his or her mental category of “berries.” He or she might not be able to distinguish bayberries from raspberries the next time you mention the word “bayberry,” but he or she will definitely not point to bayberries when you drop the word “milk” or “car.”

Toddler Rock

If the results of this study can be replicated, it means that the infant approach to language is much more similar to the adult one than researchers previously thought: infants develop a deep, adult-like understanding of semantics far earlier than previously speculated.

While the results are exciting, the study has limitations. In addition to the small sample size, the infants came mainly from upper-middle-class families with highly educated parents. Parents in these families tend to spend more time with their infants and expose them to more words than parents of lower socio-economic status, so these findings might not be representative of the entire infant population. Nevertheless, the study sheds light on how infants approach and acquire words, and the finding could even become a new way to detect language delay in infants by the age of six months.

Guest post by Jing Liu, a psychology and neuroscience major, Trinity 2022.

A Mind at Rest Still Has Feelings

This is the sixth of eight blog posts written by undergraduates in PSY102: Introduction to Cognitive Psychology, Summer Term I 2019.

Emotions drive our everyday lives: They help us make decisions, they guide us into acting certain ways and they can even define who we are as people. But when we take a break from our busy lives and rest, does our brain do the same?

A 2016 study by Duke researchers tested whether neural models developed to distinguish emotional categories in MRI brain scans would also work on people in a resting state, that is, not engaged in any particular physical or mental activity.

An algorithm determined different patterns of brain activity that mapped to different emotional states.

When a person is active, emotions usually shape how they interact and the decisions they make. But this study, led by Kevin LaBar, a professor of psychology and neuroscience, wanted to see whether reducing a person’s activity to a minimum changes the emotions they experience.

They used a tool called functional magnetic resonance imaging (fMRI), which lets scientists measure brain activity by tracking blood flow to different areas of the brain. They were looking for universal emotions, feelings understood across all cultures and societies as the same state of mind, such as contentment, amusement, surprise, fear, anger, and sadness, plus a neutral state.

fMRI has shown that each emotion activates different portions of the brain. This matters when a person is injured or has decreased activity in a brain region, because such changes can alter how they feel, act, and interact with others. It may also help explain why certain people can better recall visual memories and other information, even in a sleeping or resting state.

This study consisted of two experiments. The first included a large number of students recruited for a larger study by Ahmad Hariri, a professor of psychology and neuroscience. These healthy, young adult university students had been assessed on a wide range of behavioral and biological traits. For this experiment, they were told to stare at a blank gray screen and rest, without thinking of anything in particular, while being scanned by the fMRI.

The second experiment used a smaller sample of just 22 participants. Before going into the fMRI, they rated how they felt emotionally in an unconstrained resting state. Once in the machine, they were told to rest and let their minds wander while viewing a blank screen that occasionally prompted them to rate their current emotional state. By the end of the experiment, each had completed 40 such ratings, choosing among 16 different emotions.

The researchers tried to quantify the occurrence of different spontaneous emotional states in resting brains.

At the end of both experiments, the researchers tested the brain scans with an algorithm that categorized emotional arousal and brain activity. They found distinct patterns of activity in these resting minds that seemed to match various emotional states the students had reported. Prior to this study, there had only been experiments which test to see how the brain is stimulated in active people in a non-resting state.

Although the experiment succeeded and taught the researchers much about the emotional states of the brain, it had limitations. One main bias in the self-report experiment was the high percentage of students reporting amusement (23.45%) and contentment (46.31%), which the researchers suspect reflects students projecting a more positive image of themselves. Another possible bias is that brain patterns might vary with an individual’s emotional status; emotional processes unfolding at both long and short time scales likely contribute to spontaneous brain activity.

This study holds important clinical implications. Being able to ‘see’ emotional states in a resting brain would help us understand how important the feelings we experience are. With refinement, fMRI could become useful for diagnosing personality or mood disorders by showing us the brain areas being stimulated during certain periods of sadness, anger, and anxiety. Such applications could help with identifying emotional experiences in individuals with impaired awareness or compromised ability to communicate.

Guest post by Brynne O’Shea.

Are People Stuck with Their Political Views?

This is the fifth of eight blog posts written by undergraduates in PSY102: Introduction to Cognitive Psychology, Summer Term I 2019.

Whether you cheered or cried when Donald Trump was elected President, or if you stood in the blazing heat marching for women’s rights, your position on socio-political issues is important to you.  Would you ever change it?

Psychologists have found that people tend to hold onto their views, even when presented with conflicting evidence. Is it ever worth your time to argue with the opposition, knowing that they will not budge from their stance?

A 2013 protest in Brussels. Photo by M0tty via Wikimedia Commons.

Researchers from Duke University explored the idea that people stand with their positions on political and social matters, even when presented with affirming or conflicting evidence.

But they also offer hope that knowing that these cognitive biases exist and understanding how they work can help lead to more rational decision-making, open debate of socio-political issues, and fact-based discussion.

The stubbornness of people’s views rests on a couple of concepts. “Resistance to belief-change” is the idea that people stand by their original views and are unwilling to change them. This can result from a cognitive bias known as “confirmation bias”: people favor evidence that supports their position and dismiss evidence that refutes it.

For example, a supporter of Donald Trump would rate an article saying he is doing a great job more favorably, while a non-supporter would favor evidence that he is doing a bad job. Supporter or not, each uses the evidence that backs their position and overlooks any conflicting evidence.

This was shown in the following 2019 experiment by the Duke team, led by Felipe De Brigard, a Duke assistant professor of philosophy and member of the Duke Institute for Brain Sciences.

The experiment began with a group of individuals across the socio-economic and political spectrum. They were presented with five socio-political issues: fracking, animal testing, drone strikes, standardized testing, and the gold standard. They read background on each issue and then reported any prior knowledge of it, to control for people favoring information they had previously encountered.

After reporting any prior knowledge, they made their decision and rated how confident they were in it. They were then asked to evaluate only affirming evidence, only conflicting evidence, or evidence for both sides. Finally, they gave their final stance on each issue.

The results showed very little change in people’s positions after they saw the evidence. On fracking, for example, about two in one hundred people changed their position after seeing affirming evidence, and even after seeing conflicting evidence, only one in five changed their stance.

Similar changes were recorded for the other issues and sets of evidence. Conflicting evidence moved people the most, but even then only a small percentage changed their stance. This is significant because it shows how resistant people are to change on account of their belief biases. Participants also rated evidence that affirmed their beliefs more favorably than evidence that conflicted with them, using the former to support their stance while overlooking the latter. This shows how cognitive biases, like confirmation bias, play an important role in decision-making.

Cognitive bias affects how we make decisions. More importantly, it entrenches our views and stops us from being open-minded. Understanding cognitive biases matters because they shape our choices and behavior. Becoming aware of biases like resistance to belief-change and confirmation bias lets people think more independently and base decisions on reason as well as emotion, because they know how these biases affect their decision-making process.

Well, do you?

We expect to act rationally, making decisions in our own best interest, yet this is often not true of humans. Having adequate information, including an understanding of how biases affect decision-making, can lead to better judgments. The next step in decision-making research is to understand how people can change their entrenched positions, countering biases like confirmation bias and bringing more fact-based, open debate to socio-political issues.

To borrow from President Obama’s campaign slogan, is that change you can believe in?

Guest Post by Casey Holman, psychology major.

Move Your Eyes and Wiggle Your Ears

This is the fourth of eight blog posts written by undergraduates in PSY102: Introduction to Cognitive Psychology, Summer Term I 2019.

Research by Duke University neuroscientists has uncovered that the eardrums move when the eyes do. Even without sound, simply moving your eyes side to side causes the eardrums to vibrate.

Because the vibrations and eye movements begin simultaneously, both processes appear to be controlled by the same parts of the brain, suggesting the same motor commands drive them, according to senior author Jennifer Groh, a professor of psychology and neuroscience.

A human ear.

Her team used rhesus monkeys and humans in an experiment that has given us new understanding of how the brain pairs hearing and seeing.

This research could help shed light on the brain’s role in experiencing outside stimuli, such as sounds or lights, or even in understanding hearing disorders. Scientists still don’t understand the purpose of eardrum movement, however.

The experiment fitted sixteen participants with microphones small enough to fit into the ear canal yet sensitive enough to pick up the eardrum’s vibrations. The brain is known to control the eardrum: its movements help regulate incoming sound and also produce small sounds called “otoacoustic emissions.” Measuring vibrations in the ear canal therefore reveals when the eardrum is moving.

LED lights were presented in front of the participants, who were asked to follow the lights with their eyes as they shifted from side to side.

Rhesus monkeys move their eardrums too!

The experiment was also replicated in three rhesus monkeys, using five of their six ears in total. These trials were conducted the same way as the human ones.

The researchers concluded that whenever the eyes move, the eardrums move in tandem, shifting toward the direction of gaze. Vibrations began shortly before and lasted slightly after the eye movements, further suggesting that the brain controls the two processes together. Larger eye movements produced larger vibrations.

These relationships fill an important gap in previous research: the simultaneous and even anticipatory eardrum vibrations, beginning roughly 10 milliseconds before the eyes move, show that the brain coordinates the two systems using the same motor commands. The signals sent to the eardrums therefore likely carry information about what the eyes are doing.

Perhaps immersive headphones or movie theaters could also take advantage of this by playing sounds linked to the movements of eyes and eardrums to create a more “realistic” experience.

While the study analyzed how side-to-side eye movements affect eardrum movement, the effect of up-and-down eye movements has yet to be examined. Hearing disorders, such as the inability to focus on one sound among many, are still being investigated. Scientists hope to further understand how the brain relates to the auditory and visual systems, and how those systems relate to each other.

Guest post by Benjamin Fiszel, Class of 2022.

Your Brain Likes YOU Most

This is the third of eight blog posts written by undergraduates in PSY102: Introduction to Cognitive Psychology, Summer Term I 2019.

Imagine you’re at a party. You have a few friends there, but the rest of the people you don’t know. You fiddle with the beaded bracelet around your wrist, take a breath, relax your arms, and walk in. You grab some pretzels and a drink, and talk to this guy named Richard. He has a daughter, or a niece, or something like that. His moustache looked weird.

Okay, now quick question: would you remember if he was wearing a bracelet or not? Odds are you wouldn’t unless he had a bracelet like yours. In fact, it turns out that we recall things far better when those things concern ourselves.

Research has shown that when it comes to what we notice, the quickest thing to grab our attention is something we relate to ourselves, such as a picture of our own face compared with any other face. What remains unknown, however, is how far this prioritization of self reaches into our internal processes of memory and decision-making.

I am. Therefore I selfie.

To explore this, an international team of researchers led by Duke’s Tobias Egner analyzed the degree to which we prioritize self-related information by looking at how efficiently we encode and actively retrieve information we have deemed to concern ourselves.

They did this with a game. Research participants were shown three different colored circles that represented self, friend, and stranger. A pair of colored circles would appear in various locations on the screen, then vanish, followed by a black circle which appeared in the same or different location as one of the colored circles.

Participants were then asked if the black circle appeared at the same location where one of the colored circles had been. The responses were quite revealing.

People responded significantly faster when the black circle appeared in the location of the circle labeled self, rather than friend or stranger. The results held across variations of the experiment. In one variation, the black circle appeared in the self-circle’s location only half as often as in the others, yet participants still responded fastest when the black circle appeared where their self-circle had been.

If the light blue dot is “you,” will you get the answer quicker?

Because the task involved nothing but perception and reaction time, it demonstrated that this is not a conscious decision we make, but an automatic response to information we consider our own.

The experiment demonstrated that when it comes to holding and retrieving information on demand, the self takes precedence. The interesting thing in this study, however, is that the self-related stimulus was not a picture of the participant, or even a circle in their preferred color. It was simply the circle the researchers assigned to the participant as “self”; it had nothing to do with the participants themselves. Merely associating that circle with the self made the information more important and readily available. To associate something with the self, it seems, is to bring its information closer.

That we better recall things related to ourselves is not surprising. For creatures built mostly to look after their own well-being, it seems an intuitive response of working memory. But if there is anything to take away from this experiment, it is the significance of the colored circle labeled self: it was no different from any of the other circles, yet merely making it “self” improved the brain’s ability to recall and retrieve the relevant information.

Simply associating things with ourselves makes them more meaningful to us.

Guest post by Kenan Kaptanoglu, Class of 2020.

Putting Your Wandering Mind on a Leash

This is the second of eight blog posts written by undergraduates in PSY102: Introduction to Cognitive Psychology, Summer Term I 2019.

What should I eat for dinner? What do I need to do when I return home? What should I do this weekend? All three questions above are questions we frequently ask ourselves when we begin to mind-wander in class, at work, and even at home.

Mind-wandering is commonly defined as the unconscious process of getting distracted from the task at hand, and it has acquired a fairly negative connotation because it is viewed as uncontrollable. But what if I told you that recent research shows we can control our mind-wandering, both in anticipation of an upcoming task and on a moment-to-moment basis?

Illustration by Charlie Taylor @c.e.b.t. (http://www.mylittleplaceofcalm.com/the-wonderings-of-a-wandering-mind/)

And if we can indeed modulate and directly control our minds, can we find ways to mind-wander that would ultimately optimize our productivity? Could we potentially control our off-topic thoughts without seeing a loss in overall performance of a task?

To answer these questions, Harvard postdoc Paul Seli, who is now an assistant professor of psychology and neuroscience at Duke, and his team conducted a fascinating experiment. They knew from earlier work that our minds tend to wander more while completing easier tasks than difficult ones. Why? Because we simply need to use fewer executive resources to perform easy tasks and thus we can freely mind-wander without noticing a loss in performance. In fact, one could say that we are optimizing our executive functions and resources across a variety of different tasks instead of just one.

Seli hypothesized that people can control their mind-wandering based on their expectations of upcoming challenges in a task. To test this, he had research participants sit in front of a computer screen showing a large analog clock. Researchers told each participant to press the spacebar every time the clock struck 12:00. Seems simple, right? Even simpler, the clock struck 12:00 every 20 seconds, making it completely predictable. To motivate participants to respond on time, a bonus payment was awarded for every correct response.

Paul Seli studies…
What were we talking about?

During some of the 20-second intervals, the participants were presented with what are called “thought probes.” These popped up on the screen to ask the participants whether or not their mind had just been wandering.

The participants were assured that their responses did not affect their bonus payments and the probes were presented above a paused clock face so that the participants still saw where the hand of the clock was while answering the question. Participants could either respond by clicking “on task” (meaning that they were focusing on the clock), “intentionally mind-wandering” (meaning that they were purposely thinking about something off-topic), or “unintentionally mind-wandering.” After a response was given, the question disappeared, and the clock resumed.

By using the thought probes to track the mind-wandering of participants on a second-by-second basis, Seli found that the participants tended to decrease their levels of mind-wandering as the clock approached 12:00. In other words, participants would freely mind-wander in the early stages of the hand’s rotation and then quickly refocus on the task at hand as the clock approached 12:00.
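A minimal sketch of this probe-by-time analysis (with made-up responses, not Seli’s data) might bin thought-probe reports by how far into the 20-second cycle they occurred:

```python
# Each probe: seconds elapsed in the 20-second clock cycle, and whether
# the participant reported mind-wandering at that moment (hypothetical).
probes = [
    (3, True), (5, True), (7, True), (9, False),
    (12, True), (14, False), (16, False), (18, False),
]

def wandering_rate(probes, t_min, t_max):
    """Fraction of probes in [t_min, t_max) that report mind-wandering."""
    window = [mw for t, mw in probes if t_min <= t < t_max]
    return sum(window) / len(window)

early = wandering_rate(probes, 0, 10)   # early in the cycle
late = wandering_rate(probes, 10, 20)   # as 12:00 approaches
print(early, late)  # 0.75 0.25 -- wandering drops near the target
```

A falling rate across the cycle, as in these toy numbers, is the signature of participants reining in their wandering as the pressing moment nears.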

Seli showed that we have some ability to control a wandering mind. Instead of mind-wandering being solely dependent on the difficulty of the task, Seli found that we can control our mind-wandering on a moment-to-moment basis as the more difficult or pressing aspect of the task approaches.

Even if we are distracted, we have the ability to refocus when the task at hand becomes pressing. However, there is a time and place for mind-wandering and multitasking, and we should certainly not get too confident with our mind-wandering abilities.

Take mind-wandering and distracted driving, for example. Approximately nine Americans are killed and more than 1,000 injured each day because of distracted driving. Being overly familiar with a task does not mean it isn’t crucial and demanding, so we shouldn’t underestimate the executive resources and attention we need to stay focused and safe.

So, the next time you catch yourself thinking about your upcoming weekend, chances are the task you’re completing isn’t too pressing, because if it were, you’d be using all of your executive resources to focus.

Guest post by Jesse Lowey, Trinity 2021

Just The Way You Say It Can Make Something ‘True’

This is the first of eight blog posts written by undergraduates in PSY102: Introduction to Cognitive Psychology, Summer Term I 2019.

We’ve all accepted a lie that we’ve heard before. For example, “vitamin C prevents the common cold” is a statement that rings true for many people. However, only circumstantial evidence supports this claim; many researchers agree the evidence in fact shows that vitamin C has no effect on the common cold.

So why do we end up believing things that are not true? One reason is known as the “illusory truth effect” which claims that the more “fluent” a statement is or feels, the more likely it is to be remembered as true.

Fluency in this case refers to how easily we can later recall information. Fluency can increase in a variety of ways; it could be due to the size of the text in which the fact was presented, or how many times you have heard the statement. Fluency can even be influenced by the color of the text that we read. As an example, if we were only presented with the blue-text version of the four statements shown in the picture above, it would be easier for us to remember — compared to if we were only shown the yellow-text version — and thus easier for us to recall later. Similarly, if the text was larger, or the statements were repeated more frequently, it would be easier for us to recall the information.

This fluency can be useful if we are constantly told accurate facts. However, in our current day and age, truth and lies can become muddled, and if we end up hearing more lies than truths, this illusory truth effect can take over, and we soon begin to accept these falsehoods.

Vanderbilt University psychologist Lisa Fazio studied this during graduate school at Duke. Her aim was to explore this illusory truth effect.

Eighty Duke undergraduates participated in her studies. For the first part, participants were shown factual statements — both true and false — and asked to rate how interesting they were.

For the second part, participants were shown statements — some of which came from the first part of the study — and told that some would be true and some false. They were then asked to rate how truthful the statements were, on a scale from one to six, with one being definitely false, and six being definitely true.

Fazio and her colleagues found that the illusory truth effect is not only a powerful mental mechanism, but that it is so powerful, it can override our personal knowledge.

For example, if presented with the question “what is the name of the skirt that Scottish people sometimes wear?” most people would correctly respond with “a kilt.” However, if you were shown the false statement “a sari is the skirt worn by Scottish people,” you would be more likely to later report this statement as being truthful, even though you knew the correct answer before reading that false statement.

Fazio’s paper also proposed a model for how fluency and knowledge may interact in this situation. The model (shown below) suggests that fluency is the main deciding factor in the judgments we make: if we cannot fluently recall an answer, we rely on our prior knowledge, and if our knowledge fails, we resort to a guess. This is an important departure from the alternative model and the underlying hypothesis, both of which put knowledge first and would thus allow it to override the illusory truth effect.
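The fluency-first ordering can be caricatured as a tiny decision procedure. This is only an illustrative sketch, not Fazio's actual model; the function name, inputs, and threshold below are invented for the example:

```python
import random

def judge_truth(statement, fluency, knowledge):
    """Rate a statement 1 (definitely false) to 6 (definitely true),
    consulting fluency first, then stored knowledge, then guessing."""
    if statement in fluency:           # easily recalled -> it feels true
        return 6 if fluency[statement] > 0.5 else 1
    if statement in knowledge:         # fall back on what we actually know
        return 6 if knowledge[statement] else 1
    return random.randint(1, 6)        # no signal at all -> guess

# A repeated falsehood is highly fluent, so it outranks stored knowledge:
claim = "A sari is the skirt worn by Scottish people"
fluency = {claim: 0.9}       # heard it before -> easy to recall
knowledge = {claim: False}   # we actually know it's a kilt
print(judge_truth(claim, fluency, knowledge))  # prints 6 despite knowledge
```

The key design choice, mirroring the model, is the order of the branches: knowledge is only consulted when fluent recall fails, which is exactly how a known falsehood can still get rated as true.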

Lisa Fazio’s model of the illusory truth effect.

All of this research can seem scary at first glance. In a world where “fake news” is on the rise, and where we are surrounded by ads and propaganda, how can we make sure that the information we believe to be true is actually true? While the paper does not fully explore the effectiveness of different ways to train our brains to weaken the illusory truth effect, the authors do offer some suggestions.

The first is to place yourself in situations where you are going to rely more on your knowledge. Instead of being a passive consumer of information, actively fact-check the information you find. Similar to a reporter chasing down a story, someone who actively thinks about the things they hear is not as likely to fall victim to this effect.

The second suggestion is to train oneself. Trial-by-trial feedback in a setting like this study’s could help people learn where their gut reactions fall short, and when to avoid relying on them. The most important point to remember is that the illusory truth effect is not inherently bad; it can act as a useful tool to reduce mental work throughout one’s day. If ten people say one thing and one person says another, more often than not the ten will be right and the one wrong. The real skill is learning when to trust the wisdom of the crowd, and when to reject it.

Guest post by Kevyn Smith, a third-year undergraduate majoring in Electrical and Computer Engineering and Computer Science, and minoring in Psychology.

How Many Neuroscientists Does it Take to Unlock a Door?

Duke’s Summer Neuroscience Program kicked off their first week of research on June 4 with a standard morning meeting: schedules outlined, expectations reiterated, students introduced. But that afternoon, psychology and neuroscience professor Thomas Newpher and undergraduate student services coordinator Tyler Lee made the students play a very unconventional get-to-know-you game — locking them in a room with only one hour to escape.

Not the usual team building activity: Students in Duke’s 8-week Summer Neuroscience Program got to know each other while locked in a room.

Bull City Escape is one of a few escape rooms in the Triangle, but the only one that lets private groups from schools, companies, or families rent out the space exclusively. Like a live-action video game, you’re given a dramatic plot with an inevitably disastrous end: The crown jewels have been stolen! The space ship is set to self-destruct! Someone has murdered Mr. Montgomery, the eccentric millionaire! With minutes to go, your rag-tag bunch scrambles to uncover clues that unlock locks that yield more clues to yet more locks, and so on, until finally you discover the key code that releases you back to the real world.

This summer’s program dips into many subfields, in hopes of pushing the 16 students (most of them seniors) toward an honors thesis. According to Newpher, three quarters of the senior neuroscience students who participated in the 2018 SNP program graduated with distinction last May.

From “cognitive neuro,” which addresses how behavior and psychology interact with your neural network, to “translational neuro,” which puts neurology in a medical context, to “molecular and cellular neuro,” which looks at neurons’ complex functions, these students are handling subjects that are not for the faint of heart or dim of mind.

But do lab smarts carry over when you’re locked in a room with people you hardly know, a monitor bearing a big, red timer, blinking its way steadily toward zero?

Apparently so. The “intrepid team of astronauts” who voyaged into space faced codes and locks and hidden messages, all deciphered with seven minutes left on the clock, while the “crack team of detectives” investigating the death of Mr. Montgomery narrowly escaped, with less than a minute to spare. At one point, exasperated and staring at a muddled bunch of seemingly meaningless files, a student looked at Dr. Newpher and asked, “Is this a lesson in writing a methods section?”

The Bull City Escape website lists creative problem-solving, focus, attention to detail, and performance under pressure as a few of the skills a group hones by playing their game — all of which are relevant to this group of students, many of whom are pre-med. But hidden morals about clarity and strength-building aside, Newpher picked the activity because it allows different sides of people’s personalities to come out: “When you’re put in that stressful environment and the clock is ticking, it’s a great way to really get to know each other fast.”

By Vanessa Moss

Pot Not So Harmless for Teens

Marijuana is becoming legalized and decriminalized to the point that more than 63 percent of Americans have access to medical and recreational cannabis. But researchers and policy experts still don’t know very much about the long-term health effects.

The 2019 annual symposium by Duke’s Center on Addiction & Behavior Change, “Altered States of Cannabis Regulation: Informing Policy with Science,” provided some scientific answers. Madeline Meier, assistant professor of psychology at Arizona State University and a former Duke post-doc, spoke about her longitudinal research projects that offer critical insights about the long-term effects of cannabis use.

Meier investigates the relationship between cannabis use and IQ in a 38-year study that has followed a group of 1,000 people in Dunedin, New Zealand, since birth. Longitudinal studies like this, which track the same individuals across their lifespan, are vital to understanding the effects of extended cannabis use on the human body, but they are difficult to conduct and keep funded. The study’s 95 percent retention rate is quite impressive and provides much-needed data.

Madeline Meier of Arizona State University

The researchers tested the participants’ IQ in early childhood, then conducted regular IQ and cannabis use assessments between the ages of 18 and 38. They found that participants who heavily used weed for extended periods of time experienced a significant IQ drop, as well as other impairments in learning and memory skills. Specifically, users who had three or more clinical diagnoses of cannabis dependency, defined as compulsive use despite physical, legal, or social problems caused by the drug, showed an average 6-point IQ drop over the years. Those who only tried the drug a few times showed no decline, and those who never used weed showed a 1-point IQ increase.

Notably, however, the results depended on age of onset and level of use. Meier emphasized that her results do not support the common misconception that any amount of weed use can immediately lead to IQ decline. To the contrary, Meier’s team found that short-term, low-level use had no effect on IQ; only heavy users suffered the negative effects. The age of onset of cannabis use was critical, too: adolescents were more vulnerable to the drug’s harms, with study participants who started using as adolescents showing an 8-point drop in IQ. Given what we know about adolescents’ affinity for risky behavior, specifically experimentation with drugs, this finding is particularly worrisome.

In addition to causing cognitive impairment, persistent cannabis use jeopardizes people’s psychosocial functioning as well. The Dunedin longitudinal study has also revealed that people who continued to use weed despite multiple dependency diagnoses experienced downward social mobility, relationship problems, antisocial workplace behavior, financial difficulties, and even higher numbers of traffic convictions. In short, social life is likely to be perilous for heavy weed users.

While some have suggested that the harmful effects of weed might be caused not by the drug itself but by reduced years of education, low socioeconomic status, mental health problems, or simultaneous use of tobacco, alcohol, or other drugs among weed users, Meier and her team found that the impairments persisted even when these factors were accounted for. Cannabis alone was responsible for the effects reflected in Meier’s research. In fact, there is limited evidence for the opposite causal link: weed use may be the cause of mental health problems rather than being caused by them. One study found a weak correlation between years of marijuana use and depression, but Meier was careful to point out that it would take “a lot of cannabis use to lead to clinically diagnosed depression.”

Given this data, Meier called on the policy-makers in the room to focus their efforts on delaying the onset of cannabis use in youth and encouraging cessation (especially among adolescents). In appealing to the researchers, she underlined the need for additional longitudinal studies into the mechanisms and parameters of cannabis use that produce long-term impairments.

As public and political support of marijuana legalization grows, we must be careful not to underestimate the dangers of the drug. Without knowing the full extent of the risks and benefits of weed, policy-makers cannot effectively promote public health, safety, and social equity.

Guest Post by Deniz Ariturk
