Following the people and events that make up the research community at Duke

Students exploring the Innovation Co-Lab

Category: Artificial Intelligence

Democracy Threatened: Can We Depolarize Digital Spaces?

“Israeli Mass Slaughter.” “Is Joe Biden Fit to be President?” Each time we log on to social media, potent headlines encircle us, as do the unwavering and charged opinions that fill the comment spaces. Each like, repost, or slight interaction we have with social media content is devoured by the “algorithm,” which tailors the space to our demonstrated beliefs.

So, where does this leave us? In our own personal “echo chamber,” claim the directors of Duke’s Political Polarization Lab in a recent panel.

Founded in 2018, the lab’s 40 scholars conduct cutting-edge research on politics and social media. This unique intersection requires a diverse team, one that spans seven different disciplines and a range of career stages. The research has proven valuable: beneficiaries include government policy-makers, non-profit organizations, and social media companies.

The lab’s recent research project sought to probe the underlying mechanisms of our digital echo chambers: environments where we only connect with like-minded individuals. Do we have the power to shatter the glass and expand perspectives? Researchers used bots to generate social media content reflecting opposing party views. The content was intermixed with subjects’ typical feeds, and participants were evaluated to see if their views would gradually moderate.

The results demonstrated that the more people paid attention to the bots, the more entrenched in their viewpoints, and the more polarized, they became.

Clicking the iconic Twitter bird or new “X” logo signifies a step onto the battlefield, where posts are ambushed by a flurry of rebuttals upon release.

Chris Bail, Professor of Political and Data Science, shared that 90% of these tweets are generated by a meager 6% of Twitter’s users. Those 6% identify as either very liberal or very conservative, rarely settling in a middle area. Their commitment to propagating their opinions is rewarded by the algorithm, which thrives on engagement. When reactive comments filter in, the post is boosted even more. The result is a distorted perception of social media’s community, when in truth the bulk of users are moderate and watching from the sidelines.

Graphic from the Political Polarization Lab presentation at Duke’s 2024 Research & Innovation Week

Can this be changed? Bail described exploring new incentives for social media users: rewarding both sides while fighting off the “trolls” who wreak havoc on public forums. Enter a new strategy: using bots to retweet top content creators who receive engagement from both parties.

X’s (formerly Twitter’s) Community Notes feature allows users to annotate tweets that they find misleading. The strategy includes boosting notes that annotate bipartisan creators, after researchers found that notes tended to cluster on the most polarized tweets.

 The results were hard to ignore: misinformation decreased by 25-35%, said Bail, saving companies millions of dollars.

Social media is democracy’s public square

Christopher Bail

Instead of simply bashing younger generations’ fixation on social media, Bail urged the audience to consider the bigger picture.

“What do we want to get out of social media? What’s the point, and how can it be made more productive?”

On a mission to answer these questions, the Polarization Lab has set out to develop evidence-based social media by creating custom platforms. In order to test the platforms out, researchers prompted A.I. to create “digital twins” of real people, to simulate users. 

Co-Director Alex Volfovsky described the thought process that led to this idea: running experiments on existing social media often means dumping data into an A.I. system and interpreting the results. By building their own engaging social network, researchers could manipulate conditions and observe causal effects.

How can the presence of a “like button” or “repost” feature affect our activity on platforms? On LinkedIn, even tweaking recommended users showed that people gain the most value from semi-distant connections.

In this exciting new field, unanswered questions ring loud. It can be frightening to place our trust in ambiguous algorithms for content moderation, especially when social media usage is at an all-time high.

After all, the media I consume has clearly trickled into my day-to-day decisions. I eat at restaurants I see on my Instagram feed, I purchase products that I see influencers promote, and I tend to read headlines that are spoon-fed to me. As a frequent social media user, I face the troubling reality of being susceptible to manipulation.

Amidst the fear, panelists stress that their research will help create a safer and more informed culture surrounding social media, part of a pressing effort to preserve democracy.

Post by Ana Lucia Ochoa, class of 2026

Your AI Survival Guide: Everything You Need to Know, According to an Expert

What comes to your mind when you hear the term ‘artificial intelligence’? Scary, sinister robots? Free help on assignments? Computers taking over the world?

Pictured: Media Architect Stephen Toback

Well, on January 24, Duke Media Architect Stephen Toback hosted a lively conversation on all things AI. An expert in the field of technology and media production, Toback discussed some of the practical applications of artificial intelligence in academic and professional settings.

According to Toback, enabling machines to think like humans is the essence of artificial intelligence. He views AI as a humanities discipline — an attempt to understand human intelligence. “AI is really a digital brain. You can’t digitize it unless you know how it actually works,” he began. Although AI has been around since 1956, the past year has seen an explosion in usage. ChatGPT, for example, became the fastest-growing user application in the world in less than 6 months. “One thing I always talk about is that AI is not gonna take your job, but someone using AI will.”

During his presentation, he referenced five dominant AI platforms on the market. The first is ChatGPT, created by OpenAI. Released to the public in November 2022, it has over 100 million users every single month. The second is Bard, which Google released in March 2023. Although newer to the market, the chatbot has gained significant traction online.

Pictured: Toback explaining the recent release of Meta’s AI “Characters.”

Next, we have Llama, owned by tech giant Meta. Last September, Meta launched AI ‘characters’ based on famous celebrities including Paris Hilton and Snoop Dogg, which users could chat with online. “They’ve already started commercializing AI,” Toback explained.

Then there’s Claude, by Anthropic. Claude is an AI assistant for a variety of digital tasks. “Writers tend to use Claude,” Toback said. “Its language models are more attuned to text.”

And finally on Toback’s list is Microsoft Copilot, which is changing the AI game. “It’s integrating ChatGPT into the apps that we use every day. And that’s the next step in this evolution of AI tools.” Described on Microsoft’s website as ‘AI for everything you do,’ Copilot embeds artificial intelligence models into the entire Microsoft 365 suite (which includes apps such as Word, Excel, PowerPoint, and Outlook). “I don’t have to copy and paste into ChatGPT and come back. It’s built right into the app.” It’s also the first AI tool on the market that provides integration into a suite of applications, instead of just one.

Pictured: A presentation created by Toback using Copilot in PowerPoint

He outlined several features of the software, such as summarizing and responding to email threads in Outlook, creating intricate presentations from a simple text document in PowerPoint, and generating interview questions and resume comparisons in Word. “There’s a great example of using AI for something that I have to do… but now I can do it a little bit better and a little bit faster.”

Throughout his presentation, Toback also touched on the practical use of ChatGPT. “AI is not perfect,” he began. “If you just ask it a question, you’re like ‘Oh that sounds reasonable’, and it might not be right.” He cited the platform’s rapidly changing nature, its inherent biases, and incorrect data and information as potential challenges for practical use.

“Rather than saying I don’t know, it acts a lot like a middle schooler and says it knows everything and gives you a very convincing answer.”

Stephen Toback

These challenges have been felt nationwide. In early 2023, for example, lawyers for a federal court case used ChatGPT to find previous claims in an attempt to show precedent. However, after presenting the claims to a judge, the court found that the claims didn’t actually exist. “It cited all of these fake cases that look like real citations, and then the judge considered sanctions,” said Toback. ‘AI hallucinations’ such as this one have caused national controversy over the use and accuracy of AI-generated content. “You need to be able to double-check and triple-check anything that you’re using through ChatGPT,” Toback said.

So how can we use ChatGPT more accurately? According to Toback, there are a variety of approaches, but the main one is called prompt engineering: the process of structuring text so that it can be understood by an AI model. “Prompts are really the key to all of this,” he revealed. “The better formed your question is, the more data you’re giving ChatGPT, the better the response you’re going to get.” Below is Toback’s 6-step template to make sure you are engineering prompts correctly for ChatGPT.

Pictured: Toback’s template for ChatGPT prompt engineering
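For readers who want to see the idea in practice, here is a minimal sketch of assembling a structured prompt programmatically, assuming the OpenAI Python SDK. The labeled fields (role, context, task, and so on) are illustrative placeholders in the spirit of the template, not Toback’s exact wording.

```python
# A minimal sketch of prompt engineering: building a prompt from
# labeled parts before sending it to a chat model. Assumes the OpenAI
# Python SDK (pip install openai) and an API key in the OPENAI_API_KEY
# environment variable. The fields are illustrative placeholders,
# not Toback's exact six-step template.
from openai import OpenAI

client = OpenAI()

def build_prompt(role, context, task, audience, output_format, constraints):
    """Assemble a structured prompt from labeled parts."""
    return (
        f"Act as {role}.\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Audience: {audience}\n"
        f"Output format: {output_format}\n"
        f"Constraints: {constraints}"
    )

prompt = build_prompt(
    role="a university media specialist",
    context="Students are new to generative AI tools.",
    task="Explain what prompt engineering is and why it matters.",
    audience="first-year undergraduates",
    output_format="three short bullet points",
    constraints="plain language, no jargon, under 100 words",
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model works here
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The point of the structure is simply that each labeled field gives the model more data to condition on, which is what Toback means when he says better-formed questions produce better responses.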

So there you have it — your 2024 AI survival guide. It’s clear from the past few years that artificial intelligence is here to stay, and with that comes a need for improved understanding and use. As AI expert Oren Etzioni proclaims, “AI is a tool. The choice about how it gets deployed is ours.”

Have more questions about AI tools such as ChatGPT? Reach out to the Duke Office of Information Technology.

Written by Skylar Hughes, Class of 2025

Sharing a Love of Electrical Engineering With Her Students

Note: Each year, we partner with Dr. Amy Sheck’s students at the North Carolina School of Science and Math to profile some unsung heroes of the Duke research community. This is the seventh of eight posts.

“As a young girl, I always knew I wanted to be a scientist,” Dr. Tania Roy shares as she sits in her Duke Engineering office located next to state-of-the-art research equipment.

Dr. Tania Roy of Duke Engineering

The path to achieving her dream took her to many places and unique research opportunities. After completing her bachelor’s in India, she found herself pursuing further studies at universities in the United States, eventually receiving her Ph.D. from Vanderbilt University. 

Throughout these years Roy was able to explore and contribute to a variety of fields within electrical engineering, including energy-efficient electronics, two-dimensional materials, and neuromorphic computing, among others. But her deepest passion and commitment is to engage upcoming generations with electrical engineering research. 

As an assistant professor of electrical and computer engineering within Duke’s Pratt School of Engineering, Tania Roy gets to do exactly that. She finds happiness in mentoring her passionate young students. They work on projects focused on various problems in fields such as Biomedical Engineering (BME) and Mechanical Engineering, but her special focus is Electrical Engineering. 

Roy walks through the facilities, carefully explaining the purpose of each piece of equipment, when we run into one of her students. She explains how his project involves developing hardware for artificial intelligence, and the core ideas of computer vision.

Roy in her previous lab at the University of Central Florida. (UCF photo)

Through sharing her passion for electrical engineering, Roy hopes to motivate and inspire a new generation. 

“The field of electrical engineering is expected to experience immense growth in the future, especially with the recent trends in technological development,” she says, explaining that there needs to be more interest in the field of electrical engineering for the growth to meet demand. 

The recent shortage of semiconductor chips for the industrial market is an example of this: it poses a crucial problem for the supply of the many products that rely on these fundamental components, Roy says. By increasing students’ interest, and therefore the number of students pursuing electrical engineering, we can build a foundation for advancing the technologies powering our society today, she says.

With a strong research background of her own, she is well equipped for the role of advocate and mentor. She has worked with gallium nitride, a material valued for withstanding high-voltage breakdown. Breakdown occurs when the insulation between two conductors or electrical components fails, allowing electrical current to flow through the insulation; it usually happens when the voltage across the insulating material exceeds a threshold known as the breakdown voltage.

In electric vehicles, high breakdown voltage is crucial for several reasons related to the safety, performance, and efficiency of the vehicle’s electrical system, and Roy’s work directly impacts this. She has also conducted extensive research on 2D materials and their photovoltaic capabilities, and is currently working on developing brain-inspired computer architectures for machine learning algorithms. Similar to the work of her student, this research utilizes the structure of the human brain to model an architecture for AI, replicating the synapses and neural connections.

As passionate as she is about research, she shares that she used to love going to art galleries and looking at paintings. “I could do it for hours,” Roy says. Currently, when she is not actively pursuing her research, she enjoys spending time with her two young children.

“I hope to share my dream with this new generation,” Roy concludes.

Guest post by Sutharsika Kumar, North Carolina School of Science and Mathematics, Class of 2024

Putting Stronger Guardrails Around AI

AI regulation is ramping up worldwide. Duke AI law and policy expert Lee Tiedrich discusses where we’ve been and where we’re going.

DURHAM, N.C. — It’s been a busy season for AI policy.

The rise of ChatGPT unleashed a frenzy of headlines around the promise and perils of artificial intelligence, and raised concerns about how AI could impact society without more rules in place.

Consequently, government intervention entered a new phase in recent weeks as well. On Oct. 30, the White House issued a sweeping executive order regulating artificial intelligence.

The order aims to establish new standards for AI safety and security, protect privacy and equity, stand up for workers and consumers, and promote innovation and competition. It’s the U.S. government’s strongest move yet to contain the risks of AI while maximizing the benefits.

“It’s a very bold, ambitious executive order,” said Duke executive-in-residence Lee Tiedrich, J.D., who is an expert in AI law and policy.

Tiedrich has been meeting with students to unpack these and other developments.

“The technology has advanced so much faster than the law,” Tiedrich told a packed room in Gross Hall at a Nov. 15 event hosted by Duke Science & Society.

“I don’t think it’s quite caught up, but in the last few weeks we’ve taken some major leaps and bounds forward.”

Countries around the world have been racing to establish their own guidelines, she explained.

The same day the executive order was issued, leaders from the Group of Seven (G7) — which includes Canada, France, Germany, Italy, Japan, the United Kingdom and the United States — announced that they had reached agreement on a set of guiding principles on AI and a voluntary code of conduct for companies.

Both actions came just days before the first-ever global summit on the risks associated with AI, held at Bletchley Park in the U.K., during which 28 countries, including the U.S. and China, pledged to cooperate on AI safety.

“It wasn’t a coincidence that all this happened at the same time,” Tiedrich said. “I’ve been practicing law in this area for over 30 years, and I have never seen things come out so fast and furiously.”

The stakes for people’s lives are high. AI algorithms do more than just determine what ads and movie recommendations we see. They help diagnose cancer, approve home loans, and recommend jail sentences. They filter job candidates and help determine who gets organ transplants.

That is partly why we’re now seeing a shift in the U.S. from what has been a more hands-off approach to “Big Tech,” Tiedrich said.

Tiedrich presented Nov. 15 at an event hosted by Duke Science & Society.

In the 1990s when the internet went public, and again when social media started in the early 2000s, “many governments — the U.S. included — took a light touch to regulation,” Tiedrich said.

But this moment is different, she added.

“Now, governments around the world are looking at the potential risks with AI and saying, ‘We don’t want to do that again. We are going to have a seat at the table in developing the standards.’”

Power of the Purse

Biden’s AI executive order differs from laws enacted by Congress, Tiedrich acknowledged in a Nov. 3 meeting with students in Pratt’s Master of Engineering in AI program.

Congress continues to consider various AI legislative proposals, such as the recently introduced bipartisan Artificial Intelligence Research, Innovation and Accountability Act, “which creates a little more hope for Congress,” Tiedrich said.

What gives the administration’s executive order more force is that “the government is one of the big purchasers of technology,” Tiedrich said.

“They exercise the power of the purse, because any company that is contracting with the government is going to have to comply with those standards.”

“It will have a trickle-down effect throughout the supply chain,” Tiedrich said.

The other thing to keep in mind is “technology doesn’t stop at borders,” she added.

“Most tech companies aren’t limiting their market to one or two particular jurisdictions.”

“So even if the U.S. were to have a complete change of heart in 2024” and the next administration were to reverse the order, “a lot of this is getting traction internationally,” she said.

“If you’re a U.S. company, but you are providing services to people who live in Europe, you’re still subject to those laws and regulations.”

From Principles to Practice

Tiedrich said a lot of what’s happening today in terms of AI regulation can be traced back to a set of guidelines issued in 2019 by the Organization for Economic Cooperation and Development, where she serves as an AI expert.

These include commitments to transparency, inclusive growth, fairness, explainability and accountability.

For example, “we don’t want AI discriminating against people,” Tiedrich said. “And if somebody’s dealing with a bot, they ought to know that. Or if AI is involved in making a decision that adversely affects somebody, say if I’m denied a loan, I need to understand why and have an opportunity to appeal.”

“The OECD AI principles really are the North Star for many countries in terms of how they develop law,” Tiedrich said.

“The next step is figuring out how to get from principles to practice.”

“The executive order was a big step forward in terms of U.S. policy,” Tiedrich said. “But it’s really just the beginning. There’s a lot of work to be done.”

By Robin Smith

Leveraging Google’s Technology to Improve Mental Health

Last Tuesday, October 10 was World Mental Health Day. To mark the holiday, the Duke Institute for Brain Sciences, in partnership with other student wellness organizations, welcomed Dr. Megan Jones Bell, PsyD, the clinical director of consumer and mental health at Google, to discuss mental health. Bell was formerly chief strategy and science officer at Headspace and helped guide Headspace through its transformation from a meditation app into a comprehensive digital mental health platform, Headspace Health. Bell also founded one of the first digital mental health start-ups, Lantern, where she pioneered blended mental health interventions leveraging software and coaching. In her conversation with Dr. Murali Doraiswamy, Duke professor of psychiatry and behavioral sciences, and Thomas Szigethy, Associate Dean of Students and Director of Duke’s Student Wellness Center, Bell revealed the actions Google is taking to improve the health of the billions of people who use their platform. 

She began by defining mental health, paraphrasing the World Health Organization’s definition. She said, “Mental health, to me, is a state of wellbeing in which the individual realizes his or her or their own abilities, can cope with the normal stresses of life, work productively and fruitfully, and can contribute to their own community.” Rather than taking a medicalized approach to mental health, she argued, mental health should be recognized as something that we all have. Critically, she said that mental health is not just mental disorders; the first step to improving mental health is recognition and upstream intervention.

Underlining the critical role Google plays in global mental health, Bell cited multiple statistics: three out of four people turn to the internet first for health information; Google Search sees 100 million health-related searches every day; and YouTube boasts 25 billion views of mental health content. Given those billions of users, Bell emphasized Google’s huge responsibility to provide people with accurate, authoritative, and empathetic information. The company has multiple mental health goals, each tailored to a different community. Bell described Google’s goals for three principal audiences: consumers, caregivers, and communities.

Google’s consumer-facing focus is providing access to high-quality information and tools for users to manage their health. With regard to caregivers, Google strives to create strong partnerships that produce solutions to transform care delivery. In terms of community health, the company works with public health organizations worldwide, focusing on social determinants of health and aiming to open up data and insights to the public health community.

Szigethy followed by launching a discussion of Google’s efforts to protect adolescents. He referenced the growing and urgent mental health crisis amongst adolescents; what is Google doing to protect them? 

Bell mentioned multiple projects across different platforms designed to provide youth with safer online experiences. Key to these projects is the desire to promote their mental health by default. On Google Search, this takes the form of the SafeSearch feature, which is on by default and filters out explicit or inappropriate results. On YouTube, default policies include various prevention measures, one of which automatically removes content that is considered “imitable.” Bell used the example of disordered-eating content to explain the policy: in accordance with their prevention approach, YouTube removes dangerous eating-related content containing anything that the viewer can copy. YouTube also has age-restricted videos, unavailable to users under 18, as well as certain product features that can be blocked. Google also created an eating disorder hotline with experts online 24/7.

Jokingly, Bell assured the Zoom audience that Google wouldn’t be creating a therapist chatbot anytime soon — she asserted that digital tools are not “either or.” When the conversation veered towards generative AI, Bell admitted that AI has enormous potential for helping billions of people, but maintained that it needs to be developed in a responsible way. At Google, the greatest service AI provides is scalability. Google.org, Bell said, recently worked with The Trevor Project and ReflexAI on a crisis hotline for veterans called HomeTeam. Google used AI that simulated crises to help scale up training for volunteers. Bell said, “The human is still on the other side of the phone, and AI helped achieve that.”

Next, Bell tackled the question of health information and misinformation, what she called a significant area of focus for Google. Before diving in, however, Bell clarified, “It’s not up to Google to decide what is accurate and what is not accurate.” Rather, she said that anchoring to trusted organizations is critical to embedding mental health into the culture of a community. When it comes to health information and misinformation, Bell encapsulated Google’s philosophy in this phrase: “define, operationalize, and elevate high quality information.” To combat misinformation on its platform, Google asked the National Academy of Medicine to help define what accurate medical sources are. The Academy then put together a framework of authoritative health information, which the WHO then internationalized. YouTube then launched its “health sources” feature, which surfaces videos from the framework first: the highest-quality information is raised to the top of the page when you make a search. Videos in this framework also carry a visible badge on the watch panel with a phrase like “from a healthcare professional” or “from an organization with a healthcare professional.” Bell suggested that this also helps people remember where their information is coming from, acting as a guardrail in itself. Additionally, Google continues to fight medical misinformation with an updated medical misinformation policy, which enables it to remove content that contradicts medical authorities or medical consensus.

Near the end of the conversation, Szigethy asked Bell if she would recommend any behaviors for embracing wellbeing. A prevention researcher by background, Bell stressed the importance of early and regular action. Our biggest leverage point for changing mental health, she asserted, is upstream intervention and embracing routines that foster our mental health. She breaks these down into five dimensions of wellbeing: mindfulness, sleep, movement and exercise, nutrition, and social connection. Her advice is to ask the question: what daily/weekly routines do I have that foster each of these? Make a list, she suggests, and try to incorporate a daily routine that addresses each of the five dimensions. 

Before concluding, Bell advocated that the best thing that we can do is to approach mental health issues with humility and listen to a community first. She shared that, at Headspace, her team worked with the mayor’s office and community organizations in Hartford, Connecticut to co-define their mental health goals and map the strengths and assets of the community. Then, they could start to think about how to contextualize Headspace in that community. Bell graciously entered the Duke community with the same humility, and her conversation was a wonderful commemoration of World Mental Health Day. 

By Isa Helton, Class of 2026

My Face Belongs to The Hive (and Yours Does Too)

Imagine having an app that could identify almost anyone using only a photograph of their face. For example, you could take a photograph of a stranger in a dimly lit restaurant and know within seconds who they are.

This technology exists, and Kashmir Hill has reported on several companies that offer these services.

An investigative journalist with the New York Times, Hill visited Duke Law Sept. 27 to talk about her new book, Your Face Belongs To Us.

The book is about a company that developed powerful facial recognition technology based on images harvested from our social media profiles. To learn more about Clearview AI, the unlikely duo who were behind it, and how they sold it to law enforcement, I highly recommend reading this book.

Hill demonstrated for me a facial recognition app that provides subscribers with up to 25 face searches a day. She offered to let me see how well it worked.

Screen shot of the search app with Hill’s quick photo of me.

She snapped a quick photo of my face in dim lighting. Within seconds (3.07 to be exact), several photos of my face appeared on her phone.

The first result (top left) is unsurprising. It’s the headshot I use for the articles I write on the Duke Research Blog. The second result (top right) is a photo of me at my alma mater in 2017, where I presented at a research conference. The school published an article about the event, and I remember the photographer coming around to take photos. I was able to easily figure out exactly where on the internet both results had been pulled from.

The third result (second row, left) unsettled me. I had never seen this photo before.

A photo of me sitting between friends. Their faces have been blurred out.

After a quick search of the watermark on the photo (which has been blurred for safety), I discovered that the photograph was from an event I attended several years ago. Apparently, the venue had used the image for marketing on their website. Using these facial recognition results, I was able to easily find out the exact location of the event, its date, and who I had gone with.

What is Facial Recognition Technology?

Researchers have been trying for decades to produce technology that can accurately identify human faces. Advances in neural-network artificial intelligence have made it possible for computer algorithms to do this with increasing accuracy and speed. However, this technology requires large sets of data to work; in this case, hundreds of thousands of examples of human faces.

Just think about how many photos of you exist online. There are the photos that you have taken and shared or that your friends and family have taken of you. Then there are photos that you’re unaware that you’re in – perhaps you walked by as someone snapped a picture and accidentally ended up in the frame. I don’t consider myself a heavy user of social media, but I am sure there are thousands of pictures of my face out there. I’ve uploaded and classified hundreds of photos of myself across platforms like Facebook, Instagram, LinkedIn, and even Venmo.

The developers behind Clearview AI recognized the potential in all these publicly accessible photographs and compiled them to create a massive training dataset for their facial recognition AI. They did this by scraping the social media profiles of hundreds of thousands of people. In fact, they got something like 2.1 million images of faces from Venmo and Tinder (a dating app) alone.
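To make the mechanics concrete, here is a minimal sketch of embedding-based face search using the open-source face_recognition library. This is emphatically not Clearview’s proprietary system, and the file paths are hypothetical; the principle is simply that each face is reduced to a numeric embedding, and “search” means finding the closest embeddings in a database.

```python
# A minimal sketch of embedding-based face search using the open-source
# face_recognition library (pip install face_recognition). NOT Clearview
# AI's system; file names below are hypothetical. Each face becomes a
# 128-number embedding, and search means finding the closest embeddings.
import face_recognition

# "Database": embeddings computed from previously collected photos.
gallery_paths = ["photo_blog.jpg", "photo_conference.jpg", "photo_event.jpg"]
gallery_encodings = []
for path in gallery_paths:
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:  # keep the first detected face in each photo
        gallery_encodings.append(encodings[0])

# Probe: a new photo, e.g. one snapped in a dimly lit restaurant.
probe = face_recognition.load_image_file("probe.jpg")
probe_encoding = face_recognition.face_encodings(probe)[0]

# Compare: smaller distance means a more similar face.
distances = face_recognition.face_distance(gallery_encodings, probe_encoding)
for path, dist in zip(gallery_paths, distances):
    verdict = "match" if dist < 0.6 else "no match"  # 0.6 is the library's usual threshold
    print(f"{path}: distance={dist:.3f} ({verdict})")
```

At scale, the gallery holds millions of scraped images rather than three, and the nearest-neighbor search is heavily optimized, but the pipeline is conceptually the same.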

Why does this matter?

Clearly, there are major privacy concerns for this kind of technology. Clearview AI was marketed as being only available to law enforcement. In her book, Hill gives several examples of why this is problematic. People have been wrongfully accused, arrested, detained, and even jailed for the crime of looking (to this technology) like someone else.

We also know that AI has problems with bias. Facial recognition technology was first developed by mostly white, mostly male researchers, using photographs of mostly white, mostly male faces. The result of this has had a lasting effect. Marginalized communities targeted by policing are at increased risk, leading many to call for limits on the use of facial recognition by police.

It’s not just government agencies who have access to facial recognition. Other companies have developed off-the-shelf products that anyone can buy, like the app Hill demonstrated to me. This technology is now available to anyone willing to pay for a subscription. My own facial recognition results show how easy it is to find out a lot about a person (like their location, acquaintances, and more) using these apps. It’s easy to imagine how this could be dangerous.

There remain reasons to be optimistic about the future of privacy, however. Hill closed her talk by reminding everyone that with every technological breakthrough, there is opportunity for ethical advancement reflected by public policy. With facial recognition, policy makers have previously relied on private companies to make socially responsible decisions. As we face the results of a few radical actors using the technology maliciously, we can (and should) respond by developing legal restraints that safeguard our privacy.

On this front, Europe is leading by example. It’s likely that the actions of Clearview AI are already illegal in Europe, and privacy rights are expanding further under the European Commission’s proposed Artificial Intelligence (AI) regulation. These rules include requirements for technology developers to certify the quality of their processes, rather than algorithm performance, which would mitigate some of these harms. The regulation aims to take a technology-neutral approach, and it stratifies facial recognition technology by its potential for risk to people’s safety, livelihoods, and rights.

Post by Victoria Wilson, MA Bioethics and Science Policy, 2023

Neuroscience Shows Why Sex Assault Victims “Freeze.” It’s Not Consent.

Warning: the following article discusses rape and sexual assault. If you or someone you know has been sexually assaulted, help is available.

Image: DreamStudio AI, with prompt “Woman, screaming, sitting on the witness stand in a U.S. court of law, in the style of Edvard Munch’s ‘The Scream’”

“You never screamed for help?”

“Why didn’t you fight back?”

These are questions that lawyers asked E. Jean Carroll in her rape case against former president Donald J. Trump this spring. These kinds of questions reflect a myth about rape: that it’s only rape if the victim puts up a fight.

A recent review of the research, “Neuroscience Evidence Counters a Rape Myth,” aims to set the record straight. It serves as a call to action for those in the scientific and legal professions. Ebani Dhawan completed this work at University College London with Professor Patrick Haggard. She is now my classmate at Duke University, where she is pursuing an MA in Bioethics & Science Policy.

Ebani Dhawan

Commonly accepted beliefs and myths about rape are a persistent problem in defining and prosecuting sexual assault. The intentions of all actors are examined in the courtroom. If a victim freezes or does not attempt to resist during a sexual assault, perpetrators may claim there was passive acquiescence; that consent was assumed from an absence of resistance.

From the moment a victim reports an assault, the legal process poses “why” questions about the survivor’s behavior. This is problematic because it upholds the idea that survivors can (and should) choose to scream or fight back during an assault.

This new paper presents neuroscientific evidence which counters that misconception. Many survivors of sexual assault report ‘freezing’ during an assault. The researchers argue that this is an involuntary response to a threat which can prevent a victim from actively resisting, and that it occurs throughout biology.

Animal studies have demonstrated that severe, urgent threats, like assault or physical restraint, can trigger a freeze response involving fixed posture (tonic immobility) or loss of muscle tone (collapsed immobility). Self-reports of these states in humans point to an important insight: we are unable to make voluntary actions during this freezing response.

An example of this is the “lockup” state displayed by pilots during an aviation emergency. After a plane crash, it’s hard to imagine anyone asking a pilot if they froze because they really wanted to crash the plane.

Yet victims of sexual assault are quite frequently asked to explain the freeze response, a task made even harder by the impaired memory and loss of sense of agency that often accompany trauma.

The legal process around sexual assault should be updated to reflect this neuroscientific evidence.

THIS MYTH HAS REAL CONSEQUENCES.

The vast majority of sexual assault cases do not result in a conviction. It is estimated that out of every 1,000 sexual assaults in the U.S., only 310 are reported to the police and only 28 lead to felony conviction. That is a conviction rate of less than 3%.

In England and Wales, just 3% of rapes recorded in the previous year resulted in charges. According to RAINN, one of the leading anti-sexual assault organizations, many victims don’t report because they believe the justice system would not do anything to help — a belief that these conviction rates support.

E. Jean Carroll named this in her trial. She said, “Women don’t come forward. One of the reasons they don’t come forward is because they’re always asked, why didn’t you scream? You better have a good excuse if you didn’t scream.”

This research serves as a much-needed call-to-action. By revisiting processes steeped in myth, justice can be better served.

I asked Ebani what she thinks must be done. Here are her recommendations:

  1. The neuroscience community should pursue greater mechanistic understanding of threat processing and involuntary action processes and the interaction between them. 
  2. Activists and legal scholars should advocate for processes reflective of the science behind involuntary responses like freezing, and the inability of victims to explain that behavior.
  3. Neuroscientists should contribute to police officers’ education regarding involuntary responses to rape and sexual assault.

“I’m telling you: He raped me whether I screamed or not.” – E. Jean Carroll

Post by Victoria Wilson, Class of 2023

When Art and Science Meet as Equals

Artists and scientists in today’s world often exist in their own disciplinary silos. But the Laboratory Art in Practice Bass Connections team hopes to rewrite this narrative, by engaging Duke students from a range of disciplines in a 2-semester series of courses designed to join “the artist studio, the humanities seminar room, and the science lab bench.” Their work culminated in “re:process” – an exhibition of student artwork on Friday, April 28, in the lobby of the French Family Science Center. Rather than science simply engaging artistic practice for the sake of science, or vice versa, the purpose of these projects was to offer an alternate reality where “art and science meet as equals.”

The re:process exhibition

Liuren Yin, a junior double-majoring in Computer Science and Visual and Media Studies, developed an art project to focus on the experience of prosopagnosia, or face blindness. Individuals with this condition are unable to tell two distinct faces apart, including their own, often relying on body language, clothing, and the sound of a person’s voice to determine the identity of a person. Using her experience in computer science, she developed an algorithm that inputs distinct faces and outputs the way that these faces are perceived by someone who has prosopagnosia.

Yin’s project exploring prosopagnosia

Next to the computer and screen flashing between indistinguishable faces, she’s propped up a mirror for passers-by to look at themselves and contemplate the questions that inspired her to create this piece. Yin says that as she learned about prosopagnosia, where every face looks the same, she found herself wondering, “how am I different from a person that looks like me?” Interrogating the link between our physical appearance and our identity is at the root of Yin’s piece. Especially in an era where much of our identity exists online and appearance can be curated any way one wants, Yin considers this artistic piece especially timely. She writes in her program note that “my exposure to technologies such as artificial intelligence, generative algorithms, and augmented reality makes me think about the combination and conflict between human identity and these futuristic concepts.”

Eliza Henne, a junior majoring in Art History with a concentration in Museum Theory and Practice, focused more on the biological world in her project, which used a lavender plant in different forms to ask questions like “what is truthful, and what do we consider real?” By displaying a live plant, an illustration of a plant, and pressings from a plant, she invites viewers to consider how every rendition of a commonly used model organism in scientific experiments omits some information about the reality of the organism.

Junior Eliza Henne

For example, lavender pressings have materiality, but no scent or dimension. A detailed illustration can capture even the way light illuminates the thin veins of a leaf, yet it is merely a depiction of a living being. And the live plant itself, conventionally the “real” one, can only be seen in that kind of illustrative detail under a microscope or in a diagram.

In walking through the lobby of FFSC, where these projects and more are displayed, you’re surrounded by conventionally scientific materials, like circuit boards, wires, and petri dishes, which, in an unusual turn of events are being used for seemingly unscientific endeavors. These endeavors – illustrating the range of human emotion, showcasing behavioral patterns like overconsumption, or demonstrating the imperfection inherent to life – might at first glance feel more appropriate in an art museum or a performing arts stage.

But the students and faculty involved in this exhibition see that as the point. Maybe it isn’t so unnatural to build a bridge between the arts and the sciences – maybe, they are simply two sides of the same coin.

Post by Meghna Datta, Class of 2023

Senior Jenny Huang on her Love for Statistics and the Scientific Endeavor

Statistics and computer science double major Jenny Huang (T’23) started Duke as many of us do – vaguely pre-med, undecided on a major – but she knew she had an interest in scientific research. Four years later, with a Quad Fellowship and an acceptance to MIT for her doctoral studies, she reflects on how research shaped her time at Duke, and how she hopes to impact research.

Jenny Huang (T’23)

What is it about statistics? And what is it about research?

With experience in biology research during high school and during her first year at Duke, Huang toyed with the idea of an MD/PhD, but ultimately realized that she might be better off dropping the MD. “I enjoy figuring out how the world works,” Huang says, and statistics provided a language to examine the probabilistic and often unintuitive nature of the world around us.

In another life, Huang remarked, she might have been a physics and philosophy double major, because physics offers the most fundamental understanding of how the world works, and philosophy is similar to scientific research: in both, “you pursue the truth through cyclic questioning and logic.” She’s also drawn to engineering, because it’s the process of dissecting things until you can “build them back up from first principles.”

At the International Society for Bayesian Analysis summer conference in Montreal

Huang’s research and the impact of COVID-19

For Huang, research started her first year at Duke, on a Data+ team, led by Professor Charles Nunn, studying the variation of parasite richness across primate species. To map out what types of parasites interacted with what type of monkeys, the team relied on predictors such as body mass, diet, and social activity, but in the process, they came up against an interesting phenomenon.

It appeared that the more studied a primate was, the more interactions it would have with parasites, simply because of the amount of information available on the primate. Due to geographic and experimental constraints, however, a large portion of the primate-parasite network remained understudied. This phenomenon, an instance of the statistical concept known as sampling bias, was muddling their results. One day, while making an offhand remark about the problem to one of her professors (Professor David Dunson), Huang ended up arranging a serendipitous research match. It turned out that Dunson had a statistical model that could be applied to the problem Nunn and the Data+ team were facing.
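The effect is easy to reproduce in a toy simulation, an illustration of the concept rather than the team’s actual model: give every species the same true parasite richness, vary only how intensively each is studied, and the better-studied species still appear to host more parasites.

```python
# Toy simulation of sampling bias (illustrative only, not the Data+
# team's model). Every primate species here truly hosts 30 parasite
# species; only research effort differs. Observed richness still
# tracks effort, not biology.
import numpy as np

rng = np.random.default_rng(0)
n_species = 50
true_richness = 30                         # identical for every primate
effort = rng.integers(1, 100, n_species)   # number of studies per species

observed = []
for studies in effort:
    # Each study detects a given parasite with probability 0.05, so the
    # chance of detecting it at least once is 1 - 0.95**studies.
    p_detect = 1 - 0.95 ** studies
    observed.append(rng.binomial(true_richness, p_detect))

corr = np.corrcoef(effort, observed)[0, 1]
print(f"Correlation of study effort with observed richness: {corr:.2f}")
# Strongly positive, even though true richness is constant.
```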

The applicability of statistics to a variety of different fields captivated Huang. When COVID-19 hit, it affected all of us to some degree, but for Huang it also provided the perfect opportunity to apply mathematical models to a rapidly changing pandemic. For the past two summers, through work with Dunson on a DOMath project, as well as with Professor Jason Xu and Professor Rick Durrett, Huang has used mathematical modeling to assess changes in the spread of COVID-19.
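Compartmental models such as SIR are the textbook starting point for this kind of work; the sketch below uses illustrative parameters and is not the specific model from the DOMath project.

```python
# A minimal SIR compartmental model (illustrative parameters, not the
# DOMath project's actual model). S, I, R are the fractions of the
# population that are susceptible, infectious, and recovered; beta is
# the transmission rate and gamma the recovery rate.
import numpy as np

def simulate_sir(beta=0.3, gamma=0.1, i0=0.001, days=180, dt=0.1):
    s, i, r = 1.0 - i0, i0, 0.0
    history = []
    for step in range(int(days / dt)):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        dr = gamma * i
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
        history.append((step * dt, s, i, r))
    return np.array(history)

out = simulate_sir()
peak_day, peak_i = max(((t, i) for t, s, i, r in out), key=lambda x: x[1])
print(f"Epidemic peaks around day {peak_day:.0f} with {peak_i:.1%} infectious")
# Letting beta change over time is one simple way to represent the
# behavioral shifts and interventions that modelers tracked during COVID-19.
```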

On inclusivity in research

As of 2018, just 28% of graduates in mathematics and statistics at the doctoral level identified as women. Huang will eventually be included in this percentage, seeing as she begins her Ph.D. at MIT’s Department of Electrical Engineering and Computer Science in the fall, working with Professor Tamara Broderick.

“When I was younger, I always thought that successful and smart people in academia were white men,” Huang laughed. But that’s not true, she emphasizes: “it’s just that we don’t have other people in the story.” As one of the few female-presenting people in her research meetings, Huang has often felt pressure to underplay her more “girly” traits to fit in. But interacting with intelligent, accomplished female-identifying academics in the field (including collaborations with Professor Cynthia Rudin) reaffirms to her that it’s important to be yourself: “there’s a place for everyone in research.”

At the Joint Statistical Meetings Conference in D.C. with fellow researcher Gaurav Parikh

Advice for first-years and what the future holds

While she can’t predict where exactly she’ll end up, Huang is interested in taking a proactive role in shaping the impacts of artificial intelligence and machine learning on society. And as the divide between academia and industry becomes increasingly blurred, she sees herself, years from now, existing somewhere in that space.

Her advice for incoming Duke students and aspiring researchers is threefold. First, Huang emphasizes the importance of mentorship. Having kind and validating mentors throughout her time at Duke made difficult problems in statistics so much more approachable for her, and in research, “we need more of that type of person!”

Second, she says that “when I first approached studying math, my impatience often got in the way of learning.” Slowing down with the material and allowing herself the time to learn things thoroughly helped her improve her academic abilities.

Being around people who have this shared love and a deep commitment for their work is just the human endeavor at its best.

Jenny Huang

Lastly, she stresses the importance of collaboration. Sometimes, Huang remarked, “research can feel isolating, when really it is very community-driven.” When faced with a tough problem, there is nothing more rewarding than figuring it out together with the help of peers and professors. And she is routinely inspired by the people she does research with: “being around people who have this shared love and a deep commitment for their work is just the human endeavor at its best.”

Post by Meghna Datta, Class of 2023

(Editor’s note: This is Jenny’s second appearance on the blog. As a senior at NC School of Science and Math, she wrote a post about biochemist Meta Kuehn.)

How Research Helped One Pre-med Discover a Love for Statistics and Computer Science

If you’re a doe-eyed first-year at Duke who wants to eventually become a doctor, chances are you are currently, or will soon, take part in a pre-med rite of passage: finding a lab to research in.

Most pre-meds find themselves researching in the fields of biology, chemistry, or neuroscience, with many hoping to make research a part of their future careers as clinicians. Undergraduate student and San Diego native Eden Deng (T’23) also found herself treading a similar path in a neuroimaging lab her freshman year.

Eden Deng T’23

At the time, she was a prospective neuroscience major on the pre-med track. But as she soon realized, neuroimaging is done through fMRI. And to analyze fMRI data, you need to be able to conduct data analysis.

This initial research experience at Duke in the Martucci Lab, which looks at chronic pain and the role of the central nervous system, sparked a realization for Deng. “Ninety percent of my time was spent thinking about computational and statistical problems,” she explained to me. Analysis was new to her, and as she found herself struggling with it, she thought to herself, “why don’t I spend more time getting better at that academically?”

Deng at the Martucci Lab

This desire to get better at research led Deng to pursue a major in Statistics with a secondary in Computer Science, while still on the pre-med track. Many people might instantly think about how hard it must be to fit in so much challenging coursework with virtually no overlap. And as Deng confirmed, her academic path has not been without challenges.

For one, she’s never really liked math, so she was wary of getting into computation. Additionally, considering that most Statistics and Computer Science students want to pursue jobs in the technology industry, it’s been hard for her to connect with like-minded people who are equally familiar with computers and the human body.

“I never felt like I excelled in my classes,” Deng said. “And that was never my intention.” Deng had to quickly get used to facing what she didn’t know head-on. But as she kept her head down, put in the work, and trusted that eventually she would figure things out, the merits of her unconventional academic path started to become more apparent.

Research at the intersection of data and health

Last summer, Deng landed a summer research experience at Mount Sinai, where she looked at patient-level cancer data. Utilizing her knowledge in both biology and data analytics, she worked on a computational screener that scientists and biologists could use to measure gene expression in diseased versus normal cells. This will ultimately aid efforts in narrowing down the best genes to target in drug development. Deng will be back at Mount Sinai full-time after graduation, to continue her research before applying to medical school.

Deng presenting on her research at Mount Sinai

But in her own words, Deng’s favorite research experience has been her senior thesis through Duke’s Department of Biostatistics and Bioinformatics. Last year, she reached out to Dr. Xiaofei Wang, who is part of a team conducting a randomized controlled trial to compare the merits of two different lung tumor treatments.

Generally, when faced with lung disease, the conservative approach is to remove the whole lobe. But that can pose challenges to the quality of life of people who are older, with more comorbidities. Recently, there has been a push to focus on removing smaller sections of lung tissue instead. Deng’s thesis examines patient surgical data from the past 15 years, showing that patient survival rates have improved as these segmentectomies – removals of smaller sections of tissue – have become more frequent in select groups of patients.
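Comparisons like this are commonly made with Kaplan-Meier survival curves and a log-rank test. Below is a minimal sketch using the lifelines library and synthetic data; the actual thesis draws on 15 years of real surgical records and far more careful statistical adjustment.

```python
# A minimal sketch of a two-group survival comparison, using the
# lifelines library (pip install lifelines) and SYNTHETIC data, not
# the thesis's real surgical records.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
n = 200
# Hypothetical follow-up times (months) and death indicators per group.
t_lobe = rng.exponential(60, n)    # lobectomy group
t_seg = rng.exponential(70, n)     # segmentectomy group
e_lobe = rng.binomial(1, 0.6, n)   # 1 = death observed, 0 = censored
e_seg = rng.binomial(1, 0.6, n)

kmf = KaplanMeierFitter()
kmf.fit(t_lobe, event_observed=e_lobe, label="lobectomy")
print("lobectomy median survival:", kmf.median_survival_time_)
kmf.fit(t_seg, event_observed=e_seg, label="segmentectomy")
print("segmentectomy median survival:", kmf.median_survival_time_)

# Log-rank test: do the two survival curves differ significantly?
result = logrank_test(t_lobe, t_seg, event_observed_A=e_lobe, event_observed_B=e_seg)
print(f"log-rank p-value: {result.p_value:.3f}")
```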

“I really enjoy working on it every week,” Deng says about her thesis, “which is not something I can usually say about most of the work I do!” According to Deng, a lot of research – hers included – is conceived by researchers mulling over what they think would be interesting to look at, in a silo, without considering what problems might be most useful for society at large. What’s valuable for Deng about her thesis work is that she’s gotten to work closely with not just statisticians but thoracic surgeons. “Originally my thesis was going to go in a different direction,” she said, but upon consulting with the surgeons who generated the data she was using – and who would be directly affected by her results – she changed her research question.

The merits of an interdisciplinary academic path

Deng’s unique path makes her the perfect person to ask: is pursuing seemingly disparate interests, like being a Statistics and Computer Science double-major on the pre-med track, worth it? Judging by Deng’s insights, the answer is a resounding yes.

At Duke, she says, “I’ve been challenged by many things that I wouldn’t have expected to be able to do myself” – like dealing with the catch-up work of switching majors and pursuing independent research. But over time she’s learned that even if something seems daunting in the moment, if you apply yourself, most, if not all things, can be accomplished. And she’s grateful for the confidence that she’s acquired through pursuing her unique path.

Moreover, as Deng reflects on where she sees herself – and the field of healthcare – a few years from now, she muses that for the first time in the history of healthcare, a third-party player is joining the mix – technology.

While her initial motivation to pursue statistics and computer science was to aid her in research, “I’ve now seen how it’s beneficial for my long-term goals of going to med school and becoming a physician.” As healthcare evolves and the introduction of algorithms, AI, and other technological advancements widens the gap between traditional and contemporary medicine, Deng hopes to deconstruct it all and make healthcare technology more accessible to patients and providers.

“At the end of the day, it’s data that doctors are communicating to patients,” Deng says. So she’s grateful to have gained experience interpreting and modeling data at Duke through her academic coursework.

And as the Statistics major particularly has taught her, complexity is not always a good thing – sometimes, the simpler you can make something, the better. “Some research doesn’t always do this,” she says – she’s encountered her fair share of research that feels performative, prioritizing complexity to appear more intellectual. But by continually asking herself whether her research is explainable and applicable, she hopes to let those two questions be the North Stars that guide her future research endeavors.

At the end of the day, it’s data that doctors are communicating to patients.

Eden Deng

When asked what advice she has for first-years, Deng said that it’s important “to not let your inexperience or perceived lack of knowledge prevent you from diving into what interests you.” Even as a first-year undergrad, know that you can contribute to academia and the world of research.

And for those who might be interested in pursuing an academic path like Deng’s, there’s some good news. After Deng talked to the Statistics department about the lack of pre-health representation, the department created a pre-health listserv that students can join for updates and opportunities pertaining specifically to pre-med Stats majors. And Deng emphasizes that the Stats-CS-pre-med group at Duke is growing: she’s noticed quite a few underclassmen in the Statistics and Computer Science departments who vocalize an interest in medical school.

So if you also want to hone your ability to communicate research that you care about – whether you’re pre-med or not – feel free to jump right into the world of data analysis. As Deng concludes, “everyone has something to say that’s important.”

Post by Meghna Datta, Class of 2023
