Following the people and events that make up the research community at Duke


Category: Artificial Intelligence

Sharing a Love of Electrical Engineering With Her Students

Note: Each year, we partner with Dr. Amy Sheck’s students at the North Carolina School of Science and Math to profile some unsung heroes of the Duke research community. This is the seventh of eight posts.

“As a young girl, I always knew I wanted to be a scientist,” Dr. Tania Roy shares as she sits in her Duke Engineering office located next to state-of-the-art research equipment.

Dr. Tania Roy of Duke Engineering

The path to achieving her dream took her to many places and unique research opportunities. After completing her bachelor’s in India, she found herself pursuing further studies at universities in the United States, eventually receiving her Ph.D. from Vanderbilt University. 

Throughout these years, Roy was able to explore and contribute to a variety of fields within electrical engineering, including energy-efficient electronics, two-dimensional materials, and neuromorphic computing. But her deepest passion is engaging upcoming generations in electrical engineering research.

As an assistant professor of electrical and computer engineering in Duke’s Pratt School of Engineering, Tania Roy gets to do exactly that. She finds happiness in mentoring her passionate young students, who work on problems spanning fields such as biomedical and mechanical engineering, though her own specialty is electrical engineering.

Roy walks through the facilities, carefully explaining the purpose of each piece of equipment, when we run into one of her students. She explains that his project involves developing hardware for artificial intelligence and walks us through the core idea of computer vision.

Roy in her previous lab at the University of Central Florida. (UCF photo)

Through sharing her passion for electrical engineering, Roy hopes to motivate and inspire a new generation. 

“The field of electrical engineering is expected to experience immense growth in the future, especially with the recent trends in technological development,” she says, explaining that interest in the field must grow if that expansion is to meet demand.

The recent shortage of semiconductor chips for the industrial market is an example of this: it disrupted the supply of countless products that rely on these fundamental components, Roy says. By sparking students’ interest, and thereby increasing the number of students pursuing electrical engineering, she argues, we can build a foundation for advancing the technologies powering our society today.

With a strong research background of her own, she is well equipped for the role of advocate and mentor. She has worked with gallium nitride, a material valued for withstanding high breakdown voltages. Breakdown occurs when the insulation between two conductors or electrical components fails, allowing electrical current to flow through it, and it usually happens when the voltage across the insulating material exceeds a certain threshold known as the breakdown voltage.

In electric vehicles, high breakdown voltage is crucial for several reasons related to the safety, performance, and efficiency of the vehicle’s electrical system, and Roy’s work directly impacts this. She has also conducted extensive research on 2D materials and their photovoltaic capabilities, and is currently working on developing brain-inspired computer architectures for machine learning algorithms. Similar to the work of her student, this research utilizes the structure of the human brain to model an architecture for AI, replicating the synapses and neural connections.

As passionate as she is about research, she shares that she used to love going to art galleries and looking at paintings. “I could do it for hours,” Roy says. These days, when she is not actively pursuing her research, she enjoys spending time with her two young children.

“I hope to share my dream with this new generation,” Roy concludes.

Guest post by Sutharsika Kumar, North Carolina School of Science and Mathematics, Class of 2024

Putting Stronger Guardrails Around AI

AI regulation is ramping up worldwide. Duke AI law and policy expert Lee Tiedrich discusses where we’ve been and where we’re going.

DURHAM, N.C. — It’s been a busy season for AI policy.

The rise of ChatGPT unleashed a frenzy of headlines around the promise and perils of artificial intelligence, and raised concerns about how AI could impact society without more rules in place.

Consequently, government intervention entered a new phase in recent weeks as well. On Oct. 30, the White House issued a sweeping executive order regulating artificial intelligence.

The order aims to establish new standards for AI safety and security, protect privacy and equity, stand up for workers and consumers, and promote innovation and competition. It’s the U.S. government’s strongest move yet to contain the risks of AI while maximizing the benefits.

“It’s a very bold, ambitious executive order,” said Duke executive-in-residence Lee Tiedrich, J.D., who is an expert in AI law and policy.

Tiedrich has been meeting with students to unpack these and other developments.

“The technology has advanced so much faster than the law,” Tiedrich told a packed room in Gross Hall at a Nov. 15 event hosted by Duke Science & Society.

“I don’t think it’s quite caught up, but in the last few weeks we’ve taken some major leaps and bounds forward.”

Countries around the world have been racing to establish their own guidelines, she explained.

The same day the White House issued its executive order, leaders from the Group of Seven (G7) — which includes Canada, France, Germany, Italy, Japan, the United Kingdom and the United States — announced that they had reached agreement on a set of guiding principles on AI and a voluntary code of conduct for companies.

Both actions came just days before the first ever global summit on the risks associated with AI, held at Bletchley Park in the U.K., during which 28 countries including the U.S. and China pledged to cooperate on AI safety.

“It wasn’t a coincidence that all this happened at the same time,” Tiedrich said. “I’ve been practicing law in this area for over 30 years, and I have never seen things come out so fast and furiously.”

The stakes for people’s lives are high. AI algorithms do more than just determine what ads and movie recommendations we see. They help diagnose cancer, approve home loans, and recommend jail sentences. They filter job candidates and help determine who gets organ transplants.

Which is partly why we’re now seeing a shift in the U.S. from what has been a more hands-off approach to “Big Tech,” Tiedrich said.

Tiedrich presented Nov. 15 at an event hosted by Duke Science & Society.

In the 1990s when the internet went public, and again when social media started in the early 2000s, “many governments — the U.S. included — took a light touch to regulation,” Tiedrich said.

But this moment is different, she added.

“Now, governments around the world are looking at the potential risks with AI and saying, ‘We don’t want to do that again. We are going to have a seat at the table in developing the standards.’”

Power of the Purse

Biden’s AI executive order differs from laws enacted by Congress, Tiedrich acknowledged in a Nov. 3 meeting with students in Pratt’s Master of Engineering in AI program.

Congress continues to consider various AI legislative proposals, such as the recently introduced bipartisan Artificial Intelligence Research, Innovation and Accountability Act, “which creates a little more hope for Congress,” Tiedrich said.

What gives the administration’s executive order more force is that “the government is one of the big purchasers of technology,” Tiedrich said.

“They exercise the power of the purse, because any company that is contracting with the government is going to have to comply with those standards.”

“It will have a trickle-down effect throughout the supply chain,” Tiedrich said.

The other thing to keep in mind is “technology doesn’t stop at borders,” she added.

“Most tech companies aren’t limiting their market to one or two particular jurisdictions.”

“So even if the U.S. were to have a complete change of heart in 2024” and the next administration were to reverse the order, “a lot of this is getting traction internationally,” she said.

“If you’re a U.S. company, but you are providing services to people who live in Europe, you’re still subject to those laws and regulations.”

From Principles to Practice

Tiedrich said a lot of what’s happening today in terms of AI regulation can be traced back to a set of guidelines issued in 2019 by the Organization for Economic Cooperation and Development, where she serves as an AI expert.

These include commitments to transparency, inclusive growth, fairness, explainability and accountability.

For example, “we don’t want AI discriminating against people,” Tiedrich said. “And if somebody’s dealing with a bot, they ought to know that. Or if AI is involved in making a decision that adversely affects somebody, say if I’m denied a loan, I need to understand why and have an opportunity to appeal.”

“The OECD AI principles really are the North Star for many countries in terms of how they develop law,” Tiedrich said.

“The next step is figuring out how to get from principles to practice.”

“The executive order was a big step forward in terms of U.S. policy,” Tiedrich said. “But it’s really just the beginning. There’s a lot of work to be done.”

By Robin Smith

Leveraging Google’s Technology to Improve Mental Health

Last Tuesday, October 10, was World Mental Health Day. To mark the occasion, the Duke Institute for Brain Sciences, in partnership with other student wellness organizations, welcomed Dr. Megan Jones Bell, PsyD, the clinical director of consumer and mental health at Google, to discuss mental health. Bell was formerly chief strategy and science officer at Headspace and helped guide Headspace through its transformation from a meditation app into a comprehensive digital mental health platform, Headspace Health. Bell also founded one of the first digital mental health start-ups, Lantern, where she pioneered blended mental health interventions leveraging software and coaching. In her conversation with Dr. Murali Doraiswamy, Duke professor of psychiatry and behavioral sciences, and Thomas Szigethy, Associate Dean of Students and Director of Duke’s Student Wellness Center, Bell revealed the actions Google is taking to improve the health of the billions of people who use its platforms. 

She began by defining mental health, paraphrasing the World Health Organization’s definition. She said, “Mental health, to me, is a state of wellbeing in which the individual realizes his or her or their own abilities, can cope with the normal stresses of life, work productively and fruitfully, and can contribute to their own community.” Rather than taking a medicalized approach to mental health, she argued, mental health should be recognized as something that we all have. Critically, she said that mental health is not just mental disorders; the first step to improving mental health is recognition and upstream intervention.

Underlining the critical role Google plays in global mental health, Bell cited multiple statistics: three out of four people turn to the internet first for health information, Google Search handles 100 million health-related searches every day, and YouTube boasts 25 billion views of mental health content. Given those billions of users, Bell emphasized Google’s enormous responsibility to provide people with accurate, authoritative, and empathetic information. The company’s mental health goals are tailored to different communities, and Bell described them for three principal audiences: consumers, caregivers, and communities. 

Google’s consumer-facing focus is providing access to high-quality information and tools that help users manage their health. With regard to caregivers, Google strives to build strong partnerships that produce solutions to transform care delivery. In terms of community health, the company works with public health organizations worldwide, focusing on social determinants of health and aiming to open up data and insights to the public health community. 

Szigethy followed by launching a discussion of Google’s efforts to protect adolescents. He referenced the growing and urgent mental health crisis amongst adolescents; what is Google doing to protect them? 

Bell mentioned multiple projects across different platforms designed to give youth safer online experiences. Key to these projects is the desire to promote their mental health by default. On Google Search, this takes the form of the SafeSearch feature, which is on by default and filters out explicit or inappropriate results. On YouTube, default policies include various prevention measures, one of which automatically removes content considered “imitable.” Bell used disordered eating content to explain the policy: in keeping with the prevention approach, YouTube removes dangerous eating-related content containing anything that the viewer can copy. YouTube also has age-restricted videos, unavailable to users under 18, as well as certain product features that can be blocked. Google also created an eating disorder hotline with experts online 24/7. 

Jokingly, Bell assured the Zoom audience that Google wouldn’t be creating a therapist chatbot anytime soon — she asserted that digital tools are not “either or.” When the conversation veered towards generative AI, Bell admitted that AI has enormous potential for helping billions of people, but maintained that it needs to be developed in a responsible way. At Google, the greatest service AI provides is scalability. Google.org, Bell said, recently worked with The Trevor Project and ReflexAI on a crisis hotline for veterans called HomeTeam. Google used AI that simulated crises to help scale up training for volunteers. Bell said, “The human is still on the other side of the phone, and AI helped achieve that.” 

Next, Bell tackled the question of health information and misinformation, what she called a significant area of focus for Google. Before diving in, however, Bell clarified, “It’s not up to Google to decide what is accurate and what is not accurate.” Rather, she said that anchoring to trusted organizations is critical to embedding mental health into the culture of a community. When it comes to health information and misinformation, Bell encapsulated Google’s philosophy in this phrase: “define, operationalize, and elevate high quality information.” To combat misinformation on its platforms, Google asked the National Academy of Medicine to help define what counts as an accurate medical source. The Academy then put together a framework of authoritative health information, which the World Health Organization later adapted internationally. YouTube then launched its “health sources” feature, where videos from the framework are the first thing that you see; in effect, the highest-quality information is raised to the top of the page when you make a search. Videos in this framework also carry a visible badge on the watch panel that features a phrase like “from a healthcare professional” or “from an organization with a healthcare professional.” Bell suggested that this also helps people remember where their information is coming from, acting as a guardrail in itself. Additionally, Google continues to fight medical misinformation with an updated medical misinformation policy, which enables it to remove content that contradicts medical authorities or medical consensus. 

Near the end of the conversation, Szigethy asked Bell if she would recommend any behaviors for embracing wellbeing. A prevention researcher by background, Bell stressed the importance of early and regular action. Our biggest leverage point for changing mental health, she asserted, is upstream intervention and embracing routines that foster our mental health. She breaks these down into five dimensions of wellbeing: mindfulness, sleep, movement and exercise, nutrition, and social connection. Her advice is to ask the question: what daily/weekly routines do I have that foster each of these? Make a list, she suggests, and try to incorporate a daily routine that addresses each of the five dimensions. 

Before concluding, Bell advocated that the best thing that we can do is to approach mental health issues with humility and listen to a community first. She shared that, at Headspace, her team worked with the mayor’s office and community organizations in Hartford, Connecticut to co-define their mental health goals and map the strengths and assets of the community. Then, they could start to think about how to contextualize Headspace in that community. Bell graciously entered the Duke community with the same humility, and her conversation was a wonderful commemoration of World Mental Health Day. 

By Isa Helton, Class of 2026

My Face Belongs to The Hive (and Yours Does Too)

Imagine having an app that could identify almost anyone using only a photograph of their face. For example, you could take a photograph of a stranger in a dimly lit restaurant and know within seconds who they are.

This technology exists, and Kashmir Hill has reported on several companies that offer these services.

An investigative journalist with the New York Times, Hill visited Duke Law Sept. 27 to talk about her new book, Your Face Belongs To Us.

The book is about a company that developed powerful facial recognition technology based on images harnessed from our social media profiles. To learn more about Clearview AI, the unlikely duo who were behind it, and how they sold it to law enforcement, I highly recommend reading this book.

Hill demonstrated for me a facial recognition app that provides subscribers with up to 25 face searches a day. She offered to let me see how well it worked.

Screen shot of the search app with Hill’s quick photo of me.

She snapped a quick photo of my face in dim lighting. Within seconds (3.07 to be exact), several photos of my face appeared on her phone.

The first result (top left) is unsurprising. It’s the headshot I use for the articles I write on the Duke Research Blog. The second result (top right) is a photo of me at my alma mater in 2017, where I presented at a research conference. The school published an article about the event, and I remember the photographer coming around to take photos. I was able to easily figure out exactly where on the internet both results had been pulled from.

The third result (second row, left) unsettled me. I had never seen this photo before.

A photo of me sitting between friends. Their faces have been blurred out.

After a quick search of the watermark on the photo (which has been blurred for safety), I discovered that the photograph was from an event I attended several years ago. Apparently, the venue had used the image for marketing on their website. Using these facial recognition results, I was able to easily find out the exact location of the event, its date, and who I had gone with.

What is Facial Recognition Technology?

Researchers have been trying for decades to produce technology that can accurately identify human faces. The advent of neural networks has made it possible for computer algorithms to do this with increasing accuracy and speed. However, this technology requires large data sets (in this case, hundreds of thousands of example faces) to work.
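
To make the idea concrete, here is a minimal, purely illustrative sketch of how embedding-based face search generally works: a neural network (not shown here) converts each photo into a numeric vector, and identification becomes a nearest-neighbor lookup over many stored vectors. The file names and random vectors below are invented for illustration; this is not Clearview AI’s system or any real recognition model.

```python
# Conceptual sketch only: real systems use a trained neural network to produce
# the embedding vectors; here they are random stand-ins.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two embedding vectors (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search_faces(query: np.ndarray, database: dict, top_k: int = 3):
    """Return the top_k database entries most similar to the query embedding."""
    scores = [(name, cosine_similarity(query, emb)) for name, emb in database.items()]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)[:top_k]

# Toy "scraped photo" database of pretend embeddings (hypothetical file names).
rng = np.random.default_rng(0)
database = {f"photo_{i:04d}.jpg": rng.normal(size=128) for i in range(1000)}

# A query face: in reality this comes from running a new photo through the
# same embedding network; here we perturb a stored vector to simulate a new
# photo of the same person.
query = database["photo_0042.jpg"] + rng.normal(scale=0.05, size=128)

print(search_faces(query, database))  # photo_0042.jpg should rank first
```

The scale of the database, rather than the cleverness of the lookup, is what makes such systems powerful, which is why the scraping described below matters so much.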

Just think about how many photos of you exist online. There are the photos that you have taken and shared or that your friends and family have taken of you. Then there are photos that you’re unaware that you’re in – perhaps you walked by as someone snapped a picture and accidentally ended up in the frame. I don’t consider myself a heavy user of social media, but I am sure there are thousands of pictures of my face out there. I’ve uploaded and classified hundreds of photos of myself across platforms like Facebook, Instagram, LinkedIn, and even Venmo.

The developers behind Clearview AI recognized the potential in all these publicly accessible photographs and compiled them to create a massive training dataset for their facial recognition AI. They did this by scraping the social media profiles of hundreds of thousands of people. In fact, they got something like 2.1 million images of faces from Venmo and Tinder (a dating app) alone.

Why does this matter?

Clearly, there are major privacy concerns for this kind of technology. Clearview AI was marketed as being only available to law enforcement. In her book, Hill gives several examples of why this is problematic. People have been wrongfully accused, arrested, detained, and even jailed for the crime of looking (to this technology) like someone else.

We also know that AI has problems with bias. Facial recognition technology was first developed by mostly white, mostly male researchers, using photographs of mostly white, mostly male faces. The result of this has had a lasting effect. Marginalized communities targeted by policing are at increased risk, leading many to call for limits on the use of facial recognition by police.

It’s not just government agencies who have access to facial recognition. Other companies have developed off-the-shelf products that anyone can buy, like the app Hill demonstrated to me. This technology is now available to anyone willing to pay for a subscription. My own facial recognition results show how easy it is to find out a lot about a person (like their location, acquaintances, and more) using these apps. It’s easy to imagine how this could be dangerous.

There remain reasons to be optimistic about the future of privacy, however. Hill closed her talk by reminding everyone that with every technological breakthrough, there is opportunity for ethical advancement reflected by public policy. With facial recognition, policy makers have previously relied on private companies to make socially responsible decisions. As we face the results of a few radical actors using the technology maliciously, we can (and should) respond by developing legal restraints that safeguard our privacy.

On this front, Europe is leading by example. It’s likely that the actions of Clearview AI are already illegal in Europe, and European lawmakers are expanding privacy rights further with the European Commission’s proposed Artificial Intelligence (AI) regulation. These rules include requirements for technology developers to certify the quality of their processes, rather than algorithm performance, which would mitigate some of these harms. The regulation aims to take a technology-neutral approach and stratifies facial recognition technology by its potential risk to people’s safety, livelihoods, and rights.

Post by Victoria Wilson, MA Bioethics and Science Policy, 2023

Neuroscience Shows Why Sex Assault Victims “Freeze.” It’s Not Consent.

Warning: the following article discusses rape and sexual assault. If you or someone you know has been sexually assaulted, help is available.

Image: DreamStudio AI, with prompt “Woman, screaming, sitting on the witness stand in a U.S. court of law, in the style of Edvard Munch’s ‘The Scream’”

“You never screamed for help?”

“Why didn’t you fight back?”

These are questions that lawyers asked E. Jean Carroll in her rape case against former president Donald J. Trump this spring. These kinds of questions reflect a myth about rape: that it’s only rape if the victim puts up a fight.

A recent review of the research, “Neuroscience Evidence Counters a Rape Myth,” aims to set the record straight. It serves as a call to action for those in the scientific and legal professions. Ebani Dhawan completed this work at University College London with Professor Patrick Haggard. She is now my classmate at Duke University, where she is pursuing an MA in Bioethics & Science Policy.

Ebani Dhawan

Commonly accepted beliefs and myths about rape are a persistent problem in defining and prosecuting sexual assault. The intentions of all actors are examined in the courtroom. If a victim freezes or does not attempt to resist during a sexual assault, perpetrators may claim there was passive acquiescence: that consent was assumed from an absence of resistance.

From the moment a victim reports an assault, the legal process poses “why” questions about the survivor’s behavior. This is problematic because it upholds the idea that survivors can (and should) choose to scream or fight back during an assault.

This new paper presents neuroscientific evidence which counters that misconception. Many survivors of sexual assault report ‘freezing’ during an assault. The researchers argue that this is an involuntary response to a threat which can prevent a victim from actively resisting, and that it occurs throughout biology.

Animal studies have demonstrated that severe, urgent threats, like assault or physical restraint, can trigger a freeze response involving fixed posture (tonic immobility) or loss of muscle tone (collapsed immobility). Self-reports of these states in humans point to an important insight about immobility: namely, that we are unable to make voluntary actions during this freezing response.

An example of this is the “lockup” state displayed by pilots during an aviation emergency. After a plane crash, it’s hard to imagine anyone asking a pilot if they froze because they really wanted to crash the plane.

Yet victims of sexual assault are quite frequently asked to explain the freeze response, a task made even more difficult by the impaired memory and diminished sense of agency that often accompany trauma.

The legal process around sexual assault should be updated to reflect this neuroscientific evidence.

THIS MYTH HAS REAL CONSEQUENCES.

The vast majority of sexual assault cases do not result in a conviction. It is estimated that out of every 1,000 sexual assaults in the U.S., only 310 are reported to the police and only 28 lead to felony conviction. That is a conviction rate of less than 3%.

In England and Wales, just 3% of rapes recorded in the previous year resulted in charges. According to RAINN, one of the leading anti-sexual assault organizations, many victims don’t report because they believe the justice system would not do anything to help — a belief that these conviction rates support.

E. Jean Carroll named this in her trial. She said, “Women don’t come forward. One of the reasons they don’t come forward is because they’re always asked, why didn’t you scream? You better have a good excuse if you didn’t scream.”

This research serves as a much-needed call-to-action. By revisiting processes steeped in myth, justice can be better served.

I asked Ebani what she thinks must be done. Here are her recommendations:

  1. The neuroscience community should pursue greater mechanistic understanding of threat processing and involuntary action processes and the interaction between them. 
  2. Activists and legal scholars should advocate for processes reflective of the science behind involuntary responses like freezing, and the inability of victims to explain that behavior.
  3. Neuroscientists should contribute to police officers’ education regarding involuntary responses to rape and sexual assault.

“I’m telling you: He raped me whether I screamed or not.” – E. Jean Carroll

Post by Victoria Wilson, Class of 2023

When Art and Science Meet as Equals

Artists and scientists in today’s world often exist in their own disciplinary silos. But the Laboratory Art in Practice Bass Connections team hopes to rewrite this narrative, by engaging Duke students from a range of disciplines in a 2-semester series of courses designed to join “the artist studio, the humanities seminar room, and the science lab bench.” Their work culminated in “re:process” – an exhibition of student artwork on Friday, April 28, in the lobby of the French Family Science Center. Rather than science simply engaging artistic practice for the sake of science, or vice versa, the purpose of these projects was to offer an alternate reality where “art and science meet as equals.”

The re:process exhibition

Liuren Yin, a junior double-majoring in Computer Science and Visual and Media Studies, developed an art project focused on the experience of prosopagnosia, or face blindness. Individuals with this condition are unable to tell two distinct faces apart, including their own, and often rely on body language, clothing, and the sound of a person’s voice to determine someone’s identity. Drawing on her computer science background, she developed an algorithm that takes distinct faces as input and outputs the way those faces are perceived by someone with prosopagnosia.

Yin’s project exploring prosopagnosia

Next to the computer and screen flashing between indistinguishable faces, she’s propped up a mirror for passers-by to look at themselves and contemplate the questions that inspired her to create this piece. Yin says that as she learned about prosopagnosia, where every face looks the same, she found herself wondering, “how am I different from a person that looks like me?” Interrogating the link between our physical appearance and our identity is at the root of Yin’s piece. Especially in an era where much of our identity exists online and appearance can be curated any way one wants, Yin considers this artistic piece especially timely. She writes in her program note that “my exposure to technologies such as artificial intelligence, generative algorithms, and augmented reality makes me think about the combination and conflict between human identity and these futuristic concepts.”

Eliza Henne, a junior majoring in Art History with a concentration in Museum Theory and Practice, focused more on the biological world in her project, which used a lavender plant in different forms to ask questions like “what is truthful, and what do we consider real?” By displaying a live plant, an illustration of a plant, and pressings from a plant, she invites viewers to consider how every rendition of a commonly used model organism in scientific experiments omits some information about the reality of the organism.

Junior Eliza Henne

For example, lavender pressings have materiality, but there’s no scent or dimension to the plant. A detailed illustration can capture even the way light illuminates the thin veins of the leaf, but it is merely an illustration of a live being. The plant itself, the most conventionally “real” of the three, can only be seen in that kind of illustrative detail under a microscope or in a diagram.

In walking through the lobby of FFSC, where these projects and more are displayed, you’re surrounded by conventionally scientific materials, like circuit boards, wires, and petri dishes, which, in an unusual turn of events, are being used for seemingly unscientific endeavors. These endeavors – illustrating the range of human emotion, showcasing behavioral patterns like overconsumption, or demonstrating the imperfection inherent to life – might at first glance feel more at home in an art museum or on a performing arts stage.

But the students and faculty involved in this exhibition see that as the point. Maybe it isn’t so unnatural to build a bridge between the arts and the sciences – maybe, they are simply two sides of the same coin.

Post by Meghna Datta, Class of 2023

Senior Jenny Huang on her Love for Statistics and the Scientific Endeavor

Statistics and computer science double major Jenny Huang (T’23) started Duke as many of us do – vaguely pre-med, undecided on a major – but she knew she had an interest in scientific research. Four years later, with a Quad Fellowship and an acceptance to MIT for her doctoral studies, she reflects on how research shaped her time at Duke, and how she hopes to impact research.

Jenny Huang (T’23)

What is it about statistics? And what is it about research?

With experience in biology research during high school and during her first year at Duke, Huang toyed with the idea of an MD/PhD, but ultimately realized that she might be better off dropping the MD. “I enjoy figuring out how the world works,” Huang says, and statistics provided a language to examine the probabilistic and often unintuitive nature of the world around us.

In another life, Huang remarked, she might have been a physics and philosophy double major, because physics offers the most fundamental understanding of how the world works, and philosophy is similar to scientific research: in both, “you pursue the truth through cyclic questioning and logic.” She’s also drawn to engineering, because it’s the process of dissecting things until you can “build them back up from first principles.”

At the International Society for Bayesian Analysis summer conference in Montreal

Huang’s research and the impact of COVID-19

For Huang, research started her first year at Duke on a Data+ team led by Professor Charles Nunn, which studied the variation in parasite richness across primate species. To map out which types of parasites interacted with which primates, the team relied on predictors such as body mass, diet, and social activity, but in the process, they came up against an interesting phenomenon.

It appeared that the more studied a primate was, the more interactions it would have with parasites, simply because of the amount of information available on the primate. Due to geographic and experimental constraints, however, a large portion of the primate-parasite network remained understudied. This example of a concept in statistics known as sampling bias was muddling their results. One day, while making an offhand remark about the problem to one of her professors (Professor David Dunson), Huang ended up arranging a serendipitous research match. It turned out that Dunson had a statistical model that could be applied to the problem Nunn and the Data+ team were facing.
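
To see how this kind of sampling bias can distort a dataset, here is a tiny, hypothetical simulation; it is not the Data+ team’s actual model, and the numbers are invented. Every host species below truly interacts with the same number of parasites, yet the species that have been studied more often appear richer simply because more of their interactions have been recorded.

```python
# Minimal sampling-bias demo: identical true parasite richness for every host,
# but observed richness tracks study effort.
import random

random.seed(1)
TRUE_PARASITES_PER_HOST = 30  # same true richness for every host species
study_effort = {"well_studied_host": 200,
                "moderately_studied_host": 50,
                "rarely_studied_host": 5}

for host, n_studies in study_effort.items():
    observed = set()
    for _ in range(n_studies):
        # each study records one (possibly already-known) parasite at random
        observed.add(random.randrange(TRUE_PARASITES_PER_HOST))
    print(f"{host}: {len(observed)} parasites observed "
          f"(true value: {TRUE_PARASITES_PER_HOST})")
```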

Huang was enamored by the applicability of statistics to so many different fields. When COVID-19 hit, it affected all of us to some degree, but for Huang it also provided the perfect opportunity to apply mathematical models to a rapidly changing pandemic. For the past two summers, through work on a DOMath project with Dunson, as well as with Professor Jason Xu and Professor Rick Durrett, Huang has used mathematical modeling to assess changes in the spread of COVID-19.

On inclusivity in research

As of 2018, just 28% of graduates in mathematics and statistics at the doctoral level identified as women. Huang will eventually be included in this percentage, seeing as she begins her Ph.D. at MIT’s Department of Electrical Engineering and Computer Science in the fall, working with Professor Tamara Broderick.

“When I was younger, I always thought that successful and smart people in academia were white men,” Huang laughed. But that’s not true, she emphasizes: “it’s just that we don’t have other people in the story.” As one of the few female-presenting people in her research meetings, Huang has often felt pressure to underplay her more “girly” traits to fit in. But interacting with intelligent, accomplished female-identifying academics in the field (including collaborations with Professor Cynthia Rudin) reaffirms to her that it’s important to be yourself: “there’s a place for everyone in research.”

At the Joint Statistical Meetings Conference in D.C. with fellow researcher Gaurav Parikh

Advice for first-years and what the future holds

While she can’t predict exactly where she’ll end up, Huang is interested in taking a proactive role in shaping the impacts of artificial intelligence and machine learning on society. And as the divide between academia and industry grows ever blurrier, she sees herself, years from now, existing somewhere in that in-between space.

Her advice for incoming Duke students and aspiring researchers is threefold. First, Huang emphasizes the importance of mentorship. Having kind and validating mentors throughout her time at Duke made difficult problems in statistics so much more approachable for her, and in research, “we need more of that type of person!”

Second, she says that “when I first approached studying math, my impatience often got in the way of learning.” Slowing down with the material and allowing herself the time to learn things thoroughly helped her improve her academic abilities.

Being around people who have this shared love and a deep commitment for their work is just the human endeavor at its best.

Jenny Huang

Lastly, she stresses the importance of collaboration. Sometimes, Huang remarked, “research can feel isolating, when really it is very community-driven.” When faced with a tough problem, there is nothing more rewarding than figuring it out together with the help of peers and professors. And she is routinely inspired by the people she does research with: “being around people who have this shared love and a deep commitment for their work is just the human endeavor at its best.”

Post by Meghna Datta, Class of 2023

(Editor’s note: This is Jenny’s second appearance on the blog. As a senior at NC School of Science and Math, she wrote a post about biochemist Meta Kuehn.)

How Research Helped One Pre-med Discover a Love for Statistics and Computer Science

If you’re a doe-eyed first-year at Duke who wants to eventually become a doctor, chances are you are currently, or will soon, take part in a pre-med rite of passage: finding a lab to research in.

Most pre-meds find themselves researching in the fields of biology, chemistry, or neuroscience, with many hoping to make research a part of their future careers as clinicians. Undergraduate student and San Diego native Eden Deng (T’23) also found herself plodding along a similar path in a neuroimaging lab her freshman year.

Eden Deng T’23

At the time, she was a prospective neuroscience major on the pre-med track. But as she soon realized, neuroimaging is done through fMRI. And to analyze fMRI data, you need to be able to conduct data analysis.

This initial research experience at Duke in the Martucci Lab, which looks at chronic pain and the role of the central nervous system, sparked a realization for Deng. “Ninety percent of my time was spent thinking about computational and statistical problems,” she explained to me. Analysis was new to her, and as she found herself struggling with it, she thought to herself, “why don’t I spend more time getting better at that academically?”

Deng at the Martucci Lab

This desire to get better at research led Deng to pursue a major in Statistics with a secondary in Computer Science, while still on the pre-med track. Many people might instantly think about how hard it must be to fit in so much challenging coursework that has virtually no overlap. And as Deng confirmed, her academic path has not been without challenges.

For one, she’s never really liked math, so she was wary of getting into computation. Additionally, considering that most Statistics and Computer Science students want to pursue jobs in the technology industry, it’s been hard for her to connect with like-minded people who are equally familiar with computers and the human body.

“I never felt like I excelled in my classes,” Deng said. “And that was never my intention.” Deng had to quickly get used to facing what she didn’t know head-on. But as she kept her head down, put in the work, and trusted that eventually she would figure things out, the merits of her unconventional academic path started to become more apparent.

Research at the intersection of data and health

Last summer, Deng landed a summer research experience at Mount Sinai, where she looked at patient-level cancer data. Utilizing her knowledge in both biology and data analytics, she worked on a computational screener that scientists and biologists could use to measure gene expression in diseased versus normal cells. This will ultimately aid efforts in narrowing down the best genes to target in drug development. Deng will be back at Mount Sinai full-time after graduation, to continue her research before applying to medical school.

Deng presenting on her research at Mount Sinai

But in her own words, Deng’s favorite research experience has been her senior thesis through Duke’s Department of Biostatistics and Bioinformatics. Last year, she reached out to Dr. Xiaofei Wang, who is part of a team conducting a randomized controlled trial to compare the merits of two different lung tumor treatments.

Generally, when faced with lung disease, the conservative approach is to remove the whole lobe. But that can pose challenges to the quality of life of people who are older, with more comorbidities. Recently, there has been a push to focus on removing smaller sections of lung tissue instead. Deng’s thesis looks at patient surgical data over the past 15 years, showing that patient survival rates have improved as these segmentectomies – removals of smaller sections of tissue – have become more frequent in select groups of patients.

“I really enjoy working on it every week,” Deng says about her thesis, “which is not something I can usually say about most of the work I do!” According to Deng, a lot of research – hers included – is derived from researchers mulling over what they think would be interesting to look at in a silo, without considering what problems might be most useful for society at large. What’s valuable for Deng about her thesis work is that she’s gotten to work closely with not just statisticians but thoracic surgeons. “Originally my thesis was going to go in a different direction,” she said, but upon consulting with surgeons who directly impacted the data she was using – and would be directly impacted by her results – she changed her research question. 

The merits of an interdisciplinary academic path

Deng’s unique path makes her the perfect person to ask: is pursuing seemingly disparate interests, like being a Statistics and Computer Science double-major on the pre-med track, worth it? And judging by Deng’s insights, the answer is a resounding yes.

At Duke, she says, “I’ve been challenged by many things that I wouldn’t have expected to be able to do myself” – like dealing with the catch-up work of switching majors and pursuing independent research. But over time she’s learned that even if something seems daunting in the moment, if you apply yourself, most, if not all things, can be accomplished. And she’s grateful for the confidence that she’s acquired through pursuing her unique path.

Moreover, as Deng reflects on where she sees herself – and the field of healthcare – a few years from now, she muses that for the first time in the history of healthcare, a third-party player is joining the mix – technology.

While her initial motivation to pursue statistics and computer science was to aid her in research, “I’ve now seen how it’s beneficial for my long-term goals of going to med school and becoming a physician.” As healthcare evolves and the introduction of algorithms, AI and other technological advancements widens the gap between traditional and contemporary medicine, Deng hopes to deconstruct it all and make healthcare technology more accessible to patients and providers.

“At the end of the day, it’s data that doctors are communicating to patients,” Deng says. So she’s grateful to have gained experience interpreting and modeling data at Duke through her academic coursework.

And as the Statistics major particularly has taught her, complexity is not always a good thing – sometimes, the simpler you can make something, the better. “Some research doesn’t always do this,” she says – she’s encountered her fair share of research that feels performative, prioritizing complexity to appear more intellectual. But by continually asking herself whether her research is explainable and applicable, she hopes to let those two questions be the North Stars that guide her future research endeavors.

At the end of the day, it’s data that doctors are communicating to patients.

Eden Deng

When asked what advice she has for first-years, Deng said that it’s important “to not let your inexperience or perceived lack of knowledge prevent you from diving into what interests you.” Even as a first-year undergrad, know that you can contribute to academia and the world of research.

And for those who might be interested in pursuing an academic path like Deng’s, there’s some good news. After Deng talked to the Statistics department about the lack of pre-health representation, the department created a pre-health listserv that you can join for updates and opportunities pertaining specifically to pre-med Stats majors. And Deng emphasizes that the Stats-CS-pre-med group at Duke is growing: she’s noticed quite a few underclassmen in the Statistics and Computer Science departments who voice an interest in medical school.

So if you also want to hone your ability to communicate research that you care about – whether you’re pre-med or not – feel free to jump right into the world of data analysis. As Deng concludes, “everyone has something to say that’s important.”

Post by Meghna Datta, Class of 2023

What is it like to Direct a Large, Externally-Funded Research Center?

What are the trials and tribulations one can expect? And conversely, what are the highlights? To answer these questions, Duke Research & Innovation Week kicked off with a panel discussion on Monday, January 23.

The panel

The panel was moderated by George A. Truskey, Ph.D., Associate Vice President for Research & Innovation and a professor in the Department of Biomedical Engineering. The panelists included…

  • Claudia K. Gunsch, Ph.D., a professor in the Departments of Civil & Environmental Engineering, Biomedical Engineering, and Environmental Science & Policy. Dr. Gunsch is the director of the NSF Engineering Research Center for Microbiome Engineering (PreMiEr) and is also the Associate Dean for Duke Engineering Research & Infrastructure.
Dr. Claudia Gunsch
  • Yiran Chen, Ph.D., a professor in the Department of Electrical & Computer Engineering. Dr. Chen is the director of the NSF AI Institute for Edge Computing (Athena).
Dr. Yiran Chen
  • Stephen Craig, Ph.D., a professor in the Department of Chemistry. Dr. Craig is the director of the Center for the Chemistry of Molecularly Optimized Networks (MONET).
Dr. Stephen Craig

The centers

As the panelists joked, a catchy acronym for a research center is almost an unspoken requirement. Case in point: PreMiEr, Athena, and MONET were the centers discussed on Monday. As evidenced by the diversity of research explored by the three centers, large externally-funded centers run the gamut of academic fields.

PreMiEr, which is led by Gunsch, is looking to answer the question of how we acquire our microbiomes. Globally, inflammatory diseases are connected to the microbiome, and studies suggest that our built environment is the problem, given that Americans spend, on average, less than 8% of their time outdoors. It’s atypical for an Engineering Research Center (ERC) to be concentrated in one state, but PreMiEr uniquely is. The center is a joint venture between Duke University, North Carolina A&T State University, North Carolina State University, the University of North Carolina – Chapel Hill and the University of North Carolina – Charlotte.

PreMiEr – not to be confused with the English Premier League

Dr. Chen’s Athena is the first funded AI institute for edge computing. Edge computing is all about helping computers process data faster and at greater volumes by processing data closer to where it’s being generated. AI is a relatively new branch of research, but it is growing in prevalence and in funding. In 2020, the National Science Foundation (NSF) funded seven AI institutes with a total of $140 million. By 2021, 11 institutes had been funded at $220 million – including Athena. Together, these institutes span 48 U.S. states.

Athena, the Greek goddess of wisdom, is a fitting namesake for a research center

MONET, led by Stephen Craig, is innovating in polymer chemistry. Conceptualizing polymers as operating in a network, the center aims to connect the behavior of a single chemical molecule in that network to the behavior of the network as a whole. The goal of the center is to transform polymer and materials chemistry by “developing the knowledge and methods to enable molecular-level, chemical control of polymer network properties for the betterment of humankind.” The center has nine partner institutions in the U.S. and one international partner.

MONET, like French painter Claude Monet

Key takeaways

Research that matters

Dr. Gunsch talked at length about how PreMiEr aspires to pursue convergent research. She describes this as identifying a large, societal challenge, then determining what individual fields can “converge” to solve the problem.

Because these centers aspire to solve large societal problems, market research and industry involvement are common and often required, typically in the form of an industry advisory group. At PreMiEr, the advisory group performs market analyses to assess the relevance and importance of the center’s research. Dr. Chen remarked that Athena has an advisory group as well, and that in addition to academic institutions, the center boasts corporate collaborators such as Microsoft, Motorola, and AT&T.

Dr. Chen presenting on Athena’s partner institutions at Monday’s talk.

Commonalities in structure

Most research centers, like PreMiEr, Athena, and MONET, organize their work around pillars or “thrusts.” This can help to make research goals understandable to a lay audience but also clarifies the purpose of these centers to the NSF, other funding bodies, host and collaborating institutions, and the researchers themselves.

How exactly these goals are organized and presented is up to the center in question. For example, MONET conceptualizes its vision into three fronts – “fundamental chemical advances,” “conceptual advances,” and “technological advances.”

At Athena, the research is organized into four “thrusts” – “AI for Edge Computing,” “AI-Powered Computer Systems,” “AI-Powered Networking Systems,” and “AI-Enabled Services and Applications.”

Meanwhile, at PreMiEr, the three “thrusts” have a more procedural slant. The first “thrust” is “Measure,” involving the development of tracking tools and the exploration of microbial “dark matter.” Then there’s “Modify,” or the modification of target delivery methods based on measurements. Finally, “Modeling” involves predictive microbiome monitoring to generate models that can help analyze built environment microbiomes.  

A center is about the people  

“Collaborators who change what you can do are a gift. Collaborators who change how you think are a blessing.”

Dr. Stephen Craig

All three panelists emphasized that their centers would be nowhere without the people that make the work possible. But of course, humans complicate every equation, and when working with a team, it is important to anticipate and address tensions that may arise.

Dr. Craig spoke to the fact that successful people are also busy people, so what may be one person’s highest priority may not necessarily be another person’s priority. This makes it important to assemble a team of researchers that are united in a common vision. But, if you choose wisely, it’s worth it. As Dr. Craig quipped on one of his slides, “Collaborators who change what you can do are a gift. Collaborators who change how you think are a blessing.”

In academia, there is a loud push for diversity, and research centers are no exception. Dr. Chen spoke about Athena’s goals to keep increasing its proportions of female and underrepresented minority (URM) researchers. At PreMiEr, which comprises 42 scholars, the ratio of non-URM to URM researchers is 83 to 17, and the ratio of male to female researchers is approximately 50-50.

In conclusion, cutting-edge research is often equal parts thrilling and mundane, as the realities of applying for funding, organizing manpower, pushing through failures, and working out tensions with others sets in. But the opportunity to receive funding in order to start and run an externally-funded center is the chance to put together some of the brightest minds to solve some of the most pressing problems the world faces. And this imperative is summarized well by the words of Dr. Craig: “Remember: if you get it, you have to do it!”

Post by Meghna Datta, Class of 2023

How Concerned Should You Be About AirTags?

Photograph of an AirTag from Wikimedia Commons. Image licensed under Creative Commons Attribution-Share Alike 4.0 International. Creator: KKPCW.

I didn’t even know what an AirTag was until I attended a cybersecurity talk by Nick Tripp, senior manager of Duke’s IT Security Office, but according to Tripp, AirTag technology is “something that the entire Duke community probably needs to be aware of.”

An AirTag is a small tracking device that can connect to any nearby Apple device using Bluetooth. AirTags were released by Apple in April 2021 and are designed to help users keep track of items like keys and luggage. Tripp himself has one attached to his keys. If he loses them, he can open the “Find My” app on his phone (installed by default on Apple devices), and if anyone else with an Apple device has been near his keys since he lost them, the Bluetooth technology will let him see where his keys were when the Apple device user passed them—or took them.

According to Tripp, AirTags have two distinct advantages over earlier tracking devices. First, they use technology that lets the “Find My” app provide “precise location tracking”—within an inch of the AirTag’s location. Second, because AirTags use the existing Apple network, “every iPhone and iPad in the world becomes a listening device.”

You can probably guess where this is going. Unfortunately, the very features that make AirTags so useful for finding lost or stolen items also make them susceptible to abuse. There are numerous reports of AirTags being used to stalk people. Tripp has seen that problem on Duke’s campus, too. He gives the example of someone going to a bar and later finding an AirTag in their bag or jacket without knowing who put it there. The IT Security Office at Duke sees about 2-3 suspected cyberstalking incidents per month, with 1-2 confirmed each year. Cyberstalking, Tripp emphasizes, isn’t confined to the internet. It “straddles the internet and the real world.” Not all of the cyberstalking reports Duke deals with involve tracking devices, but “the availability of low-cost tracking technology” is a concern. In the wrong hands, AirTags can enable dangerous stalking behavior.

As part of his IT security work, and with his wife’s permission, Tripp dropped an AirTag into his wife’s bag to better understand the potential for nefarious use of AirTags by attackers. Concerningly, he found that he was able to track her movement using the app on his phone—not constantly, but about every five minutes, and if a criminal is trying to stalk someone, knowing their location every five minutes is more than enough.

Fortunately, Apple has created certain safety features to help prevent the malicious use of AirTags. For instance, if someone has been near the same AirTag for several hours (such as Tripp’s wife while there was an AirTag in her bag), they’ll get a pop-up notification on their phone after a random period of time between eight and twenty-four hours warning them that “Your current location can be seen by the owner of this AirTag.” Also, an AirTag will start making a particular sound if it has been away from its owner for eight to twenty-four hours. (It will emit a different sound if the owner of the AirTag is nearby and actively trying to find their lost item using their app.) Finally, each AirTag broadcasts a certain Bluetooth signal, a “public key,” associated with the AirTag’s “private key.” To help thwart potential hackers, that public key changes every eight to twenty-four hours. (Are you wondering yet what’s special about the eight-to-twenty-four hour time period? Tripp says it’s meant to be “frequently enough that Apple can give some privacy to the owner of that AirTag” but “infrequently enough that they can establish a pattern of malicious activity.”)
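
For readers curious how a rotating identifier can protect an owner’s privacy while still letting that owner find their tag, here is a minimal, hypothetical sketch of the general idea. It is emphatically not Apple’s actual protocol (Apple’s system rotates genuine public keys rather than the hash-based identifiers shown here); the derivation scheme, eight-hour window, and names are invented for illustration.

```python
# Hypothetical sketch: a tag and its owner share a secret and can both derive
# the same sequence of rotating broadcast identifiers, while an observer who
# lacks the secret sees only unrelated-looking values that change each period.
import hashlib
import hmac
import time

ROTATION_SECONDS = 8 * 3600  # assumed rotation window (8 hours)

def broadcast_id(shared_secret: bytes, timestamp: float) -> str:
    """Identifier broadcast during the rotation period containing timestamp."""
    period = int(timestamp // ROTATION_SECONDS)
    digest = hmac.new(shared_secret, str(period).encode(), hashlib.sha256)
    return digest.hexdigest()[:16]  # truncated for readability

secret = b"secret shared between tag and owner account"
now = time.time()

print("current period:", broadcast_id(secret, now))
print("next period:   ", broadcast_id(secret, now + ROTATION_SECONDS))
# The owner recomputes these values to recognize location reports for the tag;
# rotating them infrequently enough, as Tripp notes, is what lets a victim's
# phone spot the same tracker following them over several hours.
```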

But despite these safety features, a highly motivated criminal could get around them. Tripp and his team built a “DIY Stealth AirTag” in an attempt to anticipate what measures criminals might take to deactivate or counteract Apple’s built-in security features. (Except when he’s presenting to other IT professionals, Tripp makes a point of not revealing the exact process his team used to make their Stealth AirTag. He wants to inform the public about the potential dangers of tracking technology while avoiding giving would-be criminals any ideas.) Tripp’s wife again volunteered to be tracked, this time with a DIY Stealth AirTag that Tripp placed in her car. He found that the modified AirTag effectively and silently tracked his wife’s car. Unlike the original AirTag, their stealthy version could create a map of everywhere his wife had driven, complete with red markers showing the date, time, and coordinates of each location. An AirTag that has been modified by a skilled hacker could let attackers see “not just where a potential victim is going but when they go there and how often.”

“The AirTag cat is out of the bag, so to speak,” Tripp says. He believes Apple should update their AirTag design to make the safety features harder to circumvent. Nonetheless, “it is far more likely that someone will experience abuse of a retail AirTag” than one modified by a hacker to be stealthier. So how can you protect yourself? Tripp has several suggestions.

  1. Know the AirTag beep indicating that an AirTag without its owner is nearby, potentially in your belongings.
  2. If you have an iPhone, watch for AirTag alerts. If you receive a notification warning you about a nearby AirTag, don’t ignore it.
  3. If you have an Android, Tripp recommends installing the “Tracker Detect” app from Apple because unlike iPhone users, Android users don’t get automatic pop-up notifications if an AirTag has been near them for several hours. The “Tracker Detect” Android app isn’t a perfect solution—you still won’t get automatic notifications; you’ll have to manually open the app to check for nearby trackers. But Tripp still considers it worthwhile.
  4. For iPhone users, make sure you have tracking notifications configured in the “Find My” app. You can go into the app and click “Me,” then “Customize Tracking Notifications.” Make sure the app has permission to send you notifications.
  5. Know how to identify an AirTag if you find one. If you find an AirTag that isn’t yours, and you have an iPhone, go into the “Find My” app, click “Items,” and then swipe up until you see the “Identify Found Item” option. That tool lets you scan the AirTag by holding it near your phone. It will then show the AirTag’s serial number and the last four digits of the owner’s phone number, which can be useful for the police. “If I found one,” Tripp says, “I think it’s worth making a police report.”

It’s worth noting that owning an AirTag does not put you at higher risk of stalking or other malicious behavior. The concern, whether or not you personally use AirTags, is that attackers can buy AirTags themselves and use them maliciously. Choosing to use AirTags to keep track of important items, meanwhile, won’t hurt you and may be worth considering, especially if you travel often or are prone to misplacing things. Not all news about AirTags is bad. They’ve helped people recover lost items, from luggage and wallets to photography gear and an electric scooter.

“I actually think this technology is extremely useful,” Tripp says. It’s the potential for abuse by attackers that’s the problem.

Post by Sophie Cox, Class of 2025

