Following the people and events that make up the research community at Duke

Students exploring the Innovation Co-Lab

Category: Computers/Technology

How Do Animals – Alone or in Groups – Get Where They’re Going?

Note: Each year, we partner with Dr. Amy Sheck’s students at the North Carolina School of Science and Math to profile some unsung heroes of the Duke research community. This is the fourth of eight posts.

In the intricate world of biology, where the mysteries of animal behavior unfold, Dr. Jesse Granger emerges as a passionate and curious scientist with a Ph.D. in biology and a penchant for unraveling the secrets of how animals navigate their surroundings.

Her journey began in high school when she posed a question to her biology teacher about the effect of eye color on night vision. Unable to find an answer, they embarked together on a series of experiments, igniting a passion that would shape Granger’s future in science.

Jesse Granger in her lab at Duke

Granger’s educational journey was marked by an honors thesis at the College of William & Mary that explored whether diatoms, single-celled algae known for their efficiency in capturing light, could be used to enhance solar panel efficiency. This early exploration of light-harvesting structures paved the way for a deeper curiosity about electricity and magnetism, leading to her current research on how animals perceive and use the electromagnetic spectrum.

Currently, Granger is involved in projects that explore the dynamics of animal group navigation. She is investigating how animals traveling in groups find food through collective movement and decision-making.

Among her many research endeavors, one project holds a special place in Granger’s heart. The study involved creating a computational model to explore the dynamics of group travel among animals. She found that agents, computational entities that mimic the behavior of individual animals, are far better at getting where they are going as part of a group than agents traveling alone.

Granger’s daily routine in the Sönke Johnsen Lab revolves around computational work. While it may not seem like a riveting adventure to an outsider, to her, the glow of computer screens harbors the key to unlocking the secrets of animal behavior. Coding becomes her toolkit, enabling her to analyze data, develop models, and embark on simulations that mimic the complexities of the natural world.

Granger’s expertise in coding extends to using R for data wrangling and NetLogo, an agent-based modeling program, for simulations. She describes the simulation process as akin to creating a miniature world where coded animals follow specific rules, giving rise to emergent properties and valuable insights into their behavior. This skill set seamlessly intertwined with her favorite project, where the exploration of group dynamics and navigation unfolded within the intricate landscapes of her simulated miniature world.
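For readers curious what an agent-based navigation model looks like in practice, here is a minimal sketch in Python rather than NetLogo, and emphatically not Granger’s actual model: each simulated agent gets a noisy compass reading toward a goal, and grouped agents average their headings before stepping, the “many wrongs” intuition behind why groups outperform loners.

```python
import math
import random

# Minimal sketch (not Granger's NetLogo model): agents head toward a goal,
# but each one's compass reading is noisy. Grouped agents average their
# neighbors' headings each step, which cancels out much of the individual
# error -- the "many wrongs" idea behind collective navigation.

GOAL_BEARING = math.pi / 2   # true direction to the goal
NOISE = 0.8                  # std. dev. of each agent's heading error (radians)
STEPS = 200

def mean_angle(angles):
    """Circular mean of a list of headings."""
    x = sum(math.cos(a) for a in angles)
    y = sum(math.sin(a) for a in angles)
    return math.atan2(y, x)

def travel(n_agents, grouped):
    """Return how far (on average) agents progress toward the goal."""
    progress = [0.0] * n_agents
    for _ in range(STEPS):
        noisy = [random.gauss(GOAL_BEARING, NOISE) for _ in range(n_agents)]
        if grouped:
            consensus = mean_angle(noisy)        # everyone follows the group average
            headings = [consensus] * n_agents
        else:
            headings = noisy                     # everyone follows their own guess
        for i, h in enumerate(headings):
            progress[i] += math.cos(h - GOAL_BEARING)  # step component toward goal
    return sum(progress) / n_agents

random.seed(1)
print("solo agents:   ", round(travel(50, grouped=False), 1))
print("grouped agents:", round(travel(50, grouped=True), 1))
```

Running it, the grouped agents end up markedly closer to the goal than the solo agents, echoing the result described above.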

In the tapestry of scientific exploration, Jesse Granger emerges as a weaver of knowledge, blending biology, physics, and computation to unravel the mysteries of animal navigation. Her journey, marked by curiosity and innovation, not only enriches our understanding of the natural world but also inspires the next generation of  scientists to embark on their unique scientific odysseys.      

Guest Post by Mansi Malhotra, North Carolina School of Science and Math, Class of 2025.

Putting Stronger Guardrails Around AI

AI regulation is ramping up worldwide. Duke AI law and policy expert Lee Tiedrich discusses where we’ve been and where we’re going.

DURHAM, N.C. — It’s been a busy season for AI policy.

The rise of ChatGPT unleashed a frenzy of headlines around the promise and perils of artificial intelligence, and raised concerns about how AI could impact society without more rules in place.

Consequently, government intervention entered a new phase in recent weeks as well. On Oct. 30, the White House issued a sweeping executive order regulating artificial intelligence.

The order aims to establish new standards for AI safety and security, protect privacy and equity, stand up for workers and consumers, and promote innovation and competition. It’s the U.S. government’s strongest move yet to contain the risks of AI while maximizing the benefits.

“It’s a very bold, ambitious executive order,” said Duke executive-in-residence Lee Tiedrich, J.D., who is an expert in AI law and policy.

Tiedrich has been meeting with students to unpack these and other developments.

“The technology has advanced so much faster than the law,” Tiedrich told a packed room in Gross Hall at a Nov. 15 event hosted by Duke Science & Society.

“I don’t think it’s quite caught up, but in the last few weeks we’ve taken some major leaps and bounds forward.”

Countries around the world have been racing to establish their own guidelines, she explained.

On the same day the executive order came out, leaders from the Group of Seven (G7) — which includes Canada, France, Germany, Italy, Japan, the United Kingdom and the United States — announced that they had reached agreement on a set of guiding principles on AI and a voluntary code of conduct for companies.

Both actions came just days before the first ever global summit on the risks associated with AI, held at Bletchley Park in the U.K., during which 28 countries including the U.S. and China pledged to cooperate on AI safety.

“It wasn’t a coincidence that all this happened at the same time,” Tiedrich said. “I’ve been practicing law in this area for over 30 years, and I have never seen things come out so fast and furiously.”

The stakes for people’s lives are high. AI algorithms do more than just determine what ads and movie recommendations we see. They help diagnose cancer, approve home loans, and recommend jail sentences. They filter job candidates and help determine who gets organ transplants.

Which is partly why we’re now seeing a shift in the U.S. from what has been a more hands-off approach to “Big Tech,” Tiedrich said.

Tiedrich presented Nov. 15 at an event hosted by Duke Science & Society.

In the 1990s when the internet went public, and again when social media started in the early 2000s, “many governments — the U.S. included — took a light touch to regulation,” Tiedrich said.

But this moment is different, she added.

“Now, governments around the world are looking at the potential risks with AI and saying, ‘We don’t want to do that again. We are going to have a seat at the table in developing the standards.’”

Power of the Purse

Biden’s AI executive order differs from laws enacted by Congress, Tiedrich acknowledged in a Nov. 3 meeting with students in Pratt’s Master of Engineering in AI program.

Congress continues to consider various AI legislative proposals, such as the recently introduced bipartisan Artificial Intelligence Research, Innovation and Accountability Act, “which creates a little more hope for Congress,” Tiedrich said.

What gives the administration’s executive order more force is that “the government is one of the big purchasers of technology,” Tiedrich said.

“They exercise the power of the purse, because any company that is contracting with the government is going to have to comply with those standards.”

“It will have a trickle-down effect throughout the supply chain,” Tiedrich said.

The other thing to keep in mind is “technology doesn’t stop at borders,” she added.

“Most tech companies aren’t limiting their market to one or two particular jurisdictions.”

“So even if the U.S. were to have a complete change of heart in 2024” and the next administration were to reverse the order, “a lot of this is getting traction internationally,” she said.

“If you’re a U.S. company, but you are providing services to people who live in Europe, you’re still subject to those laws and regulations.”

From Principles to Practice

Tiedrich said a lot of what’s happening today in terms of AI regulation can be traced back to a set of guidelines issued in 2019 by the Organization for Economic Cooperation and Development, where she serves as an AI expert.

These include commitments to transparency, inclusive growth, fairness, explainability and accountability.

For example, “we don’t want AI discriminating against people,” Tiedrich said. “And if somebody’s dealing with a bot, they ought to know that. Or if AI is involved in making a decision that adversely affects somebody, say if I’m denied a loan, I need to understand why and have an opportunity to appeal.”

“The OECD AI principles really are the North Star for many countries in terms of how they develop law,” Tiedrich said.

“The next step is figuring out how to get from principles to practice.”

“The executive order was a big step forward in terms of U.S. policy,” Tiedrich said. “But it’s really just the beginning. There’s a lot of work to be done.”

By Robin Smith

Leveraging Google’s Technology to Improve Mental Health

Last Tuesday, October 10 was World Mental Health Day. To mark the holiday, the Duke Institute for Brain Sciences, in partnership with other student wellness organizations, welcomed Dr. Megan Jones Bell, PsyD, the clinical director of consumer and mental health at Google, to discuss mental health. Bell was formerly chief strategy and science officer at Headspace and helped guide Headspace through its transformation from a meditation app into a comprehensive digital mental health platform, Headspace Health. Bell also founded one of the first digital mental health start-ups, Lantern, where she pioneered blended mental health interventions leveraging software and coaching. In her conversation with Dr. Murali Doraiswamy, Duke professor of psychiatry and behavioral sciences, and Thomas Szigethy, Associate Dean of Students and Director of Duke’s Student Wellness Center, Bell revealed the actions Google is taking to improve the health of the billions of people who use their platform. 

She began by defining mental health, paraphrasing the World Health Organization’s definition. She said, “Mental health, to me, is a state of wellbeing in which the individual realizes his or her or their own abilities, can cope with the normal stresses of life, work productively and fruitfully, and can contribute to their own community.” Rather than taking a medicalized approach to mental health, she argued, mental health should be recognized as something that we all have. Critically, she said that mental health is not just mental  disorders; the first step to improving mental health is recognition and upstream intervention.

Underlining the critical role Google plays in global mental health, Bell cited multiple statistics: three out of four people turn to the internet first for health information; Google Search handles 100 million health-related searches every day; and YouTube boasts 25 billion views of mental health content. Given those billions of users, Bell stressed Google’s huge responsibility to provide people with accurate, authoritative, and empathetic information. The company has multiple mental health goals specific to different communities, and Bell described its aims for three principal audiences: consumers, caregivers, and communities.

Google’s consumer-facing focus is providing access to high-quality information and tools that help users manage their health. With caregivers, Google strives to build strong partnerships to create solutions that transform care delivery. In terms of community health, the company works with public health organizations worldwide, focusing on social determinants of health and aiming to open up data and insights to the public health community.

Szigethy followed by launching a discussion of Google’s efforts to protect adolescents. He referenced the growing and urgent mental health crisis amongst adolescents; what is Google doing to protect them? 

Bell mentioned multiple projects across different platforms designed to give youth safer online experiences. Key to these projects is the desire to promote their mental health by default. On Google Search, this takes the form of the SafeSearch feature, which is on by default and filters out explicit or inappropriate results. On YouTube, default policies include various prevention measures, one of which automatically removes content that is considered “imitable.” Bell used the example of disordered eating content to explain the policy: in accordance with their prevention approach, YouTube removes dangerous eating-related content containing anything that the viewer can copy. YouTube also has age-restricted videos, unavailable to users under 18, as well as certain product features that can be blocked. Google also created an eating disorder hotline with experts online 24/7.

Jokingly, Bell assured the Zoom audience that Google wouldn’t be creating a therapist chatbot anytime soon; she asserted that digital tools are not “either or.” When the conversation veered toward generative AI, Bell acknowledged that AI has enormous potential for helping billions of people, but maintained that it needs to be developed in a responsible way. At Google, the greatest service AI provides is scalability. Google.org, Bell said, recently worked with The Trevor Project and ReflexAI on a crisis hotline for veterans called HomeTeam. Google used AI that simulated crises to help scale up training for volunteers. Bell said, “The human is still on the other side of the phone, and AI helped achieve that.”

Next, Bell tackled the question of health information and misinformation, what she called a significant area of focus for Google. Before diving in, however, Bell clarified, “It’s not up to Google to decide what is accurate and what is not accurate.” Rather, she said that anchoring to trusted organizations is critical to embedding mental health into the culture of a community. When it comes to health information and misinformation, Bell encapsulated Google’s philosophy in this phrase: “define, operationalize, and elevate high quality information.” To combat misinformation on its platforms, Google asked the National Academy of Medicine to help define what accurate medical sources are. The Academy then put together a framework of authoritative health information, which the World Health Organization later took up internationally. YouTube then launched its “health sources” feature, so that videos from sources in the framework are the first thing you see; in effect, the highest quality information is raised to the top of the page when you make a search. Videos in this framework also carry a visible badge on the watch panel with a phrase like “from a healthcare professional” or “from an organization with a healthcare professional.” Bell suggested that this also helps people to remember where their information is coming from, acting as a guardrail in itself. Additionally, Google continues to fight medical misinformation with an updated medical misinformation policy, which enables it to remove content that contradicts medical authorities or medical consensus.

Near the end of the conversation, Szigethy asked Bell if she would recommend any behaviors for embracing wellbeing. A prevention researcher by background, Bell stressed the importance of early and regular action. Our biggest leverage point for changing mental health, she asserted, is upstream intervention and embracing routines that foster our mental health. She breaks these down into five dimensions of wellbeing: mindfulness, sleep, movement and exercise, nutrition, and social connection. Her advice is to ask the question: what daily/weekly routines do I have that foster each of these? Make a list, she suggests, and try to incorporate a daily routine that addresses each of the five dimensions. 

Before concluding, Bell advocated that the best thing that we can do is to approach mental health issues with humility and listen to a community first. She shared that, at Headspace, her team worked with the mayor’s office and community organizations in Hartford, Connecticut to co-define their mental health goals and map the strengths and assets of the community. Then, they could start to think about how to contextualize Headspace in that community. Bell graciously entered the Duke community with the same humility, and her conversation was a wonderful commemoration of World Mental Health Day. 

By Isa Helton, Class of 2026

My Face Belongs to The Hive (and Yours Does Too)

Imagine having an app that could identify almost anyone using only a photograph of their face. For example, you could take a photograph of a stranger in a dimly lit restaurant and know within seconds who they are.

This technology exists, and Kashmir Hill has reported on several companies that offer these services.

An investigative journalist with the New York Times, Hill visited Duke Law Sept. 27 to talk about her new book, Your Face Belongs To Us.

The book is about a company that developed powerful facial recognition technology based on images harnessed from our social media profiles. To learn more about Clearview AI, the unlikely duo who were behind it, and how they sold it to law enforcement, I highly recommend reading this book.

Hill demonstrated for me a facial recognition app that provides subscribers with up to 25 face searches a day. She offered to let me see how well it worked.

Screen shot of the search app with Hill’s quick photo of me.

She snapped a quick photo of my face in dim lighting. Within seconds (3.07 to be exact), several photos of my face appeared on her phone.

The first result (top left) is unsurprising. It’s the headshot I use for the articles I write on the Duke Research Blog. The second result (top right) is a photo of me at my alma mater in 2017, where I presented at a research conference. The school published an article about the event, and I remember the photographer coming around to take photos. I was able to easily figure out exactly where on the internet both results had been pulled from.

The third result (second row, left) unsettled me. I had never seen this photo before.

A photo of me sitting between friends. Their faces have been blurred out.

After a quick search of the watermark on the photo (which has been blurred for safety), I discovered that the photograph was from an event I attended several years ago. Apparently, the venue had used the image for marketing on their website. Using these facial recognition results, I was able to easily find out the exact location of the event, its date, and who I had gone with.

What is Facial Recognition Technology?

Researchers have been trying for decades to produce technology that can accurately identify human faces. The advent of neural-network-based artificial intelligence has made it possible for computer algorithms to do this with increasing accuracy and speed. However, this technology requires large sets of data to work – in this case, hundreds of thousands of examples of human faces.

Just think about how many photos of you exist online. There are the photos that you have taken and shared or that your friends and family have taken of you. Then there are photos that you’re unaware that you’re in – perhaps you walked by as someone snapped a picture and accidentally ended up in the frame. I don’t consider myself a heavy user of social media, but I am sure there are thousands of pictures of my face out there. I’ve uploaded and classified hundreds of photos of myself across platforms like Facebook, Instagram, LinkedIn, and even Venmo.

The developers behind Clearview AI recognized the potential in all these publicly accessible photographs and compiled them to create a massive training dataset for their facial recognition AI. They did this by scraping the social media profiles of hundreds of thousands of people. In fact, they got something like 2.1 million images of faces from Venmo and Tinder (a dating app) alone.
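To make the mechanics a bit more concrete, here is a heavily simplified sketch of how a face-search service works in principle. It is not Clearview’s code, and the “embeddings” below are random stand-ins, but the structure, map each photo to a vector with a trained network and then rank stored photos by similarity to a query vector, is the general recipe.

```python
import numpy as np

# Minimal sketch of how a face-search service works in principle (not
# Clearview's actual system): a trained network maps every photo to an
# embedding vector, and a query face is identified by finding the most
# similar stored embeddings. The vectors here are random stand-ins.

rng = np.random.default_rng(0)
EMBED_DIM = 128

# Pretend database: embeddings of scraped photos, keyed by where they came from.
database = {
    "blog_headshot.jpg":   rng.normal(size=EMBED_DIM),
    "conference_2017.jpg": rng.normal(size=EMBED_DIM),
    "venue_marketing.jpg": rng.normal(size=EMBED_DIM),
}

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(query_embedding, top_k=3):
    """Rank stored photos by similarity to the query face."""
    scores = {name: cosine_similarity(query_embedding, vec)
              for name, vec in database.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

# A query photo whose embedding happens to sit close to the stored headshot.
query = database["blog_headshot.jpg"] + 0.1 * rng.normal(size=EMBED_DIM)
for name, score in search(query):
    print(f"{score:+.3f}  {name}")
```

The scale is what makes the real thing powerful: swap three stand-in vectors for billions of scraped photos and the same nearest-neighbor lookup becomes the search Hill demonstrated.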

Why does this matter?

Clearly, there are major privacy concerns for this kind of technology. Clearview AI was marketed as being only available to law enforcement. In her book, Hill gives several examples of why this is problematic. People have been wrongfully accused, arrested, detained, and even jailed for the crime of looking (to this technology) like someone else.

We also know that AI has problems with bias. Facial recognition technology was first developed by mostly white, mostly male researchers, using photographs of mostly white, mostly male faces. The result of this has had a lasting effect. Marginalized communities targeted by policing are at increased risk, leading many to call for limits on the use of facial recognition by police.

It’s not just government agencies who have access to facial recognition. Other companies have developed off-the-shelf products that anyone can buy, like the app Hill demonstrated to me. This technology is now available to anyone willing to pay for a subscription. My own facial recognition results show how easy it is to find out a lot about a person (like their location, acquaintances, and more) using these apps. It’s easy to imagine how this could be dangerous.

There remain reasons to be optimistic about the future of privacy, however. Hill closed her talk by reminding everyone that with every technological breakthrough, there is opportunity for ethical advancement reflected by public policy. With facial recognition, policy makers have previously relied on private companies to make socially responsible decisions. As we face the results of a few radical actors using the technology maliciously, we can (and should) respond by developing legal restraints that safeguard our privacy.

On this front, Europe is leading by example. The actions of Clearview AI are likely already illegal in Europe, and European lawmakers are expanding privacy rights through the European Commission’s proposed Artificial Intelligence (AI) regulation. These rules include requirements for technology developers to certify the quality of their processes, rather than algorithm performance, which would mitigate some of these harms. The regulation aims to take a technology-neutral approach and stratifies facial recognition technology by its potential for risk to people’s safety, livelihoods, and rights.

Post by Victoria Wilson, MA Bioethics and Science Policy, 2023

New Blogger Isa Helton: Asking AND Listening

When I studied abroad in Paris, France, this summer, I became very familiar with the American tendencies that French people collectively despise. As I sat in a windowless back room of the school I would be studying at in the sixth arrondissement of Paris, the program director carefully warned us of the biggest faux-pas that would make our host families regret welcoming a foreign student into their home and the habitudes that would provoke irritated second glances on the street.

La Seine at dusk with Tour Eiffel.

One: American people are loud. Don’t be loud. We are loud when we talk on the phone, loud putting on our shoes, loud stomping around the Haussmanian apartment built in the 1800s with creaky parquet flooring.

Two: Americans smile too much. Don’t smile at people on the street. No need for a big, toothy grin at every passerby and at every unsuspecting dog-walker savoring the few tourist-free morning hours.

Three: Why do Americans love to ask questions without any intention of sticking around to hear the response? When French people ask you how you’re doing – Comment ça va?– how you slept – Vous-avez bien dormi? – how the meal was – Ça vous a plu? – they stand there and wait for an answer after asking the question. So when Americans exchange a jolly “How are you today!” in passing, it drives French people crazy. Why ask a question if you don’t even want an answer?

This welcome post feels a little bit like that American “How are you today!” Not to say that you, reader, are not a patient, intrigued Frenchman or woman, who is genuinely interested in a response –  I am well-assured that the readers of Duke’s Research Blog are just the opposite. That is to say that the question of “who are you?” is quite complicated to answer in a single, coherent blog post. I will proudly admit that I am still in the process of figuring out who I am. And isn’t that what I’m supposed to be doing in college, anyway?

I can satisfyingly answer a few questions about me, though, starting with where I am from. I’m lucky enough to call Trabuco Canyon, California my home – a medium-sized city about fifteen minutes from the beach, and smack-dab in the middle of San Diego and Los Angeles. Demographically, it’s fairly uninteresting: 68% White, 19% Hispanic, and 8% Asian. I’ve never moved, so I suppose this would imply that most of my life has been fairly unexposed to cultural diversity. However, I think one of the things that has shaped me the most has been experiencing different cultures in my travels growing up.

My dad is a classically-trained archaeologist turned environmental consultant, and I grew up observing his constant anthropological analysis of people and situations in the countries we traveled to. I learned from him the richness of a compassionate, empathetic, multi-faceted life that comes from traveling, talking to people, and being curious. I am impassioned by discovering new cultures and uncovering new schools of thought through breaking down linguistic barriers, which is one of the reasons I am planning on majoring in French Studies.

Perhaps from my Korean mother I learned perseverance, mental strength, and toughness. I also gained practicality, which explains my second major, Computer Science. Do I go crazy over coding a program that creates a simulation of the universe (my latest assignment in one of my CS classes)? Not particularly. But, you have to admit, the degree is a pretty good security blanket.

Why blog? Writing is my therapy and has always been one of my passions. Paired with an unquenchable curiosity and a thirst to converse with people different from me, writing for the Duke Research Blog gives me what my boss Karl Bates – Executive Director, Research Communications – calls “a license to hunt.”

Exclusive, top-researcher-only, super-secret conference on campus about embryonics? I’ll be making a bee-line to the speakers with my notepad in hand, thank you. Completely-sold-out talk by the hottest genome researcher on the academic grapevine? You can catch me in the front row. In short, blogging on Duke Research combines multiple passions of mine and gives me the chance to flex my writing muscles.

Thus, I am also cognizant of the privilege and the responsibility that this license to hunt endows me with. It must be said that elite universities are famously – and in reality – extremely gated-off from the rest of society. While access to Duke’s physical space may still be exclusive, the knowledge within is for anyone’s taking.

In this blog, I hope to dismantle the barrier between you and what can sometimes seem like intimidating, high-level research that is being undertaken on Duke’s campus. I hope to make my blogs a mini bi-monthly revelation that can enrich your intellect and widen your perspective. And don’t worry – when it comes to posing questions to researchers, I plan to stick around to hear the response.

Read my summer blogs from my study abroad in Paris HERE!

Post by Isabella Helton, Class of 2026

Shifting from Social Comparison to “Social Savoring” Seems to Help

Image by geralt, via Pixabay.

The literature is clear: there is a dark side to engaging with social media, with links to depressive symptoms, a sense of social isolation, and dampened self-esteem increasingly recognized in the global discourse as alarming potential harms.

Underlying the pitfalls of social media usage is social comparison—the process of evaluating oneself relative to another person—to the extent that those who engage in more social comparison are at a significantly higher risk of negative health outcomes linked to their social media consumption.

Today, 72 percent of Americans use some type of social media, with most engaging daily with at least one platform.(1) Particularly for adolescents and young adults, interactions on social media are an integral part of building and maintaining social networks.(2-5) While the potential risks to psychosocial well-being posed by chronic engagement with these platforms have increasingly come to light within the past several years, mitigating these adverse downstream effects poses a novel and ongoing challenge to researchers and healthcare professionals alike.

The intervention aimed to supplant college students’ habitual social comparison … with social savoring: experiencing joyful emotions about someone else’s experiences.

A team of researchers led by Nancy Zucker, PhD, professor in Psychiatry & Behavioral Sciences and director of graduate studies in psychology and neuroscience at Duke University, recently investigated this issue and found promising results for a brief online intervention targeted at altering young adults’ manner of engagement with social media. The intervention aimed to supplant college students’ habitual social comparison when active on social media with social savoring: experiencing joyful emotions about someone else’s experiences.

Image from Andrade et al.

Zucker’s team followed a final cohort of 55 college students (78 percent female, 42 percent White, with an average age of 19.29) over a two-week period, first taking baseline measures of their mental well-being, connectedness, and social media usage before the students returned to daily social media usage. On day 8, a randomized group of students received the experimental intervention: an instructional video on the skill of social savoring. These students were then told to implement this new skill when active on social media throughout days 8 to 14, before being evaluated with the rest of the cohort at the two-week mark.

For those taught how and why to socially savor their daily social media intake, shifting focus from social comparison to social savoring measurably increased their performance self-esteem—their positive evaluation of their own competence—as compared with the control group, who received no instructional video. Consciously practicing social savoring even seemed to enable students to toggle their self-esteem levels up or down: those in the intervention group reported significantly higher levels of self-esteem on days during which they engaged in more social savoring.

Encouragingly, the students who received the educational intervention on social media engagement also opted to practice more social savoring over time, suggesting they found this mode of digesting their daily social media feeds to be enduringly preferable to that of social comparison. The team’s initial findings suggest a promising future for targeted educational interventions as an effective way to improve facets of young adults’ mental health without changing the quantity or quality of their media consumption.

Of course, the radical alternative—forgoing social media platforms altogether in the name of improved well-being—looms in the distance as an appealing yet often unrealistic option for many; therefore, thoughtfully designed, evidence-based interventions such as this research team’s program seem to offer a more realistic path forward.

Read the full journal article.

References

1. Auxier B, Anderson M. Social media use in 2021: A majority of Americans say they use YouTube and Facebook, while use of Instagram, Snapchat and TikTok is especially common among adults under 30. 2021.
2. McKenna KYA, Green AS, Gleason MEJ. Relationship formation on the Internet: What’s the big attraction? J Soc Issues. 2002;58(1):9-31.
3. Blais JJ, Craig WM, Pepler D, Connolly J. Adolescents online: The importance of Internet activity choices to salient relationships. J Youth Adolesc. 2008;37(5):522-536.
4. Valkenburg PM, Peter J. Preadolescents’ and adolescents’ online communication and their closeness to friends. Dev Psychol. 2007;43(2):267-277.
5. Michikyan M, Subrahmanyam K. Social networking sites: Implications for youth. In: Encyclopedia of Cyber Behavior, Vols. I–III. Information Science Reference/IGI Global; 2012:132-147.

Guest Post by Eleanor Robb, Class of 2023

When Art and Science Meet as Equals

Artists and scientists in today’s world often exist in their own disciplinary silos. But the Laboratory Art in Practice Bass Connections team hopes to rewrite this narrative, by engaging Duke students from a range of disciplines in a 2-semester series of courses designed to join “the artist studio, the humanities seminar room, and the science lab bench.” Their work culminated in “re:process” – an exhibition of student artwork on Friday, April 28, in the lobby of the French Family Science Center. Rather than science simply engaging artistic practice for the sake of science, or vice versa, the purpose of these projects was to offer an alternate reality where “art and science meet as equals.”

The re:process exhibition

Liuren Yin, a junior double-majoring in Computer Science and Visual and Media Studies, developed an art project to focus on the experience of prosopagnosia, or face blindness. Individuals with this condition are unable to tell two distinct faces apart, including their own, often relying on body language, clothing, and the sound of a person’s voice to determine the identity of a person. Using her experience in computer science, she developed an algorithm that inputs distinct faces and outputs the way that these faces are perceived by someone who has prosopagnosia.

Yin’s project exploring prosopagnosia

Next to the computer and screen flashing between indistinguishable faces, she’s propped up a mirror for passers-by to look at themselves and contemplate the questions that inspired her to create this piece. Yin says that as she learned about prosopagnosia, where every face looks the same, she found herself wondering, “how am I different from a person that looks like me?” Interrogating the link between our physical appearance and our identity is at the root of Yin’s piece. Especially in an era where much of our identity exists online and appearance can be curated any way one wants, Yin considers this artistic piece especially timely. She writes in her program note that “my exposure to technologies such as artificial intelligence, generative algorithms, and augmented reality makes me think about the combination and conflict between human identity and these futuristic concepts.”
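Yin’s exact algorithm isn’t described in the exhibition notes, but one simple way to approximate the experience computationally, offered here purely as a speculative sketch with stand-in data, is to blend every face toward an average face so the distinguishing features wash out.

```python
import numpy as np

# Speculative sketch only -- Yin's actual algorithm isn't detailed in the
# exhibition. One way to mimic prosopagnosia computationally is to pull every
# face toward the average face, washing out the features that normally
# distinguish people.

rng = np.random.default_rng(42)

# Stand-in "face images": 64x64 grayscale arrays instead of real photos.
faces = [rng.random((64, 64)) for _ in range(5)]
mean_face = np.mean(faces, axis=0)

def as_perceived(face, severity=0.9):
    """Blend an individual face toward the mean face; severity=1 erases identity."""
    return (1 - severity) * face + severity * mean_face

perceived = [as_perceived(f) for f in faces]

# The originals differ a lot; the "perceived" versions are nearly identical.
print("mean difference, originals:", np.mean(np.abs(faces[0] - faces[1])).round(3))
print("mean difference, perceived:", np.mean(np.abs(perceived[0] - perceived[1])).round(3))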

Eliza Henne, a junior majoring in Art History with a concentration in Museum Theory and Practice, focused more on the biological world in her project, which used a lavender plant in different forms to ask questions like “what is truthful, and what do we consider real?” By displaying a live plant, an illustration of a plant, and pressings from a plant, she invites viewers to consider how every rendition of a commonly used model organism in scientific experiments omits some information about the reality of the organism.

Junior Eliza Henne

For example, lavender pressings have materiality, but there’s no scent or dimension to the plant. A detailed illustration can capture even the way light illuminates the thin veins of a leaf, but it is merely a depiction of a living being. And the plant itself, the one we conventionally call real, can only be seen in that level of illustrative detail under a microscope or in a diagram.

In walking through the lobby of FFSC, where these projects and more are displayed, you’re surrounded by conventionally scientific materials, like circuit boards, wires, and petri dishes, which, in an unusual turn of events, are being used for seemingly unscientific endeavors. These endeavors – illustrating the range of human emotion, showcasing behavioral patterns like overconsumption, or demonstrating the imperfection inherent to life – might at first glance feel more appropriate in an art museum or on a performing arts stage.

But the students and faculty involved in this exhibition see that as the point. Maybe it isn’t so unnatural to build a bridge between the arts and the sciences – maybe, they are simply two sides of the same coin.

Post by Meghna Datta, Class of 2023

The Brain Science of Tiny Birds With Amazing Memories

A black-capped chickadee. Dmitriy Aronov, Ph.D., brought wild black-capped chickadees into the lab to study their memories.
“Black-Capped Chickadee” by USFWS Mountain Prairie is licensed under CC BY 2.0.

Black-capped chickadees have an incredible ability to remember where they’ve cached food in their environments. They are also small, fast, and able to fly.

So how exactly can a neuroscientist interested in their memories conduct studies on their brains? Dmitriy Aronov, Ph.D., a neuroscientist at the Zuckerman Mind Brain Behavior Institute at Columbia University, visited Duke recently to talk about chickadee memory and the practicalities of studying wild birds in a lab.

Black-capped chickadees, like many other bird species, often store food in hiding places like tree crevices. This behavior is called caching, and the ability to hide food in dozens of places and then relocate it later represents an impressive feat of memory. “The bird doesn’t get to experience this event happening over and over again,” Aronov says. It must instantly form a memory while caching the food, a process that relies on episodic memory. Episodic memory involves recalling specific experiences from the past, and black-capped chickadees are “champions of episodic memory.”

They have to remember not just the location of cached food but also other features of each hiding place, and they often have only moments to memorize all that information before moving on. According to Aronov, individual birds are known to cache up to 5,000 food items per day! But how do they do it?

Chickadees, like humans, rely on the brain’s hippocampus to form episodic memories, and the hippocampus is considerably bigger in food-caching birds than in birds of similar size that aren’t known to cache food. Aronov and his team wanted to investigate how neural activity represents the formation and retrieval of episodic memories in black-capped chickadees.

Step one: find a creative way to study food-caching in a laboratory setting. Marissa Applegate, a graduate student in Aronov’s lab, helped design a caching arena “optimized for chickadee ergonomics,” Aronov says. The arenas included crevices covered by opaque flaps that the chickadees could open with their toes or beaks and cache food in. The chickadees didn’t need any special training to cache food in the arena, Aronov says. They naturally explore crevices and cache surplus food inside.

Once a flap closed over a piece of cached food (sunflower seeds), the bird could no longer see inside—but the floor of each crevice was transparent, and a camera aimed at the arena from below allowed scientists to see exactly where birds were caching seeds. Meanwhile, a microdrive attached to the birds’ tiny heads and connected to a cable enabled live monitoring of their brain activity, down to the scale of individual neurons.

An artistic rendering of one of the cache sites in an arena. “Arenas in my lab have between 64 and 128 of these sites,” Aronov says.
Drawing by Julia Kuhl.

Through a series of experiments, Aronov and his team discovered that “the act of caching has a profound effect on hippocampal activity,” with some neurons becoming more active during caching and others being suppressed. About 35 percent of the neurons that are active during caching are consistently either enhanced or suppressed, regardless of which site a bird is visiting. But the remaining 65% of the variance is site-specific: “every cache is represented by a unique pattern of this excess activity in the hippocampus,” a pattern that holds true even when two sites are just five centimeters apart—close enough for a bird to reach from one to another.

Chickadees could hide food in any of the sites for retrieval at a future time. The delay period between the caching phase (when chickadees could store surplus food in the cache sites) and the retrieval phase (when chickadees were placed back in the arena and allowed to retrieve food they had cached earlier) ranged from a few minutes to an hour. When a bird returned to a cache to retrieve food, the same barcode-like pattern of neural activity reappeared in its brain. That pattern “represents a particular experience in a bird’s life” that is then “reactivated” at a later time.
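A toy simulation can make the “barcode” idea concrete. The sketch below uses made-up numbers, not the lab’s recordings: each cache site adds its own site-specific pattern on top of activity shared across all caching events, and a retrieval-time pattern can be matched back to the right site by simple correlation.

```python
import numpy as np

# Toy illustration of the "barcode" idea (simulated data, not the lab's
# recordings): each cache site evokes its own pattern of excess activity
# across a population of hippocampal neurons, and the pattern measured at
# retrieval matches the pattern recorded when that same site was cached.

rng = np.random.default_rng(7)
n_neurons, n_sites = 200, 8

shared = rng.normal(size=n_neurons)                  # caching-related activity common to all sites
site_codes = rng.normal(size=(n_sites, n_neurons))   # site-specific "barcodes"

def population_activity(site, noise=0.8):
    return shared + site_codes[site] + noise * rng.normal(size=n_neurons)

caching = [population_activity(s) for s in range(n_sites)]
retrieval_of_site_3 = population_activity(3)

# Correlate the retrieval pattern against every caching pattern.
corrs = [np.corrcoef(retrieval_of_site_3, c)[0, 1] for c in caching]
print("best-matching cache site:", int(np.argmax(corrs)))   # expect 3
print("correlations:", np.round(corrs, 2))
```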

Aronov said that in addition to caching and retrieving food, birds often “check” caching sites, both before and after storing food in them. Of course, as soon as a bird opens one of the flaps, it can see whether or not there’s food inside. Therefore, measuring a bird’s brain activity after it has lifted a flap makes it impossible to tell whether any changes in brain activity when it checks a site are due to memory or just vision. So the researchers looked specifically at neural activity when the bird first touched a flap—before it had time to open it and see what was inside. That brain activity, as it turns out, starts changing hundreds of milliseconds before the bird can actually see the food, a finding that provides strong evidence for memory.

What about when the chickadees checked empty caches? Were they making a memory error, or were they intentionally checking an empty site—even knowing it was empty—for their own mysterious reasons? On a trial-by-trial basis, it’s impossible to know, but “statistically, we have to invoke memory in order to explain their behavior,” he said.

A single moment of caching, Aronov says, is enough to create a new, lasting, and site-specific pattern. The implications of that are amazing. Chickadees can store thousands of moments across thousands of locations and then retrieve those memories at will whenever they need extra food.

It’s still unclear how the retrieval process works. From Aronov’s study, we know that chickadees can reactivate site-specific brain activity patterns when they see one of their caches (even when they haven’t yet seen what’s inside). But let’s say a chickadee has stored a seed in the bark of a particular tree. Does it need to see that tree in order to remember its cache site there? Or can it be going about its business on the other side of the forest, suddenly decide that it’s hungry for a seed, and then visualize the location of its nearest cache without actually being there? Scientists aren’t sure.

Post by Sophie Cox, Class of 2025

How Research Helped One Pre-med Discover a Love for Statistics and Computer Science

If you’re a doe-eyed first-year at Duke who wants to eventually become a doctor, chances are you are currently taking part in, or will soon take part in, a pre-med rite of passage: finding a lab to research in.

Most pre-meds find themselves researching in the fields of biology, chemistry, or neuroscience, with many hoping to make research a part of their future careers as clinicians. Undergraduate student and San Diego native Eden Deng (T’23) also found herself treading a similar path in a neuroimaging lab her freshman year.

Eden Deng T’23

At the time, she was a prospective neuroscience major on the pre-med track. But as she soon realized, neuroimaging is done through fMRI, and making sense of fMRI data requires serious data analysis.

This initial research experience at Duke in the Martucci Lab, which looks at chronic pain and the role of the central nervous system, sparked a realization for Deng. “Ninety percent of my time was spent thinking about computational and statistical problems,” she explained to me. Analysis was new to her, and as she found herself struggling with it, she thought to herself, “why don’t I spend more time getting better at that academically?”

Deng at the Martucci Lab

This desire to get better at research led Deng to pursue a major in Statistics with a secondary in Computer Science, while still on the pre-med track. Many people might instantly think about how hard it must be to fit in so much challenging coursework that has virtually no overlap. And as Deng confirmed, her academic path has not been without challenges.

For one, she’s never really liked math, so she was wary of getting into computation. Additionally, considering that most Statistics and Computer Science students want to pursue jobs in the technology industry, it’s been hard for her to connect with like-minded people who are equally familiar with computers and the human body.

“I never felt like I excelled in my classes,” Deng said. “And that was never my intention.” Deng had to quickly get used to facing what she didn’t know head-on. But as she kept her head down, put in the work, and trusted that eventually she would figure things out, the merits of her unconventional academic path started to become more apparent.

Research at the intersection of data and health

Last summer, Deng landed a summer research experience at Mount Sinai, where she looked at patient-level cancer data. Utilizing her knowledge in both biology and data analytics, she worked on a computational screener that scientists and biologists could use to measure gene expression in diseased versus normal cells. This will ultimately aid efforts in narrowing down the best genes to target in drug development. Deng will be back at Mount Sinai full-time after graduation, to continue her research before applying to medical school.
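As a rough illustration of what such a screen involves, and not Deng’s actual screener, the sketch below compares expression for a handful of hypothetical genes in diseased versus normal samples and ranks them by fold change and p-value.

```python
import numpy as np
from scipy import stats

# Hedged sketch of the general idea behind a differential-expression screen
# (not Deng's actual screener): for each gene, compare expression in diseased
# vs. normal cells and rank candidate drug targets.

rng = np.random.default_rng(1)
genes = [f"GENE_{i}" for i in range(5)]                       # hypothetical gene names
normal   = rng.lognormal(mean=2.0, sigma=0.3, size=(5, 20))   # 5 genes x 20 samples
diseased = normal * rng.choice([1.0, 2.5], size=(5, 1))       # some genes up-regulated

results = []
for gene, n_expr, d_expr in zip(genes, normal, diseased):
    fold_change = d_expr.mean() / n_expr.mean()
    p_value = stats.ttest_ind(d_expr, n_expr).pvalue
    results.append((gene, fold_change, p_value))

# Rank candidates: largest fold change, smallest p-value first.
for gene, fc, p in sorted(results, key=lambda r: (-r[1], r[2])):
    print(f"{gene}: fold change {fc:4.1f}, p = {p:.2g}")
```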

Deng presenting on her research at Mount Sinai

But in her own words, Deng’s favorite research experience has been her senior thesis through Duke’s Department of Biostatistics and Bioinformatics. Last year, she reached out to Dr. Xiaofei Wang, who is part of a team conducting a randomized controlled trial to compare the merits of two different lung tumor treatments.

Generally, when faced with a lung tumor, the conservative approach is to remove the whole lobe. But that can pose challenges to the quality of life of people who are older, with more comorbidities. Recently, there has been a push to focus on removing smaller sections of lung tissue instead. Deng’s thesis looks at patient surgical data over the past 15 years, showing that patient survival rates have improved as these segmentectomies – removals of smaller sections of tissue – have become more frequent in select groups of patients.
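The workhorse tool for this kind of comparison is the Kaplan-Meier estimator, which accounts for patients who are still alive at last follow-up (censored). The sketch below is purely illustrative, with synthetic follow-up times and hypothetical group labels rather than the trial data Deng analyzed.

```python
import numpy as np

# Illustrative sketch only -- synthetic numbers, not Deng's data. Kaplan-Meier
# estimation is the standard way to compare survival between two surgical
# approaches from follow-up data that includes censored patients.

def km_survival_at(t_query, times, events):
    """Kaplan-Meier estimate of survival probability at time t_query.
    times: follow-up times; events: 1 = death observed, 0 = censored."""
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    at_risk, survival = len(times), 1.0
    for t, d in zip(times, events):
        if t > t_query:
            break
        if d:                                    # a death at time t
            survival *= (at_risk - 1) / at_risk
        at_risk -= 1                             # death or censoring removes one patient
    return survival

rng = np.random.default_rng(0)
followup_cap = 120                               # months of follow-up in this toy example

def simulate(scale):
    true_times = rng.exponential(scale, size=300)
    observed = np.minimum(true_times, followup_cap)
    events = (true_times <= followup_cap).astype(int)
    return observed, events

lobe_t, lobe_e = simulate(scale=60)              # hypothetical lobectomy group
segm_t, segm_e = simulate(scale=75)              # hypothetical segmentectomy group

print("5-year survival, group A:", round(km_survival_at(60, lobe_t, lobe_e), 2))
print("5-year survival, group B:", round(km_survival_at(60, segm_t, segm_e), 2))
```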

“I really enjoy working on it every week,” Deng says about her thesis, “which is not something I can usually say about most of the work I do!” According to Deng, a lot of research – hers included – is derived from researchers mulling over what they think would be interesting to look at in a silo, without considering what problems might be most useful for society at large. What’s valuable for Deng about her thesis work is that she’s gotten to work closely with not just statisticians but thoracic surgeons. “Originally my thesis was going to go in a different direction,” she said, but upon consulting with surgeons who directly impacted the data she was using – and would be directly impacted by her results – she changed her research question. 

The merits of an interdisciplinary academic path

Deng’s unique path makes her the perfect person to ask: is pursuing seemingly disparate interests, like being a Statistics and Computer Science double-major on the pre-med track, worth it? And judging by Deng’s insights, the answer is a resounding yes.

At Duke, she says, “I’ve been challenged by many things that I wouldn’t have expected to be able to do myself” – like dealing with the catch-up work of switching majors and pursuing independent research. But over time she’s learned that even if something seems daunting in the moment, if you apply yourself, most, if not all things, can be accomplished. And she’s grateful for the confidence that she’s acquired through pursuing her unique path.

Moreover, as Deng reflects on where she sees herself – and the field of healthcare – a few years from now, she muses that for the first time in the history of healthcare, a third-party player is joining the mix – technology.

While her initial motivation to pursue statistics and computer science was to aid her in research, “I’ve now seen how it’s beneficial for my long-term goals of going to med school and becoming a physician.” As healthcare evolves and the introduction of algorithms, AI and other technological advancements widens the gap between traditional and contemporary medicine, Deng hopes to deconstruct it all and make healthcare technology more accessible to patients and providers.

“At the end of the day, it’s data that doctors are communicating to patients,” Deng says. So she’s grateful to have gained experience interpreting and modeling data at Duke through her academic coursework.

And as the Statistics major particularly has taught her, complexity is not always a good thing – sometimes, the simpler you can make something, the better. “Some research doesn’t always do this,” she says – she’s encountered her fair share of research that feels performative, prioritizing complexity to appear more intellectual. But by continually asking herself whether her research is explainable and applicable, she hopes to let those two questions be the North Stars that guide her future research endeavors.

At the end of the day, it’s data that doctors are communicating to patients.

Eden Deng

When asked what advice she has for first-years, Deng said that it’s important “to not let your inexperience or perceived lack of knowledge prevent you from diving into what interests you.” Even as a first-year undergrad, know that you can contribute to academia and the world of research.

And for those who might be interested in pursuing an academic path like Deng’s, there’s some good news. After Deng talked to the Statistics department about the lack of pre-health representation, the department created a pre-health listserv that you can join for updates and opportunities pertaining specifically to pre-med Stats majors. And Deng emphasizes that the Stats-CS-pre-med group at Duke is growing. She’s noticed quite a few underclassmen in the Statistics and Computer Science departments who vocalize an interest in medical school.

So if you also want to hone your ability to communicate research that you care about – whether you’re pre-med or not – feel free to jump right into the world of data analysis. As Deng concludes, “everyone has something to say that’s important.”

Post by Meghna Datta, Class of 2023

Origami Robots: How Technology Moves at the Micro Level

Imagine a robot small enough to fit on a U.S. penny – even small enough to rest on Lincoln’s chest. It sounds preposterous enough. Now, imagine a robot small enough to rest on the chest of Lincoln – not the Lincoln whose head decorates the front side of the penny, but the even tinier version of him on the back.

Before it was changed to a Union Shield, the tails side of the penny showed the Lincoln Memorial, including a minuscule representation of the seated Lincoln statue that rests inside. Barely visible to the naked eye, this miniature Lincoln is on the order of a few hundred micrometers wide. As incredible as it sounds, this is the scale of robots being built by Professor Itai Cohen and his lab at Cornell University. On February 22, Cohen shared several of his lab’s cutting-edge technologies with an audience in Duke’s Schiciano Auditorium.

Dr. Itai Cohen from Cornell University begins his presentation by demonstrating the scale of the microrobots being developed by his lab.

To begin, Cohen describes the challenge of building robots as consisting of two distinct parts: the brain of the robot, and the brawn. The brain refers to the microchip, and the brawn refers to the “legs,” or actuating limbs, of the robot. Between these two, the brain – believe it or not – is the easy part. As Cohen explains, “fifty years of Moore’s Law has solved this problem.” (In 1965, Gordon Moore theorized that the number of transistors able to fit on a microchip would double roughly every two years, suggesting that computational progress would become exponentially more efficient over time.) We now possess the ability to create ridiculously small microcircuits that fit on a footprint of a few micrometers. The brawn, on the other hand, is a major challenge.
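As an aside, a quick back-of-the-envelope calculation (mine, not Cohen’s) shows why fifty years of Moore’s Law makes the “brain” the easy part:

```python
# Doubling roughly every two years means about 25 doublings over 50 years.
years = 50
doublings = years // 2
growth = 2 ** doublings
print(f"{doublings} doublings -> roughly {growth:,}x more transistors per chip")
```

No comparable scaling exists for the mechanical side, which is why the brawn remains the hard part.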

This is where Cohen and his lab come in. Their idea was to use standard fabrication tools used by the semiconductor industry to build the chips, and then build the robot around the chip by folding the robot into the 3D shape they desired. Think origami, but at the microscopic scale. 

Like any good origami artist, the researchers at the Cohen lab recognized that it all starts with the paper. Using the unique tools at the Cornell Nanoscale Facility, the Cohen team created the world’s thinnest paper, including one made from a single sheet of graphene. To clarify, that’s a single atom thick.

Next came the folding. As Cohen describes, there are really two main options. The first is to shrink down the origami artist to the microscopic level. He concedes that science doesn’t know how to do that quite yet. Alas, the second strategy is to have the paper fold itself. (I will admit that as an uneducated listener, option number two sounds about as absurd as the first one.) Regardless, this turns out to be the more reasonable option.

Countless different iterations of microrobots can be fabricated using the origami folding technique.

The basic process works like this: a seven-nanometer-thick platinum layer is coated on one side with an inert material. When the device is placed in a solution and a voltage is applied, ions dissociated in the solvent adsorb onto the platinum surface. When this happens, a stress is created that bends the device. Reversing the voltage drives away the ions and unbends the device. Applying stiff elements to certain regions restricts the bending to desired locations. Devices about the thickness of a human hair can be created (folded and unfolded) using this method.

This microscopic origami duck developed by the Cohen Lab graced the cover of Science Robotics in March 2021.

As incredible as this is, there is still one defect: the device requires a wire connecting it to an external power source. To solve this problem, the Cohen lab uses photovoltaics (mini solar panels) that attach directly onto the device itself. When light is shined on a photovoltaic (via sunlight or lasers), it moves the limb. With this advance and some continuous tweaking, the Cohen lab was able to develop the world’s smallest walking robot.

At just 40 microns by 70 microns by 2 microns thick, the smallest walking microrobot in the world is able to fold itself up and walk off the page.

The Cohen Lab also built “BroBot” – a microrobot that “flexes his muscles” when light is shined on the front photovoltaics and truly “looks like he belongs on a beach somewhere.”

The “BroBot,” complete with “chest hair,” was one of the earlier versions of the robot that eventually was refined into the world record-winning microrobot.

The Cohen Lab successfully eliminated the need for any external wire, but there was still more left to be desired. These robots, including “BroBot” and the Guinness World Record-winning microrobot, still required lasers to activate the limbs. In this sense, as Cohen explains, the robots were “still just marionettes” being controlled by “strings” in the form of laser pulses.

To go beyond this, the Cohen Lab began working with a commercial foundry, X-Fab, to create microchips that would act as a brain that could coordinate the limb movements. In this way, the robots would be able to move on their own, without using lasers pointed at specific photovoltaics. Cohen describes this moment as “cutting the strings on the marionette, and bringing Pinocchio to life.”

This is the final key step in the development of Ant Bot: a microrobot that moves all on its own. It uses a hexapod gait, meaning a tripod on each side. All that has to be done is to place the robot in sunlight, and the brain does the rest of the coordination.

“Ant Bot,” one of the most advanced of all microrobots to come out of the Cohen Lab, is able to move autonomously, without the aid of lasers.

The potential for these kinds of microrobots is nearly limitless. As Cohen emphasizes, the application for robots at the microscale is “basically anything you can imagine doing at the macroscale”: cleaning surfaces, transporting cargo, building components. Perhaps conducting microsurgeries, or exploring new worlds that appear inaccessible. One particularly promising application is a robot that mimics the movement of cilia – the microscopic cellular hairs responsible for countless locomotion and sensory functions in the body. A cilia-covered chip could become the basis of new portable diagnostic devices, enabling field testing that would be much easier, cheaper, and more efficient.

The researchers at the Cohen Lab envision a possible future where microscopic robots are used in swarms to restructure blood vessels, or probe large swathes of the human brain in a new form of healthcare based on quantum materials. 

Until now, few would have imagined that the ancient art of origami would predict and enable technology that could transform the future of medicine and accelerate the exploration of the universe.

Post by Kyla Hunter, Class of ’23

