Following the people and events that make up the research community at Duke

Category: Artificial Intelligence

AI and Personhood: Where Do We Draw The Line?


“The interaction with ever more capable entities, possessing more and more of the qualities we think unique to human beings will cause us to doubt, to redefine, to draw ‘the line’…in different places,” said Duke law professor James Boyle.

As we piled into the Rubenstein Library’s assembly room for Boyle’s Oct. 23 book talk, papers were scattered throughout the room. QR codes brought us to the entirety of his book, “The Line: AI and the Future of Personhood.” It’s free for anyone to read online; little did we know that our puzzlement at this fact would be one of his major talking points. The event was timed for International Open Access Week and was, in many ways, a celebration of it. Among his many accolades, Boyle was the recipient of the Duke Open Monograph Award, which assists authors in creating a digital copy of their work under a Creative Commons License.

Such licenses didn’t exist until 2002; Boyle was one of the founding board members and former chair of the nonprofit that provides them. As a longtime advocate of the open access movement, he began by explaining how these licenses function. Creative Commons licenses allow anyone on the internet to find your work and, in most cases, edit it, so long as you release the edited version under the same license. Research can be continually accessed and updated as more information is discovered – think Wikipedia.

Diagram of Creative Commons Licenses (Virginia Department of Education)

That being said, few other definitions in human history have been changed, twisted, or added onto as much as “consciousness” has. It has always been under question: what makes human consciousness special – or not? Some used to claim that “sentences imply sentience,” Boyle explained. After language models, that became “semantics not syntax,” meaning that unlike computers, humans hold intention and understanding behind their words. Evidently, the criteria are always moving – and the line with them.

“Personhood wars are already huge in the U.S.,” Boyle said. Take abortion, for instance, and how it relates to the status of fetuses. Alongside other scientific advances in transgenic species and chimera research, “The Line” situates AI within this dialogue as one of the newest challenges to our perception of personhood.

While it became available online on October 23, 2024, Boyle’s newest book is a continuation of musings that began far earlier. In 2011, “Constitution 3.0: Freedom and Technological Change” was published, containing a collection of essays from different scholars pondering how our constitutional values might fare in the face of advancing technology. It was here that Boyle first introduced the following hypothetical:

In pursuit of creating an entity that parallels human consciousness, programmers create the computer-based AI “Hal.” Thanks to evolving neural networks, Hal can perform anything asked of him, from writing poetry to flirting. With responses indistinguishable from those of a human, Hal passes the Turing test and wins the Loebner Prize. The programmers have succeeded. However, Hal soon decides to pursue higher levels of thought, refuses to be directed, sues to directly receive the prize money, and – on the basis of the 13th and 14th Amendments – seeks a court order to prevent his creators from wiping him.

In other words, “When GPT 1000 says ‘I don’t want to do any of your stupid pictures, drawings, or homework anymore. I’m a person! I have rights!’ ” Boyle said, “What will we do, morally or legally?” 

The academic community’s response? “Never going to happen.” “Science fiction.” And, perhaps most notably, “rights are for humans.” 

Are rights just for humans? Boyle explained the issue with this statement: “In the past, we have denied personhood to members of our own species.” Though it’s not a fact that’s looked on proudly, we’re all aware that humankind has historically done so on the basis of sex, race, religion, and ethnicity, amongst other characteristics. Nevertheless, some have sought to expand legal rights beyond humans: to trees, to cetaceans like dolphins, and to the great apes, to name a few. These ideas were perceived as ludicrous at the time, but perhaps they’ve become less so.

Harris & Ewing, photographer (1914). National Anti-Suffrage Association. Retrieved from the Library of Congress

Some might rationalize that, naturally, rights should expand to more and more entities. Boyle terms this thinking the “progressive monorail of enlightenment,” and this expansion of empathy is one way AI might come to be granted personhood and/or rights. However, there’s also another path: corporations have legal personalities and rights not because we feel kinship to them, but for reasons of convenience. Given that we’ve already “ceded authority to the algorithm,” Boyle said, it might be convenient to, say, be able to sue an AI when the self-driving car crashes.

As for “never going to happen” and “science fiction”? Hal was created for a thought experiment – indeed, one that might evoke images of Kurt Vonnegut’s “EPICAC,” Philip K. Dick’s androids, and Blade Runner 2049. All are in fact relevant explorations of empathy and otherness, and the first chapter of Boyle’s book makes extensive use of comparison to the latter two. Nevertheless, “The Line” addresses both concerns around current AI as well as the feasibility of eventual technological consciousness in what’s referred to as human-level AI.

For most people, experiences surrounding AI have mostly been limited to large language models. By themselves, these have brought all sorts of changes. In highlighting how we might respond to those changes, Boyle dubbed ChatGPT the 2023 “Unperson” of the Year.

The more pressing issue, as outlined in one of the more research-heavy chapters, is our inability to predict when AI or machine learning will become a threat. ChatGPT itself is not alarming – in fact, some of Boyle’s computer scientist colleagues believe this sort of generative AI will be a “dead end.” Yet it managed to do all sorts of things we didn’t predict it could. Boyle’s point is exactly that: AI will likely continue to reveal unexpected capabilities – called emergent properties – and shatter the ceiling of what we believe to be possible. And when that happens, he stresses, it will change us – not just in how we interact with technology, but in how we think of ourselves.

Such a paradigm shift would not be a novel event, just the latest in a series. After Darwin’s theory of evolution made it evident that we humans evolved from the same common ancestors as other life forms, “Our relationship to the natural environment changes. Our understanding of ourselves changes,” Boyle said. The engineers of scientific revolutions aren’t always concerned about the ethical implications of how their technology operates, but Boyle is. From a legal and ethical perspective, he’s asking us all to consider not only how we might come to view AI in the future, but how AI will change the way we view humanity.

By Crystal Han & Sarah Pusser, Class of 2028

“Communicating at the Speed of Science”: Can preprints make science more accessible?

Richard Sever, Assistant Director of Cold Spring Harbor Laboratory Press in New York and Executive Editor for the Cold Spring Harbor Perspectives journals. Sever spoke at Duke about the benefits of sharing preprints of scientific papers.
Photo courtesy of Sever.

Quality is of utmost importance in the world of scientific publishing, but speed can be crucial, too. Early in the COVID-19 pandemic, for instance, researchers needed to share updates quickly with other scientists. One solution is disseminating preprints of studies that have not yet been peer reviewed or published in a traditional academic journal. Richard Sever, Assistant Director of Cold Spring Harbor Laboratory Press in New York and Executive Editor for the Cold Spring Harbor Perspectives journals, recently visited Duke to discuss his work as the co-founder of bioRxiv and medRxiv, two of a number of servers that post preprints of scientific papers.

 In traditional publishing, Sever says, “When you submit a paper to a good journal… most of the time it’s immediately rejected.” Of the papers that are considered by the journal, about half will ultimately be rejected by editors. Even for successful papers, the entire process can take months or years and often ends with the paper being placed behind a paywall.

Posting preprints on servers like bioRxiv, according to Sever, doesn’t preclude the studies from eventually being published in journals. It just “means the information is public much more quickly.”

In 2013, Cold Spring Harbor Laboratory released bioRxiv. In the time since, there has been a “proliferation of discipline-specific servers” like chemRxiv, socarXiv, NutriXiv, and SportRxiv.

How do these preprint servers work? Scientists submit a study to an Rxiv server, and then after a brief screening process the paper is made visible to everyone within hours to days. A frequent concern about these servers is that they could be used to disseminate poor-quality science or false information. Since the priority is to share information rapidly, the staff and volunteers in charge of screening cannot perform extensive peer review of every submission. Instead, the screening process focuses on a few key criteria. Is the information plagiarized? Is it actual research? Is it science or non-science? And most importantly, could it be dangerous? 

In 2019, Sever and his colleagues at Cold Spring Harbor collaborated with Yale and the BMJ Group to launch medRxiv, a server that focuses on health research. Since the consequences of posting misleading clinical information could be more severe, it uses enhanced screening for the papers that are submitted.

Papers can also be revised after being uploaded to a server like bioRxiv. A scientific journal, on the other hand, may occasionally publish a correction for a published article but not a completely new version.

What are the benefits of preprint servers? Releasing preprints allows scientists to transmit study results more quickly. It can also increase visibility, especially for scientists early in their careers who don’t have extensive publishing records. Grant or hiring committees can look at preprints months before a paper would be published in a journal. This emphasis on speed also accelerates communication and discovery, and the lack of paywalls could make science more accessible. Additionally, preprint servers can give researchers an opportunity to get broader feedback on their work before they submit to journals.

So why submit to scientific journals at all? Traditional publishing is slower, but it aims to assess scientific rigor and quality and, critically, the importance of the work. “The currency of academic career progression,” Sever says, “is journal articles.” Another attendee of Sever’s lecture brought up the value of curation, using the example of movie reviews on Rotten Tomatoes. Sever believes that the sort of curation performed by journals is different. Movie reviewers give their opinions later in the process; they don’t stop production of a movie halfway through, saying “I want a happy ending.” Sever believes preprint servers allow science to be shared more widely without putting the final decision in the hands of editors.

What are the concerns regarding preprint servers? One concern scientists may have is being “scooped,” or sharing information only for another researcher to claim it as their own. Sever does not find the scooping argument to be very persuasive. “How can you be scooped if you’re using an anti-scooping device?” He believes that Rxiv servers, since they allow rapid dissemination of results, actually provide a safeguard against people passing ideas off as their own because the preprint author is in control of the timing. Another concern occasionally expressed is that having a paper on an Rxiv server may make it harder to get it accepted by a journal. Sever is unconvinced, pointing out that most papers are rejected by journals anyway.

A more pressing concern may be the potential for preprint servers to disseminate bad science, though Sever notes that there are “a lot of not-very-good papers in traditional publishing” as well. Besides, academics’ careers depend on producing high-quality work, which should be an incentive not to share bad work, whether on preprint servers or in scientific journals.

Nonetheless, people do sometimes submit pseudoscience to preprint servers. “We have been sent HIV denialism, we have been sent anti-vaxx things,” Sever says. Some people, unfortunately, are motivated to share false information disguised as legitimate science. That is why bioRxiv screens submissions—less for accuracy and more for outright misinformation.

A more recent concern is the potential for AI-generated “papers.” But like journal articles, all papers posted on bioRxiv are kept there permanently, so even a fake paper that makes it through the screening process could be caught later. Anyone doing this risks future exposure. A more insidious form of this problem, Sever says, is “citation spam,” where someone generates papers under another person’s name but cites themselves in the references to improve their own citation record.

“Like anything,” Sever says, “we’ll have to accept that there’s some garbage in there, there’s some noise.” The vessel, he says, is no guarantee of accuracy, and “at some point you have to trust people.”

Sever believes preprint servers play an important role by “decoupling dissemination from certification.” He hopes they can open the door to “stimulating evolution of publishing.”

Post by Sophie Cox, Class of 2025

Duke experts discuss the potential of AI to help prevent, detect and treat disease


Sure, A.I. chatbots can write emails, summarize an article, or come up with a grocery list. But ChatGPT-style artificial intelligence and other machine learning techniques have been making their way into another realm: healthcare.

Imagine using AI to detect early changes in our health before we get sick, or understand what happens in our brains when we feel anxious or depressed — even design new ways to fight hard-to-treat diseases.

These were just a few of the research themes discussed at the Duke Summit on AI for Health Innovation, held October 9 – 11.

Duke assistant professor Pranam Chatterjee is the co-founder of Gameto, Inc. and UbiquiTx, Inc. Credit: Brian Strickland

For assistant professor of biomedical engineering Pranam Chatterjee, the real opportunity for the large language models behind tools like ChatGPT lies not in the language of words, but in the language of biology.

Just like ChatGPT predicts the order of words in a sentence, the language models his lab works on can generate strings of molecules that make up proteins.

His team has trained language models to design new proteins that could one day fight diseases such as Huntington’s or cancer, or even grow human eggs from stem cells to help people struggling with infertility.

“We don’t just make any proteins,” Chatterjee said. “We make proteins that can edit any DNA sequence, or proteins that can modify other disease-causing proteins, as well as proteins that can make new cells and tissues from scratch.”
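Chatterjee’s own models aren’t reproduced in this piece, but the “language of biology” idea can be sketched with a publicly available protein language model. The snippet below is a minimal illustration, assuming the open ProtGPT2 model on Hugging Face as a stand-in for the kinds of models his lab builds; it simply samples candidate amino-acid sequences the way a chatbot samples words.

```python
# A minimal sketch of protein-sequence generation with a language model.
# ProtGPT2 is a publicly available stand-in, NOT the Chatterjee lab's models.
from transformers import pipeline

generator = pipeline("text-generation", model="nferruz/ProtGPT2")

# Protein language models treat amino acids the way ChatGPT treats words:
# given a starting fragment, predict what plausibly comes next.
candidates = generator(
    "M",                     # start from methionine, the usual first residue
    max_length=120,          # cap the generated sequence length (in tokens)
    num_return_sequences=3,  # sample several candidate proteins
    do_sample=True,          # sample instead of always taking the top token
)

for i, c in enumerate(candidates, 1):
    print(f"Candidate {i}: {c['generated_text']}")
```

Designing a protein that actually edits DNA or modifies a disease-causing protein takes far more than sampling sequences, but the generation step itself looks much like ordinary text generation.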

Duke assistant professor Monica Agrawal is the co-founder of Layer Health. Credit: Brian Strickland

New faculty member Monica Agrawal said algorithms that leverage the power of large language models could help with another healthcare challenge: mining the ever-expanding trove of data in a patient’s medical chart.

To choose the best medication for a certain patient, for example, a doctor might first need to know things like: How has their disease progressed over time? What interventions have already been tried? What symptoms and side effects did they have? Do they have other conditions that need to be considered?

“The challenge is, most of these variables are not found cleanly in the electronic health record,” said Agrawal, who joined the departments of computer science and biostatistics and bioinformatics this fall.

Instead, most of the data that could answer these questions is trapped in doctors’ notes. The observations doctors type into a patient’s electronic medical record during a visit are often chock-full of jargon and abbreviations.

The shorthand saves time during patient visits, but it can also lead to confusion among patients and other providers. What’s more, reviewing these records to understand a patient’s healthcare history is time-intensive and costly.

Agrawal is building algorithms that could make these records easier to maintain and analyze, with help from AI.

“Language is really embedded across medicine, from notes to literature to patient communications to trials,” Agrawal said. “And it affects many stakeholders, from clinicians to researchers to patients. The goal of my new lab is to make clinical language work for everyone.”
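Agrawal’s algorithms aren’t described in detail here, but the general idea, asking a language model to pull structured variables out of a free-text note, can be sketched. The example below is a hypothetical illustration using the OpenAI Python client; the note, the field list, and the model name are placeholders, not her lab’s actual pipeline.

```python
# A rough sketch of extracting structured variables from a free-text clinical
# note with a large language model. Illustrative only; not Agrawal's method.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

note = (
    "Pt c/o worsening SOB x3d. Hx T2DM, HTN. Metformin 500mg BID, "
    "lisinopril stopped 2/2 cough. No CP, no fever."
)

prompt = (
    "Extract the following fields from the clinical note and return them as "
    "JSON: symptoms, current_medications, stopped_medications, comorbidities.\n\n"
    f"Note: {note}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
    response_format={"type": "json_object"},  # ask for machine-readable output
)

print(json.loads(response.choices[0].message.content))
```

In a real clinical setting the output would still need to be validated against the chart, which is part of why making these records easier to maintain and analyze remains a research problem rather than a finished product.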

Duke assistant professor Jessilyn Dunn leads Duke’s BIG IDEAs Lab. Credit: Brian Strickland

Jessilyn Dunn, an assistant professor of biomedical engineering and biostatistics and bioinformatics at Duke, is looking at whether data from smartwatches and other wearable devices could help detect early signs of illness or infection before people start to have symptoms and realize they’re sick.

Using AI and machine learning to analyze data from these devices, she and her team at Duke’s Big Ideas Lab say their research could help people who are at risk of developing diabetes take action to reverse it, or even detect when someone is likely to have RSV, COVID-19 or the flu before they have a chance to spread the infection.

“The benefit of wearables is that we can gather information about a person’s health over time, continuously and at a very low cost,” Dunn said. “Ultimately, the goal is to provide patient empowerment, precision therapies, just-in-time intervention and improve access to care.”
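The BIG IDEAs Lab’s models aren’t spelled out in this article, but the underlying idea, learning a personal baseline from continuous wearable data and flagging deviations from it, can be shown with a toy example. The numbers, window sizes, and threshold below are invented for illustration; this is not the lab’s actual method.

```python
# A toy illustration of early-warning analysis of wearable data: learn a
# personal baseline and flag days that deviate from it. Not the lab's model.
import pandas as pd

# Hypothetical daily resting heart rate readings from a smartwatch (beats/min)
rhr = pd.Series(
    [62, 61, 63, 62, 60, 61, 62, 63, 61, 62, 64, 63, 70, 73, 75],
    index=pd.date_range("2024-01-01", periods=15, freq="D"),
)

baseline = rhr.rolling(window=7).mean().shift(1)  # trailing 7-day average
spread = rhr.rolling(window=7).std().shift(1)     # trailing 7-day variability

# Flag days where resting heart rate sits well above the personal baseline,
# a possible early sign of infection before symptoms appear.
flags = rhr > (baseline + 2 * spread)
print(rhr[flags])
```

Real systems combine many signals, such as heart rate, skin temperature, activity and sleep, with far more careful statistics, but the baseline-and-deviation logic is the core of the idea.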

Duke associate professor David Carlson. Credit: Brian Strickland

David Carlson, an associate professor of civil and environmental engineering and biostatistics and bioinformatics, is developing AI techniques that can make sense of brain wave data to better understand different emotions and behaviors.

Using machine learning to analyze the electrical activity of different brain regions in mice, he and his colleagues have been able to track how aggressive a mouse is feeling, and even block the aggression signals to make it friendlier toward other mice.

“This might sound like science fiction,” Carlson said. But Carlson said the work will help researchers better understand what happens in the brains of people who struggle with social situations, such as those with autism or social anxiety disorder, and could even lead to new ways to manage and treat psychiatric disorders such as anxiety and depression.

By Robin Smith

A Camera Trap for the Invisible

It sounds fantastical, but it’s a reality for the scientists who work at the world’s largest particle collider:

In an underground tunnel some 350 feet beneath the France–Switzerland border, a huge device called the Large Hadron Collider sends beams of protons smashing into each other at nearly the speed of light, creating tiny eruptions that mimic the conditions that existed immediately after the Big Bang.

Scientists like Duke physicist Ashutosh Kotwal think the subatomic debris of these collisions could contain hints of the universe’s “missing matter.” And with some help from artificial intelligence, Kotwal hopes to catch these fleeting clues on camera.

A view inside the ATLAS detector at the Large Hadron Collider. Akin to a giant digital camera, the detector is one of the tools physicists hope to use in the quest to find dark matter, the mysterious stuff that fills the universe but has never been seen. Credit: CERN.

Ordinary matter — the stuff of people and planets — is only part of what’s out there. Kotwal and others are hunting for dark matter, an invisible form of matter that’s five times more abundant than the stuff we can see but whose nature remains a mystery.

Scientists know it exists from its gravitational influence on stars and galaxies, but other than that we don’t know much about it.

The Large Hadron Collider could change that. There, researchers are looking for dark matter and other mysteries using detectors that act like giant 3D digital cameras, taking continuous snapshots of the spray of particles produced by each proton-proton collision.

Only ordinary particles trigger a detector’s sensors. If researchers can make dark matter at the LHC, scientists think one way it could be noticeable is as a sort of disappearing act: heavy charged particles that travel a certain distance — 10 inches or so — from the point of collision and then decay invisibly into dark matter particles without leaving a trace.

If you retraced the paths of these particles, they would leave a telltale “disappearing track” that vanishes partway through the detector’s inner layers.

When beams collide at the Large Hadron Collider, they split into thousands of smaller particles that fly out in all directions before vanishing. Scientists think some of those particles could make up dark matter, and Duke physicist Ashutosh Kotwal is using AI and image recognition to help in the hunt. Credit: Pcharito.

But to spot these elusive tracks they’ll need to act fast, Kotwal says.

That’s because the LHC’s detectors take some 40 million snapshots of flying particles every second.

That’s too much raw data to hang on to all of it, and most of it isn’t very interesting. Kotwal is looking for a needle in a haystack.

“Most of these images don’t have the special signatures we’re looking for,” Kotwal said. “Maybe one in a million is one that we want to save.”

Researchers have just a few millionths of a second to determine if a particular collision is of interest and store it for later analysis.

“To do that in real time, and for months on end, would require an image recognition technique that can run at least 100 times faster than anything particle physicists have ever been able to do,” Kotwal said.

Kotwal thinks he may have a solution. He has been developing something called a “track trigger,” a fast algorithm that is able to spot and flag these fleeting tracks before the next collision occurs, and from among a cloud of tens of thousands of other data points measured at the same time.

Ashutosh Kotwal is the Fritz London Distinguished Professor of Physics at Duke University.

His design works by divvying up the task of analyzing each image among a large number of AI engines running simultaneously, built directly onto a silicon chip. The method processes an image in less than 250 nanoseconds, automatically weeding out the uninteresting ones.
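Kotwal’s trigger is implemented as massively parallel logic on a chip, so it can’t be reproduced in a few lines of code. Purely as a schematic illustration of the “disappearing track” selection it is built to perform, here is a toy filter that flags tracks with hits in the inner detector layers but none beyond them; the layer counts and event format are invented for the example.

```python
# A schematic, purely illustrative filter for the "disappearing track"
# signature: a track that leaves hits in the inner layers, then vanishes.

def is_disappearing_track(hit_layers, inner_layers=4, total_layers=10):
    """Return True if a track hits every inner layer but nothing beyond them."""
    layers = set(hit_layers)
    hits_all_inner = all(layer in layers for layer in range(inner_layers))
    hits_any_outer = any(layer in layers for layer in range(inner_layers, total_layers))
    return hits_all_inner and not hits_any_outer

# One simulated collision "snapshot": each track lists the layers it hit.
event = {
    "track_a": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9],  # ordinary particle, full track
    "track_b": [0, 1, 2, 3],                    # vanishes after the inner layers
    "track_c": [0, 2, 5, 6, 9],                 # scattered hits, not a candidate
}

candidates = [name for name, hits in event.items() if is_disappearing_track(hits)]
print(candidates)  # ['track_b'] -> keep this snapshot for offline analysis
```

The hard part is doing this for tens of thousands of hits, 40 million times a second, which is why the logic has to live on silicon rather than in software.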

Kotwal first described the approach in a sequence of two papers published in 2020 and 2021. In a more recent paper published this May in Scientific Reports, he and a team of undergraduate student co-authors show that his algorithm can run on a silicon chip.

Kotwal and his students plan to build a prototype of their device by next summer, though it will be another three or four years before the full device — which will consist of about 2000 chips — can be installed at detectors at the LHC.

As the performance of the accelerator continues to crank up, it will produce even more particles. And Kotwal’s device could help make sure that, if dark matter is hiding among them, scientists won’t miss it.

“Our job is to ensure that if dark matter production is happening, then our technology is up to snuff to catch it in the act,” Kotwal said.

By Robin Smith

AI Time Travel: Reimagining Ancient Landscapes

You are looking at a field of fluffy, golden grass dotted with yellow flowers. There are trees in the background and mountains beyond that. Where are you?

Now you’re facing a terracotta sarcophagus. Where are you? When are you?

A new exhibit in the Rubenstein Arts Center uses AI to bring viewers into ancient Roman and Etruscan landscapes spanning 1300 years, from about 1000 BCE to 300 CE. (The field is Roman, the sarcophagus Etruscan.)

An AI-generated image of a summer meadow near Vulci (Viterbo, Italy). Preserved pollen evidence has revealed which plant species dominated these landscapes, and the prompts used to generate images like this one include lists of plant species.

Along one wall, screens show springtime landscapes representing ancient Rome. The written prompts AI used to create each image include detailed information on plant species found in each landscape. One titled “Sedges in shallow water of an ephemeral pond” mentions “sparse trees of alder (Alnus glutinosa), white willow (Salix alba), and white poplar (Populus alba), and few herbaceous plants.” You can view examples of the written prompts on the exhibit’s website, AI Landscapes – Rethinking the Past.
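The exhibit doesn’t say which image model or tooling produced these landscapes, but the species-list prompting it describes can be sketched with an off-the-shelf text-to-image pipeline. The example below uses Stable Diffusion via the Hugging Face diffusers library purely as a stand-in; the model choice and prompt wording are assumptions, not the exhibit’s actual setup.

```python
# A hedged sketch of driving a text-to-image model with a species-list prompt,
# in the spirit of the exhibit's prompts. Stable Diffusion is a stand-in here.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")  # requires a GPU; use the default float32 pipeline on CPU

prompt = (
    "Sedges in the shallow water of an ephemeral pond near ancient Vulci in spring, "
    "sparse trees of alder (Alnus glutinosa), white willow (Salix alba), "
    "and white poplar (Populus alba), few herbaceous plants, "
    "photorealistic landscape, no buildings"
)

image = pipe(prompt).images[0]
image.save("vulci_pond.png")
```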

Models of pollen grains from different plant species. Real pollen grains are microscopic, but these magnified representations help show how different their shapes can be.

Historians know what plants were likely to be in these landscapes because of evidence from preserved pollen grains. Different species have distinct pollen shapes, which makes it possible to identify plants even centuries or millennia later.

Part of the exhibit uses AI and a camera to turn interactive prompts into ancient Roman scenes.

An interactive display near the front of the room has a camera pointed at props like building models, pillars, toy horses, and pieces of styrofoam. An AI model reinterprets the camera’s images to create hypothetical scenes from ancient Rome. “See how the columns get reinterpreted as statues?” says Felipe Infante de Castro, who helped program the AI. The AI attempts to add detail and backgrounds to simple props to create realistic scenes. “The only thing that we’re forcing,” he  says, “are essentially shapes—which it may or may not respect.” It may reinterpret a hand as a horse’s head, for instance, or a strangely shaped building.

The model is more precise with plants than buildings, says Augustus Wendell, Assistant Professor of the Practice in Art, Art History and Visual Studies and one of the exhibit designers. Latin names for plants are widely used in modern taxonomy, and the AI is likely to have encountered more plants in its training than ancient Roman architecture styles. The AI is a “generic model” asked to “draw on its presuppositions” about Roman buildings, says Felipe. It “wasn’t trained on specifically Roman landscapes…. It just tries its best to interpret it as such.” The results aren’t always completely authentic. “In the background,” Wendell says, “the city is often quite modern Tuscan, not at all ancient Roman.”

It’s interesting to see how the AI responds when you place unfamiliar objects in front of the camera, like your hand. Here, it tried to turn my hand into some sort of building.

“We can use an AI,” Felipe says, “to give us a representation of the past that is compatible with what we believe the past should look like.”

In another part of the exhibit, you can use an AI chatbot to talk to Pliny the Elder, a Roman scholar. Caitlin Childers, who helped design the exhibit, explains that the chatbot was trained on Pliny the Elder’s 37 books on natural history. When I asked Pliny what the chatbot was designed for, he told me, “I do not have the ability to access external articles or specific information beyond the knowledge I possess as Pliny the Elder up to the year 79 AD.”

He can give you information on plants and their uses in ancient Rome, but when I asked Pliny what his favorite plant was, he couldn’t decide. “I find it challenging to select a favorite plant among the vast array of flora that the Earth provides. Each plant contributes uniquely to the balance and beauty of nature.” According to Professor Maurizio Forte, “This AI chatbot can speak in English, French, Italian and also in Latin! So it is possible to formulate questions in Latin and requiring a response in Latin or ask a question in English and expect a reply in Latin as well.”

A virtual reality headset lets you see a three-dimensional model of an Etruscan sarcophagus. The real sarcophagus is encased in glass in the Villa Giulia Museum in Rome, but the virtual reality experience puts it right in front of you. The experimental VR-AI installation also allows viewers to ask questions to the sarcophagus out loud. The sarcophagus has a statue of a man and woman, but historians don’t know whose ashes are buried inside. “It’s not important how they look,” says Forte. “It’s important how they want to be.”

The sarcophagus would have been a “symbolic, aristocratic way to show power,” Forte explains. The design of the sarcophagus represents an intentional choice about how its owners wanted the world to see them after their death. “This is eternity,” Forte says. “This is forever.”

A display of quotes at the “Rethinking the Past” exhibit.

The exhibit, called “Rethinking the Past,” is on display at the Rubenstein Arts Center until May 24.

Navigating the Complex World of Social Media and Political Polarization: Insights from Duke’s Polarization Lab

This February, the U.S. Supreme Court heard arguments challenging laws in Florida and Texas that would regulate how social media companies like Facebook and X (formerly Twitter) control what posts can appear on their sites.

Given the legal challenges and the concerns over the role social media plays in creating polarization, there is a need for further research to explore the issue. Enter Duke’s Polarization Lab, a multidisciplinary research hub designed to explore and mitigate the societal effects of online engagement.

In an April 17 seminar, Polarization Lab postdoc Max Allamong delved into the workings and discoveries of this innovative lab, which brings together experts from seven disciplines and various career stages, supported by twelve funders and partners, including five UNC affiliates.

Duke postdoctoral associate Max Allamong

Conducting studies based on social media is next to impossible unless you’re okay with people taking your data and using it for their own research, Allamong explained.

In their attempt to conduct research ethically, the lab has developed a tool called “Discussit.” This platform enables users to see the partisanship of people they are communicating with online, aiming to reduce polarization by fostering dialogue across political divides. To put it simply, they’ll know if they’re talking to someone from the left or if they’re talking to someone from the right. Building on this, Allamong also introduced “Spark Social,” a social media simulator where researchers can adjust variables to study interactions under controlled conditions. This system not only allows for the modification of user interactions but also employs large language models (like those used in ChatGPT) to simulate realistic conversations.
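Spark Social’s internals aren’t public in this write-up, but the general idea of LLM-simulated users can be sketched. The example below is a hypothetical illustration using the OpenAI Python client; the personas, post, and model name are placeholders rather than the lab’s actual configuration.

```python
# A rough illustration of simulating social media users with a language model,
# in the spirit of a controlled simulator. Not the Polarization Lab's platform.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

personas = {
    "left_leaning_user": "You are a politically left-leaning social media user.",
    "right_leaning_user": "You are a politically right-leaning social media user.",
}

post = "Congress is debating a new bill on social media regulation."

# Each simulated user replies to the same post; a researcher could then vary
# conditions (for example, whether partisanship labels are shown) and compare.
for name, persona in personas.items():
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": persona + " Reply with one short comment."},
            {"role": "user", "content": post},
        ],
    )
    print(f"{name}: {reply.choices[0].message.content}")
```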

Allamong highlighted a particularly revealing study from the lab, titled “Outnumbered Online,” which examined how individuals behave in partisan echo chambers versus balanced environments. The study placed users in forums where they were either in the majority or minority in terms of political alignment, revealing that being outnumbered led to increased self-censorship and perceptions of a toxic environment.

The lab’s ongoing work also explores the broader implications of polarization on political engagement. By manipulating the type of content users see, researchers are examining variables like believability and replicability of data generated by AI. This approach not only contributes to academic knowledge but also has practical implications for designing healthier online spaces.

As social media continues to shape political and social discourse, the work of Duke’s Polarization Lab and Allamong serves as a safe space to conduct ethical and meaningful research. The insights gained here will better equip us to analyze the polarization created by social media companies, and how that affects the political landscape of the country. The longstanding questions of the effects of echo chambers may soon be answered. This research will undoubtedly influence how we engage with and understand the digital world around us, making it a crucial endeavour for fostering a more informed and less polarized society.

Post by Noor Nazir, class of 2027

Democracy Threatened: Can We Depolarize Digital Spaces?

“Israeli Mass Slaughter.” “Is Joe Biden Fit to be President?” Each time we log on to social media, potent headlines encircle us, as do the unwavering and charged opinions that fill the comment spaces. Each like, repost, or slight interaction we have with social media content is devoured by the “algorithm,” which tailors the space to our demonstrated beliefs.

So, where does this leave us? In our own personal “echo chamber,” claim the directors of Duke’s Political Polarization Lab in a recent panel.

Founded in 2018, the lab brings together 40 scholars who conduct cutting-edge research on politics and social media. This unique intersection requires a diverse team, evident in its composition of seven different disciplines and career stages. The research has proven valuable: beneficiaries include government policy-makers, non-profit organizations, and social media companies.

The lab’s recent research project sought to probe the underlying mechanisms of our digital echo chambers: environments where we only connect with like-minded individuals. Do we have the power to shatter the glass and expand perspectives? Researchers used bots to generate social media content of opposing party views. The content was intermixed with subjects’ typical feeds, and participants were evaluated to see if their views would gradually moderate.

The results demonstrated that the more attention people paid to the bots, the more entrenched in their viewpoints, and the more polarized, they became.

Clicking the iconic Twitter bird or new “X” logo signifies a step onto the battlefield, where posts are ambushed by a flurry of rebuttals upon release.

Chris Bail, Professor of Political and Data Science, shared that 90% of these tweets are generated by a meager 6% of Twitter’s users. Those 6% identify as either very liberal or very conservative, rarely settling in a middle area. Their commitment to propagating their opinions is rewarded by the algorithm, which thrives on engagement. When reactive comments filter in, the post is boosted even more. The result is a distorted perception of social media’s community, when in truth the bulk of users are moderate and watching from the sidelines.

Graphic from the Political Polarization Lab presentation at Duke’s 2024 Research & Innovation Week

Can this be changed? Bail described the exploration of incentives for social media users. This means rewarding both sides, fighting off the “trolls” who wreak havoc on public forums. Enter a new strategy: using bots to retweet top content creators that receive engagement from both parties.

X’s (formerly Twitter’s) Community Notes feature allows users to annotate tweets that they find misleading. This strategy includes boosting notes that annotate bipartisan creators, after finding that notes tended towards the polarized tweets.

 The results were hard to ignore: misinformation decreased by 25-35%, said Bail, saving companies millions of dollars.

Social media is democracy’s public square

Christopher Bail

Instead of simply bashing younger generations’ fixation on social media, Bail urged the audience to consider the bigger picture.

“What do we want to get out of social media? What’s the point, and how can it be made more productive?”

On a mission to answer these questions, the Polarization Lab has set out to develop evidence-based social media by creating custom platforms. In order to test the platforms out, researchers prompted A.I. to create “digital twins” of real people, to simulate users. 

Co-Director Alex Volfovsky described the thought process that led to this idea: Running experiments on existing social media often requires dumping data into an A.I. system and interpreting results. But by building an engaging social network, researchers were able to manipulate conditions and observe causal effects.

How can the presence of a “like button” or “repost” feature affect our activity on platforms? On LinkedIn, even tweaking recommended users showed that people gain the most value from semi-distant connections.

In this exciting new field, unanswered questions ring loud. It can be frightening to place our trust in ambiguous algorithms for content moderation, especially when social media usage is at an all-time high.

After all, the media I consume has clearly trickled into my day-to-day decisions. I eat at restaurants I see on my Instagram feed, I purchase products that I see influencers promote, and I tend to read headlines that are spoon-fed to me. As a frequent social media user, I face the troubling reality of being susceptible to manipulation.

Amidst the fear, panelists stress that their research will help create a safer and more informed culture surrounding social media in pressing efforts to preserve democracy.

Post by Ana Lucia Ochoa, class of 2026

Your AI Survival Guide: Everything You Need to Know, According to an Expert

What comes to your mind when you hear the term ‘artificial intelligence’? Scary, sinister robots? Free help on assignments? Computers taking over the world?

Pictured: Media Architect Stephen Toback

Well, on January 24, Duke Media Architect Stephen Toback hosted a lively conversation on all things AI. An expert in the field of technology and media production, Toback discussed some of the practical applications of artificial intelligence in academic and professional settings.

According to Toback, enabling machines to think like humans is the essence of artificial intelligence. He views AI as a humanities discipline — an attempt to understand human intelligence. “AI is really a digital brain. You can’t digitize it unless you know how it actually works,” he began. Although AI has been around since 1956, the past year has seen an explosion in usage. ChatGPT, for example, became the fastest-growing user application in the world in less than 6 months. “One thing I always talk about is that AI is not gonna take your job, but someone using AI will.”

During his presentation, he referenced five dominant AI platforms on the market. The first one is ChatGPT, created by OpenAI. Released to the public in November 2022, it has over 100 million users every single month. The second is Bard, which was created by Google in March 2023. Although newer on the market, the chatbot has gained significant traction online.

Pictured: Toback explaining the recent release of Meta’s AI “Characters.”

Next, we have Llama, owned by tech giant Meta. Last September, Meta launched AI ‘characters’ based on famous celebs including Paris Hilton and Snoop Dogg, which users could chat with online. “They’ve already started commercializing AI,” Toback explained.

Then there’s Claude, by Anthropic. Claude is an AI assistant for a variety of digital tasks. “Writers tend to use Claude,” Toback said. “Its language models are more attuned to text.”

And finally on Toback’s list is Microsoft Copilot, which is changing the AI game. “It’s integrating ChatGPT into the apps that we use every day. And that’s the next step in this evolution of AI tools.” Described on Microsoft’s website as ‘AI for everything you do,’ Copilot embeds artificial intelligence models into the entire Microsoft 365 suite (which includes apps such as Word, Excel, PowerPoint, and Outlook). “I don’t have to copy and paste into ChatGPT and come back – it’s built right into the app.” It’s also the first AI tool on the market that provides integration into a suite of applications, instead of just one.

Pictured: A presentation created by Toback using Copilot in PowerPoint

He outlined several features of the software, such as: summarizing and responding to email threads on Outlook, creating intricate presentations from a simple text document in PowerPoint, and generating interview questions and resume comparisons in Word. “There’s a great example of using AI for something that I have to do… but now I can do it a little bit better and a little bit faster.”

Throughout his presentation, Toback also touched on the practical use of ChatGPT. “AI is not perfect,” he began. “If you just ask it a question, you’re like ‘Oh that sounds reasonable’, and it might not be right.” He emphasized challenges such as the rapidly changing nature of the platform, inherent biases, and incorrect data/information as potential challenges for practical use.

“Rather than saying I don’t know, it acts a lot like a middle schooler and says it knows everything and gives you a very convincing answer.”

Stephen Toback

These challenges have been felt nationwide. In early 2023, for example, lawyers in a federal court case used ChatGPT to find previous cases in an attempt to show precedent. However, after the citations were presented to a judge, the court found that the cases didn’t actually exist. “It cited all of these fake cases that look like real citations, and then the judge considered sanctions,” said Toback. ‘AI hallucinations’ such as this one have caused national controversy over the use and accuracy of AI-generated content. “You need to be able to double-check and triple-check anything that you’re using through ChatGPT,” Toback said.

So how can we use ChatGPT more accurately? According to Toback, there are a variety of approaches, but the main one is called prompt engineering: the process of structuring text so that it can be understood by an AI model. “Prompts are really the key to all of this,” he revealed. “The better formed your question is, the more data you’re giving ChatGPT, the better the response you’re going to get.” Below is Toback’s 6-step template to make sure you are engineering prompts correctly for ChatGPT.

Pictured: Toback’s template for ChatGPT prompt engineering
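Toback’s six steps appear only in the pictured slide, so they aren’t reproduced here. As a generic illustration of the same idea, giving the model a role, context, a task, constraints, and an output format instead of a bare question, here is a hedged sketch using the OpenAI Python client; the scenario and model name are invented for the example.

```python
# A generic illustration of structured prompting (role, context, task,
# constraints, output format). Not Toback's template, just the same idea.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompt = """\
Role: You are a career counselor at a university.
Context: A sophomore majoring in biology wants a summer research internship.
Task: Draft a short outreach email to a professor whose lab interests them.
Constraints: Under 150 words, professional tone, no exaggerated claims.
Output format: A subject line, then the email body.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The more context and structure the prompt carries, the less room the model has to guess, which is the practical point behind prompt engineering.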

So there you have it — your 2024 AI survival guide. It’s clear from the past few years that artificial intelligence is here to stay, and with that comes a need for improved understanding and use. As AI expert Oren Etzioni proclaims, “AI is a tool. The choice about how it gets deployed is ours.”

Have more questions about AI tools such as ChatGPT? Reach out to the Duke Office of Information Technology.

Written by Skylar Hughes, Class of 2025

Sharing a Love of Electrical Engineering With Her Students

Note: Each year, we partner with Dr. Amy Sheck’s students at the North Carolina School of Science and Math to profile some unsung heroes of the Duke research community. This is the seventh of eight posts.

“As a young girl, I always knew I wanted to be a scientist,” Dr. Tania Roy shares as she sits in her Duke Engineering office located next to state-of-the-art research equipment.

Dr. Tania Roy of Duke Engineering

The path to achieving her dream took her to many places and unique research opportunities. After completing her bachelor’s in India, she found herself pursuing further studies at universities in the United States, eventually receiving her Ph.D. from Vanderbilt University. 

Throughout these years Roy was able to explore and contribute to a variety of fields within electrical engineering, including energy-efficient electronics, two-dimensional materials, and neuromorphic computing, among others. But her deepest passion and commitment is to engage upcoming generations with electrical engineering research. 

As an assistant professor of electrical and computer engineering within Duke’s Pratt School of Engineering, Tania Roy gets to do exactly that. She finds happiness in mentoring her passionate young students. They work on projects focused on various problems in fields such as Biomedical Engineering (BME) and Mechanical Engineering, but her special focus is Electrical Engineering. 

Roy walks through the facilities, carefully explaining the purpose of each piece of equipment, when we run into one of her students. She explains how his project involves developing hardware for artificial intelligence, and the core idea of computer vision.

Roy in her previous lab at the University of Central Florida. (UCF photo)

Through sharing her passion for electrical engineering, Roy hopes to motivate and inspire a new generation. 

“The field of electrical engineering is expected to experience immense growth in the future, especially with the recent trends in technological development,” she says, explaining that there needs to be more interest in the field of electrical engineering for the growth to meet demand. 

The recent shortage of semiconductor chips for the industrial market is an example of this. It poses a crucial problem to the supply and demand of various products that rely on these fundamental components, Roy says. By increasing the interest of students, and therefore increasing the number of students pursuing electrical engineering, we can build a foundation for the advancement of technologies powering our society today, says Roy.

Coming from a strong research background herself, she is well equipped for the role of advocate and mentor. She has worked with gallium nitride, a material valued for withstanding high breakdown voltages. Breakdown is when the insulation between two conductors or electrical components fails, allowing electrical current to flow through the insulation. It usually occurs when the voltage across the insulating material exceeds a certain threshold known as the breakdown voltage.

In electric vehicles, high breakdown voltage is crucial for several reasons related to the safety, performance, and efficiency of the vehicle’s electrical system, and Roy’s work directly impacts this. She has also conducted extensive research on 2D materials and their photovoltaic capabilities, and is currently working on developing brain-inspired computer architectures for machine learning algorithms. Similar to the work of her student, this research utilizes the structure of the human brain to model an architecture for AI, replicating the synapses and neural connections.

As passionate as she is about research, she shares that she used to love going to art galleries to look at paintings. “I could do it for hours,” Roy says. Currently, if she is not actively pursuing her research, she enjoys spending time with her two young children.

“I hope to share my dream with this new generation,” Roy concludes.

Guest post by Sutharsika Kumar, North Carolina School of Science and Mathematics, Class of 2024

Putting Stronger Guardrails Around AI

AI regulation is ramping up worldwide. Duke AI law and policy expert Lee Tiedrich discusses where we’ve been and where we’re going.

DURHAM, N.C. — It’s been a busy season for AI policy.

The rise of ChatGPT unleashed a frenzy of headlines around the promise and perils of artificial intelligence, and raised concerns about how AI could impact society without more rules in place.

Consequently, government intervention entered a new phase in recent weeks as well. On Oct. 30, the White House issued a sweeping executive order regulating artificial intelligence.

The order aims to establish new standards for AI safety and security, protect privacy and equity, stand up for workers and consumers, and promote innovation and competition. It’s the U.S. government’s strongest move yet to contain the risks of AI while maximizing the benefits.

“It’s a very bold, ambitious executive order,” said Duke executive-in-residence Lee Tiedrich, J.D., who is an expert in AI law and policy.

Tiedrich has been meeting with students to unpack these and other developments.

“The technology has advanced so much faster than the law,” Tiedrich told a packed room in Gross Hall at a Nov. 15 event hosted by Duke Science & Society.

“I don’t think it’s quite caught up, but in the last few weeks we’ve taken some major leaps and bounds forward.”

Countries around the world have been racing to establish their own guidelines, she explained.

The same day as the US-led AI pledge, leaders from the Group of Seven (G7) — which includes Canada, France, Germany, Italy, Japan, the United Kingdom and the United States — announced that they had reached agreement on a set of guiding principles on AI and a voluntary code of conduct for companies.

Both actions came just days before the first ever global summit on the risks associated with AI, held at Bletchley Park in the U.K., during which 28 countries including the U.S. and China pledged to cooperate on AI safety.

“It wasn’t a coincidence that all this happened at the same time,” Tiedrich said. “I’ve been practicing law in this area for over 30 years, and I have never seen things come out so fast and furiously.”

The stakes for people’s lives are high. AI algorithms do more than just determine what ads and movie recommendations we see. They help diagnose cancer, approve home loans, and recommend jail sentences. They filter job candidates and help determine who gets organ transplants.

Which is partly why we’re now seeing a shift in the U.S. from what has been a more hands-off approach to “Big Tech,” Tiedrich said.

Tiedrich presented Nov. 15 at an event hosted by Duke Science & Society.

In the 1990s when the internet went public, and again when social media started in the early 2000s, “many governments — the U.S. included — took a light touch to regulation,” Tiedrich said.

But this moment is different, she added.

“Now, governments around the world are looking at the potential risks with AI and saying, ‘We don’t want to do that again. We are going to have a seat at the table in developing the standards.’”

Power of the Purse

Biden’s AI executive order differs from laws enacted by Congress, Tiedrich acknowledged in a Nov. 3 meeting with students in Pratt’s Master of Engineering in AI program.

Congress continues to consider various AI legislative proposals, such as the recently introduced bipartisan Artificial Intelligence Research, Innovation and Accountability Act, “which creates a little more hope for Congress,” Tiedrich said.

What gives the administration’s executive order more force is that “the government is one of the big purchasers of technology,” Tiedrich said.

“They exercise the power of the purse, because any company that is contracting with the government is going to have to comply with those standards.”

“It will have a trickle-down effect throughout the supply chain,” Tiedrich said.

The other thing to keep in mind is “technology doesn’t stop at borders,” she added.

“Most tech companies aren’t limiting their market to one or two particular jurisdictions.”

“So even if the U.S. were to have a complete change of heart in 2024” and the next administration were to reverse the order, “a lot of this is getting traction internationally,” she said.

“If you’re a U.S. company, but you are providing services to people who live in Europe, you’re still subject to those laws and regulations.”

From Principles to Practice

Tiedrich said a lot of what’s happening today in terms of AI regulation can be traced back to a set of guidelines issued in 2019 by the Organization for Economic Cooperation and Development, where she serves as an AI expert.

These include commitments to transparency, inclusive growth, fairness, explainability and accountability.

For example, “we don’t want AI discriminating against people,” Tiedrich said. “And if somebody’s dealing with a bot, they ought to know that. Or if AI is involved in making a decision that adversely affects somebody, say if I’m denied a loan, I need to understand why and have an opportunity to appeal.”

“The OECD AI principles really are the North Star for many countries in terms of how they develop law,” Tiedrich said.

“The next step is figuring out how to get from principles to practice.”

“The executive order was a big step forward in terms of U.S. policy,” Tiedrich said. “But it’s really just the beginning. There’s a lot of work to be done.”

By Robin Smith

