Following the people and events that make up the research community at Duke


Category: Law

AI and Personhood: Where Do We Draw The Line?


“The interaction with ever more capable entities, possessing more and more of the qualities we think unique to human beings will cause us to doubt, to redefine, to draw ‘the line’…in different places,” said Duke law professor James Boyle.

As we piled into the Rubenstein Library’s assembly room for Boyle’s Oct. 23 book talk, papers were scattered throughout the room. QR codes brought us to the entirety of his book, “The Line: AI and the Future of Personhood.” It’s free for anyone to read online; little did we know that our puzzlement at this fact would be one of his major talking points. The event was timed for International Open Access Week and was, in many ways, a celebration of it. Among his many accolades, Boyle was the recipient of the Duke Open Monograph Award, which assists authors in creating a digital copy of their work under a Creative Commons license.

Such licenses didn’t exist until 2002; Boyle was one of the founding board members and a former chair of the nonprofit that provides them. As a longtime advocate of the open access movement, he began by explaining how the licenses function. Creative Commons licenses allow anyone on the internet to find your work and, in most cases, edit it, so long as you release the edited version under the same license. Research can be continually accessed and changed as more information is discovered–think Wikipedia.

Diagram of Creative Commons Licenses (Virginia Department of Education)

That being said, few other definitions in human history have been changed, twisted, or added onto as much as “consciousness.” It has always been under question: what makes human consciousness special–or not? Some used to claim that “sentences imply sentience,” Boyle explained. After language models, that became “semantics not syntax”–the idea that, unlike computers, humans hold intention and understanding behind their words. Evidently, the criteria are always moving–and the line with them.

“Personhood wars are already huge in the U.S.,” Boyle said. Take abortion, for instance, and how it relates to the status of fetuses. Alongside other scientific advances, such as transgenic species and chimera research, “The Line” situates AI within this dialogue as one of the newest challenges to our perception of personhood.

While it became available online on October 23, 2024, Boyle’s newest book is a continuation of musings that began far earlier. In 2011, “Constitution 3.0: Freedom and Technological Change” was published, a collection of essays from different scholars pondering how our constitutional values might fare in the face of advancing technology. It was here that Boyle first introduced the following hypothetical:

In pursuit of creating an entity that parallels human consciousness, programmers create the computer-based AI “Hal.” Thanks to evolving neural networks, Hal can perform anything asked of him, from writing poetry to flirting. With responses indistinguishable from those of a human, Hal passes the Turing test and wins the Loebner Prize. The programmers have succeeded. However, Hal soon decides to pursue higher levels of thought, refuses to be directed, sues to receive the prize money directly, and–on the basis of the 13th and 14th Amendments–seeks a court order to prevent his creators from wiping him.

In other words, “When GPT 1000 says ‘I don’t want to do any of your stupid pictures, drawings, or homework anymore. I’m a person! I have rights!’ ” Boyle said, “What will we do, morally or legally?” 

The academic community’s response? “Never going to happen.” “Science fiction.” And, perhaps most notably, “rights are for humans.” 

Are rights just for humans? Boyle explained the problem with that claim: “In the past, we have denied personhood to members of our own species,” he said. Though it’s not a fact we look on proudly, we’re all aware that humankind has historically done so on the basis of sex, race, religion, and ethnicity, among other characteristics. Nevertheless, some have sought to expand legal rights beyond humans–to trees, to cetaceans like dolphins, and to the great apes, to name a few. These ideas were perceived as ludicrous at the time, but with time perhaps they’ve become less so.

Harris & Ewing, photographer (1914). National Anti-Suffrage Association. Retrieved from the Library of Congress

Some might rationalize that rights should naturally expand to more and more entities. Boyle terms this thinking the “progressive monorail of enlightenment,” and this expansion of empathy is one way AI might come to be granted personhood and/or rights. There is also another path, however: corporations have legal personality and rights not because we feel kinship with them, but for reasons of convenience. Given that we’ve already “ceded authority to the algorithm,” Boyle said, it might be convenient to, say, be able to sue an AI when the self-driving car crashes.

As for “never going to happen” and “science fiction”? Hal was created for a thought experiment–indeed, one that might invoke images of Kurt Vonnegut’s “EPICAC,” Philip K. Dick’s androids, and “Blade Runner 2049.” All are in fact relevant explorations of empathy and otherness, and the first chapter of Boyle’s book draws extensively on the latter two. Nevertheless, “The Line” addresses both concerns around current AI and the feasibility of eventual technological consciousness–what’s referred to as human-level AI.

For most people, experience with AI has been limited to large language models. By themselves, these have brought all sorts of changes. In highlighting how we might respond to those changes, Boyle dubbed ChatGPT the 2023 “Unperson” of the Year.

The more pressing issue, as outlined in one of the more research-heavy chapters, is our inability to predict when AI or machine learning will become a threat. ChatGPT itself is not alarming–in fact, some of Boyle’s computer scientist colleagues believe this sort of generative AI will be a “dead end.” Yet it managed to do all sorts of things we didn’t predict it could. Boyle’s point is exactly that: AI will likely continue to reveal unexpected capabilities–called emergent properties–and shatter the ceiling of what we believe to be possible. And when that happens, he stresses, it will change us–not just in how we interact with technology, but in how we think of ourselves.

Such a paradigm shift would not be a novel event, just the latest in a series. After Darwin’s theory of evolution made it evident that we humans evolved from the same common ancestors as other life forms, “Our relationship to the natural environment changes. Our understanding of ourselves changes,” Boyle said. The engineers of scientific revolutions aren’t always concerned about the ethical implications of how their technology operates, but Boyle is. From a legal and ethical perspective, he’s asking us all to consider not only how we might come to view AI in the future, but how AI will change the way we view humanity.

By Crystal Han & Sarah Pusser, Class of 2028

What Comes Next for the Law of the Sea Treaty?

More than 40 years since its signing, the United States still has not ratified an international agreement known as the “constitution of the oceans.” In a webinar held April 2, two of the world’s leading ocean diplomacy scholars met to discuss its history, challenges, and the U.S.’s potential role in the future.

The 1982 United Nations Convention on the Law of the Sea was truly revolutionary for its time. Unfolding against the backdrop of decades of conflict over maritime affairs, the significance of this conference and its attempt at negotiating a comprehensive legal framework cannot be overstated. Key participants in this development included United Nations member states, coastal and landlocked states, the scientific and environmental communities, and developing nations. Yet, with the conclusion of this unifying conference, a single question remained: What comes next?

This question is what David Balton, the executive director of the U.S. Arctic Steering Committee, and David Freestone, a professor at George Washington University and the executive secretary of the Sargasso Sea Commission, aimed to address in a webinar titled “The UN Convention on the Law of the Sea at 40.” The discussion ranged over a number of topics, but its primary focus was giving viewers a comprehensive understanding of the convention’s history and the way that history plays out in modern times.

Picture of Ambassador David Balton (Obtained from the Wilson Center)

The 1982 convention was one of multiple attempts at setting parameters and guidelines for maritime governance. In 1958, representatives from around the world met for the first time to address growing concerns about the need for a comprehensive legal framework for ocean governance, discussing the breadth of territorial waters, the rights of coastal states, freedom of navigation, and the exploitation of marine resources. This first conference, referred to as UNCLOS I, laid the groundwork for future discussions. However, it was largely ineffective at generating a treaty, as the parties were unable to reach a consensus on the breadth of territorial waters.

In 1960, the parties convened once again to address the issues left unresolved by UNCLOS I. The purpose of this second conference, UNCLOS II, was to further discuss issues pertaining to the Law of the Sea and to build a framework for ratifying a binding treaty that would greatly diminish conflict over the sea. The discussion was set against the backdrop of the Cold War, which complicated negotiations: talk of placing nuclear weapons on the deep seabed elicited great debate and tension. While the aim of the meeting was, of course, to reach a general agreement on these subjects, major differences between states and other parties prevented UNCLOS II from producing such a treaty.

UNCLOS III was the breakthrough in this development, though results were not immediate. Its negotiations were the longest of the three, spanning from 1973 to 1982. UNCLOS III was particularly notable for producing revolutionary concepts such as archipelagic status and the exclusive economic zone (EEZ), which grants coastal states exclusive rights over fishing and economic resources within 200 nautical miles of their shores. It also led to the creation of the International Seabed Authority and the International Tribunal for the Law of the Sea. Despite the limitations and unfinished agenda that preceded it, the convention was signed at Montego Bay in 1982 and entered into force in 1994. It initially received 157 signatories and currently has 169 parties. Absent from this group are the United States, Turkey, and Venezuela. The convention was designed as a package deal, requiring nations to commit fully to the agreement or abstain entirely. For this reason, the United States retains nonparty, observer status despite its adherence to the treaty’s rules and guidelines.

After this history, Balton and Freestone addressed the big question: What comes next? As of right now, the United States is still not a party to the treaty. However, this is not to say that it is in violation of the treaty, either. The United States participates in discussions and negotiations related to UNCLOS issues, both within the United Nations and through bilateral and multilateral engagements. In addition, the Navy still upholds international law in dealings concerning navigational rights. The one factor many claim prohibits the United States from ratifying is the possibility of its sovereignty being challenged by certain provisions within the treaty. In spite of this, many continue to push to change this reality, advocating for the United States to ratify the agreement.

Picture of Professor David Freestone (Obtained from Flavia at World Maritime University)

The 1982 United Nations Convention on the Law of the Sea remains a pivotal moment in the history of international maritime governance. The convention led to many insightful and necessary developments that will continue to set precedent for generations to come. While imperfect, the efforts put forth by many nations and third parties to keep it consistent with modern times speak to the treaty’s hopeful development. And while the future of U.S. involvement in the treaty is uncertain, the frameworks established by the three UNCLOS conferences provide a solid foundation for addressing contemporary challenges and furthering international cooperation.

Post by Gabrielle Douglas, Class of 2027

Putting Stronger Guardrails Around AI

AI regulation is ramping up worldwide. Duke AI law and policy expert Lee Tiedrich discusses where we’ve been and where we’re going.

DURHAM, N.C. — It’s been a busy season for AI policy.

The rise of ChatGPT unleashed a frenzy of headlines around the promise and perils of artificial intelligence, and raised concerns about how AI could impact society without more rules in place.

Consequently, government intervention entered a new phase in recent weeks as well. On Oct. 30, the White House issued a sweeping executive order regulating artificial intelligence.

The order aims to establish new standards for AI safety and security, protect privacy and equity, stand up for workers and consumers, and promote innovation and competition. It’s the U.S. government’s strongest move yet to contain the risks of AI while maximizing the benefits.

“It’s a very bold, ambitious executive order,” said Duke executive-in-residence Lee Tiedrich, J.D., who is an expert in AI law and policy.

Tiedrich has been meeting with students to unpack these and other developments.

“The technology has advanced so much faster than the law,” Tiedrich told a packed room in Gross Hall at a Nov. 15 event hosted by Duke Science & Society.

“I don’t think it’s quite caught up, but in the last few weeks we’ve taken some major leaps and bounds forward.”

Countries around the world have been racing to establish their own guidelines, she explained.

The same day the executive order was issued, leaders from the Group of Seven (G7) — which includes Canada, France, Germany, Italy, Japan, the United Kingdom and the United States — announced that they had reached agreement on a set of guiding principles on AI and a voluntary code of conduct for companies.

Both actions came just days before the first ever global summit on the risks associated with AI, held at Bletchley Park in the U.K., during which 28 countries including the U.S. and China pledged to cooperate on AI safety.

“It wasn’t a coincidence that all this happened at the same time,” Tiedrich said. “I’ve been practicing law in this area for over 30 years, and I have never seen things come out so fast and furiously.”

The stakes for people’s lives are high. AI algorithms do more than just determine what ads and movie recommendations we see. They help diagnose cancer, approve home loans, and recommend jail sentences. They filter job candidates and help determine who gets organ transplants.

Which is partly why we’re now seeing a shift in the U.S. from what has been a more hands-off approach to “Big Tech,” Tiedrich said.

Tiedrich presented Nov. 15 at an event hosted by Duke Science & Society.

In the 1990s when the internet went public, and again when social media started in the early 2000s, “many governments — the U.S. included — took a light touch to regulation,” Tiedrich said.

But this moment is different, she added.

“Now, governments around the world are looking at the potential risks with AI and saying, ‘We don’t want to do that again. We are going to have a seat at the table in developing the standards.’”

Power of the Purse

Biden’s AI executive order differs from laws enacted by Congress, Tiedrich acknowledged in a Nov. 3 meeting with students in Pratt’s Master of Engineering in AI program.

Congress continues to consider various AI legislative proposals, such as the recently introduced bipartisan Artificial Intelligence Research, Innovation and Accountability Act, “which creates a little more hope for Congress,” Tiedrich said.

What gives the administration’s executive order more force is that “the government is one of the big purchasers of technology,” Tiedrich said.

“They exercise the power of the purse, because any company that is contracting with the government is going to have to comply with those standards.”

“It will have a trickle-down effect throughout the supply chain,” Tiedrich said.

The other thing to keep in mind is “technology doesn’t stop at borders,” she added.

“Most tech companies aren’t limiting their market to one or two particular jurisdictions.”

“So even if the U.S. were to have a complete change of heart in 2024” and the next administration were to reverse the order, “a lot of this is getting traction internationally,” she said.

“If you’re a U.S. company, but you are providing services to people who live in Europe, you’re still subject to those laws and regulations.”

From Principles to Practice

Tiedrich said a lot of what’s happening today in terms of AI regulation can be traced back to a set of guidelines issued in 2019 by the Organization for Economic Cooperation and Development, where she serves as an AI expert.

These include commitments to transparency, inclusive growth, fairness, explainability and accountability.

For example, “we don’t want AI discriminating against people,” Tiedrich said. “And if somebody’s dealing with a bot, they ought to know that. Or if AI is involved in making a decision that adversely affects somebody, say if I’m denied a loan, I need to understand why and have an opportunity to appeal.”

“The OECD AI principles really are the North Star for many countries in terms of how they develop law,” Tiedrich said.

“The next step is figuring out how to get from principles to practice.”

“The executive order was a big step forward in terms of U.S. policy,” Tiedrich said. “But it’s really just the beginning. There’s a lot of work to be done.”

By Robin Smith
