DURHAM, N.C. — It’s been a busy season for AI policy.
The rise of ChatGPT unleashed a frenzy of headlines around the promise and perils of artificial intelligence, and raised concerns about how AI could impact society without more rules in place.
Government intervention has consequently entered a new phase in recent weeks. On Oct. 30, the White House issued a sweeping executive order regulating artificial intelligence.
The order aims to establish new standards for AI safety and security, protect privacy and equity, stand up for workers and consumers, and promote innovation and competition. It’s the U.S. government’s strongest move yet to contain the risks of AI while maximizing the benefits.
“It’s a very bold, ambitious executive order,” said Duke executive-in-residence Lee Tiedrich, J.D., who is an expert in AI law and policy.
Tiedrich has been meeting with students to unpack these and other developments.
“The technology has advanced so much faster than the law,” Tiedrich told a packed room in Gross Hall at a Nov. 15 event hosted by Duke Science & Society.
“I don’t think it’s quite caught up, but in the last few weeks we’ve taken some major leaps and bounds forward.”
Countries around the world have been racing to establish their own guidelines, she explained.
The same day the executive order was issued, leaders from the Group of Seven (G7) — which includes Canada, France, Germany, Italy, Japan, the United Kingdom and the United States — announced that they had reached agreement on a set of guiding principles on AI and a voluntary code of conduct for companies.
Both actions came just days before the first-ever global summit on the risks associated with AI, held at Bletchley Park in the U.K., during which 28 countries including the U.S. and China pledged to cooperate on AI safety.
“It wasn’t a coincidence that all this happened at the same time,” Tiedrich said. “I’ve been practicing law in this area for over 30 years, and I have never seen things come out so fast and furiously.”
The stakes for people’s lives are high. AI algorithms do more than just determine what ads and movie recommendations we see. They help diagnose cancer, approve home loans, and recommend jail sentences. They filter job candidates and help determine who gets organ transplants.
Which is partly why we’re now seeing a shift in the U.S. from what has been a more hands-off approach to “Big Tech,” Tiedrich said.
In the 1990s when the internet went public, and again when social media started in the early 2000s, “many governments — the U.S. included — took a light touch to regulation,” Tiedrich said.
But this moment is different, she added.
“Now, governments around the world are looking at the potential risks with AI and saying, ‘We don’t want to do that again. We are going to have a seat at the table in developing the standards.’”
Power of the Purse
Biden’s AI executive order differs from laws enacted by Congress, Tiedrich acknowledged in a Nov. 3 meeting with students in Pratt’s Master of Engineering in AI program.
Congress continues to consider various AI legislative proposals, such as the recently introduced bipartisan Artificial Intelligence Research, Innovation and Accountability Act, “which creates a little more hope for Congress,” Tiedrich said.
What gives the administration’s executive order more force is that “the government is one of the big purchasers of technology,” Tiedrich said.
“They exercise the power of the purse, because any company that is contracting with the government is going to have to comply with those standards.”
“It will have a trickle-down effect throughout the supply chain,” Tiedrich said.
The other thing to keep in mind is “technology doesn’t stop at borders,” she added.
“Most tech companies aren’t limiting their market to one or two particular jurisdictions.”
“So even if the U.S. were to have a complete change of heart in 2024” and the next administration were to reverse the order, “a lot of this is getting traction internationally,” she said.
“If you’re a U.S. company, but you are providing services to people who live in Europe, you’re still subject to those laws and regulations.”
From Principles to Practice
Tiedrich said a lot of what’s happening today in terms of AI regulation can be traced back to a set of guidelines issued in 2019 by the Organisation for Economic Co-operation and Development (OECD), where she serves as an AI expert.
These include commitments to transparency, inclusive growth, fairness, explainability and accountability.
For example, “we don’t want AI discriminating against people,” Tiedrich said. “And if somebody’s dealing with a bot, they ought to know that. Or if AI is involved in making a decision that adversely affects somebody, say if I’m denied a loan, I need to understand why and have an opportunity to appeal.”
“The OECD AI principles really are the North Star for many countries in terms of how they develop law,” Tiedrich said.
“The next step is figuring out how to get from principles to practice.”
“The executive order was a big step forward in terms of U.S. policy,” Tiedrich said. “But it’s really just the beginning. There’s a lot of work to be done.”