Town Hall

Constitutional Challenges in the Age of AI

May 15, 2024


Tech policy experts Mark Coeckelbergh, author of the new book Why AI Undermines Democracy and What To Do About It, Mary Anne Franks of the George Washington University Law School, and Marc Rotenberg of the Center for AI and Digital Policy explored the evolving relationship between artificial intelligence and constitutional principles and suggested strategies to protect democratic values in the digital age. This conversation was moderated by Thomas Donnelly, chief content officer at the National Constitution Center.

This program was made possible through the generous support of Citizen Travelers, the nonpartisan civic engagement initiative of Travelers.


Participants:

Mark Coeckelbergh is professor of philosophy of media and technology at the University of Vienna. He is the former president of the Society for Philosophy and Technology. He has authored numerous books and articles on the philosophy and ethics of technology, including AI Ethics (2020), The Political Philosophy of AI (2022), and his latest book, Why AI Undermines Democracy and What To Do About It (2024).

Mary Anne Franks is the Eugene L. and Barbara A. Bernard Professor in Intellectual Property, Technology, and Civil Rights Law at the George Washington University Law School. She is an internationally recognized expert on the intersection of civil rights, free speech, and technology, and advises several major technology platforms on privacy, free expression, and safety issues. Franks is also the president and legislative & tech policy director of the Cyber Civil Rights Initiative, a nonprofit organization dedicated to combating online abuse and discrimination. She is the author of the forthcoming book Fearless Speech: Breaking Free from the First Amendment (Oct. 2024).

Marc Rotenberg is the executive director and founder of the Center for AI and Digital Policy. He co-founded the Electronic Privacy Information Center (EPIC) and has served on many international advisory panels, including the OECD AI Group of Experts. He also helped draft the Universal Guidelines for AI, a widely endorsed human rights framework for the regulation of artificial intelligence, and has directed international comparative law studies on Privacy and Human Rights and Artificial Intelligence and Democratic Values. Rotenberg is the author of several textbooks, including the AI Policy Sourcebook (2020) and Privacy and Society (2016). He also serves as an adjunct professor at Georgetown Law.

Additional Resources:

Excerpt from Interview: Mary Anne Franks argues that the dangers of manipulated imagery, like deepfakes, lie in their potential to undermine democracy by blurring truth and falsity, emphasizing the limitations of legislative solutions and the need to address society's willingness to discern and value truth.

Mary Anne Franks: I do want to underscore those points about why we need to be concerned about manipulated imagery and audio, all of these kinds of issues that are housed under this term of deepfakes, because, as Hannah Arendt pointed out many years ago, when you have a society that cannot tell truth from falsity, you can do anything with it. And the dangers there for authoritarianism and for the undermining of democracy, or maybe better, the continued inability of democracy to fully assert itself, these are really serious issues that require careful attention. As for the limitations of some of the legislative approaches: it's certainly true, as Marc Rotenberg was saying, that putting emphasis on watermarking and things of that nature will, I think, mitigate some of the problems, but they're partial responses to the problems that we're facing.

But as he also pointed out, there are limitations to this, not only because the technology always races ahead of whatever the solution will be, but also because of the way that deepfakes operate on our cognitive and emotional reactions. So, two things I want to point out about that. One: when we're talking about sexually explicit digital forgeries, the fact that something is watermarked or clearly indicated as fake, say you go to a website that says all these things are fakes, does not actually undo the harm, right? It may help clarify that this is not a real image, that this is a manipulated image, but the exploitative aspect, the dignitary harms, all of those kinds of harms are still there even if you openly disclose that this is a fake. The second thing I would say goes beyond the sexually explicit forgery question to deepfakes generally, so depictions of, say, a peaceful protest as though it were violent, or again, a politician doing something or saying something that they never did.

The question isn't just about whether you can go back and correct this. And again, AI isn't new here. Misinformation has been around forever, right? It's much more sophisticated now, but there was always a challenge in trying to correct misinformation, not only because sometimes we disagree about what misinformation is, as opposed to strident disagreement or editorializing or opinion, but also because, as a cognitive matter, when we try to tell people this thing is false, what often ends up happening instead is that we reinforce the false message. We know from behavioral science that there's this thing called the illusory truth effect: once we see something, especially if it's highly realistic, even when we're later told that that thing is false, if someone puts that image back up and says, this is a false image, our brains seem to basically process that as a repetition of the same image in a kind of truthful sense.

So we don't retain the correction. We don't retain "this is false"; we retain the image. And I think that's important to keep in mind in terms of the limitations of technological solutions. Watermarking, provenance questions, all of which are really important for solving pieces of this problem, cannot across the board solve it, because of that illusory truth effect. And I think all of them are also just moving around the bigger question, the bigger democracy question, which is how you build an effective system for diagnosing misinformation when, first of all, there are all these technical challenges, but there's also the challenge of how you make the public want to know the truth. One of the problems I think we're facing is not just that people have a hard time distinguishing between what is true and what is false; we're also dealing with the fact that large parts of the population don't want to know whether something is true or false.

They want to know whether something validates their particular worldview. So they're eager to indulge in something that might have questionable provenance because it supports something that they think is right, and they're happy to share that and to click on that and engage with that because it supports their worldview. So there's this question of why we're seeing so many of these problems right now. When we're trying to figure out the solutions, we also have to try to understand the psychological, behavioral, and political reasons why so many people are invested in falsity and want to be invested in falsity, and how it is that we encourage a public to actually want to know what is true.

Excerpt from Interview: Mark Coeckelbergh suggests that integrating digital literacy and critical thinking into education can foster the informed, democratic citizens needed for a healthier democracy.

Mark Coeckelbergh: This is a very good question, how we might bring it about. First, about vision: if we look at history, we see that in the Renaissance, new technologies, in particular the printing press, a new communication technology, were used to launch a cultural project, the Renaissance itself. That brought together scholars, but it has also, in the end and up until today, been democratized, and literacy has really transformed the entire society. So if we could do something similar with digital technologies, where digital technologies are really linked to a wider societal and cultural project, where people's literacy is stimulated and where people are trained in critical thinking and in developing a critical relation with the new technologies, then I think we're moving more towards democracy, because we're creating the kind of citizens that we need for democracy.

So if we can do that; I mean, there are lots of people working on digital literacy and media literacy, and there are people thinking about how to reform education. If we could combine efforts towards democracy and democratic AI with those efforts, and do something politically about education as well, I think we are on the right track, because we are creating those background conditions under which people indeed care about the truth and avoid the things that Mary Anne just mentioned; in the book, I also mention Arendt. I think it's very important to create those background conditions where truth is seen as important and where critical thinking is a normal thing to do. Then it's possible to discuss with one another. We don't have to share everything. We don't have to be the same. We can keep our differences, but we can also come together and try to reach some shared understanding. And I think that's also in line with what the Enlightenment, both in Europe and in the US, wanted, and so in that sense it is a kind of constitutional project.

Excerpt from Interview: Marc Rotenberg argues for legal reforms to ensure transparency and human oversight in algorithmic decision-making, as reflected in EU regulations like the GDPR and the AI Act.

Marc Rotenberg: Well, it's a very good question, and I think the question also suggests the answer: that we do, in fact, need changes in law to establish transparency in decision-making and to maintain human control over the outcomes. It was actually almost a decade ago, as these issues were coming forward, that I launched a campaign in support of algorithmic transparency. And we were basically arguing, in the moment, that as decisions become deeply embedded in statistical systems, it's not only the person who's impacted by the decision, but also the organization that's responsible for the decision, that may not see the basis of the outcome. This happens, for example, in the employment sector, when large companies delegate the initial screening of resumes to third-party vendors and the vendor comes back and says, of the 10,000, we recommend you talk to the 200.

The criteria, why precisely those 200 were selected, may not be known to the company that selected that vendor. Now, as I said, if you look to the EU, you see in the GDPR, already in Article 22, a requirement that for any automated decision-making that affects a fundamental right or has a legal effect, a person should have the right to a human determination. This is not about the ads you see when you browse the internet, and it's not about how your video games operate. But if you are denied a job, under the GDPR you could be entitled to a human review.

And this is actually carried forward in the EU AI Act as well, in Article 8, which speaks of access to the logic and the reasoning. I think these are positive developments, and I'd like to see them adopted in the US and elsewhere. Ultimately, you know, fairness is in the interest both of the person who is subject to a decision and of the decision maker. We should make decisions that we can stand behind, and not turn the decision-making process over to a system we don't understand.

Full Transcript

View Transcript (PDF)

This transcript may not be in its final form, accuracy may vary, and it may be updated or revised in the future.

Stay Connected and Learn More:

  • Questions or comments about the show? Email us at [email protected]
  • Continue the conversation by following us on social media @ConstitutionCtr.
  • Sign up to receive Constitution Weekly, our email roundup of constitutional news and debate.
  • Subscribe, rate, and review wherever you listen.
  • Join us for an upcoming live program or watch recordings on YouTube.
  • Support our important work.
