How are social media platforms dealing with disinformation in the midst of election 2020? Experts joined host Jeffrey Rosen to explore that question as well as the complex, persistent issues surrounding the regulation of online speech and content, and how all of that relates to the First Amendment and free speech norms. The panel featured David Hudson, Jr., First Amendment Fellow at the Freedom Forum; Professor Kate Klonick of St. John’s University School of Law, who’s studied and written about the creation of the Facebook Oversight Board; John Samples, vice president at the Cato Institute, who’s a member of the Oversight Board; and Professor Nate Persily, co-director of the Stanford Program on Democracy and the Internet. This program was presented in partnership with the Freedom Forum, and its chair, Jan Neuharth, delivers opening remarks.
For more on the 2020 election and nonpartisan educational resources, check out our recent election-related episodes of Live at the National Constitution Center and visit the National Constitution Center’s election resources page—which includes informative podcast episodes, video lessons, and more—at constitutioncenter.org/calendar/election-day-programming.
FULL PODCAST
Or, listen on Apple Podcasts or Google Podcasts.
PARTICIPANTS
David Hudson, Jr., a Visiting Associate Professor of Legal Practice, teaches Legal Information and Communication at Belmont University. He is the author, co-author, or co-editor of more than 40 books. For much of his career, he has worked on First Amendment issues. He serves as a Justice Robert H. Jackson Legal Fellow for the Foundation for Individual Rights in Education and a First Amendment Fellow for the Freedom Forum Institute. For 17 years, he was an attorney and scholar at the First Amendment Center in Nashville, Tennessee. Hudson has taught classes at Vanderbilt Law School and the Nashville School of Law.
Kate Klonick is an Assistant Professor of Law at the St. John's University School of Law and an Affiliate Fellow at the Information Society Project at Yale Law School. Her current research focuses on the development of Facebook's new Oversight Board. Her work has appeared in the Harvard Law Review, the Yale Law Journal, the Southern California Law Review, Maryland Law Review, New Yorker, New York Times, The Atlantic, Slate, Lawfare, Vox, The Guardian and numerous other publications.
Nathaniel Persily is the James B. McClatchy Professor of Law at Stanford Law School, with appointments in the departments of Political Science, Communication, and FSI. He is also a commissioner on the Kofi Annan Commission on Elections and Democracy in the Digital Age and along with Professor Charles Stewart III, he recently founded HealthyElections.Org (the Stanford-MIT Project on a Healthy Election). Persily is co-director of the Stanford Cyber Policy Center, Stanford Program on Democracy and the Internet, and Social Science One. He has served as the Senior Research Director for the Presidential Commission on Election Administration.
John Samples is a vice president at the Cato Institute, where he founded and directs Cato’s Center for Representative Government. Samples serves on Facebook's Oversight Board. He is currently living in Northern California and working on a book‐length manuscript about social media and speech regulation which extends and updates his policy analysis, “Why Government Should not Regulate Content Moderation of Social Media.” He previously wrote The Struggle to Limit Government: A Modern Political History and The Fallacy of Campaign Finance Reform. Samples also co‐edited with Michael McDonald The Marketplace of Democracy.
Jan Neuharth is the chair and CEO of the Freedom Forum. Neuharth practiced law with Paul, Hastings, Janofsky and Walker in Los Angeles, worked as a press assistant for Sen. Howard Baker in Washington, D.C., and conducted political polling for Louis Harris International in London. Neuharth is an active member of the California Bar and admitted as an Attorney and Counselor of the United States Supreme Court. Neuharth serves on the boards of several non-profit, corporate and community organizations, and is the author of the award-winning Hunt Country Suspense series.
Jeffrey Rosen is the president and CEO of the National Constitution Center, a nonpartisan nonprofit organization devoted to educating the public about the U.S. Constitution. Rosen is also professor of law at The George Washington University Law School and a contributing editor of The Atlantic.
ADDITIONAL RESOURCES
- Freedom Forum, First Amendment: Freedom to Petition
- Kate Klonick, Yale Law Journal, “The Facebook Oversight Board: Creating an Independent Institution to Adjudicate Online Free Expression”
- Kate Klonick, Harvard Law Review, “The New Governors: The People, Rules, and Processes Governing Online Speech”
- Stanford-MIT Healthy Elections Project
- John Samples, Cato at Liberty, “A First Look at Facebook’s Oversight Board”
- David Hudson, Freedom of Speech: Documents Decoded
- Jeffrey Rosen, The New York Times Magazine, “Google’s Gatekeepers”
- Nathaniel Persily (co-editor), Social Media and Democracy: The State of the Field, Prospects for Reform
- Packingham v. North Carolina (2017)
- Brandenburg v. Ohio (1969)
- Whitney v. California (1927)
- Citizens United v. Federal Election Commission (2010)
This episode was engineered by Greg Scheckler with editing by Jackie McDermott. It was produced by Jackie McDermott, Tanaya Tauber, and Lana Ulrich.
Stay Connected and Learn More
Questions or comments about the show? Email us at [email protected].
Continue today’s conversation on Facebook and Twitter using @ConstitutionCtr.
Sign up to receive Constitution Weekly, our email roundup of constitutional news and debate, at bit.ly/constitutionweekly.
Please subscribe to Live at the National Constitution Center and our companion podcast We the People on Apple Podcasts, Stitcher, or your favorite podcast app.
To watch National Constitution Center Town Halls live, check out our schedule of upcoming programs. Register through Zoom to ask your constitutional questions in the Q&A or watch live on YouTube.
TRANSCRIPT
This transcript may not be in its final form, accuracy may vary, and it may be updated or revised in the future.
Jackie McDermott: [00:00:00] Welcome to Live at the National Constitution Center, the podcast sharing live constitutional conversations, hosted by the National Constitution Center. I'm Jackie McDermott, the show's producer. Last Thursday, we hosted a program on what social media platforms are doing to tackle disinformation in the midst of the election and how that relates to the first amendment.
First, you'll hear short intro remarks from Jan Neuharth, chair and CEO of the Freedom Forum. Next, Jeffrey Rosen was joined by a panel featuring David Hudson Jr., First Amendment Fellow at the Freedom Forum; Kate Klonick, a law professor who has studied the Facebook oversight board; John Samples of the Cato Institute, who's also a member of the oversight board; and Nate Persily, co-director of the Stanford program on democracy and the internet. Here's Jeff, to get the conversation started.
Jeffrey Rosen: [00:00:51] And now it's my great pleasure to turn the mic over to my friend and colleague Jan Neuharth.
Jan Neuharth: [00:00:58] Thank you, Jeffrey. And we are so pleased to be partnering with the National Constitution Center for this important program on what social media platforms are doing to tackle disinformation and foreign interference during this election season. I want to thank Jeffrey and his team at the Constitution Center for their work in putting tonight's panel together. The Freedom Forum's mission is to foster first amendment freedoms for all. We work to raise awareness of first amendment freedoms through education, advocacy, and action, sharing the stories of Americans who have exercised their rights to ignite change. For nearly 30 years, through our educational programs and initiatives, the Freedom Forum has used the first amendment as a springboard to illuminate the challenges of democracy and the importance of making informed decisions in a diverse and demanding world. We hope tonight's program helps us highlight the importance of media literacy, the dangers of disinformation, and how to successfully authenticate and evaluate information from a variety of sources.
The election is now just five days away. And voting is the ultimate expression of petition, one of the five freedoms of the first amendment. To learn more about this first amendment freedom, and to view some of the resources we offer about engaging in elections, please visit freedomforum.org/petition. And now I'll hand it back to our moderator, Jeffrey Rosen.
Jeffrey Rosen: [00:02:26] Thank you so much, Jan, and thank you and the Freedom Forum for all you have done to increase awareness and understanding of the first amendment and the connection between our first amendment freedoms and American democracy. It is crucial work and very, very much appreciated.
Thank you so much for joining us, David Hudson, Kate Klonick, Nate Persily, and John Samples. What a dream team and what an ideal group to discuss this crucially important question of what the platforms are doing to combat election disinformation. The topic could not be more timely. Just last week, Facebook announced that it was taking more preventative measures to keep political candidates from using it to manipulate the election outcome and aftermath. The company now plans to prohibit all political and issue-based advertising after the polls close on November 3rd for an undetermined length of time, and Facebook joins other social media companies, including Twitter, which banned all political ads from the service a year ago. And last month Google said it, too, would ban all political and issue ads after election day. Nate Persily, you have been following these developments so closely. Please summarize for our great viewers what the major platforms are doing with regard to the election and what you think about it.
Nate Persily: [00:03:49] Well, thanks very much, and thank you to the National Constitution Center for hosting this. It's always-- I hope we can do this live sometime soon. So, the platforms have learned a lot of lessons since 2016, and given that there was widespread criticism then, whether with respect to foreign intervention or political advertising or any number of other problems in 2016, they've made a lot of changes, and they've been sort of trying to throw everything at the wall to see what sticks. Now, that's no doubt not going to be enough to get at, you know, pervasive disinformation or other kinds of problems that critics will allege.
But here are the things that they've been doing that are different now than, say, four years ago. So, the first is that there are more aggressive take-downs and demotion of vote-suppressive content, whether it's disinformation that deals with, say, wrong voting days, or content discouraging people from voting, those kinds of messages that might lead people not to vote or to fear voting.
Then there's a whole suite of reforms that they've done on political advertising. You mentioned some of them, but the first is that Google and Facebook have come up with transparency regimes, ad archives, so you can look up what kinds of ads have been put out there. The ad libraries are actually quite different between Facebook and Google in what kind of information they provide, but it's an effort to provide some transparency.
But in addition to the disclosure and transparency around political advertisements, there have also been some additional regulations. So, you mentioned the rules that all three platforms have come up with to prevent advertisements in the post-election period, because there's concern that that might foment sort of unrest and the like. Twitter, as you may know, banned political ads outright, or at least has tried to, so that ads for candidates, at least, are not allowed on the platform and have not been for most of this year.
In addition, Facebook took an extraordinary move to ban new ads in the last seven days before the election. So, while last week you could have come up with a new ad if you were an approved, verified advertiser, now Facebook will not allow you to come up with a new ad in these seven days before the election. In addition to all of that with advertising, other restrictions on organic content have also come into play. So, the rules about false claims of victory apply to paid communication, but they're also going to apply to organic content as well, where they will either be taking down or labeling and demoting false claims of victory on election day, or sort of precipitous claims of victory.
And a lot of this is also dovetailing with work that they've done to try to diminish the importance of some of this disinformation by building these voting information centers. And so if you look at Facebook, you'll see that there's this repository of information that they put out there, and for the last two months they've been putting reminders at the top of people's feeds on how to vote, on mail balloting, and all kinds of other things. On election night that will switch, and it will become more of a repository for authoritative election results, to try to mute the effect of any disinformation that'll be coming from unofficial or official sources.
Jeffrey Rosen: [00:07:17] Wow. Thank you for that extremely comprehensive and helpful summary. You've given us a lot to discuss. Kate, Nate just mentioned a series of steps ranging from disclosure and transparency in the form of ad archives to these new moves about political ads, and it's striking how some of them, as he said, are coming just in the past couple of weeks and, in Facebook's case, apply just to new ads. And then there are these restrictions on false claims of victory, in either paid or organic communication. You followed all the companies so closely. You wrote a pathbreaking article for the Yale Law Journal about content moderation. What strikes you about some of these late-breaking developments, and what additional context can you place on what the platforms are doing?
Kate Klonick: [00:08:00] Yeah. Well, what's interesting is how, even though the stakes are changing so much, I have heard the platforms kind of clinging to a familiar language. So I'm going to give the example of Holocaust denial, which is an example of content that Mark Zuckerberg, who is himself Jewish, has repeatedly allowed on his platform, despite all of the clamoring of the Anti-Defamation League and all of these other types of anti-hate-speech groups that want this type of thing to come down, saying that allowing it is in the service of free speech. And what you see all of a sudden, in the decision from Facebook to suddenly issue a new policy around Holocaust denial, is the language of: we decided that the safety concerns of the users had finally outweighed the concerns for voice that we thought we had to worry about before. And you can query how authentic that statement is.
You know, there is certainly something to be said for the fact that this is a moment in time in which the pressures are incredibly intense on all of these questions around speech, such as to create new balances of the equities. But there's also the possibility that these are just some low-hanging fruit that Facebook has been wanting to get rid of and kind of address for a while, and they see an opportunity to do it in this moment and garner some goodwill. And so it's unclear how all of this is going to end up flushing itself out. Take the debate over political ads that played out in the fall of 2019, and here we are a year later: after Mark Zuckerberg refused to ban political ads on the site, or fact-check political ads on the site, I should say, Jack Dorsey famously issued that statement basically saying, well, we're just going to ban all political ads on Twitter.
There's also a lot more behind that statement. It was issued right before Facebook's earnings report. Twitter has a tiny fraction, maybe a tenth, of the revenue that Facebook has from political ads. In fact, political ads on Facebook are credited with the resurgence of the ability for a lot of smaller down-ticket candidates to stage grassroots campaigns funded by small donors, and to get notoriety without having to pay into the big machine-politics kind of thing. And so there were a lot of considerations that Facebook really had to weigh going into the elections that were a lot more complicated than just, well, let's ban ads.
For Jack Dorsey, that wasn't a big deal. There simply aren't that many political ads on Twitter. And, you know, for Facebook, it was a much larger consideration that had much larger implications, and they're just trying to draw some bright lines so that they don't have to worry about certain types of content and they can say yes, no, yes, no and keep going. And that's the necessity of doing this at scale.
Jeffrey Rosen: [00:11:25] Thanks for all of that great context. You and Nate did a phenomenal We the People podcast nearly a year ago about Facebook's and Twitter's different decisions about political ads. And you've just reminded us that there's a lot going on behind Facebook's calculations, and you help us understand why it wasn't until just before the election that Facebook made the decision that it recently did. John Samples, you serve on the Facebook Oversight Board. That board is not centrally tasked with election-related decisions. It may be reviewing content take-downs down the line, but tell us what, if anything, the Facebook board is doing that's relevant to election-related decisions.
And I'll just introduce right now, Mark Winkelman's good question. He says, well, I applaud what Facebook has done. How is it able to get around the first amendment with this? Of course, Facebook is a private company. It isn't formally bound by the first amendment, but I think Mark is asking to what degree is Facebook maintaining Mark Zuckerberg's stated commitment to first amendment values in its recent decisions to take down political ads?
John Samples: [00:12:28] Right, Jeff, this all begins with the fact that the first amendment doesn't apply to Facebook as a private firm, and as a private firm, I think they have their reasons, business reasons and others, to take down a lot of content, content that would drive users away or whatever. However, at the same time, and this is where Kate's work and scholarship I think really comes in, early on she was suggesting due process, that we have to have something more like the rule of law to make this a legitimate process. And then you have a two-and-a-half-year project, that Nate was part of too, of setting up this board, and our job is basically just to look at what Facebook does when they-- I don't say censor, because only the government can censor-- but when they essentially remove content from the site or they reduce the breadth of it.
Our job is to decide whether that's according to Facebook's standards, that is, Facebook values, Facebook community standards, and increasingly parts of international law, which have some aspects that look a lot like the first amendment. So essentially we are a kind of court that will hear cases and pass judgment. And Facebook has agreed to be bound by what we do in those cases. So, if we tell them that they have to put something back up, or eventually take something down, they will do it, and they may apply it more broadly too, depending on the situation. So we're kind of-- and I should say also, it's true Facebook set up the board-- but it's a kind of really complicated legal framework involving a trust that really does make it, I think, as independent as an institution can be of its originator. So, independence of the board, I think, will be an important issue. And I think it will be not an issue, but it will be something that is actually very good for us at the board.
The other thing I would say is our charter and bylaws enable Facebook to ask us about policy questions, and then we can give advice. Now, Facebook is bound to follow the case decisions. They're not bound to follow the policy advice. They could have, for example, asked us about the political ads policy and we could have given them advice-- they didn't in this case, but down the line with elections, they may well do that.
I think our role will really be secondary. Not at the moment of the election, but to look back later, after the battle is over, as it were, and decide whether Facebook actually acted within its own rules, as they build them up over time; that'll be our role. And then we'll go forward to the next election. I would expect more constraints on that ability, particularly if Facebook makes mistakes. Which is a very powerful ability, to take down speech during the electoral period. That's a pretty incredible power.
Jeffrey Rosen: [00:15:24] Thank you very much for that insight into the Facebook board. I know our listeners and I will have more questions about it. That idea of issuing advisory opinions about policy is really fascinating, and we'll look forward to learning more soon. As you hear these moves that the platforms are taking, David Hudson, do they strike you as adequate to address the central problem identified in the 2016 election, that agents of the Russian government, according to bipartisan studies, purchased a significant amount of digital advertising during the election? Facebook's own investigation, as well as others, led it to spend more than $5 billion, which Mark Zuckerberg said was more than Facebook's initial capitalization, on election-related security, as well as to prevent this kind of disinformation. So kind of review what you've heard so far and tell us if you think it is adequate or not.
David Hudson Jr.: [00:16:21] Well, I certainly think they're all very positive steps. Having a community oversight board that looks at these troubling issues and the problems of disinformation, fake news, and other problems online, I certainly think these are positive steps. I think the key question is we'll have to sort of take a wait-and-see attitude, because we simply don't know the sheer amount of disinformation that may flood the social media platforms. I think what a lot of us in the first amendment community are hoping for is that all of these measures will be taken in accordance with traditional first amendment norms, as best as possible. So for example, you know, you wrote the great biography of Justice Louis Brandeis, right?
In one of Brandeis' most famous comments, from Whitney v. California in 1927, right: "If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence." Right. And that's the genesis of the so-called counter-speech doctrine.
There is no categorical exception for false speech. The United States Supreme Court made that clear in United States v. Alvarez in 2012. Right? So we want to make sure that these new governors-- as Professor Klonick has written so beautifully in her Harvard Law Review piece and the Yale Law Journal piece-- are acting according to first amendment norms, that they don't just take down information that they don't like; that they don't engage in blatant censorship; that they don't engage in viewpoint discrimination; that there's not any sort of tilting of the marketplace. Right. Because ultimately, what's the reason why we have the first amendment?
Why do we protect free speech so much as a society? Because it's inextricably intertwined with freedom of thought. And at the end of the day, we want people to make their own decisions as to what content they wish to view and what content they wish to accept. But, to answer your question directly, I think these are all very positive steps and I'm very encouraged by a lot of the people that they have participating in these steps.
Jeffrey Rosen: [00:18:44] Thank you for all that. Thank you for quoting those beautiful words of Justice Brandeis, "as long as there be time enough for deliberation, the best response to evil counsels is good ones." Nate, you've discussed whether Brandeis' vision still applies in an age of warp speed communication, where disinformation can travel faster than efforts to combat it.
For this round, I'd like to ask whether you think what the platforms are doing is adequate. Having studied the question deeply, and in fact written, with Alex Stamos, an important report on the 2016 disinformation, are there other things you think they should be doing, or other laws that you think should be passed, to address the problem of election disinformation?
Nate Persily: [00:19:30] Well, I think there's a lot that could be done, but let me start by taking, I think, a different perspective on this as to whether the first amendment answers this question. Which is that these private companies, even though we talk about them as being the public square, are not a public square.
Right. And it's not just that they are private companies that themselves have first amendment rights, so they can make decisions about what content is on their platform. It's because what they do is organize information, and the most important power that Facebook, Google and Twitter have is that they decide what goes at the top and what goes at the bottom, whether it's the top of a newsfeed or the top of the search results or YouTube recommendations and the like. And those decisions are very different than the ones that we normally make when you're talking about government restriction of private speech.
And so, the decisions that are made--every community standard that these companies have for speech on the platform would violate the first amendment, if it was legislated by government. That's true about obscenity. That's true about hate speech. It's true about some of the disinformation stuff.
And we don't think, for example, that Citizens United should apply to Facebook, right? That they should then have to, you know, run any advertisement from any corporation, for example. They have lots of rules on this as to what types of speech are allowed. Now, the question is, well, what from the first amendment might be useful in trying to rein in the platforms, to set appropriate ground rules?
And we might say that, look, you know, political neutrality and avoiding viewpoint discrimination within certain bounds is important. But we can't... they can't just sort of apply the first amendment because the technology, the product itself is not geared to that. All right. So that, that's just the groundwork, then on what else can be done when it comes to disinformation?
Let me punt a little bit on that in a particular way, which is that we do not have a sense of the scale of the problem yet. Okay. And, whichever problem we're talking about online. And so what I've been trying to work on for the last three, four years is really trying to open these companies up to greater transparency so that we can figure out how much disinformation is on their platform, who is receiving it, what they're doing about it, whether there is political bias in their take-downs and the like, and I think the first step to try to address this problem is to find out a lot more about what's going on under the hood of these companies.
Jeffrey Rosen: [00:21:57] That last comment addresses, to some degree, Craig Dimitri's question: what level and sophistication of attacks are Russia, China, and Iran currently using to sow disinformation and discord, and why do they believe it serves their aims? Nate, you've said we really don't fully understand the scale of the problem, and you made a very important intervention expressing doubt that traditional first amendment standards can or should apply, and instead recommending standards like political neutrality and lack of viewpoint discrimination. Kate, yesterday, to confirm the incredible timeliness of this panel, there was yet another hearing in Congress with the heads of the platforms, Mark Zuckerberg and his colleagues from Google and Twitter.
And it was supposed to be about Section 230 liability. And that sounds very technical, except everyone who's watching knows, because you're all so well-informed, that Section 230 immunizes platforms from liability for illegal content they host on their platforms. In the end, the hearing ended up not being so much about 230 liability; instead, conservatives were attacking Twitter for labeling President Trump's tweets and Democrats were saying that they should do more to combat false claims of victory. But you've studied this question that Nate just flagged so closely. You know, should the platforms be bound? Should 230 be changed? One of our questioners asks about that as well. So, should it be changed? If so, how? And what more do you think platforms could be doing to combat election disinformation?
Kate Klonick: [00:23:30] Yeah. So I want to kind of... first, Jeff, I forgot, I usually do this whenever I talk to you, but I usually credit you with turning me onto this very topic in the first place, because you wrote your pieces in 2012 that were in Wired and in the New York Times Magazine that were way ahead of their time and really kind of flagged this issue for anyone who was paying attention. And I built a lot of what I was trying to do, and the vision that I saw for this field, on that. And it really is a field now, because it's not only today that's timely; literally I do these every three days, and they're always timely because something is blowing up in the online speech field. But, so, David was mentioning... just to relate to the quick question about first amendment stuff that Nate and David were kind of tossing around.
So, I love the term free speech or first amendment norms, because I actually think that it encapsulates, from a social norms perspective, an ethos around kind of an entitlement to the right that is not purely captured in freedom of expression. That being said, I wrote my entire-- the Harvard piece that you liked, David-- I wrote it and had first amendment norms in there. And Robert Post was like, do not write this, and slashed it out of the whole thing and was like, put in free expression norms. So, I was like, okay, fine. But a couple of years later, you had Mark Zuckerberg take the stage at Georgetown University and give an address on literally New York Times v. Sullivan and Brandenburg and the entirety of first amendment doctrine, through his lens of what he thought these cases meant, for better or for worse, and his cultural understanding, or his layman's understanding, of first amendment doctrine had created its own set of social norms that he, of course, was filtering down at his company. And so it really was first amendment norms to an extent.
So I want to give you a little bit of credit there. To Nate's point though, of course there's no legal cachet there. There's nothing for the first amendment in these private companies. They're not the public square. At the very best, I think it's taken me five years of empirical work to come around to kind of a formulation of, I'm starting to think of them at best as public accommodations.
If we think of them in terms of a way that we would want to possibly construct regulation around speech or about them. But I think that the point that you kind of end up with, that Jeff kind of finished and touched on and asked me to weigh in on, is: how are these companies eventually going to kind of be constructing these internal structures and then foisting them on other people and then not getting some type of accountability in return?
And how is that all going to happen? And just as you have to interrogate what the first amendment means and what free expression really means, and what you mean when you say that, because words are not simply words, I think that you have to really ask why you want government regulation, why people keep calling for government regulation of these companies.
And I am not in any way opposed to government regulation, as a matter of identity or anything else. I just actually want to know if that's the most effective means of reaching whatever it is about these platforms, and about this moment in time, in this very uncomfortable adjustment period we're having with technology and our human rights, or whether we're trying to reach toward a very comfortable and understandable type of accountability by reaching back to government and being like, government can help us, government can do something. But if we remember why it is that we empowered these private companies with the abilities that we did, and protected them so much under the first amendment, you get to the answer of, like, maybe the government is exactly not the entity that we want to answer and solve our speech problems for us. So, what is it that we're really asking for? At the end of the day, I think what we're really asking for is accountability and some type of participatory feeling in the structure that is controlling our basic human rights of freedom of expression.
If you can build that into the system in a retroactive way, as I kind of hope, and I faintly hope and am optimistic that the oversight board, which John is on, can maybe do, that is maybe a way forward that can align with regulation or forestall it.
Jeffrey Rosen: [00:28:43] Thank you for that really rich and sophisticated answer, and, at the end, for flagging that norms, in your view, are a better word than first amendment law, and for arguing for a more contextual approach that takes into account these competing considerations. And one of our attendees says, thank you for your answers, Kate. John, there's a question about the Facebook board, which many of our listeners are interested in. But before I get to that, I should say several people have asked, what is disinformation? If you can't define that, then why are we having this panel?
I live in Montgomery County. Montgomery County sent a letter to voters telling them their polling place. Later they sent letters listing the wrong polling places. So is it that kind of factual disinformation or not? And you might want to address what you think disinformation is.
We've also talked about false claims of victory before there's a definitive tally, which all of the platforms said that they won't cover. But I'll set the question up to you by asking whether the Facebook board has the ability to make a take-down transparent, as Mike Liverwright asks, by showing the Facebook rule that was applied, when it was established, and the details of the content that was taken down or deprioritized.
And then just because I'm really interested, I'm going to ask you, as a member of the Facebook board, are you inclined based on your past writings to apply a version of American first amendment standards, which would protect hate speech and might also allow political ads to run right up to and after the election?
John Samples: [00:30:24] So, let me drop back for a moment and then get to that, and also get to the disinformation issue, and suggest one way this might go; I'm not sure how it's going to go, but one way it might go. Obviously, I've been a strong supporter of the first amendment, but keep in mind, apart from all the other issues, when you think about the Facebook community, that is, Facebook users, you're talking about 3 billion people, and in the United States there are an estimated 200 million users. So, that's a very small proportion of the overall user base of Facebook. Now, I think it would be great if they all accepted the first amendment, and I think, you know, probably the world would be a better place, but they might not do that.
So, you need to go to them with kind of something that, from their point of view, you can appeal to them with. And I think that's not contrary to what we call first amendment values. Here's how this might work. On the one hand, there's Facebook; remember, Facebook values, community standards, and international law are the three foundations, not the constitution, but those three things. Now, as it happens, if you look at the charter of the board or at Facebook's mission statement, they say that voice, speech, expression, is, quote, a "paramount value." And what paramount means is that it comes before all else; it's the most important value. So if we take that as both something that we are supposed to be enforcing and a statement of Facebook's values, that's a beginning point for thinking about a strong, and perhaps weighted, value of voice, some weight, more weight, attached to the value of voice. The second thing I would turn to is international human rights law, which is now being widely discussed. If you look at Article 19 of the International Covenant on Civil and Political Rights, what you find is a statement that is very similar to the first amendment, and the United States has ratified that covenant.
So have many other countries throughout the world. There are some reservations and so on, but generally speaking, it's accepted. So what I'm saying is, in international human rights law, there's something like the first amendment, and more importantly, or as important, there is also in Article 19 something like the language that we talk about here in the United States as strict scrutiny. That is, any time you want to limit speech under international law, you have to apply three tests to it, one of which is a typical vagueness test that we're accustomed to here in the United States in first amendment jurisprudence. So that brings us back to disinformation. You mentioned, how do we define it?
We're having discussions about how to define it. I haven't seen any cases, but to me that's a point of looking closely at regulations, looking closely at take-downs based on disinformation, because if people cannot predict beforehand, if we don't know what it is, there are certainly bad actors that know what they're doing, and this was one of the debates, but if there's a vagueness about it, there's a legality issue, and under both international law and, I would think, Facebook's rules also, there's a question whether that could be sustained. But my point being, to reach this larger audience, to get legitimacy, there are other ways to go than just talking about the first amendment, which is great in itself. But there are aspects out there that have a larger claim on people. And I believe those might be brought into Facebook on a kind of common law basis, where we go through decisions and we say, hey, this seems to be something that is applicable here. And you end up with a really syncretic kind of law for the oversight board, for Facebook, and maybe beyond that, that is really pretty responsive to the first amendment. That's what I would think is certainly possible.
Jeffrey Rosen: [00:34:24] Thank you very much for answering all of that, and so thoughtfully. David, our friend and colleague Jean Mulnisky from the Freedom Forum asks: how close are we, if at all, to declaring these dominant tech companies to be a kind of public utility? Private operations seem so crucial to society that the public must have a role in policy development and implementation accountability. With regard to these policies, Jean's question is incredibly salient the week after, I guess, the Justice Department filed its antitrust suit against Google. The Justice Department has indicated a similar suit against Facebook might possibly be forthcoming. We just podcasted on this yesterday, I guess it's publishing tonight, with Tim Wu and Adam White. But that's the opposite side of the spectrum. From a first amendment norms perspective, Jean asks, should the platforms be regulated as public utilities? What's your answer to Jean?
David Hudson Jr.: [00:35:17] Yeah. I mean, there's some surface appeal to that, right? So, you know, somebody could say, well, does it make sense under our constitutional scheme for a small rural government official to be subject to constitutional constraints and a massive billion-dollar company that has power to restrict far more speech than any small governmental agency to not be? However, I tend to agree with Nathaniel that these are private companies, that they do have a first amendment right to editorial discretion. Right, if we look at Miami Herald v. Tornillo in 1974, right, there's a similar analogy to newspapers, right? We don't want the government dictating what newspapers can print. They don't have to give equal time to different political candidates. So we want to have freedom there.
I think the problem in some instances is that there's not been a consistent application of take-down policies in the past. So let me give you just a couple of anecdotal examples from some of my friends. So I had a friend who posted a video of Malcolm X, and they got placed in Facebook jail, as it's sometimes called, because they said that the Malcolm X video was hate speech.
Well, I don't think the Malcolm X video even remotely approached hate speech. It didn't; it was very self-empowering. It had some themes of African-American nationalism, but it certainly didn't approach any definition of hate speech. I had another individual who was placed in Facebook jail because they posted a picture of Melania Trump that was deemed to be disrespectful, I suppose. So, in the past there's been very selective, uneven application of these principles, and their decisions have sometimes been confounding. But that does not lead me to think these companies should be treated as state actors, some of Justice Kennedy's language in Packingham v. North Carolina in 2017 notwithstanding-- what Justice Alito referred to as undisciplined dicta.
Right? I mean-- and I do recognize the state action doctrine itself has very racist roots; if we look at the Civil Rights Cases in 1883, I don't think anyone can plausibly deny that-- but there's a very strong reason to have the state action doctrine, and there's a significant difference between a government official and a private company.
And I essentially agree with Kate that to allow the government to step in and regulate, I think, may well pose far greater problems. And I think it's a very encouraging sign that these companies are taking these steps to try to have some sort of more consistent standards as to how they deal with different types of content.
Jeffrey Rosen: [00:38:12] Thank you for all of that. Thank you for citing the Packingham case, which we'll post in the chat box. That was a case involving a North Carolina law prohibiting registered sex offenders from accessing various websites, and the court, per Justice Kennedy, said that it was too broad; to be valid under the first amendment, a content-neutral regulation of speech has to be narrowly tailored and cannot burden substantially more speech than necessary.
Okay, Nate, we've got so much to pack into what may be our final round, including many questions about election problems, like how to ensure that there are enough drop boxes for ballots in Harris County, which had one drop box for 4 million voters, according to an anonymous attendee. Among your many important projects related to elections, you're involved with the Stanford-MIT Healthy Elections Project, which is examining issues like healthy polling places, mail voting, and tools. Can you relate what concrete problems you and the Stanford-MIT project are concerned about on election day, and whether there is anything the platforms could do to address those problems? And, you know, just to get down to brass tacks, what are you most concerned about when it comes to the platforms and the elections next week?
Nate Persily: [00:39:29] Well, we have a sort of sister project that is run by my colleague, Alex Stamos, former head of security at Facebook, called the Election Integrity Partnership, which you can look at eipartnership.net, which is doing investigations of sort of organized disinformation activities.
And so I am most worried, from the disinformation side, about the post-election period, because if it is a close election and the networks are reluctant to call the victor, that's an opportunity for foreign and domestic actors to populate the information ecosystem with a lot of misinformation.
And so that is chiefly my concern. If I could, though, I just want to make sure I put a marker down for government regulation in this area with the platforms. There are certain things that government should be doing with the platforms, and some of the questioners raised this, and that has to do with antitrust and thinking about the power of these platforms and, even if they're not public utilities, how we should think about their control of the information environment. Now, some of these interventions are not necessarily speech restrictions. So it's not necessarily like having a White House office of disinformation policy or something like that.
But, for example, forcing Facebook and Twitter and others to allow other firms to, basically, let you choose your own moderation system, to try to break up that monopoly that they have over the control of the organization of your newsfeed and the like, that's something that could happen.
Political advertising regulation: I think the platforms actually want more regulation on this, in order to bring us into the 21st century when it comes to online ads. Obviously, there are other areas dealing with privacy and transparency that also will not just shed light on what's happening in the speech moderation practices, but also potentially have an effect on those practices.
I think we do need industry organizations. We need more things like the oversight board, and similar ones in other firms, or even a macro board that could be developed. And I'll say one thing, which is that even if we don't regulate them, the Europeans will. And so there's going to be regulation of these platforms and their policies on disinformation.
One thing we've learned from GDPR, the European privacy law, is that sometimes Brussels can have effects here. And so I think we need to think about what the American regulation of these platforms should be as well.
Jeffrey Rosen: [00:41:54] Many thanks for all of that, and for calling our attention to the Election Integrity Partnership. Kate, why don't we hone in on this very concrete concern that Nate has identified: the danger that if the election is close and both sides are claiming victory, then disinformation from abroad, as well as domestically, might create chaos about what's going on. Some of the platforms are saying they're responding to this by elevating reliable news sources like the AP. I think Facebook said at the hearing that it was going to do something like that. Is that adequate? Are you concerned about this problem, and are the platforms even equipped to distinguish between true and false claims of victory, since, as one of our questioners notes, you know, until the result is final, it would be just a political claim rather than something that could be deemed true or false?
Kate Klonick: [00:42:51] So, it's funny that you pose that question, because when you asked it, the only thing that I could think of was Dewey Defeats Truman and the picture of him holding up the paper, certainly not the original, but kind of a paradigmatic fake news moment.
And I think that... I was actually just on the phone with a few people on the escalations team at Facebook yesterday, asking how they're gearing up for this type of thing, and what I was hearing back, very concretely, was that they were doing fact-checking and they were prepared.
And what they were most worried about was not the ramp-up to the election, but, in fact, the week or two afterwards. I'm sure it's the same thing that Nate is hearing from the people that he's working with as well. There's not... let me just say that there's not a fantastic answer to fake news.
And in fact, I would just point out that the recent moment with the Hunter Biden emails and the New York Post, and everything that happened around that, with Twitter and Facebook and Google all making different decisions around their fact-checking boards to censor certain types of content, was in fact, in my opinion, a very good example of the shortcomings of fact-checking boards and the epistemological questions that are fundamentally at the core of a lot of these things, questions that can't be answered by a fact-checking board.
And yet, I think that a lot of people who did policy at Facebook knew that, but the public clamored for fact-checking boards, and that was what they wanted and that was what they wanted to see. And so the major three platforms spent a lot of money creating these.
And I think that, to a limited extent, they probably do catch some stuff around the margins and do a decent job, but there are really hard questions that you're never going to be able to put before a fact-checking board and get good answers to. Like, is Seinfeld true?
I really don't... I don't know-- is Seinfeld true? What do you mean? Was it truly a show? Yes, it was. Was it a true depiction of life in New York? Maybe? But was it actually a person living in New York? No, it was a fictional show. Right.
But, like, this is a little bit... I'm trying to complicate how these issues are just not, first of all, meant to be dealt with by private corporations whose leaders just happen to have won the startup lottery and have no natural expertise or ability to deal with these types of difficult trade-offs, and how they're questions that humanity has asked for all of time. We're just seeing them in a new valence.
Jeffrey Rosen: [00:46:08] Thank you so much for flagging what you so rightly call the epistemological aspects! The nature of knowledge is indeed at stake when we try to figure out what is true in a political context. That helps us understand Andy Kennels' comment, this is an awesome list of experts, which it is indeed. So, John, you know, what about that claim that no fact-checking board or oversight board is equipped to decide questions of political truth? Jack Dorsey got great criticism yesterday in Congress for presuming to have Twitter determine which of President Trump's political claims were true or false. And honing in on this scenario of both sides claiming victory before all the votes are counted, what do you think the platforms could do or should do, if anything?
John Samples: [00:47:03] So there's an intuition here, which is that Mark Zuckerberg was once talking about fact-checking, and he said, in that context, you want to remove out-and-out hoaxes and conspiracy theories. So in his mind, he was thinking that the fact-checking was going to do that for you.
I would say that as a business, that's his choice. But, you know, there's a big difference between, for example, with Joe Biden running for president, the question, which surely has an answer, of whether the Obama stimulus of 2009 produced more benefits than its costs. It has a factual answer, but it is something that should be left to debate, to party activists and to voters, right? And then there's the question, let's say, about whether aliens have been running the government for 50 years, and in fact, if you look under the skin of people at the FBI, you find large lizards.
Kate Klonick: [00:47:58] Prove it's not true, John. Prove it's not true.
John Samples: [00:48:02] I saw this happen one time to a Stanford faculty member responding to someone who maintained that the 2001 attacks were an inside job, and so I don't want to go there, because he had a very hard time. They were living in different epistemological worlds. So I think those two can be separated and need to be, because if you go over the line, there's a crucial issue here, which is that free speech and the first amendment are founded on the idea that you can say things to people.
You can say things that are wrong, and people can sort it out. That's republican government, in a way. And if you start taking away answers or questions that should be debated or factual, and preventing people from getting them, then you really do have, you know, external effects on the democracy.
I do see and worry, Jeff, that there's a tendency now to just assume none of this works and people have to be protected from information. Right. And maybe the faith of free speech advocates is wrong empirically, but we've been going on this for a while, and I think we've got to stick with it for a while before we have a definite refutation, because that's a crucial part of our society and our culture.
Jeffrey Rosen: [00:49:16] Well, it certainly is, and, David, I think it falls to you to have the last word. There's so much to ask you, but John indicates this without arguing it strongly, so I'll just ask it: why not let a thousand flowers bloom and have an unregulated universe when it comes to elections, not presume to ban the ads before and after, allow either side to claim victory, and basically apply first amendment standards, even though the companies are not formally bound to do that? And then, after answering that question, why don't you sum up and leave our listeners with whatever thoughts you would like in this completely fascinating and rich discussion?
David Hudson Jr.: [00:49:58] Well, I think we do need to do something. I mean, we saw in 2016 that there were widespread outside influences on the electoral process. There were a lot of bots that were producing stuff that was totally false. I don't think the term disinformation was created for censorship purposes. Right.
I mean, there's just a lot of negative material out there that could cause harm. Right. And at the end of the day, we don't... I agree with the sense of what John said, which was beautiful, right? That's the whole point of why we protect commercial speech, going back to 1976 and Virginia Pharmacy, right? It's that we don't want to assume paternalistically that the government always knows what's best for people. On the other hand, look, there's a lot of harmful speech out there. There's a lot of hate speech. There's a lot of pure disinformation. And so I applaud the efforts to try to come up with ways to combat that stuff, and, you know, part of it perhaps can be done by increasing digital literacy across the board. I applaud the efforts to empower people to become more informed and be able, perhaps, to tell and sort of do their own fact-checking.
At the end of the day, right, this is the most participatory form of mass speech yet developed. I think that's what Judge Dalzell said back in the three-judge federal district court opinion in ACLU v. Reno that ultimately culminated in the Supreme Court's 1997 decision in Reno v. ACLU. It is the real epitome or the zenith of first amendment freedoms.
Right. We now have the ability as an average individual to reach a mass audience. And with that, there's the amplification of all types of speech, right? Positive speech, negative speech, and everything in between. At the end of the day, I really applaud the efforts that are being taken to try to identify speech that would qualify as disinformation.
Jeffrey Rosen: [00:52:08] Before thanking our amazing panelists for a really rich discussion, I want to end with some quotations from our audience, because it shows how incredibly engaged you are, friends, as you've listened closely to us. Lauren Roberts says, how do you draw the line between obviously objective and complex but objective? Maria de los Angeles: fake news often has fake readers, and yes, it is epistemological and in the new valence.
Wow. And so true. And Sarah Cunningham in some ways sums up the central issue that's led us to this complex place. And that has to do with the breakdown of the enlightenment consensus about what facts are and the consensus that makes democracy possible. I see our panelists nodding.
Friends, democracy may be imperiled, but public debate is not, because if we can have the kinds of rich, engaged, deep, and deliberate discussions we're having tonight, where all of you have taken out an hour, right before the election, on an evening, to engage deeply with these fundamental questions about free speech and democracy, then there is hope for the future. And for that, I want to thank Jan Neuharth for her vision in suggesting this panel; we're so grateful for our collaboration with the Freedom Forum. And let me thank you very sincerely, indeed, Nate Persily, Kate Klonick, John Samples, David Hudson, for an extraordinarily illuminating contribution to public debate.
Thank you all. Thanks to our friends. Have a good night.
Jackie McDermott: [00:53:33] This episode was engineered by the National Constitution Center's AV team, and produced by me, Jackie McDermott, along with Tanaya Tauber and Lana Ulrich. This program was presented in partnership with the Freedom Forum. For more on the election, check out the National Constitution Center's election resources and programming, including podcast episodes, video lessons, and more, at constitutioncenter.org/calendar/election-day-programming. We'll include that link in our show notes. As always, please rate, review, and subscribe to Live at the National Constitution Center, and join us back here next week.
On behalf of the National Constitution Center, I'm Jackie McDermott.