• We The People Podcast

Dueling Platform Policies and Free Speech Online

November 21, 2019

Twitter recently announced that it will stop paid political advertising, with Twitter CEO Jack Dorsey asserting that interest in political messaging should be earned, not bought. Meanwhile, Facebook’s CEO Mark Zuckerberg announced that Facebook would not stop hosting political ads, saying that the platform should not be responsible for policing speech online. Will Twitter’s efforts to regulate political ads work? Might Facebook’s more “hands-off” approach lead to unintended consequences for our democracy? Which approach to regulating speech might foster free expression the most? And how do policies of private institutions shape our free speech landscape, given that the First Amendment doesn’t bind Twitter or Facebook? This year marks the 100th anniversary of the Supreme Court decision Abrams v. United States, so we also consider: Are the landmark First Amendment cases, many of which were decided decades before social media existed, still relevant in a world of ever-changing digital platforms, bots, and disinformation campaigns? Digital speech experts Ellen Goodman of Rutgers University Law School and Eugene Volokh of UCLA Law join host Jeffrey Rosen.

Some terms you should know for this week:

  • Microtargeting: a marketing strategy that uses people’s data — about what they like, their demographics, and more — to segment them into small groups for content targeting on online platforms.
  • Interoperability: the ability of computer systems or software to exchange and make use of information. In this context, that means that if platforms like Facebook were required to share data with other developers, those developers could create new platforms and there would be more competition in the market.
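To make the microtargeting definition concrete, here is a minimal, purely illustrative Python sketch of attribute-based audience segmentation. The user records, field names, and grouping keys are all hypothetical and not drawn from any real platform's API:

```python
# Toy illustration of "microtargeting": segmenting users into small
# groups based on combinations of their attributes. All data below
# is made up for the example.
from collections import defaultdict

users = [
    {"id": 1, "age": 25, "region": "PA", "interests": {"swimming", "politics"}},
    {"id": 2, "age": 25, "region": "PA", "interests": {"running"}},
    {"id": 3, "age": 40, "region": "OH", "interests": {"politics"}},
]

def segment(users, keys):
    """Group user ids by the combination of the given attribute keys."""
    groups = defaultdict(list)
    for u in users:
        groups[tuple(u[k] for k in keys)].append(u["id"])
    return dict(groups)

# Segmenting by age and region yields small, narrowly defined audiences,
# each of which could be shown a different ad.
print(segment(users, ["age", "region"]))
# → {(25, 'PA'): [1, 2], (40, 'OH'): [3]}
```

Real ad platforms segment on far richer data (behavioral signals, lookalike models), but the basic mechanic — slicing an audience into ever-smaller groups keyed on personal attributes — is the same.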


PARTICIPANTS

Ellen P. Goodman is Professor of Law at Rutgers Law School, and Co-Director and co-founder of the Rutgers Institute for Information Policy & Law (RIIPL). She blogs for the institute’s website and at medium.com. Professor Goodman is also a Senior Fellow at the Digital Innovation & Democracy Institute at the German Marshall Fund, and previously served as Distinguished Visiting Scholar at the FCC.

Eugene Volokh is Gary T. Schwartz Distinguished Professor of Law at UCLA Law School. He is the author of The First Amendment and Related Statutes among other works, and the founder and coauthor of The Volokh Conspiracy, a leading legal blog. Professor Volokh also just launched the project Free Speech Rules – a website featuring videos that explain the law of free speech – which you can check out at FreeSpeechRules.org.


This episode was engineered by Kevin Kilbourne and produced by Jackie McDermott. Research was provided by Lana Ulrich, Sarah Byrne, and Frank Cone.

Stay Connected and Learn More
Questions or comments about the show? Email us at [email protected]

Continue today’s conversation on Facebook and Twitter using @ConstitutionCtr.

Sign up to receive Constitution Weekly, our email roundup of constitutional news and debate, at bit.ly/constitutionweekly.

Please subscribe to We the People and our companion podcast, Live at America’s Town Hall, on Apple Podcasts, Stitcher, or your favorite podcast app.

TRANSCRIPT

This transcript may not be in its final form, accuracy may vary, and it may be updated or revised in the future.

Jeffrey Rosen: [00:00:00] I'm Jeffrey Rosen, president and CEO of the National Constitution Center, and welcome to We the People, a weekly show of constitutional debate. The National Constitution Center is a nonpartisan nonprofit chartered by Congress to increase awareness and understanding of the Constitution among the American people.

Twitter announced recently that it would stop all political advertising. Twitter's new policy declares, “Twitter globally prohibits the promotion of political content. We've made this decision based on our belief that political message reach should be earned and not bought.” Around the same time, Facebook announced that it will continue to run political ads. And a few weeks ago, Facebook CEO, Mark Zuckerberg gave a speech about Facebook's free expression policy.

To compare, uh, the political ads policy of Facebook and Twitter and to discuss free speech online more generally, I'm joined by two of America's leading experts on digital speech. Ellen Goodman is professor of law at Rutgers Law School and co-director and co-founder of The Rutgers Institute for Information Policy & Law. She blogs for the institute's website and at medium.com. Professor Goodman is also a senior fellow at Digital Innovation and Democracy Institute of the German Marshall Fund and she previously served as distinguished visiting scholar at the Federal Communications Commission. Ellen, it is wonderful to have you on the show.

Ellen Goodman: [00:01:28] Thanks so much, Jeff. Really happy to be here.

Rosen: [00:01:30] And Eugene Volokh is Gary T. Schwartz Distinguished Professor of Law at UCLA Law School. He is author of The First Amendment and Related Statutes among many other works, and founder and co-author of The Volokh Conspiracy, a leading legal blog. Professor Volokh also just launched the project, Free Speech Rules, a website featuring videos that explain the laws of free speech, and I encourage We the People listeners to check it out at freespeechrules.org. Eugene, it's wonderful to have you back on the show.

Eugene Volokh: [00:02:00] Always a great pleasure.

Rosen: [00:02:01] All right. Let's begin with the facts. I just read from the preamble to Twitter's new political speech policy and in describing who is subject to the policy, Twitter goes on to say, “We define political content as content that references a candidate, political party, elected or appointed government official, election, referendum, ballot measure, legislation, regulation, directive, or judicial outcome.” After that, there's some exceptions. Ellen, can you begin by describing a little more detail what Twitter's political ads policy is, why Twitter adopted it, and why Facebook has made a different choice?

Goodman: [00:02:39] So, um, I think, uh, Mark Zuckerberg was first out of the gate saying that, um, that Facebook would not apply its usual policies on disinformation to political ads. And that, um, the reason, uh, he said, was because they believed in free speech. And then Jack Dorsey from Twitter, um, in a kind of a trolling comment, [laughs] responded to that by saying that was irresponsible, um, because of course, platforms make decisions all the time on what content to host, but specifically in this, um, context, what content to promote and what content to monetize. And so that Twitter was gonna come up with a policy about political ads.

And I think the first iteration or the first announcement of Twitter's policy, um, was that it would not take either political advertising or issue advertising, and, and we can talk about those terms because they have sort of a pedigree in communications law. But then I think, after, you know, fielding some criticism for that, in the final policy that Twitter announced, um, they have distinguished between those two and they will take issue ads. But interestingly, um, they will not micro-target. And that's another thing we can talk about, not micro-target, um, not direct those ads, um, to very small niche audiences.

Rosen: [00:04:02] Eugene, what was your first reaction to the Twitter policy? Obviously, we can't speak as a matter of formal first amendment law because the first amendment doesn't bind Twitter. But broadly, do you agree with Twitter's decision to ban political ads and micro-targeting while allowing advocacy ads? And do you agree with its insistence that ads should reach audiences based on interest rather than money?

Volokh: [00:04:26] You know, it's an interesting question. I don't have an answer for it in part because I think it's an experiment. Uh, in a year or two, we'll see. Do, uh, does it look like the consequence of that kind of policy is to make, uh, political advertising even more expensive and require even more, uh, um, fundraising, for example, for political candidates? I mean, the more expensive, uh, you make political campaigns, the more- the more concerns people have about campaign finance for example.

Uh, on the other hand, is it something which, uh, helps Twitter avoid a lot of controversies and maybe even makes the environment more pleasant for Twitter users? The, one of the advantages of having this be done by private entities, even very, very large and influential private entities, uh, is that we, we get to experiment. We get to see, uh, what things are like, uh, uh, uh, under the new regime. Um, and uh, uh, it, it may be that, uh, ultimately, even just from a bottom-line perspective, Twitter will conclude, “Oh, the, you know, this is something that is losing us valuable advertising.”

Although apparently, uh, such advertising is a very modest source of income for, uh, for Twitter, the political advertising. Conversely, maybe Facebook will say, “Allowing this advertising causes so much controversy and so many complaints that we have to deal with that we're better off following the Twitter model.” Perfectly- any of them is perfectly plausible. Any of these possible outcomes is perfectly plausible and this is an experiment that will allow us to see what happens.

Rosen: [00:06:02] Ellen, as Eugene says, Twitter gets a little money from political advertising, less than $3 million out of a total revenue of $3 billion. By contrast, Facebook gets a far larger portion of its revenue from political ads. You've written critically about aspects of Twitter's policies saying it doesn't go far enough in some aspects and it goes too far in others. Uh, could you share some more of your thoughts about whether or not Twitter's new decision not to run political ads is a good thing?

Goodman: [00:06:28] I think, um, Twitter should be commended for, uh, you know, as Eugene says, for t-, for tr- taking on this experiment. One thing we know for sure is that there are gonna be unintended effects and it's gonna take a lot of heat for those effects. Um, I mean some of them, one of them might be that, uh, uh, s- for example, some people have pointed out Exxon will be able to advertise for fossil fuels because that is not an issue ad and, um, climate change activists will not be able to advertise for a carbon tax because that will be a specific legislative proposal. Um, and so, you know, undoubtedly, Twitter's gonna take a lot of heat for this.

I can say that, um, two things that I think would improve this policy, and this is really in the spirit of what Eugene said was, you know, assessing this as an experiment, um, in order to do that, we will need a lot of transparency. It's not clear how much transparency there's going to be about this. So, for example, what kinds of ads is Twitter not taking? How does it respond when it gets one of the complaints that it will surely get that, um, er, you know, something it deemed political is not political, or that something it let run, not thinking it was political, actually is political?

How much transparency will we get into these decisions and what, um, how much access will we get to the data? One other thing that I, I really like about what Twitter did and, um, again, we need a lot more data about this and I think this particular aspect of the policy could be expanded, is what it did on micro-targeting. So I and other people have really been harping on this micro-targeting as really being the evil here in, in some of these, um, surveillance ad networks. Um, and so I would like to see more action taken on micro-targeting.

Rosen: [00:08:17] Eugene, what are your thoughts about micro-targeting and about Twitter's idea that the reach of ads should depend on interest in them, not how much people pay for them? And what do you think about Twitter's effort to distinguish political ads, that is, ads that include appeals for votes, solicitations for financial support, and advocacy for or against political issues, from paid advocacy ads, which it will still permit?

Volokh: [00:08:41] Uh, well. So I think that, uh, uh, it's important to re- to, um, uh, stress that these are private institutions here. Uh, and, uh, it's true they are private institutions with a lot of power and a lot of influence over public debate. And, um, there are plausible arguments for regulating them or trying to pressure them socially to allow more speech. But I do think that that's one of the things that, uh, uh, makes it legitimate to, uh, to have them experiment with these kinds of things.

Uh, if the government were to tro- totally ban micro-targeting, uh, said, you know, “If you want, uh, if you want to advertise your political message, whether it's a candidate message or an issue message, you've got to do it to millions of people or not at all.” I think that would be clearly unconstitutional. Among other things, aren't personal conversations a form of micro-targeting? When, when parties, for example, encourage their members to talk to each other or inc- or bring in particular people who will go to a small group gathering and talk with an eye towards what that group thinks, uh, what that group is interested in.

Say, “Go to a homeowners association and talk about one set of things and talk to the ... And then go to, uh, a church group and talk about another set of things.” That's clearly, fully protected speech. At the same time, if Twitter thinks that that's not a good use of its platform, if it doesn't want to allow its platform to be used for that because it thinks that that's ultimately worse for our democracy than better, I think at the very least, they ought to be given a chance to try this out and see what happens. And then again, in a year or two, we'll see.

If uh, we have a lot of stories about people who can plausibly say, “Look, this loses us a good opportunity to convey the m- messages we need to convey to the particular groups we need to reach in a way that's likely to reach them effectively,” uh, maybe that will lead, uh, uh, lead Twitter to cha-, to change its mind, especially if there are specific, uh, specific examples. But I do think this is, this too, the micro-targeting is also something that I don't think can be categorically condemned or categorically praised. It's the sort of thing where it makes sense that there would be some experimentation with.

Uh, likewise more generally about political advertising, uh, or the, uh, the first amendment fully protects political advertising. Uh, it, it recognizes it as an important, uh, uh, use of free speech or freedom of the press, actually, if you think about it. Freedom of the press historically has been what, uh, how the framers and later generations referred to kind of mass communications. So that is, that is clearly protected, uh, against the government.

But at the same time, it may make sense for Twitter to say, uh, “Look, uh, we want for political purposes, our platform to be used for people expressing their views because they believe in them and not because they've been paid to or, uh, uh, getting views because somebody has, has paid for them.” Obviously, when it comes to commercial advertising, that's exactly what Twitter does, is it allows commercial enterprises to promote their products because, uh, because the enterprise is paying Twitter for that.

But I mean, again, that's something that if you want to have Twitter, you have to have some funding source for it. If it's gonna be free to everybody, then it needs to be, uh, supported by advertising. So, again, these are arguments that I would not accept from the government imposing regulations, but it makes sense to see how they work out. And we've seen Twitter policy and Facebook policy change over the years in the past in response to some of these experiments.

And I guess we'll see in the next several years how, uh, how much of this survives. Uh, but I don't think we should view any of this as sort of set in stone, “This is the way things are gonna be going forward indefinitely.” And I think a lot depends on what the outcomes look like they are after, after this is tried.

Rosen: [00:12:32] Ellen, in a piece you wrote with Karen Kornbluh in the L.A. Times called “The more outrageous the lie, the better it is for Facebook's bottom line,” you said that Twitter's definition of political ads might create problems.

You wrote, “In the week since Twitter banned political ads, we have already seen that slack definition and uneven enforcement may be as bad as Facebook's do-nothing approach. If Twitter continues to define ‘political’ to include issue ads on controversial topics versus ‘electioneering’ ads that advocate for the election or defeat of a candidate in a political campaign, the resulting morass will be predictable. Critics predict ads promoting fossil fuels will not be considered ‘issue ads.’ But ads promoting responsible climate policy will be.”

That's the end of your provocative statement. Uh, tell us more about that criticism and how you think the definition of political ads could be improved.

Goodman: [00:13:24] Well, so I think, um, since we wrote that, and I'm not taking credit for this, but I think Twitter did [laughs] exactly what we suggested it should do, which is it tightened up the definition. So, so what it's done, um, since, uh, that was in response to its, its sort of f- first indication before it actually released, um, its final rules, so to speak. Uh, so it did tighten up the definition of political ads to try to address that, that distinction between political ads and issue ads.

And I think what you see Twitter doing is trying to align itself, um, with a co- coherent category, not unlike the kind of definition that we see in IRS rules or in lobbying rules about, um, what is a p- political or adv- or political advocacy as opposed to just discussion of issues. So, uh, you know, I give it credit for doing that. I don't think it will, um, extinguish all of the liminal, um, issues and questions, and so we will see them.

But if I can just, um, say something about the first amendment, I mean, I think E- Eugene is, uh, absolutely right about, um, the so- sort of doctrine here and, um, uh, we're only having this conversation and we can only talk about sort of banning or tinkering with micro-targeting in the way I'd like to see because this is a platform. Um, but I do think there are bigger sort of questions about, you know, where first amendment doctrine, um, should be headed and sort of what are some of the philosophical, um, questions that we're gonna need to contend with?

And, and I'll just throw one out there is that, you know, do we need to rethink some first amendment doctrine as, uh, we move into, um, a, a time when really government censorship is, is less the bogeyman here and the, the ill to be combated? Um, than kind of, y- you know, di- di-, uh, information flooding, um, disorientation, distraction. Um, do- does that change the first amendment calculus in some ways?

Rosen: [00:15:24] So let's talk squarely about the first amendment. Eugene, if Twitter were the government, could it ban political ads and define them the way that it has? Tell us about the cases that would forbid it from doing so and also the principles that underlie those cases. And then tell us whether you think the fact that Twitter as a private company means that Twitter shouldn't have to abide by those principles.

Volokh: [00:15:44] So curiously, if you look at, uh, uh, a first amendment doctrine with regard to political advertising, a lot depends on whether it's the government acting as sovereign, setting up rules for what everybody, uh, in the country or in a state has to do or the government acting as property owner, controlling what happens in its own property. If the government wanted to ban political advertising, just make it a crime to put out political ads on a website or on a billboard or whatever else, clearly unconstitutional, uh, because the s-, uh, because political advertising is speech and speech paid for, uh, with money is fully protected.

The Supreme Court made that clear in Buckley v. Valeo, the campaign finance case, but that was well established by then. New York Times v. Sullivan, the famous libel case, involved a political ad. It involved not an ad about a candidate, but an ad about an issue. Uh, it wasn't the New York Times that ran the story, uh, uh, uh, about, uh, uh, um, alleged misconduct that may or may not have been done by Mr. Sullivan, uh, by Commissioner Sullivan. Um, it was advertisers, people who bought an ad in the New York Times for that very purpose and that's long been understood as constitutionally, uh, protected.

I've seen some cases from the early 1800s involving advertising in, in newspapers that, but that the courts recognized were potentially protected by liberty of the press, although subject to the constraints imposed by libel law. Um, on the other hand, if the government say runs a tr- transportation system. There was a case, uh, involving, uh, a bus system in Shaker Heights, uh, Ohio. Uh, the case was Lehman v. City of Shaker Heights in 1974, um, uh, the government could say, “We'll only run commercial ads and not political ads.”

There, the concern was just they were afraid that political ads would create too much controversy. Maybe it would alienate some riders who ... Uh, uh, just in general, it would cause too much hassle for, uh, for the, the municipal bus system, and the Court, to be sure by a five to four vote, but still the Supreme Court said that is constitutionally permissible because that's the government controlling its own property. So it turns out that the rules for the government, uh, are kind of complicated. They depend on the role the government is taking.

But when we're talking about government constraining things in its own property, um, it has a good deal of authority. Now, one difference between government property and private property, and by the way, when I'm talking about government property, I'm talking, like, about things like the bus systems. Uh, when it comes to streets and parks and other kinds of government property like that, there the government can't constrain speech, but on things like bus systems, it can.

So, when we're talking about private property, private property owners can discriminate even based on viewpoint. I would be, um, I would be more critical if Twitter and Facebook tried to suppress particular kinds of viewpoints, and to the extent that they do try to suppress them, I am kind of critical of that. Uh, but the government even in its own property, generally speaking, can't discriminate based on viewpoint when it comes to regulating priva- private party speech.

Rosen: [00:18:55] Thanks for that helpful summary of the case law and for teaching us that the government as speaker has different obligations based on where it's speaking and the nature of its property. Ellen, tell us more about your very interesting piece in The Guardian in October called How Facebook shot themselves in the foot in their Elizabeth Warren spat. You described Mark Zuckerberg's argument with Senator Elizabeth Warren over the company's choice to run Donald Trump's advertising campaign containing lies about Joe Biden.

Senator Warren headlined her own Facebook ad with a claim that Zuckerberg had endorsed President Trump. Facebook then tweeted a response to Senator Warren by name and it compared itself to a local broadcaster who's required by law to carry political ads. And as you say, Facebook in the process made your argument for you, in the sense that you floated the idea that digital platforms are gatekeepers, which means that their self-regulation is inadequate. Tell us more about that Facebook spat and why you think it proved your argument that more regulation is necessary.

Goodman: [00:19:54] Um, right. So, Mark Zuckerberg does not want to be regulated like a broadcaster. There's a question about whether Facebook could be regulated as a broadcaster, and that is a first amendment question and that's not at all clear. Um, but the reason why we found what he said to be ironic is because his point was that, “We are going to behave just like broadcasters who are required to take, to provide reasonable access to le- to legally qualified federal candidates.”

And when they do provide that access for those candidates' ads, they are not allowed, uh, to censor those ads and they are given immunity for running them. So that's the law that applies to broadcasters. And our point was really, um, I mean there, there is a sort of practical point that there's a gap between how broadcasters actually behave and what that, that provision of the law says, which is part of the Communications Act, and it's, it's an important practical difference. And then I'll get to the sort of the overarching, um, regulatory point.

The practical difference, and, and this really, it's important because it goes to the culture of these different platforms. It goes to their scale. It goes to their histories and it goes to sort of the connective tissue between how they behave with respect to politi- to political advertising decisions and then how they, um, what their other incentives are as regulated entities. So broadcasters, in fact, do negotiate with candidates about what their ads are gonna say.

And when they are presented with an ad that broadcasters think flat-out lies, they will go to those ad agencies and they'll say, “Come on, we're not, we don't wanna run this.” And they will sort of, um, informally negotiate a resolution to that. You won't see that in the, [laughs] in the text of the act, but that's actually what happens. And that has to do with cultural reasons, with other, you know, reasons about their relationship with the FCC, etc. So that's the practical point.

The larger regulatory point is that, um, uh, and this is something Lee Bollinger has written a lot about, is that you have to understand, um, the whole fabric and, and sort of intricacies of broadcast regulation as living in a family of a sort of first amendment settlement where we have different, um, platforms that have different kind of first amendment standards, um, broadcast being the most heavily regulated, print, um, completely unregulated. And that's a good thing for the ecosystem for there to be these, the sort of diversity of, um, uh, first amendment standards and responsibilities.

Um, and what Mark, Mark Zucker- Facebook lives outside of that ecosystem in a completely, um, unregula- ... You might say it's like print in that it's completely unregulated, but it behaves ... Its aff-, its technical affordances, um, give it a very different effect in the world than print has, and that comes back to micro-targeting. So because we, we might say, well, print, and you can take the, um, the New York Times versus Sullivan ad as an example. Um, whatever that says is going to be visible to everybody.

And so in a sense, um, there's a way in which even if it can't be, it's, it's not subject to libel and not subject to other kinds of, um, legal discipline, it's subject to the public eye and to sort of public discourse to the extent that it lies, um, or is otherwise injurious. Whereas micro-targeted ads are hidden and opaque because they go to very small sl- slivers of the audience and, and sort of the, the public eye is not on them. And so in that way, Facebook operates in a unique, in a unique way or the, the platforms all operate that way and in a very new way.

And so it's in that respect that we sort of said, you know, “Come on, you're not like a broadcaster, not in any way and you don't wanna be regulated like one. So it's a bad analogy for you.”

Rosen: [00:23:50] Eugene, what's your response to Ellen's provocative suggestion that the platforms might be regulated, including the regulation of micro-targeting to rule out untruths? In one of the videos for your Free Speech Rules project, you asked the question, “Can the law punish deliberate lies about public matters?” You say it depends and then you give six rules of fake news and I'll share them 'cause they're really helpful.

First, false statements that damage reputations can generally be punished. Second, deliberate lies aimed at getting money can be punished as fraud. Third, deliberate lies, as well as honest mistakes in commercial advertising, can be punished. Fourth, lies about the government can't be punished. Fifth, lies about big-picture topics generally can't be punished either. And sixth, lies about specific topics are more complicated. So I'd like to ask you about that sixth, the more complicated category.

Ellen suggests that micro-targeting can make lies about specific topics harder to detect. What do you think about the broad proposition of regulating the platforms in order to promote the truth and how in practice could the platforms distinguish between truth and falsehood?

Volokh: [00:24:57] Oh, right. So, uh, there are two kinds of regulations that I th- are, um, are implied by this question. It's always hard to talk about regulations in the abstract. We need to have a sense of what exactly is being proposed, but as I understand it, there are two possibilities. One is, let's ban micro-targeting, regardless of its content. Even if the statement is completely true, let's ban micro-targeting because in general, it's a, it's a way that might spread lies or even if it isn't outright lies, may spread, uh, information that's misleading. We'd be better off if there was no micro-targeting.

Um, that's a classic example of a restriction that is facially viewpoint neutral, doesn't require sort of discretionary judgments about what's true and what's false. Uh, but it's probably unconstitutional because it's a prophylactic rule. Uh, the argument is, in order to prevent the relatively small amount, uh, of material out there that is an outright lie and perhaps the larger amount that's misleading, we're gonna ban this entire kind of medium of communication. That I think would be unconstitutional if the government were to do it. Although again, it, it may make sense for private, private entities to try to experiment with that.

Um, a second alternative might be, “Well, no, we're not really after micro-targeting generally, but we will require platforms or at least we'll encourage platforms to um, uh, uh, prohibit false statements in micro-targeting.” And there, I think if it were a requirement, I think it would raise some pretty serious problem, uh, serious- ... Yeah, it, it wou- ... If it were a requirement, I think would likely be unconstitutional because it would put these platforms in a position where often on very short notice without a public trial, without, uh, the kind of thing that usually happens when you've got, uh, a libel lawsuit being brought, they have to decide what's true.

And what's false and the problem is we shouldn't always trust them or the government to decide what's true and what's false. Uh, we have worked out systems for deciding that in some situations such again, uh, such as a, uh, uh, again, um, uh, libel lawsuits, but those take place in court. Those take place with the rules of evidence. Those take place over time. Those take place in public.

I don't think we should want to have a system, uh, where platforms routinely decide, “Oh, we're gonna ban this ad because it's false.” Among other things, because platforms are run by people and people have prejudices, people have political biases. Uh, and there will be constant questions and I think quite plausible questions about whether, uh, the platform's decision to ban this ad or, uh, or that ad may actually stem from disagreement with the ideology and not a fair-minded application of rules about literal truth or falsehood.

Rosen: [00:27:50] Ellen, what's your response to Eugene's argument that it's hard to distinguish between truth and falsehood and that it would be hard for the platforms to do that because their decisions might be infected by bias? And for that reason, a requirement that they distinguish between truth and falsehood might be unconstitutional.

Goodman: [00:28:06] Completely agree. I completely agree with that, and I also don't think banning micro-targeting is either unconstitutional or desirable. So let me give you three kinds of regulatory interventions that I think we could think about. The first one is transparency. Transparency obligations are generally constitutional. There's a kind of transparency requirement I'd like to see, and I'd love to see it start as voluntary.

And in fact, the platforms are trying to be more transparent, but they haven't gone far enough. So with respect to micro-targeting, it would be good to know how these ads are being micro-targeted. On Facebook, there are general disclosures about the grossest categories of demographics and geography, about where ads were sent, but there's very little information on things like lookalike audiences and custom audiences. And this is where advertisers are really able to zero in on, you know, women who are 25, buy a certain kind of shoe, and swim three times a week. So, more transparency.

Second of all, and Eugene and I may not agree on the constitutionality of regulating data collection under the First Amendment, but all of this micro-targeting is only possible because of the harvesting of personal data and then the exploitation of that data in order to target ads. So I think that would be an indirect and more targeted way of dealing with some of the surveillance excesses of these platforms. And it would affect micro-targeting, although that might not be its primary purpose; it would be framed as privacy regulation, I think.

And then the third basket is structural, telecom-style regulations, which generally do not implicate the First Amendment: things like interoperability and portability. They descend more from concerns about competition than concerns about speech. But what they would allow you to do, first of all, is make it easier for there to be competing platforms. And I should say that having competition among platforms doesn't necessarily get you less disinformation [laughs].

It, you know, may not solve these problems at all, so this is more in the spirit of experimentation. But if we had different kinds of platforms, and the only way to have that is to be able to take your data with you to a new platform, or to use modules on top of existing dominant platforms, we could see how different business models fare: maybe ones that weren't so surveillance-based, that didn't rely so much on micro-targeting, that weren't driven by engagement, that didn't promote some of the speech pathologies that we have now. We could see if those could take root.

And so those would be three kinds of regulation that I think would not trigger as many First Amendment problems as banning micro-targeting, or certainly policing for the truth. Although one little addendum I do want to make to that: I don't think the government can or should require it, but of course, platforms are constantly making truth decisions, because they all have policies that basically forbid lying. It's only with respect to political ads that Facebook was saying it would not do that.

Rosen: [00:31:48] Eugene, Ellen just identified three regulations that she says might raise fewer First Amendment concerns about micro-targeting: first, transparency; second, data collection; and third, the interoperability of the platforms. What's your response?

Volokh: [00:32:00] Yeah. I think these certainly would raise fewer First Amendment problems. I agree that interoperability requirements wouldn't pose a First Amendment problem. I think transparency would pose minimal First Amendment problems, though I can imagine an argument that there are certain kinds of things people are entitled to keep confidential. I just don't think that's that likely when they are trying to speak to a public, even if not the whole public. I don't think we'd want to have transparency in individual emails that people send.

That is to say, we don't wanna have mandated disclosure: “Here are all the emails that the campaign has ever sent to anybody.” But I do think that when they're talking to larger groups, then requirements that Facebook, let's say, report these things, or maintain a database of them so that others can monitor it, would be permissible. I do think the data-gathering requirements might pose some First Amendment problems, but at the very least, fewer problems.

So I think these are all plausible alternatives. I think they're better than either banning micro-targeting or requiring or pressuring Facebook into making truth-falsehood decisions in these kinds of things. It's true that they may make those decisions about some things in some situations. But when it comes to having those decisions made about political advocacy, that's the kind of situation where the stakes are higher, where Facebook getting it wrong could be especially troublesome.

But also, on top of that, the risk of error is particularly infected by the human tendency to bias, to see the best in one's friends and the worst in one's enemies. I will say that if you step back from the First Amendment issues and look at the politics of this, and the likely lobbying that the platforms are going to engage in, my sense is that while the platforms would not be wild about various kinds of speech regulations, they might make their peace with them, because those won't sharply interfere with their bottom line.

But on the other hand, an interoperability requirement may face a huge amount of pressure, because that will probably interfere with their bottom line. It'll make it a lot easier for people to create competitors that will siphon away some customers or some users. Now, it may still be really good for society. It may be good for the country to have these kinds of interoperability requirements. It's a plausible argument either way, but my guess is that's something the platforms will fight tooth and nail.

Rosen: [00:34:45] Ellen, how does Facebook treat deliberate lies in its advertising? So, for example, if Senator Warren had tweeted, not that Zuckerberg effectively endorsed President Trump by profiting from his lies, but instead posted ads saying, “Zuckerberg has endorsed Trump.” Would Facebook run that? And more broadly, how does Facebook enforce whatever policies it has for distinguishing true from false political ads?

Goodman: [00:35:07] So I think, I mean, this was the question Ocasio-Cortez asked Zuckerberg right at the hearing. She specifically asked, you know, “Could a campaign ad lie?” And I think, after hemming and hawing, he said, “I think it could.” I'm not sure. I think if Elizabeth Warren made any kind of a statement, Facebook would consider that a political ad, because she's a candidate.

And so it would not do anything; it would allow it. And by the way, none of these platforms are going to censor statements of political candidates. It's only about how they're going to treat paid promotions and amplification. So that's the decision that Zuckerberg would make on that. I'm sorry if I lost track of what [laughs] else you were asking.

Rosen: [00:36:13] No, that's a helpful response. And that's interesting that Representative Ocasio-Cortez asked that question. Eugene, do you have any other sense of how, if at all, Facebook would regulate a deliberate lie in its advertising? And how does that track onto that really helpful list you gave us earlier: that generally, deliberate lies in commercial advertising can be punished, while lies about the government can't be punished, and lies about the big picture can't be punished either. Does Facebook apply any version of that in its ad policy or not?

Volokh: [00:36:41] You know, I don't know the ad policies well enough to speak to that. But let me just step back a bit and talk about what the government may do. The government does indeed ban false and even misleading statements in commercial advertising, which is to say advertising for ordinary products and services.

And one reason, in fact, that courts have been more open to that is the sense that there's not as much at stake: if the government makes decisions about whether some ad about food or about gasoline or about whatever else is accurate or inaccurate, there's not that much danger that government officials will end up essentially suppressing one side of a really important political debate. The sense is that debates about commercial products are less significant, and therefore we're less worried about government power there.

Now, when it comes to false statements about people, about particular individuals, libel law does regulate them. And even when it comes to false statements about political officials, if something is a deliberate lie, it could lead to libel liability. But again, it would do that after a public trial, which, while extremely expensive and burdensome, at least makes it more likely that an impartial decision is being made.

So to the extent that Facebook is trying to avoid making these kinds of decisions when it comes to the sort of political ads that are most likely to be controversial, most likely to have any judgment about them infected by political bias on the part of the decision-maker, I think that's a sensible position for them to take. I don't think it's the only plausible position, but I think it's a sensible one.

And one way of thinking about it is that there's a spectrum of falsehoods, right? There are the kinds of things where, look, if we sit down and look at it, there's really no doubt: this particular statement is indubitably false, there's solid evidence, and it's deliberately false. And one question might be, “Well, what's the harm in either the government or Facebook saying, ‘We're not gonna allow this to be passed along’?”

Well, the difficulty is there are a lot of other statements where some people say, “Obviously a lie.” Other people say, “Well, no, it's actually true,” or, “It's actually a matter of opinion,” or, “Look, this is obviously a joke. You're calling it a lie just because you're deliberately failing to see the joke, and the public at large will see the joke.” So the real question is how much you trust either the government or a Facebook to deal with things along this whole spectrum.

And I think there's a lot to be said for leaving that to public debate and public criticism. And I do agree that requiring that these ads be archived someplace accessible can facilitate that public debate and public criticism. So there is a lot to be said for those kinds of disclosure requirements.

Rosen: [00:39:42] Ellen, I'm looking now at Facebook's false news policy where they say that there's a fine line between false news and satire. And for that reason, they don't remove false news from Facebook, but instead, significantly reduce its distribution by showing it lower in the newsfeed. And then if you read further about what they're doing to reduce the distribution of false news, you see that they say they remove accounts and content that violate their community standards or ad policies.

And the ad policies include the following provision about misinformation, which says that Facebook prohibits ads that include claims debunked by third-party fact-checkers or in certain circumstances, claims debunked by organizations with particular expertise. Advertisers that repeatedly post information deemed to be false may have restrictions placed on their ability to advertise on Facebook.

What's your reaction to that? And the fact that none of the three of us knew about the policy suggests, I think, how buried it is. But does it make sense as a false news policy to remove claims debunked by third-party fact-checkers or organizations with particular expertise? Is that precise enough, and is it even enforceable?

Goodman: [00:40:45] No. No, I mean, I think we did know about it. That's the policy that applies to nonpolitical advertising, right? So, false claims in ads that have been fact-checked and proven to be false; that's how they know they're false. What they're saying is, “We rely on third-party fact-checkers to tell us what's false.” And it also comes up through a flagging system, where consumers can flag things and then they will get them fact-checked.

But that doesn't apply to political ads; they're not gonna run those through that fact-checking. That's what they've exempted. But I wanna come back to our First Amendment touchstones. What Eugene said about commercial speech, of course, is right: we have a different standard for commercial speech than we do for political speech, for good reason. He talked about how it's considered lower-value speech.

A couple of other things about commercial speech: it's considered especially robust, so that there won't be this sort of chilling effect if we get it wrong and say something's false when it's not. And it can be restricted under FTC false-advertising rules. It's not just that it's low value; it's that commercial speakers will be able to live with that sort of false positive, because they're strong enough.

And then I think there's a third value there, which is that commercial advertisers are in a position to know whether what they're saying is false, because they have the wherewithal, and usually they're advertising their own products. And I think it's interesting to think about how those desiderata hold up in this world of not just commercial speech but all speech.

I don't have to tell you, Jeff, that we're at the 100-year anniversary [laughs] of the Abrams dissent, Justice Holmes, Justice Brandeis, and the explication of the marketplace of ideas. And, you know, that sort of view of reality gave rise to the beautiful idea of a marketplace of ideas, and the sense that we needed just an injection of a lot more ideas into the marketplace in order to arrive at truth.

You know, how does that hold up today? What is the most robust speech on these platforms? Where does some speech need help or amplification that it doesn't get? Who is in the best position to know? Facebook, in the policy that you read, is saying third-party fact-checkers are in the best position to know what's true. And I don't think that's wrong at all; I credit them for relying on third-party fact-checkers.

But how does the ecosystem actually work? Facebook, you know, sort of notoriously has not funded these third-party fact-checkers. It relies on them, but it doesn't fund them, and many of them have stopped doing this work because it's too expensive for them. So if we want a really robust marketplace, what affirmative steps do we think should be taken? And what is the responsibility of these platforms to take them?

So I would almost propose this as a fourth basket. It's not really a regulation, but we ought to be thinking also about subsidies: subsidies for good speech, to make truth more robust and to support the audience's ability to sift truth from falsity themselves.

Rosen: [00:44:33] Thank you so much for reminding us of the 100th anniversary of the Abrams dissent, and happy birthday, Abrams dissent. I'll just recommend to We the People listeners a superb book on the Abrams dissent, Thomas Healy's The Great Dissent, and I will have the pleasure now of reading a few of the immortal sentences from Justice Holmes's dissent in Abrams 100 years ago. So here's Holmes in Abrams:

“But when men have realized that time has upset many fighting faiths, they may come to believe even more than they believe the very foundations of their own conduct that the ultimate good desired is better reached by free trade in ideas, that the best test of truth is the power of the thought to get itself accepted in the competition of the market, and that truth is the only ground upon which their wishes safely can be carried out.”

Eugene, those are such inspiring words. Ellen suggests that, at the moment, the marketplace of ideas may not be fully functioning and that subsidies for good speech may give it a fair fight and allow it to fulfill Holmes's hopes. What's your response on the 100th anniversary of Abrams?

Volokh: [00:45:45] Well, remember, Holmes says the best test of truth is the power of the thought to get itself accepted in the competition of the market. He doesn't say it's a perfect test of truth. The question is: as compared to what? And what he's comparing it to is the government saying, “Oh, we figured out the truth. The truth is this; the rest is false. We're gonna abandon the false; we're gonna allow only the truth.” That, he says, is a bad system. It's a worse system than the marketplace of ideas.

The marketplace of ideas is not gonna always produce the truth, but it's more likely to reach it. And one way of thinking about it is: why did he say that in 1919? Well, if you look back over the preceding 100 years, time had upset many fighting faiths. There were routinely accepted views about a vast range of things: about religion, about politics. I mean, if you go 200 years before, it was broadly accepted that, of course, democracy isn't gonna work, and monarchy or aristocracy is the right way of doing things.

And then it took the marketplace of ideas to dislodge that view. Likewise, look at what's happened in the following 100 years. How much has changed in our views about race, for example, about sex, about sexual orientation, about science, about, again, religion, politics, all of those things. If the government had been allowed to say, in 1919, “Here is the truth about race, sex, and sexual orientation, and anything else is going to be criminally punished,” the result would have been stagnation. The result would have been less effective than the marketplace of ideas, flawed as it is.

Likewise, looking forward 100 years from now, why should we have any confidence that our views, even again about race, sex, sexual orientation, whatever else, are going to prove correct? Why should we assume that the things we have come to believe are necessarily right, as opposed to either wrong in big ways or at least wrong in significant enough ways? I mean, I believe in racial equality. I believe in sexual equality.

I believe that homosexual conduct and heterosexual conduct, in my view, are morally equivalent. But maybe I'm wrong. Maybe all of us who believe those things are wrong, at least to a certain degree, and it's only by allowing debate that we can actually try to figure this out, imperfectly as debate functions. This having been said, I do agree that there is room for subsidies. Look, I'm subsidized by the taxpayers and the students of the University of California.

The system of how universities operate is a form of subsidy aimed at promoting the marketplace of ideas. Tax exemptions for nonprofits, and for donations to nonprofits, are a form of subsidy as well, and maybe there's reason to have more subsidy. And that, I think, Holmes would have accepted as part of the market; it's just that the marketplace is better than government regulation, is what he was saying.

Rosen: [00:48:57] Ellen, what's your response, and what does that say about Facebook's decision, for example, to significantly reduce the distribution of false news by showing it lower in the newsfeed, as well as its decision to keep open the possibility of regulating political ads? Thank you for helping me understand the difference between the commercial and political ads policies. So then I went and looked at the political ads policies, and they do have this proviso: where appropriate, Facebook may restrict issue, electoral, or political ads.

So in the end, Facebook is leaving the discretion in its own hands, even for the political ads. And Eugene is saying that the decision about what's false and what isn't, per Justice Holmes, is best made in the marketplace itself, as imperfect as that is, given the difficulty of having one authority decide on the test of truth. Are you confident in trusting that power to Facebook and Twitter?

Goodman: [00:49:48] So if the choice is marketplace versus government, yes, marketplace. But now let's look at how the marketplace is structured. Think of Facebook's down-ranking of certain kinds of content, and it's doing this more and more. And I think there was a Wall Street Journal article from, I think, Friday showing how Google down-ranks, or, you know, has a human in the loop, when it presents search results.

Which is to say that these companies are structuring the marketplace. And of course, Brandeis, you know, was also writing a lot at the same time about the curse of bigness, which is an acknowledgement that marketplaces are structured, and that there is a connection between the way marketplaces are structured and political power and public and democratic discourse.

And so we want a marketplace that is designed to have us contest for visions of the truth, so that, absolutely as Eugene said, our normative views can evolve over time. And I question whether or not we have a marketplace right now that's conducive to the best sort of rough and tumble. And I'll just give you, you know, one way that subsidies might work.

So, you know, not long after that decision in Abrams, well, over the course of the next couple of decades, there came to be subsidies in the form of spectrum that was set aside for non-commercial stations. And then we had PBS, and we had the Corporation for Public Broadcasting. And as Eugene says, we have public universities. We have public education. We have the NEH and the NEA. We have the postal service.

We have various forms of intervention that are designed to supply people with credible information. And, you know, I think, to be a little optimistic, we're right now in this period of transition where we're getting used to these big platforms. We don't know how to deal with global speech platforms. We don't know how to deal with the role of foreign governments or foreign actors in our speech environments, and the melding of foreign and domestic, of concerted action and individual action, of truth and falsity.

We're in a sort of state of turmoil, and I think the optimistic view is that we will sort this out. We will be able to sort truth from falsehood. The marketplace will work. But there are some assists, in terms of accelerating that development, and also making sure that the marketplace relies on cognitive autonomy and the human mind's ability to think for itself. With the velocity and volume of data and information coming at us, we may not be prepared to exercise that faculty, and we may need some help, and that help can come in various forms of structuring these platforms.

Rosen: [00:52:59] Thank you very much for that Brandeisian response. And Eugene, I'll ask for your thoughts, and then we'll have closing arguments. Ellen says eloquently that Brandeis, writing around the same time as Holmes, warned of the curse of bigness, and he had a different view of the purpose of free speech. In his immortal concurrence in Whitney versus California, he said, “Believing in the power of reason as applied through public discussion, the framers eschewed silence.”

And he emphasized that, “Those who made our independence believed that the final end of the state was to make men free to develop their faculties and that in its government the deliberative forces should prevail over the arbitrary.” Ellen is saying that if we're concerned about promoting reason through deliberation, reason may need some help in an age of unreason and therefore certain subsidies could promote reason and truth rather than thwarting it. What's your response and can you be specific about the kind of subsidies that you think might, in fact, promote reason, if there are any?

Volokh: [00:53:55] Well, that's too general a question. I certainly think there's nothing unconstitutional about subsidies, and I think some of those subsidies are a good idea. There are always people who want more subsidies, but the problem, of course, is that subsidies come out of some account [laughs] in a budget, sometimes government budgets, sometimes private entities' budgets, and there are all these trade-offs. But I'm not gonna speak out either generally against subsidies or identify particular subsidies as necessarily the right approach.

Just to give you one example, some people say that one of the things we get the least information on these days is local and state government. At least with national government and national issues, there are lots of people talking about them and newspapers writing about them, but on local and state government, there's very little. Maybe there should be some way of helping promote that, for example, by providing some sort of subsidies to local newspapers and such, or having businesses or wealthy individuals set up funds to help promote it, as, in fact, many have done through foundations.

So these are all, I think, plausible positions. Let me also mention one other thing that came up, which has to do with the Facebook newsfeed and Google search results. Now, we've been so far talking mostly about Facebook and, say, Twitter as platforms, but they also provide information to people that's aggregated from other parties. You might think of it as the recommendation function: “These are things that we think you're interested in seeing.”

So in Google, for example, if I go to Google News, I'm asking Google, “Recommend the most interes
