We The People

Should Congress Regulate Facebook?

October 14, 2021

Facebook whistleblower Frances Haugen recently testified before the Senate Subcommittee on Consumer Protection, telling senators that Facebook and Instagram stoke division, harm children, and avoid transparency and consequences for their damaging effects. Her testimony amplified calls for regulation of the platforms. On today's episode, we consider a variety of proposed reforms, whether they would violate any other laws, and whether they would be constitutional. Host Jeffrey Rosen is joined by internet law experts Jeff Kosseff of the United States Naval Academy and Nate Persily of Stanford Law School. They also consider why it is so difficult to regulate the platforms and the unintended consequences that may arise if they are regulated, and they unpack the prior free speech cases that influenced the overall approach to internet regulation from its very beginning, including the passage of Section 230 of the Communications Decency Act.

FULL PODCAST

This episode was produced by Jackie McDermott and engineered by David Stotz. Research was provided by Michael Esposito, Chase Hanson, Sam Desai, and Lana Ulrich.

PARTICIPANTS

Jeff Kosseff is an Associate Professor of Cybersecurity Law at the United States Naval Academy. He is the author of the textbook Cybersecurity Law and the book The Twenty-Six Words that Created the Internet, a history of Section 230 of the Communications Decency Act.

Nate Persily is the James B. McClatchy Professor of Law at Stanford Law School. He is co-director of the Stanford Cyber Policy Center, the Stanford Program on Democracy and the Internet, and the Stanford-MIT Healthy Elections Project.

Jeffrey Rosen is the president and CEO of the National Constitution Center, a nonpartisan nonprofit organization devoted to educating the public about the U.S. Constitution. Rosen is also professor of law at The George Washington University Law School and a contributing editor of The Atlantic.

Stay Connected and Learn More

Questions or comments about the show? Email us at [email protected].

Continue today’s conversation on Facebook and Twitter using @ConstitutionCtr.

Sign up to receive Constitution Weekly, our email roundup of constitutional news and debate, at bit.ly/constitutionweekly.

Please subscribe to We the People and Live at the National Constitution Center on Apple Podcasts, Stitcher, or your favorite podcast app.

TRANSCRIPT

This transcript may not be in its final form, accuracy may vary, and it may be updated or revised in the future.

[00:00:00] Jeffrey Rosen: I'm Jeffrey Rosen, President and CEO of the National Constitution Center, and welcome to We the People, a weekly show of constitutional debate. The National Constitution Center is a nonpartisan nonprofit chartered by Congress to increase awareness and understanding of the Constitution among the American people. The Facebook whistleblower Frances Haugen recently testified before the Subcommittee on Consumer Protection, and calls for regulation of the platforms are in the air. On today's episode, we are joined by two of America's leading experts on both the regulation of the platforms and the First Amendment to discuss what kinds of regulations of Facebook might be good policy and whether or not they would be constitutional. Nate Persily is the James B. McClatchy Professor of Law at Stanford Law School. He is co-director of the Stanford Cyber Policy Center, the Stanford Program on Democracy and the Internet, and the Stanford-MIT Healthy Elections Project. Nate, it is such an honor to have you back on the show.

[00:01:13] Nate Persily: Pleasure to be here.

[00:01:14] Jeffrey Rosen: And Jeff Kosseff is Associate Professor of Cybersecurity Law at the United States Naval Academy. He is the author of the textbook Cybersecurity Law, and the book The Twenty-Six Words that Created the Internet, a history of Section 230 of the Communications Decency Act. Jeff, welcome to We the People.

[00:01:33] Jeff Kosseff: Thanks for having me.

[00:01:34] Jeffrey Rosen: Nate, you recently proposed legislation that would allow Congress to get data that might inform future regulation. Tell us about your legislation and the kind of data that you think might be helpful.

[00:01:54] Nate Persily: Well, let me give you a little background on where this comes from. For the last four or five years, I've been working, principally with Facebook, to try to figure out a safe, secure, privacy-protected way for outside researchers to get access to Facebook data, so we can find out what's actually happening on the platform and answer some of the big questions about tech's influence on democracy in particular. It has been quite a struggle, and part of that challenge has been that there's real fear of liability on the part of the firms if they were to make data available to outsiders. You may remember the Cambridge Analytica scandal, which involved a researcher who allegedly breached the privacy of certain Facebook friends and the like.

I mention all of that because I've become convinced that legislation is really the only solution here, that we can't wait for the generosity of the firms to give data to outside researchers. And so what the legislative proposal does is empower the Federal Trade Commission to mandate that the large tech firms, which is to say those that have over 40 million monthly active users, develop independent research sharing protocols. What this would mean is that outside researchers, vetted by the FTC and through sort of National Science Foundation procedures, would be allowed to have access to the data that Facebook and Google and similar firms have.

It's not as if the data is just gonna be handed over to them on their laptops, and it's not gonna be given to the government. The key thing is that the data resides in the firm, but then, unlike Cambridge Analytica, the researchers would go to the firm to get the data, to analyze it, and then to develop inferences on all these critical questions that we've been wondering about.

[00:03:56] Jeffrey Rosen: Thank you so much for that. Jeff, you recently wrote an op-ed for The Washington Post with Daphne Keller on why outlawing harmful social media content would face an uphill legal battle. And you argue that calls for Congress to regulate the spread of lies or disinformation on social media platforms, and arguments that the laws should restrict the amplification of this sort of content, could face an uphill battle passing First Amendment muster. Before we turn to your very important constitutional arguments, I want to get a sense of the range of legislation you think might be considered in Congress. You do endorse Nate's call for transparency mandates and for more data, and you say that more facts would be useful. If Congress were able to collect more facts, as Nate suggests, what kind of legislative proposals might emerge?

[00:04:53] Jeff Kosseff: Well, I think that Nate's proposal is a really critical component of the much broader transparency that we need, for a variety of reasons, both for policymaking as well as for coming up with technical solutions. My areas of research are online speech and cybersecurity regulation and government surveillance. And for years now, I've had an easier time going to the intelligence agencies and getting basic information about how they operate than I do when I go to the platforms [laughs]. It should not be that way, given the vital role that the platforms play. And I think that some of what Frances Haugen disclosed is incredibly important.

And I worry that when you rely on whistleblowers to take a tremendous amount of personal risk to reveal this sort of information, you might not be getting the full picture. So I think that Nate's proposal really would contribute to the policy debate. I also come at this having spent years looking at the history of platform regulation and Section 230. And one of the justifications for Section 230 was that, rather than imposing government regulation of online content (at the time the concern was minors being able to access pornography) and having the FCC regulate the internet, we would avoid the First Amendment concerns by empowering both the platforms and the users to come up with the solutions that best meet their needs.

And it's very much a free-market-based idea: Section 230 says the platforms have the freedom to moderate or not moderate, and also to provide tools or not provide tools to users. And the failure that we've had is that markets require a certain amount of information and transparency, and we have not had that. That's what I think Nate's proposal would get at. Something else I've been proposing for a few years now is to have a nonpartisan, congressionally chartered commission that looks at all of the issues surrounding what's possible with moderation and what's not, very similar to the Cyberspace Solarium Commission, because right now we're operating on anecdotes and hypotheticals. And I think that's a very short-sighted way to look at the future of the internet and platforms.

[00:07:29] Jeffrey Rosen: Thank you so much for that. Nate, recognizing that you think Congress needs data before it can thoughtfully legislate, give us a sense of the range of possible legislation that Congress might pass when it comes to misinformation and disinformation once it has appropriate data.

[00:07:51] Nate Persily: So I think that Congress is gonna legislate before we have all the research we want, and we should be realistic about that. The Haugen disclosures have lit a fire that I think didn't exist before. And while it is extremely difficult to pass any legislation these days, I think Republicans and Democrats are surprisingly united in their hatred for big tech. Now, they perceive the problem in different ways, particularly as it relates to disinformation, if that's our focus: in general, Democrats tend to be more in favor of taking down more speech, and Republicans are worried about the liberal Silicon Valley companies censoring too much.

But there may be some middle ground there. The way I would think about this is as a kind of continuum of legislation: some of it is directly on the speech regulation side, and some of it is more about the infrastructure or structure of the platforms and the environment. So there's transparency legislation, not just what I'm proposing for researchers, but other kinds of disclosures. Algorithmic auditing, I think, is in that vein: not mandating what the algorithm would do, but getting greater transparency on what the algorithm is doing. Some people talk about algorithmic transparency.

I think that's actually quite difficult, but at least some kind of auditing; and then obviously privacy legislation and antitrust enforcement. We don't necessarily think about those as tackling disinformation and the like, but they are levers through which you can regulate the companies and maybe get at some of these problems. Then when it comes to explicit regulation of speech, this is where Jeff's really the expert. Everybody sees CDA 230, Communications Decency Act Section 230, as a sort of boogeyman here that needs to be slain, right?

And so most of the reforms are targeting that, but really CDA 230 has become more of a metaphor for everything that ails the internet. When we talk about CDA 230 reform (Jeff is the expert; everyone should follow his Twitter feed on this), it's really just a way of talking about regulating the companies to go after content on the platforms. So when it comes to regulating disinformation or regulating hate speech online, I think there are very few options that are available, and so I agree with Jeff and Daphne's piece in The Washington Post.

I think there are certain things you can do that bring in the language of fraud or antitrust and the like, for example, to hold the platforms more accountable for the rules that they apply online; that's fraud as well. You may remember that was one of the arguments in President Trump's executive order going after Twitter, early on, before he was taken down: the idea was that the platforms could be prosecuted or fined under relevant fraud rules if they did not apply their content moderation regime in a fair and honest and transparent way. So I think that getting at disinformation is gonna be really difficult, because, as you know, in many respects the First Amendment protects people's right not to tell the truth.

[00:11:21] Jeffrey Rosen: Jeff, that was a big endorsement from Nate of the central argument of your important Washington Post piece. In that piece you argue, as Nate just suggested, that Section 230 really isn't the barrier to direct regulation of misinformation or disinformation; it's the First Amendment. And you set out the cases, both in the Supreme Court and in lower courts, that have said that the First Amendment allows some liability for lies like defamation, fraud, and false advertising, but generally prohibits lawmakers from banning misleading speech. Give us a sense of the arguments in your op-ed about why the First Amendment might well forbid direct regulation of misinformation and disinformation.

[00:12:10] Jeff Kosseff: So this really comes from the book that I'm currently writing, which is about the First Amendment protections for false speech: why we have them, and how we deal with misinformation without sacrificing the values that led to these protections, which weren't accidental. And also to point out that there were some really hard cases. I look at some defamation cases, which are fairly unique in that you can hold people responsible for false speech, but it's a very high bar that the Supreme Court, and frankly the common law, has set. But what I find actually more fascinating, in a body of case law that I wasn't terribly familiar with until I started researching the book, are all of the non-defamation cases involving false speech.

And there are a lot of really tragic, fascinating cases where people have been very harmed by lies, by inaccuracies, by misrepresentations, and they largely, not entirely, but largely, have not been able to hold the publishers liable because of the very strong protections we have for false speech. One case that I talked about in The Washington Post op-ed was a Ninth Circuit case where two people were gathering wild mushrooms, relying, or so they say, on a book called The Encyclopedia of Mushrooms. They consumed mushrooms that ended up being death caps, basically the most poisonous mushrooms available, and they both required liver transplants.

Even in that case, they sued on a variety of claims. They filed a lawsuit against the publisher for negligence, misrepresentation, and products liability, and the Ninth Circuit just said, you know, this is a tragic case, but the First Amendment counsels against us creating a duty for publishers to investigate the accuracy of the contents. And this was almost as bad as it can get, and even in that case, the Ninth Circuit said no. The cases range: there are other cases involving diet books that caused people to get very sick, and people who relied on inaccurately reported stock prices and lost money. And one of the other themes you see in these cases is the courts imposing a certain expectation of responsibility on the consumer of the information.

It's perhaps not a very comfortable [laughs] discussion to have when there are really terrible harms, but the courts do say you have to be discerning; that's one of the expectations on consumers. So when you take these values and you look at the current misinformation debate, and you look at all of the harms that are being caused, whether it's people who have misconceptions of how the presidential election ran or, more directly to the cases, people who don't take the vaccine because they believe misinformation, these are really significant problems. But it's difficult to look at the case law that we have right now on the First Amendment and false speech and see how, without changing the doctrine pretty substantially, you could impose direct liability for that speech.

[00:15:52] Jeffrey Rosen: Nate, Jeff is making a big point. He's saying that when it comes to direct regulation of lies, the First Amendment bar is represented in lower court cases and in the Supreme Court case called Alvarez, which held that a federal law that imposed criminal penalties on those who falsely claimed to have received military honors was invalid under the First Amendment, regardless of the intention or harm.

And then he goes on to say in The Washington Post that it would also be hard to ban the platforms from sharing misinformation or amplifying it, because that also would restrict internet users' speech and would be subject to strict scrutiny, and therefore Congress couldn't tell the platforms which of our currently legal posts should be banished to the bottom of the newsfeed or promoted to the top. Your thoughts on whether there is any wiggle room in current First Amendment doctrine that might allow direct regulation of misinformation and disinformation? Or do you agree with Jeff that the First Amendment is likely to be interpreted as a pretty impenetrable bar?

[00:17:07] Nate Persily: Well, I agree that the First Amendment is a very high bar, even if maybe it's not impenetrable here, and we'll see where the Supreme Court goes with all this. In part, what's been interesting is to see the shift of some of the more conservative and even normally libertarian justices when it comes to platform behavior. But let me start with the amplification point, because that's a really interesting area. Now we're into that next level of discussion of the kind of affordances of the platforms, or the way that the platforms may contribute to the disinformation environment and the like. I don't think you can pass a law that says, "No platform may amplify disinformation."

But you can probably regulate amplification generally in particular circumstances, though that's going to be a kind of time, place, and manner restriction, where you're going after certain aspects of the platform and the way that it promotes speech generally. Now, this also is fraught. There are constitutional ways to do this and many unconstitutional ways to do this. But when we talk about regulating amplification, we're also talking about regulating the algorithms, right? And so if there's going to be some kind of regulation of algorithms, it's gotta be consistent with the First Amendment.

Obviously there's the transparency side, which is kind of easy, but maybe also the technology side. I'll give you an example: the use of automation and bots and the like. I think you can constitutionally regulate the use of automation in the way the platforms present speech. Some of it can be through kind of bot disclosure bills and the like, but that's an area where you can say, "Well, this is not really a huge First Amendment concern."

Though I could see the First Amendment argument against regulating bots, nevertheless that's the kind of regulation I could see. But it's not easy to figure out how to develop a law that will regulate just the bad amplification without the good amplification, because part of what social media does, right, is give power to individuals to reach a massive audience. So how are you going to get in the way of that kind of transfer of individual speech to a larger audience? 'Cause that is really what we're talking about when we're talking about amplification. One other area, and Jeff hit on this a little bit, is to think about advertising, right?

I can easily see some greater regulations, whether on the disinformation side or of digital advertising generally, as well as microtargeting and the like, which can feed into some of this amplification that we're talking about. That's another area, I think, of fertile potential regulation, to get at basically the purchased speech on the platforms, and how you could get at both the truth of some of the claims that are there as well as just the strategies that are used to run ads online.

[00:20:35] Jeffrey Rosen: Jeff, what kind of regulation, if any, of the algorithms do you think the First Amendment might allow? Nate pointed to time, place, and manner restrictions, bot regulation, and the regulation of advertising. There is a bipartisan bill introduced just today that would prevent tech platforms from using their power to disadvantage smaller rivals with their algorithms. Do you believe that any of these algorithmic regulations might be consistent with the First Amendment or not?

[00:21:10] Jeff Kosseff: I think they could be consistent with the First Amendment, depending on how they're designed. I think Nate's point about time, place, and manner is well taken; I struggle to see how you could do that effectively with a law that regulates algorithms, but I think that approach gets you far closer to what could pass First Amendment scrutiny. There are both legal and policy issues. The policy issue is, as Nate was getting at, that there are actually some good uses for algorithms. They've kind of become the boogeyman for everything bad about the internet, but there also are ways that algorithms make the internet far more useful and prevent harmful content.

And then you start saying, "Well, we wanna regulate only the bad algorithms and not the good ones." And that gets you back to the legal problem of, well, is the law defining what's bad or good? That's when you start getting more into the content-based issues. I'll give one example. There's a proposal from earlier this year, very well intentioned, to deal with the algorithmic promotion of health misinformation. It says that if you use an algorithm to promote health misinformation, you lose 230 protections for that misinformation. And then you look in the bill, and it says health misinformation is defined by guidance that's promulgated by the HHS secretary in consultation with any officials the secretary deems appropriate.

And that kind of terrifies me [laughs], because in the op-ed we have the quote from Justice Kennedy about how we don't have a ministry of truth in the United States. It sounds good, because it sounds like, "Oh, you've got someone saying what's health misinformation." Well, you might have a different HHS secretary in a few years who has very different viewpoints about all sorts of things that you might determine are misinformation. And now suddenly they're determining what kind of speech is disfavored on the internet, and that could really be misused. And that's why we have such strong First Amendment protection.

So I think a lot of the algorithmic concerns, many of them, could be addressed through privacy legislation, because a lot of the concern is the use of algorithms to target people based on personal information that's collected. And I think many of the First Amendment concerns could be eliminated if you're dealing more with the collection and processing and sharing of data, rather than immediately going to speech restrictions.

[00:24:00] Jeffrey Rosen: Nate, is Section 230 reform in any way desirable? Frances Haugen and another Facebook employee have said that eliminating immunity for amplified content could help and stop platforms from prioritizing emotionally engaging but societally dangerous content. Jeff and Daphne Keller argued that this is unlikely to work, 'cause ranked feeds won't go away, and, most importantly, that changes in privacy law or reforms grounded in competition are a better path than changes to Section 230. Do you agree with Jeff that 230 is a red herring and we should focus on alternatives, or do you think under some circumstances 230 reform might be a good idea?

[00:24:50] Nate Persily: So, again, when we talk about 230 reform, we have a lot of different things that we're throwing into that. So let me say a few things. First, Section 230 doesn't just apply to Facebook and Google. It applies to the whole internet, right? And so, if you think about repealing 230, which is sort of the bludgeon, or I don't know, the hammer, that's wielded these days rhetorically, well, that doesn't just affect these big firms, right? It affects small firms as well, and not just small firms but websites, as well as potential competitors to Facebook and Google and the like, right?

And so, if anything, if you were to remove the immunity that Section 230 provides, the big firms would probably have an advantage, because the little firms would not be able to compete. I think there's a lot of agreement on that. But suppose you even just excise the big firms from Section 230 immunity; what would that mean? Well, now it depends what problem you're trying to solve and how you're gonna get after it. Now, it's really hard. Part of the question is whether you're going to create liability for amplified bad or injurious content, right?

You have to have a theory about how the platforms are going to distinguish among certain types of content in order to prevent the amplification of the bad stuff. And so that's quite a challenge. It's maybe not insurmountable, but you're regulating two things at the same time. And one of the real contributions, I think, of Jeff and Daphne's article is to point out that, look, in the normal First Amendment realm, whether you're banning the speech itself or you're banning people from using megaphones to broadcast certain types of speech, right?

You're gonna run into some of the same problems, which is that if you have a content-based or viewpoint-based restriction that triggers this ban on amplification, it still runs into First Amendment problems. Having said all that, I think there's a really interesting question about how you might regulate amplification generally on the platforms. And let me put out a few ideas that I just want to loft out there so that Jeff might be able to shoot them down. These are not entirely realistic proposals; there are some problems with them. But suppose, coming out of the Haugen testimony, we decide there's something really bad about the way that the big firms are organizing people's newsfeeds or recommendation algorithms and the like.

And so you mandate that the Facebook newsfeed, and don't just do it to Facebook, but any firm like Facebook, must be in reverse chronological order, right? So that they will not organize the information for you; it's just whoever in your friend list spoke most recently. That's actually what Frances Haugen has argued for, and it's what you can change your settings on Facebook to do. Would that be constitutional or not? Mandating that algorithm, basically limiting the organizing of information to reverse chronological feeds.

Now, in the normal course of things, if you were to regulate programming on NBC and say, you have to do news at 8:30, the government would be mandating the order of the presentation of information, and that might be a problem. I'm sort of eager to hear what Jeff might say about whether that same concern might apply to regulating the algorithm, 'cause arguably that's not a content-based regulation, though we can debate whether it's content-based.

Related to that, I think a lot of the senators and others are thinking about whether you might regulate algorithms that are prioritizing engagement in some way, right? And this goes back to what Jeff was saying about the interaction with privacy. If you remove the ability of the firm to collect certain private information, for example your search history or viewing history and the like, then they will be paralyzed if they try to structure your feed personally for you based on what you've engaged with before.

And so, is there a way to go after that practice, which was really at the core of a lot of Frances Haugen's criticisms: that Facebook and other firms prioritize the kind of content that you find engaging, and therefore that leads to addiction, and it leads to, maybe, people consuming bad content. Those are the kinds of regulations of algorithms I could see being proposed, though I think they're not clearly constitutional.
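
To make the two orderings Persily contrasts concrete, here is a minimal sketch, in Python, of a reverse-chronological feed versus an engagement-ranked one. The Post fields, scoring weights, and function names are invented for illustration only; no platform publishes its actual ranking formula, and real systems use far richer, personalized predictions of interaction.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List

@dataclass
class Post:
    author: str
    created_at: datetime
    # Hypothetical engagement counts, invented for illustration.
    likes: int = 0
    comments: int = 0
    reshares: int = 0

def reverse_chronological_feed(posts: List[Post]) -> List[Post]:
    # The mandate Persily floats: newest post first, no curation by the platform.
    return sorted(posts, key=lambda p: p.created_at, reverse=True)

def engagement_ranked_feed(posts: List[Post]) -> List[Post]:
    # A toy stand-in for engagement-based ranking: posts expected to provoke
    # the most interaction rise to the top, regardless of recency.
    def score(p: Post) -> int:
        return p.likes + 2 * p.comments + 3 * p.reshares  # arbitrary weights
    return sorted(posts, key=score, reverse=True)

now = datetime.now()
posts = [
    Post("alice", now - timedelta(hours=1), likes=2),
    Post("bob", now - timedelta(days=2), likes=900, comments=300, reshares=150),
]
# Chronological order surfaces alice's recent post; engagement ranking
# surfaces bob's older but viral post.
```

The sketch also makes visible the gaming problem Persily raises later in the episode: in the chronological version, placement depends only on recency, so simply posting more buys more visibility.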

[00:29:49] Jeffrey Rosen: Jeff, since Nate asks, do you believe that the two algorithmic regulations he suggests, mandating that newsfeeds must be in reverse chronological order and regulating algorithms that prioritize engagement, would be constitutional or not?

[00:30:05] Jeff Kosseff: So I'm gonna take the very bold position of saying that I don't know for sure, in part because, thanks to Section 230, we don't really have a body of First Amendment case law involving algorithms. We've had algorithm-based claims before, and they've been dismissed either on Section 230 grounds or on the grounds of the substantive law, basically saying there's not a claim that was stated. So this is all basically applying case law involving cable television distributors and bookstores, which is one of the more fun things about this work.

So I'd say, I don't know. I'd say that there would be a reasonable argument that it is a content-based restriction, even just to tell a private company you must present the feeds in a certain way, that you may not prioritize certain content. I think that would be a relatively strong argument, but I think a court might look at this a little differently than a bookstore or a newspaper and say, you know, this is a little different, and all it's saying is just present the information, as some early social media platforms did and some still do. So I'm really kind of on the fence on that one.

From a policy perspective, I would question whether that would be terribly useful, because there, again, are a lot of good reasons to prioritize certain content, at least as has been publicly stated. And this gets back to the first point we were talking about: the lack of enough research on the topic. We don't know exactly. From what Frances Haugen disclosed, obviously there are some really concerning ways that this has been used, but there may very well be some good reasons for this.

So I think this would be tough. And I also think that when you look at a lot of the online harms, particularly when you deal with Section 230, a lot of them are much more individualized, with content being about the individual person, things like defamatory content. And that obviously wouldn't really be addressed very well by a 230 exception for algorithms. So I think it's worth discussing, and I think frankly it would require some [laughs] trial and error in the courts to see exactly how they would interpret the First Amendment as applied to those restrictions.

[00:32:55] Jeffrey Rosen: Nate, another beat on potential regulation of false speech. As we've been discussing, the Alvarez case, involving the Medal of Honor, said that falsity alone doesn't suffice to bring speech outside the First Amendment; the statement has to be a knowing and reckless falsehood. Justice Kennedy said our constitutional tradition stands against the idea that we need Oceania's Ministry of Truth. If Section 230 were repealed, and remember the 26 words: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." If those 26 words were repealed, could you imagine the Supreme Court upholding a law saying that the platforms can't continue to publish information once they are informed and know that it's false?

[00:33:52] Nate Persily: No, I don't think that would be enough. I think that would violate the First Amendment rights not just of the platforms but of the users. And that's an important thing you gotta put on the table here: when the government regulates the platforms, it doesn't just regulate their policies and their speech, right? It's also regulating individual speech. And part of the story here, if you look around the world at intermediary liability laws, which are laws like Section 230, is that if you create broad liability for stuff that appears on these platforms, they will naturally over-censor in the marginal cases. Take something like defamation. People should understand that even if you repeal Section 230, it's not like all false speech is now somehow illegal, right?

It's only a narrow category of false speech. And I think Jeff was making this point: you've got defamation and the kind of speech which might lead to offline or real-world harm that could then lead to liability. But if you were to have a repeal of Section 230 and liability for either false speech or certain types of false speech, what are the platforms going to do in the context of, say, defamation? If someone says about me, you know, "Nate Persily is a drunk" or something like this, how is Facebook gonna know [laughs]? What exactly are they going to have to figure out? Because a defamation case is a really hard case to bring; it takes a long time, right?

And presumably the injury occurs right at the time Facebook allows it on the platform. So if they're notified, if I say, "Look, this is false," how are they supposed to adjudicate that in real time? We have examples of this around the world; the NetzDG law in Germany, which deals with hate speech, has that kind of flavor to it. And there are ways to try to muddle through some of this. But if you're trying to set the broad contours of liability for just lies in general on the internet, that's gonna be extremely difficult, because how are the firms going to be able to police any statement as to whether it's true or not?

And mind you, remember, what we're talking about here are things like the newsfeed and maybe search results or something like this. But what about WhatsApp? What about text messaging? What about all the other kinds of products in which lies are traveling and sometimes reach high levels of virality? We think in the US that the newsfeed is where most of this is happening, but if you go around the world, WhatsApp is really the arena in which all of this problematic content is achieving an audience. Let me just say one last thing about the algorithms here, which is that there are a lot of reasons for Congress not to mandate, say, a reverse chronological feed, because it leads to gaming by sophisticated operators who then spam your account and put as much crap out there as possible, because the more they produce, the more likely it is to be at the top of your feed.

And so there are a lot of good reasons not to have a reverse chronological feed as the preferred option. But I think it really is an interesting question how Congress could regulate the actual prioritization in your feed, whether the bookstore is the right analogy or whether it's something different. And I'll say one last thing, which is, if we think about the coming First Amendment battle, I think it will be as to whether the normal kind of Citizens United protections that we give to corporations as free speech actors are going to apply to Facebook and Google, right?

Because yes, if you think of them as like a bowling club or as a private association, then a lot of this stuff's gonna be unconstitutional. But is the Supreme Court almost gonna merge a little bit of this antitrust thinking into First Amendment doctrine and think that maybe they don't have the same First Amendment rights as your garden-variety corporation?

[00:38:17] Jeffrey Rosen: Fascinating. Thank you for flagging that important debate that's coming up. Jeff, is your argument that Section 230 reform is unnecessary as well as ill-advised, that it's a red herring and focusing on 230 misses the more substantial issues? And if so, tell our listeners why you think that's the case.

[00:38:39] Jeff Kosseff: I think Section 230 reform often is a red herring. There are some changes to Section 230 that I think are worth considering, and frankly, there are others that I don't think we have enough information to judge yet, which again gets back to the need for transparency. We're kind of operating on very narrow bits of information and not a broader body of information to make a decision. One area that concerns me, and that I think something could possibly be done about: Section 230, as we all know, prevents liability for the platform in many cases.

But one of the justifications for Section 230 is that you can hold the individual who posted the information accountable if it's defamatory or something like that. That's often not the case, though, and this actually prompted me to write my book that's coming out early next year on anonymous speech. There are both technological and legal reasons, good reasons, why people are anonymous, but there also are cases where people will anonymously post very harmful, defamatory, and harassing content. And in some cases the plaintiffs actually do go to great lengths to sue, and the material is adjudicated to be defamatory in the lawsuit against the poster.

Many times the platforms in that situation will voluntarily take down the material after an adjudication, but the California Supreme Court a few years ago held that Section 230 prevents a platform from being required to take down material that's been adjudicated defamatory. And there are good arguments for that ruling, in part because, as Eugene Volokh has very well documented, there are a number of cases where people have obtained falsified court orders. Also, many of these cases are at the default judgment stage because the defendants never show up.

But at the same time, I do have a problem with material that is so harmful that it's adjudicated defamatory not coming down. And there are a few really niche websites that traffic in this sort of information and won't take it down. So that's the sort of 230 reform that I'm thinking of, and I know it's not what [laughs] is currently in the mainstream, but I'm most concerned about people whose lives get ruined by these protections. But I think Nate's exactly right: there are so many cases where people are focusing on Section 230, but it really is much more either a First Amendment issue or that there's just not a viable cause of action, whether because of the First Amendment or because the common law doesn't allow liability in that situation. So in many cases, I fully agree, it's not a 230 issue.

[00:41:53] Jeffrey Rosen: Well, Nate, you put several possible reforms on the table, including time, place, and manner restrictions, regulation of automation and bots, and regulation of advertising and microtargeting, while you two disagree about whether they might be consistent with the First Amendment. In the time remaining, and recognizing that you think more data is necessary, what is the range of specific legislative proposals that you think would be most productive for Congress to consider?

[00:42:24] Nate Persily: Let me start with one that I'm not sure about, because I'd have to know more of the content, but where I think things are going very quickly. And look, there is a lot of sort of closet Section 230 reform whenever Congress talks about creating liability for particular types of content. One of the things that came out of the Haugen testimony that was most incendiary was the use of Instagram by teen girls and the possibility that it makes it more likely that they would develop eating disorders. Now, there's a lot of debate about the science there, but certainly there was internal Facebook research on that.

And so I think, as is true in many other speech realms, we may find that a lot of the levers to go after content may be justified as protecting children. On the one hand, that seems like something everyone can rally around, and I think people will rally around it. But it brings up one of the really critical things about the internet when it comes to children and the like, and it ties into Jeff's forthcoming book: the thing about online experiences, right, is that they're anonymous.

It's hard to verify who people are. If you had a verification scheme for all platforms, for all users, one that maybe was government monitored, that would allow for a lot more control of the internet, right? For example, the Chinese have now passed rules about young people and how much they can be online on particular platforms and the like. So I do think there's gonna be an interesting way that some of this legislation, or Section 230 reform, is filtered through the idea of protecting children online, as when it comes to eating disorder content, but it's gonna be all kinds of things related to addiction and the like.

I continue to think that the most important work here obviously will be done by transparency, 'cause I'm promoting that. But thinking about structural questions, thinking about privacy, thinking about antitrust and the like: we don't tend to think about those as free speech issues, but they do have implications for the behavior of these platforms and their rules. Part of the point about transparency, and why I push researcher access as much as I do, is that it's not just so academics can have more publications.

If the firms know that they are being watched, and that what's happening on the platform will be essentially reported by some independent third party, it will actually affect their behavior, right? You don't have to pass a kind of blunderbuss, potentially First Amendment-violative disinformation law if you have scholars who are going to unearth how much disinformation is on the platforms or not, right? And so I do think transparency has that character to it. I also think advertising regulation is long overdue, and I think that there's a lot that we should be doing with respect to advertising regulation on the platforms.

Some of it deals with disinformation and fraud and the like, but a lot of it would be dealing with targeting and things like that. I'm also interested in campaign codes of conduct, which kind of bleed over into this area. That's a hard one to reconcile. There are situations in which politicians lie about their opponents, and there are some narrow statutes that have been applied there, but we can imagine how that would play out in this information environment. So those are the areas that I'm thinking about. I'm only in favor of Section 230 reform when it's in service of these other kinds of interests, some of which will actually have the salutary effects that a lot of the more aggressive measures are trying to achieve.

[00:46:28] Jeffrey Rosen: Thank you very much for that concrete and really helpful range of proposals. Jeff, based on your forthcoming book and your study of the field, what are some proposals for the regulation of anonymous or other speech that you think both would pass constitutional muster and would be a good idea?

[00:46:57] Jeff Kosseff: So I think the proposal that I just talked about, dealing with material that has been adjudicated defamatory in a litigation between the poster and the subject, would address many of my concerns, particularly about anonymous speech. It seems like a narrow issue, but again, I think it would really get at some of the very bad-acting niche websites that traffic in this sort of content. In terms of other regulations on anonymity, there's a draft bill that was circulated in the House that would require identity verification for anyone who signs up for a social media service.

Based on some fairly clear case law on that very topic, I don't think that passes First Amendment scrutiny. The book goes into detail about the fairly strong First Amendment protections for anonymity, which exist for very good reasons. And moreover, from a policy perspective, I think there's some confusion about anonymous speech, and this assumption that it's inherently bad, when in fact marginalized groups rely on the ability to speak pseudonymously and anonymously far more than other groups.

So maybe I worry too much, but I think that's clearly where one of the attacks is going: saying, "Okay, if we're not gonna address Section 230, let's address anonymity." And I don't think either of those will really get at some of the broader underlying problems. I'm a former newspaper journalist. My newsroom, when I started there out of college in 2001, was about 400 people; now I think it's about 50 people. Those are some of the underlying issues that you have to look at when you're asking why misinformation spreads.

There are some fairly fundamental structural changes that perhaps regulating the internet is not going to solve. So I don't think there are any easy answers, but I do think that, at the very least, getting more transparency about the business practices and information flows of these very large companies is a great step toward finding solutions.

[00:49:39] Jeffrey Rosen: Thank you for that, and for that insight that neither addressing anonymity nor addressing Section 230, in your view, will get at the underlying issues. Nate, it's time for closing thoughts in this really fascinating discussion. Having discussed the range of possible regulation and the various First Amendment difficulties, I wanna ask: is it possible that the data you're calling for, once it's turned over, would reveal that disinformation and misinformation on the internet isn't the algorithms' fault, that folks share explosive and often false information because it appeals to their emotions, and that it's user habits, not the algorithms themselves, that are responsible for the spread of misinformation and disinformation? And if that's the case, should we be thinking not about legislation and not about changes to the platforms, but about the way people consume information? Broadly, what are you looking for once the data you're seeking is turned over?

[00:50:43] Nate Persily: Well, I'm looking for it all, but I think there is something to what you're saying. And I will say, as I put my political scientist hat on, that the literature that is emerging right now suggests that the algorithms are not as responsible as people think for the pervasiveness of whatever pathology you're pointing to, whether it's disinformation, hate speech, polarization, and the like. Really, it's not so much that people are gonna get served up this information as that they go out and seek it.

We're seeing a lot of that. And this is one of the reasons I'm trying to convince the platforms not to go after this legislation too hard: whatever story we tell about what's happening on the platform can't possibly be as bad as what everybody thinks right now, because everybody thinks it's basically a cauldron or cesspool of disinformation, hate speech, and the like, and that everyone's seeing it all the time.

There is emergent research on the outside which is trying to get a handle on this, but only if you're on the inside can you figure out, for example, whether YouTube's recommendation algorithms, or its autoplay features and the like, lead people down rabbit holes or privilege disinformation and the like. And so I think that, yes, it is possible that once the researchers go in there, they're gonna find out that we've misplaced the responsibility or blame that we put on the algorithms, and that it's in some ways a more difficult and severe problem if what's happening is that people are going to the platforms looking for this information, as opposed to just having it imposed upon them.

[00:52:23] Jeffrey Rosen: Thank you so much for that. Jeff, your closing thoughts about this possibility that it's not the algorithms' fault, it's our fault, we the people, as we seek out more explosive information rather than having it served up. If that's the case, is the problem even worse than you've suggested so far, in the sense that it's not just First Amendment and other legal bars to reform, but the need for a fundamental change in the way people pursue truth?

[00:52:53] Jeff Kosseff: I think that very well could be the case. And again, I'm open to [laughs] any potential outcome, because we don't have the data yet. So that's why Nate's proposal is so great. I do worry, and I highly recommend that anyone read Joe Bernstein's great article in the most recent issue of Harper's about big disinformation and sort of a bit of the panic that's happening right now, which really puts this into perspective. I do worry that it could be a much deeper problem. I think about what Justice O'Connor did right after she retired, focusing on the need for civics education.

I think that's part of it. Clearly there are some avenues for misinformation to spread online, and they have been misused, but why are they being misused? I just worry that no matter what we do with Section 230 or with passing new misinformation regulations and chipping away at the First Amendment, we're not going to get at whatever the underlying problem is, which is why I really hope that we can get far more transparency into the debate.

[00:54:13] Jeffrey Rosen: Thank you so much for that. Well, civics education is clearly one solution to the problem of misinformation, and that's what the National Constitution Center and We the People exist to promote. Thank you, We the People listeners, for tuning in every week for these thoughtful, deep, balanced discussions that try to cast light on the hardest and most important issues facing American democracy today. And thank you so much, Nate Persily and Jeff Kosseff, for taking the time to shine light on one of the most difficult and most important questions facing our republic. Nate, Jeff, thank you so much for joining.

[00:54:53] Nate Persily: Thank you.

[00:54:53] Jeff Kosseff: Thank you.

[00:54:57] Jeffrey Rosen: Today's show was produced by Jackie McDermott and engineered by David Stotz. Research was provided by Sam Desai, Lana Ulrich, Michael Esposito, and Chase Hanson. Please rate, review, and subscribe to We the People on Apple Podcasts, and recommend the show to friends, colleagues, or anyone anywhere who is eager for a weekly dose of constitutional illumination and debate. And friends, remember always, when you wake and when you sleep, that the National Constitution Center is a private nonprofit. It's so great that some of you are writing in and giving $5 or $1 or $10 to signal your support for our crucially important mission of civics education, which, as you just heard from both of our panelists, is crucial to the survival of the republic. So please support the mission: go to constitutioncenter.org/membership, or give a donation at constitutioncenter.org/donate. On behalf of the National Constitution Center, I'm Jeffrey Rosen.
