
Section 230 Reform in an Era of Big Tech – Panel Transcript

July 2, 2021


Below, we have provided the full transcript of our panel discussion Section 230 Reform in an Era of Big Tech. Read on for a timely discussion in which a panel of experts examined the implications of changing this key piece of legislation.


Matt PERAULT:

Welcome to today’s event, Section 230 Reform in an Era of Big Tech. I’m Matt Perault, the director of the Center on Science and Technology Policy at Duke. As Elisa said, this event is presented by Competition Policy International, in collaboration with our center at Duke. It’s sponsored by Google, which also supports our center at Duke. This event builds on the May issue of the Antitrust Chronicle, which focused on the interplay between Section 230 and competition issues. Several of our panelists wrote articles for the issue, and if you’re interested in checking them out, you can find them on the Antitrust Chronicle’s website. We’ve got a great lineup of esteemed Section 230 experts joining us today.

Eric Goldman, a professor at Santa Clara University School of Law and associate dean for research. He’s also a co-director of the High Tech Law Institute and supervisor of the Privacy Law Certificate, and he blogs regularly about Section 230 law and policy on his Technology and Marketing Law blog. His article for the Antitrust Chronicle was entitled Regulating Internet Services by Size.

Daphne Keller directs the program on platform regulation at Stanford’s Cyber Policy Center, and was formerly the director of intermediary liability at the Stanford Center for Internet and Society. She also previously worked as associate general counsel for Google, where she had primary responsibility for the company’s search products. She’s written extensively about the issues we’re discussing today, including an article published by the Knight Institute a few weeks ago, entitled Amplification and its Discontents: Why Regulating the Reach of Online Content is Hard.

Berin Szoka is the president of TechFreedom, an organization that, in its own words, is bullish on the future: for the most part, it’ll be great, if we let it. His article in the Antitrust Chronicle was entitled Antitrust, Section 230, and the First Amendment.

And finally, Kate Tummarello is the executive director of Engine, a non-profit that works with a community of high-tech, growth-oriented startups across the nation to support technology entrepreneurship. Before Engine, Kate worked on surveillance technology issues at the Electronic Frontier Foundation, and before that, she was a tech policy reporter at Politico and The Hill.

Welcome everyone. Let’s start with a few opening questions. Eric, a lot of the criticism of Section 230 has focused on the idea that it gives an unfair advantage to big tech platforms. Is that an accurate criticism of Section 230?


Eric GOLDMAN:

Yeah. Thank you, and thanks for putting this conversation together. It’s such an honor to be sharing the stage with my co-panelists, all of whom I’ve had the joy of sharing a stage with before, and I’m thrilled to do it again today.

Let’s start with just some foundational principles. Section 230, at its very essence says websites aren’t liable for third party content. The idea is to channel responsibility towards the people who are originating content, and not towards everyone else who is in the middle of the chain of the conversation taking place between the originators and the readers or the listeners. The idea is that, by channeling that responsibility towards the people who are creating or originating the content, we free up everyone in the middle to develop the kinds of systems that are best suited for their audience.

Some services choose to do highly moderated environments, where they really want to control that conversation. Other services choose to do more lightly moderated environments, where it’s pretty much allowing the conversation to be free flowing with just a pruning at the edges. So, given that Section 230 is about helping services decide what’s the best approach to curate content for their audience, there isn’t really an advantage based on size between big tech and something other than that, call it small tech if you want. It applies equally to all of the services, and it gives them all the same advantage.

Now, in practice actually, Section 230 grossly favors smaller services by making it so they don’t have to build the same kind of industrial grade solutions that the larger services choose to build. In that sense, it’s not an unfair advantage, but the advantage in Section 230 is actually weighted towards the smaller enterprises, not the bigger enterprises.

Whenever I see people talking about reforming Section 230 to take away the advantage given to big services, usually that person grossly misunderstands the actual beneficiaries of Section 230 and how much it benefits the people who are not big tech.

PERAULT:

Great. Thanks so much for that overview, Eric. Daphne, there seems to be an emerging consensus in Congress that Section 230 should be reformed, and though Republicans and Democrats both seem to want reform, they’re motivated by different things. Democrats typically want more content to be removed, Republicans generally want less. You’ve written extensively about the assumptions underlying each reform camp, and you’ve talked repeatedly about what you call the stubborn, misguided myth that platforms are required to be neutral. What is this myth? And why is it so stubborn?


Daphne KELLER:

Well, first of all, those weren’t my words, this is one of those things where you write an op-ed, and then the newspaper puts a title on it, so this is the Washington Post’s words. I think the reason they asked me to do that op-ed and the reason they came up with such a frustrated sounding title, is that the reporters who understand what 230 actually says couldn’t figure out why members of Congress kept insisting that Section 230 requires platforms to be neutral, when clearly it doesn’t.

So, I think part of where we get this idea that maybe every internet platform should be treated as either they have to be totally neutral, or they have to act like a publisher, is because a lot of people grew up in an era when those were the only two regulatory models that existed. We had phone companies on the one hand, who had to carry everything, and broadcasters and the New York Times, publishers, on the other, who had lawyers who vetted every single thing. But neither of those is what an internet platform is today or what we want it to be. If we lived in a world with those kinds of rules, where you had to either act like a neutral conduit or act like a publisher, you would have two kinds of services. You would have the common carrier services that carry everything, the tide of garbage that the internet, left unchecked, would bring us. All the bullying, all the pornography, all the pro-anorexia content, the Nazis, it would be that version, which very, very few people want to spend time looking at.

Or, it would be something more like the nightly news, where everything you see has been checked by a lawyer and only a few privileged voices get through. In that world, you might as well just go watch TV, because you’re not getting something out of platforms that’s different. What we do get out of platforms, and what is enabled by the fact that they are not put in this neutral conduit category or the publisher category, is a world where we can all go speak instantaneously and we can share our baby pictures with distant family immediately, and we can communicate even if we aren’t the kind of privileged voices who get access to appearing on the news or appearing in a publication.

To address the fact that platforms function in this way that’s in between those two categories, where we want some moderation to keep the conversation civil, but we also don’t want everything so over-lawyered that we as users never get to participate using platforms the way we’ve become accustomed to, we have laws like 230, which was designed to enable exactly those things, and which one of the law’s creators, Senator Ron Wyden, has called a law for people who don’t own television stations.

PERAULT:

Thanks so much, Daphne. Berin, you wrote in your Antitrust Chronicle piece about the important role of Section 230 in providing a procedural shortcut in litigation. Why is this shortcut important? And how will it affect antitrust suits?


Berin SZOKA:

Well, before I answer your question, let me just put a point on two things that Daphne just said. Ron Wyden’s quip there is cute, but it is inaccurate in a very important way. Section 230 does protect a company that owns television stations or radio stations, and this is really important, because a lot of the frustration that’s being expressed today is on behalf of traditional media arguing that somehow Section 230 favors digital media, and that’s inaccurate in the sense that Section 230 makes it possible for local broadcasters, local newspapers, any form of traditional media to operate websites that host user content. It would not be possible for them to do so without 230. Just wanted to clarify that, and also of course that 230 applies not just to providers, large or small, of interactive computer services, but also to users. It wouldn’t be possible for us to re-share other people’s content, for example, if it weren’t for the protections of Section 230, and no less than Donald Trump himself used Section 230 to get a lawsuit dismissed that was based on his re-tweeting someone else’s defamatory tweet. That just gives you an idea of how Section 230 is applied.

Eric and Daphne are exactly right about what Section 230 does. It effectively abolishes the distinction between platforms and publishers. And to give you another sense of how little that is understood on the Hill, consider the Wicker bill, which imposes all sorts of common carrier duties to carry other parties’ content, unless you say in public that you’re a publisher. When I read the bill the first three times, I didn’t understand what that was supposed to do. And then I finally understood when I saw an email from someone supporting the bill saying, “Ah, this bill forces websites to choose between being protected by 230 or admitting that they are publishers and not being protected by 230.” Well, that’s not how the law works. It doesn’t matter whether you say you’re a publisher, the law still protects you, because C1 says those distinctions just don’t matter, period, end of story. It continues to be the case that not only reporters, but also the ranking member of the relevant committee, do not understand how the law works.

Now, to answer your question, what does the law do?

PERAULT:

Berin, actually before we get there, I just want to ask you, because now you’re talking about your own set of myths, the same question that I asked Daphne. Why do these myths about the platform/publisher distinction exist? Why do you think they persist? As you said, the statute itself actually makes clear that there shouldn’t be a distinction.

SZOKA:

Well, two basic reasons. One, this is hard stuff, and few people on the Hill have the attention span any more to work through legal issues of any complexity. And second, that’s especially true when motivated reasoning kicks in, and they want to believe a certain outcome. We’ve seen this for the last three years, that Republicans have become completely impervious to any explanation of how 230 works, because that’s not how they want it to work. And they’ve come up with all sorts of theories for how the law should be read that bear no relationship to the text, to how the courts have interpreted it, to any standard of construction, or to any understanding of how the law works. And it’s not just Congressional Republicans, I mean, it’s all the media stories about this. The current cover story in Wired magazine grossly misunderstands the history behind Section 230 and why we have the law.

These mistakes are very common, and even when you get lawyers who write about them, as was the case with the Wired magazine cover story, rarely do they take the time to actually work through the difficult issues that they’re writing about. They just leap to conclusions, their editors don’t know any better, nobody else knows any better, and they don’t bother to ask any of the people who might know better, some of whom are on this call. So, they publish mistake after mistake. That’s the vicious cycle that we’re trapped in.

May I answer your question though, originally?

PERAULT:

Please, yeah. Go ahead.

SZOKA:

Sorry for all the detours. You asked me what does Section 230 do? What do I mean when I say it provides a procedural shortcut?

Well, in a nutshell, Section 230 does two different things. On the one hand, it says that you’re not liable for third party content, as you would have been under the common law if you moderated any content on your website or if you had reason to know of content on your website, the variety of ways that distributors or publishers could be held liable.

First, Section 230 says you can’t be held liable for any third party content under any of those theories. And then the second function is to say you can’t be held liable for moderating user content, for removing it, for refusing to host it, for withdrawing it, etc.

Those are the two broad functions. Now, people make the mistake of assuming that those two functions map onto the two most important operative provisions of the statute, because it looks like they do. C1 says you can’t be treated as a publisher, and C2A says you can’t be held liable for content moderation. But, in fact, as the courts have interpreted the statute, I think quite correctly, C1 actually covers both functions. It says that you may not be treated as a publisher, and as Zeran, the first appellate court decision to interpret Section 230, noted in 1997, that means you cannot be held liable for any of the things that publishers do, which include withdrawing content or refusing to run it.

So, both of those key provisions of Section 230, C1 and C2A, protect content moderation. Now, to answer your question, in that sense, both of those provisions do not change the common law, because under the common law, the First Amendment protected your right not to associate yourself with content you refuse to carry. And all Section 230 does in that sense is to provide a procedural shortcut, that is, a statutory right to have a lawsuit dismissed when it is filed against you because you have refused to carry content. You would have had that right under the First Amendment, but you would have had to litigate to vindicate that right, which could be so expensive that the right was essentially useless. So in that sense, Section 230, as I say, provides a mere procedural shortcut. It does not change the underlying allocation of rights.

But this is also hard to follow, because the other function of 230, the function that says that you’re not liable for user content, that does change the common law, because again, the common law would have held you responsible for that, and that would have really made it impossible for the internet to develop.

PERAULT:

Thanks, Berin. Kate, you lead Engine, a policy advocacy organization that’s focused on supporting startups. Maybe just as a first question for you, maybe we could pick up with what Berin just said about the procedural shortcut. Why is that something that is important to startups? And does Section 230 help or hurt a startup ecosystem as they try to compete with larger companies?


Kate TUMMARELLO:

Yeah, so to take that in reverse, Section 230 absolutely helps the startup ecosystem in many ways, including as they try to compete with larger tech companies. Berin’s right, although I wouldn’t describe it as a mere procedural shortcut, I think it’s a very important procedural shortcut. One of the things we often hear, and we could do a whole event on the myths of Section 230 and content moderation, but another very pervasive myth is people saying “Well, let them win in court. This is legal content, or this is illegal content, or they would win on the first amendment, just let them fight it out in court and win.”

My response to that is the only people who can afford to fight it out in court and win are the exact people you are trying to get at with your proposals. It is incredibly expensive to be sued. I’m not a lawyer, I think I’m the only non-lawyer on the panel, so no offense to lawyers, but they tend to be very expensive, even if you’re right. Even if you’re in the right, it can still cost a lot of money. We’ve done some research into the cost of litigation, and even using Section 230 to get a case dismissed at the earliest stages can cost tens of thousands of dollars. Once you move into discovery, you’re talking about hundreds of thousands of dollars.

Most startups don’t have that money, and if they do, they are using it to hire engineers and build products and find users and advertising and marketing and all the kinds of things that we expect young, innovative companies to do to grow. If they have to divert those resources to fending off lawsuits, especially lawsuits that are ultimately meritless, they lose out, the innovation ecosystem loses out, and I would argue all of us lose out. Section 230 is a very important shortcut that keeps companies from being sued out of existence at their earliest stages. And it is also a really important safeguard when investors start thinking about where to invest their money in companies. No investor wants to give a company a million dollars just to watch half of it go to one lawsuit. Without that kind of safety net, I think you would see a lot less investment flowing to companies that host user generated content, and that’s … We talk about 230 in the social media context, but that’s all kinds of things that are far outside of social media.

That’s things like photo hosting and editing, and user review websites. We have a startup in our network that is essentially a user-generated-content e-commerce site for agricultural equipment. That is definitely not social media. That’s not what you find on most of Facebook, but it is very important in agricultural communities, and people use it and they host all kinds of posts for sheep and tractors and stuff like that, and they’re as protected by 230 as everyone else. So I think that’s something that often gets lost in these kinds of conversations about 230. We all would lose out if companies like that couldn’t exist.

PERAULT:

I assume that you’re in regular touch with startups, obviously, because they’re part of your network, and we now have literally dozens of bills that have been introduced in the last Congress and the current one, to reform Section 230. What are the kinds of things that you’re hearing from companies? Do they have stories that they’re telling you about how they’d be affected by some of the various different bills that are under consideration?

TUMMARELLO:

Yeah. I think a lot of them are concerned about the consequences for content moderation efforts that they currently undertake. Content moderation, as everyone here can tell you, is incredibly difficult and time consuming and expensive, and also inherently imperfect. You’re never going to make a content moderation decision that makes everyone happy. So, for startups, facing potential liability if 230 is amended would mean that they have to keep a lot closer tabs on everything on their platform, and for a small company, that’s impossible. We often talk about size thresholds. I know Eric has written about this. It’s not like having a million users means you have a lot of money to hire tens of thousands of content moderators. So, for the startups that are especially trying to appeal to niche communities, or stay particularly relevant on a given issue, content moderation is already really difficult, and adding liability to that, when you make mistakes or when people are just unhappy, would be ruinous.

PERAULT:

Eric, as Kate just said, your piece in the Antitrust Chronicle was explicitly about this issue, about the idea that there should be different obligations based on a company’s size. You wrote that good policy ideas should apply to all enterprises, regardless of size. Bad policy ideas should be ditched rather than imposed only on large enterprises. So, I have two questions for you. First, can you talk a little bit about this development? What are these different size based obligations? What are we starting to see now? And why do you think they may be problematic?

GOLDMAN:

If you look at the mountain of legislative proposals you referenced earlier, many of them have some sort of way of trying to distinguish between enterprises based on their size. The typical idea is to really segregate between big tech and non big tech, and subject big tech to more stringent rules. From a policy standpoint, there are times in which that might be in fact justifiable. If we want to foster the kind of growth of entrepreneurship that Kate and her organization work on, we might need to provide some additional zones of flexibility for smaller enterprises, until they can get their feet under them, until they’re able to financially afford more complex obligations.

But, that’s not what we’re really seeing in the bills that we’re talking about. What we’re really seeing is the idea of let’s define a universe of companies, and usually in the drafter’s mind, Facebook is public enemy number one. Let’s define some kind of boundary around something like Facebook and then say let’s hit them hard. Let’s make big tech pay.

What we’re really seeing is this stratification of obligations with the idea of trying to punish big tech through these regulatory obligations.

PERAULT:

Great. And then, why do you see it as so problematic then? So I understand your point, we shouldn’t potentially harm other companies just because we’re targeting a company like Facebook for instance, but what’s so misguided about this approach to legislative reform?

GOLDMAN:

It’s not inherently misguided. In fact, we can find dozens, if not hundreds, of examples of circumstances where it actually is logical to make that distinction. But, treating larger enterprises more stringently only makes sense if the treatment of those large enterprises is in fact good policy. What I feel like we’re seeing is a bunch of legislative proposals that are really just terrible policy, with the idea that we’re only going to impose them upon these evil players like Facebook. As some kind of justification for bad policy, let’s limit the number of enterprises we subject to these bad policy outcomes. That to me is not okay. I wanted to make sure that we thought about how to properly draft size-based distinctions in the internet context. What are the mechanics to properly draft that?

But I didn’t want that to become an excuse to say, “Well, now that we know how to do it, let’s go come up with really terrible ideas to impose on big tech.” And in the paper, we talk a little bit about some of the bad consequences of the proposals we’re seeing targeted towards big tech. Proposals that would lead to consequences like much more draconian removal of content on those big services, or literally services simply deciding let’s get rid of user generated content altogether. We’ll just move over to professionally produced content. We’ll throw up paywalls to be able to afford it. Or we’ll exit the industry altogether.

The proposals that we’re seeing are really about reshaping the very architecture of the internet. We shouldn’t do that, even if it only applies to big companies, if it’s still going to lead to bad outcomes.

PERAULT:

And just to get specific on the legislative mechanisms that would result in those outcomes you think are problematic. What are they? Or what are the tools they’re using that you think are misguided in these particular instances?

GOLDMAN:

Well, the two principal mechanisms that we’re seeing are things that distinguish companies based on revenue, and things that distinguish companies based on site usage, things like the number of users on a site. Revenue based distinctions are logical. That makes sense: the more revenue they have, the more they should be able to afford more burdensome legislation. But, most of the bills don’t limit that revenue to the activity that’s purportedly being regulated. They will say if your enterprise has $100 million a year worth of revenue, then you must do all these things, even if the amount of revenue coming from user generated content is $1. At which point, we know the enterprise is going to say: we’re not going to obligate ourselves to do all these terrible things for this very small amount of revenue. We’ll exit from the industry.

With respect to audience size, and Kate had mentioned this earlier, you might have a site that actually has a very large following, but doesn’t make much, if any, money at all. Therefore, putting these onerous burdens on them will force them to either change their business model or, again, simply to exit the industry, because it’s no longer sustainable. So, things like revenue and site usage are both logical approaches, but they need to be properly tailored. If you’re really trying to say we’re going to treat Facebook differently than something like Wikipedia, you have to actually draft it in a proper way, or you’re going to end up miscalibrating it.

PERAULT:

This seems like a really critical issue for the topic that we’re discussing about competitive dynamics in the tech sector and how it interplays with Section 230 reform. I’m curious if anyone else has a view on the set of issues that Eric raised? Daphne?

KELLER:

Yeah. I agree with Eric that the metrics that we have seen in US legislative proposals so far are often deeply flawed. They accidentally sweep in the farming implement platform that Kate was talking about, or they accidentally sweep in Walmart.com or very small entities or Wikipedia. But, I do think that if we could overcome that hurdle, if we had smart economists and people sit down and try to come up with a really good metric to capture who are the giant platforms we’re actually worried about? Then it seems quite defensible to me to have obligations on the mega platforms that don’t exist for little startups or for the New York Times comments section, etc. That includes things like procedural clarity when they take down content. Being relatively clear, as clear as it’s feasible to be, about what their content moderation policies are, having appeals processes that are clear for users who think that their content has been taken down in error, etc.

And if you are in a universe outside CDA 230, where there’s a legal obligation to take down certain content, as is the case in the US for copyright and for things that are federally, criminally barred, then you might want to think about whether there should be obligations on the big platforms, even more so in those situations, because there you have the force of law driving them to take things down, and often take things down in error, in ways that should be subject to appeal. So, having those kinds of obligations put on the big guys and not the small guys makes sense to me, and outside the US, in the non-CDA 230 world, I’ve been an advocate for those kinds of procedural protections for users for a very long time. But I think we’re seeing a shift where more and more people accept that those kinds of procedures are useful and valuable, and now they’re erring way on the other side, saying every platform should have to do this, even a platform that has five employees should have to have a relatively elaborate process around any content moderation.

That, I think, is a very bad trade-off for competition goals, which should allow those smaller platforms to grow and thrive under the same conditions that the giant incumbents got to grow under. So, for competition purposes, we don’t want this heavy set of procedural obligations around content moderation. And then, on the other hand, maybe if you’re already at this point where you are the giant incumbent, then having those procedures in place to protect users becomes a more reasonable obligation.

PERAULT:

Berin, Kate, does that make sense to you guys?

SZOKA:

Well, would that we were only talking about procedural safeguards. That might be what Daphne wants us to talk about, but of course a lot of what we’re talking about is broadly worded non-discrimination requirements or must carry mandates. And this is true on both sides of the aisle. I mean, for example, the Cicilline bill that has been proposed to deal with online platforms is clearly crafted with Amazon in mind and other online sellers. But, I’m not sure that the authors have understood that when they talk about how covered platforms deal with businesses that use the platform, they’re not just talking about sellers on Amazon. They’re also talking, the way they’ve written the bill, about Gateway Pundit, and whether Facebook treats Gateway Pundit differently from the New York Times when they are similarly situated. So, in other words, you have, whether they are doing it consciously or not, both Republicans and Democrats trying to impose common carriage style principles on internet platforms, and in most cases, the very largest ones.

What I would say to all of this as a telecom lawyer is that’s not how common carriage works. Common carriage really has nothing to do with your size. Fundamentally, historically, common carriage had to do with the nature of the offering. If you hold yourself out as serving everyone, for example, the innkeeper is the classic example, before railroads, you have an obligation to serve all parties, because they’re dependent on you. That concept gets applied in the context of railroads for almost the same reason. You have essentially a monopoly over the shippers in your immediate area. Farmers can only sell their grain by transporting it on your railroad. The same dynamic applies.

Now, you could take those ideas and reasonably make an argument, though I don’t think it’s a great one as a policy matter, that broadband providers, for example, do not hold themselves out as providing an edited service. In fact, they hold themselves out as not doing that, as saying that they won’t block or throttle content, and the FCC net neutrality rules from 2010 and 2015, in the most important detail that no one paid attention to until this finally got litigated, only applied to those broadband providers that made such representations. In other words, the essence of what was subject to regulation was not size, it was the nature of the offer.

We’re no longer talking about that. If we were talking about that, I think we’d be on much firmer ground. We could distinguish between those particular services that are offered by websites, and particular aspects of how a particular website might function, and recognize that maybe some of them are more amenable to common carriage regulation, while others are in fact edited services that, in the instances I’ve been talking about, involve the exercise of editorial judgment that is protected by the first amendment.

So, we’re not having that conversation. As a result, we’re talking about introducing common carriage regulation to effectively supersede the right of private parties to decide what kind of speech they would carry. So, I think that’s a real shortcoming of the way that we think about this.

I just would say one other thing. To the extent that the courts have been willing to uphold must carry mandates based purely on gatekeeper power, most notably in the case of Turner Broadcasting in 1994 and 1997, the idea was that cable operators had a unique physical connection to the home. It wasn’t just that they were very popular, it wasn’t just that they were the only newspaper in town, as was the case in Miami Herald, where the court said that didn’t matter, the newspaper still had a right to decide what content to carry. Cable providers were the only way to carry content to the consumer, period.

In that instance, the court did uphold must carry mandates. But, notably they did so only under intermediate scrutiny. Under strict scrutiny, that would not have mattered. So, then the question becomes what is subject to what level of scrutiny?

And in a nutshell, the cable providers never objected to the nature of the content they were being asked to carry. They merely objected to losing their right to make money off of putting the most profitable channels into their programming package. So, my view is that where the thing at issue is a business practice, such as perhaps on Amazon, maybe you could get to intermediate scrutiny, maybe you could have those mandates upheld. But where the issue is a website refusing to carry the kind of content that Daphne summarized earlier because they find it abhorrent, those objections, I think, will ultimately be subject to strict scrutiny, and it won’t matter how much gatekeeper power the platform has.

TUMMARELLO:

If I could just jump in, echoing something Daphne said. I really liked the framework of thinking about a balance between protecting users and protecting innovation, because I think, to Eric’s point, that’s not really the starting point, or at least it doesn’t feel like the starting point, for most of the legislative proposals we’re seeing. It really does feel like people want a pound of flesh from an industry, and it doesn’t matter what the collateral damage is. And then, just because Daphne brought up appeals processes for content that’s been wrongfully taken down: I think that’s one of those things that sounds good and in the best cases would work really well, but in practice, especially for a smaller company, and probably even for the biggest companies, would end up being really easily abused, much like we worry about amending 230 to open the court doors to bad faith actors who want to punish companies, whether or not they are right.

I do worry about opening up other avenues for punishing companies. Sending bad faith takedown notices because you don’t like the speech or because you don’t like the person speaking, I think, could spill over pretty easily into sending bad faith appeals notices. That’s something that most companies, even big companies, would be easily overwhelmed by, but especially small companies would be drowned by. I think that’s a good example of another thing to think about with that framework: how do we make sure the right stuff stays online and the wrong stuff doesn’t? And I would love to hear a lot of lawmakers answer that question, because I don’t think they can even agree on what the right stuff and the wrong stuff is.

PERAULT:

Kate, what’s your view of size based restrictions generally? Because presumably some of the companies in your network would be beneficiaries, at least in theory, of that legislative approach.

TUMMARELLO:

Yeah. Generally, I think we’re on the same page as Eric. Good policy doesn’t need thresholds. Let’s just write good policy. But, recognizing that we don’t have the pen, I think there are instances, like Eric said, where it makes sense, especially around things that have a regulatory cost not in terms of money spent, but in terms of time. And if you’re rebuilding infrastructure or something, if you’re building a whole new process in, that’s really hard to do if you’re a small company that’s already launched. But generally, they create kind of weird artificial ceilings.

One thing that we talk a lot about is the number of employees as being maybe a better metric than the number of users, but even that, you can see a company saying, okay, I’ll just start hiring contractors. If they don’t count toward the threshold, then why would I add anyone to the headcount? I think there’s a tiny place for it, but content moderation and Section 230, these issues don’t really map onto the size of the company super well. You can have a very large company with a very small amount of user generated content and vice versa. So, I’m not really sure they actually get at any problem, and I think they might cause more problems than they solve.

PERAULT:

I’m curious about content moderation regulation generally, if what we’re trying to encourage is diversity in content moderation practices. Eric, in your opening, I think you said something about 230 being the basis for a wide range of different content moderation practices. And Daphne, when you were distinguishing between restrictions for smaller companies versus obligations on larger companies, I would think those obligations on larger companies might, in some instances at least, create more homogeneity in how various companies, at least the large ones, approach content moderation. If they’re required to do X, Y, Z things for an appeals process, for instance, are they likely to do just those things and innovate less? I’m curious, is this a fraught area in terms of trying to encourage diversity? Or Daphne, is your view, for instance, that we just need to have some baseline, because then at least we’ll have the baseline?

KELLER:

I do think it is a fraught area, and I think there are a lot of proposals out there that risk driving the big platforms into being a relative speech monoculture, all applying the same rules substantively, in terms of what they prohibit, in addition to procedurally, in terms of whether they have appeals and how those appeals work. And, we see a problem with the substantive speech rules that the big platforms come up with potentially trickling downstream to their smaller competitors, so this homogeneous set of speech rules you’re talking about becomes more widespread.

The most obvious example of that right now is the Global Internet Forum to Counter Terrorism, or GIFCT, which is this voluntary entity that administers a database of hashes, or fingerprints, of known violent extremist content. Hundreds of thousands of images or videos that somebody identified as violent extremism, but nobody knows what the images are, or the videos. Nobody knows whether they’re mistakes and the wrong things are being taken down, or whether there are patterns of bias in what’s being blocked. This system for blocking things is farmed out to smaller companies that don’t necessarily have staff to check what a video in Arabic is saying, for example.

So, I think we can expect this kind of spread of a speech monoculture. That is absolutely something we want to look out for as we look at possible regulations, as I think Eric was gesturing to at the very beginning. This is something that Section 230 was designed to fix, or to make it possible to fix, by enabling there to be a wide array of different kinds of platforms with different kinds of speech policies, and an option as a user to go participate in places with different discursive rules, because CDA 230 protects the content moderation choices that the platforms make.

TUMMARELLO:

Can I also-

PERAULT:

Yeah. Speech monoculture. Eric, you want to start?

GOLDMAN:

Sure, just quickly a small point. Every legislative proposal that is based on common carriage or must carry, actually would guarantee a uniform industry standard. There would be only one approach, which is no moderation whatsoever. Those laws would effectively blast away what I call the house rules, the proprietary idiosyncratic rules that sites adopt above and beyond what the law requires. All those house rules would be eliminated by all must carry obligations.

So, there’s a non-trivial segment of the regulators today who absolutely want there to be a speech monoculture. They just want it to be one where everything stays up.

SZOKA:

I would like to just interject here. We at TechFreedom filed a brief in the litigation over the Florida law, which attempts to impose a must carry requirement. Of course, its goal is to do precisely what Eric just mentioned. Our brief, however, explains that that itself is a misunderstanding of common carriage regulation. Common carriers actually did have the right to do what amounted to content moderation. You could always throw a disorderly or drunken person out of your inn, you could kick them off the train, and even, for example, telephone companies had the right to decide that they weren’t going to carry certain classes of content in the yellow pages, like price advertising. What is different about common carriage, and what burdens their speech, is that they have to justify those decisions as being reasonable. So, for example, you might be able to justify banning all political ads, but you get into litigation with the regulator over the reasonableness of that.

The legal analysis here is a little more complicated. It’s not that they couldn’t do any content moderation, it’s that this would become a weapon to force them to litigate. This is where we have to go back to Kate’s point, right? Which is that the reality of content moderation is that very few companies could afford to litigate. So even though, under strict common law precedents, one could make an argument that they could do so, what you’ve done is to take an area that currently works the same way that newsrooms work, where the publisher of a newspaper has the right to decide what op-eds it’s going to carry and what it’s going to allow in its personal ads, that’s how content moderation works today, and instead impose a regulatory overlay upon that, where someone who is disgruntled can file a complaint with the regulator and allege that your practices were not reasonable and force you to litigate over that. Once you start doing that, the practical result will be to significantly discourage content moderation or in fact to cause platforms to shut down altogether. But that is not common carriage principles at work. That is the application of those ideas to a context where they don’t scale.

This is really the most important point that the listener needs to take away here: we live in a uniquely litigious society. This country has 15 times more lawyers per capita than Canada. That is a shorthand for understanding how different our legal systems are. Most notably, for example, the Commonwealth countries, which of course many people, like that Wired magazine article, point to, saying, “Oh look, websites exist in Canada and Britain, so the internet will be fine without Section 230.” Well, those countries have loser-pays rules, such that if you sue and you lose, you’re responsible for everyone’s legal costs, 100% in the UK, generally a third in Canada.

That discourages a lot of litigation. We don’t have that in this country. We have a uniquely litigious society. Essentially, to get back to your original point, Matt, Section 230 serves as a kind of tort reform. It allows us to avoid that overlay, that deadweight loss caused by the legal system, when applied to hosting content in general, which you wouldn’t be able to do under the common law in the US, or even to content moderation, which you have a First Amendment right to do, but that right would be useless without the procedural shortcut that is afforded by Section 230.

PERAULT:

Kate, it looked like you wanted to come in on this point as well.

TUMMARELLO:

Yeah, just quickly back to your question about the baseline.

PERAULT:

Yeah.

TUMMARELLO:

I think there’s ample evidence of companies innovating past existing baselines in other contexts, so I’m less worried about companies giving up and saying, “Okay, this is all we have to do, this is what we’ll do,” because I think everyone is always striving to provide a better user experience, and that’s true of keeping all the bad things off of the internet where they can. But, the problem with raising the baseline is that whatever innovates off of that becomes the next baseline, and we see this in other contexts all the time, including the DMCA, where some companies have, some would say, gone above and beyond, and some would say found new ways to suppress user speech, it depends where you’re sitting on that debate.

We already see policy makers calling for that to be the new standard, what those giant companies have spent a hundred million dollars on. So, if we start with a baseline, and then big tech is able to innovate on top of that, then that becomes the new baseline, and suddenly only big tech can keep up. I really worry about moving the goalposts there, because I think it ultimately benefits the people who can afford to keep up with moving goalposts.

PERAULT:

So, I have a question for everyone now. I’m curious why the current debate in Congress looks the way that it does. I’m always struck, in the conversations that we have with smart people from think tanks and academics and organizations like Engine, that the debate we have sounds so different from the one that’s happening in Congress, or even the one that’s happening on the op-ed pages of newspapers. And it’s an interesting disconnect, because there seems to be, I think, a fair amount of consensus from those in the academy, for instance, who cover tech issues pretty closely, about a general direction of travel on these issues. Daphne, I couldn’t tell if you were agreeing or disagreeing with the look that you were giving. If you disagree, I’d love to hear it. But, the debates feel very different. I’m curious what that disconnect is about. Daphne, you want to start?

KELLER:

Yeah. I’ll take a stab at that. I think in part, there’s just a pretty steep ramp up to understand these issues, and DC hasn’t been doing it that long, so we have a lot of people who look at questions about content moderation and they think about one question at a time. One instance of defamation that gets reported in the New York Times or whatever it is, and they think, “Well, I could resolve that,” and they aren’t thinking about the incredible scale that content moderation takes place at for a lot of companies, and the fact that this is systemic. You have to come up with a system to process all of that, and so the solutions aren’t about passing laws that are like what you would do if you were the New York Times deciding if something was defamatory. It’s kind of more like putting a system in place for food safety at industrial scale.

If you compare the EU conversation, they have this big pending law, the Digital Services Act, which is an overhaul of platform liability for user generated content. That is very sophisticated. Their conversation sounds a lot like what we are talking about here, and the reason is because they’ve been thinking about and trying to draft regulation on this since 2011, so they have 10 years of civil servants being deep in this and doing consultations and getting comments from civil society and academics and companies. So, just through education and time, they have arrived at having this more complex conversation. We’re just not there yet.

PERAULT:

Yeah, Berin?

SZOKA:

Amen to that. There are two other reasons. One of them is that they have civil servants who can work on these things. In this country, we take it for granted that this is how things should work, but it is very bizarre that we have just a handful of staffers in each chamber who are responsible for these issues. That’s not how things work in Europe. They have a large civil service staff to draw upon to better understand these things and to build expertise over time.

The second difference: to the extent that this debate among Democrats is about trying to crack down on certain forms of bad content, attacks on 230 are being used as a proxy for that, to try to indirectly bring about the takedown of content that can’t directly be required to be taken down. Europe doesn’t have that problem. Europeans can just directly regulate that content and impose direct legal mandates.

But in this country, because of the First Amendment, we find ourselves with many people who are trying to work their way around that constitutional prohibition on government meddling in media, whether that is to indirectly coerce the takedown of awful but lawful content, or on the other side of the aisle, to encourage companies to keep up content that they would have taken down. That really perverts our debate, and I think prevents us from having an honest debate about the reality that when it comes to lawful but awful content, it is really up to private parties to decide what content to carry.

PERAULT:

Kate or Eric, either of you have a view on this one?

TUMMARELLO:

Sure. I’ll just say I think that there is a very pervasive and probably counterproductive anti-tech sentiment, stemming from a handful of decisions from three companies, that has made even talking about this very difficult, because there’s this assumption that if you think the internet has created good things, and you want it to continue creating good things and good opportunities, then you must be totally blind to all the ills of the internet. I think maybe I heard Eric say this, but the internet for the most part did not create new ills, it just kind of made it easier to find them and for them to find each other. So, until we can solve humanity’s problems, I don’t know that we can solve content moderation problems, especially not with 230 reform.

PERAULT:

Eric?

GOLDMAN:

Yeah, just one more point to add, and it’s a little bit of a stereotype, so I apologize for glossing over things, but for about the first 20 years of the internet, many regulators actually had some humility about their ability to properly regulate the internet. They knew that they were dealing with something that was powerful and big, and that actually discouraged many regulators from getting too crazy. We’ve seen in the last five years or so just the absolute abdication of any regulatory humility. Regulator after regulator simply has come in and said, “I think I know how to run the internet better than anyone else in the world, better than any company that’s currently doing it, and I’m going to dictate that for the entire industry.” And you multiply that by the number of regulators who think that the internet is in their purview, and you can see that we’re just seeing this flood of bad proposals coming from people who are no longer feeling any inhibition whatsoever about the possibility that they might just screw up.

To Kate’s point about the politicization of the issue, we’ve simply seen that people no longer seem to care whether or not it’s good policy. It’s purely about does it message well to their base? To whoever they’re trying to please? And if it messages well, then the fact that it could actually be terrible policy is totally immaterial. That’s I think what we’re going to see in the Florida situation. The Florida legislature was very pleased with itself about passing the social media regulation law, and if it is struck down, as I hope it is, they’re going to be taught that actually, this is what happens when the barking dog catches the car. You actually caught the internet with your regulations. But did you really, really mean what you say? And I think the answer’s going to be a clear no.

PERAULT:

Okay, so I think we fairly roundly criticized the status quo. What’s the right answer? How do we actually move forward on this issue productively? Anyone want to go first on that one?

KELLER:

I’ll do it.

PERAULT:

Thank you, Daphne.

KELLER:

Oh, sorry Kate. I think we should prioritize looking at competition and privacy. We have a complex set of policy issues that the rise of internet platforms, the very big internet platforms, have put in the foreground. And, a lot of the problems that we’re seeing, I think are much better viewed through a lens either of competition law or of privacy law. We really need federal privacy law.

I am with Berin. There’s some real problems in the particular laws that were introduced last week, in particular the Cicilline bill seems to open the door for Alex Jones and white nationalists to sue saying they should be ranked above the New York Times. I don’t even know what happens in that kind of litigation, but it sounds like a horrible mess, and expensive and useless. But, well crafted competition law, well crafted privacy law can help solve a bunch of problems, and then we can step back and say, “Is there actually a 230 problem that is distinct from those two issues?”

PERAULT:

Kate?

TUMMARELLO:

Daphne’s answer was much more thoughtful than mine. I was just going to say I think every staffer or regulator who thinks that they have the right 230 answer should have to spend a week moderating content at a medium sized tech company and just get a sense for how hard these questions are, because, to Daphne’s point earlier, it feels like someone has an idea, they see one problem, they see a clean solution, and they seem to forget the rest of it’s out there. I would really encourage everyone who thinks that they know the answer to this to walk in someone else’s shoes for a bit.

PERAULT:

Berin, what do you think?

SZOKA:

And if they won’t spend a week doing it, they could at least maybe watch some of the video from the content moderation online conferences that CDT and CKI and I think Engine hosted. Just walking people through hypotheticals, and demonstrating how inherently difficult and subjective content moderation is, and how few people will agree on what might seem to you to be easy questions.

But, as to Daphne’s suggestion about dealing with competition concerns, I encourage everyone to read my article in the CPI journal, which is also posted on SSRN. In a nutshell, you can bring an antitrust suit, or craft other economic regulation, for business practices. The media are not immune from enforcement of the antitrust laws or from regulatory scrutiny.

For example, if you are the Associated Press, and you allow the newspapers that are members of your pool to veto the entrance into the pool of a new newspaper that tries to start up in their town, that is an anti-competitive practice, and you’re not immune from the antitrust laws because you’re a newspaper. The same goes for the radio station that enters town and finds itself the victim of a boycott by the local newspaper, which refuses to carry advertisements from local advertisers that also buy ads on the radio station. That’s an anti-competitive practice on the part of the newspaper. These are clear Supreme Court decisions from the 1940s and 1950s. I discuss them in my piece.

That is the kind of antitrust action, regulatory action, that you could bring today against new media companies. What you can’t do is bring an antitrust action because you don’t like the editorial decisions that are being made by these companies. This is, again, the problem of motivated reasoning I mentioned earlier: especially for Republicans, those editorial decisions are precisely why they want to bring some kind of regulatory hammer down upon big tech. And the First Amendment stands in their way, and they are blindly shaking their fists in rage, because they want to do something and they don’t want to be told that there’s nothing that the government can do. So, read my paper. I lay it all out there. And I will just say, on the other side of the aisle, that you do also hear Democrats say that somehow the answer would just be shrinking these companies and bringing them down to scale. That’s not going to really fix the problem, because the reason that Facebook and Twitter and all the other social media sites removed QAnon content and Nazi content and white supremacist content and things about eating Tide Pods and so on and so forth, is not because they’re big. Even the small sites do that.

They removed that content because their users, for the most part, don’t want to see it, and no advertiser with a reputable brand, so everyone other than Mr. Pillow, no advertiser wants their products displayed next to that sort of content. That’s not going to change, no matter how small these services are. You have a site like Parler that caters to a niche audience of people who apparently celebrate bigotry and lunacy and misinformation. That is very much an outlier. And it is worth pointing out that Parler doesn’t have advertising. They have no business model that involves getting money from traditional sane companies to promote non-politically oriented products. That’s the reality of this market. That’s not going to change no matter how much you break up tech companies.

PERAULT:

Thank you, Berin. Eric, you get the last word. What’s the path forward?

GOLDMAN:

I want to emphasize two points that Kate had made. First, this idea that the internet acts as a mirror on our society. That we as a society, as humans, are flawed in how we interact with each other. That’s going to come on to the internet no matter how much content moderation takes place. We’re never going to squeeze that out of humanity. So, it’s frustrating to see how many proposals are basically designed to eliminate the kind of anti-social behavior we see in the human condition, and expect internet companies to fix that. We’re never going to be satisfied with that approach.

The other thing that Kate mentioned is about how content moderation could never be done perfectly. No matter what decision is made about a particular item of content, somebody is going to say you should have made a different decision. So, it helps us if we start with these two principles, one that the human condition is flawed. We’re always going to do awful things to each other. And two, no amount of content moderation is ever going to fix that perfectly. We’re always going to second guess it, or have people who feel like the outcome was wrong.

If we accept those two premises, then I put on the table that Section 230 might be what I call the least worst outcome, that more content regulation or less content regulation will each only relocate the problem, not eliminate it. But Section 230’s brilliance is that it allows a wide spectrum of content moderation practices. And that might mean that the places people choose to be on the internet are better tailored for the kinds of things that they’re looking for, and that we can have more regulated and less regulated environments within this broad spectrum. So, I put on the table that the path forward is actually to accept the possibility that, for all of its flaws, all the critiques that it attracts, Section 230 might be the least worst choice that we have, and if so, any decision we make to change Section 230 is only going to take us to an even worse outcome.

PERAULT:

Thanks Eric. What a wonderful discussion, made wonderful by all of you. Thanks so much, Berin, Eric, Kate, Daphne. Thank you very much.