Letting the Internet regulate itself was a good idea — in the 1990s

July 7, 2019

It’s time for lawmakers to step back in, carefully.

By Margaret O’Mara, New York Times

Tech regulation may be the only thing on which a polarized Capitol Hill can agree. “We should be suing Google and Facebook and all that, and perhaps we will,” President Trump recently declared. Senator Elizabeth Warren, a Democratic presidential candidate, has made the breakup of tech companies a central plank of her campaign. Even Silicon Valley-friendly contenders like Pete Buttigieg have called for curbs on the industry’s power.

If Americans buy into the idea that the tech industry is an entrepreneurial, free-market miracle in which government played little part, then the prospect of stricter regulation is ominous. But that isn’t what actually happened. Throughout the history of the tech industry in the United States, the government has been an important regulator, funder and partner. Public policies — including antitrust enforcement, data privacy regulation and rules governing online content — helped make the industry into the innovative juggernaut that it is today. In recent years, lawmakers pulled back from this role. As they return to it, the history of American tech delivers some important lessons.

Advocates of a big-tech breakup often point to the precedent set by the antitrust cases of the twentieth century. The three biggest were Microsoft in the 1990s; IBM from the 1950s through the 1980s; and the moves that turned AT&T into a regulated monopoly in 1913 and ended with its breakup seven decades later. Microsoft and IBM didn’t break up, and even AT&T’s dissolution happened partly because the company wanted the freedom to enter new markets.

What made these cases a boon to tech innovation was not the breaking up — which is hard to do — but the consent decrees resulting from antitrust action. Even without forcing companies to split into pieces, antitrust enforcement opened up space for market competition and new growth. Consent decrees in the mid-1950s required both IBM and AT&T to license key technologies for free or nearly free. These included the transistor technology foundational to the growth of the microchip industry: We would have no silicon in Silicon Valley without it. Microsoft dominated the 1990s software world so thoroughly that its rivals dubbed it “the Death Star.” After the lawsuit, it entered the new century constrained and cautious, giving more room for new platforms to gain a foothold.

Bill Gates observed recently that a “winner take all” dynamic governs tech, encouraging only one product — IBM mainframes, Microsoft Windows, the Apple iPhone — to monopolize its market. History shows that he’s right, and that the actions of government have been a critical countervailing force.

Enforcement, however, needs to be savvy about the technology itself. When Congress first took up the issue of computer privacy in the 1960s, its focus was on the information-gobbling mainframe computers of the federal government. Lawmakers paid little attention to what private industry was doing, or could do, with personal data. And they had little inkling of what could happen when such databases became part of a networked communications system.
