Pricing Algorithms and Implications for Competition

May 28, 2019

The CPI Cartels Column, edited by Rosa M. Abrantes-Metz (Global Economics Group) & Donald Klawiter (Klawiter PLLC), presents:

Pricing Algorithms and Implications for Competition

By Rosa M. Abrantes-Metz1 (Global Economics Group)

Welcome to the first issue of CPI’s new Cartels column.  It’s been about two years since our last column, and much has happened in the area of collusion in that time.  We will be compiling interesting areas of discussion on cartels every month, most often with invited contributors, hoping to capture your interest and reaction.  In addition, please feel free to send us your thoughts on areas you would like us to cover.  For now, we are starting with pricing algorithms and collusion.

Pricing algorithms are an increasingly integral topic of policy discussion.  Last November, as part of the Federal Trade Commission’s (“FTC”) Hearings on Consumer Protection and Competition, I had the privilege of participating in a distinguished panel on pricing algorithms and collusion.  The panelists were Ai Deng, Joe Harrington, Kai-Uwe Kühn, Sonia Kuester Pfaffenroth, Maurice E. Stucke, and myself, with Ellen Connelly and James Rhilinger2 as moderators.  The video for this panel clearly illustrates the divergence of opinions on this topic.3

What are pricing algorithms?  Put simply, pricing algorithms are computer models that predict the optimal (generally “profit-maximizing”) price given various inputs.  These inputs would include signals of prevailing market demand and supply conditions, and could include the prices charged by competitors for similar (substitutable) or complementary goods.
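As a minimal illustration, a pricing algorithm is simply a mapping from observed market signals to a price.  The sketch below is purely hypothetical; the cost-plus baseline, the demand adjustment, and the competitor cap are illustrative assumptions, not anyone’s actual pricing rule.

```python
def set_price(demand_signal, unit_cost, competitor_prices):
    """Toy pricing rule: scale a cost-plus baseline by a demand signal
    in [-1, 1], then cap the result at the lowest competitor price."""
    baseline = unit_cost * 1.20                         # cost-plus margin
    demand_adjusted = baseline * (1 + 0.5 * demand_signal)
    return min(demand_adjusted, min(competitor_prices))

# Example: slightly soft demand, two rivals charging more than we do
print(set_price(demand_signal=-0.2, unit_cost=10.0,
                competitor_prices=[12.50, 13.00]))      # -> 10.8
```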

Pricing Algorithms and Benefits to Competition

In principle, any business could develop and use a pricing algorithm.  Usually, however, the term connotes algorithms that can change prices quickly in response to quickly changing information.  It is not practical for a brick-and-mortar retailer to retag all of its products during the day, even if a pricing algorithm were to suggest that the market would support a higher price during the lunch-hour rush.  Nor is it practical to send an employee to survey other brick-and-mortar stores’ prices throughout the day.  While these traditional retailers could use programs and econometric models to help them set prices over time, and while these models could fairly be called “pricing algorithms,” they are not the sort of application most people have in mind when they use the term.

Instead, when we talk about “pricing algorithms,” we usually have in mind an internet application of some sort.  An internet retailer, for example, could alter prices from moment to moment as its algorithms consider (i) how many customers have recently been browsing those items and (ii) how many browsing customers decided to make a purchase at the old prices.  It could also deploy “bots,” AI programs that continuously scour competitors’ websites to see what prices those rivals are charging.
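A sketch of that feedback loop might look as follows; the conversion thresholds and step size are illustrative assumptions, not anything a real retailer necessarily uses.

```python
def adjust_price(price, views, purchases, step=0.02, high=0.05, low=0.01):
    """Nudge the price based on the observed conversion rate:
    raise it when demand looks strong, cut it when it looks weak."""
    if views == 0:
        return price
    conversion = purchases / views
    if conversion > high:          # many browsers are buying: raise the price
        return price * (1 + step)
    if conversion < low:           # almost nobody buys at this price: cut it
        return price * (1 - step)
    return price

# 30 purchases out of 400 views is a 7.5% conversion rate, so the price rises
print(round(adjust_price(25.00, views=400, purchases=30), 2))  # -> 25.5
```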

Pricing algorithms have many potentially procompetitive effects, enhancing both static and dynamic efficiencies.  They increase price transparency, facilitate the collection and organization of information, and generally improve efficiency.  By facilitating price discovery, these algorithms can help markets reach equilibrium more efficiently, which redounds to the benefit of both producers and consumers.

A very simple pricing algorithm could be, “determine what my competitors are charging, and set my price to be $1 lower than the lowest.”  But it could also be, “determine what my competitors are charging, and set my price to the average.”  Rules like these make prices more formulaic, and therefore more predictable to the competition, perhaps allowing competitors to reach super-competitive equilibria more easily, whether through tacit or explicit collusion.  How concerned should we be about this, and what, if anything, should policy makers do in response to this concern?  It is fair to say that a consensus has not yet emerged.
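Both rules amount to a few lines of code; the sketch below, with hypothetical names, is only meant to make the point concrete.

```python
def undercut_by_one(competitor_prices):
    """Set my price $1 lower than the lowest competitor price."""
    return min(competitor_prices) - 1.00

def match_average(competitor_prices):
    """Set my price to the average of my competitors' prices."""
    return sum(competitor_prices) / len(competitor_prices)

rivals = [19.99, 21.49, 22.00]
print(round(undercut_by_one(rivals), 2))  # -> 18.99
print(round(match_average(rivals), 2))    # -> 21.16
```

Note that the second rule, if adopted by every firm in the market, never pushes prices down; that is exactly the kind of formulaic predictability described above.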

One reason people fear an increase in collusive outcomes from algorithmic pricing, and one reason people fear the current law may not be adequate to address it, is that pricing algorithms might learn to collude without any human having programmed them to do so.  Some time ago, AI researchers developed a poker-playing AI.  They did not teach it to bluff, but it learned to bluff by itself.  Suppose, in all good faith, I develop a pricing algorithm.  Suppose that algorithm learns that wherever I set my price, my competitor moves to it.  It therefore comes to learn that it can keep raising prices without fear of competitive reprisal, up until the elasticity of demand makes further increases unprofitable.  The market thus reaches an equilibrium with super-competitive prices and decreased output.  How reasonable is this scenario?  And how would current law address it?
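The scenario can be captured in a toy simulation.  Everything below (the linear demand curve, the price-matching rival, the even split of demand, and the hill-climbing step) is an illustrative assumption, not a model of any real market.

```python
def demand(price):
    """Toy linear demand: quantity sold falls as the market price rises."""
    return max(0.0, 100.0 - 4.0 * price)

cost = 5.0
my_price = 8.0
profit = (my_price - cost) * demand(my_price) / 2  # rival matches; demand splits evenly

# Naive learner: keep raising the price while the rival follows and profit grows.
while True:
    trial = my_price + 0.50
    trial_profit = (trial - cost) * demand(trial) / 2  # rival moves to my price again
    if trial_profit <= profit:   # elasticity finally bites: higher prices lose money
        break
    my_price, profit = trial, trial_profit

print(my_price)  # -> 15.0, the joint-monopoly price for this toy demand curve
```

No human told this routine to collude; it simply discovered that price increases were not punished until the elasticity of demand made them unprofitable.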

It is well established in economics that when a market has (infinitely) many competitors, the product is homogeneous, production functions are identical, there are no barriers to entry, and there is perfect information, then we have “perfect competition.”  The equilibrium price is the optimal price, and it is equal to the marginal cost of production.  This is the socially desirable benchmark against which economists evaluate competitive effects in real market outcomes.
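In symbols, the perfectly competitive benchmark is simply

\[ p^{*} = MC, \]

that is, no firm sustains a price above its marginal cost of production.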

By allowing quicker dissemination of information about relative supply and demand in the market, and a more rapid response to market conditions, pricing algorithms would seem to be agents of the “perfect competition” model.  Perfect competition requires perfect information.  As a general statement, it cannot be true that “more information” leads to non-competitive outcomes in the presence of the other perfect competition features, when the perfectly competitive outcome itself assumes perfect information.

In my view, pricing algorithms are not, as a general rule, to be feared as instruments which could somehow convert an otherwise competitive market into an uncompetitive one.  Quite the opposite: we should expect them to enhance competition.  But what if the market structure is fundamentally uncompetitive to begin with?

Pricing Algorithms, Collusion and Empirical Evidence

If pricing algorithms increase the likelihood of collusive outcomes in markets prone to collusion (and that has yet to be shown), then there is a social welfare concern.  Furthermore, if such outcomes are considered legally tacit, because they are reached absent the sort of explicit human interaction we have historically associated with illegality, then there may be a legal problem.  The law may need to change to address this new reality.  At the FTC hearings, some of the panelists in our group put forward ideas on how to do so and why.

There are some market features traditionally seen as facilitating collusion, such as a small number of competitors, high barriers to entry, and product homogeneity, among others.  It is at least possible that pricing algorithms, by providing greater transparency and a higher frequency of information sharing, interaction, and trading, may facilitate the signaling and implementation of common pricing policies.  They would certainly seem able to facilitate the monitoring and punishment of deviations from collusion.  If so, pricing algorithms may well increase the likelihood of tacit collusion not only in oligopolistic markets with high barriers to entry and high degrees of transparency, but potentially also in other markets where, to date, collusion may have been harder to achieve and sustain over time.

This is the concern which many experts have raised, and it cannot be dismissed.  But there are mitigating considerations.  For example, everything else the same, demand elasticity is higher for internet-based shopping for fairly homogeneous products.  This decreases the profitability of charging higher prices, and it follows from the very low consumer search costs of the internet.  With a traditional brick-and-mortar retailer, I might be willing to pay more for the convenience of not getting back into my car and searching (perhaps unsuccessfully) for a better deal somewhere else, where “better” must be judged net of my time and transportation costs.  Grocery stores, for example, can offer loss-leaders that draw people into the store, and then charge higher prices for other items once those customers are relatively captive.  Yet there is no analog to “loss-leaders” on the internet, where searching for competitive prices and availability is virtually costless.  That decreases market power and enhances competition among internet retailers relative to brick-and-mortar retailers.
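The standard formalization of this point is the Lerner index, which ties the markup a firm can sustain directly to the elasticity of the demand it faces:

\[ \frac{p - MC}{p} = \frac{1}{|\varepsilon|}, \]

where \(p\) is price, \(MC\) is marginal cost, and \(\varepsilon\) is the price elasticity of demand.  The more elastic the demand (the easier it is for consumers to switch), the smaller the markup any firm, or cartel, can profitably sustain; in the limit of perfectly elastic demand, price falls to marginal cost.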

On the supply side, everything else the same, are barriers to entry weakened by the availability of big data and pricing algorithms?  Not clear.  On the one hand, pricing algorithms enhance incumbents’ ability to identify potential market threats more quickly and easily, allowing them to pre-emptively acquire possible entrants, or to react more aggressively to potential entry.  On the other hand, the availability of more pricing data may prove useful to potential entrants looking to improve their predictions, lowering entry costs and thereby enhancing the likelihood of successful entry.

It is therefore theoretically ambiguous whether pricing algorithms will lead to higher prices.  What is the empirical record?  How large are net profit margins in the retail sector, in which so many companies now trade online?  And how have retail net profit margins evolved over the last few decades compared to sectors less directly affected by web-based trading?

Industry-specific returns on equity and net margins are reported each year for the S&P 500, and each year the retail industry is among the least profitable, with decreasing margins over time.  This is particularly true for web-only retailers, which often see margins as low as 0.5 to 3.5%.  The internet has made it easier than ever before for consumers to compare prices around the world.  And it has also made it easier for suppliers to observe each other’s prices and react promptly to competitors’ pricing.  Price convergence does seem to be occurring, but to a lower price level, with decreased market power.

The market evolution in commodities trading also provides important evidence.  Over the last few decades, trading has been moving from Over-the-Counter (“OTC”) markets to exchanges.  Detailed market-wide trading information, such as volumes and prices, is not as easily available to all market players when products trade OTC, where trading is usually done through financial intermediaries who do not disclose such information.  In contrast, when products trade on an exchange, detailed market-wide data are readily and publicly available to all market participants.  Market players can see the whole market at every moment in time, reflecting high market transparency.  They can also use this richer data to develop pricing algorithms to a greater extent than is possible in OTC trading, with its more limited data availability.  What is the empirical evidence on this higher market transparency and higher incidence of pricing algorithms?

Despite the additional fees of exchange trading (for example, to operate the exchange) that do not exist in OTC trading, bid-ask spreads are generally narrower in exchange trading than in OTC.  This provides evidence of higher market efficiency and lower profit margins on exchanges.  While collusion may still happen in exchange trading (as evidenced by spoofing cases in metals futures, for example), it has been in OTC trading that much of the large-scale, systematic collusive conduct, either alleged or actually uncovered in the last several years, has occurred.  Of course, this does not mean that collusion will not occur through pricing algorithms, only that it seems less likely, everything else the same.

But are such collusive outcomes more likely to be the exception or the rule?  That is what we need to study.

Concluding Remarks

Meanwhile, how should we screen to identify situations where collusion may be ongoing in the presence of pricing algorithms?  Equal prices across competitors, or “price convergence,” should not be the focus.  Pricing algorithms will likely lead to price convergence, but “price convergence” is a prediction of both competition and traditional price-fixing collusion.  What is important is to determine whether those prices are inflated or not.  The most obvious implication of converging to an inflated price is a greater profit margin.  Hence, for those industries which are naturally more prone to collusion, we should monitor market outcomes with a particular focus on screening for increasing net profit margins.
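A minimal version of such a screen is sketched below; the data shape, the half-sample comparison, and the threshold are all hypothetical choices made for illustration.

```python
def margin_trend_screen(margins_by_firm, threshold=0.02):
    """Flag a market when every firm's net margin drifts upward by more
    than `threshold` between the first and second half of the sample.
    A crude screen: a flag is a lead for investigation, not proof."""
    flags = {}
    for firm, margins in margins_by_firm.items():
        half = len(margins) // 2
        early = sum(margins[:half]) / half
        late = sum(margins[half:]) / (len(margins) - half)
        flags[firm] = (late - early) > threshold
    return all(flags.values()), flags

# Quarterly net margins for three hypothetical rivals
data = {
    "A": [0.04, 0.05, 0.08, 0.09],
    "B": [0.03, 0.04, 0.07, 0.08],
    "C": [0.05, 0.05, 0.09, 0.10],
}
suspicious, detail = margin_trend_screen(data)
print(suspicious)  # -> True: margins rose in lockstep across the market
```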

Finally, what, if anything, should authorities do about these concerns?  Authorities should continue to assess whether pricing algorithms are driving convergence towards lower, competitive prices or towards higher, potentially collusive prices instead.  Monitoring and auditing of pricing algorithms is needed; regulators, economists, and computer scientists should collaborate in this effort.  In addition, regulators should consider providing guidelines to market participants explaining what pricing algorithms should or should not do, or what information they can or cannot consider.  Companies, for their part, should be responsible for monitoring their own algorithms.  If those algorithms learn to collude, like the poker AI that learned to bluff, we should consider holding the firms liable, and that may require revisiting our current notions of “tacit” and “explicit” collusion.

Please feel free to submit your thoughts and comments on this topic at contact@competitionpolicyinternational.com; I look forward to reading them.

Rosa M. Abrantes-Metz


1 Rosa M. Abrantes-Metz, Managing Director, Global Economics Group and Adjunct Associate Professor at New York University.

2 Rosa M. Abrantes-Metz, Managing Director, Global Economics Group and Adjunct Associate Professor at New York University; Ai Deng, Principal, Bates White and Lecturer at Johns Hopkins University; Joseph E. Harrington, Jr., Professor at University of Pennsylvania; Kai-Uwe Kühn, Professor at University of East Anglia and Senior Consultant, Charles River Associates; Sonia Kuester Pfaffenroth, Partner, Arnold & Porter; Maurice E. Stucke, Professor at University of Tennessee College of Law and Co-founder, The Konkurrenz Group; Ellen Connelly, Attorney Advisor, Federal Trade Commission, Office of Policy Planning; and James Rhilinger, Deputy Assistant Director, Federal Trade Commission, Bureau of Competition.

3 Video available at https://www.ftc.gov/news-events/audio-video/video/ftc-hearing-7-nov-14-welcome-remarks-session-1-algorithmic-collusion.