
Developments in Cartel Screening

January 10, 2023

CPI COLUMNS OECD

By Despina Pachnou & Daniel Westrik1

 

Interest in Cartel Screens Is Increasing

Hard core cartels are a pervasive white-collar crime. Between 1990 and 2016, international cartels affected nominal sales of over USD 50 trillion, with gross cartel overcharges of over USD 1.5 trillion. More than 100,000 companies were found liable for international price fixing.2

Detecting cartels is a public policy priority for the OECD and its members.3 Cartels may be uncovered through cartel screening tools, which are empirical methods that identify data patterns that may indicate collusion.

Recently, interest in, and use of, cartel screening tools has increased. A large number of competition authorities have announced that they already have, or aim to develop, cartel screens, focusing on public procurement markets.4 At an OECD roundtable on data screening tools for competition investigations held in November 2022, most of the 16 competition authorities that submitted written contributions mentioned that they use or intend to use cartel screens, almost entirely dedicated to checking public procurement datasets and detecting bid-rigging cartels.5 Even private companies, such as Deutsche Bahn, have started screening their supply chains for bid-rigging cartels.6

Why is there this increase in cartel screening? First, cartel activity has shown no signs of abating, so finding and punishing cartels remains an enforcement priority for competition authorities around the world. Second, leniency applications, the traditional channel through which cartels are self-reported, have declined,7 driving competition authorities towards proactive detection methods such as screening. The OECD has long recommended using “pro-active cartel detection tools such as analysis of public procurement data, to trigger and support cartel investigations”8 to complement leniency programmes. Third, academic research has recently developed and analysed new screening methods, including statistical, econometric and machine-learning approaches.9 Fourth, concerning in particular the detection of bid-rigging cartels in procurement markets, the increasing availability of digital procurement data makes the application of screens easier. This, combined with the strategic importance of public procurement in the economy,10 makes the development and use of bid-rigging screens an attractive enforcement option for competition authorities.

 

Cartel Screening Methods

Cartel screens are structural or behavioural. Structural screens identify markets which may be more prone to collusion (i.e., markets where a cartel is likely to form), while behavioural screens look for indications of collusive conduct (i.e., markets where a cartel may have formed). Most of the recent academic literature and competition authority practice focus on behavioural screens.

Competition authorities may, and often do, use a combination of both structural and behavioural screens. Specifically, once structural screens identify markets that are at risk of collusion, the authority can apply behavioural screens in those markets to find indications that collusion may indeed have occurred.

Harrington and Imhof classify behavioural screens into three broad categories: collusive markers, structural breaks and anomalies.11 A collusive marker is a pattern in the data that is more consistent with collusion than competition. A structural break is an abrupt change in the data-generating process. An anomaly is a pattern in the data that is inexplicable or inconsistent with competition but may ultimately be found consistent with collusion.
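
As an illustration of collusive markers, the Python sketch below computes two simple screens on hypothetical tender data: the coefficient of variation of bids within a tender and the relative difference between the two lowest bids, both of which are discussed in the bid-rigging screening literature. The column names, the input file and the flagging thresholds are illustrative assumptions, not part of any authority's methodology.

```python
# A minimal sketch of two collusive-marker screens, assuming a hypothetical
# CSV of bids with columns tender_id, bidder and bid_value.
# Thresholds are illustrative only, not validated values.
import pandas as pd

def screen_tender(group: pd.DataFrame) -> pd.Series:
    """Compute simple per-tender screens from the bids submitted in one tender."""
    values = group["bid_value"].sort_values().to_numpy()
    cv = values.std(ddof=1) / values.mean()  # low variation across bids can signal coordination
    # Relative gap between the two lowest bids; a large gap below an otherwise
    # tight cluster of losing bids is often discussed as a sign of cover bidding.
    diff = (values[1] - values[0]) / values[0] if len(values) > 1 else float("nan")
    return pd.Series({"cv": cv, "diff_two_lowest": diff})

bids = pd.read_csv("tenders.csv")  # hypothetical input file
screens = bids.groupby("tender_id")[["bid_value"]].apply(screen_tender)

# Flag tenders whose screen values look unusual relative to the rest of the sample.
flagged = screens[(screens["cv"] < screens["cv"].quantile(0.1)) &
                  (screens["diff_two_lowest"] > screens["diff_two_lowest"].quantile(0.9))]
print(flagged)
```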

There is no single cartel screen that successfully identifies all cartels.12 It is therefore useful to apply a range of screens to minimise screening errors and obtain more conclusive results.13 A mix of collusive markers and structural breaks applied to procurement markets would be a good starting point for a competition authority with limited resources that wants to launch a cartel screening programme.14 Furthermore, machine-learning methods applied to cartel screening have proved particularly useful as a means of combining several screens. The main machine-learning approaches are supervised and unsupervised learning.

Supervised learning uses inputs, known as predictors or independent variables, to estimate an output, known as the response or dependent variable. It relies on a training dataset of solved cases, known as “tagged” or “labelled” data, that provides a mapping between screen values (inputs) and whether the conduct is collusive or not (output).15 Supervised learning can be thought of as learning with a teacher. In supervised machine learning, the algorithm can use several cartel screens as inputs and determine the optimal weighting of these screens to optimise prediction. Supervised machine-learning methods combined with bid-distribution cartel screens were applied to procurement data in a range of industries in several countries (Brazil, Italy, Japan, Switzerland and the United States) and produced accurate results (i.e., correct detection rates) of up to 95%.16
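
As an illustration only, the sketch below trains a random-forest classifier, one of several supervised methods used in the literature, on per-tender screen values. The file name, feature columns and labels are hypothetical stand-ins for the “tagged” data described above.

```python
# A minimal sketch of a supervised-learning screen, assuming a hypothetical
# labelled dataset in which each row is a tender, the feature columns are
# screen values, and the collusive column records whether the tender was
# part of a proven cartel.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

data = pd.read_csv("screens_labelled.csv")        # hypothetical labelled data
X = data[["cv", "diff_two_lowest", "kurtosis"]]   # screen values as predictors
y = data["collusive"]                             # 1 = collusive, 0 = competitive

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# The ensemble learns how to weight the individual screens to predict collusion.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Out-of-sample performance gives a rough sense of the correct detection rate.
print(classification_report(y_test, model.predict(X_test)))
```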

Unsupervised learning also uses inputs to estimate an output; however, it draws on “untagged” or “unlabelled” data, that is, data that contain only input values and not an associated output value. It can be a useful alternative to supervised machine learning, as it identifies suspicious outliers that are most dissimilar to the ‘norm’.17 For example, Romania and Spain use unsupervised machine-learning techniques such as k-means partitioning, cluster analysis and network analysis.18
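
The sketch below illustrates the k-means idea on the same hypothetical screen values, this time without labels. The number of clusters and the column names are assumptions; outlying or unusually tight clusters are only a prompt for closer review, not evidence of collusion.

```python
# A minimal sketch of an unsupervised screen using k-means, assuming a
# hypothetical table of per-tender screen values with no collusion labels.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

data = pd.read_csv("screens.csv")                 # hypothetical unlabelled data
features = data[["cv", "diff_two_lowest", "kurtosis"]]

# Standardise the screens so that no single screen dominates the distance metric.
scaled = StandardScaler().fit_transform(features)

# Partition tenders into clusters and inspect how they are distributed;
# small clusters far from the bulk of the data are candidates for review.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scaled)
data["cluster"] = kmeans.labels_
print(data["cluster"].value_counts())
```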

 

The Risk of False Positives and False Negatives

Despite the progress in cartel screening, screens can give erroneous results, notably false positives or false negatives. False positives are competitive markets flagged by the screen as collusive, and false negatives are collusive markets that the screen identifies as competitive. Cartel screens carry an inherent risk of both types of error, as the screen values are sometimes consistent with either collusive agreement or independent action. Either way, screening results must be analysed to avoid jumping to conclusions.

By way of example, screens may produce false positives when they identify price correlation but fail to recognise that it may be the result of (legal) tacit collusion or coincidence. Competition authorities are often particularly worried about false positives, since they may lead them to open a case where there has been no illegal activity, thus wasting time and resources. Screens may also produce false negatives when they are applied in a context different from that for which they were designed.19

Competition authorities face these inherent risks in much of their empirical work. The risk of false positives and false negatives should not deter the adoption of cartel screening tools. Rather, authorities should consider the trade-off between false positives and false negatives,20 and try to mitigate these errors to the extent possible.
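
To make the trade-off concrete, the short sketch below uses entirely made-up screen scores to show how lowering or raising a flagging threshold shifts errors between false positives and false negatives.

```python
# A minimal sketch of the false-positive / false-negative trade-off, using
# illustrative (made-up) collusion scores and ground-truth labels.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 0, 1, 1, 0, 1, 0, 0, 1])          # illustrative ground truth
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.65, 0.2,
                    0.55, 0.45, 0.3, 0.9])                  # illustrative screen scores

# A lower threshold flags more markets: fewer false negatives, more false positives.
for threshold in (0.3, 0.5, 0.7):
    y_pred = (y_score >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    print(f"threshold={threshold}: false positives={fp}, false negatives={fn}")
```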

 

Availability of, and Access to, Good Quality Data Is a Major Hurdle

Behavioural screens detect collusion by comparing collusive and competitive market outcomes, such as bid values or prices. One of the biggest challenges for competition authorities is the availability of, and access to, good-quality data.

Data sources can be:

  • Publicly available information, such as information on public procurement platforms.
  • Information kept by public sector authorities including sector regulators.
  • Web-scraping, which is a method for “crawling websites and automatically extracting structured data.”21 A minimal sketch of this approach follows this list.
  • Data bought from commercial data providers.
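
For the web-scraping option above, the sketch below shows the general pattern using the requests and BeautifulSoup libraries. The URL, the table id and the column layout are hypothetical, and, as noted in footnote 21, scrapers typically need ongoing maintenance as websites change; terms of use should also be checked before scraping.

```python
# A minimal web-scraping sketch, assuming a hypothetical procurement portal
# that lists contract awards in an HTML table. The URL and table structure
# are illustrative only.
import requests
import pandas as pd
from bs4 import BeautifulSoup

url = "https://example.org/procurement/awards"    # hypothetical page
response = requests.get(url, timeout=30)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")
rows = []
for tr in soup.select("table#awards tr")[1:]:     # skip the header row
    cells = [td.get_text(strip=True) for td in tr.find_all("td")]
    if len(cells) >= 3:
        rows.append({"tender_id": cells[0], "bidder": cells[1], "bid_value": cells[2]})

awards = pd.DataFrame(rows)
awards.to_csv("tenders.csv", index=False)         # structured data for later screening
print(f"Extracted {len(awards)} award records")
```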

It can be difficult for competition authorities to assemble a centralised and comprehensive dataset to screen when data are kept in different formats (e.g., hard copy and digital), are incomplete, or are not collected correctly or consistently. For this reason, many competition authorities have begun significant data collection and processing projects to build their own datasets. Some competition authorities are investing in advocacy, for example, engaging with procurement agencies to share templates of the type and format of procurement data needed to allow cartel screening.22 Other competition authorities have signed agreements with government agencies to facilitate the sharing of information.23 In parallel, many authorities have started hiring data and computer scientists with the skills and knowledge to collect, clean and analyse data.

It would be useful for governments to take action, including through the adoption of enabling legislation, to ensure that regulators, which have sector data, and public procurement bodies, which have tender data, share them with competition authorities. Data sharing should be accompanied by appropriate confidentiality protections to ensure that data are safely stored, accessed and used for defined lawful purposes.24

 

Initial Successes and Future Expectations

The extent and success of cartel screening are not known and cannot be estimated easily. Some competition authorities do not announce their screening initiatives, to avoid alerting the market and driving companies to be more sophisticated in their cartel arrangements. Moreover, more is known about successful screening initiatives than about failed ones. There are, however, known successful cartel enforcement cases that relied on screening results in public procurement markets, in Brazil, Italy, Korea, Mexico and Switzerland.25

Only in exceptional cases are screening results the sole basis of an infringement decision, without relevant direct evidence.26 Typically, competition authorities rely on screens to prioritise cases and open investigations, in which they then seek direct evidence to support their enforcement decision.

The frequency of screening and the number of enforcement cases based on screening results can reasonably be expected to increase. There is general interest in using technology and artificial intelligence to support competition enforcement, and market data are becoming more plentiful and of better quality overall, in particular in procurement markets. New cartel screening methods and more sophisticated software have been developed and continue to be improved. In addition, the use of cartel screens by private companies on their own supply chains may lead to valuable findings, which can then be reported to the competent competition authorities.

In any event, co-operation among competition authorities to share expertise, experience and software is likely to be a key factor in the success of screening initiatives. Co-operation that does not involve sharing sensitive underlying data belonging to others would not be hampered by legal obstacles.27 For example, the Danish Competition and Consumer Authority developed a cartel screening tool (Bid Viewer) together with several other authorities.28 As competition authorities are acquiring skills and developing data screens in parallel, their co-operation would save time and resources.



1 Despina Pachnou is a Competition Expert and Daniel Westrik is a Junior Competition Expert, both at the OECD Competition Division.

2 Connor, J. (2016), The Private International Cartels (PIC) Data Set: Guide and Summary Statistics, 1990–July 2016, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2821254.

3 OECD Recommendation concerning Effective Action against Hard Core Cartels (2019), Background information https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0452#backgroundInformation.

4 Beth, H. & Gannon, O. (2022), “Cartel screening – can competition authorities and corporations afford not to use big data to detect cartels?”, https://www.elgaronline.com/view/journals/clpd/7/2/article-p77.xml.

5 Detailed information at www.oecd.org/daf/competition/data-screening-tools-for-competition-investigations.htm.

6 Beth, H. and T. Reimers (2019), “Screening Methods for the Detection of Antitrust Infringements”, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3501700.

7 Globally, the average number of leniency applications per jurisdiction declined by 64% in recent years. The average number of leniency applications was already declining prior to the onset of COVID-19: applications were 42% lower in 2019 than in 2015. See OECD Competition Trends 2022, http://www.oecd.org/competition/oecd-competition-trends.htm.

8 OECD Recommendation concerning Effective Action against Hard Core Cartels (2019).

9 See Section 2.2 and Annex A of OECD (2022), Data Screening Tools for Competition Investigations Background Note, www.oecd.org/daf/competition/data-screening-tools-in-competition-investigations-2022.pdf.

10 In 2019, public procurement expenditure accounted for 12.6% of Gross Domestic Product across the OECD. Health expenditure represented the largest share of public procurement spending, averaging 29.3% across OECD countries in 2019. OECD (2021), Government at a Glance 2021, OECD Publishing, Paris, https://doi.org/10.1787/1c258f55-en.

11 Harrington, J. & Imhof, D. (2022) “Cartel Screening and Machine Learning”, https://law.stanford.edu/wp-content/uploads/2022/08/harrington-imhof-2022.pdf.

12 See Table 1 in OECD (2022), Data Screening Tools for Competition Investigations Background Note (www.oecd.org/daf/competition/data-screening-tools-in-competition-investigations-2022.pdf) that matches cartel screening indicators to specific cartel types in public procurement markets.

13 Fazekas, M. et al. (2022), “Public procurement cartels: A systematic testing of old and new screens”, Government Transparency Institute, http://www.govtransparency.eu/wp-content/uploads/2022/03/GTI-WP-Cartel_20220304-1.pdf.

14 Episode 14: Cartel Screening and Machine Learning (Harrington & Imhof), https://www.youtube.com/watch?v=NVhoIi8mFys (18:13).

15 Hastie, T. et al. (2009), The elements of statistical learning: data mining, inference, and prediction, New York: Springer, https://link.springer.com/content/pdf/10.1007/978-0-387-84858-7.pdf.

16 Rodríguez, M. et al. (2022), “Collusion detection in public procurement auctions with machine learning algorithms”, Automation in Construction 133, 104047, https://www.sciencedirect.com/science/article/pii/S0926580521004982.

17 Deng, A. (2017), “Cartel detection and monitoring: a look forward”, Journal of Antitrust Enforcement 5(3), pp. 488-500, https://academic.oup.com/antitrust/article-pdf/5/3/488/21390250/jnw017.pdf.

18 For Romania, see section 2.3.1 here: https://one.oecd.org/document/DAF/COMP/WP3/WD(2022)37/en/pdf. For Spain, see section 4.3 here: https://one.oecd.org/document/DAF/COMP/WP3/WD(2022)33/en/pdf.

19 Huber et al. found false negatives applying a model trained on Japanese auction data to Swiss auction data, see Huber, M., D. Imhof and R. Ishii (2020), “Transnational machine learning with screens for flagging bid-rigging cartels”, Université de Fribourg, https://doc.rero.ch/record/329575/files/WP_SES_519.pdf.

20 For example, see https://www.cliffsnotes.com/study-guides/statistics/principles-of-testing/type-i-and-ii-errors.

21 See Lianos, I. (2021), “Computational competition law and economics – an inception report”, www.epant.gr/en/enimerosi/publications/research-publications/item/1414-computational-competition-law-and-economics-inception-report.html. It is worth caveating that web-scraping can be time consuming to set up, and may need ongoing updates as websites change, thus it may not be a viable long-term solution.

22 For example, in Australia: https://one.oecd.org/document/DAF/COMP/WP3/WD(2022)26/en/pdf.

23 For example, in Italy: https://one.oecd.org/document/DAF/COMP/WP3/WD(2022)41/en/pdf.

24 OECD (2022), Data Screening Tools for Competition Investigations Background Note, www.oecd.org/daf/competition/data-screening-tools-in-competition-investigations-2022.pdf.

25 Detailed information at www.oecd.org/competition/data-screening-tools-for-competition-investigations.htm.

26 There is at least one example in Mexico; see Box 10 in OECD (2022), Data Screening Tools for Competition Investigations, Background Note, www.oecd.org/daf/competition/data-screening-tools-in-competition-investigations-2022.pdf.

27 Several respondents to a 2019 OECD and ICN survey reported that they do not have legal restrictions on sharing authority confidential information, OECD/ICN (2021), Report on International Co-operation in Competition Enforcement, www.oecd.org/competition/oecd-icn-report-on-international-cooperation-in-competition-enforcement-2021.htm.

28 See https://www.youtube.com/watch?v=Iovsp5aHcuU (2:40:12).