A multidisciplinary policy design to protect consumers from AI collusion

Legal scholars, computer scientists, and economists must work together to prevent unlawful collusive pricing by artificial intelligence (AI) algorithms used by rivals in a competitive market, argue Emilio Calvano and colleagues in this Policy Forum. Whether this algorithmic collusion – when competing sellers' prices are unlawfully raised in concert above competitive levels – is purposeful or a programming oversight, it is as dangerous to consumers as human-directed collusion; thus, the authors say, policies must be in place to hold firms accountable for collusive behavior in their pricing algorithms.

Experimental and empirical evidence has shown how easily algorithms can adopt collusive behaviors; for example, when tasked to maximize profit, an AI will autonomously explore possible pricing rules of conduct, including collusion, to reach that goal without human intervention. Current policies are not equipped to keep pace with AI's capacity to adopt unlawful pricing rules. Cases of human collusion can be investigated through evidence of surreptitious communications between competitors suggesting they agreed not to compete. By contrast, such traceable evidence is rarely apparent in cases of algorithmic collusion, and AI can even evolve collusive strategies beyond those anticipated by established economic theory and studies of human collusion.

The authors therefore propose a three-step method to investigate pricing algorithms for collusion in controlled environments: testing which AI pricing rules can lead to collusion in the laboratory; applying an auditing exercise to uncover the collusive properties that produce high prices; and, finally, developing constraints on the learning algorithm that prevent the AI from evolving toward collusion.
Once these methods are implemented, policymakers can consider banning specific pricing algorithms and holding firms accountable for AI pricing rules that lead to collusive behavior. "There are several obstacles down the road, including the difficulty of making a collusive property test operational, the lack of transparency and interpretability of algorithms, and courts' willingness and ability to incorporate technical material of this nature," the authors note.
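To make the first step concrete – observing what pricing rules learning algorithms adopt in a controlled laboratory setting – the sketch below simulates two Q-learning sellers that repeatedly set prices and are rewarded with their own profit. Everything here is an illustrative assumption, not the authors' experimental design: the price grid, the marginal cost, the stylized differentiated-products demand function, and all learning parameters are invented for the example. The point is only to show the kind of testbed in which one can then audit the long-run prices the agents converge to.

```python
import random
from collections import defaultdict

PRICES = [1.0, 1.5, 2.0, 2.5, 3.0]   # discrete price grid (assumed)
COST = 1.0                            # marginal cost (assumed)

def profit(p_own, p_rival):
    """Stylized differentiated-products demand (an assumption for this sketch)."""
    demand = max(0.0, 4.0 - 2.0 * p_own + p_rival)
    return (p_own - COST) * demand

def simulate(n_periods=20000, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    """Two Q-learning sellers repeatedly price against each other.

    Each agent's state is the previous period's price pair; its reward
    is its own per-period profit. Returns the average price over the
    final periods, which an auditing step could compare to a
    competitive benchmark.
    """
    rng = random.Random(seed)
    # One Q-table per firm: Q[i][(state, action)] -> estimated value.
    Q = [defaultdict(float), defaultdict(float)]

    def choose(i, state):
        if rng.random() < eps:                              # explore
            return rng.choice(PRICES)
        return max(PRICES, key=lambda a: Q[i][(state, a)])  # exploit

    state = (rng.choice(PRICES), rng.choice(PRICES))
    price_log = []
    for _ in range(n_periods):
        p0, p1 = choose(0, state), choose(1, state)
        rewards = (profit(p0, p1), profit(p1, p0))
        next_state = (p0, p1)
        for i, (action, reward) in enumerate(zip((p0, p1), rewards)):
            best_next = max(Q[i][(next_state, b)] for b in PRICES)
            Q[i][(state, action)] += alpha * (
                reward + gamma * best_next - Q[i][(state, action)]
            )
        state = next_state
        price_log.append((p0, p1))

    tail = price_log[-1000:]  # average price after learning has settled
    return sum(p0 + p1 for p0, p1 in tail) / (2 * len(tail))
```

The auditing exercise (step two) would then ask whether this long-run average sits above the competitive price for the assumed demand, and probe the learned Q-tables for collusive properties such as reward-punishment responses to a rival's price cut; the third step would constrain the learning rule so such strategies cannot emerge.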

