Regulating AI: Fears of Industry Capture and the Need for Meaningful Accountability

The recent Senate hearing on AI was notably congenial, with key industry figures, including OpenAI CEO Sam Altman, in agreement on the need for AI regulation.

However, this amicable atmosphere has prompted concerns about industry capture of AI regulation. Critics worry that if large tech firms are allowed to draft the rules, those rules could stifle smaller companies while remaining too lax to constrain the incumbents themselves.

Gary Marcus, a prominent AI critic, and IBM's Christina Montgomery also voiced apprehensions about regulatory capture at the hearing. Despite the absence of representatives from Google and Microsoft, Altman essentially served as the tech industry's spokesperson. While OpenAI is often referred to as a "startup," it is one of the most influential AI companies in the world, with product launches and partnerships that create ripples across the tech industry.

Altman acknowledged the risk of regulatory capture at the hearing but remained vague on whether licensing should extend to smaller entities. He emphasized the need to avoid hindering startups and open-source projects while still ensuring compliance.

Sarah Myers West of the AI Now Institute expressed skepticism about the proposed licensing system. She cautioned against a superficial regime that lets companies tick a box affirming they understand the potential harms but does not hold them accountable when their systems fail.

Others highlighted the potential harm to competition. Emad Mostaque of Stability AI and Clem Delangue, CEO of AI startup Hugging Face, warned that such regulation could inhibit innovation and further centralize power.

However, some believe licensing could work. Margaret Mitchell, now chief ethics scientist at Hugging Face, suggested that licenses could apply to individual developers rather than companies. She argued that good regulation depends on standards that cannot easily be gamed and that require a detailed understanding of the technology in question.

Mitchell and others are skeptical of Big Tech's willingness to act in the public interest. Despite well-documented present-day harms from AI, such as bias in facial recognition, the hearing's focus often veered toward hypothetical future problems rather than these known issues.

The EU's forthcoming AI Act, which classifies AI systems by risk level and imposes data protection requirements, was lauded for clearly prohibiting known harmful uses of AI, such as predictive policing algorithms and mass surveillance. According to West, this is the direction the discussion must take to achieve meaningful accountability in the AI industry.