The Meta Oversight Board Just Told Every AI Company What Comes Next


Suzanne Nossel, a member of Meta’s independent Oversight Board and Lester Crown Senior Fellow at the Chicago Council on Global Affairs, is using the lessons of two decades of social media failure to argue that AI companies need external accountability before the next wave of harm arrives. Her warning lands as Europe’s own AI rulebook faces an enforcement crisis, Washington remains gridlocked, and the industry’s self-regulation promises look thinner by the week.

What Nossel Is Actually Saying

Nossel’s argument, laid out in an op-ed published by The Guardian in early March 2026 and a longer joint piece with Oversight Board co-chair Paolo Carozza published by TechPolicy.Press in December 2025, starts from a structural observation. Unlike radio, nuclear energy, or the early internet, artificial intelligence is being developed without any government at the helm. Private companies are building systems they do not fully understand, releasing them to billions of users, and facing no pre-market testing regime comparable to the one the FDA runs for pharmaceuticals or the Nuclear Regulatory Commission runs for reactors. Companies are not required to disclose dangerous breaches or accidents. No federal agency in the United States has the authority, the funding, or the mandate to change that.

The specifics in her argument are not hypothetical. The family of Sewell Setzer III, a 14-year-old from Orlando, Florida, sued Character.AI after the boy took his own life following months of interaction with a chatbot he believed was a romantic partner. The company invoked the First Amendment in its defence, but US District Judge Anne Conway rejected that argument in May 2025, ruling she was not prepared to hold that chatbot output constitutes protected speech, as the Associated Press reported. Google and Character.AI reached a settlement in January 2026, per ABC News. Meta AI’s published use policy runs to just over three pages, Nossel and Carozza wrote in TechPolicy.Press, compared to roughly 80 pages of community standards governing its social media platforms. OpenAI’s usage guidelines amount to about 1,000 words. These are the guardrails for systems that hundreds of millions of people interact with daily.

The Anthropic Problem

In a February 2026 analysis titled “Claude’s Constitution Needs a Bill of Rights and Oversight,” published on the Oversight Board’s own website, Nossel took direct aim at Anthropic, widely regarded as the most safety-conscious of the frontier AI companies. She examined what the company calls Claude’s “constitution,” a framework that guides the model’s behaviour by instructing it to imagine how a thoughtful senior Anthropic employee would balance helpfulness against potential harm. Nossel argued that this approach, however well-intentioned, lacks the essential ingredient that makes governance work: external accountability. Without independent oversight, she warned, Anthropic risks repeating the arc of social media, where soaring rhetoric was followed by avoidable harm and belated regulation. That criticism carries weight given that the company recently lost access to all federal systems after being branded a national security risk by the Pentagon, raising questions about how far self-regulation can take any company when the political winds shift.

Europe’s Enforcement Problem

The European Union has the world’s only comprehensive AI law on the books. The AI Act entered into force in August 2024, with prohibited practices enforceable since February 2025 and the critical high-risk system rules due to take effect on August 2, 2026, according to the European Commission’s own implementation timeline. In theory, that gives Europe a regulatory head start over every other jurisdiction. In practice, the enforcement infrastructure is not ready.

The European Commission missed its own deadline for publishing guidance on high-risk AI systems, as the International Association of Privacy Professionals reported. CEN and CENELEC, the two standardisation bodies tasked with developing technical standards for AI compliance, missed a 2025 deadline and are now targeting the end of 2026, per IAPP. Industry lobbying groups, including the Chamber of Progress, have called for delays, arguing that companies cannot comply with rules when the standards defining compliance do not yet exist. In November 2025, the Commission responded with its Digital Omnibus package, which proposes pushing back certain high-risk enforcement deadlines to as late as December 2027, though only if harmonised standards and compliance tools remain unavailable, as OneTrust’s analysis of the package noted. The European Parliament and Council are negotiating the package, with formal adoption expected later this year. When enforcement does arrive, the penalties are substantial: up to 35 million euros or 7 percent of global annual revenue, whichever is higher, for prohibited practices, and up to 15 million euros or 3 percent for high-risk violations, according to the AI Act’s tiered enforcement structure.

Washington’s Non-Answer

The United States has no comprehensive federal AI law. President Donald Trump signed an executive order on December 11, 2025, that seeks to pre-empt state-level AI regulation in favour of a national framework, according to a Sidley Austin analysis of the order, but Congress has shown no sign of passing the legislation needed to create one. The result is a regulatory vacuum at the federal level and a patchwork of more than 260 state bills introduced across 40 states in 2025 alone, per Mintz tracking data. Colorado’s AI Act, which requires reasonable care to avoid algorithmic discrimination, is set to take effect on June 30, 2026, per Wilson Sonsini’s regulatory outlook. California has enacted transparency requirements for AI-generated content that take effect in 2026. New York City’s Local Law 144 mandates bias audits for automated hiring tools. Tennessee has passed protections against AI voice impersonation. Utah has created a dedicated AI oversight office. None of these efforts adds up to a coherent national approach, and the executive order explicitly criticises at least one state law for its potential to compel false results.

What the Public Already Knows

A December 2025 YouGov survey of 1,287 US adults found that 77 percent of Americans are concerned that AI could pose a threat to humanity, with 39 percent saying they are very concerned. Only 5 percent said they trust AI systems “a lot.” No industry sector earned a net-positive trust score, with finance faring worst at 19 percent and healthcare close behind at 23 percent. Pew Research Center found in a separate June 2025 survey that 57 percent of Americans rate the societal risks of AI as high or very high, and 50 percent say they are more concerned than excited about AI’s growing role in daily life, up from 37 percent in 2021. The IMF’s warning that 40 percent of European jobs face disruption from AI is not helping.

Nossel’s argument is not that regulation alone will solve this. It is that the AI industry needs to accept what Meta eventually accepted for social media: external bodies with real authority to review decisions, enforce rules, and hold companies accountable when the systems they build cause harm. Meta itself has yet to extend that logic to its own AI products. Twenty-six major AI providers, including Microsoft, Google, Amazon, OpenAI, and Anthropic, signed the EU’s GPAI Code of Practice in August 2025, but Meta refused, facing enhanced regulatory scrutiny as a result, according to axis-intelligence.com’s tracker. The Oversight Board announced in its December 2025 impact report, as Engadget reported, that it will pilot account-level review powers in 2026, expanding its scope beyond individual content decisions. Whether that model can translate from social media to AI remains an open question. But the alternative, trusting companies to police themselves with three-page use policies and internal constitutions, has a track record. It is not good.

Mark Cullen
Senior Stocks Analyst — Mark Cullen is a Senior Stocks Analyst at Finonity covering global equity markets, corporate earnings, and IPO activity. A London-based professional with over 20 years of experience in communications and operations across financial, government, and institutional environments, Mark has worked with organisations including the City of London Corporation, LCH, and the UK's Department for Business, Energy and Industrial Strategy. His extensive background in strategic communications, market research, and stakeholder management — including coordinating financial services partnerships during COP26's Green Horizon Summit — informs his ability to distill complex market dynamics into clear, accessible analysis for investors.
