Friday, January 16, 2026

AI’s regulatory landscape – who is watching the watchers?

Artificial intelligence now shapes decisions that once belonged exclusively to humans. It influences who receives a loan, which medical images are escalated for review, how online speech is ranked, and increasingly how governments and corporations assess risk. As these systems have grown in power and reach, the question of regulation has moved from a niche policy concern to a central issue of public trust. Yet while governments around the world are rushing to assert oversight, the regulatory structures emerging around AI remain fragile, uneven, and in many cases dependent on the very institutions they are meant to supervise.

The pace of AI deployment has exposed a fundamental tension. Regulation is built on deliberation, consultation, and enforcement. AI development is built on iteration, speed, and scale. That mismatch has defined the regulatory landscape from the beginning and continues to shape its limitations. Declarations of intent and high-profile summits may signal seriousness, but intent alone does not guarantee control.

The European Union has moved further than any other jurisdiction in formalizing AI oversight. The EU Artificial Intelligence Act establishes a risk-based framework that classifies systems according to their potential harm, with strict obligations for those deemed high risk, including documentation, transparency, and human oversight requirements.[1]

In contrast, the United States has adopted a more fragmented approach. Rather than comprehensive legislation, it relies on executive authority and existing regulators. Executive Order 14110 directed federal agencies to develop standards for safe and trustworthy AI before its rescission in early 2025, and enforcement remains distributed across institutions whose mandates predate modern machine learning systems.[2]

The United Kingdom has pursued a different strategy, emphasizing coordination and international leadership over direct enforcement. Through the creation of the AI Safety Institute, the UK has positioned itself as a convening authority for research and dialogue, prioritizing alignment and cooperation rather than prescriptive regulation.[3]

Despite their differences, these regulatory models share a structural weakness. Oversight bodies often depend on information supplied by the companies they regulate, creating asymmetries that limit effective scrutiny. Intergovernmental analyses have repeatedly warned that self-reporting frameworks weaken accountability and obscure real-world risk.[4]

Time further complicates governance. Legal and regulatory systems evolve slowly, while AI systems are updated continuously. Scholars have long observed a widening gap between the pace of technological change and the capacity of legal frameworks to respond, a lag that becomes more consequential as systems gain autonomy and scale.[5]

Jurisdiction adds another layer of complexity. AI systems cross borders effortlessly, while regulatory authority does not. This dynamic produces spillover effects, where rules set in one region influence global practices, while simultaneously enabling regulatory arbitrage as firms navigate differences between regimes.[6]

Finally, governance efforts face the persistent risk of regulatory capture. Close relationships between industry and oversight bodies can subtly reshape priorities and enforcement over time, reducing the independence that regulation requires to function effectively.[7]

References

[1] European Union, Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), Official Journal of the European Union

[2] United States, Executive Order 14110: Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, Federal Register

[3] UK Government, AI Safety Institute, UK National Archives snapshot

[4] OECD, Artificial Intelligence in Society, OECD Publishing, 2019

[5] Marchant, G., Allenby, B., and Herkert, J., The growing gap between emerging technologies and legal-ethical oversight, Springer, 2011

[6] European Parliament, Artificial intelligence governance: Cross-border challenges and regulatory cooperation

[7] Dal Bó, E., Regulatory capture: A review, Oxford Review of Economic Policy, 2006

(Mark Jennings-Bates, BIG Media Ltd., 2026)

Mark Jennings-Bates
Mark Jennings-Bates is a pioneer in next-generation AI frameworks at MosaicDM, where he leads the development of deterministic intelligence systems that fundamentally transform how artificial intelligence operates. His approach to innovation mirrors the precision and strategic thinking that made him a Canadian-championship-winning rally driver and enabled him to lead a team to a Guinness World Record for the longest paramotor expedition. In contrast to traditional AI systems, which rely on probabilistic outputs and statistical approximations, Mark's work focuses on creating AI solutions that deliver mathematically rigorous, reproducible, and trustworthy results.
