Friday, May 8, 2026

Beware of Government by AI

HH Sheikh Mohammed bin Hamad bin Mohammed Al Sharqi, Crown Prince of Fujairah, attended the launch of the UAE’s first fully integrated digital government services center in Fujairah.

As countries like the UAE delegate decision-making to algorithms, evidence shows these systems can cause widespread harm and erode democratic accountability

COMMENT | GABRIELA RAMOS & EMILIJA STOJMENOVA DUH | Earlier this month (April), the United Arab Emirates announced a plan to have half of its government services run on agentic AI within the next two years. Under the scheme, AI is supposed to serve as an “executive partner” that “analyzes, decides, executes, and improves in real time” without human intervention. Having spent our careers at the intersection of entrepreneurship, research, and digital policy, we can confidently pronounce this plan reckless. And because the UAE presents itself as a global digital model, other governments will feel pressure to follow suit.

That is a danger we must not ignore. We already know what happens when governments delegate decision-making to algorithms. In 2021, a self-learning system in the Netherlands wrongly accused roughly 35,000 families of childcare benefit fraud. Parents were ordered to repay tens of thousands of euros they never owed; homes were lost; and more than 2,000 children were taken into state care.

This outcome had actually been built into the system’s design. Dual nationality and foreign-sounding names were flagged as risk factors, baking illegal discrimination directly into the model. The result was a national scandal that ultimately led to the resignation of then-Prime Minister Mark Rutte’s government.

A similar dynamic played out in Australia. Between 2015 and 2019, the Robodebt scheme pursued 433,000 welfare recipients for A$1.7 billion (US$1.2 billion) in supposedly unlawful debts. The harm was profound, with mothers testifying that their sons killed themselves after receiving debt notices they had no way to challenge. A Royal Commission later found the programme “neither fair nor legal.”

In the United States, meanwhile, Arkansas and Idaho replaced nurses with algorithms to assess eligibility and levels of home care. People with cerebral palsy, quadriplegia, and multiple sclerosis had their care cut by 20–50% overnight. The courts eventually ordered a halt to the use of these systems, but not before the damage was done. Some patients were left without adequate support, leading to preventable medical complications.

Each of these cases involved a single system within a single agency. Now imagine such systems handling half of all government services, as the UAE’s plan proposes. Consider, for example, a single mother whose childcare benefits are frozen after an AI agent flags her bank activity, leaving her to navigate an appeals process that sends her from one automated system to another, with no human point of contact, just as the rent comes due. What about a migrant worker whose residency renewal is denied because the system cannot parse his employer’s filings—rendering him effectively undocumented—or an elderly widow whose pension is paused because two databases conflict and she cannot make sense of the interface?

These are not hypotheticals. They are documented patterns that agentic AI intensifies in ways no training programme can address within the UAE’s two-year timeline. Three key risks stand out. The first is scale: when a caseworker makes a mistake, one person suffers; when an AI agent does, thousands can be affected before anyone even notices. Then there is the opacity of AI decision-making. Given that agentic systems make decisions in sequence, with each step building on the last, the causal trail is effectively lost by the time harm becomes visible. Arkansas’s algorithmic health-benefit system offers a stark example. No one—not even its creators—could fully explain how it worked, prompting a federal court to describe it as “wildly irrational.” Moreover, a lack of transparency may be built in through trade secrets or proprietary frameworks underlying the algorithms.

Lastly, AI systems reverse the burden of proof, forcing citizens to prove their innocence rather than requiring the state to justify its actions. As the childcare-benefit scandal in the Netherlands and the Robodebt scheme in Australia showed, those who are least able to do so—people with limited time, money, language proficiency, and access to legal support—are hit the hardest.

The UAE claims that the guiding principle of its AI programme is “people come first.” But the design suggests otherwise. A government that evaluates ministries by “speed of adoption” and “mastery of AI” is not tracking what matters but replicating the same logic of efficiency that has already caused significant harm around the world.

Speed of adoption is a vendor’s metric. But a government’s core responsibility is a duty of care grounded in human judgment. This aligns with citizens’ expectations that the government will be accountable and transparent, and that it will explain decisions that affect their rights and freedoms. When governments enthusiastically embrace autonomous decision-making in the name of efficiency, they are, in effect, signing away that accountability.

Every algorithm-related scandal of the past few years has raised the same fundamental questions: Who is in charge, and who made the decision? In a government run by agentic AI, those questions no longer have clear answers. The system decides, updates itself, and moves on, leaving citizens with no recourse when things go wrong.

With the advent of AI, democratic accountability erodes not through an open power grab, but through a series of procurement decisions that quietly displace human oversight. By undermining trust in institutions at a time when it is already dangerously low, these systems ultimately serve the interests of the tech titans driving the AI revolution.

But it need not be this way. The UAE has the resources, talent, and political stability needed to build a genuinely human-centered digital government that could set the global standard by augmenting, rather than replacing, human decision-making.

The costs of getting this wrong will not be confined to the UAE. They will be borne by a single mother in another country whose benefits are cut by an algorithm she never knew existed, and by countless others like her around the world.

*****

Gabriela Ramos, Co-Chair of the Task Force on Inequalities and Social-Related Financial Disclosures, is a former assistant director-general for social and human sciences at UNESCO, where she oversaw the development of the Recommendation on the Ethics of AI, and a former OECD chief of staff and sherpa to the G20, G7, and APEC.

Emilija Stojmenova Duh, an Associate Professor of Electrical Engineering at the University of Ljubljana, is a member of the Globethics Board of Foundation and a former minister of digital transformation of Slovenia.

Source: Project Syndicate
