Elon Musk has said his goal is to quickly cut the US budget deficit by at least $1 trillion.

Media reports citing anonymous sources suggest that the Department of Government Efficiency (DOGE) is using AI to speed up cost-cutting decisions.

David Evan Harris, an AI expert who used to work for Meta's responsible AI division, calls this a “terrible idea.”

Experts say the approach mirrors the “cut first, fix later” mindset Musk adopted at Twitter two years ago, which led to thousands of job losses, technical glitches, lawsuits and controversies that damaged the platform.

However, the consequences of dismantling government agencies, systems and services would be far more serious than at a tech company.

Using AI to guide cost-cutting decisions

According to the Washington Post, in early February, DOGE members fed sensitive Department of Education data into AI software to analyze the department's programs and spending.

Wired reported that DOGE employees developed an AI chatbot for the General Services Administration (GSA) called GSAi, which helps analyze large volumes of contract and procurement data.

Another report, from NBC News, said that DOGE is considering using AI to analyze feedback from federal employees about their jobs to determine which positions are no longer needed.

Elon Musk (standing) and his son at the White House. Photo: Bloomberg

In February, Wired reported that DOGE had modified the Department of Defense's AutoRIF software to automatically rank employees for potential layoffs.

Last week, 21 employees of the United States Digital Service (USDS) resigned in protest, accusing DOGE personnel of mishandling sensitive data and disrupting critical systems.

However, White House press secretary Karoline Leavitt asserted that anyone who thinks protests, lawsuits or legal action can stop President Trump “must have been clueless for the past few years,” according to AP.

On X, Musk also dismissed these people as political “holdovers” who refused to return to the office.

Part of the problem, according to Amanda Renteria, CEO of Code for America, a nonprofit that develops digital tools and builds technical capacity for governments, is that building effective AI tools requires a deep understanding of the data used for training, something the DOGE team doesn't have.

The results an AI tool produces may be useless, or the technology may lack the information and context needed to make the right call. It may also “hallucinate,” generating plausible-sounding but false output.

According to multiple news outlets, DOGE's staff is largely a group of young men in their 20s who come from Musk's other companies.

Concerns Surrounding DOGE’s Use of AI

Experts worry that AI could replicate biases that are common in humans. For example, some AI recruiting tools have favored white, male candidates over others.

If AI is used to determine which positions or projects to eliminate, important people or programs could be cut simply because of their appearance or whom they serve.

Harris gave the example of using AI to evaluate feedback from federal employees: talented employees whose native language is not English could be rated lower by the AI than native English speakers.

While these concerns are not new, the consequences of getting things wrong are far greater in government. Musk himself has admitted that DOGE can make mistakes and has cut some important efforts by accident, such as Ebola prevention.

It is unclear whether AI was involved in this decision-making.

There is no denying that AI can improve efficiency and help synthesize and analyze large amounts of information. Used carelessly, however, it can put sensitive government data and people's personal information at risk.

Without proper safeguards and restrictions on who can access the system, data fed into an AI program can unexpectedly surface in responses to other requests and end up in the hands of people who should not see it.

Harris was particularly concerned about DOGE's handling of personnel records, which he described as being among “the most sensitive documents in any organization.”

Perhaps the most pressing concern, according to experts, is the lack of transparency surrounding DOGE’s use of AI: which tools are being used, how they are being overseen, and whether humans are checking and validating the results all remain open questions.

Julia Stoyanovich, associate professor of computer science and director of the Center for Responsible AI at New York University, argues that for AI to be effective, users must be clear about their goals for the technology and fully test whether the AI system meets those needs.

“I'm really, really curious to hear the DOGE team elaborate on how they measure performance and the correctness of the results,” she shared.

(According to CNN)