
Why Trump’s order targeting “woke” AI may be impossible to follow

President Donald Trump displays a signed executive order at an AI summit on July 23, 2025, in Washington, DC

Chip Somodevilla/Getty Images

President Donald Trump wants to ensure the US government awards federal contracts only to artificial intelligence developers whose systems are “free from ideological bias”. But the new requirements could allow his administration to impose its own worldview on tech companies’ AI models – and companies may face significant challenges and risks in trying to modify their models to comply.

“The suggestion that government contracts should be structured to ensure AI systems are ‘objective’ and ‘free from ideological bias’ prompts the question: objective according to whom?” says Becca Branum at the Center for Democracy & Technology, a public policy non-profit in Washington DC.

Trump’s White House AI Action Plan, published on July 23, recommends updating federal guidelines “to ensure that the government only contracts with large language model (LLM) developers who ensure that their systems are objective and free from top-down ideological bias”. Trump signed a related executive order titled “Preventing Woke AI in the Federal Government” the same day.

The AI Action Plan also recommends that the US National Institute of Standards and Technology revise its AI risk management framework to “eliminate references to misinformation, diversity, equity and inclusion, and climate change”. The Trump administration has already cut funding for research studying misinformation and shut down DEI initiatives, as well as dismissing researchers working on the National Climate Assessment report and slashing clean energy spending in a bill backed by the Republican-majority Congress.

“AI systems cannot be considered ‘free from top-down bias’ if the government itself is imposing its worldview on the developers and users of these systems,” says Branum. “These incredibly vague standards are ripe for abuse.”

Now, AI developers that hold or are seeking federal contracts face the prospect of having to comply with the Trump administration’s push for AI models free of “ideological bias”. Amazon, Google and Microsoft have held federal contracts to provide AI-powered and cloud computing services to various government agencies, while Meta has made its Llama AI models available to US government agencies working on defence and national security applications.

In July 2025, the US Department of Defense’s Chief Digital and Artificial Intelligence Office announced it had awarded new contracts worth up to $200 million each to Anthropic, Google, OpenAI and Elon Musk’s xAI. The inclusion of xAI was notable given Musk’s recent role leading President Trump’s DOGE task force, which fired thousands of government employees – not to mention xAI’s Grok chatbot recently making headlines for expressing racist and antisemitic views while describing itself as “MechaHitler”. None of the companies provided responses when contacted by New Scientist, but some referred to their executives’ general statements praising Trump’s AI Action Plan.

In any case, it could be difficult for tech companies to ensure their AI models always align with the Trump administration’s preferred worldview, says Paul Röttger at Bocconi University in Italy. That is because large language models – the models powering popular AI chatbots such as OpenAI’s ChatGPT – have certain tendencies or biases instilled in them by the swathes of internet data they were originally trained on.

Many popular AI chatbots from both US and Chinese developers show surprisingly similar views that align more closely with the positions of US liberal voters on many politicised issues – such as gender equality and transgender women’s participation in women’s sports – when used for writing-assistance tasks, according to Röttger and his colleagues. It is unclear why this trend exists, but the team speculated it could be a consequence of training AI models to follow more general principles – such as incentivising truthfulness, fairness and kindness – rather than developers specifically aligning models with liberal positions.

AI developers can still “steer the model to write very specific things about specific issues” by refining the AI’s responses to certain user prompts, but that won’t fundamentally change a model’s default stance and implicit biases, says Röttger. This approach could also clash with broader AI training objectives, such as prioritising truthfulness, he says.

US tech companies could also potentially alienate many of their customers worldwide if they try to align their commercial AI models with the Trump administration’s worldview. “I’m interested to see how this plays out if the US now tries to impose a specific ideology on a model with a global user base,” says Röttger. “I think it could get very messy.”

AI models could try to approximate political neutrality if their developers publicly share more information about each model’s biases, or if they build a collection of “deliberately diverse models with differing ideological leanings”, says Jillian Fisher at the University of Washington. But “as of today, creating a truly politically neutral AI model may be impossible given the inherently subjective nature of neutrality and the many human choices needed to build these systems”, she says.
