Meta plans to replace humans with AI to assess the risks: NPR

People walk near a Meta sign outside the company's headquarters in Menlo Park, Calif.
Jeff Chiu/AP
For years, when Meta launched new features for Instagram, WhatsApp and Facebook, teams of reviewers evaluated the possible risks: Could it violate users' privacy? Could it harm minors? Could it worsen the spread of misleading or toxic content?
Until recently, what are known inside Meta as privacy and integrity reviews were conducted almost entirely by human evaluators.
But now, according to internal company documents obtained by NPR, up to 90% of all risk assessments will soon be automated.
In practice, this means that things like critical updates to Meta's algorithms, new safety features and changes to how content is allowed to be shared across the company's platforms will be mostly approved by a system powered by artificial intelligence, no longer subject to careful scrutiny by staffers who debate how a platform change could have unintended repercussions or be misused.
Inside Meta, the change is viewed as a win for product developers, who will now be able to release app updates and features faster. But current and former Meta employees fear the new automation push comes at the cost of letting AI make tricky determinations about how Meta's apps could cause real-world harm.
"Insofar as this process functionally means more stuff launching faster, with less rigorous scrutiny and opposition, it means you're creating higher risks," said a former Meta executive who requested anonymity out of fear of retaliation from the company. "Negative externalities of product changes are less likely to be prevented before they start causing problems in the world."
Meta said in a statement that it has invested billions of dollars to support user privacy.
Since 2012, Meta has been under the oversight of the Federal Trade Commission, after the agency reached an agreement with the company over how it handles users' personal information. As a result, privacy reviews of products have been required, according to current and former Meta employees.
In its statement, Meta said the changes to product risk reviews are intended to streamline decision-making, adding that "human expertise" is still used for "novel and complex issues" and that only "low-risk decisions" are being automated.
But internal documents reviewed by NPR show that Meta plans to automate reviews for sensitive areas, including AI safety, youth risk and a category known as integrity, which encompasses things like violent content and the spread of falsehoods.
Former Meta employee: 'Engineers are not privacy experts'
A slide describing the new process says product teams will in most cases receive an "instant decision" after completing a questionnaire about the project. That AI-driven decision will identify risk areas and the requirements to address them. Before launching, the product team must verify that it has met those requirements.
Meta founder and CEO Mark Zuckerberg speaks at LlamaCon 2025, an AI developer conference, in Menlo Park, Calif., on Tuesday, April 29, 2025.
Jeff Chiu/AP
Under the previous system, product and feature updates could not be shipped to billions of users until they received the blessing of risk assessors. Now, the engineers building Meta products are empowered to make their own judgments about risks.
In some cases, including projects involving new risks or when a product team wants additional feedback, projects will be given a manual review by humans, the slide says, but that will no longer be the default, as it was before. Now, the teams building products will make that call.

"Most product managers and engineers are not privacy experts, and that is not the focus of their job. It's not what they are primarily evaluated on, and it's not what they are incentivized to prioritize," said Zvika Krieger, who was director of responsible innovation at Meta until 2022.
"In the past, some of these kinds of self-assessments have become box-checking exercises that miss significant risks," he added.
Krieger said that while there is room to streamline reviews at Meta through automation, "if you push that too far, inevitably the quality of the review and the outcomes are going to suffer."
Meta downplayed concerns that the new system will introduce problems into the world, saying it is auditing the decisions that automated systems make on projects not evaluated by humans.
Meta's documents suggest its users in the European Union could be somewhat insulated from these changes. An internal announcement says that decision-making and oversight of products and user data in the European Union will remain with Meta's European headquarters in Ireland. The EU has regulations governing online platforms, including the Digital Services Act, which requires companies, including Meta, to strictly police their platforms and protect users from harmful content.
Some of the changes to the product review process were first reported by The Information, a tech news site. The internal documents seen by NPR show that employees were notified of the overhaul shortly after the company ended its fact-checking program and loosened its hate speech policies.
Taken together, the changes reflect a new emphasis at Meta in favor of more unrestrained speech and faster updates to its apps, a dismantling of various guardrails the company has enacted over the years to limit misuse of its platforms. The company's major shifts also follow CEO Mark Zuckerberg's efforts to curry favor with President Trump, whose election victory Zuckerberg has called a "cultural tipping point."
Is moving faster to assess risks 'self-defeating'?
Another factor driving the changes to product reviews is a broader, yearslong push to harness AI to help the company move faster amid growing competition from TikTok, OpenAI, Snap and other tech companies.
Meta said earlier this week that it is relying more on AI to help enforce its content moderation policies.
"We are beginning to see [large language models] operating beyond that of human performance for select policy areas," the company wrote in its latest quarterly integrity report. It said it is also using those AI models to screen some posts that the company is "highly confident" do not break its rules.
"This frees up capacity for our reviewers, allowing them to prioritize their expertise on content that's more likely to violate" Meta's rules, the company said.
Katie Harbath, founder and CEO of the technology policy firm Anchor Change, who spent a decade working on public policy at Facebook, said using automated systems to flag potential risks could help reduce duplicated efforts.
"If you want to move quickly and have high quality, you're going to need to incorporate more AI, because humans can only do so much in a given amount of time," she said. But she added that those systems also need checks and balances from humans.
Another former Meta employee, who spoke on the condition of anonymity because they also fear retaliation from the company, questioned whether moving faster on risk assessments is a good strategy for Meta.
"It seems almost self-defeating," the former employee said.
Michel Protti, Meta's chief privacy officer for product, said in a March post on the company's internal communication tool, Workplace, that the company is "empowering product teams" with the goal of "evolving Meta's risk management processes."
The automation rollout ramped up through April and May, said a current Meta employee familiar with product risk assessments who was not authorized to speak publicly about internal operations.
Protti said that automating risk reviews, and giving product teams more say over the potential risks posed by product updates in 90% of cases, is meant to "simplify decision-making." But some insiders say that rosy summary of removing humans from the risk assessment process considerably downplays the problems the changes could cause.
"I think it's fairly irresponsible given why we exist," said the Meta employee close to the risk review process. "We provide a human perspective on how things can go wrong."
Do you have information about changes at Meta? Reach out to these authors through encrypted communications on Signal. Bobby Allyn is available at ballyn.77 and Shannon Bond is available at shannonbond.01.
