Will Smith's concert video highlights concerns over AI-simulated crowds : NPR

An AI-created crowd at a major public event, from OpenAI's promotional video for Sora 2, its new video-generation platform. Crowd scenes have traditionally posed a major technical challenge for companies like OpenAI and Google, but their models are improving all the time.
OPENAI
A Will Smith concert video went viral on the internet recently, not for the performance, but for the crowd. Eagle-eyed viewers noticed strange fingers and faces in the audience, among other visual glitches, and suspected AI manipulation.
Crowd scenes pose a particular technological challenge for AI image-creation tools, especially video. (Smith's team has not publicly commented on, or responded to an NPR inquiry about, how the video was made.) "You're managing so many complex details," said San Francisco-based visual artist and researcher Kyt Janae, an expert in AI imagery. "You have every individual human being in the crowd. They're all moving independently and have unique characteristics: their hair, their face, their hat, their phone, their shirt."

But the latest AI video-generation models, such as Google's Veo 3 and OpenAI's Sora 2, are getting quite good at it. "We are entering a world where, in a generous estimate of one year, the lines of reality will become really blurry," Janae said. "And verifying what is real and what is not real will almost become like a practice."
Why crowd images matter
This development could have serious consequences in a society where images of large, engaged crowds at public events such as rock concerts, protests and political rallies carry major currency. "We want a visual metric, a way to determine whether someone is successful or not," said Thomas Smith, CEO of Gado Images, a company that uses AI to help manage visual archives. "And crowd size is often a good indicator."
A report from the global consulting firm Capgemini shows that nearly three-quarters of images shared on social media in 2023 were generated using AI. As the technology becomes ever more adept at creating convincing crowd scenes, manipulating visuals has never been easier. With that comes both creative opportunity and societal danger. "AI is a good way to cheat and inflate the size of your crowd," Smith said.

He added that there is also a flip side to this phenomenon. "If there is a real image that surfaces and it shows something that is politically inconvenient or embarrassing, there will also be a tendency to say, 'No, that's an AI fake.'"
An example of this occurred in August 2024, when then-Republican presidential candidate Donald Trump spread false claims that Democrat Kamala Harris' team had used AI to create an image of a large crowd of supporters.
Charlie Fink of Chapman University, who writes about AI and other emerging technologies for Forbes, said it's particularly easy to dupe people into believing that a crowd of fake people is real, or that a real crowd scene is fake, because of how the images are delivered. "The challenge is that most people are viewing content on a small screen, and most people are not terribly critical of what they see and hear," Fink said. "If it looks real, it's real."
Balancing creativity and public safety
For the tech companies behind image generators, and the social media platforms where AI-generated still images and videos circulate, there is a delicate balance to strike between letting users create ever more realistic and believable content, including detailed crowd scenes, and the potential for harm.
"The more realistic and believable results we can create, the more creative expression it affords people," said Oliver Wang, a principal scientist at Google DeepMind who co-leads the company's image-generation work. "But misinformation is something that we take very seriously. So we embed all of the images we generate with a visible watermark and an invisible watermark."
However, the visible watermark, the one meant for the public, currently displayed on videos created using Google's Veo 3 is tiny and easy to miss, tucked into a corner of the screen. (Invisible watermarks, like Google's SynthID, aren't visible to regular users; they help tech companies track AI content behind the scenes.)
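To make the idea of an invisible watermark concrete: the sketch below is a toy illustration only, not how SynthID actually works (Google's method is a proprietary, learned watermark designed to survive editing and compression). It hides a short machine-readable tag in the least-significant bits of pixel values, changing each marked pixel by at most one brightness level, so a viewer sees nothing while software can recover the tag:

```python
# Toy invisible watermark via least-significant-bit (LSB) encoding.
# Illustrative only; real systems like SynthID are far more robust.

def embed(pixels, tag):
    """Hide the bytes of `tag` in the lowest bit of successive pixels."""
    bits = [(byte >> i) & 1 for byte in tag.encode() for i in range(8)]
    marked = list(pixels)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit  # overwrite only the lowest bit
    return marked

def extract(pixels, length):
    """Read `length` hidden bytes back out of the lowest bits."""
    data = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        data.append(byte)
    return data.decode()

image = [128] * 64              # stand-in for 8-bit grayscale pixel values
marked = embed(image, "AI-GEN") # hypothetical 6-byte tag
print(extract(marked, 6))       # prints "AI-GEN"
```

Because each pixel shifts by at most one level out of 256, the mark is imperceptible, which is also why simple LSB schemes are fragile: re-encoding or resizing the image destroys them, one reason production watermarks take a very different approach.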
And AI labeling systems are still applied quite unevenly across platforms. There are not yet industry-wide standards, although the companies NPR spoke with for this story said they are motivated to develop them.
Meta, Instagram's parent company, currently labels uploaded AI-generated content when users disclose it or when its systems detect it. On YouTube, Google automatically adds a label in the description to videos created using its own generative AI tools, and asks those who create media with other tools to self-disclose when AI is used. TikTok requires creators to label AI-generated or significantly edited content that shows realistic scenes or people; unlabeled content can be removed, restricted or labeled by its team, depending on the harm it could cause.
Meanwhile, Will Smith has been having more fun with AI since the release of that controversial concert video. He posted a playful follow-up in which the camera pulls back from footage of the performer working the stage energetically to reveal an audience filled with pumping cats. Smith included the caption: "The crowd was poppin' tonite!!"




