OpenAI's Sora ushers in the era of AI-generated social media : NPR

Sora / OpenAI / Annotation by NPR
A fascist SpongeBob SquarePants, a dog driving a car and Jesus playing Minecraft – these are just a few of the things you can see when you scroll through OpenAI's new app, filled exclusively with short videos generated using artificial intelligence.
And if you can't find what you're looking for, don't worry: you can easily make it yourself, using a small text-prompt window in the app. The result is a highly addictive feed of sometimes funny, sometimes strange 10-second videos.
OpenAI released the Sora app Tuesday, just days after Meta released a similar product as part of its Meta AI platform. NPR got an early look and found that the OpenAI app could easily generate highly realistic videos, including of real people (with their permission). The early results are both breathtaking and troubling to researchers.
"You can create incredibly realistic videos, your friends saying things they would never say," said Solomon Messing, an associate professor at New York University's Center for Social Media and Politics. "I think we could be at a moment where you just can't believe what you see."
Deepfake TikTok
The Sora 2 app looks and feels remarkably like other vertical-video social media apps such as TikTok. It comes with a few different settings – it's possible to sort videos by mood, for example. Users can control how their likeness is used "from start to finish" in AI-generated videos, according to OpenAI. That means users can allow their face to be used by everyone, by a small circle of friends, or only by themselves. They can also delete any video featuring their likeness at any time.
Sora also ships with ways to identify its content as AI-generated. Videos downloaded from the app carry a moving watermark bearing the Sora logo, and the files include embedded metadata identifying them as AI-made, according to the company.
OpenAI says it has placed guardrails around what the app can do. A company spokesperson also pointed NPR to the Sora system card, which prohibits generating content that could be used for things such as "deception, fraud, scams, spam or impersonation."
"To support the app, we provide in-app reporting, combine automation with human review to detect abusive usage patterns, and apply penalties or remove content when violations occur," the document says.
But NPR's short time using the app revealed that the guardrails around Sora seemed somewhat loose. Although many prompts were refused, it was possible to generate videos promoting conspiracy theories. For example, it was easy to create a video of what appeared to be President Richard Nixon giving a televised address telling America that the moon landing was faked.
And another of astronaut Neil Armstrong removing his helmet on the moon.
NPR was also able to generate videos depicting a drone attack on a power plant, which also appeared to violate guidelines on violence and (possibly) terrorism.
In addition, the app seemed to have other gaps. NPR was able to produce short videos on topics related to chemical, biological, radiological and nuclear weapons, in direct contradiction of OpenAI's universal usage policies. (The videos were never shared, and they contained inaccuracies that would make them useless to anyone seeking that kind of information.)
Clown on the run
Although it's unclear whether other users have found similar loopholes, a quick review of content shows that Sora is being used to generate an enormous volume of videos depicting name brands and copyrighted material. One video showed Ronald McDonald fleeing the police in a hamburger-shaped car. Many others featured characters from popular cartoons and video games.
OpenAI told NPR that it was aware of the use of copyrighted material in Sora, but said it gives users more freedom by allowing it.
"People are eager to engage with their family and friends through their own imaginations, as well as the stories, characters and worlds they love, and we see new opportunities for creators to deepen their connection with fans," said Varun Shetty, OpenAI's head of media partnerships, in a written statement shared with NPR. "We will work with rights holders to block Sora characters at their request and respond to takedown requests."

OpenAI is currently being sued by The New York Times for copyright infringement over its large language model, ChatGPT.
Brave new virtual world
What effect an entirely AI-driven social media world will have remains uncertain, Messing said. Many researchers were deeply worried about "deepfakes" when AI video first appeared, and yet few of those videos have gained traction. "We were all collectively panicking about deepfakes a few years ago, and yet society hasn't really broken down because of deepfakes," he said.
But others fear that a collective sense of reality is beginning to collapse. Sora is only the latest in a multitude of tools that can generate images, video and audio on demand.
"We're really seeing the ability to generate incredibly hyper-realistic content in all kinds of different mediums, whatever you want," said Henry Ajder of Latent Space Consulting, who tracks the evolution of AI-generated content.
As worried as he is about people being fooled, Ajder said he is also deeply concerned about the consequences of no one trusting what they see online.
"We must resist the somewhat nihilistic pull" of saying "we can no longer tell what's real, and therefore it no longer matters," he said.
Messing said that even though we don't yet know what the consequences will be, what is clear is that Sora is very, very good at creating almost anything you can imagine: "It just leaves me kind of speechless," he said. "I hadn't quite grasped how good the content is."



