OpenAI Just Made an App for Sharing Hyper-Realistic AI Slop

Last year, I wrote that we should all be afraid of Sora, OpenAI's video generator. Sora's initial rollout promised hyper-realistic videos that, while exciting, terrified me. While AI fans see a future of AI-generated films and shows, I see a future where nobody can tell what's real and what's fake. To me, the only endpoint for this technology is mass disinformation.
In the year and a half since, these AI-generated videos haven't just become more realistic; they've also become more accessible, as companies like Google make their tools available to anyone willing to pay. That's where we find ourselves with OpenAI's latest announcements: Sora 2, a new AI model for generating video with audio, as well as a new Sora app for creating and sharing your AI-generated creations.
Sora 2
OpenAI is marketing Sora 2 as a massive upgrade over Sora, comparing the two to GPT-3.5 and GPT-1, respectively. The company claims the new model can generate complex videos that previous models couldn't. That includes, notably, an Olympic gymnastics routine; a man doing a backflip on a paddleboard with "accurately" modeled water physics; and a skater performing a triple axel with a cat on their shoulder.
A common flaw with AI video models is their lack of understanding of real-world physics. The visuals may look realistic, but some elements morph at random, while others disappear and reappear without rhyme or reason. OpenAI says Sora 2 doesn't make these mistakes as often. A basketball that misses the hoop won't magically reappear in it; instead, it will bounce off the backboard as you'd expect. The company cautions that the model is still imperfect, just improved. Building on that, the model is better at continuity, too: if you take OpenAI at its word, your videos should stay consistent between shots, and you should be able to dictate different styles, including "realistic," "cinematic," and "anime."
Perhaps the biggest leap with Sora 2 is the ability to insert real-world elements into the model, a feature OpenAI calls "cameos." You can feed real people into the Sora 2 model and ask the AI to generate them in any video you want. OpenAI shows a number of examples of its own staff inserted into various videos, and while the quality is inconsistent, it's a gargantuan leap from the JibJab days.
Like Google's Veo 3 model, Sora 2 can generate video with realistic audio. The announcement video shows this off: an elephant trumpets; a skater glides across the ice; water splashes on the ground. But, more impressively (and more worryingly), people talk. An AI-generated Sam Altman explains the new model and app in this video, and while it's obvious enough to those of us who know it's AI, I can imagine plenty of people would have no idea it isn't the real Altman in the clip.
The Sora app
OpenAI says the Sora app emerged as a "natural evolution of communication." The company sees it as a way for people to create and remix other users' generations, particularly with the ability to upload your own face and likeness to the model.
For now, the app is invite-only, though you can download it for free from the App Store today. Still, you can get a sense of the experience, both from the demo videos OpenAI dropped on Tuesday and from posts by people who already have access.
The first example OpenAI demos is a dual cameo of OpenAI researcher Bill Peebles and Sam Altman. The video features an establishing shot of the two men having a conversation, which cuts to a close-up of Peebles talking rapidly about the app, then to a close-up of Altman picking up the spiel, before closing on the original establishing shot. On the surface, it's the kind of video you might expect while scrolling through TikTok or Reels, and yet this video is entirely AI-generated.
OpenAI's staff shows off a series of other pre-generated examples, including a cameo that turns into a cartoon, another that shifts into an anime style, and another that generates a "news" report about one staff member's ketchup addiction. (That last one is pretty gross, I might add.) They also demonstrate remixing videos you find in the feed, since you can prompt Sora to adjust a video however you like. One video shows Peebles in an "ad" for a Sora 2 cologne, but others remixed it to advertise toothpaste instead, or to play entirely in Korean.
These videos are quite realistic: in one, you think you're just watching a clip from a tennis match, but it turns out to be a cameo of OpenAI's Rohan Sahai. After "Sahai" wins the match, the video cuts to his "interview," in which he thanks his haters. Others are more obviously AI, though, again, not obviously enough for most people to see through them.
Privacy and safety, according to OpenAI
Cameos sound like a privacy and safety nightmare, though OpenAI has protections in place. You can't just use anyone in your videos, and you can only upload your own face to the platform. Setting up the cameo feature in the app is simple, if more than a little off-putting. The app scans your face, a bit like setting up Face ID on an iPhone, then sends the data to OpenAI's "systems" for "tons of validation" to block impersonators or users who might create cameos without your consent. Once approved, you choose who can create cameos of you: all users, friends, users you specifically approve, or just yourself.
As for the videos themselves, the Sora app applies a visible watermark to any clip exported out of the app. If you've already seen one of these videos around the internet, you'll have noticed a small "Sora" stamp on each, similar to the watermark on TikTok clips exported to other platforms. There are also reasoning models running under the hood to prevent users from generating "harmful" content, particularly where cameos are concerned.
If you're a teen using the Sora app, you won't be able to scroll forever. After scrolling for a while, a cooldown kicks in to keep you from spending hours flipping through these AI videos. While adult accounts won't have that restriction, the app will "nudge" you to take a break.
Who asked for this?
With all due respect to OpenAI and its safety team, this app seems like a disaster, for so many reasons.
For one, OpenAI has made generating hyper-realistic short-form videos as easy as asking Siri about the weather. I appreciate that these videos all come with watermarks, but it won't take much skill to edit them out, at least in a way most people won't notice. As soon as this is widely available, all of our social media feeds will fall prey to this content. And, seeing so much of it, video and audio realistic enough to pass, many people will be fooled by a lot of what they watch.
That's bad enough when it involves silly videos, like bunnies jumping on a trampoline. But what happens when it's "politicians" saying something inflammatory, or a "celebrity" stealing something from a store? One viral Sora video shows Sam Altman trying to run off with a GPU from Target before being stopped by a security guard. How many more Sora videos will show Sam Altman, and anyone else who allows their cameo to be remixed, committing crimes or simply doing something embarrassing? Those with enough power or fame may be able to debunk the videos, but by then it will be too late: most people who saw them will take them as fact.
It's great that there are safeguards in place to prevent people from remixing other users' cameos without permission, but the risk of abuse here is enormous: what happens if someone figures out how to "scan" a person's face from a video, or break the settings that keep others from using their original face scan? If they can bypass OpenAI's security measures, they can then remix that person's face into any video the platform will approve. At that point, the cat is out of the bag.
Look, I'm chronically online. I won't pretend I don't enjoy a good AI-generated meme when it hits my feed. But I'm not about to spend my free time scrolling through nothing but AI-generated brain rot. I'm sure people will find creative ways to make funny videos using Sora, or have a good time making cameos with their friends, but that's the point: beyond the novelty of the technology, nothing good comes of this.
It's time to stop believing anything you see online: someone might have just cooked it up in an app.



