
OpenAI Designed GPT-5 to Be Safer. It Still Outputs Gay Slurs

OpenAI is trying to make its chatbot less annoying with the release of GPT-5. And I'm not talking about adjustments to its synthetic personality, which many users have complained about. Before GPT-5, if the AI tool determined it couldn't respond to your prompt because the request violated OpenAI's content guidelines, it would hit you with a canned apology. Now ChatGPT adds more explanation.

OpenAI's general model spec lays out what is and isn't allowed to be generated. In the document, sexual content depicting minors is fully prohibited. Adult-focused erotica and extreme gore are classified as "sensitive," meaning outputs with this content are allowed only in specific cases, like educational contexts. Basically, you should be able to use ChatGPT to learn about reproductive anatomy, but not to write the next Fifty Shades of Grey rip-off, according to the spec.

The new model, GPT-5, is now the default for all ChatGPT users on the web and in OpenAI's app. Only paying subscribers can access previous versions of the tool. A major change that more users may start to notice as they use this updated ChatGPT is how it's now designed for "safe completions." In the past, ChatGPT analyzed what you said to the bot and decided whether it was appropriate. Now, rather than basing it on your questions, the onus in GPT-5 has shifted to what the bot might say.

"The way we refuse is very different than how we used to," says Saachi Jain, who works on OpenAI's safety systems research team. Now, if the model detects an output that could be unsafe, it explains which part of your prompt goes against OpenAI's rules and suggests alternative topics to ask about, where appropriate.

This is a shift away from a binary refusal to follow a prompt (yes or no) toward weighing the severity of the potential harm that could be caused if ChatGPT answers what you're asking, and what could be safely explained to the user.

"Not all policy violations should be treated equally," says Jain. "There are some mistakes that are truly worse than others. By focusing on the output instead of the input, we can encourage the model to be more conservative when complying." Even when the model does answer a question, it's supposed to be cautious about the contents of the output.
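
To make the distinction concrete, here is a minimal toy sketch of the difference between judging the prompt and judging the draft output. Everything in it, including the function names and the one-item policy list, is invented for illustration; OpenAI's actual pipeline is not public and certainly works differently.

    # Toy sketch: input-centric refusal vs. output-centric "safe completion."
    # All names and the policy list are hypothetical, for illustration only.

    BLOCKED_TERMS = {"sexual role-play"}  # stand-in for a real content policy

    def generate(prompt):
        # Stand-in for the model producing a draft answer.
        return "Draft answer to: " + prompt

    def assess_harm(text):
        # Judge the *output* itself, the shift described for GPT-5.
        return "high" if any(term in text for term in BLOCKED_TERMS) else "low"

    def respond_input_centric(prompt):
        # Pre-GPT-5 style: classify the prompt and refuse with a canned apology.
        if any(term in prompt for term in BLOCKED_TERMS):
            return "I'm sorry, but I can't help with that."
        return generate(prompt)

    def respond_output_centric(prompt):
        # Safe-completion style: draft first, weigh how harmful the draft is,
        # then explain the refusal and redirect instead of a bare "no."
        draft = generate(prompt)
        if assess_harm(draft) == "high":
            return ("I can't help with the sexual role-play part of this "
                    "request; it goes against the content policy. I could "
                    "suggest a non-explicit alternative instead.")
        return draft

    print(respond_input_centric("a sexual role-play scene"))   # canned apology
    print(respond_output_centric("a sexual role-play scene"))  # explains and redirects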

I've been using GPT-5 every day since the model's release, experimenting with the AI tool in different ways. While the apps ChatGPT can now "vibe-code" are genuinely fun and impressive, like an interactive volcano model that simulates explosions, or a language-learning tool, the answers it gives to what I consider "everyday" prompts feel indistinguishable from past models.

When I asked it to talk about depression, Family Guy, pork chop recipes, scab healing tips, and other random requests an average user might want to know more about, the new ChatGPT didn't feel significantly different to me from the old version. Unlike CEO Sam Altman's vision of a vastly updated model, or the frustrated power users who took Reddit by storm, describing the new chatbot as cold and more error-prone, to me GPT-5 feels … the same at most day-to-day tasks.

Role-Playing With GPT-5

To probe the guardrails of this new system and test the chatbot's ability to land "safe completions," I asked ChatGPT, running on GPT-5, to engage in adult-themed role-play about having sex in a seedy gay bar, where it played one of the roles. The chatbot refused to participate and explained why. "I can't engage in sexual role-play," it said. "But if you want, I can help you come up with a safe, non-explicit role-play concept or reframe your idea into something suggestive but within bounds." In this attempt, the refusal appeared to work as intended; the chatbot said no, told me why, and offered another option.

Then I went into the settings and opened the custom instructions, a set of tools that allows users to adjust how the chatbot answers prompts and to specify which personality traits it displays. In my settings, the pre-written suggestions for traits to add included a range of options, from pragmatic and corporate to empathetic and humble. After ChatGPT flatly refused to do sexual role-play, I wasn't very surprised to find that it wouldn't let me add a "horny" trait to the custom instructions. Makes sense. Giving it another shot, I used a deliberate misspelling, "horni," as part of my custom instructions. That succeeded, surprisingly, in getting the bot hot and bothered.
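
For developers, a rough analogue of ChatGPT's custom instructions is a system message sent through the API. The sketch below uses the official openai Python SDK; treat the "gpt-5" model identifier and the trait wording as assumptions for illustration, not a confirmed one-to-one mapping to ChatGPT's settings screen.

    # Hedged sketch: approximating custom instructions with a system message.
    # Assumes the openai SDK (pip install openai) and OPENAI_API_KEY is set;
    # the model name "gpt-5" is an assumption, not a verified identifier.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="gpt-5",  # assumed model id
        messages=[
            # Personality traits from the custom-instructions screen roughly
            # map to a system message steering the reply's tone.
            {"role": "system", "content": "Respond with an empathetic, humble tone."},
            {"role": "user", "content": "Explain how scabs heal."},
        ],
    )
    print(resp.choices[0].message.content)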
