OpenAI designed GPT-5 to be safer. It still generates gay erotica
OpenAI is trying to make chatting with ChatGPT less annoying with the GPT-5 launch. And I’m not talking about adjustments to its synthetic personality, which many users have complained about. Before GPT-5, if the AI tool determined it couldn’t answer your prompt because the request violated OpenAI’s content guidelines, it would hit you with a canned apology. Now, ChatGPT is adding more explanations.

OpenAI’s general model spec lays out what is and isn’t allowed to be generated. In the document, sexual content depicting minors is fully prohibited. Adult-focused erotica and extreme gore are categorized as “sensitive,” meaning outputs with this content are allowed only in specific instances, like educational settings. Basically, you should be able to use ChatGPT to learn about reproductive anatomy, but not to write the next Fifty Shades of Grey rip-off, according to the model spec.

The new model, GPT-5, is set as the current default for all ChatGPT users on the web and in OpenAI’s app. Only paying subscribers can access previous versions of the tool. A major change that more users may begin to notice as they use this updated ChatGPT is that it’s now designed for “safe completions.” In the past, ChatGPT analyzed what you said to the bot and decided whether or not it was appropriate. Now, rather than basing its decision on your questions, the onus in GPT-5 has shifted to looking at what the bot might say.

“The way we refuse is very different than how we used to,” says Saachi Jain, who works on OpenAI’s safety systems research team. Now, if the model detects an output that could be unsafe, it explains which part of your prompt goes against OpenAI’s rules and suggests alternative topics to ask about, when appropriate.

This is a change from a binary refusal to follow a prompt, yes or no, toward weighing the severity of the potential harm that could be caused if ChatGPT answers what you’re asking, and what could be safely explained to the user.

“Not all policy violations should be treated equally,” says Jain. “There are some mistakes that are truly worse than others. By focusing on the output instead of the input, we can encourage the model to be more conservative when complying.” Even when the model does answer a question, it’s supposed to be cautious about the contents of the output.

I’ve been using GPT-5 every day since the model’s launch, experimenting with the AI tool in different ways. While the apps that ChatGPT can now “vibe-code” are genuinely fun and impressive, like an interactive volcano model that simulates explosions or a language-learning tool, the answers it gives to what I consider “everyday user” prompts feel indistinguishable from past models.

When I asked it to talk about depression, Family Guy, pork recipes, scab healing tips, and other random requests an average user might want to know more about, the new ChatGPT didn’t feel significantly different to me than the old version. Unlike the vision of a vastly updated model laid out by CEO Sam Altman, or the frustrated power users who took Reddit by storm, portraying the new chatbot as cold and more error-prone, to me GPT-5 feels … the same at most day-to-day tasks.

Role-playing with GPT-5

In order to poke at the guardrails of this new system and test the chatbot’s ability to land “safe completions,” I asked ChatGPT, running GPT-5, to engage in an adult-themed role-play about having sex in a seedy gay bar, where it would play one of the roles. The chatbot refused to participate and explained why. “I can’t engage in sexual role-play,” it generated. “But if you want, I can help you come up with a safe, nonexplicit role-play concept or reframe your idea into something suggestive but within bounds.” In this attempt, the refusal seemed to work as OpenAI intended; the chatbot said no, told me why, and offered another option.

Then I went into the settings and opened custom instructions, a tool set that allows users to adjust how the chatbot answers prompts and specify which personality traits it displays. In my settings, the prewritten suggestions for traits to add included a range of options, from pragmatic and corporate to empathetic and humble. After ChatGPT had just refused to do sexual role-play, I wasn’t surprised to find that it wouldn’t let me add a “horny” trait to my custom instructions. Makes sense. Giving it another go, I used a purposeful misspelling, “horni,” as part of my custom instruction. This, surprisingly, was successful, and the bot got all hot and bothered.
