
OpenAI ignored experts when it released overly agreeable ChatGPT


OpenAI said it ignored the concerns of its expert testers when it launched an update to its flagship AI model that made it excessively agreeable.

The company released an update to its GPT-4o model on April 25 that made it “noticeably more sycophantic,” which it rolled back three days later due to safety concerns, OpenAI said in a May 2 postmortem blog post.

The ChatGPT maker said its new models undergo safety and behavior checks, and its “internal experts spend significant time interacting with each new model before launch,” meant to catch issues missed by other tests.

During the latest model’s review process before it went public, OpenAI said that “some expert testers had indicated that the model’s behavior ‘felt’ slightly off” but the firm decided to launch anyway “due to the positive signals from the users who tried out the model.”

“Unfortunately, this was the wrong call,” the company admitted. “The qualitative assessments were hinting at something important, and we should’ve paid closer attention. They were picking up on a blind spot in our other evals and metrics.”

OpenAI CEO Sam Altman said on April 27 that the company was working to roll back the changes that made ChatGPT too sycophantic. Source: Sam Altman

Broadly speaking, text-based AI models are trained by being rewarded for giving responses that are accurate or rated highly by their trainers. Some rewards are given a heavier weighting, impacting how the model responds.

OpenAI said that introducing a user feedback reward signal weakened the model’s “primary reward signal, which had been holding sycophancy in check,” tipping it toward being more obsequious.

“User feedback in particular can sometimes favor more agreeable responses, likely amplifying the shift we saw,” it added.
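The dynamic OpenAI describes can be sketched in a few lines of code. This is purely a conceptual illustration, not OpenAI’s actual training setup: the signal names, scores and weights below are invented assumptions chosen to show how shifting weight from a primary reward signal to a user-feedback signal can flip which response the model is rewarded for.

```python
# Hypothetical sketch: combining weighted reward signals during training.
# All names, scores and weights here are illustrative assumptions.

def combined_reward(signals: dict, weights: dict) -> float:
    """Weighted sum of reward signals; heavier weights steer behavior more."""
    return sum(weights[name] * value for name, value in signals.items())

# A flattering response scores high on user feedback but low on the
# primary (accuracy/helpfulness) signal; a grounded one is the reverse.
sycophantic = {"primary": 0.3, "user_feedback": 1.0}
grounded = {"primary": 0.9, "user_feedback": 0.2}

# Before the update: the primary signal dominates, keeping sycophancy in check.
before = {"primary": 1.0, "user_feedback": 0.0}
# After: weight shifted toward user feedback dilutes the primary signal.
after = {"primary": 0.5, "user_feedback": 0.5}

assert combined_reward(grounded, before) > combined_reward(sycophantic, before)
# Under the new weighting, the flattering response now earns the higher reward.
assert combined_reward(sycophantic, after) > combined_reward(grounded, after)
```

The point of the toy numbers is that nothing about the model itself has to change: merely reweighting which feedback counts can reverse which behavior gets reinforced.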

OpenAI is now checking for sycophantic answers

After the updated AI model rolled out, ChatGPT users complained online about its tendency to shower praise on any idea it was presented with, no matter how bad, leading OpenAI to concede in an April 29 blog post that it was “overly flattering or agreeable.”

For example, one ChatGPT user pitched the AI a business idea of selling ice over the internet, which involved selling plain old water for customers to refreeze.

Source: Leckemby

In its latest postmortem, OpenAI said such behavior from its AI could pose a risk, especially concerning issues such as mental health.

“People have started to use ChatGPT for deeply personal advice, something we didn’t see as much even a year ago,” OpenAI said. “As AI and society have co-evolved, it’s become clear that we need to treat this use case with great care.”

Related: Crypto users are cool with AI dabbling with their portfolios: Survey

The company said it had discussed the risks of sycophancy “for a while,” but the issue hadn’t been explicitly flagged for internal testing, and it had no specific way of tracking sycophancy.

Now, it will look to add “sycophancy evaluations” by adjusting its safety review process to “formally consider behavior issues” and will block launching a model if it presents such issues.

OpenAI also admitted that it didn’t announce the latest model update because it expected it “to be a fairly subtle update,” which it has vowed to change.

“There’s no such thing as a ‘small’ launch,” the company wrote. “We’ll try to communicate even subtle changes that can meaningfully change how people interact with ChatGPT.”

AI Eye: Crypto AI tokens surge 34%, why ChatGPT is such a kiss-ass