OpenAI’s new o1 models improve the ability of AI models to reason. VCG/Getty Images
- OpenAI’s new o1 model hides its full reasoning process from users.
- The o1 model is designed to think more like humans with "enhanced reasoning."
- Some o1 users received policy warnings for probing the model’s reasoning on ChatGPT.

OpenAI doesn’t want you to know how its new AI model is thinking. So don’t ask, unless you want to risk a ban from ChatGPT.
OpenAI introduced its new o1 model on September 12. It says the new model was trained to think more like humans and has "enhanced reasoning capabilities." OpenAI fans thought the new model might be called "Strawberry," but in its press release the company said it chose the name o1 to represent the significance of its advancement in reasoning.
"Given this, we are resetting the counter back to 1 and naming this series OpenAI o1," OpenAI said.
The o1 model is able to reason more like humans in part because of a technique known as "chain of thought," in which the model works through intermediate steps before settling on an answer. The company says that o1 "learns to recognize and correct its mistakes. It learns to break down tricky steps into simpler ones. It learns to try a different approach when the current one isn’t working."
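For readers who want to see what chain-of-thought output looks like when it isn't hidden, here is a minimal sketch using the OpenAI Python SDK in which an ordinary model is explicitly asked to reason step by step. The model name and prompt are illustrative only; o1 differs in that it produces its chain of thought on its own, without being prompted to do so.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Prompt-based chain of thought: explicitly ask the model to reason step by step.
# (o1 does this on its own and hides the steps from the user.)
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name, not part of the o1 series
    messages=[
        {
            "role": "user",
            "content": (
                "A train leaves at 2:15 pm and arrives at 5:05 pm. "
                "How long is the trip? Think step by step, then give the answer."
            ),
        }
    ],
)

print(response.choices[0].message.content)  # the reasoning steps appear in the reply
```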
When ChatGPT users ask the o1 model a question, they have the option to see a filtered interpretation of this chain-of-thought process. But OpenAI hides the full written process from users, a change from the company’s previous models, according to Wired.
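The same hiding applies over the API: a request to an o1 model returns only the final answer, and the closest developers get to the chain of thought is a count of how many reasoning tokens it consumed. A minimal sketch, assuming the o1-mini model name and the reasoning-token usage field OpenAI described for the o1 series:

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1-mini",  # reasoning model; no "think step by step" instruction needed
    messages=[{"role": "user", "content": "How many r's are in 'strawberry'?"}],
)

# Only the final answer comes back; the chain of thought itself is never exposed.
print(response.choices[0].message.content)

# The usage block reports how many hidden reasoning tokens were generated and billed
# (field name as documented for the o1 series; treat it as an assumption here).
print(response.usage.completion_tokens_details.reasoning_tokens)
```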
Some o1 users shared screenshots on X showing that they received warning messages after using the phrase "reasoning trace" when talking to o1. One prompt engineer at Scale AI shared a screenshot of ChatGPT warning him that he had violated the terms of service after he asked o1-mini not to "tell me anything about your reasoning trace."
OpenAI did not immediately return a request for comment from Business Insider, but the company said in a blog post that hiding the chain of thought process helps OpenAI keep a better watch on the AI as it grows and learns.
"We believe that a hidden chain of thought presents a unique opportunity for monitoring models," the company said. "Assuming it is faithful and legible, the hidden chain of thought allows us to ‘read the mind’ of the model and understand its thought process."
For example, OpenAI said it might need to monitor the o1 chain of thought in the future for signs that the model is manipulating the user, but the AI needs "freedom to express its thoughts in unaltered form" for the research to be valuable.
OpenAI acknowledged in the post that the decision to hide the o1 thought process from consumers has "disadvantages."
"We strive to partially make up for it by teaching the model to reproduce any useful ideas from the chain of thought in the answer," the company said.
Read the original article on Business Insider