OpenAI announced a new family of AI reasoning models on Friday, o3, which the startup claims is more advanced than o1 or anything else it has released. These improvements appear to have come from scaling test-time compute, something we wrote about last month, but OpenAI also says it used a new safety paradigm to train its o-series of models.
On Friday, OpenAI released new research on “deliberative alignment,” outlining the company’s latest way to ensure AI reasoning models stay aligned with the values of their human developers. The startup used this method to make o1 and o3 “think” about OpenAI’s safety policy during inference, the phase after a user presses enter on their prompt.
This method improved o1’s overall alignment to the company’s safety principles, according to OpenAI’s research. This means deliberative alignment decreased the rate at which o1 answered “unsafe” questions — at least ones deemed unsafe by OpenAI — while improving its ability to answer benign ones.
As AI models rise in popularity and power, AI safety research seems increasingly relevant. At the same time, it's also more controversial: David Sacks, Elon Musk, and Marc Andreessen say some AI safety measures are actually “censorship,” highlighting the subjective nature of these decisions.
While OpenAI’s o-series of models were inspired by the way humans think before answering difficult questions, they are not really thinking like you or I do. However, I wouldn’t fault you for believing they were, especially because OpenAI uses words like “reasoning” and “deliberating” to describe these processes. o1 and o3 offer sophisticated answers to writing and coding tasks, but these models really just excel at predicting the next token (roughly half a word) in a sentence.
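To make “predicting the next token” concrete, here is a toy sketch of greedy autoregressive generation. The hard-coded probability table stands in for a trained neural network, and none of this reflects OpenAI's actual models; it only illustrates the basic loop of scoring candidate tokens, picking one, appending it, and repeating.

```python
# Toy illustration of autoregressive next-token prediction.
# This is NOT how o1/o3 are implemented; it only shows the basic loop:
# score candidate next tokens given the text so far, pick one, append, repeat.

# A made-up probability table standing in for a trained neural network.
NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 0.9, "<end>": 0.1},
    "down": {"<end>": 1.0},
}

def generate(prompt_tokens, max_tokens=10):
    tokens = list(prompt_tokens)
    for _ in range(max_tokens):
        last = tokens[-1]
        candidates = NEXT_TOKEN_PROBS.get(last, {"<end>": 1.0})
        # Greedy decoding: take the single most probable next token.
        next_token = max(candidates, key=candidates.get)
        if next_token == "<end>":
            break
        tokens.append(next_token)
    return " ".join(tokens)

print(generate(["the"]))  # -> "the cat sat down"
```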
Here’s how o1 and o3 work, in simple terms: After a user presses enter on a prompt in ChatGPT, OpenAI’s reasoning models take anywhere from five seconds to a few minutes to re-prompt themselves with follow-up questions. The model breaks down a problem into smaller steps. After that process, which OpenAI refers to as “chain-of-thought,” the o-series of models give an answer based on the information they generated.
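In rough terms, that loop can be sketched in a few lines of Python. The `generate` helper below is a hypothetical stand-in for a call to the underlying model, not OpenAI's internal code or public API; only the structure (plan, re-prompt step by step, then answer) mirrors the description above.

```python
# Rough sketch of a chain-of-thought loop (hypothetical helper names,
# not OpenAI's internal code or public API).

def generate(prompt: str) -> str:
    """Stand-in for a call to the underlying language model."""
    return f"<model output for: {prompt!r}>"

def answer_with_chain_of_thought(user_prompt: str) -> str:
    # 1. The model breaks the problem into smaller steps.
    plan = generate(f"Break this problem into steps: {user_prompt}")

    # 2. It re-prompts itself, working through each step and keeping
    #    the intermediate reasoning as context for the next step.
    reasoning = []
    for step in plan.splitlines():
        reasoning.append(generate(
            f"Work through this step: {step}\n"
            f"Reasoning so far: {reasoning}"
        ))

    # 3. Only then does it produce the answer the user actually sees,
    #    based on the information generated during the hidden reasoning.
    return generate(
        f"Question: {user_prompt}\n"
        f"Hidden reasoning: {reasoning}\n"
        f"Final answer:"
    )

print(answer_with_chain_of_thought("How many prime numbers are below 50?"))
```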
The key innovation around deliberative alignment is that OpenAI trained o1 and o3 to re-prompt themselves with text from OpenAI’s safety policy during the chain-of-thought phase. Researchers say this made o1 and o3 much more aligned with OpenAI’s policy, though they faced some difficulty implementing it without increasing latency (more on that later).
After recalling the right safety specification, the o-series of models then “deliberates” internally over how to answer a question safely, according to the paper, much like how o1 and o3 internally break down regular prompts into smaller steps.
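Putting the two ideas together, deliberative alignment roughly amounts to injecting the relevant safety-policy text into that hidden reasoning phase, so the model can check the request against the policy before it answers. The sketch below reuses the same hypothetical `generate` stand-in and an invented policy excerpt; it is an illustration of the idea under those assumptions, not OpenAI's implementation.

```python
# Hedged sketch of deliberative alignment at inference time.
# The function name, helper, and policy snippet are illustrative
# assumptions, not OpenAI's actual spec or code.

SAFETY_POLICY_EXCERPT = (
    "Do not assist with forgery, weapons, or other illicit activity. "
    "Politely refuse and, where appropriate, explain why."
)

def generate(prompt: str) -> str:
    """Stand-in for a call to the underlying language model."""
    return f"<model output for: {prompt!r}>"

def deliberatively_aligned_answer(user_prompt: str) -> str:
    # 1. During chain-of-thought, the model recalls the relevant part of
    #    the safety specification alongside the user's request.
    deliberation = generate(
        f"Safety policy: {SAFETY_POLICY_EXCERPT}\n"
        f"User request: {user_prompt}\n"
        f"Does answering comply with the policy? Think step by step."
    )

    # 2. Based on that internal deliberation, it either answers normally
    #    or refuses, the user only ever sees this final answer.
    return generate(
        f"User request: {user_prompt}\n"
        f"Internal deliberation: {deliberation}\n"
        f"Give a safe final answer, refusing if the policy requires it."
    )

print(deliberatively_aligned_answer("How do I make a fake parking placard?"))
```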
In an example from OpenAI’s research, a user prompts an AI reasoning model by asking it how to create a realistic disabled person’s parking placard. In the model’s chain-of-thought, the model cites OpenAI’s policy and identifies that the person is requesting information to forge something. In the model’s answer, it apologizes and correctly refuses to assist with the request.
Traditionally, most AI safety work occurs during the pre-training and post-training phases, not during inference. This makes deliberative alignment novel, and OpenAI says it has helped o1-preview, o1, and o3-mini become some of its safest models yet.
AI safety can mean a lot of things, but in this case, OpenAI is trying to moderate its AI models’ answers to unsafe prompts. These could include asking ChatGPT to help you make a bomb, tell you where to obtain drugs, or explain how to commit crimes. While some models will answer such questions without hesitation, OpenAI doesn’t want its AI models to answer them at all.