Artificial intelligence has moved to more human-like reasoning, OpenAI has claimed, with the launch of its latest model.
In a blogpost, the ChatGPT maker said that “much like a person would”, its new o1 series would spend more time thinking before it responded to queries.
The company said it could “reason through complex tasks and solve harder problems than previous models in science, coding, and maths”.
As the technology takes its time to mull over a conundrum, it will learn to refine its thinking process, try different strategies, and recognise mistakes, OpenAI claimed.
Mira Murati, chief technology officer of OpenAI, said: “This is a significant shift in the capabilities of the AI models, and also I expect there’s going to be a significant shift in how we work with the systems.”
People will “have a deeper form of collaboration” with the technology, akin to a “back and forth” conversation to assist reasoning, Murati said.
While existing AI systems display “fast, intuitive thinking”, o1 is the first to offer a “slower and more deliberate type of reasoning, which is quite similar to ours”, she added.
Murati expects the tool to be used to advance science, healthcare and education. The system can help scientists consider trade-offs involving ethical, philosophical and abstract reasoning, she said.
Mark Chen, vice-president of research at OpenAI, said coders, economists, hospital researchers and quantum physicists who had tested the model had found it better at solving problems than existing AI models. He said an economics professor remarked that o1 could solve an exam question he would give his PhD students “probably better than any of them”.
The new form of generative AI is limited. The model’s knowledge of the world extends only to October 2023, and it cannot yet browse the web or upload files and images.
Thursday’s announcement comes as OpenAI is said to be in talks to raise $6.5 billion at a $150 billion valuation from backers including Apple, Nvidia and Microsoft, according to Bloomberg News.
This would put it far ahead of its competitors, including Anthropic, recently valued at $18 billion, and Elon Musk’s xAI, recently valued at $24 billion.
The development of increasingly sophisticated generative AI has repeatedly prompted governments and technologists to raise safety concerns over its possible implications for society.
Last year Sam Altman, the company’s chief executive, was pushed out by the board, ostensibly because the company had prioritised commercial interests over its founding aim of developing AI for humanity’s benefit. His brief ousting is known within the company as “the blip”.
OpenAI has also faced criticism from within its own ranks for prioritising commercial activities over its commitment to safety.
Several safety executives have left the business, including Jan Leike, who said: “Building smarter-than-human machines is an inherently dangerous endeavour … but over the past years, safety culture and processes have taken a back seat to shiny products.”
In the statement launching the new technology, the company said: “As part of developing these new models, we have come up with a new safety training approach that harnesses their reasoning capabilities to make them adhere to safety and alignment guidelines.”
The company said it had also “formalised agreements” with the US and UK AI safety institutes, including allowing them early access to a research version of the model.