
In September, OpenAI introduced a new version of ChatGPT designed to reason through tasks involving math, science and computer programming. Unlike previous versions of the chatbot, this new technology could spend time "thinking" through complex problems before settling on an answer.
The company soon said its new reasoning technology had outperformed the industry's leading systems on a series of tests that track the progress of artificial intelligence.
Other companies, like Google, Anthropic and China's DeepSeek, now offer similar technologies.
But can AI really reason like a person? What does it mean for a computer to think? Are these systems actually approaching true intelligence?
Here is a guide.
What does it mean when an AI system reasons?
It means that the chatbot spends some additional time working on a problem.
"Reasoning is when the system does extra work after the question is asked," said Dan Klein, a professor of computer science at the University of California, Berkeley, and chief technology officer of Scaled Cognition, an AI start-up.
It may break the problem into individual steps or try to solve it through trial and error.
The original ChatGPT answered questions immediately. The new reasoning systems can work through a problem for several seconds, or even minutes, before answering.
Can you be more specific?
In some cases, a reasoning system will refine its approach to a question, repeatedly trying to improve the method it has chosen. Other times, it may try several different ways of approaching a problem before settling on one of them. Or it may go back and check some work it did a few seconds earlier, just to see if it was correct.
Basically, the system tries everything it can to answer your question.
This is something like a grade school student who is struggling to find a way to solve a math problem and scribbles several different options on a sheet of paper.
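For readers who want a concrete picture, here is a deliberately simplified sketch of that try-check-retry loop in Python. The problem format, the two strategies and the checker are all invented for illustration; no real reasoning model works this way internally.

```python
# A toy illustration of the "try, check, retry" behavior described above.
# Everything here is invented for illustration; real reasoning models do
# not run hand-written Python strategies like these.

def check(problem, answer):
    """Verify a candidate answer against the problem a*x + b == target."""
    a, b, target = problem
    return answer is not None and a * answer + b == target

def solve_by_algebra(problem):
    a, b, target = problem
    return (target - b) / a          # rearrange a*x + b == target

def solve_by_guessing(problem):
    # Trial and error, like a student scribbling options on paper.
    for guess in range(-100, 101):
        if check(problem, guess):
            return guess
    return None

def reason(problem):
    # Try several approaches, and check each answer before settling on it.
    for strategy in (solve_by_algebra, solve_by_guessing):
        answer = strategy(problem)
        if check(problem, answer):
            return answer
    return None

print(reason((3, 1, 10)))   # solves 3*x + 1 == 10, prints 3.0
```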
What kinds of questions require an AI system to reason?
It can potentially reason about anything. But reasoning is most effective when you ask questions involving math, science and computer programming.
How is a reasoning chatbot different from earlier chatbots?
You could ask earlier chatbots to show you how they had reached a particular answer or to check their own work. Because the original ChatGPT had learned from text on the internet, where people showed how they had gotten to an answer or checked their own work, it could do this kind of self-reflection, too.
But a reasoning system goes further. It can do these kinds of things without being asked. And it can do them in more extensive and complex ways.
Companies call it a reasoning system because it feels as if it operates more like a person thinking through a hard problem.
Why is AI reasoning important now?
Companies like OpenAI believe this is the best way to improve their chatbots.
For years, these companies relied on a simple concept: The more internet data they pumped into their chatbots, the better those systems performed.
But in 2024, they used up almost all of the text on the internet.
That meant they needed a new way of improving their chatbots. So they began building reasoning systems.
How do you create a reasoning system?
Last year, companies like OpenAI began to lean heavily on a technique called reinforcement learning.
Through this process, which can extend over months, an AI system can learn behavior through extensive trial and error. By working through thousands of math problems, for instance, it can learn which methods lead to the right answer and which do not.
Researchers have designed complex feedback mechanisms that show the system when it has done something right and when it has done something wrong.
"It is a little like training a dog," said Jerry Tworek, an OpenAI researcher. "If the system does well, you give it a cookie. If it does not do well, you say, 'Bad dog.'"
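Here is a minimal sketch, in Python, of the reward-and-penalty idea Mr. Tworek describes. The two "methods" and their success rates are made up, and a single score per method stands in for the billions of neural-network weights a real system would adjust.

```python
import random

# A bare-bones sketch of the reinforcement-learning idea: reward methods
# that produce right answers, penalize ones that do not. Both methods
# below are fictional, and their success rates are invented.

scores = {"method_a": 0.0, "method_b": 0.0}

def attempt(method):
    # Stand-in for the model answering a math problem with this method.
    # We pretend method_b is genuinely more reliable.
    return random.random() < (0.8 if method == "method_b" else 0.3)

for _ in range(10_000):
    if random.random() < 0.1:                 # occasionally explore
        method = random.choice(list(scores))
    else:                                     # otherwise use the favorite
        method = max(scores, key=scores.get)
    # The feedback step: a "cookie" for a right answer, a penalty otherwise.
    scores[method] += 1.0 if attempt(method) else -1.0

print(scores)   # method_b ends up with a much higher score
```

Over many trials, the better method accumulates rewards and the system comes to prefer it, which is the trial-and-error learning described above.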
(The New York Times sued OpenAI and its partner, Microsoft, in December, claiming copyright infringement of news content related to AI systems.)
Does reinforcement learning work?
It works reasonably well in certain areas, like math, science and computer programming. These are areas where companies can clearly define good behavior and bad. Math problems have definitive answers.
Reinforcement learning does not work as well in areas like creative writing, philosophy and ethics, where the distinction between good and bad is harder to pin down. Researchers say the process can still improve an AI system's overall performance, even when it answers questions outside math and science.
"It gradually learns which patterns of reasoning lead it in the right direction and which do not," said Jared Kaplan, the chief science officer at Anthropic.
Are reinforcement learning and reasoning systems the same thing?
No. Reinforcement learning is the method that companies use to build reasoning systems. It is the training phase that ultimately allows chatbots to reason.
Do these reasoning systems still make mistakes?
Absolutely. Everything a chatbot does is based on probabilities. It chooses the path that is most mathematically likely, based on all the data it has learned from, whether that data came from the internet or was generated through reinforcement learning. Sometimes it chooses an option that is wrong or does not make sense.
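A small sketch makes the point concrete: if even a wrong answer keeps some small probability, it will occasionally be chosen. The words and probabilities below are invented for illustration, not taken from any real model.

```python
import random

# A toy sketch of why probability-based answers sometimes go wrong. A
# chatbot picks each next word by sampling from learned probabilities.

next_word_probs = {
    "Paris": 0.90,     # the likely continuation of "The capital of France is"
    "Lyon": 0.07,
    "Berlin": 0.03,    # wrong, but never quite impossible
}

words = list(next_word_probs)
weights = list(next_word_probs.values())

samples = [random.choices(words, weights=weights)[0] for _ in range(1_000)]
print(samples.count("Berlin"))   # roughly 30 out of 1,000 answers are wrong
```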
Is this a path to a machine that matches human intelligence?
AI experts are divided on this question. These methods are still relatively new, and researchers are still trying to understand their limits. In the AI field, new methods often progress quickly at first, before slowing down.