AI models lack the reasoning needed for AGI

The race to develop artificial general intelligence (AGI) still has a long way to go, according to Apple researchers who found that leading artificial intelligence models still struggle with reasoning.
Recent updates to leading large language models (LLMs) such as OpenAI’s ChatGPT and Anthropic’s Claude have added large reasoning models (LRMs), but their fundamental capabilities, scaling properties and limitations “remain insufficiently understood,” the Apple researchers said in a June paper called “The Illusion of Thinking.”
They noted that current evaluations focus mainly on established mathematical and coding benchmarks, “emphasizing final answer accuracy.”
However, this kind of evaluation provides little insight into the reasoning capabilities of AI models.
The research contrasts with the expectation that artificial general intelligence is just a few years away.
Apple researchers test AI models’ “thinking”
The researchers devised different puzzles to test “thinking” and “non-thinking” variants of Claude Sonnet, OpenAI’s o3-mini and o1, and DeepSeek’s R1 and V3 chatbots, going beyond standard mathematical benchmarks.
They found that “frontier LRMs face a complete accuracy collapse beyond certain complexities,” fail to generalize reasoning effectively, and lose their advantage as complexity increases, contrary to expectations for AGI capabilities.
“We found that LRMs have limitations in exact computation: they fail to use explicit algorithms and reason inconsistently across puzzles,” the researchers said.
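To make this evaluation setup concrete, here is a minimal sketch of how a puzzle-based reasoning test can be run. It assumes a Tower of Hanoi-style task as the controllable puzzle and uses a hypothetical `query_model` placeholder instead of a real chatbot API; it illustrates the general approach of scaling puzzle complexity and verifying answers, not the researchers’ actual code.

```python
# Minimal sketch: generate Tower of Hanoi instances of increasing size (the
# complexity knob), ask a model for a move list, and verify it by simulation.
# `query_model` is a hypothetical placeholder, not a real API.

from typing import List, Tuple

Move = Tuple[int, int]  # (from_peg, to_peg), pegs numbered 0-2


def solve_hanoi(n: int, src: int = 0, aux: int = 1, dst: int = 2) -> List[Move]:
    """Reference solution: the classic recursive algorithm (2^n - 1 moves)."""
    if n == 0:
        return []
    return (solve_hanoi(n - 1, src, dst, aux)
            + [(src, dst)]
            + solve_hanoi(n - 1, aux, src, dst))


def is_valid_solution(n: int, moves: List[Move]) -> bool:
    """Simulate the moves and check that all n disks end up on the last peg."""
    pegs = [list(range(n, 0, -1)), [], []]  # largest disk at the bottom
    for src, dst in moves:
        if not pegs[src]:
            return False                      # moving from an empty peg
        disk = pegs[src][-1]
        if pegs[dst] and pegs[dst][-1] < disk:
            return False                      # larger disk placed on a smaller one
        pegs[dst].append(pegs[src].pop())
    return pegs[2] == list(range(n, 0, -1))


def query_model(prompt: str) -> List[Move]:
    """Hypothetical stand-in for a call to an LRM/LLM chatbot.

    Here it just returns the reference solution so the harness runs end to end;
    a real evaluation would parse the model's text answer into moves.
    """
    n = int(prompt.split("n=")[1])
    return solve_hanoi(n)


if __name__ == "__main__":
    # Accuracy vs. complexity: sweep the number of disks and score each answer.
    for n in range(3, 11):
        moves = query_model(f"Solve Tower of Hanoi with n={n}")
        ok = is_valid_solution(n, moves)
        print(f"n={n:2d}  moves={len(moves):4d}  correct={ok}")
```

Sweeping the number of disks gives the complexity axis along which the paper reports the accuracy collapse.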
They found the models’ reasoning to be inconsistent and shallow, and also observed overthinking, with the AI chatbots generating correct answers early and then wandering into incorrect reasoning.
Related: AI to unify roles in Web3, DeFi challenges and games: DappRadar
The researchers concluded that LRMs mimic reasoning patterns without truly internalizing or generalizing them, falling short of AGI-level reasoning.
“These insights challenge prevailing assumptions about LRM capabilities and suggest that current approaches may be encountering fundamental barriers to generalizable reasoning,” they said.
The race to develop AGI
AGI is the holy grail of AI development: a state in which a machine can think and reason like a human and is on par with human intelligence.
In January, OpenAI CEO Sam Altman said the company was closer to building AGI than ever before. “We are now confident we know how to build AGI as we have traditionally understood it,” he said at the time.
In November, Anthropic CEO Dario Amodei said AGI would exceed human capabilities in the next year or two. “If you just eyeball the rate at which these capabilities are increasing, it does make you think that we’ll get there by 2026 or 2027,” he said.
Magazine: Ignore the AI jobs doomers, AI is good for employment, says PwC: AI Eye