In Brief
Artificial general intelligence is expected to appear in 1.5 years due to a combination of big money, open frameworks, and turning LLMs into cognitive agents.
David Shapiro’s video analysis suggests that this window may be enough for AGI to emerge.
The turning point has been passed, and AGI will appear within 1.5 years through a combination of big money, open frameworks, and the conversion of LLMs into cognitive agents. A video analysis published yesterday by my colleague David Shapiro offers a stereoscopic view of what is happening, taking into account three factors, not all of them obvious.

Indeed, if we combine the potential influence of the three factors David names on AI development over the next 1.5 years, that time may well be enough for AGI to appear on the planet.
To avoid empty terminological disputes, let us clarify up front:
- There are dozens of definitions of “artificial general intelligence,” many of them quite different and often contradictory, which forces iterative (or even recursive) clarification of the concepts those definitions use.
- Therefore, it is better to leave terminological disputes to philosophers and simply apply the “duck criterion”: if an AI agent can look like a person in the eyes of people, perform any intellectual work the way people do, and act in situations that are new to it as people would act in its place, we will consider that AI to be artificial general intelligence (AGI).
- Accordingly, the phrase “AGI will appear in 1.5 years” means that there will be an AI satisfying the “duck criterion” above.
David Shapiro’s argument that one and a half years is enough to create AGI rests on three pillars.
1) Businesses believe that AI can really work wonders. Over the next 18 months, therefore, huge investments will pour into AI development to radically cut the price of “intelligent inference” for the end user (for example, on a smartphone), since training large models remains very expensive. David cites a telling example from a Morgan Stanley report: “We think GPT 5 is currently training on 25K GPUs — NVIDIA hardware worth $225M or so — and inference costs are likely much lower than some of the numbers we’ve seen.”
2) Frameworks for developing applications on top of language models, such as LangChain, not only let you access the model through an API but also:
– make the model data-aware: connect the language model to other data sources;
– turn the model into an agent that can interact with its environment (a minimal sketch follows after this list).
3) Systemic paradigms such as MM-REACT have already been developed that combine ChatGPT with a pool of experts to achieve multimodal reasoning and action on complex comprehension problems. Within such a paradigm it becomes possible to build cognitive action flows: responses to users are generated through a combination of ChatGPT reasoning and expert actions (see the second sketch below).
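
To make factor 2 concrete, here is a minimal sketch of the classic LangChain agent quickstart. It assumes the older 0.0.x API (class names have since moved between packages) and an environment with OPENAI_API_KEY and SERPAPI_API_KEY set; the query itself is only an illustration.

```python
# A minimal sketch, assuming LangChain 0.0.x and API keys in the
# environment; newer LangChain releases have reorganized these imports.
from langchain.llms import OpenAI
from langchain.agents import AgentType, initialize_agent, load_tools

llm = OpenAI(temperature=0)

# "Data-aware" / "interacts with the environment": give the model tools
# it can call, here a web-search tool and a calculator tool.
tools = load_tools(["serpapi", "llm-math"], llm=llm)

# Turn the model into an agent that decides when to call which tool.
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)

agent.run("What was the high temperature in Berlin yesterday, in Celsius, squared?")
```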
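And here is a schematic of the cognitive-action-flow idea behind factor 3. This is not the actual MM-REACT code: `call_llm`, the `EXPERTS` table, and the `ACTION:` line format are hypothetical stand-ins. The point is the loop in which the language model reasons, requests an expert when needed, and then reasons again over the expert’s observation.

```python
# Illustrative only: a controller loop in the spirit of MM-REACT, with
# stub experts standing in for real vision/OCR models.
EXPERTS = {
    "image_captioner": lambda arg: f"caption for {arg}",  # stub expert
    "ocr": lambda arg: f"text found in {arg}",            # stub expert
}

def parse_action(reply: str):
    """Return (expert_name, argument) if the reply requests an expert,
    e.g. a line like 'ACTION: ocr(receipt.jpg)'; otherwise None."""
    for line in reply.splitlines():
        if line.startswith("ACTION:"):
            name, _, arg = line[len("ACTION:"):].strip().partition("(")
            return name.strip(), arg.rstrip(")")
    return None

def cognitive_flow(call_llm, user_request: str, max_steps: int = 5) -> str:
    """Alternate LLM reasoning with expert actions until a final answer."""
    transcript = f"User: {user_request}"
    for _ in range(max_steps):
        reply = call_llm(transcript)      # ChatGPT-style reasoning step
        action = parse_action(reply)
        if action is None:                # no expert requested: final answer
            return reply
        name, arg = action
        observation = EXPERTS[name](arg)  # expert action
        transcript += f"\n{reply}\nObservation: {observation}"
    return reply                          # step budget exhausted
```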
If all three of these factors work (cheap intelligent inference, models turned into agents, and the generation of cognitive action flows), then in 18 months we will no longer be arguing about definitions of AGI, because in light of the competencies acquired by AI it simply won’t matter anymore.
Those competencies will be so human-like and so broad that finding a definition for this AGI will no longer be a problem.