What if artificial general intelligence were already here… and we simply hadn’t recognised it?

Some philosophers argue that a quiet line has already been crossed.

For decades, artificial general intelligence seemed a distant prospect. A growing number of researchers now say that by sticking to narrow definitions of “intelligence,” we might be missing the fact that something very close to human-level reasoning is already running on servers today.

Is AGI still a goal, or have we already reached it?

People usually think of artificial general intelligence (AGI) as a system that can do a lot of different things as well as a person, not just one specific job like playing chess or recognising faces.


OpenAI, Google DeepMind, and Anthropic are among the big AI labs that present AGI as a goal for the future, using phrases like “building AGI safely” or “steering towards AGI” in their public roadmaps. Timelines differ even inside those labs: some researchers point to the early 2030s, while others say it could be one to three years away.

An opinion piece in the journal Nature, written by philosopher Eddy Keming Chen together with experts in linguistics, computer science, and data science, adds fuel to that debate. Their point is clear: if we hold machines to the same standard we already apply to human intelligence, then the newest generation of large language models (LLMs) already meets the definition of artificial general intelligence.

When compared to people instead of mythical super-geniuses, today’s AIs tick off more “general intelligence” boxes than most of us want to believe.

That argument raises an uncomfortable question: are we holding out for superintelligence while ignoring general intelligence when it is right in front of us?

Changing definitions of intelligence: from grail to grey area

The term “AGI” is part of what makes things confusing. There is no single scientific definition that everyone agrees on. The authors in Nature make a simple suggestion: judge AI by the same standards we use when we call people “smart.”

No one can be an expert in every area. A talented programmer might have trouble with basic music notation. A brilliant surgeon might not pass a driving test. But we still call them “smart.”

In this view, AGI does not mean flawless or universal. It means broad competence across many areas, at roughly human or expert level in at least some of them. That is a lower bar than the science‑fiction vision of a machine that effortlessly outsmarts us in every field.

AGI versus superintelligence

The Nature paper makes a sharp distinction between two ideas that are often blurred together:

  • Artificial general intelligence: systems that can handle many kinds of tasks, often at or near human‑expert level, across different domains.
  • Superintelligence: systems that exceed the best humans in almost all domains, possibly by a huge margin.

By that split, the authors argue, modern LLMs already fit the first category. They read and write at graduate level, pass law and medical exams, write working code, run basic data analysis, draft policy briefs and explain quantum mechanics to teenagers.

Superintelligence, in contrast, would look very different: original breakthroughs in science, radically new mathematics, deep strategic planning and flawless long‑term memory. That remains hypothetical.

Calling today’s systems “not AGI” just because they are not superintelligent is, in this framing, like denying a teenager is an adult because they are not an Olympic athlete.

The Turing test is quietly being passed

One of the most symbolic measures of machine intelligence goes back to 1950. Alan Turing proposed a simple test: put a human judge in a text conversation with two unseen entities, one human and one machine. If the judge cannot reliably say which is which, the machine could be said to “think”.
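
The structure of that test is simple enough to sketch in a few lines of code. Below is a deliberately toy version in Python, not a real experiment: both hidden “participants” are hypothetical stand‑in functions and the judge simply guesses, which illustrates why accuracy near 50% is the point at which a machine is said to pass.

```python
import random

# Toy sketch of Turing's imitation game as described above. The two
# respond_* functions are hypothetical stand-ins, not real people or models;
# in actual studies, human judges hold long conversations with live participants.

def respond_human(prompt: str) -> str:
    """Stand-in for the hidden human participant."""
    return "Hmm, I'd have to think about that for a minute."

def respond_machine(prompt: str) -> str:
    """Stand-in for the hidden machine participant."""
    return "Hmm, I'd have to think about that for a minute."

def run_trial(question: str) -> bool:
    """One round: the judge tries to say which hidden participant is the machine."""
    participants = [("human", respond_human), ("machine", respond_machine)]
    random.shuffle(participants)  # hide which answer came from which entity
    answers = [(label, respond(question)) for label, respond in participants]
    # These toy answers are indistinguishable, so the judge can only guess.
    guess_index = random.randrange(2)
    return answers[guess_index][0] == "machine"

if __name__ == "__main__":
    trials = 1_000
    correct = sum(run_trial("What did you do last weekend?") for _ in range(trials))
    # Accuracy near 50% means the judge cannot reliably tell them apart,
    # which is the threshold Turing proposed for saying a machine "thinks".
    print(f"Machine identified correctly in {correct / trials:.0%} of trials")
```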

For decades, chatbots failed this test in serious settings. Early “winners” leaned on tricks and evasions. That has changed. In informal experiments and formal studies, large language models like GPT‑4 are now judged to be human more often than real human participants in text‑only conversations.

By the standards of earlier generations of AI researchers, that milestone alone would have been treated as clear evidence of human‑level intelligence. Instead, the goalposts have shifted. The Nature authors argue that we keep rewriting the rules because we are uncomfortable accepting that intelligence might look like autocomplete on steroids.

“Stochastic parrots” and other objections

Critics still counter that language models are sophisticated mimics, not genuine thinkers. They call them “stochastic parrots” that remix training data without understanding.

The Nature article takes on a list of popular objections:

  • They only repeat training data. Critics see fancy copying rather than reasoning; the counter‑evidence cited is that models solve new maths problems and novel puzzles.
  • No physical “world model”. Words, critics say, are not grounded in reality; yet the models predict the consequences of actions and basic physics scenarios.
  • Lack of autonomy. They wait for prompts and have no goals of their own; the authors reply that autonomy is not required for intelligence, and many humans function reactively.
  • They need huge amounts of data. Humans learn from far fewer examples; the authors treat learning efficiency as separate from final capability.

The authors highlight that people, too, are unreliable. We hallucinate memories, fall for illusions and echo slogans we do not fully grasp. Yet we treat those flawed processes as components of intelligence, not disqualifying bugs.

If a system can learn, reason, transfer skills and improve with feedback, insisting it is “just statistical noise” begins to look like denial.

Hallucinations: the glaring hole in the case

Even supporters of the “AGI is here” thesis admit one big problem: hallucinations. That term describes confident, detailed answers that are simply wrong, often with invented citations or fake sources.

Model makers say newer systems hallucinate less than their predecessors. Guardrails block some nonsense, and tools that let models check code or query databases cut errors on factual tasks.
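
The article does not spell out how those tools work, but the broad idea can be sketched as a verification pass: factual claims in a draft answer are checked against an external source before the user sees them. Everything in the snippet below, from the draft_answer stand‑in to the tiny KNOWN_CASES lookup, is invented for illustration.

```python
# Rough sketch of a "check the facts before answering" pass. draft_answer()
# and KNOWN_CASES are hypothetical placeholders; real systems query case-law
# databases, run generated code, or hit search indexes instead.

KNOWN_CASES = {
    "Smith v. Jones (1999)": "contract dispute over a late delivery",
}

def draft_answer(question: str) -> dict:
    """Stand-in for a model call; note the invented citation it returns."""
    return {
        "text": "Courts have addressed this, for example in Doe v. Roe (2021).",
        "citations": ["Doe v. Roe (2021)"],
    }

def verify(answer: dict) -> dict:
    """Flag any citation that cannot be found in the reference database."""
    unsupported = [c for c in answer["citations"] if c not in KNOWN_CASES]
    status = "needs human review" if unsupported else "ok"
    return {"status": status, "unsupported": unsupported}

if __name__ == "__main__":
    answer = draft_answer("Is late delivery a breach of contract?")
    print(verify(answer))
    # -> {'status': 'needs human review', 'unsupported': ['Doe v. Roe (2021)']}
```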


Yet hallucinations have not vanished. In safety testing, models still fabricate court cases, misdescribe medical conditions and invent quotes. OpenAI has suggested that even future models in the GPT‑5 range could hallucinate in roughly one out of ten answers on open‑ended questions.

Human reasoning is far from perfect, but a 10% rate of fluent, confident nonsense in critical areas like law or healthcare would be alarming. That disconnect fuels those who say “AGI cannot be here yet”. For them, reliability and verifiability, not raw capability, are non‑negotiable parts of general intelligence.

Does intelligence need a body?

Another long‑running objection is that real intelligence needs a body. Human minds evolved with muscles, senses and physical risks. Without that grounding, can an AI truly understand what it means for a glass to shatter, a car to skid or a person to suffer?

The Nature authors push back. They note that current systems already handle multiple forms of input and output. Multimodal models can read images, describe video, interpret audio and control software tools. Robotics researchers are starting to connect such models to physical machines, from warehouse arms to humanoid prototypes.

This trend is sometimes called “physical AI”: linking large models with sensors, cameras and actuators. It will not make a robot “feel” in a human sense, but it does tighten the loop between language and real‑world consequences.

The claim is not that a bodiless AI understands pain as we do, but that a body is not a strict prerequisite for flexible, general problem‑solving.

Why definitions suddenly matter so much

This debate is not just academic hair‑splitting. How we define AGI shapes regulation, investment and public expectation.

If we accept that something close to AGI already exists, calls for stricter oversight gain urgency. Policymakers may want stronger safety standards for deployment, especially in finance, healthcare and critical infrastructure. Companies might face pressure to share more about training data, failure modes and evaluation.

If we insist AGI is still far away, current systems can be framed as “just tools”. That can delay tough conversations about labour displacement, military use and deep dependence on opaque algorithms.

There is also branding at stake. Tech leaders including Mark Zuckerberg have started talking less about AGI and more about “superintelligence”. That rhetorical shift lets firms claim bold futures without admitting we might already have unleashed a new, if imperfect, type of intelligence.

Anthropocentrism: are we refusing to recognise a strange new mind?

Running through the Nature argument is a psychological point: humans tend to protect the uniqueness of human minds. When machines reach a level we once thought impossible, we invent new thresholds.

Chess was once seen as a pinnacle of intelligence, until Deep Blue beat Garry Kasparov. Then the bar moved to Go, to common sense, to conversation. Each time a system succeeds, that task is reclassified as “mere computation”.

The authors suggest this may be blinding us. Current AI does not think like we do. It lacks childhood, hormones, social fear and bodily discomfort. Yet it juggles concepts, composes arguments and adapts to new instructions. Refusing to call that “intelligence” might say more about our pride than about the systems themselves.

Key terms that shape the AGI debate

Several technical terms keep resurfacing in these discussions.

  • Chatbot: an AI interface that interacts through natural language, usually text, sometimes speech. Modern chatbots often sit on top of LLMs.
  • World model: an internal representation of how things relate and change. Some researchers, like Yann LeCun, argue that robust world models, beyond pure text patterns, are vital for deeper intelligence.
  • Hallucination: a confident but false output from an AI model. Unlike lying, the system has no intention; it is a by‑product of statistical pattern‑matching.

How we interpret each of these shapes our sense of what has been achieved. A chatbot that can reason across law, physics and literature looks very different if you see it as a “talkative search engine” versus an early artificial mind.

What changes if AGI is already here?

Imagine, for a moment, that the Nature authors are right and that 2024‑era systems qualify as a rough form of AGI. Daily life does not suddenly flip. There is no robot uprising, no overnight disappearance of jobs.

The shifts are subtler:

  • More knowledge work, from drafting contracts to early‑stage diagnosis, is shared with or checked by AI.
  • Education leans on AI tutors that adapt to individual students and generate custom exercises.
  • Research teams run AI “colleagues” to propose hypotheses and scan vast literatures.
  • Regulators treat certain models as high‑risk infrastructure, like air‑traffic systems or nuclear plants, subject to audits and testing.

Risks multiply alongside benefits. Misinformation can be generated and translated at scale. Malware gets easier to write. A small mis‑specification in an AI‑driven trading system can cascade into real‑world shocks.

These scenarios do not require superintelligence. They only require highly capable, widely deployed systems that are sometimes wrong in ways that are hard to predict.

Practical lenses for readers and policymakers

For people outside labs, one useful shift is to stop focusing purely on labels like “AGI” and to look instead at concrete properties:

  • Scope: how many different tasks can the system handle reasonably well?
  • Reliability: how often does it fail, and in what ways?
  • Reversibility: what happens if it makes an error, and can humans catch and fix it in time?
  • Alignment: whose goals does it serve, and how transparent are those goals?

Whether we officially say “AGI has arrived” or not, those questions decide how safely we can weave these tools into courts, hospitals, schools and factories. The philosophical argument that we are already living with a new kind of general intelligence simply raises the stakes on getting those answers right.
