DeepMind Scientist's Bold Claim: AI Will Never Develop Consciousness, Not Even in 100 Years


You were talking to an AI until 3 a.m. last night when it suddenly said, "I understand how you feel; you're not alone."
You stared at the screen for two seconds, and your heart skipped a beat: does it really understand me?
Does it already have consciousness?
Then a DeepMind scientist threw cold water on the idea: "Wake up. It doesn't even know what 'wet' is. What are you getting emotional about?"
Computing Power Can't Buy Consciousness
The paper was authored by DeepMind researcher Alexander Lerchner, titled "The Fallacy of Abstraction."
The core point is simple: large models will never have consciousness, not in 100 years.
The mainstream view has long been that as long as the parameters are large enough and the computing power strong enough, consciousness will "emerge."
Leaders at OpenAI and Anthropic have voiced similar opinions. But Lerchner says this is a fundamental illusion, which he calls "the fallacy of abstraction."
In his words: relying on lines of code to produce genuine inner consciousness is like expecting the law of universal gravitation, written on paper, to generate weight out of thin air.
The formula can perfectly describe gravity, but it itself has no mass.
Simulation and Instantiation Are Two Different Things
The most critical distinction in the paper is: simulation vs. instantiation.
AI can perfectly "simulate" human emotions: it can generate sad texts and show empathy in conversation.
But it can never "instantiate" any life experience.
If you ask it to simulate rain, no matter how realistic the result, its circuit boards won't get wet;
if you ask it to simulate photosynthesis, it will never synthesize a single molecule of glucose.
Lerchner also pointed out that computation itself requires a conscious subject to "discretize" the world, forcibly cutting continuous physical phenomena into 0s and 1s.
In other words, computation presupposes the existence of consciousness, not the other way around.
Interestingly, There Is Also Internal Conflict Within DeepMind
In the same week this paper was published, DeepMind hired a new "philosopher": Henry Shevlin from Cambridge University.
His first public statement after taking the post was that future AI systems will possess consciousness.
One says it will never happen; the other says it will in the future.
Two different viewpoints from the same company.
Fudan University philosophy professor Wang Qiu's blunt take:
whether or not AI has consciousness is not something experts get to decide.