

Are they, or are they just simulating a model of a model?
So you’re saying there’s a magical other plane, that the material objects in this world are just a model of what exists there, and that the objects on this plane don’t actually determine behaviour, only the ones on that plane do?
What evidence do you have to support that? What evidence do you have that consciousness exists on that plane and isn’t just a result of the behaviour of neurons? Why does consciousness change when you get a brain injury and damage those neurons?
And it doesn’t have to be magical for it to be unreachable for us (read Roger Penrose).
Roger Penrose, the guy who wrote books desperately claiming that free will must exist and spent his time searching for any way it could before arriving at a widely discredited theory of quantum gravity being the basis for consciousness?
and what we have today is just inert code ready to work on command, not some e-mind just living in the cloud
So? If we could put human brains in suspended animation, and just boot them up on command to execute tasks for us, does that mean that they’re not intelligent?
Come on, man, this is not debatable.
It obviously and evidently is debatable, since we are debating it, and saying “it’s not debatable” isn’t an argument, it’s a thought-terminating phrase.



Lol, “free will exists because I think it exists” is not an argument.
Computers were long limited to being little more than very complicated calculators, because they had no good way of solving fuzzy pattern-matching problems, like ingesting arbitrary data and automatically learning and extracting patterns from it. This famous xkcd points that out: https://xkcd.com/1425/
The entire recent surge in AI is driven by the fact that AI algorithms loosely modelled on our neurons do exactly that.
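To make the “learning patterns instead of being programmed with rules” point concrete, here is a minimal, purely illustrative sketch: a single artificial neuron (a classic perceptron) that is never told the rule for logical AND, only shown labelled examples, and learns the rule itself by repeatedly nudging its weights. All names and parameters here are my own, not from any particular library.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights and a bias from (inputs, label) pairs via the
    classic perceptron update rule: nudge weights toward the error."""
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # the neuron "fires" (outputs 1) if the weighted sum crosses zero
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The four AND examples; note the AND rule itself appears nowhere in the code.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # → [0, 0, 0, 1]
```

The point of the sketch is the comic’s point in miniature: the behaviour comes from adjusting numeric weights against examples, not from a programmer writing down the rule. Modern networks are this same idea scaled up to billions of weights and far messier data.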
LLMs contain some understanding of the world, or they wouldn’t be able to do what they do, but yes, I would agree. That doesn’t mean we won’t or can’t get there, though. Right now, many leading-edge AI researchers are specifically trying to build world models, as opposed to LLMs, that do have an understanding of the world around them.
No, this is a reductive description of how even LLMs work. They are not just copying and pasting. They are truly combining and synthesizing information in new and transformative ways, much as humans do. Yes, we can regurgitate some of a book we read, but most of what we get from it is an impression, general knowledge that we then combine with other knowledge, just like an LLM does.
Language is literally the basis for almost all of our knowledge; it’s wild to flatly assert that a multi-billion-parameter collection of simulated neurons trained on language could not possibly have any intelligence or understanding of the world.