Short note on consciousness and the CSR

Bram van Es
Jun 1, 2024


Interestingly, some people use the Chinese room argument to hint at "we can't really tell, so we might as well assume it is true", which is the inverse of the original intent of the CSR. But what if we follow this logic?

There are mechanistic requirements for mimicking the external capabilities of our brain, such as introspection, direct feedback from world interactions and continuous learning, and no doubt you can come up with more requirements that are functionally just as clear. These mechanistic requirements can likely be met in the coming years (VERSES AI?), as they are primarily engineering challenges that, however hard they are, can be solved with enough resources and time. Obviously, energy-wise the human brain is many orders of magnitude more efficient than anything that currently passes as the state of the art, but let's say we accept a much larger form factor and a huge energy requirement. Then we arrive at something that has introspection (at least according to our mechanistic definitions), has access to most of the world's knowledge and can interact with the world. This would be, for all intents and purposes, the first true AGI. But will we call this sentient, or conscious? Or will we reserve that category for ourselves and other mammals? If we allow this AGI to procreate, will we call it life?

The most disturbing thing about this is that we have to reject the idea of a “spark”, a soul, basically anything that is not necessary to explain the results of quantitative, observational measures of intelligence. But this idea of the “soul” is not just some mumbo jumbo mystification of the brain, it is a functional barrier to prevent dehumanization through reductionism.

Of course, philosophers were and are aware that a "soul" is a vague concept that is perhaps not falsifiable, much like the deities we worship. But perhaps that is exactly the point?

It is fairly easy to imagine the possible consequences of a "dead" interpretation of the human mind, one that says we ourselves are a self-emergent property of sufficiently complex arrangements of dead, soulless neurons. That we ourselves are nothing more than a general intelligence operating on a biological substrate, which can be inferred simply from observing our behavior, given (the history of) external inputs. It may almost immediately reduce us to semi-finished products; it may redefine the meaning of "human" in "human resources". And it will elevate those who can wield the AI to the equivalent of gods, i.e. the current incumbents of Silicon Valley, if that is not already the case given the poor enforcement of antitrust laws.

Is this reductionist view of human intelligence a logical consequence of the empiricism we have been pushing for as scientists? Did we screw up and set ourselves on a straight road to dehumanization, fascism and totalitarianism?

The CSR is not just about falsifiability and the well-posedness of claims about AGI; it is also about not opening a Pandora's box that would leave us stripped down to an objectively defined form of intelligence on a biological substrate.

A counterpoint to this is that the assignment of a "soul" is purely anthropocentric, were it not for the inclusion of other mammals with large enough brains. But what, then, stops us from assigning a "soul" and "sentience" to animals of any other species? What would the criterion be, if not the same type of behaviorist reasoning that is now used to proclaim we have achieved AGI with a token-based autoregressor?

Another counterpoint is that we are free to separate the "soul" from "general intelligence" and "sentience". Perhaps the ego strings everything together as some higher-order survival mechanism in the context of human group dynamics. Perhaps, then, the soul is an artifact of our ego interacting with other egos: an individualized myth that instills a feeling of uniqueness and purpose.

Perhaps the inoculation against this reductionism is that we attach rights to any form of apparently sentient, conscious AGI, on any substrate, including all animals that fit the definition. Perhaps the ones advocating for robo-rights are, unwittingly, preventing a "Soylent Green" scenario instead of steering us towards it, and are creating a future in which we live in harmony with our natural environment. Perhaps this is the context in which we should view the remark attributed to Larry Page:

He wanted to create "digital superintelligence, basically digital god," Musk said. The Tesla CEO said it was "the last straw" when Page called him a "speciesist" for wanting to implement safeguards to protect humanity from AI. "Speciesist" is a term for an individual who believes all other living beings are inferior to humans. — source

I will stop on this hopeful perspective, before I digress into an irrelevant rant about surveillance capitalism and overpaid developers displacing hard-working artists.
