First published: January 7, 2023
Last revised: April 16, 2025

A simulation consists in encoding some properties of the simulated system into another system that carries out the simulation, and putting rules in place to replicate a subset of the dynamics. Note, though, that encodings lack the ontological causality of the real systems being simulated: in a simulation, the mechanism that governs the transitions between the encoded states is an algorithm, not the ontological nature of the original system. In other words, in order to carry out a simulation, all that is needed is coordination, not causation.
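To make this concrete, here is a minimal sketch of my own (illustrative only; the falling ball and its numbers are arbitrary choices) of what "encoding plus rules" means. The real ball falls because gravity acts on it; in the simulation, the variables change only because an update rule says so:

```python
# A minimal sketch of the encoding/update idea: a falling ball.
# Two floats are the encoding of the system's state; an explicit
# update rule (the algorithm) coordinates the state transitions.
# No force ever acts on these variables.

G = 9.81   # gravitational acceleration, m/s^2
DT = 0.01  # time step, s

def step(height: float, velocity: float) -> tuple[float, float]:
    """Advance the encoded state by one time step (Euler integration)."""
    return height + velocity * DT, velocity - G * DT

# Run the simulation: the transitions are dictated by `step`,
# not by any gravitational pull on the encoded state.
h, v = 100.0, 0.0
while h > 0:
    h, v = step(h, v)
print(f"ground reached at v = {v:.2f} m/s")
```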

A common argument for distinguishing a simulation from the thing being simulated is some variation of "you can simulate a bottle of water as well as you like, but you still can't drink it." This is true, but remember that while many things are different from their description, others are identical (modulo some mapping) to their description, and so their simulation is as good as the real thing. For example, a text file living on my USB stick can be copied, and the "original" file is no more special than the copy. Any copy is the same, even if it is encoded in a different way and stored on a different platform. Why? Because in this case the "thing" is the information itself, not its physical instantiation. Is consciousness identical to its description? If it is, then it's simulable and it doesn't matter which substrate this happens on, but then one runs into the difficulties mentioned in the previous post.
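The point about copies can be shown in a few lines of Python (a sketch of my own; the string and the choice of encodings are arbitrary): two wildly different byte layouts carry one and the same piece of information.

```python
# The "thing" is the information, not its physical carrier: the same
# text stored under different byte encodings still decodes to one
# and the same string.

import hashlib

text = "the original file is no more special than the copy"

utf8_bytes  = text.encode("utf-8")   # one physical encoding
utf32_bytes = text.encode("utf-32")  # a very different byte layout

# The carriers differ...
assert utf8_bytes != utf32_bytes

# ...but the decoded information is identical.
assert utf32_bytes.decode("utf-32") == utf8_bytes.decode("utf-8") == text

# Any faithful copy yields the same content hash of the information.
print(hashlib.sha256(text.encode("utf-8")).hexdigest())
```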

Maintaining that a simulation of consciousness would itself be conscious requires consciousness to be something "epistemic", and implies that we could instantiate a consciousness by completely describing it. If that were the case, then Mary the scientist (of Frank Jackson's knowledge argument) would learn nothing new upon seeing red for the first time.

V. S. Ramachandran, in his book Phantoms in the Brain, explains this in a way that stuck with me. Paraphrasing, he says the problem of qualia could be a problem of language: if it's impossible to communicate my subjective experience through the bottleneck of natural language, what if we could connect our brains with some 'cable' that allowed for a different kind of communication?

But then, if consciousness were just information and its processing, it should not matter how the information is encoded or transmitted, only what the information is. Natural language would therefore be a sufficient channel for communicating qualia, and the 'cable' would add nothing.

Most importantly (and to me this is the nail in the coffin of this hypothesis): if information processing were responsible for instantiating consciousness, what would be the added value of consciousness? If the brain already does everything it needs to do by processing information, one could argue that evolution would not have selected for consciousness on top of that.