Who are we – a software engineer's take
I’m a software engineer and I tend to think like one. I see patterns, I view large real-world problems as complex software systems, and I make sense of things using the tools I work with every day.
There’s one problem that keeps bothering me though: consciousness. Despite all we have learned, it is still unclear why our experience feels continuous. Why do I go to sleep, wake up the next day, and it is still me?
When I started looking into current research, I was surprised by how much progress there actually is. At some point, a thought occurred to me: maybe looking at consciousness from a software-engineering perspective isn’t that unreasonable after all.
This post is an attempt to think about consciousness using the tools I’m most familiar with: software engineering metaphors, system behavior, and a bit of scientific understanding.
Before getting into my own ideas, here is a brief overview of two recurring ideas in modern discussions about consciousness. Both will matter later:
Quantum Consciousness: One popular proposal is that quantum effects are the basis of consciousness. The theory, however, has major issues: for example, quantum states decohere within very short times in warm, wet environments like the brain. The idea that biological systems exploit quantum phenomena is not far-fetched in itself: photosynthesis uses quantum effects for efficiency, and there are other similar examples. But nothing we currently know in biology comes close to suggesting that consciousness itself is a quantum process.
Chaos and criticality: More recent scientific work has found a strong correlation between consciousness and the dynamical regime of recurrent loops in the brain. These loops are not only structural; they also perform computations. They can sit in a stable, predictable state, or drift toward chaos: not fully chaotic, but highly sensitive to changes and exhibiting much richer behavior. This region is referred to as “critical”. Studies show that when people are conscious, the brain operates in this critical regime. Sleep and anesthesia push the brain out of this regime, which lines up well with when people are and aren’t conscious.
Building on these observations, various theories try to explain what consciousness actually is. I want to give you my take, based on four core ideas:
- Observer: the persistent “me”
- Consciousness: an active connection between the Observer and the rest of the world
- Simulation: the process in our brain that is able to go critical
- Interface: a conceptual bridge that connects to the Observer
Let’s go over each one, starting with the most important: the Observer.
We are like a Singleton
Let’s call it (him, her) the Observer. That’s what I would consider “me”. For me, it is important to distinguish between consciousness and the Observer: consciousness is the state in which the Observer is somehow connected to the real world. Right now, I’m an Observer experiencing the world because I’m conscious. When I’m sleeping, I’m unconscious, but that doesn’t necessarily mean the Observer is gone. Some postulate that there is no Observer while I’m unconscious, but that raises a whole lot of questions. It doesn’t feel like I’m recreated every morning, so let’s not go with that.
With that, we already have the first piece: I’m one Observer. I’m somehow “tied” to my body. How strict that tie is and how it works I’ll ignore for now. What matters here is that each Observer is unique. I say unique to avoid any possible overlap. As software engineers, we know we get into trouble if we aren’t strict about uniqueness. I believe the same applies to Observers. Each Observer is, to me, something like a Singleton instance — exactly one, with a clear location at a specific time.
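The Singleton analogy can be made concrete. This is only the familiar design pattern, of course, not a model of anything biological: the point it illustrates is "exactly one instance, no duplicates, every lookup yields the same identity".

```python
class Observer:
    """Illustrative Singleton: there is exactly one instance,
    created on first access and returned ever after."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

# Every "lookup" resolves to the same unique identity.
a = Observer()
b = Observer()
assert a is b
```

The guarantee the pattern gives (identity, not mere equality) is exactly the strictness about uniqueness the paragraph above is asking for.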
The Interface
The next step is that I know things like:
- I can see
- I can feel
- I think
- I interact
Ignoring many nuances, there needs to be an Interface that connects sensory inputs to the Observer. Outputs from the Observer via this Interface are more controversial and touch on the free-will debate. For now, I’ll simply assume that the Observer has some way of influencing things through this Interface.
We have no clue how signals in the brain turn into something specific like the color I perceive — but it happens. And with that knowledge, I don’t have a problem believing that there might also be something (as yet) unexplained happening in the other direction, influencing what my body does. The important point is that some form of Interface seems unavoidable, even if we don’t know how it works.
If you think about this Interface in programming terms, you might imagine explicit “methods” for different types of information like vision or sound. I’m inclined to say that it’s probably not that structured, because of what I call the desire to integrate. But before getting there, we need to clarify one more piece.
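To make the contrast explicit, here is a toy sketch of the less-structured alternative: instead of a dedicated method per sense, one generic channel in each direction. All names here are hypothetical, chosen only to mirror the essay's terms.

```python
class ObserverInterface:
    """Sketch of a narrow bridge: one generic channel per direction,
    rather than an explicit method for vision, sound, and so on."""

    def __init__(self):
        self._inbox = []    # brain -> Observer: exposed percepts
        self._outbox = []   # Observer -> brain: influence, if any

    def present(self, percept):
        """The Simulation exposes an untyped percept of any modality."""
        self._inbox.append(percept)

    def experience(self):
        """What the Observer currently has access to."""
        return list(self._inbox)

    def intend(self, action):
        """The (controversial) output direction of the Interface."""
        self._outbox.append(action)
```

A bike, once integrated, would travel through the same `present` channel as a limb; nothing in the interface itself is specialized per body part.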
The Simulation
This is the key connection for me. You might have heard that the brain plays tricks on you: your perception of the world is delayed, and experiments show that decisions can be detected before we become aware of them. Combined with the idea that brain loops enter a critical regime when we’re conscious, this opens up an interesting possibility.
The brain runs a Simulation of the world to decide what to do. This Simulation is driven by inputs like vision and sound, runs in loops, and produces outputs. These loops operate in a critical regime when we’re awake, and one defining feature of that regime is extreme sensitivity to changes.
I postulate that this is exactly when the Interface to the Observer is most active. The Observer interacts with the Simulation in our head, not directly with the body. The Simulation then drives actions. We experience a fabricated view of the world that is usually quite accurate, often even better than raw sensory input, like filling in the blind spot or merging the input from both eyes.
The Observer can influence the Simulation, but only the parts the brain exposes through it. Subconscious processes like heart rate remain outside direct control.
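The loop described above can be caricatured in a few lines. This is a toy, not a claim about neural dynamics: the model carries its own prediction forward, folds in sensory input, and, near criticality (modeled here as a `sensitivity` gain), amplifies small nudges from the Observer.

```python
def simulation_step(state, sensory_input, observer_bias, sensitivity=1.0):
    """One iteration of a toy world model.

    All names and coefficients are illustrative assumptions.
    Near criticality (high sensitivity), a tiny observer bias
    has an outsized effect on the next state.
    """
    prediction = 0.9 * state             # carry the internal model forward
    correction = 0.1 * sensory_input     # gently fold in the senses
    nudge = sensitivity * observer_bias  # amplified near criticality
    return prediction + correction + nudge

state = 0.0
for reading in [1.0, 1.0, 1.0]:
    state = simulation_step(state, reading,
                            observer_bias=0.01, sensitivity=5.0)
```

Note that the Observer never touches `sensory_input` directly; it only biases the evolving state, which matches the claim that it interacts with the Simulation, not the body.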
If this Simulation model is true, an important question follows: what abilities come from the brain itself, and what comes from the Observer?
Capabilities and the desire to integrate
Under the Simulation hypothesis, capabilities are defined by the brain and body. Memories live in the brain, limb movement is handled autonomously by the nervous system, and many things work without conscious involvement. You don’t fall out of bed at night. Your brain can handle that without the Simulation or consciousness.
Many processes are hard-wired or trained to operate independently — breathing, walking — when was the last time you consciously planned each step? You can think about it, but usually you don’t. The Interface is likely not specialized for every possible body part. Instead, the brain integrates new capabilities by linking sensory input and output. I speculate that the Observer helps the brain train more efficiently by consciously interacting with the Simulation.
Across animals, we see that organisms integrate their limbs and body parts into a sense of self. That seems to be a natural tendency of the brain. This is also why riding a bike can feel natural: you can sense when something is off. With training, the brain integrates the bike into the Simulation, even though it isn’t part of your body. The only limitation is that the bike isn’t directly connected to the brain. It’s connected indirectly through other body parts. Given enough plasticity, I’m confident the brain could integrate almost anything.
Evolution could explain why
One thing often mentioned about criticality is that it’s energy-efficient. During evolution, the ability to navigate the environment with minimal cost was crucial. It’s plausible that early life forms began predicting simple patterns like enemy movement and that these predictions became more sophisticated over time, eventually turning into world simulations. Running such simulations near criticality may have been the most efficient approach.
I also suspect that the Observer gave organisms an edge by enabling better adaptation. Today’s AI systems struggle with unfamiliar situations, which might be what’s missing on the path to AGI.
That said, evolution often repurposes existing mechanisms, a process known as exaptation. Is it even realistic that it found a way to connect to an Observer?
What is the Observer?
Honestly, I don’t know. Research often talks about attractor states, but those vanish when the brain leaves the critical regime. Maybe we really are recreated each time consciousness reappears, and continuity is just an illusion created by the shape of the brain’s Simulation. I’m skeptical though. That idea clashes with my notion of a Singleton Observer. Could the same Observer be recreated elsewhere? Would cloning work? That feels too simple and doesn’t match how continuity is experienced.
Any convincing theory would need to explain why experience feels continuous across a lifetime.
For continuity to work, the Observer would have to be non-duplicable, globally unique, and identity-preserving.
That aligns surprisingly well with properties of quantum states, or perhaps some other physical primitive. But why wouldn’t everything then be an Observer? Maybe everything is. The difference could be that most things never build an Interface. A rock doesn’t. The brain, however, operates in a way where criticality makes the Interface sensitive enough.
Criticality can be observed in the brain using EEG and related methods. Power-law behavior and long-range correlations are strong indicators. There’s a great Veritasium video on power laws that gives good intuition for what researchers are looking for.
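As a toy version of what such analyses look for, here is a sketch that draws synthetic "avalanche sizes" from a power law and recovers the exponent with the standard continuous maximum-likelihood estimator. The data is synthetic; real studies fit event-size distributions from EEG or similar recordings.

```python
import math
import random

def powerlaw_mle(samples, xmin=1.0):
    """Continuous power-law exponent estimate:
    alpha = 1 + n / sum(ln(x / xmin)).
    A heavy tail with a stable exponent is one signature
    analyses of criticality look for."""
    xs = [x for x in samples if x >= xmin]
    return 1.0 + len(xs) / sum(math.log(x / xmin) for x in xs)

# Synthetic "avalanche sizes" from P(x) ~ x^(-2) via inverse transform:
# x = (1 - u)^(-1/(alpha - 1)) with alpha = 2.
random.seed(0)
data = [(1.0 - random.random()) ** -1.0 for _ in range(50000)]
alpha = powerlaw_mle(data)  # should land close to 2.0
```

On real recordings the hard part is not this fit but ruling out look-alike distributions (e.g. lognormals) and choosing `xmin`, which is where most of the methodological debate lives.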
Why I might be wrong
One hypothesis states that consciousness is the critical state itself. That has appealing properties: the Interface becomes unnecessary or a “no-op”, and evolution only needs to produce an efficient simulation. There’s also no clear evolutionary advantage to maintaining a continuous Observer if the brain already contains all relevant information, at least as far as we know.
Two things still bother me:
- To a software engineer, a state doesn’t act; it’s acted upon. It feels strange to say a state experiences anything. To be fair, saying that something undefined does the experiencing instead isn’t much more appealing.
- I strongly prefer continuity, because it aligns with lived experience.
Let’s see what the future brings. I’m already surprised by how much progress has been made.