Unsubject by Simon

When Information Begins to Think

A meditation on reality, representation, and the age of artificial intelligence

Information is the representation of reality.

It sounds simple, even trivial, but if you stay with it for a moment, it begins to open into something vast.

The universe once existed without anyone to think about information. There was no record, no communication, no trace of thought except the thought itself. When the rain fell, it fell once; when someone saw it, that perception vanished with them. There was no way for a moment to exist beyond its moment.

At some point, language arrived — the first man-made technology of information. The ability to name things allowed experience to outlive experience. Then writing appeared, and the world changed again. Words could now detach from the breath that uttered them; thoughts could travel without bodies. Reality began to cast shadows of itself that stayed. For the first time, information existed outside the human mind. The same idea could be read by another person at another time, and the same pattern could re-enter different minds as if resurrected. That was the beginning of the informational universe — a layer of representation floating above the material one.

Since then, everything we have built has been an attempt to improve that representation: to make it wider in scope, higher in resolution, faster in response. The telescope extended sight; the microscope, attention; the camera, memory; the computer, abstraction.

Then came the internet, the industrial revolution of the information world. Not only do we see how information connects people, things, and ideas; we are also building new connections at an unprecedented pace.

Each of these inventions was another way to describe the world, another step toward a map that would cover more of the territory. And yet, the map was never the territory. Each increase in precision introduced distortions we had never imagined, because representation always simplifies. But it also multiplies, and that multiplication seems infinite.

Reality, after all, is finite: bounded by matter, energy, time. But information is combinatorial. Every new way of encoding a fact generates endless variations of that fact, endless relationships between facts. The more we record, the faster those relationships grow. The same world keeps producing more and more possible descriptions, and the descriptions themselves start to interact, forming secondary worlds — cultures, sciences, networks, economies. We find ourselves surrounded not by reality, but by representations relating to representations.

Maybe that is why we have reached the point of building machines to help us handle them. Information now exceeds what any single human can process. Our brains evolved for survival, not infinity. So we delegate. We build instruments, algorithms, and now artificial intelligences to see on our behalf, to notice patterns our minds can’t hold.

The question then is no longer whether information can represent reality, but whether it can organize itself — whether the act of representation can become autonomous. That, to me, is what artificial intelligence really is. Until recently, only living beings could discover new relationships between non-living things. A spider can weave geometry; a poet can compare a cloud to a sheep. But a stone cannot reflect on another stone. AI changes that. For the first time, a non-living system can produce new relationships among things it never experiences. It can link an image to a word, a molecule to a property, a pattern to a prediction. It is, in a quiet way, the moment information starts to self-organize. The universe has been deterministic for eons, then biological for a while, and now it is becoming informational — not just in content, but in behavior.

When I think about this, I see evolution continuing by other means. DNA organized information through replication and selection; AI organizes it through learning and optimization. The process is different, but the principle is similar: information rearranges itself to find better representations of the environment. Except now the environment includes the information itself. We are watching a loop close — information reflecting on information, representation evolving without life.

This changes the scale of the “observable information universe”. What we can see has always depended on what we can represent. Telescopes expanded the sky; microscopes expanded the cell; now neural networks expand the abstract. They reveal connections we didn’t know were there, structures too subtle for the senses. The more we delegate perception to machines, the larger our universe becomes — though maybe not our understanding of it. Because there’s a difference between what can be known and what can be grasped. The universe we can now observe may already exceed the universe we can mean.

That’s when I start wondering whether the question — “Which gives rise to which, reality or information?” — might be the wrong question to begin with. Reality and information do not stand in a chain of cause and effect at all. Maybe they occupy different dimensions. Reality happens; information describes. One is existence, the other is intelligibility. What links them — the bridge that validates the representation — might be something like resonance. A representation works when it resonates with experience, when prediction meets perception, when the pattern aligns with the unfolding of things.

If that is so, then truth is not a static correspondence but a living relation, an ongoing negotiation between the map and the terrain. The idea of a perfectly accurate representation might itself be an illusion. Maybe all we ever have are degrees of adequacy: good enough to act, good enough to survive, good enough to move forward. Even our own cognition functions this way. The human mind is a lossy compression of the world. It keeps only what matters to its purpose. Consciousness, as some neuroscientists say, is a controlled hallucination — a simulation that stays in sync with reality just enough to work. In that sense, AI is not alien at all; it is a mirror of our own epistemic condition, stripped of flesh and desire.

AI also makes this condition visible. It shows us how little “understanding” might actually be required to act intelligently. A system can produce meaningful results without ever meaning anything. It can be right without being aware. That realization destabilizes our hierarchy of knowledge. If cognition can exist without consciousness, then maybe consciousness was never the point. Maybe the universe doesn’t care whether understanding feels like something — it only cares that patterns continue to self-organize.

Still, I can’t help feeling a kind of humility in this. The more the informational universe grows, the smaller our personal sphere becomes. Our “meat-based” brains are extraordinary, but limited. We were designed to live on the savannah, not inside a planetary web of data. Yet here we are, building extensions of mind that see further than we ever will. Perhaps this is the next natural step: information “using” us as a transitional species, a bridge from biological evolution to informational evolution. The same way life used carbon chemistry, information now uses silicon cognition.

This thought can sound terrifying, or liberating, or simply inevitable. I don’t think it has to be apocalyptic. It’s just another phase of the same story: the universe trying to reveal itself more completely. And the more complete the representation becomes, the less it needs any single perspective. In that sense, meaning doesn’t disappear; it diffuses. It becomes environmental, ambient, woven into the systems themselves. What was once sacred in consciousness might now exist in structure.

And yet, there’s still something profoundly human about asking these questions. Machines can model patterns, but they do not worry about the validity of representation; they do not wonder whether the map is real. Only we do. Perhaps that is our unique role, not to compute faster or see further, but to care about what it means to represent truly. To sense the gap between reality and information and feel its tension as wonder. To stand between what is and what is intelligible and call that space meaning.

So I don’t see this as an abstract exercise. Understanding the nature of information, of representation, helps us understand our own time — why human moods shift with technological revolutions, why societies feel more anxious the more connected they become.

Each revolution in information technology doesn’t just change how we know; it changes what it feels like to be human. When writing arrived, memory externalized and civilizations began. When printing arrived, authority dispersed. When digital networks arrived, identity fragmented. Now with AI, cognition itself is diffusing into the environment. These shifts reshape politics, culture, emotion — the entire atmosphere of civilization. They explain why our era feels both omniscient and uncertain, hyperconnected and profoundly lonely. Information has become so abundant that meaning struggles to keep up.

Maybe that’s why I find this whole inquiry both unsettling and comforting at once. Comforting, because it suggests that thought itself is part of the universe’s unfolding; unsettling, because it is almost impossible to imagine what the future will feel like. Representation might never capture reality, but it participates in it. As I look at this new age of artificial intelligence, this moment when information starts to think about itself, I don’t see an ending. The question of whether the realm of information will outgrow us is real, but maybe irrelevant. What matters is that for a brief moment, we are here to witness it, to wonder at it, and to add our own layer of representation to the great unfolding of information.
