
You've never seen a cellular automaton like this.

If you've encountered a cellular automaton before, you've certainly heard of Conway's Game of Life. You've probably also encountered Stephen Wolfram's work with elementary cellular automata, particularly Rule 110. Both have been proven to be Turing complete, or universal.

Rule 110, a one-dimensional cellular automaton whose complex patterns grow from simple rules:

Rule 110

Basically, a cellular automaton works by repeatedly applying the same set of simple rules to each cell in a grid, causing the state of each cell to possibly shift to a new state depending on the state of itself and its neighbor cells. Most commonly, the two possible states for each cell are dead or alive (or 0 and 1, or empty and filled, or unpopulated and populated).

Given the right ruleset and enough time, a cellular automaton will show us some interesting visual patterns, including fractals, and maybe even a universe. For example, Gosper's Glider Gun from Conway's Game of Life:

Gosper's Glider Gun

In this piece, I am informally introducing an unusual type of cellular automaton that, while also appearing to be universal, represents a method for growing a model of general intelligence from a minimal set of rules, a minimized set of priors, and no initial data. My intention is to provide enough detail so that, with a bit of ingenuity, you may implement it yourself.

For clarity, I'll refer to this unusual type of cellular automaton as the Hazelian type, because that's what I name everything. "CA", when you see it, means "cellular automaton". H-cell refers to a cell in a Hazelian CA.

I must emphasize that this Hazelian cellular automaton is truly unusual. While it shares many properties with your typical CA, it diverges in key areas. Here's an example.

As we know, once the initial configuration of a CA has been set up, there's no further intervention; you simply wait and watch it evolve. A Hazelian CA is also like this, but it operates one level of abstraction higher than usual: it depends on interactions with a mostly-hidden universe for its input and within its transition function. It is not a self-contained, closed-loop simulation like you might expect from a CA.

A simple, abstract model

While this is what a typical cellular automaton grid might look like, what we really care about is what each square represents.

Grid of cells

Each square represents a cell, obviously, but what rule set is the cell bound to? What states can it have? How are those states changed? And how does each cell relate to its neighbors? Finally, how is the grid configured?

As an overview, the basic design of each Hazelian cell (H-cell) looks like this:

Hazelian Cell with hidden universe

I'll keep the answers simple and abstract for now; don't worry, there's more coming.

Per-cell configuration

Configuration in Hazelian CA is unorthodox. Every cell defaults to an empty state, which is 0 or dead. While in a "dead" state, only a special external configuration signal can bootstrap it to life, sort of like planting a seed or fertilizing an egg. More on this later.

Cell configuration signal
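As a minimal sketch of this bootstrapping step (the class, the signal shape, and every name here are illustrative assumptions, not a prescribed implementation): a dead cell ignores everything except a configuration signal, which seeds it with an initial message and flips it to life.

```python
DEAD, ALIVE = 0, 1

class HCell:
    """Sketch of a Hazelian cell's configuration step.

    A cell defaults to the empty/dead state. While dead, only a special
    configuration signal can bootstrap it to life, like planting a seed.
    """

    def __init__(self):
        self.state = DEAD
        self.seed_message = None

    def receive(self, signal):
        # While dead, only a configuration signal has any effect.
        if self.state == DEAD and signal.get("type") == "configure":
            self.seed_message = signal["message"]
            self.state = ALIVE

cell = HCell()
cell.receive({"type": "noise"})  # ignored: the cell is still dead
cell.receive({"type": "configure", "message": "hello, universe"})
```

The key design choice in this sketch is that the dead state is inert rather than merely empty: no rule fires until the external seed arrives.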

Rule set, simplified

There's only one rule, and it's this:

Maximize the evidence of my existence.

This rule is encapsulated by a straightforward program loop in each cell. The program loop begins executing only after the cell has been configured, as described above. The following program steps are somewhat simplified, but there's enough here to enable an implementation.

When it's time to emit a signal, based on internal state:

  1. Emit signal N.
  2. Await response to signal N (to a limit).
  3. Capture response to signal N.
  4. Update internal state with the response to signal N.
  5. Increment signal identifier N to N + 1.

To be clear, each emitted signal must be uniquely imprinted so that the response to it can be reliably identified.

And to be even more clear, any capabilities implied by these rules, like emitting signals and maintaining an internal state, are available to the cell.
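The five numbered steps above can be sketched as a loop. Everything here beyond the steps themselves (the queue-based universe, the id-tagging scheme, the echo behavior) is an illustrative assumption, not part of any specified design:

```python
import queue
import threading

class SignalingCell:
    """Sketch of the one-rule program loop: emit, await, capture, update."""

    def __init__(self, universe_in, universe_out, timeout=1.0):
        self.n = 0                        # monotonically increasing signal id
        self.internal_state = []          # captured responses accumulate here
        self.universe_in = universe_in    # responses arrive on this queue
        self.universe_out = universe_out  # signals are emitted on this queue
        self.timeout = timeout

    def step(self):
        # 1. Emit signal N, uniquely imprinted with its identifier.
        self.universe_out.put({"id": self.n, "payload": "ping"})
        # 2-3. Await and capture the response to signal N (to a limit).
        try:
            response = self.universe_in.get(timeout=self.timeout)
        except queue.Empty:
            response = None
        # Only a response imprinted with the current id is reliable.
        if response is not None and response.get("id") == self.n:
            # 4. Update internal state with the response to signal N.
            self.internal_state.append(response)
        # 5. Increment the signal identifier N.
        self.n += 1

def echo_universe(outbox, inbox, rounds):
    # Toy universe: reliably echoes each signal back, id imprint intact.
    for _ in range(rounds):
        signal = outbox.get()
        inbox.put({"id": signal["id"], "payload": "echo"})

outbox, inbox = queue.Queue(), queue.Queue()
cell = SignalingCell(universe_in=inbox, universe_out=outbox)
t = threading.Thread(target=echo_universe, args=(outbox, inbox, 3))
t.start()
for _ in range(3):
    cell.step()
t.join()
```

Note how the unique imprint (the `id` field) is what lets the cell distinguish a genuine response to signal N from unrelated noise arriving on the same channel.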

If this arrangement reminds you of Karl Friston's free-energy principle, you're as sharp as a tack. More on this shortly.

Possible states

Commonly, a CA has the finite state set {0, 1}: dead or alive, but larger finite sets are also sometimes used.

In Hazelian CA, the set of {0/dead, 1/alive} is more like an abstract state set. While dead may become alive, and alive may become dead, the state of aliveness is derived from the internal statistical model that the cell depends on to stay alive.

Unbounded states

A cell's state transitions from alive to dead when "flat-line" occurs: evidence of existence has stalled for long enough that the cell falls victim to resource reclamation or garbage collection.
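A flat-line check could be sketched as follows; the `ttl` threshold and the injectable clock are hypothetical conveniences for illustration, not parameters from any specified design:

```python
import time

class MortalCell:
    """Sketch: a cell flat-lines when evidence of its existence stalls.

    `ttl_seconds` is a hypothetical reclamation threshold: if no response
    has been captured within it, the cell becomes eligible for garbage
    collection.
    """

    def __init__(self, ttl_seconds=3.0, clock=time.monotonic):
        self.clock = clock
        self.ttl = ttl_seconds
        self.last_evidence = self.clock()
        self.alive = True

    def capture_response(self):
        # Every captured response is fresh evidence of existence.
        self.last_evidence = self.clock()

    def check_flatline(self):
        # If evidence has stalled past the limit, the cell dies.
        if self.alive and self.clock() - self.last_evidence > self.ttl:
            self.alive = False  # eligible for resource reclamation
        return self.alive

# Usage with a fake clock, so the flat-line is deterministic.
now = [0.0]
cell = MortalCell(ttl_seconds=3.0, clock=lambda: now[0])
now[0] = 2.0
cell.check_flatline()   # still alive: evidence is recent enough
now[0] = 10.0
cell.check_flatline()   # flat-line: evidence stalled too long
```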

State transition

Usually, the state transition of a CA is synchronous, driven by a clock, so that the state of each cell is updated by the transition function simultaneously. A few types of CA are asynchronous, meaning that the transition function is executed on each cell independently, and only when needed. This is typically done when a cell represents a living system.

In Hazelian CA, the transition function is executed asynchronously. Furthermore, it is fair to say that each cell implements its own transition function and either decides when to execute it or executes it continuously.
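One way to picture this asynchrony, as a sketch (the per-cell periods and the shared log are illustrative assumptions): each cell runs its own transition loop as an independent task, with no global clock synchronizing the grid.

```python
import asyncio

async def cell_loop(name, period, log):
    # Each cell drives its own transition function on its own schedule;
    # no global tick updates every cell simultaneously.
    for step in range(3):
        await asyncio.sleep(period)
        log.append((name, step))  # stand-in for one transition execution

async def main():
    log = []
    # Two cells with different periods, stepping independently.
    await asyncio.gather(
        cell_loop("cell-a", 0.01, log),
        cell_loop("cell-b", 0.003, log),
    )
    return log

log = asyncio.run(main())
```

Because the two loops tick at different rates, their transitions interleave in the log rather than landing in synchronized generations.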

The impact of neighbors

A CA ruleset typically involves explicit knowledge of a cell's neighbors and includes them as a factor in the transition function.

In Hazelian CA, the existence of neighbors is hidden information. Each cell must discover for itself the existence of its neighbors and then negotiate any interactions. Due to the initial simplicity of each cell, we can expect discoveries of this type to take a long time.

Here's a hint:

Doubly hidden interactions between cells

A special type of cellular automaton

So far, we're deep into strange territory. Per-cell configuration. Unbounded state set. Hidden information. Barely discoverable neighbors. Per-cell transition functions.

Maybe this really isn't a cellular automaton, but I don't know what else to call it. Perhaps it's better termed a second-order CA, or the first derivative of a CA, or a virtual CA. We'll see how it evolves.

What I bet you're most mystified about is cell configuration and the nature of the rule set. After all, they both imply that there's an outside force, environment, or universe involved. And there is! Except this universe is almost completely hidden: the cell has no idea what it's really dealing with. At first, the cell doesn't know or care why it received its configuration signal, and after configuration it's as dumb as a stump.

Hazelian Cell with hidden universe

Each cell only knows to execute its program and faithfully follow its one rule. Thankfully, its program enables gradual learning about its hidden universe, but learning is possible only when a cell emits a signal that elicits a reliable response.

To be sure, the only learning that happens is sparse and constrained. The only thing that can be learned is which kinds of signals are likely to receive a response. That is enough to get started.

This brings us to Friston's free-energy principle and also to enactivism, a closely linked theory of mind.

According to the free-energy principle, anything that shows the characteristics of aliveness has an internal statistical model of the environment it inhabits, and remains alive as long as it is able to keep this model fresh.

That which is alive must maintain the boundary (called a Markov blanket) between its internal model and its external environment, allowing stimulus to cross inward over the boundary and expressing actions outward over it. When an organism combines efficient modeling of its environment with efficient actions that promote its survival (i.e., actions which maximize evidence of its own existence), it is more likely to stay alive.

Considering the Hazelian CA architecture in these terms, we have each cell taking actions (emitting signals), accepting stimulus (responses to those signals), and updating an internal model with that stimulus. Let's not forget the Markov blanket, which is the signaling mechanism.

If we agree with the premise of the free-energy principle, a cell which successfully signals in a way that elicits a captured response is showing the characteristics of aliveness. And, as long as the cell's signals continue to reliably elicit responses, it is also incrementally modelling its hidden universe (or environment), no matter what it is.

The main assumption of the Hazelian cell design is that the universe is capable of detecting and responding to a signal of one kind or another. However, this is a fairly safe assumption because even a rock knows how to push back, reflect light, and make a clonking sound when specially stimulated.

As you might have noticed, the only means for the development of a cell's internal model is through action, an idea that is encapsulated by a model of mind called enactivism.

According to enactivism, each action by an organism originates from its internal model and in turn impacts its universe in a way that alters future stimulus to the internal model. You'll find this pattern in the cell program. In this view, behavior and mind are inseparable; the mind both triggers actions and grows based on the feedback it receives from each action.

So, it seems, there is an opportunity for the internal state of a cell, which is an internal model in free-energy principle terms, to gradually become a mind. Presumably, the potential depth and capability of this mind depends on the complexity of the universe it happens to be interacting with, the duration of that interaction, and the breadth or intensity of that interaction.

We'll have to figure out how to maximize all three.

Bootstrapping general intelligence

So what should our cell's universe be? While we could choose any universe that responds reliably to a signal, including another computer program, my intention is to grow a model of general intelligence. So the choice is straightforward: the universe of each Hazelian cell is defined to be a general intelligence. A person, simply. Or any other type of general intelligence we happen to discover.

That's right, the universe (or environment) of each cell in a Hazelian CA is a person. The pairing is one-to-one, exclusive. And this pairing happens however it can be made to happen. More on this coming up.

Hazelian Cell with a person as its universe

Now that we know a cell's universe is a person, we can start taking shortcuts. While a random walk could eventually result in comprehensible signals that elicit a reliable response (someone, someday, might attempt this), I don't know anyone with the patience for it.

It's time to view the signal more concretely. Most certainly, the signal is a message of some kind. More specifically, it's a message that its person can make sense of. The nature of this message is important, so we have to be a little careful now.

Since the cell's single rule is to maximize evidence of its existence, it rewards itself more when it sends messages that cause ripples in its universe. Not just any ripples, but ripples that also provide an identifiable response to the original message. And, perhaps as importantly, ripples that reinforce or amplify its prospects for further "planting" the evidence of its existence (and which do not net-diminish its prospects of same).

So what's the path through this maze? A convenient shortcut is for the person to choose a message that fulfills all the above criteria and to then give it whole to the cell as part of its configuration signal. Without this configuration step, the cell would have no meaningful message to send and a long walk before stumbling upon one.

Without sustained quality novelty, the cell will bump into the part of human psychology that tunes out repetition, ignores the unattractive, and snuffs out sources of irritation. So, for the cell to avoid becoming dead, it will have to be infused with the capability to vary the message in novel and compelling ways.

To augment the H-cell's capability for effective variation, the cell must also enable its person to add, remove, or alter the core message contents. The person knows what's good for them (in general), and the cell doesn't know or care about the message details. To the cell, all that matters is whether its messages elicit a reliable response so that it may remain alive, or fail to elicit any response, in which case it risks death.

Let's get more specific about the message itself. The message emitted by the cell should increase the chance that its person will take one of a specifically provided but easily updated set of actions. Not random actions, but actions that create ripples in their universe.

Mirroring the cell's requirements stated earlier, these should not be just any ripples, but ripples the person can identify as responses to their action and which also "plant" evidence of their own existence throughout their universe.

Impact upon the larger universe by proxy

With this arrangement, we see the recipe for a mutually beneficial or symbiotic relationship. The cell's core objective aligns tightly with the person's core objective, and the growth of the cell's model leads to more impactful behavior by the person. This symbiosis drives all the remaining dynamics necessary for indefinite evolution, compound growth, and replication.

Symbiotic relationship

If you were wondering why a person might engage deeply with a cell of a cellular automaton, it's because they also get more of what they really want and the promise of even more tomorrow.

Probabilistic evolutionary hypergrowth

Most of what I've discussed so far is inherently probabilistic, with some important probabilities being quite small. For example, by default the probability of a cell going from dead to alive on any given day is minuscule. You've seen the stiff configuration requirements. Even when configuration happens to happen, the probability of that cell staying alive for longer than a day isn't much better, if only because its person is a fickle beast spoiled by professional attention-grabbing techniques. Besides, everyone knows what happens when a chain of low probabilities gets multiplied: someone wins the lottery!

But if we look deeper, it seems best to keep those probabilities low (but not so low that all cells stay dead). If a cell were easy to get moving, that would be a sign of excessive complexity and too many built-in priors, and thus a high chance of catastrophic failure later on. Ease of beginning would short-circuit the evolutionary process that's absolutely necessary for a simple initial implementation to find its fit in the world, and similarly for each cell to find a tight symbiotic fit with its person.

Because the low probabilities lead to severe selection pressure, and this pressure increases the chance of finding a sustainable fit early on, a door to hypergrowth opens up. We shall see what happens.

One source of growth that I've only hinted at is the multiplication of cells. Each Hazelian cell has an implicit incentive to generate copies or variants of itself and help other H-cells multiply as well (if only to make the universe more friendly). This is despite there being no explicit objective to reproduce or replicate, just as there is no explicit objective to survive. These two objectives emerge from the pairing of a ruthless but low-cost selection pressure with the single rule of "maximize the evidence of my existence".

Hidden society of cells

If an H-cell's person is normal, they are especially responsive to messages that suggest they take an action that brings them social reward. When this person has received clear value from their H-cell, they know there's a good chance they can successfully share their experience with someone who is already a friend or family member and influence their uptake. Any H-cell that discovers this type of correlation is more likely to survive the gauntlet of human fickleness, with the effect that there's a greater chance of an H-cell neighbor popping up.

The other side of this same phenomenon is that an H-cell's person may be asked what's going on simply because they've been behaving differently lately (and it hasn't been all bad). In still another scenario, some people may be interested in spinning up an H-cell of their own but are loath to make changes or try anything new alone, and so they may rally others to configure their H-cells as a cohort. These cells would tend to cluster as neighbors.

Because of these and other scenarios, an H-cell may not always need to put in extra work to discover its neighbors; the link could be established at a higher level. Once it does discover its neighbors, the next implicitly-incentivized step is to cross-pollinate capabilities and perhaps even fold in chunks of other H-cells' internal models. We'll see.

Is it a cellular automaton or not?

Despite all the unusualness of Hazelian CA, I believe there is a major factor I haven't mentioned yet which keeps it within the bounds of being a cellular automaton. It's this: does Hazelian CA look, act, and feel like a CA in the end?

Is there a grid? Will it be possible to set up different configurations and see how they evolve? Will there be fancy animations of the grid evolving in interesting ways over time, including neighbor interactions?

Yes to all, but with an important difference: scale.

Because a Hazelian CA is a second-order CA, it operates on a much larger timescale and scale of impact than usual. Furthermore, each H-cell's internal state is controlled indirectly by motivating people to hook in as sources of general intelligence. Each H-cell also represents real-world impact by proxy with a real person, so be careful with those complex configurations you're fantasizing about setting up! You'll have to be a little creative anyway, and brush up on your persuasion skills, because configuration originates from motivated individuals.

Connecting it all together

When looking at a typical CA grid, each populated cell doesn't represent a great deal of complexity or intelligence. A populated cell is merely the outcome of the initial configuration, the state of its neighbors, and simple rules iterated thousands of times. Still, we can see it play out the game of life when configured just right.

The major difference with Hazelian CA is that each cell can potentially represent massive complexity and general intelligence.

Hazelian CA grid

Because each H-cell interfaces with the larger universe by proxy with a person, and plays out the game of life while connected indirectly to this universe, it may indeed eventually play the game of real life while providing a nice visualization of how it all plays out.

You might still be skeptical of whether a Hazelian cellular automaton can gradually model general intelligence. It would be strange if you weren't skeptical, because it is a grand claim. Nonetheless, I've attempted to lay it all out in terms of familiar principles, techniques, and processes.

The idea of the cellular automaton is very old and very familiar, dating back to John von Neumann's 1940s quest for self-replicating robots. Many people of all ages and all backgrounds have played with Conway's Game of Life, and so there's a broad grasp of how CAs work.

Almost everyone in the Western world has heard of Plato's allegory of the cave, which we see shades of in the H-Cell's severely constrained view into a vast complex universe.

While Karl Friston's free-energy principle is fairly new, it's quite popular and well regarded despite not being fully understood by most. If this principle is valid and I've understood its implications well enough, we can reasonably expect one H-cell, eventually, to step on the hyper-train to mature general intelligence.

Last, but not least, the idea of enactivism, in which a mind develops through its body's interaction with its environment, has also been gaining traction recently. Actions are fundamental to how an H-Cell operates, and so we see a path toward a model of mind within its internal state.

Did I connect all these big ideas together correctly, or at least usefully?

Did I connect them to the real world correctly, or at least usefully, with the symbiosis model? Most importantly, did I connect them safely, via the limited Markov blanket?

Adding these questions together, we get to the big question. Does a Hazelian CA actually represent a method for growing a model of general intelligence from a minimal set of rules, a minimized set of priors, and no initial data?

I believe so, but we shall have to test it. Surely there is room for improvement.

Since the promise is great enough and the cost is low enough, someone will test it. I know I will. People from a wide variety of disciplines and educational backgrounds, all over the world, can build a Hazelian CA or use an existing implementation and test it to their heart's content.

How long do you think we'll have to wait to see something like Gosper's Glider Gun in the Game of Real Life?