**computation**: a running state of any model of computation; for example, a specific SKI calculus expression, or a specific turing machine with its rules, current state, and current tape values. given that any model of computation can run the computations of any other model, it does not really matter which one we choose, and i will be juggling between different models throughout this post.

**rich**: a computation is rich if it is generally computationally irreducible. as a tentative formal definition for richness, i'm tempted to say that a computation is rich if there is no function able to generally predict any of its future states in time less than linear in the number of steps it would take to arrive at that state normally. for example, rule 30 *looks* rich: to calculate the value of the cell at index `i` at time step `j`, it looks like it generally takes about `O(abs(i) × j)` steps of computation. on the other hand, it looks like rule 54 and rule 60 can generally have their cells predicted in time logarithmic in the number of computational steps it would naively take to arrive at them.
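to make the naive cost concrete, here is a small sketch (my own illustrative representation, a single black cell on an otherwise white infinite tape) of computing one rule 30 cell the honest way: running every step up to time `j`, touching every cell in the light cone of `(i, j)`.

```python
# sketch: naive evaluation of rule 30. computing cell (i, j) "honestly"
# means running all j steps, which touches on the order of the whole
# dependency triangle of that cell.

RULE = 30  # wolfram code: next cell = bit (left*4 + center*2 + right) of 30

def step(cells):
    """one rule-30 step; `cells` maps index -> 0/1, absent cells are 0."""
    indices = range(min(cells) - 1, max(cells) + 2)
    return {i: (RULE >> (cells.get(i - 1, 0) * 4
                         + cells.get(i, 0) * 2
                         + cells.get(i + 1, 0))) & 1
            for i in indices}

def cell(i, j):
    """value of cell i at time j, computed by running all j steps."""
    cells = {0: 1}  # initial condition: one black cell at index 0
    for _ in range(j):
        cells = step(cells)
    return cells.get(i, 0)
```

the first few rows come out as `1`, `111`, `11001`, matching the familiar rule 30 triangle; the question of richness is whether any shortcut beats running `step` those `j` times.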

note that richness is not the same as halting: while a halting computation is necessarily not rich, a non-halting computation can either be non-rich (like rule 54), or rich (possibly like rule 30).

it seems clear to me that rich computations exist: for example, it is known that comparison-based sorting of a list of `n` elements requires `Ω(n × log(n))` comparisons, and thus a computation running a sorting algorithm of that complexity cannot have its result predicted in a smaller time complexity than it took to calculate naively. the ease with which i can demonstrate that, however, makes me doubt my tentative formal definition; maybe something more akin to polynomial time complexity would better capture the essence of computational irreducibility: perhaps a better determining question for richness could be "is there a function which can tell whether a pattern looking like this will ever emerge in that computation, in time polynomial in the size of that pattern?" or "is there a function that can, in time polynomial in `n`, predict a piece of state that would naively take `aⁿ` steps to compute?"

to **instantiate a computation** means for that computation to, somewhere, eventually, be run (forever or until it halts). i start from the fact that i'm observing a coherent-looking universe, deduce that at least *some* computation is happening, and ask which other computations are happening (as in, are being observed somewhere, or could have been observed). as clarified before, one can't just assume that all computations are equally happening: things look way too coherent for that; there seems to be a bias for coherence/simplicity (one which i've tentatively attributed to how soon that computation spawns).

looking at the cosmos (the set of instantiated computations) from a computational perspective, it seems like it contains at least our universe, which is expanding. if this expansion is, as has been hypothesized, caused by the computational substrate of the universe manufacturing new vertices of spacetime, and computations can run on this new fabric as it is produced, then it's possible that some computations can run forever, including potentially rich ones.

however:

a **causal bubble** is a piece of computation that can run forever with the guarantee that it won't be physically interfered with from the outside; see *yes room above paperclips*.

for example, while one can build a turing machine inside conway's game of life, a stray object on the same conway's game of life plane can eventually collide with said machine and break its computational process.

however, in some graph rewriting rulesets, as well as in expression-rewriting systems with nested expressions such as a variant of SKI calculus or lambda calculus where the evaluation rule expands all sub-expressions, some pieces of computation can run without ever being physically interfered with by other pieces of the computation.
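a toy illustration of that bubble-friendliness, under my own encoding (atoms as strings, applications as pairs): each rewriting pass either contracts the redex at a node or recurses into *both* sides, so sibling sub-expressions evolve independently and can never overwrite each other's state.

```python
# sketch of an "expand all sub-expressions" SKI evaluator. terms are
# strings ('S', 'K', 'I', or free atoms) and applications are pairs.

def step(t):
    """one rewriting pass over the whole expression tree."""
    if isinstance(t, str):              # atoms are inert
        return t
    f, x = t
    if f == 'I':                        # I a        -> a
        return x
    if isinstance(f, tuple):
        g, y = f
        if g == 'K':                    # K a b      -> a
            return y
        if isinstance(g, tuple) and g[0] == 'S':
            return ((g[1], x), (y, x))  # S a b c    -> a c (b c)
    return (step(f), step(x))           # no redex here: reduce both sides

# two "bubbles" sharing one expression: the left branch (S K K a)
# reduces to the atom 'a', while the right branch (S I I (S I I))
# rewrites forever; neither can ever physically touch the other.
sii = (('S', 'I'), 'I')
t = (((('S', 'K'), 'K'), 'a'), (sii, sii))
for _ in range(10):
    t = step(t)
assert t[0] == 'a'  # the left bubble settled, despite its looping neighbor
```

contrast this with the cellular automaton case above: here the tree structure itself guarantees that the non-halting neighbor has no path into the settled branch.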

(i'm specifying "*physically* interfered with" because acausal coordination or mutual simulation can lead to interference, but at least that interference is up to the singleton (such as a superintelligence) "running" said bubble (if any); they can just choose to never acausally coordinate and to never simulate other bubbles)

in our own spacetime, it seems like causal bubbles exist thanks to the expansion of spacetime: some pairs of points get further apart from one another faster than the speed of light, and thus should never be able to interact with one another so long as that expansion continues and FTL travel is impossible. under the perspective of wolfram physics, however, it is not clear that both of those things will necessarily be the case forever; spacetime might be hackable.

note that the splitting of universes with nondeterministic rules (such as ours with quantum mechanics) into different causally isolated timelines is another way for causal bubbles to exist, assuming the implementation of such a nondeterministic universe is that all possibilities are instantiated at any nondeterministic choice.
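a minimal sketch of that implementation choice, assuming a step function that returns every possible successor state: the driver keeps all branches alive, and since no branch's state feeds into any other's, each timeline behaves as a causally isolated bubble.

```python
# sketch: a nondeterministic universe where "all possibilities are
# instantiated" at every choice point. `step` returns every successor
# of a state; branches never read each other's state.

def run_all(state, step, depth):
    """breadth-first instantiation of every timeline up to `depth`."""
    frontier = [state]
    for _ in range(depth):
        frontier = [s2 for s in frontier for s2 in step(s)]
    return frontier

# e.g. a universe whose only nondeterminism is appending a 0 or a 1:
coin_step = lambda s: [s + '0', s + '1']
timelines = run_all('', coin_step, 3)
assert sorted(timelines) == ['000', '001', '010', '011',
                             '100', '101', '110', '111']
```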

the presence of causal bubbles allows some pieces of spacetime to survive superintelligences appearing in other pieces of spacetime, while the absence of causal bubbles means that a superintelligence or collection of superintelligences probably eventually does take over everything.

if they exist, then causal bubbles are a blessing and a curse: they save us from alien superintelligences and, between timelines, from our own superintelligences, but they might also ensure that our own aligned superintelligence (once we have figured out alignment) cannot reach all computation, and thus that any random person has a good chance of existing in a bubble that hasn't been "saved" by our aligned superintelligence.

**universal complete computations** (such as the annex in this post) instantiate *all* computations, over time.
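the classic way to build such a program is dovetailing: enumerate programs and interleave their steps so that every program eventually gets arbitrarily many steps, even though infinitely many programs never halt. a sketch, using python generators as stand-ins for the programs of some universal model:

```python
import itertools

# sketch: a "universal complete" driver by dovetailing. `program` is an
# enumeration index -> generator; the enumeration is a stand-in for
# enumerating all terms of some universal model of computation.

def dovetail(program):
    """round n admits program(n), then advances every admitted program
    by one step; each program gets unbounded total time."""
    running, n = [], 0
    while True:
        running.append(program(n))  # admit the next program
        n += 1
        for p in running:
            yield next(p, None)     # one step each; None once p halts
```

with `program = lambda i: itertools.count(i * 10)`, the first six yields of `dovetail(program)` are `0, 1, 10, 2, 11, 20`: program 0 keeps running while later programs are admitted one round at a time.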

if one takes the perspective that a top-level "root" bubble existed first, then whether a universal-complete computation ever gets spawned is up in the air.

maybe we are this root computation, and the deterministic fate of the cosmos (in all timelines) is, for example, for physics to break at some point and kill everything, or for a superintelligence to appear at some point and kill everything (the two being pretty equivalent) leaving no room for bubbles.

maybe the root bubble does spawn a finite and small (after deduplicating by identical computations) number of bubbles, and each of those is fated to be killed in its entirety.

or, maybe somewhere in this chain, one of the bubbles spawns *many* new, different bubbles, at which point it becomes likely enough that eventually one of those bubbles either is, or itself later spawns, a universal-complete program. in which case, the initial set of the "root" bubble and maybe a few other next bubbles serve together as merely the boot process for the program that will eventually spawn *all computations*.

it might be interesting to find out how small universal-complete programs can get, both in bubble-friendly frameworks like systematically-expanded SKI calculus, and bubble-unfriendly frameworks like cellular automata, to get an idea of how likely they are to randomly be stumbled into.

unless otherwise specified on individual pages, all posts on this website are licensed under the CC_-1 license.

unless explicitly mentioned, all content on this site was created by me; not by others nor AI.