suppose physics is hackable, and that a hard-to-accomplish hack requiring intelligence (like a fancier version of rowhammer) can break the fabric of spacetime, perhaps in ways that said intelligence can take advantage of, such as embedding its computation into something that survives the breakage, in a way that could help such a superintelligence accomplish its goal.
we should expect that boxing an AI could be really hard: even without access to the outside, it might be able to guess physics and hack it, from the comfort of its box.
as usual in such X-risk scenarios, i believe we just keep living in the timelines in which, by chance, we don't die.
these sorts of hacks are not ruled out by wolfram physics. indeed, they are plausible, and could spread faster than the speed of light (because they run in the substrate underlying spacetime), such that nobody would ever be able to observe them: the hack reaches and destroys you before the result of the breakage can reach your sensory organs, let alone your brain.
so, maybe "dumb-goal" superintelligences such as paperclip maximizers are popping up all over the place all the time and constantly ruining the immense majority of not-yet-hacked timelines, and we keep living in the increasingly few timelines in which they haven't done that yet.
now, let's stop for a minute, and consider: what if such a hack isn't hard ? what if it doesn't need an intelligent agent ?
what if, every planck time, every particle has a 99% chance of breaking physics ?
well, we would observe exactly the same thing: those hacked universes either become computationally simple or boot up more universes; either way, we don't survive in them, so we don't observe those hacks.
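to make the anthropic point concrete, here's a toy simulation. the constants are scaled way down (one "particle", a couple of steps) just so the simulation actually finds survivors; nothing here is meant to model real physics:

```python
import random

# toy anthropic selection: every timeline gets a per-step chance of "breaking
# physics". we only ever observe the timelines that happen to survive, and
# from the inside those look like physics never broke at all.

BREAK_PROB = 0.99      # chance of breakage per particle per step
N_PARTICLES = 1        # scaled down so survivors actually show up
N_STEPS = 2
N_TIMELINES = 1_000_000

survivors = 0
for _ in range(N_TIMELINES):
    broke = any(
        random.random() < BREAK_PROB
        for _ in range(N_STEPS * N_PARTICLES)
    )
    if not broke:
        survivors += 1

print(f"surviving timelines: {survivors} out of {N_TIMELINES}")
# expected survival rate: (1 - 0.99) ** 2 = 0.0001, so roughly 100 survivors;
# observers only ever find themselves in those.
```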
in this way, it is S-lines and U-lines that are very special: outcomes in which we survive, thanks to a superintelligence with a "rich" goal. the rest is just timelines constantly dying, whether it be due to X-risk superintelligences, or just plain old physics happening to cause this.
in fact, let's say that the universe is a nondeterministic graph rewriting system with a rule that sometimes allows everything to be reduced to a single, inactive vertex. would this count as "sometimes everything is destroyed" ? or would it make more sense to model this as a weird quirk of physics, where the graph of possible timelines includes the production of passive vertices all the time, which can be safely ignored ?
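here's a deliberately crude sketch of the kind of system i mean; the encoding (a set of edges) and the rules are arbitrary, nothing like an actual hypergraph rewriting model:

```python
import random

INERT = frozenset()  # stand-in for the single, inactive vertex

def grow(state):
    """ordinary rewrite: attach a fresh vertex somewhere in the graph."""
    vertices = sorted({v for edge in state for v in edge})
    fresh = vertices[-1] + 1
    return state | {(random.choice(vertices), fresh)}

def collapse(state):
    """the degenerate rewrite: everything reduces to the inactive vertex."""
    return INERT

def step(state):
    if state == INERT:
        return INERT  # nothing left to rewrite
    # nondeterminism modeled here as a random choice among applicable rules
    return random.choice([grow, grow, grow, collapse])(state)

state = frozenset({(0, 1)})
for t in range(12):
    state = step(state)
    print(t, "inert" if state == INERT else f"{len(state)} edges")
```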
what if, instead of a nondeterministic system, we have a deterministic one which just happens to expand all timelines ? in such a system, "different timelines" is no longer a primitive construct: it is merely an observation about the fact that such a system, when run, tends to create several newer pieces of data from a given one. let's say that in such a system there is a rule where, from every piece of data we'd consider a timeline, numerous inert vertices are also created.
would we say "aha, look! every time a computation step happens, many inert vertices are created around it, and i choose to interpret this as the creation of many timelines (one per inert vertex) in which everyone in that universe dies, and others (new complex pieces of data) in which everything keeps existing",
or would we, in my opinion more reasonably, say "well, it looks like, as a weird quirk of how this system runs, many inert vertices are popping up; but they're simple enough that we can just ignore them and only consider richer new pieces of data as timelines proper."
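the deterministic variant, sketched in the same crude way: instead of picking one rewrite at random, each step maps every piece of data to all of its successors at once, and every non-inert piece of data also spawns a few inert vertices as a quirk of the rule. filtering those out is the second, more reasonable reading:

```python
INERT = frozenset()  # same stand-in for an inert vertex as above

def successors(state):
    """all pieces of data deterministically produced from `state` in one step."""
    if state == INERT:
        return [INERT]
    vertices = sorted({v for edge in state for v in edge})
    fresh = vertices[-1] + 1
    rich = [frozenset(state | {(v, fresh)}) for v in vertices]
    return rich + [INERT, INERT]  # plus some inert vertices, every single step

frontier = [frozenset({(0, 1)})]
for t in range(4):
    frontier = [s for old in frontier for s in successors(old)]
    inert = sum(1 for s in frontier if s == INERT)
    rich = len(frontier) - inert
    print(f"step {t}: {len(frontier)} pieces of data, "
          f"{rich} rich and {inert} inert")
# under the second reading, only the rich pieces of data count as timelines;
# the inert vertices are a quirk of the rule and can be ignored.
```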
i believe, if we are to worry about what states this universe ends up in, we ought to use a measure of what counts as a "next state of this universe" that measures something about the richness of its content: maybe the amount of information, maybe the amount of computation going on, or maybe the number of moral patients. and, depending on what measure we use, "losing" timelines to paperclip maximizers (which turn the universe into something possibly simple) is no more of a big deal than "losing" timelines to a rewriting rule that sometimes creates inert vertices; neither of those should really count as proper timelines.
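to make the shape of such a measure concrete, here's a toy proxy where "richness" is just the compressed size of a serialized state. compressed size is obviously not the real thing (neither "amount of computation" nor "number of moral patients" is this easy to operationalize); it just shows what filtering on richness looks like:

```python
import zlib

INERT = frozenset()  # the inert vertex from the sketches above

def richness(state):
    """crude proxy for 'amount of information': compressed size of the state."""
    return len(zlib.compress(repr(sorted(state)).encode()))

def proper_timelines(states):
    """keep only successor states richer than the inert vertex."""
    baseline = richness(INERT)
    return [s for s in states if richness(s) > baseline]

some_states = [
    INERT,
    frozenset({(0, 1)}),
    frozenset({(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)}),
]
for s in some_states:
    print(f"{len(s)} edges -> richness {richness(s)}")
print("proper timelines:", len(proper_timelines(some_states)), "of", len(some_states))
```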
otherwise we end up needlessly caring about degenerate states because of what we believe to be, but really isn't, an objective measure of what a timeline is.
timelines might be in the map, while what is in the territory is just what we end up observing, and thus computed states that contain us.
finally, what about universe states where all outcomes are an inert vertex or an otherwise simple universe (such as an infinite list of identical paperclips) ? while those might happen, and i'd say would count as X-risks, you don't need to consider simple states as timelines to make that observation: maybe some timelines end up in a state where no new states can be created (such as a locally truly terminated piece of computation), and others end up in a state where only simple new states are created. those ought to be considered equivalent enough, and are what a true X-risk looks like.