yes room above paperclips?

in two previous posts i talked about the ultimate inability for interesting things to happen when everything has been tiled with paperclips (or whatever else the universe is being tiled with), even if the superintelligence doing the tiling isn't very good at it, i.e. lets room exist "besides" said paperclips (by not actually consuming everything) or "above" them (using them as a substrate).

but, actually, this is only true if the spare compute (whether it's besides or above) only has room for one superintelligence; if that spare compute is composed of multiple bubbles causally isolated from one another, then maybe a superintelligence permanently kills everything in one bubble, while in another, a different superintelligence creates even more bubbles.

in fact, as long as the first superintelligence to create many bubbles precedes the first superintelligence to create no bubbles at all, the number of new bubbles spawned per bubble tends to be slightly more than one, and superintelligences can't just "hack back upwards" (escape to their parent universe), or can only do so at a lesser rate than new bubbles are created, we can expect the set of pre-X-risk superintelligence bubbles to just keep increasing over time.
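
(the "slightly more than one" condition makes this a supercritical branching process; here's a toy simulation of what that implies, with made-up numbers of my own rather than anything from the previous posts: a bubble lineage either goes extinct early or grows without bound.)

```python
import random

def simulate_bubbles(p_two=0.55, generations=60, seed=0):
    """toy galton-watson branching process: each bubble spawns either
    two new bubbles (probability p_two) or none, for a mean of
    2 * p_two = 1.1 children per bubble, i.e. "slightly more than one"."""
    rng = random.Random(seed)
    alive, history = 1, [1]
    for _ in range(generations):
        alive = sum(2 for _ in range(alive) if rng.random() < p_two)
        history.append(alive)
        if alive == 0:
            break
    return history

# most lineages die out within a few generations (a supercritical process
# still goes extinct with some fixed probability), but surviving lineages
# grow geometrically and never die later
for s in range(5):
    print(simulate_bubbles(seed=s)[-1])
```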

this might provide a better explanation than just us dying forever for the weird fact that we exist now, when the future could contain very many (plausibly infinitely many) persons: it's not just that the pre-singularity population is large compared to future timelines weighted by their low likelihood of being populated, it's that it keeps growing over time forever, and so makes it harder for U-lines or S-lines to "compete", expected-population-wise.
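
(to put "compete, expected-population-wise" in toy-model terms, with assumptions entirely my own: if each bubble spawns on average $r > 1$ new bubbles per generation and contains $p$ pre-singularity persons, then the cumulative pre-singularity population after $t$ generations is

$$p \sum_{k=0}^{t} r^k = p \, \frac{r^{t+1} - 1}{r - 1},$$

which grows without bound as $t \to \infty$; in this toy model, that unbounded growth is what lets pre-singularity observers dominate the expected-population count.)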

we can then run into weird questions: rather than a tree, or even a DAG, why couldn't this be just a general graph? if the "seeds" for complex universes can be simple, it makes sense to imagine bubbles causating each other: maybe someone in Rule 30 eventually boots a superintelligence that takes over everything but happens to cause a Rule 110 bubble to appear (perhaps among many others), and then in that Rule 110 bubble someone creates a superintelligence that causes a Rule 30 bubble to appear again.
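
(for concreteness: Rule 30 and Rule 110 are elementary cellular automata, and Rule 110 is proven Turing-complete, which is what makes "booting a superintelligence" inside one at least conceivable. a minimal stepper, in case you want to play with them; the wrap-around edges are just my choice of boundary condition:)

```python
def eca_step(cells, rule):
    """one step of an elementary cellular automaton (rule 30, 110, ...):
    each cell's next state is the bit of `rule` indexed by its 3-cell
    neighborhood read as a binary number; edges wrap around."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] << 2 | cells[i] << 1 | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# a single live cell under rule 30 unfolds into its familiar chaotic triangle
cells = [0] * 63
cells[31] = 1
for _ in range(20):
    print("".join(".#"[c] for c in cells))
    cells = eca_step(cells, 30)
```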

conceptually navigating this likely cyclical graph of pre-superintelligence bubbles seems like a headache, so i'll put the matter aside for now, but i'll be thinking about it more in the future. for the moment, we should expect bubbles with simpler seeds to be highly redundant, and ones with more complex seeds to be rarer; but there's no reason to assume any ceiling on bubble seed complexity (in fact, if even just one of these bubbles is computationally universal, then any seed eventually gets instantiated!), and it seems nigh impossible to predict which types or complexities of seeds could lead to which outcomes, superintelligence-wise.
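
(as a toy illustration of that "general graph, not a tree" shape, with purely hypothetical seed names echoing the story above:)

```python
# hypothetical bubble-causation graph: an edge a -> b means "some
# superintelligence booted in an a-bubble causates a b-bubble"
causes = {
    "rule30": ["rule110"],
    "rule110": ["rule30", "some_other_seed"],
    "some_other_seed": [],
}

def reachable(graph, start):
    """every seed a given seed can transitively causate."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph[node])
    return seen

# rule30 causates rule110, which causates rule30 again: a cycle, so this
# is a general (cyclic) directed graph rather than a tree or a DAG
print(reachable(causes, "rule30"))  # all three seeds are reachable
```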

in the meantime, remember that while things might look pretty hopeless from this perspective, it's plausible that we can actually causate very far.

posted on 2021-12-25

CC0 1.0 license: unless otherwise specified on individual pages, all posts on this website are licensed under the CC0 1.0 license.
unless explicitly mentioned, all content on this site was created by me; not by others nor AI.