
posted on 2022-12-15

how far are things that care?

as a follow-up to my previous post about the universal program, let's talk about forces outside of this world. some computations in the universal program contain agents running our world, and some of those agents will eventually interfere with the simulation because they care about us in particular — though note that interference doesn't necessarily look like aliens writing a giant message in the sky; picking unlikely enough everett branches to be the ones that keep getting computed is sufficient. how far into the universal program are these aliens? importantly, are they early enough, polynomially close enough, that we'd allocate some reasonable amount of probability to ending up in their interfered version of events?
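to make "how far" a bit more concrete: here's a minimal python sketch of one way a universal program could work, as a dovetailer that interleaves every program. the toy programs, the 2**(r - i) step shares, and the round structure are all illustrative assumptions of mine, not necessarily the construction from the previous post; the point is just that a computation's position in the enumeration determines how much it gets run.

```python
# a minimal sketch of "the universal program" as a dovetailer, assuming
# it means interleaving every program on one shared stream of steps.
# toy_program is a stand-in: the real thing would enumerate all strings
# fed to a universal turing machine.
from itertools import count

def toy_program(i):
    # stand-in for the i-th program: an endless stream of (program, step) events
    for step in count():
        yield (i, step)

def universal_program(n_programs=4, rounds=4):
    # round r runs programs 0..r, giving program i 2**(r - i) steps, so a
    # program's share of compute falls off exponentially with its index:
    # sitting deeper in the enumeration means getting run less.
    progs = [toy_program(i) for i in range(n_programs)]
    trace = []
    for r in range(rounds):
        for i in range(min(r + 1, n_programs)):
            for _ in range(2 ** (r - i)):
                trace.append(next(progs[i]))
    return trace

# after 4 rounds, program 0 has run 15 steps while program 3 has run 1
print(universal_program())
```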

there are two factors in this. on one hand, "agents that care" about us can probly compute us faster, by only computing (or even approximating) the parts of our world that are necessary to run us — unless most of our realness amplitude is already within intentional computations. on the other hand, we only exist later within those worlds; agents that care about us have to first compute themselves, before running us. but the added cost of running our world within another world is at most the cost of composition, and the composition of two polynomial functions is itself polynomial. so, if we stick to the notion that polynomial computations are the "easy" computations as a basis for what gets to be real within the universal program, then the versions of us being simulated by agents that care about us are probly only polynomially far away from "naive" versions of us — which is still some distance away, but does get us a form of quantum immortality.
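to spell out the composition step (a worked bound; the specific cost functions are illustrative assumptions, where g is what one of our steps costs the host world and f is what one host step costs the universal program):

```latex
f(s) \le a\,s^{j}, \quad g(t) \le b\,t^{k}
\;\Longrightarrow\;
(f \circ g)(t) \;\le\; a\,(b\,t^{k})^{j} \;=\; a\,b^{j}\,t^{jk}
```

so as long as both overheads are polynomial, being run inside a caring civilization only pushes us polynomially deeper into the universal program.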

in this view, what is it that we're saving the world for? it's to increase the realness amplitude of our survival, but also to secure our self-determination, in case we don't want to be at the mercy of things that care, because they might not care in a way we'd want. there is probly some sense in which things that care to simulate us in particular have some notion that we are interesting, but it might not be one that we'd necessarily find ethical; the set of values that entail us existing is larger than the set of values that entail us existing and having an okay time.

what are we to do? as with simulation hypotheses, for reasons like those mentioned in bracing for the alignment tunnel, and as FDT seems to dictate, we generally ought to keep doing what seems important. still, this perspective might update how we feel about things emotionally.

