posted on 2022-08-07

quantum immortality and local deaths under X-risk

assume quantum immortality holds for mankind as a whole when facing X-risks. then, depending on two factors (P, the probability that a timeline passes the point of no return, and T, the time between point-of-no-return and actual extinction),

it can be the case that the majority of person-experience happens in-between point-of-no-return and actual-extinction (in red below) rather than in continuously surviving (green below)

this probably matters from both an anthropics and an ethics perspective. it's a good reason to work on reducing X-risk (reducing P) even under the assumption of quantum immortality, if you value knowing that you're probably not uselessly working in an already-doomed timeline, or if you believe that doomed timelines contain particularly more suffering. another way to avoid spending experience in those doomed timelines is to reduce T: to make sure that, once doomed, we die as soon as possible.
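to make the dependence on those two factors concrete, here's a minimal toy model. the function, the parameter values, and the horizon H for surviving timelines are my own illustrative assumptions, not something from the argument above:

```python
# toy model: what fraction of total person-experience is spent
# in-between point-of-no-return and actual-extinction?
# P, T, H are illustrative assumptions, not established quantities.

def doomed_experience_share(P, T, H):
    """P: probability that a timeline passes the point of no return,
    T: years lived between point-of-no-return and actual extinction,
    H: years of experience accrued in a continuously surviving timeline.
    Returns the fraction of person-experience that happens in doomed
    timelines, weighting each branch by its probability."""
    doomed = P * T            # experience accrued in doomed branches
    surviving = (1 - P) * H   # experience accrued in surviving branches
    return doomed / (doomed + surviving)

# with a high P and a long lingering time T, most experience is doomed:
print(doomed_experience_share(P=0.9, T=50, H=100))   # ~0.82
# reducing either P or T shrinks that share:
print(doomed_experience_share(P=0.5, T=50, H=100))   # ~0.33
print(doomed_experience_share(P=0.9, T=5, H=100))    # ~0.31
```

under this toy model, reducing P and reducing T both cut the doomed share, which matches the post's two proposed interventions.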

in addition, if you think forking the timeline costs us forking bits (that is, if we can only fork the timeline so much, and we want to preserve as many forks as possible for utopia), then reducing P becomes more important than reducing T, because you save more "realness juice" or "forking bits" for later, when we've solved AI alignment and start populating the timelines with utopia.

which, thankfully, agrees with the straightforward no-quantum-immortality perspective on X-risks: reducing the chance of extinction is the important thing.


CC0-1.0 license: unless otherwise specified on individual pages, all posts on this website are licensed under the CC0-1.0 license.
unless explicitly mentioned, all content on this site was created by me, not by others nor by AI.