(edit 2021-07-18: this post is probably not very good, as there's a body of anthropic-principle research out there and i haven't read any of it, having just gone off thinking about it on my own.)
the imminent intelligence explosion is likely to go wrong.
how likely?
if you imagine that you live pretty much at the cusp of such an event, then as per the anthropic principle you should expect there to be about as many observer-instants before you as after you. (an observer-instant being an instant at which you have a chance of making observations about that fact; see this and notably Nick Bostrom's Self-Sampling Assumption.)
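to make the "about as many before as after" step explicit (this is my own formalization, not something the post spells out): if you reason as if you were a uniformly random observer-instant among N total, then your rank r satisfies

\[
r \sim \mathrm{Uniform}\{1,\dots,N\} \quad\Rightarrow\quad \Pr\!\left(r \le \tfrac{N}{2}\right) = \tfrac{1}{2},
\]

i.e. the median such observer-instant has about as many observer-instants before it as after it.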
i've previously calculated that the future from now until heat death has room for roughly 10^200 human lifespans (of 80 years) (an estimation based on the number of particles in the observable universe, the amount of time until heat death, and the computational cost of running a human brain).
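as a sanity check, here's a minimal back-of-envelope sketch of that estimation in python. every constant below is my own assumed input (the post names the ingredients but not the numbers), so this is only one way to run the numbers; with these inputs it lands around 10^185, which is the same regime as 10^200 for an orders-of-magnitude argument:

```python
# back-of-envelope reconstruction of the "room for ~10^200 lifespans" estimate.
# every constant here is my own assumed input; the post only names the
# ingredients (particle count, time to heat death, cost of running a brain).
import math

PARTICLES = 1e80             # rough particle count of the observable universe
PROTON_MASS_KG = 1.67e-27    # treating particles as roughly proton-mass
OPS_PER_SEC_PER_KG = 1e50    # Bremermann's limit (theoretical upper bound)
YEARS_TO_HEAT_DEATH = 1e100  # commonly cited order of magnitude
SECONDS_PER_YEAR = 3.15e7

BRAIN_OPS_PER_SEC = 1e16     # common estimate of brain-equivalent compute
LIFESPAN_YEARS = 80

mass_kg = PARTICLES * PROTON_MASS_KG
total_ops = mass_kg * OPS_PER_SEC_PER_KG * YEARS_TO_HEAT_DEATH * SECONDS_PER_YEAR
ops_per_lifespan = BRAIN_OPS_PER_SEC * LIFESPAN_YEARS * SECONDS_PER_YEAR

print(f"~10^{math.log10(total_ops / ops_per_lifespan):.0f} lifespans")  # ~10^185
```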
the past, on the other hand, holds about 10^11 human lifespans (most of them not full 80-year lifespans, but such details wash out when working in orders of magnitude).
if the intelligence explosion is, as i believe, likely to result either in total death or in a well-populated future (whether good or bad), then the fact that i observe myself right next to the event (in time), rather than as one of the countless observers who (in well-populated timelines) exist after it, must be compensated for by such well-populated timelines being particularly rare within the set of possible future timelines.
how rare? about 1 in (10^200 / 10^11), which is 1 in 10^189.
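spelled out as an odds update (again my own formalization of the compensation argument above): in a well-populated timeline, the chance of finding yourself among the first ~10^11 of ~10^200 observer-lifespans is about 10^11/10^200, whereas in a timeline where the explosion kills everyone it's about 1; so observing yourself this early multiplies the odds in favor of the empty timelines:

\[
\frac{P(\text{populated} \mid \text{this early})}{P(\text{dead} \mid \text{this early})}
= \frac{P(\text{populated})}{P(\text{dead})} \cdot \frac{10^{11}/10^{200}}{1}
\approx \frac{P(\text{populated})}{P(\text{dead})} \cdot 10^{-189}.
\]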
factors which may make this calculation wrong: