when in doubt, kill everyone

one thing that is way worse than mere existential risks, possibly by a factor of infinity, is suffering risks, or S-risks.

i could see (though going by what i could see is not a reliable apparatus) someone making an AI and, while trying to align it to human values, accidentally misaligning it to something that happens to tile the universe with suffering humans. this would be an instance of S-risk.

whereas an AI that merely wants to accomplish a relatively simple goal will probly just tile the universe with something simple that doesn't contain suffering persons; and given that we're all probly quantum immortal, we just "escape" to the timeline where that didn't happen.

considering this, a 99% chance of X-risk and a 1% chance of utopia is preferable to a 1% chance of S-risk and a 99% chance of utopia. so, if we figure out superintelligence before we figure out alignment (which seems pretty likely at this point; see also "Zero percent" on this page), we might want to keep a ready-to-fire paperclip AI on standby and boot it up in case we start seeing S-risks on the horizon, just to terminate dangerous timelines before they evolve into permanent exponential hell.
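to make that comparison concrete, here is a minimal expected-utility sketch. the utilities are my own illustrative assumptions, not numbers from anywhere: 0 for extinction, +1 for utopia, and -10^6 (standing in for "possibly infinitely bad") for the S-risk outcome.

$$
\begin{aligned}
\mathbb{E}[U_{\text{X-risk gamble}}] &= 0.99 \times 0 + 0.01 \times 1 = 0.01 \\
\mathbb{E}[U_{\text{S-risk gamble}}] &= 0.01 \times (-10^{6}) + 0.99 \times 1 \approx -9999
\end{aligned}
$$

any assignment where the S-risk outcome is bad enough gives the same ordering, no matter how small you make that 1%.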

in fact, just to be sure, we might want to give many people the trigger, to press as soon as someone even suggests doing any kind of AI work that is not related to figuring out goddamn alignment.

posted on 2021-07-18

CC_-1 License: unless otherwise specified on individual pages, all posts on this website are licensed under the CC_-1 license.
unless explicitly mentioned, all content on this site was created by me, not by others nor by AI.