tammy's blog about AI alignment, utopia, anthropics, and more
how good can we make the future? would we prefer 0.1 quantum amplitude of a really good utopia, or 0.2 quantum amplitude of a kinda okay utopia? what does "kinda okay utopia" even mean?
in this post i list a combinatorial set of possible utopias. i think i want a
sublime / concrete utopia, where
some / all living / all living and past / all possible
humans / moral-patient living beings / moral-patient information systems
on earth / in light cone / everywhere
have their abstract values satisfied / live / live and have their abstract values satisfied
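the combinatorial structure above can be sketched as a product over dimensions (an illustrative enumeration; the dimension labels are my own, not part of the original list):

```python
# enumerate the combinatorial set of utopias described above
from itertools import product

dimensions = {
    "kind":   ["sublime", "concrete"],
    "scope":  ["some", "all living", "all living and past", "all possible"],
    "beings": ["humans", "moral-patient living beings",
               "moral-patient information systems"],
    "where":  ["on earth", "in light cone", "everywhere"],
    "what":   ["have their abstract values satisfied", "live",
               "live and have their abstract values satisfied"],
}

# each element of `utopias` is one possible utopia shape
utopias = list(product(*dimensions.values()))
print(len(utopias))  # 2 * 4 * 3 * 3 * 3 = 216
```

so even this coarse framework already distinguishes a couple hundred candidate utopia shapes.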
this gives us a classification of what various long-term alignment plans could aim for; it also provides a framework for discussing what this perspective might be missing, and for getting an idea of what we could be aiming for.
of course, ideally, we would prefer not to have to figure this out soon, and instead buy ourselves more time before we commit to a global utopia shape. but we may not have that option, and even if we do, it might be useful to have some idea of where we want to go.
unless otherwise specified on individual pages, all posts on this website are licensed under the CC_-1 license.
unless explicitly mentioned, all content on this site was created by me, not by others or by AI.