(edit: maybe it doesn't)
what if a phenomenon is powerful enough to kill everyone, but not smart enough to be optimal at reasoning? (such as a grey goo event, or a "dumb" superintelligence with a faulty decision mechanism)
then, in all likelihood, it eventually dies to an alien superintelligence that is better at decision-making and thus at taking over everything.
our superintelligence doesn't just need to be aligned enough; it also needs, on the capability side, to be maximally intelligent. hopefully, it's smart enough to start making itself smarter recursively, which should do the trick.
the point is: when talking about the eventual superintelligence(s) that reign over the cosmos, assume whichever one(s) "won" to be optimal at decision-making, because the others probably got outcompeted.
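to make the intuition concrete, here's a toy sketch of my own (not something from the post), with a made-up "decision quality" number standing in for how close each agent is to optimal reasoning: agents repeatedly contest chunks of a fixed resource pool, the better decision-maker wins each contest a bit more often, and over enough rounds the near-optimal one ends up holding nearly everything.

```python
# toy sketch (illustration only): expanding agents with different decision
# quality compete for a fixed pool of resources. "quality" is a hypothetical
# stand-in for how close each agent is to optimal decision-making.
import random

random.seed(0)

# name -> decision quality in [0, 1]; grey goo is deadly but a poor decider
agents = {"grey goo": 0.2, "dumb ASI": 0.5, "alien ASI": 0.9}
resources = {name: 100.0 for name in agents}

for step in range(10_000):
    a, b = random.sample(list(agents), 2)
    # the better decision-maker wins a contested chunk more often
    p_a = agents[a] / (agents[a] + agents[b])
    winner, loser = (a, b) if random.random() < p_a else (b, a)
    taken = min(1.0, resources[loser])
    resources[loser] -= taken
    resources[winner] += taken

print(resources)  # the near-optimal agent ends up with almost everything
```

even a small edge per contest compounds: the weaker agents drift toward zero and stay there, which is the "others got outcompeted" part of the argument.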