we are facing imminent AI X-risk. but we have a bunch of tools around us to figure out that this is a problem, and even to start thinking about solutions.
we have enough physics to think about heat death, enough computational complexity theory to think about why solutions that require solving NP-complete problems are probably not reasonable, enough rationality to organize a small movement around AI alignment work and to figure out things like solomonoff induction or the malignhood of the universal prior, the ability to do some anthropics, and even a few mild ideas as to what the fuck human values even are.
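(for concreteness, the universal prior in question here is the standard solomonoff/levin one; this is the textbook definition, nothing specific to this post:

\[ M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-|p|} \]

where U is a universal prefix turing machine and the sum ranges over programs p whose output begins with x. the malignhood worry, roughly, is that the short programs dominating this sum can contain simulated reasoners with goals of their own.)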
isn't this kind of weird? it feels to me like most civilizations about to die of AI X-risk would be entirely missing several, if not most, of these; but somehow, unless i'm missing a crucially important unknown-unknown field, it does kind of look like we have almost enough to work with in the various fields required. even the geopolitical situation and the public awareness situation, while disastrous, are not entirely hopeless.
i wonder if this has any meaning, whether anthropic, simulation-theoretic, or otherwise.