isn't it weird that we have a chance at all?

we are facing imminent AI X-risk. but we have a bunch of tools around us to figure out that this is a problem, and even to start thinking about solutions.

we have enough physics to think about heat death, enough computational complexity theory to suspect that solutions requiring us to solve NP-complete problems are probably not feasible, enough rationality to organize a small movement around AI alignment work and figure out things like solomonoff induction or the malignity of the universal prior, the ability to do some anthropics, and even a few mild ideas as to what the fuck human values even are.

isn't this kind of weird? it feels to me like most civilizations about to die of AI X-risk would be entirely missing several to most of these; but somehow, unless i'm missing a crucially important unknown unknown field, it does kind of look like we have almost enough to work with in the various fields required. even the geopolitical situation and the public awareness situation, while disastrous, are not entirely hopeless.

i wonder if this has any meaning, whether it be anthropic or simulation-theoretic or otherwise.