there is a post called musk's non-missing mood that resonates quite well with me. it is indeed kind of disconcerting how people who seem rationally aware of AI risk don't seem to grok it as an actual thing. despite how real it is, it's hard not to think of it as fantasy fiction.
i totally understand why. i've been there too. but eventually i managed to update, progressively.
i'm still not quite there yet, but i'm starting to actually grasp what is at stake.
"detaching the grim-o-meter" remains a reasonable thing to do; you don't want to become so depressed that you kill yourself instead of saving the world. but you don't want to remain so deluded that you fail to weigh the importance of saving the world enough, either.
i'll learn japanese after the singularity. i'll make my game and my alternative web and my conlang and my software stack and many other things, after the singularity. it is painful. but it is what's right; it's closer to the best i can do.
and i know that, if at some point i give up, it won't look like pretending that everything is fine and compartmentalizing our imminent death as some fantasy scenario. it'll be a proper giving up, like going to spend the remaining years of my life with my loved ones. even my giving-up scenario is one that takes things seriously, as it should. that's what being an adult capable of taking things seriously is like.
how you handle your mental state is up to you. there is a collection of AI-risk-related mental health posts here. do what it takes for you to do the work that needs to be done. that doesn't mean becoming a doomer; your brain is straight-up not designed to deal with cosmic doom. but it doesn't mean remaining blindly naive either. the world needs you; it won't be saved by pretending things are fine.
and it certainly won't be saved by pretending things are fine and working on AI capability. that's just bad. please don't.
please take AI risk seriously.