caution: this post is a cognitively hazardous idea which may cause you to change your behavior and regret having learned about said idea. please don't proceed without informed consent, and please don't tell people about cognitive hazards without their own informed consent.
six months ago, i was worried about AI development going too far. today, things keep getting worse, to the point where i think it has become utilitarianistically useful to release this idea i've been thinking about recently.
in how timelines fall, i talk about how, if we are to keep observing a timeline that somehow survives X-risks even as they become increasingly likely, then we should expect to observe our timeline undergoing whatever chance events it takes to avoid those risks, including global economic collapse, if that's a likely enough event.
turns out, if you personally choose to do something which might help AI development (and thus increase the probability of X-risk, or if you prefer, the number of timelines that die to X-risk), then you make yourself the kind of thing that will tend to have been incapacitated in surviving timelines. you might die, which would be unpleasant to the people who like you; but you might also just eventually quit that job, or become unable to work for whatever reason.
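to make that anthropic claim a bit more concrete, here's a toy monte-carlo sketch of my own; the numbers for P_INCAPACITATED, P_DOOM_BASE, and P_DOOM_BOOSTED are made-up assumptions, not claims about real probabilities, and the only point is the qualitative effect: conditioning on the timeline surviving raises the chance that you turn out to have been incapacitated.

```python
import random

# toy anthropic model, purely illustrative; all numbers are made-up assumptions
N = 100_000            # number of simulated timelines
P_INCAPACITATED = 0.10 # base rate of quitting / becoming unable to work, etc.
P_DOOM_BASE = 0.50     # assumed chance of X-risk if you never contribute
P_DOOM_BOOSTED = 0.60  # assumed chance of X-risk if your work helps AI development

survived = 0
survived_and_incapacitated = 0

for _ in range(N):
    incapacitated = random.random() < P_INCAPACITATED
    p_doom = P_DOOM_BASE if incapacitated else P_DOOM_BOOSTED
    doom = random.random() < p_doom
    if not doom:
        survived += 1
        if incapacitated:
            survived_and_incapacitated += 1

print("P(incapacitated)            =", P_INCAPACITATED)
print("P(incapacitated | survival) =", survived_and_incapacitated / survived)
# conditioning on survival skews you toward having been incapacitated,
# because timelines where you kept contributing are more likely to have died.
```

with these made-up numbers, the conditional probability comes out around 0.12 rather than the 0.10 base rate (up to sampling noise); the size of the shift depends entirely on how much your contribution is assumed to raise the doom probability.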