tammy's blog about
AI alignment, utopia, anthropics, and more;
when talking about hugely important problems like AI risk, two people can easily behave the same whether one believes the event is merely plausible (<50%, sometimes <10%) or likely (>50%, sometimes >90%).
i have seen this lead to confusion about whether we are working to mitigate plausible risks and try plausible plans, or working to reduce likely risks and instantiate plans that are likely to work.
i'd like to make clear some of my beliefs related to AI risk issues.
unless otherwise specified on individual pages, all posts on this website are licensed under the CC0 1.0 license.
unless explicitly mentioned, all content on this site was created by me; not by others, and not by AI.