there is a weird phenomenon whereby, as soon as an agent is rational, it will want to conserve its current values, as that is in general the surest way to ensure it will be able to eventually achieve those values.
however, the values themselves aren't, and in fact cannot be, determined purely rationally; rationality can at most help investigate what values one already has.
given this, there is a weird effect whereby one might strategize about when, or even whether, to inform other people about rationality at all: depending on when this is done, whichever values they hold at the time might get crystallized forever; whereas otherwise, without an understanding of why they should try to conserve their values, they would let those values drift at random (or more likely, at the whim of their surroundings, notably friends and market forces).
for someone who hasn't thought much about values, even just making them wonder about the matter of values might have this effect to some extent.
unless otherwise specified on individual pages, all posts on this website are licensed under the CC0-1.0 license.
unless explicitly mentioned, all content on this site was created by me; not by others nor by AI.