as utilitarians, it would be convenient for us to have an actual unit with which to measure utility: a number that can be computed and compared.
the usual pick is money, but two people with the same intrinsic values could have different judgments about the world, and thus value money differently as an instrument; and some people could even value money intrinsically.
the unit i propose, to measure how much an agent cares about a thing, is a ratio of that agent's "total caring pie". for example, you could intrinsically value something 70% and something else 30%; and then i'm sure we can figure out some math that makes sense (probly inspired by probability theory) to derive our instrumental valuings from those intrinsic shares.
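here's a minimal sketch of what that math could look like, in python; the names and numbers are made up, and the "expected effects" are just guessed deltas, but it shows the two moving parts: normalizing raw intrinsic caring into shares of the pie, and deriving instrumental valuings as expectation-like weighted sums.

```python
# a minimal sketch of the "caring pie" idea, with made-up names and numbers;
# intrinsic caring is stored as raw weights and normalized to fractions of the pie,
# and instrumental valuings are derived like expectations in probability theory:
# an action matters to the extent it's expected to move the things you
# intrinsically care about, weighted by their share of the pie.

def caring_pie(raw_weights: dict[str, float]) -> dict[str, float]:
    """normalize raw intrinsic caring weights so they sum to 1."""
    total = sum(raw_weights.values())
    return {thing: w / total for thing, w in raw_weights.items()}

def instrumental_value(pie: dict[str, float], expected_effects: dict[str, float]) -> float:
    """derive the instrumental valuing of an action from its expected effect on
    each intrinsically cared-about thing, weighted by that thing's pie share."""
    return sum(pie[thing] * expected_effects.get(thing, 0.0) for thing in pie)

# the 70% / 30% example from the post
pie = caring_pie({"friends' wellbeing": 70, "art": 30})
# hypothetical action: donating some money, with guessed effects on each thing
print(instrumental_value(pie, {"friends' wellbeing": 0.2, "art": 0.05}))
# ≈ 0.155 = 0.7 * 0.2 + 0.3 * 0.05
```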
this seems like the least biased way to measure utils. the only criticism i can think of is that it breaks if two agents have different amounts of total valuing: perhaps one person just has more total caring than another.
however, is this testable in any way? is there any situation where one agent would act differently than another if they have the same intrinsic valuing proportions but one of them has a million times more total caring? i don't think so: the idea that inaction counts seems to me to track either willpower or just different valuings of not putting in effort.
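to make that point concrete, here's a toy check (again with made-up actions and numbers): scaling one agent's raw caring by a million leaves the normalized pie, and hence the argmax over actions, unchanged, so the two agents behave identically.

```python
# a toy check of the scale-invariance point: if one agent's raw caring is a
# million times bigger but the proportions are the same, the argmax over
# actions (and so the behavior) comes out identical, since multiplying all
# values by a positive constant never changes which action scores highest.

def best_action(raw_weights: dict[str, float], actions: dict[str, dict[str, float]]) -> str:
    total = sum(raw_weights.values())
    def value(effects: dict[str, float]) -> float:
        return sum((w / total) * effects.get(thing, 0.0) for thing, w in raw_weights.items())
    return max(actions, key=lambda a: value(actions[a]))

actions = {
    "donate": {"friends' wellbeing": 0.2, "art": 0.05},
    "make art": {"friends' wellbeing": 0.0, "art": 0.4},
}
small = {"friends' wellbeing": 70, "art": 30}
big = {k: v * 1_000_000 for k, v in small.items()}
assert best_action(small, actions) == best_action(big, actions)  # same choice either way
```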