Instrumental rationality: systematically achieving your values.
How does one determine their core (axiomatic) values? Here's how I do it: I start from what I think is my set of values, then extrapolate what would happen if a superintelligent singleton tried to implement those values.
Generally, the result looks like hell, so I try to figure out what went wrong and start again with a new set of values.
For example: imagine I think my only core value is general happiness. The most efficient way for a superintelligence to maximize that is to rewire everyone's brain into a constant state of bliss, and then turn as much of the universe as possible into either more humans experiencing constant bliss (whichever form of "human" is cheapest to produce, resource-wise) or into infrastructure that guarantees nothing can ever risk damaging the current set of blissful humans.
So, clearly, this is wrong. The next candidate is freedom/self-determination: people should be able to do whatever they want.
However, the most efficient way to make sure people can do what they want is to make sure they don't want to do anything; that way, they can do nothing all day, be happy with it, and some form of freedom is maximized.
To address this issue, my latest idea is to value something I'd like to call existential self-determination: the freedom to exist as you normally would. It's a very silly notion, of course; there is no meaningful "normally". But still, I feel like something like that would be core to ensuring not just that existing people can do what they want, but that humankind's general ability to be original people who want to do things is not compromised.
Unless otherwise specified on individual pages, all posts on this website are licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.