
posted on 2022-09-27

surprise! you want what you want

let's say you're building an aligned superintelligence, and you are about to determine what values it should be aligned to. should it be your values? everyone's? should they be locked in right now, or should they be able to evolve across time?

my answer is simple: you should want to align it to your own values, right now.

you might say "but i don't just want to have what i want, i want everyone to have what they want" — well, if that's the case, then that's what you want, and so implementing just what you want includes the meta-value of other people getting what they want to. surprise! what you want is what you want.

you might say "but i don't trust how value conflicts would be resolved; i'd want there to be a resolution system that i'd find reasonable" — well, if that's what you want, then that's another meta-value which would be part of the values the superintelligence is aligned to.

you might say "but i don't want the values i have now, i want the values i'd eventually have after a long reflection, or the values of me in the future as well" — but, if that's what you want, then that's yet another meta-value which covers the concerns you have, and which superintelligence would take into account.

so: surprise! what you want might be for others to get what they want, or to better figure out what you want, or maybe even to have some of your values change over time; but implementing what you value right now is sufficient to entail all those other cases. what you want is: what you want.

