
posted on 2022-12-22

being only polynomial capabilities away from alignment: what a great problem to have that would be!

sometimes, people are concerned that my alignment ideas are not competitive enough — that is, that i wouldn't be able to acquire the resources needed to execute them before facebook destroys the world six months later. this is indeed a concern. but if that were the last obstacle stopping us from saving the world and getting utopia, what a great problem that would be!

now, some alignment ideas which would work given arbitrary amounts of capabilities might be impossible in practice, because they take exponential amounts of capabilities, which is too much. but if we're only polynomial amounts of capabilities away, then alignment presumably becomes as easy as throwing enough money and engineering at it.
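(a toy sketch of why this distinction matters: the numbers below are made up, and the scalar "capabilities gap" n is purely hypothetical; it's only meant to show how differently polynomial and exponential requirements scale.)

```python
# toy illustration, not a real model of capabilities: compare how a
# polynomial requirement (n**2) and an exponential one (2**n) grow as
# some hypothetical "gap" parameter n increases. the polynomial case
# stays within reach of throwing more money/engineering at it; the
# exponential case blows up past any realistic budget.
for n in (10, 30, 50):
    print(f"n={n:>2}: polynomial n^2 = {n**2:>6,}   exponential 2^n = {2**n:,}")
```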

though note that i believe we don't need a whole lot of resources to get there, because AI powerful enough to get a decisive strategic advantage might not be that hard to build.

(see also: locked post (2022-12-15))

