sometimes, people are concerned that my alignment ideas are not competitive enough — that is, that i wouldn't be able to acquire the resources needed to execute them before facebook destroys the world six months later. this is indeed a concern. but if the problem were that this was the last obstacle stopping us from saving the world and getting utopia, what a great problem that would be!
now, some alignment ideas which would work given arbitrary amounts of capabilities might be impossible in practice, because they require exponential amounts of capabilities, which is too much. but if we're only polynomial amounts of capabilities away, then alignment presumably becomes as easy as throwing as much money/engineering at it as we need.
(see also: locked post (2022-12-15))