i've come to dislike the term "Artificial General Intelligence", and i favor tabooing it in serious discussions.
we can't agree on what it even means; some think it's very remote from current tech, while others would say we already have it.
more importantly, i don't think it's super critical to AI risk mitigation.
- it's not necessary for doom; recursive self-improvement seems easier, and possibly closer at hand, depending on your definition. dumber AI dooms are also possible, such as someone plugging a non-general AI into a protein-printing thing to see what happens and accidentally bootstrapping a nanobot swarm or superplague.
- depending on your definition, it might not be sufficient for doom either; some think we have AGI now, and yet we're not dead.
- it's not necessary for saving the world; i think some simple agentic thing with recursive self-improvement capability, coupled with formal alignment, would do it.
- it's not sufficient for saving the world; this is the one point we're all more or less in agreement on.
so to me it doesn't feel like a particularly important crux of AI risk, and we're wasting a bunch of energy figuring out what it means and whether we're there, when it might end up fairly irrelevant to AI risk and alignment.
unless otherwise specified on individual pages, all posts on this website are licensed under the CC0-1.0 license.
unless explicitly mentioned, all content on this site was created by me, not by others or by AI.