in my AI risk estimates, i give rough timeline predictions (assuming no quantum immortality). in this post, i explain why those predictions are so pessimistic.
to me, there's a big attractor at AI improving AI. technology that works finds technology that works better; this happens as soon as some technology is at least a bit good at finding other technology.
here, the technology in question is software, which we're generally really bad at. what that means is that there is huge low-hanging fruit that any AI, or any random person designing AI in their garage, can find just by grasping in the dark a bit, getting huge improvements at accelerating speeds.
some people think AI improvement can hit unexpected difficulty bumps. to me, that's not the default, and i don't see any reason to assume it. i expect there to be countless ReLU-instead-of-sigmoid-type improvements waiting to happen, pointing fast toward the attractor of AI things that work. and you don't need all of them: you just need some, and then you rapidly find others. all roads lead to superintelligent AI.
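to make the ReLU example concrete (this sketch is mine, not from the original post): the reason swapping sigmoid for ReLU was such a cheap, huge win is that sigmoid's derivative is at most 0.25, so gradients flowing back through a deep chain of sigmoid layers shrink multiplicatively, while ReLU's derivative is exactly 1 on its active side. a tiny change in activation function, found more or less by grasping in the dark, unlocked much deeper trainable networks.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    # derivative of sigmoid: s * (1 - s), maximized at x = 0 where it equals 0.25
    s = sigmoid(x)
    return s * (1.0 - s)

def relu_grad(x):
    # derivative of ReLU: 1 where the unit is active, 0 otherwise
    return 1.0 if x > 0 else 0.0

# backprop multiplies per-layer derivatives along the chain, so even in
# sigmoid's *best* case (every pre-activation exactly 0), a 20-layer
# chain scales the gradient by 0.25**20, while ReLU passes it through at 1.0
depth = 20
sigmoid_chain = sigmoid_grad(0.0) ** depth
relu_chain = relu_grad(1.0) ** depth
print(sigmoid_chain)  # ~9.1e-13: the signal has effectively vanished
print(relu_chain)     # 1.0: the signal survives intact
```

the point isn't the specific numbers, just that small, simple design changes like this were sitting around for years before anyone picked them up.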
the state of affairs we observe now (1, 2, 3, etc) is exactly what i'd expect being at the cusp of criticality to look like. our software and AI tech is so terrible that the potential of what's doable with our hardware is immense compared to what exists now. if what we can do now by brute-forcing AI is GPT or LaMDA, then what AIs can design once they start designing new stuff, even just a bit above criticality, has plenty of room to get superintelligent, fast.
it's all a matter of whether someone and/or something grasps in the dark a bit to find the few improvements necessary to fall very quickly into the capability attractor, and then we all die.