it would be silly to expect that avoiding such a superintelligence would look like someone trying to press the button to turn it on, only for the button to jam at the last minute, or for the person about to press it to have a heart attack. indeed, bayes should make us expect it to look like whatever makes it most likely that superintelligence fails to be implemented in the first place.
what does this look like?
global nuclear war, broad economic collapse, great cataclysms or social unrest in cities where most of the AI development is done, and other largely unpleasant events.
don't expect the world to look like the god of anthropics is doing miracles to save us from superintelligence; expect the world to look like he is slowly conspiring, long in advance, to do whatever it takes to make superintelligence unlikely to happen.
expect the god of anthropics to create AI winters and generally make us terrible at software.
expect the god of anthropics to create plausible but still surprising reasons for tensor hardware to become scarce.
look around. does this look like a century where superintelligence appears? yes, i think so as well. the god of anthropics has his work cut out for him. let's try to offer him timelines where AI development slows down more peacefully than if he has to take the initiative.
while some of us are working on aligning god, the rest of us should worry about aligning luck.