my takeoff speeds? depends how you define that
what takeoff speeds for transformative AI do i believe in? well, that depends on which time interval you're measuring. there are roughly six meaningful points in time to consider:
- development: the AI that will transform the world starts being developed
- launch: this AI is launched — more formally, the point past which no human has a meaningful impact on it, and it's just doing its own thing afterwards
- impact: the AI starts having significant impacts on the world, eg hacks the internet and/or people to get power
- observable: the AI starts having impacts on the world that people unrelated to its development can notice — not everyone, but let's say at least people who are alignmentpilled enough to guess what might be happening
- DSA: the AI achieves decisive strategic advantage, which for us is the point of no return
- transformation: the AI starts having the effect that we expect it to ultimately have on us; for example, with unaligned AI this is when we die, and with aligned AI this is when we get utopia
note that this framing is, i think, qualitatively orthogonal to how aligned a transformative AI is; all of those thresholds are meaningful regardless of whether the AI is taking over everything to build utopia or to tile the universe with paperclips. that said, alignment can still make a quantitative difference to the duration between any two points; for example, one generally expects development to launch to take longer for aligned AI than for unaligned AI.
my model is currently:
- development to launch: weeks to years, but maybe hard to define because nothing is developed from scratch. closer to years if aligned.
- launch to impact: hours to weeks (recursive self-improvement is strong!)
- impact to observable: also hours to weeks (but low confidence; the world is complex)
- observable to DSA: probly negative? if it's smart and powerful enough, it achieves DSA first. especially if it's aligned, because then it should want to avoid people panicking in ways that might cause damage.
- DSA to transformation: could be zero? depends on your perspective, too; if the AI uploads everyone, then spends 10¹⁰⁰ years taking over the universe, and only then starts running us in utopia, then that's a short time from our perspective. but ultimately this measure isn't very useful, since it's after the point of no return so there's nothing we can do anyways.
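to make the arithmetic of that model concrete, here's a minimal python sketch; all the numbers are my own rough, illustrative guesses from the list above, and the interval labels are just strings for this toy, not standard terminology:

```python
# a toy sketch, not anything rigorous: encode rough duration guesses for each
# interval as (low, high) ranges in days, then sum them into an overall
# development-to-transformation range. all numbers are illustrative guesses,
# and "observable -> DSA" is negative because i expect DSA to come first.

INTERVALS = {
    "development -> launch": (7.0, 3 * 365.0),  # weeks to years
    "launch -> impact":      (1 / 24, 14.0),    # hours to weeks
    "impact -> observable":  (1 / 24, 14.0),    # hours to weeks, low confidence
    "observable -> DSA":     (-14.0, 0.0),      # probly negative: DSA comes first
    "DSA -> transformation": (0.0, 0.0),        # could be zero from our perspective
}

total_low = sum(low for low, high in INTERVALS.values())
total_high = sum(high for low, high in INTERVALS.values())
print(f"development -> transformation: roughly {total_low:.0f} to {total_high:.0f} days")
```

note that the negative observable-to-DSA range can pull the total's lower bound below the sum of the positive intervals; that's just the model saying the DSA threshold can be crossed before anyone outside the project notices anything.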
(see also: ordering capability thresholds and local deaths under X-risk)