i tend to assume AI-borne X-lines are overwhelmingly more likely than S-lines or U-lines, because in almost all cases (such as paperclip manufacturing) the AI eventually realizes that it doesn't need to waste resources on keeping moral patients around (whether they're having an okay time or suffering), and so recycles us into more resources to make paperclips with.
but if wolfram's idea is correct (a possibility i'm increasingly taking seriously), it may very well be that computation is not a scarce resource; churning out ever more paperclips would then be a trivial enough task that the AI might let "bubbles" of computation exist which are useless to its goals, even growing bubbles.
and those could contain moral patients again.
of course, this reduces to the "no room above paperclips" argument again: inside that bubble we probably just eventually build our own superintelligence, which takes over everything, and then either bubbles appear again and the cycle repeats, or at some layer they stop appearing and the cycle ends.
still, i think it's an interesting perspective on how a something-maximizing AI might not need to actually take over everything in order to maximize, if compute is non-scarce in the way wolfram's view can imply.