we are at a node in a state graph (or MDP), where every state points to a bunch of other states. two considerations stand out:
on one hand, booting up an irreversible superintelligent singleton should be very carefully considered: the irreversibility forces us to commit to one specific system, potentially ruling out whole swaths of utopia entirely.
on the other hand, keep in mind that even though the current world sure seems like it has enough quantum amplitude or anthropic juice to feel pretty real, we should be careful about generating civilization-wide (possibly quantum) micromorts that damage the realness of valuable future states. it might be that we only have 1 unit of anthropic juice to allocate across future states, and some of it gets consumed every time we create a bunch of dead timelines.
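the irreversibility point can be made concrete with a toy sketch (all state names here are hypothetical, and this is just one way to model it): treat the future as a directed graph, and compare which states remain reachable before and after committing to an irreversible transition.

```python
# toy sketch: the future as a directed state graph, where an
# irreversible transition prunes away part of the state space.
from collections import deque

def reachable(graph, start):
    """set of states reachable from `start` via breadth-first search."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# hypothetical graph: the careful path keeps an edge back to "now",
# while booting the singleton has no edge back.
graph = {
    "now": ["careful_path", "singleton_A"],
    "careful_path": ["now", "utopia_1", "utopia_2"],  # reversible
    "singleton_A": ["locked_future"],                 # irreversible
}

before = reachable(graph, "now")
after = reachable(graph, "singleton_A")
# the utopias ruled out entirely by committing to singleton_A:
print(sorted(before - after))  # → ['careful_path', 'now', 'utopia_1', 'utopia_2']
```

the interesting quantity is the set difference: everything reachable from "now" but not from the committed-to state is a region of the graph the commitment forecloses.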
i believe it is useful for people and groups working on AI risk mitigation to keep a (mental or physical) picture of this graph, and to carefully choose where they want to aim. making the correct consequentialist choice is not trivial, and blindly pursuing what you believe to be your best shot without looking around could be a large mistake.