it's pretty interesting and there are insights in and around it that are of importance for the far future, and thus for AI alignment.
the most notable is that wolfram thinks there's compute everywhere. the motion of the wind is doing compute, the motion of the seas is doing compute, the fabric of spacetime is doing compute, and even the state of heat death is still doing compute.
that last point notably means we might be able to embed ourselves into the computation of heat death and beyond, and thus get computed literally forever. this multiplies the importance of AI alignment by, potentially, literally infinity. i'm not quite sure how we are to handle this.
some of the compute may be doing things that are opaque to us; it might appear homomorphically encrypted. since we want (and expect) our superintelligence to spread everywhere to enforce our values, we would hope that civilizations living inside homomorphically encrypted spaces can be inspected; otherwise, nuking them altogether might be the only way to ensure that no S-risk is happening there.
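to make the "opaque compute" idea concrete: homomorphic encryption means a process can do real work on data without ever being able to read it. this isn't wolfram's construction, just a minimal sketch of the concept using textbook RSA's multiplicative homomorphism, with toy parameters that are obviously not secure:

```python
# computing on data you can't read: textbook RSA is multiplicatively
# homomorphic — multiplying ciphertexts multiplies the underlying
# plaintexts, and the party doing the multiplication never decrypts.
# toy parameters for illustration only; NOT secure.
from math import gcd

p, q = 61, 53                    # tiny primes, demo only
n = p * q                        # modulus = 3233
e = 17                           # public exponent
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # Carmichael lambda(n)
d = pow(e, -1, lam)              # private exponent: e*d ≡ 1 (mod lam)

def encrypt(m):                  # E(m) = m^e mod n
    return pow(m, e, n)

def decrypt(c):                  # D(c) = c^d mod n
    return pow(c, d, n)

m1, m2 = 7, 6
c_product = (encrypt(m1) * encrypt(m2)) % n   # work done "blind"
assert decrypt(c_product) == (m1 * m2) % n    # E(m1)·E(m2) = E(m1·m2)
print(decrypt(c_product))                     # 42
```

a civilization running inside such a space would look, from the outside, like the ciphertexts here: genuine computation happening, with nothing inspectable about what it means — which is exactly why inspection might be impossible without the key.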
wolfram postulates that one might be able to hack into the fabric of spacetime; one of the mildest effects of this would be the ability to communicate (and thus, likely, move) faster than the speed of light (but probably still slower than some other hard limit). if you didn't think AI boxing was hopeless enough as it is, hackable spacetime ought to convince you.
finally, there is, value-wise, an immense amount of compute being wasted; even standard model particles sit way above true elementary computation. if superintelligence is well-aligned, this gives us a hard estimate of how much computing power we could run on to enjoy value, and it's probably a very large amount; wolfram talks about something like 1e400 vertices in our universe.