tammy's blog about
AI alignment, utopia, anthropics, and more;
(2020-11-15 edit: this post is now largely superseded by Two Principles For Topia)
In a similar way to the Hierarchy of Needs, I have been thinking about the post-singularity utopia we would want in terms of layers.
I want a Layer 0, a universal set of guarantees that apply to everybody, so that, on top of it, people can build voluntary societies and sub-societies as far as they want. Ideally, these societies would be mutually compatible; one could partake of multiple societies and have friends in each. But they wouldn't have to be. It all depends on what society you want to join.
But I think we do need a universal Layer 0: one that at least makes the singularity AI prevent other AIs from emerging and turning everyone into paperclips, and guards against other existential risks. As this is the layer that applies universally, we want it as thin as possible, so that societies built on top of it have as much freedom as possible to implement whatever they want; in particular, almost everything mentioned here can be opted out of (such as when joining a social contract). It's just your starting kit.
These should be the universal guarantees that everyone starts with: physical safety and basic living resources.
For physical safety, it's fairly easy to imagine telling the singularity AGI to implement what I like to call NAPnobots: nanobots that are omnipresent in physical reality (and would manifest as mandatory added laws of physics in virtual realities) and enforce the NAP; that is, they prevent people and their physical property from being subjected to aggression without their consent ("without their consent" could be a tricky part; also, should people be able to permanently opt out of some NAP violation protections?).
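to make the rule concrete, here's a minimal sketch of the check the NAPnobots would be enforcing. everything here (the `Person` class, the `is_permitted` function, interaction "kinds") is my own illustrative naming, not any real system; it just shows that the rule is a simple predicate: an interaction is allowed only with the target's prior consent, and self-directed actions are always allowed.

```python
# hypothetical sketch of the NAPnobots' consent rule; all names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Person:
    name: str
    # interaction kinds this person has consented to, keyed by counterparty name
    consents: dict = field(default_factory=dict)

    def consents_to(self, other: "Person", kind: str) -> bool:
        return kind in self.consents.get(other.name, set())

def is_permitted(actor: Person, target: Person, kind: str) -> bool:
    """allowed only with the target's prior consent;
    acting on yourself is always permitted (you may hurt yourself)."""
    if actor is target:
        return True
    return target.consents_to(actor, kind)

alice = Person("alice")
bob = Person("bob")
alice.consents["bob"] = {"punch"}        # alice consents to being punched by bob
print(is_permitted(bob, alice, "punch"))  # True: alice consented
print(is_permitted(alice, bob, "punch"))  # False: bob never consented
```

note that consent here is directional and per-interaction-kind, which matches the "tricky part" above: the hard open questions are what counts as a "kind" and whether some consents can be made permanent.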
You want to create a society in which it's fine to punch each other in the face? That's fine with me. All I ask is that that society be purely opt-in.
You want to create a communist utopia in which all belongings are shared? That's fine with me. Just create a voluntary contract where people consent to pooling their property for shared use.
You want the freedom to hurt yourself? Just consent to being hurt by yourself. You want the freedom to hurt non-consenting others? No. That's my personal opinion, of course, but I do think the maximum reach of a social contract should be that it can't force others into joining it, and I hope everyone else can agree that at least requiring one's consent to any interaction should be a guaranteed absolute. On the other hand, I would personally consent to very wide ranges of interactions with at least my friends. They can punch me if they want; I trust them not to do that, and if my trust is betrayed beyond what I consider reasonable, I can always unconsent.
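the "maximum reach of a social contract" rule above can be sketched too. this is again purely illustrative (the `SocialContract` class is my own invention): a contract's rules bind only those who joined it, joining is strictly opt-in, and leaving (unconsenting) is always available.

```python
# illustrative sketch: a social contract binds only its voluntary members.
class SocialContract:
    def __init__(self, name: str, rules: set):
        self.name = name
        self.rules = set(rules)    # e.g. {"face-punching allowed"}
        self.members = set()

    def join(self, person: str):   # strictly opt-in
        self.members.add(person)

    def leave(self, person: str):  # "I can always unconsent"
        self.members.discard(person)

    def binds(self, person: str) -> bool:
        # a contract can never bind a non-member
        return person in self.members

fight_club = SocialContract("fight club", {"face-punching allowed"})
fight_club.join("alice")
print(fight_club.binds("alice"))  # True: alice opted in
print(fight_club.binds("bob"))    # False: bob never joined, so the rules can't touch him
fight_club.leave("alice")
print(fight_club.binds("alice"))  # False: consent withdrawn
```

the key design choice is that `binds` defaults to False: absence of consent, not presence of refusal, is what protects you.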
Also, since the brain evaluates every input it receives, this for the most part includes having to consent to any form of communication; remember that unconsented advertising is nigh-literally rape.
The second part, basic living resources, is a concern of economics. Unless we manage to escape to universes where resources not only are infinite but can be accessed faster than humans can appear (which may become as simple as duplicating a running process on a computer), it requires among other things limiting the number of humans that can exist, or you run into Malthusian traps.
The best way I can think of to do this is: when the singularity starts, give everyone enough basic assets that the dividends they generate are easy to live off of. Then, for people to have a child, they need to acquire a number of assets that will provably generate the same amount of dividends for the child, and give those to them. On top of this, arbitrarily complex liberal contracts can be established, of course; you can have a socialist society where everyone has consented to their resources being taxed by exactly how much giving basic living assets to the new kids costs (kids who won't themselves be taxed in that way unless they consent to in turn join that society). There is the issue of what amount of resources constitutes basic living, as well as whether there is a cap on the amount of inviolable property a person or group can have: can an environmentalist group very quickly just plant a flag on all nearby useful planets (so as to declare them their inviolable property) and then forever refuse for them to be interacted with (as the NAPnobots will enforce)?
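the arithmetic of the basic-assets scheme is simple enough to write down. the numbers below are made up for illustration (there is obviously no real post-singularity dividend rate): if "basic living" costs some amount per year and assets reliably yield some dividend rate, each person needs assets worth cost divided by rate, and a prospective parent must first acquire that same amount over again for the child.

```python
# toy arithmetic for the basic-assets scheme; all numbers are made up.
def assets_needed(yearly_cost: float, dividend_rate: float) -> float:
    """assets whose dividends cover `yearly_cost` forever at `dividend_rate`."""
    return yearly_cost / dividend_rate

# hypothetical: basic living costs 10,000/year, assets yield 2% in dividends
basic = assets_needed(10_000, 0.02)
print(basic)  # 500000.0 in assets per person
# to have a child, a parent must first acquire another `basic` worth of
# assets and transfer them, so the child starts with the same guarantee
```

this also makes the unresolved question above quantitative: "what constitutes basic living" is exactly the choice of `yearly_cost`, and the provability requirement is the claim that `dividend_rate` can be guaranteed.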
The choice of living resources should be included: if what we eat has as much of an effect on our psyche as we are starting to find out it does, then choosing what configuration of nutrients we receive should be part of the basic living guarantees.
One of those basic resources, of course, is healthcare, and healthcare should include guaranteed immortality unless you opt out of it. There's just no particular reason old age should have any more right to hurt a non-consenting person than any other outside aggressor; "outside" because people are their brain's information system, not their body. Becoming a virtual person would be the "easy" solution to immortality.
unless otherwise specified on individual pages, all posts on this website are licensed under the CC_-1 license.
unless explicitly mentioned, all content on this site was created by me; not by others nor AI.