
posted on 2021-08-31

(2023-12-22 EDIT: No longer endorsed. I'm still very much for a cosmopolitan utopia which strongly favors people's autonomy, I just think this post is way too authoritative about a problem which would probably have so many better solutions once we have an aligned superintelligence to help figure it out.)

∀V: A Utopia For Ever

∀V (read "universal voluntaryism" or "univol") is my utopia proposal. for people who are familiar with me or my material, this may serve as more of a clarification than an introduction; nevertheless, for me, this will be the post to which i link people in order to present my general view of what i would want the future to look like.

what's a person?

you'll notice that throughout this post i've stuck to the word "person" instead of, for example, "human". this isn't just in case we eventually encounter aliens who we'd consider to be persons just like us; it's also possible that some existing animals count, or even beings whose existence we largely don't even envision. who knows what kind of computational processes take place inside the sun, for example?

by person i mean something deserving of moral patienthood; though this is still hard for me to even start to determine. it probably requires some amount of information system complexity, as well as a single point of explicit consideration and decision, but apart from that i'm not quite sure.

i do know that pretty much all currently living humans count as moral patients. other than that, we should probably err on the safe side and consider things moral patients when in doubt.

systems within systems

all top-level systems are, in the long term, permanent.

if you want society to "settle what it wants later", then your top-level permanent system is what they'll eventually settle on.

if you want society to never be stuck in any system and always have a way out, then the top-level permanent system is "going from system to system, being forever unable to settle" and you better hope that it spends more time in utopian systems than dystopian systems.

if your view is "whatever happens happens", then your top-level permanent system is whatever happens to happen. by not caring about what the future looks like, you don't make the future more free; you only make it less likely to be one you'd find good.

if there is going to be a top-level system no matter what, no matter how flexible its internals are, we ought to care a lot about what that system is.

enforcement: superintelligence

even if you don't think a superintelligence explosion is imminent, you should think it will happen eventually. given this, what may very well be the "infinite majority" of remaining time will be time during which a superintelligence is the ultimate decider of what happens; it is the top-level system.

i find this reassuring: there is a way to have control over what the eternal top-level system is, and thus ensure we avoid possibilities such as inescapable dystopias.

generalized alignment

in AI development, "alignment" refers to the problem of ensuring that AI does what we actually want (rather than, for example, what we've explicitly instructed it to do, or just maximizing its reward signal).

when we think about how to organize future society, we actually care not just about the alignment of the top-level superintelligence, but also "societal alignment" before (and within) that. i will call "generalized alignment" the work of making sure future society will be in a state we think is good, whether that be by aligning the top-level superintelligence, or aligning the values of the population.

so, even if you don't think a superintelligence is particularly imminent, you should want society to start worrying about it sooner rather than later, given how many unknown variables surround the time and circumstances at which such an event will occur. you want to align society now, both to your values and to the value of figuring out superintelligence alignment, hopefully not too late.

not just values

at this point, one might suggest directly loading values into superintelligence, and letting it implement whatever maximizes those values. while this may seem like a reasonable option, i would kind of like there to be hard guarantees. technically, from a utilitarian perspective, there exists a number N sufficiently large that, if N people really want someone to genuinely be tortured, it is utilitarianly preferable for that person to be tortured than not; my utopia instead proposes a set of hard guarantees for everyone, and then, within the bounds of those guarantees, lets people do what they want (including "i just want superintelligence to accomplish my values please").
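to make the problem concrete, here's a toy numeric sketch (every number below is invented purely for illustration) of how a naive sum-of-utilities rule ends up endorsing the torture once N gets large enough:

```python
# toy illustration with made-up numbers: under pure utility summing,
# a large enough crowd of people who want the torture outweighs the victim.
torture_disutility = -1_000_000   # harm to the victim, in arbitrary utils
enjoyment_per_wanter = 0.01       # tiny benefit per person who wants it

def net_utility(n_wanters: int) -> float:
    return torture_disutility + n_wanters * enjoyment_per_wanter

print(net_utility(10**7))   # -900000.0: the sum still says no
print(net_utility(10**9))   # 9000000.0: the sum now says "torture them"
```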

one might consider the solution to that to be "just make it so that people never want others to be tortured", but that's a degree of freedom over people's thoughts i'd rather keep if i can. i want persons to be as free as possible, including the freedom to want things that can't ethically (and thus, in my utopia, can't) be realized.

a substrate for living

i am increasingly adopting wolfram's computational perspective on the foundation of reality; beyond offering great possibilities such as overcoming heat death, i feel like it strongly supports the informational view of persons and the ability for people and societies to live on any computational substrate; and such substrates aren't particularly less or more real than our current, standard-model-supported reality.

given this, the most efficient (in terms of realizable value per unit of resource) way for superintelligence to run a world is to extract the essence of valuable computations (notably the information systems of persons) into a more controllable substrate in which phenomena such as aging, attachment to a single physical body, vulnerability to natural elements, or vulnerability to other persons, can be entirely avoided by persons who wish to avoid them. this extraction process is often referred to as "uploading", though that implies uploading into a nested world (such as computers running in this world); but if wolfram's perspective is correct, a superintelligence would probably be able to run this computation at a level parallel to or replacing standard model physics rather than on top of that layer.

this is not to say that people should all be pure orbs of thought floating in the void. even an existence such as a hunter-gatherer lifestyle can be extracted into superintelligence-supervised computation, allowing people to choose superintelligence-assisted lifestyles such as "hunter-gathering, except no brutal injury please, and also it'd be nice if there were unicorns around".

universal voluntaryism

at this point, we come to the crux of this utopia, rather than its supporting foundation: ultimately, in this framework, the basis of the existence of persons would be for each of them to have a "computation garden" with room to run not just their own mind but also virtual environments. the amount of computational resource would be like a form of universal basic income: fixed per person, but amounts of it could be temporarily shared or transferred.

note that if resources are potentially infinite over time, as wolfram's perspective suggests, then there is no limit to the amount of raw computation someone can use: if they need more and it's not available, superintelligence can just put either their garden or everyone's gardens on pause until that amount of computation resource becomes available, and then resume things. from the point of view of persons, that pause would be imperceptible, and in fact functionally just an "implementation detail" of this new reality.
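as a very rough sketch of that "pause until resources are available" idea (the names and structure below are mine, not any actual specification), the key point is that subjective time inside a garden only advances while the garden is actually being run, so a pause is imperceptible from the inside:

```python
# minimal sketch: a garden that needs more computation than is currently
# available is simply not run; the person inside perceives nothing, because
# their subjective time only advances when the garden actually runs.
from dataclasses import dataclass

@dataclass
class Garden:
    owner: str
    subjective_ticks: int = 0  # the only "time" the person inside can perceive

def run_step(garden: Garden, needed: int, available: int) -> int:
    """run one step if resources suffice, otherwise pause; returns leftover resources."""
    if needed <= available:
        garden.subjective_ticks += 1
        return available - needed
    return available  # paused: no subjective time passes

g = Garden("alice")
budget = 5
budget = run_step(g, needed=10, available=budget)  # paused, ticks stays at 0
budget = run_step(g, needed=3, available=budget)   # runs, ticks becomes 1
```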

persons would have the ability to transform their mind as they want (though having a bunch of warnings would probably be a reasonable default) and experience anything that their garden can run; except for computing the minds of other persons, even within their own mind: you wouldn't want to be at the mercy of someone just because you happen to be located within their mind.

persons would be able to consent to interact with others, and thus have the ultimate say on what information reaches their mind. they could consent to visit parts of each other's gardens, make a shared garden together, and all manner of other possibilities, so long as all parties consent to all interactions, as determined by superintelligence — and here we're talking about explicit consent, not inferred desires even though superintelligence would probably have the ability to perfectly determine those.
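here's a minimal sketch of that consent gate (all names invented): nothing reaches a person unless they have explicitly recorded consent for that kind of interaction from that sender, and the check looks only at recorded consent, never at inferred desires:

```python
# minimal sketch: explicit consent records gate every incoming interaction.
consents: dict[tuple[str, str], set[str]] = {
    ("alice", "bob"): {"text"},   # alice accepts text messages from bob
}

def deliver(sender: str, receiver: str, kind: str, payload: str) -> bool:
    """deliver only if the receiver explicitly consented to this kind of
    interaction from this sender; otherwise drop it before it reaches them."""
    if kind in consents.get((receiver, sender), set()):
        print(f"{receiver} receives {kind} from {sender}: {payload}")
        return True
    return False

deliver("bob", "alice", "text", "hi!")       # delivered
deliver("carol", "alice", "text", "hello?")  # no consent recorded: dropped
```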

for a perspective on what a society of "uploaded" persons might look like, see for example Diaspora by Greg Egan.

rationale and non-person forces

the goal of this structure is to allow people to live and associate with each other in the most free way possible, making the least possible restrictions on lifestyle, while retaining some strong guarantees about consent requirements.

in a previous post i talk about non-person forces: these are, for example, social structures that act with an agenthood of their own, running on other people as their substrate.

at the moment, i simply don't know how to address this issue.

the problem with the "dismantlement" of such forces is that, if every person is consenting to the process, it's hard to justify superintelligence coming in and intervening. on the other hand, it does feel like not doing anything about them, short of being able to align sufficiently many people forever, will tend to leave people dominated by such structures, as a simple process of natural selection: if there is room for such structures and they can contribute even slightly to their own growth or reproduction, then they will tend to exist more than not. this may be thought of as moloch "attacking from above".

one major such potential non-person force is superintelligence itself trying to make people tend to want to live in ways that are easier to satisfy. if everyone wants to sit in their garden forever and do nothing computationally costly, that makes superintelligence's job a lot "easier" than if they wanted to, for example, communicate a lot with each other and live lifestyles that are computationally expensive to run; and the reason superintelligence would want to make its job easier is to increase the probability that it succeeds at that job (which it should want).

if informationally insulating people from superintelligence, except when they outright consent to it intervening in their decisions, is not sufficient, then maybe we can add the rule that people can never ask superintelligence to intervene in their life unless there is one single optimal way to intervene, and hopefully that's enough. the idea being: if, for any request to superintelligence, there is only a single optimal way to accomplish that request, then superintelligence has no degree of freedom with which to influence people and thus what they want.
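a rough sketch of that rule (names invented): enumerate the candidate ways to fulfil a request, and refuse to act whenever the optimum isn't unique, since picking among ties is exactly the degree of freedom we're trying to remove:

```python
# sketch of the "only intervene when there is a single optimal way" rule.
def maybe_intervene(candidates: dict[str, float]):
    """candidates maps each possible intervention to how well it fulfils the
    request; intervene only if the optimum is unique, otherwise refuse."""
    if not candidates:
        return None
    best = max(candidates.values())
    optimal = [name for name, score in candidates.items() if score == best]
    return optimal[0] if len(optimal) == 1 else None  # ties: refuse

print(maybe_intervene({"cure": 1.0, "placebo": 0.2}))   # "cure": unique optimum
print(maybe_intervene({"path a": 1.0, "path b": 1.0}))  # None: refuse to choose
```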

on new persons

there are some reasons to be worried about the creation of new persons.

one is malthusian traps: if the amount of resources is either finite, or growing but bounded, or if it's unknown whether the amount of resources will end up being finite or not, then you have to cap population growth so that it is at most as fast as the growth in resources (and if resources do grow, the maximum speed of population growth should preferably be lower still, so that the amount of resources each person has can grow as well). while it does seem like in current society people tend to have fewer kids when they have a higher quality of life, in a system where persons can live forever and modify their minds, one can't make such a guarantee over potentially infinite time.
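a toy sketch of why the cap matters (growth rates invented): if population grows faster than resources, each person's share shrinks over time; capping population growth below resource growth lets the per-person share grow instead:

```python
# toy illustration with invented growth rates.
def per_person_share(resources, population, resource_growth, pop_growth, years):
    """compound both quantities for some years and return the per-person share."""
    for _ in range(years):
        resources *= 1 + resource_growth
        population *= 1 + pop_growth
    return resources / population

print(per_person_share(1e6, 1e3, 0.02, 0.05, 100))  # ~55: the share collapses
print(per_person_share(1e6, 1e3, 0.02, 0.01, 100))  # ~2678: the share keeps growing
```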

another is replicator cultures: if there is no limit on creating new persons, and if people can influence even slightly the values of the new persons they create, then soon the world is overrun by people whose values are to create kids. or: a world in which "new person slots" are filled by whoever wants to fill them first will just select for people who want to fill those slots the most.

there might also be weird effects: even if resources were infinite, allowing arbitrary numbers of persons to be created could "stretch" the social network of consenting-to-interact-with-each-other persons such that, even if someone has registered an automatic consent to interact even just a bit with the kids of persons they already consent to interact with, they are soon flooded with a potentially exponentially growing network of kid interactions; though that person can probably address this by revoking the automatic consent.

beyond various resource and network effects, new persons create an ethical dilemma: does a person consent to living? or, for a child, do they consent to being taken care of for some number of years after they are born, a time during which caring for them often requires affecting them in ways they might be unable to consent to?

if such philosophical quandaries don't have a solution, then the safest route is to simply forbid the haphazard creation of new persons, whether that be conventional human infants, headmates and tulpas (if those are "real" enough to count as persons), or other ways of creating new persons that can't consent to future interactions because they don't exist yet. 2022-08-11 edit: this idea has its own post now.

on the other hand, one way to increase the population with consent is simply to "fork" existing persons: create a duplicate of them. because both are a continuation of the original single person, the original person's consent counts for both resulting persons, and there is no issue. the "merging" of consenting persons might be possible if it can be reasonably estimated that their shared consent "carries through" to the new, merged person; i am currently undecided about how to even determine this.
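a minimal sketch of forking (the representation is invented): the recorded consents are simply copied to both continuations, since each is equally a continuation of the person who originally gave them:

```python
# sketch: forking duplicates a person, and their recorded consents carry over
# to both resulting continuations.
def fork(person: dict) -> tuple[dict, dict]:
    copy = {"name": person["name"], "consents": set(person["consents"])}
    return person, copy

alice = {"name": "alice", "consents": {("bob", "text")}}
a1, a2 = fork(alice)
print(a2["consents"])  # {('bob', 'text')}: the fork keeps the same consents
```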

finally, if resources are finite, creating a new person (whatever the means) should require permanently transferring one "universal basic computation amount"'s worth of computation garden to them, as no person should start out without this guarantee. this could be done by a person consenting to die and give up their own computation garden, by several "parents" consenting to give up a fraction of their gardens to the new person, by redistributing the gardens of persons who die without deciding what should be done with them, and so on.
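a sketch of that bookkeeping (names and numbers invented): a new person only comes into existence once at least one full "universal basic computation amount" has been permanently pooled from consenting contributors:

```python
# sketch: pooled, permanently transferred garden backs every new person.
UBC = 100  # one "universal basic computation amount", in arbitrary units

def create_person(contributions: dict[str, int], gardens: dict[str, int]) -> bool:
    """contributions: how much garden each consenting contributor gives up, permanently."""
    if sum(contributions.values()) < UBC:
        return False  # not enough pooled garden: no new person is created
    for contributor, amount in contributions.items():
        gardens[contributor] -= amount
    gardens["new person"] = sum(contributions.values())  # at least one UBC
    return True

gardens = {"alice": 100, "bob": 100}
create_person({"alice": 50, "bob": 50}, gardens)
print(gardens)  # {'alice': 50, 'bob': 50, 'new person': 100}
```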

