tammy's blog about AI alignment, utopia, anthropics, and more
in predictablizing ethic deduplication, i talk about how, when we don't know how reality works, we can task a singleton superintelligence with "adding a layer" to reality: a simulated reality inside of which we are guaranteed to be able to function with known ethics.
in addition, there's a sense in which, if one principle overrides the other, then with arbitrarily many layers of reality-simulation we should in general expect whichever option overrides to win out. for example: if our reality actually sits on top of 1000 layers of reality-simulation, then it only takes one of those layers being (truly, deeply) deduplicating for our universe, and any sub-universe we simulate, to also have deduplication.
or, more precisely: for any set of mutually exclusive traits with a dominance ordering (such as deduplication > no-deduplication), we can expect the cosmos to take one of those shapes:
i will call this the "generalized adding reality layers" (GARL) device, and i think it could be broadly useful for reasoning about properties of the cosmos (the set of instantiated universes), even ones that might seem axiomatic and untestable.
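as a toy sketch of the layering argument, here is what dominance propagation looks like in code. the trait names, the per-layer probability, and the helper `effective_trait` are all hypothetical, made up for illustration:

```python
# toy sketch of GARL-style dominance propagation across reality layers.
# trait names and probabilities are hypothetical, for illustration only.

DOMINANCE = ["dedup", "no-dedup"]  # earlier entries override later ones

def effective_trait(layers, ordering=DOMINANCE):
    """trait observed at the bottom of a stack of reality layers:
    the most dominant trait held by any layer in the stack."""
    return min(layers, key=ordering.index)

# if our reality sits under 1000 layers, one deduplicating layer suffices:
stack = ["no-dedup"] * 999 + ["dedup"]
print(effective_trait(stack))  # dedup

# if each layer independently has the dominant trait with probability p,
# the chance it holds at the bottom of an n-layer stack is 1 - (1 - p)**n,
# which tends to 1 as n grows:
p, n = 0.01, 1000
print(round(1 - (1 - p) ** n, 5))  # 0.99996
```

this is just the observation that a dominant trait only needs to appear once anywhere in the stack, so the more layers there are, the more its probability of holding approaches 1.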
for any set of mutually exclusive traits, we care about four properties:

- the dominance ordering between the traits
- which trait's universes spawn the most varied sub-universes
- which trait's universes contain the most moral patient experiences
- which trait's universes contain the most deeply caring actors
so, what other sets of traits can we examine using GARL? here are some i can think of off the top of my head, along with my guesses for the questions above.
| question | dominance ordering | most varied sub-universes | most moral patient experiences | most deeply caring actors |
|---|---|---|---|---|
| moral patient deduplication | dedup > no-dedup | unaffected? | no-dedup > dedup | i've no idea |
| infinite compute ¹ | finite > infinite | infinite > finite | infinite > finite | infinite > finite? |
| type of computation ¹ | classical > quantum > hyper | hyper > quantum > classical? | unknown | hyper > quantum > classical? |
| moral realism ² | realism > non-realism | unsure | whichever maximizes good | realism > non-realism? |
| deeply-caring superintelligence | present > absent | depends on its goals | depends on its goals | present > absent |
¹: these two questions are similar to one another in that each has one dominant variant that restricts computation and one recessive variant that doesn't; as a result, i would tend to assume that the recessive variant has a higher chance of spawning most kinds of stuff.
²: my reasoning: once what is true becomes aligned with what is good, the orthogonality thesis is falsified in that sub-cosmos, and superintelligences there are more easily aligned by default.
other questions to which GARL may be applicable but i haven't figured out how:
unless otherwise specified on individual pages, all posts on this website are licensed under the CC0-1.0 license.
unless explicitly mentioned, all content on this site was created by me, not by others nor by AI.