tammy's blog about

AI alignment,
utopia,
anthropics,
and more;

in the course of discussing anthropics with a friend — notably the SIA vs SSA discussion — i have produced an example case which i believe demonstrates not just the usefulness of anthropics, but also how SIA and SSA can differ. it goes as follows:

suppose you ask the question, "if a civilization were to follow roughly the same technological progress as us, would we expect them to have killed themselves with AI doom by the time of their 2022?"

suppose you have reduced your thinking to two hypotheses, which you believe to the following extents:

`S` (safe) hypothesis: 1/3 chance that with only the technology of up until now, they would still be alive basically-for-sure.

`R` (risk) hypothesis: 2/3 chance that with only the technology of up until now, they would have a 3/4 chance of having killed themselves.

(to be clear, this pair of hypotheses does not represent my actual beliefs — they are a toy example i provide for the purpose of this post)

here, the question is: which of the hypotheses is true? is it true that, given our technology, they almost certainly would still be alive? or is it instead true that, given our technology, they would only have a 1/4 chance of still being alive?

this question is clearly useful: using anthropics, we might be able to get (from the fact that we exist) information about the risk posed by our current level of technology — and such information would surely be useful to reason about the risk posed by near-future technology, as it is roughly similar to ours.

the scenario looks like this:

```
if S hypothesis is true (1/3):
    survive
if R hypothesis is true (2/3):
    if 1/4 chance that we survive despite the risk:
        survive
    else:
        extinct
```
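to make this structure concrete, here's a small python sketch of mine (not from any established library — `run_world` is a hypothetical helper name) that samples the scenario many times:

```python
import random

def run_world():
    """sample one world: pick the true hypothesis, then the outcome."""
    if random.random() < 1/3:    # S (safe) hypothesis
        return "S", "survive"    # survival is basically-for-sure
    if random.random() < 1/4:    # R (risk) hypothesis: only a 1/4 chance of surviving
        return "R", "survive"
    return "R", "extinct"

random.seed(0)
runs = [run_world() for _ in range(100_000)]
survivors = [h for h, o in runs if o == "survive"]

print(len(survivors) / len(runs))             # ≈ 1/2 of worlds survive
print(survivors.count("S") / len(survivors))  # ≈ 2/3 of surviving worlds are S-worlds
```

whether an observer in a surviving world should adopt that second frequency as their posterior is exactly what SIA and SSA disagree about.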

now, is there anywhere where such a scenario has been run, where we can make an observation? well, yes: *our own* world! we notice that, in our own world and despite our level of technology, we are surviving rather than extinct. how does this observation update us with regards to the prior?

if you are using SIA, you're comparing the *expected number of people in your epistemic situation* (hereby `#ES`) across both possibilities (`S` and `R`). in `S`, there are four times as many expected people in your epistemic situation as in `R` (this is true however you draw the set of "people in your epistemic situation"!). so, you update:

```
SIA(S)
= (P(S) × #ES(S)) / (P(S) × #ES(S) + P(R) × #ES(R) )
= (1/3 × #ES(S)) / (1/3 × #ES(S) + 2/3 × #ES(S) / 4)    (since #ES(R) = #ES(S) / 4)
= (1/3 × #ES(S)) / (1/2 × #ES(S) )
= (1/3 ) / (1/2 )
= 2/3
```
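this arithmetic can be checked mechanically; here's a sketch using exact fractions, where the absolute scale of `#ES(S)` is an arbitrary stand-in (it cancels out of the ratio):

```python
from fractions import Fraction

p_s, p_r = Fraction(1, 3), Fraction(2, 3)  # priors on S and R
es_s = Fraction(1)                         # #ES(S); arbitrary scale, it cancels
es_r = es_s / 4                            # #ES(R) = #ES(S) / 4

sia_s = (p_s * es_s) / (p_s * es_s + p_r * es_r)
print(sia_s)  # 2/3
```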

(`P(x)` is the prior probability of `x`)

so, SIA updates from the fact that you exist, towards `S` — from the prior probability of 1/3 to a posterior probability of 2/3.

on the other hand, in SSA you're comparing not the raw `#ES`, but the proportion of *your reference class* (hereby `RC`) that is in your `ES`. notice that, *in this scenario*, it does not really matter what kind of reference class you draw up — it could be just you, all humans doing anthropics reasoning, all humans, or all living beings, and the answer would be the same in all cases, because AI doom destroys *all* of those — the proportion `#ES / #RC` is the same, because neither hypothesis changes the relative numbers of, say, you vs humans, or humans vs living beings. either everything dies or everything lives.

the only reference classes that could change something here are distant aliens, or the AI that kills everything itself (or its subsystems) — but for simplicity, we'll rule those out. i'll call the result of this restriction "reasonable-SSA".

so, `#ES(S) / #RC(S) = #ES(R) / #RC(R)`.

```
SSA(S)
= (P(S) × #ES(S) / #RC(S)) / (P(S) × #ES(S) / #RC(S) + P(R) × #ES(R) / #RC(R))
= (1/3 × #ES(S) / #RC(S)) / (1/3 × #ES(S) / #RC(S) + 2/3 × #ES(S) / #RC(S))
= (1/3 × #ES(S) / #RC(S)) / (1 × #ES(S) / #RC(S) )
= (1/3 ) / (1 )
= 1/3
```
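again as a quick python sketch: whatever common value `#ES / #RC` takes, it cancels, so reasonable-SSA's posterior just equals the prior (`ssa` is a hypothetical helper name, not an established API):

```python
from fractions import Fraction

P_S, P_R = Fraction(1, 3), Fraction(2, 3)  # priors on S and R

def ssa(ratio):
    """reasonable-SSA posterior for S, given the shared #ES / #RC ratio."""
    return (P_S * ratio) / (P_S * ratio + P_R * ratio)

# the ratio cancels, so any choice of reference class gives the same answer:
for ratio in (Fraction(1, 2), Fraction(1, 100), Fraction(3, 7)):
    print(ssa(ratio))  # 1/3 every time
```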

so here, SSA makes no update from the fact that you exist — it stays at 1/3 for `S`.

i believe that the fact that SIA and reasonable-SSA would bet on different things here (SIA bets on `S` and reasonable-SSA bets on `R`), and that what you believe about this question could be very useful (we want to know how dangerous the technology we're using now is, because we don't want to die!), demonstrates the usefulness of anthropics as well as the importance of the SIA vs SSA question.

unless otherwise specified on individual pages, all posts on this website are licensed under the CC_-1 license.

unless explicitly mentioned, all content on this site was created by me; not by others nor AI.