
experience/moral patient deduplication and ethics

suppose you can spend a certain amount of money (or effort, resources, etc.) to prevent the spawning of a million rooms (in, let's say, simulations), with an exact copy of one random person in each. the copies will wake up in the rooms, spend a week unable to get out (basic necessities covered), then get tortured for a week, and then the simulations are shut down.

i want to split this hypothetical into four cases, depending on how different the million copies (and thus their experiences) are from one another: identical, mildly different, quite different, and very different.

the point is that you should want to reduce suffering by preventing the scenario, but how much you care should be a function of whether (and how much) you count the million different persons' suffering as multiple experiences.

it seems clear to me that one's caring for each case should increase in the order in which the cases are listed (that is, identical being the least cared about, and very different being the most cared about); the question is more about the difference in caring between consecutive cases. let's call those IM (identical → mildly different), MQ (mildly different → quite different), and QV (quite different → very different).

currently, my theory of ethics deduplicates identical copies of moral patients (for reasons such as not caring about implementation details), meaning that i see the mildly different case as fundamentally different from the identical case. IM > MQ ≈ QV, and even IM > (MQ + QV).
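
to make that concrete, here is a minimal toy sketch (my own formalization, with made-up divergence values, so a sketch rather than anything rigorous) of how strict deduplication puts essentially the whole jump in caring at IM:

```python
# a minimal toy sketch: "caring" = number of distinct experiences times
# per-experience suffering. "divergence" is a made-up knob for how different
# the million copies are from one another; strict deduplication only looks
# at whether it is zero.

N_COPIES = 1_000_000
SUFFERING_PER_COPY = 1.0  # arbitrary units

def caring_with_strict_dedup(divergence: float) -> float:
    # identical copies collapse into a single experience; any nonzero
    # divergence makes every copy count separately
    distinct_experiences = 1 if divergence == 0.0 else N_COPIES
    return distinct_experiences * SUFFERING_PER_COPY

# made-up divergence values for the four cases
identical, mild, quite, very = 0.0, 0.01, 0.5, 1.0

IM = caring_with_strict_dedup(mild) - caring_with_strict_dedup(identical)
MQ = caring_with_strict_dedup(quite) - caring_with_strict_dedup(mild)
QV = caring_with_strict_dedup(very) - caring_with_strict_dedup(quite)

print(IM, MQ, QV)    # 999999.0 0.0 0.0
print(IM > MQ + QV)  # True: the whole jump happens at the first divergence
```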

however, this strikes me as particularly unintuitive; i feel like the mildly different case should get an amount of caring much closer to the identical case than to the quite different case; i feel like i want to get QV > MQ > IM, or at least QV > IM < MQ (IM smaller than both); either way, definitely IM < (MQ + QV).

here are the ways i can see out of this:

  1. bite the bullet. commit to the idea that the slightest divergence between moral patients is enough to make them count as distinct persons, and that this first divergence matters much more than any further differences. from a strict computational perspective such as wolfram physics, it might be what makes the most sense, but it seems quite unintuitive. this sort of caring about integer numbers of persons (rather than continuous quantities) also feels mildly akin to SIA's counting of world populations, in a way.
  2. interpolate difference: two moral patients count more if they are more different (rather than a strict criterion of perfect equality). this seems like the straightforward solution to this example (see the sketch after this list), though if the curve is smooth enough then it runs into weird cases like caring more about the outcome of one population of 1000 people than that of another population of 1001 people, if the former is sufficiently more heterogeneous than the latter. it kinda feels like i'm rewarding moral patients with extra importance for being diverse; but i'm unsure whether to treat the fact that i also happen to value diversity as coincidence or as evidence that this option is coherent with my values.
  3. fully abandon deduplication: count the million moral patients separately even in the identical case. this is the least appealing option to me because, from a functional, computational perspective, it doesn't make sense, and i can make up "implementation details" for the universe under which it breaks down. but, even though it feels as intangible as positing some magical observer-soul, maybe implementation details do matter?
  4. de-monolithize moral patients: consider individual pieces of suffering instead of whole moral patients, in the hope that, in the mildly different case, i can extract a sufficiently similar suffering "sub-patient" from each copy and then deduplicate that sub-patient.
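
to illustrate option 2 and the 1000-vs-1001 weirdness it mentions, here is a minimal sketch under one made-up interpolation scheme (an "effective patient count" in which each copy is down-weighted by how similar it is to the others; this is just one possible curve, not a claim about the right one), assuming for simplicity the same similarity between every pair within a population:

```python
# a sketch of option 2 under a made-up "effective patient count": each copy
# gets weight 1 / (sum of its similarities to every copy, itself included),
# so exact duplicates collapse to 1 and fully distinct people count fully.

def effective_count(n: int, pairwise_similarity: float) -> float:
    weight_per_copy = 1.0 / (1.0 + (n - 1) * pairwise_similarity)
    return n * weight_per_copy

print(effective_count(1_000_000, 1.0))  # identical copies  -> 1.0
print(effective_count(1_000_000, 0.0))  # fully distinct    -> 1000000.0
print(effective_count(1_000_000, 0.5))  # somewhere between -> ~2.0

# the weird case from option 2: a sufficiently heterogeneous 1000 people
# can outweigh a near-identical 1001 people
print(effective_count(1000, 0.1))    # ~9.9 effective patients
print(effective_count(1001, 0.999))  # ~1.0 effective patients
```

with made-up similarity values for the four cases, a curve like this does give QV > MQ > IM, at the cost of the population-counting weirdness shown above.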

i think i'll tentatively stick to 1, because 2 feels weird, though i'll keep considering it; and i'll make room for the possibility that 3 might be right. finally, i'm not sure how to go about investigating 4; but compared to the other three it is at least materially investigable: surely, either such a sub-patient can be isolated, or it can't.

posted on 2022-03-06

unless otherwise specified on individual pages, all posts on this website are licensed under the CC_-1 license.
unless explicitly mentioned, all content on this site was created by me; not by others nor AI.