
posted on 2022-06-18

anthropic reasoning coordination

what can a piece of anthropic reasoning determine? what counterfactuals is it comparing its observations with?

if you are doing anthropic reasoning and observe something such as "plants exist" or "i remember childhood", how much you can gain from that observation must depend on whether, if plants didn't exist or you didn't remember childhood, you would still be doing anthropic reasoning. if instances of anthropic reasoning only or disproportionately occur in worlds where plants exist, or inside information systems with access to childhood memories, then you don't gain as much by observing those things.
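
to make this concrete, here's a minimal sketch in python (the toy numbers and the SIA-flavored update are my own assumptions, not anything rigorous): it measures how many bits a reasoner gains from observing "plants exist", over and above what merely finding themselves reasoning already implied.

```python
import math

# toy model: two equally likely worlds, W1 (plants exist) and W2 (no plants).
# anthropic reasoners arise with probability p1 in W1 and p2 in W2.
# question: how many bits does observing plants add, beyond the update
# you already get from finding yourself reasoning at all?

def bits_gained(p1, p2):
    # posterior on W1 from merely being a reasoner (SIA-flavored)
    prior_w1 = p1 / (p1 + p2)
    # observing plants pins down W1, since plants exist only there
    posterior_w1 = 1.0

    # information gain of the observation, as KL divergence in bits
    def kl(q, p):
        bits = 0.0
        for qi, pi in ((q, p), (1 - q, 1 - p)):
            if qi > 0:
                bits += qi * math.log2(qi / pi)
        return bits

    return kl(posterior_w1, prior_w1)

print(bits_gained(0.5, 0.5))   # reasoners equally common in both worlds: 1.0 bit
print(bits_gained(0.5, 0.05))  # reasoners mostly in plant worlds: ~0.14 bits
print(bits_gained(0.5, 0.0))   # reasoners only in plant worlds: 0.0 bits
```

when reasoners only ever occur in plant worlds, the observation is guaranteed given that you're reasoning at all, so it carries no extra information.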

as such, if you want the community of anthropic reasoners to gain as much information as possible, you have to commit to partaking in anthropic reasoning in reasonably equal amounts no matter what situation you're in. for example, to make the doomsday argument work, we have to commit to being agents who would still partake in anthropic reasoning even if we overcame doom; otherwise, we have to at least somewhat discount how much our observation of being so early in history tells us. maybe we solve alignment and utopia is brought about, and in utopia we never or extremely rarely do anthropic reasoning, for whatever reason: maybe because it's obsolete and we're spending all our time frolicking about, or maybe those of us who would partake in anthropic reasoning discover forms of thought or knowledge which make anthropic reasoning thoroughly obsolete. or maybe we're all busy suffering forever.
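
the same point as a sketch (all numbers made up; SSA with reference class "people who ever do anthropic reasoning" is my assumed framing): the doomsday update from finding yourself early weakens as fewer utopians keep doing anthropic reasoning, and disappears entirely if none do.

```python
# toy doomsday model (made-up numbers, SSA-style, reference class =
# "people who ever do anthropic reasoning"):
# H_doom:   humanity ends soon, ~2e11 people total
# H_utopia: humanity flourishes, ~2e14 people total
# i find myself among the first ~1e11 people, doing anthropic reasoning.

def doom_posterior(f_late_reasoners, prior_doom=0.5):
    early = 1e11          # people who exist before the doom/utopia branch point
    late = 2e14 - early   # extra people who only exist in the utopia world

    # assume every early person reasons anthropically; in utopia, only a
    # fraction f_late_reasoners of the late people ever do
    reasoners_doom = early
    reasoners_utopia = early + f_late_reasoners * late

    # P(i am an early reasoner | world), sampling uniformly over reasoners
    like_doom = early / reasoners_doom        # = 1
    like_utopia = early / reasoners_utopia

    odds = (prior_doom / (1 - prior_doom)) * (like_doom / like_utopia)
    return odds / (1 + odds)

for f in (1.0, 0.1, 0.001, 0.0):
    print(f, doom_posterior(f))
# f = 1.0   -> ~0.9995 (full doomsday update)
# f = 0.001 -> ~0.75   (heavily discounted)
# f = 0.0   -> 0.5     (no update at all)
```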
