tammy's blog about

AI alignment,
utopia,
anthropics,
and more;

*this work was done by Tamsin Leake and Julia Persson at Orthogonal.*

*thanks to mesaoptimizer for his help putting together this post.*

what does the QACI plan for formal-goal alignment actually look like when formalized as math? in this post, we'll be presenting our current formalization, which we believe has most critical details filled in.

this post gives a brief explanation of what QACI tries to do, but people unfamiliar with this alignment scheme might want to read the narrative explanation, which is a recommended introduction to QACI — though keep in mind that it's not entirely up to date.

this post straightforwardly builds up the math for QACI from the bottom up; and while it does explain all of the math, it does so by presenting it all at once. you might prefer reading the companion post, *"an Evangelion dialogue explaining the QACI alignment plan"*, which builds up this math gradually and provides more context.

in this first part, we'll be defining a collection of mathematical constructs which we'll be using in the rest of the post.

we'll be assuming basic set theory notation; in particular, $A\times B\times C$ is the set of tuples whose elements are respectively members of the sets $A$, $B$, and $C$, and for $n\in \mathbb{N}$, ${S}^{n}$ is the set of tuples of $n$ elements, all members of $S$.

$\mathbb{B}=\{\top ,\perp \}$ is the set of booleans and $\mathbb{N}$ is the set of natural numbers including $0$.

given a set $X$, $\mathcal{P}(X)$ will be the set of subsets of $X$.

$\#S$ is the cardinality (number of distinct elements) of the set $S$.

for some set $X$ and some total ordering $\le \ \in X^2 \to \mathbb{B}$, $\mathit{min}_\le$ and $\mathit{max}_\le$ are two functions of type $\mathcal{P}(X) \setminus \{\varnothing\} \to X$ finding the respective minimum and maximum elements of non-empty sets when they exist, using $\le$ as an ordering.

if $n \in \mathbb{N}$, then $f \circ^n$ denotes the repeated composition of $f$: $f \circ \dots \circ f$ ($n$ times), with $\circ$ being the composition operator: $(f \circ g)(x) = f(g(x))$.
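as a quick sketch (a Python toy, with `compose_n` being our own name for $f \circ^n$), repeated composition can be written as:

```python
def compose_n(f, n):
    """Return f composed with itself n times (the identity when n == 0)."""
    def composed(x):
        for _ in range(n):
            x = f(x)
        return x
    return composed

double = lambda x: 2 * x
assert compose_n(double, 3)(1) == 8   # (double ∘ double ∘ double)(1)
assert compose_n(double, 0)(5) == 5   # zero applications: identity
```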

$\lambda x:X.B$ is an anonymous function defined over set $X$, whose parameter $x$ is bound to its argument in its body $B$ when it is called.

$A\to B$ is the set of functions from $A$ to $B$, with $\to $ being right-associative ($A\to B\to C$ is $A\to (B\to C)$). if $f\phantom{\rule{0.278em}{0ex}}\in \phantom{\rule{0.278em}{0ex}}A\to B\to C$, then $f(x)(y)$ is simply $f$ applied once to $x\phantom{\rule{0.278em}{0ex}}\in \phantom{\rule{0.278em}{0ex}}A$, and then the resulting function of type $B\to C$ being applied to $y\phantom{\rule{0.278em}{0ex}}\in \phantom{\rule{0.278em}{0ex}}B$. $A\to B$ is sometimes denoted ${B}^{A}$ in set theory.

$A\stackrel{H}{\to}B$ is the set of always-halting, always-succeeding, deterministic programs taking as input an $A$ and returning a $B$.

given $f\in A\stackrel{H}{\to}B$ and $x\in A$, $R(f,x)\phantom{\rule{0.278em}{0ex}}\in \phantom{\rule{0.278em}{0ex}}\mathbb{N}\backslash \{0\}$ is the runtime duration of executing $f$ with input $x$, measured in compute steps doing a constant amount of work each — such as turing machine updates.

i'll be using a syntax for sums $\sum$ in which the sum iterates over all possible values for the variables listed *above* it, given that the constraints *below* it hold.

$\overset{x,y}{\underset{\substack{y \,=\, x \ \mathit{mod}\ 2 \\ x \,\in\, \{1,2,3,4\} \\ x \,\le\, 2}}{\sum}} y \;=\; 1$

says "for any value of $x$ and $y$ where these three constraints hold, sum $y$".
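as a sanity check, this particular sum can be computed directly (a Python sketch enumerating the assignments and filtering by the constraints):

```python
# enumerate all (x, y) satisfying the three constraints, then sum y
total = sum(
    y
    for x in {1, 2, 3, 4}
    if x <= 2
    for y in [x % 2]          # y = x mod 2
)
assert total == 1             # x=1 gives y=1, x=2 gives y=0
```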

for any countable set $X$, the set of distributions over $X$ is defined as:

$\Delta_X \;\coloneqq\; \{f \mid f \in X \to [0;1],\ \overset{x}{\underset{x \in X}{\sum}} f(x) \le 1\}$

a function $f \in X \to [0;1]$ is a distribution in $\Delta_X$ over $X$ if and only if its sum over all of $X$ is never greater than 1. we call "mass" the scalar in $[0;1]$ which a distribution assigns to any value. note that in our definition of distribution, we do not require that the distribution over all elements in the domain sums up to 1, but merely that it sums up to *at most* 1. this means that different distributions can have different "total mass".

we define ${\mathrm{\Delta}}_{X}^{0}\phantom{\rule{0.278em}{0ex}}\in \phantom{\rule{0.278em}{0ex}}{\mathrm{\Delta}}_{X}$ as the empty distribution: ${\mathrm{\Delta}}_{X}^{0}(x)=0$.

we define $\Delta_X^1 \in X \to \Delta_X$ as the distribution entirely concentrated on one element: $\Delta_X^1(x)(y) = \begin{cases} 1 & \text{if } y = x \\ 0 & \text{if } y \ne x \end{cases}$

we define ${\mathit{\text{Normalize}}}_{X}\phantom{\rule{0.278em}{0ex}}\in \phantom{\rule{0.278em}{0ex}}{\mathrm{\Delta}}_{X}\to {\mathrm{\Delta}}_{X}$ which modifies a distribution to make it sum to 1 over all of its elements, except for empty distributions:

$\mathit{Normalize}_X(\delta)(x) \;\coloneqq\; \begin{cases} \frac{\delta(x)}{\overset{y}{\underset{y \in X}{\sum}} \delta(y)} & \text{if } \delta \ne \Delta_X^0 \\ 0 & \text{if } \delta = \Delta_X^0 \end{cases}$
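as an illustrative sketch (not part of the formalism), a distribution over a finite set can be represented as a Python dict from elements to mass, with normalization behaving as defined above:

```python
def normalize(delta):
    """Rescale a finite distribution (dict: element -> mass) to total mass 1;
    the empty (all-zero) distribution stays at zero, as in the definition."""
    total = sum(delta.values())
    if total == 0:
        return {x: 0.0 for x in delta}
    return {x: m / total for x, m in delta.items()}

d = {"a": 0.2, "b": 0.3}               # total mass 0.5
nd = normalize(d)
assert abs(nd["a"] - 0.4) < 1e-12 and abs(nd["b"] - 0.6) < 1e-12
assert normalize({"a": 0.0}) == {"a": 0.0}
```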

we define $\mathit{Uniform}_X$ as the distribution assigning equal mass to every element of a finite set $X$, or the empty distribution if $X$ is infinite.

$\mathit{Uniform}_X(x) \;\coloneqq\; \begin{cases} \frac{1}{\#X} & \text{if } \#X \in \mathbb{N} \\ 0 & \text{if } \#X \notin \mathbb{N} \end{cases}$

we define ${\mathit{\text{max}}}_{X}^{\mathrm{\Delta}}\phantom{\rule{0.278em}{0ex}}\in \phantom{\rule{0.278em}{0ex}}{\mathrm{\Delta}}_{X}\to \mathcal{P}(X)$ as the function finding the elements of a distribution with the highest value:

$\mathit{max}_X^{\Delta}(\delta) \;\coloneqq\; \{x \mid x \in X, \forall x' \in X:\ \delta(x') \le \delta(x)\}$
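continuing the dict-based sketch (an assumption of ours, not the formalism), $\mathit{Uniform}_X$ and $\mathit{max}_X^{\Delta}$ over finite sets look like:

```python
def uniform(xs):
    """Uniform distribution over a finite collection."""
    xs = list(xs)
    return {x: 1 / len(xs) for x in xs}

def max_delta(delta):
    """The set of elements of a distribution carrying the highest mass."""
    top = max(delta.values())
    return {x for x, m in delta.items() if m == top}

assert uniform({"x", "y"}) == {"x": 0.5, "y": 0.5}
assert max_delta({"a": 0.1, "b": 0.4, "c": 0.4}) == {"b", "c"}
```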

given distributions, we will define a notation which i'll call "constrained mass".

it is defined as a syntactic structure that turns into a sum:

$\overset{v_1, \dots, v_p}{\underset{\substack{x_1 \,:\, X_1 \\ \vdots \\ x_n \,:\, X_n \\ C_1 \\ \vdots \\ C_m}}{\mathbf{M}}} [V] \;\coloneqq\; \overset{v_1, \dots, v_p}{\underset{\substack{x_1 \,\in\, \mathit{domain}(X_1) \\ \vdots \\ x_n \,\in\, \mathit{domain}(X_n) \\ C_1 \\ \vdots \\ C_m}}{\sum}} X_1(x_1) \cdot \ldots \cdot X_n(x_n) \cdot V$

in which variables $x$ are sampled from their respective distributions $X$, such that each instance of $V$ is multiplied by $X(x)$ for each $x$. constraints $C$ and iterated variables $v$ are kept as-is.

it is intended to weigh its expression body $V$ over the various assignments of values to the variables $v$, weighted by how much mass the $X$ distributions return and filtered for when the $C$ constraints hold.

to take a fairly abstract but fully calculable example,

$\begin{array}{rcl}
\overset{x,f}{\underset{\substack{x \,:\, \lambda n:\{1,2,3\}.\frac{n}{10} \\ f \,:\, \mathit{Uniform}_{\{\mathit{min},\mathit{max}\}} \\ x \ \mathit{mod}\ 2 \,\ne\, 0}}{\mathbf{M}}} [f(x,2)] & \coloneqq & \overset{x,f}{\underset{\substack{x \,\in\, \mathit{domain}(\lambda n:\{1,2,3\}.\frac{n}{10}) \\ f \,\in\, \mathit{domain}(\mathit{Uniform}_{\{\mathit{min},\mathit{max}\}}) \\ x \ \mathit{mod}\ 2 \,\ne\, 0}}{\sum}} (\lambda n:\{1,2,3\}.\tfrac{n}{10})(x) \cdot \mathit{Uniform}_{\{\mathit{min},\mathit{max}\}}(f) \cdot f(x,2) \\
& = & \overset{x,f}{\underset{\substack{x \,\in\, \{1,2,3\} \\ f \,\in\, \{\mathit{min},\mathit{max}\} \\ x \ \mathit{mod}\ 2 \,\ne\, 0}}{\sum}} \frac{x}{10} \cdot \frac{1}{2} \cdot f(x,2) \\
& = & \frac{1 \cdot \mathit{min}(1,2)}{10 \cdot 2} + \frac{3 \cdot \mathit{min}(3,2)}{10 \cdot 2} + \frac{1 \cdot \mathit{max}(1,2)}{10 \cdot 2} + \frac{3 \cdot \mathit{max}(3,2)}{10 \cdot 2} \\
& = & \frac{1 \cdot 1 + 3 \cdot 2 + 1 \cdot 2 + 3 \cdot 3}{20} = \frac{1+6+2+9}{20} = \frac{18}{20} = \frac{9}{10}
\end{array}$
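this computation can be checked mechanically (a Python sketch using exact rationals; representing the distributions as dicts is our own assumption):

```python
from fractions import Fraction

dist_x = {n: Fraction(n, 10) for n in (1, 2, 3)}       # λn:{1,2,3}. n/10
dist_f = {min: Fraction(1, 2), max: Fraction(1, 2)}    # Uniform over {min, max}

# constrained mass M[f(x,2)]: weigh f(x,2) by both masses,
# filtered by the constraint x mod 2 ≠ 0
total = sum(
    mx * mf * f(x, 2)
    for x, mx in dist_x.items()
    for f, mf in dist_f.items()
    if x % 2 != 0
)
assert total == Fraction(9, 10)
```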

in this syntax, below the variables being sampled from distributions, one is allowed to add an arbitrary number of logical constraints or new (non-sampled) variable bindings.

${\mathbb{B}}^{*}$ is the set of finite bitstrings.

bitstrings can be compared using the lexicographic order $\le_{\mathbb{B}^*}$, and concatenated using the $\Vert$ operator. for a bitstring $x \in \mathbb{B}^*$, $|x| \in \mathbb{N}$ is its length in number of bits.
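for illustration, bitstrings can be modeled as tuples of bits in Python, where tuple comparison happens to be lexicographic and `+` is concatenation:

```python
# bitstrings as tuples of 0/1 bits; Python compares tuples lexicographically,
# and `+` concatenates them
x, y = (1, 0, 1), (1, 1)
assert x < y                     # 101 <_{B*} 11: second bit 0 < 1
assert x + y == (1, 0, 1, 1, 1)  # x ∥ y
assert len(x) == 3               # |x|
```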

for any countable set $X$, $\mathit{Encode}_X \in X \to \mathbb{B}^*$ will be some reasonable function to convert values to bitstrings, such that $\forall (x,y) \in X^2:\ \mathit{Encode}_X(x) = \mathit{Encode}_X(y) \iff x = y$. "reasonable" entails constraints such as:

- it can be computed efficiently.
- it can be inverted efficiently and unambiguously.
- its output's size is somewhat proportional to the actual amount of information. for example, integers are encoded in binary, not unary.
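as a toy example of such a "reasonable" encoding (for $X = \mathbb{N}$ only; a real $\mathit{Encode}_X$ would cover arbitrary countable $X$, and the function names here are our own):

```python
def encode_nat(n):
    """A toy Encode over ℕ: binary, injective, efficiently invertible."""
    return format(n, "b")

def decode_nat(s):
    """Inverse of encode_nat."""
    return int(s, 2)

# injective and invertible on a sample of naturals
assert all(decode_nat(encode_nat(n)) == n for n in range(100))
# output size proportional to information content: binary, not unary
assert len(encode_nat(12345)) == (12345).bit_length()
```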

we posit $\sigma \coloneqq \mathbb{B}^{\overline{\sigma}}$, the set of "signatures", sufficiently large bitstrings for cryptographic and uniqueness purposes, with their length defined as $\overline{\sigma} = 2^{31}$ for now. this *feels* to me like it should be enough, and if it isn't then something is fundamentally wrong with the whole scheme, such that no manageable larger size would do either.

we posit a function $\mathit{\text{ExpensiveHash}}\phantom{\rule{0.278em}{0ex}}\in \phantom{\rule{0.278em}{0ex}}{\mathbb{B}}^{*}\stackrel{H}{\to}\sigma $, to generate fixed-sized strings from seed bitstrings, which must satisfy the following:

- it must be too expensive for the AI to compute *in any way* (including through superintelligently clever tricks), but cheap enough that we can compute it outside of the AI — for example, it could require quantum computation, and the AI could be restricted to classical computers
- it should take longer to compute (again, in any way) than the expected correct versions of $\mathit{Loc}$'s $f,g$ functions (as will be defined later) could afford to run
- it should tend to be collision-resistant

at some point, we might come up with more formal ways to define $\mathit{\text{ExpensiveHash}}$ in a way that checks that it isn't being computed inside $\mathit{\text{Loc}}$'s $f,g$ functions, nor inside the AI.

for any countable set $X$, we'll be assuming ${\mathit{\text{EvalMath}}}_{X}\phantom{\rule{0.278em}{0ex}}\in \phantom{\rule{0.278em}{0ex}}{\mathbb{B}}^{*}\to \{\{x\}|x\in X\}\cup \{\varnothing \}$ to interpret a piece of text as a piece of math in some formal language, evaluating to either:

- a set of just one element of $X$, if the math parses and evaluates properly to an element of $X$
- an empty set otherwise

for example,

$\begin{array}{cc}\hfill & {\mathit{\text{EvalMath}}}_{\mathbb{N}}(\text{"1+2"})=\{3\}\hfill \\ \hfill & {\mathit{\text{EvalMath}}}_{\mathbb{N}}(\text{"hello"})=\varnothing \hfill \end{array}$
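a toy sketch of $\mathit{EvalMath}_{\mathbb{N}}$ for a tiny formal language (just integer literals, `+`, and `*`; the actual formal language is left unspecified in this post, so this restriction is our own assumption):

```python
import ast

def eval_math_nat(s):
    """Toy EvalMath over ℕ: returns {value} if s parses and evaluates to a
    natural number, and the empty set otherwise."""
    def ev(node):
        if isinstance(node, ast.BinOp) and isinstance(node.op, (ast.Add, ast.Mult)):
            left, right = ev(node.left), ev(node.right)
            return left + right if isinstance(node.op, ast.Add) else left * right
        if isinstance(node, ast.Constant) and isinstance(node.value, int):
            return node.value
        raise ValueError("not in the toy language")
    try:
        return {ev(ast.parse(s, mode="eval").body)}
    except (SyntaxError, ValueError):
        return set()

assert eval_math_nat("1+2") == {3}
assert eval_math_nat("hello") == set()
```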

for any countable sets $X$ and $P$:

$K_X^- \in \Delta_X$ is some "kolmogorov simplicity" distribution over the set $X$ which has the properties of never assigning 0, and of summing/converging to 1 over all of $X$: it must satisfy $\forall x \in X:\ K_X^-(x) > 0$ and $\overset{x}{\underset{x \in X}{\sum}} K_X^-(x) = 1$.

${K}^{-}$ is expected to give more mass to simpler elements, in an information-theoretic sense.

notably, it is expected to "deduplicate" information that appears in multiple parts of a same mathematical object, such that even if $x \in \mathbb{B}^*$ holds lots of information, $K^-_{\mathbb{B}^*}(x)$ is not much higher (higher simplicity, i.e. lower complexity) than $K^-_{\mathbb{B}^* \times \mathbb{B}^*}(x,x)$.

we could define ${K}_{X}^{-}$ similarly to cross-entropy, with some universal turing machine $\mathit{\text{UTM}}\phantom{\rule{0.278em}{0ex}}\in \phantom{\rule{0.278em}{0ex}}{\mathbb{B}}^{*}\times \mathbb{N}\to {\mathbb{B}}^{*}$ returning the state of its tape after a certain number of compute steps:

$K_X^- \;\coloneqq\; \mathit{Normalize}_X\Bigg(\lambda x:X.\ \overset{i,n}{\underset{\substack{i \,\in\, \mathbb{B}^* \\ n \,\in\, \mathbb{N} \\ \mathit{UTM}(i,n) \,=\, \mathit{Encode}_X(x)}}{\sum}} \frac{1}{(2^{|i|} \cdot (n+1))^2}\Bigg)$

*kolmogorov simplicity over $X$ with a prior from $P$*, of type $K^{-\sim}_{P,X} : P \to \Delta_X$, allows the elements it samples over to share information with a prior piece of information in $P$. it is defined as $K^{-\sim}_{P,X}(p) \coloneqq \mathit{Normalize}_X(\lambda x:X.\ K^-_{P \times X}(p,x))$.

in this section we posit some formalisms for modeling world-states, and sketch out an implementation for them.

we will posit some countable set $\Omega $ of world-states, and a distribution ${\Omega}_{\alpha}\phantom{\rule{0.278em}{0ex}}\in \phantom{\rule{0.278em}{0ex}}{\mathrm{\Delta}}_{\Omega}$ of possible initial world-states.

we'll also posit a function ${\Omega}_{\alpha}^{\to}\phantom{\rule{0.278em}{0ex}}\in \phantom{\rule{0.278em}{0ex}}\Omega \to {\mathrm{\Delta}}_{\Omega}$ which produces a distribution of future world-states for any specific world-state in the universe starting at $\alpha $.

given an initial world-state $\alpha \in \Omega$, we'll call $\Omega_\alpha^\to(\alpha)$ the "universe" that it gives rise to. it must be the case that $\overset{\omega}{\underset{\omega \in \Omega}{\sum}} \Omega_\alpha^\to(\alpha)(\omega) = 1$.

when $\alpha $ describes the start of a quantum universe, individual world-states $\Omega $ following it by ${\Omega}_{\alpha}^{\to}$ would be expected to correspond to many-worlds everett branches.

for concreteness's sake, we could posit $\Omega \phantom{\rule{0.278em}{0ex}}\subset \phantom{\rule{0.278em}{0ex}}{\mathbb{B}}^{*}$, though note that $\alpha $ is expected to not just hold information about the initial state of the universe, but also about how it is computed forwards.

given a particular $\alpha \phantom{\rule{0.278em}{0ex}}\in \phantom{\rule{0.278em}{0ex}}\Omega $:

finally, we define $\mathit{SimilarPasts}_\alpha \in \Omega \times \Omega \to [0;1]$, which measures how much two world-states have past world-states $\omega_1$ in common:

$\mathit{SimilarPasts}_\alpha(\omega_2, \omega_2') \;\coloneqq\; \overset{\omega_1}{\underset{\omega_1 \,:\, \Omega_\alpha^\to(\alpha)}{\mathbf{M}}} [\Omega_\alpha^\to(\omega_1)(\omega_2) \cdot \Omega_\alpha^\to(\omega_1)(\omega_2')]$

we will sketch out here a proposal for $\Omega$, $\Omega_\alpha$, and $\Omega^\to$ such that our world-state $\omega$ has a hopefully non-exponentially-small $\Omega_\alpha^\to(\alpha)(\omega)$.

the basis for this will be a universal quantum turing machine. we will posit:

- $\mathit{Tape} \coloneqq \{s \mid s \in \mathcal{P}(\mathbb{Z}), \#s \in \mathbb{N}\}$ the set of turing machine tapes, as *finite* (thanks to $\#s \in \mathbb{N}$) sets of relative integers representing positions in the tape holding a 1 rather than a 0.
- $\mathit{State}$ some finite ($\#\mathit{State} \in \mathbb{N}$) set of states, and some $\mathit{state}_0 \in \mathit{State}$.
- $\Omega \coloneqq \mathit{Tape} \times \mathit{State} \times \mathbb{Z}$: world-states consist of a tape, a state, and a machine head index.
- $\Delta_\Omega^q \coloneqq \{f \mid f \in \Omega \to \mathbb{C},\ \overset{\omega}{\underset{\omega \in \Omega}{\sum}} \Vert f(\omega) \Vert^2 = 1\}$ the set of "quantum distributions" over world-states
- $\mathit{Step} \in \Delta_\Omega^q \to \Delta_\Omega^q$ the "time step" operator, running the universal quantum turing machine's transition matrix to turn one quantum distribution of world-states into another

we'll also define $\Delta_{\mathbb{N}}^2 \in \Delta_{\mathbb{N}}$ as the "quadratic realityfluid distribution", which assigns diminishing quantities to natural numbers, but only quadratically diminishing: $\Delta_{\mathbb{N}}^2(n) \coloneqq \mathit{Normalize}_{\mathbb{N}}\left(\lambda n:\mathbb{N}.\ \frac{1}{(n+1)^2}\right)(n)$
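numerically, the normalizing sum is $\sum_n \frac{1}{(n+1)^2} = \frac{\pi^2}{6}$, so $\Delta_{\mathbb{N}}^2(n) = \frac{6}{\pi^2 (n+1)^2}$; a quick check (the function name is our own):

```python
import math

def quadratic_realityfluid(n):
    """Δ²ℕ(n) = (1/(n+1)²) / Σ_k 1/(k+1)²; the normalizing sum is π²/6."""
    return (1 / (n + 1) ** 2) / (math.pi ** 2 / 6)

# total mass converges to 1: check a long partial sum
partial = sum(quadratic_realityfluid(n) for n in range(200000))
assert abs(partial - 1) < 1e-3
# mass diminishes with n, but only quadratically
assert quadratic_realityfluid(0) > quadratic_realityfluid(1)
```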

we can then define ${\Omega}^{\to}$ as repeated applications of $\mathit{\text{Step}}$, with quadratically diminishing realityfluid:

$\Omega_\alpha^\to(\omega_1)(\omega_2) \;\coloneqq\; c \cdot \overset{n_1, n_2, s}{\underset{\substack{n_1 \,:\, \Delta_{\mathbb{N}}^2 \\ n_2 \,:\, \Delta_{\mathbb{N}}^2 \\ s(n,\omega) \,=\, \Vert \mathit{Step}\circ^n(\Delta_\Omega^1(\alpha))(\omega) \Vert^2}}{\mathbf{M}}} [s(n_1, \omega_1) \cdot s(n_1 + n_2, \omega_2)]$

where the constant $c$ is whatever scalar it needs to be for $\overset{\omega}{\underset{\omega \in \Omega}{\sum}} \Omega_\alpha^\to(\alpha)(\omega) = 1$ to be satisfied.

this implementation of ${\Omega}_{\alpha}^{\to}$ measures how much ${\omega}_{2}$ is in the future of ${\omega}_{1}$ by finding paths from $\alpha $ to ${\omega}_{1}$, and then longer paths from $\alpha $ to ${\omega}_{2}$.

and finally, we define $\Omega_\alpha$ as a distribution giving non-zero value to world-states $(t, \mathit{state}_0, 0)$ where $t$ is a tape where no negative-index cells are set to 1.

$\Omega_\alpha(t, s, i) \;\coloneqq\; \begin{cases} \Delta_{\mathbb{N}}^2\Big(\overset{n}{\underset{n \in t}{\sum}} 2^n\Big) & \text{if } s = \mathit{state}_0, i = 0, t \subset \mathbb{N} \\ 0 & \text{otherwise} \end{cases}$
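as a small worked example (unnormalized weights only, with a function name of our own): the tape $t = \{0, 2\}$ reads off as $2^0 + 2^2 = 5$, giving a weight proportional to $\frac{1}{36}$:

```python
def initial_weight_unnormalized(t):
    """Unnormalized Ω_α weight of an initial world-state (t, state0, 0):
    the cells-set t ⊂ ℕ is read off as the natural number Σ_{n∈t} 2^n,
    which is fed into the quadratic weight 1/(n+1)²."""
    n = sum(2 ** k for k in t)
    return 1 / (n + 1) ** 2

assert initial_weight_unnormalized(set()) == 1         # empty tape: n = 0
assert initial_weight_unnormalized({0, 2}) == 1 / 36   # n = 1 + 4 = 5
```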

because we selected a universal (quantum) turing machine, there is at least one input tape implementing any single quantum algorithm, including the quantum algorithm implementing our physics.

finally, we get into the core mechanisms of QACI.

the core idea of QACI is "blob location": mathematically formalizing the idea of locating our world and locating bitstrings (which i'll call "blobs") stored on computers within that world, out of the space of all possible computational universes, by sampling over functions which extract those blobs from world-states in $\Omega $ and functions which can produce a counterfactual world where that blob has been replaced with another blob of the same length (in number of bits).

throughout these functions, we will posit the following constants:

- the initial factual question blob $q\phantom{\rule{0.278em}{0ex}}\in \phantom{\rule{0.278em}{0ex}}{\mathbb{B}}^{*}$
- two "observation" blobs ${\mu}_{1}\phantom{\rule{0.278em}{0ex}}\in \phantom{\rule{0.278em}{0ex}}{\mathbb{B}}^{*}$ and ${\mu}_{2}\phantom{\rule{0.278em}{0ex}}\in \phantom{\rule{0.278em}{0ex}}{\mathbb{B}}^{*}$

${\mu}_{1},{\mu}_{2}$ are variables which will be passed around, called "observations". in normal AI agent framings, an AI would have a history of actions and observations, and decide on its next action based on that; but in the one-shot framing we use, there is only a single action and a fixed set of observations. the observations, in practice, will be very large pieces of data helping the AI locate itself in the multiverse of all possible computations, as well as get a better idea of how and where it is being run. we will likely include in them things like:

- a full explanation of the QACI alignment plan, including the math
- the AI's code
- a dump of wikipedia and other large parts of the internet
- a copy of some LLM

${\mu}_{1}$ will be produced before the question blob is generated, and ${\mu}_{2}$ will be produced after the question blob is generated but before the AI is launched.

the overall shape of what we're doing can be seen on the illustration below: we start at the start of the universe $\alpha$, and use four blob locations and a counterfactual blob function call to locate five other world-states. the illustration shows distributions of future and past world-states, as well as a particular sampling for all four blob locations.

- we sample ${{\omega}_{{\mu}_{1}}}$ using ${\mathit{\text{Loc}}(\alpha ,{\Omega}_{\alpha}^{\to}({\alpha}),{{\mu}_{1}},\xi )}$, world-states containing the first observation ${{\mu}_{1}}$
- we sample ${{\omega}_{{\mu}_{2}}}$ using ${\mathit{\text{Loc}}(\alpha ,{\Omega}_{\alpha}^{\to}({{\omega}_{{\mu}_{1}}}),{{\mu}_{2}},\xi )}$, world-states containing the second observation ${{\mu}_{2}}$
- we sample ${{\omega}_{q}}$ using ${\mathit{\text{Loc}}(\alpha ,{\Omega}_{\alpha}^{\to}({{\omega}_{{\mu}_{1}}}),{q},\xi )}$, world-states containing the question blob ${q}$, but requiring that its world-state ${{\omega}_{q}}$ precede the world-state ${{\omega}_{{\mu}_{2}}}$
- we get ${{\omega}_{q}^{\prime}}$, the world-state with a counterfactual question blob, using blob location ${{\gamma}_{q}}$ found by sampling ${{\omega}_{q}}$
- we sample ${{\omega}_{r}}$ using ${\mathit{\text{Loc}}(\alpha ,{\Omega}_{\alpha}^{\to}({{\omega}_{q}^{\prime}}),{r},\xi )}$, possible world-states containing an answer to a given counterfactual question ${{q}^{\prime}}$

the location path from ${\omega}_{q}^{\prime}$ to ${\omega}_{r}$ is used to run QACI intervals, where counterfactual questions ${q}^{\prime}$ are inserted and answers ${r}$ are located in their future.

(we could also build fancier schemes where we locate the AI's returned action, or its code running over time, in order to "tie more tightly" the blob locations to the AI — but it is not clear that this helps much with blob location failure modes i'm concerned about.)

for the moment, we merely rely on ${\mu}_{1}$ and ${\mu}_{2}$ being uniquely identifying enough — and though implementing them as *static bitstrings* might suffice, perhaps they could instead be implemented as *lazily evaluated associative maps*: when the AI tries to access members of those maps, code which computes or fetches information from the world (such as from the internet) would be executed to determine the contents of that part of the observation object. this way, the observation would still be conceptualized as a static object by the AI — and indeed it wouldn't be able to observe any mutations — but it'd be able to observe arbitrary amounts of the world, not just amounts we'd have previously downloaded.

we could make QACI return not a scoring over actions but a proper utility function, but this only constrains the AI's action space and doesn't look like it helps in any way, including making QACI easier for the AI to make good guesses about. perhaps with utility functions we could find a way to make the AI go "ah, well, i'm not able to steer much future in world-states where i'm in hijacked sims", but it's not clear how, or even that this would help much. so for now, the math focuses on the simple case of returning an action-scoring function.

(remember that while this section does explain the blob location math, it does so by presenting it all at once. for a gentler introduction, see part **7. blob location** (and onwards) of the dialogue explaining QACI)

for any blob length (in bits) $n\phantom{\rule{0.278em}{0ex}}\in \phantom{\rule{0.278em}{0ex}}\mathbb{N}$:

first, we'll posit $\Gamma_n \coloneqq \mathbb{B}^n \to \Omega$, the set of blob locations; a blob location is identified by its counterfactual blob insertion function, which takes any counterfactual blob and returns the world-state in which the factual blob has been replaced with that counterfactual blob.

${\mathit{\text{Loc}}}_{n}\phantom{\rule{0.278em}{0ex}}\in \phantom{\rule{0.278em}{0ex}}\Omega \times {\mathrm{\Delta}}_{\Omega}\times {\mathbb{B}}^{n}\times \Xi \to {\mathrm{\Delta}}_{{\Gamma}_{n}}$ tries to locate an individual blob $b$ (as a bitstring of length $n$) in a particular world-state sampled from the time-distribution (past or future) $\delta $ (which will usually be a distribution returned by ${\Omega}_{\alpha}^{\to}$) within the universe starting at $\alpha $.

it returns a distribution over counterfactual insertion functions of type ${\mathbb{B}}^{n}\to \Omega $ which take a counterfactual blob and return the matching counterfactual world-state. the elements in that distribution typically sum up to much less than 1; the total amount they sum up to corresponds to how much $\mathit{\text{Loc}}$ finds the given blob in the given world-state to begin with; thus, sampling from a distribution returned by $\mathit{\text{Loc}}$ in a constrained mass calculation $\mathbf{M}$ is useful even if said result is not used, because of its multiplying factor.

note that the returned counterfactual insertion function can be used to locate the factual world-state — simply give it the factual blob as input.

$\Xi$ is some countably infinite set of arbitrary pieces of information which each call to $\mathit{Loc}$ can use internally — the goal of this is for multiple different calls to $\mathit{Loc}$ to be able to share some prior information, while only being penalized by $K^-$ for it once. for example, an element of $\Xi$ might describe how to extract the contents of a specific laptop's memory from physics, and individual $\mathit{Loc}$ calls only need to specify the date and the memory range. for concreteness, we can posit $\Xi \coloneqq \mathbb{B}^*$, the set of finite bitstrings.

$\begin{array}{rl} & f,g,\omega ,{b}^{\prime},\tau \\ {\mathit{\text{Loc}}}_{n}(\alpha ,\delta ,b,\xi )(\gamma ) ≔\ & \mathbf{M}\left[\frac{{\mathit{\text{SimilarPasts}}}_{\alpha}(\omega ,\,g({b}^{\prime},\tau ))}{R(g,({b}^{\prime},\tau ))+R(f,\,g({b}^{\prime},\tau ))}\right] \\ & (f,g) : {K}^{-}_{\Xi ,\,(\Omega \stackrel{H}{\to}{\mathbb{B}}^{n}\times {\mathbb{B}}^{*})\times ({\mathbb{B}}^{n}\times {\mathbb{B}}^{*}\stackrel{H}{\to}\Omega )}(\xi ) \\ & \omega : {\mathit{\text{max}}}^{\Delta}_{\Omega}\left(\lambda \omega :\Omega .\begin{cases}\delta (\omega ) & \text{if}\ f(\omega )=(b,\tau ) \\ 0 & \text{otherwise}\end{cases}\right) \\ & {b}^{\prime} : {\mathit{\text{Uniform}}}_{{\mathbb{B}}^{n}} \\ & \forall {b}^{\prime \prime}\in {\mathbb{B}}^{n}:\ \gamma ({b}^{\prime \prime})=g({b}^{\prime \prime},\tau )\ \wedge\ f(\gamma ({b}^{\prime \prime}))=({b}^{\prime \prime},\tau ) \end{array}$

$\mathit{\text{Loc}}$ works by sampling a pair of functions $f,g$, which convert world-states back and forth into {pairs whose first element is the blob and whose second element represents everything in the world-state except the blob}.

that latter piece of information is called $\tau$ (tau), and rather than being sampled, $\tau$ is defined by the return value of $f$ on the original world-state — notably, $\tau$ is not penalized for being arbitrarily large, though $f$ and $g$ are penalized for their compute time.

for a given fixed pair of $f$ and $g$, $\mathit{\text{Loc}}$ finds the set of hypothesis world-states $\omega$ with the highest mass in the time-distribution $\delta$ for which $f,g$ work as intended. this is intended to select the "closest in time" world-states in $\delta$, to avoid adversarial attackers generating their own factual blobs and capturing our location.

it then weighs locations using, for every counterfactual blob ${b}^{\prime} \in {\mathbb{B}}^{n}$:

- the degree to which counterfactual world-states tend to share pasts with the original factual world-state, for ${b}^{\prime}$.
- the compute time of $g$ and $f$ on counterfactual blobs and world-states respectively.

note that ${\mathit{\text{Loc}}}_{n}$, by design, only supports counterfactual blobs whose length $n$ is equal to the length of the initial factual blob $b$ — it wouldn't really make sense to talk about "replacing bits" if the bits are different.

in effect, $\mathit{\text{Loc}}$ takes random $f,g$ decoding and re-encoding programs, and measures how complex and expensive they are, how far the world-states in which they work are from our desired distribution, and how close their counterfactual world-states are to the factual world-state.
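to make the $f,g$ pair concrete, here's a toy python sketch (not part of the formalization) where world-states are plain strings and the blob sits at an assumed fixed position — `OFFSET` and `N` are made up for illustration:

```python
# toy sketch: world-states as bitstrings (python str), with the blob at a
# made-up fixed offset. f decodes a world-state into (blob, tau); g
# re-encodes a (possibly counterfactual) blob together with the same tau.
OFFSET, N = 8, 16  # assumed blob position and length

def f(world: str) -> tuple[str, str]:
    # extract the blob; tau is everything else (prefix "|" suffix).
    # the "|" separator is a hack that only works for toy worlds
    # which contain no "|" themselves.
    blob = world[OFFSET:OFFSET + N]
    tau = world[:OFFSET] + "|" + world[OFFSET + N:]
    return blob, tau

def g(blob: str, tau: str) -> str:
    # re-insert a blob into the rest of the world-state
    prefix, suffix = tau.split("|", 1)
    return prefix + blob + suffix

world = "00000000" + "1010101010101010" + "11110000"
b, t = f(world)
assert g(b, t) == world           # round-trip on the factual blob
counterfactual = g("0" * N, t)    # a counterfactual world-state
```

real $f,g$ candidates would be arbitrary programs sampled via ${K}^{-}$ and penalized for compute time; this pair is merely the simplest possible shape such a codec can take.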

we'll define $\Pi ≔ {\mathbb{B}}^{|q|-\overline{\sigma}}$, the set of possible answer bitstring payloads.

counterfactual questions will not be signed; their set is thus simply ${\mathbb{B}}^{|q|}$, the set of bitstrings of the same length as the factual question.

we'll define $\mathit{\text{Sign}} \in \Pi \times {\mathbb{B}}^{*}\to {\mathbb{B}}^{|q|}$ as $\mathit{\text{Sign}}(\pi ,k) ≔ \mathit{\text{ExpensiveHash}}(\pi \Vert k)\Vert \pi$. this function tags blob payloads with a "signature" generated from a seed bitstring, concatenating it to the blob payload.
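as a toy illustration of $\mathit{\text{Sign}}$ in python, with sha256 standing in for $\mathit{\text{ExpensiveHash}}$ (the real hash is meant to be costly to compute; sha256 is only a cheap placeholder) and bitstrings modeled as bytes:

```python
import hashlib

def sign(payload: bytes, seed: bytes) -> bytes:
    # hash payload‖seed, then prepend the digest to the payload,
    # mirroring ExpensiveHash(π‖k)‖π
    return hashlib.sha256(payload + seed).digest() + payload

signed = sign(b"some answer", b"counterfactual question")
assert signed.endswith(b"some answer")   # payload survives verbatim
```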

we will posit $A \subset {\mathbb{B}}^{*}$, the finite set of actions the AI can take, as bitstrings.

we'll call $U ≔ A\to [0;1]$ the set of "scoring functions" over actions — functions which "have an opinion" about various actions. they are similar to utility functions, except that they're over actions rather than over worlds or world-histories.

they can be composed using $\mathit{\text{Compose}} \in {\mathrm{\Delta}}_{U}\to U$, which could be simple scoring-function averaging:

$\begin{array}{rl} & u \\ \mathit{\text{Compose}}(\delta )(a) \stackrel{?}{≔}\ & \mathbf{M}\,[u(a)] \\ & u : \delta \end{array}$
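as a minimal model of this averaging version, treating a distribution over scoring functions as a list of (mass, function) pairs whose masses need not sum to 1:

```python
# toy model (not the post's math): a distribution over scoring functions
# as (mass, function) pairs; compose is their mass-weighted sum.
def compose(dist):
    # mirrors M[u(a)] with u sampled from the distribution delta
    return lambda a: sum(mass * u(a) for mass, u in dist)

def u1(a):
    return 1.0 if a == "help" else 0.0

def u2(a):
    return 0.5

score = compose([(0.6, u1), (0.4, u2)])
# score("help") == 0.6 * 1.0 + 0.4 * 0.5 == 0.8
```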

but alternatively, we could use something like Diffractor's Rose bargaining to reduce the ability for scoring/utility functions to threaten each other — and notably ours.

$\mathit{\text{Compose}} \stackrel{?}{≔} \mathit{\text{Rose}}$

(where i'm using $\stackrel{?}{≔}$ to mean "maybe define this way, but i'm not sure")

using those, we define $\mathit{\text{QACI}} \in \Omega \times {\Gamma}_{|q|}\times \Xi \times {\mathbb{B}}^{|q|}\to {\mathrm{\Delta}}_{\Pi}$ which, given a physics hypothesis $\alpha$, a question blob location ${\gamma}_{q}$, and a blob location prior $\xi$, returns a distribution over guessed answer payloads ${\pi}_{r}$ for a given counterfactual question ${q}^{\prime}$.

$\begin{array}{rl} & {\gamma}_{r} \\ \mathit{\text{QACI}}(\alpha ,{\gamma}_{q},\xi ,{q}^{\prime})({\pi}_{r}) ≔\ & \mathbf{M}\,[1] \\ & {\gamma}_{r} : {\mathit{\text{Loc}}}_{|q|}(\alpha ,\ {\Omega}_{\alpha}^{\to}({\gamma}_{q}({q}^{\prime})),\ \mathit{\text{Sign}}({\pi}_{r},{q}^{\prime}),\ \xi ) \end{array}$

$\mathit{\text{QACI}}$ works by sampling answer blob locations ${\gamma}_{r}$ from world-states in the future of the counterfactual question world-state ${\gamma}_{q}({q}^{\prime})$, with answer blobs signed using ${q}^{\prime}$.

with its first three parameters fixed, $\mathit{\text{QACI}}$ becomes the straightforward counterfactual query function ${\mathbb{B}}^{|q|}\to {\mathrm{\Delta}}_{\Pi}$ as used in the narrative explanation of QACI: one can call it with arbitrary counterfactual text inputs (within the size limitation), and get a distribution over possible answers, which can easily be collapsed using ${\mathit{\text{max}}}_{\Pi}^{\mathrm{\Delta}}$.
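a toy sketch of that collapse step, with the distribution represented as a dict from answer payloads to unnormalized masses:

```python
# toy model of collapsing a distribution over answers with max^Δ:
# keep the answer payload with the highest mass.
def collapse(dist: dict) -> str:
    return max(dist, key=dist.get)

answers = {"yes": 0.03, "no": 0.01, "maybe": 0.005}
assert collapse(answers) == "yes"
```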

the top-level call to the $\mathit{\text{QACI}}$ query function, ${\mathit{\text{QACI}}}_{0} \in \Omega \times {\Gamma}_{|q|}\times \Xi \to {\mathrm{\Delta}}_{U}$, interprets its output as a piece of math and executes it with, as parameters, various global and contextual values it might need access to, and returns a distribution over action-scoring functions:

$\begin{array}{rl} & {\pi}_{r},f \\ {\mathit{\text{QACI}}}_{0}(\alpha ,{\gamma}_{q},\xi )(u) ≔\ & \mathbf{M}\,[1] \\ & {\pi}_{r} : \mathit{\text{QACI}}(\alpha ,{\gamma}_{q},\xi ,{q}_{0}^{\prime}) \\ & f \in {\mathit{\text{EvalMath}}}_{\{q\}\times \{{\mu}_{1}\}\times \{{\mu}_{2}\}\times \Omega \times {\Gamma}_{|q|}\times \Xi \to U}({\pi}_{r}) \\ & f(q,{\mu}_{1},{\mu}_{2},\alpha ,{\gamma}_{q},\xi )=u \end{array}$

where ${q}_{0}^{\prime} \in {\mathbb{B}}^{|q|}$ is the initial counterfactual blob, such as "return a good scoring of actions" encoded in ASCII, padded with zeros to be of the right length.
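for illustration, constructing such a ${q}_{0}^{\prime}$ in python — `Q_LEN` is a made-up blob length, and bytes stand in for bits:

```python
# illustrative only: Q_LEN is an assumed length for the factual question
# blob; the real |q| is fixed by the factual question.
Q_LEN = 64

def initial_question(text: str) -> bytes:
    # ASCII-encode the prompt, then zero-pad to the blob length
    raw = text.encode("ascii")
    assert len(raw) <= Q_LEN, "prompt must fit in the blob"
    return raw + b"\x00" * (Q_LEN - len(raw))

q0 = initial_question("return a good scoring of actions")
```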

${\mathit{\text{QACI}}}_{0}$'s distribution over answers demands that the answer payload ${\pi}_{r}$, when interpreted as math and with all required contextual variables passed as input ($q,{\mu}_{1},{\mu}_{2},\alpha ,{\gamma}_{q},\xi $), returns an action-scoring function equal to $u$ — this is how it measures the weight of any action-scoring function $u$.

$\mathbf{M}\,[1]$ ensures that ${\mathit{\text{QACI}}}_{0}$'s distributions are determined only by the sampled variables and logical requirements.

$\mathit{\text{EvalMath}}$'s $f$ function having access to $\mathit{\text{QACI}}$'s full distribution over output texts, rather than just the single best candidate, allows it to discard as many invalid candidates as it needs and stick to ones that match whatever constraints it has.

we'll posit the AI as $\mathit{\text{AI}} \in U\to A$ — a program which tries to satisfy a scoring over actions, by making a high-expected-score guess.
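a maximally simplified stand-in for $\mathit{\text{AI}}$: brute-force argmax over a tiny, hypothetical action set (a real system could only approximate this with a high-expected-score guess):

```python
# toy stand-in for AI ∈ U → A: exhaustively score a made-up finite
# action set and return the best-scoring action.
ACTIONS = ["noop", "print_hello", "shutdown"]

def ai(score) -> str:
    return max(ACTIONS, key=score)

assert ai(len) == "print_hello"   # longest bitstring wins under this toy score
```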

we define $\mathit{\text{Score}} \in U$, the action-scoring function about which the AI will be making guesses — one which is, hopefully, good. this is the scoring function for which the AI will be trying to produce an action that is as favorable as possible, within its limited capabilities.

$\begin{array}{rl} & \alpha ,\xi ,{\gamma}_{{\mu}_{1}},{\gamma}_{{\mu}_{2}},{\gamma}_{q} \\ \mathit{\text{Score}} ≔ \mathit{\text{Compose}}(\lambda u:U.\ & \mathbf{M}\,[{\mathit{\text{Normalize}}}_{U}({\mathit{\text{QACI}}}_{0}(\alpha ,{\gamma}_{q},\xi ))(u)]\,) \\ & \alpha : {\Omega}_{\alpha} \\ & \xi : {K}_{\Xi}^{-} \\ & {\gamma}_{{\mu}_{1}} : {\mathit{\text{Loc}}}_{|{\mu}_{1}|}(\alpha ,\ {\Omega}_{\alpha}^{\to}(\alpha ),\ {\mu}_{1},\ \xi ) \\ & {\gamma}_{{\mu}_{2}} : {\mathit{\text{Loc}}}_{|{\mu}_{2}|}(\alpha ,\ {\Omega}_{\alpha}^{\to}({\gamma}_{{\mu}_{1}}({\mu}_{1})),\ {\mu}_{2},\ \xi ) \\ & {\gamma}_{q} : {\mathit{\text{Loc}}}_{|q|}(\alpha ,\ {\Omega}_{\alpha}^{\to}({\gamma}_{{\mu}_{1}}({\mu}_{1})),\ q,\ \xi ) \\ & {\Omega}_{\alpha}^{\to}({\gamma}_{q}(q))({\gamma}_{{\mu}_{2}}({\mu}_{2})) > {\Omega}_{\alpha}^{\to}({\gamma}_{{\mu}_{2}}({\mu}_{2}))({\gamma}_{q}(q)) \end{array}$

where the following variables are sampled:

- an initial state of the universe $\alpha $.
- a blob location prior $\xi $, sampled for simplicity.
- the blob locations of ${\mu}_{1}$, ${\mu}_{2}$, and $q$.

and the world-state containing the second observation ${\mu}_{2}$ is required to be in the future of the world-state containing the question $q$.

then, we rate possible actions $a$ by composing the scoring functions produced by ${\mathit{\text{QACI}}}_{0}$ for all of those blob location hypotheses.

for any question location, the set of action-scoring functions sampled by ${\mathit{\text{QACI}}}_{0}$ is normalized. this is because pairs of *AI-action location and question location* should not be penalized for having a "harder to find" answer — once the observations and question have been located, we want to obtain the answer no matter what.
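a sketch of that normalization step on a finite distribution, represented as a dict of unnormalized masses:

```python
# sketch: rescale a finite distribution's masses to sum to 1, so
# "harder to find" answers aren't penalized once the question is located.
def normalize(dist: dict) -> dict:
    total = sum(dist.values())
    return {k: v / total for k, v in dist.items()}

d = normalize({"u1": 0.02, "u2": 0.06})
# masses now sum to 1 (approximately {"u1": 0.25, "u2": 0.75})
```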

finally, we'll just execute the action returned by $\mathit{\text{AI}}(\mathit{\text{Score}})$.

unless otherwise specified on individual pages, all posts on this website are licensed under the CC_-1 license.

unless explicitly mentioned, all content on this site was created by me; not by others nor AI.