suppose you program a computer to do bayesian calculations. it does a bunch of math with probability numbers attached to, for example, logical belief statements.
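to make that concrete, here is a minimal sketch of the kind of arithmetic involved: a single bayes update on the probability of a belief statement. the function name and the numbers are just illustrative, not taken from any particular library.

```python
def bayes_update(prior: float, likelihood_if_true: float, likelihood_if_false: float) -> float:
    # probability of the belief after seeing one piece of evidence
    evidence = likelihood_if_true * prior + likelihood_if_false * (1.0 - prior)
    return likelihood_if_true * prior / evidence

# e.g. updating a 50% belief on evidence that is 3x likelier if the belief is true
print(bayes_update(0.5, 0.9, 0.3))  # 0.75
```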
but suppose that each time you access one of those numbers, there is a tiny chance ε of a hardware failure (such as a cosmic ray bit flip) causing the memory/register to return an erroneous number.
this fact can inform our decisions about how many bits of precision to use when storing our numbers. indeed, the computer can never hold probabilities outside the range [ε; 1-ε]: the probabilities are clamped by the chance that, while they were being computed, a random hardware failure occurred.
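as a sketch of what that clamping means in code (the value of ε and the model of corruption are made up, just to have something concrete):

```python
import random

EPS = 1e-12  # made-up per-access failure chance, purely for illustration

def noisy_read(p: float, eps: float = EPS) -> float:
    # reading a stored probability: with chance eps the hardware returns garbage
    # (here modeled, arbitrarily, as a uniform value in [0, 1])
    return random.random() if random.random() < eps else p

def honest_probability(p: float, eps: float = EPS) -> float:
    # however confident the computed number looks, the reported probability
    # can't honestly leave [eps, 1 - eps]: with chance eps it's garbage anyway
    return min(max(p, eps), 1.0 - eps)
```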
if a probability is the result of many intermediate calculations, then the errors can accumulate. the computer might be able to rerun the computation to be more sure of its result, but it will never escape the range [ε; 1-ε].
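a small simulation of that accumulation (again with made-up numbers): a long chain of multiplications where every read goes through faulty hardware. most runs land on the exact answer, some don't, and no amount of rerunning gets the reported confidence out of [ε; 1-ε].

```python
import random

EPS = 1e-6  # exaggerated failure chance so corrupted runs actually show up

def noisy_read(p: float, eps: float = EPS) -> float:
    # with chance eps the hardware hands back an arbitrary value instead of p
    return random.random() if random.random() < eps else p

def chained_product(probs: list[float], eps: float = EPS) -> float:
    # multiply many probabilities, each read through faulty hardware;
    # every access is one more chance for a silent error to slip in
    result = 1.0
    for p in probs:
        result *= noisy_read(p, eps)
    # the final report is still clamped to [eps, 1 - eps]
    return min(max(result, eps), 1.0 - eps)

probs = [0.99] * 1000
exact = 0.99 ** 1000
trials = [chained_product(probs) for _ in range(10_000)]
corrupted = sum(abs(t - exact) > 1e-9 for t in trials)
print(f"exact: {exact:.6f}, runs hit by at least one failure: {corrupted}/{len(trials)}")
```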
this constraint feels to me like it would also limit the number of bits of precision one can meaningfully store: there are only so many ways to combine numbers in that range, with errors at each step of computation, before the signal is lost to error noise. i'm not sure and haven't worked out the math, but it may turn out that arbitrary-precision numbers, for example, are ultimately of no use: given a constant ε, there is a constant f(ε) maximum number of useful bits of precision.
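to put a very rough shape on f(ε) (a guess, not worked-out math): if any stored value is garbage with probability at least ε, then probabilities that differ by less than about ε seem indistinguishable, which would put the number of useful bits somewhere around log2(1/ε).

```python
import math

def useful_bits_guess(eps: float) -> int:
    # back-of-envelope only: treat anything finer than ~eps as noise
    return math.floor(math.log2(1.0 / eps))

for eps in (1e-6, 1e-12, 1e-18):
    print(f"eps = {eps:g}: roughly {useful_bits_guess(eps)} useful bits")
```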
this issue relates to the uncertainty of 2+2=4: logical reasoning, whether in a computer or in a human brain, is still probabilistic/error-prone, because of hardware failures.