tammy's blog about

AI alignment,
utopia,
anthropics,
and more;

(edit: this post has gotten a reply from my interlocutor, making a broader introduction to the topic at hand and then making her side of the argument. you might want to read it before you read this.)

kolmogorov complexity seeks to determine how complex a piece of information is by asking: what is the size of the smallest program that produces that piece of information?
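kolmogorov complexity itself is uncomputable, but compression gives a computable *upper bound* on it: the compressed size of a string is the length of one particular program (compressed bytes plus a fixed decompressor) that reproduces it. a minimal sketch of that idea, using python's zlib — the specific strings are just invented examples:

```python
import random
import zlib

def compressed_size(s: str) -> int:
    # an upper bound on kolmogorov complexity: the length of the
    # zlib-compressed bytes (plus, implicitly, the fixed size of a
    # zlib decompressor, which is the same for every string)
    return len(zlib.compress(s.encode()))

structured = "ab" * 500  # 1000 characters, but a tiny program prints it

rng = random.Random(0)
scrambled = "".join(rng.choice("abcdefgh") for _ in range(1000))  # no pattern

print(compressed_size(structured))  # small
print(compressed_size(scrambled))   # much larger
```

the structured string compresses to a handful of bytes while the scrambled one stays large, which is the intuition "simple information has a short program" made concrete.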

i was arguing with someone about how objective kolmogorov complexity is: their argument against objectivity is that the choice of language matters, but my position is that some pieces of information (such as languages themselves) are just gonna tend to be *generally (and thus objectively) simpler* than others (and that, generally, we should use simpler *languages* as our kolmogorov simplicity-measuring language).

let us consider "languagespace", a directed graph where there is one vertex per possible turing-complete language (there are countably infinitely many of them).

a language can be used to measure the simplicity of any other language (because of turing completeness, every language can express every other language), and we'll require this simplicity measure to induce a total order that is strict: no two input languages get assigned an equal simplicity measure.

there is an edge going from every vertex to every other vertex, and each edge is labelled with a natural number: an edge from language X to language Y with label N means that Y is the N-th simplest language when using X as a kolmogorov measure of complexity.

now, imagine a random walk through this graph, where at each step you follow one outgoing edge at random, assigning to the edge with label N a probability of 1/(2^N); this way, starting from any language X, you tend to move to a language that X considers simple (and the infinite sum of all the probabilities is indeed 1).
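a minimal simulation of such a walk, on an invented five-language toy graph. the orderings below are made up for illustration (real cross-language simplicity measures aren't something we can just look up), and since the toy graph is finite, the 1/2^N weights get normalized rather than summing to 1 on their own:

```python
import collections
import random

# toy "languagespace": each language ranks all the others from simplest
# (rank 1) to least simple. these orderings are invented for illustration.
rankings = {
    "ski":       ["turing", "rule110", "brainfuck", "java"],
    "turing":    ["ski", "rule110", "brainfuck", "java"],
    "rule110":   ["ski", "turing", "brainfuck", "java"],
    "brainfuck": ["ski", "turing", "rule110", "java"],
    "java":      ["ski", "turing", "rule110", "brainfuck"],
}

def step(lang: str, rng: random.Random) -> str:
    # the N-th simplest language gets weight 1/2^N; random.choices
    # normalizes the weights, which the finite toy graph requires
    ranked = rankings[lang]
    weights = [2.0 ** -(n + 1) for n in range(len(ranked))]
    return rng.choices(ranked, weights=weights)[0]

def walk(start: str, steps: int, rng: random.Random) -> collections.Counter:
    visits = collections.Counter()
    lang = start
    for _ in range(steps):
        lang = step(lang, rng)
        visits[lang] += 1
    return visits

rng = random.Random(0)
visits = walk("java", 20_000, rng)
print(visits.most_common())
```

in this toy, the walk spends most of its time among the languages that everyone ranks as simple, and very little around "java", which everyone ranks last — the kind of concentration the conjecture below expects.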

my claim is that this type of random walk through the infinite directed graph of languagespace would, after sufficiently many steps, tend to spend more time around what i'll call the "central cluster", than any other collection of languages. the "central cluster" is a set of what i think of as "simple languages", such as SKI-calculus, turing machines, cellular automata, and other "simple" instances of common models of computation.

this, however, is merely a conjecture on my part, and the person i was arguing with claims that the random walk would have no particularly "central" cluster it would tend to converge around, but instead it would end up gravitating around any of an infinite number of such "mutually simple" clusters.

i've come to be convinced that i am wrong about this.

imagine that there exists a finite set of languages that "attracts" the random walk more than the rest of languagespace does. let's call that set A, and let's say it contains two languages: A1 and A2.

now, there is probably another set of languages, B, containing languages B1 and B2. in fact, given that languagespace is infinite, it seems silly to think such an isomorphic set of languages doesn't exist.

for example:

language A1's ordering: [A1, A2, B1, B2, …]
language A2's ordering: [A1, A2, B1, B2, …]
language B1's ordering: [B1, B2, A1, A2, …]
language B2's ordering: [B1, B2, A1, A2, …]

(and the "…" rest of the list is identical in all four languages)

finite cluster A and finite cluster B are isomorphic in terms of their lists of language simplicities, so the random walk will encounter A as much as B. even if you're willing to add B1 and B2 to the set of "objectively simplest languages", you can then imagine yet another set of languages that is isomorphic to the new one you have, and so on forever.
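this symmetry can be checked on a four-language toy chain built from the orderings in the example (self-comparisons dropped, since edges only go between distinct languages): computing the chain's stationary distribution by power iteration gives cluster A exactly as much probability mass as cluster B. a sketch, with the same invented orderings:

```python
# transition probabilities from the example orderings, self excluded:
# the N-th simplest language gets weight 1/2^N, normalized to sum to 1
orderings = {
    "A1": ["A2", "B1", "B2"],
    "A2": ["A1", "B1", "B2"],
    "B1": ["B2", "A1", "A2"],
    "B2": ["B1", "A1", "A2"],
}

def transition_row(order):
    raw = [2.0 ** -(n + 1) for n in range(len(order))]
    total = sum(raw)
    return {lang: w / total for lang, w in zip(order, raw)}

P = {lang: transition_row(order) for lang, order in orderings.items()}

# power iteration: push a uniform distribution through the chain
# repeatedly until it settles on the stationary distribution
pi = {lang: 0.25 for lang in orderings}
for _ in range(200):
    nxt = {lang: 0.0 for lang in orderings}
    for src, row in P.items():
        for dst, p in row.items():
            nxt[dst] += pi[src] * p
    pi = nxt

print(pi)  # cluster A and cluster B end up with equal mass
```

swapping A1↔B1 and A2↔B2 maps the chain onto itself, so nothing in the walk can prefer one cluster over the other — which is the crux of the argument.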

therefore, there is no finite set of simplest languages.

unless otherwise specified on individual pages, all posts on this website are licensed under the CC0-1.0 license.

unless explicitly mentioned, all content on this site was created by me; not by others nor by AI.