john carmack has started working on AGI and apparently, despite yudkowsky's efforts, he's hard to alignmentpill — as in, convince that alignment is a difficult and important matter.
my current model is that, if a very smart person studies the problem of AGI, whether they become alignmentpilled "from the inside" (by working on the problem) is a matter of what order they attack the problem in. if they start from "hmm, what's the input to the AI's motivation/decision-theory system?", then they're a lot more likely to alignmentpill themselves than if they start from "hmm, how to optimize decision-making?". given this, and vague things i've heard about carmack, i'll emit the following predictions as to which of those possibilities happens first, conditioned on the assumption that nothing interrupts his work, such as someone else making clippy first:
(predictions are spoilered so you can make your own guesses without being anchored — click on a prediction to see the percentage)