in the sequences it is argued that 0 and 1 are not probabilities; that these "certainty ratios" aren't meaningful. but, i can think of a situation that challenges this.
imagine a fully deterministic world (for example, one running on a cellular automaton), and imagine that in this world there are some intelligences, artificial or natural, that exploit this determinism to make flawless logical deductions (for example, automated theorem proving algorithms running on computers that can never have undetected hardware failures). if they think about mathematics, then under the axioms they work with, 2 + 2 will always equal 4, and any mathematical computation will either end with them knowing they lack the computational resources to carry it out, or with a result guaranteed to be true with the same certainty as the fact that the cellular automaton's rules will be applied next tick.
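to make the setup concrete, here is a minimal sketch (in python; the details, like the choice of rule 110, are illustrative assumptions, not anything from the original argument) of such a world: an elementary cellular automaton on a small ring of cells. the same state always produces the same successor, and some claims about the world can be settled with full certainty by finite reasoning:

```python
def step(cells, rule=110):
    """apply one tick of an elementary cellular automaton.

    cells is a tuple of 0/1 values with wrap-around neighbors; the
    next value of each cell is read out of the rule's bit pattern.
    """
    n = len(cells)
    return tuple(
        (rule >> ((cells[(i - 1) % n] << 2)
                  | (cells[i] << 1)
                  | cells[(i + 1) % n])) & 1
        for i in range(n)
    )

# determinism: the same state always yields the same successor,
# so "what happens next" is not a matter of probability at all.
state = (0, 0, 0, 1, 0, 0, 0, 0)
assert step(state) == step(state)

# a deduction the inhabitants could make with certainty: a ring of
# 8 cells has only 2**8 states, so by pigeonhole every trajectory
# must revisit some state (enter a cycle) within 256 steps.
seen = set()
while state not in seen:
    seen.add(state)
    state = step(state)
assert len(seen) <= 256
```

the pigeonhole claim at the end is the kind of statement at issue: given the rules, it is not merely very probable but provably true of every possible trajectory.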
now, these beings still have a use for probability and statistics: those can be used to reason about parts of the world they don't have complete information about. but there will be some contexts, both purely in their minds (such as logic or math) and sometimes in the real world (they could make assessments like "this box cannot contain any spaceship of a certain size"), that will be, functionally, certain.
it could be argued that they should still weigh everything by the probability of unknown unknowns; for example, their cellular automaton might have rules that apply only very rarely, rules they have never yet had a chance to observe but might observe later. but suppose they assume the rules of their world are exactly as they think, and suppose they happen to be correct in that assessment. does that not make some of their deductions actually, entirely certain?