wouldn't it be neat if we didn't have to worry about infinite ethics?
i think it is plausible that there are finitely many moral patients.
the first step is to deduplicate moral patients by computational equivalence: two patients that implement the same computation count as one. this merges not only humans and other creatures we usually care about, but probably also a lot of other potential sources of moral concern.
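a minimal sketch of what deduplication by computational equivalence could look like, with the caveat that equivalence of arbitrary programs is undecidable in general; this toy only works because the "patients" here are functions over a finite domain, so extensional equivalence is checkable by enumerating a truth table. the names and examples are purely illustrative, not part of the post's argument:

```python
# toy sketch: deduplicating "patients" modeled as boolean functions,
# where two descriptions count as the same patient iff they compute
# the same function. real computational equivalence is undecidable in
# general; this works only because the domain is finite.
from itertools import product

# hypothetical "patient descriptions": distinct expressions, some of
# which compute the same function
descriptions = {
    "and": lambda a, b: a and b,
    "not_or_of_nots": lambda a, b: not (not a or not b),  # equals "and" by de morgan
    "xor": lambda a, b: a != b,
}

def truth_table(f):
    # canonical form: the function's outputs over every possible input
    return tuple(f(a, b) for a, b in product([False, True], repeat=2))

# merge descriptions that share a canonical form
deduped = {truth_table(f) for f in descriptions.values()}
print(len(descriptions), "descriptions,", len(deduped), "distinct patients")
# prints: 3 descriptions, 2 distinct patients
```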
then, i think we can restrict ourselves to patients in worlds that are discrete (like ours); even if there were moral patients in non-discrete worlds, it seems to me that, from where we are, we could only ever access discrete stuff. so, whether by inherent limitation, plain assumption, or just limiting the scope of this post, i'll only be talking about discrete agents (agents in discrete worlds).
once those limitations (deduplication and discreteness) are in place, there are only finitely many moral patients of any given size; the only way for an infinite variety of moral patients (or more precisely, moral patient moments) to come about is for some moral patients to grow in size forever. while infinite time seems plausible even in this world, it is not clear to me that whatever the hell a "moral patient" is can be arbitrarily complex; perhaps past a certain size, i start caring only about a subset of the information system that a "person" would consist of, a "sub-patient".
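the finiteness-per-size claim is just counting. a minimal sketch, under the illustrative assumption that a discrete patient of size n is (at most) some configuration of n bits, so there are at most 2**n of them, and finitely many of any bounded size:

```python
# toy counting argument: in a discrete world, a "patient" of size n is
# at most some configuration of n bits (an assumption for illustration),
# so there are at most 2**n distinct patients of size n.
def patients_of_size(n):
    # upper bound: number of distinct n-bit configurations
    return 2 ** n

def patients_up_to(N):
    # a finite sum of finite counts is finite; this is 2**(N+1) - 1
    return sum(patients_of_size(n) for n in range(N + 1))

# finite for any fixed size bound; infinity requires unbounded size
print(patients_up_to(10))
# prints: 2047
```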