my psi format is quite inspired by wolfram's hypergraphs, though those include a turing-complete rewriting system. in this post i sketch out something similar for psi which i'd argue is a lot more elegant.
in a way, wolfram's hypergraph rewrite system can be seen as a kind of linear/move-semantics system in which quantities are consumed and produced: edges in the graph are consumed and produced by rules, and their presence is required for a rule to apply. the system gets its nondeterminism from deciding which rule gets to consume a given edge when several could.
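to make that picture concrete, here's a toy python sketch (the representation is mine, not wolfram's actual formalism): the state is a multiset of edges, a rule fires only if its input edges are present, and firing consumes them, so two overlapping matches compete for the same edge.

```python
# toy wolfram-style rewriting: edges behave linearly (consumed on use)
from collections import Counter

def apply_rule(state, consumed, produced):
    """apply one rewrite if all consumed edges are present; None otherwise."""
    state = Counter(state)
    need = Counter(consumed)
    if any(state[e] < k for e, k in need.items()):
        return None                 # rule can't fire: a needed edge is missing
    state -= need                   # edges are consumed (linear/move semantics)
    state += Counter(produced)      # and new edges are produced
    return state

s = Counter({("a", "b"): 1, ("b", "c"): 1})
s2 = apply_rule(s, [("a", "b")], [("a", "c")])
# a second rule wanting ("a", "b") now can't fire against s2: the edge is gone
```

the nondeterminism in wolfram's system is exactly the choice of which of several applicable rewrites gets to run against a shared edge first.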
psi rewrite rules are similar to wolfram's, except for the following differences:
in this system, nodes can only be created, never consumed. this works great for "constructive" systems such as logic, where new true statements can be derived but existing ones are never destroyed; for systems like cellular automata, one can construct an artificial timekeeping scheme by defining each new state as a function of its past state.
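as a toy illustration of that timekeeping trick (a python sketch under my own representation, not part of psi itself): each generation of a cellular automaton is a fresh tuple derived from the previous one, and old generations are kept, never consumed.

```python
# create-only cellular automaton: each generation is new data that is a
# function of the previous generation; nothing is ever overwritten
def step(prev):
    """rule 90 (xor of neighbours, wrapping), producing a fresh generation."""
    n = len(prev)
    return tuple(prev[(i - 1) % n] ^ prev[(i + 1) % n] for i in range(n))

history = [(0, 0, 0, 1, 0, 0, 0)]
for _ in range(3):
    history.append(step(history[-1]))
# the whole history persists; "time" is just the chain of derived generations
```
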
given a starting set of nodes, the amount of data can only grow. a rule can be applied again to an existing set of nodes, but because psi merges observationally identical nodes, applying a rule to the same input a second time does nothing: the result already exists.
so, all information is defined, and tested for equality, purely on the basis of its relations to other information; and if two identical nodes are constructed at different "places" of the psi state, they are merged together, avoiding wolfram's need to do extra work to notice identical patterns in different states.
one can also hope that these properties allow an implementation of this rewrite system to more natively recognize, and merge, "locally equivalent" computations, the way hashlife does.
rule application itself is deterministic; the nondeterminism lies in which part of the computational expansion you choose to follow: which local area of the graph you observe from, and which rule applications' results you choose to track.
unlike "native multiverse" systems like wolfram's, where timelines on one hand and different locations in space on the other are separate sources of parallelism, here nondeterminism has only one source: which part of the set of nodes you follow (rather than: which hypergraph timeline-instant, and which piece of space within that timeline-instant).
in this way, nondeterminism with multiple timelines is implemented "on top" of the system: the base framework can be considered to be a deterministic system that just computes all timelines, and which one you choose to look at is the source of nondeterminism.
i expect that this unification of time/timelines together with space will greatly simplify bookkeeping for the entire history of expanding computations; in fact, with the exception of rule-constructed nodes that are not connected back to existing nodes, any expansion of the psi graph is also its own causal graph: new nodes point to the existing nodes that allowed them to exist.
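a hedged sketch of that property, with hypothetical names: if every rule-produced node carries references to the nodes it was derived from, walking those references backwards recovers the causal history for free, with no separate causal graph to maintain.

```python
# the graph is its own causal graph: each node records what allowed it to exist
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    label: str
    causes: tuple = ()  # the existing nodes this node was derived from

def ancestors(n):
    """every node in n's causal past, found by walking references backwards."""
    seen = set()
    stack = [n]
    while stack:
        for c in stack.pop().causes:
            if c not in seen:
                seen.add(c)
                stack.append(c)
    return seen

a = Node("axiom")
b = Node("lemma", (a,))
c = Node("theorem", (a, b))
```
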
note that, together with constructivity, the unification of time and space means that units of space in a cellular automaton also need to be defined relative to one another, instead of relying on "intrinsically different" vertices.
there seems to be one main limitation in this system: cycles can only exist within the piece of data produced by a single rule application. a cycle can't span the results of multiple rule applications, because one application precedes the other, and the nodes produced by the first can't have fields pointing "into the future", at nodes that haven't been created yet.
there is a way, however, to overcome this: if rules themselves are objects in the graph, maybe in the form (→ input-list output-list), then some rules can produce novel rules. one can then encode "general rules" able to generate (with full turing-complete ability) arbitrarily complex rules, which in turn can be applied to produce arbitrarily complex cycles in a single rule step.
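a speculative sketch of that idea (the (→ input-list output-list) encoding here is my guess at one concrete representation): a rule is just a node pairing inputs with outputs, so an ordinary function can stand in for a meta-rule that manufactures, at runtime, a rule whose single application creates an n-node cycle.

```python
# rules as graph objects: (→ input-list output-list), so rules can make rules
def rule(inputs, outputs):
    return ("→", tuple(inputs), tuple(outputs))

def make_cycle_rule(n):
    """a meta-rule body: given n, emit a rule whose single application
    creates an n-node cycle, which no sequence of smaller applications
    could build (later applications can't point back into the past)."""
    nodes = [f"c{i}" for i in range(n)]
    edges = [(nodes[i], nodes[(i + 1) % n]) for i in range(n)]
    return rule(["seed"], edges)

r = make_cycle_rule(3)
# r == ("→", ("seed",), (("c0", "c1"), ("c1", "c2"), ("c2", "c0")))
```

since the meta-rule is itself expressible as rewriting, arbitrarily complex cycles become reachable in one rule step, without ever violating the create-only ordering.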