Anoma is a cybernetic consensus system for causal accounting. Using Anoma, agents can create, merge, and verify cryptographically content-addressed histories in a manner which enforces arbitrary logical relations, including directed and undirected identity relationships and a distributed linear resource logic. These logics are used as the substrate for a cryptographic accounting system which provides a quantifiable view of entanglement, a pairwise measure of the degree of a particular kind of relationship between two identities. Different measures of entanglement can be fed back into the system to inform local choices which require trust, such as delegation of data storage and consensus provisioning, allowing for automatic reduction of computational resource expenditure in high-entanglement interactions. These measures can also be input into and output from the system in order to track relations of interdependence in the external world.

Consensus over history in conjunction with user data input allows for the provisioning of causal accounting, in that the system can combine local information about local causal dependence and local agent preferences into a gestalt system model of causal and preference relations, and provide pairwise and relative measures of preference satisfaction as outputs which can be used for resource provisioning decisions in the future.

Anoma's architecture is scale-free, in that the system's fundamental unit of cryptographic identity is closed under composition, so the system can model relational topologies spanning arbitrary levels of individuality. Under the assumption of an inverse-power law with regards to economic distance, the network of real relations viewed from outside will take the form of a fractal, i.e. recurrence of form independent of scale.

Anoma provides computational integrity (correctness) and informational locality (privacy) on the basis of cryptographic assumptions. The protocol architecture abstracts cryptographic primitives by their information-theoretic properties, so implementations of particular primitives can be chosen and updated for availability, maturity, and performance over time.

These documents describe the architecture of the Anoma protocol. They are intended to be complete, in that enough is said to define precisely what a valid implementation of Anoma must do, and minimal, in that no more is said than that.

Anoma is free, in both the senses of "free speech" and "free beer". The source for these documents is on Github, permissively licensed, and they can be forked or edited as you like. At present, this particular repository is stewarded by the Anoma Foundation. Contributions are welcome.

NOTE: These documents are not yet complete. If you're reading this page right now, there's a high chance you might be interested in Namada.

Happy reading!

Angles of approach

Anoma can be viewed through the lens of many different languages.

TODO: Write this section. Readers: skip for now, move on to Architecture.


Blockchain systems architecture

vis-a-vis cosmos/ibc

  • rollups: rollups-on-demand
  • cosmos/ibc: state not tied to instances in the same way, consensus provision separate from application logic, automatic resolution of cross-instance state
  • fractal scaling - implementation of all n layers
  • mesh security - scale-free implementation architecture

We unify the privacy-preserving double-spend prevention technique of Zerocash/Zexe/Taiga - nullifiers, uniquely bound to note commitments, unlinkable, and calculable only with secret information - with the double-spend prevention required of distributed database systems / blockchains (preventing multiple valid conflicting descendants of the same block), recasting the mechanism as a distributed enforcement system for a linear resource logic.

  • We unify various concepts of state-machine-level cryptographic identity -- public-key-based accounts, smart-contract accounts, BFT light clients, threshold keys -- and network-level cryptographic identity into a singular information-theoretic duality of external identity and internal identity, the fundamental abstraction on top of which the protocol is built.
  • We unify the concepts of message and state, in that each message commits to its own history, and that there is no state other than can be computed from a set of messages at a particular point in partially-ordered logical time.
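The nullifier mechanism described above can be sketched in a few lines. This is an illustrative model only, not the protocol's actual commitment or PRF scheme: the hash constructions (`note_commitment`, `nullifier`) and the `spent` set are hypothetical stand-ins.

```python
import hashlib

def h(*parts: bytes) -> bytes:
    """Hash helper standing in for a protocol-specified commitment/PRF scheme."""
    d = hashlib.sha256()
    for p in parts:
        d.update(p)
    return d.digest()

def note_commitment(note: bytes, randomness: bytes) -> bytes:
    """A commitment binds to the note's contents plus randomness, so the
    commitment alone reveals nothing about the note."""
    return h(b"commit", note, randomness)

def nullifier(secret_key: bytes, commitment: bytes) -> bytes:
    """The nullifier is computable only with the owner's secret key and is
    uniquely bound to the commitment: spending the same note twice yields
    the same nullifier, which the linear resource logic rejects."""
    return h(b"nullify", secret_key, commitment)

spent: set[bytes] = set()

def try_spend(secret_key: bytes, note: bytes, randomness: bytes) -> bool:
    nf = nullifier(secret_key, note_commitment(note, randomness))
    if nf in spent:
        return False  # double-spend detected
    spent.add(nf)
    return True
```

Observers see only nullifiers, which are unlinkable to commitments without the secret key, yet duplicates are still detectable.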

TODO: Write this section. Readers, skip for now.

Cybernetics of coordination

  • Cybernetic systems architecture
  • That paper by Aurora Apolito

TODO: Write this section. Readers, skip for now.

Anthropology of money

  • Graeber's "wave theory" of credit-cash money

TODO: Write this section. Readers, skip for now.

Web-of-trust history

  • Twitter trust graph
  • Facebook trust graph
  • Money trust graph


TODO: Write this section in a clearer way.

Principle of locality

The central principle in the design of Anoma's protocol architecture is that of locality. For now the term itself will be left without explicit definition, but the sense is closest to that in which physics uses it, except that we are concerned principally with proximity in informational space instead of physical space (where the sense used by physics concerns the relation between the two). Locality takes several forms:

  • Temporal locality, i.e. local orderings only
  • Informational locality (privacy), i.e. keep information to smallest group by default
  • Measurement locality, i.e. rely on local measurements only
  • Value locality, i.e. local values may differ
  • Name locality, i.e. local names may differ

Conceptual context

The Anoma architecture operates on the basis of agents. The architecture does not presume any sort of global view or global time. It also does not presume any particular motivations of agents, but rather describes the state of the system as a function of the decisions taken by agents over (partially ordered) time.

  1. Agent is a primary notion in the Anoma protocol that aims to extend/replace the notion of process in the distributed systems literature.

  2. Agents are assumed to have the ability to:

    • generate local randomness,
    • locally store and retrieve data,
    • perform arbitrary classical computations,
    • create, send, receive and read messages over an arbitrary, asynchronous physical network.
  3. Agents may have local input (e.g. human user input) and/or local randomness (e.g. from a hardware random number generator).

  4. Agents can join and leave the system at any time.

  5. All actions committed by agents are recorded in the history. To commit an action is to send a message. The state of the system at any point in time is a function of the history of messages sent by agents up to that point in time.
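As a minimal illustration of point 5, one might model the system state as a pure function of the set of messages observed so far. The `state_of` fold below is a hypothetical example (counting messages per sender), not part of the protocol.

```python
from typing import Dict, FrozenSet, Tuple

# A message here is just (sender, payload); there is no state other than
# what can be computed from a set of messages.
Message = Tuple[str, int]

def state_of(messages: FrozenSet[Message]) -> Dict[str, int]:
    """Fold an observed message set into a state. Iterating a sorted copy
    makes the fold deterministic, reflecting that concurrent messages have
    no inherent order in partially-ordered logical time."""
    counts: Dict[str, int] = {}
    for sender, _payload in sorted(messages):
        counts[sender] = counts.get(sender, 0) + 1
    return counts
```

Two agents that have observed the same message set necessarily compute the same state, regardless of arrival order.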

The rest of this specification defines the Anoma protocol, which is specific logic that agents run to read, create, and process messages. For convenience, the Anoma protocol shall be referred to henceforth as just the protocol.

Acceptors need to be aware of the different definitions of learners in order to know what correct behavior is. The sets of agents corresponding to different learners may be empty or may overlap; acceptors have to follow these definitions regardless.

We briefly describe how the communication for a consensus round works. Suppose we have two learners a and b which refer to agents that are interested in blockchain A and blockchain B respectively. Proposers propose a chimera block by sending 1a messages, each carrying a value and a unique ballot number (round identifier), to the acceptors. All acceptors in all involved chains (A and B) send 1b messages to each other to communicate that they've received a 1a message. When an acceptor receives a 1b message for the highest ballot number it has seen from one of learner a's quorums of acceptors, it sends a 2a message labeled with a and that ballot number. There is one exception: once a safe acceptor sends a 2a message for a learner a, it never sends a 2a message with a different value for a learner b, unless one of the following is true:

  • It knows that a quorum of acceptors has seen 2a messages with learner a and a ballot number higher than that of its earlier 2a.
  • It has seen Byzantine behavior that proves a and b do not have to agree.

A learner decides on a block when it receives 2a messages with the same proposed block and ballot number from one of its quorums of acceptors.


  • Base chains are two or more independent chains between which we would like to carry out atomic transactions. To be compatible, the chains must adhere to a specific set of protocol requirements, for example IBC support.
  • A chimera chain is a chain that allows atomic transactions to be carried out on objects from the base chains. It carries an additional consensus mechanism, that is dependent on the consensus of the base chains.

  • A learner is a client that is interested in the value decided by the voting process. Learners might be full nodes, light client nodes, or nodes from other chains.

  • An acceptor is an agent participating in the voting process of the consensus protocol. Non-byzantine acceptors are called real acceptors.

  • A quorum is a subset of acceptors sufficient to make a learner decide on a value. For a learner to maintain consistency (avoid deciding on two contradictory values), any two of its quorums must have a common real acceptor. Most chains achieve this practically by making the intersection of the quorums big enough, i.e. the acceptors in the intersection being backed by more than 1/3 of the stake of each chain under a <1/3 Byzantine assumption. For example, suppose:

    • Base chain A has a total stake of S_A.
    • Base chain B has a total stake of S_B.
    • Any set of acceptors backed by more than 2/3 S_A is a quorum of chain A. This means more than 1/3 S_A would have to back unsafe acceptors for chain A to fork.
    • Any set of acceptors backed by more than 2/3 S_B is a quorum of chain B. This means more than 1/3 S_B would have to back unsafe acceptors for chain B to fork.
    • Suppose that, for every quorum of chain A and every quorum of chain B, the acceptors in their intersection are backed by more than 1/3 S_A and more than 1/3 S_B. This would mean that, in order for atomic batches on the chimera chain to lose atomicity, more than 1/3 S_A and more than 1/3 S_B would have to back unsafe acceptors.
      • When a batch loses atomicity, the transactions on one state machine (say, A) are executed, but not the transactions on the other state machine (say, B). However, each state machine itself remains consistent: neither A nor B forks.
      • This means some chimera chains offer atomicity with lower (yet well-defined) levels of integrity than their base chains' no-fork guarantees.
  • A proposer is an acceptor that may propose a new block according to the rules of the blockchain. For Typhon, the "blocks" of the consensus protocol are the headers produced by the mempool. A potential proposer would need (a) data availability, (b) the ability to sign messages, (c) something at stake (to prevent spam) and (d) the ability to communicate with the acceptors. Acceptors that are in the overlaps of quorums may be especially well suited to be proposers, but other acceptors (or even other machines) might be proposers as well. The Heterogeneous Paxos technical report effectively uses weighted voting to select proposers, but perhaps there are interesting tricks with VRFs that would improve efficiency.
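The quorum-overlap condition above can be checked mechanically. The sketch below assumes simple per-acceptor stake tables; `quorums_overlap_safely` is a hypothetical helper, not protocol code.

```python
from typing import Dict, Set

def quorums_overlap_safely(stake_a: Dict[str, float],
                           stake_b: Dict[str, float],
                           overlap: Set[str]) -> bool:
    """Check the rule above: a quorum overlap must be backed by more than
    1/3 of each base chain's total stake for chimera-chain atomicity to
    carry the same integrity as the base chains' no-fork guarantees."""
    total_a, total_b = sum(stake_a.values()), sum(stake_b.values())
    backing_a = sum(stake_a.get(x, 0.0) for x in overlap)
    backing_b = sum(stake_b.get(x, 0.0) for x in overlap)
    return backing_a > total_a / 3 and backing_b > total_b / 3
```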


  1. Acceptors are staked: An acceptor has a certain amount of stake backing them. This stake is either fully their own stake or is partly delegated to them by other token holders.
  2. Quorums are known: Quorums are defined implicitly by the internal logic of each chain. For most proof-of-stake chains, a quorum is a subset of acceptors that have more than 2/3 of the stake backing them.
  3. Large overlap of quorums: A practical way to ensure there are safe acceptors in the overlap between quorums. To guarantee atomicity with the same integrity as the base chains, each quorum overlap must be backed by more than 1/3 of the stake of each base chain.
  4. Connectivity: All acceptors in a chimera chain communicate with each other (even if that means talking to acceptors who are not on the same base chain). This communication can be indirect (i.e. gossiping through other acceptors), so long as all pairs of honest acceptors can always communicate.
  5. Only one learner per main chain: We model the learner graph as one learner per main chain; full nodes can then instantiate their base chain's learner to make decisions. The reason is that in Heterogeneous Paxos the learner graph is a closed graph with known nodes, while in the blockchain context we have an open network where we don't know who all the learners are.

Chimera Chain

In this section we describe how chimera chains operate.

To include an atomic batch of transactions from the transaction pool in a block, a block proposer either creates a genesis block (if there is no existing chimera chain) or builds on top of an existing chimera chain.

Genesis block

To be safe, the guarantee we want here is that future quorum updates on the base chains are received before voting happens on chimera chains.

  1. Create a genesis block that claims to be a "chimera chain" and allocate some unique name or index.
  2. Register (in a transaction) that genesis block on all base chains and allocate it some unique name, so they know to involve it in any future quorum updates.
  3. The first block appended on the chimera chain requires IBC messages from all base chains confirming that they have registered this genesis block.

Producing Blocks

The block consists of transactions from both chains or just one of the chains. Transactions can be bundled together to make sure they are carried out atomically.

It is possible to finalize more than one block at a time, as in GRANDPA (from the Polkadot project), which decouples block production from finalization. In this case, Heterogeneous Paxos would serve the role of a "finality gadget" in GRANDPA's terms, but blocks could be produced at any rate.

Moving Objects

Suppose state in each state machine is expressed in terms of objects (think object-oriented programming) with state and methods, and a location (each is located on a chain). Our ideas about "movable objects" should be applicable to any program entities which feature:

  • a unique identifier
  • a permanent set of public methods (an API) callable by users, or other methods on other objects in the same state machine, which can compute and return values
  • a set of private methods callable only by other methods on this object
  • mutable state (which only methods can mutate)

These are much like "smart contracts," which also carry internal mutable state and public APIs much like methods. An example of an "object" is any sort of account, such as a multisignature account or a single-user account:

  • accounts usually have a unique identifier
  • public methods include "send money," which takes as input some signed authorization and deducts money from the account.
  • mutable state includes some value representing the amount of money in the account.

One might want to move an object to another state machine. For simplicity, let's suppose it's a state machine with the same quorums (on a different, possibly chimera, chain). This is not so very different from various chains trying to get a lock on an account (objects) and then perform atomic operations on them.

This "move" operation should require an IBC send, and then deleting the original object. Or instead of deleting the original object the original object can be made a pointer to the object which now lives primarily on the destination chain. On the destination state machine, an IBC receive should be followed by creating a new copy of the object. Any "object identifiers" found in pointers from other objects will have to be preserved, which could be tricky.

Permissions for moving Objects

We have to consider who is allowed to move which objects where. One way to do this is to have "move" simply be a "private method" of any object: the programmer has to specifically program in any methods that transactions or other objects can call to move the object. The most straightforward private method definition would allow anyone who controls the object (e.g., its owners) to give permission for moving it.

We may want to allow chains to specify limits on what objects can move onto that chain. For example, they may require that objects have a method that can move them back to a base chain.

Epoch Change

If new quorums for each base chain do not overlap on a real acceptor, atomicity cannot be guaranteed. We can no longer meaningfully commit atomic batches to the chimera chain. However, extensions to the chimera chain are consistent from the point of view of each base chain (but different base chains may "perceive" different extensions). This allows us to, for example, move objects to base chains even after the chimera chain loses atomic guarantees.

Kill Chains

Likewise, it's probably useful to kill chimera chains no one is using anymore. If we don't kill them, all chimera chains need to get an update whenever quorums change, and that can become infeasible. One possibility is to allow chains with no objects on them (everything has moved out) to execute a special "kill" transaction that prohibits any future transactions and sends an IBC message to all parent chains. On receipt, parent chains could remove that chimera chain from their registry.

Protocol Description

This section describes how a chimera block is appended to the existing chimera chain assuming all the setup has taken place.

For simplicity, this protocol is written as if we're deciding on one thing: the specific block that belongs on a specific blockchain at a specific height. All state and references are specific to this blockchain / height pair. We do not care about messages before this height. This height may be defined using the last finalized block in the blockchain.

Learner Graph

For each learner a, we call its set of quorums Q_a.

For each pair of learners a and b, there is a specific set of conditions under which they require agreement. We enumerate these as the edge a-b, a set of "safe sets" of acceptors: whenever at least one set in a-b is composed entirely of safe (non-Byzantine, but possibly crashed) acceptors, a and b must not decide different things.

We designate the set of acceptors who are actually safe as S. This is of course determined at runtime, and unknown to anyone.


The protocol is carried out by sending and receiving messages. The list below describes the structure of a typical Heterogeneous Paxos message.

  • chainId (Id): Identifies this chimera chain.
  • height (uint64): Current height of the chimera chain.
  • pktType (enum PktType {1A, 1B, 2A}): The type of the message.
  • ballot (Ballot = (Timestamp, Hash)): The hash of the proposed block and the timestamp of the initiated ballot. There is a total lexicographical order on Ballots.
  • sig (Sig): A signature unforgeably identifying the sender, e.g. the message is signed and the sender has a known public key in some PKI.
  • refs (Vec<Hash>): The list of hashes of messages received previously.
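For concreteness, the message structure might be modeled as follows (a hypothetical Python rendering of the fields above; the actual encoding is unspecified here). Representing a Ballot as a (timestamp, hash) tuple gives the total lexicographical order for free.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Tuple

class PktType(Enum):
    P1A = "1A"
    P1B = "1B"
    P2A = "2A"

# (timestamp, block hash); Python tuples compare lexicographically,
# matching the total order on Ballots described above.
Ballot = Tuple[int, bytes]

@dataclass
class Msg:
    chain_id: bytes      # identifies this chimera chain
    height: int          # current height of the chimera chain (uint64)
    pkt_type: PktType    # 1a, 1b, or 2a
    ballot: Ballot       # hash of the proposed block + ballot timestamp
    sig: bytes           # unforgeably identifies the sender
    refs: List[bytes] = field(default_factory=list)  # hashes of previously received messages
```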

In general, acceptors relay all sent or received messages to all learners and other acceptors. This ensures that any message received by a real acceptor is received by all real acceptors and learners.

Definition: Message Signer

Definition: Message Set Signers

We can extend the signer function to sets of messages, to mean the set of signers of those messages.

Messages also contain a field refs, which includes chained hashes of every message the sender has sent or received since (and including) the last message it sent. As a result, we can define the transitive references of a message, which should include every message the sender has ever sent or received:

Definition: Transitive References

To ensure that acceptors and learners fully understand each message they receive, they delay doing any computation on it (sometimes called delivery) until they have received all the messages in refs. As a result, acceptors and learners always process messages from any given sender in the order they were sent, and likewise the messages that sender received, recursively.
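The transitive-references and delayed-delivery rules can be sketched as follows, assuming a hypothetical store mapping each message hash to the hashes it directly references.

```python
from typing import Dict, List, Set

def transitive_refs(msg: bytes, refs: Dict[bytes, List[bytes]]) -> Set[bytes]:
    """Every message a message (transitively) references: the sender's
    full causal history, as described above."""
    seen: Set[bytes] = set()
    stack = list(refs.get(msg, []))
    while stack:
        m = stack.pop()
        if m not in seen:
            seen.add(m)
            stack.extend(refs.get(m, []))
    return seen

def deliverable(msg: bytes, refs: Dict[bytes, List[bytes]],
                received: Set[bytes]) -> bool:
    """Delay processing ("delivery") until every directly referenced
    message has been received."""
    return all(r in received for r in refs.get(msg, []))
```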

Consensus Round: Ballot

Next, we briefly describe how the communication for a consensus round works. Consensus is reached in four steps: proposing the chimera block, acknowledging receipt of proposals, establishing consensus, and termination.

Suppose we have two learners a and b from two different blockchains.

1a message: proposing blocks

A proposer proposes a new block by sending a 1a message to all acceptors, which includes

  • a value (the atomic transaction batch, for example)
  • a unique ballot number (round identifier)
  • the hash of the proposed block along with a timestamp of the ballot initiation (together forming the ballot).

If there is an existing chimera chain, the proposer can build upon that chain; if the proposer is starting a new chimera chain, it needs to lock some funds for that.

Also, the acceptor needs to check validity as soon as possible: it should not even "receive" an invalid proposal (or at least should not send a 1b message in response).

1b message: acknowledging receipt of proposals

On receipt of a 1a message, an acceptor sends an acknowledgement of its receipt to all other acceptors and learners in the form of a 1b message.

2a message: establishing consensus

When an acceptor receives a 1b message for the highest ballot number it has seen from one of learner a's quorums of acceptors, it sends a 2a message labeled with a and that ballot number.

There is one exception: once a safe acceptor sends a 2a message for a learner a, it never sends a 2a message with a different value for a learner b, unless one of the following is true:

  • It knows that a quorum of acceptors has seen 2a messages with learner a and a ballot number higher than that of its earlier 2a.
  • It has seen Byzantine behavior that proves a and b do not have to agree.
Specifics of establishing Consensus

In order to make a learner decide, we need to show that another, entangled learner could not already have decided.

Definition: Entangled

In an execution, two learners are entangled if their failure assumptions matched the failures that actually happen.

If some learner a does not agree with some other learner b, then a third learner c cannot possibly agree with both a and b.

Definition: Heterogeneous Agreement
  • Within an execution, two learners have agreement if all decisions for either learner have the same value.
  • A heterogeneous consensus protocol has agreement if, for all possible executions of that protocol, all entangled pairs of learners have agreement.
Definition: Accurate Learner

An accurate learner is entangled with itself.

A learner whose quorums contain too many failures is inaccurate. This is the equivalent of a chain that can fork.

In order to prevent entangled disagreement, we must define the conditions that will ultimately make learners decide:

Definition: Get1a

It is useful to refer to the 1a that started the ballot of a message: the 1a with the highest ballot number in its transitive references.

Definition: Ballot Numbers

The ballot number of a 1a is part of the message, and the ballot number of anything else is the highest ballot number among the 1as it (transitively) references.

Definition: Value of a Message

The value of a 1a is part of the message, and the value of anything else is the value of the 1a with the highest ballot among the messages it (transitively) references.
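The Get1a, ballot-number, and value definitions compose as in the following sketch, where `ONE_A` and `REFS` are hypothetical stores of 1a contents and message references.

```python
from typing import Dict, List, Optional, Tuple

Ballot = Tuple[int, bytes]  # (timestamp, block hash), ordered lexicographically

# Hypothetical stores: each 1a carries a (ballot, value) pair; every
# message records the hashes it directly references.
ONE_A: Dict[bytes, Tuple[Ballot, str]] = {}
REFS: Dict[bytes, List[bytes]] = {}

def get_1a(msg: bytes) -> Optional[bytes]:
    """The 1a that started the ballot of a message: the 1a with the
    highest ballot among the message itself and its transitive references."""
    best: Optional[bytes] = msg if msg in ONE_A else None
    seen: set = set()
    stack = list(REFS.get(msg, []))
    while stack:
        m = stack.pop()
        if m in seen:
            continue
        seen.add(m)
        if m in ONE_A and (best is None or ONE_A[m][0] > ONE_A[best][0]):
            best = m
        stack.extend(REFS.get(m, []))
    return best

def ballot_of(msg: bytes) -> Optional[Ballot]:
    m = get_1a(msg)
    return ONE_A[m][0] if m else None

def value_of(msg: bytes) -> Optional[str]:
    m = get_1a(msg)
    return ONE_A[m][1] if m else None
```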

Terminate: Finalizing blocks

A learner decides when it receives 2a messages with the same ballot number from one of its quorums of acceptors.

If no decision can be reached within a certain time, proposers must begin a new round (with a higher timestamp, and thus a higher Ballot). Proposers can start a new round by proposing a new block or by trying to finalize the same block again (in case there was no consensus).

Definition: Decision

A learner decides when it has observed a set of 2a messages with the same ballot, sent by a quorum of its acceptors. This allows the learner to decide on the value of the messages in the set. We call such a set a decision.
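The decision rule can be sketched as a simple quorum check; `is_decision` and its inputs are hypothetical illustrations, with `two_as` mapping each acceptor to the ballot of its 2a for this learner.

```python
from typing import Dict, FrozenSet, Optional, Set, Tuple

Ballot = Tuple[int, bytes]

def is_decision(two_as: Dict[str, Optional[Ballot]],
                quorums: Set[FrozenSet[str]]) -> bool:
    """A learner decides when some quorum of acceptors has all sent 2a
    messages with the same ballot."""
    for quorum in quorums:
        ballots = {two_as.get(a) for a in quorum}
        if len(ballots) == 1 and None not in ballots:
            return True
    return False
```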

Now we are ready to discuss what makes a message well-formed. This requires considering whether two learners might be entangled, and (unless we can prove they are not entangled) whether one of them might have already decided:

Definition: Caught

Some behavior can create proof that an acceptor is Byzantine. Unlike Byzantine Paxos, our acceptors and learners must adapt to Byzantine behavior. We say that an acceptor is Caught in a message if the transitive references of the message include evidence such as two messages x and y, both signed by that acceptor, in which neither is featured in the other's transitive references (safe acceptors transitively reference all of their prior messages).

Slashing: Caught proofs can be used for slashing.
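Equivocation evidence of the kind described above can be detected mechanically; the `caught` function below is a hypothetical sketch over a store of (signer, transitive references) records.

```python
from typing import Dict, Set, Tuple

def caught(msgs: Dict[bytes, Tuple[str, Set[bytes]]]) -> Set[str]:
    """An acceptor is Caught if it signed two messages, neither of which
    appears in the other's transitive references (a safe acceptor
    transitively references all of its prior messages)."""
    byzantine: Set[str] = set()
    hashes = list(msgs)
    for i, x in enumerate(hashes):
        for y in hashes[i + 1:]:
            signer_x, trefs_x = msgs[x]
            signer_y, trefs_y = msgs[y]
            if signer_x == signer_y and x not in trefs_y and y not in trefs_x:
                byzantine.add(signer_x)
    return byzantine
```

Such proofs are self-contained, so they can be relayed and used for slashing by any observer.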

Definition: Connected

When some acceptors are proved Byzantine, clearly some learners need not agree, meaning their edge in the CLG no longer contains a wholly-safe set: at least one acceptor in each safe set in the edge is proven Byzantine. Such learners are no longer connected. Homogeneous learners are always connected unless there are so many failures that no consensus is required.

Definition: Buried

A 2a message can become irrelevant if, after a time, an entire quorum of acceptors has seen 2as with different values, the same learner, and higher ballot numbers. We call such a 2a buried (in the context of some later message m).

Definition: Connected 2a Messages

Entangled learners must agree, but learners that are not connected are not entangled, so they need not agree. Intuitively, a 1b message references a 2a message to demonstrate that some learner may have decided some value. For learner a, it can be useful to find the set of 2a messages from the same sender as a 1b message (and sent earlier) which are still unburied and are for learners connected to a. The 1b cannot be used to make any new 2a messages for learner a with values different from these 2a messages.

Definition: Fresh

Acceptors send a 1b message whenever they receive a 1a message with a ballot number higher than any they have yet seen. However, this does not mean that the 1b's value (which is the same as the 1a's) agrees with that of 2a messages the acceptor has already sent. We call a 1b message fresh (with respect to a learner) when its value agrees with that of the unburied 2a messages the acceptor has sent.

Definition: Quorums in Messages

2a messages reference quorums of 1b messages with the same value and ballot. A 2a's quorums are formed from fresh 1b messages with the same ballot and value.

Definition: Well-Formed

We define what it means for a message to be well-formed.

An acceptor that has received a 1b sends a 2a for every learner for which it can produce a well-formed 2a.

Before processing a received message, acceptors and learners check that the message is well-formed; malformed messages are rejected.

Incentive Model


  • Incentivizing token holders to put down their stake for security
  • Disincentivizing byzantine behavior


  • Participating in consensus based on backing stake: this includes validating and voting
  • Producing blocks

Slashing: there are a number of offenses

  • Invalid blocks
  • Equivocation (caught)


The first question is: once there is no demand for atomic batches of transactions, do we keep the chimera chain alive?

We need to figure out whether killing the chimera chain (and potentially requiring a new genesis later) is not too expensive.

The second question is whether we propagate quorum changes whenever they happen, or only when there is a transaction.

The first option is expensive and can lead to attack if not handled well. We need to figure out whether the latter option is safe.

  • If the answer to the first question is yes and we pick the first option for the second question:

Since all chimera chains need to be updated when the quorums on the base chains change, we need to figure out who pays for these updates. For example, a chimera chain might have had a block produced last week, but since then the quorums have been updated 200 times; that is 200 blocks that do not have any transactions with transaction fees. If this is not paid by anyone, it becomes a burden for acceptors and an attack vector for the adversary.

We may need to add a "locked" fee for making new chimera chains. In particular, we don't want an attacker to make a lot of chains, forcing base chains to update all of them with each quorum change.

Alternatively, each "chimera chain" could keep an account on each base chain that is funded by a portion of transaction fees, from which a small fee is extracted with each validator update. When the account runs dry, the (parent)base chains are allowed to kill the chimera chain (even if there are still objects on there). We could allow anyone to contribute to these accounts. We could even prohibit anyone who did not contribute from sending IBC messages between chimera and parent chains.

Security Discussion

Note that the chimera chain cannot guarantee atomicity under the same adversary assumptions as the base chains. For example, if we assume the adversary controls less than 1/3 of the stake to ensure safety on the base chains, atomicity is not guaranteed against such an adversary, but only against a weaker one. This is important for users, so they can decide whether chimera chains are secure enough for their transactions.

By setting the quorums of each learner to be the same as the quorums of the corresponding base chain, we ensure that each learner's view is as consistent as the base chain. Specifically, two instantiations of the learner for some base chain should decide on the same blocks in any chimera chain, unless the adversary is powerful enough to fork that base chain.

"Accurate" (or "self-engangled") learners (defined above) correspond to base chains where the adversary is in fact powerful enough to fork the chain. Proving that a learner is accurate is equivalent to proving that its base chain cannot fork.

Two learners a and b corresponding to different base chains will decide on the same blocks (which is what makes atomic batches useful) so long as one of the safe sets in the edge a-b is composed entirely of safe acceptors. The stake backing these safe sets represents the "assurance" that atomic batches remain atomic, as well as the maximum slashing punishment should they lose atomicity.

Loss of atomicity is a bit like a "trusted bridge" that turns out not to be trustworthy: each state machine within the chimera chain has as much integrity as its base chain, but atomicity properties of multi-state-machine atomic batches have a lesser, but still well-defined, guarantee. Loss of atomicity allows double spending on the chimera chain. And while misbehavior has happened in such an attack, it is not trivial to slash the misbehaving acceptors, since according to each chain everything has been carried out correctly.

Open Challenges

Programming Model

Atomic Batches

We'll need a way to specify that a batch of transactions should be committed together or not at all. Ideally, this should communicate to the programmer how reliable this atomicity is (see "practical considerations" below). On a chimera chain, batches can include transactions from any of their base chains. If we want to have match-making, transactions will need to be able to say something like "if I'm in an atomic batch that matches these criteria, then do this...".

Each atomic batch should be scheduled within one block.

(We encode transactions with Protobuf and Borsh.) We need to define structures such that transactions can be bundled and cannot be carried out separately.

Atomic Communication

Transactions within an atomic batch need to be able to send information to each other (and back). For example, in a market with a fluctuating exchange rate, a market transaction could send a message to an account, which transfers money, and sends a message to another account, which transfers goods. We need a language in which this communication takes place with minimal constraints on the state machines involved, so we should probably adapt the IBC communication language.

We need to figure out how inter-chain communication works for transactions communicating with each other within an atomic batch.

Can we have synchronous (in terms of blocks) IBC?

Yes: when the quorums involved in two chains are the same, then we can guarantee that (unless there are sufficient failures to fork the chain) an IBC message sent in one chain will arrive at the other chain (in a transaction) within a specific number (perhaps as few as 1?) of blocks. This is because some of the machines in the quorum that approved the "send" transaction must be in the quorum that decides the next block, so they can refuse to decide on blocks that lack the "receive" transaction.
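The overlap argument can be made concrete with a toy model in which quorums are sets of machine identifiers (illustrative only):

```python
def can_guarantee_receipt(send_quorum, next_block_quorums):
    """The 'receive' can be forced within a bounded number of blocks iff
    every quorum that could decide the next block shares a member with the
    quorum that approved the 'send': that shared member can refuse to
    decide on blocks lacking the 'receive' transaction."""
    return all(send_quorum & q for q in next_block_quorums)


# Identical quorums on both chains: overlap is trivially guaranteed.
quorums = [{"a", "b", "c"}, {"b", "c", "d"}]
assert can_guarantee_receipt({"a", "b", "c"}, quorums)
# Disjoint quorums: no member can force the receive transaction.
assert not can_guarantee_receipt({"x", "y"}, quorums)
```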

Note that we may need to rate-limit sends to ensure that all sends can be received in time (availability).

Note also that blinding transactions makes this hard.

Match Making

We can in principle do cross-chain match-making. If we want an on-chain market, a chimera chain might be a good place to do it. Alternatively, full nodes in the gossip layer might gather sets of transactions that match the transactions' "if I'm in an atomic batch ..." criteria, bundle them into atomic batches, and then send those off to be committed.

We may want to incorporate some incentive scheme for good match-making. Matchmakers include nodes that are full nodes on both chains, and in principle could include any node that sees the request transactions.
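A matchmaker's job can be illustrated with a toy pairing function over hypothetical "give"/"want" criteria (not the actual criteria language):

```python
def match_batches(offers, wants):
    """Toy matchmaker: pair transactions whose offered asset satisfies
    another's wanted asset in both directions, bundling each pair into
    a candidate atomic batch."""
    batches = []
    for o in offers:
        for w in wants:
            if o["give"] == w["want"] and w["give"] == o["want"]:
                batches.append((o["id"], w["id"]))
    return batches


offers = [{"id": "t1", "give": "EUR", "want": "XAN"}]
wants = [{"id": "t2", "give": "XAN", "want": "EUR"},
         {"id": "t3", "give": "BTC", "want": "EUR"}]
assert match_batches(offers, wants) == [("t1", "t2")]
```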

Changing Quorums?

2 Phase Commit

We could require that any quorum-changing transaction has to be 2-phase committed. Essentially, the "main chain" announces that it won't progress beyond a certain block until everyone has appended a new block that sets the (same) new quorums, and sends a response by IBC. It can then progress only with IBC responses from all the other chains that use these quorums.
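A minimal sketch of this two-phase flow, with toy chain objects standing in for the real IBC machinery (all names hypothetical):

```python
class Chain:
    """Toy chain: a height counter plus the quorums it currently uses."""
    def __init__(self, quorums):
        self.height = 0
        self.quorums = quorums
        self.halted = False

    def append_quorum_block(self, new_quorums):
        """Phase 1: append a block adopting the new quorums, ack by IBC."""
        self.quorums = new_quorums
        self.height += 1
        return "ack"


def change_quorums(main, others, new_quorums):
    """The main chain halts until every dependent chain acks the change."""
    main.halted = True                       # won't progress past this block
    acks = [c.append_quorum_block(new_quorums) for c in others]
    if len(acks) == len(others):             # phase 2: all acked -> commit
        main.quorums = new_quorums
        main.halted = False
    return not main.halted


main = Chain(quorums=["old"])
chimeras = [Chain(quorums=["old"]) for _ in range(3)]
assert change_quorums(main, chimeras, ["new"])
assert main.quorums == ["new"] and all(c.quorums == ["new"] for c in chimeras)
```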

Synchronous IBC

We may be able to leverage our "synchronous" IBC idea above for faster quorum changes. The difficulty is that it allows a finite number of blocks to be appended to the chimera chains before they receive the quorum change message. These chains can be arbitrarily slow, so that could take arbitrarily long.

We need to figure out inter-machine communication for acceptors, since they might run many machines.

Discussion Questions / Practical Considerations

  • Optimizing messaging: pipelining (from HotStuff), Kauri, BFTree
  • Replicating state machines
  • Problems Tendermint has:
    • Doesn't allow many validators
    • Light client design
  • Optimizing recovery from slow finalization: separating block production from finalization, finalizing more than one block
  • ABCI++? Another version of it?
  • Look into other papers by Dalia Malkhi / Fast HotStuff?

Execution Engine


Given a total order (from the consensus) of transactions (from the mempool), the execution engine updates and stores the "current" state of the virtual machine, using as much concurrency as possible. Proofs from the execution engine allow light clients to read the current state. When the execution engine has finished with a transaction, it communicates to the mempool that the transaction can be garbage-collected from storage.


  • Shards are processes that store and update state.

    • Different shards may be on different machines. Redistributing state between shards is called Re-Sharding.
    • Each Shard is specific to 1 learner. However, as an optimization, an implementation could conceivably use 1 process to do the work of 2 shards with different learners so long as those shards are identical, and fork that process if / when the learners diverge.
  • Executors are processes that actually run the VM and compute updates. Executors should probably be co-located with shards.


  • We assume the Mempool is using the Heterogeneous Narwhal setup?
    • In which case Consensus picks leaders in the DAG
  • Mempool is treated as some kind of black-box set of processes that can each transmit transactions to Shards.
    • In which case Consensus produces more detailed ordering information. Perhaps we should have some general notion of Timestamp on transactions?

The VM is largely a black box: we assume we can give it a set of input key-value pairs and a transaction, and get output key-value pairs.
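For instance, a toy VM satisfying this interface (illustrative only; the real VM is a black box):

```python
from typing import Callable, Dict

# The VM interface assumed by the execution engine: a function from input
# key-value pairs and a transaction to output key-value pairs.
VM = Callable[[Dict[str, bytes], bytes], Dict[str, bytes]]


def counter_vm(inputs: Dict[str, bytes], tx: bytes) -> Dict[str, bytes]:
    """Toy VM: the transaction names a key whose counter it increments."""
    key = tx.decode()
    current = int(inputs.get(key, b"0"))
    return {key: str(current + 1).encode()}


out = counter_vm({"hits": b"41"}, b"hits")
assert out == {"hits": b"42"}
```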


State is stored as mutable Values (unlimited size blobs of binary), each of which is identified with an immutable Key. If you want to mutate a Key associated with a specific Value, that's equivalent to deleting the Value associated with the old Key, and writing it to the new Key. Keys that have never had a Value written to them are mapped to an empty value.
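A minimal sketch of this storage model (hypothetical API):

```python
EMPTY = b""


class Store:
    """Keys are immutable identifiers; Values are arbitrary binary blobs.
    Keys that have never been written read as the empty value."""
    def __init__(self):
        self._kv = {}

    def read(self, key):
        return self._kv.get(key, EMPTY)

    def write(self, key, value):
        self._kv[key] = value

    def move(self, old_key, new_key):
        """'Mutating a key' = delete the Value at the old Key and write
        it under the new Key."""
        value = self.read(old_key)
        self.write(old_key, EMPTY)
        self.write(new_key, value)


s = Store()
s.write("k1", b"blob")
s.move("k1", "k2")
assert s.read("k2") == b"blob" and s.read("k1") == EMPTY
assert s.read("never-written") == EMPTY
```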

For each Learner, all processes can map Transactions to a set of Shards whose state they read, and a set of Shards whose state they write. This makes Re-Sharding challenging.

read(L : Learner, T : Transaction) : Set[Shard]

write(L : Learner, T : Transaction) : Set[Shard]

One way to implement this is to partition the space of Keys across Shards, and Label each Transaction with a Sub-Space of keys it touches. One possible Key-space would be to arrange Keys in some kind of tree configuration.
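For example, a toy prefix-based partition of the key space (illustrative; the real scheme might use a tree of keys):

```python
def shard_of(key, shards):
    """Route each key to the shard owning the longest matching prefix,
    mimicking a tree-structured key space (purely illustrative)."""
    matches = [s for s in shards if key.startswith(s)]
    return max(matches, key=len)  # longest matching prefix wins


def touched_shards(tx_label, shards):
    """A transaction is labelled with the sub-space of keys it touches;
    the shards it needs are those owning any labelled key."""
    return {shard_of(k, shards) for k in tx_label}


shards = ["/accounts", "/accounts/eu", "/markets"]
label = {"/accounts/eu/alice", "/markets/xan"}
assert touched_shards(label, shards) == {"/accounts/eu", "/markets"}
```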

Mempool Interface

We assume, for each Learner, that each transaction has a unique Executor:

executor(L : Learner, T : Transaction) : Executor

It would be more efficient if executor(L, T) were always a member of write(L, T), or at least of read(L, T). For each transaction T and each shard S, S must learn where T falls in the order; each shard therefore tracks watermark timestamps with heardAllWrite(S) > heardAllRead(S), computing heardAllRead and heardAllWrite as the greatest lower bound of all the partial bounds.

Consensus Interface

Consensus needs to update each Shard's knowledge of the total order of timestamps. In particular, we assume that Consensus informs shards of an ever-growing prefix of this total order.

  • One way to accomplish this is simply to have each timestamp be the hash of the transaction, and have consensus stream a totally ordered list of all hashes included in the chain to all Shards. This may not be very efficient.
  • We could instead consider one timestamp to be definitely after another if it is a descendent in the Narwhal DAG. Narwhal workers could transmit DAG information to Shards, and shards would learn some partial ordering information before hearing from Consensus. Consensus could then transmit only the Narwhal blocks it decides on, and shards could determine a total ordering from there.
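A sketch of the second option's "definitely after" relation, with the DAG as a toy map from each transaction to its parents (illustrative only):

```python
def definitely_after(dag, a, b):
    """In the Narwhal DAG (dag maps each node to its parents), a is
    definitely after b if b is an ancestor of a; shards can use this
    partial order before consensus delivers the total order."""
    frontier = set(dag.get(a, ()))
    while frontier:
        if b in frontier:
            return True
        frontier = {p for n in frontier for p in dag.get(n, ())}
    return False


# tiny DAG: t3's parents are t2 and t1b; t2's parent is t1
dag = {"t3": ["t2", "t1b"], "t2": ["t1"]}
assert definitely_after(dag, "t3", "t1")
assert not definitely_after(dag, "t2", "t1b")  # unordered pair
```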


For each learner L and transaction T, the executor reads state from the shards in read(L, T) and writes results to each shard S in write(L, T).

Generally, transactions do not have side effects outside of state writes. However, we could in principle encode client reads as read-only transactions whose side-effect is sending a message, or allow for VMs with other side effects.

Executors can save themselves some communication if they're co-located with Shards. As an optimization, we can save on communication by combining messages for multiple learners if their content is identical and their shards are co-located. Likewise, we can save on computation by using one process to execute for multiple learners so long as they are identical.

State Updates

For each key in its state, each shard S maintains a timestamped multi-version store, bounded by the watermarks heardAllRead(S) and heardAllWrite(S). S orders the transactions it hears about: for transactions T_1 and T_2 with timestamp(T_1) < timestamp(T_2), a read of a key K by T_2 must observe the value written by T_1 if that was the latest write to K, and the write \langle T_2, K \rangle supersedes \langle T_1, K \rangle. For each transaction T with timestamp(T) < heardAllWrite(S): if S is in read(learner(S), T), the shard sends the value of each relevant key K at timestamp(T) to executor(learner(S), T); if S is in write(learner(S), T), the shard waits to receive from executor(learner(S), T) the new value \langle T, K\rangle of each written key K at timestamp(T). Writes in write(learner(S), T) may arrive for transactions T not yet covered by heardAllWrite(S). These need to be added to the dependency graph and processed just like all other transactions.

Garbage Collection

Each Shard can delete all but the most recent value written to each key before heardAllRead(S).

Once all of a transaction's Executors (for all learners) have executed the transaction, we can garbage collect it. We no longer need to store that transaction anywhere.
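A sketch of the per-key version pruning rule, assuming timestamps are comparable integers (illustrative only):

```python
def gc_versions(versions, heard_all_read):
    """Keep the latest write at or before heard_all_read (still readable
    by future reads) plus every later write; all older versions can
    never be read again and may be deleted."""
    old = [t for t in versions if t <= heard_all_read]
    keep = ([max(old)] if old else [])
    keep += [t for t in versions if t > heard_all_read]
    return sorted(keep)


# writes to one key at timestamps 1, 4, 7, 9; heardAllRead(S) = 8
assert gc_versions([1, 4, 7, 9], heard_all_read=8) == [7, 9]
```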

Client Reads

Read-only transactions can, in principle, bypass Mempool and Consensus altogether: they only need to arrive at each of the relevant shards S, which can serve them once heardAllRead(S) has passed the read's timestamp. This requires shards to keep advancing heardAllRead and heardAllWrite even when no new transactions arrive.


For now, see anoma/taiga.


For now, see anoma/ferveo.

Compiler stack

For now, see anoma/juvix, anoma/vamp-ir, and anoma/geb.

TODO: Write this up in detail. Readers, ignore for now.


Consensus provisioning

The XAN stakeholder set acts as a consensus provider:

  • Liquid delegation cubic proof-of-stake
  • Free gas per time for stakers (non-transferable)
  • Continuously better staking yield rates with greater lock time
    • Some inverse-quadratic curve approaching an asymptote
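One hypothetical curve with this shape (all parameter values illustrative, not protocol constants):

```python
def staking_yield(lock_time, base=0.05, ceiling=0.15, tau=365.0):
    """Hypothetical inverse-quadratic curve: the yield rate rises from
    `base` toward the `ceiling` asymptote as lock time grows; it reaches
    the halfway point at lock_time = tau but never attains the ceiling."""
    return ceiling - (ceiling - base) / (1.0 + (lock_time / tau) ** 2)


assert abs(staking_yield(0) - 0.05) < 1e-9     # no lock: base yield
assert abs(staking_yield(365) - 0.10) < 1e-9   # halfway at t = tau
assert staking_yield(10**6) < 0.15             # approaches, never reaches
```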

Randomness beacon

  • Randomness for Schelling PPGF
  • Human-provided input cadence & rewards
  • Threshold randomness from key shares

Liquid RPGF

  • Consensus on areas of RPGF
  • Category-separated liquid (chain-delegated) democracy
  • Privacy of funding decisions, but rewards for agreement
  • Advance market commitments?

Schelling-human PPGF

  • Random graph walks
  • Density of interconnection
  • bidirectional "humanity" relation


Anoma protocol

The Anoma protocol is the logical framework which agents use to read, create, and process messages.


An agent is a non-deterministic, stateful entity which can send and receive messages.

The term "agent" is similar to "process" used in distributed systems. However, "agent" is used to highlight the possibility of non-deterministic behaviour, such as random events or choices made by external users. The term also emphasizes the idea of decision-making that can affect the state of the system. This is important for ensuring that the state of the system accurately reflects the state of the world. To achieve this, individual agents must provide data inputs that correspond in local ways, as the system protocol itself does not have direct knowledge of the state of the world.

Read more about agents in the conceptual context.

Canonical serialization

A canonical serialization refers to a standardized way of representing data or functions as a series of bytes that can be transmitted across a network.

Canonical serialization is fully discussed in Prerequisites: Primitives.


"Turing-equivalent" means that the functions and data being transmitted can be computed by a Turing machine, a well-known theoretical model of computation.


A message is any datum sent between agents.


A state may refer to the state of an agent, the state of the world, or the state of the system.

  • The state of an agent is the set of all data stored by the agent.

  • The state of the system is a function of the decisions taken by agents over (partially ordered) time.

  • The state of the world is the set of data related to the "real world" that is observed by and of interest to the agents.


A system is a virtual environment which consists of a set of agents interacting with each other.

Read more about the world in the conceptual context.