Worker

Purpose

Workers are one of the mempool engine kinds; in V1, they are the only kind, and there is only a single worker instance.

The worker receives transaction requests from users and solvers and batches them, assigning a unique TxFingerprint to every new transaction. Each transaction candidate is sent to an Executor inside an ExecuteTransaction message. Once the worker has received a KVSLockAcquired for every part of the transaction request's label (sent by the shards of the same Anoma validator in response to KVSAcquireLock messages), it knows that this transaction candidate has been "seen" by all Shards, which implies that all shards are prepared to process lock requests from execution processes (see KVSReadRequest and KVSWrite for details).

This information about recorded locks is distributed to all shards via UpdateSeenAll messages, each of which carries the most recent TxFingerprint for which it is certain that all Shards have "seen" this transaction candidate and all previous ones from the same worker (and are thus prepared to grant locks). Note that if shards receive transaction candidates in an order different from the final total order of transactions, UpdateSeenAll messages are necessary to prevent shards from granting locks before all locks of previous transaction executions have been served.
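The seen-all bookkeeping described above can be sketched as follows. This is an illustrative assumption, not part of the spec: `SeenTracker`, its method names, and the modeling of TxFingerprints as consecutive integers are all invented here.

```python
class SeenTracker:
    """Per-worker bookkeeping for KVSLockAcquired acknowledgements.

    Fingerprints are modeled as consecutive integers 1, 2, 3, ...
    (standing in for TxFingerprints in the order the worker assigned them).
    """

    def __init__(self):
        # fingerprint -> set of still-outstanding lock acknowledgements
        self.pending = {}
        # highest fingerprint such that it and all earlier ones are "seen"
        self.seen_all = 0

    def expect_locks(self, fingerprint, locks):
        """Called when KVSAcquireLock messages are sent out for a candidate."""
        self.pending[fingerprint] = set(locks)

    def record_ack(self, fingerprint, lock):
        """Record one KVSLockAcquired; return the seen-all watermark that
        an UpdateSeenAll message would carry afterwards."""
        self.pending[fingerprint].discard(lock)
        # Advance only over a contiguous prefix: shards must not grant locks
        # before the locks of all previous transactions have been recorded.
        while self.pending.get(self.seen_all + 1) == set():
            del self.pending[self.seen_all + 1]
            self.seen_all += 1
        return self.seen_all
```

The watermark only moves over a contiguous prefix of fingerprints, which is exactly why UpdateSeenAll is safe even when shards receive candidates out of order.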

Workers are also in charge of collecting and curating logs of transaction execution. An execution counts as successful if and only if all reads and writes succeed and the executor that was spawned to execute the transaction sends an ExecutorFinished message.

State

Each worker keeps track of the transaction candidates it has received (which are stored until after execution) as well as the current batch number and transaction number. Beyond this, there is no precise state representation described by the V1 specs.

Todo

the following almost certainly are not the template we want

ExecutorFinished

Purpose

Informs the mempool about the execution of a transaction.

Details

Structure

Field        Type               Description
fingerprint  TxFingerprint      a descriptor for the executed transaction
log_key      Local Storage Key  a handle to the transaction log of the execution

Effects

This message is a prerequisite for enabling garbage collection in the mempool. The log_key can be used by the user to request data about the transaction. In V1, the log is kept as long as the instance is running.

Triggers

none

TransactionRequest

  • from User, Solver

Purpose

A user or solver requests that a transaction candidate be ordered and executed.

Details

Structure

Field         Type                  Description
tx            TransactionCandidate  the actual transaction to be ordered
resubmission  TxFingerprint option  reference to the previous occurrence, if any

The resubmission field indicates whether there was a previous occurrence of the very same transaction candidate that either has failed or needs to be executed again, e.g., because it is a recurring payment.

This is the "bare-bones" version for V1. In future versions, additional user preferences may be supplied, concerning, e.g.,

  • how the response will be given,
  • how long duplicate checks are to be performed.

Effects

  • The receiving worker is obliged to store the new transaction (until after execution) unless it is out of storage. (Suitable fee mechanisms may be introduced to ensure that the probability of sufficient storage is relatively high, which involves a trade-off against cheap fees.)
  • The received transaction request might complete the current batch.
  • The worker assigns a batch number (the number of the current batch) and a transaction number (before the closing of the batch) such that this transaction candidate can be referenced via the corresponding TxFingerprint, unless the worker has already received a request for the same transaction candidate after the resubmission time stamp. If the exact same transaction candidate has already been ordered, the request is disregarded; optionally, messages may be sent to the sender of the request.
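The fingerprint assignment and duplicate check from the last effect might look roughly like this. The representation of a TxFingerprint as a (batch, index) pair follows the description above, but the class and method names, and the use of a candidate hash as dictionary key, are assumptions made for illustration.

```python
from dataclasses import dataclass


@dataclass(frozen=True, order=True)
class TxFingerprint:
    batch: int  # number of the batch the candidate is assigned to
    index: int  # transaction number within that batch


class DedupWorker:
    """Assigns fingerprints to incoming candidates and disregards
    duplicates, honoring an optional resubmission reference."""

    def __init__(self):
        self.current_batch = 0
        self.next_index = 0
        self.ordered = {}  # candidate hash -> fingerprint it was ordered under

    def on_transaction_request(self, tx_hash, resubmission=None):
        prior = self.ordered.get(tx_hash)
        # Disregard if the same candidate was already ordered after the
        # resubmission reference (or at all, when no reference is given).
        if prior is not None and (resubmission is None or prior > resubmission):
            return None
        fp = TxFingerprint(self.current_batch, self.next_index)
        self.next_index += 1
        self.ordered[tx_hash] = fp
        return fp
```

A resubmission that references the candidate's previous fingerprint thus passes the check, while a plain duplicate of an already-ordered candidate is dropped.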

Triggers

Todo

move this as a response to EPID message

ExecutorPIDAssigned

Purpose

Provides the worker with the ID of a newly spawned or available Executor-engine instance.

Details

Structure

Field  Type              Description
epid   ExternalIdentity  the ID of the spawned Executor-engine instance

Effects

The receiving worker can request the eager reads and start the execution.

Triggers

  • ExecuteTransaction → Executor, KVSReadRequest → Shards: for the next transaction to be executed, the worker sends
  • the ExecuteTransaction message to the executor,
  • the will-read KVSReadRequests to the relevant Shards.
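The dispatch triggered by this message can be sketched as follows. The field layouts of the two messages and the `shard_for`/`send` helpers are hypothetical; the spec only fixes which messages go where.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ExecuteTransaction:
    fingerprint: int
    tx: str  # stands in for the full TransactionCandidate


@dataclass(frozen=True)
class KVSReadRequest:
    fingerprint: int
    key: str
    executor: str  # where the shard should send the read value


def dispatch(epid, fingerprint, tx, will_read_keys, shard_for, send):
    """On ExecutorPIDAssigned: hand the candidate to the executor and issue
    the eager (will-read) read requests to the shards owning the keys."""
    send(epid, ExecuteTransaction(fingerprint, tx))
    for key in will_read_keys:
        send(shard_for(key), KVSReadRequest(fingerprint, key, epid))


# Usage with a recording transport instead of real message passing:
outbox = []
dispatch("executor-1", 7, "swap A for B", ["A", "B"],
         shard_for=lambda key: f"shard-{key}",
         send=lambda dest, msg: outbox.append((dest, msg)))
```

Passing the executor's ID along with each read request lets the shard answer the executor directly, so the worker stays off the read path.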

KVSLockAcquired

Purpose

This message informs the Worker Engine that the sending Shard has recorded upcoming read or write requests to a key specified in an earlier KVSAcquireLock from the Worker Engine. It is an asynchronous response.

Details

Structure

Field        Type           Description
fingerprint  TxFingerprint  the fingerprint of the TransactionCandidate for which some locks have been recorded
key          KVSKey         the key in the key-value store that will be accessed
write        bool           true for write, false for read
optional     bool           true for may_write or may_read, false for will_write or will_read
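As an illustration, the message could be modeled as the following record; the `lock_kind` helper is not part of the spec and merely spells out how the two boolean flags combine.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class KVSLockAcquired:
    fingerprint: int  # TxFingerprint of the candidate whose locks were recorded
    key: str          # KVSKey that will be accessed
    write: bool       # True for write, False for read
    optional: bool    # True for may_*, False for will_*

    def lock_kind(self):
        """Combine the two flags into one of the four lock kinds."""
        return (("may_" if self.optional else "will_")
                + ("write" if self.write else "read"))
```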

Effects

The worker records that the sending shard has acknowledged this lock. Once a KVSLockAcquired has arrived for every part of the label of this transaction candidate and of all previous ones from the same worker, all shards are known to be prepared to grant locks for it.

Triggers

  • UpdateSeenAll → Shards: whenever the above condition newly holds for a transaction candidate, the worker distributes the most recent such TxFingerprint to all shards.

RequestLogs

  • from User, Solver

Purpose

Request the log of a finished execution.

Details

Structure

Field        Type               Description
fingerprint  TxFingerprint      the fingerprint of the TransactionCandidate for which logs are requested
log_key      Local Storage Key  the key for retrieving the log

Effects

none

Triggers

  • SendLog → User, Solver: answer the request with the data requested.

Todo

we need to find better places for these footnotes


  1. It might be too expensive to check from genesis; transaction requests could have a parameter for how long the duplicate check is active. 

  2. This condition can be added to avoid too many waiting/idling executor processes. (This comes at the price of a sliver of additional latency for the first transactions in a batch.) Note that this cannot lead to deadlocks, as the lock acquisition messages (KVSAcquireLock, KVSLockAcquired, UpdateSeenAll) are completely independent of the spawning of executor processes. In more detail, if we were missing a KVSAcquireLock message for a transaction, the executor could not start operating (even if it is spawned). 

  3. This can be done by use of an executor-process supervisor in the implementation. 

  4. In all future versions of Anoma, workers will be organized around primaries; however, in V1, we can omit primaries as they do not serve any purpose. In V1, there is only a single worker, which can be thought of as also acting as its own primary. 

  5. In future versions, IO is output of results from the responsible workers (and their fellow/mirror workers) to some fixed address. Inputs may allow for non-trivial validator inputs, according to an orthogonal protocol (and may fail deterministically). 

  6. In V1, we report all the data about a single transaction back to the submitter as part of execution.