Asynchronous, Timing-Agnostic, Sharded Blockchain using Client-side Ordinal Transaction Ordering

Enhancing State Synchronization in Asynchronous Distributed Systems via Timestamping Techniques


In distributed asynchronous systems, such as those embodied by the Sierpinski Merkle Trie (SMT) protocol, achieving efficient state synchronization among various shards poses significant challenges. This paper explores advanced timestamping techniques, specifically Lamport timestamps, dense timestamps, and vector clocks, to facilitate “as-is” snapshots for state synchronization. These mechanisms ensure consistency in the order of events across disparate shards, a critical aspect in maintaining the integrity of a distributed system.

Lamport Timestamps and Causal Relationships

Lamport timestamps provide a fundamental mechanism in distributed systems to establish causal relationships between events. They are based on a simple yet powerful principle: incrementing a counter (timestamp) for each event in the system. This section delves into the mathematical formalization of Lamport timestamps and provides an algorithm for updating them.

Mathematical Formalization

A Lamport timestamp, denoted as L, is an integer assigned to each event in a distributed system. For two events A and B, if A causally precedes B, then L(A) < L(B). This relationship is formalized as follows:


If event A causally precedes event B in a distributed system, then the Lamport timestamp of A is less than that of B.


Assume two events A and B occur, where A causally affects B. By the definition of Lamport timestamps, when A occurs, its counter C_A is incremented. As B is causally affected by A, B's counter C_B is set to be at least C_A + 1. Therefore, L(A) = C_A < C_B = L(B).

Algorithm for Updating Lamport Timestamps

The algorithm to update Lamport timestamps upon the occurrence of an event or the receipt of a message is as follows:

L \leftarrow \text{Lamport Clock}
\text{OnLocalEvent}():
   L \leftarrow L + 1
\text{OnReceiveMessage}(M):
   L \leftarrow \max(L, L(M)) + 1
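The two update rules above can be sketched in Haskell (the language of the prototype). This is a minimal illustration, not the SMT implementation; the names `tick` and `receive` are chosen here for clarity.

```haskell
-- A minimal Lamport clock: an integer counter per process.
newtype LamportClock = LamportClock Int
  deriving (Show, Eq, Ord)

-- Local event: increment the counter.
tick :: LamportClock -> LamportClock
tick (LamportClock l) = LamportClock (l + 1)

-- Receiving a message stamped with timestamp L(M):
-- take the maximum of the local and received clocks, then increment.
receive :: LamportClock -> Int -> LamportClock
receive (LamportClock l) lm = LamportClock (max l lm + 1)
```

Starting from `LamportClock 0`, a local event yields `LamportClock 1`, and receiving a message stamped `5` then yields `LamportClock 6`, preserving the causal ordering L(A) < L(B).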

Dense Timestamps for Chronological Ordering

Dense timestamps provide a high-resolution chronological order of events, facilitating precise snapshots of the state at any given time. These timestamps are especially valuable in systems where the granularity of temporal ordering is paramount.


A dense timestamp D is a real number that uniquely represents a moment in time with fine granularity. For two distinct events X and Y, D(X) \neq D(Y), ensuring unique temporal identifiers for each event.
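Treating D as an exact real number is an idealization; with finite precision, one practical approximation (an assumption here, not the SMT specification) is to pair a coarse clock reading with a per-shard sequence number and order the pairs lexicographically, so two distinct events can never share a timestamp:

```haskell
-- Hedged sketch of a dense timestamp: a coarse wall-clock reading plus a
-- sequence number that breaks ties within the same clock reading.
data Dense = Dense
  { epochMillis :: Integer  -- coarse wall-clock reading
  , seqNo       :: Integer  -- tie-breaker within one millisecond
  } deriving (Show, Eq, Ord)  -- derived Ord is lexicographic

-- Issue the next timestamp given the current clock reading: if the clock
-- has advanced, reset the sequence number; otherwise bump it.
nextDense :: Dense -> Integer -> Dense
nextDense (Dense t s) now
  | now > t   = Dense now 0
  | otherwise = Dense t (s + 1)
```

Under this scheme, consecutive calls always produce strictly increasing timestamps, giving each event the unique temporal identifier the definition requires.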

Vector Clocks for State Synchronization

Vector clocks extend Lamport timestamps by enabling each shard in the system to maintain a summary of the information received from other shards. This summary aids in understanding the state of the system from the perspective of each shard.


A vector clock V in a system of n shards is represented as an n-dimensional vector of integers. Each element V_i represents the latest logical timestamp known for shard i.

V \leftarrow \text{Vector Clock}
\text{OnLocalEventAtShard}(i):
   V_i \leftarrow V_i + 1
\text{ReceiveMessageFromShard}(j, V'):
   V \leftarrow \max(V, V') \text{ (element-wise)}
   V_i \leftarrow V_i + 1
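The vector-clock rules can be sketched as follows, assuming integer shard identifiers and a map from shard ID to counter (a sparse representation of the n-dimensional vector; the names are illustrative):

```haskell
import qualified Data.Map.Strict as M

type ShardId = Int
type VClock  = M.Map ShardId Int  -- sparse vector: absent entries are 0

-- Local event at shard i: increment that shard's own entry.
tickAt :: ShardId -> VClock -> VClock
tickAt i = M.insertWith (+) i 1

-- Shard i receives vector clock V' from shard j:
-- element-wise maximum of the two clocks, then increment shard i's own entry.
receiveFrom :: ShardId -> VClock -> VClock -> VClock
receiveFrom i v v' = tickAt i (M.unionWith max v v')
```

For example, if shard 0 holds `{0: 1, 1: 2}` and receives `{1: 5}`, the merged clock is `{0: 2, 1: 5}`: the element-wise maximum followed by the local increment.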

Snapshot for State Synchronization

Each shard utilizes timestamping mechanisms to create a comprehensive snapshot of its state. This snapshot includes the causal and chronological history, enabling efficient synchronization.

Snapshot Generation Algorithm

The algorithm to generate a snapshot of a shard’s state is outlined as follows:

\text{GenerateSnapshot}():
   S \leftarrow \text{State of Shard}
   \text{Snapshot} \leftarrow \{S, V, D\}
   \text{Return Snapshot}
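The snapshot triple {S, V, D} can be represented as a simple record, assuming the vector clock is carried as a list of integers and the dense timestamp as a Double; the field names are illustrative, not the SMT API:

```haskell
-- A shard snapshot bundling the state S with its causal summary V
-- and chronological marker D, parameterized over the shard state type.
data Snapshot s = Snapshot
  { shardState  :: s        -- S: the shard's local state
  , vectorClock :: [Int]    -- V: one entry per shard in the system
  , denseStamp  :: Double   -- D: dense timestamp of the snapshot
  } deriving (Show, Eq)

-- Generating a snapshot is just packaging the three components.
generateSnapshot :: s -> [Int] -> Double -> Snapshot s
generateSnapshot = Snapshot
```

A peer receiving such a snapshot can compare vector clocks to decide which events it has already seen, rather than replaying the full event log.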

Efficient Synchronization

The proposed method significantly reduces synchronization overhead by obviating the need to process all events sequentially. Instead, it relies on understanding the causal and chronological summary encapsulated in the snapshots.


The integration of Lamport timestamps, dense timestamps, and vector clocks offers a robust solution for efficient and accurate state synchronization in distributed systems, particularly those based on the SMT protocol. This methodology enables the creation of detailed snapshots of each shard’s state, capturing the complete sequence of events and allowing for “as-is” synchronization, a crucial feature for the consistency and efficiency of distributed asynchronous systems.

Thank you all; the initial prototype testnet in Haskell is almost finished. I will share a link to the GitHub repository when I make it public.

Quick Update:

  • MVP Performance: The L1 MVP (coded in “simple” Haskell) is in the early stages of coding and testing for production-quality deployment. The system currently benchmarks at approximately 29,000 transactions per second (TPS) per shard (±10% deviation, with real-world scenarios factored in), with only about a 5% increase in overhead for cross-shard operations.

  • Structural Improvements: Structural and logical enhancements have been implemented to optimize global state management. This progress builds on earlier research, further advancing our system’s capabilities.

  • System Architecture: Our network architecture integrates shards, triadic consensus, epidemic message broadcasting performed recursively up a Sierpinski-triangle fractal hierarchical topology, ternary tries, and Zero-Knowledge Proofs (ZKPs). By eliminating traditional ordering mechanisms and using a custom, Turing-complete version of Ordinals in transaction setups, we aim to redefine the standard in blockchain scalability, privacy, and security. This approach removes the need for ordering within the consensus process.

  • Notably: Our system diverges from traditional blockchain models. More aptly described as a “Tree” of proofs, it lacks conventional blocks. Transactions occur concurrently, and shards function independently without synchronization. Each shard provides an epoch snapshot at predetermined intervals. This design maintains transaction rates and latency, as transactions are processed optimistically and asynchronously; the scalability potential reaches millions or even billions of TPS. Shards store only local transactions and a copy of the latest Zero-Shot Succinct Nested Global State Proof. The aggregated Global Proof embeds its previous iteration within the new proof, maintaining a constant global state of a few hundred bytes. Shards retain their local transaction data in the shard’s local ternary trie accumulator for a TBD number of years for audit and regulatory compliance, with the option to prune thereafter. This approach keeps storage bounded and constant, keeping transaction costs low. For example, a network of 10,000 shards could maintain the same storage volume for transactions and metadata over 50, 100, or 1,000 years. We use Circom circuits for the ZKPs.

  • Further Information: More details and the official White Paper will be released soon.

Feel free to reach out if you have any questions, or OFC drop a comment. You know where to do it. ;)