Swirlds recently announced a blockchain alternative called Hashgraph. George Samman talks to founder and CEO Leemon Baird about the new service.

As many people know, my interest in consensus mechanisms runs far and wide. In the KPMG research report I co-authored – Consensus: Immutable Agreement for the Internet of Value – many consensus mechanisms were discussed. In Appendix 3 of the paper, many of the major players in the space discussed their consensus methodologies. One consensus mechanism that wasn’t in the paper was the Swirlds Hashgraph Consensus Algorithm. That white paper is a great read and this consensus mechanism holds quite a lot of promise.

I have had many discussions with its creator Leemon Baird, and this blog post comes from conversations, questions and emails about the topic. At the end of the blog, I asked Leemon to fill out the consensus questionnaire from the KPMG report and he graciously did. His answers appear at the end of this post.

What exactly is a hashgraph?

A hashgraph is a data structure that stores a certain type of information and is updated according to a certain algorithm. The data structure is a directed acyclic graph, where each vertex contains the hash of its two parent vertices. This could be called a Merkle DAG; such structures are used in git, IPFS, and other software.

The stored information is a history of how everyone has gossiped. When Alice tells Bob everything she knows, during a gossip sync, Bob commemorates that occurrence by creating a new “event”, which is a vertex in the graph, containing the hash of his most recent event and the hash of Alice’s most recent event. It also contains a timestamp, and any new transactions that Bob wants to create at that moment. Bob digitally signs this event. The “hashgraph” is simply the set of all known events.
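As a rough illustration, an event can be modelled as a small record holding the two parent hashes, a timestamp and a payload of transactions. This is a minimal Python sketch of the description above, not Swirlds code; the names are made up, and the digital signature is omitted for brevity:

```python
import hashlib
import json
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    creator: str         # member who created this event
    self_parent: str     # hash of the creator's own previous event ("" if none)
    other_parent: str    # hash of the gossip partner's latest event ("" if none)
    timestamp: float     # the creator's local clock at creation time
    transactions: tuple  # any new transactions bundled into this event

    @property
    def hash(self) -> str:
        body = json.dumps([self.creator, self.self_parent, self.other_parent,
                           self.timestamp, list(self.transactions)])
        return hashlib.sha256(body.encode()).hexdigest()

# The hashgraph itself is just the set of all known events, indexed by hash.
hashgraph = {}
e = Event("bob", "", "", 0.0, ())
hashgraph[e.hash] = e
```

The two parent hashes are what weave individual events into a single DAG: Bob's next event references both his own latest event and Alice's.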

The hashgraph is updated by gossip: each member repeatedly chooses another member at random and gives them all the events they don’t yet know. As the local copy of the hashgraph grows, the member runs the algorithm in the paper to determine the consensus order for the events (and the consensus timestamps). This determines the order of the transactions, so they can be applied to the state as specified by the app.

Blockchain and hashgraph. Source: George Samman

What are gossip protocols?

A “gossip protocol” means that information is spread by each computer calling up another computer at random, and sharing everything it knows that the other one doesn’t. It’s been used for all sorts of things through the decades. I think the first use of the term “gossip protocol” was for sharing identity information, though the idea probably predates the term. There’s a Wikipedia article with more of the history. In bitcoin, the transactions are gossiped and the mined blocks are gossiped.

It’s widely used because it’s so fast (information spreads exponentially fast) and reliable (a single computer going down can’t stop the gossip).
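A toy simulation (hypothetical names, simplified to a one-way push) shows why the spread is exponential: with 64 members, everyone learns a new transaction in a handful of rounds rather than 64:

```python
import random

def gossip_round(knowledge):
    """One round: every member calls one randomly chosen peer and shares
    everything it knows that the peer doesn't (simplified to a one-way push)."""
    members = list(knowledge)
    for caller in members:
        peer = random.choice([m for m in members if m != caller])
        knowledge[peer] |= knowledge[caller]

random.seed(7)  # fixed seed so the run is repeatable
members = [f"member{i}" for i in range(64)]
knowledge = {m: set() for m in members}
knowledge["member0"].add("new-transaction")  # one member learns something new

rounds = 0
while not all("new-transaction" in known for known in knowledge.values()):
    gossip_round(knowledge)
    rounds += 1
# rounds ends up far below the 64 a one-at-a-time relay would need
```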

The “gossip about gossip” idea is new with hashgraph, as far as I know. There are many types of information that can be spread by gossip, but having the information to gossip be the history of the gossip itself is a novel idea.

In hashgraph, it’s called “gossip about gossip” rather than “gossip of gossip”; similar to how your friends might “gossip about what Bob did” rather than “gossip of what Bob did”.

Key characteristics of Swirlds hashgraph consensus

  1. Ordering and fairness of transactions are the centrepiece of Swirlds. Simply put, Swirlds seeks to fix the transaction-ordering problem that today’s blockchain consensus methodologies struggle to address, by using Hashgraph Consensus and “gossip about gossip”.
  2. Hashgraph can achieve consensus with no Proof of Work. So it can be used as an open system (non-permissioned) using Proof of Stake, or it can be used as a permissioned system without POW or POS.
  3. There’s no mining. Any member can create a block (called an “event”) at any time.
  4. It supports smart contract creation.
  5. Block size can be whatever size you want. When you create a block (event), you put in it any new transactions you want to create at that time, plus a few bytes of overhead. So the block ranges from a few bytes (for no transactions), to as big as you want it (for many transactions). But since you’re creating many blocks per second, there’s no reason to make any particular block terribly big.
  6. The core hashgraph system is for distributed consensus of a set of transactions, so all nodes receive all data. One can build a shared, hierarchical system on top of that, but the core system is a replicated state machine. Data is stored on each machine, but for the core system, the data is replicated.

Other questions I asked Leemon Baird about the white paper

Below are some questions I asked Leemon after reading the white paper. His answers are elaborate and very useful for those seeking to not only understand Hashgraph consensus, but also the inner workings of blockchains and the consensus algorithms that power them.

Why is fairness important?

Fairness allows new kinds of applications that weren’t possible before. This creates the fourth generation of distributed trust. For some applications, fairness doesn’t matter. If two coins are spent at about the same time, we don’t care which one counts as “first”, as long as we all agree. If two people record their driving licence in the ledger at about the same time, we don’t care which counts as being recorded first.

On the other hand, there are applications where fairness is of critical importance. If you and I both bid on a stock on the New York Stock Exchange at the same time, we absolutely care which bid counts as being first! The same is true if we both try to patent the same thing at the same time. Or if we both try to buy the same domain name at the same time. Or if we are involved in an auction. Or if we are playing an online game: if you shoot me and I dodge, it matters whether I dodged before you shot or after you shot.

So hashgraph can do all the things blockchain does (with better speed, cost, proofs, and so on). But hashgraph can also do entirely new kinds of things that you wouldn’t even consider doing with a blockchain.

It’s useful to think about the history of distributed trust as being in four generations:

  1. Cryptocurrency
  2. Ledgers
  3. Smart Contracts
  4. Markets

I think it’s inevitable – once you have a cryptocurrency, people will start thinking about storing other information in it, which turns it into a public ledger with distributed trust. Once you have the ledger storing both money and property, people will start thinking about smart contracts to allow you to sell property for money with distributed trust. Once you have the ability to do smart contracts, people will start thinking about fair markets to match buyers and sellers, and to do all the other things that fairness allows (like games, auctions, patent offices, and so on).

Swirlds is the first system of the fourth generation. It can do all the things of the first three generations (with speed, and so on), but it can also do the things of the fourth generation.

Evolution of distributed consensus. Source: Swirlds

You mention internet speed and how faster bandwidth matters, so it acts like the current state of electronic trading in the stock market. Aren’t you worried about malicious actors with high-speed connections taking over the network? In high-frequency trading, low-latency mechanisms, co-location and huge bandwidth are extremely advantageous for “winning”, as Michael Lewis describes in ‘Flash Boys’.

In hashgraph, a fast connection doesn’t allow you to “take over the network”. It simply allows you to get your message out to the world faster. If Alice creates a transaction, it will spread through gossip to everyone else exponentially fast, through the gossip protocol. This will take some number of milliseconds, depending on the speed of her internet connection and the size of the community. If Bob has a much faster connection, then he might create a transaction a few milliseconds later than her, but get it spread to the community before hers. However, once her transaction has spread to most people, it’s then too late for Bob to count as being earlier than her, even if Bob has infinite bandwidth.

This is analogous to the current stock market, except for one nice feature. If Bob wants an advantage of a few milliseconds, he can’t just build a single, fast pipe to the single, central server. He instead needs a fast connection to everyone in the network, and the network may be spread across every continent. So he’ll just need to have a fast connection to the internet backbone. That’s the best he can do, and anyone can do that, so it isn’t “unfair”.

In other words, the advantage of a fast connection is smaller than the advantage he could get in the current stock market. And it’s fair. If the “server” is the entire community, then it’s fair to say that whichever transaction reached the entire community first will count as being “first”. Bob’s fast connection benefits him a little, but it also benefits the community by making the entire system work faster. So it’s good.

‘Flash Boys’ was a great book, and I found it inspiring. Our system mitigates the worst parts of the existing system, where people pay to have their computers co-located in the same building as the central server, or pay huge amounts to use a single, fast pipe tunnelled through mountains. In a hashgraph system, there is no central server, so that kind of unfairness can’t happen.

You mention in the white paper that increasing block size “can make the system of fairness worse”. Why is that?

That’s true for a POW system like bitcoin. If Alice submits a transaction, then miner Bob will want to include it in his block, because he’s paid a few cents to do so. But if Carol wants to get her transaction recorded in history before Alice’s, she can bribe Bob to ignore Alice’s transaction and include only Carol’s in the block. If Bob succeeds in mining the block, then Alice’s transaction is unfairly moved to a later point in history, because she has to wait for the next miner to include her transaction.

If each block contains one transaction, then Alice has suffered a one-slot delay in where her transaction appears in history. If each block contains a million transactions, then Alice has suffered a million-slot delay. In that sense, big blocks are worse than small blocks. Big blocks allow dishonest people to delay your transactions into a later position in the consensus order.

The comment about block size doesn’t apply to leader-based systems such as Paxos. In them, there isn’t really a “block”. The unfairness simply comes from the current leader accepting a transaction from Alice, but then delaying a long time before sending it out to be recorded by the community. The comment also doesn’t apply to hashgraph.

Can you explain how not remembering old blocks works? Why does one just need to know the most frequent blocks, and how does this not fly in the face of the longest chain rule?

Hashgraph doesn’t have a “longest chain rule”. In blockchain, you absolutely must have a single “chain”, so if it ever forks to give you two chains, the community must choose to accept one and reject the other. They do so using the longest chain rule, but in hashgraph, forking is fine – every block is accepted. The hashgraph is an enormous number of chains all woven together to form a single graph. We don’t care about the “longest chain”. We simply accept all blocks. (In hashgraph, a block is called an event.)

What we have to remember is not the “most frequent block”. Instead, we remember the state that results from the consensus ordering of the transactions. Imagine a cryptocurrency where each transaction is a statement “transfer X coins from wallet Y to wallet Z”. At some point, the community will reach a consensus on the exact ordering of the first 100 transactions. At that time, each member of the community can calculate exactly how many coins are in each wallet after processing those 100 transactions (in the consensus order) before processing transaction number 101. They will therefore agree on the “state”, which is the list of amounts of coins in all the non-empty wallets. Each of them digitally signs that state. They gossip their signatures. So then each member will end up having a copy of the state along with the signatures from most of the community. This combination of the state and list of signatures is something that mathematically proves exactly how much money everyone had after transaction 100. It proves it in a way that’s transferrable: a member could show this to a court of law to prove that Alice had 10 coins after transaction 100 and before transaction 101.
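The “signed state” idea can be sketched as follows (illustrative Python, with a stand-in for real digital signatures): every member hashes the same post-consensus state, signs the hash, and the state plus a supermajority of signatures becomes the transferable proof:

```python
import hashlib
import json

def state_hash(state):
    """Canonical hash of the state; sorting makes the encoding deterministic."""
    return hashlib.sha256(json.dumps(sorted(state.items())).encode()).hexdigest()

def mock_sign(member, digest):
    # Stand-in for a real signature made with the member's private key.
    return f"sig({member},{digest[:8]})"

# After applying the first 100 transactions in consensus order, every member
# holds the same state, e.g. the coins in each non-empty wallet:
state = {"alice": 10, "bob": 7}
members = ["alice", "bob", "carol", "dave"]

digest = state_hash(state)
signatures = [mock_sign(m, digest) for m in members]

# The state plus signatures from more than 2/3 of the community proves the
# consensus; the old events and transactions can then be safely discarded.
assert 3 * len(signatures) > 2 * len(members)
```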

At that point, each member can discard those first 100 transactions, and they can discard all the blocks (events) that contained those 100 transactions. There’s no need to keep the old blocks and transactions, because you still have the state itself, signed by most of the community, proving that there was consensus on it.

Of course, you’re also free to keep that old information. Maybe you want to have a record of it, or want to do audits, or whatever. But the point is that there’s no harm in throwing it away.

You mention that blockchains don’t have a guarantee of Byzantine agreement, because a member never reaches certainty that agreement has been achieved. Can you elaborate on this and explain why hashgraph can achieve this?

Bitcoin doesn’t have Byzantine fault tolerance, because of how that’s defined. Hashgraph has it, because of the math proof in the paper.

In computer science, there is a famous problem called ‘The Byzantine Generals Problem’. Here’s a simplified version: You and I are both generals in the Byzantine army. We need to decide whether to attack at dawn. If we both attack or both don’t attack, we will be fine. But if only one of us attacks alone, he will be defeated because he doesn’t have enough forces to win by himself.

So, how can we coordinate? This is in an age before radio, so you can send me a messenger telling me to attack. But what if the messenger is captured, so I never get the message? Clearly, I’m going to need to send a reply by messenger to let you know I got the message. But what if the reply is lost? Clearly, you need to send a reply to my reply to let me know it got through. But what if that is lost? We could spend eternity replying to each other, and never really know for sure we are in agreement. There was actually a theatre play that dramatised this problem.

The full problem is more complicated, with more generals and with two types of generals. But that’s the core of the problem. The standard definition is that a computer system is “Byzantine fault tolerant” if it solves the problem in the following sense:

  • assume there are N computers, communicating over the internet
  • each computer starts with a vote of YES or NO
  • all computers need to eventually reach consensus, where we all agree on YES, or all agree on NO
  • all computers need to know when the consensus has been reached
  • more than two thirds of the computers are “honest”, which means they follow the algorithm correctly, and although an honest computer may go down for a while (and stop communicating), it will eventually come back up and start communicating again
  • the internet is controlled by an attacker who can delay and delete messages at will (except, if Alice keeps sending messages to Bob, the attacker eventually must allow one to get through; then if she keeps sending, he must eventually allow another one to get through, and so on)
  • each computer starts with a vote (YES or NO) and can change that vote many times, but eventually a time must come when the computer “decides” YES or NO. After that point, it must never again change its mind
  • all honest computers must eventually decide (with probability one), and all must decide the same way, and it must match the initial vote of at least one honest member.

That’s just for a single YES/NO question. But Byzantine fault tolerance can also be applied to more general problems. For example, the problem of deciding the exact ordering of the first 100 transactions in history.

So if a system is Byzantine fault tolerant, that means all the honest members will eventually know the exact ordering of the first 100 transactions. Furthermore, each member will reach a point in time where they know that they know it. In other words, their opinion doesn’t just stop changing. They actually know a time when it’s guaranteed that consensus has been achieved.

Bitcoin doesn’t do that. Your probability of reaching consensus grows after each confirmation. You might decide that after six confirmations, you’re “sure enough”, but you’re never mathematically certain. So bitcoin doesn’t have Byzantine fault tolerance. There are a number of discussions online about whether this matters, but, at least for some people, this is important.

If you’re interested in more details on bitcoin’s lack of Byzantine fault tolerance, we can talk about what happens if the internet is partitioned for some period of time. When you start thinking about the details, you actually start to see why Byzantine fault tolerance matters.

You mention in the white paper, “In hashgraph, every container is used, and none are discarded”? Why is this important and why is this not a waste?

In bitcoin, you may spend lots of time and electricity mining a block, only to discover later that someone else mined a block at almost the same time, and the community ends up extending their chain instead of yours. So your block is discarded. You don’t get paid. That’s a waste. Furthermore, Alice may have given you a transaction that ended up in your block but not in that other one. So she thought her transaction had become part of the blockchain, and then later learned that it hadn’t. That’s unfortunate.

In hashgraph, the block (event) definitely becomes part of the permanent record as soon as you gossip it. Every transaction in it definitely becomes part of the permanent record. It may take some number of seconds before you know exactly what position it will have in history, but you immediately know that it will be part of history. Guaranteed.

In the terminology of bitcoin, the “efficiency” of hashgraph is 100%, because no block is wasted.

Of course, after the transactions have become part of the consensus order and the consensus state is signed, then you’re free to throw away the old blocks, but that isn’t because they failed to be used. That’s because they were used, and can now be safely discarded having served their purpose. That’s different from the discarded blocks in bitcoin, which are not used, and whose transactions aren’t guaranteed to ever become part of the history/ledger.

On page 8 of the white paper, you wrote: “Suppose Alice has hashgraph A and Bob has hashgraph B. These hashgraphs may be slightly different at any given moment, but they will always be consistent. Consistent means that if A and B both contain event X, then they will both contain exactly the same set of ancestors for X, and will both contain exactly the same set of edges between those ancestors. If Alice knows of X and Bob does not, and both of them are honest and actively participating, then we would expect Bob to learn of X fairly quickly, through the gossip protocol. But the consensus algorithm does not make any assumptions about how fast that will happen. The protocol is completely asynchronous, and does not make assumptions about timeout periods, or the speed of gossip, or the rate at which progress is made.” What if they are not honest?

If Alice is honest, then she will learn what the group’s consensus is. If Bob is not honest, then he might fool himself into thinking the consensus was something other than what it was. That only hurts himself.

If more than two thirds of the members are honest, then they are guaranteed to achieve consensus, and each of them will end up with a signed state that they can use to prove to outsiders what the consensus was.

In that case, the dishonest members can’t stop the consensus from happening. The dishonest members can’t get enough signatures to forge a bad “signed state”. The dishonest members can’t stop the consensus from being fair.

By the way, that “two thirds” number up above is optimal. There is a theorem that says no algorithm can achieve Byzantine fault tolerance with a number better than 2/3, so that number is as good as it can be.
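In code, the 1/3 and 2/3 thresholds look like this (a trivial helper, not from the white paper):

```python
def max_faulty(n):
    """Largest number of faulty members tolerable: strictly fewer than n/3."""
    return (n - 1) // 3

def supermajority(n):
    """Smallest count that is strictly more than two thirds of n members."""
    return (2 * n) // 3 + 1

# e.g. 4 members tolerate 1 faulty; 100 members need 67 for a supermajority
```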

Are the elections mentioned in the white paper to decide the order of transactions or information?

Yes. Specifically, the elections decide which witness events are famous witnesses. Then those famous witness events determine the order of events, which determines the order of transactions (and consensus timestamps).

What makes yellow “strongly see” from the chart on page 8 of the white paper?

If Y is an ancestor of X, then X can “see” Y, because there is a path from X to Y that goes purely downward in the diagram. If there are many such paths from X to Y, which pass through more than 2/3 of the members, then X can “strongly see” Y. That turns out to be the foundation of the entire math proof.

(To be complete: for X to see Y, it must also be the case that no forks by the creator of Y are ancestors of X. But normally, that doesn’t happen.)
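A rough Python sketch of these definitions (toy event names; fork handling omitted, as the parenthetical notes):

```python
# Toy hashgraph: event -> (creator, parents). Parent edges point "downward".
graph = {
    "x": ("alice", ["a", "b", "c"]),
    "a": ("alice", ["y"]),
    "b": ("bob",   ["y"]),
    "c": ("carol", ["y"]),
    "y": ("dave",  []),
}
members = {"alice", "bob", "carol", "dave"}

def ancestors(event):
    """Every event reachable from `event` by following parent edges."""
    seen, stack = set(), [event]
    while stack:
        for parent in graph[stack.pop()][1]:
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def sees(x, y):
    return y in ancestors(x)

def strongly_sees(x, y):
    """True if downward paths from x to y pass through events created by
    more than 2/3 of the members."""
    crossers = {graph[e][0] for e in ancestors(x) | {x} if e == y or sees(e, y)}
    return 3 * len(crossers) > 2 * len(members)
```

Here "x" strongly sees "y" because its paths to "y" run through events created by all four members, while "a" merely sees "y".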

What’s the difference between weak BFT (Byzantine Fault Tolerance) and strong BFT? Which are you using?

Hashgraph is BFT. It is strong BFT. “Weak BFT” means “not really BFT, but we want to use the term anyway”. Those aren’t really technical terms. A Google search for “weak byzantine fault tolerance” (in quotes) says that phrase doesn’t occur even once on the entire web. And “weak BFT” (in quotes) occurs six times, none of which refer to Byzantine stuff.

People like to use terms such as Byzantine in a weaker sense than their technical definition. The famous paper ‘Practical Byzantine Fault Tolerance’ describes a system that technically isn’t Byzantine fault tolerant at all. My paper references two other papers that talk about that fact. So speaking theoretically, those systems aren’t actually BFT. Hashgraph truly is BFT.

We can also talk about it practically rather than theoretically. The paper I referenced in my tech report talks about how simple attacks on the network can almost completely paralyse leader-based systems such as PBFT or Paxos. That’s not too surprising. If everything is coordinated by a leader, then you can just flood that leader’s single computer with packets and shut down the entire network. If there is a mechanism for them choosing a new leader (as Paxos has), you can switch to attacking the new leader.

Systems without leaders, such as bitcoin and hashgraph, don’t have that problem.

Some people have also used “Byzantine” in a weaker sense, for systems that are merely “synchronous”. This means that you assume an honest computer will always respond to messages within X seconds, for some fixed constant X. Of course, that’s not a realistic assumption if we are worried about attacks like I just described. That’s why it’s important that systems like bitcoin and hashgraph are “asynchronous”. Some people even like to abuse that term by saying a system is “partially asynchronous”. So to be clear, I would say that hashgraph is “fully asynchronous” or “completely asynchronous”. That just means we don’t have to make any assumptions about how fast a computer might respond. Computers can go down for arbitrarily long periods of time, and when they come back up, progress continues where it left off without missing a beat.

Do “famous witnesses” decide which transactions come first?

Yes. They decide the consensus order of all the events, and they decide the consensus timestamp for all the events. And that, in turn, determines the order and timestamp for the transactions contained within the events.

It’s worth pointing out that a “witness” or a “famous witness” is an event, not a computer. There isn’t a computer acting as a leader to make these decisions. These “decisions” are virtually being made by the events in the hashgraph. Every computer looks at the hashgraph and calculates what the famous witness is saying, so they all get the same answer. There’s no way to cheat.

On page 8 of the white paper, you write: “This virtual voting has several benefits. In addition to saving bandwidth, it ensures that members always calculate their votes according to the rules.” Who makes the rules?

The “rules” are simply the consensus algorithm given in the paper. Historically, Byzantine systems that aren’t leader-based have been based on rounds of voting. In those votes, the rules are, for example, that Alice must vote in round 10 in accordance with the majority of the votes she received from other people in round 9. But since Alice is a person (or a computer), she might cheat and vote differently; she might cheat by voting NO in round 10 even though she received mostly YES votes from others in round 9.

But in the hashgraph, every member looks at the hashgraph and decides how Alice is supposed to vote in round 10, given the virtual votes she is supposed to have received in round 9. Therefore, the real Alice can’t cheat, because the “voting” is done by the “virtual Alice” that lives on everyone else’s computers.
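The forced vote can be sketched in a line or two of Python (a simplification: the real algorithm also has special "coin rounds" to break ties, which this ignores):

```python
def virtual_vote(received_votes):
    """The round-r vote a member is *supposed* to cast: the majority of the
    round-(r-1) votes it received. Everyone computes this locally from the
    hashgraph, so the real member never gets a chance to vote differently."""
    yes = sum(1 for v in received_votes if v == "YES")
    return "YES" if 2 * yes >= len(received_votes) else "NO"

# Whatever the real Alice prefers, everyone derives her round-10 vote from
# the round-9 votes visible in the hashgraph:
assert virtual_vote(["YES", "YES", "NO", "YES"]) == "YES"
```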

There are also higher-level rules that are enforced by the particular app built on top of the Swirlds platform. For example, the rule that you can’t spend the same coin twice. But that’s not what that sentence was talking about.

How are transactions validated, and who validates them?

The Swirlds platform runs a given app on the computers of every member who is part of that shared world (a “swirld”). In bitcoin terminology, the community of members is a “network” of “full nodes” (or of “miners”). The hashgraph consensus algorithm ensures that every app sees the same transactions in the same order. The app is then responsible for updating the state according to the rules of the application. For example, in a cryptocurrency app, a “transaction” is a statement that X coins should be transferred from wallet Y to wallet Z. The app checks whether wallet Y has that many coins. If it does, the app performs the transfer by updating its local record of how much is in Y and how much is in Z. If Y doesn’t have that many coins, then the app does nothing because it knew the transaction was invalid.

Since everyone is running the same app (which is Java code running in a sandbox), and since everyone ends up with the same transactions in the same order, then everyone will end up with the same state. They will all agree exactly how many coins are in Y after the first 100 transactions. They will all agree on which transfers were valid and which were invalid. And so, they will all sign that state. And that signed state is the replicated, immutable ledger.
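As a sketch of the cryptocurrency example (illustrative Python, not the Swirlds Java API): every member replays the same transactions in the same consensus order, applying each one only if it is valid, so everyone ends with the same state:

```python
def apply_transaction(balances, tx):
    """tx = (amount, src, dst). Apply the transfer only if the source
    wallet holds enough coins; otherwise do nothing."""
    amount, src, dst = tx
    if balances.get(src, 0) < amount:
        return False  # invalid: every member ignores it in the same way
    balances[src] -= amount
    balances[dst] = balances.get(dst, 0) + amount
    return True

balances = {"Y": 5}
assert apply_transaction(balances, (3, "Y", "Z")) is True
assert apply_transaction(balances, (9, "Y", "Z")) is False  # insufficient funds
assert balances == {"Y": 2, "Z": 3}
```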

What was the original motivation for creating Swirlds?

We can use the cloud to collaborate on a business document, or play a game, or run an auction, but it bothered me that “cloud” meant a central server, with all the costs and security issues that implies. It bothered me a lot.

It should be possible for anyone to create a shared world on the internet, and invite as many participants as they want, to collaborate, or buy and sell, or play, or create, or whatever. There shouldn’t be any expensive server. It should be fast and fair and Byzantine. And the rules of the community should be enforced even if no single individual is trusted by everyone. This should be what the internet looks like. This is my vision for how cyberspace should run. This is what we need.

But no such system existed. Whenever I tried to design such a system, I kept running into roadblocks. It clearly needed to be built on a consensus system that didn’t use much computation, didn’t use much bandwidth, and didn’t use much storage, yet would be completely fair, fast and cheap.

I would work hard on it for days until I finally convinced myself it was impossible. Then, a few weeks later, it would start nagging at me again and I’d have to go back to working intensely on it until I was again convinced it was impossible.

This went on for a long time, until I finally found the answer. If there’s a hashgraph, with gossip about gossip, and virtual voting, then you get fairness and speed and a math proof of Byzantine fault tolerance. When I finally had the complete algorithm and math proof, I then built the software and a company. The entire process was a pretty intense three years, but in the end it turned out to be a system that’s very simple, and which seems obvious in retrospect.


What is new about hashgraph?

  • The DAG with hashes isn’t new, and has been widely used. Using it to store the history of gossip (“gossip about gossip”) is new.
  • The consensus algorithm looks similar to voting-based Byzantine algorithms that have been around for decades, but the idea of using “virtual voting” (where no votes ever have to cross the internet) is new.
  • A distributed database with consensus (a “replicated state machine”) isn’t new, but a platform for apps that can respond to both the non-consensus and consensus order is new.
  • It appears that hashgraph and the Swirlds platform can do all the things that are currently being done with blockchain, and that hashgraph has greater efficiency. Yet, hashgraph also offers new kinds of properties that will allow new kinds of applications to be built.

Overall consensus methodology

What is the underlying methodology of the used consensus?

The Swirlds hashgraph consensus system is used to achieve consensus on the fair order of transactions. It also gives the consensus timestamps on when each transaction was received by the community. It also gives consensus on enforcement of rules, such as in smart contracts.

How many nodes are needed to validate a transaction (% vs number)? How would this impact a limited participation network?

Consensus is achieved when more than 2/3 of the community is online and participating. Almost a third of the community could be attackers, and they would be unable to stop consensus, or to unfairly bias what order becomes the consensus for the transactions.

Do all nodes need to be online for the system to function? Number of current nodes?

Over 2/3 of the nodes need to be online for consensus. If fewer are online, the transactions are still communicated to everyone online very quickly, and everyone will immediately know for certain that those transactions are guaranteed to be part of the immutable ledger. They just won’t know the consensus order until more than 2/3 come online.

Does the algorithm have the underlying assumption that the participants in the network are known ahead of time?

No, that’s not necessary. Though it can be run that way if desired.

Ownership of nodes – consensus provider or participants of network?

The platform can be used to create a network that is permissioned or not.

What are current stages of mechanism?

Transactions are put into “events”, which are like blocks, where each miner can mine many blocks per second. There is never a need to slow down mining to avoid forking the chain. The events are spread by a gossip protocol. When Alice gossips with Bob, she tells Bob all of the events that she knows that he doesn’t, and vice versa. After Bob receives those, he creates a new event commemorating that gossip sync, which contains the hash of the last event he created and the hash of the last event Alice created before syncing with him. He can also include in the event any new transactions he wants to create at that moment. And he signs the event. That’s it. There is no need for any other communication, such as voting. There is no need for proof of work to slow down mining, because anyone can create events at any time.

When is a transaction considered “safe” or “live”?

As soon as Alice hears of a transaction, she immediately verifies it and knows for certain that it will be part of the official history. And so does anyone she gossips with after that. After a short delay (seconds to a minute or two), she will know its exact location in history, and have a mathematical guarantee that this is the consensus order. That knowledge isn’t probabilistic (as in, after six confirmations, you’re pretty sure). It’s a mathematical guarantee.

What is the fault tolerance (how many nodes need to be compromised before the system stops functioning)?

This is Byzantine fault tolerant as long as less than 1/3 of the nodes are faulty/compromised/attacking. The math proof assumes the standard assumptions: attacking nodes can collude, and are allowed to mostly control the internet. Their only limit on control of the internet is that if Alice repeatedly sends Bob messages, they must eventually allow Bob to receive one.

Is there a forking vulnerability?

The consensus can’t fork as long as less than 1/3 are faulty/attacking.

How are the incentives defined within a permissioned system for the participating nodes?

Different incentive schemes can be built on top of this platform.

How does a party take ownership of an asset?

This is a system for allowing nodes to create transactions, and the community to reach consensus on what transactions occurred, and in what order. Concepts like “assets” can be built on top of this platform, as defined by an app written on it.

Cryptography/strength of algorithm

How are the keys generated?

Each member (node) generates its own public-private key pair when it joins.

Does the algorithm have a leader or not?

No leader.

How is node behaviour currently measured for errors?

If a node creates an invalid event (bad hashes or bad signature) then that invalid event is ignored by honest nodes during syncs. Errors in a node can’t hurt the system as long as less than 1/3 of the nodes have errors.


How are controls/governance enforced?

If an organisation uses the platform to build a network, then that organisation can structure governance in the way they desire.

Tokenization (if used)

Are there any transaction signing mechanisms?

Every event is signed, which acts as a signature on the transactions within it. An app can be built on top of this platform that would define tokens or cryptocurrencies.


What is the current time measurement: for a transaction to be validated, and for consensus to be achieved?

The software is in an early alpha stage. The answers to this questionnaire refer to what the platform software will have when it is complete. For a replicated database (every node gets every transaction), it should be able to run at the bandwidth limit, handling as many transactions per second as the bandwidth of each node allows, where each node receives and sends each transaction once (on average) plus a small amount of overhead bytes (a few % size increase). For a hierarchical, sharded system (where a transaction is only seen by a subset of the nodes, and most nodes never see it), it should be possible to scale beyond that limit. But for now, the platform assumes a replicated system where every node receives every transaction.


Does your mechanism have digital signature?

Yes, it uses standards for signatures, hashes, and encryption (ECDSA, SHA-256, AES, SSL/TLS).

How does system ensure the synchrony of the network (what is time needed for the nodes to sync up with network?)

No synchrony is assumed. There is no assumption that an honest node will always respond within a certain number of seconds. The Byzantine fault tolerance proofs are for a fully asynchronous system. The community simply makes progress on consensus whenever the communication happens. If every computer goes to sleep, then progress continues as soon as they wake up. It should even work well over sneaker-net, where devices only sync when they are in physical proximity, and it may take days or months for gossip to reach everyone. Even in that situation, the consensus mechanism should be fine, working slowly as the communication slowly happens. In normal internet connections with a small group, consensus can happen in less than a second.

Do the nodes have access to an internal clock/time mechanism to stay sufficiently accurate?

There is a consensus timestamp on an event, which is the median of the clocks of those nodes that received it. This median will be as accurate as the typical honest computer's clock. The consensus timestamp doesn't need to be accurate for reaching consensus on the ordering of the events, or for anything else important in the algorithm, but it can be useful to the applications built on top of this platform.
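The median-based timestamp can be illustrated with a short sketch (again a hypothetical illustration, not Swirlds code):

```python
from statistics import median

def consensus_timestamp(received_times):
    """Median of the times at which the relevant nodes received an event.

    A median is robust here: as long as fewer than 1/3 of the nodes are
    faulty, they cannot drag the result outside the range of times
    reported by honest nodes, no matter how wrong their clocks are.
    """
    return median(received_times)
```

For example, if five nodes received an event at times 10.0, 10.1, 10.2, 10.3 and 99.0 (one wildly wrong clock), the consensus timestamp is 10.2.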


How does system ensure privacy?

The platform allows each member to define their own key pair, and use that as their identity. If an app is built on top of this platform to establish a network, the app designer can decide how members will be allowed to join, such as by setting up a CA for their keys, or by having votes for each member, or by using proof-of-stake based on a cryptocurrency, and so on. The app can also create privacy, such as by allowing multiple wallets for one user, but the platform simply manages consensus based on a key pair per node.

Does the system require verifiable authenticity of the messages delivered between the nodes (is signature verification in place)?

Yes, everything is signed, and all communication channels are SSL/TLS encrypted.

How does data encryption work?

All communication during a gossip sync is SSL/TLS encrypted, using a session key negotiated using the keys of the two participants. If an app wants further encryption, such as encrypting data inside a transaction so that only a subset of the members can read it, then the app is free to do so, and some of the API functions in the platform help to make such an app easier to write.

Implementation approach

What are current uses cases for consensus mechanism?

In addition to traditional use cases (cryptocurrency, public ledger, smart contracts), the consensus mechanism also gives fairness in the transaction ordering. This can enable use cases where the order must be fair, such as a stock market, or an auction, or a contest, or a patent office, or a massively multiplayer online (MMO) game.

Who is currently working with (venture capitalist, banks, credit card companies, and so on)?

Ping Identity has announced a proof-of-concept product for distributed session management built on the Swirlds platform. Swirlds Inc is currently funded by a mixture of venture capital, strategic partner and angel funding.

– This article is reproduced with kind permission. Some minor changes have been made to reflect BankNXT style considerations. Read more here. Image: inconet, Shutterstock.com

About the author

George Samman

George Samman is the former CMO of Fuzo, which is using blockchain to bring financial inclusion to the developing world. He is also committee chair of the Wall Street blockchain Alliance (WSBA) for blockchain and financial services. He co-founded BTC.sx, now magnr, a bitcoin trading platform, and is a former Wall Street senior portfolio manager and market strategist, as well as technical analyst.