Hashgraph Consensus Timing Vulnerability

I’m confused. There is no ICO or coin raise for Hashgraph. Companies have certainly been funded both traditionally and through ICO on much, much less.

The algorithm has been debated extensively in their Telegram and Discord channels. The proofs are public, yet I know of no credible rebuttal demonstrating that the claims made are false. I’d imagine most people would (rightly) view “independent” verification commissioned by Swirlds as suspect. There are plans to release the source for Hedera at the appropriate time for peer review.

I’ll let my previous comments stand on their own, and I’m happy to respond to questions or specific concerns but a philosophical debate is beyond what I’d consider productive discussion.

Signing off…


Hi, I am doing a PhD on vulnerabilities of blockchains at the University of Derby, and I can see how Kladko’s arguments are valid.

Median timestamps do not make hashgraph asynchronous, and I wonder why the founder claimed that! Then there is evidence in the FLP impossibility result for the timing vulnerability. That’s why PBFT was introduced by Barbara Liskov, which makes sense.

Now for the median time example discussion, any information withheld by famous witnesses can later become an embarrassment. The counterexample Craig gave was off the point, because what if the transaction comes in through Bob, the bad actor? If he withholds the transaction for some time, we have a failed client!

Therefore, thanks for the point.

Ok, so I’m not a tenured professor from any of the universities listed by Kladkogex, but I’m working on consensus in one of the other DAG projects (alephzero.org), and I think that the argument entirely misses the point (and I can prove it). Why the argument is wrong:

The idea of using median timestamp as an ‘unbiased’ timestamp for an event is based on the fact that at the time when the median of the set of timestamps is calculated, there is no possibility of adding an additional element to this set by a dishonest actor. Hence, the attack proposed by Kladko doesn’t make sense. As we read in the Hashgraph whitepaper in the paragraph above the one quoted by Kladko:

First, the received round is calculated. Event x has a received round of r if that is the first round in which all the unique famous witnesses were descendants of it, and the fame of every witness is decided for rounds less than or equal to r. (emphasis added)

Then we read the whole paragraph partially quoted by Kladko:

Then, the received time is calculated. Suppose event x has a received round of r, and Alice created a unique famous witness y in round r. The algorithm finds z, the earliest self-ancestor of y that had learned of x. Let t be the timestamp that Alice put inside z when she created z. Then t can be considered the time at which Alice claims to have first learned of x. The received time for x is the median of all such timestamps, for all the creators of the unique famous witnesses in round r.

Summing up:

  • computation of the ‘time received’ for an event happens only once the fame of all witnesses of its ‘round received’ is decided;
  • the ‘time received’ is the median of timestamps which are uniquely determined by the set of the aforementioned famous witnesses. Hence, it is not possible that an additional timestamp withheld by a malicious actor will be included in the computation of this median at any time in the future.
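Under these assumptions, the received-time computation can be sketched roughly like this (the `Event` structure and field names are my own hypothetical illustration, not Swirlds code):

```python
from dataclasses import dataclass, field
from statistics import median
from typing import Optional

@dataclass(eq=False)  # identity-based hashing, so events can live in sets
class Event:
    creator: str
    timestamp: float                         # time its creator claims it was made
    self_parent: Optional["Event"] = None
    known: set = field(default_factory=set)  # events this event has learned of

def received_time(x: Event, unique_famous_witnesses: list) -> float:
    """Median of the timestamps at which each unique famous witness's
    creator first learned of x (the 'time received' described above)."""
    timestamps = []
    for y in unique_famous_witnesses:
        # z = earliest self-ancestor of y that had learned of x
        z = y
        while z.self_parent is not None and x in z.self_parent.known:
            z = z.self_parent
        timestamps.append(z.timestamp)
    # the witness set is fixed once fame is decided, so this median is final
    return median(timestamps)
```

The point of the sketch is that `unique_famous_witnesses` is a closed set by the time the function runs, so no later event by a malicious actor can contribute a timestamp to the median.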

Hashgraph’s white paper might not be the most well-written mathematics, but the arguments in the paper are correct.


“at the time when the median of the set of timestamps is calculated”

Doesn’t that mean the network must be synchronous in order to be safe? I think that’s what Kladko is getting at: it can’t be asynchronous and safe the way it’s constructed.


Well, now we have Ahsan, a PhD candidate.
And Hashgraph still did not present the three professors :-))
So it looks like the game is being won by Hashgraph skeptics.

Adam - are you willing to take the bet? I am willing to accept it at 10:1.

If you do not present the proof in terms of the professors (see the description above), you pay me $1,000. If you do, I will pay you $10,000))


As a side question, do we even have a “mathematical proof” that PoW solves the Byzantine Generals Problem probabilistically?


Levalicious - no, it doesn’t mean that the network must be synchronous. The median is calculated as soon as enough famous witnesses are revealed, no matter how long that takes, so it doesn’t require any assumptions regarding network delays. Hence, it is asynchronous.

kladkogex - I think that math is about exchanging arguments and providing proofs rather than flashing scientific degrees. And regarding arguments - I think that the ‘skeptics’ are losing, since no one is even trying to rebut my argument. Having said that, I would really like to see more papers from the crypto space be peer-reviewed. However, I’m not willing to take the bet. As I said before, hashgraph is not well-written mathematics. Even though I believe that the proofs are correct, I don’t think it could pass peer review in this form.

maiavictor - it is hard to answer this question, since classically BFT is defined for a fixed set of actors, and a PoW blockchain is permissionless at its core. Here: https://eprint.iacr.org/2016/454.pdf the authors show that the Nakamoto consensus is safe in a way you could call “probabilistically BFT”, provided that the hardness of the puzzle is a function of the network delay (which can be seen as a soft form of synchrony assumption).


IMHO people learn a lot during graduate study, including moral standards. Some of them decide to break the standards later.

Some people do decide to take an alternative path and never go for graduate study.
IMHO it is totally possible to be as good a researcher, if one does not have a graduate degree.

The problem is when people break academic standards. If you have a proof that is not really a proof, you do not call it a proof. If Hashgraph wanted to develop a great algorithm and had a vision but no proofs, they should have stated this clearly in their whitepaper.

The whitepaper in its current form is IMHO clearly misleading. That’s why, IMHO, it should be reviewed by three reasonably sane and independent CS faculty members :)) It is exactly the job of academia to resolve issues like this. In biotech it is normal to require expert opinions of academic scientists. The same should be true for blockchain, IMHO.


Hi kladkogex,

You might find this helpful:



That is media hype rather than a cited paper verifying any mathematical proof. Rather, here is an IEEE paper citing the hashgraph vulnerability:
“The validity is dependent on the (directly or indirectly) outgoing edges of the transaction, which represents the nodes that have validated it. A scale-out throughput can be achieved if the acquirement of the complete graph is not obligated for all nodes. Then, the validity of the transactions is compromised due to its dependency on the validators”.
A Scale-out Blockchain for Value Transfer with Spontaneous Sharding, Zhijie Ren and four more authors. Link here: https://arxiv.org/pdf/1801.02531.pdf
Another way of simply putting it is what Kladko and I have already pointed out. Bob withholds the transaction. What next? If he becomes a famous witness, he turns nasty. If he doesn’t become a famous witness, the client still cries out: where is his product?! Because Bob is not just an event! See the double naming of variables? I mean, it’s hideous!
And my question of why timestamp medians are asynchronous still stands!
I know every new tech product in the market engages tenure-track professors to verify their claims, and this initially does create confirmation bias. But for the company it is just shooting themselves in the foot.
AdamGagol, not being hard on you, but that’s why I entered academia from development. You never see the why of development because you are looking at it just too closely :slight_smile:
Thanks Kladko for making my thesis sparkle!
Craig is on vacations :smiley:
Ahsan Nabi Khan

Wow ) Yet another Alice in Wonderland news item from Hashgraph ) Coq has nothing to do with proving that they can securely map what they got into a linear chain and run smart contracts.

Hashgraph needs to take its original whitepaper which has an incredible amount of unproven and shaky things, turn it into a mathematical paper and have it peer reviewed. Coq is not going to help with that imho.


(Disclaimer: I am not endorsing Hedera Hashgraph in any way)

I played with the Hashgraph consensus a while back, and understand how it works rather well. Not a PhD, so feel free to disregard my comment entirely :slight_smile:

The vulnerability you described depends on how exactly you use Hashgraph. If you wait for a transaction to be assigned a round received, then your vulnerability doesn’t work. Consider the algorithm on page 14. The median timestamp is computed among the events that are reachable from the round r unique famous witnesses, where r is the round received of the event with the transaction in question.

Assuming the other proofs in the paper are not wrong: once the round received is assigned it doesn’t change, the unique famous witnesses are known by definition and don’t change either, thus the set of events reachable from them doesn’t change, and neither does the median timestamp over them.

Note that if you don’t wait for the round received to be assigned before executing smart contracts, your timing attack is still somewhat incomplete, because events are first sorted by round received and only then by timestamp, and since you don’t know the round received for an event yet, you cannot just look at the timestamps of two events to predict their order.
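To make that two-level sort concrete, here is a minimal sketch (field names are hypothetical; the whitepaper additionally breaks remaining ties with whitened signatures):

```python
from collections import namedtuple

# hypothetical event record: name, consensus round, and median timestamp
Ev = namedtuple("Ev", ["name", "round_received", "received_time"])

def consensus_order(events):
    """Events are ordered by round received first; the median timestamp
    only breaks ties within a round, so skewing a timestamp cannot move
    an event across round boundaries."""
    return sorted(events, key=lambda e: (e.round_received, e.received_time))

# an event with a later timestamp but an earlier round still comes first
a = Ev("a", round_received=2, received_time=9.0)
b = Ev("b", round_received=1, received_time=10.0)
print([e.name for e in consensus_order([a, b])])  # ['b', 'a']
```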

Unrelatedly, something I personally found funny about hashgraph is their coin rounds (see the bottom part of page 10). They effectively say that their voting for famous witnesses can stall under certain conditions, and to account for that, every now and then they have all the witnesses vote randomly; since there’s a chance that 2/3 of the votes will end up the same, that unstalls the algorithm. This can give you an estimate of how many witnesses they plan to run, because even for 20 witnesses the chance that in 20 coin tosses more than 2/3 will end up the same is low.
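For what it’s worth, that probability is easy to compute. Under the simplifying assumption that all n witnesses flip independent fair coins, the chance that strictly more than 2/3 land on the same side is a two-sided binomial tail:

```python
from math import comb

def coin_round_agreement(n: int) -> float:
    """P(strictly more than 2n/3 of n fair coin flips land the same way)."""
    threshold = 2 * n // 3 + 1  # smallest integer count strictly above 2n/3
    one_side = sum(comb(n, k) for k in range(threshold, n + 1)) / 2 ** n
    return 2 * one_side         # heads-heavy and tails-heavy cases are disjoint

for n in (20, 50, 100):
    print(n, coin_round_agreement(n))
```

For n = 20 this comes out to roughly 11.5% per coin round, and it drops off sharply as n grows, which is what makes the tail probability an indirect estimate of the intended witness count.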

Understood :slight_smile: My objection to Hashgraph is fundamental. IMHO it is impossible to cheaply transform a DAG into the linear ordering required by a blockchain. The reason is that linear blockchain ordering is equivalent to a binary consensus for each block, and asynchronous binary consensus is expensive; the best known asynchronous binary consensus algorithms are O(N^2). DAGs are much cheaper to build. So it is fundamentally impossible for Hashgraph to end up with a linearly ordered consensus and run smart contracts on it. It is unfixable IMHO; the entire chain of thought of Hashgraph is wrong and insecure, because they do not understand that a DAG and a linear ordering are fundamentally different.

And the reason why they do not understand is that they are not mathematicians or computer scientists and are not able to specify things in a logical way. The entire paper is a joke IMHO.

“They are not mathematicians or computer scientists” is an attempt to appeal to authority.

I wanted to use Hashgraph for Near, read all the proofs very carefully, and had an implementation that I stress-tested, and I am personally convinced that it does properly order transactions. I also understand the algorithm very well, so if you want to try to break it, I would be happy to answer any questions.

Note that before a round received can be assigned, at least one round needs to pass, and a round requires 2/3 of participants to observe that 2/3 of participants observed a witness message for the round (so-called “strongly seeing”), and computing this property for each message cannot be done faster than O(n^2) compute. Since consensus for a particular transaction takes on the order of O(n log n) messages, it takes O(n^3 log n) compute on each node to reach consensus for a single transaction (this gets amortized when you have multiple, so ordering k transactions takes far less than O(k n^3 log n), but I don’t have an exact estimate handy). This computation cost is why I personally ended up not using Hashgraph. But it should somewhat address your concern about the complexity – Hashgraph is by no means cheap.
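For context, “strongly seeing” can be written down naively like this (a toy model I’m using for illustration, not the paper’s optimized algorithm; it also hints at why the per-message cost is substantial):

```python
from dataclasses import dataclass, field

@dataclass(eq=False)  # identity-based hashing, so events can live in sets
class Event:
    creator: str
    parents: list = field(default_factory=list)

def ancestors(e, memo=None):
    """All ancestors of e, including e itself."""
    if memo is None:
        memo = {}
    if e not in memo:
        acc = {e}
        for p in e.parents:
            acc |= ancestors(p, memo)
        memo[e] = acc
    return memo[e]

def strongly_sees(a, b, members):
    """a strongly sees b if events created by more than 2/3 of all members
    lie on paths from a down to b (a's ancestors that have b as ancestor)."""
    anc_a = ancestors(a)
    if b not in anc_a:
        return False
    memo = {}
    on_path = {e for e in anc_a if b in ancestors(e, memo)}
    return 3 * len({e.creator for e in on_path}) > 2 * len(members)
```

Even this toy version does per-pair set work over the graph, which is where the quadratic-per-message observation above comes from.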

Again, disclaimer: I’m attesting to the validity of the Hashgraph consensus algorithm, but do not endorse the Hedera Hashgraph protocol in general.


First of all I must say that kladkogex’s communication style appears very annoying to me, almost like a textbook version of a stereotypical Russian troll. This is unfortunate, as the topic is super interesting IMHO. Personally I would like to see a valid counterexample to the hashgraph consensus, so I decided to get into the discussion despite the annoying communication style.

Ok, first of all we need to clarify what we want to achieve and I propose the following:

Prove that hashgraph consensus is not a Byzantine fault tolerant atomic broadcast algorithm under the assumptions made in the hashgraph paper.

Do you agree? (Please, no moralizing, hate talk, or accusations, blah blah… It’s hard enough to go down that road with you anyway. Stay scientific, ok?)

Ok, how do we do that? BFT atomic broadcast, which hashgraph claims to implement, is defined by the following four axioms:

1.) Validity: if a correct participant broadcasts a message, then all correct participants will eventually receive it.

2.) Uniform Agreement: if one correct participant receives a message, then all correct participants will eventually receive that message.

3.) Uniform Integrity: a message is received by each participant at most once, and only if it was previously broadcast.

4.) Uniform Total Order: the messages are in a total order in the mathematical sense; that is, if any correct participant receives message 1 first and message 2 second, then every other correct participant must receive message 1 before message 2.

The task is then to find a concrete counterexample to one of these axioms. This boils down to finding a time series of concrete hashgraphs (for users A(t), B(t), C(t), …) which violates one of the previous points after we run the algorithms of the paper on it.
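Axiom 4 in particular is mechanically checkable from such a counterexample: every pair of messages delivered by two correct participants must appear in the same relative order for both. A small checker (a hypothetical helper, just to make the counterexample format concrete):

```python
from itertools import combinations

def violates_total_order(deliveries: dict) -> bool:
    """deliveries: participant -> list of messages in delivery order.
    True if two participants disagree on the relative order of any
    pair of messages that both of them delivered."""
    pos = {p: {m: i for i, m in enumerate(seq)} for p, seq in deliveries.items()}
    for p, q in combinations(pos, 2):
        common = pos[p].keys() & pos[q].keys()
        for m1, m2 in combinations(sorted(common), 2):
            if (pos[p][m1] < pos[p][m2]) != (pos[q][m1] < pos[q][m2]):
                return True
    return False
```

A valid counterexample in the sense above would be a series of hashgraphs whose per-participant delivery sequences make this checker return True.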

Now kladkogex claimed in the initial post to have found such a situation. Unfortunately his example missed the actual hashgraph. So kladkogex, can you please rewrite your example into an actual series of hashgraphs for users A, B, C, …, so everyone can run the algorithms on it (by hand) and see the contradiction? After all, you made very strong accusations, and according to the rules of science, which you stubbornly brought into the game, the burden of proof is on you. As you accused craigdrabiktxmq of being mathematically imprecise, it is now your turn to make your own mathematically imprecise proposed counterexample precise, so we can all see that it works.

I think it’s enough to write the appropriate hashgraph related to your counterexample with pen and paper and then send a picture of it here for discussion.

P.S.: Leemon Baird was a professor of mathematics by the time he wrote the paper. He wrote more than 40 other math papers. This of course does not mean that it is correct. Personally I like the style of the hashgraph paper.

However, I’m not a fan of Hedera, not in the least! I think they are like the plague for decentralized payment and I would love to see them fall. But we have to do it properly. It’s David against Goliath, and David cannot afford to rank high on the crackpot index if he wants any chance in this battle.


Atomic Broadcast and Weak Synchrony?
I will repeat what is already written and referenced about the hashgraph timing:
“The validity is dependent on the (directly or indirectly) outgoing edges of the transaction, which represents the nodes that have validated it. A scale-out throughput can be achieved if the acquirement of the complete graph is not obligated for all nodes. Then, the validity of the transactions is compromised due to its dependency on the validators”.
A Scale-out Blockchain for Value Transfer with Spontaneous Sharding, Zhijie Ren and four more authors. Link here: https://arxiv.org/pdf/1801.02531.pdf

Actually, my counterexample is not off the point. If Bob withholds a transaction submitted to him, that happens before the transaction reaches the network. This is something no consensus mechanism can prevent. If the receiving node fails to send transactions on to the rest of the network, there is no action that can be taken by the network - any network.

In Hedera this is easily defended against. Submit your transaction to multiple nodes. The first transaction gossiped to the network will be accepted and ordered. The subsequent transactions are detected as duplicates and rejected.
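That defense is purely client-side redundancy. A toy sketch of the idea (all names here are hypothetical illustrations, not the Hedera SDK):

```python
import hashlib

class Network:
    """Toy network: the first gossiped copy of a transaction is ordered,
    later copies are detected as duplicates by hash and rejected."""
    def __init__(self):
        self.seen = set()
        self.ordered = []

    def gossip(self, tx: bytes) -> bool:
        digest = hashlib.sha256(tx).hexdigest()
        if digest in self.seen:
            return False                # duplicate, rejected
        self.seen.add(digest)
        self.ordered.append(tx)
        return True

class HonestNode:
    def __init__(self, net): self.net = net
    def submit(self, tx): self.net.gossip(tx)

class WithholdingNode:                  # "Bob": silently drops submissions
    def __init__(self, net): self.net = net
    def submit(self, tx): pass

net = Network()
bob, carol = WithholdingNode(net), HonestNode(net)
for node in (bob, carol):               # client submits to several nodes
    node.submit(b"pay Alice 5")
print(net.ordered)                      # the transaction still gets in once
```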

This paper refutes the ability of a Hashgraph network to scale at O(N), where N is the number of nodes in the network. Hashgraph does not claim O(N) scale-out as a function of the number of nodes. The performance charts in the Hedera white paper clearly show that this isn’t the case.

Here is the original paper from which hashgraph took the idea of virtual voting. It is based on the old idea of Lamport clocks and gives a pretty clear analysis of the run time of this family of algorithms:

It is a shame, and IMHO plain evil, that this Leemon bird-person did not mention his sources properly. Section 4 clearly defines virtual voting as votes deduced from the causality structure in the graphs that occur when messages acknowledge other messages.

Roughly speaking, the local run time is O(n^3): every user will have at least one witness node in each round (more under forking [called mutations in the original paper]), and each of these witness nodes executes a consensus algorithm step for each of the n witnesses in all previous rounds whose fame is not yet decided. This is O(n^2) for a single round. Estimates from bubble-hashgraph (a test implementation on GitHub) then suggest that this has to be done for roughly n previous rounds, giving roughly O(n^3) runtime.

What is optimal is the network complexity. This is the case with all virtual-voting-based algorithms from the Moser–Melliar-Smith family. But hashgraph is far less optimal than Moser’s original approach when it comes to local computation complexity (O(n^3)).

Therefore, everyone who is not bothered by the non-locality of these algorithms (the number of nodes or the amount of stake must be known) should really implement one of Moser’s original algorithms instead: much better performance on the local process. IMHO Nakamoto is superior, as it does not depend on the number of users or any overall staking value but is FULLY LOCAL.

Wow :-))

The Hashgraph bots got really personal, touching on my nationality.

I am not a Russian troll, simply because I am Ukrainian. Having said this, I know great scientists and people from Russia. I am a US citizen who was named a Director’s Fellow at Los Alamos National Lab (US) and who received the Otto Hahn Medal for the best PhD research in Germany. I am also the author of more than 20 publications in top scientific journals - something that the Hashgraph whitepaper can never achieve.

If my style is annoying to snake oil sellers, I am more than happy about that :slight_smile: I stand by the $10,000 offer; let’s make it $20,000, given the interest in this message thread.