Trustless validator blackmailing with the blockchain

True, but only in the case where many keys are stolen, which is already a pretty horrible situation to be in, and one I believe we should be able to defend against (not saying it’s trivial, of course)

There are other slashable offences. I don’t see a good reason to allow this?

I never quite understood why one would want separate keys in a PoS system.
Having the stake at risk should imho be part of the deal of getting rewarded for running a validator node, precisely because it strongly incentivizes running a well-secured system. The resilience of the network depends on that.
If somebody doesn’t feel confident enough in their ability to secure a system, it’s probably better for them not to be a validator anyway. It doesn’t help the network to have a lot of validators if too many of them could be taken over by a sophisticated attacker (e.g. through a zero-day exploit). I think that’s the main risk of a PoS system compared to a PoW system.

Probably the attack described here shows that trying to separate the risk of losing the stake from the risk of having the signing key stolen is futile in a design which allows slashing.
I’m not sure that statement is true. If it is, it’s probably better to not separate keys in order to not obscure that risk.

The way to attack a validator is by attacking the software supply chain, either through subversion or by exploiting a zero-day. Once in, the attacker has a gun to the validator’s head. Whether they choose to pull the trigger and profit, or to extort and profit, is a minor detail.

To be clear, a supply chain attack has multiple victims immediately. A small number of individual hacks aren’t a realistic scenario.

I think very few people feel confident enough to be able to secure a system in a way that there is absolutely no way for it to be broken. People can physically break into your house and take the computer you’re running the validator on. You simply can’t stop that.

The incentives that you want are that

  • uncorrelated attacks that affect only a small number of validators come with a small penalty
  • correlated attacks that affect a large number of validators come with a huge penalty

This is because the former do not actually compromise the security of the system, while the latter do.
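This penalty shape can be sketched with a toy formula (illustrative only; the base penalty, multiplier, and cap here are assumptions, not the exact spec rule):

```python
# Toy sketch of a correlation penalty: each slashed validator loses more
# the larger the fraction of all validators slashed around the same time.
# Constants and formula are illustrative, not the exact spec values.
STAKE = 32.0  # ETH per validator

def correlation_penalty(fraction_slashed: float, base: float = 1.0,
                        multiplier: float = 3.0) -> float:
    """Assumed rule: a small base penalty plus a term proportional to the
    fraction of the whole validator set slashed together, capped at the stake."""
    return min(base + multiplier * fraction_slashed * STAKE, STAKE)

# An isolated compromise costs roughly the base penalty...
print(correlation_penalty(1e-6))   # ~1 ETH
# ...while a correlated attack on 32% of validators nearly wipes the stake.
print(correlation_penalty(0.32))   # ~31.7 ETH
```

So an attacker who compromises one machine gains little leverage, while one who compromises a large fraction of the network faces stake-destroying penalties, which is exactly the incentive gradient described above.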

I would say the dual-key system does create the incentives we want, as illustrated by this attack. It is only devastating when an attacker can get access to a large number of staking keys; with a small number, the extortion is much less effective (as each validator would only stand to lose about 1 ETH compared to 32). If there were only one key for both staking and withdrawal, then any compromise would lead to loss of all funds, which would mean physical security is required; only very few people could afford that.

The hacker claims a share of the funds at risk rather than the whistleblower reward (blackmailing) and can ask others to pay to learn if they have been hacked (“blackmailing in the dark”). That’s interesting even with a single hacked validator.

Now, with the current way we slash people, the attacker is incentivized to batch his blackmailing, but also to spread as much FUD as possible so that people overestimate how many validators are actually hacked and thus accept to pay more.

If I have calculated the penalties right, then today, with 10M ETH staked, a hacker taking 20% of the slashable funds (so not that much) and doing no “blackmailing in the dark” gets:

| # of validators slashed | 1 | 1% | 2% | 4% | 8% | 16% | 32% |
|---|---|---|---|---|---|---|---|
| Individual penalty (ETH) | 1.00 | 1.93 | 2.86 | 4.72 | 8.44 | 15.88 | 30.76 |
| Hacker’s reward per validator (ETH) | 0.20 | 0.39 | 0.57 | 0.94 | 1.69 | 3.18 | 6.15 |
| Total hacker’s reward (ETH) | 0.202 | 1,206 | 3,575 | 11,800 | 42,200 | 158,800 | 615,200 |
| Total hacker’s reward ($, 1 ETH = $250) | $50 | $301,563 | $893,750 | $3 million | $11 million | $40 million | $154 million |
| Ratio vs. simple whistleblower reward | x4 | x7 | x10 | x17 | x31 | x58 | x112 |
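As a sanity check, the totals follow from the stated assumptions: 10M ETH staked means 312,500 validators of 32 ETH each, and the hacker takes 20% of each individual penalty:

```python
# Sanity check of the totals above: 10M ETH staked = 312,500 validators of
# 32 ETH each; the hacker demands 20% of the slashable funds. The individual
# penalties per slashed fraction are the ones stated above.
TOTAL_STAKE = 10_000_000  # ETH
VALIDATOR_STAKE = 32      # ETH per validator
HACKER_SHARE = 0.20

individual_penalty = {0.01: 1.93, 0.02: 2.86, 0.04: 4.72,
                      0.08: 8.44, 0.16: 15.88, 0.32: 30.76}

for fraction, penalty in individual_penalty.items():
    n = round(TOTAL_STAKE / VALIDATOR_STAKE * fraction)
    total = n * penalty * HACKER_SHARE
    print(f"{fraction:>4.0%}: {n:>7} validators slashed, "
          f"hacker takes {total:>9,.0f} ETH (${total * 250:>12,.0f})")
```

For example, at 4% slashed that is 12,500 validators × 4.72 ETH × 20% = 11,800 ETH, matching the row above.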

The hacker can also target staking pools of course (but users have to trust staking pools now: Trustless Staking Pools).


I understand this. I just said that this still leads to the right incentives in protecting your key (basically, proportionally invest more in security against attacks that could affect many validators as compared to only one).

It is annoying that the dominant strategy upon detecting validator misbehaviour (in this case, not protecting keys) would be blackmailing instead of reporting.

BTW, the game theory of this is actually interesting. Unless the blackmailer can actually make it credible that

  • They will slash if not paid
  • They will destroy the key and not slash if paid

then the incentives actually work out differently:

  1. The blackmailer – upon being paid whatever amount – has no incentive to actually destroy the key and thus should repeat the blackmail ad infinitum
  2. Since this is the case, the rational strategy for any victim is not to pay anything.
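Point 2 can be made concrete with toy numbers (the penalty and ransom values below are purely illustrative):

```python
# Toy payoff comparison for the argument above: without a commitment device,
# the blackmailer keeps the key after being paid, so the victim faces the
# same demand again next round. Numbers are illustrative only.
PENALTY = 1.0   # ETH lost if slashed (isolated case)
RANSOM = 0.5    # ETH demanded per round of blackmail

def cost_of_paying(rounds: int) -> float:
    # The key is never destroyed, so the ransom recurs every round.
    return RANSOM * rounds

# After a few rounds the victim has already paid more than one slashing
# would have cost, so the rational strategy is to refuse from the start.
print(cost_of_paying(3) > PENALTY)
```

Since the victim can foresee this unbounded repetition, refusing to pay dominates, unless the blackmailer can credibly commit to the two conditions above.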

One way of doing this is enforcing it through a smart contract that the attacker funds, and that will burn the funds if a slashing is submitted despite the ransom being paid. However, this is not very plausible: (a) the attacker would have to commit a lot of funds, which could be frozen via a concerted hard fork (very plausible if >10% of validators have just been attacked), and (b) they would also expose their funds if one of those validators gets slashed for another reason after paying the ransom.

So, the blackmailing might be much harder to execute than it is proposed here. At least I don’t see an easy way to do this.

> They will slash if not paid

If the victim does not pay, or tries to exit, then the best strategy for the attacker is to slash the victim, and the victim knows it.

> They will destroy the key and not slash if paid

The contract I proposed in my initial post was:
The victim sends the funds to a smart contract. These funds are locked until:

  • case 1: someone (i.e. the victim) proves that the victim has actually been slashed, in this case the funds are returned to the victim’s address.
  • case 2: someone (i.e. the hacker) proves that the victim has exited or that a delay (a year) has passed, in this case the funds are sent to the hacker’s address.

This way the attacker doesn’t have to lock any funds.
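For illustration, the escrow described above can be sketched in Python (the real version would of course be an on-chain contract; the class and method names here are made up):

```python
# Minimal sketch of the ransom escrow described above. The victim locks the
# ransom; it is released depending on which proof arrives. All names are
# illustrative, not an actual contract interface.
class BlackmailEscrow:
    def __init__(self, victim: str, hacker: str, amount: float, deadline: int):
        self.victim, self.hacker = victim, hacker
        self.amount, self.deadline = amount, deadline
        self.paid_to = None  # funds stay locked until one case triggers

    def prove_victim_slashed(self):
        # Case 1: the victim got slashed anyway -> ransom returned to victim.
        if self.paid_to is None:
            self.paid_to = self.victim

    def prove_victim_exited(self):
        # Case 2a: the victim exited without being slashed -> hacker is paid.
        if self.paid_to is None:
            self.paid_to = self.hacker

    def claim_after_delay(self, now: int):
        # Case 2b: the delay (e.g. a year) passed with no slashing -> hacker is paid.
        if self.paid_to is None and now >= self.deadline:
            self.paid_to = self.hacker

# Only the victim's ransom sits in the contract; the attacker locks nothing.
escrow = BlackmailEscrow("victim", "hacker", amount=1.0, deadline=365)
escrow.claim_after_delay(now=400)
print(escrow.paid_to)
```

Note how the payout conditions mirror the two cases: the hacker only collects if the victim was in fact left unslashed (or exited), which is what makes the commitment credible.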


I agree, the incentives do not change. But the reward for the attacker increases by several orders of magnitude, which also means the attacker can invest more than before.


@dankrad I agree.
What I’m concerned about is that the key separation makes people who are only superficially familiar with how it works believe that they don’t need to care much about security, because the “important” key is not on that machine anyway. I know folks with little knowledge of IT security who run a lot of “master nodes” for various chains and who reason exactly like that.
So, having 2 keys is fine. But we should make sure that prospective validators are aware of these risks. It’s not about the tech itself, but about how it’s communicated. The incentives can only work if they are understood.


What makes you think that the majority of stakers will be skilled if you make the setup harder to secure? Such a strategy might serve to reduce the number of skilled stakers, while not discouraging the ignorant.

Right, that could happen.

What I’m concerned about is a narrative where people believe that running a node is a piece of cake everybody can and should do, because the withdrawal key isn’t there anyway, thus nothing could be stolen.
This could result in a network where a large number of nodes can be compromised by a skilled attacker.
The worst outcome would in my opinion be if such an attacker could abuse that power without hurting the node operators themselves - such that they wouldn’t even notice that their node is being used for malicious purposes.
I’m not yet familiar enough with Ethereum 2.0 to come up with concrete examples of how that could happen. An example from the web2 world for this kind of issue: Exploiting Wordpress Pingbacks
I’d guess that an attacker controlling say >10% of the nodes could mess with the network in ways which hurt it.

What this means for me:
It’s probably a good thing if the risk of having a portion of the stake stolen by an intruder is not zero. But only if that risk is well known and thus acts as an incentive for more attention on security.

So, I’m not for making it harder to secure nodes, but for making sure that people care enough about protecting the validator key.

Right, I should have re-read the post. I forgot about that after going down in the discussion. Looks like the game theory is sound.

I do think we are very clear on this that securing the validator key is very important!

I hope for the emergence of staking hardware wallets soon that

  • Don’t allow export of the staking key
  • Will never allow signing of a slashable message
  • Allow the above two points to be certified by a trustworthy manufacturer – so you can run it even in a datacenter with the assurance that the staking key will be safe

Some nice additional ideas:

  • Add a GPS/Glonass/Beidou module so you can get an NTP-independent time source and be safe from all timing attacks
  • Add a Wifi module, so you can just hide it somewhere under a floorboard for increased physical security :slight_smile:

What makes you think that even a prudent validator would know that they downloaded and updated to a compromised version of the node software? After all, they would have covered all the other hardening and opsec procedures.

Do you think it will be wise for a validator to outsource their vulnerability to a third party HW/SW provider? Why wouldn’t a skilled provider use the advantage of the more secure wallet to get more gains than someone who hasn’t invested the effort?

Not sure if I understand your question – do you mean a wallet provider who incorporates a back door?

In fact, the prudent validator can’t know for sure.
But the design of Casper slashing (the individual being punished more if many fail at the same time) works in favor of those who avoid doing what everybody else does anyway. It incentivizes decentralization in a very generic way, including e.g. trying not to use the same software and distribution channels everybody else relies on.
As a result of this multi-dimensional decentralization (the term diversity seems to fit), it should become much harder for an attacker to get in control of a large chunk of the network.

This could also mean that for non-techies it’s safer to ask a friend to run a validator node for them instead of using some one-click solution which requires no understanding of the system and which many others may be using too.
I think that’s better for network resilience, and thus I am skeptical of attempts to make running a validator node very easy.
We will see how it plays out.

It is worth pointing out that slashing rewards go to the block proposer that includes the slashing information, not the entity that broadcasts the slashing details. This makes this a relatively difficult attack to pull off for a large number of validators.

Very much so. As @dankrad says above, having an HSM with built-in slashing protection is something that is not simple to do, and beyond the abilities of most. Any pure software solution is vulnerable to an attacker gaining the private key.

As to whether you trust the HSM provider, that’s a different matter. As no one has built one yet, it’s hard to decide if they’re trustworthy or not.


It’s like hardware wallets – hopefully there will be providers who will make a good effort and show to the community that they’re building things with good faith (open source, commissioning audits, etc.).
We can probably add extra layers of security over time:

  • Since only very limited communication from the HSM to the outside is necessary, you can probably add a device to the connection that checks that only the required signatures, and nothing else, come out of the HSM. Then, even if the HSM manufacturer is compromised, you would still be protected
  • As we have made all the efforts to be able to run a validator as a secure multi-party computation, it is also possible to use several HSMs from different manufacturers, and you would only be exposed if 2/3 of them are compromised

I don’t expect these to happen next year, but if Ethereum becomes massively successful and the capital requirements go up to $100k or more, we will probably see some of this.