Casper FFG: CAPM & Validation Yield

Link to working doc

tl;dr While the CAPM model and Sharpe ratio have major limitations, there are concrete takeaways for designing Casper incentivization. Primarily, the more we can limit the standard deviation of validation yield, the lower the required returns of the validators. That allows for either lower issuance/dilution or, for any given level of issuance, a higher risk-adjusted return, which makes participation in the network more compelling.

Introduction to CAPM

E(R_i) = R_f + \beta_i (R_m - R_f)

Rearranged in terms of the risk premium:

E(R_i) - R_f = \beta_i (R_m - R_f)

In other words, the risk premium of a given asset (such as an ETH validation stake) should equal (a) the relative volatility of the asset versus the market (its beta) times (b) the market risk premium.

  • The risk premium (E(R_i) - R_f) is defined as the expected return of the asset in excess of the risk-free rate (e.g. the 3-month US Treasury bill rate).
  • The beta of the asset (\beta_i) is the covariance of the asset returns with the market returns divided by the variance of the market returns, i.e. \rho_{i,m} \sigma_i / \sigma_m. Under the simplifying assumption of full correlation (\rho_{i,m} = 1), this reduces to \sigma_i / \sigma_m. Either way, it measures the volatility of the asset returns relative to the market.
  • The excess market return (R_m - R_f) is the return of a given “market” in excess of the risk-free return.
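As a quick sketch of the equations above — all rates and volatilities below are invented placeholders, not estimates for ETH:

```python
# CAPM sketch built directly from the formulas above.

def beta_from_vols(sigma_i, sigma_m, correlation=1.0):
    """beta_i = rho_{i,m} * sigma_i / sigma_m; correlation = 1 recovers
    the simplified sigma_i / sigma_m used in this post."""
    return correlation * sigma_i / sigma_m

def required_return(r_f, beta, r_m):
    """E(R_i) = R_f + beta_i * (R_m - R_f)."""
    return r_f + beta * (r_m - r_f)

beta = beta_from_vols(sigma_i=0.30, sigma_m=0.20)      # beta = 1.5
print(required_return(r_f=0.02, beta=beta, r_m=0.10))  # -> ~0.14
```

Note how cutting sigma_i directly cuts the required return: halving it to 0.15 drops the required return to ~0.08 with everything else fixed.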

The two big questions for this analysis are:

  1. What is the correct and reasonable choice of “market returns” to benchmark this asset class against?
  2. How will the reward/penalty parameters affect \sigma_i, and therefore the \beta_i, of the asset?

The upshot: the more we can limit the standard deviation of the asset returns (i.e. make them more predictable), the less we have to reward validators. That means either less issuance/dilution of ether value, or a higher excess return for the same level of issuance.

To take it one step further: there are three real drivers of the asset's required return E(R_i). The required return of the asset will be greater when:

  1. The market returns are higher.
  2. The standard deviation of the asset returns is higher.
  3. The standard deviation of the market returns is lower.
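The three drivers fall straight out of the formula; here is a toy numeric check using the simplified \beta_i = \sigma_i / \sigma_m, with all inputs invented for illustration:

```python
# Numeric check of the three drivers of required return listed above.

def req(r_f, r_m, sigma_i, sigma_m):
    """Required return E(R_i) under the simplified CAPM (beta = sigma_i / sigma_m)."""
    return r_f + (sigma_i / sigma_m) * (r_m - r_f)

base = req(r_f=0.02, r_m=0.10, sigma_i=0.30, sigma_m=0.20)
assert req(0.02, 0.12, 0.30, 0.20) > base  # 1. higher market returns
assert req(0.02, 0.10, 0.40, 0.20) > base  # 2. higher asset std dev
assert req(0.02, 0.10, 0.30, 0.15) > base  # 3. lower market std dev
```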


The main takeaway here is that there is a direct cost to ETH holders for having a high standard deviation of validator returns. So for a given level of “economic security,” we should strive to minimize the standard deviation of validator returns. That allows for (1) additional “resources” to increase penalties / the cost of attack by increasing TD, (2) decreasing issuance and enhancing the value of ETH, or (3) providing additional excess risk-adjusted returns to validators (attracting a broader set of validators).

Also related: the Sharpe ratio and its cousin, the Sortino ratio
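Both ratios can be sketched in a few lines. The yield series below is invented, both ratios use population (not sample) deviation, and the risk-free rate is assumed zero for simplicity:

```python
import statistics

# Sketch of the Sharpe and Sortino ratios on a toy series of periodic
# validator yields. Sharpe divides mean excess return by total
# volatility; Sortino divides by downside deviation only.

def sharpe(returns, r_f=0.0):
    excess = [r - r_f for r in returns]
    return statistics.mean(excess) / statistics.pstdev(excess)

def sortino(returns, r_f=0.0):
    excess = [r - r_f for r in returns]
    downside_dev = (sum(min(e, 0.0) ** 2 for e in excess) / len(excess)) ** 0.5
    return statistics.mean(excess) / downside_dev

yields = [0.05, 0.04, -0.01, 0.06, 0.03]
print(sharpe(yields), sortino(yields))  # Sortino > Sharpe: little downside here
```

Sortino matters for validators because rare-but-large penalties (downside) are exactly what the mechanism design should bound, even if total volatility stays modest.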


This looks like it might be an argument for lower p values as I describe here :slight_smile:


> The required return of the asset will be greater when:
>
> 2. The standard deviation of the asset returns is higher.

I’m a total econ newbie (so let me know if I’m totally off-base here :slight_smile: ), but it seems like the standard deviation of the returns should not include the returns of validators who were slashed for purposeful malicious behavior. E.g. the returns of validators who got caught (and slashed!) attempting a purposeful finality reversion attack shouldn’t be counted in this calculation.

The reasoning is that, if you are an honest node, there isn’t more risk to validation just because some evil validators decide to get themselves slashed (though their behavior might change your rewards in some way as well). This makes sense, as we wouldn’t want to minimize the difference between good-validator and evil-validator returns anyway.
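The point can be illustrated with a toy calculation: filter out the deliberately-slashed validators before measuring the spread of returns. The return values and the -0.5 cutoff are invented purely for illustration:

```python
import statistics

# Toy illustration: the measured std dev of validator returns shrinks
# once deliberately-slashed validators are excluded from the sample.

all_returns = [0.05, 0.04, 0.05, 0.06, -0.90, 0.05]   # -0.90: slashed attacker
honest_returns = [r for r in all_returns if r > -0.5]  # crude filter (assumption)

spread_all = statistics.pstdev(all_returns)
spread_honest = statistics.pstdev(honest_returns)
print(spread_honest < spread_all)  # True: honest-only spread is far smaller
```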


Yes, lower than p=1 would add a “defensive” feature to validator returns per your point.

Although, if we believe that higher p-values more tightly prescribe the issuance level in relation to TD, that would be an argument that higher p-values lower the std deviation of returns by providing a more predictable relationship between the variables mentioned.

This is a subtle tradeoff across multiple variables that all affect perceived risk/reward of participation.


Good insight, but the inference is actually flipped.

What you mentioned is not an affordance / free assumption, but a result we’d like to drive by having clear mechanisms/incentives with deterministic guarantees on rewards and penalty protection based on a validator’s actions (i.e. online validators shouldn’t lose money, voting should always be more lucrative than not voting, etc.).

We want to create a mechanism with incentives such that the validators themselves can assume what you just said about not including the chances of being slashed.

One of the reasons we can’t just take that as a given is that there’s no guarantee validators are 100% immune from slashing caused by bugs or other observed Byzantine behavior that doesn’t stem from malicious intent. Validators need a guarantee of no slashing; without one, they will necessarily incorporate that risk into their assessment and therefore their required return.

Taking a step back, I think the mechanism design in general can assume and design for the “ideal execution,” but the implementation and incentivization should assume maximal entropy and push the equilibrium toward the “ideal execution” as imagined.

Let me know if that makes sense.