Post-Pectra Effects on Ethereum: Reorg Rate, Propagation, and Block Size

This analysis was conducted as part of the Pectra Grants Program by the Decipher Research Team.

We would like to thank the ethPandaOps team for providing Xatu-data, an extensive dataset of Ethereum node measurements, and the Ethereum Foundation ESP team for supporting this work through the Pectra grants round.

Our preprocessed dataset and analysis code are openly available.

TL;DR

In this post, we empirically analyze the impact of the Pectra upgrade on Ethereum. We focus on how the reorg rate changed, how block and blob propagation times shifted, and how block sizes evolved. We also connect these outcomes to recent protocol changes such as EIP-7623, EIP-7691, and EIP-7549.

Here are key takeaways:

  • Reorg rate decreased slightly, from 0.1224% → 0.1121% (roughly 8% lower).
  • This reduction aligns with faster propagation times: both blob propagation and block propagation times shifted downward, reducing the fraction of slots with delays near the 4s attestation boundary.
  • Large blocks (>400 kB) are more likely to exceed 4s propagation.
  • The frequency of such large blocks dropped after Pectra, especially due to EIP-7623.
  • The average block size also decreased, mainly driven by EIP-7549, which reduced consensus-layer overhead by aggregating votes.

Contents

  • Data Collection
  • What drives reorgs?
    • Reorg rate before vs after Pectra
  • Propagation Times before vs after Pectra
    • max_prop
    • blob_prop
    • block_prop
    • Block vs Blob: Which comes faster?
  • Block size change
    • Big blocks
    • Normal blocks
  • Conclusion and Future Work

Data Collection

Our analysis uses data from Xatu-data, covering January 1, 2025 to August 27, 2025 (slots #10,738,799 to #12,459,220). The raw data was gathered from the following sources:

  • Since beacon_api_eth_v1_events_block_gossip does not provide data before the Pectra hardfork, block propagation times were instead taken from libp2p_gossipsub_beacon_block. The average propagation times from the two sources differ by less than 10 ms, so the values can be regarded as closely aligned.
  • Blob propagation times are taken from beacon_api_eth_v1_events_blob_sidecar, and reorg events from beacon_api_eth_v1_events_chain_reorg.
  • Validator max effective balance and total validator counts come from canonical_beacon_validators.
  • Block size and transaction calldata are collected from canonical_execution_block and canonical_execution_transaction.

What drives reorgs?


Reorgs are an important indicator of consensus safety. Reorged slots hurt user experience and add uncertainty for validators.

Several factors may contribute to reorgs. In this analysis, we tested three features:

  1. blob_prop — the arrival time of the latest blob in a slot.
  2. block_prop — the time when a block is first seen on the p2p network.
  3. gas_used — total gas consumed, used here as a proxy for execution time.

We also consider max_prop = max(blob_prop, block_prop), since validators must wait for both the block and all blobs before attesting.
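
For concreteness, these propagation times are measured relative to the start of the slot: the first time a block (or the last blob) is seen on the network, minus the slot's start time. A minimal sketch of that conversion follows, using well-known mainnet constants; this is our own illustration, not the exact Xatu pipeline.

```python
# Mainnet beacon chain constants (well-known values, not taken from this post).
GENESIS_TIME = 1_606_824_023   # mainnet genesis timestamp, in seconds
SECONDS_PER_SLOT = 12

def propagation_ms(slot: int, first_seen_unix_ms: int) -> int:
    """Milliseconds between the start of `slot` and the first time the block
    or blob sidecar was observed on the gossip network."""
    slot_start_ms = (GENESIS_TIME + slot * SECONDS_PER_SLOT) * 1_000
    return first_seen_unix_ms - slot_start_ms
```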

To see how well these features explain reorgs, we fit logistic regression models and report the ROC-AUC scores:

ROC-AUC block_prop: 0.974
ROC-AUC blob_prop: 0.845
ROC-AUC max(block_prop, blob_prop): 0.982
ROC-AUC max(block_prop, blob_prop) + gas_used: 0.982

These results show:

  • Propagation times are the main driver of reorgs. Both block_prop and blob_prop matter, and their maximum (max_prop) has the strongest predictive power.
  • Execution time adds little predictive power. Adding gas_used to the model does not change performance, suggesting execution time has little direct effect on reorgs. Its influence is likely indirect, through its impact on block size and thus propagation.
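
To make the comparison above concrete, here is a minimal sketch using scikit-learn. It assumes a per-slot pandas DataFrame `df` with columns `blob_prop`, `block_prop`, `gas_used`, and a binary `reorged` label; these names are our illustration rather than the Xatu schema, and the in-sample AUC printed here is only a rough stand-in for the reported scores.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# df: one row per slot, propagation times in ms, plus gas_used and a 0/1 reorg label.
df["max_prop"] = df[["blob_prop", "block_prop"]].max(axis=1)

feature_sets = {
    "block_prop": ["block_prop"],
    "blob_prop": ["blob_prop"],
    "max(block_prop, blob_prop)": ["max_prop"],
    "max(block_prop, blob_prop) + gas_used": ["max_prop", "gas_used"],
}

for name, cols in feature_sets.items():
    model = LogisticRegression(max_iter=1000).fit(df[cols], df["reorged"])
    auc = roc_auc_score(df["reorged"], model.predict_proba(df[cols])[:, 1])
    print(f"ROC-AUC {name}: {auc:.3f}")
```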

Looking at the distribution of max_prop for reorged vs. non-reorged slots, we find a clear gap. For reorged slots, max_prop is much higher: over 75% of them exceed 4000 ms. This pattern is consistent with default client behavior, where attestations are made around 4 seconds after the slot begins.

When we bin slots by max_prop and compute the reorg probability in each bin, we see a sharp rise near 3900 ms. In fact, slots with max_prop above 3900 ms show about a 26.7% chance of reorg, highlighting how sensitive reorgs are to propagation delays close to the attestation boundary.
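
The binning step can be sketched the same way, again using the hypothetical `df` from above; the 100 ms bin width is our choice for illustration.

```python
import pandas as pd

# Bucket slots into 100 ms bins of max_prop and compute the empirical
# reorg probability within each bin.
bins = pd.cut(df["max_prop"], bins=range(0, 12_001, 100))
reorg_rate_by_bin = df.groupby(bins, observed=True)["reorged"].mean()

# Share of reorged slots among those where max_prop exceeded 3900 ms.
late = df["max_prop"] > 3900
print(f"P(reorg | max_prop > 3900 ms) = {df.loc[late, 'reorged'].mean():.1%}")
```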

Reorg rate before vs after Pectra

The moving average of reorged slots fluctuates over time, but the overall reorg rate decreased slightly after Pectra, from 0.1224% to 0.1121%, a reduction of roughly 8%. This improvement likely relates to changes in propagation times for blocks and blobs. In the next section, we examine how these propagation times shifted after the Pectra hardfork.

| | Before Pectra | After Pectra |
|---|---|---|
| Total slots | 906,535 | 805,546 |
| Reorged slots | 1,110 | 903 |
| Reorg ratio | 0.1224% | 0.1121% |

Propagation Times before vs after Pectra

| Median | Before Pectra | After Pectra | Change |
|---|---|---|---|
| blob_prop | 1877 ms | 1744 ms | -133 ms (-7%) |
| block_prop | 1887 ms | 1832 ms | -55 ms (-2.9%) |
| max_prop | 2014 ms | 1888 ms | -126 ms (-6.3%) |

max_prop

The histogram of max_prop shows a small leftward shift after Pectra. The median decreased from 2014 ms to 1888 ms, a reduction of about 126 ms.

More telling than the median is the share of slots with very high propagation times. Slots exceeding 3800–3900 ms are much more likely to reorg, and this fraction is slightly lower in the post-Pectra period than before.

blob_prop

blob_prop, the latest blob propagation time, also decreased noticeably after Pectra: the median dropped from 1877 ms to 1744 ms, a reduction of about 133 ms. This is notable because EIP-7691 raised the maximum blob count per slot from 6 to 9.

The box plots by blob count show this effect clearly. For every blob count, the median propagation time is lower after Pectra (red) than before (blue). In slots with 6 blobs before Pectra the median was 2151 ms, while in slots with 9 blobs after Pectra it was 2018 ms—despite the higher load, the propagation time was faster.

At the same time, the familiar trend remains: propagation slows as the number of blobs increases. Slots with 5–6 blobs are slower than those with 1–2, and after Pectra this pattern extends into the new 7–9 blob range.

| Blob count | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|
| Before Pectra (median, ms) | 1858 | 1937 | 2008 | 2069 | 2097 | 2151 | - | - | - |
| After Pectra (median, ms) | 1707 | 1749 | 1816 | 1871 | 1898 | 1927 | 1966 | 2000 | 2018 |

block_prop

Block propagation times also improved after Pectra, though the effect was smaller than for blobs. The median block_prop decreased from 1887 ms to 1832 ms, a reduction of about 55 ms.

Block vs Blob: Which comes faster?

The reduction in max_prop appears to come mainly from faster blob propagation. To check this, we measured how often blobs are the bottleneck when computing max_prop—that is, cases where blob_prop is larger than block_prop.

Before Pectra, blobs were the bottleneck in 85.2% of blob-containing slots. After Pectra, this ratio dropped to 74.9%.
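
Counting how often blobs are the bottleneck is a simple share, sketched below for the same hypothetical DataFrame, with an assumed `blob_count` column and a boolean `post_pectra` flag marking slots after the fork.

```python
# Restrict to blob-carrying slots, then compute the fraction where the last
# blob arrived later than the block, split by fork period.
with_blobs = df[df["blob_count"] > 0]
blob_is_bottleneck = with_blobs["blob_prop"] > with_blobs["block_prop"]
print(blob_is_bottleneck.groupby(with_blobs["post_pectra"]).mean())
```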

The chart below shows this trend: blob bottlenecks became less common after Pectra. Also, as expected, the chance of blobs being the bottleneck increases with the number of blobs in the slot.

We compared block_prop and blob_prop across slots with different blob counts, before and after Pectra. Before Pectra, once a slot had more than 2 blobs, the median blob_prop was already higher than the median block_prop. After Pectra, this threshold shifted upward: blob_prop exceeded block_prop only once a slot contained more than 4 blobs.

Since block propagation time depends in part on block size, we next examine how recent protocol upgrades have changed block sizes.

Block size change

Recent Ethereum upgrades have affected block size in different ways. EIP-7623 introduced a floor price for calldata, making calldata-heavy (DA-purpose) transactions more expensive and encouraging that data to move to blobs; this targets the worst-case block size. EIP-7549 further reduced block size on the consensus layer by allowing identical votes to be aggregated, which cut consensus data and lowered the average block size.
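
Roughly, EIP-7623 prices a transaction's calldata at a floor rate whenever its execution gas is small relative to its calldata, which is what makes pure data-posting via calldata unattractive. A simplified sketch of that rule, ignoring contract-creation costs and using the constants from the EIP:

```python
TX_BASE_COST = 21_000
STANDARD_TOKEN_COST = 4          # standard gas per calldata token
TOTAL_COST_FLOOR_PER_TOKEN = 10  # EIP-7623 floor gas per calldata token

def tx_gas_used(calldata: bytes, execution_gas: int) -> int:
    """Simplified post-EIP-7623 gas accounting for a non-creation transaction."""
    zero_bytes = calldata.count(0)
    tokens = zero_bytes + 4 * (len(calldata) - zero_bytes)  # nonzero bytes count 4x
    standard = STANDARD_TOKEN_COST * tokens + execution_gas
    floor = TOTAL_COST_FLOOR_PER_TOKEN * tokens
    return TX_BASE_COST + max(standard, floor)
```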

At the same time, the maximum block size has also been pushed upward through gas-limit increases: from 30 million to 36 million in January 2025, and then from 36 million to 45 million in July 2025. These changes expanded the ceiling on block size even as other upgrades worked to reduce it.

Big blocks

Larger blocks tend to propagate more slowly, which can increase the chance of reorgs. The figure below shows the fraction of blocks with block_prop above 4 seconds, grouped by block size. For blocks smaller than 300 kB—which make up more than 99.5% of all blocks—the fraction with high propagation time is very low and fairly stable. In contrast, very large blocks show a clear penalty. For example, blocks larger than 500 kB have a 2.05% chance of exceeding 4 seconds in propagation.


This shows that while most blocks are small enough to propagate quickly, the rare very large blocks introduce a noticeable delay and higher reorg risk. The reductions in block size brought by EIP-7623 therefore play an important role in limiting the occurrence of these slow-propagating blocks.

The distribution of block sizes shifted noticeably after Pectra. The figure compares the pre-Pectra period with the post-Pectra period, plotted on a log scale. This shows that EIP-7623 cut down the extreme cases. By limiting the occurrence of very large blocks, it helped reduce the long propagation delays that are most likely to contribute to reorgs.

The reduction in the ratio of big blocks can also be explained at the transaction level. The figure shows the percentage of total calldata consumed, grouped by calldata size, before and after EIP-7623.

Before EIP-7623, a large share of calldata consumption came from very big transactions, especially those in the 350–400k token range, which alone accounted for over 12% of all calldata used. After EIP-7623, this heavy concentration nearly disappeared. Other high-calldata ranges (above 100k tokens) also dropped sharply.

This pattern shows that EIP-7623 successfully shifted the data load away from calldata. By cutting down extreme calldata-heavy transactions, it reduced the chance of producing very large blocks, which in turn helps block propagation.

We also analyzed the calldata ratio, defined as the total calldata size in a transaction divided by its gas usage. After EIP-7623, the distribution became more concentrated around the center, with noticeably lower variance. This indicates that extreme calldata-heavy transactions have become less common, making block sizes more stable.
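
The ratio itself is straightforward to compute per transaction; in the sketch below, `tx`, `calldata_size`, `gas_used`, and `post_pectra` are hypothetical names standing in for the transaction-level data.

```python
# Calldata ratio per transaction: calldata bytes divided by gas used.
tx = tx[tx["gas_used"] > 0].copy()
tx["calldata_ratio"] = tx["calldata_size"] / tx["gas_used"]

# Compare the spread of the distribution before and after EIP-7623.
print(tx.groupby("post_pectra")["calldata_ratio"].std())
```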

Normal blocks

The chart below shows the total block size over time. As the gas limit increased from 30M to 36M, and later from 36M to 45M, average block size rose accordingly. A notable break occurs at the Pectra hardfork: block sizes dropped sharply after the fork, and the fluctuations became visibly smaller as well.

To make a fair comparison, we focus on the period when the gas limit was fixed at 36M. In this window, the execution payload size stayed almost the same, showing only a 2.9% reduction. By contrast, the spread of the execution payload size fell sharply, from 31,735 bytes to 18,637 bytes, an effect largely attributable to EIP-7623.

| | Before (bytes) | After (bytes) | Change |
|---|---|---|---|
| Execution payload size | 44,136 | 42,839 | -1,297 (-2.9%) |
| Consensus size | 17,603 | 4,965 | -12,638 (-71.8%) |
| Total block size | 61,739 | 47,805 | -13,934 (-22.6%) |

Consensus data size, defined as the difference between total block size and execution payload size, fell by more than 12 kB, a reduction of about 72%. This points to EIP-7549 as the main driver of block size reduction, with EIP-7623 contributing primarily by stabilizing execution payload variability.

Conclusion and Future Work

The Pectra hardfork, together with recent gas limit increases, has changed several aspects of Ethereum’s performance. Scalability improved for both blocks and blobs, with smaller average block sizes, fewer extreme outliers, and reduced variance. These changes helped lower propagation delays, and the reorg rate decreased slightly, suggesting that consensus security has not been harmed and may even have improved.

What remains uncertain is the precise cause of the faster propagation. Both block and blob propagation times fell, but it is unclear whether this came from client-level optimizations, network effects, or validator dynamics. EIP-7251 could eventually reduce network load by lowering validator counts, but this effect was not visible during our analysis window. Future work should look deeper into client and network factors, and monitor long-run validator participation under EIP-7251, to better explain the observed improvements.

Congrats! Interesting results! Great and important research!

just a quick q: assuming profit-maximizing, economically rational validators, IMHO it’d be interesting to also see whether (some of) the reorged blocks correlate with either

  1. higher MEV block content; i.e., blocks are reorged because they have a higher MEV content, hence, the subsequent validator is motivated to reorg that block and claim the juicy MEV opportunities to themselves, or,
  2. RANDAO manipulation, reorgs might also happen because validators want to manipulate the randomness beacon in a way that they could propose more blocks in subsequent epochs.

Maybe I’m paranoid, but to me, it’d be completely plausible that some reorgs are absolutely intentional and happen for profit-maximizing reasons, i.e., maximizing MEV, or to manipulate the RANDAO for higher number of proposed blocks in the next epochs.

@seresistvanandras Thanks a lot for raising this interesting point!

I don’t think it’s paranoid at all. For MEV, a couple of months ago we looked at the MEV income of slots right after a reorg. The differences were small overall, and interestingly, some MEV-maximizing validator entities tended to include late blocks (4–8s) rather than fork them out. My guess is that reorg attempts carry their own risks, so even with juicy MEV opportunities, the risk/reward doesn’t always favor a reorg.

For RANDAO manipulation, we haven’t explored that yet, but I agree it’s worth checking. Before connecting it directly to reorgs, I first plan to look for signs among large validator entities — specifically, whether they really strategically time their RANDAO reveals to maximize their chances of proposing blocks.

If I find anything meaningful, I’ll make sure to share it here.

Defaults are strong, but not impervious. My guess is that most stakers are doing whatever default clients do, and no one has gone out of their way to make it easy for people to do something that may be financially wiser but non-default.

Consider that Ethereum went many many years without a significant amount of MEV extraction, and then almost overnight nearly every miner was exploiting MEV. The defaults carried us for quite a while, but in the end incentives won out.

@MicahZoltu Thanks for sharing your insights!
I agree that defaults won’t hold forever, and block building will likely become more “efficient” as profit-maximizing strategies (including intentional reorgs) come into play. We’re already seeing hints of this, as you mentioned (e.g., Kiln providers playing timing games and slightly adjusting attestation deadlines). It’s definitely worth exploring the economic incentives for deviating from defaults and what kind of equilibrium that might create.
