Here’s one possible model. Suppose the block time is 1 minute and blocks take $N$ minutes to propagate and verify (we’ll use minutes as the common unit for simplicity; clearly $N < 1$). Then, when a block is received, there is a probability $N$ that the next block will be created while that block is still being verified, creating an off-chain block, which would also need to be verified but would not contribute to the chain. This already creates a cost of $N + N^2$, since the off-chain block’s own verification can likewise overlap with the creation of yet another block, and following the logic further gets us to $N + N^2 + N^3 + \cdots = \frac{N}{1-N}$. In Ethereum’s case, $N \approx 0.15$ at present.
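As a quick numerical sanity check (not part of the argument above), here is a small Python sketch. The value $N = 0.15$ is the one quoted above; the closed form is compared against a Monte Carlo that replays the recursion directly, i.e. each off-chain block can itself be overtaken by another while it is being verified:

```python
import random

N = 0.15  # fraction of the block time spent propagating/verifying (value quoted above)

# Closed form: expected number of wasted (off-chain) verifications per block.
closed_form = N / (1 - N)

# Monte Carlo of the same recursion: a block spawns an off-chain block with
# probability N, that off-chain block can spawn another with probability N
# while it is being verified, and so on.
def wasted_verifications(rng):
    count = 0
    while rng.random() < N:
        count += 1
    return count

rng = random.Random(0)
trials = 200_000
simulated = sum(wasted_verifications(rng) for _ in range(trials)) / trials

print(f"closed form N/(1-N): {closed_form:.4f}")  # ~0.1765
print(f"simulated          : {simulated:.4f}")    # should land close to the closed form
```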
In the limit, if the parameter is too high (i.e. approaches infinity) you’ll get unnecessarily large fee volatility, and if it’s too low you’ll get too many periods where blocks are full while the fee slowly adjusts upward to a new demand level. If the goal is to maximally avoid the second problem, then you would want to set it as high as possible without creating unneeded instability or nullifying the advantage of short-term fee predictability. I feel like trading off between these goals is an art more than a science, though I would definitely welcome any attempt at coming up with a more rigorous way of selecting the constant.
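To make the trade-off a bit more concrete, here is a rough simulation sketch. The adjustment rule (a simple multiplicative update driven by block fullness) and every constant in it are assumptions chosen purely for illustration, not the actual mechanism under discussion: a small adjustment parameter leaves blocks full for many periods after a demand jump, while a large one tracks demand quickly at the cost of more block-to-block fee movement.

```python
import math
import random

# Toy model: the fee moves up when blocks are more than half full and down
# when they are less than half full, at a speed set by `adjustment`. The rule
# and all numbers here are made up for illustration only.
def simulate(adjustment, blocks=2000, seed=0):
    rng = random.Random(seed)
    fee = 1.0
    demand = 1.0                 # fee level at which blocks would be half full
    full_blocks = 0
    log_changes = []
    for i in range(blocks):
        if i == blocks // 2:
            demand *= 2.0        # sudden, persistent jump in demand
        # Block fullness: higher when the fee is low relative to demand,
        # plus some per-block noise; clamped to [0, 1].
        fullness = min(1.0, max(0.0, demand / fee - 0.5 + rng.uniform(-0.2, 0.2)))
        if fullness >= 1.0:
            full_blocks += 1
        new_fee = fee * (1 + adjustment * (fullness - 0.5))
        log_changes.append(abs(math.log(new_fee / fee)))
        fee = new_fee
    volatility = sum(log_changes) / len(log_changes)
    return full_blocks, volatility

for adj in (0.02, 0.1, 0.5):
    full, vol = simulate(adj)
    print(f"adjustment={adj:<5} full blocks={full:<4} mean |d log fee|={vol:.4f}")
```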
I’m new to mathematical finance (computational physicist here), but I worry that some feature of the stochasticity in the demand could lead to some sort of high-frequency trading strategies. This is just my intuition; I’ve nothing concrete to back it up.
What do you mean by “high-frequency trading strategies”? The choice available to individual users is to either send a transaction at some particular time or not send one. It’s not the same kind of market as a financial trading market.