This is absolutely detectable on all central limit order books, including today's centralized/fiat exchanges, though it may not be provable in many instances. The victim, and anyone else who can see the victim's order, will see the price move through that order and fill a different order at a worse price. For example, if you had a limit buy at 200, a fill will print at 199 before your order fills.
Orders are timestamped by the user AND the operator. The user's timestamp serves to ensure the operator is time-stamping accurately. The operator's timestamp serves as the single clock of reference for price-time priority proofs. A small clock skew is tolerated to accommodate variable ping delays.
As a practical matter, the user's clock is adjusted from the server price feed and the skew is corrected automatically, so for all practical purposes requests will carry nearly the same timestamp as the server's.
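A minimal sketch of how the operator-side check might look, assuming a configurable skew tolerance (`MAX_SKEW_MS` and the function name are illustrative, not from the Gluon spec):

```python
import time

MAX_SKEW_MS = 500  # assumed tolerance for variable ping delays

def accept_order_timestamp(user_ts_ms: int) -> int:
    """Return the operator timestamp used for price-time priority,
    rejecting orders whose user timestamp deviates beyond the tolerance."""
    operator_ts_ms = int(time.time() * 1000)  # single clock of reference
    if abs(operator_ts_ms - user_ts_ms) > MAX_SKEW_MS:
        raise ValueError("clock skew exceeds tolerance; resync client clock")
    return operator_ts_ms
```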
Note that front-running requires the deployment of capital, and the smaller the price movement, the larger the capital allocation needed to make the same profit. Large ticks on futures products ensure that even if the underlying spot market moves a bit (quoted to 6 decimal places), the futures product (quoted to 1 decimal place) is unlikely to move within a span of a few milliseconds. While there is a theoretical possibility of operator front-running within the skew tolerance (the difference between user and server clocks), the profit collected would be so small that it is not economically attractive to deploy a huge amount of capital for it; it would be more attractive to simply lend that capital at interest.
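A back-of-the-envelope illustration with hypothetical numbers: the futures price only moves when the underlying drift crosses a full tick, so the capturable move inside a few-millisecond window rounds to zero.

```python
spot_drift = 0.000004   # drift of a 6-decimal spot price over a few ms (hypothetical)
futures_tick = 0.1      # minimum increment of a 1-decimal futures product

capturable_move = (spot_drift // futures_tick) * futures_tick  # 0.0: no tick crossed
position = 1_000_000                                           # capital deployed to front-run
profit = position * capturable_move
print(profit)  # 0.0 -- nothing to capture, regardless of capital deployed
```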
When data is unavailable in Gluon Plasma, it affects all users, since the rootHashes won't match and they won't be able to exit. Also note that if the operator steals from one person, everyone else should know they are next, and the only rational course is to halt the chain.
Do honest large stake owners have to lose 10% of their tokens every time they vote?
If the operator is compromised, there is a very good chance that the governance tokens will be nearly worthless within a few hours. It's better to sacrifice 10% of them and save the other assets. On the flip side, a malicious person would be discouraged from voting falsely or carelessly, since they would have to pay with valuable tokens.
an adversary's prior vote has a long-term cumulative impact
Everything is reset when a new G-block is created.
but this does increase the finality time
Perhaps there is a chance that someone will spend $100K maliciously to delay a G-block by 10 minutes, or $2M to delay it by a few hours, or $200M to delay it by a day. It's certainly possible, since there are some wealthy folks in crypto who have "more money than Rwanda", but I think it's highly unlikely they will dump that much money on a temporary prank whose only lasting effect is to advertise the robustness of the thing they are attacking.
There are cases where an honest user just happens to have an order that is falsely considered front-running. This is not provable, as you pointed out, and sadly it is probably not detectable either. You cannot distinguish between front-running and the case where multiple users just happen to send orders with similar prices simultaneously, which may happen rather often on a popular exchange.
There is always network delay, and a difference of 1 second between the client's and operator's timestamps is sufficient for the operator to mount front-running.
An operator can front-run at no cost. This is actually a problem for centralised exchanges in general, and I don't think there is a decent way to detect front-running. Based on the design of this protocol, I think both the functionality and the vulnerability are comparable to centralised exchanges rather than a DEX. Gluon Plasma is more like a verifiable centralised exchange than a DEX.
I think some assumptions on the off-chain network of Gluon Plasma are needed. It is possible that a user is isolated and made to vote, losing 10% of their tokens.
Note that my original statement referred to existing exchanges like NYSE. There is simply no way for a user to front-run anything on Gluon Plasma, since the operator clock is used for price-time priority. The execution order in a central limit order book is deterministic given an order's price, size and time. Any variance is seen by everyone and provable via S9.3.5. The exception is market orders, which users do not have to use.
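A minimal sketch of deterministic price-time priority for a bid queue, assuming operator timestamps; the field names are illustrative, not taken from S9.3.5:

```python
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Bid:
    sort_key: tuple = field(init=False, repr=False)
    price: float
    operator_ts_ms: int
    order_id: str

    def __post_init__(self):
        # Higher price first, then earlier operator timestamp.
        self.sort_key = (-self.price, self.operator_ts_ms)

book = []
heapq.heappush(book, Bid(price=200.0, operator_ts_ms=1000, order_id="victim"))
heapq.heappush(book, Bid(price=199.0, operator_ts_ms=999, order_id="other"))
best = heapq.heappop(book)
print(best.order_id)  # "victim": an incoming sell must fill 200 before 199,
                      # so filling 199 first is visible to everyone
```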
This is incorrect. The client knows the network latency and applies the difference when creating an order. For example, if the round-trip difference, including minor clock differences, network latency and processing time, adds up to one second, the client will add one second to its current timestamp and use that instead. This delta is automatically corrected by the realtime feed from the exchange.
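A sketch of the client-side correction described above, assuming the exchange's realtime feed carries a server timestamp (the class and method names are illustrative):

```python
import time

class ClockSync:
    def __init__(self):
        self.offset_ms = 0  # estimated (server - client) delta

    def on_feed_message(self, server_ts_ms: int):
        # Update the delta from every realtime feed message, folding in
        # clock difference, network latency and processing time.
        client_ts_ms = int(time.time() * 1000)
        self.offset_ms = server_ts_ms - client_ts_ms

    def order_timestamp(self) -> int:
        # Stamp outgoing orders with the corrected time so they arrive
        # with nearly the same timestamp the operator will assign.
        return int(time.time() * 1000) + self.offset_ms
```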
Indeed, front-running is not impossible. But the window to do it is so tight (milliseconds) that it becomes unattractive compared to a regular centralized exchange, where there are no such restraints.
As a practical matter, users' limit orders are filled at the limit price, and maker orders cannot be victimized. Taker orders would need to specify an IOC flag to ensure they don't pay more than intended.
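An illustrative taker order with an IOC (immediate-or-cancel) flag: any portion that cannot fill at or better than the limit is cancelled rather than resting or filling at a worse price. The field names here are assumptions, not the protocol's wire format.

```python
taker = {"side": "buy", "limit": 200.0, "size": 5, "flags": ["IOC"]}

def fill_ioc(order, asks):
    """asks: list of (price, size) sorted best-first (lowest ask first)."""
    filled = []
    remaining = order["size"]
    for price, size in asks:
        if price > order["limit"] or remaining == 0:
            break  # never pay more than intended
        take = min(size, remaining)
        filled.append((price, take))
        remaining -= take
    # IOC: the unfilled remainder is dropped instead of resting on the book
    return filled

print(fill_ioc(taker, [(199.5, 2), (200.0, 2), (200.5, 10)]))
# [(199.5, 2), (200.0, 2)] -- the remaining 1 unit is cancelled
```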
In the general markets, front-running occurs on thin books when market orders are placed, because this kind of front-running is undetectable and unprovable and you could do it all day with no restriction to millisecond-sized windows.
This is only true on exchanges where you can place orders without depositing capital. On Gluon, this would lead to a solvency violation and is detected by S9.3.3 (Counterfeit fraud).
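A minimal sketch of the deposit-backed check implied here: every order must be covered by deposited balance, otherwise the resulting ledger entry would be counterfeit fraud provable via S9.3.3. Account names and the function are illustrative.

```python
balances = {"front_runner": 0, "honest_user": 1_000_000}

def place_order(account: str, cost: int):
    if balances[account] < cost:
        # On a capital-free exchange this order could still front-run;
        # on Gluon it is rejected (or later proven fraudulent on-chain).
        raise ValueError("insolvent order: would violate S9.3.3")
    balances[account] -= cost

place_order("honest_user", 500)        # fine: fully collateralized
try:
    place_order("front_runner", 500)   # no deposited capital
except ValueError as e:
    print(e)                           # insolvent order: would violate S9.3.3
```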
Yes, it's a non-custodial exchange with centralized pieces.
Designs to address data unavailability are the weak spots for all plasmas. In Gluon, we need to run a few simulations to get the right mix. Tendermint consensus may eliminate data unavailability and also token voting.
This is still a work in progress and any ideas are welcome. Security vetting is pending.
The basic idea is in the UPDATE of the original post but let me try to state it differently:
In most other plasmas, the operator manages the chain. In Gluon, the operator is the exchange AND manages the chain. These roles can be decoupled: the exchange role only needs to match orders and create ledger entries (including deposits and withdrawals). We introduce Tendermint consensus validators who listen to these entries, create new blocks and manage the chain.
In such a construction, the exchange cannot withhold data (i.e. commit a hashRoot using hidden entries) because that would no longer be its job. At least 1/3 of the Tendermint validators would need to be compromised for an unverifiable hashRoot to be committed.
The exchange can certainly fail to publish some entries, or skip a few. This would simply result in the Tendermint block producers not committing any more blocks, since they won't be able to compute a valid block. So the only harm the exchange can do is halt itself, by publishing anything other than complete, fully correct entries. Voting to halt, and other ways for users to manage data withholding, become unnecessary. In a way, users have delegated verification to Tendermint consensus, and this avoids having to figure out the optimal voting parameters.
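A conceptual sketch (not Tendermint's actual API) of why withholding self-halts: validators recompute the block from the exchange's published ledger entries and refuse to sign anything that doesn't verify. `compute_root` is a stand-in for the real Merkle root.

```python
import hashlib, json

def compute_root(entries: list) -> str:
    # Stand-in for the real Merkle root over ledger entries.
    return hashlib.sha256(json.dumps(entries, sort_keys=True).encode()).hexdigest()

def validator_vote(published_entries: list, proposed_root: str) -> bool:
    # Sign only if the root is reproducible from complete, correct entries.
    return compute_root(published_entries) == proposed_root

# A block commits only with >2/3 of validator voting power in favor; a
# missing or skipped entry makes every honest validator return False.
```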
In a multi-exchange scenario, other exchanges can continue trading on the chain. Only the bad exchange is halted. Multi-exchange needs to address race conditions during deposits and withdrawals and needs some work on co-ordinating fills.
Thanks @bharathrao, I had missed the UPDATE part; I've now read it, along with your reply.
This setup is certainly better than the one with a centralized operator. However, Tendermint (being a modification of PBFT) has quadratic message complexity, which means it becomes very slow as the number of nodes increases. So you will always have a low node count in practice, which makes cartelization very realistic. Speaking of cartelization:
If 1/3 + 1 of the validators are malicious, they still cannot commit malicious blocks (2/3 is needed for consensus). What they can do is halt consensus, and they cannot be punished for this (unless you introduce some sort of "slow bleed-out" function, similar to what Eth 2.0 has for unavailable validators).
I see, thanks for the input. The other option we are considering is Dfinity-style consensus using BLS signature aggregation. This can probably be done in O(log n) due to BLS properties.
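A hedged sketch of the BLS property being relied on, using py_ecc's BLS12-381 implementation (the actual consensus would differ; the keys here are toy values): many validator signatures over the same block hash compress into one aggregate verified in a single check.

```python
from py_ecc.bls import G2ProofOfPossession as bls

secret_keys = [1234 + i for i in range(4)]           # toy keys, never do this
public_keys = [bls.SkToPk(sk) for sk in secret_keys]
block_hash = b"gluon block root"

signatures = [bls.Sign(sk, block_hash) for sk in secret_keys]
aggregate = bls.Aggregate(signatures)

# One pairing-based check stands in for n individual verifications.
assert bls.FastAggregateVerify(public_keys, block_hash, aggregate)
```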
A highly decentralized validator set instantly solves most of these issues. The hard part is to choose a proper consensus (I generally like Dfinity's approach) and to bootstrap the network (the token economics has to be tuned well).
FYI, leverj is live with Gluon Plasma: an excellent example of research turning into a working product. Just wanted to thank everyone for their input and for vetting the idea.