Defining zkOracle for Ethereum

ZK Rollups have definitely inspired many zk use cases in blockchain.

Actually, when framing the oracle categories, I considered calling ZK Rollup one kind of zkOracle.

However,

  1. A ZK Rollup’s system is more complex.
  2. Its core is not the oracle, but the state transition or the bridge.
  3. If ZK Rollup were a zkOracle, then any Rollup would have to count as a kind of oracle.

To avoid confusion, ZK Rollup is not a category of zkOracle.


Yes. Hyper Oracle’s zkOracle and zkWASM are both built with PSE’s Halo2. 🙂

You can watch our talk for more technical details about zkOracle here.

And you can also read the zkWASM paper for more details about its functionality and architecture here.


This is amazing! Thanks a lot for the reply.
PSE’s Halo2 is efficient and can be verified on-chain.


I wanted to flag a section from Vitalik’s Schellingcoin piece for you @fewwwww

Mining for Schells

The interesting part about SchellingCoin is that it can be used for more than just price feeds. SchellingCoin can tell you the temperature in Berlin, the world’s GDP or, most interestingly of all, the result of a computation. Some computations can be efficiently verified; for example, if I wanted a number N such that the last twelve digits of 3^N are 737543007707, that’s hard to compute, but if you submit the value then it’s very easy for a contract or mining algorithm to verify it and automatically provide a reward. Other computations, however, cannot be efficiently verified, and most useful computation falls into the latter category. SchellingCoin provides a way of using the network as an actual distributed cloud computing system by copying the work among N parties instead of every computer in the network and rewarding only those who provide the most common result.
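The verification asymmetry in the quoted example can be sketched in a few lines: finding an exponent N with a given suffix of 3^N is hard, but checking a claimed N is cheap (this is a minimal illustrative sketch, not any project’s actual code):

```python
def verify(n: int, target: int) -> bool:
    # pow(3, n, 10**12) computes the last twelve digits of 3^n
    # efficiently, so checking a submitted solution is cheap even
    # though searching for one is expensive.
    return pow(3, n, 10**12) == target
```

A contract only ever needs to run this cheap verification side before paying out a reward.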

I just wanted to confirm for myself – What Vitalik suggests here with computations is what Hyper Oracle can do, but in an optimistic fashion, right?

I had asked Marlene about this on our Twitter space too, and that’s pretty much what she’d said.

I’m curious what you think the tradeoffs are. I mean, assuming all else is equal, you’d always use a ZK solution over an optimistic one. Maybe the costs could be different, though?


What Vitalik is talking about here could be implemented via Hyper Oracle’s zk stack, and I suspect Vitalik already knew that zk could be an implementation path as well. All the compute-related steps can be migrated to zk fairly easily.

But since this is non-deterministic off-chain data, we may have to adopt a “consensus” mechanism like “rewarding only those who provide the most common result”. Such a mechanism is sort of “optimistic” (edit: it’s actually honest-majority, which we don’t really want). The mechanism itself may not be removable in this example (due to the data source), but its logic can still be wrapped in zk, allowing succinct verification of it in external systems. The example here is a potential Input zkOracle.
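A minimal Python sketch of that “most common result” aggregation (names and structure are illustrative only; in a zkOracle this aggregation logic itself is what could be wrapped in a zk circuit):

```python
from collections import Counter

def reward_most_common(reports: dict[str, int]) -> tuple[int, list[str]]:
    """Schelling-style step: find the most common reported value and
    the reporters who submitted it. Only those reporters are rewarded."""
    mode, _ = Counter(reports.values()).most_common(1)[0]
    winners = sorted(who for who, value in reports.items() if value == mode)
    return mode, winners
```

Proving this function in zk would let an external system verify succinctly that the reward went to the majority answer, without trusting the aggregator.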

I think one could probably repurpose a zkEVM to do it. A zkEVM cannot read logs, but if one adds instructions for reading logs from blocks, then you can write a Solidity program that reads and processes data from both state and logs.

The same goes for off-chain computation: one can treat it as an EVM instance running outside the blockchain and acting on the current state root.
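As a toy sketch of that “EVM instance acting on state and logs” idea (purely illustrative; a real implementation would execute EVM bytecode against the actual state root):

```python
def process(state: dict, logs: list[dict]) -> dict:
    """Apply Transfer-style logs to a balance map, mimicking an
    off-chain execution step that reads both state and logs."""
    new_state = dict(state)
    for log in logs:
        if log["event"] == "Transfer":
            new_state[log["from"]] = new_state.get(log["from"], 0) - log["value"]
            new_state[log["to"]] = new_state.get(log["to"], 0) + log["value"]
    return new_state
```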


I really like this idea. It is definitely good to unify the tech stack, and also to reuse a lot of the ecosystem work done on zkEVM.

Some side effects of the zkEVM approach are:

  • It needs many new EIPs, and because of the many protocol changes involved, shipping those standards can be very slow.
  • In this scenario, the performance of the zkEVM solution may not beat that of a generic zkVM solution.
  • Many existing technology stacks and custom applications (oracles, middleware) are not based on Solidity, and these would need to be rebuilt.

In practice, we (Hyper Oracle) chose a generic zkVM (zkWASM, zkRISCV…) for building zkOracle. At the same time, the recent boom in Nova may allow zkVMs to be both performant and general.

Hey

Interesting.

Do you think Nova will run faster than existing systems (like Linea from Consensys)?


I personally believe that Nova has great potential as a new ZK stack to provide further performance enhancements for large circuits (especially zkVM).

In PSE’s benchmark, Nova appears to be faster than Halo2 (KZG) for large circuits. There is no comparison with Linea’s gnark yet, but I think there are potential gains.

However, these questions would require specialized cryptographers and circuit engineers to study in depth. In general, the concept of zkOracle can be implemented on any scheme, be it zkWASM, zkRISC0, or any zkEVM.


@fewwwww Maybe I’m not understanding the intended use case of this oracle. My question is: can the new design provide an ETH-USD price oracle in a more robust fashion than MakerDAO’s reputation-based Chronicle protocol, Chainlink, or Tellor’s to-be crypto-economic scheme? Thanks


I think this one is for the case where the data is on-chain and you want to do lots of computation on it.


Just like @kladkogex explained, the main use cases of zkOracle are the output oracle (more like an indexing protocol) and the I/O oracle (more like an automation network). Both have their original data source on-chain, with heavy computation that can only be performed in an off-chain context.

The case you mentioned is the input oracle case. It’s a little tricky to make an input oracle into a zkOracle, because the data source is originally from an off-chain context (USD, asset prices on CEXs). If the data source is not on-chain, it is hard to assume the source data has reached consensus: there is no single truth or consensus for ETH/USD. And the zk part in zkOracle, as in any other case, only secures the computation, not the original data source.

We can experiment with this in several ways:

  • Just use something like the Uniswap TWAP as the on-chain price feed, with zk in the off-chain context doing the heavy computation and historical data access. The data source is then on-chain, but it can only support ETH/USDC, ETH/USDT…, not ETH/USD.
  • A more complicated mechanism: build a stablecoin on-chain with zkOracle, then get the ETH/USD price based on the first approach.
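The first approach can be sketched with the Uniswap v2-style accumulator arithmetic: the pair contract maintains a cumulative price, and a TWAP over a window is the difference of two accumulator snapshots divided by elapsed time (values and names below are illustrative, not the actual Uniswap interface):

```python
def twap(cum_price_t0: int, cum_price_t1: int, t0: int, t1: int) -> float:
    # Time-weighted average price over [t0, t1], derived from two
    # snapshots of the cumulative price accumulator.
    return (cum_price_t1 - cum_price_t0) / (t1 - t0)
```

The zk part would prove that both snapshots were correctly read from historical chain state, so the consumer only verifies a proof instead of re-reading old blocks.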

In general, since the data source comes from off-chain, a fully zk-secured input oracle (like Chainlink Price Feeds) may need a more complex system to secure the entire process.

One standout application for zkOracle is a zk stablecoin, which allows pledging with any fiat currency through off-chain zkML-level computations.


I like how zkOracle is divided into input and output. It makes me think that, to avoid oracle-type hacks, a network of these oracles should be run with gossip-protocol-like cross-checking, so that there is less centralization in oracle protocols, because that’s where problems come in.

Overall, ZKPs do address the limitations of traditional oracle networks, and this not only enhances security but also optimizes performance. It’s really interesting to see how the zkOracle network utilizes zkPoS and zkGraphs running on zkWASM to make this all happen trustlessly and securely. I’m looking forward to seeing more about this, as I created my own post about ZKPs enabling a novel type of decentralized relay. I think you might find that interesting as well.


For a zk network (including a zkOracle network), the design of the consensus protocol is very important. It is also different from (or, arguably, better than) the consensus of traditional blockchain networks or oracle networks.

We are looking forward to some new explorations in zk network consensus, such as zk rollup networks.


After further exploration, development, and research into zkOracle, we realized that the core of what we were building was an Ethereum-based “zkOracle protocol”, as well as a “programmable zkOracle protocol”.

A more precise definition of zkOracle is a ZK-based onchain oracle protocol.

For updates on our research and development, see: Hyper Oracle’s Blog and GitHub.

For an on-chain zkOracle protocol, there are three primary applications that enhance the computational capabilities of smart contracts:

  • Access to and Computation over Historical Onchain Data:
    The zkOracle protocol empowers smart contracts by generating zero-knowledge proofs (such as zkPoS, State Proof, and Event Proof), facilitating access to comprehensive historical onchain data. This functionality enables smart contracts to utilize historical data for further computations within the smart contract or the zkOracle itself.

  • Extension of Complex Computational Capabilities:
    Conventional smart contracts face inherent limitations within the onchain computing environment, restricting their ability to execute certain functions, including processing large datasets, running complex models, and performing high-precision computations. Conversely, zkOracle transcends these limitations, offering an expansive range of computational possibilities without constraints. This includes the capacity to handle high-intensity computations, such as machine learning tasks.

  • Internet Data:
    In addition to onchain data sources, zkOracle and smart contracts can seamlessly incorporate internet-based data. Leveraging trustless Transport Layer Security (TLS) proving libraries, zkOracle can collaborate with Proof of HTTPS protocols, opening up diverse avenues for utilizing internet data. The integration of zkOracle with these protocols facilitates access to internet data, thereby unlocking new opportunities for onchain data utilization and computation.
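The historical-data application rests on proofs like the State Proof and Event Proof mentioned above. The core idea can be sketched as a Merkle inclusion check (minimal sketch; Ethereum actually uses keccak256 over Merkle-Patricia tries, and sha256 here is only for illustration):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_inclusion(leaf: bytes,
                     branch: list[tuple[bytes, bool]],
                     root: bytes) -> bool:
    """Hash a leaf up a Merkle branch and compare against a trusted
    root; each branch entry is (sibling_hash, sibling_is_left)."""
    node = h(leaf)
    for sibling, sibling_is_left in branch:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root
```

A verifier holding only a trusted root (e.g. a state root from a block header) can then accept individual historical values without re-executing or storing the chain.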

Strange choice of words to call that an oracle instead of a coprocessor.
Interesting ideas, though.

If I understand correctly, the idea of the Output zkOracle is to attach a proof of correct source and computation when a user retrieves blockchain data. The verifier would need to obtain a trusted block hash from which to start verification, so it’s similar to running a light client, right? Are there any tradeoffs here versus traditional zk light clients like Plumo in terms of proving time or proof size?

I know Axiom v2 also uses Halo2; are there any tradeoffs or differences in design or goals here?

For the difference between zkOracle and zkCoprocessor, you can check out this post by 7x: zkOracle and zkCoprocessor — SevenX Ventures. TL;DR: you can think of a zkOracle as a zkCoprocessor, but zkOracle is a broader concept (you wouldn’t call Chainlink a coprocessor, and Chainlink, just like zkOracle, has input, output, and I/O oracles).

Yes, it’s similar to running a light client. We don’t have a benchmark against Plumo right now.
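The light-client-style check mentioned above can be sketched as follows (toy hash function and illustrative header fields, not any client’s actual format):

```python
def verify_header_chain(trusted_hash, headers, hash_fn) -> bool:
    """Starting from one trusted block hash, accept each header only
    if it commits to the hash of the previous header."""
    prev = trusted_hash
    for header in headers:
        if header["parent_hash"] != prev:
            return False
        prev = hash_fn(header)
    return True
```

A zk light client replaces this iterated checking with a single succinct proof that the whole chain of commitments was verified correctly.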

Some of the differences in model and features are explained in the zkOracle and zkCoprocessor post. One key difference between Axiom and Hyper Oracle’s zkOracle: Axiom has its own Halo2 circuits and developers use the Axiom eDSL to call those circuit APIs, while zkOracle is built on zkWASM, so all computation (the historical data access, and the “eDSL calling circuit APIs” part) is secured by a zkp.

The verifier will be a universal smart contract on Ethereum or other networks.

The prover network will be a decentralized and permissionless network of zkOracle nodes.