
An introduction to zero-knowledge machine learning (ZKML)

Zero-knowledge machine learning (ZKML) is a field of research and development that has been making waves in cryptography circles recently. But what is it and why is it useful? First, let's break down the term into its two constituents and explain what they are.

What is ZK?

A zero-knowledge (ZK) proof is a cryptographic protocol in which one party, the prover, can prove to another party, the verifier, that a given statement is true, without revealing any additional information beyond the fact that the statement is true. It is an area of study that has been making great progress on several fronts, from research to protocol implementations and applications.

ZK brings two main “primitives” (building blocks) to the table. The first is the ability to create a proof of computational integrity for a given computation, where the proof is significantly easier to verify than performing the computation itself. (We call this property “succinctness.”) The second is the option to hide parts of that computation while preserving computational correctness. (We call this property “zero-knowledge.”)
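To make these two properties concrete, here is a minimal Python sketch of the prover/verifier interface most proving systems expose. The `prove` and `verify` functions below are placeholders rather than a real cryptographic backend; they only illustrate the call pattern, which values stay private, and where succinctness and zero-knowledge come into play.

```python
from dataclasses import dataclass

# Minimal sketch of the interface a proving system typically exposes. The
# "proof" below is a placeholder object, not a real cryptographic proof; it
# only illustrates the call pattern and where the two properties apply.

@dataclass
class Proof:
    public_output: int
    blob: bytes  # in a real system: a short proof string, cheap to check

def heavy_computation(secret_x: int) -> int:
    # The computation whose integrity we want to prove.
    return pow(3, secret_x, 2**61 - 1)

def prove(secret_x: int) -> Proof:
    # Prover: runs the full (expensive) computation once, then emits a proof.
    # Zero-knowledge: secret_x itself never has to leave the prover.
    y = heavy_computation(secret_x)
    return Proof(public_output=y, blob=b"<succinct proof would go here>")

def verify(proof: Proof) -> bool:
    # Verifier: in a real system this check is far cheaper than re-running
    # heavy_computation (succinctness) and reveals nothing about secret_x.
    return len(proof.blob) > 0  # placeholder for the real verification check

proof = prove(secret_x=123_456_789)
assert verify(proof)
print("verified public output:", proof.public_output)
```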

Generating zero-knowledge proofs is very computationally intensive, often many times as expensive as the original computation. This means that there are some computations for which generating zero-knowledge proofs is infeasible, because the time it would take to create them on the best available hardware makes them impractical. However, advancements in cryptography, hardware, and distributed systems in recent years have made zero-knowledge proofs feasible for ever more intensive computations. These advancements have allowed for the creation of protocols that can use proofs of intensive computations, thus expanding the design space for new applications.

ZK Use Cases

Zero-knowledge cryptography is one of the most popular technologies in the Web3 space since it allows developers to build scalable and/or private applications.

As ZK tech matures, it's likely there will be a Cambrian explosion of new applications, since the tooling used to build them will require less domain expertise and will be much easier for developers to use.

Below are a few examples of how it is being used in practice (though note that many of these projects are works-in-progress).

Scaling Ethereum with ZK rollups

Distributed systems like public blockchains have limited computational power, since all participant nodes (computers) have to verify the computations in each block by running them themselves. Using ZK proofs, we can execute these computations off-chain, generate a ZK proof of their correctness, and verify this proof on-chain, thus achieving scalability without sacrificing decentralization or security.
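As a rough sketch of that flow: the proof and its verification below are placeholders standing in for a real proving system and rollup contract; only the shape of the protocol (execute off-chain, verify a short proof on-chain) is the point.

```python
import hashlib

def state_root(balances: dict) -> str:
    # Stand-in for a Merkle root committing to the rollup state.
    return hashlib.sha256(repr(sorted(balances.items())).encode()).hexdigest()

def execute_batch_off_chain(balances: dict, txs: list):
    # Sequencer/prover: executes every transaction, then proves the whole batch once.
    for sender, receiver, amount in txs:
        balances[sender] -= amount
        balances[receiver] = balances.get(receiver, 0) + amount
    new_root = state_root(balances)
    proof = b"<succinct proof of this state transition would go here>"
    return new_root, proof

def rollup_contract_verify(old_root: str, new_root: str, proof: bytes) -> bool:
    # On-chain verifier: checks the short proof instead of re-executing the batch.
    return len(proof) > 0  # placeholder for the real verification check

balances = {"alice": 100, "bob": 50}
old_root = state_root(balances)
new_root, proof = execute_batch_off_chain(balances, [("alice", "bob", 25)])
assert rollup_contract_verify(old_root, new_root, proof)
```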

Building privacy-preserving applications

The zero-knowledge property of ZK proofs makes it possible to hide parts of the computation being proven, which is useful for building applications that preserve users' privacy and personal data when making cryptographic attestations. Examples:

Aztec is building a private scalability solution for Ethereum (a ZK rollup) where users' balances and transactions are fully hidden from any outside observer.

Identity primitives and data provenance

At Worldcoin we are building World ID, our privacy-preserving proof-of-personhood protocol. It allows any person with a World ID to make a cryptographic attestation signalling that they are a unique human being and that they haven't performed a given action before (like signing up for a social network), without revealing their identity.
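As a highly simplified illustration of how such an attestation can block repeated actions without revealing identity, here is a sketch of the nullifier idea used in Semaphore-style protocols (which World ID builds on). The real construction computes the hash inside a ZK circuit with a ZK-friendly hash such as Poseidon, so the SHA-256 used below is purely illustrative.

```python
import hashlib

# Each user holds a secret that never leaves their device; each action has a
# public identifier. The nullifier is deterministic per (identity, action), so
# repeating an action yields the same nullifier and can be rejected, while the
# nullifier alone reveals nothing about which registered identity produced it.
identity_secret = b"user-held secret, never revealed"
action_id = b"sign-up:some-social-network"

nullifier = hashlib.sha256(identity_secret + b"|" + action_id).hexdigest()

# A ZK proof (not shown) would additionally attest that the nullifier was
# derived from an identity in the registered set, without revealing which one.
seen_nullifiers = set()
assert nullifier not in seen_nullifiers   # first time performing this action
seen_nullifiers.add(nullifier)
```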

Layer 1 protocols

Since ZK proofs help off-load computation and can make computations private, they allow for the creation of private and/or succinct (small in size, easily verifiable) layer 1 protocols.

Machine learning

Machine learning is a subfield of artificial intelligence that involves the development and application of algorithms that enable computers to learn and adapt from data autonomously, optimizing their performance through iterative processes. Large language models, such as GPT-4 and Bard, are state-of-the-art natural language processing systems that leverage vast amounts of training data to generate human-like text, while text-to-image models like DALL-E 2, Midjourney, and Stable Diffusion translate textual descriptions into visual representations with remarkable fidelity. The rapid advancement of machine learning techniques holds significant promise in addressing complex challenges across various domains, including healthcare, finance, and transportation, by leveraging data-driven insights and predictions to improve decision-making and optimize outcomes. As these models become more sophisticated, they are poised to revolutionize numerous industries, transforming the way we live, work, and interact with technology.

Motivation and current efforts in ZKML

In a world where AI-generated content increasingly resembles human-created content, zero-knowledge cryptography could help us determine that a particular piece of content was produced by applying a specific model to a given input. This could provide a means for verifying outputs from large language models like GPT-4, text-to-image models such as DALL-E 2, or any other model for which a zero-knowledge circuit representation is created. The zero-knowledge property of these proofs would also allow us to hide parts of the input or the model if need be. A good example is applying a machine learning model to sensitive data, where a user could learn the result of model inference on their data without revealing their input to any third party (e.g., in the medical industry).
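A rough sketch of the private-input case described above: the proving call is passed in as a placeholder for a real ZKML backend, and the point is only which values stay on the user's device and which are shared.

```python
import hashlib

def model_commitment(weights: bytes) -> str:
    # Public commitment to the model, so all parties agree on which model ran.
    return hashlib.sha256(weights).hexdigest()

def run_private_inference(weights: bytes, patient_data: bytes, prove_inference):
    """Runs locally on the patient's device.

    `prove_inference` is a hypothetical ZKML proving function: it returns a
    proof that `result` is the model's output on `patient_data`, without the
    data itself ever being revealed.
    """
    result = "negative"  # placeholder for the model's actual diagnosis output
    proof = prove_inference(weights=weights,             # public or committed
                            private_input=patient_data,  # never leaves the device
                            public_output=result)
    # Only the commitment, the result, and the proof are shared externally.
    return model_commitment(weights), result, proof
```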

Note: When we talk about ZKML, we are talking about creating zero-knowledge proofs of the inference step of the ML model, not about the ML model training (which, in and of itself, is already very computationally intensive).

The current state of the art of zero-knowledge systems coupled with performant hardware still falls a few orders of magnitude short of being able to prove something as big as currently available large language models (“LLMs”), but there has been some progress in creating proofs of smaller models.

We researched the state of the art of zero-knowledge cryptography in the context of creating proofs for ML models and created an aggregation of the relevant research, articles, applications, and codebases that belong to this domain. Resources on ZKML can be found on the ZKML community's awesome-zkml repository on GitHub.

The Modulus Labs team recently released a paper titled “The Cost of Intelligence,” in which they benchmark existing ZK proof systems against a wide range of models of different sizes. It is currently possible to create proofs for models of around 18M parameters in about 50 seconds running on a powerful AWS machine using a proving system like plonky2. Figure 1 illustrates the scaling behavior of different proving systems as the number of parameters of a neural network is increased:

Fig. 1: Scaling behavior of different proving systems as the number of model parameters increases.

Source: “The Cost of Intelligence: Proving Machine Learning Inference with Zero-Knowledge.” Modulus Labs. Fig. 2, p. 12. January 20, 2023.

Another initiative that is working on improving the state of the art of ZKML systems is Zkonduit's ezkl library which allows you to create ZK proofs of ML models exported using ONNX. This enables any ML engineer to create ZK proofs of the inference step of their models and to prove the output to any verifier.
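As a rough sketch of that workflow: the PyTorch-to-ONNX export below uses the standard `torch.onnx.export` API, while the ezkl steps in the trailing comments are indicative only (the exact CLI/API differs between versions, so treat them as an assumption and check the ezkl documentation).

```python
import torch
import torch.nn as nn

# A small model to export; ezkl consumes the resulting ONNX graph.
class TinyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))

    def forward(self, x):
        return self.net(x)

model = TinyClassifier().eval()
dummy_input = torch.randn(1, 16)

# Standard ONNX export -- ordinary PyTorch, nothing ZK-specific yet.
torch.onnx.export(model, dummy_input, "tiny_classifier.onnx",
                  input_names=["input"], output_names=["output"])

# From here ezkl takes over (indicative steps only; see the ezkl docs):
#   generate circuit settings from the ONNX graph,
#   compile the circuit and run the setup,
#   then prove an inference and verify the resulting proof.
# The outcome is a proof that a given output really is the model's inference
# on a given (optionally hidden) input.
```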

Several teams are working on improving ZK technology and creating optimized hardware to speed up the computation of ZK proofs, particularly for resource-intensive steps such as the prover and verifier algorithms. As ZK technology matures, it will become possible to prove bigger models on less powerful machines in less time, thanks to improvements in specialized hardware, proof system design (proof size, verification time, proof generation time, etc.), and more performant ZK protocol implementations. We expect these advancements to allow new ZKML applications and use cases to emerge.

ZKML Use Cases

In order to decide whether ZKML could be used for a given application, we can examine how the properties of ZK cryptography would help enable certain use cases. This can be illustrated as a Venn diagram:

Fig. 2: Venn diagram showing how ZK and ML primitives and technologies can be combined.

Heuristic optimization - A problem-solving approach that uses rules of thumb or "heuristics" to find good solutions to problems that are difficult to solve using traditional optimization methods. Rather than trying to find the optimal solution to a problem, heuristic optimization methods aim to find a good or "good enough" solution in a reasonable amount of time given the relative importance of the problem to the overall system and the difficulty in optimizing it.

Fully Homomorphic Encryption (FHE) ML - FHE allows developers to perform operations directly on encrypted data; when the result is decrypted, it equals the output of the same operation performed on the original, unencrypted input. This enables evaluating models in a privacy-preserving fashion (full data privacy, unlike ZKML, where the prover needs access to all the data); however, there is no way to cryptographically prove the correctness of the computations being performed, as there is with ZK proofs. For example, Zama is working on an FHE ML framework called Concrete ML.

ZK proofs vs. Validity proofs - These terms are oftentimes used interchangeably in the industry since validity proofs are ZK proofs that don't hide parts of the computation or its results. In the context of ZKML, most current applications are leveraging the validity proof aspect of ZK proofs.

Validity ML - SNARK/STARK proofs of ML models where all computations are publicly visible to the verifier. Any verifier can then check the computational correctness of the ML models.

ZKML - ZK proofs of ML models where computations are being hidden from the verifier (using the zero-knowledge property). The prover can prove the computational correctness of the ML models without revealing any further information.

Use case examples

Computational integrity (validity ML)

Validity proofs (SNARKs/STARKs) can be used to prove that some computation happened correctly. In the context of ML, we are proving model inference: that a specific model produced a given output from a specific input.

For example, Modulus Labs, a ZKML-focused startup, is building these use cases:

  • On-chain verifiable ML trading bot - RockyBot
  • A vision of blockchains that self-improve, for example:
      • Enhancing the Lyra finance options protocol AMM with intelligent features
      • Creating a transparent AI-based reputation system for Astraly (ZK oracle)
      • Working on the technical breakthroughs needed for contract-level compliance tools using ML for Aztec Protocol (a ZK rollup with privacy features)

Validity proofs make it easy to prove and verify that an output is the product of a given model and input pair. This enables ML models to be run off-chain on specialized hardware while their ZK proofs are easily verified on-chain. For example, Giza is helping Yearn (a DeFi yield aggregator protocol) prove that a complex, ML-based yield strategy is being executed correctly on-chain.

ML as a Service (MLaaS) transparency

When different companies provide access to ML models through their APIs, it is hard for a user to know whether the service provider is actually serving the model it claims to, since the API is a black box. Attaching validity proofs to an ML model API would provide transparency to users, as they could verify which model produced each response.
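A minimal sketch of what that check could look like from the client's side, assuming the provider publishes a commitment (e.g., a hash) to the weights it claims to serve and attaches a proof to every response; the verification function is a placeholder for a real proof system.

```python
PUBLISHED_MODEL_HASH = "..."  # commitment the provider announces once, publicly

def check_api_response(response: dict, verify_inference_proof) -> bool:
    """response = {"output": ..., "model_hash": ..., "proof": ...}

    `verify_inference_proof` is a hypothetical verifier for the proof system
    in use; verification is cheap and requires no trust in the provider.
    """
    if response["model_hash"] != PUBLISHED_MODEL_HASH:
        return False  # the provider is not using the model it advertised
    return verify_inference_proof(response["proof"],
                                  model_hash=response["model_hash"],
                                  output=response["output"])
```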

ZK anomaly/fraud detection

ZK proofs could be used to attest that an anomaly or exploit has been detected. Anomaly detection models could be trained on smart contract data, with the relevant metrics agreed upon by DAOs, to automate security procedures such as pausing contracts in a more proactive, preventive way. There are startups already looking at using ML models for security purposes in a smart contract context, so ZK anomaly detection proofs could be a natural next step.

Privacy (ZKML)

Besides validity proofs, we can also hide parts of the computation in order to enable the privacy-preserving application of ML. A few examples can be found below:

  • Decentralized Kaggle: proof that a model achieves greater than x% accuracy on some test data without revealing the weights (see the sketch after this list).
  • Privacy-preserving inference: private patient data is fed into a medical diagnostics model and the sensitive result (e.g., a cancer test result) is revealed only to the patient (source: vCNN paper, p. 2).
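A sketch of the first example: the prover holds private weights and wants to convince anyone that they score above a public threshold on a committed test set. Everything below is a placeholder for a real ZKML circuit; in practice the accuracy computation itself would be arithmetized and proven.

```python
import hashlib
import numpy as np

rng = np.random.default_rng(0)
X_test = rng.normal(size=(200, 16))               # public test inputs
y_test = (X_test.sum(axis=1) > 0).astype(int)     # public test labels
test_set_hash = hashlib.sha256(X_test.tobytes() + y_test.tobytes()).hexdigest()

W = np.ones(16)   # private: the model weights (here, a model that fits the labels)

def predict(weights, X):
    return (X @ weights > 0).astype(int)

accuracy = float((predict(W, X_test) == y_test).mean())

# Public claim plus a (placeholder) proof; the verifier never sees W.
claim = {"test_set_hash": test_set_hash, "threshold": 0.90,
         "claim_holds": accuracy > 0.90}
proof = b"<ZK proof that the hidden weights reach the claimed accuracy>"
```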

Potential use cases at Worldcoin

One potential use of ZKML in the context of Worldcoin is iris code upgradeability. World ID users would be able to self-custody their signed biometrics in the encrypted storage of their mobile device, download the ML model for iris code generation, and create a zero-knowledge proof locally that proves their iris code was indeed generated from signed images using the correct model. This iris code could then be permissionlessly inserted into the set of registered Worldcoin users, since the receiving smart contract would be able to verify the zero-knowledge proof that validates the creation of the iris code. This means that if Worldcoin ever upgrades the iris code algorithm in a way that would break compatibility with its previous iteration, users wouldn't have to go back to an Orb; they could simply compute the upgrade locally, on-device.
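A hypothetical, highly simplified sketch of that local upgrade flow is shown below; every function name is a placeholder, and the actual models, signature scheme, and proof system are not specified here.

```python
def upgrade_iris_code_locally(signed_images, orb_public_key, new_model,
                              verify_signature, prove_inference):
    """All arguments are placeholders: `new_model` is the updated iris-code
    model, while `verify_signature` and `prove_inference` stand in for a real
    signature check and a real ZKML proving backend, respectively."""
    # 1. Check that the stored images were signed by a legitimate Orb (in a
    #    real system this check would also be enforced inside the circuit).
    assert verify_signature(orb_public_key, signed_images)

    # 2. Run the new iris-code model locally, on the user's own device.
    new_iris_code = new_model(signed_images)

    # 3. Prove, in zero knowledge, that `new_iris_code` was computed by
    #    `new_model` from Orb-signed images, without revealing the images.
    proof = prove_inference(model=new_model,
                            private_input=signed_images,
                            public_output=new_iris_code)

    # 4. Submit (new_iris_code, proof) to the registry contract, which only
    #    needs to verify the proof to accept the new iris code.
    return new_iris_code, proof
```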

Learn More and Contribute

During the second half of 2022, a few different teams and individuals working in the ZKML domain (including Worldcoin) created the ZKML community. It is an open community where its members discuss the latest research and experiments in the ZKML domain and share their findings. If you want to learn more about ZKML and start talking to people working in the field, it is a great place to ask questions and become familiar with the topic. For a more in-depth write-up, read “Checks and balances: Machine learning and zero-knowledge proofs” by Elena Burger from a16z. Also, check out the awesome-zkml resource aggregator where you can find more resources like these!

Authors

dcbuilder.eth, and the Worldcoin Team