International Association for Cryptologic Research

Advances in Cryptology – ASIACRYPT 2025

Scalable zkSNARKs for Matrix Computations:

A Generic Framework for Verifiable Deep Learning


Mingshu Cong
The University of Hong Kong, Hong Kong

Sherman S. M. Chow
The Chinese University of Hong Kong, Hong Kong

Siu Ming Yiu
The University of Hong Kong, Hong Kong

Tsz Hon Yuen
Monash University, Australia


Keywords: zkSNARK, Matrix, Zero-Knowledge, Machine Learning


Abstract

Sublinear proof sizes have recently become feasible in verifiable machine learning (VML), yet no approach achieves the trio of strictly linear prover time, logarithmic proof size and verification time, and architecture privacy. Hurdles persist because we lack a succinct commitment to the full neural network and a framework for heterogeneous models, leaving verification dependent on architecture knowledge. Existing limits motivate our new approach: a unified proof-composition framework that casts VML as the design of zero-knowledge succinct non-interactive arguments of knowledge (zkSNARKs) for matrix computations. Representing neural networks with linear and non-linear layers as a directed acyclic graph of atomic matrix operations enables topology-aware composition without revealing the graph. Modeled this way, we split proving into a reduction layer and a compression layer that attests to the reduction with a proof of proof. At the reduction layer, inspired by reduction of knowledge (Crypto '23), root-node proofs are reduced to leaf-node proofs under an interface standardized for heterogeneous linear and non-linear operations. Next, a recursive zkSNARK compresses the transcript into a single proof while preserving architecture privacy.
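To make the DAG view concrete, here is a minimal illustrative sketch (not the paper's implementation; all names are hypothetical) of a two-layer network expressed as a directed acyclic graph of atomic matrix operations and evaluated in topological order. In the framework, each node would receive a leaf-node proof and the composition would follow this same graph.

```python
def matmul(A, B):
    """Atomic linear op: naive matrix product over lists of lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def relu(A):
    """Atomic non-linear op: entrywise ReLU."""
    return [[max(x, 0.0) for x in row] for row in A]

class Node:
    """One atomic operation in the computation DAG."""
    def __init__(self, op, *parents):
        self.op, self.parents = op, parents

def evaluate(node, cache=None):
    """Evaluate the DAG bottom-up, memoizing shared sub-expressions."""
    if cache is None:
        cache = {}
    if id(node) not in cache:
        args = [evaluate(p, cache) for p in node.parents]
        cache[id(node)] = node.op(*args)
    return cache[id(node)]

# A two-layer network y = relu(X @ W1) @ W2 as a DAG of atomic ops.
X  = Node(lambda: [[1.0, -2.0], [3.0, 4.0]])
W1 = Node(lambda: [[0.5, 0.0], [0.0, 0.5]])
W2 = Node(lambda: [[1.0, 0.0], [0.0, 1.0]])
y  = Node(matmul, Node(relu, Node(matmul, X, W1)), W2)
```

A proof system composed over this graph can traverse it without publishing it, which is what allows topology-aware composition while keeping the architecture private.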

Complexity-wise, for a matrix expression with $M$ atomic operations on $n \times n$ matrices, the prover runs in $O(M n^2)$ time while proof size and verification time are $O(\log(M n))$, outperforming known VML systems. Tailored to this framework, our relations are formalized directly over matrices and vectors—a more intuitive form for VML than traditional polynomials. Our LiteBullet proof, an inner-product proof built on folding and its connection to sumcheck (Crypto '21), yields a polynomial-free alternative. With these ingredients, we reconcile heterogeneity, zero knowledge, succinctness, and architecture privacy in a single VML system.
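The folding idea behind inner-product proofs of this kind can be sketched as follows. This is an illustrative Bulletproofs-style halving over the integers, not the LiteBullet protocol itself: each round splits the vectors in half, the prover reveals two cross terms, and a challenge $x$ (fixed here for demonstration; a real protocol derives it from the transcript) collapses the claim $\langle a, b\rangle = c$ to one of half the size, so $\log_2 n$ rounds reach a length-1 claim.

```python
def inner(u, v):
    """Inner product of two equal-length vectors."""
    return sum(x * y for x, y in zip(u, v))

def fold_round(a, b, c, x):
    """One folding round: a' = aL + x*aR, b' = x*bL + bR.
    The claim <a, b> = c becomes <a', b'> = cL + x*c + x^2*cR,
    where the prover sends the cross terms cL = <aL, bR> and
    cR = <aR, bL> before seeing the challenge x."""
    n = len(a) // 2
    aL, aR = a[:n], a[n:]
    bL, bR = b[:n], b[n:]
    cL, cR = inner(aL, bR), inner(aR, bL)
    a2 = [l + x * r for l, r in zip(aL, aR)]
    b2 = [x * l + r for l, r in zip(bL, bR)]
    c2 = cL + x * c + x * x * cR
    return a2, b2, c2

def fold_to_scalar(a, b, challenges):
    """Reduce <a, b> = c to a length-1 claim in log2(n) rounds."""
    c = inner(a, b)
    for x in challenges:
        a, b, c = fold_round(a, b, c, x)
    return a, b, c  # len(a) == len(b) == 1 and a[0] * b[0] == c
```

After the final round the verifier checks a single scalar product, which is the source of the logarithmic proof size; no polynomial machinery is needed for this reduction.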

Publication

Advances in Cryptology – ASIACRYPT 2025. ASIACRYPT 2025. Lecture Notes in Computer Science, vol 16249. Springer, Singapore.

Paper

Artifact

Artifact number
asiacrypt/2025/a5

Artifact published
December 31, 2025

Badge
IACR Artifacts Functional

README

ZIP (478,804 bytes)

View on GitHub

License

Some files in this archive are licensed under a different license. See the contents of this archive for more information.

Note that license information is supplied by the authors and has not been confirmed by the IACR.


How to cite

Cong, M., Chow, S.S.M., Yiu, S.M., Yuen, T.H. (2026). Scalable zkSNARKs for Matrix Computations. In: Hanaoka, G., Yang, B.-Y. (eds) Advances in Cryptology – ASIACRYPT 2025. ASIACRYPT 2025. Lecture Notes in Computer Science, vol 16249. Springer, Singapore. https://doi.org/10.1007/978-981-95-5116-3_12. Artifact available at https://artifacts.iacr.org/asiacrypt/2025/a5