IACR Transactions on Cryptographic Hardware and Embedded Systems, Volume 2025
Dash: Accelerating Distributed Private Convolutional Neural Network Inference with Arithmetic Garbled Circuits
Jonas Sander
University of Luebeck, Luebeck, Germany
Sebastian Berndt
Technische Hochschule Luebeck, Luebeck, Germany
Ida Bruhns
University of Luebeck, Luebeck, Germany
Thomas Eisenbarth
University of Luebeck, Luebeck, Germany
Keywords: Garbled Circuit, Inference, GPU, TEE
Abstract
The adoption of machine learning solutions is rapidly increasing across all parts of society. As models grow larger, both training and inference of machine learning models are increasingly outsourced, e.g., to cloud service providers. This means that potentially sensitive data is processed on untrusted platforms, which bears inherent data security and privacy risks. In this work, we investigate how to protect distributed machine learning systems, focusing on deep convolutional neural networks. The most common and best-performing mixed MPC approaches are based on homomorphic encryption (HE), secret sharing, and garbled circuits. They commonly suffer from large performance overheads, significant accuracy losses, and communication overheads that grow linearly in the depth of the neural network. To address these problems, we present Dash, a fast and distributed private convolutional neural network inference scheme secure against malicious attackers. Building on arithmetic garbling gadgets [BMR16] and fancy-garbling [BCM+19], Dash is based purely on arithmetic garbled circuits. We introduce LabelTensors, which allow us to leverage the massive parallelism of modern GPUs. Combined with state-of-the-art garbling optimizations, Dash outperforms previous garbling approaches by up to a factor of about 100. Furthermore, we introduce an efficient scaling operation over the residues of the Chinese remainder theorem (CRT) representation used in arithmetic garbled circuits, which allows us to garble larger networks and achieve much higher accuracy than previous approaches. Finally, Dash requires only a single communication round per inference step, regardless of the depth of the neural network, and a very small, constant online communication volume.
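The CRT representation mentioned in the abstract can be illustrated in plain (unencrypted) arithmetic: an integer is decomposed into residues modulo small pairwise-coprime bases, additions and multiplications act independently on each residue, and the result is reconstructed at the end. This is only a minimal sketch of the underlying number theory, with illustrative moduli chosen here; it is not Dash's garbled implementation, which operates on encrypted labels rather than plaintext residues.

```python
from math import prod

# Small pairwise-coprime moduli (illustrative values only).
# The composite modulus is N = 7 * 11 * 13 * 17 = 17017.
MODULI = (7, 11, 13, 17)

def to_crt(x, moduli=MODULI):
    """Encode an integer as its residues modulo each base."""
    return tuple(x % m for m in moduli)

def from_crt(residues, moduli=MODULI):
    """Reconstruct the integer in [0, N) via the Chinese remainder theorem."""
    N = prod(moduli)
    total = 0
    for r, m in zip(residues, moduli):
        Ni = N // m
        # pow(Ni, -1, m) computes the modular inverse of Ni modulo m.
        total += r * Ni * pow(Ni, -1, m)
    return total % N

# Arithmetic acts component-wise on the residues, which is what makes the
# representation amenable to massively parallel (e.g., GPU) evaluation.
def crt_add(a, b, moduli=MODULI):
    return tuple((x + y) % m for x, y, m in zip(a, b, moduli))

def crt_mul(a, b, moduli=MODULI):
    return tuple((x * y) % m for x, y, m in zip(a, b, moduli))

x, y = 123, 45
assert from_crt(crt_add(to_crt(x), to_crt(y))) == (x + y) % prod(MODULI)
assert from_crt(crt_mul(to_crt(x), to_crt(y))) == (x * y) % prod(MODULI)
```

Because each residue channel is independent, the residues of many labels can be laid out as tensors and processed in parallel, which is the intuition behind the LabelTensors described above.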
Publication
IACR Transactions on Cryptographic Hardware and Embedded Systems, Volume 2025, Issue 1
Paper Artifact
Artifact number
tches/2025/a3
Artifact published
March 6, 2025
Badge
✅ IACR CHES Artifacts Functional
License
This work is licensed under the GNU General Public License version 3.
Some files in this archive are licensed under a different license. See the contents of this archive for more information.
How to cite
Sander, J., Berndt, S., Bruhns, I., & Eisenbarth, T. (2024). Dash: Accelerating Distributed Private Convolutional Neural Network Inference with Arithmetic Garbled Circuits. IACR Transactions on Cryptographic Hardware and Embedded Systems, 2025(1), 420-449. https://doi.org/10.46586/tches.v2025.i1.420-449. Artifact available at https://artifacts.iacr.org/tches/2025/a3