Conference: 33rd USENIX Security Symposium
CCF level: CCF A
Categories: network and information security
Year: 2024
Conference time: August 14–16, 2024, Philadelphia, PA, USA
Title:
MD-ML: Super Fast Privacy-Preserving Machine Learning for Malicious Security with a Dishonest Majority
Authors:
Abstract:
Privacy-preserving machine learning (PPML) enables the training and inference of models on private data, addressing privacy concerns in machine learning. PPML based on secure multi-party computation (MPC) has garnered significant attention from both the academic and industrial communities. Nevertheless, only a few PPML works provide malicious security with a dishonest majority. The state of the art by Damgård et al. (SP'19) is not efficient enough to meet the demands of large models in practice. In this work, we propose MD-ML, a framework for Maliciously secure Dishonest majority PPML, with a focus on boosting online efficiency.
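For context, dishonest-majority MPC protocols in this setting are typically built on additive secret sharing over a ring, where any n-1 shares reveal nothing about the secret. The following minimal Python sketch is an illustration of that building block over Z_{2^64}, not MD-ML's implementation; the function names are chosen here for exposition.

# Minimal illustration (not MD-ML's code) of n-party additive secret
# sharing over the ring Z_{2^64}: any n-1 shares are uniformly random,
# so a dishonest majority learns nothing about the secret.
import secrets

MOD = 1 << 64  # ring Z_{2^64}

def share(secret: int, n: int) -> list[int]:
    """Split `secret` into n additive shares modulo 2^64."""
    shares = [secrets.randbelow(MOD) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % MOD)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Recover the secret; all n shares are required."""
    return sum(shares) % MOD

if __name__ == "__main__":
    x = 42
    sh = share(x, n=3)
    assert reconstruct(sh) == x  # all 3 shares recover x; any 2 reveal nothing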
MD-ML works for n parties, tolerating corruption of up to n-1 parties. We construct novel protocols for the core PPML building blocks, including truncation, dot product, matrix multiplication, and comparison. The online communication of our dot-product protocol is a single element per party, independent of the input length. In addition, the online cost of our multiply-then-truncate protocol is identical to that of multiplication, meaning truncation incurs no additional online cost. These features are achieved for the first time in the literature on maliciously secure dishonest-majority PPML.
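To illustrate how a dot product can cost only one online ring element per party, the sketch below follows a generic masked-evaluation pattern: inputs are published in masked form, the input-dependent work is done locally or pushed into input-independent preprocessing, and each party broadcasts a single element to open the masked result. This is a semi-honest toy example with assumed names (share, mask, gamma); MD-ML's actual protocol additionally provides malicious security and fuses truncation into the same online step, which this sketch does not capture.

# Toy semi-honest sketch: dot product whose ONLINE communication is one ring
# element per party, independent of the vector length. Not MD-ML's protocol.
import secrets

MOD = 1 << 64
N_PARTIES = 3

def rand():
    return secrets.randbelow(MOD)

def share(v, n=N_PARTIES):
    sh = [rand() for _ in range(n - 1)]
    sh.append((v - sum(sh)) % MOD)
    return sh

def mask(values):
    """Represent each secret v as a public masked value m = v + lam and additive shares of lam."""
    lams = [rand() for _ in values]
    masked = [(v + l) % MOD for v, l in zip(values, lams)]
    lam_shares = [share(l) for l in lams]  # lam_shares[i][j] = party j's share of lam_i
    return masked, lams, lam_shares

# Secret input vectors (integer values in the ring).
x = [3, 1, 4, 1, 5]
y = [9, 2, 6, 5, 3]

# --- Offline phase (input-independent preprocessing) ---
mx, lx, lx_sh = mask(x)
my, ly, ly_sh = mask(y)
lam_z = rand()                    # mask for the output wire
lam_z_sh = share(lam_z)
gamma = sum(a * b for a, b in zip(lx, ly)) % MOD  # sum of lam_x_i * lam_y_i
gamma_sh = share(gamma)

# --- Online phase: each party broadcasts ONE ring element ---
broadcasts = []
for j in range(N_PARTIES):
    d_j = (sum(-mx[i] * ly_sh[i][j] - my[i] * lx_sh[i][j] for i in range(len(x)))
           + gamma_sh[j] + lam_z_sh[j]) % MOD
    if j == 0:  # the public term sum(mx_i * my_i) is added by one designated party
        d_j = (d_j + sum(a * b for a, b in zip(mx, my))) % MOD
    broadcasts.append(d_j)

m_z = sum(broadcasts) % MOD        # public masked output: <x, y> + lam_z

# Sanity check (in a real run lam_z stays secret-shared, so <x, y> is never revealed).
assert (m_z - lam_z) % MOD == sum(a * b for a, b in zip(x, y)) % MOD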
We benchmark MD-ML on SVM and NN models, including LeNet, AlexNet, and ResNet-18. For NN inference, compared to the state of the art (Damgård et al., SP'19), we are about 3.4–11.0x (LAN) and 9.7–157.7x (WAN) faster in online execution time.