Blockchain Paper Quick Read (CCF A) - SECURITY 2024: Privacy-Preserving Machine Learning for Malicious Security with a Dishonest Majority

Conference: 33rd USENIX Security Symposium
CCF level: CCF A
Categories: network and information security
Year: 2024
Conference time: August 14–16, 2024, Philadelphia, PA, USA




Title: 
MD-ML: Super Fast Privacy-Preserving Machine Learning for Malicious Security with a Dishonest Majority

Authors




Abstract
Privacy-preserving machine learning (PPML) enables the training and inference of models on private data, addressing security concerns in machine learning. PPML based on secure multi-party computation (MPC) has garnered significant attention from both the academic and industrial communities. Nevertheless, only a few PPML works provide malicious security with a dishonest majority. The state of the art by Damgård et al. (SP'19) fails to meet the demand for large models in practice, due to insufficient efficiency. In this work, we propose MD-ML, a framework for Maliciously secure Dishonest majority PPML, with a focus on boosting online efficiency.
MD-ML works for n parties, tolerating corruption of up to n-1 parties. We construct our novel protocols for PPML, including truncation, dot product, matrix multiplication, and comparison. The online communication of our dot product protocol is one single element per party, independent of input length. In addition, the online cost of our multiply-then-truncate protocol is identical to multiplication, which means truncation incurs no additional online cost. These features are achieved for the first time in the literature concerning maliciously secure dishonest majority PPML.
Benchmarking of MD-ML is conducted for SVM and NN including LeNet, AlexNet, and ResNet-18. For NN inference, compared to the state of the art (Damgård et al., SP'19), we are about 3.4–11.0x (LAN) and 9.7–157.7x (WAN) faster in online execution time.
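The second paragraph of the abstract notes that MD-ML's multiply-then-truncate protocol costs no more online than a plain multiplication. Truncation matters because PPML frameworks use fixed-point arithmetic: real numbers are encoded as scaled integers, and every product doubles the number of fractional bits, so it has to be rescaled. The sketch below shows only this plaintext encode/multiply/truncate arithmetic, with an illustrative fractional-bit count F = 16 chosen for the example; it is not MD-ML's secret-shared protocol, whose construction is given in the paper.

    # Plaintext sketch of fixed-point multiply-then-truncate.
    # Not MD-ML's protocol: this only shows the clear-text arithmetic that
    # makes a truncation step necessary after every fixed-point multiplication.

    F = 16          # fractional bits (illustrative choice, not from the paper)
    SCALE = 1 << F  # scaling factor 2^F

    def encode(x: float) -> int:
        # Represent a real number as a scaled integer with F fractional bits.
        return round(x * SCALE)

    def decode(v: int) -> float:
        # Map a fixed-point integer back to a real number.
        return v / SCALE

    def mul_then_truncate(a: int, b: int) -> int:
        # The raw product carries 2F fractional bits; shifting right by F bits
        # returns it to the original encoding. Inside MPC this shift is what a
        # truncation protocol has to perform on secret shares.
        return (a * b) >> F

    x, y = encode(1.5), encode(-2.25)
    print(decode(mul_then_truncate(x, y)))  # prints -3.375

In the clear this shift is trivial; the abstract's claim is that even with malicious security and a dishonest majority, the secret-shared version of this step can be folded into the multiplication with no additional online cost.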

Follow us to keep receiving the latest blockchain papers
Insight into Blockchain Technology Trends
