【8】proposes a text-to-text privatization method. Text-to-text privatization is based on dX-privacy【9】, a metric-based (distance-based) relaxation of local differential privacy that is widely used to protect the privacy of textual content.
Formally, given an input set X and an output set Y, dX is a distance function (a metric) defined on X. A randomized mechanism M: X → Y satisfies dX-privacy if and only if, for any x ∈ X and x' ∈ X, the output distributions of M(x) and M(x') satisfy the following inequality for every y ∈ Y:

Pr[M(x) = y] ≤ e^(η·dX(x, x')) · Pr[M(x') = y]

where η ≥ 0 is the privacy parameter that controls the strength of the privacy guarantee. When applying text-to-text privatization, the core idea is to replace each token x_t with the word whose embedding is closest to the noised embedding of x_t:

x_t' = argmin_{w ∈ V} ‖φ(x_t) + z − φ(w)‖

where φ(·) maps a word to its embedding, V is the vocabulary, and z is random noise drawn with density proportional to exp(−η‖z‖), so that the replacement mechanism satisfies dX-privacy with respect to the Euclidean distance in the embedding space.
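To make the mechanism concrete, below is a minimal Python sketch of this privatization step, following the multivariate-noise construction common in the dX-privacy literature: the noise direction is drawn uniformly from the unit sphere and its magnitude from a Gamma(d, 1/η) distribution, which together give density proportional to exp(−η‖z‖). The function names (sample_dx_noise, privatize_token), the toy vocabulary, and the random embeddings are illustrative assumptions, not details taken from【8】.

```python
import numpy as np

def sample_dx_noise(d, eta, rng):
    """Sample z in R^d with density proportional to exp(-eta * ||z||).

    Direction: uniform on the unit sphere.
    Magnitude: Gamma(shape=d, scale=1/eta).
    """
    v = rng.normal(size=d)
    v /= np.linalg.norm(v)
    r = rng.gamma(shape=d, scale=1.0 / eta)
    return r * v

def privatize_token(word, vocab, embeddings, eta, rng):
    """Replace `word` with the vocabulary word nearest to its noised embedding."""
    x = embeddings[vocab.index(word)]
    noisy = x + sample_dx_noise(embeddings.shape[1], eta, rng)
    # Nearest neighbor in Euclidean distance over the whole vocabulary.
    dists = np.linalg.norm(embeddings - noisy, axis=1)
    return vocab[int(np.argmin(dists))]

# Toy usage: 5 words with random 8-dimensional embeddings (illustrative only).
rng = np.random.default_rng(0)
vocab = ["apple", "banana", "cat", "dog", "house"]
embeddings = rng.normal(size=(len(vocab), 8))
print(privatize_token("cat", vocab, embeddings, eta=10.0, rng=rng))
```

A smaller η injects larger noise, giving stronger privacy at the cost of utility; because dX is the Euclidean distance between embeddings, nearby (and typically semantically similar) words are the most likely replacements.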
【1】Notice on the Release of the Guidelines for Shanghai's 2024 Special Projects on Key Blockchain Technology Research
【2】FATE-LLM: A Industrial Grade Federated Learning Framework for Large Language Models
【3】A Fast, Performant, Secure Distributed Training Framework For LLM
【4】Ditto: Quantization-aware Secure Inference of Transformers upon MPC
【5】PrivateLoRA For Efficient Privacy Preserving LLM
【6】DarKnight: An Accelerated Framework for Privacy and Integrity Preserving Deep Learning Using Trusted Hardware
【7】EW-Tune: A Framework for Privately Fine-Tuning Large Language Models with Differential Privacy
【8】Privacy-Preserving Prompt Tuning for Large Language Model Services
【9】A Predictive Differentially-Private Mechanism for Mobility Traces
【10】MPCFormer: Fast, Performant and Private Transformer Inference with MPC
【11】PUMA: Secure Inference of LLaMA-7B in Five Minutes
【12】Iron: Private Inference on Transformers
【13】BumbleBee: Secure Two-party Inference Framework for Large Transformers