Title: [Paper Reading] Demystifying Prompts in Language Models via Perplexity Estimation
Author: 光之使者  Time: 2025-3-14 09:43
EMNLP 2023
A practical observation: LLMs are strong at zero-shot and few-shot prompted learning, yet prompts that appear semantically equivalent can sometimes produce markedly different outputs.
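As the title suggests, the paper's key signal for explaining this variance is the perplexity a language model assigns to the prompt itself. A minimal sketch of how perplexity is computed from per-token log-probabilities (the log-prob values below are hypothetical illustrations, not numbers from the paper):

```python
import math

def perplexity(token_logprobs):
    """Perplexity from per-token natural-log probabilities:
    PPL = exp(-(1/N) * sum(log p_i)). Lower means the model
    finds the text more 'natural'."""
    avg_nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_nll)

# Hypothetical per-token log-probs a causal LM might assign to
# two paraphrases of the same prompt (illustrative numbers only):
prompt_a = [-1.2, -0.8, -1.0, -1.0]   # fluent wording
prompt_b = [-2.5, -3.0, -2.8, -2.7]   # awkward paraphrase

print(perplexity(prompt_a))  # lower perplexity
print(perplexity(prompt_b))  # higher perplexity
```

Under the paper's framing, the lower-perplexity phrasing would be the better candidate prompt, since prompt perplexity correlates with downstream task performance.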