{"context": "这篇文章讨论了气候变化的影响,包括经济、社会和环境层面的变化。", "target": "This article discusses the impacts of climate change, including economic, social, and environmental changes."},
{"context": "随着科技的发展,人工智能正在逐步改变各个行业的面貌。", "target": "With the development of technology, artificial intelligence is gradually changing the landscape of various industries."}
"target": "Please reason step by step and translate the following sentence:\n1. Understand the meaning of the sentence\n2. Analyze the structure of the sentence\n3. Adjust according to the grammar of the target language\n4. Generate the final translation\nThe sentence to be translated: Climate change has had a profound impact on global ecosystems."}
Papers on prompt tuning (soft and hard prompts):
Lester, B., Al-Rfou, R., & Constant, N. (2021). The Power of Scale for Parameter-Efficient Prompt Tuning. Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 3045–3059. (This paper proposed parameter-efficient prompt tuning and is a key foundation for soft prompt tuning.)
Khashabi, D., Cohan, A., & Choi, Y. (2021). Prompting with Discrete Prompts: A Survey. Transactions of the Association for Computational Linguistics, 10, 866–883. (This paper surveys discrete prompting and can serve as a reference for hard prompt tuning.)
Li, X. L., & Liang, P. (2021). Prefix-Tuning: Optimizing Continuous Prompts for Generation. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 4582–4597. (This paper proposed Prefix-Tuning, another prompt tuning approach, which can serve as a comparison or a complement.)
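As a concrete illustration of the soft prompt tuning described by Lester et al. (2021), the sketch below freezes a pretrained sequence-to-sequence model and trains only a small set of virtual prompt tokens. It assumes the HuggingFace `transformers` and `peft` libraries; the `facebook/mbart-large-50` checkpoint and the choice of 20 virtual tokens are illustrative, not prescribed by the papers above.

```python
from transformers import AutoModelForSeq2SeqLM
from peft import PromptTuningConfig, TaskType, get_peft_model

# Load a multilingual seq2seq backbone (checkpoint name is illustrative).
base = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-50")

# Soft prompt tuning: only the `num_virtual_tokens` embedding vectors are
# trained; the backbone keeps its pretrained weights frozen.
config = PromptTuningConfig(task_type=TaskType.SEQ_2_SEQ_LM, num_virtual_tokens=20)
model = get_peft_model(base, config)

model.print_trainable_parameters()  # only a tiny fraction of the full model
```

Prefix-Tuning (Li & Liang, 2021) differs mainly in where the learned vectors are injected: instead of a single soft prompt prepended to the input embeddings, it prepends trainable key/value prefixes at every attention layer.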
Paper on chain-of-thought prompting:
Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., … & Zhou, D. (2022). Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35. (This paper introduced chain-of-thought prompting and can serve as the theoretical basis for CoT prompting.)
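For reference, the step-by-step template shown in the data example earlier can be wrapped in a small helper so that any source sentence receives the same chain-of-thought instructions; the helper name is illustrative.

```python
def build_cot_prompt(sentence: str) -> str:
    # Reproduces the four reasoning steps from the JSONL example above.
    return (
        "Please reason step by step and translate the following sentence:\n"
        "1. Understand the meaning of the sentence\n"
        "2. Analyze the structure of the sentence\n"
        "3. Adjust according to the grammar of the target language\n"
        "4. Generate the final translation\n"
        f"The sentence to be translated: {sentence}"
    )

print(build_cot_prompt("Climate change has had a profound impact on global ecosystems."))
```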
Paper on few-shot learning:
Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., … & Amodei, D. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877–1901. (This paper examines the ability of language models to act as few-shot learners and can serve as the theoretical reference for few-shot learning.)
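In the few-shot setting of Brown et al. (2020) the model is not fine-tuned at all; a handful of demonstrations are simply placed in the prompt. A minimal sketch using the two context/target pairs shown earlier (the instruction wording and the final test sentence are illustrative):

```python
# Demonstrations taken from the JSONL examples above.
EXAMPLES = [
    ("这篇文章讨论了气候变化的影响,包括经济、社会和环境层面的变化。",
     "This article discusses the impacts of climate change, including economic, social, and environmental changes."),
    ("随着科技的发展,人工智能正在逐步改变各个行业的面貌。",
     "With the development of technology, artificial intelligence is gradually changing the landscape of various industries."),
]

def build_few_shot_prompt(source: str) -> str:
    demos = "\n\n".join(f"Chinese: {zh}\nEnglish: {en}" for zh, en in EXAMPLES)
    return f"Translate the Chinese sentence into English.\n\n{demos}\n\nChinese: {source}\nEnglish:"

# Illustrative test input (the Chinese counterpart of the CoT example sentence).
print(build_few_shot_prompt("气候变化对全球生态系统产生了深远的影响。"))
```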
Papers on the WMT datasets and the BLEU evaluation metric:
Papineni, K., Roukos, S., Ward, T., & Zhu, W. J. (2002). BLEU: a method for automatic evaluation of machine translation. Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, 311–318. (This paper proposed the BLEU metric and is a key reference in machine translation.)
Bojar, O., Buck, C., Federmann, C., Haddow, B., Koehn, P., Leveling, J., … & Zampieri, M. (2014). Findings of the 2014 Workshop on Statistical Machine Translation. Proceedings of the Ninth Workshop on Statistical Machine Translation, 12–58. (This paper describes the WMT 2014 shared task and its datasets and can serve as the reference for the WMT datasets.)
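A minimal sketch of computing corpus-level BLEU, assuming the `sacrebleu` package; the hypothesis and reference strings are illustrative placeholders, not results from the article:

```python
import sacrebleu

# One system output and one reference stream (sacrebleu supports several).
hypotheses = ["Climate change has had a profound impact on global ecosystems."]
references = [["Climate change has profoundly affected global ecosystems."]]

score = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {score.score:.2f}")
```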
Paper on the MBART model:
Liu, Y., Gu, J., Goyal, N., Li, X., Edunov, S., Ghazvininejad, M., … & Lewis, M. (2020). Multilingual denoising pre-training for neural machine translation. Transactions of the Association for Computational Linguistics, 8, 726–742. (This paper proposed the MBART model and can serve as the reference for it.)
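For completeness, Chinese-to-English inference with an MBART-style model might look like the sketch below. It assumes the HuggingFace `transformers` library and the publicly released `facebook/mbart-large-50-many-to-many-mmt` checkpoint, a multilingual fine-tuned descendant of the model described by Liu et al. (2020); the input sentence is illustrative.

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_name = "facebook/mbart-large-50-many-to-many-mmt"
tokenizer = MBart50TokenizerFast.from_pretrained(model_name)
model = MBartForConditionalGeneration.from_pretrained(model_name)

tokenizer.src_lang = "zh_CN"  # source language: Chinese
inputs = tokenizer("气候变化对全球生态系统产生了深远的影响。", return_tensors="pt")

# Force the decoder to start generating in English.
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("en_XX"),
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```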