Xiang Lisa Li, Percy Liang, Prefix-Tuning: Optimizing Continuous Prompts for Generation, "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)", Online: Association for Computational Linguistics, 2021, pp. 4582–4597, DOI: 10.18653/v1/2021.acl-long.353 [accessed 2023-03-24].
Brian Lester, Rami Al-Rfou, Noah Constant, The Power of Scale for Parameter-Efficient Prompt Tuning, "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", Online and Punta Cana, Dominican Republic: Association for Computational Linguistics, 2021, pp. 3045–3059, DOI: 10.18653/v1/2021.emnlp-main.243 [accessed 2023-03-24].
Arthur Shield [online], www.arthur.ai [accessed 2024-03-20].
Pengfei Liu et al., Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing, "arXiv", 2021, DOI: 10.48550/arXiv.2107.13586, arXiv:2107.13586.
Pranab Sahoo et al., A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications, "arXiv", 2024, DOI: 10.48550/arXiv.2402.07927, arXiv:2402.07927.
Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?, "arXiv", 2022, arXiv:2202.12837.
Xufeng Zhao et al., Enhancing Zero-Shot Chain-of-Thought Reasoning in Large Language Models through Logic, "arXiv", 2023, DOI: 10.48550/arXiv.2309.13339, arXiv:2309.13339.
Zilong Wang et al., Chain-of-Table: Evolving Tables in the Reasoning Chain for Table Understanding, "arXiv", 2024, DOI: 10.48550/arXiv.2401.04398, arXiv:2401.04398.
VidProM: A Million-scale Real Prompt-Gallery Dataset for Text-to-Video Diffusion Models, "arXiv", 2024, arXiv:2403.06098.
Chrisantha Fernando et al., Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution, "arXiv", 2023, DOI: 10.48550/arXiv.2309.16797, arXiv:2309.16797.
Qingyan Guo et al., Connecting Large Language Models with Evolutionary Algorithms Yields Powerful Prompt Optimizers, "arXiv", 2023, DOI: 10.48550/arXiv.2309.08532, arXiv:2309.08532.
Zhuoshi Pan et al., LLMLingua-2: Data Distillation for Efficient and Faithful Task-Agnostic Prompt Compression, "arXiv", 2024, DOI: 10.48550/arXiv.2403.12968, arXiv:2403.12968.