Prompt engineering (English Wikipedia)

Analysis of the information sources cited in the references of the English-language Wikipedia article "Prompt engineering".

[Table: number of references per website, with each site's global and English-Wikipedia citation rank]

aclanthology.org

arstechnica.com

artnews.com

arxiv.org

  • Wahle, Jan Philip; Ruas, Terry; Xu, Yang; Gipp, Bela (2024). "Paraphrase Types Elicit Prompt Engineering Capabilities". In Al-Onaizan, Yaser; Bansal, Mohit; Chen, Yun-Nung (eds.). Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing. Miami, Florida, USA: Association for Computational Linguistics. pp. 11004–11033. arXiv:2406.19898. doi:10.18653/v1/2024.emnlp-main.617.
  • The Natural Language Decathlon: Multitask Learning as Question Answering. ICLR. 2018. arXiv:1806.08730.
  • Wei, Jason; Wang, Xuezhi; Schuurmans, Dale; Bosma, Maarten; Ichter, Brian; Xia, Fei; Chi, Ed H.; Le, Quoc V.; Zhou, Denny (October 31, 2022). Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. Advances in Neural Information Processing Systems (NeurIPS 2022). Vol. 35. arXiv:2201.11903.
  • Chen, Zijie; Zhang, Lichao; Weng, Fangsheng; Pan, Lili; Lan, Zhenzhong (June 16, 2024). "Tailored Visions: Enhancing Text-to-Image Generation with Personalized Prompt Rewriting". 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. pp. 7727–7736. arXiv:2310.08129. doi:10.1109/cvpr52733.2024.00738. ISBN 979-8-3503-5300-6.
  • Kojima, Takeshi; Gu, Shixiang Shane; Reid, Machel; Matsuo, Yutaka; Iwasawa, Yusuke (2022). "Large Language Models are Zero-Shot Reasoners". NeurIPS. arXiv:2205.11916.
  • Garg, Shivam; Tsipras, Dimitris; Liang, Percy; Valiant, Gregory (2022). "What Can Transformers Learn In-Context? A Case Study of Simple Function Classes". NeurIPS. arXiv:2208.01066.
  • Brown, Tom; Mann, Benjamin; Ryder, Nick; Subbiah, Melanie; Kaplan, Jared D.; Dhariwal, Prafulla; Neelakantan, Arvind (2020). "Language models are few-shot learners". Advances in Neural Information Processing Systems. 33: 1877–1901. arXiv:2005.14165.
  • Wei, Jason; Tay, Yi; Bommasani, Rishi; Raffel, Colin; Zoph, Barret; Borgeaud, Sebastian; Yogatama, Dani; Bosma, Maarten; Zhou, Denny; Metzler, Donald; Chi, Ed H.; Hashimoto, Tatsunori; Vinyals, Oriol; Liang, Percy; Dean, Jeff; Fedus, William (October 2022). "Emergent Abilities of Large Language Models". Transactions on Machine Learning Research. arXiv:2206.07682. In prompting, a pre-trained language model is given a prompt (e.g. a natural language instruction) of a task and completes the response without any further training or gradient updates to its parameters... The ability to perform a task via few-shot prompting is emergent when a model has random performance until a certain scale, after which performance increases to well-above random
  • Caballero, Ethan; Gupta, Kshitij; Rish, Irina; Krueger, David (2023). "Broken Neural Scaling Laws". ICLR. arXiv:2210.14891.
  • Garg, Shivam; Tsipras, Dimitris; Liang, Percy; Valiant, Gregory (2022). "What Can Transformers Learn In-Context? A Case Study of Simple Function Classes". NeurIPS. arXiv:2208.01066. Training a model to perform in-context learning can be viewed as an instance of the more general learning-to-learn or meta-learning paradigm
  • Self-Consistency Improves Chain of Thought Reasoning in Language Models. ICLR. 2023. arXiv:2203.11171.
  • Tree of Thoughts: Deliberate Problem Solving with Large Language Models. NeurIPS. 2023. arXiv:2305.10601.
  • Quantifying Language Models' Sensitivity to Spurious Features in Prompt Design or: How I learned to start worrying about prompt formatting. ICLR. 2024. arXiv:2310.11324.
  • Leidinger, Alina; van Rooij, Robert; Shutova, Ekaterina (2023). Bouamor, Houda; Pino, Juan; Bali, Kalika (eds.). "The language of prompting: What linguistic properties make a prompt successful?". Findings of the Association for Computational Linguistics: EMNLP 2023. Singapore: Association for Computational Linguistics: 9210–9232. arXiv:2311.01967. doi:10.18653/v1/2023.findings-emnlp.618.
  • Linzbach, Stephan; Dimitrov, Dimitar; Kallmeyer, Laura; Evang, Kilian; Jabeen, Hajira; Dietze, Stefan (June 2024). "Dissecting Paraphrases: The Impact of Prompt Syntax and supplementary Information on Knowledge Retrieval from Pretrained Language Models". In Duh, Kevin; Gomez, Helena; Bethard, Steven (eds.). Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers). Mexico City, Mexico: Association for Computational Linguistics. pp. 3645–3655. arXiv:2404.01992. doi:10.18653/v1/2024.naacl-long.201.
  • Efficient multi-prompt evaluation of LLMs. NeurIPS. 2024. arXiv:2405.17202.
  • Sequeda, Juan; Allemang, Dean; Jacob, Bryon (2023). "A Benchmark to Understand the Role of Knowledge Graphs on Large Language Model's Accuracy for Question Answering on Enterprise SQL Databases". GRADES-NDA. arXiv:2311.07509.
  • Explaining Patterns in Data with Language Models via Interpretable Autoprompting (PDF). BlackboxNLP Workshop. 2023. arXiv:2210.01848.
  • Large Language Models are Human-Level Prompt Engineers. ICLR. 2023. arXiv:2211.01910.
  • Pryzant, Reid; Iter, Dan; Li, Jerry; Lee, Yin Tat; Zhu, Chenguang; Zeng, Michael (2023). "Automatic Prompt Optimization with "Gradient Descent" and Beam Search". Conference on Empirical Methods in Natural Language Processing: 7957–7968. arXiv:2305.03495. doi:10.18653/v1/2023.emnlp-main.494.
  • Automatic Chain of Thought Prompting in Large Language Models. ICLR. 2023. arXiv:2210.03493.
  • Gal, Rinon; Alaluf, Yuval; Atzmon, Yuval; Patashnik, Or; Bermano, Amit H.; Chechik, Gal; Cohen-Or, Daniel (2023). "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion". ICLR. arXiv:2208.01618. Using only 3-5 images of a user-provided concept, like an object or a style, we learn to represent it through new "words" in the embedding space of a frozen text-to-image model.
  • Lester, Brian; Al-Rfou, Rami; Constant, Noah (2021). "The Power of Scale for Parameter-Efficient Prompt Tuning". Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. pp. 3045–3059. arXiv:2104.08691. doi:10.18653/V1/2021.EMNLP-MAIN.243. S2CID 233296808. In this work, we explore "prompt tuning," a simple yet effective mechanism for learning "soft prompts"...Unlike the discrete text prompts used by GPT-3, soft prompts are learned through back-propagation
  • How Does In-Context Learning Help Prompt Tuning?. EACL. 2024. arXiv:2302.11521.
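
Several of the arXiv entries above describe the same basic mechanism: Brown et al. (2020) show that a frozen model can perform a task from a handful of worked examples placed directly in the prompt, and Wei et al. (2022) show that including intermediate reasoning steps in those examples (chain-of-thought prompting) helps on multi-step problems. The sketch below is a minimal Python illustration of how such a prompt is assembled, not any paper's reference implementation; the exemplars and the final question are assumptions, and sending the string to an actual model is left as a placeholder.

```python
# Illustrative sketch only: builds a few-shot chain-of-thought prompt as plain text.
# The exemplars and the example question are hypothetical; no model API is called.

FEW_SHOT_EXEMPLARS = [
    {
        "question": "Roger has 5 tennis balls. He buys 2 more cans of 3 balls each. How many balls does he have now?",
        "reasoning": "Roger started with 5 balls. 2 cans of 3 balls is 6 balls. 5 + 6 = 11.",
        "answer": "11",
    },
    {
        "question": "A cafeteria had 23 apples. They used 20 and bought 6 more. How many apples do they have?",
        "reasoning": "They had 23 apples and used 20, leaving 3. Buying 6 more gives 3 + 6 = 9.",
        "answer": "9",
    },
]

def build_cot_prompt(new_question: str) -> str:
    """Concatenate worked exemplars (question, reasoning, answer) before the new
    question, prompting the model to continue with its own step-by-step reasoning."""
    parts = [
        f"Q: {ex['question']}\nA: {ex['reasoning']} The answer is {ex['answer']}."
        for ex in FEW_SHOT_EXEMPLARS
    ]
    parts.append(f"Q: {new_question}\nA:")  # the model completes this final line
    return "\n\n".join(parts)

if __name__ == "__main__":
    prompt = build_cot_prompt("A farmer has 12 cows and sells 4. How many cows remain?")
    print(prompt)  # in practice this string would be sent to a language model API
```

No parameters of the model change; as the Wei et al. quote above notes, the behaviour is elicited entirely by the text of the prompt.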

doi.org

fastcompany.com

googleblog.com

ai.googleblog.com

ibm.com

ieee.org

spectrum.ieee.org

  • Genkina, Dina (March 6, 2024). "AI Prompt Engineering is Dead: Long live AI prompt engineering". IEEE Spectrum. Retrieved January 18, 2025.

jmlr.org

kdnuggets.com

microsoft.com

midjourney.com

docs.midjourney.com

  • "Prompts". docs.midjourney.com. Retrieved August 14, 2023.

nih.gov

ncbi.nlm.nih.gov

pubmed.ncbi.nlm.nih.gov

nytimes.com

openai.com

cdn.openai.com

openart.ai

cdn.openart.ai

  • Diab, Mohamad; Herrera, Julian; Chernow, Bob (October 28, 2022). "Stable Diffusion Prompt Book" (PDF). Retrieved August 7, 2023. Prompt engineering is the process of structuring words that can be interpreted and understood by a text-to-image model. Think of it as the language you need to speak in order to tell an AI model what to draw.
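
The quoted definition treats prompt engineering for text-to-image models as structuring words the model can interpret. As a rough illustration of that idea (the component categories and example terms below are assumptions, not taken from the cited prompt book), a prompt can be assembled from a subject plus medium, style, and modifier terms:

```python
# Rough illustration: compose a text-to-image prompt from labelled parts.
# Categories and example terms are assumptions made for demonstration.

def compose_prompt(subject: str, medium: str, style: str, extras: list[str]) -> str:
    """Join the components into one comma-separated string, the plain-text
    form that text-to-image models such as Stable Diffusion accept."""
    return ", ".join([subject, medium, style, *extras])

prompt = compose_prompt(
    subject="a lighthouse on a cliff at sunset",
    medium="digital painting",
    style="impressionist",
    extras=["warm colors", "high detail"],
)
print(prompt)
# a lighthouse on a cliff at sunset, digital painting, impressionist, warm colors, high detail
```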

quantamagazine.org

scientificamerican.com

searchenginejournal.com

semanticscholar.org

api.semanticscholar.org

  • Li, Xiang Lisa; Liang, Percy (2021). "Prefix-Tuning: Optimizing Continuous Prompts for Generation". Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). pp. 4582–4597. doi:10.18653/V1/2021.ACL-LONG.353. S2CID 230433941. In this paper, we propose prefix-tuning, a lightweight alternative to fine-tuning... Prefix-tuning draws inspiration from prompting
  • Lester, Brian; Al-Rfou, Rami; Constant, Noah (2021). "The Power of Scale for Parameter-Efficient Prompt Tuning". Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. pp. 3045–3059. arXiv:2104.08691. doi:10.18653/V1/2021.EMNLP-MAIN.243. S2CID 233296808. In this work, we explore "prompt tuning," a simple yet effective mechanism for learning "soft prompts"...Unlike the discrete text prompts used by GPT-3, soft prompts are learned through back-propagation
  • Shin, Taylor; Razeghi, Yasaman; Logan IV, Robert L.; Wallace, Eric; Singh, Sameer (November 2020). "AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts". Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Online: Association for Computational Linguistics. pp. 4222–4235. doi:10.18653/v1/2020.emnlp-main.346. S2CID 226222232.
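
The prefix-tuning, prompt-tuning, and AutoPrompt entries above all describe learning a prompt representation by gradient descent while the underlying model stays frozen. The following is a minimal PyTorch sketch of that idea, not any paper's reference code: a toy stand-in model (all sizes, the model, and the random data are assumptions) is frozen, and only the prepended soft-prompt embeddings receive gradient updates.

```python
# Minimal sketch of "soft prompt" tuning: a small frozen model stands in for a
# pre-trained language model, and only the prepended prompt embeddings are trained.
# Sizes, the toy model, and the random training data are assumptions for illustration.

import torch
import torch.nn as nn

VOCAB, DIM, PROMPT_LEN, SEQ_LEN, CLASSES = 100, 32, 5, 8, 2

class FrozenToyLM(nn.Module):
    """Stand-in for a pre-trained model: embeddings plus a mean-pooled classifier head."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.head = nn.Linear(DIM, CLASSES)

    def forward(self, token_ids, soft_prompt):
        tok = self.embed(token_ids)                              # (batch, seq, dim)
        prompt = soft_prompt.expand(token_ids.size(0), -1, -1)   # (batch, prompt_len, dim)
        x = torch.cat([prompt, tok], dim=1)                      # prepend learned prompt vectors
        return self.head(x.mean(dim=1))

model = FrozenToyLM()
for p in model.parameters():
    p.requires_grad_(False)                                      # "pre-trained" weights stay frozen

soft_prompt = nn.Parameter(torch.randn(1, PROMPT_LEN, DIM) * 0.02)  # the only trainable tensor
optimizer = torch.optim.Adam([soft_prompt], lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Toy loop on random data, just to show that gradients flow only into the soft prompt.
for step in range(100):
    tokens = torch.randint(0, VOCAB, (16, SEQ_LEN))
    labels = torch.randint(0, CLASSES, (16,))
    loss = loss_fn(model(tokens, soft_prompt), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

This mirrors the Lester et al. quote above: unlike discrete text prompts, the soft prompt is a set of continuous vectors learned through back-propagation while the model itself is unchanged.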

ssrn.com

papers.ssrn.com

stable-diffusion-art.com

techcrunch.com

  • Wiggers, Kyle (June 12, 2023). "Meta open sources an AI-powered music generator". TechCrunch. Retrieved August 15, 2023. Next, I gave a more complicated prompt to attempt to throw MusicGen for a loop: "Lo-fi slow BPM electro chill with organic samples."

technologyreview.com

thecvf.com

openaccess.thecvf.com

theregister.com

unite.ai

venturebeat.com

vice.com

worldcat.org

search.worldcat.org

wsj.com