Prompt engineering (English Wikipedia)

Analysis of the information sources cited in the references of the English-language Wikipedia article "Prompt engineering".

aclanthology.org (Global: low place; English: low place)

arstechnica.com (Global: 388th place; English: 265th place)

artnews.com (Global: 5,045th place; English: 2,949th place)

arxiv.org (Global: 69th place; English: 59th place)

  • Wahle, Jan Philip; Ruas, Terry; Xu, Yang; Gipp, Bela (2024). "Paraphrase Types Elicit Prompt Engineering Capabilities". In Al-Onaizan, Yaser; Bansal, Mohit; Chen, Yun-Nung (eds.). Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing. Miami, Florida, USA: Association for Computational Linguistics. pp. 11004–11033. arXiv:2406.19898. doi:10.18653/v1/2024.emnlp-main.617.
  • McCann, Bryan; Keskar, Nitish; Xiong, Caiming; Socher, Richard (June 20, 2018). The Natural Language Decathlon: Multitask Learning as Question Answering. ICLR. arXiv:1806.08730.
  • Wei, Jason; Wang, Xuezhi; Schuurmans, Dale; Bosma, Maarten; Ichter, Brian; Xia, Fei; Chi, Ed H.; Le, Quoc V.; Zhou, Denny (October 31, 2022). Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. Advances in Neural Information Processing Systems (NeurIPS 2022). Vol. 35. arXiv:2201.11903.
  • Chen, Zijie; Zhang, Lichao; Weng, Fangsheng; Pan, Lili; Lan, Zhenzhong (June 16, 2024). "Tailored Visions: Enhancing Text-to-Image Generation with Personalized Prompt Rewriting". 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. pp. 7727–7736. arXiv:2310.08129. doi:10.1109/cvpr52733.2024.00738. ISBN 979-8-3503-5300-6.
  • Kojima, Takeshi; Gu, Shixiang Shane; Reid, Machel; Matsuo, Yutaka; Iwasawa, Yusuke (2022). "Large Language Models are Zero-Shot Reasoners". NeurIPS. arXiv:2205.11916.
  • Brown, Tom; Mann, Benjamin; Ryder, Nick; Subbiah, Melanie; Kaplan, Jared D.; Dhariwal, Prafulla; Neelakantan, Arvind (2020). "Language models are few-shot learners". Advances in Neural Information Processing Systems. 33: 1877–1901. arXiv:2005.14165.
  • Wei, Jason; Tay, Yi; Bommasani, Rishi; Raffel, Colin; Zoph, Barret; Borgeaud, Sebastian; Yogatama, Dani; Bosma, Maarten; Zhou, Denny; Metzler, Donald; Chi, Ed H.; Hashimoto, Tatsunori; Vinyals, Oriol; Liang, Percy; Dean, Jeff; Fedus, William (October 2022). "Emergent Abilities of Large Language Models". Transactions on Machine Learning Research. arXiv:2206.07682. In prompting, a pre-trained language model is given a prompt (e.g. a natural language instruction) of a task and completes the response without any further training or gradient updates to its parameters... The ability to perform a task via few-shot prompting is emergent when a model has random performance until a certain scale, after which performance increases to well-above random
  • Caballero, Ethan; Gupta, Kshitij; Rish, Irina; Krueger, David (2023). "Broken Neural Scaling Laws". ICLR. arXiv:2210.14891.
  • Garg, Shivam; Tsipras, Dimitris; Liang, Percy; Valiant, Gregory (2022). "What Can Transformers Learn In-Context? A Case Study of Simple Function Classes". NeurIPS. arXiv:2208.01066. Training a model to perform in-context learning can be viewed as an instance of the more general learning-to-learn or meta-learning paradigm
  • Self-Consistency Improves Chain of Thought Reasoning in Language Models. ICLR. 2023. arXiv:2203.11171.
  • Tree of Thoughts: Deliberate Problem Solving with Large Language Models. NeurIPS. 2023. arXiv:2305.10601.
  • Quantifying Language Models' Sensitivity to Spurious Features in Prompt Design or: How I learned to start worrying about prompt formatting. ICLR. 2024. arXiv:2310.11324.
  • Leidinger, Alina; van Rooij, Robert; Shutova, Ekaterina (2023). Bouamor, Houda; Pino, Juan; Bali, Kalika (eds.). "The language of prompting: What linguistic properties make a prompt successful?". Findings of the Association for Computational Linguistics: EMNLP 2023. Singapore: Association for Computational Linguistics: 9210–9232. arXiv:2311.01967. doi:10.18653/v1/2023.findings-emnlp.618.
  • Linzbach, Stephan; Dimitrov, Dimitar; Kallmeyer, Laura; Evang, Kilian; Jabeen, Hajira; Dietze, Stefan (June 2024). "Dissecting Paraphrases: The Impact of Prompt Syntax and supplementary Information on Knowledge Retrieval from Pretrained Language Models". In Duh, Kevin; Gomez, Helena; Bethard, Steven (eds.). Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers). Mexico City, Mexico: Association for Computational Linguistics. pp. 3645–3655. arXiv:2404.01992. doi:10.18653/v1/2024.naacl-long.201.
  • Efficient multi-prompt evaluation of LLMs. NeurIPS. 2024. arXiv:2405.17202.
  • Sequeda, Juan; Allemang, Dean; Jacob, Bryon (2023). "A Benchmark to Understand the Role of Knowledge Graphs on Large Language Model's Accuracy for Question Answering on Enterprise SQL Databases". Grades-Nda. arXiv:2311.07509.
  • Explaining Patterns in Data with Language Models via Interpretable Autoprompting (PDF). BlackboxNLP Workshop. 2023. arXiv:2210.01848.
  • Large Language Models are Human-Level Prompt Engineers. ICLR. 2023. arXiv:2211.01910.
  • Pryzant, Reid; Iter, Dan; Li, Jerry; Lee, Yin Tat; Zhu, Chenguang; Zeng, Michael (2023). "Automatic Prompt Optimization with "Gradient Descent" and Beam Search". Conference on Empirical Methods in Natural Language Processing: 7957–7968. arXiv:2305.03495. doi:10.18653/v1/2023.emnlp-main.494.
  • Automatic Chain of Thought Prompting in Large Language Models. ICLR. 2023. arXiv:2210.03493.
  • Gal, Rinon; Alaluf, Yuval; Atzmon, Yuval; Patashnik, Or; Bermano, Amit H.; Chechik, Gal; Cohen-Or, Daniel (2023). "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion". ICLR. arXiv:2208.01618. Using only 3-5 images of a user-provided concept, like an object or a style, we learn to represent it through new "words" in the embedding space of a frozen text-to-image model.
  • Lester, Brian; Al-Rfou, Rami; Constant, Noah (2021). "The Power of Scale for Parameter-Efficient Prompt Tuning". Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. pp. 3045–3059. arXiv:2104.08691. doi:10.18653/V1/2021.EMNLP-MAIN.243. S2CID 233296808. In this work, we explore "prompt tuning," a simple yet effective mechanism for learning "soft prompts"...Unlike the discrete text prompts used by GPT-3, soft prompts are learned through back-propagation
  • How Does In-Context Learning Help Prompt Tuning? EACL. 2024. arXiv:2302.11521.
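
The Brown et al. and Wei et al. entries above describe few-shot and chain-of-thought prompting: the task is specified entirely in the prompt text, with no gradient updates to the model. A minimal sketch of how such a prompt is assembled (the worked example is the tennis-ball problem from Wei et al.; the code itself is an illustration, not code from the cited papers):

```python
# Minimal sketch of few-shot prompting with a chain-of-thought exemplar,
# in the style described by Brown et al. (2020) and Wei et al. (2022).

def build_few_shot_prompt(exemplars, question):
    """Concatenate worked examples with the new question; no gradient
    updates are involved -- the task is specified entirely in-context."""
    parts = []
    for q, reasoning, answer in exemplars:
        # The intermediate reasoning step is what makes this "chain of thought".
        parts.append(f"Q: {q}\nA: {reasoning} The answer is {answer}.")
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

exemplars = [
    ("Roger has 5 tennis balls. He buys 2 more cans of 3. How many now?",
     "Roger started with 5 balls. 2 cans of 3 is 6 balls. 5 + 6 = 11.",
     "11"),
]
prompt = build_few_shot_prompt(
    exemplars,
    "A cafeteria had 23 apples. They used 20 and bought 6 more. How many now?")
print(prompt)  # send `prompt` to any text-completion endpoint
```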

doi.org (Global: 2nd place; English: 2nd place)

fastcompany.com (Global: 1,040th place; English: 623rd place)

googleblog.com (Global: 1,272nd place; English: 837th place)

ai.googleblog.com

ibm.com (Global: 1,131st place; English: 850th place)

ieee.org (Global: 652nd place; English: 515th place)

spectrum.ieee.org

  • Genkina, Dina (March 6, 2024). "AI Prompt Engineering is Dead: Long live AI prompt engineering". IEEE Spectrum. Retrieved January 18, 2025.

jmlr.org (Global: low place; English: low place)

kdnuggets.com (Global: low place; English: low place)

microsoft.com (Global: 153rd place; English: 151st place)

midjourney.com (Global: low place; English: low place)

docs.midjourney.com

  • "Prompts". docs.midjourney.com. Retrieved August 14, 2023.

nih.gov (Global: 4th place; English: 4th place)

ncbi.nlm.nih.gov

pubmed.ncbi.nlm.nih.gov

nytimes.com (Global: 7th place; English: 7th place)

openai.com (Global: 1,559th place; English: 1,155th place)

cdn.openai.com

openart.ai (Global: low place; English: low place)

cdn.openart.ai

  • Diab, Mohamad; Herrera, Julian; Chernow, Bob (October 28, 2022). "Stable Diffusion Prompt Book" (PDF). Retrieved August 7, 2023. Prompt engineering is the process of structuring words that can be interpreted and understood by a text-to-image model. Think of it as the language you need to speak in order to tell an AI model what to draw.
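
The quoted definition treats a text-to-image prompt as structured language rather than free text. A minimal sketch of that idea, assembling a prompt from a subject plus style modifiers (the part names are illustrative assumptions, not terminology from the Prompt Book):

```python
# Minimal sketch: composing a structured text-to-image prompt from parts,
# following the idea in the Stable Diffusion Prompt Book quote above.
# The part names (subject, medium, style, details) are illustrative
# assumptions, not terms defined by the cited source.

def compose_prompt(subject, medium=None, style=None, details=()):
    parts = [subject]
    if medium:
        parts.append(medium)
    if style:
        parts.append(style)
    parts.extend(details)
    # Comma-separated keyword phrases are a common convention for
    # text-to-image models such as Stable Diffusion.
    return ", ".join(parts)

print(compose_prompt(
    subject="a lighthouse on a cliff at dusk",
    medium="oil painting",
    style="impressionist",
    details=("dramatic lighting", "high detail"),
))
```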

quantamagazine.org (Global: 6,413th place; English: 4,268th place)

scientificamerican.com (Global: 896th place; English: 674th place)

searchenginejournal.com (Global: low place; English: 8,795th place)

semanticscholar.org (Global: 11th place; English: 8th place)

api.semanticscholar.org

  • Li, Xiang Lisa; Liang, Percy (2021). "Prefix-Tuning: Optimizing Continuous Prompts for Generation". Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). pp. 4582–4597. doi:10.18653/V1/2021.ACL-LONG.353. S2CID 230433941. In this paper, we propose prefix-tuning, a lightweight alternative to fine-tuning... Prefix-tuning draws inspiration from prompting
  • Lester, Brian; Al-Rfou, Rami; Constant, Noah (2021). "The Power of Scale for Parameter-Efficient Prompt Tuning". Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. pp. 3045–3059. arXiv:2104.08691. doi:10.18653/V1/2021.EMNLP-MAIN.243. S2CID 233296808. In this work, we explore "prompt tuning," a simple yet effective mechanism for learning "soft prompts"...Unlike the discrete text prompts used by GPT-3, soft prompts are learned through back-propagation
  • Shin, Taylor; Razeghi, Yasaman; Logan IV, Robert L.; Wallace, Eric; Singh, Sameer (November 2020). "AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts". Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Online: Association for Computational Linguistics. pp. 4222–4235. doi:10.18653/v1/2020.emnlp-main.346. S2CID 226222232.
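
The Li & Liang and Lester et al. entries above both describe learning continuous prompt vectors by back-propagation while the language model itself stays frozen. A minimal PyTorch sketch of that mechanism, assuming access to a frozen model's token embeddings (shapes, names, and the placeholder loss are illustrative assumptions, not the papers' code):

```python
# Minimal sketch of "soft prompt" tuning as described in the Lester et al.
# and Li & Liang quotes above: trainable prompt embeddings are prepended to
# the (frozen) model's token embeddings and learned by back-propagation.
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    def __init__(self, num_prompt_tokens: int, d_model: int):
        super().__init__()
        # The only trainable parameters: one vector per virtual prompt token.
        self.prompt = nn.Parameter(torch.randn(num_prompt_tokens, d_model) * 0.02)

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        # token_embeddings: (batch, seq_len, d_model), from a frozen model.
        batch = token_embeddings.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, token_embeddings], dim=1)

# Usage: only the soft prompt is optimized; the LM gets no parameter updates.
soft_prompt = SoftPrompt(num_prompt_tokens=20, d_model=768)
optimizer = torch.optim.Adam(soft_prompt.parameters(), lr=1e-3)
embeddings = torch.randn(4, 16, 768)   # stand-in for frozen-LM embeddings
extended = soft_prompt(embeddings)     # shape: (4, 36, 768)
loss = extended.pow(2).mean()          # placeholder loss for illustration
loss.backward()
optimizer.step()
```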

ssrn.com (Global: 703rd place; English: 501st place)

papers.ssrn.com

stable-diffusion-art.com (Global: low place; English: low place)

techcrunch.com (Global: 187th place; English: 146th place)

  • Wiggers, Kyle (June 12, 2023). "Meta open sources an AI-powered music generator". TechCrunch. Retrieved August 15, 2023. Next, I gave a more complicated prompt to attempt to throw MusicGen for a loop: "Lo-fi slow BPM electro chill with organic samples."

technologyreview.com (Global: 1,943rd place; English: 1,253rd place)

thecvf.com (Global: low place; English: low place)

openaccess.thecvf.com

theregister.com (Global: 3,700th place; English: 2,360th place)

unite.ai (Global: low place; English: low place)

venturebeat.com (Global: 616th place; English: 430th place)

vice.com (Global: 175th place; English: 137th place)

worldcat.org (Global: 5th place; English: 5th place)

search.worldcat.org

wsj.com (Global: 79th place; English: 65th place)