Prompt engineering (Czech Wikipedia)

Analysis of the information sources cited in the references of the Czech-language version of the Wikipedia article "Prompt engineering".

Table: per-website reference counts ("refs"), global popularity rank, and Czech popularity rank. The website labels for the rank rows were lost in extraction; the surviving (global place, Czech place) rank pairs are: (2nd, 4th), (69th, 196th), (low, low), (low, 5,093rd), (1,185th, 846th), (low, low), (1,272nd, 5,345th), (1st, 1st), (low, low), (low, low).

acm.org

dl.acm.org

aipe.cz

  • Naučte se Promptování: Průvodce komunikací s umělou inteligencí [Learn Prompting: A Guide to Communicating with Artificial Intelligence]. aipe.cz [online]. [cited 2023-06-05]. Archived from the original on 2023-06-05.

arxiv.org

  • WEI, Jason; WANG, Xuezhi; SCHUURMANS, Dale. Chain of Thought Prompting Elicits Reasoning in Large Language Models. openreview.net. Available online. arXiv 2201.11903.
  • LEWIS, Patrick; PEREZ, Ethan; PIKTUS, Aleksandra; PETRONI, Fabio; KARPUKHIN, Vladimir; GOYAL, Naman; KÜTTLER, Heinrich. Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. Advances in Neural Information Processing Systems. Curran Associates, Inc., 2020, pp. 9459–9474. Available online. arXiv 2005.11401.

datacamp.com

  • SELVARAJ, Natassha. What is Retrieval Augmented Generation (RAG)? [online]. Datacamp [cited 2024-01-24]. Available online.
  • THEVAPALAN, Arunn. A Beginner's Guide to The OpenAI API: Hands-On Tutorial and Best Practices [online]. Datacamp [cited 2024-01-24]. Available online.

doi.org

dx.doi.org

  • BROWN, Tom B.; MANN, Benjamin; RYDER, Nick; SUBBIAH, Melanie. Language Models are Few-Shot Learners. arXiv, Advances in Neural Information Processing Systems 33. 2020-05-28. QID: Q95727440. DOI 10.48550/ARXIV.2005.14165. (English)
  • WEI, Jason; WANG, Xuezhi; SCHUURMANS, Dale. Chain of Thought Prompting Elicits Reasoning in Large Language Models. arXiv. 2022-01-28. QID: Q111971110. DOI 10.48550/ARXIV.2201.11903. (English)
  • [s.l.]: [s.n.]. Available online. ISBN 9781450391573. DOI 10.1145/3491102.3501825.
  • LI, Xiang Lisa; LIANG, Percy. Prefix-Tuning: Optimizing Continuous Prompts for Generation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). [s.l.]: [s.n.], 2021-08-01. QID: Q110887424. DOI 10.18653/V1/2021.ACL-LONG.353. pp. 4582–4597. (English)
  • LESTER, Brian; AL-RFOU, Rami; CONSTANT, Noah. The Power of Scale for Parameter-Efficient Prompt Tuning. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. [s.l.]: [s.n.], 2021-11-01. QID: Q110887400. DOI 10.18653/V1/2021.EMNLP-MAIN.243. pp. 3045–3059. (English)
  • YANG, Jingfeng; JIANG, Haoming; YIN, Qingyu. SEQZERO: Few-shot Compositional Semantic Parsing with Sequential Prompts and Zero-shot Models. In: Findings of the Association for Computational Linguistics: NAACL 2022. Stroudsburg, PA, USA: Association for Computational Linguistics, 2022. Available online. DOI 10.18653/v1/2022.findings-naacl.5.

googleblog.com

ai.googleblog.com

kdnuggets.com

neurips.cc

proceedings.neurips.cc

  • LEWIS, Patrick; PEREZ, Ethan; PIKTUS, Aleksandra; PETRONI, Fabio; KARPUKHIN, Vladimir; GOYAL, Naman; KÜTTLER, Heinrich. Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. Advances in Neural Information Processing Systems. Curran Associates, Inc., 2020, pp. 9459–9474. Available online. arXiv 2005.11401.

openreview.net

web.archive.org

  • Naučte se Promptování: Průvodce komunikací s umělou inteligencí [Learn Prompting: A Guide to Communicating with Artificial Intelligence]. aipe.cz [online]. [cited 2023-06-05]. Archived from the original on 2023-06-05.