Ingeniería de instrucciones (Spanish Wikipedia)

Analysis of information sources in references of the Wikipedia article "Ingeniería de instrucciones" in Spanish language version.

refsWebsite
Global rank Spanish rank
69th place
148th place
2nd place
2nd place
616th place
951st place
1,559th place
1,639th place
1,943rd place
2,010th place
7th place
15th place
57th place
3rd place
272nd place
346th place
1,272nd place
2,910th place
low place
low place
low place
low place
low place
low place
low place
low place
175th place
251st place
8,313th place
low place
low place
low place
low place
low place
low place
low place
low place
low place
low place
low place
low place
low place
187th place
254th place
low place
low place
896th place
745th place
low place
low place
low place
low place
786th place
1,006th place
low place
low place
551st place
633rd place
low place
low place
low place
low place
low place
low place
3,700th place
5,130th place
2,503rd place
4,314th place
388th place
684th place
34th place
84th place
low place
low place
149th place
128th place
4th place
4th place
383rd place
859th place
low place
low place
895th place
815th place

aclanthology.org

alignmentforum.org

  • «Mesa-Optimization». Consultado el 17 de mayo de 2023. «Mesa-Optimization is the situation that occurs when a learned model (such as a neural network) is itself an optimizer.» 

arstechnica.com

arxiv.org

  • Wei, Jason; Tay, Yi; Bommasani, Rishi; Raffel, Colin; Zoph, Barret; Borgeaud, Sebastian; Yogatama, Dani; Bosma, Maarten et ál. (2022-08-31). «Emergent Abilities of Large Language Models». arXiv:2206.07682  [cs.CL]. «"In prompting, a pre-trained language model is given a prompt (e.g. a natural language instruction) of a task and completes the response without any further training or gradient updates to its parameters... The ability to perform a task via few-shot prompting is emergent when a model has random performance until a certain scale, after which performance increases to well-above random"». 
  • Caballero, Ethan; Gupta, Kshitij; Rish, Irina; Krueger, David (2022). "Broken Neural Scaling Laws". International Conference on Learning Representations (ICLR), 2023.
  • Wei, Jason; Tay, Yi; Bommasani, Rishi; Raffel, Colin; Zoph, Barret; Borgeaud, Sebastian; Yogatama, Dani; Bosma, Maarten et ál. (2022-08-31). «Emergent Abilities of Large Language Models». arXiv:2206.07682  [cs.CL]. 
  • Wei, Jason; Wang, Xuezhi; Schuurmans, Dale; Bosma, Maarten; Ichter, Brian; Xia, Fei; Chi, Ed H.; Le, Quoc V. et ál. (2022-10-31). «Chain-of-Thought Prompting Elicits Reasoning in Large Language Models». arXiv:2201.11903  [cs.CL]. 
  • «Transformers learn in-context by gradient descent». arXiv:2212.07677. «"Thus we show how trained Transformers become mesa-optimizers i.e. learn models by gradient descent in their forward pass"». 
  • Garg, Shivam; Tsipras, Dimitris; Liang, Percy. «What Can Transformers Learn In-Context? A Case Study of Simple Function Classes». arXiv:2208.01066. «"Training a model to perform in-context learning can be viewed as an instance of the more general learning-to-learn or meta-learning paradigm"». 
  • Kojima, Takeshi; Shixiang Shane Gu; Reid, Machel; Matsuo, Yutaka; Iwasawa, Yusuke (2022). «Large Language Models are Zero-Shot Reasoners». arXiv:2205.11916  [cs.CL]. 
  • Liu, Jiacheng; Liu, Alisa; Lu, Ximing; Welleck, Sean; West, Peter; Le Bras, Ronan; Choi, Yejin; Hajishirzi, Hannaneh (May 2022). «Generated Knowledge Prompting for Commonsense Reasoning». Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (Dublin, Ireland: Association for Computational Linguistics): 3154-3169. arXiv:2110.08387. doi:10.18653/v1/2022.acl-long.225. 
  • Zhou, Denny; Schärli (2022-05-01). «Least-to-Most Prompting Enables Complex Reasoning in Large Language Models». arXiv:2205.10625. 
  • Fu, Yao; Peng, Hao; Sabharwal, Ashish; Clark, Peter; Khot, Tushar (2022-10-01). «Complexity-Based Prompting for Multi-Step Reasoning». arXiv:2210.00720  [cs.CL]. 
  • Madaan, Aman; Tandon, Niket; Gupta, Prakhar; Hallinan, Skyler; Gao, Luyu; Wiegreffe, Sarah; Alon, Uri; Dziri, Nouha et ál. (2023-03-01). «Self-Refine: Iterative Refinement with Self-Feedback». arXiv:2303.17651  [cs.CL]. 
  • Madaan, Aman; Tandon, Niket; Gupta, Prakhar; Hallinan, Skyler; Gao, Luyu; Wiegreffe, Sarah; Alon, Uri; Dziri, Nouha et ál. (2023-03-01). «Self-Refine: Iterative Refinement with Self-Feedback». arXiv:2303.17651  [cs.CL]. 
  • Yao, Shunyu (2023-05-17). «Tree of Thoughts: Deliberate Problem Solving with Large Language Models». arXiv:2305.10601  [cs.CL]. 
  • Long, Jieyi (2023-05-15). «Large Language Model Guided Tree-of-Thought». arXiv:2305.08291  [cs.AI]. 
  • Jung, Jaehun; Qin, Lianhui; Welleck, Sean; Brahman, Faeze; Bhagavatula, Chandra; Le Bras, Ronan; Choi, Yejin (2022). «Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations». arXiv:2205.11822  [cs.CL]. 
  • Li, Zekun; Peng, Baolin; He, Pengcheng; Galley, Michel; Gao, Jianfeng; Yan, Xifeng (2023). «Guiding Large Language Models via Directional Stimulus Prompting». arXiv:2302.11520  [cs.CL]. 
  • OpenAI (2023-03-27). «GPT-4 Technical Report». arXiv:2303.08774  [cs.CL].  [See Figure 8.]
  • Lewis, Patrick; Perez, Ethan; Piktus, Aleksandra; Petroni, Fabio; Karpukhin, Vladimir; Goyal, Naman; Küttler, Heinrich; Lewis, Mike et al. (2020). «Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks». Advances in Neural Information Processing Systems (Curran Associates, Inc.) 33: 9459-9474. arXiv:2005.11401. 
  • Fernando, Chrisantha; Banarse, Dylan; Michalewski, Henryk; Osindero, Simon; Rocktäschel, Tim (2023). Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution. arXiv:2309.16797. 
  • Pryzant, Reid; Iter, Dan; Li, Jerry; Lee, Yin Tat; Zhu, Chenguang; Zeng, Michael (2023). Automatic Prompt Optimization with "Gradient Descent" and Beam Search. arXiv:2305.03495. 
  • Guo, Qingyan; Wang, Rui; Guo, Junliang; Li, Bei; Song, Kaitao; Tan, Xu; Liu, Guoqing; Bian, Jiang et al. (2023). Connecting Large Language Models with Evolutionary Algorithms Yields Powerful Prompt Optimizers. arXiv:2309.08532. 
  • Zhou, Yongchao; Ioan Muresanu, Andrei; Han, Ziwen; Paster, Keiran; Pitis, Silviu; Chan, Harris; Ba, Jimmy (2022-11-01). «Large Language Models Are Human-Level Prompt Engineers». arXiv:2211.01910  [cs.LG]. 
  • Gal, Rinon; Alaluf, Yuval; Atzmon, Yuval; Patashnik, Or; Bermano, Amit H.; Chechik, Gal; Cohen-Or, Daniel (2022). «An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion». arXiv:2208.01618  [cs.CV]. «"Using only 3-5 images of a user-provided concept, like an object or a style, we learn to represent it through new "words" in the embedding space of a frozen text-to-image model."». 
  • Kirillov, Alexander; Mintun, Eric; Ravi, Nikhila; Mao, Hanzi; Rolland, Chloe; Gustafson, Laura; Xiao, Tete; Whitehead, Spencer et ál. (2023-04-01). «Segment Anything». arXiv:2304.02643  [cs.CV]. 
  • Sun, Simeng; Liu, Yang; Iter, Dan. «How Does In-Context Learning Help Prompt Tuning?». arXiv:2302.11521. 
  • Greshake, Kai; Abdelnabi, Sahar; Mishra, Shailesh; Endres, Christoph; Holz, Thorsten; Fritz, Mario (2023-02-01). «Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection». arXiv:2302.12173  [cs.CR]. 
  • Perez, Fábio; Ribeiro, Ian (2022). «Ignore Previous Prompt: Attack Techniques For Language Models». arXiv:2211.09527  [cs.CL]. 
  • Branch, Hezekiah J.; Cefalu, Jonathan Rodriguez (2022). «Evaluating the Susceptibility of Pre-Trained Language Models via Handcrafted Adversarial Examples». arXiv:2209.02128  [cs.CL]. 

claid.ai

cnet.com

computerweekly.com

contractnerds.com

doi.org

dx.doi.org

github.blog

github.com

  • «protectai/rebuff». Protect AI. 13 de septiembre de 2023. Consultado el 13 de septiembre de 2023. 

googleblog.com

ai.googleblog.com

hackaday.com

intef.es

descargas.intef.es

issn.org

portal.issn.org

langchain.dev

blog.langchain.dev

learnprompting.org

linkedin.com

masterofcode.com

medium.com

midjourney.com

docs.midjourney.com

nccgroup.com

research.nccgroup.com

  • Selvi, Jose (5 de diciembre de 2022). «Exploring Prompt Injection Attacks». research.nccgroup.com. «Prompt Injection is a new vulnerability that is affecting some AI/ML models and, in particular, certain types of language models using prompt-based learning». 
  • Selvi, Jose (5 de diciembre de 2022). «Exploring Prompt Injection Attacks». NCC Group Research Blog (en inglés estadounidense). Consultado el 9 de febrero de 2023. 

neurips.cc

proceedings.neurips.cc

nih.gov

ncbi.nlm.nih.gov

nvidia.com

developer.nvidia.com

nytimes.com

openai.com

cdn.openai.com

openai.com

  • OpenAI (30 de noviembre de 2022). «Introducing ChatGPT». OpenAI Blog. Consultado el 16 de agosto de 2023. «what is the fermat's little theorem». 

platform.openai.com

openart.ai

cdn.openart.ai

  • Diab, Mohamad (28 de octubre de 2022). «Stable Diffusion Prompt Book». Consultado el 7 de agosto de 2023. «Prompt engineering is the process of structuring words that can be interpreted and understood by a text-to-image model. Think of it as the language you need to speak in order to tell an AI model what to draw.» 

sciencedirect.com

scientificamerican.com

  • Musser, George. «How AI Knows Things No One Told It». Scientific American. Consultado el 17 de mayo de 2023. «By the time you type a query into ChatGPT, the network should be fixed; unlike humans, it should not continue to learn. So it came as a surprise that LLMs do, in fact, learn from their users' prompts—an ability known as in-context learning.» 

searchenginejournal.com

simonwillison.net

stable-diffusion-art.com

techcrunch.com

  • Wiggers, Kyle (12 de junio de 2023). «Meta open sources an AI-powered music generator». TechCrunch. Consultado el 15 de agosto de 2023. «Next, I gave a more complicated prompt to attempt to throw MusicGen for a loop: "Lo-fi slow BPM electro chill with organic samples."». 

technologyreview.com

theregister.com

venturebeat.com

vice.com

vulcan.io

washingtonpost.com

zapier.com

  • Robinson, Reid (3 de agosto de 2023). «How to write an effective GPT-3 or GPT-4 prompt». Zapier. Consultado el 14 de agosto de 2023. «"Basic prompt: 'Write a poem about leaves falling.' Better prompt: 'Write a poem in the style of Edgar Allan Poe about leaves falling.'». 

zdnet.com