대형 언어 모델 (Large Language Model), Korean Wikipedia

Analysis of the information sources cited in the references of the article "대형 언어 모델" (Large Language Model) on the Korean-language Wikipedia.
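The per-site figures below can be reproduced in spirit through the MediaWiki API. The sketch underneath is a minimal illustration, assuming the analysis simply tallies the domains of the article's external links; the function name `cited_domains` is hypothetical, and the global/Korean popularity ranks would come from a separate traffic-ranking dataset not reproduced here.

    # Minimal sketch (an assumed method, not this tool's actual pipeline):
    # count the domains of external links in a Korean Wikipedia article
    # using the MediaWiki API's prop=extlinks query.
    from collections import Counter
    from urllib.parse import urlparse

    import requests

    API = "https://ko.wikipedia.org/w/api.php"

    def cited_domains(title: str) -> Counter:
        """Tally how often each external domain is linked from the article."""
        domains: Counter = Counter()
        params = {
            "action": "query",
            "prop": "extlinks",
            "titles": title,
            "ellimit": "max",
            "format": "json",
        }
        while True:
            data = requests.get(API, params=params, timeout=30).json()
            for page in data["query"]["pages"].values():
                for link in page.get("extlinks", []):
                    host = urlparse(link["*"]).netloc  # "*" holds the URL
                    if host:
                        # Fold "www." into the bare domain (Python 3.9+).
                        domains[host.removeprefix("www.")] += 1
            if "continue" not in data:  # follow API pagination to the end
                break
            params.update(data["continue"])
        return domains

    if __name__ == "__main__":
        for domain, n in cited_domains("대형 언어 모델").most_common(10):
            print(f"{domain}: {n}")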

Cited websites, each listed below with its global and Korean popularity rank:

aclanthology.org (Global: low place; Korean: low place)

amazon.com (Global: 105th place; Korean: 291st place)

aws.amazon.com

amazon.science (Global: low place; Korean: low place)

analyticsindiamag.com (Global: low place; Korean: 7,747th place)

anthropic.com (Global: low place; Korean: low place)

  • “Product”. Anthropic. Retrieved March 14, 2023.

arxiv.org (Global: 69th place; Korean: 54th place)

  • Goodman, Joshua (August 9, 2001), A Bit of Progress in Language Modeling, arXiv:cs/0108005, Bibcode:2001cs........8005G
  • Rogers, Anna; Kovaleva, Olga; Rumshisky, Anna (2020). “A Primer in BERTology: What We Know About How BERT Works”. Transactions of the Association for Computational Linguistics 8: 842–866. arXiv:2002.12327. doi:10.1162/tacl_a_00349. S2CID 211532403. Archived from the original on April 3, 2022. Retrieved January 21, 2024.
  • Movva, Rajiv; Balachandar, Sidhika; Peng, Kenny; Agostini, Gabriel; Garg, Nikhil; Pierson, Emma (2024). “Topics, Authors, and Institutions in Large Language Model Research: Trends from 17K arXiv Papers”. Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers). pp. 1223–1243. arXiv:2307.10700. doi:10.18653/v1/2024.naacl-long.67. Retrieved December 8, 2024.
  • Peng, Bo et al. (2023). “RWKV: Reinventing RNNs for the Transformer Era”. arXiv:2305.13048 [cs.CL].
  • Gu, Albert; Dao, Tri (December 1, 2023), Mamba: Linear-Time Sequence Modeling with Selective State Spaces, arXiv:2312.00752
  • Devlin, Jacob; Chang, Ming-Wei; Lee, Kenton; Toutanova, Kristina (October 11, 2018). “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding”. arXiv:1810.04805v2 [cs.CL].
  • Gao, Leo; Biderman, Stella; Black, Sid; Golding, Laurence; Hoppe, Travis; Foster, Charles; Phang, Jason; He, Horace; Thite, Anish; Nabeshima, Noa; Presser, Shawn; Leahy, Connor (December 31, 2020). “The Pile: An 800GB Dataset of Diverse Text for Language Modeling”. arXiv:2101.00027 [cs.CL].
  • Smith, Shaden; Patwary, Mostofa; Norick, Brandon; LeGresley, Patrick; Rajbhandari, Samyam; Casper, Jared; Liu, Zhun; Prabhumoye, Shrimai; Zerveas, George; Korthikanti, Vijay; Zhang, Elton; Child, Rewon; Aminabadi, Reza Yazdani; Bernauer, Julie; Song, Xia (February 4, 2022). “Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model”. arXiv:2201.11990.
  • Wang, Shuohuan; Sun, Yu; Xiang, Yang; Wu, Zhihua; Ding, Siyu; Gong, Weibao; Feng, Shikun; Shang, Junyuan; Zhao, Yanbin; Pang, Chao; Liu, Jiaxiang; Chen, Xuyi; Lu, Yuxiang; Liu, Weixin; Wang, Xi; Bai, Yangfan; Chen, Qiuliang; Zhao, Li; Li, Shiyong; Sun, Peng; Yu, Dianhai; Ma, Yanjun; Tian, Hao; Wu, Hua; Wu, Tian; Zeng, Wei; Li, Ge; Gao, Wen; Wang, Haifeng (December 23, 2021). “ERNIE 3.0 Titan: Exploring Larger-scale Knowledge Enhanced Pre-training for Language Understanding and Generation”. arXiv:2112.12731.
  • Askell, Amanda; Bai, Yuntao; Chen, Anna et al. (December 9, 2021). “A General Language Assistant as a Laboratory for Alignment”. arXiv:2112.00861 [cs.CL].
  • Hoffmann, Jordan; Borgeaud, Sebastian; Mensch, Arthur et al. (March 29, 2022). “Training Compute-Optimal Large Language Models”. arXiv:2203.15556 [cs.CL].
  • Zhang, Susan; Roller, Stephen; Goyal, Naman; Artetxe, Mikel; Chen, Moya; Chen, Shuohui; Dewan, Christopher; Diab, Mona; Li, Xian; Lin, Xi Victoria; Mihaylov, Todor; Ott, Myle; Shleifer, Sam; Shuster, Kurt; Simig, Daniel; Koura, Punit Singh; Sridhar, Anjali; Wang, Tianlu; Zettlemoyer, Luke (June 21, 2022). “OPT: Open Pre-trained Transformer Language Models”. arXiv:2205.01068 [cs.CL].
  • Lewkowycz, Aitor; Andreassen, Anders; Dohan, David; Dyer, Ethan; Michalewski, Henryk; Ramasesh, Vinay; Slone, Ambrose; Anil, Cem; Schlag, Imanol; Gutman-Solo, Theo; Wu, Yuhuai; Neyshabur, Behnam; Gur-Ari, Guy; Misra, Vedant (June 30, 2022). “Solving Quantitative Reasoning Problems with Language Models”. arXiv:2206.14858 [cs.CL].
  • Taylor, Ross; Kardas, Marcin; Cucurull, Guillem; Scialom, Thomas; Hartshorn, Anthony; Saravia, Elvis; Poulton, Andrew; Kerkez, Viktor; Stojnic, Robert (November 16, 2022). “Galactica: A Large Language Model for Science”. arXiv:2211.09085 [cs.CL].
  • Soltan, Saleh; Ananthakrishnan, Shankar; FitzGerald, Jack et al. (August 3, 2022). “AlexaTM 20B: Few-Shot Learning Using a Large-Scale Multilingual Seq2Seq Model”. arXiv:2208.01448 [cs.CL].
  • Wu, Shijie; Irsoy, Ozan; Lu, Steven; Dabravolski, Vadim; Dredze, Mark; Gehrmann, Sebastian; Kambadur, Prabhanjan; Rosenberg, David; Mann, Gideon (March 30, 2023). “BloombergGPT: A Large Language Model for Finance”. arXiv:2303.17564.
  • Ren, Xiaozhe; Zhou, Pingyi; Meng, Xinfan; Huang, Xinjing; Wang, Yadao; Wang, Weichao; Li, Pengfei; Zhang, Xiaoda; Podolskiy, Alexander; Arshinov, Grigory; Bout, Andrey; Piontkovskaya, Irina; Wei, Jiansheng; Jiang, Xin; Su, Teng; Liu, Qun; Yao, Jun (March 19, 2023). “PanGu-Σ: Towards Trillion Parameter Language Model with Sparse Heterogeneous Computing”. arXiv:2303.10845.
  • Köpf, Andreas; Kilcher, Yannic; von Rütte, Dimitri; Anagnostidis, Sotiris; Tam, Zhi-Rui; Stevens, Keith; Barhoum, Abdullah; Duc, Nguyen Minh; Stanley, Oliver; Nagyfi, Richárd; ES, Shahul; Suri, Sameer; Glushkov, David; Dantuluri, Arnav; Maguire, Andrew (April 14, 2023). “OpenAssistant Conversations -- Democratizing Large Language Model Alignment”. arXiv:2304.07327 [cs].

cerebras.net (Global: low place; Korean: low place)

cnbc.com (Global: 220th place; Korean: 358th place)

deepmind.com (Global: low place; Korean: 8,293rd place)

doi.org (Global: 2nd place; Korean: 3rd place)

dx.doi.org

euronews.com (Global: 612th place; Korean: 2,608th place)

facebook.com (Global: 77th place; Korean: 194th place)

ai.facebook.com

fastcompanyme.com (Global: low place; Korean: low place)

forefront.ai (Global: low place; Korean: low place)

github.com (Global: 383rd place; Korean: 118th place)

  • “BERT”. March 13, 2023 – via GitHub.
  • “gpt-2”. GitHub. Retrieved March 13, 2023.
  • “GPT Neo”. March 15, 2023 – via GitHub.
  • Khrushchev, Mikhail; Vasilev, Ruslan; Petrov, Alexey; Zinov, Nikolay (June 22, 2022), YaLM 100B, retrieved March 18, 2023

googleblog.com (Global: 1,272nd place; Korean: 983rd place)

ai.googleblog.com

harvard.edu (Global: 18th place; Korean: 27th place)

adsabs.harvard.edu

huggingface.co (Global: low place; Korean: low place)

ieee.org (Global: 652nd place; Korean: 342nd place)

ieeexplore.ieee.org

kdnuggets.com (Global: low place; Korean: low place)

lambdalabs.com (Global: low place; Korean: low place)

microsoft.com (Global: 153rd place; Korean: 82nd place)

mit.edu (Global: 415th place; Korean: 263rd place)

direct.mit.edu

nature.com (Global: 234th place; Korean: 148th place)

naver.com (Global: 46th place; Korean: 2nd place)

terms.naver.com

neurips.cc (Global: low place; Korean: low place)

proceedings.neurips.cc

nvidia.com (Global: 2,503rd place; Korean: 1,329th place)

blogs.nvidia.com

openai.com (Global: 1,559th place; Korean: 711th place)

openai.com

cdn.openai.com

ourworldindata.org (Global: 2,263rd place; Korean: 781st place)

semanticscholar.org (Global: 11th place; Korean: 310th place)

api.semanticscholar.org

techcrunch.com (Global: 187th place; Korean: 102nd place)

technologyreview.com (Global: 1,943rd place; Korean: 1,161st place)

theguardian.com (Global: 12th place; Korean: 65th place)

unite.ai (Global: low place; Korean: low place)

venturebeat.com (Global: 616th place; Korean: 362nd place)

web.archive.org (Global: 1st place; Korean: 1st place)

worldcat.org (Global: 5th place; Korean: 11th place)