BERT (언어 모델) (Korean Wikipedia)

Analysis of the information sources cited in the references of the Korean-language Wikipedia article "BERT (언어 모델)" ("BERT (language model)").

[Table: referenced websites with their global and Korean citation ranks; the website-to-rank pairing was lost in extraction]

aclanthology.org

arxiv.org

  • Devlin, Jacob; Chang, Ming-Wei; Lee, Kenton; Toutanova, Kristina (11 October 2018). “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding”. arXiv:1810.04805v2 [cs.CL].
  • Rogers, Anna; Kovaleva, Olga; Rumshisky, Anna (2020). “A Primer in BERTology: What We Know About How BERT Works”. Transactions of the Association for Computational Linguistics 8: 842–866. arXiv:2002.12327. doi:10.1162/tacl_a_00349. S2CID 211532403.
  • Zhu, Yukun; Kiros, Ryan; Zemel, Rich; Salakhutdinov, Ruslan; Urtasun, Raquel; Torralba, Antonio; Fidler, Sanja (2015). “Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books”. arXiv:1506.06724 [cs.CV].
  • Rajpurkar, Pranav; Zhang, Jian; Lopyrev, Konstantin; Liang, Percy (10 October 2016). “SQuAD: 100,000+ Questions for Machine Comprehension of Text”. arXiv:1606.05250 [cs.CL].
  • Zellers, Rowan; Bisk, Yonatan; Schwartz, Roy; Choi, Yejin (15 August 2018). “SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference”. arXiv:1808.05326 [cs.CL].

doi.org

dx.doi.org

googleblog.com

ai.googleblog.com

semanticscholar.org

api.semanticscholar.org