Моўная мадэль ("Language model") — Belarusian Wikipedia

Analysis of the information sources cited in the references of the Belarusian-language Wikipedia article "Моўная мадэль" ("Language model").

[Table: for each cited website, the number of references and its popularity rank, both globally and on Belarusian Wikipedia (e.g. "1st place", "311th place", or "low place").]

aclweb.org

arxiv.org

doi.org

figshare.com

github.com

gluebenchmark.com

harvard.edu

ui.adsabs.harvard.edu

karpathy.github.io

microsoft.com

nyu-mll.github.io

rajpurkar.github.io

researchgate.net

scholarpedia.org

semanticscholar.org

api.semanticscholar.org

stanford.edu

web.stanford.edu

  • Jurafsky, Dan; Martin, James H. (2021). "N-gram Language Models". Speech and Language Processing (3rd ed.). Archived from the original on 22 May 2022. Retrieved 24 May 2022.

nlp.stanford.edu

uiuc.edu

l2r.cs.uiuc.edu

web.archive.org

  • Jurafsky, Dan; Martin, James H. (2021). "N-gram Language Models". Speech and Language Processing (3rd ed.). Archived from the original on 22 May 2022. Retrieved 24 May 2022.
  • Andreas, Jacob; Vlachos, Andreas; Clark, Stephen (2013). "Semantic parsing as machine translation" (archived 15 August 2020). Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers).
  • Pham, Vu; et al. (2014). "Dropout improves recurrent neural networks for handwriting recognition" (archived 11 November 2020). 14th International Conference on Frontiers in Handwriting Recognition. IEEE.
  • Htut, Phu Mon; Cho, Kyunghyun; Bowman, Samuel R. (2018). "Grammar induction with neural language models: An unusual replication" (archived 14 August 2022). arXiv:1808.10000.
  • The Unreasonable Effectiveness of Recurrent Neural Networks. Archived from the original on 1 November 2020. Retrieved 27 January 2019.
  • Bengio, Yoshua (2008). "Neural net language models". Scholarpedia. Vol. 3. p. 3881. Bibcode:2008SchpJ...3.3881B. doi:10.4249/scholarpedia.3881. Archived from the original on 26 October 2020. Retrieved 28 August 2015.
  • The Corpus of Linguistic Acceptability (CoLA). nyu-mll.github.io. Archived from the original on 7 December 2020. Retrieved 25 February 2019.
  • GLUE Benchmark (in English). gluebenchmark.com. Archived from the original on 4 November 2020. Retrieved 25 February 2019.
  • Microsoft Research Paraphrase Corpus. Microsoft Download Center. Archived from the original on 25 October 2020. Retrieved 25 February 2019.
  • Recognizing Textual Entailment (dead link). Archived from the original on 9 August 2017. Retrieved 24 February 2019.
  • The Stanford Question Answering Dataset. rajpurkar.github.io. Archived from the original on 30 October 2020. Retrieved 25 February 2019.
  • Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank. nlp.stanford.edu. Archived from the original on 27 October 2020. Retrieved 25 February 2019.
  • Hendrycks, Dan (14 March 2023). Measuring Massive Multitask Language Understanding. Archived from the original on 15 March 2023. Retrieved 15 March 2023.