GPT-3 (French Wikipedia)

Analysis of the information sources cited in the references of the French-language version of the Wikipedia article "GPT-3".

Global rank      French rank
187th place      491st place
1,559th place    1,879th place
69th place       232nd place
1,943rd place    3,000th place
114th place      415th place
8,920th place    8,588th place
57th place       4th place
5th place        13th place
2nd place        3rd place
43rd place       132nd place
low place        low place
low place        low place
low place        low place
786th place      1,674th place
low place        low place
low place        low place
low place        low place
low place        1,889th place
low place        1,354th place
466th place      1,248th place

analyticsindiamag.com

  • Ram Sagar, « OpenAI Releases GPT-3, The Largest Model So Far », Analytics India Magazine (read online, accessed )

arr.am

artificiallawyer.com

arxiv.org

  • (en) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever and Dario Amodei, « Language Models are Few-Shot Learners », ArXiv and Advances in Neural Information Processing Systems 33 (ISSN 2331-8422, OCLC 228652809, DOI 10.48550/ARXIV.2005.14165, arXiv 2005.14165, read online):

    « To study the dependence of ML performance on model size, we train 8 different sizes of model, ranging over three orders of magnitude from 125 million parameters to 175 billion parameters, with the last being the model we call GPT-3. »

  • (en) Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan et al., « Language Models are Few-Shot Learners ».
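
    The quotation above gives the span of model sizes trained for the paper; a quick back-of-the-envelope check in Python (variable names are mine, only the two parameter counts come from the quoted sentence) confirms the "three orders of magnitude" figure:

    import math

    # Parameter counts quoted in Brown et al., "Language Models are Few-Shot Learners"
    smallest = 125_000_000        # 125 million parameters (smallest of the 8 models)
    largest = 175_000_000_000     # 175 billion parameters (the model called GPT-3)

    ratio = largest / smallest    # 1400.0
    orders = math.log10(ratio)    # ~3.15, i.e. roughly three orders of magnitude
    print(f"{ratio:.0f}x span, {orders:.2f} orders of magnitude")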

d4mucfpksywv.cloudfront.net

  • « Language Models are Unsupervised Multitask Learners », OpenAI blog (read online, accessed ):

    « "GPT-2, is a 1.5B parameter Transformer" »

developpez.com

intelligence-artificielle.developpez.com

doi.org

dx.doi.org

  • (en) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan et al., « Language Models are Few-Shot Learners », ArXiv and Advances in Neural Information Processing Systems 33 (ISSN 2331-8422, OCLC 228652809, DOI 10.48550/ARXIV.2005.14165, arXiv 2005.14165, read online)

engadget.com

issn.org

portal.issn.org

  • (en) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan et al., « Language Models are Few-Shot Learners », ArXiv and Advances in Neural Information Processing Systems 33 (ISSN 2331-8422, OCLC 228652809, DOI 10.48550/ARXIV.2005.14165, arXiv 2005.14165, read online)

openai.com

openai.com

beta.openai.com

  • (en) « OpenAI API », on beta.openai.com (accessed )
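
    The entry above points to OpenAI's hosted API rather than to downloadable model weights; a minimal sketch of how such a completion request is typically made over HTTP (the endpoint path, model name, prompt, and environment variable are illustrative assumptions, not details taken from the cited page):

    import os
    import requests

    # Hypothetical completion request against OpenAI's public REST API.
    response = requests.post(
        "https://api.openai.com/v1/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"model": "davinci", "prompt": "Bonjour, GPT-3.", "max_tokens": 16},
        timeout=30,
    )
    print(response.json())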

siecledigital.fr

techcrunch.com

technologyreview.com

textcortex.com

  • (en-US) « Democratizing Written Communication - TextCortex Raises $1.2 Million Pre-Seed To Advance Proprietary NLG Capabilities », TextCortex AI (read online, accessed )

the-decoder.com

theverge.com

towardsdatascience.com

  • Frederik Bussler, « Will GPT-3 Kill Coding? », on Towards Data Science (accessed )
  • (en) Frederik Bussler, « Will GPT-3 Kill Coding? », on Medium (accessed )

wikidata.org

  • (en) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan et al., « Language Models are Few-Shot Learners », ArXiv and Advances in Neural Information Processing Systems 33 (ISSN 2331-8422, OCLC 228652809, DOI 10.48550/ARXIV.2005.14165, arXiv 2005.14165, read online)

worldcat.org

  • (en) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan et al., « Language Models are Few-Shot Learners », ArXiv and Advances in Neural Information Processing Systems 33 (ISSN 2331-8422, OCLC 228652809, DOI 10.48550/ARXIV.2005.14165, arXiv 2005.14165, read online)

zdnet.com