GPT-3

Generative Pre-trained Transformer 3 (GPT-3) is an autoregressive language model that uses deep learning to produce human-like text. It is the third-generation language prediction model in the GPT-n series created by OpenAI, a San Francisco-based artificial intelligence research laboratory. GPT-3's full version has a capacity of 175 billion machine learning parameters, over two orders of magnitude more than the 1.5 billion of its predecessor, GPT-2. GPT-3, which was introduced in May 2020 and was in beta testing as of July 2020, is part of a trend in natural language processing (NLP) systems of "pre-trained language representations." Prior to the release of GPT-3, the largest language model was Microsoft's Turing NLG, introduced in February 2020, with roughly one-tenth the capacity of GPT-3.
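
"Autoregressive" here means the model generates text one token at a time, predicting each new token from all the tokens produced so far and appending it to the context before predicting the next. The following is a minimal sketch of that decoding loop; the hand-written bigram table, its weights, and the function name are illustrative inventions standing in for GPT-3's 175-billion-parameter Transformer, not anything from the actual model.

```python
# A minimal sketch of autoregressive generation: sample a token,
# append it to the context, repeat. A toy bigram table stands in
# for the trained Transformer that GPT-3 would use to score tokens.

import random

# Hypothetical "model": maps the previous token to candidate next
# tokens with unnormalized weights. A real model conditions on the
# entire context via a deep network, not just the last token.
BIGRAMS = {
    "<s>":    {"the": 3, "a": 1},
    "the":    {"model": 2, "text": 1},
    "a":      {"model": 1},
    "model":  {"writes": 2, "</s>": 1},
    "text":   {"</s>": 1},
    "writes": {"text": 2, "</s>": 1},
}

def generate(max_tokens: int = 10) -> list[str]:
    tokens = ["<s>"]  # start-of-sequence marker
    for _ in range(max_tokens):
        candidates = BIGRAMS.get(tokens[-1], {"</s>": 1})
        # Sample the next token in proportion to its weight.
        next_token = random.choices(
            list(candidates), weights=list(candidates.values())
        )[0]
        if next_token == "</s>":  # end-of-sequence marker
            break
        tokens.append(next_token)  # the new token joins the context
    return tokens[1:]

print(" ".join(generate()))
```

A real system replaces the table lookup with a forward pass of the trained network to obtain a probability distribution over its vocabulary, but the sample-append-repeat loop is the same.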