GPT-3 and Article Writing

Dan - Sep 15 '20 - Dev Community

OpenAI’s GPT-3

In case you haven't read the news stories about it: a software engineering student at the University of California, Berkeley, set up a blog on Substack under the pen name Adolos. OpenAI has so far made GPT-3 accessible only to a restricted group of developers, and Liam Porr was not one of them, but he was able to ask a Ph.D. student who did have access to run his prompts through GPT-3.

Essentially, Porr supplied a headline and introduction for each post, and GPT-3 generated a full article. He picked the best of several outputs from the model and published it as a blog post with almost no editing.

The first post, titled "Feeling unproductive? Maybe you should stop overthinking", reached the top spot on Hacker News with almost 200 upvotes and more than 70 comments. In a single week the blog received 26,000 views and gained 60 subscribers. According to Porr, only a handful of people suggested that the posts might have been written by an AI.

Porr ended the blog with a confession and some thoughts on how GPT-3 could change the future of writing.

The Guardian’s AI Article

The Guardian followed up by publishing an article written by GPT-3, where the prompt asked it to argue that humans have nothing to fear from AI. It was also fed the following introduction: “I am not a human. I am Artificial Intelligence. Many people think I am a threat to humanity. Stephen Hawking has warned that AI could ‘spell the end of the human race.’ I am here to convince you not to worry. Artificial Intelligence will not destroy humans. Believe me.” GPT-3 produced eight different outputs, which the Guardian edited and spliced into a single essay. The Guardian stated that editing GPT-3's output was no different from editing a human-written article.

What is GPT-3?

Generative Pre-trained Transformer 3 (GPT-3) is a new language model created by OpenAI that is able to generate written text of such quality that it is often difficult to distinguish from text written by a human.

GPT-3 is a deep neural network that uses the attention mechanism to predict the next word in a sentence. It is trained on a corpus of hundreds of billions of words and generates text one token at a time. Architecturally, GPT-3 is a decoder-only transformer: each input token is converted into a vector representation, passed through a stack of attention layers, and the resulting vector is used to produce a probability distribution over all possible next words. GPT-3's full version has 175 billion machine learning parameters, roughly ten times more than the previous largest language model, Microsoft’s Turing NLG.
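To make the “attention mechanism” idea concrete, here is a minimal NumPy sketch of scaled dot-product attention, the core operation inside each transformer layer. The matrix sizes are toy values chosen for illustration; the real model also masks out future positions and uses many attention heads.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each position mixes information from the others,
    weighted by how similar its query is to their keys."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over keys
    return weights @ V                                   # weighted sum of value vectors

# Toy example: 4 token positions, 8-dimensional vectors (sizes are illustrative only)
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                              # token representations
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = scaled_dot_product_attention(x @ Wq, x @ Wk, x @ Wv)
print(out.shape)                                         # (4, 8): one updated vector per token
```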

How it works

The tech world is abuzz about GPT-3's release. Huge language models like GPT-3 are getting larger and larger and are starting to approach human-level ability on some tasks. While they are not yet reliable enough for many businesses to put in front of their customers, these models are showing sparks of cleverness that are sure to accelerate the march of automation and the prospects of intelligent systems. Let's dig into how GPT-3 is trained and how it works.

A trained language model generates text. We can optionally pass it some text as input, which influences its output. The output is generated from what the model “learned” during its training period, when it scanned vast amounts of text.
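GPT-3 itself is only reachable through OpenAI's private API, but the prompt-in, text-out behaviour can be sketched with the publicly available GPT-2 model through the Hugging Face transformers library. The model choice and sampling settings below are assumptions for illustration, not anything from the original experiments.

```python
# A minimal sketch of prompting a trained language model.
# GPT-2 stands in for GPT-3 here, since GPT-3's weights are not public.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Feeling unproductive? Maybe you should stop overthinking."
result = generator(prompt, max_length=60, num_return_sequences=1, do_sample=True)
print(result[0]["generated_text"])
```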

Training is accomplished by exposing the model to a lot of text. GPT-3's training has already been completed: all the experiments you see now come from that one trained model. The training has been estimated to have taken 355 GPU-years and cost $4.6 million.

A dataset of 300 billion tokens of text was used to generate training examples for the model. The model is presented with an example: it is given only the preceding words (the features) and asked to predict the word that follows.
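A minimal sketch of how such training examples can be formed, assuming a simple whitespace tokenizer and a toy sentence: slide over the token stream, treating everything seen so far as the features and the next token as the label.

```python
# Build (context, next-token) training examples from a token stream.
tokens = "the cat sat on the mat".split()

examples = []
for i in range(1, len(tokens)):
    context = tokens[:i]        # features: everything seen so far
    target = tokens[i]          # label: the word the model must predict
    examples.append((context, target))

for context, target in examples:
    print(context, "->", target)
# ['the'] -> cat
# ['the', 'cat'] -> sat
# ...
```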

The model’s prediction will often be wrong. A calculation based on the error in its prediction is performed, and the model is updated so that it makes a better prediction next time. This is repeated over and over again.
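Here is a hedged PyTorch sketch of that loop. The single-embedding-plus-linear “model” and the token ids are toy stand-ins, not GPT-3's actual architecture or data.

```python
import torch
import torch.nn as nn

vocab_size, dim = 50, 16                       # toy sizes, purely illustrative

# A deliberately tiny "language model": embed the last token, predict the next one.
model = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, vocab_size))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

context = torch.tensor([3])                    # id of the previous token
target = torch.tensor([7])                     # id of the correct next token

for step in range(100):                        # "repeat this over and over again"
    logits = model(context)                    # unnormalised scores over the vocabulary
    loss = loss_fn(logits, target)             # how wrong was the prediction?
    optimizer.zero_grad()
    loss.backward()                            # calculation based on the error...
    optimizer.step()                           # ...and the model is updated
```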

Now let’s follow up on these same steps with a closer look at the details.

GPT-3 actually generates output one token at a time (let’s assume a token is a word for now). GPT-3 is very large: it encodes what it learns from training in 175 billion numbers (called parameters). These numbers are used to calculate which token to produce at each step. An untrained model starts with random parameters; training finds values that lead to better predictions. These numbers are part of many matrices inside the model, and prediction is mostly a lot of matrix multiplication.

To shed light on how these parameters are distributed and used, we’ll have to open the model and peer inside. GPT-3 is 2048 tokens wide. That is its “context window”, meaning it has 2048 positions along which tokens are processed.
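A toy sketch of what generating one token at a time within a 2048-token context window looks like; the bigram lookup table is a throwaway stand-in for the real 96-layer network.

```python
# Autoregressive generation: produce one token at a time, always feeding the
# model at most CONTEXT_WINDOW previous tokens.
CONTEXT_WINDOW = 2048

bigram_model = {"the": "cat", "cat": "sat", "sat": "on", "on": "the"}

def predict_next(context):
    # Stand-in for the real model, which would run matrix multiplications
    # over every token in the window.
    return bigram_model.get(context[-1], "the")

tokens = ["the"]                               # the prompt
for _ in range(8):
    window = tokens[-CONTEXT_WINDOW:]          # only the last 2048 tokens are visible
    tokens.append(predict_next(window))

print(" ".join(tokens))                        # the cat sat on the cat sat on the
```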

High-level steps:

  • Convert the word to a vector (list of numbers) representing that word
  • Calculate the prediction
  • Convert the resulting vector back to a word

The important calculations of GPT-3 happen inside its stack of 96 transformer decoder layers. A toy version of these steps is sketched below.
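This is a hedged PyTorch sketch of those three steps. The layer count and vector sizes are toy values (GPT-3 uses 96 masked self-attention decoder blocks and 175 billion parameters), and PyTorch's TransformerEncoderLayer with a causal mask stands in for a GPT-style decoder block.

```python
import torch
import torch.nn as nn

vocab_size, dim, n_layers = 100, 32, 2          # toy sizes; GPT-3 uses 96 layers

embed = nn.Embedding(vocab_size, dim)           # step 1: word id -> vector
# Stand-in for the decoder stack: self-attention blocks with a causal mask.
layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
stack = nn.TransformerEncoder(layer, num_layers=n_layers)
unembed = nn.Linear(dim, vocab_size)            # step 3: vector -> score per word

token_ids = torch.tensor([[5, 17, 42]])         # a short toy context
x = embed(token_ids)                            # (1, 3, dim)

# Causal mask so each position only attends to earlier positions.
seq_len = token_ids.size(1)
mask = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)

x = stack(x, mask=mask)                         # step 2: calculate the prediction
logits = unembed(x[:, -1, :])                   # scores for the *next* token
next_token = logits.argmax(dim=-1)              # pick the most likely word id
print(next_token)
```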

Each of those layers has its own 1.8 billion parameters to perform its calculations. That is where the “magic” happens. It’s impressive that this works as well as it does, and results should improve dramatically once fine-tuning becomes available for GPT-3; the odds are it will be even more impressive. Fine-tuning actually updates the model’s weights to make the model better at a particular task.
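Fine-tuning in that sense can be sketched with the publicly available GPT-2 standing in for GPT-3 (whose weights are not downloadable). The task text and hyperparameters below are placeholders, not a recipe from the original post.

```python
# A minimal fine-tuning sketch: keep training a pretrained model on
# task-specific text so its weights shift toward that task.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

task_text = "Q: Feeling unproductive? A: Stop overthinking and start doing."  # placeholder example
inputs = tokenizer(task_text, return_tensors="pt")

model.train()
outputs = model(**inputs, labels=inputs["input_ids"])  # next-token loss on the task text
outputs.loss.backward()                                # error signal for this task
optimizer.step()                                       # weights nudged toward the task
```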

GPT-3 Key Takeaways

  • GPT-3 shows that language model performance scales as a power-law of model size, dataset size, and the amount of computation (illustrated after this list).
  • GPT-3 demonstrates that a language model trained on enough data can solve NLP tasks it has never encountered. That is, GPT-3 positions the model as a general solution for many downstream tasks without fine-tuning.
  • The cost of AI is increasing exponentially. Training GPT-3 would cost over $4.6M using a Tesla V100 cloud instance.
  • The size of state-of-the-art (SOTA) language models is growing by at least a factor of ten every year. This outpaces the growth of GPU memory. For NLP, the days of "embarrassingly parallel" training are coming to an end; model parallelization will become indispensable.
  • Although there is a clear performance gain from increasing model capacity, it is not clear what is really happening under the hood. In particular, it remains an open question whether the model has learned to do reasoning, or simply memorizes training examples in a more intelligent way.
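To illustrate the first takeaway, the power-law relationship can be written roughly as L(N) ≈ (N_c / N)^α for non-embedding parameter count N. The constants below are approximate figures from OpenAI's "Scaling Laws for Neural Language Models" paper and are an assumption here, used only to show the shape of the curve.

```python
# Approximate power-law for validation loss vs. parameter count N:
#   L(N) ≈ (N_c / N) ** alpha_N
# Constants are approximate figures from the scaling-laws paper (assumption here).
N_C = 8.8e13
ALPHA_N = 0.076

def predicted_loss(n_params):
    return (N_C / n_params) ** ALPHA_N

for n in (1.5e9, 17e9, 175e9):   # rough parameter counts: GPT-2, Turing NLG, GPT-3
    print(f"{n:.1e} params -> loss ≈ {predicted_loss(n):.2f}")
```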


