From the course: Next Generation AI: An Intro to GPT-3

Uses and examples of GPT-3

- GPT-3 was released for experimentation to a limited audience in 2020. So far we've seen a lot of remarkable demonstrations and even a few full solutions. The degree to which GPT-3 will be incorporated fully into production solutions, particularly by Microsoft, which now holds substantive rights over it, remains to be seen. However, many early demonstrations of the technology are a wonderful tease of its possibilities. Let's take a look at a few.

I'll start with GPT-3 running as a function within Google Sheets. In this example, some cells include the data for the populations of four US states. Michigan is then added without its population. The GPT-3 function is then entered in the cell where the population result will reside. The parameters for the function are the existing states and populations in the spreadsheet. GPT-3 processes the information, determines from the pattern that the missing information is the population of Michigan, and populates it. We can see, by changing the state name and even adding additional state information, that GPT-3 is smart enough to pull in the missing information. You'll recall that this is known as few-shot learning.

In the next, more advanced example, GPT-3 processes some text and creates computer code based on what has been written. In other words, what you see here is someone simply typing in a description; GPT-3 generates the appropriate code on the left, and the output of that code appears on the right. This example is so remarkable that it raises a question: could it make programming software as easy as describing what you want, without any knowledge of computer code at all? It's a tantalizing prospect (laughs).

Next, let's take a look at how GPT-3 answers some questions. GPT-3 hasn't yet passed the Turing test, the gold standard for AI in which a human judge tries to determine whether answers come from another human or from software, but it comes close.
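Before moving on, it's worth seeing what the few-shot pattern from the spreadsheet demo looks like as an actual prompt. The sketch below is illustrative only: the helper name and the rounded population figures are my assumptions, not the Google Sheets add-on's real implementation, which assembles its prompt internally.

```python
# Sketch of the few-shot prompt behind the spreadsheet demo.
# Helper name and population figures are illustrative assumptions.

def build_few_shot_prompt(examples, query):
    """Turn known (state, population) pairs plus one unanswered
    state into a single text-completion prompt."""
    lines = [f"{state}: {population}" for state, population in examples]
    lines.append(f"{query}:")  # the cell GPT-3 is asked to fill in
    return "\n".join(lines)

examples = [
    ("California", "39,500,000"),
    ("Texas", "29,100,000"),
    ("Florida", "21,500,000"),
    ("New York", "20,200,000"),
]
prompt = build_few_shot_prompt(examples, "Michigan")
print(prompt)
# The prompt would then be sent to a completions endpoint, and the
# model continues the pattern by emitting Michigan's population.
```

Nothing about the prompt says "look up a population"; the model infers the task purely from the pattern of the examples, which is exactly what few-shot learning means.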
Under rigorous testing, it becomes clear that GPT-3 still makes too many basic mistakes. Here you see a set of questions about animals, and it answers them correctly. In the last question, it is asked whether any animals have three legs. The answer is no, but the follow-up is interesting. It's asked why no animals have three legs, and the answer given is that the animal would fall over. Let's be clear: GPT-3 didn't logically deduce that last answer, but the ability to find exactly the right answer from its training data is an admirable accomplishment for software.

This next example is particularly impressive for me as a professor and instructor. One of the things I often have to do is create quizzes for my courses. In this GPT-3 demo, all that is needed are the questions. A student then enters answers, and GPT-3 checks them. It processes all the input and determines which answers are correct and incorrect. This is really powerful because the student is free to express the answer in their own words. The software does not impose any constraints on how to answer; GPT-3 can handle the unstructured nature of the responses.

In this next example, we can generate human faces of people who don't exist, simply by typing in a description. Watch as some basic details are entered, then Generate is clicked. GPT-3 takes a moment to process the request and create photorealistic faces. We do it for a woman and then a man, and the results are quite remarkable. If this particular example intrigues you, I recommend taking a look at a website called "This Person Does Not Exist"; here's the URL. It's not built with GPT-3, but it's another form of AI based on machine learning and neural networks.

Lots of other fascinating demos exist. Some create legal language for contracts from plain English, or vice versa: they take legal language and convert it into something we can all understand.
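The quiz-grading demo, like the spreadsheet one, ultimately comes down to prompt construction: the question, a reference answer, and the student's free-form answer are packed into a single prompt, and the model is asked for a verdict. A minimal sketch follows; the function name, wording, and example quiz item are hypothetical, not the demo's actual implementation.

```python
# Hypothetical sketch of a GPT-3 quiz-grading prompt. The helper
# name, prompt wording, and quiz item are assumptions for illustration.

def build_grading_prompt(question, reference_answer, student_answer):
    """Ask the model to judge a free-form answer against a reference."""
    return (
        f"Question: {question}\n"
        f"Correct answer: {reference_answer}\n"
        f"Student answer: {student_answer}\n"
        "Is the student answer correct? Reply yes or no:"
    )

prompt = build_grading_prompt(
    "How many legs does a spider have?",
    "Eight",
    "A spider has 8 legs.",
)
print(prompt)
# The model's completion ("yes" or "no") becomes the grade; because
# the model compares meaning, "Eight" and "8 legs" can still match.
```

This is why the student's phrasing doesn't need to match the reference answer word for word: the model, not a string comparison, decides whether the two answers mean the same thing.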
Other demos show how a math problem entered in plain English can be converted to an equation, which is then solved. What GPT-3 does best is generating text. There are numerous examples of GPT-3 generating complete emails from just a few text prompts. Poems have been written, essays have been composed, and science papers and news articles have been generated. Sure, there are plenty of errors and clues that the content is not human, but the effort is pretty good. Try some GPT-3 applications yourself. Here's a resume builder. I include more in the final video in this course.

Finally, while I've focused on GPT-3's incredible ability to generate meaningful text in these examples, let's look at a unique implementation of the technology called DALL-E. The name is a portmanteau of the artist Salvador Dalí and Pixar's WALL-E. Instead of generating text, DALL-E takes text and generates images. For example, in a popular demonstration of its capabilities, the sentence "an armchair in the shape of an avocado" generated a collection of convincing images of chairs in the shape of an avocado. Let's recognize the breakthrough here: these images aren't pre-existing and stored in a database; they are original images generated on the fly. For example, "a snail made of a harp" generates several images that are clearly unique. Since GPT-3 hasn't been trained specifically to create snails made of harps, it's achieving this outcome through zero-shot learning. I could imagine all manner of designers, for example, loving DALL-E. However, my guess is that this capability will be far-reaching as it develops. You can experiment with DALL-E on the OpenAI website.

GPT-3 hints at the future of AI in general. If you're feeling almost a little uncomfortable, that's understandable. Today, AI augments human activities increasingly well, but how long will it be until it overtakes us? And what might that mean for our future?