💫 Summary
This tutorial demonstrates how to build a ChatGPT client in Python: a custom AI chat app that runs from the terminal using the ChatGPT API. The video covers the benefits of the OpenAI API, how to use the GPT-3.5 Turbo model, and how to modify prompts to add functionality and personality to the app.
✨ Highlights
This section introduces the benefits of using the ChatGPT API in Python.
00:00
Explains that the ChatGPT API allows for building products or services that use AI.
Mentions the ability to modify prompts for additional functionality or personality.
Highlights the power, affordability, and accessibility of the ChatGPT API.
To use the ChatGPT API in Python, sign up and create a new secret key, then refer to the documentation for instructions on how to format and send messages to the API.
03:40
Sign up and create a new secret key in the profile section.
Copy the key as it is needed for the Python app to communicate with OpenAI.
Open the documentation and scroll down to find the chat completion section.
Import the OpenAI client, specify the model, and send a series of messages in JSON format.
Messages can have different roles like "system", "user", and "assistant" to guide the AI's responses.
Including sample user and assistant responses can help the AI understand how to react.
Establish context for a conversation by including previous exchanges in the message.
The final user message is the prompt the AI actually responds to.
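A minimal sketch of that message format, using the pre-1.0 `openai` Python client (the 0.27-era library the video relies on); the sample exchange and key placeholder are illustrative, not taken from the video:

```python
import openai

openai.api_key = "YOUR_SECRET_KEY"  # placeholder; use the key from your OpenAI profile

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # "system" steers the assistant's personality and response style
        {"role": "system", "content": "You are a helpful assistant."},
        # optional sample exchange showing the assistant how it is expected to reply
        {"role": "user", "content": "How do I reverse a list in Python?"},
        {"role": "assistant", "content": "Use my_list[::-1] or my_list.reverse()."},
        # the final user message is the prompt the assistant actually answers
        {"role": "user", "content": "And how do I sort it?"},
    ],
)
print(response.choices[0].message.content)
```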
This section covers printing a message, importing the OpenAI library, and using the argparse library in Python.
07:17
The speaker demonstrates how to print a "Hello World" message.
The importance of importing the OpenAI library and installing the latest version is mentioned.
The speaker explains how to use the argparse library to handle command line arguments.
An example of adding an argument called "prompt" is shown.
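A short sketch of the argparse setup summarized above; the `prompt` argument name matches the video, while the final print is only for demonstration:

```python
import argparse

parser = argparse.ArgumentParser()
# nargs="+" gathers every word after the script name into a list,
# so the prompt does not need to be wrapped in quotes
parser.add_argument("prompt", nargs="+", type=str)

args = parser.parse_args()
prompt = " ".join(args.prompt)  # join the word list back into one sentence
print(prompt)
```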
The video demonstrates how to use the ChatGPT API in Python to create a chat completion.
10:58
The code for making the request is copied and pasted.
The model "gpt-3.5-turbo" is used for the chat completion.
The response from the API can be extracted using `response.choices[0].message.content`.
The response is stored in a variable and can be printed or returned.
The `ask_gpt` function is completed and used in the program.
The program receives a response from ChatGPT and it is displayed in a different color for better visibility.
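A sketch of what the completed `ask_gpt` helper might look like at this point, again with the 0.27-era client; the exact colour handling and variable names are approximations:

```python
import openai

def ask_gpt(prompt):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    # the assistant's reply sits in choices[0].message.content
    content = response.choices[0].message.content.strip()
    # wrap the reply in ANSI codes so it prints in green for visibility
    print(f"\n\033[92m{content}\033[0m")
    return content
```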
The speaker explains how to modify the chat application to fix two problems: making it understand the context of the chat and running in a loop until the user decides to exit.
14:35
The input to the API is a list of messages.
An empty list called "chat_history" is created.
The chat_history is unpacked and added to the messages list.
All user interactions are stored in the chat history.
Ideally the chat history would be capped to limit token use, but the tutorial leaves it unbounded for simplicity.
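A rough sketch of how the chat history can be fed back into each request; appending after the call avoids duplicating the new prompt in the unpacked history, and the structure follows the video's description rather than its exact code:

```python
import openai

chat_history = []  # list of {"role": ..., "content": ...} dicts, one per turn

def ask_gpt(prompt, chat_history):
    user_prompt = {"role": "user", "content": prompt}
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        # * unpacks every stored exchange ahead of the new user prompt
        messages=[*chat_history, user_prompt],
    )
    content = response.choices[0].message.content.strip()
    # remember both sides of the exchange so follow-up questions have context
    chat_history.append(user_prompt)
    chat_history.append({"role": "assistant", "content": content})
    return content
```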
The video demonstrates how to use the ChatGPT API in Python to build a custom AI chat app.
18:14
The app can respond with code examples and instructions based on user input.
The app can remember previous context and provide relevant information.
Pressing Enter without typing anything will end the session.
The system message can be customized to control the style and content of the responses.
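One way the system message can be slotted into the request, using the concise-answers wording quoted in the video; `build_messages` is a hypothetical helper name, not one used in the tutorial:

```python
# swap this string to change the assistant's style or personality
SYSTEM_MESSAGE = (
    "Answer the user with short, concise answers and code snippets. "
    "Keep it to the point and don't go off topic."
)

def build_messages(prompt, chat_history, system_message=SYSTEM_MESSAGE):
    # the system message always goes first, ahead of the stored history
    return [
        {"role": "system", "content": system_message},
        *chat_history,
        {"role": "user", "content": prompt},
    ]
```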
The video discusses how to run a Python script from anywhere in the terminal.
21:50
Different shells (bash, zsh, fish) can be used.
The location of the Python binary and the script file needs to be known.
The command "which python" can be used in the terminal to find the location of the Python executable.
Combining the full path to the Python executable with the full path to the script makes it runnable from any directory; an alias makes that convenient.
📊 Transcript
00:00If you want to get started building products or services
00:03that use AI, then this video is for you.
00:05Today, I want to share with you how to build a ChatGPT client
00:09in Python that we can run directly from the terminal.
00:12In doing so, you will learn how to use the ChatGPT
00:16API directly from Python, and how to modify
00:18your prompts so that you can add additional
00:21functionality or personality to your app.
00:24With the release of ChatGPT's new API, the business
00:27use cases of generative AI are booming right now.
00:30Just check out the Google search volume for any AI-related topic.
00:33I think there's going to be a lot of opportunities to
00:36create things that weren't possible up until now.
00:39So, if you're excited about AI and want to learn
00:42how to use it, then let's get started.
00:44Before we get started coding, let's take a look
00:49at some of the benefits of using an OpenAI API.
00:53First of all, the tech is extremely powerful, but it's also really cheap to use.
00:57There's also many ways to use it.
00:59And I think that these three things create the
01:02perfect storm for innovation and creativity when
01:05it comes to using generative AI, both for business
01:07and for just fun projects in general.
01:09In this tutorial specifically, we're going to
01:12learn how to use the GPT-3.5 Turbo model.
01:15This model was released recently, and it is the
01:18same model that ChatGPT uses behind the scenes.
01:20But I think the best thing about it is that it's also extremely cheap.
01:25It only costs $0.002 per 1000 tokens.
01:28And you can think of tokens like words, so you
01:31can almost process an entire essay, and
01:33it won't even cost you 10 cents to do that.
01:35And aside from being extremely cheap, I think that
01:38the API is also extremely accessible because anybody
01:41can go ahead and sign up to use it right now.
01:43And there's no more waiting or a screening process that you need to do.
01:47You could sign up right now at the website to get an API key and start using it.
01:51Overall, I think that this is going to be a great thing
01:54to learn whether you want to build your own original
01:57AI products or pick up a new skill for your resume.
01:59And this entire project is going to take you less than half an hour to code.
02:04To follow along with this project, you're going
02:06to need to know the basics of Python
02:08and a little bit about how REST APIs work.
02:10We'll also be using the OpenAI Python library, so you'll need
02:14to know how to use a package manager like pip as well.
02:18And this is the command you need to run with
02:20pip to install the OpenAI library in Python.
02:23Now let's go over what we will be building in this tutorial.
02:26Our project is going to be contained within a single Python file.
02:30We'll be able to run this script directly from
02:33the terminal, especially if we create an
02:35alias for it, and then we can use it to ask
02:37GPT questions directly from our terminal.
02:40For example, I could ask it, "what are five
02:42books to get better at coding?" And the program
02:45will talk to GPT directly from the
02:47terminal and then give us the response here.
02:49And it will maintain this chat context.
02:52So we can ask it follow up questions as well, like,
02:54"where can I buy them?" And here it will answer
02:57with where it thinks we can buy these books.
02:59So here it says "you can buy these books online
03:02from various retailers, such as Amazon, etc,
03:04etc." And to end the conversation, we just press
03:06Enter without typing anything in the prompt.
03:09I'll also show you how to modify the program so that you can
03:12get the AI to respond to you with a specific personality.
03:15In this case, I prompted it to respond sarcastically
03:19and berate the user, but still answer them anyway.
03:22And if I ask the AI the same question, you could
03:25see that it's added these sassy remarks to
03:28the prompt, but it still answers me anyways.
03:31To do all of this stuff, you're going to need an API key from OpenAI.
03:36So head over to "platform.openai.com" and sign up for an account.
03:40Once you've signed up, go over to your profile and then click
03:44View API Keys, and then click "Create a New Secret Key".
03:47I already have the maximum number of API keys, so I can't create anymore.
03:51But here you should see a pop up with a big piece of text.
03:54So copy that because that's the only time you're ever going to see it.
03:58And that key is what you eventually need for your Python app
04:01to be able to talk to OpenAI and use this ChatGPT endpoint.
04:05Now before I start using an API, I usually like to have the documentation open.
04:10So I recommend you do that as well.
04:12And to do that, just click here on the documentation,
04:14and then scroll down and look for chat
04:16completion, which is the one we'll be using.
04:18And here you'll see a bunch of text explaining what it's
04:21about, an introduction and a couple of code examples.
04:24And this is pretty straightforward.
04:26So once we're in Python, we can basically just
04:29import the OpenAI client, specify the model,
04:31which is going to be this ChatGPT turbo.
04:33And then we can send it a series of messages.
04:37And the messages are JSON formatted, and they have a role and a content.
04:41And there's three different roles we can include in our messages.
04:45So here we could use the "system" role to tell the
04:48AI what type of personality it should have,
04:51or how it should respond to the user messages.
04:54And if we want to give the AI examples of how to react, we can
04:58also include a couple of sample user and assistant responses.
05:02These won't be used in the prompt, but it will help the AI
05:06understand how it is expected to respond to user messages.
05:09We can also use this to establish context for a conversation.
05:13So if we have a conversation with 10 exchanges between
05:16the user and the AI ready, we can basically put
05:19that whole list into this message so that the AI knows
05:23how to answer follow up questions of the user.
05:26Finally, the prompt that the AI will pay attention
05:29to the most is this last user content prompt here.
05:32And that is essentially what the AI will respond to.
05:35And if we scroll down, you can see the expected response format.
05:39So we're going to get a message object with the
05:42role of assistant and the content of the AI.
05:45So this text is what we're really looking for here in our response.
05:49The response will also include usage information about this particular request.
05:53So it'll tell us how many tokens we used to generate this output.
05:58And this is important to you because the tokens is
06:01how the pricing is calculated when you use OpenAI.
06:04And if you go to your user profile, and then
06:06you click pricing, you'll be able to see
06:09the pricing page for each of the models.
06:11So you can see here the chat model will charge you $0.002 for every 1000 tokens.
06:17To put into perspective how cheap this is, if you
06:20get this ChatGPT to write you an entire 10,000 word
06:24essay, it's only going to cost you two cents.
06:27I think we've now spent enough time reading the documentation.
06:31Now let's get to coding.
06:33I'm in my editor, VSCode, and I'm in a new project directory
06:36called "GPT shell" and it's completely empty.
06:39So let's first start by creating our Python file.
06:42And I'm going to call this "gpt.py", you can call this whatever you want.
06:46Here, I'm going to create the main function
06:49that I'm going to use to run the program
06:51and then run it when this file is executed.
06:53This is how I like to start my projects.
06:55So just an empty function called "main", I'm just going to "pass".
06:59And then I'm going to write "if name equals main run main".
07:03So this will get the program to run when I execute it.
07:06Because ultimately, I want to run this as a script in my terminal.
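A sketch of the starting skeleton described here, assuming the file is named gpt.py as in the video:

```python
def main():
    pass  # the program will grow from here


if __name__ == "__main__":
    main()  # only runs when the file is executed as a script
```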
07:09And let's go to this terminal, and see if we can run the file.
07:14So I'm just going to type Python, then "gpt.py".
07:17And so it runs, no errors, but nothing shows up yet.
07:21Okay, that's fine. So now let's just print a "Hello World" message.
07:27And if I go back and run that again, the message shows up.
07:30So that's good.
07:31And now let's import the OpenAI library because
07:34that's critical to what we want to build.
07:37And if you don't have this, then don't forget to
07:40run pip install OpenAI in your environment.
07:43And you might need to upgrade it if you don't
07:45have the latest version because the GPT client
07:48is only in the latest version. I believe
07:50that's going to be version 0.27 and up.
07:52If you've installed OpenAI with pip, and it's still complaining
07:57in your VSCode, then just click at the bottom here.
08:00And it will show all the Python environments on your computer.
08:03So just make sure that you're using the right
08:06one that you've installed OpenAI into.
08:07Now I want my program to be able to take a list of arguments from the terminal.
08:12So if I open my program in the terminal, I want to be able to
08:16type GPT and then send a message like Hello World like this.
08:19And this doesn't do anything because we don't recognize this input yet.
08:23But I wanted to recognize that input.
08:25I'm going to use the "argparse" library in Python,
08:28which lets us deal with arguments passed
08:30into the command line if we use it like this.
08:32So let's go ahead and import that as well.
08:34And "argparse" is a default library in Python.
08:37So you don't have to install anything, it should just come by default.
08:40To use argparse, we first have to create "parser",
08:43and then we can start adding arguments to it.
08:45So I want to add an argument called a prompt,
08:48which is going to be the user's input.
08:50And I want the prompt to be able to have multiple words.
08:53So I'm going to do "nargs=+", I'm going to say the type is going to be a string.
08:58And this "nargs=+" will make it so that if we don't put
09:02quotes around our prompt, it's still able to understand
09:06that each one of those is part of that same prompt.
09:10So this is going to become a list.
09:12First, we'll have to parse the arguments, and
09:15then we access it by doing args.prompt here.
09:17But since prompt is going to be a list, we're going to have to join it on a space.
09:22So that it turns it into one string, and then we can print out the entire prompt.
09:26And I can also print out the original prompt, as received
09:30by the argument parser, just so you can see how it looks.
09:33Let's go back to the terminal and try it again, run the same command.
09:39And you can see here, because I've put it into quotes,
09:42this is seen as one argument by our argparse parser.
09:45So it comes as one element in the list.
09:48But now I can also do something like this,
09:51"python gpt", but then without the quotes.
09:53So if I do Hello World, it will still understand
09:56that these are two separate arguments, but
09:59it turns them into one sentence for our AI.
10:01So now we have the prompt, let's send it over to ChatGPT.
10:05And to do that, I'm going to create a separate
10:07function just so I can keep my code clean.
10:10And I'm going to create this function called "ask_gpt".
10:13And it's going to accept the prompt as an input.
10:16First, we'll need to set up the OpenAI client
10:19with the API key that we've created earlier.
10:22So here, you could set it into your environment variable.
10:25If you know how to do that, you could do with
10:27the dot file if you know how to do that.
10:29Or you could just set it here directly as a string as well.
10:32So whatever your key is, just enter it here.
10:34But I've already set it as my environment variable.
10:37So this is what I'm going to use.
10:38If you do it like me, you're going to have to also import
10:42OS so that you can get this environment variable.
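A sketch of the key setup being described; OPENAI_API_KEY is a common variable name and an assumption here, since the video doesn't show which name the author used:

```python
import os

import openai

# read the secret key from an environment variable (or paste the string directly)
openai.api_key = os.environ.get("OPENAI_API_KEY")
```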
10:44And once the key is set up for our client, let's go back to the
10:47documentation and copy the code example that they showed us.
10:51So now I'm back on the OpenAI documentation for chat completion.
10:55We've already imported OpenAI, and we've already set up our key.
10:58So let's just copy the actual code making this request.
11:01And I'm just going to paste it down here.
11:04So let's take a look at this.
11:06We use the SDK to create a chat completion.
11:09And we use the model "gpt-3.5-turbo". And then the message is this list of things.
11:16But let's get rid of that.
11:18Let's just put in the message that we prompted from our terminal.
11:21So let's just put that here.
11:23So we have this, well, how do we get a response?
11:26I think the documentation has an example for that as well.
11:29So if you scroll down, it says "the Python assistant's reply
11:33can be extracted with response.choices[0].message.content".
11:37So let's just copy that and put it into our app.
11:40First, we'll actually have to store the response
11:44in a variable. And now we can do that.
11:47I don't know, I don't think it's happy with these quotes when I copy pasted it.
11:50So you got to watch out for that.
11:51Sometimes when you paste stuff from websites, it's
11:54not using the right characters for certain things.
11:56So I'm just going to replace that with double quotes
11:59that my Python interpreter understands.
12:01And now I have the message, I guess I'll call that content.
12:04But this is going to be a string.
12:06And I can print this out or I can return it.
12:08I'm going to do both.
12:09I'm going to print it out and return it.
12:11So now my ask_gpt function is complete. Let's go ahead and use it here.
12:16Let's go back to our terminal and run the program again.
12:19So this time we have a response from ChatGPT.
12:22We said Hello world to it.
12:23And it says "Hello there, how can I assist you today?" To
12:27make this a little bit nicer, I'm going to do two things.
12:30I'm going to change the response text from ChatGPT to
12:33a different color, maybe green, so it stands out.
12:36And I'm also going to reprint the question that we're asking it.
12:39And I'm going to delete this because I don't need it anymore.
12:42And I will make this a format string.
12:45And I'm going to put "Q" for a question.
12:48Now to make ChatGPT response appear in a different color, I can
12:52add special characters to it that my terminal understands.
12:55And it's going to render those text as different color.
12:58So let's do that.
12:59And I'm using GitHub CoPilot.
13:01So I don't even need to remember how to do this, I just have
13:04to type in a comment and it will auto suggest it for me.
13:07So this is how we can do it.
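The kind of ANSI escape codes being described; 92 is the standard bright-green code most terminals understand, though the exact snippet Copilot suggested isn't spelled out in the video:

```python
GREEN = "\033[92m"  # ANSI escape: switch the terminal text colour to bright green
RESET = "\033[0m"   # ANSI escape: restore the default colour

print(f"{GREEN}Hello from ChatGPT{RESET}")
```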
13:09And if I run it again, I can see that my question is now showing at the top.
13:14And the response is showing in this green color.
13:17I can also strip this white space so it doesn't have like a double space here.
13:23So to do that, I think that the response is usually
13:26returned with a couple of extra new lines.
13:29Because this is a string, I can actually call strip on it to
13:33get rid of the trailing white spaces before and after it.
13:37But I still might want at least one new line. So
13:41I'll just add that at the top here as well.
13:44Now let's try to ask it a more complex question.
13:47So here I'm going to say,
13:51"How do I sort a list in Python?" Okay, so this
13:54works, it gives me a really long response. It
13:57says to sort a list in Python, you could use
14:01the "sorted()" function or the "sort()" method.
14:04And then it shows me code examples for each of those things.
14:07And now I have two problems with this.
14:09First is I can't ask it follow up questions, because
14:13when I finish typing, the program exits.
14:16So if I ask it something now, it doesn't have
14:19the context of this previous conversation.
14:21For example, I want to ask it "which method do
14:24you recommend?" And here it doesn't really
14:26understand because it's lost this context.
14:29The second problem I have is that this response is
14:32a bit too long for me, I think that if it just provides
14:35a code snippet, or maybe just a one liner
14:38explaining how to do it, that's usually enough.
14:40So I want to tell it to just keep its responses shorter and more concise.
14:45So how can we modify our application to fix those two problems?
14:49Let's break it down one problem at a time.
14:52First, let's make it understand the context of the chat.
14:56So if you look here, the input to the API is actually
15:00a list of messages, all we would have to do
15:03is just store a certain number of messages from
15:06the user and send that entire list back to GPT.
15:09But we'll also need to make it so that when we
15:12ask GPT something, we don't terminate the program
15:15right away, we run in a loop and keep prompting
15:17them for input until they decide to exit.
15:19To do that, let's first create an empty list called "chat_history".
15:24And I'm also going to make this ask GPT, except with chat_history as an argument.
15:31Now, I'm going to assume that this chat_history
15:35is going to be a list of objects like these.
15:38So it's not just going to be the string itself, because
15:41I also need the role and the content in this structure.
15:44So if it's a list of things like this, I should
15:47be able to just expand it, or spread it directly
15:51in the messages using this asterisk symbol.
15:54And this will kind of unpack all of the items of chat history
15:57and just put it into this messages list instead.
16:00When we run this for the first time, chat history is going
16:04to be empty, and this expansion will do nothing.
16:06So our first input is always going to look the same like this.
16:10Now, as we use this application, we want to store
16:13all of the things we do into this chat history.
16:15Ideally, we probably want to put a limit on it as well.
16:18So that if we have more than 10, or maybe 100
16:21pieces of conversation, we start evicting some
16:23of the previous elements. Just so that we don't
16:26have an infinitely long chat history going
16:28on forever, because those still count towards
16:31our token use per interaction with the API.
16:33But we're not going to do that for now, we're
16:36just going to put everything into chat
16:38history and assume that we don't use it more
16:40than a couple of times in one sitting.
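The eviction step is skipped in the video, but a minimal version might look like this; the cap of 20 messages is an arbitrary illustration:

```python
MAX_HISTORY = 20  # arbitrary cap, not a value from the video

def remember(chat_history, message):
    """Append a message and drop the oldest entries once the cap is exceeded."""
    chat_history.append(message)
    if len(chat_history) > MAX_HISTORY:
        # older turns are evicted so they stop counting toward token usage
        del chat_history[: len(chat_history) - MAX_HISTORY]
```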
16:42So to do that, let's first create this as a separate object.
16:46And I'm going to call this "user_prompt".
16:49And now I'm going to append that to my chat history.
16:53And I'm also going to put it into here.
16:56Well, actually, I want to append it after I
16:59send this response because I don't want to
17:02actually have it appear in the chat history before
17:05I expand it and use it in the messages.
17:07I'll also need the AI's response to be added to the chat history.
17:11Okay, so now I should have a chat history being built up as I use this.
17:16So now I've updated my ask_gpt function to also take a chat history.
17:21And I'm updating the history as I go.
17:23The last part of this is to create a loop so
17:26that I keep prompting the user for input,
17:28as long as they're still using the app.
17:30So to start with, I'm going to capture their input
17:32by using the inbuilt "input()" function.
17:34And here we can put a string, this string can
17:37be anything you want, it's just going to show
17:39up prompting the user to enter some text.
17:41And we're going to store that text in the user input variable.
17:44And here I'm going to create a loop.
17:46So while the user input is not equal to an empty
17:49string, which means that if they just press Enter
17:53without typing anything, the app will exit.
17:55But while that's not true, and while there is some text in the
17:59user input, I'm going to ask GPT with that as the new prompt.
18:03And I'm just going to keep repeating this input prompt until they decide to exit.
18:08And we're not doing anything with this response at the moment.
18:11So I don't even have to use this. I could just use "ask_gpt",
18:14because it will be printed out within the function itself.
18:17And if they do decide to exit the app and just
18:20press Enter, I can make the application just
18:23say bye, and that it's ending the session.
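A sketch of the input loop described here, with a condensed stand-in for the ask_gpt helper built earlier; the prompt label and goodbye text are approximations of what appears on screen:

```python
import openai

def ask_gpt(prompt, chat_history):
    # condensed version of the helper built earlier in the walkthrough
    user_prompt = {"role": "user", "content": prompt}
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo", messages=[*chat_history, user_prompt]
    )
    content = response.choices[0].message.content.strip()
    print(f"\n\033[92m{content}\033[0m")
    chat_history += [user_prompt, {"role": "assistant", "content": content}]
    return content

chat_history = []
user_input = input("Prompt: ")
while user_input != "":          # pressing Enter on an empty line ends the session
    ask_gpt(user_input, chat_history)
    user_input = input("Prompt: ")
print("Bye! Ending the session.")
```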
18:25So now let's hop over to the terminal and see if that works.
18:28So here, let's ask it the same question again, "how do
18:32you sort a list in Python?" So now it's responded with
18:36code examples and how I could sort a list in Python.
18:40Now I can ask it a follow up question, and it will actually have this context.
18:45So I can ask, "now how about JavaScript?" And if
18:48I asked that, you could see that it still remembers
18:51what I asked it before about sorting a list.
18:53But now it's kind of transferred that context over to JavaScript
18:57and given me examples on how to sort things in JavaScript.
19:00Okay, so this works really well.
19:01And then if I press Enter without having anything typed
19:04here, it will interpret that as me exiting the session.
19:07So it'll say bye to me. And the next thing I want
19:10to fix now is that these answers are really long.
19:13And in fact, I want more control of the style
19:17of the response that GPT will return to me.
19:21So let's go back to our code editor and use that system
19:25role prompt that we saw in the example snippet.
19:27Now I'm going to make the system message a part
19:31of the argument to my ask GPT function.
19:34And I'm going to add that right at the top of my messages list.
19:38So the role is going to be system, and the content
19:41is going to be the system message string.
19:44And now I have to create a system message.
19:47So I could do that as a constant or I could do
19:49it here. Let's do it as a constant up here.
19:51And for my system message, I'm going to tell it to answer
19:55with short, concise, really condensed information.
19:59So this is a system message I want to be included when I prompt GPT.
20:04"Answer the user with short, concise answers and
20:07code snippets, keep it to the point and don't
20:10go off topic." Okay, so now I have this constant
20:13and I just have to put it into my function.
20:16I'll put it here and I'll also put it in here where I'm using it.
20:20So now we should receive the system message,
20:23it should add it to our list of messages.
20:25Let's see if that works.
20:26Back to our terminal, let's clear this out and ask the exact same question again.
20:31Okay, so this is a lot shorter, it's not as
20:33verbose, there's a code snippet right away.
20:36And there's an explanation as well.
20:38And now let's try to ask it how to do it in JavaScript.
20:44And still, this is pretty short.
20:46So that's quite good.
20:47Now you could also have a lot of fun with this, you can prompt
20:50the system message to basically do anything you want.
20:53Another really interesting use case I just thought
20:56of is you could use it to write real estate
20:59listings. For example, if you're a real estate
21:02agent, or you're just wanting to rent out your room,
21:05we can tell it to do something like this.
21:07So now my system message is that "you are a real
21:10estate listing assistant." And to generate a listing
21:13description based on the user's prompt and
21:16descriptions, and use emotionally charged words
21:19to describe the property, emojis and capitalization
21:22to emphasize the good qualities of the property.
21:25Let's see what it comes up with.
21:27So here I'm going to say two bed, two bath, one car,
21:31north-view, river, storage, 4th floor, near train.
21:36And maybe I'll put a suburb as well, "Redfern Sydney".
21:42Okay, let's go ahead and run that and see what it does.
21:46So this is pretty funny.
21:48It's got the emojis spot on.
21:50And it's also got the capitalization and the type
21:53of language real estate agents love to use.
21:56"This magnificent two bed, two bath apartment
21:58with one car garage is a true gem perched on
22:00the fourth floor with gorgeous views of the
22:02river and the city's north." The location
22:05offers a lifestyle of convenience and luxury.
22:07Yeah, and you can pause the video and read it yourself if you want.
22:10But yeah, this is I think this is a really interesting way
22:14to use the AI just to kind of make somebody's workflow
22:17a little bit more automated and a little bit easier.
22:20So now that you have the script written in your folder
22:23somewhere, how do you actually make it so that
22:26you can run it from anywhere in your terminal?
22:29It depends on the type of terminal you're using.
22:32But most people are using bash terminal or z-shell terminal
22:35if you're on a Unix system, which is Mac or Linux.
22:38But I'm actually using "fish" because I'm just trying it out.
22:41So my terminal is a little bit different.
22:43But either way, it's pretty simple. We just need to know
22:46the location of the Python binary that we're using
22:49to run this, so that we have the right environment.
22:53And we also need to know the location of the file that we're using.
22:56Now, a really easy way to check that is to just go here, and
23:00then you see the exact path to your Python binary in VSCode.
23:03And if you're in the terminal already, you should
23:06also be able to get the location of your Python
23:10executable by just typing "which python".
23:12So here it is.
23:14So now if I run this command with this file,
23:17let me just put this here like this, I should
23:21be able to execute this from anywhere.
23:24So even though this is a little bit longer than before,
23:27now it's actually universally executable.
23:29I also have to enter the prompt.
23:31So I'll be a little-- oh, yeah, it's still the real estate bot.
23:35So I have to do "two bed, two bath, one car".
23:38Okay, so if I want to make this accessible from
23:41anywhere, I can turn it into an alias.
23:43And if you're using bash, you could just type in alias,
23:46and then the name of the alias you want to call it.
23:49So for example, you could call it "chat".
23:51And then you could type in in quotes, the command
23:54that you want to run, including the full path.
23:57So in bash, that should work.
23:59But I'm using fish.
24:00So I don't think this actually works.
24:02Oh, maybe it does.
24:03I don't know.
24:04Well, okay, it *does* work in fish as well.
24:07So yeah, just type this line, alias space, and then chat
24:11or whatever command you want, and equals to this.
24:15And that should work for you.
24:16And if you want this to be available every time
24:20you start your terminal, you have to modify your
24:23".rc" file, depending on what terminal you're
24:26using, it could be .bashrc or .zshrc.
24:28So you could open that up and then just add that alias line to the bottom there.
24:33So now every time you start up your terminal,
24:35it should be available for you to use.
24:37On my own computer, I've actually aliased this to the "gpt" command.
24:41So that's why I can just type in "gpt" from anywhere
24:44my terminal doesn't have to be in the same folder.
24:47And I can type in the prompt right away.
24:49And because this is actually referencing the file
24:53on disk, if I change or modify that file,
24:55like add a new system personality, it will automatically
24:59update when I use this command.
25:01So that's it.
25:02I hope you enjoyed the tutorial.
25:04And if you want to take a closer look at the code, then
25:07check the video description below for the GitHub link.
25:09If you have any questions or feedback, then please let me know in the comments.
25:14And if you enjoyed this video and you want to see what
25:17else you could do with AI and Python, then check
25:20out this project where I generate an original Pokemon
25:23card collection using OpenAI and Midjourney.
25:25I hope you found this useful. And thank you for watching.

FAQs about This YouTube Video

1. What is covered in the tutorial for building a custom AI chat application?

The tutorial covers the benefits of using OpenAI's API, demonstrates how to use the GPT-3.5 Turbo model, and provides code examples for implementing chat completion and context in the application.

2. How does the custom AI chat application add functionality and personality to apps?

The application uses the ChatGPT API in Python; by customizing the system message and passing the chat history as context, developers can give their apps both extra functionality and a distinct personality.

3. What is the main focus of the tutorial on building a custom AI chat application?

The main focus of the tutorial is on using the ChatGPT API in Python to build a custom AI chat application that runs from the terminal and can be adapted with custom prompts.

4. What model is demonstrated in the tutorial for building a custom AI chat application?

The tutorial demonstrates the GPT-3.5 Turbo model, the same model ChatGPT uses behind the scenes, accessed through the ChatGPT API in Python.

5. What are the benefits of using OpenAI's API as mentioned in the tutorial?

The tutorial highlights that the API is powerful, very cheap (about $0.002 per 1,000 tokens), and accessible to anyone who signs up for an API key.

