Introduction

    In today’s rapidly evolving world of artificial intelligence, OpenAI stands at the forefront, pushing the boundaries of what’s possible. Founded with the mission of ensuring that artificial general intelligence (AGI) benefits all of humanity, OpenAI has developed a powerful API that opens up new possibilities for developers. In this blog post, we’ll explore the fascinating realm of OpenAI and learn how to harness its power using Python.

    Getting Started with OpenAI

    Before diving into the technical aspects, let’s understand what OpenAI is all about. OpenAI’s mission is to ensure that artificial general intelligence (AGI) benefits all of humanity. They’ve made significant strides in this direction with their powerful API, which provides access to state-of-the-art language models and various AI services. This API allows developers to integrate AI capabilities into their applications, making it a valuable tool for a wide range of use cases.

    Obtaining API Credentials

    To start using OpenAI, you’ll need API credentials. Signing up for an API key is a straightforward process.

    1. Create an OpenAI Account: https://platform.openai.com/signup
    2. Go to the API Key page: https://platform.openai.com/account/api-keys
    3. Create a new secret key and copy it
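    Rather than pasting the secret key directly into your scripts, a common practice is to read it from an environment variable. A minimal sketch, assuming the conventional variable name OPENAI_API_KEY:

```python
import os

# Read the secret key from an environment variable instead of
# hard-coding it in source; OPENAI_API_KEY is the conventional name.
api_key = os.environ.get("OPENAI_API_KEY", "")

if not api_key:
    print("Set the OPENAI_API_KEY environment variable before making API calls.")
```

    This keeps the key out of version control and lets you rotate it without editing code.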

    OpenAI offers different pricing tiers, including free trial options, so you can choose the one that best fits your needs and budget. Keep in mind that understanding the pricing structure and usage limits is essential to avoid unexpected charges.

    Endpoints

    OpenAI provides access to various endpoints that serve different purposes. Some of the most common endpoints are the Completion Endpoint (e.g., Completion.create) and the Chat Completion Endpoint (e.g., ChatCompletion.create).

    Completion Endpoint

    The Completion Endpoint is designed for single-turn tasks that involve generating text based on a prompt. It’s typically used for tasks that don’t require maintaining a conversational context. You provide a single prompt, and the model generates text in response to that prompt. The model doesn’t have memory of previous interactions or context beyond the immediate prompt.

    Example #1: Single-turn tasks

    In the following Python code snippet, we initiate the OpenAI API interaction by storing the API key in the openai.api_key variable. We then send a request to the Completion endpoint, which generates text completions. In this case, the prompt is “What is OpenAI API?”.

    As seen above, the API returns a JSON (JavaScript Object Notation) response, which is a nested data format that resembles a Python dictionary with keys and values. The response has several keys, such as “choices”, “id”, and “model”.

    We can then unnest the JSON object by selecting the value from the “choices” key.
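    For example, with a trimmed-down stand-in for the structure the API returns, the generated text can be pulled out of the first element under “choices”:

```python
# A trimmed-down stand-in for the nested JSON the Completion endpoint returns.
response = {
    "id": "cmpl-example",
    "model": "text-davinci-003",
    "choices": [
        {"text": "The OpenAI API provides access to language models.", "index": 0}
    ],
}

# Unnest: take the first choice and read its generated text.
answer = response["choices"][0]["text"]
print(answer)
```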

    Chat Completion Endpoint

    The Chat Completion Endpoint is designed for multi-turn conversational interactions with the model. It allows you to build chatbots, virtual assistants, and interactive agents that can engage in back-and-forth conversations with users. You provide a series of messages in a conversation, including user messages and assistant messages. The model maintains context across messages, enabling dynamic conversations.

    Example #1: Single-turn Task

    The Python code below shows how the prompt is set up by creating a list of dictionaries, where each dictionary assigns content to one of the roles. The messages often start with the system role (“You are a Data Scientist”) followed by alternating user and assistant messages.

    Example #2: Multi-turn Task

    In the Python code below, we add an “assistant” message that serves as an ideal example for the model: a question on a similar topic paired with a response that we consider ideally written for our use case. The model now has not only its pre-existing understanding, but also a working example to guide its response.

    With an example to work with, the assistant provides an accurate response that is more in line with our expectations.

    Example #3: Multi-turn Chat

    To facilitate a dynamic conversation with an AI model, we need to create a feedback loop where user messages and assistant responses are continuously added to the conversation history. This enables the model to maintain context from the previous interactions.

    In the following Python code, we establish the groundwork by defining a set of initial messages, which may include a system message and any developer-written examples. Additionally, we compile a list of user questions (stored in user_qs) that depend on the context of previous responses.

    To ensure that each question receives a tailored response, we iterate through the user_qs list. To prepare the user message for API communication, we construct a dictionary for each message and append it to the messages list using the append method. Subsequently, we transmit the entire message history to the Chat Completion endpoint and store the response. To extract the assistant’s messages, we isolate them from the API response, convert them into a dictionary format, and append them to the messages list for the subsequent iteration.

    To present the conversation coherently, we incorporate two print statements that display the dialogue between the user and the assistant as a script. This approach allows us to seamlessly provide corrective feedback to the model’s responses without the need to restate the original questions or the model’s previous responses. The resulting output showcases the effectiveness of this conversational loop.

    Conclusion

    This blog post introduces OpenAI’s role in artificial intelligence and demonstrates how to use its API with Python. It covers obtaining API credentials, explores different endpoints like Completion and Chat Completion, and provides practical code examples. The Completion Endpoint is suitable for single-turn tasks, generating text based on a prompt, while the Chat Completion Endpoint facilitates multi-turn conversations.