OpenAI Pricing & Tokens Calculator

1. Select a model from below
2. Enter the number of words in your prompt to GPT
3. Hit that beautiful Calculate button 🎉

The calculator above has 2 inputs that generate 2 outputs. Given a model (Davinci, Curie, Babbage, or Ada) and the number of words you expect to provide the model for text completion, this page will calculate the estimated OpenAI tokens and the estimated cost of those tokens.
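The calculation behind the calculator can be sketched in a few lines. This is a rough approximation, not the exact tokenizer: it uses OpenAI's documented rule of thumb that one token corresponds to roughly 3/4 of an English word, so the token count it produces is an estimate, and the function names here are illustrative, not part of any API.

```python
# Rough sketch of the calculator's logic, using OpenAI's rule of
# thumb that one token is approximately 3/4 of an English word.
WORDS_PER_TOKEN = 0.75

def estimate_tokens(word_count: int) -> int:
    """Estimate the number of tokens for a given word count."""
    return round(word_count / WORDS_PER_TOKEN)

def estimate_cost(word_count: int, price_per_1k_tokens: float) -> float:
    """Estimate the cost in dollars for `word_count` words at a given
    per-1,000-token price."""
    return estimate_tokens(word_count) / 1000 * price_per_1k_tokens

# Example: a 750-word prompt to Davinci at $0.02 per 1,000 tokens
print(estimate_tokens(750))                 # → 1000
print(round(estimate_cost(750, 0.02), 4))   # → 0.02
```

For exact counts you would run the text through the model's actual tokenizer, but for budgeting purposes the 3/4-word heuristic is usually close enough.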

Check out our new OpenAI token calculator.

What are GPT language models?

GPT is artificial intelligence software, created and managed by OpenAI, that leverages large language models (LLMs) to create content, answers, and responses based on input from users.

The software is designed to understand and generate human-like responses to text-based conversations. GPT models take millions of text-based data points from all corners of the Internet to formulate human-like responses that are quite often very helpful.

How does GPT work?

GPT is a set of models that can understand and generate natural language. They are trained on a vast corpus of text data, which they use to generate statistically probable responses. They work by using a deep learning design called the Transformer architecture, which enables the software to understand the context of the text input and generate a response that is appropriate and relevant to it.

When you type a message or question to prompt the software, it analyzes the input to understand its meaning and context. Then, drawing on patterns learned from its training data, the model generates the response it deems most appropriate for the input.

The training data used to develop GPT models consists of a large corpus of text from various sources, including books, articles, and websites. This data is used to train the model to recognize patterns and relationships between words and phrases, which enables the models to understand the meaning and context of your input.

OpenAI Models Pricing and Overview

The GPT models provided by OpenAI are a family of large language models trained on an enormous corpus of text data.

Each model, described briefly below, has its own pros and cons with respect to performance, speed, supported tasks, and cost.

Davinci GPT Model

The Davinci model is the most powerful and capable GPT model available; however, it comes at a higher price than the other models.

According to their documentation, OpenAI suggests experimenting and tinkering with Davinci until you get things working. Once things are working, you should then try out the other models, which are often faster and as little as 1/10th the cost of Davinci.

The training data for Davinci is available up to January 2021.

OpenAI Davinci Pricing

According to their documentation, OpenAI prices Davinci at $0.02/1000 tokens.

Curie GPT Model

The Curie model is the 2nd most powerful, after Davinci, but offers faster responses at a lower price point.

The training data for Curie is available up to October 2019.

OpenAI Curie Pricing

According to their documentation, OpenAI prices Curie at $0.002/1000 tokens.

Babbage GPT Model

The Babbage model is very capable at straightforward tasks, very fast, and offered at a low price point.

The training data for Babbage is available up to October 2019.

OpenAI Babbage Pricing

According to their documentation, OpenAI prices Babbage at $0.0005/1000 tokens.

Ada GPT Model

The Ada model is best suited to the simplest tasks among its fellow models, and thus has the lowest price point.

The training data for Ada is available up to October 2019.

OpenAI Ada Pricing

According to their documentation, OpenAI prices Ada at $0.0004/1000 tokens.


If you are eager to learn more about the above models, please check out the OpenAI documentation.

Why Does This Page Exist?

On the OpenAI pricing page, I realized the pricing was a little confusing for my specific use case. I wanted to know, in the context of SEO/content creation, how much it would cost to generate [x] number of words.

So, I created this simple (silly) calculator to estimate usage costs for the various GPT models.