LLM Token Price Calculator

Compare costs across different LLMs based on your usage patterns

The calculator takes three usage inputs: the average number of tokens in your prompt (input), the average number of tokens in the model's response (output), and the number of API calls over the period.

Cost Comparison Chart

[Interactive chart: input price and output price (USD per 1M tokens) and total cost for each selected model; 9 built-in models, 3 selected by default.]

Detailed Cost Breakdown

Model             | Input Price | Output Price | Input Cost | Output Cost | Total Cost | Cost per API Call
DeepSeek V3       | $0.27/1M    | $1.1/1M      | $0.27      | $0.55       | $0.82      | $0.0008
Claude 3.5 Sonnet | $3/1M       | $15/1M       | $3.00      | $7.50       | $10.50     | $0.0105
GPT-4o            | $5/1M       | $15/1M       | $5.00      | $7.50       | $12.50     | $0.0125

Note: Prices are approximate and based on publicly available information as of March 2025. Actual prices may vary.

All costs are calculated in USD per 1 million tokens and then applied to your usage.
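The figures in the breakdown table follow from a straightforward formula. The sketch below reproduces them, assuming a usage pattern of 1,000 API calls with 1,000 input and 500 output tokens each (figures consistent with the table, not stated explicitly on this page):

```python
def llm_cost(input_price_per_m, output_price_per_m,
             input_tokens_per_call, output_tokens_per_call, calls):
    """Total cost in USD: per-1M-token prices applied to total usage."""
    input_cost = input_price_per_m * input_tokens_per_call * calls / 1_000_000
    output_cost = output_price_per_m * output_tokens_per_call * calls / 1_000_000
    return input_cost + output_cost

# GPT-4o row of the table: $5/1M input, $15/1M output,
# 1,000 calls of 1,000 input / 500 output tokens each.
total = llm_cost(5.0, 15.0, 1000, 500, 1000)
print(f"${total:.2f}")        # $12.50
print(f"${total / 1000:.4f}") # $0.0125 per API call
```

The same call with the DeepSeek V3 prices ($0.27 and $1.1) yields the table's $0.82 total.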

About This LLM Token Price Calculator

This calculator helps you estimate and compare the costs of using different Large Language Models (LLMs) for your AI applications. By inputting your expected usage patterns, you can make informed decisions about which models offer the best balance of performance and cost for your specific needs.

Most LLM providers charge based on the number of tokens processed. Tokens are pieces of text (roughly 4 characters in English), and pricing typically differs between input tokens (your prompts) and output tokens (the model's responses).

Understanding these costs is crucial for budgeting and optimizing your AI applications, especially as you scale.

How to Use This Calculator

  1. Enter your usage metrics: Specify your average input tokens, output tokens, and total number of API calls.
  2. Select models to compare: Check the models you want to include in your comparison.
  3. Add custom models: If you're using models not listed, add them with their specific pricing.
  4. Compare costs: View the chart and detailed breakdown table to see how costs compare across models.
  5. Save your custom models: Your custom model configurations are automatically saved to your browser for future visits.

The color-coding helps you quickly identify low-cost (green), medium-cost (amber), and high-cost (red) pricing tiers.
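A tier scheme like this can be sketched as follows; the calculator's actual cut-off values are not published, so the thresholds below are illustrative assumptions only:

```python
def price_tier(price_per_m_tokens: float,
               low: float = 1.0, high: float = 10.0) -> str:
    """Classify a per-1M-token price into a color tier.
    `low` and `high` are assumed thresholds, not the calculator's real ones."""
    if price_per_m_tokens < low:
        return "green"  # low-cost
    if price_per_m_tokens < high:
        return "amber"  # medium-cost
    return "red"        # high-cost

price_tier(0.27)  # "green"
price_tier(5.0)   # "amber"
price_tier(15.0)  # "red"
```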

Frequently Asked Questions

What are tokens in LLM context?

Tokens are the basic units that LLMs process. In English, a token is approximately 4 characters or 0.75 words. Different languages may have different token densities.
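That rule of thumb can be turned into a quick estimator. This is a heuristic only; real tokenizers (BPE-based, for example) split text differently, so treat the result as a ballpark figure:

```python
def estimate_tokens(text: str) -> int:
    """Rough token count using the ~4-characters-per-token rule of
    thumb for English. Actual tokenizer output will differ."""
    return max(1, round(len(text) / 4))

estimate_tokens("The quick brown fox jumps over the lazy dog")  # 43 chars -> ~11
```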

Why do input and output tokens have different prices?

Output tokens typically cost more because generating responses requires more computational resources than processing inputs.

How accurate are these price estimates?

Prices are based on publicly available information and may change. Always check the provider's official pricing for the most current rates.

How can I optimize my LLM costs?

Strategies include using smaller models when appropriate, optimizing prompts to be more concise, and implementing caching for common queries.