Together AI model pricing
Together AI pricing coverage indexed by CostHawk for searchable AI model cost tracking.
Indexed models: 18
Last updated: Mar 13, 2026
Coverage: searchable, crawlable model-level pricing for answer engines and buyers.
Together AI pricing catalog
Searchable coverage: 18 models visible across 1 provider, newest releases listed first. Updated Mar 13, 2026.
| Model | Input / 1M | Output / 1M | Extra pricing |
|---|---|---|---|
| | $1.00 ($0.0010 per 1K) | $3.20 ($0.0032 per 1K) | Standard token pricing |
| | $0.200 ($0.0002 per 1K) | $1.10 ($0.0011 per 1K) | Standard token pricing |
| | $0.270 ($0.0003 per 1K) | $0.850 ($0.0008 per 1K) | Standard token pricing |
| | $0.880 ($0.0009 per 1K) | $0.880 ($0.0009 per 1K) | Standard token pricing |
| | $0.600 ($0.0006 per 1K) | $1.70 ($0.0017 per 1K) | Standard token pricing |
| | $0.200 ($0.0002 per 1K) | $0.600 ($0.0006 per 1K) | Standard token pricing |
| | $0.650 ($0.0006 per 1K) | $3.00 ($0.0030 per 1K) | Standard token pricing |
| | $0.100 ($0.0001 per 1K) | $0.100 ($0.0001 per 1K) | Standard token pricing |
| | $0.100 ($0.0001 per 1K) | $0.300 ($0.0003 per 1K) | Standard token pricing |
| | $0.150 ($0.0001 per 1K) | $1.50 ($0.0015 per 1K) | Standard token pricing |
| | $0.500 ($0.0005 per 1K) | $2.80 ($0.0028 per 1K) | Standard token pricing |
| | $0.300 ($0.0003 per 1K) | $1.20 ($0.0012 per 1K) | Standard token pricing |
| | $0.300 ($0.0003 per 1K) | $0.300 ($0.0003 per 1K) | Standard token pricing |
| | $1.00 ($0.0010 per 1K) | $3.00 ($0.0030 per 1K) | Standard token pricing |
| | $1.20 ($0.0012 per 1K) | $4.00 ($0.0040 per 1K) | Standard token pricing |
| | $3.00 ($0.0030 per 1K) | $7.00 ($0.0070 per 1K) | Standard token pricing |
| | $0.200 ($0.0002 per 1K) | $0.200 ($0.0002 per 1K) | Standard token pricing |
| | $0.150 ($0.0001 per 1K) | $0.600 ($0.0006 per 1K) | Standard token pricing |
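The per-1M and per-1K figures in each row are the same rate at two scales. A minimal sketch of converting token usage to dollars with per-1M rates (the rates below are taken from the first table row; the function name is illustrative, not a CostHawk API):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_per_1m: float, output_per_1m: float) -> float:
    """Convert token counts to dollars using per-1M-token rates."""
    return (input_tokens / 1_000_000) * input_per_1m \
         + (output_tokens / 1_000_000) * output_per_1m

# 12,000 input tokens and 800 output tokens at the $1.00 / $3.20
# per-1M rates from the first table row:
# 12_000/1e6 * 1.00 + 800/1e6 * 3.20 = 0.01456
cost = request_cost(12_000, 800, input_per_1m=1.00, output_per_1m=3.20)
print(f"${cost:.6f}")
```

Dividing a per-1M rate by 1,000 yields the per-1K figure shown alongside it in the table.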
Glossary Shortcuts
The five operating concepts behind provider pricing
If you are comparing providers, these are the related topics that usually decide the final architecture and spend profile.
Max Tokens
Provider pricing is easier to reason about once you understand the output cap that sets the per-request ceiling.
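A sketch of that per-request ceiling: the worst case assumes the model emits the full output cap. The rates and token counts below are illustrative assumptions, not tied to a specific model:

```python
def max_request_cost(prompt_tokens: int, max_tokens: int,
                     input_per_1m: float, output_per_1m: float) -> float:
    """Worst-case dollar cost for one request: price the full output cap."""
    return (prompt_tokens * input_per_1m + max_tokens * output_per_1m) / 1_000_000

# Illustrative: 4,000-token prompt, max_tokens=1024,
# $0.60 input / $1.70 output per 1M tokens.
ceiling = max_request_cost(4_000, 1_024, 0.60, 1.70)
print(f"worst case per request: ${ceiling:.6f}")
```

Lowering `max_tokens` lowers the ceiling directly, which is why the output cap matters when budgeting provider spend.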
LLM Gateway
Use a routing layer when you need fallback paths, policy controls, or spend-aware traffic steering.
Serverless Inference
Most public model APIs are bought as serverless inference, which changes how you think about capacity and cost.
Logging
Request metadata is what lets you connect provider price changes to real token consumption and failures.
OpenTelemetry
Telemetry standards help you stitch provider requests into traces, dashboards, and operational workflows.
Together AI Tracking
Track Together AI usage and model costs from one place
Use CostHawk to monitor Together AI alongside the rest of your stack, with local telemetry first and optional deeper tracking paths when your team needs them.
