Quick Start
Get Together AI working in 3 steps.

Tip: You can also set provider="@together-ai" in Portkey() and use just model="meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo" in the request.

Add Provider in Model Catalog
- Go to Model Catalog → Add Provider
- Select Together AI
- Choose existing credentials or create new by entering your Together AI API key
- Name your provider (e.g., together-ai-prod)
Complete Setup Guide →
See all setup options, code examples, and detailed instructions
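Once the provider is added, a minimal chat request can be sketched as follows. This is an illustrative sketch, not the full setup guide: `together-ai-prod` is the example provider slug from the steps above, and the live call only runs when a `PORTKEY_API_KEY` environment variable is present.

```python
import os

# Model name from the tip above; the provider slug is whatever name
# you gave the provider in the Model Catalog (example: together-ai-prod).
MODEL = "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo"
PROVIDER = "@together-ai-prod"

if os.environ.get("PORTKEY_API_KEY"):  # skip the live call when no key is set
    from portkey_ai import Portkey

    portkey = Portkey(
        api_key=os.environ["PORTKEY_API_KEY"],
        provider=PROVIDER,
    )
    completion = portkey.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": "Say hello in one sentence."}],
    )
    print(completion.choices[0].message.content)
```

Because the provider slug carries the credentials, the request body itself stays identical to a plain OpenAI-style chat completion.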
Reasoning / Thinking Support
Together AI supports reasoning models that expose their internal chain of thought. Use the reasoning_effort parameter to control reasoning behavior, and set strict_open_ai_compliance=False to receive the thinking content in content_blocks. The response then includes content_blocks with both the model's thinking process and the final answer. Streaming is also supported and returns reasoning chunks in the content_blocks field of the stream delta.
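A reasoning request might look like the sketch below. The model name and provider slug are placeholders, and the effort levels shown in the comment are an assumption; the live call only runs when a `PORTKEY_API_KEY` environment variable is present.

```python
import os

MODEL = "deepseek-ai/DeepSeek-R1"  # placeholder: a reasoning-capable model
REASONING_EFFORT = "medium"        # assumed levels: low / medium / high

if os.environ.get("PORTKEY_API_KEY"):  # skip the live call when no key is set
    from portkey_ai import Portkey

    portkey = Portkey(
        api_key=os.environ["PORTKEY_API_KEY"],
        provider="@together-ai-prod",     # example provider slug
        strict_open_ai_compliance=False,  # needed to receive content_blocks
    )
    completion = portkey.chat.completions.create(
        model=MODEL,
        reasoning_effort=REASONING_EFFORT,
        messages=[{"role": "user", "content": "What is 17 * 23?"}],
    )
    # With strict compliance off, the message carries content_blocks:
    # the model's thinking blocks followed by the final answer.
    for block in completion.choices[0].message.content_blocks:
        print(block)
```

For streaming, iterate the stream and read the same content_blocks field off each chunk's delta instead of the final message.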
Thinking Mode Documentation
Learn more about thinking/reasoning support across providers
Managing Together AI Prompts
Manage all prompt templates for Together AI in the Prompt Library. All current Together AI models are supported, and you can easily test different prompts. Use the portkey.prompts.completions.create interface to use a prompt in your application.
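Calling a saved template might look like this sketch; the prompt ID and the variable name are placeholders for whatever your template defines, and the live call only runs when a `PORTKEY_API_KEY` environment variable is present.

```python
import os

PROMPT_ID = "pp-example-prompt"  # placeholder: your template's ID

if os.environ.get("PORTKEY_API_KEY"):  # skip the live call when no key is set
    from portkey_ai import Portkey

    portkey = Portkey(api_key=os.environ["PORTKEY_API_KEY"])
    completion = portkey.prompts.completions.create(
        prompt_id=PROMPT_ID,
        # "user_input" is a placeholder variable defined in the template
        variables={"user_input": "Summarize the latest release notes."},
    )
    print(completion)
```

Because the template already pins the model and provider, the application code only supplies the variables.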
Next Steps
Add Metadata
Add metadata to your Together AI requests
Gateway Configs
Add gateway configs to your Together AI requests
Tracing
Trace your Together AI requests
Fallbacks
Setup fallback from OpenAI to Together AI
SDK Reference
Complete Portkey SDK documentation

