DeepSeek-V3.1 671B finally on ShareAI

The wait is over. Starting today, DeepSeek-V3.1 671B is live on ShareAI, built for providers with serious infrastructure who want to deliver frontier-grade inference to their customers.
Why this matters
DeepSeek-V3.1 is a hybrid model that supports both thinking and non-thinking modes—one model, two behaviors—switchable via chat template. It brings three big upgrades:
- Hybrid thinking mode — Toggle between thinking and non-thinking by changing the chat template.
- Smarter tool calling — Post-training optimization boosts performance for tool use and agent-style tasks.
- Higher thinking efficiency — DeepSeek-V3.1-Think reaches answer quality comparable to DeepSeek-R1-0528, while responding faster.
What you can do now
- Offer frontier inference to enterprise clients that demand speed, scale, and reliability.
- Run agentic workflows with improved tool execution and planning.
- Choose your reasoning style per request: thinking on for tough problems, off for low-latency chats.
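To make the per-request toggle concrete, here is a minimal sketch of building two request payloads for an OpenAI-compatible chat endpoint, one with thinking on and one with it off. The `chat_template_kwargs`/`thinking` field is an assumption for illustration; check ShareAI's API docs for the exact parameter name.

```python
# Hypothetical sketch: assemble chat-completion payloads that toggle
# DeepSeek-V3.1's hybrid thinking mode per request. The toggle field name
# is assumed, not confirmed by ShareAI's docs.

def build_request(prompt: str, thinking: bool) -> dict:
    """Build a chat-completion payload; `thinking` selects the mode."""
    return {
        "model": "deepseek-v3.1-671b",
        "messages": [{"role": "user", "content": prompt}],
        # Assumed toggle: the hybrid model switches behavior via its chat template.
        "chat_template_kwargs": {"thinking": thinking},
    }

# Thinking on for a hard problem, off for a low-latency chat.
hard = build_request("Prove that the square root of 2 is irrational.", thinking=True)
fast = build_request("What's the capital of France?", thinking=False)
```

The point is that mode selection is a per-request knob, so a single deployment can serve both deep-reasoning and low-latency traffic.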
Getting started on ShareAI
- Select the model: deepseek-v3.1-671b.
- Pick the mode: set the chat template to Thinking or Standard.
- Wire your tools: pass your function/tool schema as usual—V3.1 is optimized for it.
- Ship: route traffic from your existing endpoints; monitor as you do today.
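The steps above can be sketched as a single payload. This is an illustrative example, assuming ShareAI follows the common OpenAI-style `tools` convention; the `get_weather` schema is a made-up placeholder, not a ShareAI API.

```python
# Hypothetical sketch: wire a function/tool schema into a request, as with
# any OpenAI-compatible endpoint. The tool below is a placeholder example.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

payload = {
    "model": "deepseek-v3.1-671b",
    "messages": [{"role": "user", "content": "Weather in Lisbon?"}],
    "tools": [weather_tool],  # V3.1's post-training targets this tool-use path
}
```

If your clients already send tool schemas in this shape, routing them to V3.1 should require no changes beyond the model name.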
Who is it for?
Clouds, platforms, and providers with big infra and bigger ambitions—teams shipping next-gen assistants, research copilots, and high-throughput inference services.
Availability
DeepSeek-V3.1 671B is available today on ShareAI. Turn it on in your workspace or reach out for quota and SLAs.
Ready to build the frontier?
→ Enable DeepSeek-V3.1 • Start building • Talk to sales