DeepSeek-V3.1 671B finally on ShareAI


The wait is over. Starting today, DeepSeek-V3.1 671B is live on ShareAI, built for providers with serious infrastructure who want to deliver frontier-grade inference to their customers.

Why this matters

DeepSeek-V3.1 is a hybrid model that supports both thinking and non-thinking modes—one model, two behaviors—switchable via chat template. It brings three big upgrades:

  • Hybrid thinking mode — Toggle between thinking and non-thinking by changing the chat template (see the sketch after this list).
  • Smarter tool calling — Post-training optimization boosts performance for tool use and agent-style tasks.
  • Higher thinking efficiency — DeepSeek-V3.1-Think reaches answer quality comparable to DeepSeek-R1-0528, while responding faster.
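To make the toggle concrete, here is a minimal sketch that renders the same conversation in both modes. It assumes you build prompts with the Hugging Face tokenizer for deepseek-ai/DeepSeek-V3.1 and that the tokenizer exposes the `thinking` flag from DeepSeek's published chat template; verify against the tokenizer version you actually pull.

```python
# Minimal sketch: render the same conversation in thinking vs. non-thinking mode.
# Assumption: the deepseek-ai/DeepSeek-V3.1 chat template accepts a `thinking` flag,
# as shown in DeepSeek's model card; check the version you download.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-V3.1")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Prove that the sum of two even numbers is even."},
]

# Thinking mode: the template adds the reasoning prefix so the model emits its chain of thought.
thinking_prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, thinking=True
)

# Non-thinking mode: same conversation, rendered for a direct, low-latency answer.
direct_prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, thinking=False
)

print(thinking_prompt)
print(direct_prompt)
```

Because both prompts come from the same model and template, a router can flip the flag per request: thinking for hard problems, non-thinking when latency matters.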

What you can do now

  • Offer frontier inference to enterprise clients that demand speed, scale, and reliability.
  • Run agentic workflows with improved tool execution and planning.
  • Choose your reasoning style per request: thinking on for tough problems, off for low-latency chats.

Getting started on ShareAI

  1. Select the model: deepseek-v3.1-671b.
  2. Pick the mode: set the chat template to thinking or non-thinking.
  3. Wire your tools: pass your function/tool schema as usual; V3.1 is optimized for it (see the sketch after these steps).
  4. Ship: route traffic from your existing endpoints; monitor as you do today.
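For step 3, the sketch below shows one way to wire a tool schema into a request. It assumes an OpenAI-compatible chat completions endpoint; the base URL, API key, model id, and the get_order_status function are illustrative placeholders, so substitute the values from your ShareAI workspace.

```python
# Hypothetical sketch: calling deepseek-v3.1-671b through an OpenAI-compatible
# chat completions endpoint with a tool schema attached.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.shareai.example/v1",  # placeholder, not the real endpoint
    api_key="YOUR_SHAREAI_KEY",                 # placeholder key
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_order_status",  # illustrative tool, not part of any real API
        "description": "Look up the status of a customer order.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

response = client.chat.completions.create(
    model="deepseek-v3.1-671b",
    messages=[{"role": "user", "content": "Where is order 4211?"}],
    tools=tools,
)

# If the model decided to call the tool, the structured call arrives here.
print(response.choices[0].message.tool_calls)
```

If the model elects to call the tool, execute it, append the result as a tool message, and send a follow-up request to get the final answer.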

Who is it for?

Clouds, platforms, and providers with big infra and bigger ambitions—teams shipping next-gen assistants, research copilots, and high-throughput inference services.

Availability

DeepSeek-V3.1 671B is available today on ShareAI. Turn it on in your workspace or reach out for quota and SLAs.

Ready to build the frontier?
Enable DeepSeek-V3.1 · Start building · Talk to sales
