We’re Live 🎉 – Meet ShareAI, the Global AI Engine Made of Us

TL;DR – ShareAI is a decentralized layer that links every idle CPU/GPU running Ollama into one elastic super‑cluster. Share spare compute today, earn tokens, and—when your own traffic spikes—tap the network’s extra power. Think Airbnb for AI models: 70 % of every dollar spent goes right to the device owner powering your workload.
Why We Built ShareAI
Running Ollama locally is liberating—your models, your rules—but capacity ends where your metal ends. A single viral demo, a classroom full of students, or the wrong time‑zone retweet can bring even beefy workstations to their knees. Meanwhile, trillions of GPU cycles nap on laptops, minis, and data‑center nodes worldwide.
That mismatch felt… wrong. So we decided to fix it by turning all those sleeping processors into one border‑less, community‑powered cloud.
What Is ShareAI?
ShareAI is a peer‑to‑peer scaling layer for Ollama and any other open‑model runtime. Install a lightweight desktop app and your device becomes both:
- Contributor – donate idle cycles to the mesh and earn tokens.
- Consumer – burst to the mesh whenever your own demand outgrows local capacity.
No vendor lock‑in, no mystery bills—just straightforward pricing and transparent usage metrics.
How It Works (in 90 seconds)
- Download & install the ShareAI Windows app (.msi) from here.
- Log in with your ShareAI account.
  What the network does: authenticates you as the owner of the device and links it to you.
- Pick the models you’re willing to share (or install new ones from the in‑app catalog).
  What the network does: publishes your device, your sharable models, and hardware metadata to the network so tasks can find you.
- Enter Sharing Mode – hit the toggle.
  What the network does: runs a quick benchmark to score your CPU/GPU, ensuring healthy task allocation.
- Keep working.
  What the network does: sends tasks to your device and records input and output tokens to your account for each completed job.
- Need more juice than you generate? Buy extra tokens.
  What the network does: pays 70 % of your spend straight to the node owners who ran your jobs.
The Token Economy in Plain English
- Earn while you sleep – Laptop closed? Server coasting? Let ShareAI rent those cycles.
- Earn while you work – Coming soon: even if your CPU is crunching spreadsheets, an idle GPU can still join the party and earn.
- Pay only when you spike – Most days you stay within the credits you’ve earned. When you blow past them, simply top up. 70 % of every real‑money purchase is streamed directly to the humans hosting your tasks—no crypto jargon, just real payouts.
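The 70 % split is the whole economic model, so here is a minimal sketch of the arithmetic. Only the 70/30 split comes from ShareAI's stated model; the function name and the $10 top‑up amount are illustrative assumptions.

```python
# Sketch of the stated 70/30 revenue split. Only the 70 % share is from
# ShareAI's stated model; everything else here is illustrative.
NODE_OWNER_SHARE = 0.70  # 70 % of every purchase goes to node owners

def payout(purchase_usd: float) -> tuple:
    """Split a top-up between the node owners who ran the jobs and the network."""
    to_owners = round(purchase_usd * NODE_OWNER_SHARE, 2)   # round to whole cents
    to_network = round(purchase_usd - to_owners, 2)
    return to_owners, to_network

owners, network = payout(10.00)
print(f"On a $10.00 top-up: ${owners:.2f} to node owners, ${network:.2f} to the network")
```

So a $10 top‑up streams $7.00 to the people whose hardware ran your jobs.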
Real‑World Use Cases
- Indie hacker shipping an AI feature but can’t justify a $2,000/month GPU lease.
- Digital agency juggling unpredictable client demos.
- Research lab that occasionally needs a 500‑GPU burst for a paper deadline.
- Anyone who hates hardware waste and loves open models.
Getting Started in 3 Easy Steps
1. Download the Windows installer → Download ShareAI
2. Log in to your ShareAI account (or create one) during setup.
3. Select the models & resources you wish to share and flip the **Share** toggle.
Your personal dashboard shows earned tokens and real‑time job graphs. When your own traffic spikes, the network backstops you automatically—no extra config needed.
Developer Quick‑Start: REST API
Once you have tokens, calling ShareAI looks almost identical to the ChatGPT pattern you know:
```http
POST https://api.shareai.now/api/v1/chat/completions
Authorization: Bearer YOUR_API_KEY   # generate at https://console.shareai.now
Content-Type: application/json

{
  "model": "deepseek-r1:32b",
  "messages": [
    {
      "role": "system",
      "content": "You are a highly perceptive sentiment analysis model. You detect emotional tone and interpret subtle context from user inputs. You answer with only one word: Positive, Neutral, or Negative."
    },
    {
      "role": "assistant",
      "content": "Understood. I will respond with a single word sentiment only."
    },
    {
      "role": "user",
      "content": "There’s something beautiful about thousands of people powering AI together. No single company, no gatekeepers—just open access and shared compute. Feels like how the internet was supposed to be."
    }
  ]
}
```
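The same request from Python, using only the standard library, might look like this. The endpoint and bearer‑token header mirror the raw HTTP example above; the exact response shape is an assumption until you try it against the API.

```python
import json
import urllib.request

# Mirrors the raw HTTP example above. The endpoint and headers come from
# that example; the response format is assumed, not documented here.
API_URL = "https://api.shareai.now/api/v1/chat/completions"

def build_request(api_key: str, model: str, messages: list) -> urllib.request.Request:
    """Assemble the chat-completions POST as a urllib Request object."""
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request(
    "YOUR_API_KEY",  # generate at https://console.shareai.now
    "deepseek-r1:32b",
    [{"role": "user", "content": "Hello, mesh!"}],
)
print(req.get_method(), req.full_url)
# To actually send it (needs a valid key and network access):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

Swapping in `requests` or any HTTP client is a one‑for‑one change; nothing here depends on the standard library specifically.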
What’s Next on the Roadmap
- Streaming responses for real‑time tokenized output.
- Chat history keys so you can resend just a `chat_id` instead of the whole transcript.
- Gamification layer – XP, badges, and seasonal leaderboards for top contributors.
- Async endpoint (`/task_completion`) – fire‑and‑forget jobs with webhook or polling callbacks.
- Mobile nodes – iOS & Android background sharing (with battery safeguards).
- Model marketplace – rent rare fine‑tuned weights alongside raw compute.
Call to Action 🚀
We’re live in public beta and we need you:
- Install the app, share a slice of your rig, and tell us how it goes.
- Jump into our Discord to give feedback, request features, or just lurk.
- If you ❤️ the mission, tweet #ShareAI so more idle GPUs can join the party.
Let’s turn the world’s wasted compute into humanity’s most open, elastic AI cloud. Welcome to ShareAI.