Device Requirements
The ShareAI application automatically recommends which AI models are suitable for sharing based on your device’s GPU VRAM. To help you maximize your contribution while ensuring optimal performance, we use two types of requirements:
1. Recommended Requirements
- These indicate the ideal VRAM needed to run models efficiently.
- If your GPU meets the recommended VRAM, you can expect good performance and optimal task-processing speed.
- If your GPU falls below the recommended VRAM, you can still install and run the model, but performance may be noticeably slower: your device will process fewer tasks per second than it would under optimal conditions.
2. Minimum Requirements
- These are the lowest VRAM values a GPU must have to run a specific model.
- If your GPU does not meet the minimum VRAM, you cannot install or share the model. This restriction protects network reliability and prevents ineffective resource utilization (see the sketch after this list).
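As a concrete illustration, here is a minimal sketch of how a client could read a GPU's total VRAM with the NVIDIA Management Library (via the `pynvml` bindings) and classify it against a model's requirements. The threshold values and function names are hypothetical, not ShareAI's actual implementation:

```python
# Illustrative sketch only -- not ShareAI's actual implementation.
# Requires the NVIDIA Management Library bindings: pip install nvidia-ml-py
import pynvml

def total_vram_gb(gpu_index: int = 0) -> float:
    """Return the total VRAM of one GPU, in gigabytes."""
    pynvml.nvmlInit()
    try:
        handle = pynvml.nvmlDeviceGetHandleByIndex(gpu_index)
        return pynvml.nvmlDeviceGetMemoryInfo(handle).total / 1024**3
    finally:
        pynvml.nvmlShutdown()

def classify_fit(vram_gb: float, minimum_gb: float, recommended_gb: float) -> str:
    """Classify a device against a model's VRAM requirements."""
    if vram_gb < minimum_gb:
        return "blocked"    # below minimum: cannot install or share
    if vram_gb < recommended_gb:
        return "degraded"   # installable, but fewer tasks per second
    return "optimal"

# Hypothetical thresholds for a 7B 4-bit quantized model:
print(classify_fit(total_vram_gb(), minimum_gb=6.0, recommended_gb=8.0))
```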
GPU VRAM Categorization
Based on your GPU’s VRAM, your device falls into one of the following categories:
| VRAM Range (GB) | Recommended Models |
| --- | --- |
| 4–6 | 1B, 3B |
| 8–12 | 7B (4/8-bit quantized) |
| 16–24 | 7B, 14B (4-bit quantized) |
| 32–48 | 14B, 20B (4/8-bit quantized), 70B (4-bit) |
| 48–96 (multiple GPUs) | 70B (8-bit quantized) |
| 96+ (multiple GPUs) | 70B (16-bit precision or multiple instances) |
For instance:
- RTX 3060 (12 GB): Recommended for models of around 7–8B parameters (e.g., DeepSeek-r1:8B). You can install models up to 13B if you accept potential performance degradation.
- RTX 4080 (16 GB): Recommended for 7–14B models (4-bit quantized).
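The tiers above can be expressed as a simple lookup. The sketch below mirrors the documented table; the data structure and function are illustrative, not a ShareAI API:

```python
# Illustrative lookup mirroring the VRAM table above; not a ShareAI API.
RECOMMENDATION_TIERS = [
    # (low_gb, high_gb, recommended_models)
    (4, 6, ["1B", "3B"]),
    (8, 12, ["7B (4/8-bit quantized)"]),
    (16, 24, ["7B", "14B (4-bit quantized)"]),
    (32, 48, ["14B", "20B (4/8-bit quantized)", "70B (4-bit)"]),
    (48, 96, ["70B (8-bit quantized)"]),                         # multiple GPUs
    (96, float("inf"), ["70B (16-bit or multiple instances)"]),  # multiple GPUs
]

def recommended_models(vram_gb: float) -> list[str]:
    """Return the recommended model sizes for a given amount of VRAM."""
    for low, high, models in RECOMMENDATION_TIERS:
        if low <= vram_gb <= high:
            return models
    return []  # outside every documented tier

print(recommended_models(12))  # ['7B (4/8-bit quantized)']
```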
Alerting and Notifications
When adding new models:
- If your GPU meets the recommended requirements:
  - No warnings or restrictions apply.
- If your GPU is below the recommended but above the minimum requirements:
  - A non-blocking warning informs you of potential performance issues.
  - Example warning: ⚠️ Performance Notice: Your GPU VRAM is below the recommended amount. This model may run slower and process fewer tasks per second.
- If your GPU does not meet the minimum requirements:
  - A blocking alert prevents installation.
  - Example alert: 🔴 Insufficient VRAM: Your GPU VRAM does not meet the minimum requirements for this model. Please upgrade your GPU or select a smaller model. Learn more.
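A minimal sketch of this three-tier decision, with the message text taken from the examples above; the function name and signature are hypothetical, not part of ShareAI's API:

```python
# Illustrative sketch of the notification tiers; message text follows
# the examples above. Function name and signature are hypothetical.
def installation_notice(vram_gb: float, minimum_gb: float,
                        recommended_gb: float) -> tuple[bool, str | None]:
    """Return (allowed, message) for a model-installation attempt."""
    if vram_gb < minimum_gb:
        return False, ("🔴 Insufficient VRAM: Your GPU VRAM does not meet the "
                       "minimum requirements for this model. Please upgrade "
                       "your GPU or select a smaller model.")
    if vram_gb < recommended_gb:
        return True, ("⚠️ Performance Notice: Your GPU VRAM is below the "
                      "recommended amount. This model may run slower and "
                      "process fewer tasks per second.")
    return True, None  # meets recommended: no warning

allowed, message = installation_notice(vram_gb=10, minimum_gb=6, recommended_gb=12)
print(allowed, message)  # True, plus the performance notice
```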
Special Cases
- K-Quant Overhead: Models quantized with K-quants (K_S, K_M, K_L) carry additional VRAM overhead. ShareAI automatically calculates this overhead and adjusts its recommendations accordingly (a rough estimate sketch follows this list).
- `:latest` Models: Since `:latest` models don't specify a parameter count upfront, ShareAI runs the `ollama show` command to check the model before allowing sharing. If your resources don't meet the model's minimum after this check, the model will not activate (a sketch of this check follows this list).
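To make the K-quant point concrete, here is a rough, weights-only VRAM estimate with an assumed per-variant overhead factor. The overhead numbers are placeholders for illustration, not ShareAI's calibrated values:

```python
# Illustrative weights-only estimate; excludes KV cache and activations.
# The K-quant overhead factors below are assumptions, not ShareAI's values.
BYTES_PER_PARAM = {"4-bit": 0.5, "8-bit": 1.0, "16-bit": 2.0}
K_QUANT_OVERHEAD = {"K_S": 1.02, "K_M": 1.05, "K_L": 1.08}  # assumed factors

def estimate_vram_gb(params_billions: float, precision: str,
                     k_quant: str | None = None) -> float:
    """Estimate VRAM for model weights, adjusted for K-quant overhead."""
    weights_gb = params_billions * BYTES_PER_PARAM[precision]
    overhead = K_QUANT_OVERHEAD.get(k_quant, 1.0) if k_quant else 1.0
    return weights_gb * overhead

# A hypothetical 7B model quantized to 4-bit with the K_M variant:
print(f"{estimate_vram_gb(7, '4-bit', 'K_M'):.2f} GB")  # ~3.7 GB of weights
```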
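And a sketch of the pre-share check for `:latest` tags. Only the `ollama show` command comes from this document; the output parsing below is an assumption, since the exact text layout varies across Ollama versions:

```python
# Illustrative pre-share check for :latest models; the parsing is assumed.
import re
import subprocess

def reported_parameter_count(model: str) -> str | None:
    """Run `ollama show` and extract the reported parameter count (e.g. '8.0B')."""
    result = subprocess.run(["ollama", "show", model],
                            capture_output=True, text=True, check=True)
    match = re.search(r"parameters\s+([\d.]+[BM])", result.stdout)
    return match.group(1) if match else None

params = reported_parameter_count("llama3:latest")  # hypothetical model tag
print(params or "parameter count not reported; model stays inactive")
```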
Final Recommendations
- Regularly review GPU recommendations within the ShareAI interface.
- Keep your GPU drivers updated for optimal performance.
- Consider GPU upgrades if you wish to contribute significantly to the network or host larger models.
Following these recommendations ensures both the efficiency of your contributions and the health of the ShareAI decentralized AI network.