{"id":869,"date":"2025-09-14T18:10:23","date_gmt":"2025-09-14T15:10:23","guid":{"rendered":"https:\/\/shareai.now\/?post_type=documentation&#038;p=869"},"modified":"2025-09-14T18:10:23","modified_gmt":"2025-09-14T15:10:23","slug":"device-requirements","status":"publish","type":"documentation","link":"https:\/\/shareai.now\/docs\/provider\/troubleshoot\/device-requirements\/","title":{"rendered":"Device Requirements"},"content":{"rendered":"\n<p>The ShareAI application automatically recommends which AI models are suitable for sharing based on your device&#8217;s GPU VRAM. To help you maximize your contribution while ensuring optimal performance, we use two types of requirements:<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">1. Recommended Requirements<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>These indicate the ideal VRAM needed to run models efficiently.<\/li>\n\n\n\n<li><strong>If your GPU meets recommended VRAM<\/strong>, you can expect good performance and optimal task processing speed.<\/li>\n\n\n\n<li><strong>If your GPU is below recommended VRAM<\/strong>, you can still install and run the model, but performance may be noticeably slower. Your device will process fewer tasks per second compared to optimal conditions.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">2. Minimum Requirements<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>These are the lowest VRAM values your GPU must meet to run specific models.<\/li>\n\n\n\n<li><strong>If your GPU does not meet minimum VRAM<\/strong>, you cannot install or share the model. 
This safeguard keeps the network reliable and prevents resources from being wasted on models your device cannot run effectively.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">GPU VRAM Categorization<\/h3>\n\n\n\n<p>Based on your GPU&#8217;s VRAM, your device falls into one of the following categories:<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th>VRAM Range (GB)<\/th><th>Recommended Models<\/th><\/tr><\/thead><tbody><tr><td>4\u20136 GB<\/td><td>1B, 3B<\/td><\/tr><tr><td>8\u201312 GB<\/td><td>7B (quantized 4\/8-bit)<\/td><\/tr><tr><td>16\u201324 GB<\/td><td>7B, 14B (quantized 4-bit)<\/td><\/tr><tr><td>32\u201348 GB<\/td><td>14B, 20B (quantized 4\/8-bit), 70B (4-bit)<\/td><\/tr><tr><td>48\u201396 GB (multiple GPUs)<\/td><td>70B (8-bit quantized)<\/td><\/tr><tr><td>96 GB+ (multiple GPUs)<\/td><td>70B (16-bit precision or multiple instances)<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>For instance:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>RTX 3060 (12GB)<\/strong>: Recommended for models of around 7\u20138B parameters (e.g., DeepSeek-r1:8B). You can still install models of up to 13B if you accept some performance degradation.<\/li>\n\n\n\n<li><strong>RTX 4080 (16GB)<\/strong>: Recommended for 7\u201314B models (quantized 4-bit).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Alerting and Notifications<\/h3>\n\n\n\n<p>When adding new models:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>If your GPU meets recommended requirements:\n<ul class=\"wp-block-list\">\n<li>No warnings or restrictions.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li>If your GPU is below recommended but above minimum:\n<ul class=\"wp-block-list\">\n<li>A non-blocking warning informs you about potential performance issues.<\/li>\n\n\n\n<li>Example warning: \u26a0\ufe0f <strong>Performance Notice:<\/strong> Your GPU VRAM is below the recommended amount. 
This model may run slower and process fewer tasks per second.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li>If your GPU does not meet minimum requirements:\n<ul class=\"wp-block-list\">\n<li>A blocking alert prevents installation.<\/li>\n\n\n\n<li>Example alert: \ud83d\udd34 <strong>Insufficient VRAM:<\/strong> Your GPU VRAM does not meet the minimum requirements for this model. Please upgrade your GPU or select a smaller model. <a href=\"https:\/\/shareai.example.com\/device-requirements\" rel=\"nofollow noopener\" target=\"_blank\">Learn more<\/a>.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Special Cases<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>K-Quant Overhead:<\/strong> Models quantized with K-quants (K_S, K_M, K_L) carry additional VRAM overhead; ShareAI automatically factors this into its recommendations.<\/li>\n\n\n\n<li><strong><code>:latest<\/code> Models:<\/strong>\n<ul class=\"wp-block-list\">\n<li>Since <code>:latest<\/code> models don&#8217;t specify a parameter count upfront, ShareAI checks them with the <code>ollama show<\/code> command before allowing sharing.<\/li>\n\n\n\n<li>If this check shows that your GPU does not meet the model&#8217;s minimum requirements, the model will not be activated.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Final Recommendations<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Regularly review GPU recommendations within the ShareAI interface.<\/li>\n\n\n\n<li>Keep your GPU drivers updated for optimal performance.<\/li>\n\n\n\n<li>Consider a GPU upgrade if you wish to contribute significantly to the network or host larger models.<\/li>\n<\/ul>\n\n\n\n<p>Following these guidelines keeps your contributions efficient and the decentralized ShareAI network healthy.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The ShareAI application automatically recommends which AI models are suitable for sharing based on your device&#8217;s GPU VRAM. 
To help you maximize your contribution while ensuring optimal performance, we use two types of requirements: 1. Recommended Requirements 2. Minimum Requirements GPU VRAM Categorization Based on your GPU&#8217;s VRAM, your device falls into one of the [&hellip;]<\/p>\n","protected":false},"featured_media":0,"template":"","docs-category":[30],"knowledge-base":[],"class_list":["post-869","documentation","type-documentation","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/shareai.now\/api\/wp\/v2\/documentation\/869","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/shareai.now\/api\/wp\/v2\/documentation"}],"about":[{"href":"https:\/\/shareai.now\/api\/wp\/v2\/types\/documentation"}],"version-history":[{"count":1,"href":"https:\/\/shareai.now\/api\/wp\/v2\/documentation\/869\/revisions"}],"predecessor-version":[{"id":870,"href":"https:\/\/shareai.now\/api\/wp\/v2\/documentation\/869\/revisions\/870"}],"wp:attachment":[{"href":"https:\/\/shareai.now\/api\/wp\/v2\/media?parent=869"}],"wp:term":[{"taxonomy":"docs-category","embeddable":true,"href":"https:\/\/shareai.now\/api\/wp\/v2\/docs-category?post=869"},{"taxonomy":"knowledge-base","embeddable":true,"href":"https:\/\/shareai.now\/api\/wp\/v2\/knowledge-base?post=869"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}