{"id":2447,"date":"2026-05-04T21:20:20","date_gmt":"2026-05-04T18:20:20","guid":{"rendered":"https:\/\/shareai.now\/?p=2447"},"modified":"2026-05-12T03:21:55","modified_gmt":"2026-05-12T00:21:55","slug":"monetize-gpu-shareai","status":"publish","type":"post","link":"https:\/\/shareai.now\/blog\/case-studies\/monetize-gpu-shareai\/","title":{"rendered":"How to Monetize GPU Idle Time with ShareAI"},"content":{"rendered":"\n<p>If you\u2019ve bought a powerful GPU for gaming, AI, or mining, you\u2019ve probably wondered how to<strong> monetize GPU <\/strong>when you\u2019re not using it. Most of that time, your hardware is just burning electricity and depreciating. <strong>ShareAI<\/strong> lets you monetize idle GPU time by renting it out for AI inference workloads, so you get paid for the <strong>\u201cdead time\u201d<\/strong> your GPUs and servers would normally waste.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-css-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">TL;DR: Why Monetizing GPU Dead Time with ShareAI Works<\/h2>\n\n\n\n<figure class=\"wp-block-image size-large\"><img fetchpriority=\"high\" decoding=\"async\" width=\"1024\" height=\"547\" src=\"https:\/\/shareai.now\/wp-content\/uploads\/2025\/09\/shareai-1024x547.jpg\" alt=\"monetize gpu\" class=\"wp-image-1672\" srcset=\"https:\/\/shareai.now\/wp-content\/uploads\/2025\/09\/shareai-1024x547.jpg 1024w, https:\/\/shareai.now\/wp-content\/uploads\/2025\/09\/shareai-300x160.jpg 300w, https:\/\/shareai.now\/wp-content\/uploads\/2025\/09\/shareai-768x410.jpg 768w, https:\/\/shareai.now\/wp-content\/uploads\/2025\/09\/shareai-1536x820.jpg 1536w, https:\/\/shareai.now\/wp-content\/uploads\/2025\/09\/shareai.jpg 1896w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Dead time \u21d2 lost money.<\/strong> Consumer and datacenter GPUs often sit under-utilized, especially outside peak hours.<\/li>\n\n\n\n<li><strong>ShareAI aggregates demand<\/strong> from startups that need on-demand inference and routes it to your hardware.<\/li>\n\n\n\n<li><strong>You get paid per token served<\/strong>, without dealing with DevOps or renting whole machines to strangers.<\/li>\n<\/ul>\n\n\n\n<p><a href=\"https:\/\/shareai.now\/models\/?utm_source=blog&amp;utm_medium=content&amp;utm_campaign=monetize-gpu-shareai\">Browse Models<\/a><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">How ShareAI Turns Idle GPUs into Income (No Server Management)<\/h2>\n\n\n\n<p>ShareAI operates a decentralized GPU grid that matches <strong>real-time inference jobs<\/strong> to available devices. You run a lightweight provider agent; the network handles <strong>model dispatch, routing, and failover<\/strong>. Instead of chasing gigs, you\u2019re simply <strong>online when you want<\/strong> and earn whenever your GPU serves tokens.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Pay-per-token, not \u201crent-my-rig\u201d<\/h3>\n\n\n\n<p>Traditional rentals lock your box for hours or days\u2014great when it\u2019s busy, awful when it\u2019s idle. ShareAI flips this: <strong>you earn on usage<\/strong>, so the moment <strong>demand pauses, your cost exposure is zero<\/strong>. 
\n\n\n\n<ul class=\"wp-block-list\">\n<li>For founders: you pay per token consumed (no 24\/7 idling on expensive instances).<\/li>\n\n\n\n<li>For providers: you <strong>capture demand spikes<\/strong> from many buyers you\u2019d never reach alone.<\/li>\n<\/ul>\n\n\n\n<p><a href=\"https:\/\/shareai.now\/docs\/api\/using-the-api\/getting-started-with-shareai-api\/?utm_source=blog&amp;utm_medium=content&amp;utm_campaign=monetize-gpu-shareai\">API \u2014 Getting started<\/a><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The Money Flow: Who Pays, Who Gets Paid<\/h2>\n\n\n\n<ol class=\"wp-block-list\">\n<li>A developer calls ShareAI for a model (e.g., a Llama family text model).<\/li>\n\n\n\n<li>The network routes the request to a compatible node (your GPU).<\/li>\n\n\n\n<li>Tokens stream back; <strong>payouts accrue to you<\/strong> based on tokens served.<\/li>\n\n\n\n<li>If your node goes offline mid-job, <strong>automatic failover<\/strong> keeps the user happy while your session simply ends\u2014no manual babysitting.<\/li>\n<\/ol>\n\n\n\n<p>Because ShareAI <strong>pools demand<\/strong>, your GPU can stay busy <strong>only when it makes sense<\/strong>\u2014exactly when <strong>buyers<\/strong> need throughput and you\u2019re <strong>available<\/strong>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Step-by-Step: <em>Monetize GPU<\/em> in Minutes (Provider Path)<\/h2>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Check hardware &amp; VRAM<\/strong><br>8\u201324 GB VRAM works for many text models; more VRAM unlocks larger models\/vision tasks. Stable thermals and a reliable uplink help.<\/li>\n\n\n\n<li><strong>Create your account<\/strong><br><a href=\"https:\/\/console.shareai.now\/?login=true&amp;type=login&amp;utm_source=shareai.now&amp;utm_medium=content&amp;utm_campaign=monetize-gpu-shareai\">Create or access your account<\/a><\/li>\n\n\n\n<li><strong>Install the provider agent<\/strong><br>Follow the Provider Guide to install, register your device, and pass basic checks.<br>Docs: <a href=\"https:\/\/shareai.now\/docs\/provider\/manage\/overview\/?utm_source=blog&amp;utm_medium=content&amp;utm_campaign=monetize-gpu-shareai\">Provider Guide<\/a><\/li>\n\n\n\n<li><strong>Choose what you serve<\/strong><br>Opt into queues that fit your VRAM (e.g., 7B\/13B text models, lightweight vision). 
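More availability windows = more earnings (a VRAM-fit sketch follows this list).<\/li>\n\n\n\n<li><strong>Go online and earn<\/strong><br>When you\u2019re not gaming or training locally, toggle your node online and let ShareAI route work automatically.<\/li>\n\n\n\n<li><strong>Track earnings and uptime<\/strong><br>Use the Provider Dashboard (via Console) to monitor sessions, tokens, and payouts.<br>Console (keys, usage): <a href=\"https:\/\/console.shareai.now\/app\/api-key\/?utm_source=shareai.now&amp;utm_medium=content&amp;utm_campaign=monetize-gpu-shareai\">Create API Key<\/a> \u2022 User Guide: <a href=\"https:\/\/shareai.now\/docs\/about-shareai\/console\/glance\/?utm_source=blog&amp;utm_medium=content&amp;utm_campaign=monetize-gpu-shareai\">Console overview<\/a><\/li>\n<\/ol>\n\n\n\n<p>If you are unsure which queues your card can hold, a quick rule-of-thumb helper can save trial and error. The thresholds below mirror the VRAM FAQ later in this post and are approximations, not ShareAI requirements:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># Rough VRAM rule of thumb from this guide, wrapped as a helper.\n# Thresholds are approximate (quantization shifts them); treat as assumptions.\nQUEUE_HINTS = [\n    (8, '7B text models (often quantized) usually fit'),\n    (16, '13B text models generally prefer this tier'),\n    (24, 'larger text and vision models benefit from 24 GB plus'),\n]\n\ndef queues_for(vram_gb):\n    # Return the queue tiers this card can comfortably opt into.\n    return [hint for need, hint in QUEUE_HINTS if vram_gb >= need]\n\nfor card, vram in [('RTX 3060', 12), ('RTX 4080', 16), ('RTX 4090', 24)]:\n    print(card, '-', '; '.join(queues_for(vram)))<\/code><\/pre>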
\n\n\n\n<h3 class=\"wp-block-heading\">Optimization Playbook for Providers<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Match VRAM to queues:<\/strong> Prioritize models that fit comfortably; avoid edge-case OOMs that cut sessions short.<\/li>\n\n\n\n<li><strong>Plan availability windows:<\/strong> If you game nightly, set your node online during work hours or overnight\u2014<strong>when demand spikes<\/strong>.<\/li>\n\n\n\n<li><strong>Network stability matters:<\/strong> Wired or solid Wi-Fi keeps throughput steady and reduces failovers.<\/li>\n\n\n\n<li><strong>Thermals &amp; power:<\/strong> Keep temps in check; consistent clocks = consistent earnings.<\/li>\n\n\n\n<li><strong>Scale out:<\/strong> If you own multiple GPUs or a small server, onboard them incrementally to test thermals, noise, and net margins.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Step-by-Step: Founders Use ShareAI for Elastic, Low-Cost Inference (Buyer Path)<\/h2>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Create an API key<\/strong> in Console: <a href=\"https:\/\/console.shareai.now\/app\/api-key\/?utm_source=shareai.now&amp;utm_medium=content&amp;utm_campaign=monetize-gpu-shareai\">Create API Key<\/a><\/li>\n\n\n\n<li><strong>Pick a model<\/strong> from the marketplace (150+ options): <a href=\"https:\/\/shareai.now\/models\/?utm_source=blog&amp;utm_medium=content&amp;utm_campaign=monetize-gpu-shareai\">Browse Models<\/a><\/li>\n\n\n\n<li><strong>Route by latency\/price\/region<\/strong> via request preferences; ShareAI handles <strong>failover<\/strong> and <strong>multi-node scaling<\/strong> (see the request sketch below).<\/li>\n\n\n\n<li><strong>Stop paying for idle time:<\/strong> usage-based economics replace 24\/7 GPU leases.<\/li>\n\n\n\n<li><strong>Test prompts quickly<\/strong> in the Chat Playground: <a href=\"https:\/\/console.shareai.now\/chat\/?utm_source=shareai.now&amp;utm_medium=content&amp;utm_campaign=monetize-gpu-shareai\">Open Playground<\/a><\/li>\n<\/ol>\n\n\n\n<p><em>Bonus:<\/em> If you already run training elsewhere, keep it there. 
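Use ShareAI <strong>only for inference<\/strong>, turning a fixed cost into a <strong>pure variable<\/strong> one.<\/p>\n\n\n\n<p>For a feel of the integration, here is a minimal request sketch in Python. The endpoint URL, payload fields, and preference names below are placeholder assumptions, not the documented ShareAI shapes; copy the real ones from the <a href=\"https:\/\/shareai.now\/docs\/api\/using-the-api\/getting-started-with-shareai-api\/?utm_source=blog&amp;utm_medium=content&amp;utm_campaign=monetize-gpu-shareai\">API guide<\/a> when you wire it up:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># Minimal sketch of a pay-per-token inference call (requests library).\n# The endpoint path, payload fields, and response shape are PLACEHOLDER\n# assumptions; substitute the shapes from the ShareAI API docs.\nimport os\nimport requests\n\nAPI_KEY = os.environ['SHAREAI_API_KEY']           # created in the Console\nENDPOINT = 'https:\/\/api.example.invalid\/v1\/chat'  # placeholder URL\n\nresp = requests.post(\n    ENDPOINT,\n    headers={'Authorization': f'Bearer {API_KEY}'},\n    json={\n        'model': 'llama-family-text-model',  # pick a real ID from Browse Models\n        'messages': [{'role': 'user', 'content': 'Summarize our release notes.'}],\n        # Hypothetical routing-preference fields (names are assumptions):\n        # 'region': 'eu', 'max_price_per_1k_tokens': 0.5, 'fallback': True,\n    },\n    timeout=60,\n)\nresp.raise_for_status()\nprint(resp.json())<\/code><\/pre>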
\n\n\n\n<h2 class=\"wp-block-heading\">Architecture Patterns We Recommend<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Hybrid training\/inference:<\/strong> Keep training on your preferred cloud\/on-prem; offload inference to ShareAI to absorb volatile user traffic.<\/li>\n\n\n\n<li><strong>Burst mode:<\/strong> Keep your core serving minimal; burst overflow to ShareAI during launches and marketing spikes.<\/li>\n\n\n\n<li><strong>A\/B or \u201cmodel roulette\u201d:<\/strong> Route a slice of traffic across multiple open models to optimize cost\/quality without spinning up new fleets.<\/li>\n<\/ul>\n\n\n\n<p><a href=\"https:\/\/shareai.now\/docs\/api\/using-the-api\/getting-started-with-shareai-api\/?utm_source=blog&amp;utm_medium=content&amp;utm_campaign=monetize-gpu-shareai\">API \u2014 Getting started<\/a><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Case Study (Provider): From Evening Gamer \u2192 Paid \u201cDead Time\u201d<\/h2>\n\n\n\n<p><strong>Profile:<\/strong><br>\u2022 1\u00d7 RTX 3080 (10 GB VRAM) in a home PC.<br>\u2022 Owner games 19:00\u201322:00 and is offline some weekends.<\/p>\n\n\n\n<p><strong>Setup:<\/strong><br>\u2022 Provider agent installed; node set <strong>online<\/strong> 08:00\u201318:00 and 22:30\u201301:00 (weekday windows).<br>\u2022 Subscribed to <strong>7B\/13B text<\/strong> queues; occasional vision jobs that fit.<\/p>\n\n\n\n<p><strong>Outcome (illustrative):<\/strong><br>\u2022 The node served steady weekday daytime demand plus late-night bursts.<br>\u2022 Earnings track <strong>tokens served<\/strong>, not clock hours, so <strong>short, hot periods<\/strong> count more than long idle periods.<br>\u2022 After month 1, the provider adjusted windows to overlap with the network\u2019s <strong>peak demand<\/strong> and increased their effective hourly revenue.<\/p>\n\n\n\n<p><strong>What changed:<\/strong><br>\u2022 The GPU\u2019s <strong>dead time<\/strong> became <strong>paid time<\/strong>.<br>\u2022 Electricity usage rose modestly during on-windows, but net was positive because <strong>utilized compute pays<\/strong> while idle doesn\u2019t.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Case Study (Founder): Inference Bill Cut by Aligning Costs to Usage<\/h2>\n\n\n\n<p><strong>Before:<\/strong><br>\u2022 2\u00d7 A100 instances parked 24\/7 to avoid cold starts for a generative feature.<br>\u2022 Average <strong>utilization &lt;40%<\/strong>; bill didn\u2019t care\u2014instances ran anyway.<\/p>\n\n\n\n<p><strong>After (ShareAI):<\/strong><br>\u2022 Switched to <strong>pay-per-token<\/strong> inference via ShareAI.<br>\u2022 Kept a small internal endpoint for batch jobs; <strong>spiky, interactive<\/strong> requests went to the grid.<br>\u2022 Built-in <strong>failover<\/strong> and <strong>multi-node routing<\/strong> maintained SLA.<\/p>\n\n\n\n<p><strong>Result:<\/strong><br>\u2022 Monthly inference cost <strong>tracked usage<\/strong>, not time, improving <strong>gross margins<\/strong> and freeing the team from constant GPU capacity planning.<\/p>\n\n\n\n<p><a href=\"https:\/\/aws.amazon.com\/?utm_source=shareai.now&amp;utm_medium=content&amp;utm_campaign=monetize-gpu-shareai\">AWS (industry resources)<\/a><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Economics Deep Dive: When Monetizing Beats DIY Hosting<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Why small apps get crushed by underutilization<\/h3>\n\n\n\n<p>Running your own 
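GPU for a light workload often means <strong>paying for idle hours<\/strong>. Large API providers win via <strong>massive batching<\/strong>; ShareAI gives smaller apps similar efficiency by <strong>pooling<\/strong> many buyers\u2019 traffic on shared nodes.<\/p>\n\n\n\n<p>The damage is easy to quantify. With an assumed instance price and throughput (both placeholders that vary widely by setup), the effective price of a token balloons as utilization falls, which is exactly what the sub-40% utilization case study above ran into:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># Why utilization dominates DIY economics. Both figures below are\n# illustrative placeholders; real prices and throughput vary widely.\nHOURLY_COST = 2.00       # $\/hr for a dedicated GPU instance (assumed)\nTOKENS_PER_SECOND = 50   # assumed sustained throughput at full load\n\ndef cost_per_1k_tokens(utilization):\n    # You pay for the whole hour; only the utilized slice produces tokens.\n    tokens_per_hour = TOKENS_PER_SECOND * 3600 * utilization\n    return HOURLY_COST \/ tokens_per_hour * 1000\n\nfor u in (1.0, 0.4, 0.05):\n    print(f'{u:5.0%} utilized: ${cost_per_1k_tokens(u):.4f} per 1K tokens')<\/code><\/pre>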
\n\n\n\n<h3 class=\"wp-block-heading\">Break-even intuition (illustrative)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Light load:<\/strong> You\u2019ll typically <strong>save<\/strong> with pay-per-token vs. renting a full GPU 24\/7.<\/li>\n\n\n\n<li><strong>Medium load:<\/strong> Mix and match\u2014pin a small baseline, burst the rest.<\/li>\n\n\n\n<li><strong>Heavy load:<\/strong> Dedicated capacity can make sense; many teams still keep ShareAI for <strong>overflow<\/strong> or <strong>regional<\/strong> coverage.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Sensitivities that matter<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>VRAM tiers:<\/strong> Bigger VRAM unlocks bigger models (higher token-throughput jobs).<\/li>\n\n\n\n<li><strong>Bandwidth &amp; locality:<\/strong> Close to demand = lower latency, more volume for your node.<\/li>\n\n\n\n<li><strong>Model choice:<\/strong> Smaller, efficient models (quantized\/optimized) often yield <strong>more tokens per watt<\/strong>\u2014good for both sides.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Trust, Quality, and Control<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Isolation:<\/strong> Jobs are dispatched through the ShareAI runtime; model weights and data handling follow the network\u2019s isolation controls.<\/li>\n\n\n\n<li><strong>Failover by design:<\/strong> If a provider drops mid-stream, <strong>another node<\/strong> completes the work\u2014founders don\u2019t chase incidents, providers aren\u2019t penalized for normal life events.<\/li>\n\n\n\n<li><strong>Transparent reporting:<\/strong> Providers see sessions, tokens, earnings; founders see requests, tokens, spend.<\/li>\n\n\n\n<li><strong>Updates:<\/strong> New\/optimized model variants appear in the marketplace without you rebuilding your fleet.<\/li>\n<\/ul>\n\n\n\n<p><a href=\"https:\/\/shareai.now\/releases\/?utm_source=blog&amp;utm_medium=content&amp;utm_campaign=monetize-gpu-shareai\">Releases<\/a><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Provider Onboarding Checklist<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>GPU &amp; VRAM<\/strong> meet queue requirements (e.g., \u22658 GB for many 7B models).<\/li>\n\n\n\n<li><strong>Stable drivers<\/strong> + recent CUDA stack (per provider guide).<\/li>\n\n\n\n<li><strong>Agent installed<\/strong> and device verified.<\/li>\n\n\n\n<li><strong>Uplink is stable<\/strong> (wired preferred) and ports available.<\/li>\n\n\n\n<li><strong>Thermals\/power<\/strong> checked for sustained sessions.<\/li>\n\n\n\n<li><strong>Availability windows<\/strong> set to overlap with likely demand.<\/li>\n\n\n\n<li><strong>Payout details<\/strong> configured in Console.<\/li>\n<\/ul>\n\n\n\n<p><a href=\"https:\/\/shareai.now\/docs\/provider\/manage\/overview\/?utm_source=blog&amp;utm_medium=content&amp;utm_campaign=monetize-gpu-shareai\">Provider Guide<\/a><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Founder Integration Checklist<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>API key<\/strong> created and scoped: <a href=\"https:\/\/console.shareai.now\/app\/api-key\/?utm_source=shareai.now&amp;utm_medium=content&amp;utm_campaign=monetize-gpu-shareai\">Create API Key<\/a><\/li>\n\n\n\n<li><strong>Model selected<\/strong> 
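with acceptable latency\/price: <a href=\"https:\/\/shareai.now\/models\/?utm_source=blog&amp;utm_medium=content&amp;utm_campaign=monetize-gpu-shareai\">Browse Models<\/a><\/li>\n\n\n\n<li><strong>Routing preferences<\/strong> set (region, price ceiling, fallback).<\/li>\n\n\n\n<li><strong>Cost guardrails<\/strong> (daily\/monthly caps) monitored in Console; a client-side sketch follows this list.<\/li>\n\n\n\n<li><strong>Playground smoke-tests<\/strong> for prompts: <a href=\"https:\/\/console.shareai.now\/chat\/?utm_source=shareai.now&amp;utm_medium=content&amp;utm_campaign=monetize-gpu-shareai\">Open Playground<\/a><\/li>\n\n\n\n<li><strong>Observability<\/strong> wired for requests\/tokens\/spend in your stack.<\/li>\n<\/ul>\n\n\n\n<p><a href=\"https:\/\/shareai.now\/docs\/api\/using-the-api\/getting-started-with-shareai-api\/?utm_source=blog&amp;utm_medium=content&amp;utm_campaign=monetize-gpu-shareai\">API \u2014 Getting started<\/a><\/p>\n\n\n\n<p>Console caps are the authoritative guardrail; a thin client-side meter is still a cheap safety net and doubles as observability. The sketch below is one way to do it, with a placeholder blended price rather than a real model rate:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># Minimal client-side spend meter to back up Console-level caps.\n# The per-token price is a placeholder; read the real rate from the\n# model page before trusting numbers like these.\nPRICE_PER_1K_TOKENS = 0.002  # $, assumed blended input+output price\nDAILY_CAP_USD = 5.00         # your own guardrail, not a ShareAI setting\n\nclass SpendMeter:\n    def __init__(self, cap_usd):\n        self.cap_usd = cap_usd\n        self.spent_usd = 0.0\n\n    def record(self, tokens):\n        # Accumulate cost and refuse further calls once the budget is gone.\n        self.spent_usd += tokens \/ 1000 * PRICE_PER_1K_TOKENS\n        if self.spent_usd >= self.cap_usd:\n            raise RuntimeError('daily inference budget exhausted')\n\nmeter = SpendMeter(DAILY_CAP_USD)\nmeter.record(tokens=120_000)  # e.g. read from a response usage field\nprint(f'spent so far: ${meter.spent_usd:.2f}')<\/code><\/pre>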
\n\n\n\n<h2 class=\"wp-block-heading\">FAQs<\/h2>\n\n\n\n<p><strong>Can I game and provide at the same time?<\/strong><br>You can, but we recommend toggling your node <strong>offline<\/strong> during intensive local use to avoid contention and throttling.<\/p>\n\n\n\n<p><strong>What if my machine goes offline mid-job?<\/strong><br>The network <strong>fails over<\/strong> to another node; you simply stop earning for that session.<\/p>\n\n\n\n<p><strong>Do I need enterprise-grade networking?<\/strong><br>No. A stable consumer connection works. Lower jitter and more uplink bandwidth help <strong>latency-sensitive<\/strong> queues.<\/p>\n\n\n\n<p><strong>Which models fit in 8\/12\/16\/24 GB VRAM?<\/strong><br>As a rule of thumb: 7B text models in 8\u201312 GB, <strong>13B<\/strong> often prefers <strong>\u226516 GB<\/strong>, and larger\/vision models benefit from <strong>24 GB+<\/strong>.<\/p>\n\n\n\n<p><strong>How and when are payouts scheduled?<\/strong><br>Payouts are based on <strong>tokens served<\/strong>. Set up your payout details in Console; see the Provider Guide for cadence specifics.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Conclusion: People-Powered AI Infra \u2014 Stop Wasting Dead Time, Start Earning<\/h2>\n\n\n\n<p>Monetizing GPU <strong>dead time<\/strong> used to be hard\u2014either you rented a whole rig or built a mini-cloud. <strong>ShareAI<\/strong> makes it <strong>push-button simple<\/strong>: run the agent when you\u2019re free, earn on <strong>actual usage<\/strong>, and let global demand find you. 
For founders, it\u2019s the same story in reverse: <strong>only pay when users generate tokens<\/strong>, not for silent GPUs waiting around.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Providers: <strong>Turn idle hours into income<\/strong> \u2014 start with the <a href=\"https:\/\/shareai.now\/docs\/provider\/manage\/overview\/?utm_source=blog&amp;utm_medium=content&amp;utm_campaign=monetize-gpu-shareai\">Provider Guide<\/a>.<\/li>\n\n\n\n<li>Founders: <strong>Ship elastic inference fast<\/strong> \u2014 start in the <a href=\"https:\/\/console.shareai.now\/chat\/?utm_source=shareai.now&amp;utm_medium=content&amp;utm_campaign=monetize-gpu-shareai\">Playground<\/a>, then wire the <a href=\"https:\/\/shareai.now\/docs\/api\/using-the-api\/getting-started-with-shareai-api\/?utm_source=blog&amp;utm_medium=content&amp;utm_campaign=monetize-gpu-shareai\">API<\/a>.<\/li>\n<\/ul>\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>If you\u2019ve bought a powerful GPU for gaming, AI, or mining, you\u2019ve probably wondered how to monetize GPU when you\u2019re not using it. Most of that time, your hardware is just burning electricity and depreciating. ShareAI lets you monetize idle GPU time by renting it out for AI inference workloads, so you get paid for [&hellip;]<\/p>\n","protected":false},"author":3,"featured_media":2452,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"cta-title":"Start Earning from GPU Idle Time","cta-description":"Turn your GPU\u2019s dead time into income. ShareAI routes real AI workloads to your hardware\u2014no server ops, pay-per-token, instant failover.","cta-button-text":"Create your API key","cta-button-link":"https:\/\/console.shareai.now\/app\/api-key\/?utm_source=shareai.now&amp;utm_medium=content&amp;utm_campaign=monetize-gpu-shareai","rank_math_title":"Monetize GPU Idle Time with ShareAI","rank_math_description":"Monetize GPU idle time: rent your hardware for AI inference and get paid for dead time. Founders pay per token; providers earn without server ops.","rank_math_focus_keyword":"monetize GPU,monetize GPU idle time,GPU dead time","footnotes":""},"categories":[2],"tags":[],"class_list":["post-2447","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-case-studies"],"_links":{"self":[{"href":"https:\/\/shareai.now\/api\/wp\/v2\/posts\/2447","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/shareai.now\/api\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/shareai.now\/api\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/shareai.now\/api\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/shareai.now\/api\/wp\/v2\/comments?post=2447"}],"version-history":[{"count":4,"href":"https:\/\/shareai.now\/api\/wp\/v2\/posts\/2447\/revisions"}],"predecessor-version":[{"id":2454,"href":"https:\/\/shareai.now\/api\/wp\/v2\/posts\/2447\/revisions\/2454"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/shareai.now\/api\/wp\/v2\/media\/2452"}],"wp:attachment":[{"href":"https:\/\/shareai.now\/api\/wp\/v2\/media?parent=2447"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/shareai.now\/api\/wp\/v2\/categories?post=2447"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/shareai.now\/api\/wp\/v2\/tags?post=2447"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}