Gruve Secures $50M to Launch 500MW Inference Infrastructure Without New Data Centers
Gruve announced a $50M Series A follow-on, bringing its total funding to $87.5M, alongside the launch of 500MW+ of distributed inference capacity. By partnering with industrial firms such as Lineage, Gruve taps existing excess power in Tier 1 and Tier 2 cities to deploy infrastructure quickly, without waiting for new data center permits. Its modular, high-density compute pods are designed for low-latency AI workloads, with an initial 30MW deployment across four U.S. sites and planned expansion into Japan and Western Europe.
