Nvidia has launched two groundbreaking “personal AI supercomputers” at GTC 2025, accelerating the transition to decentralised AI development through edge-based hardware.
The DGX Spark and DGX Station systems, powered by the Grace Blackwell chip architecture, are designed to empower researchers and enterprises to prototype, fine-tune, and deploy machine learning models directly at the edge, reducing reliance on cloud infrastructure.
The DGX Spark (formerly codenamed Project Digits) utilises the GB10 Grace Blackwell Superchip, which pairs a 20-core Arm CPU with a Blackwell GPU via the NVLink-C2C interconnect, delivering 5x the bandwidth of PCIe 5.0 for memory-intensive AI workloads such as large language model inference. The system delivers up to 1,000 AI TOPS (trillion operations per second), includes 128GB of unified LPDDR5x memory, and ships preinstalled with Nvidia’s AI software stack.
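As a rough back-of-envelope illustration (our own weights-only arithmetic, not a vendor figure), unified memory capacity bounds the largest model a system like this can hold for inference, and precision decides how far 128GB stretches:

```python
def model_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate memory needed for model weights alone,
    ignoring activations and KV cache."""
    return params_billion * 1e9 * bytes_per_param / 1e9

# 128 GB of unified LPDDR5x memory, as on the DGX Spark
spark_memory_gb = 128

# A hypothetical 70B-parameter model:
fp16 = model_memory_gb(70, 2.0)  # FP16: 2 bytes/param -> 140 GB, exceeds 128 GB
fp4 = model_memory_gb(70, 0.5)   # 4-bit: 0.5 bytes/param -> 35 GB, fits easily
```

The same arithmetic explains why low-precision formats matter so much for edge inference: quartering the bytes per parameter quadruples the model size that fits in a fixed memory budget.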
The DGX Station features the GB300 Grace Blackwell Ultra Desktop Superchip, pairing a Blackwell Ultra GPU with a Grace CPU. This configuration provides 784GB of coherent memory, a capacity aimed at large-scale training and inference workloads, and supports FP4 precision for faster, more memory-efficient generative AI inference. Both systems utilise ConnectX networking (ConnectX-7 for Spark, ConnectX-8 for Station), enabling high-speed data transfers and multi-node scalability.
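For a sense of scale (again our own weights-only estimate, not a vendor claim), FP4 stores each parameter in half a byte, so 784GB of coherent memory can hold on the order of 1.5 trillion parameters of weights; training at higher precision, with gradients and optimiser state, needs several times more memory per parameter:

```python
# Weights-only parameter capacity of 784 GB of coherent memory at FP4
station_memory_gb = 784
bytes_per_param_fp4 = 0.5  # 4 bits per parameter

max_params_trillion = station_memory_gb * 1e9 / bytes_per_param_fp4 / 1e12
print(round(max_params_trillion, 3))  # about 1.568 trillion parameters
```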
“This is the computer of the age of AI. This is what computers should look like, and this is what computers will run in the future,” stated Nvidia CEO Jensen Huang.
DGX Spark is immediately available through Nvidia’s website and partners like Asus, with configurations starting at $2,999 for Asus’s Ascent GX10 variant with 1TB of storage. DGX Station will launch later in 2025 via collaborations with Asus, Boxx, Dell, HP, and Lenovo.
These announcements align with Nvidia’s push into AI-driven robotics, including partnerships with Disney Research and Google DeepMind on the Newton physics engine and the GR00T N1 foundation model for humanoid robots. The Blackwell architecture’s secure AI capabilities and dedicated RAS engine further position these systems for enterprise adoption in data-sensitive sectors.
Nvidia aims to accelerate the transition from experimental AI projects to production-ready deployments across industries by enabling edge-based AI development.
“AI agents will be everywhere. How they run, what enterprises run, and how we run it will be fundamentally different. And so we need a new line of computers. And this is it,” Huang added.