AI isn’t just another workload—it’s a seismic shift in how infrastructure must perform. What once powered databases and virtual machines now struggles to keep up with the demands of training massive models, running inference at scale, and visualizing real-time data.
The pace of innovation is relentless. Just this month, NVIDIA introduced the RTX PRO 6000 Blackwell Server Edition, packing workstation-class visualization and AI acceleration into a card built for compact 2U servers. It's a clear signal that the hardware landscape is advancing at lightning speed, and static infrastructure can't keep up. Enterprises can't afford rigid designs that become obsolete as soon as the next GPU drops.
To thrive in this new era, enterprises need more than raw power. They need composability. Infrastructure must be modular, dynamic, and intelligent enough to adapt as fast as AI workloads do.
From racks and blades to composability
For years, organizations built around fixed rack or blade server designs that served predictable workloads well. But AI has shattered that predictability. Today's workloads are dynamic, data-intensive, and constantly evolving. Training models, running inference, and rendering high-performance visualizations demand far more flexibility than traditional architectures can offer.
That’s where composable infrastructure changes things. Instead of building applications around the limits of hardware, composability lets infrastructure adapt to the needs of applications. Compute, GPU, storage, and networking resources become modular, shared, and dynamically allocated. This gives IT teams the power to scale, shift, and optimize in real time.
Introducing UCS X-Series with X-Fabric Technology 2.0: composability for the AI era
The new Cisco UCS X580p PCIe Node, together with X-Fabric Technology 2.0 and cloud operations through Cisco Intersight, delivers on the promise of true composability for the AI era. This is more than a product refresh; it's a strategic step toward the Cisco Secure AI Factory with NVIDIA, where infrastructure and cloud management work together as one, adapting seamlessly to workloads over time.
And it’s built for what’s next. This latest form factor of UCS X-Series supports GPUs like the NVIDIA RTX PRO 6000 Blackwell Server Edition, so customers can take advantage of cutting-edge acceleration without needing to rip and replace infrastructure.
Here’s what that means in practice:
- AI-optimized infrastructure. The system supports GPU-accelerated workloads for training, inference, and high-performance visualization within a modular, composable architecture.
- Independent resource scaling. CPUs and GPUs can be scaled independently, with up to eight GPUs per chassis and shared GPU pools available across nodes.
- High-speed performance. PCIe Gen 5 delivers high-throughput performance with DPU-ready networking, optimized for the east-west GPU traffic that AI workloads generate.
- Intelligent resource allocation. GPU resources are dynamically allocated through policy-based orchestration in Cisco Intersight, enabling optimal utilization and improved total cost of ownership.
- Future-proof design. The modular architecture and disaggregated lifecycle management allow seamless integration of next-generation accelerators without requiring forklift upgrades.
This is the only modular server platform that unifies the latest GPUs and DPUs in a truly composable, cloud-managed system, operated and orchestrated through Cisco Intersight.
With Intersight, idle GPUs are a thing of the past. Policy-based allocation lets IT teams create a shared pool of GPU resources that can flex to meet demand. The result? GPUs go where they’re needed most and waste is reduced, maximizing performance and return on investment for the organization.
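To make the idea of a shared, policy-driven GPU pool concrete, here is a minimal Python sketch of how priority-based allocation over a pooled set of GPUs can work. The policy fields (priority, min_gpus, max_gpus), the GpuPolicy and GpuPool names, and the two-pass scheduling logic are illustrative assumptions for this post, not Intersight objects or API calls; in a real deployment this logic lives inside Intersight's policy engine.

```python
from dataclasses import dataclass

@dataclass
class GpuPolicy:
    """Illustrative allocation policy (hypothetical, not an Intersight object)."""
    name: str       # workload the policy applies to
    priority: int   # higher value is scheduled first
    min_gpus: int   # workload will not start below this
    max_gpus: int   # cap so one workload cannot drain the pool

@dataclass
class GpuPool:
    """A shared pool of GPUs spanning the nodes in a chassis."""
    free_gpus: int

    def allocate(self, requests: list[GpuPolicy]) -> dict[str, int]:
        """Grant GPUs by priority, honoring each policy's min/max bounds."""
        grants: dict[str, int] = {}
        by_priority = sorted(requests, key=lambda p: p.priority, reverse=True)
        # First pass: guarantee minimums to the highest-priority workloads.
        for policy in by_priority:
            if self.free_gpus >= policy.min_gpus:
                grants[policy.name] = policy.min_gpus
                self.free_gpus -= policy.min_gpus
        # Second pass: hand out remaining GPUs up to each workload's cap.
        for policy in by_priority:
            if policy.name not in grants:
                continue
            extra = min(policy.max_gpus - grants[policy.name], self.free_gpus)
            grants[policy.name] += extra
            self.free_gpus -= extra
        return grants

if __name__ == "__main__":
    pool = GpuPool(free_gpus=8)  # e.g. one chassis with eight GPUs
    requests = [
        GpuPolicy("training", priority=10, min_gpus=4, max_gpus=8),
        GpuPolicy("inference", priority=5, min_gpus=1, max_gpus=2),
        GpuPolicy("visualization", priority=1, min_gpus=1, max_gpus=2),
    ]
    # With 8 GPUs free: training gets 6, inference 1, visualization 1.
    print(pool.allocate(requests))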
Why composability is critical for AI infrastructure
The promise of AI isn’t realized by hardware alone—it’s realized by running AI like a service. That requires three things:
- Power. AI workloads demand massive parallel compute and GPU acceleration. Without sufficient performance, training slows, inference lags, and innovation stalls.
- Flexibility. Modern workloads evolve rapidly. Infrastructure must support independent scaling of CPUs and GPUs to meet changing demands without overprovisioning or waste.
- Composability. Intelligent orchestration is essential. With policy-driven management across clouds, composable infrastructure ensures resources are allocated where they’re needed most—automatically and efficiently.
With UCS X-Series and X-Fabric Technology 2.0, customers get all three in a single chassis. As GPU and DPU technologies evolve, the infrastructure evolves with them. That’s investment protection in action.
Building for what comes next
This launch is just one milestone in the Cisco composability journey. X-Fabric Technology 2.0 represents the next generation of a platform designed for continuous innovation.
As PCIe, GPU, and DPU technologies advance—including new accelerators like the NVIDIA RTX PRO 6000 Blackwell Server Edition—UCS X-Series will integrate them seamlessly, protecting investments and positioning customers for what comes next.
The future of infrastructure is composable. It’s about freedom from silos, agility without compromise, and confidence that your data center can adapt as fast as your business does.
At Cisco, we’re not just building servers for today. We’re laying the foundation for the AI-driven enterprise of tomorrow.
Ready to see how Cisco and NVIDIA are redefining enterprise AI? Explore Cisco UCS X-Series with X-Fabric Technology 2.0 and the Cisco Secure AI Factory with NVIDIA to learn more.
