
DEPLOY AND SCALE: IPU-POD64

Explore new horizons with AI at scale

Order Now
Scalability

Unprecedented scalability

Simplify and streamline your AI datacenter scale-out with pre-configured, pre-approved IPU-POD reference designs

Compute

Powerful AI compute

Challenge the status quo by choosing powerful, parallel AI compute and differentiate your business with new AI breakthroughs

Data

Software designed for AI at scale

Poplar flexibly and simply compiles AI models across any number of IPUs, giving you back precious development time

Comms

Simple, powerful built-in networking

IPU-Fabric is designed from the ground up for AI, providing close-to-constant communication latency and ready to extend with near-limitless scale

IPU-POD64

Get Started with IPU-POD64

Start with one IPU-POD64 and scale to AI supercomputer size with flexible, pre-configured reference designs and approved technology ecosystem partners. Take advantage of systems integration skills from our elite partner network to build out your IPU-based dedicated AI infrastructure.

Learn More
Built For:

World-class results, whether you want to explore innovative models and new possibilities, accelerate time to train, increase throughput, or maximize performance per TCO dollar.

MLPerf v1.0 Training Results | MLPerf ID: 1.0-1027, 1.0-1099

MLPerf v1.0 Training Results | MLPerf ID: 1.0-1026, 1.0-1028

IPU-POD16

IPU-POD16

Pre-configured with a 4 petaFLOP AI system, IPU-POD16 is where you experience the power and flexibility of larger IPU systems.

Learn more

IPU-M2000 

Our core building block for AI infrastructure. The IPU-M2000 packs 1 petaFLOP of AI compute in a slim 1U blade.

Learn more

Graphcloud

A secure IPU cloud service to add state-of-the-art AI compute on demand, with no on-premises infrastructure deployment required.

Learn more