
Our IPU lets innovators make new breakthroughs in machine intelligence

IPU-POD16

A pre-configured 4 petaFLOPS AI system, the IPU-POD16 is where you experience the power and flexibility of larger IPU systems.

Learn More
IPU-POD64

With 16 petaFLOPS of AI compute for both training and inference workloads, the IPU-POD64 is designed for AI at scale.

Learn More
IPU-POD128

With a powerful 32 petaFLOPS of AI compute, the IPU-POD128 is designed to provide cutting-edge performance for machine intelligence at datacenter scale.

Learn More
Graphcloud

A secure IPU cloud service that adds state-of-the-art AI compute on demand - no on-premises infrastructure deployment required.

Request Access

Best for Natural Language Processing

Train BERT in 9.39 minutes. This blisteringly fast result on an IPU-POD64 was measured in the industry-standard MLPerf benchmark for BERT, in the commercially available open category for production hardware.

Even the closed result for IPU-POD16 is seriously impressive - just under 35 minutes. We’re getting into supercomputer territory with a 5U machine you can buy from our partners or use in Graphcloud today.

MLPerf v1.0 Training Results | MLPerf ID: 1.0-1098, 1.0-1099

Best for Computer Vision

Train ResNet-50 in 14.48 minutes. In MLPerf's tightly regulated closed category, we achieve this stunning performance on an IPU-POD64, and a really impressive 37 minutes on our mainstream, commercially available IPU-POD16 system.

MLPerf v1.0 Training Results | MLPerf ID: 1.0-1026, 1.0-1028
