
Our IPU lets innovators make new breakthroughs in machine intelligence


IPU-M2000

The core building block for AI infrastructure. The IPU-M2000 packs 1 petaFLOP of AI compute in a slim 1U blade.

Learn More

IPU-POD16

Pre-configured as a 4 petaFLOP AI system, the IPU-POD16 is where you experience the power and flexibility of larger IPU systems.

Learn More

IPU-POD64

Delivering 16 petaFLOPS of AI compute for both training and inference workloads, the IPU-POD64 is designed for AI at scale.

Learn More

Graphcloud

A secure IPU cloud service that adds state-of-the-art AI compute on demand, with no on-premise infrastructure deployment required.

Request Access
BERT-Large: Training

Best for Natural Language Processing

The IPU delivers impressive performance for NLP. The IPU-POD64 trains BERT-Large over 2.5 times faster than a comparable DGX A100 platform, cutting hours from AI development cycles.

EfficientNet-B0: Inference

Best for Computer Vision

The IPU-M2000 delivers a significant performance advantage over the Nvidia A100 GPU. Running EfficientNet on the IPU is straightforward and doesn't require the extra INT8 quantisation effort that can also degrade accuracy.
