The core building block for AI infrastructure. The IPU-M2000 packs 1 petaFLOP of AI compute in a slim 1U blade.
A pre-configured 4 petaFLOPS AI system, the IPU-POD16 is where you experience the power and flexibility of larger IPU systems.
With 16 petaFLOPS of AI compute for both training and inference workloads, the IPU-POD64 is designed for AI at scale.
A secure IPU cloud service to add state-of-the-art AI compute on demand - no on-premises infrastructure deployment required.
State-of-the-art performance for natural language processing, computer vision and much more
BERT-Large: TTT (time-to-train)
Best for Natural Language Processing
The IPU delivers impressive performance for NLP. The IPU-POD64 trains BERT-Large over 2.5 times faster than comparable DGX A100 platforms, cutting hours from AI development cycles.
EfficientNet-B0: Inference
Best for Computer Vision
The IPU-M2000 delivers a significant performance advantage over the Nvidia A100 GPU. Running EfficientNet on the IPU is straightforward and avoids the extra INT8 quantisation work, which can also degrade accuracy.
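To illustrate, here is a minimal sketch of FP16 EfficientNet-B0 inference using PopTorch, Graphcore's PyTorch interface. It assumes the poptorch and timm packages and access to an IPU; the model source, batch size, and input shape are illustrative assumptions, not details from the benchmark above.

```python
# Minimal sketch: FP16 EfficientNet-B0 inference via PopTorch.
# Assumes poptorch and timm are installed and an IPU is attached;
# model source and input shape are illustrative.
import torch
import poptorch
import timm

model = timm.create_model("efficientnet_b0", pretrained=True)
model.eval()
model.half()  # FP16 throughout -- no INT8 quantisation pass required

opts = poptorch.Options()
ipu_model = poptorch.inferenceModel(model, opts)  # compiles for the IPU on first call

x = torch.randn(1, 3, 224, 224).half()  # dummy ImageNet-sized input
logits = ipu_model(x)
print(logits.shape)  # (1, 1000) class scores
```

The point of the sketch is that the model runs in FP16 as-is: there is no separate calibration or quantisation step before deployment, which is where the INT8 accuracy risk mentioned above would otherwise arise.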