The core building block for AI infrastructure. The IPU-M2000 packs 1 petaFLOP of AI compute into a slim 1U blade.
Pre-configured as a 4 petaFLOP AI system, the IPU-POD16 is where you experience the power and flexibility of larger IPU systems.
With 16 petaFLOPS of AI compute for both training and inference workloads, the IPU-POD64 is designed for AI at scale.
A secure IPU cloud service to add state-of-the-art AI compute on demand, with no on-premises infrastructure deployment required.
WORLD-CLASS PERFORMANCE IN MLPERF RESULTS
State-of-the-art performance for natural language processing and computer vision, as seen in the latest MLPerf v1.0 training results.
Best for Natural Language Processing
Train BERT in 9.39 minutes. This blisteringly fast result on an IPU-POD64 was measured in the industry-standard MLPerf benchmark for BERT, in the commercially available open category for production hardware.
Even the closed-division result for the IPU-POD16 is seriously impressive: just under 35 minutes. We’re getting into supercomputer territory with a 5U machine you can buy from our partners or use in Graphcloud today.
MLPerf v1.0 Training Results | MLPerf ID: 1.0-1098, 1.0-1099
Best for Computer Vision
Train ResNet-50 in 14.48 minutes. In MLPerf’s tightly regulated closed division, we achieved stunning performance on an IPU-POD64, and a really impressive 37 minutes on our mainstream, commercially available IPU-POD16 system.
MLPerf v1.0 Training Results | MLPerf ID: 1.0-1026, 1.0-1028