
Our IPU lets innovators make new breakthroughs in machine intelligence


IPU-POD16

Pre-configured with 4 petaFLOPS of AI compute, IPU-POD16 is where you experience the power and flexibility of larger IPU systems.

Learn More

IPU-POD64

Delivering 16 petaFLOPS of AI compute for both training and inference workloads, the IPU-POD64 is designed for AI at scale.

Learn More

IPU-POD256

With a powerful 64 petaFLOPS of AI compute, our flagship IPU-POD256 is designed to provide cutting-edge performance for machine intelligence at datacenter scale.

Learn More

Graphcloud

A secure IPU cloud service to add state-of-the-art AI compute on demand, with no on-premise infrastructure deployment required.

Request Access

Best for Natural Language Processing

BERT-Large trained in 5.88 minutes. That is the blisteringly fast result you get on an IPU-POD128 in the latest MLPerf v1.1 benchmarks, in the open category for production hardware, delivering real benefits to our customers.

MLPerf v1.1 Training Results | MLPerf ID: 1.1-2088, 1.1-2089, 1.1-2087

Best for Computer Vision

Our latest MLPerf submission for ResNet-50 shows the IPU-POD16 platform outperforming the latest DGX A100 platform, training ResNet-50 in 28.3 minutes.

IPU systems really shine with next-generation CV models like EfficientNet. With the IPU-POD's impressive scaling capabilities, you can train EfficientNet in 1.7 hours on an IPU-POD256, while an IPU-POD16 completes training in 20.7 hours - 3.5x faster than the 70.5-hour published result for DGX A100.

IPU-POD SDK2.4 Results

DGX A100 (A100-SXM4-80GB) | Published results

Best for Large Models

IPU-POD systems can support today's large AI models, as demonstrated by the impressive scaling of GPT-class models.

The performance capabilities of the larger IPU-POD systems, fully managed by the Poplar SDK, allow customers to experiment and to build new types of large models.

IPU-POD systems can scale to support the largest brain-scale models with trillions of parameters. 

SDK2.3 Throughput Results

Graphcore claims its IPU-POD outperforms Nvidia A100 in model training

Learn more

Graphcore brings new competition to Nvidia in latest MLPerf AI benchmarks

Learn more

Man Group-Oxford quants say their AI can predict stock moves

Learn more