<img height="1" width="1" style="display:none" src="https://www.facebook.com/tr?id=145304570664993&amp;ev=PageView&amp;noscript=1">

IPU-POD64

Ramp up your AI projects, speed up production and see faster time to business value. IPU-POD64 is the powerful, flexible building block for world-leading AI performance in your enterprise datacenter, private or public cloud. Whether you're running large language models or relying on fast, accurate vision models, IPU-POD64 delivers the results you need today while giving you room to explore innovative AI solutions for tomorrow.

  • World-leading vision & language performance
  • Fine-grained compute & sparsity open up new innovation
  • Extensive resources & AI expert support so you're up and running fast

We are impressed with Graphcore’s technology for energy-efficient construction and execution of large, next-generation ML models, and we expect significant performance gains for several of our AI-oriented research projects in medical imaging and cardiac simulations.

Are Magnus Bruaset

Simula Research Laboratory

Natural Language Processing

Natural language processing (NLP) delivers business value today for organisations from finance firms to biotech leaders, and from scale-ups to hyperscalers, improving internet search, sentiment analysis, fraud detection, chatbots, drug discovery and more. Choose IPU-POD64 whether you are running large BERT models in production or starting to explore GPT-class models or GNNs.
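
As an illustrative sketch (not a Graphcore-published recipe), the snippet below shows how a Hugging Face BERT classifier might be compiled for a single IPU with PopTorch from the Poplar SDK; the checkpoint name and wrapper module are assumptions for illustration.

```python
# Illustrative sketch: BERT inference on one IPU via PopTorch.
# Assumes Graphcore's Poplar SDK (poptorch) and Hugging Face
# transformers are installed; the checkpoint name is illustrative.
import torch
import poptorch
from transformers import AutoTokenizer, AutoModelForSequenceClassification


class WrappedBert(torch.nn.Module):
    """Return plain tensors so PopTorch can trace the model."""

    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, input_ids, attention_mask):
        # return_dict=False yields a tuple; the logits are element 0
        out = self.model(input_ids=input_ids,
                         attention_mask=attention_mask,
                         return_dict=False)
        return out[0]


tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = WrappedBert(
    AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
).eval()

opts = poptorch.Options()  # defaults attach to a single available IPU
ipu_model = poptorch.inferenceModel(model, opts)

batch = tokenizer(["IPUs accelerate NLP workloads."],
                  padding="max_length", max_length=128,
                  truncation=True, return_tensors="pt")
logits = ipu_model(batch["input_ids"], batch["attention_mask"])
print(logits.shape)  # torch.Size([1, 2])
```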

Computer Vision

State-of-the-art computer vision is driving breakthroughs in medical imaging, claims processing, cosmology, smart cities, self-driving cars, and more. Traditional powerhouses like ResNet-50 and high-accuracy emerging models like EfficientNet run on IPU-POD64 with world-leading performance.
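
For illustration only, here is a minimal PopTorch sketch that compiles torchvision's ResNet-50 for IPU inference; the batch size and deviceIterations value are arbitrary choices, not tuned settings.

```python
# Illustrative sketch: ResNet-50 inference on an IPU with PopTorch.
# Assumes the Poplar SDK's poptorch package and torchvision are installed.
import torch
import poptorch
import torchvision

model = torchvision.models.resnet50().eval()

opts = poptorch.Options()
opts.deviceIterations(4)  # run 4 device iterations per host call

ipu_model = poptorch.inferenceModel(model, opts)

# With deviceIterations(4) and a per-iteration batch size of 2,
# the host feeds 4 x 2 = 8 images per call.
images = torch.randn(8, 3, 224, 224)
logits = ipu_model(images)
print(logits.shape)  # torch.Size([8, 1000])
```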

Scientific Research

National labs, universities, research institutes and supercomputing centres around the world are making scientific breakthroughs on IPU-POD64, exploiting the IPU's fine-grained compute to deploy emerging models like Graph Neural Networks (GNNs) and probabilistic models, explore sparsity and make the convergence of HPC and AI a reality.
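
As a toy illustration of the kind of fine-grained workload described above, the sketch below compiles a single dense graph-convolution layer for the IPU with PopTorch; the layer design and sizes are assumptions, not a Graphcore reference model.

```python
# Illustrative sketch: a dense graph-convolution layer on the IPU.
# Assumes the Poplar SDK's poptorch package is installed.
import torch
import poptorch


class DenseGCNLayer(torch.nn.Module):
    """Mean-aggregate neighbour features, then apply a linear transform."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        return torch.relu(self.lin((adj @ x) / deg))


layer = DenseGCNLayer(16, 32).eval()
ipu_layer = poptorch.inferenceModel(layer, poptorch.Options())

x = torch.randn(100, 16)                     # 100 node feature vectors
adj = (torch.rand(100, 100) > 0.9).float()   # random toy adjacency
print(ipu_layer(x, adj).shape)               # torch.Size([100, 32])
```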

World-class results, whether you want to explore innovative models and new possibilities, or need faster time to train, higher throughput or better performance per TCO dollar.

MLPerf v1.1 Training Results | MLPerf ID: 1.1-2041, 1.1-2089

MLPerf v1.1 Training Results | MLPerf ID: 1.1-2040, 1.1-2042

Software tools and integrations support every step of the AI lifecycle, from development to deployment, improving productivity and AI infrastructure efficiency, and simply making the system easier to use.
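
To make the framework support concrete, here is a hedged sketch of a Keras model running under Graphcore's TensorFlow 2 port (the tensorflow.python.ipu module ships with the Poplar SDK's TensorFlow wheel); the model itself is a placeholder.

```python
# Illustrative sketch: a Keras model under Graphcore's TensorFlow 2 port.
# Assumes the TensorFlow wheel bundled with the Poplar SDK, which
# provides the tensorflow.python.ipu module.
import tensorflow as tf
from tensorflow.python import ipu

cfg = ipu.config.IPUConfig()
cfg.auto_select_ipus = 1          # attach to one available IPU
cfg.configure_ipu_system()

strategy = ipu.ipu_strategy.IPUStrategy()
with strategy.scope():            # variables and compute placed on the IPU
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )
```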

IPUs: 64x GC200 IPUs
IPU-M2000s: 16x IPU-M2000s
Memory: 57.6GB In-Processor-Memory and up to 4.1TB Streaming Memory
Performance: 16 petaFLOPS FP16.16, 4 petaFLOPS FP32
IPU Cores: 94,208
Threads: 565,248
IPU-Fabric: 2.8Tbps
Host-Link: 100 GE RoCEv2
Software

Poplar

TensorFlow, PyTorch, PyTorch Lightning, Keras, PaddlePaddle, Hugging Face, ONNX, HALO

OpenBMC, DMTF Redfish, IPMI over LAN, Prometheus, and Grafana

Slurm, Kubernetes

OpenStack, VMware ESXi

System Weight: 450kg + host servers and switches
System Dimensions: 16U + host servers and switches
Host Server: Selection of approved host servers from Graphcore partners
Thermal: Air-cooled
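
As a quick sanity check on the table above, the headline figures divide down to per-IPU and per-core numbers (simple arithmetic, not additional published specs):

```python
# Per-IPU arithmetic derived from the IPU-POD64 spec table above.
ipus = 64
print(16e15 / ipus / 1e12)   # 250.0 -> TFLOPS FP16.16 per GC200 IPU
print(94_208 // ipus)        # 1472  -> IPU cores per GC200
print(565_248 // 94_208)     # 6     -> hardware threads per core
```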

The MLPerf name and logo are trademarks of MLCommons Association in the United States and other countries. All rights reserved.

Unauthorized use strictly prohibited. See www.mlperf.org for more information.

For more performance results, visit our Performance Results page.