
IPU-POD256

When you're ready to explore AI compute at supercomputing scale, choose IPU-POD256 for production deployment in your enterprise datacenter, private or public cloud. Experience massive efficiency and productivity gains when large language model training runs complete in hours or minutes instead of weeks or months. IPU-POD256 delivers AI at scale.

  • IPU at supercomputing scale
  • World-leading language and vision performance for new and emerging models
  • Fine-grained compute & sparsity open up new innovation
IPU-POD128 Data Centre

"We are enthusiastic to add IPU-POD128 and IPU-POD256 systems from Graphcore to our Atos ThinkAI portfolio to accelerate our customers' capabilities to explore and deploy larger, more innovative AI models across many sectors, including academic research, finance, healthcare, telecoms and consumer internet."

Agnès Boudot, SVP HPC & Quantum

Atos

Natural Language Processing


Natural language processing (NLP) delivers business value today, from finance firms to biotech leaders and from scale-ups to hyperscalers, improving internet search, sentiment analysis, fraud detection, chatbots, drug discovery and more. Choose IPU-POD256 whether you are running large BERT models in production or starting to explore GPT-class models or GNNs.
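By way of illustration, a BERT-class model can be fine-tuned on IPUs through the Hugging Face integration listed in the software stack below. A minimal sketch, assuming the optimum-graphcore package on top of the Poplar SDK; the model, dataset slice and "Graphcore/bert-base-ipu" config ID are illustrative choices, not a prescribed recipe:

```python
# Fine-tune BERT for sentiment classification on IPUs via optimum-graphcore.
# Model, dataset slice and the "Graphcore/bert-base-ipu" config ID are
# illustrative choices, not a prescribed recipe.
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.graphcore import IPUConfig, IPUTrainer, IPUTrainingArguments

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

# Pipelining/replication across IPUs comes from an IPUConfig published
# on the Hugging Face Hub.
ipu_config = IPUConfig.from_pretrained("Graphcore/bert-base-ipu")

train_set = load_dataset("glue", "sst2", split="train[:1%]").map(
    lambda ex: tokenizer(ex["sentence"], truncation=True,
                         padding="max_length", max_length=128),
    batched=True,
)

args = IPUTrainingArguments(
    output_dir="bert-sst2-ipu",
    per_device_train_batch_size=8,
    num_train_epochs=1,
)

trainer = IPUTrainer(
    model=model,
    ipu_config=ipu_config,
    args=args,
    train_dataset=train_set,
)
trainer.train()
```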

Computer Vision


State-of-the-art computer vision is driving breakthroughs in medical imaging, claims processing, cosmology, smart cities, self-driving cars and more. IPU-POD256 delivers world-leading performance for traditional powerhouses like ResNet-50 as well as high-accuracy emerging models like EfficientNet and Transformer-based vision models.
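As an illustration, a stock torchvision ResNet-50 can be run through PopTorch, the Poplar SDK's PyTorch integration. A minimal sketch, assuming the poptorch package is installed; the replication factor and batch size are illustrative rather than a tuned IPU-POD256 configuration:

```python
# Run a stock ResNet-50 on IPUs through PopTorch (Graphcore's PyTorch
# integration). Replication factor and batch size are illustrative,
# not a tuned IPU-POD256 configuration.
import torch
import poptorch
import torchvision

model = torchvision.models.resnet50(weights="IMAGENET1K_V1")
model.eval()

opts = poptorch.Options()
# Data-parallel replication across IPUs; the host batch below is split
# four ways, one slice per replica.
opts.replicationFactor(4)

ipu_model = poptorch.inferenceModel(model, opts)

batch = torch.randn(16, 3, 224, 224)   # dummy ImageNet-sized batch
logits = ipu_model(batch)              # compiles on first call, then runs on IPU
print(logits.shape)                    # torch.Size([16, 1000])
```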

Scientific Research


National labs, universities and research institutes are turning to IPU-POD256 to solve problems in physics, weather forecasting, computational fluid dynamics, protein folding, oil & gas exploration and more. Take advantage of the IPU's fine-grained compute at scale for emerging Graph Neural Networks (GNNs) and probabilistic models, explore sparsity and make the convergence of HPC and AI a reality.
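To give a flavour of GNN workloads, the sketch below trains a toy graph-convolution classifier through PopTorch. It assumes only the poptorch package; the layer, dimensions and dense adjacency matrix are illustrative placeholders, not a Graphcore GNN library:

```python
# A toy graph-convolution classifier compiled for the IPU with PopTorch.
# The layer, dimensions and dense adjacency matrix are illustrative
# placeholders, not a Graphcore GNN library.
import torch
import poptorch

class GCNLayer(torch.nn.Module):
    """One graph-convolution step: aggregate neighbours, then project."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = torch.nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # adj is a normalised adjacency matrix; sparse formulations of
        # this aggregation are where fine-grained compute pays off.
        return torch.relu(self.linear(adj @ x))

class NodeClassifier(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.gcn = GCNLayer(16, 32)
        self.head = torch.nn.Linear(32, 4)
        self.loss_fn = torch.nn.CrossEntropyLoss()

    def forward(self, x, adj, labels):
        logits = self.head(self.gcn(x, adj))
        # PopTorch expects the training graph to return its loss.
        return logits, self.loss_fn(logits, labels)

model = NodeClassifier()
opts = poptorch.Options()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
train_model = poptorch.trainingModel(model, options=opts, optimizer=optimizer)

x = torch.randn(128, 16)               # 128 nodes, 16 features each
adj = torch.eye(128)                   # placeholder: self-loops only
labels = torch.randint(0, 4, (128,))   # 4 node classes
logits, loss = train_model(x, adj, labels)   # one training step on IPU
```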

Get world-class results whether you want to explore innovative models and new possibilities, faster time-to-train, higher throughput or better performance per TCO dollar.


Software tools and integrations support every step of the AI lifecycle, from development to deployment, improving productivity and AI infrastructure efficiency, and making the system easier to use.

IPUs: 256x GC200 IPUs
IPU-M2000s: 64x IPU-M2000s
Memory: 230.4GB In-Processor-Memory and up to 16,384GB Streaming Memory
Performance: 64 petaFLOPS FP16.16, 16 petaFLOPS FP32
IPU Cores: 376,832
Threads: 2,260,992
IPU-Fabric: 2.8Tbps
Host-Link: 100 GE RoCEv2
Software

Poplar SDK

ML frameworks: TensorFlow, PyTorch, PyTorch Lightning, Keras, PaddlePaddle, Hugging Face, ONNX, HALO

System management: OpenBMC, Redfish DMTF, IPMI over LAN, Prometheus and Grafana

Orchestration: Slurm, Kubernetes

Virtualisation: OpenStack, VMware ESXi

System Weight: 1,800kg + host servers and switches
System Dimensions: 64U + host servers and switches
Host Server: Selection of approved host servers from Graphcore partners
Thermal: Air-cooled
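As a back-of-the-envelope check, the headline figures above follow from the published per-GC200 numbers: 1,472 cores with 6 hardware threads each, 900MB of In-Processor-Memory and 250 teraFLOPS of FP16.16 compute per IPU.

```python
# Sanity-check the IPU-POD256 spec table from per-GC200 figures.
ipus = 256
cores = ipus * 1_472                 # -> 376,832 IPU cores
threads = cores * 6                  # -> 2,260,992 threads
in_processor_gb = ipus * 900 / 1000  # 900MB each -> 230.4GB In-Processor-Memory
fp16_petaflops = ipus * 250 / 1000   # 250 TFLOPS each -> 64 petaFLOPS FP16.16

print(f"{cores=:,} {threads=:,} {in_processor_gb=}GB {fp16_petaflops=}PF")
```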


For more performance results, visit our Performance Results page.