
Grow

IPU-POD128

When you're ready to scale, choose IPU-POD128 for production deployment in your enterprise datacenter, private cloud, or public cloud. Experience massive efficiency and productivity gains when large language model training runs complete in hours or minutes instead of weeks or months. IPU-POD128 delivers for AI at scale.

  • Superior scaling & blazing-fast performance
  • Full systems integration support for datacenter installation
  • AI expert support to develop & deploy models at scale
IPU-POD128 Data Centre

To continuously support increasing market demand for super-scale AI HPC environments, we are partnering with Graphcore to upgrade our IPU-POD64s to an IPU-POD128, expanding the “Hyperscale AI Services” offering to our customers. Through this upgrade, we expect our AI compute scale to increase to 32 petaFLOPS, allowing more diverse customers to use KT’s cutting-edge AI computing for training and inference on large-scale AI models.

Mihee Lee, SVP Cloud/DX Business

Korea Telecom

Natural Language Processing

Natural language processing (NLP) delivers business value today for organizations from finance firms to biotech leaders, and from scale-ups to hyperscalers, improving internet search, sentiment analysis, fraud detection, chatbots, drug discovery and more. Choose IPU-POD128 whether you are running large BERT models in production or starting to explore GPT-class models or GNNs.
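
For example, fine-tuning BERT on IPUs needs only small changes on top of standard Hugging Face code via the optimum-graphcore package. A minimal sketch, assuming a recent optimum-graphcore release and the Graphcore/bert-base-ipu IPUConfig checkpoint on the Hugging Face Hub; the tiny inline dataset and the n_ipu value are illustrative placeholders, so verify parameter names against your installed version:

from datasets import Dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.graphcore import IPUConfig, IPUTrainer, IPUTrainingArguments

# Toy two-example dataset, tokenized for BERT (placeholder for real data)
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
ds = Dataset.from_dict({"text": ["great product", "poor service"], "label": [1, 0]})
ds = ds.map(lambda e: tok(e["text"], padding="max_length", truncation=True, max_length=128))

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
ipu_config = IPUConfig.from_pretrained("Graphcore/bert-base-ipu")

args = IPUTrainingArguments(
    output_dir="bert-ipu-finetune",
    per_device_train_batch_size=2,
    n_ipu=16,  # assumed parameter name; scale up towards a full IPU-POD128
)

IPUTrainer(model=model, ipu_config=ipu_config, args=args, train_dataset=ds).train()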

Computer Vision

State-of-the-art computer vision is driving breakthroughs in medical imaging, claims processing, cosmology, smart cities, self-driving cars, and more. Traditional powerhouses like ResNet-50 and high-accuracy emerging models like EfficientNet are ready to run on IPU-POD128 at scale, with world-leading performance.
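
As a concrete starting point, a stock torchvision ResNet-50 can be compiled for IPUs with Graphcore's PopTorch wrapper. A minimal inference sketch, assuming the poptorch SDK is installed; the replication and iteration counts are illustrative, not tuned for a full IPU-POD128:

import torch
import torchvision
import poptorch

model = torchvision.models.resnet50(weights=None).eval()  # load real weights in practice

opts = poptorch.Options()
opts.replicationFactor(4)   # data-parallel copies of the model across IPUs
opts.deviceIterations(16)   # batches run per host-device round trip

ipu_model = poptorch.inferenceModel(model, opts)

# First dimension = per-replica batch (1) x deviceIterations (16) x replicas (4)
images = torch.randn(64, 3, 224, 224)
logits = ipu_model(images)  # (64, 1000) class scores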

Scientific Research

National labs, universities and research institutes are turning to IPU-POD128 to solve problems in physics, weather forecasting, computational fluid dynamics, protein folding, oil & gas exploration and more. Take advantage of the IPU's fine-grained compute at scale for emerging Graph Neural Networks (GNNs) and probabilistic models, explore sparsity, and make the convergence of HPC and AI a reality.
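
To give a flavour of how such workloads map onto the same stack, the hand-rolled GCN-style layer below compiles through the PopTorch wrapper unchanged; the layer, the sizes, and the identity adjacency are illustrative stand-ins, not Graphcore library code:

import torch
import poptorch

class GCNLayer(torch.nn.Module):
    # One graph-convolution step: x' = relu(A_hat @ x @ W)
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        return torch.relu(self.lin(adj @ x))

model = GCNLayer(16, 32).eval()
ipu_model = poptorch.inferenceModel(model, poptorch.Options())

x = torch.randn(100, 16)   # 100 nodes, 16 features each
adj = torch.eye(100)       # placeholder normalized adjacency A_hat
out = ipu_model(x, adj)    # (100, 32) node embeddings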

World-class results whether you want to explore innovative models and new possibilities, achieve faster time to train, higher throughput, or better performance per TCO dollar.

Scaling

Software tools and integrations support every step of the AI lifecycle, from development to deployment, improving productivity and AI infrastructure efficiency while making the whole platform easier to use.

IPUs: 128x GC200 IPUs
IPU-M2000s: 32x IPU-M2000s
Memory: 115.2GB In-Processor-Memory and up to 8.2TB Streaming Memory
Performance: 32 petaFLOPS FP16.16, 8 petaFLOPS FP32
IPU Cores: 188,416
Threads: 1,130,496
IPU-Fabric: 2.8Tbps
Host-Link: 100 GE RoCEv2
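
These totals follow directly from the published per-GC200 figures (1,472 processor cores with 6 threads each, 900MB In-Processor-Memory, and 250 teraFLOPS FP16.16 per IPU), as this quick check shows:

# Sanity check: IPU-POD128 totals derived from per-GC200 specs
ipus = 128
cores_per_ipu = 1472          # GC200 processor cores (tiles)
threads_per_core = 6          # hardware worker threads per core
fp16_tflops_per_ipu = 250     # peak FP16.16 per GC200
ipm_gb_per_ipu = 0.9          # 900MB In-Processor-Memory per GC200

assert ipus * cores_per_ipu == 188_416                        # IPU Cores
assert ipus * cores_per_ipu * threads_per_core == 1_130_496   # Threads
assert ipus * fp16_tflops_per_ipu == 32_000                   # 32 petaFLOPS
assert round(ipus * ipm_gb_per_ipu, 1) == 115.2               # GB memory
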
Software

Poplar

TensorFlow, PyTorch, PyTorch Lightning, Keras, PaddlePaddle, Hugging Face, ONNX, HALO

OpenBMC, DMTF Redfish, IPMI over LAN, Prometheus, and Grafana

Slurm, Kubernetes

OpenStack, VMware ESXi

System Weight: 900kg + Host servers and switches
System Dimensions: 32U + Host servers and switches
Host Server: Selection of approved host servers from Graphcore partners
Thermal: Air-Cooled


For more performance results, visit our Performance Results page.