
Next Generation IPU Systems

IPU-Machine and IPU-POD Systems

Core building blocks for AI infrastructure at scale

Watch on-demand webinar

The IPU-Machine: IPU-M2000

The IPU-M2000 is our revolutionary next-generation system solution built with the Colossus MK2 IPU. It packs 1 PetaFlop of AI compute and up to 450GB Exchange-Memory™ in a slim 1U blade for the most demanding machine intelligence workloads.

The IPU-M2000 has a flexible, modular design, so you can start with one and scale to thousands. Directly connect a single system to an existing CPU server, add up to eight connected IPU-M2000s, or grow to supercomputing scale with racks of 16 tightly interconnected IPU-M2000s in IPU-POD64 systems, thanks to the high-bandwidth, near-zero-latency IPU-Fabric™ interconnect architecture built into the box.

Watch launch video

The Graphcore IPU-POD64

IPU-POD64 is Graphcore's unique solution for massive, disaggregated scale-out, enabling high-performance machine intelligence compute at supercomputing scale. The IPU-POD64 builds on the innovative IPU-M2000 and offers seamless scale-out to as many as 64,000 IPUs, working as a single integral whole or subdivided into independent partitions to handle multiple workloads and different users.

The IPU-POD64 has 16 IPU-M2000s in a standard rack. IPU-PODs communicate with near-zero latency using our unique IPU-Fabric™ interconnect architecture. IPU-Fabric has been specifically designed to eliminate communication bottlenecks and allow thousands of IPUs to operate on machine intelligence workloads as a single, high-performance and ultra-fast cohesive unit.
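The rack and scale-out figures above follow from simple multiplication; a quick sketch of the arithmetic, assuming (from Graphcore's published specifications, not stated on this page) four Colossus MK2 IPUs per IPU-M2000 blade:

```python
# Arithmetic behind the IPU-POD64 scale-out figures.
# Assumption (Graphcore's published specs, not stated on this page):
# each IPU-M2000 blade contains 4 Colossus MK2 IPUs.

IPUS_PER_M2000 = 4
M2000S_PER_POD64 = 16                                # stated: 16 IPU-M2000s per rack

ipus_per_pod64 = IPUS_PER_M2000 * M2000S_PER_POD64   # IPUs in one IPU-POD64 rack
max_scale_ipus = 64_000                              # stated scale-out ceiling
pod64s_at_max_scale = max_scale_ipus // ipus_per_pod64

print(ipus_per_pod64)        # 64
print(pod64s_at_max_scale)   # 1000
```

So the quoted 64,000-IPU ceiling corresponds to 1,000 fully populated IPU-POD64 racks linked over IPU-Fabric.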

Read Analyst Report


MK2 IPU systems deliver unparalleled performance and flexibility from device to scale-out, with 1 PetaFlop of AI compute and more FP32 compute than any other processor.



IPU-Fabric™ is our innovative, ultra-fast and jitter-free communications technology. It offers 2.8Tbps communication in all directions from any IPU to any IPU and can scale up to 64,000 IPUs.



The IPU-M2000 has an unprecedented 450GB of Exchange-Memory™: 3.6GB of In-Processor Memory™ plus up to 448GB of Streaming Memory™ for larger models. This is crucial for modern AI workloads – how you access memory is as important as how you perform the compute once you've fetched the data.
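The headline memory figure decomposes simply; a quick sketch of the arithmetic, assuming (from Graphcore's published specifications, not stated on this page) that each of the blade's four MK2 IPUs carries 900MB of In-Processor Memory:

```python
# Breakdown of the IPU-M2000's Exchange-Memory figure.
# Assumptions (Graphcore's published specs, not stated on this page):
# the blade holds 4 Colossus MK2 IPUs, each with 900MB of In-Processor Memory.

IPUS_PER_M2000 = 4
IN_PROCESSOR_GB_PER_IPU = 0.9                # 900MB on-chip SRAM per IPU

in_processor_gb = IPUS_PER_M2000 * IN_PROCESSOR_GB_PER_IPU   # per-blade on-chip total
streaming_gb = 448                           # stated maximum Streaming Memory
exchange_memory_gb = in_processor_gb + streaming_gb

print(f"{in_processor_gb:.1f}")      # 3.6
print(f"{exchange_memory_gb:.1f}")   # 451.6 -- marketed as "up to 450GB"
```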



IPU-POD64 is our solution for massive disaggregated machine intelligence scale-out. IPU-POD64 leverages the ultra-fast IPU-Fabric for outstanding performance at scale, and is designed for seamless deployment and integration into existing data centre set-ups.

BERT-Large Training

Natural Language Processing - BERT

The IPU delivers impressive performance across NLP models, which are widely used in many industries and use cases. As shown here for BERT-Large, the IPU-POD64 achieves a significantly faster time-to-train than a single DGX A100 platform, and than systems built from multiple DGX A100s.

View Benchmarks

Co-Designed with Poplar® SDK

With IPU-POD64 systems you can run vast workloads across up to 64,000 IPUs. With Poplar, computing on this scale is as simple as using a single machine. Poplar takes care of all the scaling and optimisation – allowing you to focus on the model and the results.

We’ve also made it possible to dynamically share your AI compute between users with our Virtual-IPU software, so multiple users can run different workloads at the same time.

We support interfaces to integrate with industry-standard ecosystem tools for infrastructure management, including OpenBMC and Redfish, Docker containers, and orchestration with Slurm and Kubernetes. And we’re adding support for more platforms all the time.


Virtual-IPU Provisioning

We have made orchestration of single- or multi-tenant jobs and allocation of IPU resources for workloads simple, reliable and transparent. Our solution is built with industry-standard tools such as Slurm and Kubernetes, and our Virtual-IPU provisioning software is integrated as part of our management software suite.
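The page names Kubernetes as a supported orchestrator but gives no configuration details. As a purely hypothetical sketch, accelerator resources on Kubernetes are typically exposed through a device plugin's extended resource, which a Pod would request like this (the resource name `graphcore.ai/ipu`, the image, and the training script are illustrative assumptions, not Graphcore's documented integration):

```yaml
# Hypothetical sketch: a Kubernetes Pod requesting IPU resources via a
# device-plugin extended resource. Names below are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: bert-training
spec:
  containers:
  - name: trainer
    image: example.com/poplar-training:latest   # hypothetical image
    command: ["python3", "train.py"]            # hypothetical training script
    resources:
      limits:
        graphcore.ai/ipu: 16                    # hypothetical resource name
```

The scheduler then places the Pod only on nodes advertising enough of that extended resource, which is how the Virtual-IPU partitioning described above would map onto cluster-level allocation.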


Hardware Management

Sophisticated hardware management is provided to give complete visibility of the important parameters of your entire IPU system in real time. Our hardware management software uses reliable, proven and extensible open-source software with OpenBMC and Redfish.


PopVision™ Graph Analyser

PopVision™ allows you to monitor in minute detail the performance of your workloads across one or multiple IPUs. PopVision offers an unparalleled ability to look deep into the processing activity, and enables you to make correct, informed decisions when developing your models.


System Monitoring

We offer a rich dashboard UI for systems monitoring of your IPU-M2000 and IPU-POD64 systems using Grafana. This software is intuitive and easy to use, offering all of the information you need to keep up-to-date with your system's performance and status.


Introducing Colossus™ MK2 IPU

Learn more about our second-generation IPU - the world's most complex processor.

Read the blog

IPU-M2000 Webinar

Watch our webinar to get the lowdown on IPU-M2000 and IPU-POD64 systems.

Watch on-demand

Introducing Exchange-Memory™

Find out more about our revolutionary Exchange-Memory architecture.

Read the blog

Poplar Analyst Report

Check out the technical report from Moor Insights & Strategy about Poplar software.

Download now