We've just released Poplar SDK 1.4 and with it the first production version of PyTorch for the IPU. Known as PopTorch, our connecting library brings together the performance of the IPU-M2000 platform and the developer-ready accessibility of PyTorch.
This tightly coupled solution allows users to run standard PyTorch programs on the IPU by changing just a couple of lines of code.
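As a hedged sketch of what that change looks like, the snippet below runs a small stand-in model in standard PyTorch, with the IPU-specific PopTorch lines shown in comments so it runs without IPU hardware (`poptorch.inferenceModel` and `poptorch.Options` are the PopTorch wrapper APIs; the model itself is illustrative):

```python
import torch

# A small stand-in model; any torch.nn.Module works the same way.
class Classifier(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(16, 4)

    def forward(self, x):
        return torch.softmax(self.fc(x), dim=-1)

model = Classifier()
model.eval()

# Standard CPU/GPU PyTorch execution:
inputs = torch.randn(2, 16)
output = model(inputs)

# The PopTorch change - wrap the same model to run it on the IPU:
# import poptorch
# ipu_model = poptorch.inferenceModel(model)
# output = ipu_model(inputs)

print(output.shape)  # torch.Size([2, 4])
```

The key point is that the `torch.nn.Module` itself is unchanged; only the wrapping call differs.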
In opening up Graphcore technology to PyTorch developers, we are bringing together the most advanced AI compute platform - designed around the needs of next-generation models - with the framework that has become synonymous with innovation in machine intelligence.
Support for the PyTorch framework was first made available in preview earlier in 2020. Since then it has been extensively refined and extended, based on developer feedback, making this release not just a team effort but a product of the wider PyTorch community.
Continuing the theme of community-centric product development, we are open-sourcing PyTorch for IPU, with the code available on GitHub. Contributions can be made as standard GitHub pull requests, once our contributors licence agreement (CLA) has been accepted.
As well as helping to refine and accelerate the evolution of PyTorch for IPU, open sourcing allows developers to dive deep into our code, building understanding of how Graphcore’s broader hardware and software offering works.
In addition to the production release and open sourcing of PyTorch for IPU, our SDK 1.4 release provides many other features including:
Significant Poplar compiler optimisations to reduce compile time for faster development, as well as kernel-level optimisations that take advantage of the MK2 IPU architecture, enabling larger batch sizes and greater model coverage
Optimised distributed deployment tools including a distributed configuration library to improve scale out of data-parallel applications across multiple IPU-POD systems
The new PopVision System Analyser to better understand and optimise large distributed systems
Additional ONNX operator coverage and model support
Preview features in 1.4
Updated sparsity kernel libraries using dynamic/reconfigurable sparsity patterns
Support for CentOS 8
Support for Ubuntu 20.04
Getting started with PyTorch
PyTorch for IPU is designed to require minimal manual alterations to PyTorch models. This example shows the code changes (in comments) required to perform inference using a standard pre-trained BERT PyTorch model on the IPU.
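The original post presents that example as an embedded image. As a hedged sketch of the same pattern, the snippet below uses a tiny embedding-plus-classifier module as a stand-in for a pre-trained BERT model (downloading real BERT weights is out of scope here), with the IPU-specific changes shown in comments, as the example described:

```python
import torch

class TinyEncoder(torch.nn.Module):
    """Stand-in for a pre-trained BERT model (illustrative only)."""
    def __init__(self, vocab_size=100, hidden=32, num_labels=2):
        super().__init__()
        self.embed = torch.nn.Embedding(vocab_size, hidden)
        self.classifier = torch.nn.Linear(hidden, num_labels)

    def forward(self, input_ids):
        # Mean-pool token embeddings, then classify the pooled vector.
        pooled = self.embed(input_ids).mean(dim=1)
        return self.classifier(pooled)

model = TinyEncoder()
model.eval()

# IPU-specific changes (assumed PopTorch API):
# import poptorch
# opts = poptorch.Options()
# ipu_model = poptorch.inferenceModel(model, opts)

input_ids = torch.randint(0, 100, (1, 8))  # a batch of one 8-token sequence

with torch.no_grad():
    logits = model(input_ids)              # on IPU: ipu_model(input_ids)

prediction = logits.argmax(dim=-1)
print(logits.shape)  # torch.Size([1, 2])
```

As in the inference example the post describes, the model definition and the inference loop stay standard PyTorch; only the wrapping step is IPU-specific.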
PyTorch Model Support and Performance
We are publishing new benchmarks for our IPU-M2000 system today too, including some PyTorch training and inference results. We also provide reference implementations for a range of models on GitHub. In most cases, the models require very few code changes to run on IPU systems. We will be regularly adding more PyTorch-based performance results to our website and as code examples on GitHub, so please keep checking in.