Poplar SDK 2.5

May 23, 2022

Poplar SDK 2.5 now available

Written by Laurence Herbert


The latest version of Graphcore’s Poplar SDK is now available, delivering a range of feature enhancements and performance improvements. Supporting both our newest third-generation Bow Pod systems and our previous-generation systems, the Poplar SDK empowers customers to innovate and develop high-performance applications on IPUs.

Poplar SDK 2.5 can be downloaded from our support portal and from Docker Hub, where you can find our latest and growing range of easy-to-use Poplar containers.

We are also adding many new examples to our growing model garden across multiple application domains such as NLP, vision, speech and GNNs, demonstrating ever-increasing breadth and capabilities.

Poplar SDK 2.5 highlights

Selected new features are presented below, while a full list of updates can be found in the accompanying release notes.

TensorFlow 2 enhancements

For this release we have updated our open-source version of TensorFlow 2 to version 2.5. We have expanded our Keras support by enabling the use of Keras Model subclasses and improved our integration with TensorBoard, TensorFlow’s visualisation toolkit, for Keras models. A new tutorial on using TensorBoard with IPUs is available on GitHub.

Serialised compilation

Poplar SDK 2.5 enables a number of host memory optimisations by default. In addition, for some models you can further reduce host memory usage during compilation with a new option that compiles the model for a subset of IPUs at a time. This experimental feature allows large models to be compiled on resource-limited systems; further improvements will follow in Poplar SDK 2.6.

Automatic loss scaling

This release includes our experimental Automatic Loss Scaling feature for training models in half precision. It adapts the loss scaling factor based on gradient statistics gathered during training, compensating for the limited dynamic range that the IEEE 754 16-bit floating-point format offers for representing gradients. The feature is available for models written in PyTorch and PopART, and removes the need to find a suitable loss scaling factor manually.
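The general idea behind adaptive loss scaling can be sketched in plain Python. This is a conceptual illustration of the classic overflow-driven "dynamic loss scaling" scheme, not Graphcore's implementation; the class name, thresholds and update rule are assumptions for illustration only.

```python
import math

FP16_MAX = 65504.0  # largest finite IEEE 754 half-precision value

class DynamicLossScaler:
    """Conceptual sketch: scale the loss up so small gradients survive
    fp16, and back off quickly when scaled gradients overflow."""

    def __init__(self, init_scale=2.0**15, growth_interval=2000, factor=2.0):
        self.scale = init_scale
        self.factor = factor
        self.growth_interval = growth_interval
        self._good_steps = 0

    def update(self, grads):
        """Inspect this step's gradients; return False if the optimiser
        step should be skipped because the gradients overflowed."""
        overflow = any(not math.isfinite(g) or abs(g) > FP16_MAX
                       for g in grads)
        if overflow:
            self.scale /= self.factor   # back off quickly
            self._good_steps = 0
            return False
        self._good_steps += 1
        if self._good_steps >= self.growth_interval:
            self.scale *= self.factor   # probe a larger scale again
            self._good_steps = 0
        return True

scaler = DynamicLossScaler(init_scale=1024.0, growth_interval=2)
print(scaler.update([float("inf")]), scaler.scale)  # False 512.0 (overflow)
print(scaler.update([1.0]), scaler.scale)           # True 512.0
print(scaler.update([1.0]), scaler.scale)           # True 1024.0 (regrown)
```

In a real mixed-precision training loop the loss is multiplied by `scale` before the backward pass and the gradients divided by it afterwards, so the only thing the scheme changes is which numbers transit through fp16.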

Improved operator coverage and optimisations

We continue to expand the number of supported PyTorch operators, further broadening model support. Loop-based models (such as RNNs) in TensorFlow will benefit from optimisations to improve compile time, memory usage and runtime performance. Many other optimisations have been made throughout the Poplar SDK.

Monitoring tools and libraries

The Poplar SDK includes a comprehensive set of tools for monitoring IPUs, along with libraries that let customers integrate these capabilities into their own systems. Improvements in this release include multi-user and multi-host capabilities, helping users better understand IPU availability and status in larger deployments.

Poplar Triton Backend

This release adds preview support for serving models to be run on the IPU using the Triton Inference Server, allowing users to deploy inference models more easily. Models written using PopART and TensorFlow for the IPU can be compiled and saved in PopEF (Poplar Exchange Format) files which can then be served by the Poplar Triton Backend. For more details, see the new Poplar Triton Backend User Guide.
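Once a PopEF model has been loaded by the server, clients talk to it through Triton's standard v2 HTTP/REST inference protocol. The sketch below only illustrates that client side in plain Python; the base URL and the model name `my_model` are placeholders for illustration, not values from this release.

```python
import json
import urllib.request

def build_infer_request(base_url, model, inputs):
    """Build the URL and JSON body for a Triton v2 inference request."""
    url = f"{base_url}/v2/models/{model}/infer"
    body = json.dumps({"inputs": inputs})
    return url, body

def triton_infer(base_url, model, inputs):
    """POST the request to a running Triton server and decode the reply."""
    url, body = build_infer_request(base_url, model, inputs)
    req = urllib.request.Request(
        url, data=body.encode(),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Request for a hypothetical model "my_model" with one fp32 input tensor.
url, body = build_infer_request(
    "http://localhost:8000", "my_model",
    [{"name": "input", "shape": [1, 4], "datatype": "FP32",
      "data": [1.0, 2.0, 3.0, 4.0]}])
print(url)  # http://localhost:8000/v2/models/my_model/infer
```

Because Triton speaks the open KServe v2 protocol, the same client code works regardless of whether the backend executing the model is the Poplar Triton Backend or any other.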

Ubuntu 20.04

Poplar SDK 2.5 includes full support for the Ubuntu 20.04 operating system (previously in preview).

New model examples

Graphcore is committed to making it as easy as possible for users to deploy a wide range of models optimised for the IPU. We are continually updating our model garden and associated GitHub repositories.

As part of the Poplar SDK 2.5 release, we are making the following new examples available:

Computer vision

  • ViT – pretraining added to existing fine-tuning example (PyTorch)
  • DINO (PyTorch)
  • EfficientDet - inference (TensorFlow 2)
  • Neural Image Fields (TensorFlow 2)
  • Swin Transformer - pretraining (PyTorch)
  • ViT (Hugging Face Optimum, fine-tuning)
  • ConvNext (Hugging Face Optimum)

Natural language processing

  • PackedBERT (PyTorch, PopART)
  • BERT-Large (TensorFlow 2)
  • GPT2-S/XL - inference (PyTorch)
  • GPT2-M/L - training (PyTorch)
  • BERT-Base/Large (Hugging Face Optimum, pretraining & fine-tuning)
  • RoBERTa-Base/Large (Hugging Face Optimum, fine-tuning)
  • DeBERTa-Base (Hugging Face Optimum, fine-tuning)
  • LXMERT (Hugging Face Optimum, fine-tuning)
  • GPT2-S/M (Hugging Face Optimum, fine-tuning)
  • T5-Small (Hugging Face Optimum, fine-tuning)
  • BART-Base (Hugging Face Optimum, fine-tuning)

Speech

  • FastSpeech2 - inference (TensorFlow 2)
  • Conformer-Large (PyTorch)
  • FastPitch (PyTorch)
  • HuBERT (Hugging Face Optimum, fine-tuning)
  • Wav2Vec2 (Hugging Face Optimum)

GNNs

  • Cluster GCN (TensorFlow 2)

AI for simulation

  • DeepDriveMD (TensorFlow 2)

IPU Programmer's Guide

Since its launch, the IPU Programmer’s Guide has proved to be an essential and popular resource for developers starting their journey with Graphcore systems, taking users from a general hardware and software overview through to compiling and executing programs on the IPU.

A new version of the IPU Programmer’s Guide is now available, featuring extensive updates and additions, including a new section on common algorithmic techniques such as replication, recomputation, model parallelism and pipelining.
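One of those techniques, pipelining, is easy to illustrate with a small scheduling sketch in plain Python (no IPU libraries involved). This shows a simple fill-drain schedule in the style of GPipe-like pipelines; the function name and schedule shape are illustrative assumptions, not the guide's notation.

```python
def pipeline_schedule(num_stages, num_microbatches):
    """Return, per time step, which micro-batch each pipeline stage is
    processing (None = idle). Stage s sees micro-batch m at step s + m,
    so stages work on different micro-batches concurrently."""
    steps = []
    total_steps = num_stages + num_microbatches - 1
    for t in range(total_steps):
        row = []
        for s in range(num_stages):
            mb = t - s
            row.append(mb if 0 <= mb < num_microbatches else None)
        steps.append(row)
    return steps

# 3 stages (e.g. model split across 3 IPUs), 4 micro-batches:
for t, row in enumerate(pipeline_schedule(3, 4)):
    print(t, row)
# 0 [0, None, None]   <- pipeline filling
# 1 [1, 0, None]
# 2 [2, 1, 0]         <- all stages busy
# 3 [3, 2, 1]
# 4 [None, 3, 2]      <- pipeline draining
# 5 [None, None, 3]
```

The idle slots during fill and drain are the pipeline "bubble"; using more micro-batches per step shrinks the bubble relative to the busy phase, which is why pipelined training processes batches as streams of micro-batches.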

For access to all the latest documentation, tutorials, code examples, webinars, videos, research papers and further resources for IPU programming, check out our developer portal.