
Poplar Graph Framework Software

Co-designed with the IPU from the ground up for machine intelligence


Introducing Poplar®

The Poplar SDK is a complete software stack, co-designed from scratch with the IPU, that implements our graph toolchain in an easy-to-use and flexible software development environment.

At a high level, Poplar is fully integrated with standard machine learning frameworks so developers can port existing models easily, and get up and running out-of-the-box with new applications in a familiar environment.

Below these frameworks sits Poplar. For developers who want full control to exploit maximum performance from the IPU, Poplar enables direct IPU programming in Python and C++.

Poplar White Paper

Standard framework support

Poplar seamlessly integrates with standard machine intelligence frameworks:

  • TensorFlow 1 & 2 support, with full, performant integration through the TensorFlow XLA backend
  • PyTorch support for targeting the IPU using the PyTorch ATen backend
  • PopART™ (Poplar Advanced Runtime) for training & inference; supports Python/C++ model building plus ONNX model input
  • Full support for PaddlePaddle and other frameworks is coming soon

PopLibs™ Graph Libraries

PopLibs is a complete set of libraries, available as open source code, that support common machine learning primitives and building blocks:

  • Over 50 optimised functions for common machine learning models
  • More than 750 high performance compute elements
  • Simple C++ graph building API
  • Flexibility to implement any application
  • Full control flow support

Graph Compiler

Our state-of-the-art compiler simplifies IPU programming by handling the scheduling and work partitioning of large parallel programs, including memory control:

  • Optimised execution of the entire application model to run efficiently on IPU platforms
  • Alleviates the burden on developers to manage data or model parallelism
  • Code generation using standard LLVM
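The compiler's two core jobs described above, scheduling and work partitioning, can be sketched in miniature. This is a conceptual illustration only, not the Poplar API; all names are hypothetical:

```python
# Conceptual sketch of what a graph compiler must do: order the ops of a
# dataflow graph (scheduling) and spread them across parallel workers
# (work partitioning). Not the Poplar API; all names are hypothetical.
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

def schedule(deps):
    """Return one legal execution order for a dependency graph.
    deps maps each op to the set of ops it depends on."""
    return list(TopologicalSorter(deps).static_order())

def partition(order, n_workers):
    """Round-robin the scheduled ops over n_workers parallel units."""
    shards = [[] for _ in range(n_workers)]
    for i, op in enumerate(order):
        shards[i % n_workers].append(op)
    return shards

# A toy 4-op graph: matmul and bias both feed add, which feeds relu.
deps = {"matmul": set(), "bias": set(), "add": {"matmul", "bias"}, "relu": {"add"}}
order = schedule(deps)        # e.g. matmul, bias, add, relu
shards = partition(order, 2)  # ops spread over 2 workers
```

The real compiler does far more (tile placement, exchange planning, memory control), but the shape of the problem is the same: respect data dependencies while keeping thousands of parallel units busy.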

Graph Engine

A high-performance graph runtime that executes models and streams data through them as they run on the IPU:

  • Highly optimised IPU data movement
  • Interfaces to host memory system
  • Device management: configuring the IPU-Link network, loading applications to devices & performing setup
  • Debug & profiling capabilities
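The data-streaming role above can be sketched conceptually: the host enqueues input batches, and the engine runs a loaded program over each one. This is a toy illustration with hypothetical names, not the Poplar Graph Engine API:

```python
# Conceptual sketch of a graph engine streaming batches from host memory
# through a loaded model. Hypothetical names; not the Poplar API.
from queue import Queue

class ToyEngine:
    """Holds a 'program' (a plain function here, standing in for a
    compiled graph) and runs it over a stream of input batches."""
    def __init__(self, program):
        self.program = program
        self.infeed = Queue()   # host -> device input stream

    def enqueue(self, batch):
        self.infeed.put(batch)

    def run(self):
        """Drain the infeed, returning one result per batch."""
        results = []
        while not self.infeed.empty():
            results.append(self.program(self.infeed.get()))
        return results

engine = ToyEngine(lambda batch: [x * 2 for x in batch])  # toy "model"
for batch in ([1, 2], [3, 4]):
    engine.enqueue(batch)
outputs = engine.run()  # [[2, 4], [6, 8]]
```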

Multi-IPU Scaling & Communication

In a world of growing model sizes and complexity, Poplar takes on the heavy lifting so you don't have to:

  • High bandwidth IPU-Link™ communication, fully automated and managed by Poplar, treats multiple IPUs like a single IPU compute resource
  • Graph Compile Domain (GCD) allows a single application to be programmed against multiple IPU processors, enabling both data parallel and model parallel execution
  • Model sharding allows the simple splitting of applications across multiple devices
  • Combining sharding with replication lets you make code data parallel with minimal effort
  • Advanced model pipelining lets users extract maximum system performance to run large models fast and efficiently
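Sharding and replication, as described above, compose naturally: split the model across devices, then copy the sharded model so each replica processes a different slice of the batch. A minimal conceptual sketch (hypothetical names, not the Poplar API):

```python
# Conceptual sketch of model sharding and data-parallel replication.
# Hypothetical names; the real mechanics are handled by Poplar.

def shard(layers, n_devices):
    """Split a list of layers into n_devices contiguous shards
    (model parallelism)."""
    k, r = divmod(len(layers), n_devices)
    shards, start = [], 0
    for d in range(n_devices):
        size = k + (1 if d < r else 0)
        shards.append(layers[start:start + size])
        start += size
    return shards

def replicate(shards, n_replicas):
    """Copy the sharded model n_replicas times (data parallelism):
    each replica processes a different slice of the batch."""
    return [list(map(list, shards)) for _ in range(n_replicas)]

layers = ["conv1", "conv2", "fc1", "fc2", "softmax"]  # toy 5-layer model
shards = shard(layers, 2)        # model split over 2 IPUs
replicas = replicate(shards, 4)  # 4 data-parallel replicas -> 8 IPUs total
```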

Open Source

At Graphcore we put power in the hands of AI developers allowing them to innovate. Poplar Graph Libraries (PopLibs) are fully open source and available on GitHub to allow the entire developer community to contribute to and enhance these powerful tools.

Read the blog

PopVision™ Analysis Tools

The PopVision™ family of analysis tools helps developers gain a deep understanding of how applications are performing and utilising the IPU. Explore your code's inner workings through a user-friendly graphical interface.

Read the blog

Straightforward Deployment

Pre-built Docker containers with the Poplar SDK, tools and framework images get you up and running fast.


Standard Ecosystem Support

Ready for production with Microsoft Azure deployment, Kubernetes orchestration and Hyper-V virtualisation & security.

Learn More

Poplar Analyst Report

Detailed technical white paper on the Poplar software stack from analyst Moor Insights & Strategy.

Read the white paper

Poplar SDK 1.3

PyTorch for IPU, Keras support, Exchange Memory Management features and more.

Read the blog

Exchange Memory

The IPU's unique Exchange Memory lets users execute ML models fast, no matter how large the model is or where the data is stored.

Read the blog

Open Source

We've made our PopLibs libraries, TensorFlow for IPU & PopART™ code fully open source.

Read the blog