
POPLAR GRAPH FRAMEWORK SOFTWARE

Co-designed with the IPU from the ground up for machine intelligence

Speak to an Expert
Poplar Software Stack Diagram

Introducing Poplar®

The Poplar SDK is a complete software stack, co-designed from scratch with the IPU, that implements our graph toolchain in an easy-to-use and flexible software development environment.

At a high level, Poplar is fully integrated with standard machine learning frameworks so developers can port existing models easily, and get up and running out-of-the-box with new applications in a familiar environment.

Below these frameworks sits Poplar. For developers who want full control to exploit maximum performance from the IPU, Poplar enables direct IPU programming in Python and C++.

Poplar White Paper
Supported Frameworks: ONNX, TensorFlow, PaddlePaddle, PyTorch, PyTorch Lightning, PyG and Hugging Face

Standard framework support

Poplar seamlessly integrates with standard machine intelligence frameworks:

  • TensorFlow 1 & 2 support, with fully performant integration via the TensorFlow XLA backend
  • PyTorch support for targeting the IPU using the PyTorch ATen backend (see the sketch after this list)
  • PopART™ (Poplar Advanced Runtime) for training & inference; supports Python/C++ model building plus ONNX model input
  • Full support for PaddlePaddle
  • Support for other frameworks coming soon
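
To illustrate the PyTorch path, here is a minimal PopTorch sketch for running an existing model on the IPU. It assumes the poptorch package shipped with the Poplar SDK; the model, shapes and option values are illustrative placeholders, not taken from this page.

```python
import torch
import poptorch


class SimpleNet(torch.nn.Module):
    """A placeholder model; any existing torch.nn.Module can be used."""

    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(128, 10)

    def forward(self, x):
        return torch.softmax(self.fc(x), dim=-1)


# PopTorch options control how the graph is compiled and executed on the IPU.
opts = poptorch.Options()

# Wrap the unmodified PyTorch model for IPU execution; the Poplar graph
# compiler builds the IPU program on the first call.
ipu_model = poptorch.inferenceModel(SimpleNet(), options=opts)

out = ipu_model(torch.randn(4, 128))  # executes on the IPU, returns a CPU tensor
print(out.shape)                       # torch.Size([4, 10])
```

The same unmodified module can instead be wrapped with poptorch.trainingModel when an optimizer and a loss are supplied.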

PopLibs™ Graph Libraries

PopLibs is a complete set of libraries, available as open source code, supporting common machine learning primitives and building blocks:

  • Over 50 optimised functions for common machine learning models
  • More than 750 high performance compute elements
  • Simple C++ graph building API
  • Flexible enough to implement any application
  • Full control flow support

Graph Compiler

Our state-of-the-art compiler simplifies IPU programming by handling the scheduling and work partitioning of large parallel programs, including memory control:

  • Optimised execution of the entire application model to run efficiently on IPU platforms
  • Alleviates the burden on developers to manage data or model parallelism
  • Code generation using standard LLVM

Graph Engine

A high-performance graph runtime that executes models and streams data through models running on the IPU:

  • Highly optimised IPU data movement
  • Interfaces to host memory system
  • Device management: configuring the IPU-Link network, loading applications to devices & performing setup
  • Debug & profiling capabilities

Multi-IPU Scaling & Communication

In a world of growing model sizes and complexity, Poplar takes on the heavy lifting so you don't have to:

  • High bandwidth IPU-Link™ communication, fully automated and managed by Poplar, treats multiple IPUs like a single IPU compute resource
  • Graph Compile Domain (GCD) allows a single application to be programmed against multiple IPU processors, enabling both data parallel and model parallel execution
  • Model sharding allows the simple splitting of applications across multiple devices
  • Combining sharding with replication allows you to make code data parallel with minimal effort (see the sketch after this list)
  • Advanced model pipelining lets users extract maximum system performance to run large models fast and efficiently
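
The sketch below shows how replication, sharding and pipelining surface in PopTorch. It assumes the PopTorch Options API (replicationFactor, deviceIterations, Training.gradientAccumulation) and BeginBlock pipeline annotations; the two-stage model and the counts chosen are illustrative placeholders only.

```python
import torch
import poptorch


class TwoStageNet(torch.nn.Module):
    """A toy two-stage model, split across two IPUs as a pipeline."""

    def __init__(self):
        super().__init__()
        # BeginBlock annotations assign each stage to its own IPU, so the
        # model is sharded and executed as a pipeline across devices.
        self.stage0 = poptorch.BeginBlock(torch.nn.Linear(256, 256), ipu_id=0)
        self.stage1 = poptorch.BeginBlock(torch.nn.Linear(256, 10), ipu_id=1)

    def forward(self, x, labels=None):
        out = self.stage1(torch.relu(self.stage0(x)))
        if labels is None:
            return out
        # Returning a standard PyTorch loss lets PopTorch drive training.
        return out, torch.nn.functional.cross_entropy(out, labels)


opts = poptorch.Options()
opts.replicationFactor(2)              # data parallel: 2 replicas of the 2-IPU pipeline
opts.deviceIterations(4)               # device iterations per host interaction
opts.Training.gradientAccumulation(8)  # micro-batches accumulated per weight update

model = TwoStageNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# 2 pipeline stages x 2 replicas = 4 IPUs; the multi-IPU program is built by
# the Poplar graph compiler on the first call to training_model(inputs, labels).
training_model = poptorch.trainingModel(model, options=opts, optimizer=optimizer)
```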

Open Source

At Graphcore we put power in the hands of AI developers, allowing them to innovate. The Poplar Graph Libraries (PopLibs) are fully open source and available on GitHub, so the entire developer community can contribute to and enhance these powerful tools.

Read the blog

PopVision™ Analysis Tools

The PopVision™ family of analysis tools helps developers gain a deep understanding of how applications are performing and how they utilise the IPU. Explore your code's inner workings through a user-friendly graphical interface.

Read the blog

Straightforward Deployment

Pre-built containers on Docker Hub with Poplar SDK, tools and framework images to get up and running fast.


Standard Ecosystem Support

Ready for production with Microsoft Azure deployment, Kubernetes orchestration and Hyper-V virtualisation & security.

Learn More

PyTorch Lightning

The new PyTorch Lightning integration lets developers run any PyTorch model on IPUs with minimal code changes and optimal performance.
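
A minimal sketch of the Lightning flow is shown below. The LightningModule is a placeholder, and the exact Trainer argument for selecting IPUs depends on the Lightning release: the original integration used ipus=N, while later releases use accelerator="ipu" with a device count.

```python
import pytorch_lightning as pl
import torch


class LitClassifier(pl.LightningModule):
    """A placeholder LightningModule; existing modules work unchanged."""

    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(128, 10)

    def forward(self, x):
        return self.fc(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.cross_entropy(self(x), y)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.01)


# Select IPUs via the Trainer; older Lightning releases used ipus=1 instead.
trainer = pl.Trainer(accelerator="ipu", devices=1, max_epochs=1)
# trainer.fit(LitClassifier(), train_dataloader)  # train_dataloader: any DataLoader
```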

Read the announcement

PyTorch for the IPU

Introducing our production release of PyTorch for the IPU, Poplar SDK 1.4 features and more.

Read the blog

Poplar Software Webinar

Learn how to maximise the performance and efficiency of your AI applications at scale with this brief introduction to Graphcore's Poplar Software Stack.

Watch the webinar

Open Source

We've made our PopLibs libraries, TensorFlow for IPU & PopART™ code fully open source.

Read the blog