Today we are making our Poplar® software documentation publicly available on our website. It remains available to customers on our support portal as before, but we have decided to open up access in response to the overwhelming interest from developers who want to find out how easy our Poplar SDK is to use.
The Poplar SDK provides a complete software stack that implements our Graph Toolchain. At a high level, standard machine learning frameworks are fully integrated so that applications can run on the IPU processor: PyTorch and TensorFlow are extended with IPU support, and our Poplar advanced runtime, PopART™, provides training and inference using the industry-standard ONNX format.
Below these frameworks sits Poplar, a framework and set of standard libraries designed from the ground up to allow applications to be built and executed on the Graphcore IPU. Poplar is integrated directly into frameworks such as TensorFlow, allowing models to be compiled to our platform in a fully optimized way. The Graph Compiler is state-of-the-art software for scheduling and partitioning the work of large parallel programs. Poplar also contains the Graph Engine, which provides the runtime support needed to execute models and stream data through them on the IPU. The Graph Engine manages the interaction between the host CPU and the network of IPU devices connected to it.
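To give a feel for the programming model the documentation describes, here is a minimal sketch of a Poplar program in the style of the public Poplar tutorials: it builds a small graph that adds two vectors and runs it on the IPU Model (a host-side simulation), so no hardware is needed. The exact headers and calls shown (IPUModel, Graph, Engine, popops::add) follow the tutorial examples but may vary between SDK releases, so treat this as illustrative rather than definitive.

```cpp
// Build with something like:
//   g++ -std=c++11 adder.cpp -lpoplar -lpopops -lpoputil -o adder
#include <poplar/Engine.hpp>
#include <poplar/Graph.hpp>
#include <poplar/IPUModel.hpp>
#include <poputil/TileMapping.hpp>
#include <popops/ElementWise.hpp>
#include <popops/codelets.hpp>

using namespace poplar;
using namespace poplar::program;

int main() {
  // Use the IPU Model, a host-side simulation of an IPU, so the
  // example runs without hardware attached.
  IPUModel ipuModel;
  Device device = ipuModel.createDevice();
  Target target = device.getTarget();

  // The Graph holds the tensors, compute vertices and tile mappings
  // that the Graph Compiler schedules and partitions across the IPU.
  Graph graph(target);
  popops::addCodelets(graph);  // register the popops device code

  // Two constant input tensors, each mapped to a tile on the IPU.
  Tensor a = graph.addConstant<float>(FLOAT, {4}, {1.0f, 2.0f, 3.0f, 4.0f});
  Tensor b = graph.addConstant<float>(FLOAT, {4}, {4.0f, 3.0f, 2.0f, 1.0f});
  graph.setTileMapping(a, 0);
  graph.setTileMapping(b, 0);

  // A control program: element-wise add on the device, then print.
  Sequence prog;
  Tensor c = popops::add(graph, a, b, prog, "a_plus_b");
  prog.add(PrintTensor("a + b", c));

  // The Graph Engine compiles the graph and program, loads them onto
  // the device and manages execution from the host.
  Engine engine(graph, prog);
  engine.load(device);
  engine.run(0);

  return 0;
}
```

The same Graph and Engine split illustrated here is what sits underneath the framework integrations described above.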
This is the full list of the Developer Documentation we are releasing publicly today:
We will update the docs with each new quarterly release of our SDK, and new docs will be added as and when they become available, so please keep checking back to the Developer docs on our website. Tutorials, getting-started videos, on-demand workshops and other online resources are coming soon.
Visit the Graphcore GitHub page for applications and code examples for the IPU.
Graphcore is collaborating with Microsoft to bring state-of-the-art IPU compute and Poplar software to the machine learning community on Microsoft Azure. If you are interested in accelerating your own next-generation AI models, you can sign up for the IPU Preview on Microsoft Azure here.