Together, our IPU accelerators and seamless development framework form the fastest and most flexible platform for current and future machine intelligence applications, lowering the cost of AI in the cloud and improving performance by 10x to 100x.
Our systems will accelerate the full range of training, inference, and prediction approaches. Huge computational resources, combined with software tools and libraries that are flexible and easy to use, will allow researchers to explore machine intelligence across a much broader front than current solutions allow. This technology will enable recent successes in deep learning to evolve rapidly towards useful, general artificial intelligence.
We deliver a seamless interface to leading machine learning development frameworks, including TensorFlow and MXNet. To support this, we provide a flexible, open-source software framework of tools, drivers, and application libraries called Poplar. Poplar is a graph programming framework with C++ and Python interfaces, designed to let developers modify and extend our wide set of libraries, making our IPU systems quick and easy to use.
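Poplar's own API is not shown here, but the graph-programming model it embodies can be illustrated with a minimal sketch: a computation is described as a dataflow graph of operations, which is then evaluated as a whole. All names below are hypothetical and for illustration only; they are not Poplar's actual API.

```python
# Conceptual sketch of graph programming (NOT Poplar's API).
# A computation is built as a graph of nodes, then evaluated.

class Node:
    def __init__(self, op, inputs):
        self.op = op          # callable applied to the input values
        self.inputs = inputs  # upstream Node dependencies

def constant(value):
    """A leaf node producing a fixed value."""
    return Node(lambda: value, [])

def add(a, b):
    return Node(lambda x, y: x + y, [a, b])

def mul(a, b):
    return Node(lambda x, y: x * y, [a, b])

def evaluate(node):
    """Evaluate a node by first evaluating its inputs."""
    args = [evaluate(n) for n in node.inputs]
    return node.op(*args)

# Build the graph y = (2 + 3) * 4, then run it.
y = mul(add(constant(2), constant(3)), constant(4))
print(evaluate(y))  # 20
```

Describing the whole computation as a graph before execution is what lets a compiler schedule the work across many parallel tiles, which is the design point the IPU targets.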
The IPU is a completely new type of processor designed to help customers accelerate the development of current and next-generation machine intelligence products and services.
The IPU has been optimized to work efficiently on the extremely complex high-dimensional models that machine intelligence requires. It emphasizes massively parallel, low-precision floating-point compute and provides much higher compute density than other solutions.
Like a human brain, the IPU holds the complete machine learning model inside the processor and has over 100x more memory bandwidth than other solutions. This results in both lower power consumption and much higher performance.
Our IPU-Appliance (coming in 2017) is designed to lower the cost of accelerating AI applications in cloud and enterprise datacenters. The IPU-Appliance aims to increase the performance of both training and inference by 10x to 100x compared to the fastest systems in use today.
Our IPU-Accelerator is a PCIe card which can easily be plugged into a server to accelerate machine learning applications.