
Jul 05, 2022

Graphcore and AMD propose FP8 AI standard with Qualcomm support

Written By:

Roberto Mijat


Graphcore is leading calls for an industry-wide standard in 8-bit floating point compute for artificial intelligence (AI), as systems-makers and AI practitioners look to take advantage of the performance and efficiency gains offered by lower-precision numerical representations.

Graphcore has created an 8-bit floating point format designed for AI, which we propose be adopted by the IEEE working group tasked with defining a new binary arithmetic notation for use in machine learning.
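To give a feel for the general shape of such a format (this is an illustrative sketch, not the specification itself), the following Python snippet decodes an 8-bit float assuming a 1.4.3 layout (1 sign bit, 4 exponent bits, 3 mantissa bits) and an exponent bias of 7. The actual layouts, bias options and special-value handling are those defined in the published proposal.

```python
# Illustrative only: decode an 8-bit float, assuming a 1.4.3 bit layout
# (1 sign, 4 exponent, 3 mantissa bits) and an exponent bias of 7.
# Not the normative specification.

def decode_fp8_143(byte: int, bias: int = 7) -> float:
    sign = -1.0 if (byte >> 7) & 0x1 else 1.0
    exponent = (byte >> 3) & 0xF  # 4 exponent bits
    mantissa = byte & 0x7         # 3 mantissa bits
    if exponent == 0:
        # Subnormal value: no implicit leading 1
        return sign * (mantissa / 8) * 2.0 ** (1 - bias)
    return sign * (1 + mantissa / 8) * 2.0 ** (exponent - bias)

# 0b0_0111_000: sign 0, exponent 7, mantissa 0 -> +1.0 x 2^(7-7) = 1.0
print(decode_fp8_143(0b00111000))  # 1.0
```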

To ease adoption and build strong support for a common standard, we believe that AI computing is best served by an open, freely licensable standard. We are also offering the specification to other industry players, until such time as the IEEE formalises a standard.

Simon Knowles, CTO and co-founder of Graphcore said: “The advent of 8-bit floating point offers tremendous performance and efficiency benefits for AI compute. It is also an opportunity for the industry to settle on a single, open standard, rather than ushering in a confusing mix of competing formats.”

Mike Mantor, Corporate Fellow and Chief GPU Architect at AMD said: “This 8-bit floating point format will allow AMD to deliver dramatically improved training and inference performance for many types of AI models. As a strong supporter of industry standards, AMD is advocating for its adoption by the IEEE as the new standard for 8-bit floating point notation.”

John Kehrli, Senior Director of Product Management at Qualcomm Technologies, Inc. said: “This proposal has emerged as a compelling format for 8-bit floating point compute. It offers significant performance and efficiency gains for inference and can help reduce training and inference costs for cloud and edge. We are supportive of Graphcore’s proposal for 8-bit floating point as an industry standard for relevant applications.”


Setting the standard


The use of lower-precision and mixed-precision arithmetic, such as mixed 16-bit and 32-bit compute, is commonplace in AI, maintaining high levels of accuracy while delivering efficiencies that help counter the waning of Moore’s Law and Dennard Scaling.
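A small Python/NumPy sketch (illustrative only, not IPU code) shows why mixed precision matters: the same float16 inputs are summed once with a float16 accumulator and once with a float32 accumulator, and the low-precision accumulator stalls well short of the true total.

```python
import numpy as np

# Illustrative only: 10,000 copies of 0.01 stored in float16 (true sum ~100).
x = np.full(10_000, 0.01, dtype=np.float16)

# Pure float16 accumulation: once the running total is large enough, each
# small addend is below half a ULP and rounds away, so the sum stalls.
acc16 = np.float16(0.0)
for v in x:
    acc16 = np.float16(acc16 + v)

# Mixed precision: the same float16 inputs, but a float32 accumulator.
acc32 = np.float32(0.0)
for v in x:
    acc32 += np.float32(v)

print(acc16)  # stalls at 32.0, far from 100
print(acc32)  # ~100.02 (the inputs were rounded to float16 on the way in)
```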

With the move to 8-bit floating point, there is an opportunity for all of those involved in advancing artificial intelligence to coalesce around a standard that is AI-native and that will allow seamless interoperability across systems for both training and inference.

You can find more details of our proposal in a paper we’ve published on the 8-bit floating point format.

Any companies interested in licensing the technology before an industry standard is set can reach out to Graphcore at legalteam@graphcore.ai. We encourage all industry vendors to contribute to and join this standardisation effort.