
C600 IPU-Processor PCIe Card

The Graphcore® C600 IPU-Processor PCIe Card is a high-performance acceleration server card targeted at machine learning inference applications. Powered by the Graphcore Mk2 IPU Processor with FP8 support, the C600 is a dual-slot, full-height PCI Express Gen4 card designed for mounting in industry-standard server chassis to accelerate machine intelligence workloads.

  • Industry-standard full-height PCIe
  • Flexible & easy to use
  • Expert support to get you up and running quickly

Software tools and integrations support every step of the AI lifecycle, from development to deployment, improving productivity and AI infrastructure efficiency and making the C600 easier to use.

Supported frameworks and libraries: TensorFlow, PyTorch, PyTorch Lightning, PaddlePaddle, Keras, Hugging Face, ONNX
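As an illustration of the framework integrations listed above, the sketch below wraps a small PyTorch model for IPU inference using PopTorch, the PyTorch interface shipped with Graphcore's Poplar SDK. The model and tensor shapes are placeholder examples, and the snippet assumes the Poplar SDK (and its poptorch package) is installed and an IPU such as the C600 is visible to the host.

```python
# Minimal sketch: running a PyTorch model on an IPU with PopTorch.
import torch
import poptorch

# Placeholder model -- any torch.nn.Module can be wrapped the same way.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
)
model.eval()

# IPU execution options; the defaults target a single IPU.
opts = poptorch.Options()

# Compile the model for the IPU and run inference on the device.
ipu_model = poptorch.inferenceModel(model, options=opts)
logits = ipu_model(torch.randn(8, 128))
print(logits.shape)  # torch.Size([8, 10])
```

The same Options object also exposes settings such as device iterations and replication, which can be used to batch work and scale inference throughput across IPUs.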
IPU Processor: Graphcore Mk2 IPU Processor with FP8 support
IPU-Cores™: 1,472 IPU-Cores, each one a high-performance processor capable of multi-threaded, independent code execution
In-Processor Memory™: Each IPU-Core is paired with fast, local, tightly coupled In-Processor Memory; the C600 accelerator includes 900MB of In-Processor Memory
Compute: Up to 560 teraFLOPS of FP8 compute; up to 280 teraFLOPS of FP16 compute; up to 70 teraFLOPS of FP32 compute
System Interface: Dual 8-lane PCIe Gen4 interfaces
Thermal Solution: Passive
Form Factor: PCIe full-height/length; double-slot
System Dimensions: Length: 267mm (10.50”); Height: 111mm (4.37”); Width: 27.6mm (1.09”); Mass: 1.27kg (2.8lbs)
IPU-Link™ Support: 32 lanes, 128 GB/s bandwidth (64 GB/s in each direction)
TDP: 185W
Auxiliary Power Supply: 8-pin
Quality Level: Server grade