


Ideal for exploration, the Bow Pod16 gives you all the power, performance and flexibility you need to fast-track your IPU prototypes and speed from pilot to production. Bow Pod16 is your easy-to-use starting point for building better, more innovative AI solutions with IPUs, whether you're focused on language and vision, exploring GNNs and LSTMs, or creating something entirely new.

  • Compact 5U form factor
  • Flexible & easy to use
  • Expert support to get you up and running quickly
Bow Pod16

Cirrascale’s Graphcloud is giving many AI innovators their first experience of what Graphcore’s IPU can do, as well as providing a flexible scale-up platform for those who need to expand their compute capability. The addition of Bow Pods to Graphcloud takes AI computing in the cloud to new levels of performance – whether that’s used to accelerate massive models across large Pod configurations, or to put more power in the hands of individual users in multi-tenancy setups.


Cirrascale Cloud Services



Biotech, pharma and healthcare providers are choosing Bow Pod16 to refuel their AI-driven business transformation



Banks, insurance companies and asset managers can supercharge their AI labs with Bow Pod16 systems



Bringing intelligence to industry to detect flaws in materials and equipment that the human eye can't see

World-class results, whether you want to explore innovative models and new possibilities, faster time to train, higher throughput, or better performance per TCO dollar.

Time to Train

Chart: G16-EfficientNet-B4 training, Bow Pod platforms (preliminary results, pre-SDK 2.5) vs. DGX A100 (A100-SXM4-80GB, published results)

Software tools and integrations support every step of the AI lifecycle, from development to deployment, improving productivity and AI infrastructure efficiency while making the platform easier to use.

Processors: 16x Bow IPUs (4x Bow-2000 1U blade units)
In-Processor Memory: 14.4GB
Streaming Memory: Up to 1TB
Performance: 5.6 petaFLOPS FP16.16 | 1.4 petaFLOPS FP32
IPU Cores: 23,552
Threads: 141,312
Host-Link: 100 GE RoCEv2
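The aggregate figures above scale linearly from Graphcore's published per-IPU specifications for the Bow IPU (1,472 cores, 6 threads per core, 350 teraFLOPS FP16.16, 87.5 teraFLOPS FP32, 0.9GB In-Processor Memory). A quick sanity check under those assumptions:

```python
# Sanity check: Bow Pod16 aggregate specs derived from per-IPU figures.
# Per-IPU numbers below are Graphcore's published Bow IPU specs.
NUM_IPUS = 16
CORES_PER_IPU = 1472         # processor cores per Bow IPU
THREADS_PER_CORE = 6         # hardware worker threads per core
FP16_TFLOPS_PER_IPU = 350    # peak FP16.16 teraFLOPS per Bow IPU
FP32_TFLOPS_PER_IPU = 87.5   # peak FP32 teraFLOPS per Bow IPU
MEM_PER_IPU_GB = 0.9         # In-Processor Memory per Bow IPU

print(NUM_IPUS * CORES_PER_IPU)                     # 23552 IPU cores
print(NUM_IPUS * CORES_PER_IPU * THREADS_PER_CORE)  # 141312 threads
print(NUM_IPUS * FP16_TFLOPS_PER_IPU / 1000)        # 5.6 petaFLOPS FP16.16
print(NUM_IPUS * FP32_TFLOPS_PER_IPU / 1000)        # 1.4 petaFLOPS FP32
print(round(NUM_IPUS * MEM_PER_IPU_GB, 1))          # 14.4 GB In-Processor Memory
```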

Software: Poplar SDK

ML Frameworks: TensorFlow, PyTorch, PyTorch Lightning, Keras, PaddlePaddle, Hugging Face, ONNX, HALO

System Management: OpenBMC, Redfish (DMTF), IPMI over LAN, Prometheus, Grafana

Workload Orchestration: Slurm, Kubernetes

Virtualization: OpenStack, VMware ESXi
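On the monitoring side, Prometheus pulls telemetry from an HTTP metrics endpoint, which Grafana then visualizes. A minimal sketch of a scrape job is below; the job name, hostname, and port are illustrative placeholders, not Graphcore defaults, so substitute the metrics endpoint exposed by your own deployment:

```yaml
# prometheus.yml (fragment): hypothetical scrape job for Bow Pod telemetry.
# Target host and port are placeholders for your own exporter endpoint.
scrape_configs:
  - job_name: "bow-pod16"
    scrape_interval: 15s
    static_configs:
      - targets: ["bow-pod16-mgmt.example.com:9100"]
```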

System Weight: 66kg + host server
System Dimensions: 4U + host servers and switches
Host Server: Selection of approved host servers from Graphcore partners
Storage: Selection of approved systems from Graphcore partners
Thermal: Air-cooled

For more performance results, visit our Performance Results page.