


When you're ready to grow your AI compute capacity at supercomputing scale, choose Bow Pod256, a system designed for production deployment in your enterprise datacenter, private cloud or public cloud. Experience massive efficiency and productivity gains when large language model training runs complete in hours or minutes instead of weeks or months. Bow Pod256 delivers AI at scale.

Also available as Bow Pod512 and Bow Pod1024

  • IPU at supercomputing scale
  • World-leading language and vision performance for new and emerging models
  • Support multiple users and mixed workloads across many smaller vPods, or harness the full power of 256 Bow IPUs

For G-Core Labs customers, performance means progress: Graphcore IPUs let them develop and deploy their AI models faster and reach results that benefit their business sooner. The increase in compute power delivered by Bow Pods is going to supercharge innovation in artificial intelligence, while their easy availability on G-Core Labs cloud ensures that the opportunity is accessible to all.

Andre Reitenbach, CEO

G-Core Labs

Natural Language Processing


Natural language processing (NLP) delivers business value today for organizations from finance firms to biotech leaders, scale-ups to hyperscalers, improving internet search, sentiment analysis, fraud detection, chatbots, drug discovery and more. Choose Bow Pod256 whether you are running large BERT models in production or starting to explore GPT-class models or GNNs.

Computer Vision


State-of-the-art computer vision is driving breakthroughs in medical imaging, claims processing, cosmology, smart cities, self-driving cars, and more. Bow Pod256 delivers world-leading performance for traditional powerhouses like ResNet-50 as well as high-accuracy emerging models like EfficientNet and Transformer-based vision models.

Scientific Research


National labs, universities and research institutes are turning to Bow Pod256 to solve problems in physics, weather forecasting, computational fluid dynamics, protein folding, oil & gas exploration and more. Take advantage of the IPU's fine-grained compute at scale for emerging Graph Neural Networks (GNNs) and probabilistic models, explore sparsity, and make the convergence of HPC and AI a reality.

World-class results, whether you want to explore innovative models and new possibilities, reduce time to train, increase throughput, or maximize performance per TCO dollar.

EfficientNet Training Throughput

Bow Pod Platforms | Preliminary Results (Pre-SDK2.5) | G16-EfficientNet-B4 Training

Software tools and integrations support every step of the AI lifecycle, from development to deployment, improving productivity and AI infrastructure efficiency while making the system easier to use.

Processors: 256x Bow IPUs
1U blade units: 64x Bow-2000 machines
In-Processor Memory: 230.4GB
Streaming Memory: Up to 16.3TB
Performance: 89.6 petaFLOPS FP16.16 / 22.4 petaFLOPS FP32
IPU Cores: 376,832
Threads: 2,260,992
Host-Link: 100 GE RoCEv2
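These headline totals scale linearly from a single Bow IPU: dividing the spec-sheet figures by 256 gives 1,472 cores per IPU, six threads per core, and 900MB of In-Processor Memory per IPU. A quick arithmetic check:

```python
# Sanity-check the Bow Pod256 totals against the per-IPU figures implied
# by the spec sheet (376,832 cores / 256 IPUs = 1,472 cores per IPU;
# 2,260,992 threads / 376,832 cores = 6 threads per core).
ipus = 256
cores_per_ipu = 1472
threads_per_core = 6
mem_per_ipu_mb = 900            # 230.4GB total / 256 IPUs
fp16_tflops_per_ipu = 350       # 89.6 petaFLOPS total / 256 IPUs

assert ipus * cores_per_ipu == 376_832                       # IPU Cores
assert ipus * cores_per_ipu * threads_per_core == 2_260_992  # Threads
assert ipus * mem_per_ipu_mb == 230_400                      # 230.4GB In-Processor Memory
assert ipus * fp16_tflops_per_ipu == 89_600                  # 89.6 petaFLOPS FP16.16
print("spec totals consistent")
```

The same linear scaling applies to the larger Bow Pod512 and Bow Pod1024 configurations.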


TensorFlow, PyTorch, PyTorch Lightning, Keras, PaddlePaddle, Hugging Face, ONNX, HALO

OpenBMC, Redfish (DMTF), IPMI over LAN, Prometheus, and Grafana
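As a sketch of how Redfish-based management is consumed, the snippet below builds and queries the standard DMTF service root, the entry point for chassis, thermal and power telemetry. The BMC hostname is a placeholder, and a real deployment would add authentication:

```python
import json
import urllib.request

def redfish_service_root_url(bmc_host: str) -> str:
    """Build the standard DMTF Redfish service-root URL for a BMC."""
    return f"https://{bmc_host}/redfish/v1/"

def get_service_root(bmc_host: str) -> dict:
    """Fetch the Redfish service root, from which chassis, thermal and
    power resources can be discovered."""
    req = urllib.request.Request(
        redfish_service_root_url(bmc_host),
        headers={"Accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage (hypothetical BMC address):
# root = get_service_root("pod-bmc.example.com")
# print(root["RedfishVersion"])
```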

Slurm, Kubernetes

OpenStack, VMware ESXi

System Weight: 1,800kg + host servers and switches
System Dimensions: 64U + host servers and switches
Host Server: Selection of approved host servers from Graphcore partners
Storage: Selection of approved systems from Graphcore partners
Thermal: Air-cooled

For more performance results, visit our Performance Results page