
Designed for Machine Intelligence

The Bow IPU

Delivering unprecedented performance and power efficiency for current and future AI innovation

A Major Leap Forward

The Bow IPU is the first processor in the world to use Wafer-on-Wafer (WoW) 3D stacking technology, taking the proven benefits of the IPU to the next level.

Featuring groundbreaking advances in compute architecture and silicon implementation, communication and memory, each Bow IPU delivers up to 350 teraFLOPS of AI compute, an impressive 40% leap forward in performance and up to 16% more power efficiency compared to the previous generation IPU.
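A quick back-of-envelope check of the quoted figures (illustrative only; the previous-generation number below is inferred from the 40% claim, assuming it applies to peak AI compute):

```python
# Sanity check: 350 teraFLOPS at a "40% leap forward" implies roughly
# 250 teraFLOPS for the previous-generation IPU (inferred, not quoted).
bow_tflops = 350
speedup = 1.40  # "40% leap forward in performance"
prev_gen_tflops = bow_tflops / speedup
print(round(prev_gen_tflops))  # 250
```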

  • PCIe Gen4 x16
  • 64GB/s bidirectional bandwidth to host
  • Advanced silicon 3D stacking technology
  • Closely coupled power delivery die
  • Higher operating frequency and enhanced overall performance
  • Efficient power delivery
  • Enables increased operational performance
  • 11TB/s all to all IPU-Exchange
  • Non-blocking, any communication pattern
  • 1472 independent IPU-Tiles each with IPU-Core™ and In-Processor-Memory™
  • 900MB In-Processor-Memory per IPU
  • 65.4TB/s memory bandwidth per IPU
  • 1472 independent IPU-Cores
  • 8832 independent program threads executing in parallel
  • 10x IPU-Links
  • 320GB/s chip to chip bandwidth
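The per-chip figures above imply some useful per-tile numbers. A back-of-envelope sketch (derived arithmetic, not quoted specifications):

```python
# Derive per-tile figures from the quoted per-IPU specifications.
tiles = 1472            # independent IPU-Tiles per Bow IPU
threads = 8832          # independent program threads per IPU
mem_mb = 900.0          # In-Processor-Memory per IPU, in MB
mem_bw_tbs = 65.4       # memory bandwidth per IPU, in TB/s

print(threads // tiles)                       # 6 hardware threads per tile
print(round(mem_mb / tiles, 2))               # ~0.61 MB of local memory per tile
print(round(mem_bw_tbs * 1e12 / tiles / 1e9)) # ~44 GB/s memory bandwidth per tile
```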

TSMC has worked closely with Graphcore as a leading customer for our breakthrough SoIC-WoW (Wafer-on-Wafer) solution as their pioneering designs in cutting-edge parallel processing architectures make them an ideal match for our technology. Graphcore has fully exploited the ability to add power delivery directly connected via our WoW technology to achieve a major step up in performance, and we look forward to working with them on further evolutions of this technology.

Paul de Bot, General Manager

TSMC Europe

What Makes IPUs Better For AI?

The IPU is a completely new kind of massively parallel processor, co-designed from the ground up with the Poplar® SDK to accelerate machine intelligence.

The compute and memory architecture are designed for AI scale-out. The hardware is developed together with the software, delivering a platform that is easy to use and excels at real-world applications.

Parallelism and Memory Access

CPU
Parallelism: Designed for scalar processes
Memory access: Off-chip memory

GPU
Parallelism: SIMD/SIMT architecture, designed for large blocks of dense contiguous data
Memory access: Model and data spread across off-chip memory, a small on-chip cache, and shared memory

IPU
Parallelism: Massively parallel MIMD, designed for fine-grained, high-performance computing
Memory access: Model and data tightly coupled, with large, locally distributed SRAM
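The MIMD distinction above can be shown in miniature: where SIMD lanes all execute the same instruction on different data, MIMD tiles can each run an entirely different program. A toy sketch using plain Python threads (illustrative only, not IPU code):

```python
import threading

# Each "tile" runs its own independent program on its own local data,
# mimicking the MIMD model with ordinary Python threads.
results = {}

def tile_program(tile_id, fn, data):
    # Independent instruction stream per tile: fn differs per thread.
    results[tile_id] = fn(data)

programs = [
    (0, sum, [1, 2, 3]),                        # tile 0: a reduction
    (1, max, [4, 1, 7]),                        # tile 1: a different operation
    (2, lambda d: [x * 2 for x in d], [5, 6]),  # tile 2: an elementwise map
]
threads = [threading.Thread(target=tile_program, args=p) for p in programs]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)
```

Each thread completes a different computation in parallel, which no single shared instruction stream could express.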

Read Whitepaper

Bow Pod16

Ideal for exploring, the Bow Pod16 gives you all the power, performance and flexibility you need to fast-track IPU prototyping and leap from pilot to production.

Learn more

Bow Pod64

Ramp up your AI projects, speed up production and see faster time to business value. Bow Pod64 is the powerful, flexible building block for world-leading AI performance.

Learn more

Bow Pod256

When you're ready to grow your processing capacity to supercomputing scale, choose Bow Pod256 for production deployment in your enterprise datacenter, private or public cloud.

Learn more