Scientists are increasingly looking to AI methods to address their data processing challenges. IPU hardware empowers researchers to think beyond what they can achieve with today’s CPUs and GPUs and instead come up with the best possible algorithm to solve their problem.
With 59.4 billion transistors and more FP32 compute than any other processor, the IPU is a massively parallel chip built to run AI workloads efficiently. Many of today's leading researchers are seeing impressive results when using IPUs for their own scientific experiments.
As climate change intensifies, extreme weather events such as storm surges, heatwaves, hurricanes and wildfires are becoming more frequent. While plans to cut emissions by 2030 were recently laid out at COP26, preparation for extreme weather events is growing in importance globally.
The science of numerical weather forecasting plays a vital role in this preparation. By predicting future weather events based on current climate data, organisations like the European Centre for Medium-Range Weather Forecasts (ECMWF) are working to alert authorities to upcoming extreme weather events earlier and with greater accuracy, so interventions can be made to protect property and infrastructure, and potentially to save lives.
Neural networks have the potential to simulate cosmology data faster and more accurately than many traditional data analysis methods. The Université de Paris accelerated two cosmology deep learning use cases with IPUs, using a variational autoencoder (VAE), a deterministic neural network and a Bayesian neural network.
“Many of the data simulations that we have today are based on quite simple galaxy models. And with neural networks, what you can do is also learn more complex shapes of galaxies. And so that’s also very interesting to generate more realistic galaxy images. If the user wants to generate data on-the-fly to train neural networks, I would recommend using IPUs.”
Bastien Arcelin, Researcher at the Université de Paris
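At the heart of the VAE approach to generating galaxy images is the reparameterisation trick: the encoder predicts a mean and log-variance for each latent dimension, and new samples are drawn by mixing in Gaussian noise before decoding. A minimal NumPy sketch of that sampling step (the function name, shapes and values here are illustrative, not taken from the Université de Paris code):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z ~ N(mu, exp(log_var)) via z = mu + sigma * eps,
    which keeps the sampling step differentiable for training."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

rng = np.random.default_rng(0)

# Toy encoder outputs: a batch of 4 inputs mapped to a 2-D latent space.
mu = np.zeros((4, 2))       # predicted latent means
log_var = np.zeros((4, 2))  # predicted log-variances (sigma = 1 here)

z = reparameterize(mu, log_var, rng)
# A decoder network would now map each latent vector z to a galaxy image.
print(z.shape)  # (4, 2)
```

In a full VAE the decoder learns the mapping from latent vectors back to images, which is what lets it produce more realistic galaxies than simple parametric galaxy models.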
IPU systems allow physics researchers to run more complex algorithms and explore the potential of AI in particle physics. The University of Bristol benefitted from the IPU’s highly parallel compute in their research on CERN’s LHCb experiment, with 5.4x faster performance for their generative adversarial network (GAN) application.
“The capabilities and performance gains that we demonstrated showed the versatility of the IPU’s unique architecture. Moreover, the support that we received from Graphcore has been critical, and remains so, in our ongoing programme of exploring the power of IPUs for processing particle physics’ vast and rapidly increasing datasets.”
Jonas Rademacker, Professor of Physics at the University of Bristol
In a time of ever-growing model size and complexity, sparse computation offers a compelling solution. Graphcore's technology has been designed to enable sparsity: IPU hardware can execute the many different calculations essential to sparse computation independently and in parallel, with sparse kernels and libraries integrated at the software level.
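To see why sparsity pays off, note that a sparse kernel stores and multiplies only the non-zero entries, so the work scales with the number of non-zeros rather than the full matrix size. A minimal pure-Python sketch of a matrix-vector product in the standard CSR (compressed sparse row) format (this illustrates the general technique, not Graphcore's own library API):

```python
def csr_matvec(data, indices, indptr, x):
    """Compute y = A @ x where A is stored in CSR form:
    data    -- the non-zero values, listed row by row
    indices -- the column index of each value in `data`
    indptr  -- indptr[i]:indptr[i+1] slices out row i's values
    Work is proportional to the number of non-zeros, not rows * cols.
    """
    y = [0.0] * (len(indptr) - 1)
    for i in range(len(indptr) - 1):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# The 3x3 matrix [[2, 0, 0], [0, 0, 3], [0, 4, 0]] holds only 3 non-zeros.
data, indices, indptr = [2.0, 3.0, 4.0], [0, 2, 1], [0, 1, 2, 3]
print(csr_matvec(data, indices, indptr, [1.0, 1.0, 1.0]))  # [2.0, 3.0, 4.0]
```

In a sparse neural network the same idea applies to weight matrices: the many small, independent per-row accumulations are exactly the kind of fine-grained parallel work a massively parallel processor can spread across its cores.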
Graphcore has joined the three-year European SparCity project, which seeks to create a supercomputing framework designed to harness sparse compute for new AI applications. Simula Research Laboratory is another key participant in SparCity:
“We are impressed with Graphcore’s technology for energy-efficient construction and execution of large, next-generation ML models, and we expect significant performance gains for several of our AI-oriented research projects in medical imaging and cardiac simulations.”
Are Magnus Bruaset, Research Director at Simula Research Laboratory
“Creating and maintaining solid-state qubits for use in quantum computers is, unsurprisingly, complex. Tuning them and keeping them stable requires analysing and controlling many sensitive variables in real-time. It is a perfect machine learning problem.
The advanced AI models we use are already testing the limits of today’s accelerators. Our early work with Graphcore’s IPU has resulted in dramatic performance gains, thanks to its raw computing power, and the way it manages classic AI challenges such as sparsity.
We’re tremendously excited by the announcement of Graphcore’s next generation IPU technology, and the associated computational power that will propel us further and faster into the future of quantum computing.”
Professor Andrew Briggs, Department of Materials, University of Oxford
Pre-configured as a 5.6 petaFLOPS AI system, Bow Pod16 is where you experience the power and flexibility of larger IPU systems.
With 22.6 petaFLOPS of AI compute for both training and inference workloads, Bow Pod64 is designed for AI at scale.
When you're ready to grow your processing capacity to supercomputing scale, choose Bow Pod256 for production deployment in your enterprise datacenter, private or public cloud.