Scientists are increasingly looking to AI methods to address their data processing challenges. IPU hardware empowers researchers to think beyond what they can achieve with today’s CPUs and GPUs and instead come up with the best possible algorithm to solve their problem.
With 59.4Bn transistors and more FP32 compute than any other processor, the IPU is a massively parallel processor built to run AI workloads efficiently. Many of today’s leading researchers are seeing impressive results when using IPUs for their own scientific experiments.
Neural networks have the potential to simulate cosmology data faster and more accurately than many traditional data analysis methods. The Université de Paris accelerated two cosmology deep learning use cases on IPUs, using a variational autoencoder (VAE), a deterministic neural network, and a Bayesian neural network.
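To illustrate the generative approach described above, here is a minimal NumPy sketch of a VAE's sampling path. This is not the Université de Paris model: the tiny dimensions, random weights, and linear encoder/decoder are stand-ins for a trained network, chosen only to show how the reparameterisation trick lets a VAE generate new images from latent draws.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; real galaxy images are far larger).
x_dim, z_dim = 64, 8

# Randomly initialised linear maps stand in for trained encoder/decoder networks.
W_mu = rng.normal(0, 0.1, (z_dim, x_dim))
W_logvar = rng.normal(0, 0.1, (z_dim, x_dim))
W_dec = rng.normal(0, 0.1, (x_dim, z_dim))

def encode(x):
    """Map an input to the parameters of a diagonal Gaussian posterior."""
    return W_mu @ x, W_logvar @ x

def reparameterise(mu, logvar):
    """Sample z = mu + sigma * eps, keeping the sampling step differentiable."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    """Map a latent sample back to image space."""
    return W_dec @ z

x = rng.standard_normal(x_dim)        # a stand-in "galaxy image" vector
mu, logvar = encode(x)
z = reparameterise(mu, logvar)
x_hat = decode(z)                     # reconstruction of the input

# Once trained, new images are generated by decoding draws from the prior:
x_new = decode(rng.standard_normal(z_dim))
```

Generating images this way only requires forward passes through the decoder, which is why on-the-fly data generation maps well onto a highly parallel accelerator.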
“Many of the data simulations that we have today are based on quite simple galaxy models. And with neural networks, what you can do is also learn more complex shapes of galaxies. And so that’s also very interesting to generate more realistic galaxy images. If the user wants to generate data on-the-fly to train neural networks, I would recommend using IPUs.”
Bastien Arcelin, Researcher at the Université de Paris
IPU systems allow physics researchers to run more complex algorithms and explore the potential of AI in particle physics. The University of Bristol benefitted from the IPU’s highly parallel compute in their research on CERN’s LHCb experiment, with 5.4x faster performance for their generative adversarial network (GAN) application.
“The capabilities and performance gains that we demonstrated showed the versatility of the IPU’s unique architecture. Moreover, the support that we received from Graphcore has been critical, and remains so, in our ongoing programme of exploring the power of IPUs for processing particle physics’ vast and rapidly increasing datasets.”
Jonas Rademacker, Professor of Physics at the University of Bristol
In a time of ever-growing model size and complexity, sparse computation offers a compelling solution. Graphcore’s technology has been designed to enable sparsity. IPU hardware can execute the many different calculations essential to sparse computation independently and in parallel, with sparse kernels and libraries integrated at the software level.
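As a small illustration of the sparse kernels mentioned above (a generic SciPy sketch, not Graphcore's implementation), the example below prunes ~90% of a weight matrix to zero and stores it in compressed sparse row (CSR) format, so the matrix-vector product touches only the non-zero entries while producing the same result as the dense computation.

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)

# A weight matrix with roughly 90% of entries pruned to zero,
# as in a sparse (pruned) model.
n = 1000
mask = rng.random((n, n)) < 0.1
W = rng.standard_normal((n, n)) * mask

# CSR stores only the non-zeros, so memory use and multiply cost
# scale with the number of non-zero entries rather than n*n.
W_sparse = sparse.csr_matrix(W)
x = rng.standard_normal(n)

y_dense = W @ x          # dense kernel: n*n multiply-adds
y_sparse = W_sparse @ x  # sparse kernel: ~0.1 * n*n multiply-adds

# Both kernels compute the same product.
assert np.allclose(y_dense, y_sparse)
```

The arithmetic saving is exactly what hardware with fine-grained parallelism can exploit: each non-zero's multiply-add is independent and can be scheduled in parallel.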
Graphcore has joined the three-year European SparCity project, which seeks to create a supercomputing framework designed to harness sparse compute for new AI applications. Simula Research Laboratory is another key participant in SparCity:
“We are impressed with Graphcore’s technology for energy-efficient construction and execution of large, next-generation ML models, and we expect significant performance gains for several of our AI-oriented research projects in medical imaging and cardiac simulations.”
Are Magnus Bruaset, Research Director at Simula Research Laboratory
“Creating and maintaining solid state for use in quantum computers is, unsurprisingly, complex. Tuning them and keeping them stable requires analysing and controlling many sensitive variables in real-time. It is a perfect machine learning problem.
The advanced AI models we use are already testing the limits of today’s accelerators. Our early work with Graphcore’s IPU has resulted in dramatic performance gains, thanks to its raw computing power, and the way it manages classic AI challenges such as sparsity.
We’re tremendously excited by the announcement of Graphcore’s next generation IPU technology, and the associated computational power that will propel us further and faster into the future of quantum computing.”
Professor Andrew Briggs, Department of Materials, University of Oxford
IPU-POD16 is a pre-configured 4 petaFLOPS AI system, and the place to experience the power and flexibility of larger IPU systems.
Designed to support professors, researchers, principal investigators, postdocs, and PhD and Masters students conducting and publishing research using IPUs, or using them in coursework or teaching.
A secure IPU cloud service to add state-of-the-art AI compute on demand - no on-premise infrastructure deployment required.