Machine Intelligence Academy
Championing ground-breaking AI research and innovation in higher education.
Empowering Machine Learning Research
Graphcore builds tools and technology to enable innovations in Artificial Intelligence. Our academic programme is designed to support Professors, Researchers, Principal Investigators, Postdocs, PhD and Masters students who conduct and publish research using IPUs, or who use IPUs in their coursework or teaching.
We are actively looking for projects and proposals that fall into one of the following ML areas, but we are also open to hearing ideas for new and novel ways to use IPUs:
- Graph Neural Networks
- Methods for Efficient Stochastic Learning
- Distributed Learning in Large-Scale Machines
- New IPU Efficient Architectures for Multi-Modal Learning
- Conditional Sparse Training
We are looking to hear from Professors who are interested in developing and teaching coursework around Poplar software and IPU hardware.
Universities selected to participate in the programme will benefit from the following:
- Opportunity to test and use IPU hardware in the cloud at no cost
- Support letters for grant and funding proposals
- Access to the latest Graphcore software tools, including Poplar® and PopART®
- Support and regular check-ins from Graphcore's in-house researchers and engineers
UC Berkeley & Google Brain used IPUs for parallel training of deep neural networks with local updates. They found that local parallelism is particularly effective in the high-compute regime.
“The work we did with Graphcore on parallel training of deep networks with local updates illustrates how the IPU’s radically different processor architecture can help enable new approaches to distributed computation and the training of ever-larger models. It is indicative of how Graphcore’s technology does not just deliver quantitatively better performance, against measures such as throughput and latency. The technology is also opening up fundamentally new approaches to the computational challenges that could otherwise hinder the progress of AI.”
Professor Pieter Abbeel, UC Berkeley
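The core idea behind the Berkeley/Google Brain study can be illustrated with a toy sketch of local-update (blockwise-parallel) training: each block of a network is trained against its own local loss, so no gradients cross block boundaries and blocks can in principle be updated in parallel on separate devices. The shapes, losses and hyperparameters below are illustrative only, not the paper's setup or any Graphcore API.

```python
import numpy as np

# Toy sketch of local-update training: two linear "blocks", each updated
# only from its own local loss. All values here are illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))            # toy inputs
Y = rng.normal(size=(64, 4))            # toy regression targets

W1 = 0.1 * rng.normal(size=(8, 4))      # block 1 weights
W2 = 0.1 * rng.normal(size=(4, 4))      # block 2 weights
lr, N = 0.05, len(X)

initial_loss = float(np.mean((X @ W1 @ W2 - Y) ** 2))
for _ in range(300):
    H = X @ W1
    # Block 1 update: local loss on its own output; no signal from block 2.
    W1 -= lr * (2 / N) * X.T @ (H - Y)
    # Block 2 update: treats block 1's output as a fixed ("detached") input,
    # so its gradient never propagates back across the block boundary.
    W2 -= lr * (2 / N) * H.T @ (H @ W2 - Y)
final_loss = float(np.mean((X @ W1 @ W2 - Y) ** 2))
```

Because the two updates are independent given the forward activations, each block could live on its own processor and update concurrently, which is the property the study exploited in the high-compute regime.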
The Robot Vision Group at Imperial College London accelerated a classical Computer Vision problem using Gaussian Belief Propagation on IPUs.
“Having led one of the first academic teams to conduct and publish research based on the Graphcore IPU, this is a technology that brings both quantitative and qualitative benefits. We saw the IPU outperforming legacy chip architectures in our computer vision work, but also expanding our understanding of what was computationally possible in this field.”
Andrew Davison, Professor of Robot Vision at Imperial College London
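Gaussian belief propagation, the technique the Imperial team accelerated, can be sketched on the simplest possible factor graph: a 1-D chain with Gaussian measurements and pairwise smoothness factors, with messages held in information form (precision, precision × mean). This is a minimal illustration, not the Imperial group's implementation.

```python
import numpy as np

def gbp_chain(z, lam_meas, lam_smooth):
    """Exact posterior means on a Gaussian chain via two message sweeps.

    z          -- noisy scalar measurements, one per node
    lam_meas   -- precision of each measurement factor
    lam_smooth -- precision of each pairwise smoothness factor
    """
    n = len(z)
    f_prec = np.zeros(n); f_info = np.zeros(n)   # forward messages i-1 -> i
    b_prec = np.zeros(n); b_info = np.zeros(n)   # backward messages i+1 -> i
    for i in range(n - 1):                       # forward sweep
        prec = lam_meas + f_prec[i]
        mu = (lam_meas * z[i] + f_info[i]) / prec
        out = prec * lam_smooth / (prec + lam_smooth)   # marginalise x_i
        f_prec[i + 1], f_info[i + 1] = out, out * mu
    for i in range(n - 1, 0, -1):                # backward sweep
        prec = lam_meas + b_prec[i]
        mu = (lam_meas * z[i] + b_info[i]) / prec
        out = prec * lam_smooth / (prec + lam_smooth)
        b_prec[i - 1], b_info[i - 1] = out, out * mu
    # Posterior mean at each node: fuse measurement with both messages.
    return (lam_meas * z + f_info + b_info) / (lam_meas + f_prec + b_prec)

smoothed = gbp_chain(np.array([0.0, 1.1, 1.9, 3.2, 3.9]), 1.0, 4.0)
```

On a chain, two sweeps give the exact posterior mean; on loopy graphs such as those arising in computer vision, the same local message updates are simply iterated, and it is that fine-grained, message-parallel structure that maps well onto the IPU's architecture.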
Université de Paris used Graphcore IPUs to accelerate neural network training for cosmology applications, with results showing that IPUs can deliver up to 4x faster time to train than GPUs in some cases.
A researcher at its Astroparticle and Cosmology Laboratory examined two deep learning use cases: galaxy image generation from a trained VAE latent space, and galaxy shape estimation using both a deterministic deep neural network and a Bayesian neural network (BNN).
“Many of the data simulations that we have today are based on quite simple galaxy models. And with neural networks, what you can do is also learn more complex shapes of galaxies. And so that’s also very interesting to generate more realistic galaxy images. If the user wants to generate data on-the-fly to train neural networks, I would recommend using IPUs.”
Bastien Arcelin, Researcher at the Université de Paris
University of Bristol applied IPUs to HPC workloads for particle physics challenges at CERN. IPUs accelerated both training and inference for GANs, delivering up to a 5.4x performance increase compared with GPUs.
“Our work examined the applicability of Graphcore’s IPU to several computational problems found in particle physics and critical to our research on the LHCb experiment at CERN. The capabilities and performance gains that we demonstrated showed the versatility of the IPU’s unique architecture. Moreover, the support that we received from Graphcore has been critical, and remains so, in our ongoing programme of exploring the power of IPUs for processing particle physics’ vast and rapidly increasing datasets.”
Jonas Rademacker, Professor of Physics at the University of Bristol
University of Massachusetts accelerated Covid-19 modelling using Approximate Bayesian Computation, achieving a 30x speedup on IPUs compared with CPUs and a 7.5x speedup compared with GPUs.
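Approximate Bayesian Computation (ABC) is simple to sketch in its rejection form: draw a parameter from the prior, simulate data, and accept the draw if the simulation is close enough to the observation. The toy branching-epidemic model below is purely illustrative and is not the UMass model; every name and number in it is an assumption.

```python
import numpy as np

# Rejection-ABC sketch: infer the transmission probability p of a toy
# branching epidemic from an observed final outbreak size. Illustrative only.
rng = np.random.default_rng(1)

def simulate_outbreak(p, n_steps=10):
    """Simulate a crude branching process: each case meets ~3 people."""
    infected, total = 1, 1
    for _ in range(n_steps):
        infected = rng.binomial(infected * 3, p)
        total += infected
        if infected == 0:
            break
    return total

observed = 40       # hypothetical observed outbreak size
eps = 5             # acceptance tolerance on the summary statistic
accepted = []
for _ in range(20000):
    p = rng.uniform(0.0, 1.0)                      # flat prior on p
    if abs(simulate_outbreak(p) - observed) <= eps:
        accepted.append(p)                         # keep draws that match

posterior_mean = float(np.mean(accepted))
```

Each simulation is independent of the others, so the sampler is embarrassingly parallel; that per-draw independence is what makes ABC workloads amenable to massively parallel accelerators.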
The University of Oxford applied IPU-powered machine learning to the tuning and stabilisation of solid-state qubits for quantum computing.
“Creating and maintaining solid state qubits for use in quantum computers is, unsurprisingly, complex. Tuning them and keeping them stable requires analysing and controlling many sensitive variables in real-time. It is a perfect machine learning problem. The advanced AI models we use are already testing the limits of today’s accelerators. Our early work with Graphcore’s IPU has resulted in dramatic performance gains, thanks to its raw computing power, and the way it manages classic AI challenges such as sparsity. We’re tremendously excited by the announcement of Graphcore’s next generation IPU technology, and the associated computational power that will propel us further and faster into the future of quantum computing.”
Professor Andrew Briggs, University of Oxford