The service is set to become the go-to cloud platform for AI-first companies that want faster, more accurate and more actionable intelligence in applications such as natural language processing, computer vision and emerging graph neural networks.
G-Core Labs IPU Cloud features Graphcore’s highly differentiated, made-for-AI IPU Bow Pod platform and mature software stack. It brings cloud compute customers Graphcore’s proven ability to accelerate AI workloads and enable innovative new approaches across industries as diverse as biotech, healthcare, financial services, manufacturing, and consumer internet.
With locations in Luxembourg and the Netherlands, G-Core's IPU Cloud is ideally suited to those who need to process their data safely within the EU or EEA, in line with regulatory requirements around data sovereignty. A UK-based service will be launching later this year.
“The credentials of the Graphcore IPU are well established – it is opening up new avenues of exploration in AI while delivering industry-leading performance-per-dollar. Now, with IPU systems available in G-Core Cloud, it is easier than ever for anyone to access this revolutionary technology and build the compute capability that suits their needs – from starting out to scaling up,” said Andre Reitenbach, CEO at G-Core Labs.
Unleash the IPU advantage today
Customers beginning their IPU journey can do so quickly and easily, using the included Graphcore Poplar SDK. Its deep integration with leading machine learning frameworks – including TensorFlow, PyTorch, HuggingFace, Keras, PyTorch Lightning and PaddlePaddle – allows users to migrate their existing machine learning models to IPUs with minimal code changes.
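As an illustration of what “minimal code changes” can look like in PyTorch, the sketch below wraps an ordinary model with Graphcore’s PopTorch integration (part of the Poplar SDK). The `poptorch.inferenceModel` and `poptorch.Options` calls are from PopTorch; the CPU fallback path is an illustrative assumption for machines without the SDK installed, not part of any official recipe.

```python
import torch

# A plain PyTorch model -- the model definition itself is unchanged
# by the migration to IPU.
model = torch.nn.Linear(10, 2)

try:
    # poptorch ships with Graphcore's Poplar SDK and requires IPU
    # hardware (or the SDK's supported environments) to execute.
    import poptorch

    # One-line wrap: compiles the model for the IPU.
    model = poptorch.inferenceModel(model, options=poptorch.Options())
except ImportError:
    # No Poplar SDK present: keep running on CPU/GPU as before.
    pass

# The call site is identical either way.
out = model(torch.randn(4, 10))
print(out.shape)
```

The same pattern applies to training, where `poptorch.trainingModel` wraps the model together with its optimizer.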
Developers and data scientists also have the option to start with reference models optimized for IPU systems. These are available in Graphcore’s GitHub repository and cover a wide range of popular models for image classification, object detection, natural language processing, speech recognition and graph tasks.
All of our reference models take advantage of the IPU’s unique and differentiated architecture to deliver outstanding performance and results.
With Graphcore IPU compute, time-to-train can be reduced from days to hours and from hours to minutes across many popular models. For deployment, inference models deliver higher throughput and lower latency.
Fine-tuning the language model BERT is 2.5x faster than on the latest cloud GPUs, training the higher-accuracy vision model EfficientNet is 5x faster, and the graph neural network TGNN delivers 4x better performance with higher accuracy on an IPU.
IPU instances are available on-demand via a pay-as-you-go usage model with no upfront commitment. Alternatively, enterprises can choose the predictability of a fixed monthly cost and guaranteed availability. Either way, G-Core IPU Cloud eliminates the need for hefty upfront investment in AI infrastructure.
G-Core Labs has a proven track record with security and data safeguarding, with more than a decade of experience providing the secure computing backbone to some of the world’s leading technology companies.
Rapid deployment of resources means you spend less time setting up new applications and more time on business transformation through AI.
G-Core IPU Cloud facilitates an elastic business model, delivering the right amount of IPU resources for each stage of your AI journey.
Support from AI experts
From model porting and optimisation through to full design and provisioning of your cloud environment, we can help start or optimise your journey on the IPU Cloud.
Deployed and operated anywhere
In addition to the public IPU Cloud, G-Core offers the expertise and flexibility of managed private IPU clouds in on-premises datacenters, or at the edge, to deliver AI with speed and performance wherever your data resides.