
Sep 12, 2023

Graphcore-Pienso partnership wins CogX Best Innovation: NLP Award

Written By:

Tim Santos


Graphcore and Pienso’s hardware and software partnership has been named Best Innovation: NLP at the prestigious CogX Awards. 

Presenting the award, the judges said: "Pienso's groundbreaking low-code/no-code platform is empowering business experts everywhere, merging intuitive AI with powerful hardware, and truly revolutionizing the way enterprises harness large language models for real-time insights. Their collaboration with Graphcore has transformed hours of workload into mere minutes, truly setting the gold standard in the world of interactive AI."
 

The collaboration between our two companies demonstrates how a novel AI product, addressing a vast customer need, can benefit from a true made-for-AI compute architecture – and how that potent combination accelerates innovation and the uptake of commercial AI services. 

Pienso Co-founder and CTO Karthik Dinakar said: "It's very hard to build reliable, scalable production-caliber products using generative AI. Hardware accelerators, GPUs specifically, continue to be expensive and scarce.  Scarcity fuels opportunity - in this case, for end-to-end AI platforms that optimize performance at either the hardware or the software layer. In this partnership, we've done both and we couldn't be more excited about what comes next." 

Enterprise-ready language models 

The explosion of large language models (LLMs) with their broad capabilities is forcing businesses around the world to consider how to put this exciting technology to use. For many, that means the promise of extracting valuable new insights from vast repositories of data, whether it is sitting in archives or being generated in real-time. 

Increasingly, businesses are finding that a simple API attached to individual foundation models does not constitute an enterprise-ready tool and that large model makers may not be best placed to deliver the services they need.  

At the same time, commercial LLM users are considering the economics of using this type of AI. Prudent choices such as the use of open-source models and the right compute platform can hugely influence ongoing return on investment. 

This is the vast and pressing need that Pienso and Graphcore are addressing together.

Accessible AI 

Pienso’s software puts state-of-the-art LLMs in the hands of business decision makers – people with sector-specific expertise, but not necessarily AI or coding skills. Pienso allows companies to fine-tune and apply models to their particular industry or individual business without necessarily knowing what ‘fine-tuning’ means. The result is the ability to unlock actionable insights that were previously inaccessible and act on them quickly. 

Performance and efficiency are essential to delivering any commercial application. In the case of Pienso, accelerated compute makes a qualitative difference to the product’s capabilities, especially for customers for whom real-time data insights are far more valuable than retrospective analysis – such as customer service operations or content moderation teams. 

In Graphcore, Pienso found not just the perfect AI compute solution, but a like-minded partner who shares the belief that AI is most powerful when it is accessible to the maximum number of people. 

In addition to the Graphcore IPU’s inherent performance advantage, Pienso has used novel AI techniques, made possible by the IPU’s unique, made-for-AI architecture, to achieve performance gains of up to 35X compared with the leading GPU-based system. 


Comparative performance for Pienso using Graphcore IPUs vs Nvidia GPUs
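One well-known technique in this family is sequence packing: instead of padding every short input out to the model's maximum sequence length, several short sequences are packed into a single fixed-length slot, so far less compute is spent on padding tokens. The sketch below is purely illustrative – the greedy first-fit strategy, function names, and example lengths are assumptions for explanation, not Pienso's or Graphcore's actual implementation.

```python
# Illustrative sketch of sequence packing (NOT the actual Pienso/Graphcore
# implementation): pack variable-length sequences into fixed-size slots
# using a simple greedy first-fit-decreasing strategy, then compare how
# much of the batch is wasted on padding with and without packing.

def pack_sequences(lengths, max_len):
    """Greedily pack sequence lengths into bins of capacity max_len."""
    bins = []  # each bin is a list of sequence lengths sharing one slot
    for n in sorted(lengths, reverse=True):
        for b in bins:
            if sum(b) + n <= max_len:
                b.append(n)  # fits alongside sequences already in this slot
                break
        else:
            bins.append([n])  # start a new slot
    return bins

def padding_waste(bins, max_len):
    """Fraction of tokens in the batch that are padding."""
    total = len(bins) * max_len
    used = sum(sum(b) for b in bins)
    return (total - used) / total

# Hypothetical token lengths for eight short documents, max length 256.
lengths = [48, 64, 200, 32, 180, 96, 120, 24]
unpacked = [[n] for n in lengths]      # one sequence per slot (padded)
packed = pack_sequences(lengths, 256)  # several sequences per slot

print(len(unpacked), round(padding_waste(unpacked, 256), 2))  # 8 slots
print(len(packed), round(padding_waste(packed, 256), 2))      # fewer slots
```

With these example lengths, packing cuts the batch from eight slots to four and roughly halves the fraction of compute spent on padding, which is the kind of efficiency gain that padding-free processing targets.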

Graphcore and Pienso continue to work together, with our research teams exploring new ways of delivering accelerated insights using the latest large language models.

Meanwhile, our commercial partnership means we are now jointly engaging with customers across a wide range of industries. To them, we represent something new and truly valuable – a high-performance, enterprise-grade LLM platform that is accessible to anyone. 

For more information on how Pienso + Graphcore can unlock valuable insights for your business, visit our dedicated page.