Graphcore IPUs, built specifically for AI, are the ideal cloud compute for training, fine-tuning and deploying NLP applications quickly and efficiently.
Whether you are an AI SaaS company focused on NLP-based platforms, such as intelligent chatbots or real-time insights from customer service conversations, or an enterprise exploring more efficient Large Language Models (LLMs) like GPT, read on to find out more.
Companies around the world deploy applications built on Natural Language Processing (NLP) models to boost productivity, deliver faster business insights, save money, increase security, reduce fraud, improve customer retention and strengthen overall competitiveness.
Use cases include intelligent chatbots, sentiment analysis in finance, real-time insights for customer service, content moderation in social networks, fraud detection in financial services, protein and genome analysis in drug discovery, content generation in marketing and translation, text summarisation of news and social media feeds, and much more.
Large language models (LLMs) like GPT are increasing in size and capability but also in complexity and cost.
Companies are starting to explore new business opportunities by fine-tuning LLMs like GPT on IPU cloud platforms, with expert support from Graphcore, to build GPT-based applications that are efficient and cost-effective.
Graphcore and Aleph Alpha are working together to research and deploy the next generation of multilingual Large Language Models (LLMs) on current and next-generation IPU systems. Applications include conversational platforms for more intelligent and efficient Q&A in chatbots, and advanced semantic search for knowledge management systems with an interface that feels more like asking a human expert a question than entering keywords into a search engine.
Graphcore partner Pienso delivers a machine learning platform, based on IPU-powered NLP models, that helps enterprises understand text data better than ever before. Customer service teams use the low-code/no-code Pienso service to generate insight, inform strategy and inspire action; investment firms use Pienso to monitor news and social feeds to inform investment strategies; and social media groups gain access to intelligent, easy-to-use content moderation tools.
Dolly 2.0 – The World’s First Truly Open Instruction-Tuned LLM on IPUs – Inference
OpenAssistant Pythia 12B is an open-source and commercially usable chat-based assistant model trained on the OpenAssistant Conversations Dataset (OASST1)
Speech Transcription on IPUs using OpenAI's Whisper - Inference
Flan-T5-Large/XL inference on IPUs with Hugging Face
Text entailment on IPUs using GPT-J 6B with PyTorch – Fine-tuning
Text generation on IPUs using GPT-J 6B with PyTorch – Inference
GPT2-L in PyTorch using the Hugging Face Transformers library – Training
GPT2-L in PyTorch using the Hugging Face Transformers library – Inference
Hugging Face Optimum implementation for fine-tuning a BERT-Large transformer model
SQuAD and MNLI on IPUs using DeBERTa with Hugging Face - Inference