Graphcore IPUs, built specifically for AI, are the ideal cloud compute for training, fine-tuning and deploying NLP applications quickly and efficiently.
Whether you are an AI SaaS company building NLP-based platforms, such as intelligent chatbots or real-time insights from customer service conversations, or an enterprise exploring more efficient Large Language Models (LLMs) like GPT, read on to find out more.
Applications with Natural Language Processing (NLP) models at their core are deployed by companies around the world to boost productivity, deliver faster business insights, save money, increase security, reduce fraud, improve customer retention, and strengthen overall business competitiveness.
Use cases include intelligent chatbots, sentiment analysis in finance, real-time insights for customer service, content moderation in social networks, fraud detection in financial services, protein and genome analysis in drug discovery, content generation in marketing and translation, text summarisation of news and social media feeds, and much more.
Large language models (LLMs) like GPT are increasing in size and capability but also in complexity and cost.
Companies are starting to explore new business opportunities by fine-tuning LLMs like GPT on IPU cloud platforms, with expert support from Graphcore, to build GPT-based applications that are efficient and cost-effective.
Check out the ready-made GPT models in our model garden and try out Hugging Face GPT-J 6B in the Paperspace cloud.
Graphcore and Aleph Alpha are working together to research and deploy the next generation of multilingual Large Language Models (LLMs) on current and next-generation IPU systems. Applications include conversational platforms for more intelligent and efficient Q&A in chatbots, and advanced semantic search for knowledge management systems with an interface that feels more like asking questions of a human expert than entering keywords into a search engine.
Graphcore partner Pienso delivers a machine learning platform, based on IPU-powered NLP models, that helps enterprises understand text data better than ever before. Customer service teams use the low-code/no-code Pienso service to generate insight, inform strategy and inspire action; investment firms use Pienso to monitor news and social feeds to inform investment strategies; and social media groups gain access to highly intelligent, easy-to-use content moderation tools.
Text entailment on the IPU by fine-tuning GPT-J 6B in PyTorch.
Text generation on the IPU using GPT-J 6B in PyTorch for inference.
GPT2-L training in PyTorch leveraging the Hugging Face Transformers library.
GPT2-L inference in PyTorch leveraging the Hugging Face Transformers library.
Hugging Face Optimum implementation for fine-tuning a BERT-Large transformer model.
Hugging Face Optimum implementation for fine-tuning RoBERTa-Base on the squad_v2 dataset for text generation and comprehension tasks.
BERT-Large (Bidirectional Encoder Representations from Transformers) using PyTorch for NLP training on IPUs.