G-Core Labs' IPU-based AI cloud is a Graphcore Bow IPU-Pod scale-out cluster that offers an effortless way to add state-of-the-art machine intelligence compute on demand, without deploying on-premises hardware or building AI infrastructure from scratch.
The IPU is an entirely new kind of massively parallel processor, co-designed from the ground up with the Poplar® SDK to accelerate machine intelligence. The cloud IPU's robust performance and low cost make it ideal for machine learning teams that need to iterate quickly and frequently on their solutions.
World-class performance for natural language processing
Build, train and deploy ready-to-use ML models via dashboard, API, or Terraform
Dataset management and integration with S3/NFS storage
Version control for hardware, code, and datasets
Secure Trusted Cloud platform
Free egress traffic (for public or hybrid solutions)
SLA: 99.9% guaranteed uptime
Highly skilled 24/7 technical support
Made in EU
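As a hedged illustration of what dashboard- or API-driven model management might look like, the sketch below assembles a training-job request payload. The base URL, endpoint shape, and field names are purely hypothetical assumptions for illustration, not the actual G-Core Labs API:

```python
import json

# Purely illustrative sketch: the URL and field names below are hypothetical
# assumptions, not the real G-Core Labs AI Infrastructure API.
API_BASE = "https://api.example.com/ai/v1"  # placeholder base URL

def build_training_job(model_name: str, dataset_uri: str, pod_size: int = 16) -> dict:
    """Assemble a JSON-serializable request body for a training job."""
    return {
        "model": model_name,
        "dataset": dataset_uri,  # e.g. an S3 or NFS location
        "hardware": {"type": "bow-pod", "ipus": pod_size},
    }

job = build_training_job("face-recognition", "s3://my-bucket/dataset")
print(json.dumps(job, indent=2))
```

The same payload could equally be expressed as a Terraform resource or submitted through the dashboard; the point is that model, dataset, and hardware are declared together in one request.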
With the AI Infrastructure, customers can easily train and compare models or run custom training code, with all models stored in one central model repository. From there, models can be deployed to endpoints on the G-Core Labs AI Infrastructure.
G-Core Labs' IPU-based AI cloud is designed to help businesses across fields including finance, healthcare, manufacturing, and scientific research. It is built to support every stage of the AI adoption journey, from building a proof of concept to training and deployment.
AI model development
ML models: Face recognition, Object detection
AI training and hyperparameter tuning
Ready to order in Luxembourg in June 2022
IPU-Pod systems let you break through barriers and unlock entirely new advances in machine intelligence with real business impact. Get ready for production with IPU-Pod64 and take advantage of a new approach to operationalizing your AI projects.
IPU-Pod64 delivers ultimate flexibility to make the most of available space and power, however it is provisioned, offering 16 petaFLOPS of AI compute for both training and inference so you can develop and deploy on the same powerful system.
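The headline compute figures scale linearly with IPU count. As a rough sketch, assuming roughly 0.25 petaFLOPS of AI compute per IPU (an illustrative figure consistent with the 16 petaFLOPS quoted for the 64-IPU pod):

```python
# Back-of-the-envelope scaling of IPU-Pod AI compute.
# Assumption (illustrative): ~0.25 petaFLOPS of AI compute per IPU,
# consistent with the 16 petaFLOPS quoted above for IPU-Pod64.
PFLOPS_PER_IPU = 0.25

def pod_petaflops(ipu_count: int) -> float:
    """Estimated AI compute of a pod with the given number of IPUs."""
    return ipu_count * PFLOPS_PER_IPU

for ipus in (16, 64, 128, 256):
    print(f"Pod{ipus}: ~{pod_petaflops(ipus):g} petaFLOPS")
# Pod64 works out to 64 * 0.25 = 16 petaFLOPS, matching the figure above.
```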
Soon ready to order in Amsterdam
Experience the democratization of AI and bring IPU-powered machine intelligence at supercomputer scale within reach with the arrival of IPU-Pod256 in the cloud. Designed to accelerate large and demanding machine learning models, IPU-Pod256 gives you the AI resources of a tech giant.
Kubernetes support makes it simple to automate the deployment, scaling, and management of applications on IPU-Pods. Developers can build model replicas within and across multiple IPU-Pods, and provision IPUs spanning many IPU-Pods for very large models.
| Bow Pod | Poplar Config | Our price |
|---|---|---|
| Bow Pod16 | 2× 5320, 384 GB RAM, 2× 960 GB SSD, 2× 100G | |
| Bow Pod64 | 2× 5320, 384 GB RAM, 2× 960 GB SSD, 2× 100G | |
| Bow Pod128 | 2× 5320, 384 GB RAM, 2× 960 GB SSD, 2× 100G | |
| Bow Pod256 | 2× 5320, 384 GB RAM, 2× 960 GB SSD, 2× 100G | |
With the help of IPU-based AI infrastructure solutions, we are realizing Luxembourg's HPC ambitions, turning the city into the heart of a European AI hub. Thanks to Graphcore hardware and the G-Core Labs edge cloud, the new AI infrastructure can be consumed entirely as a service.
Head of Economic and Commercial Affairs
Embassy of Luxembourg in London
“This partnership between Luxembourg-based cloud and edge solutions provider G-Core Labs and the UK IPU producer Graphcore illustrates not only the vast opportunities that arise for trade and cooperation between the two countries, but it also confirms Luxembourg’s position as a leading data economy in the EU.”
CEO of G-Core Labs
“G-Core Labs is the first European provider to partner with Graphcore to bring innovations to a rapidly changing cloud market. To meet their changing AI needs, users are looking for trusted technologies that are highly efficient, easily accessible, and highly flexible.”
Co-founder and CEO of Graphcore
“The Graphcore and G-Core Labs solution is perfect for AI. It will make the power and flexibility of the IPU available to anyone who wants to accelerate their current workloads or explore the use of next-generation ML models.”