Core42 launches free AI playground with inference services
Allows users to test-drive the Qualcomm Cloud AI 100 Ultra inference accelerator
#UAE #inferencing – Core42, a UAE-based leader in sovereign cloud and AI infrastructure, has launched a free-to-access AI Developer Playground in partnership with Qualcomm. Built on Qualcomm’s Cloud AI accelerators and AI Inference Suite, the platform enables developers and AI engineers to build, scale, and optimise AI applications with ease. The AI Playground removes infrastructure complexity, integrates pre-built AI models, and provides access to leading open-source AI solutions. Designed to accelerate AI adoption across industries, it offers scalable AI inference for both cloud and edge computing.
SO WHAT? – This initiative removes infrastructure complexity by integrating AI inference accelerators, standardised APIs, and pre-built generative AI applications to maximise efficiency. It also underscores Core42’s increasingly heterogeneous approach to providing AI compute to enterprises. The company recently installed one of the world’s most powerful NVIDIA SuperPODs. Meanwhile, Core42 is combining Cerebras CS-3 systems with Qualcomm Cloud AI 100 Ultra accelerators to deliver the best price-performance to customers via its Condor AI platform.
Some key points about this Core42 announcement:
Core42, the G42 group’s sovereign cloud and AI infrastructure company, has launched a free-to-access AI Developer Playground in partnership with Qualcomm, providing inference-as-a-service. The new playground runs on the US multinational’s Cloud AI accelerators and AI Inference Suite.
Qualcomm’s AI Inference Suite offers a comprehensive set of enterprise-ready AI tools, enabling developers to build AI agents and applications across a range of use cases, including chat, reasoning, code and image generation, and enterprise RAG applications.
Developers gain access to enterprise-ready AI tools for applications in generative AI, computer vision, and AI automation.
The solution integrates with Kubernetes and bare-metal container deployments, optimising AI performance and cost efficiency (a deployment sketch appears after these points).
Pre-built large language models, including Llama 3.3 70B and JAIS 30B, are available on the platform for developers to use (an illustrative API call appears after these points).
The platform supports AI-powered agents across industries, from smart cities to digital marketing and medical AI.
This collaboration aligns with the growing demand for AI-driven automation, enabling scalable and efficient deployment.
Qualcomm’s advanced AI infrastructure enhances inference-as-a-service, boosting accessibility for businesses and developers.
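To illustrate how a developer might consume one of the pre-built models, the snippet below sketches a chat request in Python, assuming an OpenAI-compatible chat-completions endpoint; the base URL, API key, and model identifier are placeholders for illustration, not documented values from Core42 or Qualcomm.

```python
import requests

# Hypothetical values: the Playground's actual base URL, auth scheme,
# and model IDs are placeholders, not confirmed by the announcement.
API_URL = "https://playground.example.com/v1/chat/completions"
API_KEY = "YOUR_API_KEY"

payload = {
    "model": "llama-3.3-70b",  # placeholder model identifier
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarise inference-as-a-service in two sentences."},
    ],
    "max_tokens": 256,
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()

# Print the assistant's reply from the first completion choice.
print(response.json()["choices"][0]["message"]["content"])
```

The same request shape would cover the chat, reasoning, and RAG-style use cases listed above, with any retrieved context supplied in the messages.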
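On the Kubernetes integration, the sketch below uses the official kubernetes Python client to declare a minimal single-replica inference deployment; the container image and the accelerator resource name are assumptions for illustration, not confirmed details of the Playground.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (e.g. ~/.kube/config).
config.load_kube_config()
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="inference-server"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "inference-server"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "inference-server"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="inference",
                        image="registry.example.com/inference-server:latest",  # hypothetical image
                        ports=[client.V1ContainerPort(container_port=8000)],
                        resources=client.V1ResourceRequirements(
                            # Hypothetical device-plugin resource name for a Cloud AI accelerator.
                            limits={"qualcomm.com/qaic": "1"}
                        ),
                    )
                ]
            ),
        ),
    ),
)

# Create the deployment in the default namespace.
apps.create_namespaced_deployment(namespace="default", body=deployment)
```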
ZOOM OUT? – The rise of AI inference-as-a-service reflects a broader shift towards cloud-based AI deployment. As AI adoption accelerates globally, infrastructure solutions that streamline AI implementation are becoming critical. Initiatives like Core42’s AI Playground are part of a larger trend where enterprises seek cost-effective, scalable AI solutions to drive innovation.