Red Hat is ramping up its focus on artificial intelligence (AI) with a series of new product offerings and enhancements aimed at simplifying the fine-tuning of AI models and the deployment of AI applications.
During the Red Hat Summit in Denver, CEO Matt Hicks highlighted the need to make fine-tuning generative AI models accessible to a wider audience. This led to the introduction of InstructLab, a technology from IBM designed to let people without data science backgrounds train and contribute to large language models.
By leveraging synthetic data generation and a multi-phase tuning framework, InstructLab enables organizations to improve large language models with less training data and at lower cost. This approach not only streamlines AI model development but also gives businesses more deployment options when putting AI to work in their operations.
InstructLab is a key component of the RHEL AI platform, which allows organizations to create, test, and deploy generative AI models. This platform, which includes the open-source Granite LLM family from IBM Research, can be implemented across hybrid cloud environments for enhanced scalability.
Red Hat’s efforts to enhance its AI capabilities began with the introduction of OpenShift AI last year. Since then, the company has been dedicated to providing a robust platform for organizations seeking to integrate AI workloads into their existing applications.
As the demand for AI continues to grow, Red Hat remains committed to offering solutions that empower organizations to build and run AI applications effectively. By combining AI and open-source technologies, Red Hat is positioned to support businesses in leveraging AI innovations for their specific needs and objectives.