Red Hat, Inc. and Run:ai announced a collaboration to bring Run:ai's resource allocation capabilities to Red Hat OpenShift AI. By streamlining AI operations and optimizing the underlying infrastructure, the collaboration enables enterprises to get the most out of their AI resources, maximizing both human- and hardware-driven workflows on a trusted MLOps platform for building, tuning, deploying and monitoring AI-enabled applications and models at scale.

GPUs are the compute engines driving AI workflows, enabling model training, inference, experimentation and more. These specialized processors can be costly, however, especially when used across distributed training jobs and inferencing. Red Hat and Run:ai are working to meet this critical need for GPU resource optimization with Run:ai's certified OpenShift Operator on Red Hat OpenShift AI, which helps users scale and optimize their AI workloads wherever they are located. Run:ai's cloud-native compute orchestration platform on Red Hat OpenShift AI helps:

- Address GPU scheduling issues for AI workloads with a dedicated workload scheduler, making it easier to prioritize mission-critical workloads and confirm that sufficient resources are allocated to support them.

- Utilize fractional GPU and monitoring capabilities to dynamically allocate resources according to pre-set priorities and policies, increasing infrastructure efficiency (see the sketch after this list).
- Gain improved control and visibility over shared GPU infrastructure, providing easier access and resource allocation across IT, data science and application development teams.
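To make the scheduling and fractional-GPU ideas above concrete, here is a minimal sketch of how a workload might be submitted through Run:ai's scheduler on an OpenShift/Kubernetes cluster, using the Python kubernetes client. The scheduler name (runai-scheduler) and the gpu-fraction annotation are assumptions based on common Run:ai conventions, not details confirmed by this announcement; the namespace, pod name and container image are placeholders.

```python
# Sketch only: assumes the Run:ai Operator is installed and registers a
# scheduler named "runai-scheduler", and that fractional GPUs are requested
# via a "gpu-fraction" pod annotation. Names are illustrative assumptions.

from kubernetes import client, config


def submit_fractional_gpu_pod(namespace: str = "team-a") -> None:
    config.load_kube_config()  # or config.load_incluster_config() in-cluster

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(
            name="inference-demo",
            # Hypothetical annotation: ask Run:ai for 50% of one GPU,
            # letting two such workloads share a single physical device.
            annotations={"gpu-fraction": "0.5"},
        ),
        spec=client.V1PodSpec(
            # Route placement to Run:ai's workload scheduler instead of the
            # default kube-scheduler so its priorities and policies apply.
            scheduler_name="runai-scheduler",
            containers=[
                client.V1Container(
                    name="inference",
                    image="registry.example.com/models/inference:latest",  # placeholder
                )
            ],
            restart_policy="Never",
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace=namespace, body=pod)


if __name__ == "__main__":
    submit_fractional_gpu_pod()
```

Because the pod names Run:ai's scheduler rather than the default kube-scheduler, placement decisions, including fractional sharing of a single GPU, would be made according to the pre-set priorities and policies described above.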

Run:ai's certified OpenShift Operator is available now. In the future, Red Hat and Run:ai plan to continue building on this collaboration with additional integration capabilities for Run:ai on Red Hat OpenShift AI, aiming to support more seamless customer experiences and to further expedite moving AI models into production workflows with even greater consistency.