Enterprises are undergoing massive business model transformations, powered by the ability to process and analyze new types and tremendous amounts of data. As a result, many are moving to hybrid cloud environments that leverage lightweight microservices as efficiently as possible.
Hortonworks and IBM previously announced a collaboration to help businesses accelerate data-driven decision-making. Today’s news builds on that foundation with the intent to bring big data workloads to a modern, container-based foundation, enabling customers to deploy Hortonworks and IBM platforms into a hybrid cloud environment powered by Red Hat OpenShift. The initiative includes the following:
As the initial phase, the companies plan to work together to optimize Hortonworks Data Platform, Hortonworks DataFlow, Hortonworks DataPlane and IBM Cloud Private for Data for use on Red Hat OpenShift, an industry-leading enterprise container and Kubernetes application platform. This can enable users to develop and deploy containerized big data workloads, ultimately making it easier for customers to manage data applications across hybrid cloud deployments. In addition, IBM and Hortonworks will extend their joint work to integrate key services offered through Hortonworks DataPlane with IBM Cloud Private for Data.
“Kubernetes is the de facto container orchestration system, and we have been working in this ecosystem since the project’s infancy to help make it ready for all, and especially the enterprise, in Red Hat OpenShift. Today we are thrilled that Hortonworks and IBM Cloud Private’s data portfolio have selected Red Hat OpenShift as the trusted Kubernetes platform for big data workloads,” said Ashesh Badani, vice president and general manager, Cloud Platforms, Red Hat. “By building and managing their applications via containers and Kubernetes with OpenShift, customers and the big data ecosystem have opportunities to bring this next generation of big data workloads to the hybrid cloud and deliver the benefits of an agile, efficient, reliable, multi-cloud infrastructure.”
“The work that Red Hat, IBM and Hortonworks are doing to modernize enterprise big data workloads via containerization is aimed at helping customers to take advantage of the agility, economics and scale of a hybrid data architecture,” said Rob Bearden, chief executive officer of Hortonworks. “The innovations resulting from this collaboration can enable the seamless and trusted hybrid deployment model needed today by enterprises that are undergoing significant business model transformation.”
In addition to competitive and data challenges, organizations are scrambling to bring applications once designed for the public cloud behind the firewall for greater control, lower costs, stronger security and easier management. In fact, in a recent IDC Cloud and AI Adoption Survey[1], more than 80 percent of respondents said they plan to move or repatriate data and workloads from public cloud environments to hosted private clouds or on-premises locations behind the firewall over the next year, because their initial expectations of a single public cloud provider were not realized.
“As these dynamics continue, they’ll work to slow innovation and hinder companies’ progression to enterprise AI,” said Rob Thomas, general manager, IBM Analytics. “Scaling the ladder to AI demands robust data prep, analytics, data science and governance, all of which are easily scaled and streamlined in the kind of containerized, Kubernetes-orchestrated environments that we’re talking about today.”