HPC workloads in OpenStack
HPC environments require additional consideration of traffic flows and usage patterns to address the needs of cloud clusters. HPC has high east-west traffic patterns for distributed computing within the network, but can also have substantial north-south traffic in and out of the network, depending on the application.
Beowulf scientific cloud
HPC in the private cloud
OpenStack Bare Metal Provisioning enables an enterprise to provision physical, or bare metal, machines for a variety of hardware vendors with hardware-specific drivers. Bare Metal Provisioning integrates with the OpenStack Compute service to provision bare metal machines in the same way that virtual machines (VMs) are provisioned, and provides a solution for the bare-metal-to-trusted-tenant use case.
Generally speaking, the advantages of OpenStack Bare Metal Provisioning include:
Hadoop clusters can be deployed on bare metal machines.
Hyperscale and high-performance computing (HPC) clusters can be deployed.
Databases can be hosted for applications that are sensitive to the performance overhead of virtual machines.
Bare Metal Provisioning uses the Compute service for scheduling and quota management, and uses the Identity service for authentication. Instance images must be configured to support Bare Metal Provisioning instead of KVM.
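The workflow above can be sketched with the OpenStack CLI. This is an illustrative fragment, not a definitive runbook: the node name, IPMI credentials, resource class, image, and network names are all hypothetical, and it assumes the Bare Metal (ironic) CLI plugin is installed.

```shell
# Enroll a physical node with the generic IPMI driver
# (hpc-node-01 and the driver-info values are placeholders)
openstack baremetal node create --name hpc-node-01 --driver ipmi \
    --driver-info ipmi_address=10.0.0.11 \
    --driver-info ipmi_username=admin \
    --driver-info ipmi_password=secret

# Advertise the node to the Compute scheduler via a resource class;
# "baremetal.hpc" maps to the custom resource CUSTOM_BAREMETAL_HPC
openstack baremetal node set hpc-node-01 --resource-class baremetal.hpc
openstack flavor create bm.hpc --ram 131072 --disk 900 --vcpus 32
openstack flavor set bm.hpc --property resources:CUSTOM_BAREMETAL_HPC=1

# Boot the node exactly like a VM; Compute handles scheduling and quota
openstack server create --flavor bm.hpc --image centos-hpc \
    --network hpc-net hpc-compute-01
```

Because the flavor requests a custom resource class rather than virtual CPUs, the Compute scheduler can only place the instance on a matching bare metal node.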
Eupraxia Labs delivered a classified modeling-and-simulation HPC recommendation to a scientific Department of Defense (DoD) entity for fully managing a compute cluster with OpenStack. The implementation heavily leveraged the benefits of a cloud operating system, such as OpenStack, without sacrificing any critical compute requirements of a High-Performance Computing (HPC) cluster. The modeling-and-simulation project plans to fully utilize the design and integration services of a cloud (IaaS) and PaaS technology group within the DoD entity.
This particular implementation had both non-secure (NIPRNet) and secure (SIPRNet) network requirements, along with Platform-as-a-Service (PaaS) requirements for parallelized software development, testing, and production deployment. The method by which deployments will occur across the secure and non-secure environments was beyond the scope of this case study.
Unique to this project was the need to virtualize the endpoints of the physical compute nodes in an overlay network, in this case VXLAN. Although vendor-specific hardware will not be disclosed, the underlying networking equipment routes traffic between VXLAN segments and allows the virtual servers (virtual services) and physical servers (HPC nodes) to operate in a seamless overlay architecture. Security isolation of the physical server-based HPC cluster was specified using access controls native to OpenStack, with some additional cybersecurity enhancements.
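The two pieces described above, a VXLAN tenant segment and OpenStack-native access controls, can be sketched with the Networking CLI. The network, subnet, and security group names, the segment ID, and the CIDR ranges are hypothetical, and setting provider attributes requires administrative rights.

```shell
# Create a VXLAN tenant network and subnet for the HPC segment
openstack network create hpc-net \
    --provider-network-type vxlan --provider-segment 5001
openstack subnet create hpc-subnet --network hpc-net \
    --subnet-range 192.168.10.0/24

# Isolate the segment: allow intra-group TCP traffic between cluster
# members, plus SSH from a (hypothetical) management range only
openstack security group create hpc-sg
openstack security group rule create hpc-sg --protocol tcp \
    --remote-group hpc-sg
openstack security group rule create hpc-sg --protocol tcp \
    --dst-port 22 --remote-ip 10.0.0.0/24
```

The `--remote-group` rule is the key isolation primitive here: ports in `hpc-sg` accept traffic from each other but not from arbitrary tenants on the overlay.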
The hardware-accelerated Virtual Tunnel Endpoints (VTEPs) of the underlying network infrastructure, coupled with high-bandwidth Ethernet ports, dramatically improved performance whenever the modeling-and-simulation program required virtualized services.
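What the hardware VTEP accelerates is the encapsulation step itself: prepending an 8-byte VXLAN header to each Ethernet frame before wrapping it in UDP on port 4789. A minimal sketch of that header layout, per RFC 7348 (the function name and example VNI are illustrative only):

```python
import struct

VXLAN_FLAG_VNI_VALID = 0x08  # the "I" flag: VNI field is valid (RFC 7348)

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header a VTEP prepends to each frame.

    Layout: 1 flags byte, 3 reserved bytes, 24-bit VNI, 1 reserved byte.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value")
    return (struct.pack("!B3s", VXLAN_FLAG_VNI_VALID, b"\x00" * 3)
            + vni.to_bytes(3, "big")
            + b"\x00")

# Example: the header for VXLAN segment 5001 (0x001389)
header = vxlan_header(5001)
assert header == b"\x08\x00\x00\x00\x00\x13\x89\x00"
```

Doing this per packet in software costs CPU cycles and cache pressure on every HPC node; pushing it into the switch ASIC is what makes the seamless overlay between physical and virtual servers practical at HPC line rates.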
Just as important, the underlying network equipment optimizes the interaction between OpenStack virtual services and the physical servers.