
Introduction to OpenStack Bare Metal (Ironic) Service

Ironic is the OpenStack project that provisions bare metal machines rather than virtual machines. It can be used as a standalone service or as part of an OpenStack cloud. Ironic interacts with several other OpenStack components, such as Identity (Keystone), Compute (Nova), Networking (Neutron), Image (Glance), and Object Storage (Swift).

Once the Bare Metal service is set up alongside the Compute and Networking services, it is possible to provision both physical and virtual machines through the Compute service's REST API. However, the set of available actions is constrained by the characteristics of physical servers and switch hardware; for example, live migration cannot be performed on a bare metal instance.
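To make this concrete, here is a minimal sketch of requesting a server through the Compute API with the openstacksdk Python library; the cloud name, flavor, image, and network names are hypothetical placeholders for your own environment.

```python
# Minimal sketch: booting a bare metal instance through the Compute API
# with openstacksdk. The cloud, flavor, image, and network names below
# are hypothetical placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")  # credentials from clouds.yaml

# A flavor that maps to bare metal hardware is requested the same way
# as a virtual machine flavor.
server = conn.compute.create_server(
    name="baremetal-demo",
    flavor_id=conn.compute.find_flavor("bm.standard").id,
    image_id=conn.compute.find_image("user-image").id,
    networks=[{"uuid": conn.network.find_network("tenant-net").id}],
)
server = conn.compute.wait_for_server(server)
print(server.status)  # "ACTIVE" once the node has been deployed
```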

To cover a wide range of hardware, the community maintains reference drivers that rely on open technologies such as PXE and IPMI. Where the community drivers are not sufficient, Ironic's pluggable driver architecture allows hardware vendors to write and supply their own drivers for improved performance.
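To give a feel for what a vendor driver involves, below is a highly simplified sketch of a custom power interface; the class, its driver_info field, and the vendor_lib module it calls into are all hypothetical, and a real driver would also be registered through a setuptools entry point.

```python
# Highly simplified sketch of a vendor power interface for Ironic's
# pluggable driver architecture. "vendor_lib" is an imaginary vendor SDK.
from ironic.common import states
from ironic.drivers import base

import vendor_lib  # hypothetical vendor SDK


class ExampleVendorPower(base.PowerInterface):

    def get_properties(self):
        # driver_info fields this interface expects on each node
        return {"vendor_address": "Management controller address. Required."}

    def validate(self, task):
        # Fail early if the node lacks the required driver_info.
        if "vendor_address" not in task.node.driver_info:
            raise ValueError("vendor_address is required")

    def get_power_state(self, task):
        addr = task.node.driver_info["vendor_address"]
        return states.POWER_ON if vendor_lib.is_on(addr) else states.POWER_OFF

    def set_power_state(self, task, power_state, timeout=None):
        addr = task.node.driver_info["vendor_address"]
        vendor_lib.set_power(addr, on=(power_state == states.POWER_ON))

    def reboot(self, task, timeout=None):
        vendor_lib.reset(task.node.driver_info["vendor_address"])
```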

Why is it important to provision bare metal (physical servers)?

Consider the following use cases for provisioning bare metal in an OpenStack cloud:

  • High-performance computing clusters.
  • Computing tasks that require access to hardware devices that cannot be virtualized.
  • Database hosting (some databases perform poorly inside a hypervisor).
  • Single-tenant, dedicated hardware for performance, security, dependability, and other regulatory requirements.
  • Rapidly deploying a cloud infrastructure.

Conceptual Architecture for Provisioning Bare Metal:

The following conceptual architecture view represents the relationships among the OpenStack services and shows how they work together during the provisioning of bare metal.

The following are the major technologies required for bare metal hosting:

  1. PXE i.e. Preboot Execution Environment:

The Preboot Execution Environment is part of the Wired for Management (WfM) specification developed by Intel and Microsoft. PXE enables a system's BIOS and network interface card (NIC) to bootstrap a computer over the network instead of from a disk. Bootstrapping is the process by which a computer loads an operating system into local memory so that the processor can run it.

The ability to boot a computer over the network simplifies both server installation and ongoing server management for administrators.
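As an illustration of what the PXE firmware eventually consumes, here is a sketch that writes a minimal PXELINUX configuration pointing a node at a deploy kernel and ramdisk; the file names and TFTP root path are placeholders, and a real Ironic-rendered configuration carries many more kernel arguments.

```python
# Sketch: a minimal PXELINUX configuration that network-boots a node
# into a deploy kernel and ramdisk. Names and paths are placeholders.
PXE_CONFIG = """\
default deploy

label deploy
  kernel deploy_kernel
  append initrd=deploy_ramdisk text
  ipappend 3
"""

# PXELINUX looks for its configuration under pxelinux.cfg/ on the TFTP root.
with open("/tftpboot/pxelinux.cfg/default", "w") as f:
    f.write(PXE_CONFIG)
```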

2. DHCP i.e. Dynamic Host Configuration Protocol:

As many of you may already know, DHCP is a standard networking protocol used on Internet Protocol (IP) networks to dynamically distribute network configuration parameters, such as IP addresses. With PXE, the system BIOS uses DHCP to obtain an IP address for the network interface and to locate the server that stores the Network Bootstrap Program (NBP).
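To make the DHCP side of PXE concrete, the sketch below generates a minimal dnsmasq configuration that hands out leases, names the boot file, and serves it over TFTP; the interface, address range, and paths are placeholders.

```python
# Sketch: generate a dnsmasq configuration that serves DHCP leases plus
# the PXE boot options. Interface, addresses, and paths are placeholders.
DNSMASQ_CONF = """\
interface=eth1
dhcp-range=192.168.100.50,192.168.100.150,12h
# "dhcp-boot" names the Network Bootstrap Program the PXE firmware
# should fetch next (pxelinux.0).
dhcp-boot=pxelinux.0
# dnsmasq can also act as the TFTP server that hands out the NBP.
enable-tftp
tftp-root=/tftpboot
"""

with open("/etc/dnsmasq.d/pxe.conf", "w") as f:
    f.write(DNSMASQ_CONF)
```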

3. NBP or Network Bootstrap Program:

The Network Bootstrap Program is analogous to boot loaders such as the GRand Unified Bootloader (GRUB) or the Linux Loader (LILO) that are conventionally used for local booting. Like the boot program in a hard drive environment, the NBP is responsible for loading the operating system kernel into local memory so that the operating system can be bootstrapped over the network.

4. TFTP i.e. Trivial File Transfer Protocol:

The Trivial File Transfer Protocol is a simple file transfer protocol typically used for the automated transfer of configuration or boot files between machines in a local environment. In a PXE environment, TFTP is used to download the Network Bootstrap Program over the network, using the information obtained from the DHCP server.
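The sketch below mimics that download step using the third-party pure-Python tftpy library; the server address and file names are placeholders.

```python
# Sketch: fetch the Network Bootstrap Program over TFTP, mimicking what
# the PXE firmware does after DHCP. Uses the third-party "tftpy" library;
# server address and file names are placeholders.
import tftpy

client = tftpy.TftpClient("192.168.100.1", 69)  # TFTP listens on UDP 69
client.download("pxelinux.0", "/tmp/pxelinux.0")
print("Downloaded NBP to /tmp/pxelinux.0")
```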

5. IPMI i.e. Intelligent Platform Management Interface:

The Intelligent Platform Management Interface is a standardized computer system interface that system administrators use for out-of-band management of computer systems and for monitoring their operation. It makes it possible to manage a system that may be powered off or otherwise unresponsive, because it requires only a network connection to the hardware rather than to an operating system.
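As a small illustration, the sketch below drives ipmitool (the common IPMI command-line client) from Python; the BMC address and credentials are placeholders. Because the commands talk to the baseboard management controller directly, they work even when the host's operating system is down.

```python
# Sketch: out-of-band power control through ipmitool. The BMC address
# and credentials are placeholders; this talks to the management
# controller, not to the host operating system.
import subprocess

BMC = ["ipmitool", "-I", "lanplus",
       "-H", "192.168.100.20",   # BMC network address
       "-U", "admin",
       "-P", "secret"]

def power(action):
    """Run an IPMI power command: "status", "on", "off", or "cycle"."""
    result = subprocess.run(BMC + ["power", action],
                            capture_output=True, text=True)
    return result.stdout.strip()

print(power("status"))  # e.g. "Chassis Power is on"
```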

Understanding the deployment process for Bare Metal:

The following prerequisites should be in place before starting the deployment process:

  • The dependent packages must be set up on the Bare Metal service node(s) where ironic-conductor runs, such as tftp-server, syslinux, and ipmitool.
  • Nova must be configured to make use of the Bare Metal service endpoint, and the compute driver on the nova-compute node(s) must be configured to use the ironic driver.
  • Flavors must be created for the available hardware, and Nova must know which flavor to boot from.
  • Glance provides the images; the image types needed for a successful bare metal deployment are:
    • bm-deploy-kernel
    • bm-deploy-ramdisk
    • user-image
    • user-image-vmlinuz
    • user-image-initrd
  • The hardware must be enrolled through the Ironic RESTful API service (see the sketch after this list).
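As referenced in the last item above, here is a minimal sketch of enrolling a node through the Ironic API with openstacksdk; the cloud name, BMC credentials, and MAC address are placeholders for your environment.

```python
# Sketch: enroll a bare metal node through the Ironic RESTful API using
# openstacksdk. Cloud name, BMC credentials, and MAC are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")

node = conn.baremetal.create_node(
    name="node-01",
    driver="ipmi",
    driver_info={
        "ipmi_address": "192.168.100.20",
        "ipmi_username": "admin",
        "ipmi_password": "secret",
    },
)

# Register the NIC that Ironic should PXE-boot the node from.
conn.baremetal.create_port(node_id=node.id, address="52:54:00:12:34:56")

# Walk the node through enrollment so it becomes available for deployment.
conn.baremetal.set_node_provision_state(node, "manage")
conn.baremetal.set_node_provision_state(node, "provide")
```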

Deployment of an ironic node:

This section describes how a typical ironic node is deployed using PXE and the Ironic Python Agent (IPA). Depending on the ironic driver interfaces used, some of the deployment steps may differ slightly, but most of them remain the same.

  1. A boot instance request comes in via the Nova API and is forwarded to the Nova scheduler through the message queue.
  2. The Nova scheduler applies its filters and locates the eligible hypervisor, using flavor details such as extra_specs and cpu_arch to match against the target physical node.
  3. Nova's compute manager claims the resources of the selected hypervisor.
  4. The Nova compute manager creates tenant virtual interfaces (VIFs) in the Networking service, according to the network interfaces requested in the Nova boot request. Note that the MAC addresses of these ports are initially generated randomly and are updated when the VIFs are attached to nodes, so that they correspond to the node's network interface cards or bonds.
  5. A spawn task is created by nova-compute containing all the relevant information, and it calls driver.spawn from the virt layer of Nova compute.
  6. During the spawn task, the virt driver performs the following tasks:
    • It updates the target ironic node with information about the deploy image, the instance UUID, the requested capabilities, and various flavor properties.
    • It validates the node's power and deploy interfaces by calling the ironic API.
    • It attaches the previously created VIFs to the node. Each neutron port can be attached to any ironic port or port group, with port groups taking priority over ports. On the ironic side, this is handled by the network interface; attaching means storing the VIF identifier in the ironic port or port group and updating the VIF MAC address to match the port's or port group's MAC address, as described in step 4.
    • It generates a config drive, if one was requested.
  7. Nova's ironic virt driver then issues a deploy request via the ironic API to the ironic conductor servicing the bare metal node.
  8. The virtual interfaces are plugged in, and the Neutron API updates the DHCP port to set the PXE/TFTP options.
  9. Ironic prepares the provisioning network: with the neutron network interface, ironic creates separate provisioning ports in the Networking service, while with the flat network interface, the ports created by Nova are used both for provisioning and for the deployed instance's networking.
  10. The ironic node's boot interface prepares the PXE configuration and caches the deploy kernel and ramdisk.
  11. The ironic node's management interface issues the command that enables network boot of the node.
  12. If required, the ironic node's deploy interface caches the instance image, kernel, and ramdisk.
  13. The ironic node's power interface issues the instruction to power on the node.
  14. The node boots the deploy ramdisk.
  15. Depending on the exact driver used, either the conductor copies the image to the physical node over iSCSI, or the deploy ramdisk downloads the image from a temporary URL, which can be generated by object stores compatible with the Swift API. This completes the deployment of the image.
  16. The node's boot interface switches the PXE configuration to refer to the instance images and asks the ramdisk agent to softly power off the node. If the soft power-off fails, the bare metal node is powered off via an IPMI/BMC call.
  17. The deploy interface triggers the network interface to remove the provisioning ports, if any were created, and binds the tenant ports to the node if they are not already bound. The node is then powered on.

Important Note

There are two power cycles during bare metal deployment: the first when the deploy ramdisk is booted, and the second once the image has been deployed.

  • Finally, the provisioning state of the bare metal node is changed to active (see the sketch below).
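Here is a minimal sketch of watching for that final transition with openstacksdk; the cloud and node names are placeholders.

```python
# Sketch: wait for a node's provision state to converge to "active"
# after a deployment has been requested. Names are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")
node = conn.baremetal.find_node("node-01")

# Blocks until the node reaches "active", raising on failure or timeout.
conn.baremetal.wait_for_nodes_provision_state([node], "active")
print(conn.baremetal.get_node(node.id).provision_state)  # "active"
```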

The following diagram illustrates the deployment process described above:

[Diagram: deployment process of an ironic node]

That’s all for today! Thank you for reading the blog! Please do not forget to leave a comment in the comment section below!


Vishwajit Kale
Vishwajit Kale blazed onto the digital marketing scene back in 2015 and is the digital marketing strategist of Hostripples, a company that aims to provide affordable web hosting solutions. Vishwajit is experienced in digital and content marketing along with SEO. He's fond of writing technology blogs, traveling and reading.