Introduction to provisioning
Provisioning is a process that starts with a bare physical or virtual machine and ends with a fully configured, ready-to-use operating system. Using orcharhino, you can define and automate fine-grained provisioning for a large number of hosts.
ATIX does not support provisioning Red Hat Enterprise Linux hosts using the minimal ISO image because it does not contain the necessary installation packages.
Provisioning Methods
There are multiple ways to provision a host: image based, network based, boot disk based, and discovery based. The optimal provisioning method depends on the type of host, the availability of DHCP, and whether the host is to be cloned from an existing VM template.
For a better understanding of provisioning hosts using orcharhino, see our glossary for terminology and key terms. Key terms include provisioning, compute resource, provisioning template, and virtualization.
This results in the following decision tree:
The starting point is the question of which type of host you want to provision:

- For hosts running on a cloud provider, choose image based host deployment.
- For virtual machines, either:
  - choose image based installation if you want to clone the VM from a template/golden image;
  - or, depending on your network configuration, either:
    - choose boot disk based installation if your hosts have static IP addresses;
    - or select network based installation if the network of the host provides DHCP.
- For bare metal hosts, either:
  - choose discovery based installation;
  - or, depending on the network configuration, either:
    - choose boot disk based installation for hosts with static IP addresses;
    - or select network based installation if the network of the host provides DHCP.
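This decision tree can be sketched as a small Python function. The host type strings and flag names below are illustrative assumptions for this sketch, not an orcharhino API:

```python
def provisioning_method(host_type: str, has_dhcp: bool = False,
                        clone_from_template: bool = False) -> str:
    """Pick a provisioning method following the decision tree (sketch only)."""
    if host_type == "cloud":
        return "image based"
    if host_type == "virtual machine":
        if clone_from_template:
            # clone the VM from a template/golden image
            return "image based"
        return "network based" if has_dhcp else "boot disk based"
    if host_type == "bare metal":
        # discovery based installation is always an alternative for bare metal;
        # otherwise the choice again depends on DHCP availability
        return "network based" if has_dhcp else "boot disk based"
    raise ValueError(f"unknown host type: {host_type}")
```

For example, a virtual machine on a subnet with static IP addresses and no golden image maps to boot disk based installation.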
The following graphic displays the steps of each provisioning method:
- Depending on whether it is a virtual machine or a bare metal host, orcharhino either
  - instructs the compute resource provider to create a virtual machine,
  - or the bare metal host has to be started manually, that is by hand or via a lights out management tool. For bare metal hosts, the MAC address of the new host must be submitted to orcharhino.
- The host boots over the network and automatically searches for a DHCP server, which in turn assigns an IP address and configures the network. The DHCP server provides a PXE/iPXE file, which is a type of provisioning template.
  We recommend having orcharhino configure and manage the DHCP server, or even moving its duties to orcharhino or orcharhino proxy entirely.
- The new host fetches the AutoYaST, Kickstart, or Preseed file from orcharhino or orcharhino proxy via TFTP and starts the unattended installation and initial configuration of the operating system.
- The new host reports back to orcharhino after finishing the installation, at which point its status in orcharhino switches to installed.
- The `boot_disk.iso` image contains the network configuration of the new host; in contrast to network based provisioning, it replaces the network configuration via DHCP. There are four different kinds of boot disk images:
  - The per-host image is the default image and does not require DHCP. It only contains the network configuration and then points to orcharhino for the iPXE template, followed by the provisioning template, that is an AutoYaST, Kickstart, or Preseed provisioning template.
  - The full-host image contains the network configuration, the kernel, the iPXE file, and the provisioning template. The image is operating system specific.
  - The generic image is a reusable image which works for any host registered on orcharhino and requires DHCP.
  - The subnet-generic image requires DHCP and chainloads the installation media from an orcharhino proxy assigned to the subnet of the host.
- Both bare metal hosts and virtual machines require the `boot_disk.iso` image to be attached.
  - Virtual machines running on VMware vSphere automatically receive the image and have it attached to the new host. For other compute resource providers, you need to attach the image by hand.
  - Bare metal hosts always need to receive the image manually. The same applies if you want to boot from a `boot_disk.iso` image other than the one provided by orcharhino. For bare metal hosts, the MAC address of the new host must be submitted to orcharhino.
- The new host reports back to orcharhino after finishing the installation, at which point its status in orcharhino switches to installed.
Depending on how you want to configure your image, this can be done either

- with a `finish` provisioning template via SSH only, in which case the network configuration must be provided via DHCP;
- or via cloud-init with a `userdata` provisioning template, in which case the image needs to point to your orcharhino or orcharhino proxy and have built-in support for cloud-init. The exact configuration depends on your compute resource provider.

We currently only recommend using image based provisioning for hosts that have access to a DHCP server.
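This choice between the two finishing mechanisms can be sketched as follows; the function and flag names are assumptions for illustration, not orcharhino's API:

```python
def image_finish_method(has_dhcp: bool, image_supports_cloud_init: bool) -> str:
    """Pick how an image based host gets its final configuration (sketch)."""
    if image_supports_cloud_init:
        # userdata template via cloud-init; the image must point to
        # orcharhino or orcharhino proxy, works with or without DHCP
        return "userdata"
    if has_dhcp:
        # finish template executed over SSH; requires DHCP-provided networking
        return "finish"
    raise ValueError("image based provisioning needs DHCP or cloud-init support")
```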
- orcharhino instructs the compute resource provider to create a virtual machine based on an existing VM template, that is by cloning a golden image.
- Depending on whether a DHCP server is available,
  - the new virtual machine either receives its IP address and network configuration from the DHCP server,
  - or a cloud-init provisioning template is used to set up the network configuration. This can be used with and without DHCP.
    Alternatively, orcharhino finishes the provisioning process based on provisioning templates via SSH.
- If the compute resource is a cloud provider and the image is capable of `user_data`, the new host does not need to be reachable by orcharhino or orcharhino proxy via SSH but can itself notify orcharhino or orcharhino proxy when it is ready.
| Operating System | Provisioning Template |
|---|---|
| AlmaLinux, CentOS, Oracle Linux, Red Hat Enterprise Linux, and Rocky Linux | Kickstart template |
| SUSE Linux Enterprise Server | AutoYaST template |
| Debian and Ubuntu | Preseed template |
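For scripted lookups, the table can be expressed as a plain mapping; this is a convenience sketch, not part of orcharhino:

```python
# Operating system -> provisioning template family, per the table above
PROVISIONING_TEMPLATE = {
    "AlmaLinux": "Kickstart",
    "CentOS": "Kickstart",
    "Oracle Linux": "Kickstart",
    "Red Hat Enterprise Linux": "Kickstart",
    "Rocky Linux": "Kickstart",
    "SUSE Linux Enterprise Server": "AutoYaST",
    "Debian": "Preseed",
    "Ubuntu": "Preseed",
}
```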
Discovery based provisioning is similar to boot disk based provisioning.
It uses a `discovery.iso` image to boot the host into a minimal Linux system, which reports to orcharhino and sends various facts, including its MAC address, to identify the host.
The host is then listed on the discovered hosts page.
Discovery based provisioning requires the host discovery plugin to be installed on your orcharhino. For more information, see Configuring the Discovery Service.
- Depending on whether a DHCP server is available,
  - the host either manually receives its network configuration and discovery image and boots the `discovery.iso` image in an interactive way;
  - or the host automatically fetches its network configuration and discovery image from the DHCP server and boots the `discovery.iso` image in a non-interactive way.
- The `discovery.iso` image contains the information about which orcharhino or orcharhino proxy the host must report to. It collects facts about the host and sends them to orcharhino or orcharhino proxy.
  - If there are existing discovery rules, the host automatically starts a network based provisioning process based on discovery rules and host groups.
  - Otherwise, the host appears on the list of discovered hosts. The process that follows is very similar to creating a new host.
    There are two ways to start the installation of the operating system:
    - the host is rebooted and then gets the network based provisioning template from the DHCP server;
    - or the host is not rebooted and the installation is started via `kexec` with the current network configuration.

If you want to provision hosts based on Ansible playbooks, have a look at the application centric deployment guide. ACD helps you deploy and configure multi-host applications using an Ansible playbook and application definition.
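The rule matching step can be sketched as follows; the rule and fact layout here is a simplified illustration, not the discovery plugin's actual data model:

```python
from typing import Optional

def match_discovery_rule(facts: dict, rules: list) -> Optional[str]:
    """Return the host group of the first rule whose conditions all match the
    reported facts, or None if the host stays on the discovered hosts page."""
    for rule in sorted(rules, key=lambda r: r.get("priority", 0)):
        if all(facts.get(key) == value for key, value in rule["conditions"].items()):
            return rule["hostgroup"]
    return None
```

For example, a discovered host reporting `{"macaddress": "aa:bb:cc:dd:ee:ff"}` against a rule with that condition and host group `web` would start network based provisioning into `web`.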
Provisioning methods in orcharhino
With orcharhino, you can provision hosts by using multiple methods.
- Bare-Metal Provisioning
  orcharhino provisions bare-metal hosts primarily by using PXE boot and MAC address identification. When provisioning bare-metal hosts with orcharhino, you can do the following:
  - Create host entries and specify the MAC address of the physical host to provision.
  - Boot blank hosts to use the orcharhino Discovery service, which creates a pool of hosts that are ready to be provisioned.
  - Boot and provision hosts by using PXE-less methods.
- Cloud Providers
  orcharhino connects to private and public cloud providers to provision instances of hosts from images that are stored with the cloud environment. When provisioning from cloud with orcharhino, you can do the following:
  - Select which hardware profile or flavor to use.
  - Provision cloud instances from specific providers by using their APIs.
- Virtualization Infrastructure
  orcharhino connects to virtualization infrastructure services, such as oVirt and VMware. When provisioning virtual machines with orcharhino, you can do the following:
  - Provision virtual machines from virtual image templates.
  - Use the same PXE-based boot methods as bare-metal providers.
For more information, see compute resources.
Supported cloud providers
You can connect the following cloud providers as compute resources to orcharhino:
- Amazon EC2
- Google Compute Engine
- Microsoft Azure
Supported virtualization infrastructures
You can connect the following virtualization infrastructures as compute resources to orcharhino:
- KVM (libvirt)
- oVirt
- VMware
- KubeVirt
- Proxmox
Network boot provisioning workflow
The provisioning process follows a basic PXE workflow:
- You create a host and select a domain and subnet. orcharhino requests an available IP address from the DHCP orcharhino Proxy Server that is associated with the subnet or from the PostgreSQL database in orcharhino. orcharhino loads this IP address into the IP address field in the Create Host window. When you complete all the options for the new host, submit the new host request.
- Depending on the configuration specifications of the host and its domain and subnet, orcharhino creates the following settings:
  - A DHCP record on the orcharhino Proxy Server that is associated with the subnet.
  - A forward DNS record on the orcharhino Proxy Server that is associated with the domain.
  - A reverse DNS record on the DNS orcharhino Proxy Server that is associated with the subnet.
  - PXELinux, Grub, Grub2, and iPXE configuration files for the host on the TFTP orcharhino Proxy Server that is associated with the subnet.
  - A Puppet certificate on the associated Puppet server.
  - A realm on the associated identity server.
- The host is configured to boot from the network as the first device and HDD as the second device.
- The new host requests a DHCP reservation from the DHCP server.
- The DHCP server responds to the reservation request and returns the TFTP `next-server` and `filename` options.
- The host requests the boot loader and menu from the TFTP server according to the PXELoader setting.
- A boot loader is returned over TFTP.
- The boot loader fetches configuration for the host through its provisioning interface MAC address.
- The boot loader fetches the operating system installer kernel, init RAM disk, and boot parameters.
- The installer requests the provisioning template from orcharhino.
- orcharhino renders the provisioning template and returns the result to the host.
- The installer performs installation of the operating system.
  - The installer registers the host to orcharhino using Subscription Manager.
  - The installer notifies orcharhino of a successful build in the `postinstall` script.
- The PXE configuration files revert to a local boot template.
- The host reboots.
- The new host requests a DHCP reservation from the DHCP server.
- The DHCP server responds to the reservation request and returns the TFTP `next-server` and `filename` options.
- The host requests the boot loader and menu from the TFTP server according to the PXELoader setting.
- A boot loader is returned over TFTP.
- The boot loader fetches the configuration for the host through its provisioning interface MAC address.
- The boot loader initiates boot from the local drive.
- If you configured the host to use any Puppet classes, the host configures itself using the modules.
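The step in which the boot loader fetches its configuration by MAC address follows the standard PXELinux naming convention: the loader looks in `pxelinux.cfg/` for a file named after the ARP hardware type (`01` for Ethernet) and the dash-separated, lowercase MAC address, before falling back to IP-based and default file names. A minimal helper to compute that name:

```python
def pxelinux_config_name(mac: str) -> str:
    """Build the PXELinux per-host config file name for an Ethernet MAC,
    e.g. pxelinux.cfg/01-aa-bb-cc-dd-ee-ff (the leading 01 is the ARP
    hardware type for Ethernet)."""
    return "01-" + mac.lower().replace(":", "-")
```

Usage: `pxelinux_config_name("AA:BB:CC:DD:EE:FF")` yields `"01-aa-bb-cc-dd-ee-ff"`, the file orcharhino writes to the TFTP orcharhino Proxy Server for that host.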
The fully provisioned host performs the following workflow:
- The host is configured to boot from the network as the first device and HDD as the second device.
- The new host requests a DHCP reservation from the DHCP server.
- The DHCP server responds to the reservation request and returns the TFTP `next-server` and `filename` options.
- The host requests the boot loader and menu from the TFTP server according to the PXELoader setting.
- A boot loader is returned over TFTP.
- The boot loader fetches the configuration settings for the host through its provisioning interface MAC address.
- For BIOS hosts:
  - The boot loader returns a non-bootable device, so the BIOS skips to the next device (boot from HDD).
- For EFI hosts:
  - The boot loader finds Grub2 on an ESP partition and chainboots it.
- If the host is unknown to orcharhino, a default boot loader configuration is provided. When the Discovery service is enabled, the host boots into discovery; otherwise, it boots from HDD.
This workflow differs depending on custom options. For example:
- Discovery
  If you use the discovery service, orcharhino automatically detects the MAC address of the new host and restarts the host after you submit a request. Note that TCP port 8443 must be reachable by the orcharhino Proxy to which the host is attached for orcharhino to restart the host.
- PXE-less Provisioning
  After you submit a new host request, you must boot the specific host with the boot disk that you download from orcharhino and transfer using a USB port of the host.
- Compute Resources
  orcharhino creates the virtual machine, retrieves the MAC address, and stores the MAC address in orcharhino. If you use image-based provisioning, the host does not follow the standard PXE boot and operating system installation. The compute resource creates a copy of the image for the host to use. Depending on image settings in orcharhino, seed data can be passed in for initial configuration, for example using `cloud-init`. orcharhino can connect to the host using SSH and execute a template to finish the customization.
  By default, deleting the provisioned host from orcharhino does not destroy the actual VM on the external compute resource. To destroy the VM when deleting the host entry on orcharhino, navigate to Administer > Settings > Provisioning and configure this behavior using the destroy_vm_on_host_delete setting. If you do not destroy the associated VM and attempt to create a new VM with the same resource name later, it will fail because that VM name already exists in the external compute resource. You can still register the existing VM to orcharhino using the standard host registration workflow you would use for any already provisioned host.
Required boot order for network boot
- For physical or virtual BIOS hosts
  - Set the first boot device to network boot.
  - Set the second boot device to boot from the hard drive. orcharhino manages TFTP boot configuration files, so hosts can be provisioned simply by rebooting.
- For physical or virtual EFI hosts
  - Set the first boot device to network boot.
  - Depending on the EFI firmware type and configuration, the OS installer typically configures the OS boot loader as the first entry.
  - To reboot into the installer again, use the `efibootmgr` utility to switch back to booting from the network.
-
The text and illustrations on this page are licensed by ATIX AG under a Creative Commons Attribution Share Alike 4.0 International ("CC BY-SA 4.0") license. This page also contains text from the official Foreman documentation, which uses the same license ("CC BY-SA 4.0").