Introduction to provisioning
Provisioning is a process that starts with a blank machine and ends with a fully configured, ready-to-use operating system. Using orcharhino, you can define and automate fine-grained provisioning for a large number of hosts.
|
ATIX does not support provisioning Rocky Linux hosts using the minimal ISO image because it does not contain the |
Provisioning Methods
There are multiple ways to provision a host: image based, network based, boot disk based, and discovery based. The optimal provisioning method depends on the type of host, the availability of DHCP, and whether the host is to be cloned from an existing VM template.
|
For a better understanding of provisioning hosts using orcharhino, see our glossary. Key terms include provisioning, compute resource, provisioning template, and virtualization. |
This results in the following decision tree:
The starting point is the question of which type of host you want to provision:
- If you want to provision a host on a cloud compute resource, choose image based host deployment.
- If you want to provision a virtual machine, choose image based installation if you want to clone the VM from a template/golden image. Otherwise, depending on your network configuration, either
  - choose boot disk based installation if your hosts have static IP addresses,
  - or select network based installation if the network of the host provides DHCP.
- If you want to provision a bare metal host, either
  - choose discovery based installation,
  - or, depending on the network configuration, either
    - choose boot disk based installation for hosts with static IP addresses,
    - or select network based installation if the network of the host provides DHCP.
The following graphic displays the steps of each provisioning method:
Network based provisioning proceeds as follows:
- Depending on whether the new host is a virtual machine or a bare metal host, orcharhino either
  - instructs the compute resource provider to create a virtual machine,
  - or the bare metal host has to be started manually, that is, by hand or via a lights-out management tool. For bare metal hosts, the MAC address of the new host must be submitted to orcharhino.
- The host boots over the network and automatically searches for a DHCP server, which in turn assigns an IP address and configures the network. The DHCP server provides a PXE/iPXE file, which is a type of provisioning template.
  We recommend having orcharhino or orcharhino proxy configure and manage the DHCP server, or even letting orcharhino or orcharhino proxy take over its duties entirely.
- The new host fetches the AutoYaST, Kickstart, or Preseed file from orcharhino or orcharhino proxy via TFTP and starts the unattended installation and initial configuration of the operating system.
- The new host reports back to orcharhino after finishing the installation, at which point its status in orcharhino is switched to installed.
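As a minimal sketch, you can create a host entry for network based provisioning with the hammer CLI. The host name, host group, and MAC address below are placeholders; depending on your setup, additional options such as organization and location may be required.
# Create a host entry and mark it for build; on its next PXE boot,
# the host receives its provisioning template and installs the operating system.
$ hammer host create \
    --name "host01" \
    --hostgroup "Base" \
    --interface "type=interface,mac=52:54:00:12:34:56,primary=true,provision=true" \
    --build true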
Boot disk based provisioning proceeds as follows:
- The boot_disk.iso image contains the network configuration of the new host, that is, it replaces the network configuration via DHCP used in network based provisioning. There are four different kinds of boot disk images:
  - The per-host image is the default image and does not require DHCP. It only contains the network configuration and points to orcharhino for the iPXE template, followed by the provisioning template, that is, an AutoYaST, Kickstart, or Preseed provisioning template.
  - The full-host image contains the network configuration, the kernel, the iPXE file, and the provisioning template. The image is operating system specific.
  - The generic image is a reusable image that works for any host registered to orcharhino and requires DHCP.
  - The subnet-generic image requires DHCP and chainloads the installation media from an orcharhino proxy assigned to the subnet of the host.
  You can generate each kind of image on the command line, as sketched below.
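The following is a minimal sketch of generating the four kinds of boot disk images with the hammer CLI, assuming the boot disk plugin is installed. The host and subnet names are placeholders.
# Per-host image (default, no DHCP required):
$ hammer bootdisk host --host host01.example.com
# Full-host image (operating system specific):
$ hammer bootdisk host --host host01.example.com --full true
# Generic image (reusable, requires DHCP):
$ hammer bootdisk generic
# Subnet-generic image (requires DHCP):
$ hammer bootdisk subnet --subnet "My_Subnet"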
- Both bare metal hosts and virtual machines require the boot_disk.iso image to be attached.
  - Virtual machines running on VMware vSphere automatically receive the image and have it attached to the new host. For other compute resource providers, you need to attach the image by hand.
  - Bare metal hosts always need to receive the image manually. The same applies if you want to boot from a different boot_disk.iso image than the one provided by orcharhino. For bare metal hosts, the MAC address of the new host must be submitted to orcharhino.
- The new host reports back to orcharhino after finishing the installation, at which point its status in orcharhino is switched to installed.
Image based provisioning clones the new host from an existing VM template, also called a golden image. Depending on how you want to configure your image, the initial configuration can be done either
- with a finish provisioning template via SSH only, in which case the network configuration must be provided via DHCP;
- or via cloud-init with a userdata provisioning template, in which case the image needs to point to your orcharhino or orcharhino proxy and have built-in support for cloud-init. The exact configuration depends on your compute resource provider.
|
We currently only recommend using image based provisioning for hosts that have access to a DHCP server. |
Image based provisioning proceeds as follows:
- orcharhino instructs the compute resource provider to create a virtual machine based on an existing VM template, that is, by cloning a golden image.
- Depending on whether a DHCP server is available,
  - the new virtual machine either receives its IP address and network configuration from the DHCP server,
  - or a cloud-init provisioning template is used to set up the network configuration. This can be used with and without DHCP. Alternatively, orcharhino finishes the provisioning process based on provisioning templates via SSH.
- If the compute resource is a cloud provider and the image supports user_data, the new host does not need to be reachable by orcharhino or orcharhino proxy via SSH but can itself notify orcharhino or orcharhino proxy when it is ready.
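As a hedged sketch, you can create an image based host with the hammer CLI; the host name, host group, compute resource, and image names below are placeholders for your environment.
# Create a host by cloning an image on the compute resource.
$ hammer host create \
    --name "host02" \
    --hostgroup "Base" \
    --compute-resource "My_Compute_Resource" \
    --provision-method image \
    --image "My_Golden_Image"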
The provisioning template type depends on the operating system of the host:

| Operating System | Provisioning Template |
|---|---|
| AlmaLinux, CentOS, Oracle Linux, Red Hat Enterprise Linux, and Rocky Linux | Kickstart template |
| SUSE Linux Enterprise Server | AutoYaST template |
| Debian and Ubuntu | Preseed template |
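To see which of these templates exist on your orcharhino, you can, for example, list them with the hammer CLI; the search filter below assumes the standard template kind naming.
# List all templates of kind "provision", such as Kickstart, AutoYaST, and Preseed templates.
$ hammer template list --search 'kind = provision'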
Discovery based provisioning is similar to boot disk based provisioning.
It uses a discovery.iso image to boot the host into a minimal Linux system, which reports to orcharhino and sends various facts, including its MAC address, to identify the host.
The host is then listed on the discovered hosts page.
|
Discovery based provisioning requires the host discovery plugin to be installed on your orcharhino. For more information, see Configuring the Discovery Service. |
Discovery based provisioning proceeds as follows:
- Depending on whether a DHCP server is available,
  - the host either manually receives its network configuration and discovery image and boots the discovery.iso image interactively,
  - or the host automatically fetches its network configuration and discovery image from the DHCP server and boots the discovery.iso image non-interactively.
- The discovery.iso image contains the information about which orcharhino or orcharhino proxy the host must report to. It collects facts about the host and sends them to orcharhino or orcharhino proxy.
  - If there are existing discovery rules, the host automatically starts a network based provisioning process based on discovery rules and host groups.
  - Otherwise, the host appears on the list of discovered hosts. The process from there is very similar to creating a new host. There are two ways to start the installation of the operating system:
    - the host is rebooted and then receives the network based provisioning template via the DHCP server;
    - or the host is not rebooted and the installation is started via kexec with the current network configuration.
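The following is a minimal sketch of handling discovered hosts with the hammer CLI, assuming the discovery plugin is installed; the discovered host name and host group are placeholders, and depending on your setup, additional options such as organization and location may be required.
# List hosts that have booted the discovery image and reported in.
$ hammer discovery list
# Provision a discovered host by assigning it to a host group.
$ hammer discovery provision \
    --name "mac525400123456" \
    --hostgroup "Base"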
Provisioning methods in orcharhino
With orcharhino, you can provision hosts by using the following provisioning methods.
- Network based
You can use this method with hosts that are capable of PXE booting or UEFI HTTP booting.
You must configure the necessary network booting services, such as DHCP and TFTP. When you create a host entry, you select a PXE loader that can configure DHCP options for network booting if orcharhino Proxy is set up to configure the DHCP service automatically.
This method uses an operating system installer, such as Anaconda, Debian Installer, or YaST, and a template-generated file with installation options downloaded from orcharhino, such as a Kickstart, Preseed, or AutoYaST file.
- Network based with PXE Discovery
You can create host entries by discovering hosts on a network. The discovery process uses the Discovery image to boot the host and report its hardware details to orcharhino. You can automate provisioning of discovered hosts by creating Discovery rules.
- PXE-less Discovery with kexec
You can use this method with hosts that do not support network boot.
You must configure DHCP for host discovery.
This method uses the Discovery image ISO to boot the host and report its hardware details to orcharhino. Using kernel execute (kexec), the Discovery image runs the installer downloaded from orcharhino.
|
Kernel execute is a Technology Preview feature only. Technology Preview features are not supported by ATIX AG. ATIX AG does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of ATIX AG Technology Preview features, see Technical Previews in the ATIX Service Portal. |
- Boot disk based
You can use this method with hosts that do not support network boot or when you do not have the option of configuring network booting services, such as DHCP.
This method supplies the booting capability, but the rest of the provisioning process still downloads the installer and template-generated files from orcharhino to automate the installation.
- Image based
You can use this method with virtual machines or cloud instances.
This method uses a virtual image template or a cloud image from a compute resource to provision the operating system. orcharhino can customize the image by running either a pre-boot or post-boot configuration script generated from a provisioning template.
Provisioning hardware types
With orcharhino, you can provision hosts of the following hardware types.
- Bare-metal hosts
orcharhino provisions bare-metal hosts primarily by using network boot and MAC address identification. When provisioning bare-metal hosts with orcharhino, you can do the following:
  - Create host entries and specify the MAC address of the host to provision.
  - Boot blank hosts to the orcharhino Discovery service, which creates a pool of hosts that are ready to be provisioned.
  - Boot and provision PXE-less hosts by using boot disks.
- Virtual machines
orcharhino connects to virtualization infrastructure services, such as VMware. You can instruct orcharhino to use a particular hardware profile by defining a compute profile. When provisioning virtual machines with orcharhino, you can do the following:
  - Select which hardware profile to use.
  - Provision virtual machines from virtual image templates.
  - Use the same provisioning methods that you use to provision bare-metal hosts.
- Cloud instances
orcharhino connects to private and public cloud providers to provision instances of hosts from images stored in the cloud environment. You can instruct orcharhino to use a particular hardware profile by defining a compute profile.
For more information, see compute resources.
Supported client platforms in provisioning
orcharhino supports the following operating systems and architectures for host provisioning.
- Supported host operating systems
The hosts can use the following operating systems:
  - AlmaLinux
  - Amazon Linux
  - CentOS
  - Debian
  - Oracle Linux
  - Red Hat Enterprise Linux
  - Rocky Linux
  - SUSE Linux Enterprise Server
  - Ubuntu
- Supported host architectures
The hosts can use the following architectures:
  - AMD and Intel 64-bit architectures are supported for all operating systems.
  - The 64-bit ARM architecture and IBM Power Systems, Little Endian, are supported for certain operating systems.
For more information, see orcharhino Clients gen3 in the ATIX Service Portal.
Supported cloud providers
You can connect the following cloud providers as compute resources to orcharhino:
- Amazon EC2
- Google Compute Engine
- Microsoft Azure
Supported virtualization infrastructures
You can connect the following virtualization infrastructures as compute resources to orcharhino:
- KVM (libvirt)
- oVirt
- VMware
- Proxmox
Appendix A: Using noVNC to access virtual machines
You can use your browser to access the VNC console of VMs created by orcharhino.
orcharhino supports using noVNC on the following virtualization platforms:
- VMware
- Libvirt
- oVirt
Prerequisites:
- You have a virtual machine created by orcharhino.
- For existing virtual machines, ensure that the Display type in the Compute Resource settings is VNC.
- You have imported the CA certificate used for orcharhino into your browser. Adding a security exception in the browser is not enough for using noVNC. For more information, see Importing the Katello root CA certificate using orcharhino management UI in Configuring authentication for orcharhino users.
- On your orcharhino Server, configure the firewall to allow VNC service on ports 5900 to 5930:
$ firewall-cmd --add-port=5900-5930/tcp
$ firewall-cmd --add-port=5900-5930/tcp --permanent

Procedure:
- In the orcharhino management UI, navigate to Infrastructure > Compute Resources and select the name of a compute resource.
- In the Virtual Machines tab, select the name of your virtual machine. Ensure the machine is powered on and then select Console.
Appendix B: Host parameter hierarchy
You can access host parameters when provisioning hosts. Hosts inherit their parameters from the following locations, in order of increasing precedence:
| Parameter Level | Set in orcharhino management UI |
|---|---|
| Globally defined parameters | Configure > Global parameters |
| Organization-level parameters | Administer > Organizations |
| Location-level parameters | Administer > Locations |
| Domain-level parameters | Infrastructure > Domains |
| Subnet-level parameters | Infrastructure > Subnets |
| Operating system-level parameters | Hosts > Provisioning Setup > Operating Systems |
| Host group-level parameters | Configure > Host Groups |
| Host parameters | Hosts > All Hosts |
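As a hedged sketch, you can also set parameters on the command line with the hammer CLI; the parameter name, value, and host name below are placeholders.
# Set a global parameter (lowest precedence).
$ hammer global-parameter set --name "ntp_server" --value "ntp.example.com"
# Override it for a single host (highest precedence).
$ hammer host set-parameter --host "host01.example.com" --name "ntp_server" --value "ntp1.example.com"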
Appendix C: Permissions required to provision hosts
The following table provides an overview of the permissions a non-admin user requires to provision hosts.

| Resource name | Permissions | Additional details |
|---|---|---|
| Activation Keys | view_activation_keys | |
| Ansible role | view_ansible_roles | Required if Ansible is used. |
| Architecture | view_architectures | |
| Compute profile | view_compute_profiles | |
| Compute resource | view_compute_resources, create_compute_resources, destroy_compute_resources, power_compute_resources | Required to provision bare-metal hosts. |
| | view_compute_resources_vms, create_compute_resources_vms, destroy_compute_resources_vms, power_compute_resources_vms | Required to provision virtual machines. |
| Content Views | view_content_views | |
| Domain | view_domains | |
| Environment | view_environments | |
| Host | view_hosts, create_hosts, edit_hosts, destroy_hosts, build_hosts, power_hosts, play_roles_on_host | |
| | view_discovered_hosts, submit_discovered_hosts, auto_provision_discovered_hosts, provision_discovered_hosts, edit_discovered_hosts, destroy_discovered_hosts | Required if the Discovery service is enabled. |
| Hostgroup | view_hostgroups, create_hostgroups, edit_hostgroups, play_roles_on_hostgroup | |
| Image | view_images | |
| Lifecycle environment | view_lifecycle_environments | |
| Location | view_locations | |
| Medium | view_media | |
| Operatingsystem | view_operatingsystems | |
| Organization | view_organizations | |
| Parameter | view_params, create_params, edit_params, destroy_params | |
| Product and Repositories | view_products | |
| Provisioning template | view_provisioning_templates | |
| Ptable | view_ptables | |
| orcharhino Proxy | view_smart_proxies, view_smart_proxies_puppetca | |
| | view_openscap_proxies | Required if the OpenSCAP plugin is enabled. |
| Subnet | view_subnets | |
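The following is a minimal sketch of creating a role with some of these permissions and assigning it to a user with the hammer CLI; the role name, permission selection, and login are placeholders.
# Create a custom role.
$ hammer role create --name "Provisioning User"
# Add a filter with host provisioning permissions to the role.
$ hammer filter create \
    --role "Provisioning User" \
    --permissions view_hosts,create_hosts,edit_hosts,build_hosts,power_hosts
# Assign the role to a user.
$ hammer user add-role --login "jsmith" --role "Provisioning User"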
Additional resources:
- Creating a Role in Administering orcharhino
- Adding Permissions to a Role in Administering orcharhino
- Assigning Roles to a User in Administering orcharhino
|
The text and illustrations on this page are licensed by ATIX AG under a Creative Commons Attribution Share Alike 4.0 International ("CC BY-SA 4.0") license. This page also contains text from the official Foreman documentation which uses the same license ("CC BY-SA 4.0"). |