Provisioning Guide

Provisioning Methods

There are multiple ways to provision a host: image based, network based, boot disk based, and discovery based. The optimal host provisioning method depends on the type of host, the availability of DHCP, and whether the host is to be cloned from an existing VM template.

This results in the following decision tree:

Provisioning guide decision tree

The starting point is the question of which type of host you want to provision:

cloud instances
  • choose image based host deployment.

virtual machines
  • choose image based installation if you want to clone the VM from a template/golden image.

  • depending on your network configuration, either

    • choose boot disk based installation if your hosts have static IP addresses;

    • or select network based installation if the network of the host provides DHCP.

bare metal hosts
  • choose discovery based installation.

  • or, depending on the network configuration, either

    • choose boot disk based installation for hosts with static IP addresses;

    • or select network based installation if the network of the host provides DHCP.

The following graphic displays the steps of each provisioning method:

Provisioning guide methods

Network Based Provisioning
  1. Depending on whether it’s a virtual machine or bare metal host, orcharhino either

    • instructs the compute resource provider to create a new virtual machine,

    • or the bare metal host must be started manually, for example by hand or via a lights-out management tool. For bare metal hosts, the MAC address of the new host must be submitted to orcharhino.

  2. The host boots over the network and automatically searches for a DHCP server, which in turn assigns an IP address and configures the network. The DHCP server provides a PXE/iPXE file, which is a type of provisioning template.

    We recommend having orcharhino configure and manage the DHCP server, or even having orcharhino or orcharhino proxy take over its duties entirely.

  3. The new host fetches the AutoYaST/Kickstart/Preseed files from orcharhino or orcharhino proxy via TFTP and starts the unattended installation and initial configuration of the operating system.

  4. The new host reports back to orcharhino after finishing the installation, and its status in orcharhino is switched to installed.
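
If you drive network based provisioning from the CLI, you can create the host entry with hammer. This is a minimal sketch with placeholder names, MAC address, and options; the exact option set depends on your setup:

# hammer host create --name "host01" \
--organization "My_Organization" --location "My_Location" \
--hostgroup "Base" --mac "52:54:00:12:34:56" \
--subnet "provisioning" --domain "example.com" --build true

With build mode enabled, the host is provisioned over the network the next time it boots.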

Boot Disk Based Provisioning
  1. The boot_disk.iso image contains the network configuration of the new host, i.e. it replaces the DHCP-provided network configuration used in network based provisioning. There are four different kinds of boot disk images:

    • the per-host image is the default image and does not require DHCP. It only contains the network configuration and then points to the orcharhino for the iPXE template followed by the provisioning template, i.e. an AutoYaST, Kickstart, or Preseed provisioning template.

    • the full-host image contains the network configuration, the kernel, the iPXE file, and the provisioning template. The image is operating system specific.

    • the generic image is a reusable image which works for any host registered on orcharhino and requires DHCP.

    • the subnet-generic image requires DHCP and chainloads the installation media from an orcharhino proxy assigned to the subnet of the host.

  2. Both bare metal hosts and virtual machines require the boot_disk.iso image to be attached.

    • For virtual machines running on VMware vSphere, the image is automatically transferred and attached to the new host. For other compute resource providers, you need to attach the image by hand.

    • Bare metal hosts always need to receive the image manually. The same applies if you want to boot from a different boot_disk.iso image than the one provided by orcharhino. For bare metal hosts, the MAC address of the new host must be submitted to orcharhino.

  3. The new host reports back to orcharhino after finishing the installation, and its status in orcharhino is switched to installed.
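
If the boot disk plugin is installed on your orcharhino, you can also generate these images from the CLI. The following is a sketch only; the host name is a placeholder, and the available subcommands and options depend on the plugin version:

# hammer bootdisk host --host host01.example.com --full true
# hammer bootdisk generic

The first command writes a full-host image for the given host, the second a reusable generic image.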

Image Based Provisioning (PXE-less provisioning)

Depending on how you want to configure your image, this can be done either

  • with a finish provisioning template via SSH only, in which case you require the network configuration to be provided via DHCP;

  • or via cloud-init with a userdata provisioning template, in which case the image needs to point to your orcharhino or orcharhino proxy and have built-in support for cloud-init. The exact configuration depends on your compute resource provider.

We currently only recommend using image based provisioning for hosts that have access to a DHCP server.

  1. orcharhino instructs the compute resource provider to create a new virtual machine based on an existing VM template, i.e. by cloning a golden image.

  2. Depending on whether a DHCP server is available,

    • the new virtual machine either receives its IP address and network configuration from the DHCP server,

    • or a cloud-init provisioning template is used to set up the network configuration. This can be used with and without DHCP.

      Alternatively, orcharhino finishes the provisioning process based on provisioning templates via SSH.

  3. If the compute resource is a cloud provider and the image is capable of user_data, the new host does not need to be reachable by orcharhino or orcharhino proxy via SSH but can itself notify orcharhino or orcharhino proxy once it’s ready.
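
From the CLI, image based provisioning is typically requested by selecting the image provision method when creating the host. The following is a hedged sketch; the compute resource, image, and host group names are placeholders:

# hammer host create --name "cloudhost01" \
--organization "My_Organization" --location "My_Location" \
--hostgroup "Base" --compute-resource "My_vSphere" \
--provision-method image --image "golden-image"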

Operating System - Template Mapping

The following list maps operating systems to their provisioning template types:

  • CentOS, Oracle Linux, and Red Hat Enterprise Linux: Kickstart template

  • SUSE Linux Enterprise Server: AutoYaST template

  • Debian and Ubuntu: Preseed template

Discovery Based Provisioning

Discovery based provisioning is similar to boot disk based provisioning. It uses a discovery.iso image to boot the host into a minimal Linux system, which reports to orcharhino and sends various facts, including the MAC address, to identify the host. The host is then listed on the discovered hosts page.

Discovery based provisioning requires the host discovery plugin to be installed on your orcharhino.

  1. Depending on whether a DHCP server is available,

    • the host either receives its network configuration and discovery image manually and boots the discovery.iso image interactively;

    • or the host automatically fetches its network configuration and discovery image from the DHCP server and boots the discovery.iso image non-interactively.

  2. The discovery.iso image contains the information about which orcharhino or orcharhino proxy the host must report to. It collects facts about the host and sends them to orcharhino or orcharhino proxy.

    • If there are existing discovery rules, the host will automatically start a network based provisioning process based on discovery rules and host groups.

    • Else, the host appears on the list of discovered hosts. The process that follows is very similar to creating a new host.

      There are two ways to start the installation of the operating system:

      • the host is rebooted and then receives the network based provisioning template via the DHCP server;

      • or the host is not rebooted but the installation is started via kexec with the current network configuration.
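
With the host discovery plugin installed, you can also list and provision discovered hosts from the CLI. This is a sketch only; the discovered host name and host group are placeholders, and option names may vary between plugin versions:

# hammer discovery list
# hammer discovery provision --name "mac52540012abcd" --hostgroup "Base"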

If you want to provision hosts based on Ansible playbooks, have a look at the application centric deployment guide. ACD helps you deploy and configure multi host applications using an Ansible playbook and application definition.

Introduction

Provisioning Overview

Provisioning is a process that starts with a bare physical or virtual machine and ends with a fully configured, ready-to-use operating system. Using orcharhino, you can define and automate fine-grained provisioning for a large number of hosts.

There are many provisioning methods. For example, you can use orcharhino server’s integrated orcharhino Proxy or an external orcharhino Proxy to provision bare metal hosts using both PXE based and non-PXE based methods. You can also provision cloud instances from specific providers through their APIs. These provisioning methods are part of the orcharhino application life cycle to create, manage, and update hosts.

orcharhino has different methods for provisioning hosts:

Bare Metal Provisioning

orcharhino provisions bare metal hosts primarily through PXE boot and MAC address identification. You can create host entries and specify the MAC address of the physical host to provision. You can also boot blank hosts to use orcharhino’s discovery service, which creates a pool of ready-to-provision hosts. You can also boot and provision hosts through PXE-less methods.

Cloud Providers

orcharhino connects to private and public cloud providers to provision instances of hosts from images that are stored within the cloud environment. This also includes selecting which hardware profile or flavor to use.

Virtualization Infrastructure

orcharhino connects to virtualization infrastructure services such as oVirt and VMware to provision virtual machines from virtual image templates or using the same PXE-based boot methods as bare metal providers.

Refer to the compute resource chapter for more information.

Network Boot Provisioning Workflow

PXE booting assumes that a host, either physical or virtual, is configured to boot from the network as the first boot device and from the hard drive as the second boot device.

The provisioning process follows a basic PXE workflow:

  1. You create a host and select a domain and subnet. orcharhino requests an available IP address from the DHCP orcharhino Proxy that is associated with the subnet or from the PostgreSQL database in orcharhino. orcharhino loads this IP address into the IP address field in the Create Host window. When you complete all the options for the new host, submit the new host request.

  2. Depending on the configuration specifications of the host and its domain and subnet, orcharhino creates the following settings:

    • A DHCP record on orcharhino Proxy that is associated with the subnet.

    • A forward DNS record on orcharhino Proxy that is associated with the domain.

    • A reverse DNS record on the DNS orcharhino Proxy that is associated with the subnet.

    • PXELinux, Grub, Grub2, and iPXE configuration files for the host in the TFTP orcharhino Proxy that is associated with the subnet.

    • A Puppet certificate on the associated Puppet server.

    • A realm on the associated identity server.

  3. The new host requests a DHCP reservation from the DHCP server.

  4. The DHCP server responds to the reservation request and returns TFTP next-server and filename options.

  5. The host requests the boot loader and menu from the TFTP server according to the PXELoader setting.

  6. A boot loader is returned over TFTP.

  7. The boot loader fetches configuration for the host through its provisioning interface MAC address.

  8. The boot loader fetches the operating system installer kernel, init RAM disk, and boot parameters.

  9. The installer requests the provisioning template from orcharhino.

  10. orcharhino renders the provisioning template and returns the result to the host.

  11. The installer performs installation of the operating system.

    • The installer registers the host to orcharhino using orcharhino client.

    • The installer installs management tools such as katello-agent and puppet.

    • The installer notifies orcharhino of a successful build in the postinstall script.

  12. The PXE configuration files revert to a local boot template.

  13. The host reboots.

  14. The new host requests a DHCP reservation from the DHCP server.

  15. The DHCP server responds to the reservation request and returns TFTP next-server and filename options.

  16. The host requests the boot loader and menu from the TFTP server according to the PXELoader setting.

  17. A boot loader is returned over TFTP.

  18. The boot loader fetches the configuration for the host through its provisioning interface MAC address.

  19. The boot loader initiates boot from the local drive.

  20. If you configured the host to use any Puppet classes, the host configures itself using the modules.
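
To see which hosts are still in build mode at any point of this workflow, you can query orcharhino from the CLI; a minimal sketch using the standard search syntax:

# hammer host list --search "build = true"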

This workflow differs depending on custom options. For example:

Discovery

If you use the discovery service, orcharhino automatically detects the MAC address of the new host and restarts the host after you submit a request. Note that TCP port 8443 must be reachable by the orcharhino Proxy to which the host is attached for orcharhino to restart the host.

PXE-less Provisioning

After you submit a new host request, you must boot the specific host with the boot disk that you download from orcharhino and transfer using a USB port of the host.

Compute Resources

orcharhino creates the virtual machine and retrieves the MAC address and stores the MAC address in orcharhino. If you use image-based provisioning, the host does not follow the standard PXE boot and operating system installation. The compute resource creates a copy of the image for the host to use. Depending on image settings in orcharhino, seed data can be passed in for initial configuration, for example using cloud-init. orcharhino can connect using SSH to the host and execute a template to finish the customization.

Configuring Provisioning Resources

Provisioning Contexts

A provisioning context is the combination of an organization and location that you specify for orcharhino components. The organization and location that a component belongs to sets the ownership and access for that component.

Organizations divide orcharhino components into logical groups based on ownership, purpose, content, security level, and other divisions. You can create and manage multiple organizations through orcharhino and assign components to each individual organization. This ensures orcharhino server provisions hosts within a certain organization and only uses components that are assigned to that organization. For more information about organizations, see Managing Organizations in the Content Management Guide.

Locations function similarly to organizations. The difference is that locations are based on physical or geographical settings. Users can nest locations in a hierarchy. For more information about locations, see Managing Locations in the Content Management Guide.

Refer to the context menu for more information.

Setting the Provisioning Context

When you set a provisioning context, you define which organization and location to use for provisioning hosts.

The organization and location menus are located in the menu bar, on the upper left of the orcharhino web UI. If you have not selected an organization and location to use, the menu displays: Any Organization and Any Location. Refer to the context menu for more information.

Procedure
  1. Click Any Organization and select the organization.

  2. Click Any Location and select the location to use.

Each user can set their default provisioning context in their account settings. Click the user name in the upper right of the orcharhino web UI and select My account to edit your user account settings.

CLI procedure
  • When using the CLI, include either --organization or --organization-label and --location or --location-id as an option. For example:

    # hammer host list --organization "Default_Organization" --location "Default_Location"

    This command outputs hosts allocated to the Default_Organization and Default_Location.

Creating Operating Systems

An operating system is a collection of resources that define how orcharhino server installs a base operating system on a host. Operating system entries combine previously defined resources, such as installation media, partition tables, provisioning templates, and others.

Importing operating systems from Red Hat’s CDN creates new entries on the Hosts > Operating Systems page.

You can also add custom operating systems using the following procedure. To use the CLI instead of the web UI, see the CLI procedure.

Procedure
  1. In the orcharhino web UI, navigate to Hosts > Operating systems and click New Operating system.

  2. In the Name field, enter a name to represent the operating system entry.

  3. In the Major field, enter the number that corresponds to the major version of the operating system.

  4. In the Minor field, enter the number that corresponds to the minor version of the operating system.

  5. In the Description field, enter a description of the operating system.

  6. From the Family list, select the operating system’s family.

  7. From the Root Password Hash list, select the encoding method for the root password.

  8. From the Architectures list, select the architectures that the operating system uses.

  9. Click the Partition table tab and select the possible partition tables that apply to this operating system.

  10. Click the Installation Media tab and enter the information for the installation media source. For more information, see Adding Installation Media to orcharhino.

  11. Click the Templates tab and select a PXELinux template, a Provisioning template, and a Finish template for your operating system to use. You can select other templates, for example an iPXE template, if you plan to use iPXE for provisioning.

  12. Click Submit to save your provisioning template.

CLI procedure
  • Create the operating system using the hammer os create command:

    # hammer os create --name "MyOS" \
    --description "My_custom_operating_system" \
    --major 7 --minor 3 --family "Redhat" --architectures "x86_64" \
    --partition-tables "My_Partition" --media "Red_Hat" \
    --provisioning-templates "My_Provisioning_Template"

Updating the Details of Multiple Operating Systems

Use this procedure to update the details of multiple operating systems. This example shows you how to assign each operating system a partition table called Kickstart default, a configuration template called Kickstart default PXELinux, and a provisioning template called Kickstart Default.

Procedure
  1. On orcharhino server, run the following Bash script:

    PARTID=$(hammer --csv partition-table list | grep "Kickstart default," | cut -d, -f1)
    PXEID=$(hammer --csv template list --per-page=1000 | grep "Kickstart default PXELinux" | cut -d, -f1)
    SATID=$(hammer --csv template list --per-page=1000 | grep "provision" | grep ",Kickstart default" | cut -d, -f1)
    
    for i in $(hammer --no-headers --csv os list | awk -F, {'print $1'})
    do
       hammer partition-table add-operatingsystem --id="${PARTID}" --operatingsystem-id="${i}"
       hammer template add-operatingsystem --id="${PXEID}" --operatingsystem-id="${i}"
       hammer os set-default-template --id="${i}" --config-template-id=${PXEID}
       hammer os add-config-template --id="${i}" --config-template-id=${SATID}
       hammer os set-default-template --id="${i}" --config-template-id=${SATID}
    done
  2. Display information about the updated operating system to verify that the operating system is updated correctly:

    # hammer os info --id 1

Creating Architectures

An architecture in orcharhino represents a logical grouping of hosts and operating systems. Architectures are created by orcharhino automatically when hosts check in with Puppet. Basic i386 and x86_64 architectures are already preset in orcharhino.

Use this procedure to create an architecture in orcharhino.

Procedure
  1. In the orcharhino web UI, navigate to Hosts > Architectures and click Create Architecture.

  2. In the Name field, enter a name for the architecture.

  3. From the Operating Systems list, select an operating system. If none are available, you can create and assign them under Hosts > Operating Systems.

  4. Click Submit.

CLI procedure
  • Enter the hammer architecture create command to create an architecture. Specify its name and operating systems that include this architecture:

    # hammer architecture create --name "Architecture_Name" \
    --operatingsystems "os"

Creating Hardware Models

Use this procedure to create a hardware model in orcharhino so that you can specify which hardware model a host uses.

Procedure
  1. In the orcharhino web UI, navigate to Hosts > Hardware Models and click Create Model.

  2. In the Name field, enter a name for the hardware model.

  3. Optionally, in the Hardware Model and Vendor Class fields, you can enter corresponding information for your system.

  4. In the Info field, enter a description of the hardware model.

  5. Click Submit to save your hardware model.

CLI procedure
  • Create a hardware model using the hammer model create command. The only required parameter is --name. Optionally, enter the hardware model with the --hardware-model option, a vendor class with the --vendor-class option, and a description with the --info option:

    # hammer model create --name "model_name" --info "description" \
    --hardware-model "hardware_model" --vendor-class "vendor_class"

Using a Synced Kickstart Repository for a Host’s Operating System

orcharhino contains a set of synchronized kickstart repositories that you use to install the provisioned host’s operating system. For more information about adding repositories, see Syncing Repositories in the Content Management Guide. Refer to the creating a repository page for more information.

Use this procedure to set up a kickstart repository.

Procedure
  1. Add the synchronized kickstart repository that you want to use to the existing Content View, or create a new Content View and add the kickstart repository.

    For Red Hat Enterprise Linux 8, ensure that you add both Red Hat Enterprise Linux 8 for x86_64 - AppStream Kickstart x86_64 8 and Red Hat Enterprise Linux 8 for x86_64 - BaseOS Kickstart x86_64 8 repositories.

    If you use a disconnected environment, you must import the Kickstart repositories from a Red Hat Enterprise Linux binary DVD. For more information, see Importing Kickstart Repositories in the Content Management Guide.

  2. Publish a new version of the Content View where the kickstart repository is added and promote it to a required lifecycle environment. For more information, see Managing Content Views in the Content Management Guide.

  3. When you create a host, in the Operating System tab, for Media Selection, select the Synced Content check box.

To view the kickstart tree, enter the following command:

# hammer medium list --organization "your_organization"

Adding Installation Media to orcharhino

Installation media are sources of packages that orcharhino server uses to install a base operating system on a machine from an external repository. You can use this parameter to install third-party content. Red Hat content is delivered through repository syncing instead.

Installation media must be in the format of an operating system installation tree, and must be accessible to the machine hosting the installer through an HTTP URL. You can view installation media by navigating to Hosts > Installation Media menu.

If you want to improve download performance when using installation media to install operating systems on multiple hosts, you must modify the installation medium’s Path to point to the closest mirror or a local copy.

To use the CLI instead of the web UI, see the CLI procedure.

Procedure
  1. In the orcharhino web UI, navigate to Hosts > Installation Media and click Create Medium.

  2. In the Name field, enter a name to represent the installation media entry.

  3. In the Path field, enter the URL or NFS share that contains the installation tree. You can use the following variables in the path to represent multiple different system architectures and versions:

    • $arch - The system architecture.

    • $version - The operating system version.

    • $major - The operating system major version.

    • $minor - The operating system minor version.

      Example HTTP path:

      http://download.example.com/centos/$version/Server/$arch/os/

      Example NFS path:

      nfs://download.example.com:/centos/$version/Server/$arch/os/

      Synchronized content on orcharhino Proxies always uses an HTTP path. orcharhino Proxy managed content does not support NFS paths.

  4. From the Operating system family list, select the distribution or family of the installation medium. For example, CentOS and Fedora are in the Red Hat family. Debian and Ubuntu belong to the Debian family.

  5. Click the Organizations and Locations tabs to change the provisioning context. orcharhino server adds the installation medium to the set provisioning context.

  6. Click Submit to save your installation medium.

CLI procedure
  • Create the installation medium using the hammer medium create command:

    # hammer medium create --name "CustomOS" --os-family "Redhat" \
    --path 'http://download.example.com/centos/$version/Server/$arch/os/' \
    --organizations "My_Organization" --locations "My_Location"

Creating Partition Tables

A partition table is a type of template that defines the way orcharhino server configures the disks available on a new host. A partition table uses the same ERB syntax as provisioning templates. orcharhino contains a set of default partition tables to use, including a Kickstart default. You can also edit partition table entries to configure the preferred partitioning scheme, or create a partition table entry and add it to the operating system entry.

To use the CLI instead of the web UI, see the CLI procedure.

Procedure
  1. In the orcharhino web UI, navigate to Hosts > Partition Tables and click Create Partition Table.

  2. In the Name field, enter a name for the partition table.

  3. Select the Default check box if you want to set the template to automatically associate with new organizations or locations.

  4. Select the Snippet check box if you want to identify the template as a reusable snippet for other partition tables.

  5. From the Operating System Family list, select the distribution or family of the partitioning layout. For example, Red Hat Enterprise Linux, CentOS, and Fedora are in the Red Hat family.

  6. In the Template editor field, enter the layout for the disk partition. For example:

    zerombr
    clearpart --all --initlabel
    autopart

    You can also use the Template file browser to upload a template file.

    The format of the layout must match that for the intended operating system. For example, Red Hat Enterprise Linux 7.2 requires a layout that matches a kickstart file.

  7. In the Audit Comment field, add a summary of changes to the partition layout.

  8. Click the Organizations and Locations tabs to add any other provisioning contexts that you want to associate with the partition table. orcharhino adds the partition table to the current provisioning context.

  9. Click Submit to save your partition table.

CLI procedure
  1. Before you create a partition table with the CLI, create a plain text file that contains the partition layout. This example uses the ~/my-partition file.

  2. Create the partition table using the hammer partition-table create command:

    # hammer partition-table create --name "My Partition" --snippet false \
    --os-family Redhat --file ~/my-partition --organizations "My_Organization" \
    --locations "My_Location"

Dynamic Partition Example

Using an Anaconda kickstart template, the following section instructs Anaconda to erase the whole disk, automatically partition, enlarge one partition to maximum size, and then proceed to the next sequence of events in the provisioning process:

zerombr
clearpart --all --initlabel
autopart <%= host_param('autopart_options') %>

Dynamic partitioning is executed by the installation program. Therefore, you can write your own rules to specify how you want to partition disks according to runtime information from the node, for example, disk sizes, number of drives, vendor, or manufacturer.

If you want to provision servers and use dynamic partitioning, add the following example as a template. When the #Dynamic entry is included, the content of the template loads into a %pre shell scriptlet and creates a /tmp/diskpart.cfg that is then included in the Kickstart partitioning section.

#Dynamic (do not remove this line)

MEMORY=$((`grep MemTotal: /proc/meminfo | sed 's/^MemTotal: *//'|sed 's/ .*//'` / 1024))
if [ "$MEMORY" -lt 2048 ]; then
    SWAP_MEMORY=$(($MEMORY * 2))
elif [ "$MEMORY" -lt 8192 ]; then
    SWAP_MEMORY=$MEMORY
elif [ "$MEMORY" -lt 65536 ]; then
    SWAP_MEMORY=$(($MEMORY / 2))
else
    SWAP_MEMORY=32768
fi

cat <<EOF > /tmp/diskpart.cfg
zerombr yes
clearpart --all --initlabel
part /boot --fstype ext4 --size 200 --asprimary
part swap --size "$SWAP_MEMORY"
part / --fstype ext4 --size 1024 --grow
EOF

Provisioning Templates

A provisioning template defines the way orcharhino server installs an operating system on a host.

orcharhino includes many template examples. In the orcharhino web UI, navigate to Hosts > Provisioning templates to view them. You can create a template or clone a template and edit the clone. For help with templates, navigate to Hosts > Provisioning templates > Create Template > Help.

Templates accept the Embedded Ruby (ERB) syntax. For more information, see Template Writing Reference in Managing Hosts.

You can download provisioning templates. Before you can download the template, you must create a debug certificate. For more information, see Creating an Organization Debug Certificate in the Content Management Guide.

You can synchronize templates between orcharhino server and a Git repository or a local directory. For more information, see Synchronizing Templates Repositories in the Managing Hosts guide.

To view the history of changes applied to a template, navigate to Hosts > Provisioning templates, select one of the templates, and click History. Click Revert to override the content with the previous version. You can also revert to an earlier change. Click Show Diff to see information about a specific change:

  • The Template Diff tab displays changes in the body of a provisioning template.

  • The Details tab displays changes in the template description.

  • The History tab displays the user who made a change to the template and date of the change.

Refer to the provisioning templates page for more information.
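
You can also list and inspect provisioning templates from the CLI. The following sketch assumes the template ID shown in the list output; option names can differ between versions:

# hammer template list --per-page 50
# hammer template dump --id 1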

Types of Provisioning Templates

There are various types of provisioning templates:

Provision

The main template for the provisioning process. For example, a kickstart template.

PXELinux, PXEGrub, PXEGrub2

PXE-based templates that deploy to the template orcharhino Proxy associated with a subnet to ensure that the host uses the installer with the correct kernel options. For BIOS provisioning, select the PXELinux template. For UEFI provisioning, select the PXEGrub2 template.

Finish

Post-configuration scripts to execute using an SSH connection when the main provisioning process completes. You can use Finish templates only for image-based provisioning in virtual or cloud environments that do not support user_data. Do not confuse an image with a foreman discovery ISO, which is sometimes called a Foreman discovery image. An image in this context is an install image in a virtualized environment for easy deployment.

When a finish script successfully exits with the return code 0, orcharhino treats the code as a success and the host exits build mode. Note that there are a few finish scripts with a build mode that uses a callback HTTP call. These scripts are not used for image-based provisioning, but for post configuration of operating system installations such as Debian, Ubuntu, and BSD.

user_data

Post-configuration scripts for providers that accept custom data, also known as seed data. You can use the user_data template to provision virtual machines in cloud or virtualized environments only. This template does not require orcharhino to be able to reach the host; the cloud or virtualization platform is responsible for delivering the data to the image.

Ensure that the image that you want to provision has the software to read the data installed and set to start during boot. For example, cloud-init, which expects YAML input, or ignition, which expects JSON input.

cloud_init

Some environments, such as VMware, either do not support custom data or have their own data format that limits what can be done during customization. In this case, you can configure a cloud-init client with the foreman plug-in, which attempts to download the template directly from orcharhino over HTTP or HTTPS. This technique can be used in any environment, preferably virtualized.

Ensure that you meet the following requirements to use the cloud_init template:

  • Ensure that the image that you want to provision has the software to read the data installed and set to start during boot.

  • A provisioned host is able to reach orcharhino from the IP address that matches the host’s provisioning interface IP.

    Note that cloud-init does not work behind NAT.

Bootdisk

Templates for PXE-less boot methods.

Kernel Execution (kexec)

Kernel execution templates for PXE-less boot methods.

Script

An arbitrary script not used by default but useful for custom tasks.

ZTP

Zero Touch Provisioning templates.

POAP

PowerOn Auto Provisioning templates.

iPXE

Templates for iPXE or gPXE environments to use instead of PXELinux.

Creating Provisioning Templates

A provisioning template defines the way orcharhino server installs an operating system on a host. Use this procedure to create a new provisioning template.

Procedure
  1. In the orcharhino web UI, navigate to Hosts > Provisioning Templates and click Create Template.

  2. In the Name field, enter a name for the provisioning template.

  3. Fill in the rest of the fields as required. The Help tab provides information about the template syntax and details the available functions, variables, and methods that can be called on different types of objects within the template.

CLI procedure
  1. Before you create a template with the CLI, create a plain text file that contains the template. This example uses the ~/my-template file.

  2. Create the template using the hammer template create command and specify the type with the --type option:

    # hammer template create --name "My Provisioning Template" \
    --file ~/my-template --type provision --organizations "My_Organization" \
    --locations "My_Location"

Cloning Provisioning Templates

A provisioning template defines the way orcharhino server installs an operating system on a host. Use this procedure to clone a template and add your updates to the clone.

Procedure
  1. In the orcharhino web UI, navigate to Hosts > Provisioning Templates and search for the template that you want to use.

  2. Click Clone to duplicate the template.

  3. In the Name field, enter a name for the provisioning template.

  4. Select the Default check box to set the template to associate automatically with new organizations or locations.

  5. In the Template editor field, enter the body of the provisioning template. You can also use the Template file browser to upload a template file.

  6. In the Audit Comment field, enter a summary of changes to the provisioning template for auditing purposes.

  7. Click the Type tab and if your template is a snippet, select the Snippet check box. A snippet is not a standalone provisioning template, but a part of a provisioning template that can be inserted into other provisioning templates.

  8. From the Type list, select the type of the template. For example, Provisioning template.

  9. Click the Association tab and from the Applicable Operating Systems list, select the names of the operating systems that you want to associate with the provisioning template.

  10. Optionally, click Add combination and select a host group from the Host Group list or an environment from the Environment list to associate the provisioning template with host groups and environments.

  11. Click the Organizations and Locations tabs to add any additional contexts to the template.

  12. Click Submit to save your provisioning template.

Creating Compute Profiles

You can use compute profiles to predefine virtual machine hardware details such as CPUs, memory, and storage. A default installation of orcharhino contains three predefined profiles:

  • 1-Small

  • 2-Medium

  • 3-Large

Procedure
  1. In the orcharhino web UI, navigate to Infrastructure > Compute Profiles and click Create Compute Profile.

  2. In the Name field, enter a name for the profile.

  3. Click Submit. A new window opens with the name of the compute profile.

  4. In the new window, click the name of each compute resource and edit the attributes you want to set for this compute profile.

CLI procedure

The compute profile CLI commands are not yet implemented in orcharhino 5.6.

Setting a Default Encrypted Root Password for Hosts

If you do not want to set a plain text default root password for the hosts that you provision, you can use a default encrypted password.

Procedure
  1. Generate an encrypted password:

    # python -c 'import crypt,getpass;pw=getpass.getpass(); print(crypt.crypt(pw)) if (pw==getpass.getpass("Confirm: ")) else exit()'
  2. Copy the password for later use.

  3. In the orcharhino web UI, navigate to Administer > Settings.

  4. On the Settings page, select the Provisioning tab.

  5. In the Name column, navigate to Root password, and click Click to edit.

  6. Paste the encrypted password, and click Save.

Using noVNC to Access Virtual Machines

You can use your browser to access the VNC console of VMs created by orcharhino.

orcharhino supports using noVNC on the following virtualization platforms:

  • VMware

  • Libvirt

  • oVirt

Prerequisites
  • You must have a virtual machine created by orcharhino.

  • For existing virtual machines, ensure that the Display type in the Compute Resource settings is VNC.

  • You must import the Katello root CA certificate into your orcharhino server. Adding a security exception in the browser is not enough for using noVNC. For more information, see the Installing the Katello Root CA Certificate section in the Administering orcharhino guide.

Procedure
  1. On the VM host system, configure the firewall to allow VNC service on ports 5900 to 5930:

    • On operating systems with the iptables command:

      # iptables -A INPUT -p tcp --dport 5900:5930 -j ACCEPT
      # service iptables save
    • On operating systems with the firewalld service:

      # firewall-cmd --add-port=5900-5930/tcp
      # firewall-cmd --add-port=5900-5930/tcp --permanent

      If you do not use firewall-cmd to configure the Linux firewall, implement the equivalent rules using the tool of your choice.

  2. In the orcharhino web UI, navigate to Infrastructure > Compute Resources and select the name of a compute resource.

  3. In the Virtual Machines tab, select the name of a VM host. Ensure the machine is powered on and then select Console.

Creating a Host Group

We recommend creating a host group to bundle provisioning details. This allows you to conveniently provision hosts multiple times without reentering common information.

Changing the provisioning parameters of an existing host group will only affect hosts provisioned via the host group at a later time, i.e. it will not alter any existing hosts. However, configuration management changes for a host group will affect existing hosts.
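
A host group can also be created from the CLI. This is a sketch with placeholder names only; which options you set depends on the provisioning details you want to bundle, and option names can vary between versions:

# hammer hostgroup create --name "Base" \
--organizations "My_Organization" --locations "My_Location" \
--architecture "x86_64" --operatingsystem "CentOS 7.9" \
--domain "example.com" --subnet "provisioning" \
--partition-table "Kickstart default" --medium "CentOS mirror"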

Configuring Networking

Each provisioning type requires some network configuration. Use this chapter to configure network services in your integrated orcharhino Proxy on orcharhino server.

New hosts must have access to your orcharhino Proxy. orcharhino Proxy can be either your integrated orcharhino Proxy on orcharhino server or an external orcharhino Proxy. You might want to provision hosts from an external orcharhino Proxy when the hosts are on isolated networks and cannot connect to orcharhino server directly, or when the content is synchronized with orcharhino Proxy. Provisioning using the external orcharhino Proxy can save on network bandwidth.

Configuring orcharhino Proxy has two basic requirements:

  1. Configuring network services. This includes:

    • Content delivery services

    • Network services (DHCP, DNS, and TFTP)

    • Puppet configuration

  2. Defining network resource data in orcharhino server to help configure network interfaces on new hosts.

The following instructions have similar applications to configuring standalone orcharhino Proxies managing a specific network. To configure orcharhino to use external DHCP, DNS, and TFTP services, see Configuring External Services in Installing orcharhino.

Network Resources

orcharhino contains networking resources that you must set up and configure to create a host. orcharhino includes the following networking resources:

Domain

You must assign every host that is managed by orcharhino to a domain. Using the domain, orcharhino can manage A, AAAA, and PTR records. Even if you do not want orcharhino to manage your DNS servers, you still must create and associate at least one domain. Domains are included in the naming conventions of orcharhino hosts, for example, a host with the name test123 in the example.com domain has the fully qualified domain name test123.example.com.

Subnet

You must assign every host managed by orcharhino to a subnet. Using subnets, orcharhino can then manage IPv4 reservations. Even if there are no reservation integrations, you still must create and associate at least one subnet. When you manage a subnet in orcharhino, you cannot create DHCP records for that subnet outside of orcharhino. In orcharhino, you can use IP Address Management (IPAM) to manage IP addresses with one of the following options:

  • DHCP: DHCP orcharhino Proxy manages the assignment of IP addresses by finding the next available IP address starting from the first address of the range and skipping all addresses that are reserved. Before assigning an IP address, orcharhino Proxy sends ICMP and TCP pings to check whether the IP address is in use. Note that if a host is powered off, or has a firewall configured to disable connections, orcharhino makes a false assumption that the IP address is available. This check does not work for hosts that are turned off, therefore, the DHCP option can only be used with subnets that orcharhino controls and that do not have any hosts created externally.

    The orcharhino Proxy DHCP module retains the offered IP addresses for a short period of time to prevent collisions during concurrent access, so some IP addresses in the IP range might remain temporarily unused.

  • Internal DB: orcharhino finds the next available IP address from the Subnet range by excluding all IP addresses from the orcharhino database in sequence. The primary source of data is the database, not DHCP reservations. This IPAM is not safe when multiple hosts are being created in parallel; in that case, use DHCP or Random DB IPAM instead.

  • Random DB: orcharhino finds the next available IP address from the Subnet range by excluding all IP addresses from the orcharhino database randomly. The primary source of data is the database, not DHCP reservations. This IPAM is safe to use with concurrent host creation as IP addresses are returned in random order, minimizing the chance of a conflict.

  • EUI-64: Extended Unique Identifier (EUI) 64-bit IPv6 address generation, as per RFC 2373, is obtained through the 48-bit MAC address.

  • External IPAM: Delegates IPAM to an external system through orcharhino Proxy feature. orcharhino currently does not ship with any external IPAM implementations, but several plug-ins are in development.

  • None: IP address for each host must be entered manually.

    Options DHCP, Internal DB and Random DB can lead to DHCP conflicts on subnets with records created externally. These subnets must be under exclusive orcharhino control.

    For more information about adding a subnet, see Adding a Subnet to orcharhino server.
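
When you add a subnet via the CLI, the IPAM mode is selected with the --ipam option. The following is a hedged sketch with placeholder addresses and names; the accepted values mirror the options above but may vary between versions:

# hammer subnet create --name "provisioning" \
--organizations "My_Organization" --locations "My_Location" \
--network "192.168.140.0" --mask "255.255.255.0" \
--gateway "192.168.140.1" --from "192.168.140.10" --to "192.168.140.110" \
--ipam "Internal DB" --domains "example.com"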

DHCP Ranges

You can define the same DHCP range in orcharhino server for both discovered and provisioned systems, but use a separate range for each service within the same subnet.

orcharhino and DHCP Options

orcharhino manages DHCP reservations through a DHCP orcharhino Proxy. orcharhino also sets the next-server and filename DHCP options.

The next-server option

The next-server option provides the IP address of the TFTP server to boot from. This option is not set by default and must be set for each TFTP orcharhino Proxy. You can use the foreman-installer command with the --foreman-proxy-tftp-servername option to set the TFTP server in the /etc/foreman-proxy/settings.d/tftp.yml file:

# foreman-installer --foreman-proxy-tftp-servername 1.2.3.4

Each TFTP orcharhino Proxy then reports this setting through the API and orcharhino can retrieve the configuration information when it creates the DHCP record.

When the PXE loader is set to none, orcharhino does not populate the next-server option into the DHCP record.

If the next-server option remains undefined, orcharhino uses reverse DNS search to find a TFTP server address to assign, but you might encounter the following problems:

  • DNS timeouts during provisioning

  • Querying of incorrect DNS server. For example, authoritative rather than caching

  • Errors about incorrect IP address for the TFTP server. For example, PTR record was invalid

If you encounter these problems, check the DNS setup on both orcharhino and orcharhino Proxy, specifically the PTR record resolution.

The filename option

The filename option contains the full path to the file that downloads and executes during provisioning. The PXE loader that you select for the host or host group defines which filename option to use. When the PXE loader is set to none, orcharhino does not populate the filename option into the DHCP record. Depending on the PXE loader option, the filename changes as follows:

PXE loader option and filename entry:

  • PXELinux BIOS: pxelinux.0

  • PXELinux UEFI: pxelinux.efi

  • iPXE Chain BIOS: undionly.kpxe

  • PXEGrub2 UEFI: grub2/grubx64.efi (x64 can differ depending on the architecture)

  • iPXE UEFI HTTP: http://orcharhino-proxy.example.com:8000/httpboot/ipxe-x64.efi (requires the httpboot feature and renders the filename as a full URL where orcharhino-proxy.example.com is a known host name of orcharhino Proxy in orcharhino)

  • Grub2 UEFI HTTP: http://orcharhino-proxy.example.com:8000/httpboot/grub2/grubx64.efi (requires the httpboot feature and renders the filename as a full URL where orcharhino-proxy.example.com is a known host name of orcharhino Proxy in orcharhino)

Troubleshooting DHCP Problems in orcharhino

orcharhino can manage an ISC DHCP server on internal or external DHCP orcharhino Proxy. orcharhino can list, create, and delete DHCP reservations and leases. However, there are a number of problems that you might encounter on occasion.

Out of sync DHCP records

When an error occurs during DHCP orchestration, DHCP records in the orcharhino database and the DHCP server might not match. To fix this, you must add missing DHCP records from the orcharhino database to the DHCP server and then remove unwanted records from the DHCP server as per the following steps:

  1. To preview the DHCP records that are going to be added to the DHCP server, enter the following command:

    # foreman-rake orchestration:dhcp:add_missing subnet_name=NAME
  2. If you are satisfied with the preview of changes from the previous step, apply them by entering the above command with the perform=1 argument:

    # foreman-rake orchestration:dhcp:add_missing subnet_name=NAME perform=1
  3. To keep DHCP records in orcharhino and in the DHCP server synchronized, you can remove unwanted DHCP records from the DHCP server. Note that orcharhino assumes that all managed DHCP servers do not contain third-party records, therefore, this step might delete those unexpected records. To preview what records are going to be removed from the DHCP server, enter the following command:

    # foreman-rake orchestration:dhcp:remove_offending subnet_name=NAME
  4. If you are satisfied with the preview of changes from the previous step, apply them by entering the above command with the perform=1 argument:

    # foreman-rake orchestration:dhcp:remove_offending subnet_name=NAME perform=1

PXE loader option change

When the PXE loader option is changed for an existing host, this causes a DHCP conflict. The only workaround is to overwrite the DHCP entry.

Incorrect permissions on DHCP files

An operating system update can update the dhcpd package. This causes the permissions of important directories and files to reset so that the DHCP orcharhino Proxy cannot read the required information.

Changing the DHCP orcharhino Proxy entry

orcharhino manages DHCP records only for hosts that are assigned to subnets with a DHCP orcharhino Proxy set. If you create a host and then clear or change the DHCP orcharhino Proxy, when you attempt to delete the host, the action fails.

If you create a host without setting the DHCP orcharhino Proxy and then try to set the DHCP orcharhino Proxy, this causes DHCP conflicts.

Deleted hosts entries in the dhcpd.leases file

Any changes to a DHCP lease are appended to the end of the dhcpd.leases file. Because entries are appended to the file, it is possible that two or more entries of the same lease can exist in the dhcpd.leases file at the same time. When there are two or more entries of the same lease, the last entry in the file takes precedence. Group, subgroup and host declarations in the lease file are processed in the same manner. If a lease is deleted, { deleted; } is appended to the declaration.
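
To check how many times a host name appears in the lease file, you can search it directly. This is a sketch that assumes the default ISC DHCP lease file location on Enterprise Linux systems and uses a placeholder host name:

# grep -c "host01" /var/lib/dhcpd/dhcpd.leases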

Prerequisites for Image Based Provisioning

Post-Boot Configuration Method

Images that use the finish post-boot configuration scripts require a managed DHCP server, such as orcharhino’s integrated orcharhino Proxy or an external orcharhino Proxy. The host must be created with a subnet associated with a DHCP orcharhino Proxy, and the IP address of the host must be a valid IP address from the DHCP range.

It is possible to use an external DHCP service, but IP addresses must be entered manually. The SSH credentials that correspond to the configuration in the image must be configured in orcharhino for the post-boot configuration to run.

Check the following items when troubleshooting a virtual machine booted from an image that depends on post-configuration scripts:

  • The host has a subnet assigned in orcharhino server.

  • The subnet has a DHCP orcharhino Proxy assigned in orcharhino server.

  • The host has a valid IP address assigned in orcharhino server.

  • The IP address acquired by the virtual machine using DHCP matches the address configured in orcharhino server.

  • The virtual machine created from an image responds to SSH requests.

  • The virtual machine created from an image authorizes the user and password, over SSH, which is associated with the image being deployed.

  • orcharhino server has access to the virtual machine via SSH keys. This is required for the virtual machine to receive post-configuration scripts from orcharhino server.
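
A quick way to verify the SSH requirement is to open a connection from the orcharhino server with the credentials configured for the image. This is only a sketch; the user and IP address are placeholders:

# ssh root@192.168.140.20 'echo SSH connection works'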

Pre-Boot Initialization Configuration Method

Images that use the cloud-init scripts require a DHCP server to avoid having to include the IP address in the image. A managed DHCP orcharhino Proxy is preferred. The image must have the cloud-init service configured to start when the system boots and fetch a script or configuration data to use in completing the configuration.

Check the following items when troubleshooting a virtual machine booted from an image that depends on initialization scripts included in the image:

  • There is a DHCP server on the subnet.

  • The virtual machine has the cloud-init service installed and enabled.
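
On the template image or a test virtual machine created from it, you can verify the second item with systemd. A minimal sketch:

# systemctl is-enabled cloud-init
# systemctl status cloud-init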

Configuring Network Services

Some provisioning methods use orcharhino Proxy services. For example, a network might require orcharhino Proxy to act as a DHCP server. A network can also use PXE boot services to install the operating system on new hosts. This requires configuring orcharhino Proxy to use the main PXE boot services: DHCP, DNS, and TFTP.

Use the foreman-installer command with the options to configure these services on orcharhino server.

To configure these services on an external orcharhino Proxy, run orcharhino-installer --no-enable-foreman. Refer to the orcharhino proxy installation guide for more information.

orcharhino server uses eth0 for external communication, such as connecting to Red Hat’s CDN.

Procedure

To configure network services on orcharhino’s integrated orcharhino Proxy, complete the following steps:

  1. Enter the foreman-installer command to configure the required network services:

    # foreman-installer --foreman-proxy-dhcp true \
    --foreman-proxy-dhcp-managed true \
    --foreman-proxy-dhcp-gateway "192.168.140.1" \
    --foreman-proxy-dhcp-interface "eth1" \
    --foreman-proxy-dhcp-nameservers "192.168.140.2" \
    --foreman-proxy-dhcp-range "192.168.140.10 192.168.140.110" \
    --foreman-proxy-dhcp-server "192.168.140.2" \
    --foreman-proxy-dns true \
    --foreman-proxy-dns-managed true \
    --foreman-proxy-dns-forwarders "8.8.8.8; 8.8.4.4" \
    --foreman-proxy-dns-interface "eth1" \
    --foreman-proxy-dns-reverse "140.168.192.in-addr.arpa" \
    --foreman-proxy-dns-server "127.0.0.1" \
    --foreman-proxy-dns-zone "example.com" \
    --foreman-proxy-tftp true \
    --foreman-proxy-tftp-managed true
  2. Find the orcharhino Proxy that you have configured:

    # hammer proxy list
  3. Refresh features of orcharhino Proxy to view the changes:

    # hammer proxy refresh-features --name "orcharhino.example.com"
  4. Verify the services configured on orcharhino Proxy:

    # hammer proxy info --name "orcharhino.example.com"

Multiple Subnets or Domains via Installer

The foreman-installer options allow only for a single DHCP subnet or DNS domain. One way to define more than one subnet is by using a custom configuration file.

For every additional subnet or domain, create an entry in /etc/foreman-installer/custom-hiera.yaml file:

dhcp::pools:
 isolated.lan:
   network: 192.168.99.0
   mask: 255.255.255.0
   gateway: 192.168.99.1
   range: 192.168.99.5 192.168.99.49

dns::zones:
  # creates @ SOA $::fqdn root.example.com.
  # creates $::fqdn A $::ipaddress
  example.com: {}

  # creates @ SOA test.example.net. hostmaster.example.com.
  # creates test.example.net A 192.0.2.100
  example.net:
    soa: test.example.net
    soaip: 192.0.2.100
    contact: hostmaster.example.com.

  # creates @ SOA $::fqdn root.example.org.
  # does NOT create an A record
  example.org:
    reverse: true

  # creates @ SOA $::fqdn hostmaster.example.com.
  2.0.192.in-addr.arpa:
    reverse: true
    contact: hostmaster.example.com.

Execute foreman-installer to apply the changes and verify that /etc/dhcp/dhcpd.conf contains the appropriate entries. Subnets must then be defined in the orcharhino database.

DHCP, DNS, and TFTP Options for Network Configuration

DHCP Options
--foreman-proxy-dhcp

Enables the DHCP service. You can set this option to true or false.

--foreman-proxy-dhcp-managed

Enables Foreman to manage the DHCP service. You can set this option to true or false.

--foreman-proxy-dhcp-gateway

The DHCP pool gateway. Set this to the address of the external gateway for hosts on your private network.

--foreman-proxy-dhcp-interface

Sets the interface for the DHCP service to listen for requests. Set this to eth1.

--foreman-proxy-dhcp-nameservers

Sets the addresses of the nameservers provided to clients through DHCP. Set this to the address for orcharhino server on eth1.

--foreman-proxy-dhcp-range

A space-separated DHCP pool range for Discovered and Unmanaged services.

--foreman-proxy-dhcp-server

Sets the address of the DHCP server to manage.

DNS Options
--foreman-proxy-dns

Enables DNS service. You can set this option to true or false.

--foreman-proxy-dns-managed

Enables Foreman to manage the DNS service. You can set this option to true or false.

--foreman-proxy-dns-forwarders

Sets the DNS forwarders. Set this to your DNS servers.

--foreman-proxy-dns-interface

Sets the interface to listen for DNS requests. Set this to eth1.

--foreman-proxy-dns-reverse

The DNS reverse zone name.

--foreman-proxy-dns-server

Sets the address of the DNS server to manage.

--foreman-proxy-dns-zone

Sets the DNS zone name.

TFTP Options
--foreman-proxy-tftp

Enables TFTP service. You can set this option to true or false.

--foreman-proxy-tftp-managed

Enables Foreman to manage the TFTP service. You can set this option to true or false.

--foreman-proxy-tftp-servername

Sets the TFTP server to use. Ensure that you use orcharhino Proxy’s IP address.

Run foreman-installer --help to view more options related to DHCP, DNS, TFTP, and other orcharhino Proxy services.

Using TFTP Services through NAT

You can use orcharhino TFTP services through NAT. To do this, on all NAT routers or firewalls, you must enable a TFTP service on UDP port 69 and enable the TFTP state tracking feature. For more information, see the documentation for your NAT device.

Using NAT on Linux with firewalld:

Use the following command to allow TFTP service on UDP port 69, load the kernel TFTP state tracking module, and make the changes persistent:

# firewall-cmd --add-service=tftp && firewall-cmd --runtime-to-permanent
For NAT running on Linux with the iptables command:
  1. Configure the firewall to allow the TFTP service on UDP port 69.

    # iptables -A OUTPUT -i eth0 -p udp --sport 69 -m state \
    --state ESTABLISHED -j ACCEPT
    # service iptables save
  2. Load the ip_conntrack_tftp kernel TFTP state module. In the /etc/sysconfig/iptables-config file, locate IPTABLES_MODULES and add ip_conntrack_tftp as follows:

    IPTABLES_MODULES="ip_conntrack_tftp"
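
    To load the module immediately without restarting the firewall, you can typically run modprobe; note that on newer kernels the module may be named nf_conntrack_tftp instead of ip_conntrack_tftp:

    # modprobe ip_conntrack_tftp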

Adding a Domain to orcharhino server

orcharhino server defines domain names for each host on the network. orcharhino server must have information about the domain and the orcharhino Proxy responsible for domain name assignment.

Checking for Existing Domains

orcharhino server might already have the relevant domain created as part of the orcharhino server installation. Switch the context to Any Organization and Any Location, and then check the domain list to see whether the domain exists.
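
From the CLI, you can list the existing domains with hammer, for example:

# hammer domain list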

DNS Server Configuration Considerations

During the DNS record creation, orcharhino performs conflict DNS lookups to verify that the host name is not in active use. This check runs against one of the following DNS servers:

  • The system-wide resolver if Administer > Settings > Query local nameservers is set to true.

  • The nameservers that are defined in the subnet associated with the host.

  • The authoritative NS records that are queried from the SOA record of the domain name associated with the host.

If you experience timeouts during DNS conflict resolution, check the following settings:

  • The subnet nameservers must be reachable from orcharhino server.

  • The domain name must have a Start of Authority (SOA) record available from orcharhino server.

  • The system resolver in the /etc/resolv.conf file must have a valid and working configuration.
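
For example, you can check the SOA record and a subnet nameserver from orcharhino server with standard DNS tools; the domain and nameserver address below are illustrative:

# dig +short example.com SOA
# dig @192.168.140.2 example.com SOA
# cat /etc/resolv.conf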

To use the CLI instead of the web UI, see the CLI procedure.

Procedure

To add a domain to orcharhino, complete the following steps:

  1. In the orcharhino web UI, navigate to Infrastructure > Domains and click Create Domain.

  2. In the DNS Domain field, enter the full DNS domain name.

  3. In the Fullname field, enter the plain text name of the domain.

  4. Click the Parameters tab and configure any domain level parameters to apply to hosts attached to this domain. For example, user defined Boolean or string parameters to use in templates.

  5. Click Add Parameter and fill in the Name and Value fields.

  6. Click the Locations tab, and add the location where the domain resides.

  7. Click the Organizations tab, and add the organization that the domain belongs to.

  8. Click Submit to save the changes.

CLI procedure
  • Use the hammer domain create command to create a domain:

    # hammer domain create --name "domain_name.com" \
    --description "My example domain" --dns-id 1 \
    --locations "My_Location" --organizations "My_Organization"

In this example, the --dns-id option uses 1, which is the ID of your integrated orcharhino Proxy on orcharhino server.

Adding a Subnet to orcharhino server

You must add information for each of your subnets to orcharhino server because orcharhino configures interfaces for new hosts. To configure interfaces, orcharhino server must have all the information about the network that connects these interfaces.

To use the CLI instead of the web UI, see the CLI procedure.

Procedure
  1. In the orcharhino web UI, navigate to Infrastructure > Subnets, and in the Subnets window, click Create Subnet.

  2. In the Name field, enter a name for the subnet.

  3. In the Description field, enter a description for the subnet.

  4. In the Network address field, enter the network address for the subnet.

  5. In the Network prefix field, enter the network prefix for the subnet.

  6. In the Network mask field, enter the network mask for the subnet.

  7. In the Gateway address field, enter the external gateway for the subnet.

  8. In the Primary DNS server field, enter a primary DNS server for the subnet.

  9. In the Secondary DNS server field, enter a secondary DNS server for the subnet.

  10. From the IPAM list, select the method that you want to use for IP address management (IPAM). For more information about IPAM, see Network Resources.

  11. Enter the information for the IPAM method that you select.

  12. If you use the remote execution plugin, click the Remote Execution tab and select the orcharhino Proxy that controls the remote execution.

  13. Click the Domains tab and select the domains that apply to this subnet.

  14. Click the orcharhino Proxies tab and select the orcharhino Proxy that applies to each service in the subnet, including DHCP, TFTP, and reverse DNS services.

  15. Click the Parameters tab and configure any subnet level parameters to apply to hosts attached to this subnet. For example, user defined Boolean or string parameters to use in templates.

  16. Click the Locations tab and select the locations that use this orcharhino Proxy.

  17. Click the Organizations tab and select the organizations that use this orcharhino Proxy.

  18. Click Submit to save the subnet information.

CLI procedure
  • Create the subnet with the following command:

    # hammer subnet create --name "My_Network" \
    --description "your_description" \
    --network "192.168.140.0" --mask "255.255.255.0" \
    --gateway "192.168.140.1" --dns-primary "192.168.140.2" \
    --dns-secondary "8.8.8.8" --ipam "DHCP" \
    --from "192.168.140.111" --to "192.168.140.250" --boot-mode "DHCP" \
    --domains "example.com" --dhcp-id 1 --dns-id 1 --tftp-id 1 \
    --locations "My_Location" --organizations "My_Organization"
In this example, the --dhcp-id, --dns-id, and --tftp-id options use 1, which is the ID of the integrated orcharhino Proxy in orcharhino server.

Using Infoblox as DHCP and DNS Providers

You can use orcharhino Proxy to connect to your Infoblox application to create and manage DHCP and DNS records, and to reserve IP addresses.

Limitations

All DHCP and DNS records can be managed only in a single Network or DNS view. After you install the Infoblox modules on orcharhino Proxy and set up the view using the foreman-installer command, you cannot edit the view.

orcharhino Proxy communicates with a single Infoblox node using the standard HTTPS web API. If you want to configure clustering and High Availability, make the configurations in Infoblox.

Hosting PXE-related files using Infoblox’s TFTP functionality is not supported. You must use orcharhino Proxy as a TFTP server for PXE provisioning. For more information, see Configuring Networking.

The orcharhino IPAM feature cannot be integrated with Infoblox.

Prerequisites

You must have Infoblox account credentials to manage DHCP and DNS entries in orcharhino.

Ensure that you have Infoblox administration roles with the names: DHCP Admin and DNS Admin.

The administration roles must have permissions or belong to an admin group that permits the accounts to perform tasks through the Infoblox API.

Installing the Infoblox CA Certificate on orcharhino Proxy

You must install the Infoblox HTTPS CA certificate on the base system of all orcharhino Proxies that you want to integrate with Infoblox applications.

You can download the certificate from the Infoblox web UI, or you can use the following OpenSSL commands to download the certificate:

# update-ca-trust enable
# openssl s_client -showcerts -connect infoblox.example.com:443 </dev/null | \
openssl x509 -text >/etc/pki/ca-trust/source/anchors/infoblox.crt
# update-ca-trust extract
  • The infoblox.example.com entry must match the host name for the Infoblox application in the X509 certificate.

To test the CA certificate, use a curl query:

# curl -u admin:password https://infoblox.example.com/wapi/v2.0/network

Example positive response:

[
    {
        "_ref": "network/ZG5zLm5ldHdvcmskMTkyLjE2OC4yMDIuMC8yNC8w:infoblox.example.com/24/default",
        "network": "192.168.202.0/24",
        "network_view": "default"
    }
]

Installing the DHCP Infoblox module

Use this procedure to install the DHCP Infoblox module on orcharhino Proxy. Note that you cannot manage records in separate views.

You can also install the DHCP and DNS Infoblox modules simultaneously by combining this procedure with Installing the DNS Infoblox Module.

DHCP Infoblox Record Type Considerations

Use only the --foreman-proxy-plugin-dhcp-infoblox-record-type fixedaddress option to configure the DHCP and DNS modules.

Configuring both DHCP and DNS Infoblox modules with the host record type setting causes DNS conflicts and is not supported. If you install the Infoblox module on orcharhino Proxy with the --foreman-proxy-plugin-dhcp-infoblox-record-type option set to host, you must unset both DNS orcharhino Proxy and Reverse DNS orcharhino Proxy options because Infoblox does the DNS management itself. You cannot use the host option without creating conflicts and, for example, being unable to rename hosts in orcharhino.

Procedure

To install the Infoblox module for DHCP, complete the following steps:

  1. On orcharhino Proxy, enter the following command:

    # foreman-installer --enable-foreman-proxy-plugin-dhcp-infoblox \
    --foreman-proxy-dhcp true \
    --foreman-proxy-dhcp-managed false \
    --foreman-proxy-dhcp-provider infoblox \
    --foreman-proxy-plugin-dhcp-infoblox-record-type fixedaddress \
    --foreman-proxy-dhcp-server infoblox.example.com \
    --foreman-proxy-plugin-dhcp-infoblox-username admin \
    --foreman-proxy-plugin-dhcp-infoblox-password infoblox \
    --foreman-proxy-plugin-dhcp-infoblox-network-view default \
    --foreman-proxy-plugin-dhcp-infoblox-dns-view default
  2. In the orcharhino web UI, navigate to Infrastructure > orcharhino Proxies and select the orcharhino Proxy with the Infoblox DHCP module and click Refresh.

  3. Ensure that the dhcp features are listed.

  4. For all domains managed through Infoblox, ensure that the DNS orcharhino Proxy is set for that domain. To verify, in the orcharhino web UI, navigate to Infrastructure > Domains, and inspect the settings of each domain.

  5. For all subnets managed through Infoblox, ensure that the DHCP orcharhino Proxy and Reverse DNS orcharhino Proxy are set. To verify, in the orcharhino web UI, navigate to Infrastructure > Subnets, and inspect the settings of each subnet.

Installing the DNS Infoblox Module

Use this procedure to install the DNS Infoblox module on orcharhino Proxy. You can also install DHCP and DNS Infoblox modules simultaneously by combining this procedure and Installing the DHCP Infoblox module.

DNS records are managed only in the default DNS view; it is not possible to specify which DNS view to use.

To install the DNS Infoblox module, complete the following steps:

  1. On orcharhino Proxy, enter the following command to configure the Infoblox module:

    # foreman-installer --enable-foreman-proxy-plugin-dns-infoblox \
    --foreman-proxy-dns true \
    --foreman-proxy-dns-managed false \
    --foreman-proxy-dns-provider infoblox \
    --foreman-proxy-plugin-dns-infoblox-dns-server infoblox.example.com \
    --foreman-proxy-plugin-dns-infoblox-username admin \
    --foreman-proxy-plugin-dns-infoblox-password infoblox \
    --foreman-proxy-plugin-dns-infoblox-dns-view default

    Optionally, you can change the value of the --foreman-proxy-plugin-dns-infoblox-dns-view option to specify a DNS Infoblox view other than the default view.

  2. In the orcharhino web UI, navigate to Infrastructure > orcharhino Proxies and select the orcharhino Proxy with the Infoblox DNS module and click Refresh.

  3. Ensure that the dns features are listed.

Configuring iPXE to Reduce Provisioning Times

You can use orcharhino to configure PXELinux to chainboot iPXE and boot using the HTTP protocol if you have the following restrictions that prevent you from using PXE:

  • A network with unmanaged DHCP servers.

  • A PXE service that is blacklisted on your network or restricted by a firewall.

  • An unreliable TFTP UDP-based protocol because of, for example, a low-bandwidth network.

iPXE Workflow Overview

The provisioning process using iPXE follows this workflow:

  • A discovered host boots over PXE.

  • The host loads either ipxe.efi or undionly.0.

  • The host initializes again on the network using DHCP.

  • The DHCP server detects the iPXE firmware and returns the iPXE template URL with the bootstrap flag.

  • The host requests the iPXE template. orcharhino does not recognize the host, and because the bootstrap flag is set, the host receives the iPXE intermediate script template that ships with orcharhino.

  • The host runs the intermediate iPXE script and downloads the discovery image.

  • The host starts the discovery operating system and performs a discovery request.

  • The host is scheduled for provisioning and restarts.

  • The host boots over PXE.

  • The previous workflow repeats, but orcharhino recognizes the host’s remote IP address and instead of the intermediate template, the host receives a regular iPXE template.

  • The host reads the iPXE configuration and boots the installer.

  • From this point, the installation follows a regular PXE installation workflow.

Note that the workflow uses the discovery process, which is optional. To set up the discovery service, see Setting up the Discovery Service for iPXE.

With orcharhino, you can set up hosts to download either the ipxe.efi or undionly.kpxe file over TFTP. After the file downloads, all communication continues using HTTP. orcharhino uses the iPXE provisioning script to load either an operating system installer or the next entry in the boot order.

There are three methods of using iPXE with orcharhino:

  1. Chainbooting virtual machines using hypervisors that use iPXE as primary firmware.

  2. Using PXELinux through TFTP to chainload iPXE directly on bare metal hosts.

  3. Using PXELinux through UNDI, which uses HTTP to transfer the kernel and the initial RAM disk on bare-metal hosts.

Security Information

The iPXE binary in Red Hat Enterprise Linux is built without some security features. For this reason, you can only use HTTP and cannot use HTTPS. Recompile iPXE from source to use security features like HTTPS.

Prerequisites

Before you begin, ensure that the following conditions are met:

  • A host exists on orcharhino to use.

  • The MAC address of the provisioning interface matches the host configuration.

  • The provisioning interface of the host has a valid DHCP reservation.

  • The NIC is capable of PXE booting. For more information, see supported hardware on ipxe.org for a list of hardware drivers expected to work with an iPXE-based boot disk.

  • The NIC is compatible with iPXE.

Setting up the Discovery Service for iPXE

  1. On orcharhino Proxy, install the Foreman discovery service. Refer to the host discovery guide for installation instructions.

  2. On orcharhino Proxy, enable the httpboot service:

    # foreman-installer --foreman-proxy-httpboot true
  3. In the orcharhino web UI, navigate to Administer > Settings, and click the Provisioning tab.

  4. Locate the Default PXE global template entry row and in the Value column, change the value to discovery.

Chainbooting virtual machines

Some virtualization hypervisors use iPXE as primary firmware for PXE booting. Because of this, you can chainboot without TFTP and PXELinux.

Chainbooting virtual machine workflow

Using virtualization hypervisors removes the need for TFTP and PXELinux. It has the following workflow:

  1. Virtual machine starts

  2. iPXE retrieves the network credentials using DHCP

  3. iPXE retrieves the HTTP address using DHCP

  4. iPXE chainloads the iPXE template from the template orcharhino Proxy

  5. iPXE loads the kernel and initial RAM disk of the installer

If you want to use the discovery service with iPXE, see Setting up the Discovery Service for iPXE.

Ensure that the hypervisor that you want to use supports iPXE. The following virtualization hypervisors support iPXE:

Configuring orcharhino server to use iPXE

You can use the default template to configure iPXE booting for hosts. If you want to change the default values in the template, clone the template and edit the clone.

Procedure
  1. Copy a boot file to the TFTP directory on your orcharhino server:

    • For EFI systems, copy the ipxe.efi file:

      # cp /usr/share/ipxe/ipxe.efi /var/lib/tftpboot/
    • For BIOS systems, copy the undionly.kpxe file:

      # cp /usr/share/ipxe/undionly.kpxe /var/lib/tftpboot/undionly.0
  2. In the orcharhino web UI, navigate to Hosts > Provisioning Templates, enter Kickstart default iPXE and click Search.

  3. Optional: If you want to change the template, click Clone, enter a unique name, and click Submit.

  4. Click the name of the template you want to use.

  5. If you clone the template, you can make changes you require on the Template tab.

  6. Click the Association tab, and select the operating systems that your host uses.

  7. Click the Locations tab, and add the location where the host resides.

  8. Click the Organizations tab, and add the organization that the host belongs to.

  9. Click Submit to save the changes.

  10. Navigate to Hosts > Operating systems and select the operating system of your host.

  11. Click the Templates tab.

  12. From the iPXE Template list, select the template you want to use.

  13. Click Submit to save the changes.

  14. Navigate to Hosts > All Hosts.

  15. In the Hosts page, select the host that you want to use.

  16. Select the Templates tab.

  17. From the iPXE template list, select Review to verify that the Kickstart default iPXE template is the correct template.

  18. To use the iPXE bootstrapping feature for orcharhino, configure the dhcpd.conf file as follows:

    if exists user-class and option user-class = "iPXE" {
      filename "http://orcharhino.example.com/unattended/iPXE?bootstrap=1";
    } elsif option architecture = 00:06 {
      filename "ipxe.efi";
    } elsif option architecture = 00:07 {
      filename "ipxe.efi";
    } elsif option architecture = 00:09 {
      filename "ipxe.efi";
    } else {
      filename "undionly.0";
    }

    If you use an isolated network, use an orcharhino Proxy URL with TCP port 8000 instead of the URL of orcharhino server.

    Use http://orcharhino.example.com/unattended/iPXE?bootstrap=1 when the orcharhino Proxy HTTP endpoint is disabled (installer option --foreman-proxy-http false). When the endpoint is enabled, the templates feature of orcharhino Proxy listens on port 8000 by default; you can change the port with the --foreman-proxy-http-port installer option. In that case, use http://orcharhino-proxy.example.com:8000. You must update the /etc/dhcp/dhcpd.conf file after every upgrade.

Chainbooting iPXE directly or with UNDI

Use this procedure to set up iPXE to use either a built-in driver for network communication or the UNDI interface. There are separate procedures to configure orcharhino server and orcharhino Proxy to use iPXE.

You can use this procedure only with bare metal hosts.

Chainbooting iPXE directly or with UNDI workflow
  1. Host powers on

  2. PXE driver retrieves the network credentials using DHCP

  3. PXE driver retrieves the PXELinux firmware pxelinux.0 using TFTP

  4. PXELinux searches for the configuration file on the TFTP server

  5. PXELinux chainloads iPXE ipxe.lkrn or undionly-ipxe.0

  6. iPXE retrieves the network credentials using DHCP again

  7. iPXE retrieves HTTP address using DHCP

  8. iPXE chainloads the iPXE template from the template orcharhino Proxy

  9. iPXE loads the kernel and initial RAM disk of the installer

If you want to use the discovery service with iPXE, see Setting up the Discovery Service for iPXE.

Configuring orcharhino Server to use iPXE

You can use the default template to configure iPXE booting for hosts. If you want to change the default values in the template, clone the template and edit the clone.

Procedure
  1. In the orcharhino web UI, navigate to Hosts > Provisioning Templates, enter PXELinux chain iPXE or, for BIOS systems, enter PXELinux chain iPXE UNDI, and click Search.

  2. Optional: If you want to change the template, click Clone, enter a unique name, and click Submit.

  3. Click the name of the template you want to use.

  4. If you clone the template, you can make changes you require on the Template tab.

  5. Click the Association tab, and select the operating systems that your host uses.

  6. Click the Locations tab, and add the location where the host resides.

  7. Click the Organizations tab, and add the organization that the host belongs to.

  8. Click Submit to save the changes.

  9. In the Provisioning Templates page, enter Kickstart default iPXE into the search field and click Search.

  10. Optional: If you want to change the template, click Clone, enter a unique name, and click Submit.

  11. Click the name of the template you want to use.

  12. If you clone the template, you can make changes you require on the Template tab.

  13. Click the Association tab, and associate the template with the operating system that your host uses.

  14. Click the Locations tab, and add the location where the host resides.

  15. Click the Organizations tab, and add the organization that the host belongs to.

  16. Click Submit to save the changes.

  17. Navigate to Hosts > Operating systems and select the operating system of your host.

  18. Click the Templates tab.

  19. From the PXELinux template list, select the template you want to use.

  20. From the iPXE template list, select the template you want to use.

  21. Click Submit to save the changes.

  22. Navigate to Hosts > All Hosts, and select the host you want to use.

  23. Select the Templates tab, and from the PXELinux template list, select Review to verify the template is the correct template.

  24. From the iPXE template list, select Review to verify the template is the correct template. If there is no PXELinux entry, or you cannot find the new template, navigate to Hosts > All Hosts, and on your host, click Edit. Click the Operating system tab and click the Provisioning Template Resolve button to refresh the list of templates.

  25. To use the iPXE bootstrapping feature for orcharhino, configure the dhcpd.conf file as follows:

    if exists user-class and option user-class = "iPXE" {
      filename "http://orcharhino.example.com/unattended/iPXE?bootstrap=1";
    } elsif option architecture = 00:06 {
      filename "ipxe.efi";
    } elsif option architecture = 00:07 {
      filename "ipxe.efi";
    } elsif option architecture = 00:09 {
      filename "ipxe.efi";
    } else {
      filename "undionly.0";
    }

    If you use an isolated network, use an orcharhino Proxy URL with TCP port 8000 instead of the URL of orcharhino server.

    For http://orcharhino.example.com/unattended/iPXE, you can also use an orcharhino Proxy URL, for example http://orcharhino-proxy.example.com:8000/unattended/iPXE. You must update the /etc/dhcp/dhcpd.conf file after every upgrade.

Chainbooting orcharhino Proxy to use iPXE directly

You must perform this procedure on all orcharhino Proxies.

Procedure
  1. Install the ipxe-bootimgs RPM package:

    # yum install ipxe-bootimgs
  2. Copy the iPXE firmware to the TFTP server’s root directory. Do not use symbolic links because TFTP runs in the chroot environment.

    • To chainload iPXE directly, copy the ipxe.lkrn file:

      # cp /usr/share/ipxe/ipxe.lkrn /var/lib/tftpboot/
    • To chainload iPXE using the UNDI interface, copy the undionly.kpxe file:

      # cp /usr/share/ipxe/undionly.kpxe /var/lib/tftpboot/undionly-ipxe.0
  3. Correct the file contexts:

    # restorecon -RvF /var/lib/tftpboot/

Using PXE to Provision Hosts

There are four main ways to provision bare metal instances with orcharhino 5.6:

Unattended Provisioning

New hosts are identified by a MAC address and orcharhino server provisions the host using a PXE boot process.

Unattended Provisioning with Discovery

New hosts use PXE boot to load the orcharhino Discovery service. This service identifies hardware information about the host and lists it as an available host to provision. For more information, see Configuring the Discovery Service.

PXE-less Provisioning

New hosts are provisioned with a boot disk or PXE-less discovery image that orcharhino server generates.

PXE-less Provisioning with Discovery

New hosts use an ISO boot disk that loads the orcharhino Discovery service. This service identifies hardware information about the host and lists it as an available host to provision. For more information, see Implementing PXE-less Discovery.

Discovery workflows are only available when the Discovery plug-in is installed. For more information, see Configuring the Discovery Service.
BIOS and UEFI Support

With orcharhino, you can perform both BIOS and UEFI based PXE provisioning.

Both BIOS and UEFI interfaces work as interpreters between the computer’s operating system and firmware, initializing the hardware components and starting the operating system at boot time.

In orcharhino provisioning, the PXE loader option defines the DHCP filename option to use during provisioning. For BIOS systems, use the PXELinux BIOS option to enable a provisioned node to download the pxelinux.0 file over TFTP. For UEFI systems, use the PXEGrub2 UEFI option to enable a TFTP client to download the grub2/grubx64.efi file.

For BIOS provisioning, you must associate a PXELinux template with the operating system.

For UEFI provisioning, you must associate a PXEGrub2 template with the operating system.

If you associate both PXELinux and PXEGrub2 templates, orcharhino can deploy configuration files for both on a TFTP server, so that you can switch between PXE loaders easily.
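
As an illustration, you can verify which loader files orcharhino deployed to a TFTP orcharhino Proxy; the paths below assume the default /var/lib/tftpboot layout used elsewhere in this guide:

# ls -l /var/lib/tftpboot/pxelinux.0 /var/lib/tftpboot/grub2/grubx64.efi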

Prerequisites for Bare Metal Provisioning

The requirements for bare metal provisioning include:

  • An orcharhino Proxy managing the network for bare metal hosts. For unattended provisioning and discovery-based provisioning, orcharhino server requires PXE server settings.

    For more information about networking requirements, see Configuring Networking.

    For more information about the Discovery service, see Configuring the Discovery Service.

  • A bare metal host or a blank VM.

For information about the security token for unattended and PXE-less provisioning, see Configuring the Security Token Validity Duration.

Configuring the Security Token Validity Duration

When performing unattended and PXE-less provisioning, as a security measure, orcharhino automatically generates a unique token and adds this token to the OS installer recipe URL in the PXE configuration file (PXELinux, Grub2).

By default, the token is valid for 360 minutes. When you provision a host, ensure that you reboot the host within this time frame. If the token expires, you receive a 404 error and the operating system installer download fails.

To adjust the token’s duration of validity, in the orcharhino web UI, navigate to Administer > Settings, and click the Provisioning tab. Find the Token duration option, and click the edit icon and edit the duration, or enter 0 to disable token generation.
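
If you prefer the CLI, you can typically change this setting with hammer. The internal setting name token_duration is an assumption here; verify it in the settings list before changing it:

# hammer settings list | grep token_duration
# hammer settings set --name token_duration --value 120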

If token generation is disabled, an attacker can spoof a client IP address and download the OS installer recipe from orcharhino server, including the encrypted root password.

Creating Hosts with Unattended Provisioning

Unattended provisioning is the simplest form of host provisioning. You enter the host details on orcharhino server and boot your host. orcharhino server automatically manages the PXE configuration, organizes networking services, and provides the operating system and configuration for the host.

This method of provisioning hosts uses minimal interaction during the process.

To use the CLI instead of the web UI, see the CLI procedure.

Procedure
  1. In the orcharhino web UI, navigate to Hosts > Create Host.

  2. In the Name field, enter a name for the host.

  3. Click the Organization and Location tabs and change the context to match your requirements.

  4. From the Host Group list, select a host group that you want to use to populate the form.

  5. Click the Interface tab, and on the host’s interface, click Edit.

  6. Verify that the fields are populated with values. Note in particular:

    • The Name from the Host tab becomes the DNS name.

    • orcharhino server automatically assigns an IP address for the new host.

  7. In the MAC address field, enter the MAC address of the host’s provisioning interface. This ensures the identification of the host during the PXE boot process.

  8. Ensure that orcharhino server automatically selects the Managed, Primary, and Provision options for the first interface on the host. If not, select them.

  9. Click OK to save. To add another interface, click Add Interface. You can select only one interface for Provision and Primary.

  10. Click the Operating System tab, and verify that all fields contain values. Confirm each aspect of the operating system.

  11. Optional: Click Resolve in Provisioning template to check that the new host can identify the right provisioning templates to use.

    For more information about associating provisioning templates, see Provisioning Templates.

  12. Click the Parameters tab, and ensure that a parameter exists that provides an activation key. If not, add an activation key.

  13. Click Submit to save the host details.

    For more information about network interfaces, see Adding network interfaces.

This creates the host entry and the relevant provisioning settings. This also includes creating the necessary directories and files for PXE booting the bare metal host. If you start the physical host and set its boot mode to PXE, the host detects the DHCP service of orcharhino server’s integrated orcharhino Proxy, receives the HTTP endpoint of the Kickstart tree, and installs the operating system.

When the installation completes, the host also registers to orcharhino server using the activation key and installs the necessary configuration and management tools from the orcharhino client repository.

CLI procedure
  1. Create the host with the hammer host create command.

    # hammer host create --name "My_Unattended_Host" --organization "My_Organization" \
    --location "My_Location" --hostgroup "My_Host_Group" --mac "aa:aa:aa:aa:aa:aa" \
    --build true --enabled true --managed true
  2. Ensure the network interface options are set using the hammer host interface update command.

    # hammer host interface update --host "test1" --managed true \
    --primary true --provision true

Creating Hosts with PXE-less Provisioning

Some hardware does not provide a PXE boot interface. In orcharhino, you can provision a host without PXE boot. This is also known as PXE-less provisioning and involves generating a boot ISO that hosts can use. Using this ISO, the host can connect to orcharhino server, boot the installation media, and install the operating system.

orcharhino also provides a PXE-less discovery service that operates without PXE-based services, such as DHCP and TFTP. For more information, see Implementing PXE-less Discovery.

Boot ISO Types

There are four types of boot ISOs:

Host image

A boot ISO for the specific host. This image contains only the boot files that are necessary to access the installation media on orcharhino server. The user defines the subnet data in orcharhino and the image is created with static networking.

Full host image

A boot ISO that contains the kernel and initial RAM disk image for the specific host. This image is useful if the host fails to chainload correctly. The provisioning template still downloads from orcharhino server.

Generic image

A boot ISO that is not associated with a specific host. The ISO sends the host’s MAC address to orcharhino server, which matches it against the host entry. The image does not store IP address details, and requires access to a DHCP server on the network to bootstrap. This image is also available from the /bootdisk/disks/generic URL on your orcharhino server, for example, https://orcharhino.example.com/bootdisk/disks/generic.

Subnet image

A boot ISO that is similar to the generic image but is configured with the address of a orcharhino Proxy. This image is generic to all hosts with a provisioning NIC on the same subnet.

The Host image and Full host image contain provisioning tokens, so the generated image has a limited lifespan. For more information about configuring security tokens, see Configuring the Security Token Validity Duration.

The Full host image is based on SYSLINUX and works with most hardware. When using a Host image, Generic image, or Subnet image, see supported hardware on ipxe.org for a list of hardware drivers expected to work with an iPXE-based boot disk.

To use the CLI instead of the web UI, see the CLI procedure.

Procedure
  1. In the orcharhino web UI, navigate to Hosts > Create Host.

  2. In the Name field, enter a name that you want to become the provisioned system’s host name.

  3. Click the Organization and Location tabs and change the context to match your requirements.

  4. From the Host Group list, select a host group that you want to use to populate the form.

  5. Click the Interface tab, and on the host’s interface, click Edit.

  6. Verify that the fields are populated with values. Note in particular:

    • The Name from the Host tab becomes the DNS name.

    • orcharhino server automatically assigns an IP address for the new host.

  7. In the MAC address field, enter a MAC address for the host.

  8. Ensure that orcharhino server automatically selects the Managed, Primary, and Provision options for the first interface on the host. If not, select them.

  9. Click the Operating System tab, and verify that all fields contain values. Confirm each aspect of the operating system.

  10. Click Resolve in Provisioning template to check that the new host can identify the right provisioning templates to use.

    For more information about associating provisioning templates, see Provisioning Templates.

  11. Click the Parameters tab, and ensure that a parameter exists that provides an activation key. If not, add an activation key.

  12. Click Submit to save the host details.

This creates a host entry and the host details page appears.

The options on the upper right of the window form the Boot disk menu. From this menu, the following images are available for download: Host image, Full host image, Generic image, and Subnet image.

CLI procedure
  1. Create the host with the hammer host create command.

    # hammer host create --name "My_Bare_Metal" --organization "My_Organization" \
    --location "My_Location" --hostgroup "My_Host_Group" --mac "aa:aa:aa:aa:aa:aa" \
    --build true --enabled true --managed true
  2. Ensure that your network interface options are set using the hammer host interface update command.

    # hammer host interface update --host "test3" --managed true \
    --primary true --provision true
  3. Download the boot disk from orcharhino server with the hammer bootdisk host command:

    • For Host image:

      # hammer bootdisk host --host test3.example.com
    • For Full host image:

      # hammer bootdisk host --host test3.example.com --full true
    • For Generic image:

      # hammer bootdisk generic
    • For Subnet image:

      # hammer bootdisk subnet --subnet subnetName

This creates a boot ISO for your host to use.

Write the ISO to a USB storage device using the dd utility or livecd-tools if required.
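
For example, to write a downloaded host image to a USB device at /dev/sdb (the ISO file name and device name are illustrative; verify the device before writing):

# dd if=bootdisk_test3.example.com.iso of=/dev/sdb bs=4M
# sync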

When you start the physical host and boot from the ISO or the USB storage device, the host connects to orcharhino server and starts installing the operating system from its Kickstart tree.

When the installation completes, the host also registers to orcharhino server using the activation key and installs the necessary configuration and management tools from the orcharhino client repository.

Creating Hosts with UEFI HTTP Boot Provisioning

You can provision hosts from orcharhino using UEFI HTTP Boot. This is the only method with which you can provision hosts in an IPv6 network.

To use the CLI instead of the web UI, see the CLI procedure.

Prerequisites
  • Ensure that you meet the requirements for HTTP booting. For more information, see HTTP Booting Requirements in Planning for orcharhino.

Procedure
  1. On orcharhino Proxy that you use for provisioning, update the grub2-efi package to the latest version:

    # yum install grub2-efi
  2. Enable the foreman-proxy-http, foreman-proxy-httpboot, and foreman-proxy-tftp features:

    # foreman-installer --scenario katello \
    --foreman-proxy-httpboot true \
    --foreman-proxy-http true \
    --foreman-proxy-tftp true
  3. In the orcharhino web UI, navigate to Hosts > Create Host.

  4. In the Name field, enter a name for the host.

  5. Click the Organization and Location tabs and change the context to match your requirements.

  6. From the Host Group list, select a host group that you want to use to populate the form.

  7. Click the Interface tab, and on the host’s interface, click Edit.

  8. Verify that the fields are populated with values. Note in particular:

    • The Name from the Host tab becomes the DNS name.

    • orcharhino server automatically assigns an IP address for the new host.

  9. In the MAC address field, enter a MAC address of the host’s provisioning interface. This ensures the identification of the host during the PXE boot process.

  10. Ensure that orcharhino server automatically selects the Managed, Primary, and Provision options for the first interface on the host. If not, select them.

  11. Click OK to save. To add another interface, click Add Interface. You can select only one interface for Provision and Primary.

  12. Click the Operating System tab, and verify that all fields contain values. Confirm each aspect of the operating system.

  13. From the PXE Loader list, select Grub2 UEFI HTTP.

  14. Optional: Click Resolve in Provisioning template to check that the new host can identify the right provisioning templates to use.

    For more information about associating provisioning templates, see Creating Provisioning Templates.

  15. Click the Parameters tab, and ensure that a parameter exists that provides an activation key. If not, add an activation key.

  16. Click Submit to save the host details.

    For more information about network interfaces, see Adding network interfaces.

  17. Set the host to boot in UEFI mode from network.

  18. Start the host.

  19. From the boot menu, select Kickstart default PXEGrub2.

This creates the host entry and the relevant provisioning settings. This also includes creating the necessary directories and files for UEFI booting the bare metal host. When you start the physical host and set its boot mode to UEFI HTTP, the host detects the defined DHCP service, receives the HTTP endpoint of orcharhino Proxy with the Kickstart tree, and installs the operating system.

When the installation completes, the host also registers to orcharhino server using the activation key and installs the necessary configuration and management tools from the orcharhino client repository.

CLI procedure
  1. On orcharhino Proxy that you use for provisioning, update the grub2-efi package to the latest version:

    # yum install grub2-efi
  2. Enable the foreman-proxy-http, foreman-proxy-httpboot, and foreman-proxy-tftp features:

    # foreman-installer --scenario katello \
    --foreman-proxy-httpboot true \
    --foreman-proxy-http true \
    --foreman-proxy-tftp true
  3. Create the host with the hammer host create command.

    # hammer host create --name "My_Host" \
    --organization "My_Organization" \
    --location "My_Location" \
    --hostgroup "My_Host_Group" \
    --mac "aa:aa:aa:aa:aa:aa" \
    --build true \
    --enabled true \
    --managed true \
    --pxe-loader "Grub2 UEFI HTTP"
  4. Ensure the network interface options are set using the hammer host interface update command.

    # hammer host interface update --host "My_Host" \
    --managed true \
    --primary true \
    --provision true
  5. Set the host to boot in UEFI mode from network.

  6. Start the host.

  7. From the boot menu, select Kickstart default PXEGrub2.

This creates the host entry and the relevant provisioning settings. This also includes creating the necessary directories and files for UEFI booting the bare metal host. When you start the physical host and set its boot mode to UEFI HTTP, the host detects the defined DHCP service, receives the HTTP endpoint of orcharhino Proxy with the Kickstart tree, and installs the operating system.

When the installation completes, the host also registers to orcharhino server using the activation key and installs the necessary configuration and management tools from the orcharhino client repository.

Deploying SSH Keys during Provisioning

Use this procedure to deploy SSH keys added to a user during provisioning. For information on adding SSH keys to a user, see Managing SSH Keys for a User in Administering orcharhino.

Procedure

To deploy SSH keys during provisioning, complete the following steps:

  1. In the orcharhino web UI, navigate to Hosts > Provisioning Templates.

  2. Create a provisioning template, or clone and edit an existing template. For more information, see Creating Provisioning Templates.

  3. In the template, click the Template tab.

  4. In the Template editor field, add the create_users snippet to the %post section:

    <%= snippet('create_users') %>
  5. Select the Default check box.

  6. Click the Association tab.

  7. From the Applicable Operating Systems list, select an operating system.

  8. Click Submit to save the provisioning template.

  9. Create a host that is associated with the provisioning template or rebuild a host using the OS associated with the modified template. For more information, see Creating a Host in the Managing Hosts guide.

    The SSH keys of the Owned by user are added automatically when the create_users snippet is executed during the provisioning process. You can set Owned by to an individual user or a user group. If you set Owned by to a user group, the SSH keys of all users in the user group are added automatically.

Configuring the Discovery Service

orcharhino can detect hosts on a network that are not in your orcharhino inventory. These hosts boot the discovery image that performs hardware detection and relays this information back to orcharhino server. This method creates a list of ready-to-provision hosts in orcharhino server without needing to enter the MAC address of each host.

When you boot a blank bare-metal host, the boot menu has two options: local and discovery. If you select discovery to boot the Discovery image, after a few minutes, the Discovery image completes booting and a status screen is displayed.

The Discovery service is enabled by default on orcharhino server. However, the default setting of the global templates is to boot from the local hard drive. To change the default setting, in the orcharhino web UI, navigate to Administer > Settings, and click the Provisioning tab. Locate the Default PXE global template entry row, and in the Value column, enter discovery.

PXE based provisioning

To use orcharhino server to provide the Discovery image, install the following RPM packages:

  • tfm-rubygem-foreman_discovery

  • tfm-rubygem-smart_proxy_discovery

The tfm-rubygem-foreman_discovery package contains the orcharhino plug-in that handles discovered nodes and connections, and provides the necessary database structures and API.

The tfm-rubygem-smart_proxy_discovery package configures orcharhino Proxy, such as the integrated orcharhino Proxy of orcharhino server, to act as a proxy for the Discovery service.

Refer to the installation section of the host discovery guide for information on how to install this plug-in on your orcharhino.

When the installation completes, you can view the new menu option by navigating to Hosts > Discovered Hosts.

Installing the Discovery Service

Complete the following procedure to enable the Discovery service on orcharhino Proxy.

Procedure
  1. Enter the following commands on orcharhino Proxy:

    # yum install foreman-discovery-image tfm-rubygem-smart_proxy_discovery
  2. Restart the foreman-maintain services:

    # foreman-maintain service restart

    Refer to the host discovery guide for information on how to install the host discovery plug-in.

  3. In the orcharhino web UI, navigate to Infrastructure > orcharhino Proxies.

  4. Click the orcharhino Proxy and select Refresh from the Actions list. Locate Discovery in the list of features to confirm the Discovery service is now running.

Subnets

All subnets with discoverable hosts require an appropriate orcharhino Proxy selected to provide the Discovery service.

To check this, navigate to Infrastructure > orcharhino Proxies and verify if orcharhino Proxy that you want to use lists the Discovery feature. If not, click Refresh features.

In the orcharhino web UI, navigate to Infrastructure > Subnets, select a subnet, click the orcharhino Proxies tab, and select the Discovery Proxy that you want to use. Perform this for each subnet that you want to use.

The Provisioning Template PXELinux Discovery Snippet

For BIOS provisioning, the PXELinux global default template in the Hosts > Provisioning Templates window contains the snippet pxelinux_discovery. The snippet has the following lines:

LABEL discovery
  MENU LABEL Foreman Discovery Image
  KERNEL boot/fdi-image/vmlinuz0
  APPEND initrd=boot/fdi-image/initrd0.img rootflags=loop root=live:/fdi.iso rootfstype=auto ro rd.live.image acpi=force rd.luks=0 rd.md=0 rd.dm=0 rd.lvm=0 rd.bootif=0 rd.neednet=0 nomodeset proxy.url=<%= foreman_server_url %> proxy.type=foreman
  IPAPPEND 2

The KERNEL and APPEND options boot the Discovery image and ramdisk. The APPEND option contains a proxy.url parameter, with the foreman_server_url macro as its argument. This macro resolves to the full URL of orcharhino server.

For UEFI provisioning, the PXEgrub2 global default template in the Hosts > Provisioning Templates window contains the snippet pxegrub2_discovery:

menuentry 'Foreman Discovery Image' --id discovery {
  linuxefi boot/fdi-image/vmlinuz0 rootflags=loop root=live:/fdi.iso rootfstype=auto ro rd.live.image acpi=force rd.luks=0 rd.md=0 rd.dm=0 rd.lvm=0 rd.bootif=0 rd.neednet=0 nomodeset proxy.url=<%= foreman_server_url %> proxy.type=foreman BOOTIF=01-$mac
  initrdefi boot/fdi-image/initrd0.img
}

To use orcharhino Proxy to proxy the discovery steps, edit /var/lib/tftpboot/pxelinux.cfg/default or /var/lib/tftpboot/grub2/grub.cfg and change the URL to the orcharhino Proxy FQDN that you want to use.

The global template is available on orcharhino server and all orcharhino Proxies that have the TFTP feature enabled.

Automatic Contexts for Discovered Hosts

orcharhino server assigns an organization and location to discovered hosts according to the following sequence of rules:

  1. If a discovered host uses a subnet defined in orcharhino, the host uses the first organization and location associated with the subnet.

  2. If the discovery_organization or discovery_location fact values are set, the discovered host uses these fact values as an organization and location. To set these fact values, navigate to Administer > Settings > Discovered, and add these facts to the Default organization and Default location fields. Ensure that the discovered host’s subnet also belongs to the organization and location set by the fact, otherwise orcharhino refuses to set it for security reasons.

  3. If none of the previous conditions exists, orcharhino assigns the first Organization and Location ordered by name.

You can change the organization or location using the bulk actions menu of the Discovered hosts page. Select the discovered hosts to modify and select Assign Organization or Assign Location from the Select Action menu.

Note that the foreman_organization and foreman_location facts are no longer valid values for assigning context for discovered hosts. You still can use these facts to configure the host for Puppet runs. To do this, navigate to Administer > Settings > Puppet section and add the foreman_organization and foreman_location facts to the Default organization and Default location fields.

Discovery Templates and Snippets Settings

To use the Discovery service, you must configure provisioning settings to set Discovery as the default service and set the templates that you want to use.

Setting Discovery Service as Default

For both BIOS and UEFI, to set the Discovery service as the default service that boots for hosts that are not present in your current orcharhino inventory, complete the following steps:

  1. In the orcharhino web UI, navigate to Administer > Settings and click the Provisioning tab.

  2. For the Default PXE global template entry, in the Value column, enter discovery.
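
You can usually make the same change from the CLI with hammer; the internal setting name default_pxe_item_global is an assumption here:

# hammer settings set --name default_pxe_item_global --value discovery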

To use a template, in the orcharhino web UI, navigate to Administer > Settings and click the Provisioning tab and set the templates that you want to use.

Customizing Templates and Snippets

Templates and snippets are locked to prevent changes. If you want to edit a template or snippet, clone it, save it with a unique name, and then edit the clone.

When you change the template or a snippet it includes, the changes must be propagated to orcharhino server’s default PXE template.

In the orcharhino web UI, navigate to Hosts > Provisioning Templates and click Build PXE Default.

This refreshes the default PXE template on orcharhino server.

Additional Settings
The proxy.url argument

During the orcharhino installation process, if you use the default option --enable-foreman-plugin-discovery, you can edit the proxy.url argument in the template to set the URL of orcharhino Proxy that provides the discovery service. You can change the proxy.url argument to the IP address or FQDN of another provisioning orcharhino Proxy that you want to use, but ensure that you append the port number, for example, 9090. If you use an alternative port number with the --foreman-proxy-ssl-port option during orcharhino installation, you must add that port number. You can also edit the proxy.url argument to use a orcharhino IP address or FQDN so that the discovered hosts communicate directly with orcharhino server.

The proxy.type argument

If you use an orcharhino Proxy FQDN for the proxy.url argument, ensure that you set the proxy.type argument to proxy. If you use an orcharhino FQDN, update the proxy.type argument to foreman.

proxy.url=https://orcharhino-proxy.example.com:8443 proxy.type=proxy
Rendering the orcharhino Proxy’s Host Name

orcharhino deploys the same template to all TFTP orcharhino Proxies and there is no variable or macro available to render the orcharhino Proxy’s host name. The hard-coded proxy.url does not work with two or more TFTP orcharhino Proxies. As a workaround, every time you click Build PXE Default, edit the configuration file in the TFTP directory using SSH, or use a common DNS alias for appropriate subnets.
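
As an illustration of the SSH workaround, you can rewrite the rendered proxy.url and proxy.type values on a TFTP orcharhino Proxy after each Build PXE Default; the FQDN, port, and file path below are placeholders based on the examples in this section:

# ssh root@orcharhino-proxy.example.com \
  "sed -i -e 's|proxy.url=[^ ]*|proxy.url=https://orcharhino-proxy.example.com:8443|' \
          -e 's|proxy.type=[^ ]*|proxy.type=proxy|' \
          /var/lib/tftpboot/pxelinux.cfg/default"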

Tagged VLAN Provisioning

If you want to use tagged VLAN provisioning, and you want the discovery service to send a discovery request, add the following information to the KERNEL option in the discovery template:

fdi.vlan.primary=example_VLAN_ID

Creating Hosts from Discovered Hosts

Provisioning discovered hosts follows a provisioning process that is similar to PXE provisioning. The main difference is that instead of manually entering the host’s MAC address, you can select the host to provision from the list of discovered hosts.

To use the CLI instead of the web UI, see the CLI procedure.

Prerequisites

For information about the security token for unattended and PXE-less provisioning, see Configuring the Security Token Validity Duration.

Procedure
  1. In the orcharhino web UI, navigate to Hosts > Discovered Hosts. Select the host you want to use and click Provision to the right of the list.

  2. Select from one of the two following options:

    • To provision a host from a host group, select a host group, organization, and location, and then click Create Host.

    • To provision a host with further customization, click Customize Host and enter the additional details you want to specify for the new host.

  3. Verify that the fields are populated with values. Note in particular:

    • The Name from the Host tab becomes the DNS name.

    • orcharhino server automatically assigns an IP address for the new host.

    • orcharhino server automatically populates the MAC address from the Discovery results.

  4. Ensure that orcharhino server automatically selects the Managed, Primary, and Provision options for the first interface on the host. If not, select them.

  5. Click the Operating System tab, and verify that all fields contain values. Confirm each aspect of the operating system.

  6. Click Resolve in Provisioning template to check that the new host can identify the right provisioning templates to use. The host must resolve to the following provisioning templates:

    • kexec Template: Discovery Red Hat kexec

    • provision Template: orcharhino Kickstart Default

      For more information about associating provisioning templates, see Provisioning Templates.

  7. Click Submit to save the host details.

When the host provisioning is complete, the discovered host becomes a content host. To view the host, navigate to Hosts > Content Hosts.

CLI procedure
  1. Identify the discovered host to use for provisioning:

    # hammer discovery list
  2. Select a host and provision it using a host group. Set a new host name with the --new-name option:

    # hammer discovery provision --name "host_name" \
    --new-name "new_host_name" --organization "My_Organization" \
    --location "My_Location" --hostgroup "My_Host_Group" --build true \
    --enabled true --managed true

    This removes the host from the discovered host listing and creates a host entry with the provisioning settings. The Discovery image automatically resets the host so that it can boot to PXE. The host detects the DHCP service on orcharhino server’s integrated orcharhino Proxy and starts installing the operating system. The rest of the process is identical to normal PXE workflow described in Creating Hosts with Unattended Provisioning.

Creating Discovery Rules

As a method of automating the provisioning process for discovered hosts, orcharhino provides a feature to create discovery rules. These rules define how discovered hosts automatically provision themselves, based on the assigned host group. For example, you can automatically provision hosts with a high CPU count as hypervisors. Likewise, you can provision hosts with large hard disks as storage servers.

To use the CLI instead of the web UI, see the CLI procedure.

NIC Considerations

Auto provisioning does not currently allow configuring NICs; all systems are provisioned with the NIC configuration that was detected during discovery. However, you can set the NIC in the OS installer recipe scriptlet, using a script, or using configuration management at a later stage.

Procedure
  1. In the orcharhino web UI, navigate to Configure > Discovery rules, and select Create Rule.

  2. In the Name field, enter a name for the rule.

  3. In the Search field, enter the rules to determine whether to provision a host. This field provides suggestions for values you enter and allows operators for multiple rules. For example: cpu_count > 8.

  4. From the Host Group list, select the host group to use as a template for this host.

  5. In the Hostname field, enter the pattern to determine host names for multiple hosts. This uses the same ERB syntax that provisioning templates use. The host name can use the @host attribute for host-specific values and the rand macro for a random number or the sequence_hostgroup_param_next macro for incrementing the value. For more information about provisioning templates, see Provisioning Templates and the API documentation.

    • myhost-<%= sequence_hostgroup_param_next("EL7/MyHostgroup", 10, "discovery_host") %>

    • myhost-<%= rand(99999) %>

    • abc-<%= @host.facts['bios_vendor'] %>-<%= rand(99999) %>

    • xyz-<%= @host.hostgroup.name %>

    • srv-<%= @host.discovery_rule.name %>

    • server-<%= @host.ip.gsub('.','-') + '-' + @host.hostgroup.subnet.name %>

      When creating host name patterns, ensure that the resulting host names are unique, do not start with numbers, and do not contain underscores or dots. A good approach is to use unique information provided by Facter, such as the MAC address, BIOS, or serial ID.

  6. In the Hosts limit field, enter the maximum number of hosts that you can provision with the rule. Enter 0 for unlimited.

  7. In the Priority field, enter a number to set the precedence the rule has over other rules. Rules with lower values have a higher priority.

  8. From the Enabled list, select whether you want to enable the rule.

  9. To set a different provisioning context for the rule, click the Organizations and Locations tabs and select the contexts you want to use.

  10. Click Submit to save your rule.

  11. Navigate to Hosts > Discovered Host and select one of the following two options:

    • From the Discovered hosts list on the right, select Auto-Provision to automatically provision a single host.

    • On the upper right of the window, click Auto-Provision All to automatically provision all hosts.

CLI procedure
  1. Create the rule with the hammer discovery-rule create command:

    # hammer discovery-rule create --name "Hypervisor" \
    --search "cpu_count  > 8" --hostgroup "My_Host_Group" \
    --hostname "hypervisor-<%= rand(99999) %>" \
    --hosts-limit 5 --priority 5 --enabled true
  2. Automatically provision a host with the hammer discovery auto-provision command:

    # hammer discovery auto-provision --name "macabcdef123456"
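
Optionally, list the discovery rules to verify that the new rule was created with the expected priority and host limit:

# hammer discovery-rule list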

Implementing PXE-less Discovery

orcharhino provides a PXE-less Discovery service that operates without the need for PXE-based services (DHCP and TFTP). You accomplish this using orcharhino server’s Discovery image. Once a discovered node is scheduled for installation, it uses the kexec command to reload the Linux kernel with the OS installer without rebooting the node.

Known Issues

The console might freeze during the process. On some hardware, you might experience graphical hardware problems.

PXEless mode

If you have not yet installed the Discovery service or image, see Building a Discovery Image.

Attended Use
  1. Copy this media to either a CD, DVD, or a USB stick. For example, to copy to a USB stick at /dev/sdb:

    # dd bs=4M \
    if=/usr/share/foreman-discovery-image/foreman-discovery-image-3.4.4-5.iso \
    of=/dev/sdb
  2. Insert the Discovery boot media into a bare metal host, start the host, and boot from the media. The Discovery Image displays an option for either Manual network setup or Discovery with DHCP:

    If you select Manual network setup, the Discovery image requests a set of network options. This includes the primary network interface that connects to orcharhino server. This Discovery image also asks for network interface configuration options, such as an IPv4 Address, IPv4 Gateway, and an IPv4 DNS server.

  3. After entering these details, select Next.

  4. If you select Discovery with DHCP, the Discovery image requests only the primary network interface that connects to orcharhino server. It attempts to automatically configure the network interface using a DHCP server, such as one that an orcharhino Proxy provides.

  5. After the primary interface configuration, the Discovery image requests the Server URL, which is the URL of orcharhino server or orcharhino Proxy offering the Discovery service. For example, to use the integrated orcharhino Proxy on orcharhino server, use the following URL:

    https://orcharhino.example.com:8443
  6. Set the Connection type to Proxy, then select Next.

  7. Optional: The Discovery image also provides a set of fields to input Custom facts for the Facter tool to relay back to orcharhino server. These are entered in a name-value format. Provide any custom facts you require and select Confirm to continue.

orcharhino reports a successful communication with orcharhino server’s Discovery service. Navigate to Hosts > Discovered Hosts and view the newly discovered host.

For more information about provisioning discovered hosts, see Creating Hosts from Discovered Hosts.

Building a Discovery Image

The Discovery image is a minimal operating system that is PXE-booted on hosts to acquire initial hardware information and to check in with orcharhino. Discovered hosts keep running the Discovery image until they are rebooted into Anaconda, which then initiates the provisioning process.

The operating system image is based on CentOS.

Use this procedure to build an orcharhino discovery image or rebuild an image if you change configuration files.

Do not use this procedure on your production orcharhino or orcharhino Proxy. Use either a dedicated environment or copy the synchronized repositories and a kickstart file to a separate server.

Prerequisites
  • Install the livecd-tools package:

    # yum install livecd-tools
Procedure

To build the orcharhino discovery image, complete the following steps:

  1. Open the /usr/share/foreman-discovery-image/foreman-discovery-image.ks file for editing:

    # vim /usr/share/foreman-discovery-image/foreman-discovery-image.ks
  2. Replace the repo lines in the kickstart file with the repository URLs:

    repo --name=centos --mirrorlist=http://mirrorlist.centos.org/?release=7&arch=$basearch&repo=os
    repo --name=centos-updates --mirrorlist=http://mirrorlist.centos.org/?release=7&arch=$basearch&repo=updates
    repo --name=epel7 --mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
    repo --name=centos-sclo-rh --mirrorlist=http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=sclo-rh
    repo --name=foreman-rails-el7 --baseurl=https://yum.theforeman.org/rails/foreman-nightly/el7/$basearch/
    repo --name=foreman-el7 --baseurl=http://yum.theforeman.org/nightly/el7/$basearch/
    repo --name=foreman-plugins-el7 --baseurl=http://yum.theforeman.org/plugins/nightly/el7/$basearch/
  3. Run the livecd-creator tool:

    # livecd-creator --title="Discovery-Image" \
    --compression-type=xz \
    --cache=/var/cache/build-fdi \
    --config /usr/share/foreman-discovery-image/foreman-discovery-image.ks \
    --fslabel fdi \
    --tmpdir /var/tmp

    If you change fdi in the --fslabel option, you must also change the root label on the kernel command line when loading the image. fdi or the alternative name is appended to the .iso file that is created as part of this procedure. The PXE Discovery tool uses this name when converting from .iso to PXE.

    Use /var/tmp because this process requires close to 3GB of space and /tmp might have problems if the system is low on swap space.

  4. Verify that your fdi.iso file is created:

    # ls -h *.iso

After you create the .iso file, you can boot it over a network or locally. Complete one of the following procedures.

To boot the iso file over a network:
  1. To extract the initial ramdisk and kernel files from the .iso file over a network, enter the following command:

    # discovery-iso-to-pxe fdi.iso
  2. Create a directory to store your boot files:

    # mkdir /var/lib/tftpboot/boot/myimage
  3. Copy the initrd0.img and vmlinuz0 files to your new directory.

  4. Edit the KERNEL and APPEND entries in the /var/lib/tftpboot/pxelinux.cfg file to add the information about your own initial ramdisk and kernel files.
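
The following is a minimal sketch of such an entry. It assumes the files were copied to boot/myimage as above; the label in root=live:/fdi.iso must match the --fslabel used when building the image, and the proxy.url and proxy.type values must match your environment. The exact set of kernel options can differ between Discovery image versions:

LABEL discovery
  MENU LABEL orcharhino Discovery Image
  KERNEL boot/myimage/vmlinuz0
  APPEND initrd=boot/myimage/initrd0.img rootflags=loop root=live:/fdi.iso rootfstype=auto ro rd.live.image acpi=force rd.bootif=0 rd.neednet=0 nomodeset proxy.url=https://orcharhino.example.com:8443 proxy.type=proxy
  IPAPPEND 2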

To boot the iso file locally:

If you want to create a hybrid .iso file for booting locally, complete the following steps:

  1. To convert the .iso file to an .iso hybrid file for PXE provisioning, enter the following command:

    # isohybrid --partok fdi.iso

    If you have grub2 packages installed, you can also use the following command to install a grub2 bootloader:

    # isohybrid --partok --uefi fdi.iso
  2. To add an md5 checksum to the .iso file so that it can pass installation media validation tests in orcharhino, enter the following command:

    # implantisomd5 fdi.iso

Unattended Use, Customization, and Image Remastering

You can create a customized Discovery ISO to automate the image configuration process after booting. The Discovery image uses a Linux kernel for the operating system, which passes kernel parameters to configure the discovery service. These kernel parameters include the following entries:

proxy.url

The URL of orcharhino Proxy or orcharhino server providing the Discovery service.

proxy.type

The proxy type. This is usually set to proxy to connect to orcharhino Proxy. This parameter also supports a legacy foreman option, where communication goes directly to orcharhino server instead of an orcharhino Proxy.

fdi.pxmac

The MAC address of the primary interface in the format of AA:BB:CC:DD:EE:FF. This is the interface you aim to use for communicating with orcharhino Proxy. In automated mode, the first NIC (using network identifiers in alphabetical order) with a link is used. In semi-automated mode, a screen appears and requests you to select the correct interface.

fdi.pxip, fdi.pxgw, fdi.pxdns

Manually configures the IP address (fdi.pxip), the gateway (fdi.pxgw), and the DNS server (fdi.pxdns) for the primary network interface. If you omit these parameters, the image uses DHCP to configure the network interface.

fdi.pxfactname1, fdi.pxfactname2 …​ fdi.pxfactnameN

Use to specify custom fact names.

fdi.pxfactvalue1, fdi.pxfactvalue2 …​ fdi.pxfactvalueN

The values for each custom fact. Each value corresponds to a fact name. For example, fdi.pxfactvalue1 sets the value for the fact named with fdi.pxfactname1.

fdi.pxauto

To set automatic or semi-automatic mode. If set to 0, the image uses semi-automatic mode, which allows you to confirm your choices through a set of dialog options. If set to 1, the image uses automatic mode and proceeds without any confirmation.

fdi.initnet

By default, the image initializes all network interfaces (value all). When this setting is set to bootif, only the network interface it was network-booted from will be initialized.

fdi.rootpw

By default, the root account is locked. Use this option to set a root password. You can enter both clear and encrypted passwords.

fdi.ssh

By default, the SSH service is disabled. Set this to 1 or true to enable SSH access.

fdi.ipv4.method

By default, the NetworkManager IPv4 method setting is set to auto. This option overrides it; set it to ignore to disable the IPv4 stack. This option works only in DHCP mode.

fdi.ipv6.method

By default, the NetworkManager IPv6 method setting is set to auto. This option overrides it; set it to ignore to disable the IPv6 stack. This option works only in DHCP mode.

fdi.zips

Filenames with extensions to be downloaded and started during boot. For more information, see Extending the Discovery Image.

fdi.zipserver

The TFTP server from which to download extensions. For more information, see Extending the Discovery Image.

fdi.countdown

Number of seconds to wait until the text user interface is refreshed after the initial discovery attempt. This value defaults to 45 seconds. Increase this value if the status page reports the IP address as N/A.

fdi.dhcp_timeout

NetworkManager DHCP timeout. The default value is 300 seconds.

fdi.vlan.primary

VLAN tagging ID to set for the primary interface.

Using the discovery-remaster Tool to Remaster an OS Image

Use the discovery-remaster tool to embed the required kernel parameters directly into the Discovery ISO. For example:

# discovery-remaster ~/iso/foreman-discovery-image-3.4.4-5.iso \
"fdi.pxip=192.168.140.20/24 fdi.pxgw=192.168.140.1 \
fdi.pxdns=192.168.140.2 proxy.url=https://orcharhino.example.com:8443 \
proxy.type=proxy fdi.pxfactname1=customhostname fdi.pxfactvalue1=myhost fdi.pxmac=52:54:00:be:8e:8c fdi.pxauto=1"

Copy this media to either a CD, DVD, or a USB stick. For example, to copy to a USB stick at /dev/sdb:

# dd bs=4M \
if=/usr/share/foreman-discovery-image/foreman-discovery-image-3.4.4-5.iso \
of=/dev/sdb

Insert the Discovery boot media into a bare metal host, start the host, and boot from the media.

For more information about provisioning discovered hosts, see Creating Hosts from Discovered Hosts.

Extending the Discovery Image

You can extend the orcharhino Discovery image with custom facts, software, or device drivers. You can also provide a compressed archive file containing extra code for the image to use.

Procedure
  1. Create the following directory structure:

    .
    ├── autostart.d
    │   └── 01_zip.sh
    ├── bin
    │   └── ntpdate
    ├── facts
    │   └── test.rb
    └── lib
        ├── libcrypto.so.1.0.0
        └── ruby
            └── test.rb
    • The autostart.d directory contains scripts that are executed in POSIX order by the image when it starts, but before the host is registered to orcharhino.

    • The bin directory is added to the $PATH variable; you can place binary files in this directory and use them in the autostart scripts.

    • The facts directory is added to the FACTERLIB variable so that custom facts can be configured and sent to orcharhino.

    • The lib directory is added to the LD_LIBRARY_PATH variable and lib/ruby is added to the RUBYLIB variable, so that binary files in /bin can be executed correctly.

  2. After creating the directory structure, create a .zip file archive with the following command:

    zip -r my_extension.zip .
  3. To inform the Discovery image of the extensions it must use, place your zip files on your TFTP server with the Discovery image, and then update the APPEND line of the PXELinux template with the fdi.zips option where the paths are relative to the TFTP root. For example, if you have two archives at $TFTP/zip1.zip and $TFTP/boot/zip2.zip, use the following syntax:

    fdi.zips=zip1.zip,boot/zip2.zip

You can append new directives and options to the existing environment variables (PATH, LD_LIBRARY_PATH, RUBYLIB and FACTERLIB). If you want to specify the path explicitly in your scripts, the .zip file contents are extracted to the /opt/extension directory on the image.

You can create multiple .zip files but be aware that they are extracted to the same location on the Discovery image. Files extracted from later .zip files overwrite earlier versions if they have the same file name.
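
For example, a minimal autostart script might look like the following. The file name and NTP server are placeholders; the script runs before the host registers to orcharhino and can call binaries shipped in the bin directory of the extension, such as the ntpdate binary from the directory structure above:

#!/bin/sh
# autostart.d/02_ntp_sync.sh (hypothetical example)
# Synchronize the clock before the host registers to orcharhino, using the
# ntpdate binary shipped in the bin/ directory of this extension.
ntpdate -u 0.pool.ntp.org || true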

Troubleshooting Discovery

If a machine is not listed in the orcharhino web UI in Hosts > Discovered Hosts, inspect the following configuration areas to help isolate the error:

  • Navigate to Hosts > Provisioning Templates and redeploy the default PXELinux template using the Build PXE Default button.

  • Verify the pxelinux.cfg/default configuration file on the TFTP orcharhino Proxy.

  • Ensure adequate network connectivity between hosts, orcharhino Proxy, and orcharhino server.

  • Check the PXELinux template in use and determine the PXE discovery snippet it includes. Snippets are named as follows: pxelinux_discovery, pxegrub_discovery, or pxegrub2_discovery. Verify the proxy.url and proxy.type options in the PXE discovery snippet.

  • Ensure that DNS is working correctly for the discovered nodes, or use an IP address in the proxy.url option in the PXE discovery snippet included in the PXELinux template you are using.

  • Ensure that the DHCP server is delivering IP addresses to the booted image correctly.

  • Ensure the discovered host or virtual machine has at least 1200 MB of memory. Less memory can lead to various random kernel panic errors because the image is extracted in-memory.

For gathering important system facts, use the discovery-debug command. It prints out system logs, network configuration, list of facts, and other information on the standard output. The typical use case is to redirect this output and copy it with the scp command for further investigation.
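
For example, assuming SSH access to the discovered host is enabled as described below, you can capture the output on the host and copy it to your workstation. The file name and IP address are placeholders:

# discovery-debug > /tmp/discovery-debug.txt

Then, from your workstation:

$ scp root@192.168.140.20:/tmp/discovery-debug.txt .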

The first virtual console on the discovered host is reserved for systemd logs. Particularly useful system logs are tagged as follows:

  • discover-host — initial facts upload

  • foreman-discovery — facts refresh, reboot remote commands

  • nm-prepare — boot script which pre-configures NetworkManager

  • NetworkManager — networking information

Use TTY2 or higher to log in to a discovered host. The root account and SSH access are disabled by default, but you can enable SSH and set the root password using the following kernel command-line options in the Default PXELinux template on the APPEND line:

fdi.ssh=1 fdi.rootpw=My_Password
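
After the image boots with these options, you can log in over SSH; the IP address below is a placeholder for the address shown on the Discovery status screen:

$ ssh root@192.168.140.20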

Using a Lorax Composer Image for Provisioning

In orcharhino, you can integrate with Cockpit to perform actions and monitor your hosts. Using Cockpit, you can access Lorax Composer and build images that you can then upload to an HTTP server and use to provision hosts. When you configure orcharhino for image provisioning, the Anaconda installer partitions the disks, downloads and mounts the image, and copies the files over to the host. The preferred image type is TAR.

Refer to the web console guide for more information.

Prerequisites
  • An existing TAR image created via Lorax Composer.

Procedure

To use the Lorax Composer image with Anaconda Kickstart in orcharhino, complete the following steps:

  1. On orcharhino, create a custom product, add a custom file repository to this product, and upload the image to the repository.

  2. In the orcharhino web UI, navigate to Configure > Host Groups, and select the host group that you want to use.

  3. Click the Parameters tab, and then click Add Parameter.

  4. In the Name field, enter kickstart_liveimg.

  5. From the Type list, select string.

  6. In the Value field, enter the absolute path, or a relative path in the format custom/product/repository/image_name, that points to the exact location where you store the image. You can also set this parameter from the CLI, as shown after this procedure.

  7. Click Submit to save your changes.
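
Alternatively, the same parameter can be set on the host group with hammer. This is only a sketch; the host group name and the repository path are placeholders for your environment:

# hammer hostgroup set-parameter \
--hostgroup "My_Host_Group" \
--name "kickstart_liveimg" \
--value "custom/My_Product/My_Repository/my_image.tar"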

You can use this image for bare metal provisioning and provisioning using a compute resource.

For more information about bare metal provisioning, see Using PXE to Provision Hosts. For more information about provisioning with different compute resources, see the relevant chapter for the compute resource that you want to use.

Provisioning Virtual Machines on KVM (libvirt)

Kernel-based Virtual Machines (KVMs) use an open source virtualization daemon and API called libvirt running on Red Hat Enterprise Linux. orcharhino can connect to the libvirt API on a KVM server, provision hosts on the hypervisor, and control certain virtualization functions.

You can use KVM provisioning to create hosts over a network connection or from an existing image.

Prerequisites
  • An orcharhino Proxy managing a network on the KVM server. Ensure no other DHCP services run on this network to avoid conflicts with orcharhino Proxy. For more information about network service configuration for orcharhino Proxys, see Configuring Networking.

  • A server running KVM virtualization tools (libvirt daemon).

  • A virtual network running on the libvirt server. Only NAT and isolated virtual networks can be managed through orcharhino.

  • An existing virtual machine image if you want to use image-based provisioning. Ensure that this image exists in a storage pool on the KVM host. The default storage pool is usually located in /var/lib/libvirt/images. Only directory pool storage types can be managed through orcharhino.

  • Optional: The examples in these procedures use the root user for KVM. If you want to use a non-root user on the KVM server, you must add the user to the libvirt group on the KVM server:

    # usermod -a -G libvirt non_root_user
  • An orcharhino user account with the following roles:

  • A custom role in orcharhino with the following permissions:

    • view_compute_resources

    • destroy_compute_resources_vms

    • power_compute_resources_vms

    • create_compute_resources_vms

    • view_compute_resources_vms

    • view_locations

    • view_subnets

      For more information about creating roles, see Creating a Role in the Administering orcharhino guide. For more information about adding permissions to a role, see Adding Permissions to a Role in the Administering orcharhino guide.

Configuring orcharhino server for KVM Connections

Before adding the KVM connection, create an SSH key pair for the foreman user to ensure a secure connection between orcharhino server and KVM.

Procedure
  1. On orcharhino server, switch to the foreman user:

    # su foreman -s /bin/bash
  2. Generate the key pair:

    $ ssh-keygen
  3. Copy the public key to the KVM server:

    $ ssh-copy-id root@kvm.example.com
  4. Exit the bash shell for the foreman user:

    $ exit
  5. Install the libvirt-client package:

    # yum install libvirt-client
  6. Use the following command to test the connection to the KVM server:

    # su foreman -s /bin/bash -c 'virsh -c qemu+ssh://root@kvm.example.com/system list'

Adding a KVM Connection to orcharhino server

Use this procedure to add KVM as a compute resource in orcharhino. To use the CLI instead of the web UI, see the CLI procedure.

Procedure
  1. In the orcharhino web UI, navigate to Infrastructure > Compute Resources and click Create Compute Resource.

  2. In the Name field, enter a name for the new compute resource.

  3. From the Provider list, select Libvirt.

  4. In the Description field, enter a description for the compute resource.

  5. In the URL field, enter the connection URL to the KVM server. For example:

     qemu+ssh://root@kvm.example.com/system
  6. From the Display type list, select either VNC or Spice.

  7. Optional: To secure console access for new hosts with a randomly generated password, select the Set a randomly generated password on the display connection check box. You can retrieve the password for the VNC console to access the guest virtual machine console from the output of the following command executed on the KVM server:

    # virsh edit your_VM_name
    <graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0' passwd='your_randomly_generated_password'>

    The password is randomly generated every time the console for the virtual machine is opened, for example, with virt-manager.

  8. Click Test Connection to ensure that orcharhino server connects to the KVM server without fault.

  9. Verify that the Locations and Organizations tabs are automatically set to your current context. If you want, add additional contexts to these tabs.

  10. Click Submit to save the KVM connection.

CLI procedure
  • To create a compute resource, enter the hammer compute-resource create command:

    # hammer compute-resource create --name "My_KVM_Server" \
    --provider "Libvirt" --description "KVM server at kvm.example.com" \
    --url "qemu+ssh://root@kvm.example.com/system" --locations "New York" \
    --organizations "My_Organization"

Adding KVM Images to orcharhino server

To create hosts using image-based provisioning, you must add information about the image, such as access details and the image location, to your orcharhino server.

Note that you can manage only directory pool storage types through orcharhino.

To use the CLI instead of the web UI, see the CLI procedure.

Procedure
  1. In the orcharhino web UI, navigate to Infrastructure > Compute Resources and click the name of the KVM connection.

  2. Click Create Image.

  3. In the Name field, enter a name for the image.

  4. From the Operating System list, select the image’s base operating system.

  5. From the Architecture list, select the operating system architecture.

  6. In the Username field, enter the SSH user name for image access. This is normally the root user.

  7. In the Password field, enter the SSH password for image access.

  8. In the Image path field, enter the full path that points to the image on the KVM server. For example:

     /var/lib/libvirt/images/TestImage.qcow2
  9. Optional: Select the User Data check box if the image supports user data input, such as cloud-init data.

  10. Click Submit to save the image details.

CLI procedure
  • Create the image with the hammer compute-resource image create command. Use the --uuid field to store the full path of the image location on the KVM server.

    # hammer compute-resource image create \
    --name "KVM Image" \
    --compute-resource "My_KVM_Server" \
    --operatingsystem "RedHat version" \
    --architecture "x86_64" \
    --username root \
    --user-data false \
    --uuid "/var/lib/libvirt/images/KVMimage.qcow2"

Adding KVM Details to a Compute Profile

Use this procedure to add KVM hardware settings to a compute profile. When you create a host on KVM using this compute profile, these settings are automatically populated.

To use the CLI instead of the web UI, see the CLI procedure.

Procedure
  1. In the orcharhino web UI, navigate to Infrastructure > Compute Profiles.

  2. In the Compute Profiles window, click the name of an existing compute profile, or click Create Compute Profile, enter a Name, and click Submit.

  3. Click the name of the KVM compute resource.

  4. In the CPUs field, enter the number of CPUs to allocate to the new host.

  5. In the Memory field, enter the amount of memory to allocate to the new host.

  6. From the Image list, select the image to use if performing image-based provisioning.

  7. From the Network Interfaces list, select the network parameters for the host’s network interface. You can create multiple network interfaces. However, at least one interface must point to an orcharhino Proxy-managed network.

  8. In the Storage area, enter the storage parameters for the host. You can create multiple volumes for the host.

  9. Click Submit to save the settings to the compute profile.

CLI procedure
  1. To create a compute profile, enter the following command:

    # hammer compute-profile create --name "Libvirt CP"
  2. To add the values for the compute profile, enter the following command:

    # hammer compute-profile values create --compute-profile "Libvirt CP" \
    --compute-resource "My_KVM_Server" \
    --interface "compute_type=network,compute_model=virtio,compute_network=examplenetwork" \
    --volume "pool_name=default,capacity=20G,format_type=qcow2" \
    --compute-attributes "cpus=1,memory=1073741824"

Creating Hosts on KVM

In orcharhino, you can use KVM provisioning to create hosts over a network connection or from an existing image:

  • If you want to create a host over a network connection, the new host must be able to access either orcharhino server’s integrated orcharhino Proxy or an external orcharhino Proxy on a KVM virtual network, so that the host has access to PXE provisioning services. This new host entry triggers the KVM server to create and start a virtual machine. If the virtual machine detects the defined orcharhino Proxy through the virtual network, the virtual machine boots to PXE and begins to install the chosen operating system.

  • If you want to create a host with an existing image, the new host entry triggers the KVM server to create the virtual machine using a pre-existing image as a basis for the new volume.

To use the CLI instead of the web UI, see the CLI procedure.

DHCP Conflicts

For network-based provisioning, if you use a virtual network on the KVM server for provisioning, select a network that does not provide DHCP assignments, because a second DHCP server on that network causes conflicts with orcharhino server when booting new hosts. For example, you can define the network without DHCP as shown below.
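
A NAT network without a <dhcp> element can be defined on the KVM server. This is only a minimal sketch; the network name, bridge name, and addresses are placeholders:

# cat > examplenetwork.xml <<EOF
<network>
  <name>examplenetwork</name>
  <forward mode='nat'/>
  <bridge name='virbr10' stp='on' delay='0'/>
  <ip address='192.168.140.1' netmask='255.255.255.0'/>
</network>
EOF
# virsh net-define examplenetwork.xml
# virsh net-start examplenetwork
# virsh net-autostart examplenetwork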

Procedure
  1. In the orcharhino web UI, navigate to Hosts > Create Host.

  2. In the Name field, enter a name for the host.

  3. Click the Organization and Location tabs to ensure that the provisioning context is automatically set to the current context.

  4. From the Host Group list, select the host group that you want to use to populate the form.

  5. From the Deploy on list, select the KVM connection.

  6. From the Compute Profile list, select a profile to use to automatically populate virtual machine settings.

  7. Click the Interface tab and click Edit on the host’s interface.

  8. Verify that the fields are automatically populated, particularly the following items:

    • The Name from the Host tab becomes the DNS name.

    • orcharhino server automatically assigns an IP address for the new host.

    • The MAC address field is blank. The KVM server assigns a MAC address to the host.

    • The Managed, Primary, and Provision options are automatically selected for the first interface on the host. If not, select them.

    • The KVM-specific fields are populated with settings from your compute profile. Modify these settings if required.

  9. Click the Operating System tab, and confirm that all fields automatically contain values.

  10. Select the Provisioning Method that you want to use:

    • For network-based provisioning, click Network Based.

    • For image-based provisioning, click Image Based.

  11. Click Resolve in Provisioning templates to check that the new host can identify the right provisioning templates to use.

  12. Click the Virtual Machine tab and confirm that these settings are populated with details from the host group and compute profile. Modify these settings to suit your needs.

  13. Click the Parameters tab, and ensure that a parameter exists that provides an activation key. If not, add an activation key.

  14. Click Submit to save the host entry.

CLI procedure
  • To use network-based provisioning, create the host with the hammer host create command and include --provision-method build. Replace the values in the following example with the appropriate values for your environment.

    # hammer host create \
    --name "kvm-host1" \
    --organization "My_Organization" \
    --location "New York" \
    --hostgroup "Base" \
    --compute-resource "My_KVM_Server" \
    --provision-method build \
    --build true \
    --enabled true \
    --managed true \
    --interface "managed=true,primary=true,provision=true,compute_type=network,compute_network=examplenetwork" \
    --compute-attributes="cpus=1,memory=1073741824" \
    --volume="pool_name=default,capacity=20G,format_type=qcow2" \
    --root-password "password"
  • To use image-based provisioning, create the host with the hammer host create command and include --provision-method image. Replace the values in the following example with the appropriate values for your environment.

    # hammer host create \
    --name "kvm-host2" \
    --organization "My_Organization" \
    --location "New York" \
    --hostgroup "Base" \
    --compute-resource "My_KVM_Server" \
    --provision-method image \
    --image "KVM Image" \
    --enabled true \
    --managed true \
    --interface "managed=true,primary=true,provision=true,compute_type=network,compute_network=examplenetwork" \
    --compute-attributes="cpus=1,memory=1073741824" \
    --volume="pool_name=default,capacity=20G,format_type=qcow2"

For more information about additional host creation parameters for this compute resource, enter the hammer host create --help command.

Provisioning Virtual Machines on oVirt

oVirt is an enterprise-grade server and desktop virtualization platform. In orcharhino, you can manage virtualization functions through oVirt’s REST API. This includes creating virtual machines and controlling their power states.

You can use oVirt provisioning to create virtual machines over a network connection or from an existing image.

You can use cloud-init to configure the virtual machines that you provision. Using cloud-init avoids any special configuration on the network, such as a managed DHCP and TFTP, to finish the installation of the virtual machine. This method does not require orcharhino to connect to the provisioned virtual machine using SSH to run the finish script.

Prerequisites
  • An orcharhino Proxy managing a logical network on the oVirt environment. Ensure no other DHCP services run on this network to avoid conflicts with orcharhino Proxy. For more information, see Configuring Networking.

  • An existing template, other than the blank template, if you want to use image-based provisioning. For more information about creating templates for virtual machines, see Templates in the oVirt documentation.

  • An administration-like user on oVirt for communication with orcharhino server. Do not use the admin@internal user for this communication. Instead, create a new oVirt user with the following permissions:

    • System > Configure System > Login Permissions

    • Network > Configure vNIC Profile > Create

    • Network > Configure vNIC Profile > Edit Properties

    • Network > Configure vNIC Profile > Delete

    • Network > Configure vNIC Profile > Assign vNIC Profile to VM

    • Network > Configure vNIC Profile > Assign vNIC Profile to Template

    • Template > Provisioning Operations > Import/Export

    • VM > Provisioning Operations > Create

    • VM > Provisioning Operations > Delete

    • VM > Provisioning Operations > Import/Export

    • VM > Provisioning Operations > Edit Storage

    • Disk > Provisioning Operations > Create

    • Disk > Disk Profile > Attach Disk Profile

      For more information about how to create a user and add permissions in oVirt, see Users and Roles in the oVirt documentation.

Procedure Overview
  1. Adding the oVirt Connection to orcharhino server.

  2. Optional: Preparing Cloud-init Images in oVirt. Use this procedure if you want to use cloud-init to configure an image-based virtual machine.

  3. Optional: Adding oVirt Images to orcharhino server. Use this procedure if you want to use image-based provisioning.

  4. Optional: Preparing a Cloud-init Template. Use this procedure if you want to use cloud-init to configure an image-based virtual machine.

  5. Adding oVirt Details to a Compute Profile.

  6. Creating Hosts on oVirt.

Adding the oVirt Connection to orcharhino server

Use this procedure to add oVirt as a compute resource in orcharhino. To use the CLI instead of the web UI, see the CLI procedure.

Procedure
  1. In the orcharhino web UI, navigate to Infrastructure > Compute Resources and click Create Compute Resource.

  2. In the Name field, enter a name for the new compute resource.

  3. From the Provider list, select oVirt.

  4. In the Description field, enter a description for the compute resource.

  5. In the URL field, enter the connection URL for the oVirt Engine’s API in the following form: https://ovirt.example.com/ovirt-engine/api/v3.

  6. Optionally, select the Use APIv4 check box to evaluate the new oVirt Engine API.

  7. In the User field, enter the name of a user with permissions to access oVirt Engine’s resources.

  8. In the Password field, enter the password of the user.

  9. Click Load Datacenters to populate the Datacenter list with data centers from your oVirt environment.

  10. From the Datacenter list, select a data center.

  11. From the Quota ID list, select a quota to limit resources available to orcharhino.

  12. In the X509 Certification Authorities field, enter the certificate authority for SSL/TLS access. Alternatively, if you leave the field blank, a self-signed certificate is generated on the first API request by the server.

  13. Click the Locations tab and select the location you want to use.

  14. Click the Organizations tab and select the organization you want to use.

  15. Click Submit to save the compute resource.

CLI procedure
  • Enter the hammer compute-resource create command with Ovirt for --provider and the name of the data center you want to use for --datacenter.

    # hammer compute-resource create \
    --name "My_oVirt" --provider "Ovirt" \
    --description "oVirt server at ovirt.example.com" \
    --url "https://ovirt.example.com/ovirt-engine/api" \
    --use-v4 "false" --user "orcharhino_User" \
    --password "My_Password" \
    --locations "New York" --organizations "My_Organization" \
    --datacenter "My_Datacenter"

    Optionally, to evaluate the new oVirt Engine API, change false to true for the --use-v4 option.

Preparing Cloud-init Images in oVirt

To use cloud-init during provisioning, you must prepare an image with cloud-init installed in oVirt, and then import the image to orcharhino to use for provisioning.

Procedure
  1. In oVirt, create a virtual machine to use for image-based provisioning in orcharhino.

  2. On the virtual machine, install cloud-init:

    yum install cloud-init
  3. To the /etc/cloud/cloud.cfg file, add the following information:

    datasource_list: ["NoCloud", "ConfigDrive"]
  4. In oVirt, create an image from this virtual machine.

When you add this image to orcharhino, ensure that you select the User Data check box.

Adding oVirt Images to orcharhino server

To create hosts using image-based provisioning, you must add information about the image, such as access details and the image location, to your orcharhino server.

To use the CLI instead of the web UI, see the CLI procedure.

Procedure
  1. In the orcharhino web UI, navigate to Infrastructure > Compute Resources and click the name of the oVirt connection.

  2. Click Create Image.

  3. In the Name field, enter a name for the image.

  4. From the Operating System list, select the image’s base operating system.

  5. From the Architecture list, select the operating system architecture.

  6. In the Username field, enter the SSH user name for image access. This is normally the root user.

  7. In the Password field, enter the SSH password for image access.

  8. From the Image list, select an image from the oVirt compute resource.

  9. Optional: Select the User Data check box if the image supports user data input, such as cloud-init data.

  10. Click Submit to save the image details.

CLI procedure
  • Create the image with the hammer compute-resource image create command. Use the --uuid option to store the template UUID on the oVirt server.

    # hammer compute-resource image create \
    --name "oVirt_Image" \
    --compute-resource "My_oVirt" \
    --operatingsystem "RedHat version" \
    --architecture "x86_64" \
    --username root \
    --uuid "9788910c-4030-4ae0-bad7-603375dd72b1"

Preparing a Cloud-init Template

Procedure
  1. In the orcharhino web UI, navigate to Hosts > Provisioning Templates, and click Create Template.

  2. In the Name field, enter a name for the template.

  3. In the Editor field, enter the following template details:

    <%#
    kind: user_data
    name: Cloud-init
    -%>
    #cloud-config
    hostname: <%= @host.shortname %>
    
    <%# Allow user to specify additional SSH key as host parameter -%>
    <% if @host.params['sshkey'].present? || @host.params['remote_execution_ssh_keys'].present? -%>
    ssh_authorized_keys:
    <% if @host.params['sshkey'].present? -%>
      - <%= @host.params['sshkey'] %>
    <% end -%>
    <% if @host.params['remote_execution_ssh_keys'].present? -%>
    <% @host.params['remote_execution_ssh_keys'].each do |key| -%>
      - <%= key %>
    <% end -%>
    <% end -%>
    <% end -%>
    runcmd:
      - |
        #!/bin/bash
    <%= indent 4 do
        snippet 'subscription_manager_registration'
    end %>
    <% if @host.info['parameters']['realm'] && @host.realm && @host.realm.realm_type == 'Red Hat Identity Management' -%>
      <%= indent 4 do
        snippet 'idm_register'
      end %>
    <% end -%>
    <% unless @host.operatingsystem.atomic? -%>
        # update all the base packages from the updates repository
        yum -t -y -e 0 update
    <% end -%>
    <%
        # safemode renderer does not support unary negation
        non_atomic = @host.operatingsystem.atomic? ? false : true
        pm_set = @host.puppetmaster.empty? ? false : true
        puppet_enabled = non_atomic && (pm_set || @host.params['force-puppet'])
    %>
    <% if puppet_enabled %>
        yum install -y puppet
        cat > /etc/puppet/puppet.conf << EOF
      <%= indent 4 do
        snippet 'puppet.conf'
      end %>
        EOF
        # Setup puppet to run on system reboot
        /sbin/chkconfig --level 345 puppet on
    
        /usr/bin/puppet agent --config /etc/puppet/puppet.conf --onetime --tags no_such_tag <%= @host.puppetmaster.blank? ? '' : "--server #{@host.puppetmaster}" %> --no-daemonize
        /sbin/service puppet start
    <% end -%>
    phone_home:
     url: <%= foreman_url('built') %>
     post: []
     tries: 10
  4. Click the Type tab and from the Type list, select User data template.

  5. Click the Association tab, and from the Applicable Operating Systems list, select the operating system that you want to associate with the template.

  6. Click the Locations tab, and from the Locations list, select the location that you want to associate with the template.

  7. Click the Organizations tab, and from the Organization list, select the organization that you want to associate with the template.

  8. Click Submit.

  9. Navigate to Hosts > Operating Systems, and select the operating system you want to associate with the template.

  10. Click the Templates tab, and from the User data template list, select the name of the new template.

  11. Click Submit.

Adding oVirt Details to a Compute Profile

Use this procedure to add oVirt hardware settings to a compute profile. When you create a host on oVirt using this compute profile, these settings are automatically populated.

To use the CLI instead of the web UI, see the CLI procedure.

Procedure
  1. In the orcharhino web UI, navigate to Infrastructure > Compute Profiles.

  2. In the Compute Profiles window, click the name of an existing compute profile, or click Create Compute Profile, enter a Name, and click Submit.

  3. Click the name of the oVirt compute resource.

  4. From the Cluster list, select the target host cluster in the oVirt environment.

  5. From the Template list, select the oVirt template to use for the Cores and Memory settings.

  6. In the Cores field, enter the number of CPU cores to allocate to the new host.

  7. In the Memory field, enter the amount of memory to allocate to the new host.

  8. From the Image list, select the image to use for image-based provisioning.

  9. In the Network Interfaces area, enter the network parameters for the host’s network interface. You can create multiple network interfaces. However, at least one interface must point to an orcharhino Proxy-managed network. For each network interface, enter the following details:

    1. In the Name field, enter the name of the network interface.

    2. From the Network list, select the logical network that you want to use.

  10. In the Storage area, enter the storage parameters for the host. You can create multiple volumes for the host. For each volume, enter the following details:

    1. In the Size (GB) field, enter the size, in GB, of the new volume.

    2. From the Storage domain list, select the storage domain for the volume.

    3. From the Preallocate disk list, select either thin provisioning or preallocation of the full disk.

    4. From the Bootable list, select whether you want a bootable or non-bootable volume.

  11. Click Submit to save the compute profile.

CLI procedure
  1. To create a compute profile, enter the following command:

    # hammer compute-profile create --name "oVirt CP"
  2. To set the values for the compute profile, enter the following command:

    # hammer compute-profile values create --compute-profile "oVirt CP" \
    --compute-resource "My_oVirt" \
    --interface "compute_interface=Interface_Type,compute_name=eth0,compute_network=satnetwork" \
    --volume "size_gb=20G,storage_domain=Data,bootable=true" \
    --compute-attributes "cluster=Default,cores=1,memory=1073741824,start=true""

Creating Hosts on oVirt

In orcharhino, you can use oVirt provisioning to create hosts over a network connection or from an existing image:

  • If you want to create a host over a network connection, the new host must be able to access either orcharhino server’s integrated orcharhino Proxy or an external orcharhino Proxy on an oVirt virtual network, so that the host has access to PXE provisioning services. This new host entry triggers the oVirt server to create and start a virtual machine. If the virtual machine detects the defined orcharhino Proxy through the virtual network, the virtual machine boots to PXE and begins to install the chosen operating system.

  • If you want to create a host with an existing image, the new host entry triggers the oVirt server to create the virtual machine using a pre-existing image as a basis for the new volume.

To use the CLI instead of the web UI, see the CLI procedure.

DHCP Conflicts

For network-based provisioning, if you use a virtual network on the oVirt server for provisioning, select a network that does not provide DHCP assignments, because a second DHCP server on that network causes conflicts with orcharhino server when booting new hosts.

Procedure
  1. In the orcharhino web UI, navigate to Hosts > Create Host.

  2. In the Name field, enter a name for the host.

  3. Click the Organization and Location tabs to ensure that the provisioning context is automatically set to the current context.

  4. From the Host Group list, select the host group that you want to use to populate the form.

  5. From the Deploy on list, select the oVirt connection.

  6. From the Compute Profile list, select a profile to use to automatically populate virtual machine settings.

  7. Click the Interface tab and click Edit on the host’s interface.

  8. Verify that the fields are automatically populated, particularly the following items:

    • The Name from the Host tab becomes the DNS name.

    • orcharhino server automatically assigns an IP address for the new host.

    • The MAC address field is blank. The oVirt server assigns a MAC address to the host.

    • The Managed, Primary, and Provision options are automatically selected for the first interface on the host. If not, select them.

    • The oVirt-specific fields are populated with settings from your compute profile. Modify these settings if required.

  9. Click the Operating System tab, and confirm that all fields automatically contain values.

  10. Select the Provisioning Method that you want to use:

    • For network-based provisioning, click Network Based.

    • For image-based provisioning, click Image Based.

  11. Click Resolve in Provisioning templates to check that the new host can identify the right provisioning templates to use.

  12. Click the Virtual Machine tab and confirm that these settings are populated with details from the host group and compute profile. Modify these settings to suit your needs.

  13. Click the Parameters tab, and ensure that a parameter exists that provides an activation key. If not, add an activation key.

  14. Click Submit to save the host entry.

CLI procedure
  • To use network-based provisioning, create the host with the hammer host create command and include --provision-method build. Replace the values in the following example with the appropriate values for your environment.

    # hammer host create \
    --name "oVirt-vm1" \
    --organization "My_Organization" \
    --location "New York" \
    --hostgroup "Base" \
    --compute-resource "My_oVirt" \
    --provision-method build \
    --build true \
    --enabled true \
    --managed true \
    --interface "managed=true,primary=true,provision=true,compute_name=eth0,compute_network=satnetwork" \
    --compute-attributes="cluster=Default,cores=1,memory=1073741824,start=true" \
    --volume="size_gb=20G,storage_domain=Data,bootable=true"
  • To use image-based provisioning, create the host with the hammer host create command and include --provision-method image. Replace the values in the following example with the appropriate values for your environment.

    # hammer host create \
    --name "oVirt-vm2" \
    --organization "My_Organization" \
    --location "New York" \
    --hostgroup "Base" \
    --compute-resource "My_oVirt" \
    --provision-method image \
    --image "oVirt_Image" \
    --enabled true \
    --managed true \
    --interface "managed=true,primary=true,provision=true,compute_name=eth0,compute_network=satnetwork" \
    --compute-attributes="cluster=Default,cores=1,memory=1073741824,start=true" \
    --volume="size_gb=20G,storage_domain=Data,bootable=true"

For more information about additional host creation parameters for this compute resource, enter the hammer host create --help command.

Provisioning Virtual Machines in VMware vSphere

VMware vSphere is an enterprise-level virtualization platform from VMware. orcharhino can interact with the vSphere platform, including creating new virtual machines and controlling their power management states.

Prerequisites for VMware vSphere Provisioning

The requirements for VMware vSphere provisioning include:

  • An orcharhino Proxy managing a network on the vSphere environment. Ensure no other DHCP services run on this network to avoid conflicts with orcharhino Proxy. For more information, see Configuring Networking.

  • An existing VMware template if you want to use image-based provisioning.

Creating a VMware vSphere User

The VMware vSphere server requires an administration-like user for orcharhino server communication. For security reasons, do not use the administrator user for such communication. Instead, create a user with the following permissions:

For VMware vCenter Server version 5.6, set the following permissions:

  • All Privileges → Datastore → Allocate Space, Browse datastore, Update Virtual Machine files, Low level file operations

  • All Privileges → Network → Assign Network

  • All Privileges → Resource → Assign virtual machine to resource pool

  • All Privileges → Virtual Machine → Change Config (All)

  • All Privileges → Virtual Machine → Interaction (All)

  • All Privileges → Virtual Machine → Edit Inventory (All)

  • All Privileges → Virtual Machine → Provisioning (All)

For VMware vCenter Server version 6.5, set the following permissions:

  • All Privileges → Datastore → Allocate Space, Browse datastore, Update Virtual Machine files, Low level file operations

  • All Privileges → Network → Assign Network

  • All Privileges → Resource → Assign virtual machine to resource pool

  • All Privileges → Virtual Machine → Configuration (All)

  • All Privileges → Virtual Machine → Interaction (All)

  • All Privileges → Virtual Machine → Inventory (All)

  • All Privileges → Virtual Machine → Provisioning (All)

Adding a VMware vSphere Connection to orcharhino server

Use this procedure to add a VMware vSphere connection in orcharhino server’s compute resources. To use the CLI instead of the web UI, see the CLI procedure.

Ensure that the host and network-based firewalls are configured to allow orcharhino to vCenter communication on TCP port 443. Verify that orcharhino is able to resolve the host name of vCenter and vCenter is able to resolve orcharhino server’s host name.

Procedure
  1. In the orcharhino web UI, navigate to Infrastructure > Compute Resources, and in the Compute Resources window, click Create Compute Resource.

  2. In the Name field, enter a name for the resource.

  3. From the Provider list, select VMware.

  4. In the Description field, enter a description for the resource.

  5. In the VCenter/Server field, enter the IP address or host name of the vCenter server.

  6. In the User field, enter the user name with permission to access the vCenter’s resources.

  7. In the Password field, enter the password for the user.

  8. Click Load Datacenters to populate the list of data centers from your VMware vSphere environment.

  9. From the Datacenter list, select a specific data center to manage.

  10. In the Fingerprint field, ensure that this field is populated with the fingerprint from the data center.

  11. From the Display Type list, select a console type, for example, VNC or VMRC. Note that VNC consoles are unsupported on VMware ESXi 6.5 and later.

  12. Optional: In the VNC Console Passwords field, select the Set a randomly generated password on the display connection check box to secure console access for new hosts with a randomly generated password. You can retrieve the password for the VNC console, which is used to access the guest virtual machine console, from the output of the following command on the libvirtd host:

    # virsh edit your_VM_name
    <graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0' passwd='your_randomly_generated_password'>

    The password is randomly generated every time the console for the virtual machine is opened, for example, with virt-manager.

  13. From the Enable Caching list, you can select whether to enable caching of compute resources. For more information, see Caching of Compute Resources.

  14. Click the Locations and Organizations tabs and verify that the values are automatically set to your current context. You can also add additional contexts.

  15. Click Submit to save the connection.

CLI procedure
  • Create the connection with the hammer compute-resource create command. Select Vmware as the --provider and set the name of the data center as the --datacenter:

    # hammer compute-resource create --name "My_vSphere" \
    --provider "Vmware" \
    --description "vSphere server at vsphere.example.com" \
    --server "vsphere.example.com" --user "My_User" \
    --password "My_Password" --locations "My_Location" --organizations "My_Organization" \
    --datacenter "My_Datacenter"

Adding VMware vSphere Images to orcharhino server

VMware vSphere uses templates as images for creating new virtual machines. If using image-based provisioning to create new hosts, you need to add VMware template details to your orcharhino server. This includes access details and the template name.

To use the CLI instead of the web UI, see the CLI procedure.

Procedure
  1. In the orcharhino web UI, navigate to Infrastructure > Compute Resources and in the Compute Resources window, click the VMware vSphere connection.

  2. In the Name field, enter a name for the image.

  3. From the Operating System list, select the image’s base operating system.

  4. From the Architecture list, select the operating system architecture.

  5. In the User field, enter the SSH user name for image access. This is normally the root user.

  6. In the Password field, enter the SSH password for image access.

  7. From the User data list, select whether you want the images to support user data input, such as cloud-init data.

  8. In the Image field, enter the relative path and name of the template on the vSphere environment. Do not include the data center in the relative path.

  9. Click Submit to save the image details.

CLI procedure
  • Create the image with the hammer compute-resource image create command. Use the --uuid field to store the relative template path on the vSphere environment.

    # hammer compute-resource image create --name "Test_vSphere_Image" \
    --operatingsystem "RedHat 7.2" --architecture "x86_64" \
    --username root --uuid "Templates/RHEL72" \
    --compute-resource "My_vSphere"

Adding VMware vSphere Details to a Compute Profile

You can predefine certain hardware settings for virtual machines on VMware vSphere. You achieve this by adding these hardware settings to a compute profile. To use the CLI instead of the web UI, see the CLI procedure.

Procedure
  1. In the orcharhino web UI, navigate to Infrastructure > Compute Profiles and, in the Compute Profiles window, click the name of the compute profile, and then click the vSphere connection.

  2. In the CPUs field, enter the number of CPUs to allocate to the new host.

  3. In the Cores per socket field, enter the number of cores to allocate to each CPU.

  4. In the Memory field, enter the amount of memory to allocate to the new host.

  5. In the Cluster field, enter the name of the target host cluster on the VMware environment.

  6. From the Resource pool list, select an available resource allocation for the host.

  7. In the Folder field, enter the folder to organize the host.

  8. From the Guest OS list, select the operating system you want to use in VMware vSphere.

  9. From the SCSI controller list, select the disk access method for the host.

  10. From the Virtual H/W version list, select the underlying VMware hardware abstraction to use for virtual machines.

  11. You can select the Memory hot add or CPU hot add check boxes if you want to add more resources while the virtual machine is powered on.

  12. From the Image list, select the image to use if performing image-based provisioning.

  13. From the Network Interfaces list, select the network parameters for the host’s network interface. You can create multiple network interfaces. However, at least one interface must point to an orcharhino Proxy-managed network.

  14. Select the Eager zero check box if you want to use eager zero thick provisioning. If unchecked, the disk uses lazy zero thick provisioning.

  15. Click Submit to save the compute profile.

CLI procedure
  1. To create the compute profile, enter the following command:

    # hammer compute-profile create --name "VMWare CP"
  2. To add the values for the compute profile, enter the following command:

    # hammer compute-profile values create --compute-profile "VMWare CP" \
    --compute-resource "My_vSphere" \
    --interface "compute_type=VirtualE1000,compute_network=mynetwork \
    --volume "size_gb=20G,datastore=Data,name=myharddisk,thin=true" \
    --compute-attributes "cpus=1,corespersocket=2,memory_mb=1024,cluster=MyCluster,path=MyVMs,start=true"

Creating Hosts on a VMware vSphere Server

The VMware vSphere provisioning process provides the option to create hosts over a network connection or using an existing image.

For network-based provisioning, you must create a host to access either orcharhino server’s integrated orcharhino Proxy or an external orcharhino Proxy on a VMware vSphere virtual network, so that the host has access to PXE provisioning services. The new host entry triggers the VMware vSphere server to create the virtual machine. If the virtual machine detects the defined orcharhino Proxy through the virtual network, the virtual machine boots to PXE and begins to install the chosen operating system.

DHCP Conflicts

If you use a virtual network on the VMware vSphere server for provisioning, ensure that you select a virtual network that does not provide DHCP assignments. A virtual network that provides its own DHCP assignments causes DHCP conflicts with orcharhino server when booting new hosts.

For image-based provisioning, use the pre-existing image as a basis for the new volume.

To use the CLI instead of the web UI, see the CLI procedure.

Procedure
  1. In the orcharhino web UI, navigate to Hosts > Create Host.

  2. In the Name field, enter the name that you want to become the provisioned system’s host name.

  3. Click the Organization and Location tabs to ensure that the provisioning context is automatically set to the current context.

  4. From the Host Group list, select the host group that you want to use to populate the form.

  5. From the Deploy on list, select the VMware vSphere connection.

  6. From the Compute Profile list, select a profile to use to automatically populate virtual machine-based settings.

  7. Click the Interface tab and click Edit on the host’s interface.

  8. Verify that the fields are automatically populated with values. Note in particular:

    • The Name from the Host tab becomes the DNS name.

    • orcharhino server automatically assigns an IP address for the new host.

  9. Ensure that the MAC address field is blank. The VMware vSphere server assigns one to the host.

  10. Verify that the Managed, Primary, and Provision options are automatically selected for the first interface on the host. If not, select them.

  11. In the interface window, review the VMware vSphere-specific fields that are populated with settings from your compute profile. Modify these settings to suit your needs.

  12. Click the Operating System tab, and confirm that all fields automatically contain values.

  13. Select the Provisioning Method that you want:

    • For network-based provisioning, click Network Based.

    • For image-based provisioning, click Image Based.

    • If the foreman_bootdisk plug-in is installed, and you want to use boot-disk provisioning, click Boot disk based. Refer to boot-disk based provisioning for more information.

  14. Click Resolve in Provisioning templates to check the new host can identify the right provisioning templates to use.

  15. Click the Virtual Machine tab and confirm that these settings are populated with details from the host group and compute profile. Modify these settings to suit your requirements.

  16. Click the Parameters tab and ensure that a parameter exists that provides an activation key. If not, add an activation key.

  17. Click Submit to save the host entry.

CLI procedure
  • Create the host from a network with the hammer host create command and include --provision-method build to use network-based provisioning.

    # hammer host create --name "vmware-test1" --organization "My_Organization" \
    --location "New York" --hostgroup "Base" \
    --compute-resource "My_vSphere" --provision-method build \
    --build true --enabled true --managed true \
    --interface "managed=true,primary=true,provision=true,compute_type=VirtualE1000,compute_network=mynetwork" \
    --compute-attributes="cpus=1,corespersocket=2,memory_mb=1024,cluster=MyCluster,path=MyVMs,start=true" \
    --volume="size_gb=20G,datastore=Data,name=myharddisk,thin=true"
  • Create the host from an image with the hammer host create command and include --provision-method image to use image-based provisioning.

    # hammer host create --name "vmware-test2" --organization "My_Organization" \
    --location "New York" --hostgroup "Base" \
    --compute-resource "My_VMware" --provision-method image \
    --image "Test VMware Image" --enabled true --managed true \
    --interface "managed=true,primary=true,provision=true,compute_type=VirtualE1000,compute_network=mynetwork" \
    --compute-attributes="cpus=1,corespersocket=2,memory_mb=1024,cluster=MyCluster,path=MyVMs,start=true" \
    --volume="size_gb=20G,datastore=Data,name=myharddisk,thin=true"

For more information about additional host creation parameters for this compute resource, enter the hammer host create --help command.

Using the VMware vSphere Cloud-init and Userdata Templates for Provisioning

You can use VMware with the Cloud-init and Userdata templates to insert user data into the new virtual machine, to make further VMware customization, and to enable the VMware-hosted virtual machine to call back to orcharhino.

You can use the same procedures to set up a VMware compute resource within orcharhino, with a few modifications to the work flow.

VMware cloud-init Provisioning Overview

Provisioning guide user data sequence

When you set up the compute resource and images for VMware provisioning in orcharhino, the following sequence of provisioning events occurs:

  • The user provisions one or more virtual machines using the orcharhino web UI, API, or hammer

  • orcharhino calls the VMware vCenter to clone the virtual machine template

  • orcharhino userdata provisioning template adds customized identity information

  • When provisioning completes, the Cloud-init provisioning template instructs the virtual machine to call back to orcharhino Proxy when cloud-init runs

  • VMware vCenter clones the template to the virtual machine

  • VMware vCenter applies customization for the virtual machine’s identity, including the host name, IP, and DNS

  • The virtual machine builds, cloud-init is invoked and calls back to orcharhino on port 80, which then redirects to 443

Port and Firewall Requirements

Because of the cloud-init service, the virtual machine always calls back to orcharhino even if you register the virtual machine to orcharhino Proxy. Ensure that you configure port and firewall settings to open any necessary connections.

For more information about port and firewall requirements, see Port and Firewall Requirements in Installing orcharhino and Port and Firewall Requirements in Installing orcharhino Proxy.
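
For example, on an orcharhino server that uses firewalld, you can make sure that the HTTP and HTTPS services are open with commands similar to the following. This is a minimal sketch; your firewall zones and tooling may differ.

# firewall-cmd --permanent --add-service=http --add-service=https
# firewall-cmd --reload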

Associating the userdata and Cloud-init Templates with the Operating System
  1. In the orcharhino web UI, navigate to Hosts > Operating Systems, and select the operating system that you want to use for provisioning.

  2. Click the Template tab.

  3. From the Cloud-init template list, select Cloudinit default.

  4. From the User data template list, select UserData open-vm-tools.

  5. Click Submit to save the changes.

Preparing an Image to use the cloud-init Template

To prepare an image, you must first configure the settings that you require on a virtual machine that you can then save as an image to use in orcharhino.

To use the cloud-init template for provisioning, you must configure a virtual machine so that cloud-init is installed, enabled, and configured to call back to orcharhino server.

For security purposes, you must install a CA certificate to use HTTPS for all communication. This procedure includes steps to clean the virtual machine so that no unwanted information transfers to the image you use for provisioning.

If you have an image with cloud-init, you must still follow this procedure to enable cloud-init to communicate with orcharhino because cloud-init is disabled by default.

These instructions are for Red Hat Enterprise Linux and Fedora. Follow similar steps for other Linux distributions.

  1. On the virtual machine that you use to create the image, install cloud-init, open-vm-tools, and perl:

    # yum -y install cloud-init open-vm-tools perl
  2. Create a configuration file for cloud-init:

    # vi /etc/cloud/cloud.cfg.d/example_cloud-init_config.cfg
  3. Add the following information to the example_cloud-init_config.cfg file:

    datasource_list: [NoCloud]
    datasource:
      NoCloud:
        seedfrom: https://orcharhino.example.com/userdata/
  4. Enable the CA certificates for the image:

    # update-ca-trust enable
  5. Download the katello-server-ca.crt file from orcharhino server:

    # wget -O /etc/pki/ca-trust/source/anchors/cloud-init-ca.crt http://orcharhino.example.com/pub/katello-server-ca.crt
  6. To update the record of certificates, enter the following command:

    # update-ca-trust extract
  7. Use the following commands to clean the image:

    # systemctl stop rsyslog
    # systemctl stop auditd
    # package-cleanup --oldkernels --count=1
    # yum clean all
  8. Use the following commands to reduce logspace, remove old logs, and truncate logs:

    # logrotate -f /etc/logrotate.conf
    # rm -f /var/log/*-???????? /var/log/*.gz
    # rm -f /var/log/dmesg.old
    # rm -rf /var/log/anaconda
    # cat /dev/null > /var/log/audit/audit.log
    # cat /dev/null > /var/log/wtmp
    # cat /dev/null > /var/log/lastlog
    # cat /dev/null > /var/log/grubby
  9. Remove udev hardware rules:

    # rm -f /etc/udev/rules.d/70*
  10. Remove the uuid from ifcfg scripts:

    # cat > /etc/sysconfig/network-scripts/ifcfg-ens192 <<EOM
    DEVICE=ens192
    ONBOOT=yes
    EOM
  11. Remove the SSH host keys:

    # rm -f /etc/ssh/ssh_host_*
  12. Remove root user’s shell history:

    # rm -f ~root/.bash_history
    # unset HISTFILE
  13. Remove root user’s SSH history:

    # rm -rf ~root/.ssh/known_hosts

You can now create an image from this virtual machine.

You can use the Adding VMware vSphere Images to orcharhino server section to add the image to orcharhino.
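
Because cloud-init is disabled by default in many images, you may also want to confirm that its services are enabled before you convert the virtual machine into a template. A minimal check, assuming the standard cloud-init unit names on Red Hat Enterprise Linux or Fedora:

# systemctl enable cloud-init-local cloud-init cloud-config cloud-final
# systemctl is-enabled cloud-init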

Configuring orcharhino Proxy to Forward the user data Template

If you deploy orcharhino with the orcharhino Proxy templates feature, you must configure orcharhino to recognize hosts' IP addresses forwarded over the X-Forwarded-For HTTP header so that it serves the correct template payload.

For security reasons, orcharhino recognizes this HTTP header only from localhost. For each individual orcharhino Proxy, you must configure a regular expression to recognize hosts' IP addresses. From the web UI, you can do this by navigating to Administer > Settings > Provisioning, and changing the Remote address setting. From the CLI, you can do this by entering the following command:

# hammer settings set --name remote_addr --value '(localhost(4|6|4to6)?|192.168.122.(1|2|3))'

Caching of Compute Resources

Caching of compute resources speeds up rendering of VMware information.

Enabling Caching of Compute Resources

To enable or disable caching of compute resources:

  1. In the orcharhino web UI, navigate to Infrastructure > Compute Resources.

  2. Click the Edit button to the right of the VMware server you want to update.

  3. Select the Enable caching check box.
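
Depending on your hammer version, you may also be able to toggle caching from the CLI with the VMware-specific --caching-enabled option. The following sketch reuses the example compute resource name from this chapter; treat the option name as an assumption and check hammer compute-resource update --help first.

# hammer compute-resource update --name "My_vSphere" --caching-enabled true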

Refreshing the Compute Resources Cache

To refresh the compute resources cache and update the compute resource information:

Procedure
  1. In the orcharhino web UI, navigate to Infrastructure > Compute Resources.

  2. Select a VMware server you want to refresh the compute resources cache for and click the Refresh Cache button.

CLI procedure

Use this API call to refresh the compute resources cache:

# curl -H "Accept:application/json,version=2" \
-H "Content-Type:application/json" -X PUT \
-u username:password -k \
https://orcharhino.example.com/api/compute_resources/compute_resource_id/refresh_cache

Use the hammer compute-resource list command to determine the ID of the VMware server you want to refresh the compute resources cache for.
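
As a sketch, you can combine both steps in a short shell snippet. The compute resource name My_vSphere and the credentials are placeholders; adjust them to your environment.

# ID=$(hammer --csv compute-resource list | awk -F, '/My_vSphere/ { print $1 }')
# curl -H "Accept:application/json,version=2" \
-H "Content-Type:application/json" -X PUT \
-u username:password -k \
https://orcharhino.example.com/api/compute_resources/${ID}/refresh_cache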

Provisioning Virtual Machines on KubeVirt

KubeVirt addresses the needs of development teams that have adopted or want to adopt Kubernetes but possess existing virtual machine (VM)-based workloads that cannot be easily containerized. This technology provides a unified development platform where developers can build, modify, and deploy applications residing in application containers and VMs in a shared environment. These capabilities support rapid application modernization across the open hybrid cloud.

With orcharhino, you can create a compute resource for KubeVirt so that you can provision and manage Kubernetes virtual machines using orcharhino.

Prerequisites
  • A KubeVirt user with cluster-admin permissions for the OpenShift Container Platform virtual cluster. If you need to create such an account, see the kubectl sketch after this list.

  • An orcharhino Proxy managing a network on the KubeVirt server. Ensure that no other DHCP services run on this network to avoid conflicts with orcharhino Proxy. For more information about network service configuration for orcharhino Proxies, see Configuring Networking.

  • An orcharhino user account with the following roles:

  • A custom role in orcharhino with the following permissions:

    • view_compute_resources

    • destroy_compute_resources_vms

    • power_compute_resources_vms

    • create_compute_resources_vms

    • view_compute_resources_vms

    • view_locations

    • view_subnets

      For more information about creating roles, see Creating a Role in the Administering orcharhino guide. For more information about adding permissions to a role, see Adding Permissions to a Role in the Administering orcharhino guide.
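
If you do not already have a token-bearing account with cluster-admin permissions, one way to create a dedicated service account is sketched below. The account name orcharhino-sa and the default namespace are assumptions; adjust them to your cluster.

# kubectl create serviceaccount orcharhino-sa --namespace default
# kubectl create clusterrolebinding orcharhino-sa-admin \
--clusterrole=cluster-admin --serviceaccount=default:orcharhino-sa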

Adding a KubeVirt Connection to orcharhino server

Use this procedure to add KubeVirt as a compute resource in orcharhino.

Procedure
  1. Enter the following foreman-installer command to enable the KubeVirt plugin for orcharhino:

    # foreman-installer --enable-foreman-plugin-kubevirt
  2. Generate a bearer token to use for HTTP and HTTPS authentication. On the KubeVirt server, list the secrets that contain tokens:

    # kubectl get secrets
  3. List the token for your secret:

    # kubectl get secrets YOUR_SECRET -o jsonpath='{.data.token}' | base64 -d | xargs

    Make a note of this token to use later in this procedure.

  4. In the orcharhino web UI, navigate to Infrastructure > Compute Resources, and click Create Compute Resource.

  5. In the Name field, enter a name for the new compute resource.

  6. From the Provider list, select KubeVirt.

  7. In the Description field, enter a description for the compute resource.

  8. In the Hostname field, enter the address of the KubeVirt server that you want to use.

  9. In the API Port field, enter the port number that you want to use for provisioning requests from orcharhino to KubeVirt.

  10. In the Namespace field, enter the user name of the KubeVirt virtual cluster that you want to use.

  11. In the Token field, enter the bearer token for HTTP and HTTPS authentication.

  12. Optional: In the X509 Certification Authorities field, enter a certificate to enable client certificate authentication for API server calls.

Provisioning Cloud Instances in Amazon EC2

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides public cloud compute resources. Using orcharhino, you can interact with Amazon EC2’s public API to create cloud instances and control their power management states. Use the procedures in this chapter to add a connection to an Amazon EC2 account and provision a cloud instance.

Refer to the Amazon EC2 guide for more information.

Prerequisites for Amazon EC2 Provisioning

The requirements for Amazon EC2 provisioning include:

  • An orcharhino Proxy managing a network in your EC2 environment. Use a Virtual Private Cloud (VPC) to ensure a secure network between the hosts and orcharhino Proxy.

  • An Amazon Machine Image (AMI) for image-based provisioning.

Adding an Amazon EC2 Connection to the orcharhino server

Use this procedure to add the Amazon EC2 connection in orcharhino server’s compute resources. To use the CLI instead of the web UI, see the CLI procedure.

Time Settings and Amazon Web Services

Amazon Web Services uses time settings as part of the authentication process. Ensure that orcharhino server’s time is correctly synchronized. Ensure that an NTP service, such as ntpd or chronyd, is running properly on orcharhino server. Failure to provide the correct time to Amazon Web Services can lead to authentication failures.

For more information about synchronizing time in orcharhino, see Synchronizing Time in Installing orcharhino.
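
For example, assuming chronyd is the time service on orcharhino server, you can verify synchronization with the following commands:

# systemctl status chronyd
# chronyc tracking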

Procedure
  1. In the orcharhino web UI, navigate to Infrastructure > Compute Resources and in the Compute Resources window, click Create Compute Resource.

  2. In the Name field, enter a name to identify the Amazon EC2 compute resource.

  3. From the Provider list, select EC2.

  4. In the Description field, enter information that helps distinguish the resource for future use.

  5. Optional: From the HTTP proxy list, select an HTTP proxy to connect to external API services. You must add HTTP proxies to orcharhino before you can select a proxy from this list. For more information, see Using an HTTP Proxy with Compute Resources.

  6. In the Access Key and Secret Key fields, enter the access keys for your Amazon EC2 account. For more information, see Managing Access Keys for your AWS Account on the Amazon documentation website.

  7. Optional: Click the Load Regions button to populate the Regions list.

  8. From the Region list, select the Amazon EC2 region or data center to use.

  9. Click the Locations tab and ensure that the location you want to use is selected, or add a different location.

  10. Click the Organizations tab and ensure that the organization you want to use is selected, or add a different organization.

  11. Click Submit to save the Amazon EC2 connection.

  12. Select the new compute resource and then click the SSH keys tab, and click Download to save a copy of the SSH keys to use for SSH authentication. If you require SSH keys at a later stage, follow the procedure in Connecting to an Amazon EC2 instance using SSH.

CLI procedure
  • Create the connection with the hammer compute-resource create command. Use --user and --password options to add the access key and secret key respectively.

    # hammer compute-resource create --name "My_EC2" --provider "EC2" \
    --description "Amazon EC2 Public Cloud` --user "user_name" \
    --password "secret_key" --region "us-east-1" --locations "New York" \
    --organizations "My_Organization"

Using an HTTP Proxy with Compute Resources

In some cases, the EC2 compute resource that you use might require a specific HTTP proxy to communicate with orcharhino. In orcharhino, you can create an HTTP proxy and then assign the HTTP proxy to your EC2 compute resource.

However, if you configure an HTTP proxy for orcharhino in Administer > Settings, and then add another HTTP proxy for your compute resource, the HTTP proxy that you define in Administer > Settings takes precedence.

Procedure
  1. In the orcharhino web UI, navigate to Infrastructure > HTTP Proxies, and select New HTTP Proxy.

  2. In the Name field, enter a name for the HTTP proxy.

  3. In the URL field, enter the URL for the HTTP proxy, including the port number.

  4. Optional: Enter a username and password to authenticate to the HTTP proxy, if your HTTP proxy requires authentication.

  5. Click Test Connection to ensure that you can connect to the HTTP proxy from orcharhino.

  6. Click the Locations tab and add a location.

  7. Click the Organization tab and add an organization.

  8. Click Submit.

Adding Amazon EC2 Images to orcharhino server

Amazon EC2 uses image-based provisioning to create hosts. You must add image details to your orcharhino server. This includes access details and image location.

To use the CLI instead of the web UI, see the CLI procedure.

Procedure
  1. In the orcharhino web UI, navigate to Infrastructure > Compute Resources and select an Amazon EC2 connection.

  2. Click the Images tab, and then click New Image.

  3. In the Name field, enter a name to identify the image for future use.

  4. From the Operating System list, select the operating system that corresponds with the image you want to add.

  5. From the Architecture list, select the operating system’s architecture.

  6. In the Username field, enter the SSH user name for image access. This is normally the root user.

  7. In the Password field, enter the SSH password for image access.

  8. In the Image ID field, enter the Amazon Machine Image (AMI) ID for the image. This is usually in the following format: ami-xxxxxxxx.

  9. Optional: Select the User Data check box if the images support user data input, such as cloud-init data. If you enable user data, the Finish scripts are automatically disabled. This also applies in reverse: if you enable the Finish scripts, this disables user data.

  10. Optional: In the IAM role field, enter the Amazon security role used for creating the image.

  11. Click Submit to save the image details.

CLI procedure
  • Create the image with the hammer compute-resource image create command. Use the --uuid field to store the full path of the image location on the Amazon EC2 server.

    # hammer compute-resource image create --name "Test Amazon EC2 Image" \
    --operatingsystem "RedHat 7.2" --architecture "x86_64" --username root \
    --user-data true --uuid "ami-my_ami_id" --compute-resource "My_EC2"

Adding Amazon EC2 Details to a Compute Profile

You can add hardware settings for instances on Amazon EC2 to a compute profile.

Procedure

To add hardware settings, complete the following steps:

  1. In the orcharhino web UI, navigate to Infrastructure > Compute Profiles and click the name of your profile, then click an EC2 connection.

  2. From the Flavor list, select the hardware profile on EC2 to use for the host.

  3. From the Image list, select the image to use for image-based provisioning.

  4. From the Availability zone list, select the target cluster to use within the chosen EC2 region.

  5. From the Subnet list, add the subnet for the EC2 instance. If you have a VPC for provisioning new hosts, use its subnet.

  6. From the Security Groups list, select the cloud-based access rules for ports and IP addresses to apply to the host.

  7. From the Managed IP list, select either a Public IP or a Private IP.

  8. Click Submit to save the compute profile.

CLI procedure

The compute profile CLI commands are not yet implemented in orcharhino. As an alternative, you can include the same settings directly during the host creation process.

Creating Image-Based Hosts on Amazon EC2

The Amazon EC2 provisioning process creates hosts from existing images on the Amazon EC2 server. To use the CLI instead of the web UI, see the CLI procedure.

Procedure
  1. In the orcharhino web UI, navigate to Hosts > Create Host.

  2. In the Name field, enter a name for the host.

  3. From the Host Group list, you can select a host group to populate most of the new host’s fields.

  4. From the Deploy on list, select the EC2 connection.

  5. From the Compute Profile list, select a profile to use to automatically populate virtual machine-based settings.

  6. Click the Interface tab, and then click Edit on the host’s interface, and verify that the fields are populated with values. Leave the MAC address field blank. orcharhino server automatically selects an IP address and the Managed, Primary, and Provision options for the first interface on the host.

  7. Click the Operating System tab and confirm that all fields are populated with values.

  8. Click the Virtual Machine tab and confirm that all fields are populated with values.

  9. Click the Parameters tab, and ensure that a parameter exists that provides an activation key. If not, add an activation key.

  10. Click Submit to save your changes.

This new host entry triggers the Amazon EC2 server to create the instance, using the pre-existing image as a basis for the new volume.

CLI procedure
  • Create the host with the hammer host create command and include --provision-method image to use image-based provisioning.

    # hammer host create --name "ec2-test1" --organization "My_Organization" \
    --location "New York" --hostgroup "Base" \
    --compute-resource "My_EC2" --provision-method image \
    --image "Test Amazon EC2 Image" --enabled true --managed true \
    --interface "managed=true,primary=true,provision=true,subnet_id=EC2" \
    --compute-attributes="flavor_id=m1.small,image_id=TestImage,availability_zones=us-east-1a,security_group_ids=Default,managed_ip=Public"

For more information about additional host creation parameters for this compute resource, enter the hammer host create --help command.

Connecting to an Amazon EC2 instance using SSH

You can connect remotely to an Amazon EC2 instance from orcharhino server using SSH. However, to connect to any Amazon Web Services EC2 instance that you provision through orcharhino, you must first access the private key that is associated with the compute resource in the Foreman database, and use this key for authentication.

Procedure
  1. To locate the compute resource list, on your orcharhino server base system, enter the following command, and note the ID of the compute resource that you want to use:

    # hammer compute-resource list
  2. Switch user to the postgres user:

    # su - postgres
  3. Initiate the postgres shell:

    $ psql
  4. Connect to the Foreman database as the user postgres:

    postgres=# \c foreman
  5. Select the secret from the key_pairs table, replacing 3 with the ID of the compute resource that you noted earlier:

    postgres=# select secret from key_pairs where compute_resource_id = 3;
  6. Copy the key from after -----BEGIN RSA PRIVATE KEY----- until -----END RSA PRIVATE KEY-----.

  7. Create a .pem file and paste your key into the file:

    # vim Keyname.pem
  8. Ensure that you restrict access to the .pem file:

    # chmod 600 Keyname.pem
  9. To connect to the Amazon EC2 instance, enter the following command:

    ssh -i Keyname.pem ec2-user@example.aws.com

Configuring a Finish Template for an Amazon Web Service EC2 Environment

You can use orcharhino finish templates during the provisioning of Linux instances in an Amazon EC2 environment.

If you want to use a Finish template with SSH, orcharhino must reside within the EC2 environment and in the correct security group. orcharhino currently performs SSH finish provisioning directly, not using orcharhino Proxy. If orcharhino server does not reside within EC2, the EC2 virtual machine reports an internal IP rather than the necessary external IP with which it can be reached.

Procedure
  1. In the orcharhino web UI, navigate to Hosts > Provisioning Templates.

  2. In the Provisioning Templates page, enter Kickstart default finish into the search field and click Search.

  3. On the Kickstart default finish template, select Clone.

  4. In the Name field, enter a unique name for the template.

  5. In the template, prefix each command that requires root privileges with sudo, except for yum or equivalent commands, or add the following line to run the entire template as the sudo user:

    sudo -s << EOS
    _Template_ _Body_
    EOS
  6. Click the Association tab, and associate the template with a Red Hat Enterprise Linux operating system that you want to use.

  7. Click the Locations tab, and add the location where the host resides.

  8. Click the Organizations tab, and add the organization that the host belongs to.

  9. Make any additional customizations or changes that you require, then click Submit to save your template.

  10. Navigate to Hosts > Operating systems and select the operating system that you want for your host.

  11. Click the Templates tab, and from the Finish Template list, select your finish template.

  12. Navigate to Hosts > Create Host and enter the information about the host that you want to create.

  13. Click the Parameters tab and navigate to Host parameters.

  14. In Host parameters, click the Add Parameter button three times to add three new parameter fields. Add the following three parameters:

    1. In the Name field, enter remote_execution_ssh_keys. In the corresponding Value field, enter the output of cat /usr/share/foreman-proxy/.ssh/id_rsa_foreman_proxy.pub.

    2. In the Name field, enter remote_execution_ssh_user. In the corresponding Value field, enter ec2-user.

    3. In the Name field, enter activation_keys. In the corresponding Value field, enter your activation key.

  15. Click Submit to save the changes.
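
If you prefer to set these three host parameters from the command line, you can use hammer host set-parameter once the host entry exists. The following sketch assumes that you run it on an orcharhino server with an integrated orcharhino Proxy; host.example.com and My_Activation_Key are placeholders.

# hammer host set-parameter --host "host.example.com" \
--name remote_execution_ssh_keys \
--value "$(cat /usr/share/foreman-proxy/.ssh/id_rsa_foreman_proxy.pub)"
# hammer host set-parameter --host "host.example.com" \
--name remote_execution_ssh_user --value "ec2-user"
# hammer host set-parameter --host "host.example.com" \
--name activation_keys --value "My_Activation_Key"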

More Information about Amazon Web Services and orcharhino

For information about how to install and use the Amazon Web Service Client on Linux, see Install the AWS Command Line Interface on Linux in the Amazon Web Services documentation.

For information about importing and exporting virtual machines in Amazon Web Services, see VM Import/Export in the Amazon Web Services documentation.

Provisioning Cloud Instances on Google Compute Engine

orcharhino can interact with Google Compute Engine (GCE), including creating new virtual machines and controlling their power management states. Only image-based provisioning is supported for creating GCE hosts.

Refer to the Google GCE guide for more information.

Prerequisites
  • In your GCE project, configure a service account with the necessary IAM Compute role. For more information, see Compute Engine IAM roles in the GCE documentation.

  • In your GCE project-wide metadata, set enable-oslogin to FALSE. For more information, see Enabling or disabling OS Login in the GCE documentation. A gcloud example follows this list.

  • Optional: If you want to use Puppet with GCE hosts, navigate to Administer > Settings > Puppet and enable the Use UUID for certificates setting to configure Puppet to use consistent Puppet certificate IDs.

  • Based on your needs, associate a finish or user_data provisioning template with the operating system you want to use. For more information about provisioning templates, see Provisioning Templates.
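
For example, if you manage project-wide metadata with the gcloud CLI, you can disable OS Login with the following command; the project is taken from your active gcloud configuration:

$ gcloud compute project-info add-metadata --metadata enable-oslogin=FALSE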

Adding a Google Compute Engine Connection to orcharhino server

Use this procedure to add Google Compute Engine (GCE) as a compute resource in orcharhino. To use the CLI instead of the web UI, see the CLI procedure.

Procedure
  1. In GCE, generate a service account key in JSON format and upload this file to the /usr/share/foreman/ directory on orcharhino server.

  2. On orcharhino server, change the owner for the service account key to the foreman user:

    # chown foreman /usr/share/foreman/gce_key.json
  3. Configure permissions for the service account key to ensure that the file is readable:

    # chmod 0600 /usr/share/foreman/gce_key.json
  4. Restore SELinux context for the service account key:

    # restorecon -vv /usr/share/foreman/gce_key.json
  5. In the orcharhino web UI, navigate to Infrastructure > Compute Resources and click Create Compute Resource.

  6. In the Name field, enter a name for the compute resource.

  7. From the Provider list, select Google.

  8. Optional: In the Description field, enter a description for the resource.

  9. In the Google Project ID field, enter the project ID.

  10. In the Client Email field, enter the client email.

  11. In the Certificate Path field, enter the path to the service account key. For example, /usr/share/foreman/gce_key.json.

  12. Click Load Zones to populate the list of zones from your GCE environment.

  13. From the Zone list, select the GCE zone to use.

  14. Click Submit.

CLI procedure
  1. In GCE, generate a service account key in JSON format and upload this file to the /usr/share/foreman/ directory on orcharhino server.

  2. On orcharhino server, change the owner for the service account key to the foreman user:

    # chown foreman /usr/share/foreman/gce_key.json
  3. Configure permissions for the service account key to ensure that the file is readable:

    # chmod 0600 /usr/share/foreman/gce_key.json
  4. Restore SELinux context for the service account key:

    # restorecon -vv /usr/share/foreman/gce_key.json
  5. Use the hammer compute-resource create command to add a GCE compute resource to orcharhino:

    # hammer compute-resource create --name 'gce_cr' \
    --provider 'gce' \
    --project 'gce_project_id' \
    --key-path 'gce_key.json' \
    --zone 'us-west1-b' \
    --email 'gce_email'

Adding Google Compute Engine Images to orcharhino server

To create hosts using image-based provisioning, you must add information about the image, such as access details and the image location, to your orcharhino server.

To use the CLI instead of the web UI, see the CLI procedure.

Procedure
  1. In the orcharhino web UI, navigate to Infrastructure > Compute Resources and click the name of the Google Compute Engine connection.

  2. Click Create Image.

  3. In the Name field, enter a name for the image.

  4. From the Operating System list, select the image’s base operating system.

  5. From the Architecture list, select the operating system architecture.

  6. In the Username field, enter the SSH user name for image access. Specify a user other than root, because the root user cannot connect to a GCE instance using SSH keys. The username must begin with a letter and consist of lowercase letters and numbers.

  7. From the Image list, select an image from the Google Compute Engine compute resource.

  8. Optional: Select the User Data check box if the image supports user data input, such as cloud-init data.

  9. Click Submit to save the image details.

CLI procedure
  • Create the image with the hammer compute-resource image create command. With the --username option, specify a user other than root, because the root user cannot connect to a GCE instance using SSH keys. The username must begin with a letter and consist of lowercase letters and numbers.

    # hammer compute-resource image create \
    --name 'gce_image_name' \
    --compute-resource 'gce_cr' \
    --operatingsystem-id 1 \
    --architecture-id 1 \
    --uuid '3780108136525169178' \
    --username 'admin'

Adding Google Compute Engine Details to a Compute Profile

Use this procedure to add GCE hardware settings to a compute profile. When you create a host on GCE using this compute profile, these settings are automatically populated.

To use the CLI instead of the web UI, see the CLI procedure.

Procedure
  1. In the orcharhino web UI, navigate to Infrastructure > Compute Profiles.

  2. In the Compute Profiles window, click the name of an existing compute profile, or click Create Compute Profile, enter a Name, and click Submit.

  3. Click the name of the GCE compute resource.

  4. From the Machine Type list, select the machine type to use for provisioning.

  5. From the Image list, select the image to use for provisioning.

  6. From the Network list, select the GCE network to use for provisioning.

  7. Optional: Select the Associate Ephemeral External IP check box to assign a dynamic ephemeral IP address that orcharhino uses to communicate with the host. This public IP address changes when you reboot the host. If you need a permanent IP address, reserve a static public IP address on GCE and attach it to the host.

  8. In the Size (GB) field, enter the size of the storage to create on the host.

  9. Click Submit to save the compute profile.

CLI procedure
  1. Create a compute profile to use with the GCE compute resource:

    # hammer compute-profile create --name gce_profile
  2. Add GCE details to the compute profile.

    # hammer compute-profile values create --compute-profile gce_profile \
    --compute-resource 'gce_cr' \
    --volume "size_gb=20" \
    --compute-attributes "machine_type=f1-micro,associate_external_ip=true,network=default"

Creating Image-based Hosts on Google Compute Engine

In orcharhino, you can use Google Compute Engine provisioning to create hosts from an existing image. The new host entry triggers the Google Compute Engine server to create the instance using the pre-existing image as a basis for the new volume.

To use the CLI instead of the web UI, see the CLI procedure.

Procedure
  1. In the orcharhino web UI, navigate to Hosts > Create Host.

  2. In the Name field, enter a name for the host.

  3. Click the Organization and Location tabs to ensure that the provisioning context is automatically set to the current context.

  4. From the Host Group list, select the host group that you want to use to populate the form.

  5. From the Deploy on list, select the Google Compute Engine connection.

  6. From the Compute Profile list, select a profile to use to automatically populate virtual machine settings.

  7. From the Lifecycle Environment list, select the environment.

  8. Click the Interfaces tab and click Edit on the host’s interface.

  9. Verify that the fields are automatically populated, particularly the following items:

    • The Name from the Host tab becomes the DNS name.

    • The MAC address field is blank. Google Compute Engine assigns a MAC address to the host during provisioning.

    • orcharhino server automatically assigns an IP address for the new host.

    • The Domain field is populated with the required domain.

    • The Managed, Primary, and Provision options are automatically selected for the first interface on the host. If not, select them.

  10. Click the Operating System tab, and confirm that all fields automatically contain values.

  11. Click Resolve in Provisioning templates to check the new host can identify the right provisioning templates to use.

  12. Click the Virtual Machine tab and confirm that these settings are populated with details from the host group and compute profile. Modify these settings to suit your needs.

  13. Click the Parameters tab, and ensure that a parameter exists that provides an activation key. If not, add an activation key.

  14. Click Submit to save the host entry.

CLI procedure
  • Create the host with the hammer host create command and include --provision-method image. Replace the values in the following example with the appropriate values for your environment.

    # hammer host create \
    --name "GCE_VM" \
    --organization "Your_Organization" \
    --location "Your_Location" \
    --compute-resource gce_cr_name \
    --compute-profile "gce_profile_name" \
    --provision-method 'image' \
    --image gce_image_name \
    --root-password "your_root_password" \
    --interface "type=interface,domain_id=1,managed=true,primary=true,provision=true" \
    --puppet-environment-id 1 \
    --puppet-ca-proxy-id 1 \
    --puppet-proxy-id 1 \
    --architecture x86_64 \
    --operatingsystem "operating_system_name"

For more information about additional host creation parameters for this compute resource, enter the hammer host create --help command.

Provisioning Cloud Instances on Microsoft Azure Resource Manager

orcharhino can interact with Microsoft Azure Resource Manager, including creating new virtual machines and controlling their power management states. Only image-based provisioning is supported for creating Azure hosts. This includes provisioning using Marketplace images, custom images, and shared image gallery.

For more information about Azure Resource Manager concepts, see Azure Resource Manager documentation.

Note that orcharhino does not support Azure Government.

Prerequisites
  • Ensure that you have the correct permissions to create an Azure Active Directory application. For more information, see Check Azure AD permissions in the Microsoft identity platform (Azure Active Directory for developers) documentation.

  • You must create and configure an Azure Active Directory application and service principal to obtain the Application or client ID, Directory or tenant ID, and Client Secret. For more information, see Use the portal to create an Azure AD application and service principal that can access resources in the Microsoft identity platform (Azure Active Directory for developers) documentation.

  • Optional: If you want to use Puppet with Azure hosts, navigate to Administer > Settings > Puppet and enable the Use UUID for certificates setting to configure Puppet to use consistent Puppet certificate IDs.

  • Based on your needs, associate a finish or user_data provisioning template with the operating system you want to use. For more information about provisioning templates, see Provisioning Templates.

  • Optional: If you want the virtual machine to use a static private IP address, create a subnet in orcharhino with the Network Address field matching the Azure subnet address.

  • Before creating RHEL BYOS images, you must accept the image terms either in the Azure CLI or Portal so that the image can be used to create and manage virtual machines for your subscription.
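
For example, with the Azure CLI, you can accept the terms of a RHEL BYOS marketplace image by its URN. The URN below is a placeholder; replace it with the image that you want to use.

$ az vm image terms accept --urn RedHat:rhel-byos:rhel-lvm8:latest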

Installing the Foreman AzureRm Plug-in

Refer to the installation section of the Microsoft Azure guide for instructions on how to install this plug-in.

Adding a Microsoft Azure Resource Manager Connection to orcharhino server

Use this procedure to add Microsoft Azure Resource Manager as a compute resource in orcharhino. Note that you must add a separate compute resource for each Azure Resource Manager region that you want to use.

To use the CLI instead of the web UI, see the CLI procedure.

Procedure
  1. In the orcharhino web UI, navigate to Infrastructure > Compute Resources and click Create Compute Resource.

  2. In the Name field, enter a name for the compute resource.

  3. From the Provider list, select Azure Resource Manager.

  4. Optional: In the Description field, enter a description for the resource.

  5. In the Client ID field, enter the Application or client ID.

  6. In the Client Secret field, enter the client secret.

  7. In the Subscription ID field, enter the subscription ID.

  8. In the Tenant ID field, enter the Directory or tenant ID.

  9. Click Load Regions. This tests if your connection to Azure Resource Manager is successful and loads the regions available in your subscription.

  10. From the Azure Region list, select the Azure region to use.

  11. Click Submit.

CLI procedure
  • Use the hammer compute-resource create command to add an Azure compute resource to orcharhino. Note that the value for the --region option must be in lowercase and must not contain special characters.

    # hammer compute-resource create --name azure_cr_name \
    --provider azurerm \
    --tenant tenant-id \
    --app-ident client-id \
    --secret-key client-secret \
    --sub-id subscription-id \
    --region 'region'

Adding Microsoft Azure Resource Manager Images to orcharhino server

To create hosts using image-based provisioning, you must add information about the image, such as access details and the image location, to your orcharhino server.

To use the CLI instead of the web UI, see the CLI procedure.

Procedure
  1. In the orcharhino web UI, navigate to Infrastructure > Compute Resources and click the name of the Microsoft Azure Resource Manager connection.

  2. Click Create Image.

  3. In the Name field, enter a name for the image.

  4. From the Operating System list, select the image’s base operating system.

  5. From the Architecture list, select the operating system architecture.

  6. In the Username field, enter the SSH user name for image access. You cannot use the root user.

  7. Optional: In the Password field, enter a password to authenticate with.

  8. In the Azure Image Name field, enter an image name in the format prefix://UUID.

    • For a custom image, use the prefix custom. For example, custom://image-name.

    • For a shared gallery image, use the prefix gallery. For example, gallery://image-name.

    • For public and RHEL Bring Your Own Subscription (BYOS) images, use the prefix marketplace. For example, marketplace://OpenLogicCentOS:7.5:latest.

  9. Optional: Select the User Data check box if the image supports user data input, such as cloud-init data.

  10. Click Submit to save the image details.

CLI procedure
  • Create the image with the hammer compute-resource image create command. Note that the username that you enter for the image must be the same that you use when you create a host with this image. The --password option is optional when creating an image. You cannot use the root user.

    # hammer compute-resource image create \
    --name Azure_image_name \
    --compute-resource azure_cr_name \
    --uuid 'marketplace://RedHat:RHEL:7-RAW:latest' \
    --username 'azure_username' \
    --user-data no

Adding Microsoft Azure Resource Manager Details to a Compute Profile

Use this procedure to add Azure hardware settings to a compute profile. When you create a host on Azure using this compute profile, these settings are automatically populated.

To use the CLI instead of the web UI, see the CLI procedure.

Procedure
  1. In the orcharhino web UI, navigate to Infrastructure > Compute Profiles.

  2. In the Compute Profiles window, click the name of an existing compute profile, or click Create Compute Profile, enter a Name, and click Submit.

  3. Click the name of the Azure compute resource.

  4. From the Resource group list, select the resource group to provision to.

  5. From the VM Size list, select a size of a virtual machine to provision.

  6. From the Platform list, select Linux. orcharhino does not support provisioning virtual machines running Microsoft operating systems.

  7. In the Username field, enter a user name to authenticate with. Note that the username that you enter for compute profile must be the same that you use when creating an image.

  8. To authenticate the user, use one of the following options:

    • To authenticate using a password, enter a password in the Password field.

    • To authenticate using an SSH key, enter an SSH key in the SSH Key field.

  9. Optional: If you want the virtual machine to use a premium virtual machine disk, select the Premium OS Disk check box.

  10. From the OS Disk Caching list, select the disk caching setting.

  11. Optional: In the Custom Script Command field, enter commands to perform on the virtual machine when the virtual machine is provisioned.

  12. Optional: If you want to run custom scripts when provisioning finishes, in the Comma separated file URIs field, enter comma-separated file URIs of scripts to use. The scripts must contain sudo at the beginning because orcharhino downloads files to the /var/lib/waagent/custom-script/download/0/ directory on the host and scripts require sudo privileges to be executed.

  13. Optional: If you want to create an additional volume on the VM, click the Add Volume button, enter the Size in GB and select the Data Disk Caching method.

  14. Click Add Interface.

  15. From the Azure Subnet list, select the Azure subnet to provision to.

  16. From the Public IP list, select the public IP setting.

  17. Optional: If you want the virtual machine to use a static private IP, select the Static Private IP check box.

  18. Click Submit.

CLI procedure
  1. Create a compute profile to use with the Azure Resource Manager compute resource:

    # hammer compute-profile create --name compute_profile_name
  2. Add Azure details to the compute profile. With the username setting, enter the SSH user name for image access. Note that the username that you enter for compute profile must be the same that you use when creating an image.

    # hammer compute-profile values create \
    --compute-profile "compute_profile_name" \
    --compute-resource azure_cr_name \
    --compute-attributes="resource_group=resource_group,vm_size=Standard_B1s,username=azure_user,password=azure_password,platform=Linux,script_command=touch /var/tmp/text.txt" \
    --interface="compute_public_ip=Dynamic,compute_network=mysubnetID,compute_private_ip=false"
    --volume="disk_size_gb=5,data_disk_caching=None"

    Optional: If you want to run scripts on the virtual machine after provisioning, specify the following settings:

    • To enter the script directly, with the script_command setting, enter a command to be executed on the provisioned virtual machine.

    • To run a script from a URI, with the script_uris setting, enter comma-separated file URIs of scripts to use. The scripts must contain sudo at the beginning because orcharhino downloads files to the /var/lib/waagent/custom-script/download/0/ directory on the host and therefore scripts require sudo privileges to be executed.

Creating Image-based Hosts on Microsoft Azure Resource Manager

In orcharhino, you can use Microsoft Azure Resource Manager provisioning to create hosts from an existing image. The new host entry triggers the Microsoft Azure Resource Manager server to create the instance using the pre-existing image as a basis for the new volume.

To use the CLI instead of the web UI, see the CLI procedure.

Procedure
  1. In the orcharhino web UI, navigate to Hosts > Create Host.

  2. In the Name field, enter a name for the host.

  3. Click the Organization and Location tabs to ensure that the provisioning context is automatically set to the current context.

  4. From the Host Group list, select the host group that you want to use to populate the form.

  5. From the Deploy on list, select the Microsoft Azure Resource Manager connection.

  6. From the Compute Profile list, select a profile to use to automatically populate virtual machine settings.

  7. From the Lifecycle Environment list, select the environment.

  8. Click the Interfaces tab and click Edit on the host’s interface.

  9. Verify that the fields are automatically populated, particularly the following items:

    • The Name from the Host tab becomes the DNS name.

    • The MAC address field is blank. Microsoft Azure Resource Manager assigns a MAC address to the host during provisioning.

    • The Azure Subnet field is populated with the required Azure subnet.

    • The Managed, Primary, and Provision options are automatically selected for the first interface on the host. If not, select them.

  10. Optional: If you want to use a static private IP address, from the IPv4 Subnet list select the orcharhino subnet with the Network Address field matching the Azure subnet address. In the IPv4 Address field, enter an IPv4 address within the range of your Azure subnet.

  11. Click the Operating System tab, and confirm that all fields automatically contain values.

  12. For Provisioning Method, ensure Image Based is selected.

  13. From the Image list, select the Azure Resource Manager image that you want to use for provisioning.

  14. In the Root Password field, enter the root password to authenticate with.

  15. Click Resolve in Provisioning templates to check the new host can identify the right provisioning templates to use.

  16. Click the Virtual Machine tab and confirm that these settings are populated with details from the host group and compute profile. Modify these settings to suit your needs.

  17. Click the Parameters tab, and ensure that a parameter exists that provides an activation key. If not, add an activation key.

  18. Click Submit to save the host entry.

CLI procedure
  • Create the host with the hammer host create command and include --provision-method image. Replace the values in the following example with the appropriate values for your environment.

    # hammer host create \
    --name="Azure_VM" \
    --organization "Your_Organization" \
    --location "Your_Location" \
    --compute-resource azure_cr_name \
    --compute-profile "compute_profile_name" \
    --provision-method 'image' \
    --image Azure_image_name \
    --domain domain_name \
    --architecture x86_64 \
    --operatingsystem "operating_system_name"

For more information about additional host creation parameters for this compute resource, enter the hammer host create --help command.

Appendix A: Initialization Script for Provisioning Examples

If you have not followed the examples in the orcharhino Content Management Guide, you can use the following initialization script to create an environment for provisioning examples.

Create a script file (content-init.sh) and include the following:

#!/bin/bash

MANIFEST=$1

# Import the content from Red Hat CDN
hammer organization create --name "ACME" --label "ACME" \
--description "Our example organization for managing content."
hammer subscription upload --file ~/$MANIFEST --organization "ACME"
hammer repository-set enable \
--name "Red Hat Enterprise Linux 7 Server (RPMs)" \
--releasever "7Server" --basearch "x86_64" \
--product "Red Hat Enterprise Linux Server" --organization "ACME"
hammer repository-set enable \
--name "Red Hat Enterprise Linux 7 Server (Kickstart)" \
--releasever "7Server" --basearch "x86_64" \
--product "Red Hat Enterprise Linux Server" --organization "ACME"
hammer repository-set enable \
--name "Red Hat orcharhino client (for RHEL 7 Server) (RPMs)" \
--basearch "x86_64" --product "Red Hat Enterprise Linux Server" \
--organization "ACME"
hammer product synchronize --name "Red Hat Enterprise Linux Server" \
--organization "ACME"

# Create our application life cycle
hammer lifecycle-environment create --name "Development" \
--description "Environment for ACME's Development Team" \
--prior "Library" --organization "ACME"
hammer lifecycle-environment create --name "Testing" \
--description "Environment for ACME's Quality Engineering Team" \
--prior "Development" --organization "ACME"
hammer lifecycle-environment create --name "Production" \
--description "Environment for ACME's Product Releases" \
--prior "Testing" --organization "ACME"

# Create and publish our Content View
hammer content-view create --name "Base" \
--description "Base operating system" \
--repositories "Red Hat Enterprise Linux 7 Server RPMs x86_64 7Server,Red Hat orcharhino client for RHEL 7 Server RPMs x86_64" \
--organization "ACME"
hammer content-view publish --name "Base" \
--description "Initial content view for our operating system" \
--organization "ACME"
hammer content-view version promote --content-view "Base" --version 1 \
--to-lifecycle-environment "Development" --organization "ACME"
hammer content-view version promote --content-view "Base" --version 1 \
--to-lifecycle-environment "Testing" --organization "ACME"
hammer content-view version promote --content-view "Base" --version 1 \
--to-lifecycle-environment "Production" --organization "ACME"

Set executable permissions on the script:

# chmod +x content-init.sh

Download a copy of your Subscription Manifest from the Red Hat Customer Portal and run the script on the manifest:

# ./content-init.sh manifest_98f4290e-6c0b-4f37-ba79-3a3ec6e405ba.zip

This imports the necessary Red Hat content for the provisioning examples in this guide.

Appendix B: Provisioning FIPS-Compliant Hosts

orcharhino supports provisioning hosts that comply with the National Institute of Standards and Technology’s Security Requirements for Cryptographic Modules standard, reference number FIPS 140-2, referred to here as FIPS.

To enable the provisioning of hosts that are FIPS-compliant, complete the following tasks:

  • Change the provisioning password hashing algorithm for the operating system

  • Create a host group and set a host group parameter to enable FIPS

For more information about creating host groups, see Creating a Host Group in the Managing Hosts guide.

The provisioned hosts have the FIPS-compliant settings applied. To confirm that these settings are enabled, complete the steps in Verifying FIPS Mode is Enabled.

Change the Provisioning Password Hashing Algorithm

To provision FIPS-compliant hosts, you must first set the password hashing algorithm that you use in provisioning to SHA256. This configuration setting must be applied for each operating system you want to deploy as FIPS-compliant.

  1. Identify the Operating System IDs.

    $ hammer os list
  2. Update each operating system’s password hash value.

    $ hammer os update --title Operating_System \
      --password-hash SHA256
  3. Repeat this command for each of the operating systems, using the matching value in the TITLE column:

    $ hammer os update --title "RedHat version_number" \
      --password-hash SHA256

    Note that you cannot use a comma-separated list of values.

Setting the FIPS-Enabled Parameter

To provision a FIPS-compliant host, you must create a host group and set the host group parameter fips_enabled to true. If this is not set to true, or is absent, the FIPS-specific changes do not apply to the system. You can set this parameter when you provision a host or for a host group.

To set this parameter when provisioning a host, append --parameters fips_enabled=true to the hammer host create command. To set it for an existing host group, enter a command as follows:

$ hammer hostgroup set-parameter --name fips_enabled \
 --value 'true' \
 --hostgroup prod_servers

For more information, see the output of the command hammer hostgroup set-parameter --help.
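
To set the parameter while provisioning an individual host instead, append it to the hammer host create command. The following sketch reuses placeholder values from the VMware example earlier in this guide; adjust them to your environment.

$ hammer host create --name "fips-test1" --organization "My_Organization" \
 --location "New York" --hostgroup "Base" \
 --compute-resource "My_vSphere" --provision-method build \
 --build true --enabled true --managed true \
 --parameters fips_enabled=true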

Verifying FIPS Mode is Enabled

To verify these FIPS compliance changes have been successful, you must provision a host and check its configuration.

  1. Log on to the host as root or with an admin-level account.

  2. Enter the following command:

    $ cat /proc/sys/crypto/fips_enabled

    A value of 1 confirms that FIPS mode is enabled.

The text and illustrations on this page are licensed by ATIX AG under a Creative Commons Attribution–Share Alike 3.0 Unported ("CC-BY-SA") license. This page also contains text from the official Foreman documentation which uses the same license ("CC-BY-SA").