Configuring provisioning resources

Provisioning contexts

A provisioning context is the combination of an organization and location that you specify for orcharhino components. The organization and location that a component belongs to sets the ownership and access for that component.

Organizations divide orcharhino components into logical groups based on ownership, purpose, content, security level, and other divisions. You can create and manage multiple organizations through orcharhino and assign components to each individual organization. This ensures orcharhino Server provisions hosts within a certain organization and only uses components that are assigned to that organization. For more information about organizations, see Managing Organizations in Managing Organizations and Locations.

Locations function similarly to organizations. The difference is that locations are based on physical or geographical settings. Users can nest locations in a hierarchy. For more information about locations, see Managing Locations in Managing Organizations and Locations.

Setting the provisioning context

When you set a provisioning context, you define which organization and location to use for provisioning hosts.

The organization and location menus are located in the menu bar, on the upper left of the orcharhino management UI. If you have not selected an organization and location to use, the menu displays: Any Organization and Any Location.

Procedure
  1. Click Any Organization and select the organization.

  2. Click Any Location and select the location to use.

Each user can set their default provisioning context in their account settings. Click the user name in the upper right of the orcharhino management UI and select My account to edit your user account settings.

CLI procedure
  • When using the CLI, include the --organization or --organization-label option and the --location or --location-id option. For example:

    $ hammer host list --organization "My_Organization" --location "My_Location"

    This command outputs hosts allocated to My_Organization and My_Location.
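You can also set a user's default provisioning context from the CLI so that the options no longer have to be passed on every command. A sketch, assuming your hammer version supports the --default-organization-id and --default-location-id options of hammer user update, and that the IDs have been looked up with hammer organization list and hammer location list:

```shell
# Set the default organization and location for the user "admin".
hammer user update \
--login admin \
--default-organization-id 1 \
--default-location-id 2
```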

Creating operating systems

An operating system is a collection of resources that define how orcharhino Server installs a base operating system on a host. Operating system entries combine previously defined resources, such as installation media, partition tables, provisioning templates, and others.

You can add operating systems using the following procedure. To use the CLI instead of the orcharhino management UI, see the CLI procedure.

You can use a script to add operating system entries to your orcharhino Server.

On your orcharhino Server, uncomment the operating systems and orcharhino Client for CentOS Stream that you want to add in /etc/orcharhino-ansible/or_operating_systems_vars.yaml, replace the default organization and location names, and run /opt/orcharhino/automation/play_operating_systems.sh. For more information, see /usr/share/orcharhino-ansible/README.md on your orcharhino Server.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > Operating systems and click New Operating system.

  2. In the Name field, enter a name to represent the operating system entry.

  3. In the Major field, enter the number that corresponds to the major version of the operating system.

  4. In the Minor field, enter the number that corresponds to the minor version of the operating system.

  5. In the Description field, enter a description of the operating system.

  6. From the Family list, select the operating system’s family.

  7. From the Root Password Hash list, select the encoding method for the root password.

  8. From the Architectures list, select the architectures that the operating system uses.

  9. Click the Partition table tab and select the possible partition tables that apply to this operating system.

  10. Click the Installation Media tab and enter the information for the installation media source. For more information, see Adding Installation Media to orcharhino.

  11. Click the Templates tab and select a PXELinux template, a Provisioning template, and a Finish template for your operating system to use. You can select other templates, for example an iPXE template, if you plan to use iPXE for provisioning.

  12. On the Parameters tab, add the or_client_repo_url parameter, select the string type, and enter the value http://orcharhino.example.com/pulp/content/My_Organization/Library/custom/centos_Client/centos_Client_centos/.

    If you want to provision hosts through orcharhino Proxy Servers, you have to use the or_client_repo_path parameter containing the Published At path of your content on orcharhino Server. orcharhino Proxy Server will add the FQDN of your orcharhino Proxy Server to generate the URL based on the content source setting of your host.

  13. Click Submit to save your operating system entry.

CLI procedure
  • Create the operating system using the hammer os create command:

    $ hammer os create \
    --architectures "x86_64" \
    --description "My_Operating_System" \
    --family "Redhat" \
    --major 9 \
    --media "Red Hat" \
    --minor 0 \
    --name "CentOS Stream" \
    --partition-tables "My_Partition_Table" \
    --provisioning-templates "My_Provisioning_Template"

Updating the details of multiple operating systems

Use this procedure to update the details of multiple operating systems. This example shows you how to assign each operating system a partition table called Kickstart default, a configuration template called Kickstart default PXELinux, and a provisioning template called Kickstart default.

Procedure
  1. On orcharhino Server, run the following Bash script:

    PARTID=$(hammer --csv partition-table list | grep "Kickstart default," | cut -d, -f1)
    PXEID=$(hammer --csv template list --per-page=1000 | grep "Kickstart default PXELinux" | cut -d, -f1)
    ORCHARHINO_ID=$(hammer --csv template list --per-page=1000 | grep "provision" | grep ",Kickstart default" | cut -d, -f1)
    
    for i in $(hammer --no-headers --csv os list | awk -F, '{print $1}')
    do
       hammer partition-table add-operatingsystem --id="${PARTID}" --operatingsystem-id="${i}"
       hammer template add-operatingsystem --id="${PXEID}" --operatingsystem-id="${i}"
       hammer os set-default-template --id="${i}" --config-template-id=${PXEID}
       hammer os add-config-template --id="${i}" --config-template-id=${ORCHARHINO_ID}
       hammer os set-default-template --id="${i}" --config-template-id=${ORCHARHINO_ID}
    done
  2. Display information about the updated operating system to verify that the operating system is updated correctly:

    $ hammer os info --id 1

Creating architectures

An architecture in orcharhino represents a logical grouping of hosts and operating systems. Architectures are created by orcharhino automatically when hosts check in with Puppet. The x86_64 architecture is already preset in orcharhino.

Use this procedure to create an architecture in orcharhino.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > Provisioning Setup > Architectures.

  2. Click Create Architecture.

  3. In the Name field, enter a name for the architecture.

  4. From the Operating Systems list, select an operating system. If none are available, you can create and assign them under Hosts > Provisioning Setup > Operating Systems.

  5. Click Submit.

CLI procedure
  • Enter the hammer architecture create command to create an architecture. Specify its name and operating systems that include this architecture:

    $ hammer architecture create \
    --name "My_Architecture" \
    --operatingsystems "My_Operating_System"

Creating hardware models

Use this procedure to create a hardware model in orcharhino so that you can specify which hardware model a host uses.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > Provisioning Setup > Hardware Models.

  2. Click Create Model.

  3. In the Name field, enter a name for the hardware model.

  4. Optionally, in the Hardware Model and Vendor Class fields, you can enter corresponding information for your system.

  5. In the Info field, enter a description of the hardware model.

  6. Click Submit to save your hardware model.

CLI procedure
  • Create a hardware model using the hammer model create command. The only required parameter is --name. Optionally, enter the hardware model with the --hardware-model option, a vendor class with the --vendor-class option, and a description with the --info option:

    $ hammer model create \
    --hardware-model "My_Hardware_Model" \
    --info "My_Description" \
    --name "My_Hardware_Model_Name" \
    --vendor-class "My_Vendor_Class"

Using a synchronized Kickstart repository for a host’s operating system

orcharhino contains a set of synchronized Kickstart repositories that you use to install the provisioned host’s operating system. For more information about adding repositories, see Syncing Repositories in Managing Content.

Use this procedure to set up a Kickstart repository.

Prerequisites

You must enable both the BaseOS and AppStream Kickstart repositories before provisioning.

Procedure
  1. Add the synchronized Kickstart repository that you want to use to the existing content view, or create a new content view and add the Kickstart repository.

    For CentOS Stream 8, ensure that you add both CentOS Stream 8 for x86_64 - AppStream Kickstart x86_64 8 and CentOS Stream 8 for x86_64 - BaseOS Kickstart x86_64 8 repositories.

    If you use a disconnected environment, you must import the Kickstart repositories from a CentOS Stream binary DVD. For more information, see Importing Kickstart Repositories in Managing Content.

  2. Publish a new version of the content view where the Kickstart repository is added and promote it to a required lifecycle environment. For more information, see Managing content views in Managing Content.

  3. When you create a host, in the Operating System tab, for Media Selection, select the Synced Content checkbox.

To view the Kickstart tree, enter the following command:

$ hammer medium list --organization "My_Organization"

Adding installation media to orcharhino

Installation media are sources of packages that orcharhino Server uses to install a base operating system on a machine from an external repository.

You can view installation media by navigating to Hosts > Provisioning Setup > Installation Media.

Installation media must be in the format of an operating system installation tree and must be accessible from the machine hosting the installer through an HTTP URL.

If you want to improve download performance when using installation media to install operating systems on multiple hosts, you must modify the Path of the installation medium to point to the closest mirror or a local copy.

To use the CLI instead of the orcharhino management UI, see the CLI procedure.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > Provisioning Setup > Installation Media.

  2. Click Create Medium.

  3. In the Name field, enter a name to represent the installation media entry.

  4. In the Path field, enter the URL that contains the installation tree. You can use the following variables in the path to represent multiple different system architectures and versions:

    • $arch – The system architecture.

    • $version – The operating system version.

    • $major – The operating system major version.

    • $minor – The operating system minor version.

      Example HTTP path:

      http://download.example.com/centos/$version/Server/$arch/os/
  5. From the Operating system family list, select the distribution or family of the installation medium. CentOS Stream belongs to the Red Hat family.

  6. Click the Organizations and Locations tabs, to change the provisioning context. orcharhino Server adds the installation medium to the set provisioning context.

  7. Click Submit to save your installation medium.

ATIX AG provides file repositories for installation media. You can synchronize them to provision hosts in disconnected environments by using local installation media. For a list of upstream URLs, see Using file repositories for installation media for Debian/Ubuntu in the ATIX Service Portal.

ATIX AG provides the following installation media as file repositories:

  • Debian 9

  • Debian 10

  • Debian 11

  • Debian 12

  • Ubuntu 18.04

  • Ubuntu 20.04

  • Ubuntu 22.04

CLI procedure
  • Create the installation medium using the hammer medium create command:

    $ hammer medium create \
    --locations "My_Location" \
    --name "My_Operating_System" \
    --organizations "My_Organization" \
    --os-family "Redhat" \
    --path "http://http://download.example.com/centos/$version/Server/$arch/os/"

Creating partition tables

A partition table is a type of template that defines the way orcharhino Server configures the disks available on a new host. A partition table uses the same ERB syntax as provisioning templates. orcharhino contains a set of default partition tables to use, including a Kickstart default. You can also edit partition table entries to configure the preferred partitioning scheme, or create a partition table entry and add it to the operating system entry.

To use the CLI instead of the orcharhino management UI, see the CLI procedure.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > Templates > Partition Tables.

  2. Click Create Partition Table.

  3. In the Name field, enter a name for the partition table.

  4. Select the Default checkbox if you want to set the template to automatically associate with new organizations or locations.

  5. Select the Snippet checkbox if you want to identify the template as a reusable snippet for other partition tables.

  6. From the Operating System Family list, select the distribution or family of the partitioning layout. CentOS Stream belongs to the Red Hat family.

  7. In the Template editor field, enter the layout for the disk partition.

    The format of the layout must match that for the intended operating system. For example, CentOS Stream requires a layout that matches a Kickstart file, such as:

    zerombr
    clearpart --all --initlabel
    autopart

    For more information, see dynamic partition example.

    You can also use the file browser in the template editor to import the layout from a file.

  8. In the Audit Comment field, add a summary of changes to the partition layout.

  9. Click the Organizations and Locations tabs to add any other provisioning contexts that you want to associate with the partition table. orcharhino adds the partition table to the current provisioning context.

  10. Click Submit to save your partition table.

CLI procedure
  1. Create a plain text file, such as ~/My_Partition_Table, that contains the partition layout.

    The format of the layout must match that for the intended operating system. For example, CentOS Stream requires a layout that matches a Kickstart file, such as:

    zerombr
    clearpart --all --initlabel
    autopart

    For more information, see dynamic partition example.

  2. Create the partition table using the hammer partition-table create command:

    $ hammer partition-table create \
    --file "~/My_Partition_Table" \
    --locations "My_Location" \
    --name "My_Partition_Table" \
    --organizations "My_Organization" \
    --os-family "Redhat" \
    --snippet false

Dynamic partition example

Using an Anaconda Kickstart template, the following section instructs Anaconda to erase the whole disk, automatically partition, enlarge one partition to maximum size, and then proceed to the next sequence of events in the provisioning process:

zerombr
clearpart --all --initlabel
autopart <%= host_param('autopart_options') %>

Dynamic partitioning is executed by the installation program. Therefore, you can write your own rules to specify how you want to partition disks according to runtime information from the node, for example, disk sizes, number of drives, vendor, or manufacturer.

If you want to provision servers and use dynamic partitioning, add the following example as a template. When the #Dynamic entry is included, the content of the template loads into a %pre shell scriptlet and creates a /tmp/diskpart.cfg that is then included in the Kickstart partitioning section.

#Dynamic (do not remove this line)

MEMORY=$(($(grep MemTotal: /proc/meminfo | sed 's/^MemTotal: *//' | sed 's/ .*//') / 1024))
if [ "$MEMORY" -lt 2048 ]; then
    SWAP_MEMORY=$(($MEMORY * 2))
elif [ "$MEMORY" -lt 8192 ]; then
    SWAP_MEMORY=$MEMORY
elif [ "$MEMORY" -lt 65536 ]; then
    SWAP_MEMORY=$(($MEMORY / 2))
else
    SWAP_MEMORY=32768
fi

cat <<EOF > /tmp/diskpart.cfg
zerombr
clearpart --all --initlabel
part /boot --fstype ext4 --size 200 --asprimary
part swap --size "$SWAP_MEMORY"
part / --fstype ext4 --size 1024 --grow
EOF

Provisioning templates

A provisioning template defines the way orcharhino Server installs an operating system on a host.

orcharhino includes many template examples. In the orcharhino management UI, navigate to Hosts > Templates > Provisioning Templates to view them. You can create a template or clone a template and edit the clone. For help with templates, navigate to Hosts > Templates > Provisioning Templates > Create Template > Help.

Templates accept the Embedded Ruby (ERB) syntax. For more information, see Template Writing Reference in Managing Hosts.

You can download provisioning templates. Before you can download the template, you must create a debug certificate. For more information, see Creating an Organization Debug Certificate in Managing Organizations and Locations.

You can synchronize templates between orcharhino Server and a Git repository or a local directory. For more information, see Synchronizing Templates Repositories in Managing Hosts.

To view the history of changes applied to a template, navigate to Hosts > Templates > Provisioning Templates, select one of the templates, and click History. Click Revert to override the content with the previous version. You can also revert to an earlier change. Click Show Diff to see information about a specific change:

  • The Template Diff tab displays changes in the body of a provisioning template.

  • The Details tab displays changes in the template description.

  • The History tab displays the user who made a change to the template and date of the change.

Kinds of provisioning templates

There are various kinds of provisioning templates:

Provision

The main template for the provisioning process. For example, a Kickstart template.

PXELinux, PXEGrub, PXEGrub2

PXE-based templates that deploy to the template orcharhino Proxy associated with a subnet to ensure that the host uses the installer with the correct kernel options. For BIOS provisioning, select PXELinux template. For UEFI provisioning, select PXEGrub2.

Finish

Post-configuration scripts to execute over an SSH connection when the main provisioning process completes. You can use Finish templates only for image-based provisioning in virtual or cloud environments that do not support user_data. Do not confuse an image with the Foreman discovery ISO, which is sometimes called the Foreman discovery image. An image in this context is an install image in a virtualized environment for easy deployment.

When a finish script successfully exits with the return code 0, orcharhino treats the code as a success and the host exits the build mode.

Note that a few finish scripts use a build mode with an HTTP callback. These scripts are not used for image-based provisioning, but for post-configuration of operating system installations such as Debian, Ubuntu, and BSD.

user_data

Post-configuration scripts for providers that accept custom data, also known as seed data. You can use the user_data template to provision virtual machines in cloud or virtualized environments only. This template does not require orcharhino to be able to reach the host; the cloud or virtualization platform is responsible for delivering the data to the image.

Ensure that the image that you want to provision has the software to read the data installed and set to start during boot. For example, cloud-init, which expects YAML input, or ignition, which expects JSON input.
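As a quick sanity check, you can verify inside a candidate image that cloud-init is installed and enabled at boot. A minimal sketch, assuming a systemd-based guest image:

```shell
# Verify that cloud-init is present and scheduled to run at boot.
cloud-init --version
systemctl is-enabled cloud-init
```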

cloud_init

Some environments, such as VMware, either do not support custom data or have their own data format that limits what can be done during customization. In this case, you can configure a cloud-init client with the foreman plugin, which attempts to download the template directly from orcharhino over HTTP or HTTPS. This technique can be used in any environment, preferably virtualized.

Ensure that you meet the following requirements to use the cloud_init template:

  • Ensure that the image that you want to provision has the software to read the data installed and set to start during boot.

  • A provisioned host is able to reach orcharhino from the IP address that matches the host’s provisioning interface IP.

    Note that cloud-init does not work behind NAT.

Bootdisk

Templates for PXE-less boot methods.

Kernel Execution (kexec)

Kernel execution templates for PXE-less boot methods.

Script

An arbitrary script not used by default but useful for custom tasks.

ZTP

Zero Touch Provisioning templates.

POAP

PowerOn Auto Provisioning templates.

iPXE

Templates for iPXE or gPXE environments to use instead of PXELinux.

Creating provisioning templates

A provisioning template defines the way orcharhino Server installs an operating system on a host. Use this procedure to create a new provisioning template.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > Templates > Provisioning Templates and click Create Template.

  2. In the Name field, enter a name for the provisioning template.

  3. Fill in the rest of the fields as required. The Help tab provides information about the template syntax and details the available functions, variables, and methods that can be called on different types of objects within the template.

CLI procedure
  1. Before you create a template with the CLI, create a plain text file that contains the template. This example uses the ~/my-template file.

  2. Create the template using the hammer template create command and specify the type with the --type option:

    $ hammer template create \
    --file ~/my-template \
    --locations "My_Location" \
    --name "My_Provisioning_Template" \
    --organizations "My_Organization" \
    --type provision

Cloning provisioning templates

A provisioning template defines the way orcharhino Server installs an operating system on a host. Use this procedure to clone a template and add your updates to the clone.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > Templates > Provisioning Templates.

  2. Find the template that you want to use.

  3. Click Clone to duplicate the template.

  4. In the Name field, enter a name for the provisioning template.

  5. Select the Default checkbox to set the template to associate automatically with new organizations or locations.

  6. In the Template editor field, enter the body of the provisioning template. You can also use the Template file browser to upload a template file.

  7. In the Audit Comment field, enter a summary of changes to the provisioning template for auditing purposes.

  8. Click the Type tab and if your template is a snippet, select the Snippet checkbox. A snippet is not a standalone provisioning template, but a part of a provisioning template that can be inserted into other provisioning templates.

  9. From the Type list, select the type of the template. For example, Provisioning template.

  10. Click the Association tab and from the Applicable Operating Systems list, select the names of the operating systems that you want to associate with the provisioning template.

  11. Optionally, click Add combination and select a host group from the Host Group list or an environment from the Environment list to associate the provisioning template with host groups and environments.

  12. Click the Organizations and Locations tabs to add any additional contexts to the template.

  13. Click Submit to save your provisioning template.

Creating custom provisioning snippets

You can execute custom code before or after the host provisioning process.

Prerequisites
  • Check your provisioning template to ensure that it supports the custom snippets you want to use.

    You can view all provisioning templates under Hosts > Templates > Provisioning Templates.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > Templates > Provisioning Templates and click Create Template.

  2. In the Name field, enter a name for your custom provisioning snippet. The name must start with the name of a provisioning template that supports including custom provisioning snippets:

    • Append custom pre to the name of a provisioning template to run code before provisioning a host.

    • Append custom post to the name of a provisioning template to run code after provisioning a host.

  3. On the Type tab, select Snippet.

  4. Click Submit to create your custom provisioning snippet.

CLI procedure
  1. Create a plain text file that contains your custom snippet.

  2. Create the template using hammer:

    $ hammer template create \
    --file "/path/to/My_Snippet" \
    --locations "My_Location" \
    --name "My_Template_Name_custom_pre" \
    --organizations "_My_Organization" \
    --type snippet

Custom provisioning snippet example for CentOS Stream

You can use Custom Post snippets to call external APIs from within the provisioning template directly after provisioning a host.

Example Kickstart default finish custom post snippet for CentOS Stream:
echo "Calling API to report successful host deployment"

yum install -y curl ca-certificates

curl -X POST \
-H "Content-Type: application/json" \
-d '{"name": "<%= @host.name %>", "operating_system": "<%= @host.operatingsystem.name %>", "status": "provisioned",}' \
"https://api.example.com/"

Installing katello-host-tools-tracer during Provisioning

The katello-host-tools package is responsible for the connection between managed hosts and content from orcharhino Server and orcharhino Proxies. It uploads currently used repositories and installed packages to orcharhino. The katello-host-tools-tracer package signals orcharhino if services such as nginx have to be restarted or the managed host itself has to be rebooted after updating packages such as the Linux Kernel.

Procedure
  1. In the orcharhino management UI, navigate to the host group you are using to provision your host.

    Note that you can add the parameter to any level in the parameter hierarchy, for example to individual hosts, host groups, or operating systems.

  2. Add a parameter with name redhat_install_host_tracer_tools, type boolean, and value true.

  3. In the orcharhino management UI, navigate to the provisioning template you are using to provision your host.

  4. Ensure that your provisioning template references the snippet that installs katello-host-tools-tracer:

    <%= snippet 'redhat_register' %>

    After the deployment, orcharhino will automatically install both katello-host-tools and katello-host-tools-tracer on your managed host.

Installing katello-host-tools-tracer Manually

The katello-host-tools package is responsible for the connection between managed hosts and content from orcharhino Server and orcharhino Proxies. It uploads currently used repositories and installed packages to orcharhino. The katello-host-tools-tracer package signals orcharhino if services such as nginx have to be restarted or the managed host itself has to be rebooted after updating packages such as the Linux Kernel.

Procedure
  1. In the orcharhino management UI, navigate to Monitor > Jobs.

  2. Click Run Job.

  3. In the Job category drop down menu, select Commands.

  4. In the Job template drop down menu, select Run Command - SSH Default.

  5. In the Search Query field, filter your managed hosts, for example all hosts running CentOS Stream:

    os = CentOS

    Alternatively, you can also filter for specific hosts using name = my-host.example.com.

  6. In the Command field, enter the command to install the katello-host-tools-tracer package:

    dnf install -y katello-host-tools-tracer
  7. Click Submit to install the package.

Creating compute profiles

You can use compute profiles to predefine virtual machine hardware details such as CPUs, memory, and storage.

To use the CLI instead of the orcharhino management UI, see the CLI procedure.

A default installation of orcharhino contains three predefined profiles:

  • 1-Small

  • 2-Medium

  • 3-Large

You can apply compute profiles to all supported compute resources.

Procedure
  1. In the orcharhino management UI, navigate to Infrastructure > Compute Profiles and click Create Compute Profile.

  2. In the Name field, enter a name for the profile.

  3. Click Submit. A new window opens with the name of the compute profile.

  4. In the new window, click the name of each compute resource and edit the attributes you want to set for this compute profile.

CLI procedure
  1. Create a new compute profile:

    $ hammer compute-profile create --name "My_Compute_Profile"
  2. Set attributes for the compute profile:

    $ hammer compute-profile values create \
    --compute-attributes "flavor=m1.small,cpus=2,memory=4GB,cpu_mode=default" \
    --compute-resource "My_Compute_Resource" \
    --compute-profile "My_Compute_Profile" \
    --volume "size=40GB"
  3. Optional: To update the attributes of a compute profile, specify the attributes you want to change. For example, to change the number of CPUs and memory size:

    $ hammer compute-profile values update \
    --compute-resource "My_Compute_Resource" \
    --compute-profile "My_Compute_Profile" \
    --attributes "cpus=2,memory=4GB" \
    --interface "type=network,bridge=br1,index=1" \
    --volume "size=40GB"
  4. Optional: To change the name of the compute profile, use the --new-name attribute:

    $ hammer compute-profile update \
    --name "My_Compute_Profile" \
    --new-name "My_New_Compute_Profile"
Additional resources
  • For more information about creating compute profiles by using Hammer, enter hammer compute-profile --help.

Setting a default encrypted root password for hosts

If you do not want to set a plain text default root password for the hosts that you provision, you can use a default encrypted password.

The default root password can be inherited by a host group and consequently by hosts in that group.

If you change the password and reprovision the hosts in the group that inherits the password, the password will be overwritten on the hosts.

Procedure
  1. Generate an encrypted password:

    $ python3 -c 'import crypt,getpass;pw=getpass.getpass(); print(crypt.crypt(pw)) if (pw==getpass.getpass("Confirm: ")) else exit()'
  2. Copy the password for later use.

  3. In the orcharhino management UI, navigate to Administer > Settings.

  4. On the Settings page, select the Provisioning tab.

  5. In the Name column, navigate to Root password, and click Click to edit.

  6. Paste the encrypted password, and click Save.
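If the Python crypt module is not available (it was removed from the standard library in Python 3.13), you can generate a compatible SHA-512 password hash with OpenSSL instead. A sketch, assuming OpenSSL 1.1.1 or later:

```shell
# Generate a SHA-512 crypt-style hash for the given password.
# The output starts with "$6$" followed by a random salt and the hash.
openssl passwd -6 "My_Password"
```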

Using noVNC to access virtual machines

You can use your browser to access the VNC console of VMs created by orcharhino.

orcharhino supports using noVNC on the following virtualization platforms:

  • VMware

  • Libvirt

  • oVirt

Prerequisites
  • You must have a virtual machine created by orcharhino.

  • For existing virtual machines, ensure that the Display type in the Compute Resource settings is VNC.

  • You must import the Katello root CA certificate into your browser. Adding a security exception in the browser is not enough for using noVNC. For more information, see Installing the Katello Root CA Certificate in Administering orcharhino.

Procedure
  1. On your orcharhino Server, configure the firewall to allow VNC service on ports 5900 to 5930.

    $ firewall-cmd --add-port=5900-5930/tcp
    $ firewall-cmd --add-port=5900-5930/tcp --permanent
  2. In the orcharhino management UI, navigate to Infrastructure > Compute Resources and select the name of a compute resource.

  3. In the Virtual Machines tab, select the name of your virtual machine. Ensure the machine is powered on and then select Console.
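If the console does not load, you can check whether the VNC port range from step 1 is actually open. A quick check, assuming firewalld is in use:

```shell
# List the ports open in the default zone; expect 5900-5930/tcp to appear
firewall-cmd --list-ports
```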

Removing a virtual machine upon host deletion

By default, when you delete a host provisioned by orcharhino, orcharhino does not remove the actual VM on the compute resource. You can configure orcharhino to remove the VM when deleting the host entry on orcharhino.

If you do not remove the associated VM, a later attempt to create a new VM with the same FQDN fails because the VM already exists on the compute resource. You can still re-register the existing VM to orcharhino.

To use the CLI instead of the orcharhino management UI, see the CLI procedure.

Prerequisites
  • Your orcharhino account has a role that grants the view_settings and edit_settings permissions.

Procedure
  1. In the orcharhino management UI, navigate to Administer > Settings > Provisioning.

  2. Change the value of the Destroy associated VM on host delete setting to Yes.

CLI procedure
  • Configure orcharhino to remove a VM upon host deletion by using Hammer:

    $ hammer settings set \
    --name destroy_vm_on_host_delete \
    --value true \
    --location "My_Location" \
    --organization "My_Organization"
Next steps
  • When you delete a host, orcharhino also removes the associated VM from the compute resource.
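With the setting enabled, deleting the host entry also destroys the VM. A usage sketch, with a placeholder host name:

```shell
# Deleting the host now also destroys its VM on the compute resource
hammer host delete --name "host1.example.com"
```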

Host parameter hierarchy

You can access host parameters when provisioning hosts. Hosts inherit their parameters from the following locations, in order of increasing precedence:

  • Globally defined parameters: Configure > Global parameters

  • Organization-level parameters: Administer > Organizations

  • Location-level parameters: Administer > Locations

  • Domain-level parameters: Infrastructure > Domains

  • Subnet-level parameters: Infrastructure > Subnets

  • Operating system-level parameters: Hosts > Provisioning Setup > Operating Systems

  • Host group-level parameters: Configure > Host Groups

  • Host parameters: Hosts > All Hosts
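As an illustration of this precedence order, a host parameter overrides a global parameter of the same name. A sketch using Hammer, where the parameter and host names are examples:

```shell
# Set a global parameter (lowest precedence)
hammer global-parameter set --name "ntp_server" --value "ntp.example.com"

# Override it for a single host (highest precedence)
hammer host set-parameter \
  --host "host1.example.com" \
  --name "ntp_server" \
  --value "ntp2.example.com"
```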

Permissions required to provision hosts

The following list provides an overview of the permissions a non-admin user requires to provision hosts.

  • Activation Keys: view_activation_keys

  • Ansible role: view_ansible_roles (required if Ansible is used)

  • Architecture: view_architectures

  • Compute profile: view_compute_profiles

  • Compute resource: view_compute_resources, create_compute_resources, destroy_compute_resources, power_compute_resources (required to provision bare-metal hosts); additionally view_compute_resources_vms, create_compute_resources_vms, destroy_compute_resources_vms, power_compute_resources_vms (required to provision virtual machines)

  • Content Views: view_content_views

  • Domain: view_domains

  • Environment: view_environments

  • Host: view_hosts, create_hosts, edit_hosts, destroy_hosts, build_hosts, power_hosts, play_roles_on_host; additionally view_discovered_hosts, submit_discovered_hosts, auto_provision_discovered_hosts, provision_discovered_hosts, edit_discovered_hosts, destroy_discovered_hosts (required if the Discovery service is enabled)

  • Hostgroup: view_hostgroups, create_hostgroups, edit_hostgroups, play_roles_on_hostgroup

  • Image: view_images

  • Lifecycle environment: view_lifecycle_environments

  • Location: view_locations

  • Medium: view_media

  • Operatingsystem: view_operatingsystems

  • Organization: view_organizations

  • Parameter: view_params, create_params, edit_params, destroy_params

  • Product and Repositories: view_products

  • Provisioning template: view_provisioning_templates

  • Ptable: view_ptables

  • orcharhino Proxy: view_smart_proxies, view_smart_proxies_puppetca; additionally view_openscap_proxies (required if the OpenSCAP plugin is enabled)

  • Subnet: view_subnets
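To grant these permissions to a non-admin user, you typically create a role, add filters that carry the required permissions, and assign the role to the user. A sketch using Hammer, where the role name, permission subset, and login are examples:

```shell
# Create a role and grant it a subset of the provisioning permissions
hammer role create --name "Provisioning Operator"
hammer filter create \
  --role "Provisioning Operator" \
  --permissions "view_hosts,create_hosts,edit_hosts,build_hosts,power_hosts"

# Assign the role to an existing user
hammer user add-role --login "jsmith" --role "Provisioning Operator"
```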

Additional resources

The text and illustrations on this page are licensed by ATIX AG under a Creative Commons Attribution Share Alike 4.0 International ("CC BY-SA 4.0") license. This page also contains text from the official Foreman documentation which uses the same license ("CC BY-SA 4.0").