Managing Hosts Guide

Overview of Hosts in orcharhino

A host is any Linux client that orcharhino manages. Hosts can be physical or virtual. Virtual hosts can be deployed on any platform supported by orcharhino, such as Amazon EC2, Google Compute Engine, libvirt, Microsoft Azure, Oracle Linux Virtualization Manager, oVirt, Proxmox, RHV, and VMware vSphere.

orcharhino enables host management at scale, including monitoring, provisioning, remote execution, configuration management, software management, and subscription management. You can manage your hosts from the orcharhino management UI or from the command line.

In the orcharhino management UI, you can browse all hosts recognized by orcharhino server, grouped by type:

  • All Hosts - a list of all hosts recognized by orcharhino server.

  • Discovered Hosts - a list of bare-metal hosts detected on the provisioning network by the Discovery plug-in.

  • Content Hosts - a list of hosts that manage tasks related to content and subscriptions.

  • Host Collections - a list of user-defined collections of hosts used for bulk actions such as errata installation.

To search for a host, type in the Search field, and use an asterisk (*) to perform a partial string search. For example, if searching for a content host named dev-node.example.com, click the Content Hosts page and type dev-node* in the Search field. Alternatively, *node* will also find the content host dev-node.example.com.
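If you prefer the command line, hammer supports a similar search. The following is a minimal sketch; the host name is an example value, and the ~ operator performs a substring match:

  # hammer host list --search "name ~ dev-node"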

orcharhino server is listed as a host itself even if it is not self-registered. Do not delete orcharhino server from the list of hosts.

Administering Hosts

This chapter describes creating, registering, administering, and removing hosts.

Creating a Host in orcharhino

Use this procedure to create a host in orcharhino. To use the CLI instead of the orcharhino management UI, see the CLI procedure.

Procedure
  1. In the orcharhino management UI, click Hosts > Create Host.

  2. On the Host tab, enter the required details.

  3. Click the Ansible Roles tab, and from the Ansible Roles list, select one or more roles that you want to add to the host. Use the arrow icon to manage the roles that you add or remove.

  4. On the Puppet Classes tab, select the Puppet classes you want to include.

  5. On the Interfaces tab:

    1. For each interface, click Edit in the Actions column and configure the following settings as required:

      • Type — For a Bond or BMC interface, use the Type list and select the interface type.

      • MAC address — Enter the MAC address.

      • DNS name — Enter the DNS name that is known to the DNS server. This is used for the host part of the FQDN.

      • Domain — Select the domain name of the provisioning network. This automatically updates the Subnet list with a selection of suitable subnets.

      • IPv4 Subnet — Select an IPv4 subnet for the host from the list.

      • IPv6 Subnet — Select an IPv6 subnet for the host from the list.

      • IPv4 address — If IP address management (IPAM) is enabled for the subnet, the IP address is automatically suggested. Alternatively, you can enter an address. The address can be omitted if provisioning tokens are enabled, if the domain does not manage DNS, if the subnet does not manage reverse DNS, or if the subnet does not manage DHCP reservations.

      • IPv6 address — If IP address management (IPAM) is enabled for the subnet, the IP address is automatically suggested. Alternatively, you can enter an address.

      • Managed — Select this check box to configure the interface during provisioning to use the orcharhino Proxy provided DHCP and DNS services.

      • Primary — Select this check box to use the DNS name from this interface as the host portion of the FQDN.

      • Provision — Select this check box to use this interface for provisioning. This means TFTP boot takes place using this interface, or, in the case of image-based provisioning, the script to complete the provisioning is executed through this interface. Note that many provisioning tasks, such as downloading RPMs with Anaconda or setting up Puppet in a %post script, use the primary interface.

      • Virtual NIC — Select this check box if this interface is not a physical device. This setting has two options:

        • Tag — Optionally set a VLAN tag. If unset, the tag will be the VLAN ID of the subnet.

        • Attached to — Enter the device name of the interface this virtual interface is attached to.

    2. Click OK to save the interface configuration.

    3. Optionally, click Add Interface to include an additional network interface. See Adding Network Interfaces for details.

    4. Click Submit to apply the changes and exit.

  6. On the Operating System tab, enter the required details. For Red Hat operating systems, select Synced Content for Media Selection. If you want to use non-Red Hat operating systems, select All Media, then select the installation media from the Media Selection list. You can select a partition table from the list or enter a custom partition table in the Custom partition table field. You cannot specify both.

  7. On the Parameters tab, click Add Parameter to add any parameter variables that you want to pass to job templates at run time. This includes all Puppet class parameters, Ansible playbook parameters, and host parameters that you want to associate with the host. To use a parameter variable with an Ansible job template, you must add a Host Parameter.

  8. On the Additional Information tab, enter additional information about the host.

  9. Click Submit to complete your provisioning request.

CLI procedure
  • To create a host associated to a host group, enter the following command:

    # hammer host create \
    --name "host_name" \
    --hostgroup "hostgroup_name" \
    --interface="primary=true, \
                provision=true, \
                mac=mac_address, \
                ip=ip_address" \
    --organization "Your_Organization" \
    --location "Your_Location" \
    --ask-root-password yes

    This command prompts you to specify the root password. You must specify the host's IP address and MAC address. Other properties of the primary network interface can be inherited from the host group or set using the --subnet and --domain parameters. You can set additional interfaces using the --interface option, which accepts a list of key-value pairs. For the list of available interface settings, enter the hammer host create --help command.
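    To confirm that the host was created, you can query it with hammer. This is a minimal sketch; the FQDN is an example value and includes the domain inherited from the host group:

    # hammer host info --name "host_name.example.com"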

Changing a Module Stream for a Host

If a host runs Red Hat Enterprise Linux 8 or a clone operating system, you can modify the module stream for the repositories you install.

After you create the host, you can enable, disable, install, update, and remove module streams from your host in the orcharhino management UI.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > Content Hosts and click the name of the host that contains the modules you want to change.

  2. Click the Module Streams tab.

  3. From the Available Module Streams list, locate the module that you want to change. You can use the Filter field to refine the list entries. You can also use the Filter Status list to search for modules with a specific status.

  4. From the Actions list, select the change that you want to make to the module.

  5. In the Job Invocation window, ensure that the job information is accurate. Change any details that you require, and then click Submit.

Creating a Host Group

If you create a high volume of hosts, many of them can have common settings and attributes. Adding these settings and attributes for every new host is time-consuming. If you use host groups, you can apply common attributes to the hosts that you create.

A host group functions as a template for common host settings, containing many of the same details that you provide to hosts. When you create a host with a host group, the host inherits the defined settings from the host group. You can then provide additional details to individualize the host.

To use the CLI instead of the orcharhino management UI, see the CLI procedure.

Host Group Hierarchy

You can create a hierarchy of host groups. Aim to have one base level host group that represents all hosts in your organization and provides general settings, and then nested groups that provide specific settings. For example, you can have a base level host group that defines the operating system, and two nested host groups that inherit the base level host group:

  • Hostgroup: Base (Red Hat Enterprise Linux 7.6)

    • Hostgroup: Webserver (applies the httpd Puppet class)

      • Host: webserver1.example.com (web server)

      • Host: webserver2.example.com (web server)

    • Hostgroup: Storage (applies the nfs Puppet class)

      • Host: storage1.example.com (storage server)

      • Host: storage2.example.com (storage server)

    • Host: custom.example.com (custom host)

In this example, all hosts use Red Hat Enterprise Linux 7.6 as their operating system because of their inheritance of the Base host group. The two web server hosts inherit the settings from the Webserver host group, which includes the httpd Puppet class and the settings from the Base host group. The two storage servers inherit the settings from the Storage host group, which includes the nfs Puppet class and the settings from the Base host group. The custom host only inherits the settings from the Base host group.

Procedure
  1. In the orcharhino management UI, navigate to Configure > Host Groups and click Create Host Group.

  2. If you have an existing host group that you want to inherit attributes from, you can select a host group from the Parent list. If you do not, leave this field blank.

  3. Enter a Name for the new host group.

  4. Enter any further information that you want future hosts to inherit.

  5. Click the Ansible Roles tab, and from the Ansible Roles list, select one or more roles that you want to add to the host. Use the arrow icon to manage the roles that you add or remove.

  6. Click the additional tabs and add any details that you want to attribute to the host group.

    Puppet fails to retrieve the Puppet CA certificate while registering a host with a host group associated with a Puppet environment created inside a Production environment.

    To create a suitable Puppet environment to be associated with a host group, manually create a directory and change the owner:

    # mkdir /etc/puppetlabs/code/environments/example_environment
    # chown apache /etc/puppetlabs/code/environments/example_environment
  7. Click Submit to save the host group.

CLI procedure
  • Create the host group with the hammer hostgroup create command. For example:

    # hammer hostgroup create --name "Base" \
    --lifecycle-environment "Production" --content-view "Base" \
    --puppet-environment "production" --content-source-id 1 \
    --puppet-ca-proxy-id 1 --puppet-proxy-id 1 --domain "example.com" \
    --subnet "ACME's Internal Network" --architecture "x86_64" \
    --operatingsystem "RedHat 7.2" --medium-id 9 \
    --partition-table "Kickstart default" --root-pass "p@55w0rd!" \
    --locations "New York" --organizations "ACME"

Creating a Host Group for Each Lifecycle Environment

Use this procedure to create a host group for the Library lifecycle environment and add nested host groups for other lifecycle environments.

Procedure

To create a host group for each life cycle environment, run the following Bash script:

MAJOR="7"
ARCH="x86_64"
ORG="Your Organization"
LOCATIONS="Your Location"
PTABLE_NAME="Kickstart default"
DOMAIN="example.com"

hammer --output csv --no-headers lifecycle-environment list --organization "${ORG}" | cut -d ',' -f 2 | while read LC_ENV; do
  [[ ${LC_ENV} == "Library" ]] && continue

  hammer hostgroup create --name "rhel-${MAJOR}server-${ARCH}-${LC_ENV}" \
    --architecture "${ARCH}" \
    --partition-table "${PTABLE_NAME}" \
    --domain "${DOMAIN}" \
    --organizations "${ORG}" \
    --query-organization "${ORG}" \
    --locations "${LOCATIONS}" \
    --lifecycle-environment "${LC_ENV}"
done
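For example, you can save the script as create_hostgroups.sh, adjust the variables at the top to match your environment, run it, and then list the resulting host groups. The file name is an example:

  # bash create_hostgroups.sh
  # hammer hostgroup list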

Changing the Group of a Host

Use this procedure to change the group of a host.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > All hosts.

  2. Select the check box of the host you want to change.

  3. From the Select Action list, select Change Group. A new option window opens.

  4. From the Host Group list, select the group that you want for your host.

  5. Click Submit.
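You can make the same change from the command line with hammer. This is a sketch with example names; some hammer versions expect --hostgroup-title or --hostgroup-id instead of --hostgroup:

  # hammer host update --name "webserver1.example.com" --hostgroup "Webserver"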

Changing the Environment of a Host

Use this procedure to change the environment of a host.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > All hosts.

  2. Select the check box of the host you want to change.

  3. From the Select Action list, select Change Environment. A new option window opens.

  4. From the Environment list, select the new environment for your host.

  5. Click Submit.

Changing the Managed Status of a Host

Hosts provisioned by orcharhino are Managed by default. When a host is set to Managed, you can configure additional host parameters from orcharhino server. These additional parameters are listed on the Operating System tab. If you change any settings on the Operating System tab, they will not take effect until you set the host to build and reboot it.

If you need to obtain reports about configuration management on systems using an operating system not supported by orcharhino, set the host to Unmanaged.

Use this procedure to switch a host between Managed and Unmanaged status.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > All hosts.

  2. Select the host.

  3. Click Edit.

  4. Click Manage host or Unmanage host to change the host’s status.

  5. Click Submit.
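If you prefer the CLI, you can also toggle the managed flag with hammer. This is a sketch with an example host name:

  # hammer host update --name "host1.example.com" --managed false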

Assigning a Host to a Specific Organization

Use this procedure to assign a host to a specific organization. For general information about organizations and how to configure them, see Managing Organizations in the Content Management Guide.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > All hosts.

  2. Select the check box of the host you want to change.

  3. From the Select Action list, select Assign Organization. A new option window opens.

  4. From the Select Organization list, select the organization that you want to assign your host to. Select the check box Fix Organization on Mismatch.

    A mismatch happens if there is a resource associated with a host, such as a domain or subnet, and at the same time not associated with the organization you want to assign the host to. The option Fix Organization on Mismatch will add such a resource to the organization, and is therefore the recommended choice. The option Fail on Mismatch will always result in an error message. For example, reassigning a host from one organization to another will fail, even if there is no actual mismatch in settings.

  5. Click Submit.

Assigning a Host to a Specific Location

Use this procedure to assign a host to a specific location. For general information about locations and how to configure them, see Creating a Location in the Content Management Guide.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > All hosts.

  2. Select the check box of the host you want to change.

  3. From the Select Action list, select Assign Location. A new option window opens.

  4. Navigate to the Select Location list and choose the location that you want for your host. Select the check box Fix Location on Mismatch.

    A mismatch happens if there is a resource associated with a host, such as a domain or subnet, and at the same time not associated with the location you want to assign the host to. The option Fix Location on Mismatch will add such a resource to the location, and is therefore the recommended choice. The option Fail on Mismatch will always result in an error message. For example, reassigning a host from one location to another will fail, even if there is no actual mismatch in settings.

  5. Click Submit.

Removing a Host from orcharhino

Use this procedure to remove a host from orcharhino.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > All hosts or Hosts > Content Hosts. Note that it makes no difference whether you remove a host from the All hosts page or the Content Hosts page. In both cases, orcharhino removes the host completely.

  2. Select the hosts that you want to remove.

  3. From the Select Action list, select Delete Hosts.

  4. Click Submit to remove the host from orcharhino permanently.

By default, the Destroy associated VM on host delete setting is set to no. If a host record that is associated with a virtual machine is deleted, the virtual machine will remain on the compute resource.

To delete a virtual machine on the compute resource, navigate to Administer > Settings and select the Provisioning tab. Setting Destroy associated VM on host delete to yes deletes the virtual machine if the host record that is associated with the virtual machine is deleted. To avoid deleting the virtual machine in this situation, disassociate the virtual machine from orcharhino without removing it from the compute resource or change the setting.
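To remove a host from the command line instead, you can use hammer. This is a sketch with an example host name; the same considerations about associated virtual machines apply:

  # hammer host delete --name "host1.example.com"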

Disassociating a Virtual Machine from orcharhino Without Removing It from a Hypervisor

  1. In the orcharhino management UI, navigate to Hosts > All Hosts and select the check box to the left of the hosts to be disassociated.

  2. From the Select Action list, select the Disassociate Hosts button.

  3. Optionally, select the check box to keep the hosts for future action.

  4. Click Submit.

Registering a Host to orcharhino

Hosts can be registered to orcharhino by generating a curl command on orcharhino and running this command on hosts. This method uses two templates: the Global Registration template and the Host initial configuration template. This gives you complete control over the host registration process. You can set the default templates by navigating to Administer > Settings, and clicking the Provisioning tab.

Prerequisites
  • The orcharhino user that generates the curl command must have the create_hosts permission.

  • You must have root privileges on the host that you want to register.

  • You must have an activation key created.

  • Optional: If you want to register hosts to Red Hat Insights, you must synchronize the rhel-7-server-rpms repository and make it available in the activation key that you use. This is required to install the insights-client package on hosts.

  • Optional: If you want to register hosts through orcharhino Proxy, ensure that the Registration feature is enabled on this orcharhino Proxy.

    Navigate to Infrastructure > orcharhino Proxies, click the orcharhino Proxy that you want to use, and locate the Registration feature in the Active features list.

    Optional: If the Registration feature is not enabled on your orcharhino Proxy, enter the following command on the orcharhino Proxy to enable it:

    # foreman-installer --foreman-proxy-registration \
    --foreman-proxy-templates \
    --foreman-proxy-template-url 'http://orcharhino-proxy.example.com'
Procedure
  1. Navigate to Hosts > Register Host.

  2. Optional: From the Host Group list, select the host group to associate the hosts with. The Operating System, Activation Key(s), and Lifecycle environment fields inherit their values from the host group.

  3. From the Operating System list, select the operating system of hosts that you want to register.

  4. Optional: From the orcharhino Proxy list, select the orcharhino Proxy to register hosts through. You must select the internal orcharhino Proxy if you do not want to use an external orcharhino Proxy.

  5. Optional: Insecure - Select this option to make the first call insecure. During this first call, hosts download the CA file from orcharhino. Hosts use this CA file to connect to orcharhino for all future calls, making them secure.

    If an attacker, located in the network between orcharhino and a host, fetches the CA file from the first insecure call, the attacker will be able to access the content of the API calls to and from the registered host and the JSON Web Tokens (JWT). Therefore, if you have chosen to deploy SSH keys during registration, the attacker will be able to access the host using the SSH key.

  6. From the Setup REX list, select whether you want to deploy orcharhino SSH keys to hosts or not.

    If set to Yes, SSH keys are installed on the registered host. The inherited value is based on the host_registration_remote_execution parameter. It can be inherited, for example, from the host group, operating system, or organization. When overridden, the selected value is stored at the host parameter level.

  7. From the Setup Insights list, select whether you want to install insights-client and register the hosts to Insights.

    The Insights tool is available for Red Hat Enterprise Linux only. It has no effect on other operating systems. You must enable the following repositories on a registered machine:

    • RHEL 6: rhel-6-server-rpms

    • RHEL 7: rhel-7-server-rpms

    • RHEL 8: rhel-8-for-x86_64-appstream-rpms (The insights-client package is installed by default on RHEL 8, except in environments where RHEL 8 was deployed with the "Minimal Install" option.)

  8. Optional: orcharhino uses JSON Web Tokens (JWT) for authentication. The duration of this token defines how long the generated curl command remains valid. If you want to change the duration of the token, enter the required duration in the Token lifetime (hours) field. You can set the duration from 0 to 999,999 hours, or make it unlimited.

    Note that orcharhino uses the permissions of the user who generates the curl command for authorization of hosts. If the user loses permissions or gains additional permissions, the permissions of the JWT change too. Therefore, do not delete, block, or change the permissions of the user during the token duration. Also, the scope of the JWT is limited to the registration endpoints only; it cannot be used anywhere else.

  9. Optional: In the Remote Execution Interface field, enter a network interface that hosts must use for the SSH connection. If you keep this field blank, orcharhino uses the default network interface.

  10. Optional: Install packages - Install packages on the host when it is registered. This can be set by the host_packages parameter.

  11. Optional: Update packages - Update packages on the host when it is registered. This can be set by the host_update_packages parameter.

  12. Optional: Repository - A repository to add before the registration is performed. For example, it can be useful to make the subscription-manager packages available for the registration. For Red Hat family distributions, this is the URL of the repository, for example http://rpm.example.com/. For Debian OS families, it is the whole list file content, for example 'deb http://deb.example.com/ buster 1.0'.

  13. Optional: Repository GPG key URL - If packages are GPG signed, you can specify the public key here to verify the package signatures. Specify it in ASCII form with the GPG public key header.

  14. In the Activation Key(s) field, enter one or more activation keys to assign to hosts.

  15. Optional: Lifecycle environment - Select the lifecycle environment to assign to hosts.

  16. Optional: Ignore errors - Select this option to ignore subscription manager errors.

  17. Optional: Force - Remove any katello-ca-consumer rpms before registration and run subscription-manager with the --force argument.

  18. Click Generate command.

  19. Copy the generated curl command to enter it on the hosts.

    The following is an example of the curl command with the --insecure option:

    curl -sS --insecure https://orcharhino.example.com/register...

    If you do not want to call the curl command with the --insecure option, you can manually copy and install the CA file on each host.

    To do this, find where orcharhino stores the CA file by navigating to Administer > Settings > Authentication and locating the value of the SSL CA file setting.

    Copy the CA file to the /etc/pki/ca-trust/source/anchors/ directory on hosts and enter the following commands:

    # update-ca-trust enable
    # update-ca-trust
  20. On the hosts that you want to register, enter the curl command as root.

Customizing the Registration Templates

Use information in this section if you want to customize the registration process.

Note that all default templates in orcharhino are locked. If you want to customize the registration process, you need to clone the default templates and edit the clones. Then, in Administer > Settings > Provisioning change the Default Global registration template and Default 'Host initial configuration' template settings to point to your custom templates.

Templates

The registration process uses the following registration templates:

  • The Global Registration template contains steps for registering hosts to orcharhino. This template renders when hosts access the /register endpoint.

  • The Linux host_init_config default template contains steps for initial configuration of hosts after they are registered.

Global Parameters

You can configure the following global parameters by navigating to Configure > Global Parameters:

  • The host_registration_remote_execution parameter is used in the remote_execution_ssh_keys snippet; the default value is true.

  • The host_registration_insights parameter is used in the insights snippet; the default value is false.

  • The host_packages parameter is for installing packages on the host.

  • The remote_execution_ssh_keys, remote_execution_ssh_user, remote_execution_create_user, and remote_execution_effective_user_method parameters are used in the remote_execution_ssh_keys snippet. For more details, see the snippet.
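You can also set these global parameters from the command line. The following is a minimal sketch using one of the parameters listed above; the value is an example:

  # hammer global-parameter set --name host_registration_remote_execution --value false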

Snippets

The following snippets are used in the Linux host_init_config default template:

  • The remote_execution_ssh_keys snippet deploys SSH keys to the host only when the host_registration_remote_execution parameter is true.

  • The insights snippet downloads and installs the Red Hat Insights client when global parameter host_registration_insights is set to true.

  • The puppetlabs_repo and puppet_setup snippets download and install the Puppet agent on the host, but only when a Puppet master is present.

  • The host_init_config_post snippet is empty and is intended for custom user actions during initial host configuration.

Variables

This table describes what variables are used in the Global Registration template.

Table 1. The Global Registration Template Variables

  • @user (command argument: none) - The current authenticated user object.

  • @organization (command argument: organization_id) - If organization_id is not set, the user's default organization is used, or the first organization from the user's organization list.

  • @location (command argument: location_id) - If location_id is not set, the user's default location is used, or the first location from the user's location list.

  • @hostgroup (command argument: hostgroup_id) - The host group of the host.

  • @operatingsystem (command argument: operatingsystem_id) - The host operating system.

  • @setup_insights (command argument: setup_insights) - Overrides the value of the host_registration_insights global parameter for the registered host and installs the Insights client.

  • @setup_remote_execution (command argument: setup_remote_execution) - Overrides the value of the host_registration_remote_execution global parameter for the registered host and deploys SSH keys for remote execution.

  • @remote_execution_interface (command argument: remote_execution_interface) - Sets the default interface of the host for remote execution.

  • @packages (command argument: packages) - Packages to install.

  • @repo (command argument: repo) - A repository to add on the host.

  • @repo_gpg_key_url (command argument: repo_gpg_key_url) - Sets the repository GPG key from a URL.

  • @activation_keys (command argument: activation_keys) - Host activation keys.

  • @force (command argument: force) - Removes any katello-ca-consumer* rpms and runs the subscription-manager register command with the --force argument.

  • @ignore_subman_errors (command argument: ignore_subman_errors) - Ignores subscription-manager errors.

  • @lifecycle_environment_id (command argument: lifecycle_environment_id) - The lifecycle environment ID.

  • @registration_url (command argument: none) - The URL for the /register endpoint.

Adding Network Interfaces

orcharhino supports specifying multiple network interfaces for a single host. You can configure these interfaces when creating a new host as described in Creating a Host in orcharhino or when editing an existing host.

There are several types of network interfaces that you can attach to a host. When adding a new interface, select one of:

  • Interface: Allows you to specify an additional physical or virtual interface. There are two types of virtual interfaces you can create. Use VLAN when the host needs to communicate with several (virtual) networks using a single interface, while these networks are not accessible to each other. Use alias to add an additional IP address to an existing interface.

    For more information about adding a physical interface, see Adding a Physical Interface.

    For more information about adding a virtual interface, see Adding a Virtual Interface.

  • Bond: Creates a bonded interface. NIC bonding is a way to bind multiple network interfaces together into a single interface that appears as a single device and has a single MAC address. This enables two or more network interfaces to act as one, increasing the bandwidth and providing redundancy. See Adding a Bonded Interface for details.

  • BMC: Baseboard Management Controller (BMC) allows you to remotely monitor and manage the physical state of machines. For more information about BMC, see Enabling Power Management on Managed Hosts in Installing orcharhino. For more information about configuring BMC interfaces, see Adding a Baseboard Management Controller (BMC) Interface.

Additional interfaces have the Managed flag enabled by default, which means the new interface is configured automatically during provisioning by the DNS and DHCP orcharhino Proxies associated with the selected subnet. This requires a subnet with correctly configured DNS and DHCP orcharhino Proxies. If you use a Kickstart method for host provisioning, configuration files are automatically created for managed interfaces in the post-installation phase at /etc/sysconfig/network-scripts/ifcfg-interface_id.
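As an illustration, a configuration file generated for a managed interface on a Red Hat family host might look similar to the following. The device name and addresses are example values and depend on your subnet and interface settings:

  # cat /etc/sysconfig/network-scripts/ifcfg-eth1
  DEVICE=eth1
  BOOTPROTO=none
  IPADDR=192.168.100.20
  NETMASK=255.255.255.0
  ONBOOT=yes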

Virtual and bonded interfaces currently require a MAC address of a physical device. Therefore, the configuration of these interfaces works only on bare-metal hosts.

Adding a Physical Interface

Use this procedure to add an additional physical interface to a host.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > All hosts.

  2. Click Edit next to the host you want to edit.

  3. On the Interfaces tab, click Add Interface.

  4. Keep the Interface option selected in the Type list.

  5. Specify a MAC address. This setting is required.

  6. Specify the Device Identifier, for example eth0. The identifier is used to specify this physical interface when creating bonded interfaces, VLANs, and aliases.

  7. Specify the DNS name associated with the host's IP address. orcharhino saves this name in the orcharhino Proxy associated with the selected domain (the "DNS A" field) and in the orcharhino Proxy associated with the selected subnet (the "DNS PTR" field). A single host can therefore have several DNS entries.

  8. Select a domain from the Domain list. To create and manage domains, navigate to Infrastructure > Domains.

  9. Select a subnet from the Subnet list. To create and manage subnets, navigate to Infrastructure > Subnets.

  10. Specify the IP address. Managed interfaces with an assigned DHCP orcharhino Proxy require this setting for creating a DHCP lease. DHCP-enabled managed interfaces are automatically provided with a suggested IP address.

  11. Select whether the interface is Managed. If the interface is managed, configuration is pulled from the associated orcharhino Proxy during provisioning, and DNS and DHCP entries are created. If using kickstart provisioning, a configuration file is automatically created for the interface.

  12. Select whether this is the Primary interface for the host. The DNS name from the primary interface is used as the host portion of the FQDN.

  13. Select whether this is the Provision interface for the host. TFTP boot takes place using the provisioning interface. For image-based provisioning, the script to complete the provisioning is executed through the provisioning interface.

  14. Select whether to use the interface for Remote execution.

  15. Leave the Virtual NIC check box clear.

  16. Click OK to save the interface configuration.

  17. Click Submit to apply the changes to the host.
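You can also add an interface to an existing host from the command line. The following hammer sketch uses example values; run hammer host interface create --help to see the options available in your version:

  # hammer host interface create --host "host1.example.com" \
  --identifier eth1 \
  --mac "52:54:00:aa:bb:cc" \
  --type interface \
  --managed true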

Adding a Virtual Interface

Use this procedure to configure a virtual interface for a host. This can be either a VLAN or an alias interface.

An alias interface is an additional IP address attached to an existing interface. An alias interface automatically inherits a MAC address from the interface it is attached to; therefore, you can create an alias without specifying a MAC address. The interface must be specified in a subnet with boot mode set to static.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > All hosts.

  2. Click Edit next to the host you want to edit.

  3. On the Interfaces tab, click Add Interface.

  4. Keep the Interface option selected in the Type list.

  5. Specify the general interface settings. The applicable configuration options are the same as for the physical interfaces described in Adding a Physical Interface.

    Specify a MAC address for managed virtual interfaces so that the configuration files for provisioning are generated correctly. However, a MAC address is not required for virtual interfaces that are not managed.

    If creating a VLAN, specify the identifier in the form eth1.10 in the Device Identifier field. If creating an alias, use an identifier in the form eth1:10.

  6. Select the Virtual NIC check box. Additional configuration options specific to virtual interfaces are appended to the form:

    • Tag: Optionally set a VLAN tag to trunk a network segment from the physical network through to the virtual interface. If you do not specify a tag, managed interfaces inherit the VLAN tag of the associated subnet. User-specified entries from this field are not applied to alias interfaces.

    • Attached to: Specify the identifier of the physical interface to which the virtual interface belongs, for example eth1. This setting is required.

  7. Click OK to save the interface configuration.

  8. Click Submit to apply the changes to the host.

Adding a Bonded Interface

Use this procedure to configure a bonded interface for a host. To use the CLI instead of the orcharhino management UI, see the CLI procedure.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > All hosts.

  2. Click Edit next to the host you want to edit.

  3. On the Interfaces tab, click Add Interface.

  4. Select Bond from the Type list. Additional type-specific configuration options are appended to the form.

  5. Specify the general interface settings. The applicable configuration options are the same as for the physical interfaces described in Adding a Physical Interface.

    Bonded interfaces use IDs in the form of bond0 in the Device Identifier field.

    A single MAC address is sufficient.

  6. Specify the configuration options specific to bonded interfaces:

    • Mode: Select the bonding mode that defines a policy for fault tolerance and load balancing. See Bonding Modes Available in orcharhino for a brief description of each bonding mode.

    • Attached devices: Specify a comma-separated list of identifiers of attached devices. These can be physical interfaces or VLANs.

    • Bond options: Specify a space-separated list of configuration options, for example miimon=100.

  7. Click OK to save the interface configuration.

  8. Click Submit to apply the changes to the host.

CLI procedure
  • To create a host with a bonded interface, enter the following command:

    # hammer host create --name bonded_interface \
    --hostgroup-id 1 \
    --ip=192.168.100.123 \
    --mac=52:54:00:14:92:2a \
    --subnet-id=1 \
    --managed true \
       --interface="identifier=eth1, \
                   mac=52:54:00:62:43:06, \
                   managed=true, \
                   type=Nic::Managed, \
                   domain_id=1, \
                   subnet_id=1" \
       --interface="identifier=eth2, \
                   mac=52:54:00:d3:87:8f, \
                   managed=true, \
                   type=Nic::Managed, \
                   domain_id=1, \
                   subnet_id=1" \
       --interface="identifier=bond0, \
                   ip=172.25.18.123, \
                   type=Nic::Bond, \
                   mode=active-backup, \
                   attached_devices=[eth1,eth2], \
                   managed=true, \
                   domain_id=1, \
                   subnet_id=1" \
    --organization "Your_Organization" \
    --location "Your_Location" \
    --ask-root-password yes
Table 2. Bonding Modes Available in orcharhino

  • balance-rr - Transmissions are received and sent sequentially on each bonded interface.

  • active-backup - Transmissions are received and sent through the first available bonded interface. Another bonded interface is only used if the active bonded interface fails.

  • balance-xor - Transmissions are based on the selected hash policy. In this mode, traffic destined for specific peers is always sent over the same interface.

  • broadcast - All transmissions are sent on all bonded interfaces.

  • 802.3ad - Creates aggregation groups that share the same settings. Transmits and receives on all interfaces in the active group.

  • balance-tlb - The outgoing traffic is distributed according to the current load on each bonded interface.

  • balance-alb - Receive load balancing is achieved through Address Resolution Protocol (ARP) negotiation.

Adding a Baseboard Management Controller (BMC) Interface

Use this procedure to configure a baseboard management controller (BMC) interface for a host that supports this feature.

Prerequisites
  • The ipmitool package is installed.

  • You know the MAC address, IP address, and other details of the BMC interface on the host, and the appropriate credentials for that interface.

    You only need the MAC address for the BMC interface if the BMC interface is managed, so that it can create a DHCP reservation.

Procedure
  1. Enable BMC on the orcharhino Proxy server if it is not already enabled:

    1. Configure BMC power management on orcharhino Proxy by running the foreman-installer script with the following options:

      # foreman-installer --foreman-proxy-bmc=true \
      --foreman-proxy-bmc-default-provider=ipmitool
    2. In the orcharhino management UI, navigate to Infrastructure > orcharhino Proxies.

    3. From the list in the Actions column, click Refresh. The list in the Features column should now include BMC.

  2. In the orcharhino management UI, navigate to Hosts > All hosts.

  3. Click Edit next to the host you want to edit.

  4. On the Interfaces tab, click Add Interface.

  5. Select BMC from the Type list. Type-specific configuration options are appended to the form.

  6. Specify the general interface settings. The applicable configuration options are the same as for the physical interfaces described in Adding a Physical Interface.

  7. Specify the configuration options specific to BMC interfaces:

    • Username and Password: Specify any authentication credentials required by BMC.

    • Provider: Specify the BMC provider.

  8. Click OK to save the interface configuration.

  9. Click Submit to apply the changes to the host.
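After saving the interface, you can optionally verify the BMC credentials from the orcharhino Proxy with ipmitool. This is a sketch with example values, assuming the BMC supports IPMI over LAN:

  # ipmitool -I lanplus -H 192.168.100.210 -U admin -P password chassis status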

Upgrading Hosts from RHEL 7 to RHEL 8

You can use a job template to upgrade your Red Hat Enterprise Linux 7 hosts to Red Hat Enterprise Linux 8.

Prerequisites
Procedure
  1. On orcharhino, enable the Leapp plugin:

    # foreman-installer --enable-foreman-plugin-leapp
  2. In the orcharhino management UI, navigate to Hosts > All Hosts.

  3. Select the hosts that you want to upgrade to Red Hat Enterprise Linux 8.

  4. In the upper right of the Hosts window, from the Select Action list, select Preupgrade check with Leapp.

  5. Click Submit to start the pre-upgrade check.

  6. When the check is finished, click the Leapp preupgrade report tab to see whether Leapp has found any issues on your RHEL 7 hosts. Issues that have the Inhibitor flag are considered crucial and are likely to break the upgrade procedure. Some issues have linked documentation that describes how to fix them.

  7. Optional: If you have issues that have commands associated with them, you can fix them with a remote job. To do that, select these issues, click the Fix Selected button, and submit the job.

  8. After you fix the issues, click the Rerun button, and then click Submit to run the pre-upgrade check again to verify that your RHEL 7 hosts do not have any issues and are ready to be upgraded.

  9. When your systems are ready for the upgrade, click the Run Upgrade button and click Submit to start the upgrade.

Host Management and Monitoring Using Web Console

Web Console is an interactive web interface that you can use to perform actions and monitor Red Hat Enterprise Linux hosts. You can enable a remote-execution feature to integrate orcharhino with Web Console. When you install Web Console on a host that you manage with orcharhino, you can view the Web Console dashboards of that host from within the orcharhino management UI. You can also use the features that are integrated with Web Console, for example, Lorax Composer.

Integrating orcharhino with Web Console

By default, Web Console integration is disabled in orcharhino. If you want to access Web Console features for your hosts from within orcharhino, you must first enable Web Console integration on orcharhino server.

Procedure
  • On orcharhino server, run foreman-installer with the --enable-foreman-plugin-remote-execution-cockpit option:

    # foreman-installer --enable-foreman-plugin-remote-execution-cockpit

Managing and Monitoring Hosts Using Web Console

You can access the Web Console web UI through the orcharhino management UI and use the functionality to manage and monitor hosts in orcharhino.

Prerequisites
  • Web Console is enabled in orcharhino.

  • Web Console is installed on the host that you want to view. For an example installation, see the sketch after this list.

  • orcharhino or orcharhino Proxy can authenticate to the host with SSH keys. For more information, see Distributing SSH Keys for Remote Execution.
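The following is a minimal sketch for installing Web Console on a Red Hat Enterprise Linux host, as referenced in the prerequisites. It assumes the cockpit packages shipped with RHEL; on RHEL 8, use dnf instead of yum:

  # yum install cockpit
  # systemctl enable --now cockpit.socket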

Procedure
  1. In the orcharhino management UI, navigate to Hosts > All Hosts and select the host that you want to manage and monitor with Web Console.

  2. In the upper right of the host window, click Web Console.

You can now access the full range of features available for host monitoring and management, for example, Lorax Composer, through the Web Console.

Using Report Templates to Monitor Hosts

You can use report templates to query orcharhino data to obtain information about, for example, host status, registered hosts, applicable errata, applied errata, subscription details, and user activity. You can use the report templates that ship with orcharhino or write your own custom report templates to suit your requirements. The reporting engine uses the embedded Ruby (ERB) syntax. For more information about writing templates and ERB syntax, see Template Writing Reference.

You can create a template, or clone a template and edit the clone. For help with the template syntax, click a template and click the Help tab.

Generating Host Monitoring Reports

To view the report templates in the orcharhino management UI, navigate to Monitor > Report Templates.

To schedule reports, configure a cron job or use the orcharhino management UI.

Procedure
  1. In the orcharhino management UI, navigate to Monitor > Report Templates.

  2. To the right of the report template that you want to use, click Generate.

  3. Optional: To schedule a report, to the right of the Generate at field, click the icon to select the date and time you want to generate the report at.

  4. Optional: To send a report to an e-mail address, select the Send report via e-mail check box, and in the Deliver to e-mail addresses field, enter the required e-mail address.

  5. Optional: Apply search query filters. To view all available results, do not populate the filter field with any values.

  6. Click Submit. A CSV file that contains the report is downloaded. If you have selected the Send report via e-mail check box, the host monitoring report is sent to your e-mail address.

CLI procedure

To generate a report, complete the following steps:

  1. List all available report templates:

    # hammer report-template list
  2. Generate a report:

    # hammer report-template generate --id template_ID

    This command waits until the report fully generates before completing. If you want to generate the report as a background task, you can use the hammer report-template schedule command.
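    Because the generated report is printed to standard output, you can redirect it to a file. The template ID and file path below are example values:

    # hammer report-template generate --id 114 > /tmp/host_statuses.csv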

Creating a Report Template

In orcharhino, you can create a report template and customize the template to suit your requirements. You can import existing report templates and further customize them with snippets and template macros.

Report templates use Embedded Ruby (ERB) syntax. To view information about working with ERB syntax and macros, in the orcharhino management UI, navigate to Monitor > Report Templates, and click Create Template, and then click the Help tab.

When you create a report template in orcharhino, safe mode is enabled by default. For more information about safe mode, see Report Template Safe Mode.

For more information about writing templates, see the Template Writing Reference.

For more information about macros you can use in report templates, see Template Macros.

Procedure
  1. In the orcharhino management UI, navigate to Monitor > Report Templates, and click Create Template.

  2. In the Name field, enter a unique name for your report template.

  3. If you want the template to be available to all locations and organizations, select Default.

  4. Create the template directly in the template editor or import a template from a text file by clicking Import. For more information about importing templates, see Importing Report Templates.

  5. Optional: In the Audit Comment field, you can add any useful information about this template.

  6. Click the Input tab, and in the Name field, enter a name for the input that you can reference in the template in the following format: input('name'). Note that you must save the template before you can reference this input value in the template body.

  7. Select whether the input value is mandatory. If the input value is mandatory, select the Required check box.

  8. From the Value Type list, select the type of input value that the user must input.

  9. Optional: If you want to use facts for template input, select the Advanced check box.

  10. Optional: In the Options field, define the options that the user can select from. If this field remains undefined, the users receive a free-text field in which they can enter the value they want.

  11. Optional: In the Default field, enter a value, for example, a host name, that you want to set as the default template input.

  12. Optional: In the Description field, you can enter information that you want to display as inline help about the input when you generate the report.

  13. Optional: Click the Type tab, and select whether this template is a snippet to be included in other templates.

  14. Click the Location tab and add the locations where you want to use the template.

  15. Click the Organizations tab and add the organizations where you want to use the template.

  16. Click Submit to save your changes.

Exporting Report Templates

You can export report templates that you create in orcharhino.

Procedure
  1. In the orcharhino management UI, navigate to Monitor > Report Templates.

  2. Locate the template that you want to export, and from the list in the Actions column, select Export.

  3. Repeat this action for every report template that you want to download.

An .erb file that contains the template is downloaded.

CLI procedure
  1. To view the report templates available for export, enter the following command:

    # hammer report-template list

    Note the template ID of the template that you want to export in the output of this command.

  2. To export a report template, enter the following command:

    # hammer report-template dump --id template_ID > example_export.erb

Exporting Report Templates Using the orcharhino API

You can use the orcharhino report_templates API to export report templates from orcharhino.

Procedure
  1. Use the following request to retrieve a list of available report templates:

    Example request:
    $ curl --insecure --user admin:redhat \
    --request GET \
    https://orcharhino.example.com/api/report_templates \
    | json_reformat

    In this example, the json_reformat tool is used to format the JSON output.

    Example response:
    {
        "total": 6,
        "subtotal": 6,
        "page": 1,
        "per_page": 20,
        "search": null,
        "sort": {
            "by": null,
            "order": null
        },
        "results": [
            {
                "created_at": "2019-11-20 17:49:52 UTC",
                "updated_at": "2019-11-20 17:49:52 UTC",
                "name": "Applicable errata",
                "id": 112
            },
            {
                "created_at": "2019-11-20 17:49:52 UTC",
                "updated_at": "2019-11-20 17:49:52 UTC",
                "name": "Applied Errata",
                "id": 113
            },
            {
                "created_at": "2019-11-30 16:15:24 UTC",
                "updated_at": "2019-11-30 16:15:24 UTC",
                "name": "Hosts - complete list",
                "id": 158
            },
            {
                "created_at": "2019-11-20 17:49:52 UTC",
                "updated_at": "2019-11-20 17:49:52 UTC",
                "name": "Host statuses",
                "id": 114
            },
            {
                "created_at": "2019-11-20 17:49:52 UTC",
                "updated_at": "2019-11-20 17:49:52 UTC",
                "name": "Registered hosts",
                "id": 115
            },
            {
                "created_at": "2019-11-20 17:49:52 UTC",
                "updated_at": "2019-11-20 17:49:52 UTC",
                "name": "Subscriptions",
                "id": 116
            }
        ]
    }
  2. Note the id of the template that you want to export, and use the following request to export the template:

    Example request:
    $ curl --insecure --output /tmp/Example_Export_Template.erb \
    --user admin:password --request GET \
    https://orcharhino.example.com/api/report_templates/158/export

    Note that 158 is an example ID of the template to export.

    In this example, the exported template is saved to /tmp/Example_Export_Template.erb.

Importing Report Templates

You can import a report template into the body of a new template that you want to create. Note that using the orcharhino management UI, you can only import templates individually. For bulk actions, use the orcharhino API. For more information, see Importing Report Templates Using the orcharhino API.

Prerequisite
  • You must have exported templates from orcharhino to import them to use in new templates. For more information, see Exporting Report Templates.

Procedure
  1. In the orcharhino management UI, navigate to Monitor > Report Templates.

  2. In the upper right of the Report Templates window, click Create Template.

  3. On the upper right of the Editor tab, click the folder icon, and select the .erb file that you want to import.

  4. Edit the template to suit your requirements.

  5. Click Submit.

For more information about customizing your new template, see Template Writing Reference.

Importing Report Templates Using the orcharhino API

You can use the orcharhino API to import report templates into orcharhino. Importing report templates using the orcharhino API automatically parses the report template metadata and assigns organizations and locations.

Prerequisites
Procedure
  1. Use the following example to format the template that you want to import to a .json file:

    # cat Example_Template.json
    {
        "name": "Example Template Name",
        "template": "
        Enter ERB Code Here
    "
    }
    Example JSON File with ERB Template:
    {
        "name": "Hosts - complete list",
        "template": "
    <%#
    name: Hosts - complete list
    snippet: false
    template_inputs:
    - name: host
      required: false
      input_type: user
      advanced: false
      value_type: plain
      resource_type: Katello::ActivationKey
    model: ReportTemplate
    -%>
    <% load_hosts(search: input('host')).each_record do |host| -%>
    <%
          report_row(
              'Server FQDN': host.name
          )
    -%>
    <%  end -%>
    <%= report_render %>
    "
    }
  2. Use the following request to import the template:

    $ curl --insecure --user admin:redhat \
    --data @Example_Template.json --header "Content-Type:application/json" \
    --request POST https://orcharhino.example.com/api/report_templates/import
  3. Use the following request to retrieve a list of report templates and validate that you can view the template in orcharhino:

    $ curl --insecure --user admin:redhat \
     --request GET https://orcharhino.example.com/api/report_templates | json_reformat

Report Template Safe Mode

When you create report templates in orcharhino, safe mode is enabled by default. Safe mode limits the macros and variables that you can use in the report template. Safe mode prevents rendering problems and enforces best practices in report templates. The list of supported macros and variables is available in the orcharhino management UI.

To view the macros and variables that are available, in the orcharhino management UI, navigate to Monitor > Report Templates and click Create Template. In the Create Template window, click the Help tab and expand Safe mode methods.

While safe mode is enabled, if you try to use a macro or variable that is not listed in Safe mode methods, the template editor displays an error message.

To view the status of safe mode in orcharhino, in the orcharhino management UI, navigate to Administer > Settings and click the Provisioning tab. Locate the Safemode rendering row to check the value.
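If you manage settings from the command line, you can read or change this setting with hammer. This is a sketch; the internal setting name safemode_render is an assumption based on the Safemode rendering label:

  # hammer settings set --name safemode_render --value true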

Configuring Host Collections

A host collection is a group of multiple content hosts. This feature enables you to perform the same action on multiple hosts at once. These actions can include the installation, removal, and update of packages and errata, change of assigned life cycle environment, and change of Content View. You can create host collections to suit your requirements, and those of your company. For example, group hosts in host collections by function, department, or business unit.

Creating a Host Collection

The following procedure shows how to create host collections.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > Host Collections.

  2. Click New Host Collection.

  3. Add the Name of the host collection.

  4. Clear Unlimited Content Hosts, and enter the desired maximum number of hosts in the Limit field.

  5. Add the Description of the host collection.

  6. Click Save.

CLI procedure
  • To create a host collection, enter the following command:

    # hammer host-collection create \
    --organization "Your_Organization" \
    --name hc_name
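    To confirm that the collection was created, you can list the host collections in the organization:

    # hammer host-collection list --organization "Your_Organization"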

Cloning a Host Collection

The following procedure shows how to clone a host collection.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > Host Collections.

  2. On the left hand panel, click the host collection you want to clone.

  3. Click Copy Collection.

  4. Specify a name for the cloned collection.

  5. Click Create.

Removing a Host Collection

The following procedure shows how to remove a host collection.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > Host Collections.

  2. Choose the host collection to be removed.

  3. Click Remove. An alert box appears:

    Are you sure you want to remove host collection Host Collection Name?
  4. Click Remove.

Adding a Host to a Host Collection

The following procedure shows how to add hosts to host collections.

Prerequisites

A host must be registered to orcharhino to add it to a Host Collection.

Note that if you add a host to a host collection, the orcharhino auditing system does not log the change.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > Host Collections.

  2. Select the host collection where the host should be added.

  3. On the Hosts tab, select the Add subtab.

  4. Select the hosts to be added from the table and click Add Selected.

CLI procedure

To add hosts to a host collection, enter the following command:

# hammer host-collection add-host \
--id hc_ID \
--host-ids host_ID1,host_ID2...

Removing a Host from a Host Collection

The following procedure shows how to remove hosts from host collections.

Note that if you remove a host from a host collection, the host collection record in the database is not modified so the orcharhino auditing system does not log the change.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > Host Collections.

  2. Choose the desired host collection.

  3. On the Hosts tab, select the List/Remove subtab.

  4. Select the hosts you want to remove from the host collection and click Remove Selected.

Adding Content to a Host Collection

These steps show how to add content to host collections in orcharhino.

Adding Packages to a Host Collection

The following procedure shows how to add packages to host collections.

Prerequisites
  • The content to be added should be available in one of the existing repositories or added prior to this procedure.

  • Content should be promoted to the environment where the hosts are assigned.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > Host Collections.

  2. Select the host collection where the package should be added.

  3. On the Collection Actions tab, click Package Installation, Removal, and Update.

  4. To update all packages, click the Update All Packages button to use the default method. Alternatively, select the drop-down icon to the right of the button to select a method to use. Selecting the via remote execution - customize first menu entry will take you to the Job invocation page where you can customize the action.

  5. Select the Package or Package Group radio button as required.

  6. In the field provided, specify the package or package group name. Then click:

    • Install - to install a new package using the default method. Alternatively, select the drop-down icon to the right of the button and select a method to use. Selecting the via remote execution - customize first menu entry will take you to the Job invocation page where you can customize the action.

    • Update - to update an existing package in the host collection using the default method. Alternatively, select the drop-down icon to the right of the button and select a method to use. Selecting the via remote execution - customize first menu entry will take you to the Job invocation page where you can customize the action.

Adding Errata to a Host Collection

The following procedure shows how to add errata to host collections.

Prerequisites
  • The errata to be added should be available in one of the existing repositories or added prior to this procedure.

  • Errata should be promoted to the environment where the hosts are assigned.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > Host Collections.

  2. Select the host collection where the errata should be added.

  3. On the Collection Actions tab, click Errata Installation.

  4. Select the errata you want to add to the host collection and click the Install Selected button to use the default method. Alternatively, select the drop-down icon to the right of the button to select a method to use. Selecting the via remote execution - customize first menu entry takes you to the Job invocation page where you can customize the action.

Removing Content from a Host Collection

The following procedure shows how to remove packages from host collections.

To Remove Content from a Host Collection:
  1. Click Hosts > Host Collections.

  2. Click the host collection where the package should be removed.

  3. On the Collection Actions tab, click Package Installation, Removal, and Update.

  4. Select the Package or Package Group radio button as required.

  5. In the field provided, specify the package or package group name.

  6. Click the Remove button to remove the package or package group using the default method. Alternatively, select the drop-down icon to the right of the button and select a method to use. Selecting the via remote execution - customize first menu entry will take you to the Job invocation page where you can customize the action.

Changing the Life Cycle Environment or Content View of a Host Collection

The following procedure shows how to change the assigned life cycle environment or Content View of host collections.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > Host Collections.

  2. Select the host collection whose life cycle environment or Content View you want to change.

  3. On the Collection Actions tab, click Change assigned Life Cycle Environment or Content View.

  4. Select the life cycle environment to be assigned to the host collection.

  5. Select the required Content View from the list.

  6. Click Assign.

    The changes take effect in approximately 4 hours. To make the changes take effect immediately, on the host, enter the following command:

    # subscription-manager refresh

    You can use remote execution to run this command on multiple hosts at the same time.
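
    For example, a minimal hammer sketch for running this command over remote execution (the job template name and search query are examples; adjust them to your setup):

    # hammer job-invocation create \
    --job-template "Run Command - SSH Default" \
    --inputs command="subscription-manager refresh" \
    --search-query "host_collection = Your_Host_Collection"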

Using Puppet for Configuration Management

Use this section as a guide to configuring orcharhino to use Puppet for configuration management on managed hosts.

Introduction to Puppet

Puppet is a configuration management tool using a declarative language to describe the state of managed hosts.

Puppet increases your productivity because you can administer multiple hosts simultaneously. At the same time, it decreases your configuration effort because Puppet makes it easy to verify and, if necessary, correct the state of hosts.

Puppet Architecture

Puppet uses a server-client architecture with managed hosts running the Puppet agent and orcharhino or any orcharhino Proxies acting as a Puppet server. The Puppet agent enforces the declared state on managed hosts. It sends facts to the Puppet server, receives catalogs that are compiled from Puppet manifests, and sends reports to your orcharhino. Differences between the declared state and the actual state are called drifts in Puppet terminology.

Puppet facts are information about the hosts running the Puppet agent. Running puppet facts on a host displays Puppet facts in JSON format. They are collected by the Puppet agent and reported to the Puppet server on each run.

orcharhino users can define Puppet parameters as key-value pairs which behave similarly to host parameters or Ansible variables.

This section uses a Puppet module to install, configure, and manage the ntp service as an example.

Prerequisites for Puppet

There are four requirements to run the Puppet agent on managed hosts:

  1. Synchronize the Puppet agent packages from puppet.com. For example, Puppet 6 for CentOS 7 on x86_64 or Puppet 6 for Ubuntu 20.04 with release focal, component puppet6, and architecture amd64.

    Manage these repositories by using products, sync plans, content views, composite content views, and lifecycle environments. For more information, see basic content management workflow in the Content Management Guide.

  2. Provide the Puppet agent to managed hosts using their activation keys.

  3. Add either enable-puppet5 or enable-puppet6 as a parameter to your host or host group. Set the parameter type to boolean and its value to true. See the example after this list.

  4. Set a Puppet environment, Puppet server, and Puppet CA when provisioning a host.
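
For step 3, a minimal hammer sketch for setting the parameter on a single host might look as follows (the host name is a placeholder; the --parameter-type option depends on your hammer version):

# hammer host set-parameter \
--host host.example.com \
--name enable-puppet6 \
--parameter-type boolean \
--value true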

Installing a Puppet Module from Puppet Forge

The Puppet Forge is a repository for pre-built Puppet modules. Puppet modules flagged as supported are officially supported and tested by Puppet Inc.

This example shows how to add the ntp module to managed hosts.

Procedure
  1. Navigate to forge.puppet.com and search for ntp. One of the first modules is puppetlabs/ntp.

  2. Connect to your orcharhino server using SSH and install the Puppet module:

    # puppet module install puppetlabs-ntp -i /etc/puppetlabs/code/environments/production/modules

    Use the -i parameter to specify the path and Puppet environment, for example production.

    Once the installation is completed, the output looks as follows:

    Notice: Preparing to install into /etc/puppetlabs/code/environments/production/modules ...
    Notice: Created target directory /etc/puppetlabs/code/environments/production/modules
    Notice: Downloading from https://forgeapi.puppet.com ...
    Notice: Installing -- do not interrupt ...
    /etc/puppetlabs/code/environments/production/modules
    └── puppetlabs-ntp (v8.3.0)
      └── puppetlabs-stdlib (v4.25.1) [/etc/puppetlabs/code/environments/production/modules]

An alternative way to install a Puppet module is to copy a folder containing the Puppet module to the module path as mentioned above. Ensure that you resolve its dependencies manually.

Updating a Puppet Module

You can update an existing Puppet module using the puppet command.

Procedure
  1. Connect to your Puppet server using SSH and find out where the Puppet modules are located:

    # puppet config print modulepath

    This returns output as follows:

    /etc/puppetlabs/code/environments/production/modules:/etc/puppetlabs/code/environments/common:/etc/puppetlabs/code/modules:/opt/puppetlabs/puppet/modules:/usr/share/puppet/modules
  2. If the module is located in the path as displayed above, the following command updates a module:

    # puppet module upgrade module_name

Importing a Puppet Module

Import the Puppet module to orcharhino or any attached orcharhino Proxy before you assign any of its classes to managed hosts. Ensure that you select Any Organization and Any Location as context; otherwise, the import might fail.

Procedure
  1. In the orcharhino management UI, navigate to Configure > Classes.

  2. Click the Import button in the upper right corner and select which orcharhino Proxy you want to import modules from. You can typically choose between your orcharhino server and any attached orcharhino Proxies.

  3. Select the Puppet module you want to import:

    1. Select the classes of your choice and check the box on the left.

    2. Click the Update button to import the Puppet classes to orcharhino.

  4. Importing a Puppet module results in a notification as follows:

    Successfully updated environments and Puppet classes from the on-disk Puppet installation

Configuring a Puppet Class

You can configure a Puppet class after you have imported it to orcharhino server. This example overrides the default list of ntp servers.

Procedure
  1. In the orcharhino management UI, navigate to Configure > Classes and select the ntp Puppet module to change its configuration.

  2. Select the Smart Class Parameter tab and search for servers.

  3. Ensure the Override check box is selected.

  4. Set the Parameter Type drop down menu to array.

  5. Insert a list of ntp servers as Default Value:

    ["0.de.pool.ntp.org","1.de.pool.ntp.org","2.de.pool.ntp.org","3.de.pool.ntp.org"]

    An alternative way to describe the array is YAML syntax:

    - 0.de.pool.ntp.org
    - 1.de.pool.ntp.org
    - 2.de.pool.ntp.org
    - 3.de.pool.ntp.org
  6. Click the Submit button after adding the values. This changes the default configuration of the Puppet module ntp.

Managing Puppet Modules with r10k

You can manage Puppet modules and environments using r10k. r10k reads a Puppetfile that lists Puppet modules and downloads them from the Puppet Forge or from git repositories. It does not handle dependencies between Puppet modules.

A Puppetfile looks as follows:

forge "https://forge.puppet.com"

mod "puppet-nginx",
  :git => "https://github.com/voxpupuli/puppet-nginx.git",
  :ref => "master"

mod "puppetlabs/apache"
mod "puppetlabs/ntp", "8.3.0"

Using Puppet Classes

There are three ways to use Puppet classes within orcharhino: you can assign Puppet classes to individual hosts, to multiple hosts at once using host groups, or through Puppet config groups in combination with hosts or host groups.

A Puppet module typically consists of several Puppet classes. You can only assign Puppet classes to hosts.

Assigning a Puppet Class to Individual Hosts

Procedure
  1. In the orcharhino management UI, navigate to Hosts > All Hosts and click the edit button of the host that you want to add the ntp Puppet class to.

  2. Select the Puppet classes tab and look for the ntp class.

  3. Click the + symbol next to ntp to add the ntp class to the list of included classes.

  4. Click the Submit button at the bottom to save your changes.

    If the Puppet classes tab of an individual host is empty, check whether the host is assigned to the proper Puppet environment.

Select the YAML button on the host overview page to verify the Puppet configuration. This produces output similar to the following:

---
parameters:
  # shortened YAML output
classes:
  ntp:
    servers: '["0.de.pool.ntp.org","1.de.pool.ntp.org","2.de.pool.ntp.org","3.de.pool.ntp.org"]'
environment: production
...

Connect to your host using SSH and check the content of /etc/ntp.conf to verify the ntp configuration. This example assumes that your host is running CentOS 7. Other operating systems might store the ntp configuration file in a different path.

You may need to run the Puppet agent on your host by executing the following command:

# puppet agent -t

Running the following command on the host checks which ntp servers are used for clock synchronization:

# cat /etc/ntp.conf

This returns output similar to the following:

# ntp.conf: Managed by puppet.
server 0.de.pool.ntp.org
server 1.de.pool.ntp.org
server 2.de.pool.ntp.org
server 3.de.pool.ntp.org

You now have a working ntp module which you can add to a host or group of hosts to roll out your ntp configuration automatically.

Assigning a Puppet Class to a Host Group

Use a host group to assign the ntp Puppet class to multiple hosts at once. Every host you deploy based on this host group has this Puppet class installed.

Procedure
  1. In the orcharhino management UI, navigate to Configure > Host Groups to create a host group or edit an existing one.

  2. Set the following parameters:

    1. The Lifecycle Environment describes the stage in which certain versions of content are available to hosts.

    2. The Content View comprises products and allows for version control of content repositories.

    3. The Environment allows you to supply a group of hosts with their own dedicated configuration.

  3. In the orcharhino management UI, navigate to Configure > Classes.

  4. Add the Puppet class to the included classes or to the included config groups if a Puppet config group is configured.

  5. Click Submit to save the changes.

Creating a Puppet Config Group

A Puppet config group is a named list of Puppet classes that allows you to combine their capabilities and assign them to managed hosts with a single click. This is equivalent to the concept of profiles in pure Puppet.

Procedure
  1. In the orcharhino management UI, navigate to Configure > Config Groups and select the New Config Group button.

  2. Select the classes you want to add to the config group.

    1. Choose a meaningful Name for the Puppet config group.

    2. Add selected Puppet classes to the Included Classes field.

  3. Click Submit to save the changes.

Puppet Parameter Hierarchy

Puppet parameters are structured hierarchically. Parameters from a more specific level override parameters from a more general level. The levels are ordered from most general to most specific:

  1. Global parameters

  2. Organization parameters

  3. Location parameters

  4. Host group parameters

  5. Host parameters

For example, host-specific parameters override all other parameters, and location parameters only override organization and global parameters. This feature is especially useful when grouping hosts with location or organization contexts.

Overriding Puppet Parameters on Individual Hosts

You can override parameters on individual hosts. This is recommended if you have multiple hosts and only want to make changes to a single one.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > All Hosts and select a host.

  2. Click Edit and select the Parameters tab.

  3. Click the Override button to edit the Puppet parameter.

  4. Click Submit to save the changes.

Overriding Puppet Parameters Location Based

You can use groups of hosts to override Puppet parameters for multiple hosts at once. The following example chooses the location context to illustrate setting context-based parameters.

Procedure
  1. In the orcharhino management UI, navigate to Configure > Classes and select the Smart Class Parameter tab.

  2. Choose a parameter.

  3. Use the Order list to define the hierarchy of the Puppet parameters. The individual host (fqdn) is the most relevant and the location context (location) is the least relevant.

  4. Check Merge Overrides if you want to add all further matched parameters after finding the first match.

  5. Check Merge Default if you want to also include the default value even if there are more specific values defined.

  6. Check Avoid Duplicates if you want to create a list of unique values for the selected parameter.

  7. The matcher field requires an attribute type from the order list. For example, you can choose Paris as location context and set the value to French ntp servers.

  8. Use the Add Matcher button to add more matchers.

  9. Click Submit to save the changes.

Overriding Puppet Parameters Organization Based

You can use groups of hosts to override Puppet parameters for multiple hosts at once. The following example chooses the organization context to illustrate setting context based parameters.

Note that organization based Puppet parameters are overridden by location based Puppet parameters.

Procedure
  1. In the orcharhino management UI, navigate to Configure > Classes and select the Smart Class Parameter tab.

  2. Choose a parameter.

  3. Use the Order list to define the hierarchy of the Puppet parameters. The individual host (fqdn) is the most relevant and the organization context (organization) is the least relevant.

  4. Check Merge Overrides if you want to add all further matched parameters after finding the first match.

  5. Check Merge Default if you want to also include the default value even if there are more specific values defined.

  6. Check Avoid Duplicates if you want to create a list of unique values for the selected parameter.

  7. The matcher field requires an attribute type from the order list.

  8. Use the Add Matcher button to add more matchers.

  9. Click Submit to save the changes.

Puppet Run Once Using SSH

Assign the proper job template to the Run Puppet Once feature to run Puppet on managed hosts.

Procedure
  1. In the orcharhino management UI, navigate to Administer > Remote Execution Features.

  2. Select the puppet_run_host remote execution feature.

  3. Assign the Run Puppet Once - SSH Default job template.

Run Puppet on managed hosts by running a job and selecting category Puppet and template Run Puppet Once - SSH Default. Alternatively, click the Run Puppet Once button in the Schedule Remote Job drop down menu on the host details page.

Setting Out of Sync Time for Puppet Managed Hosts

orcharhino considers hosts managed by Puppet to be out of sync if the last Puppet report is older than the combined values of outofsync_interval and puppet_interval set in minutes. By default, the Puppet agent on managed hosts runs every 30 minutes. The puppet_interval is set to 35 minutes and the global outofsync_interval is set to 30 minutes.

The effective time after which hosts are considered out of sync is the sum of outofsync_interval and puppet_interval. Setting the global outofsync_interval to 30 and the puppet_interval to 60 results in a total of 90 minutes after which the host status changes to out of sync.

Setting the Puppet Agent Run Interval

Set the interval when the Puppet Agent runs and sends reports to orcharhino.

Procedure
  1. Connect to your managed host using SSH.

  2. Add the Puppet Agent run interval to /etc/puppetlabs/puppet/puppet.conf, for example runinterval = 1h.
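
    A minimal sketch of the corresponding entry in /etc/puppetlabs/puppet/puppet.conf (assuming the [agent] section is used):

    [agent]
    runinterval = 1h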

Setting the Puppet Interval

Procedure
  1. In the orcharhino management UI, navigate to Administer > Settings, and click the Puppet tab.

  2. In the Puppet interval field, set the duration, in minutes, after which hosts reporting using Puppet are considered to be out of sync.

Setting the Out of Sync Interval

Procedure
  1. In the orcharhino management UI, navigate to Administer > Settings.

  2. On the General tab, edit Out of sync interval. Set a duration in minutes after which hosts are considered to be out of sync.

    You can also override this value on individual hosts or host groups by adding the outofsync_interval parameter.

Setting Out of Sync Interval for Individual Hosts

Procedure
  1. In the orcharhino management UI, navigate to Hosts > All Hosts.

  2. Click Edit for a selected host.

  3. On the Parameters tab, click Add Parameter.

  4. In the Name field, enter outofsync_interval. In the Value field, enter the interval in minutes after which the host is considered to be out of sync.

Setting Out of Sync Interval for Host Groups

Procedure
  1. In the orcharhino management UI, navigate to Configure > Host Groups.

  2. Select a host group.

  3. On the Parameters tab, click Add Parameter.

  4. In the Name field, enter outofsync_interval. In the Value field, enter the interval in minutes after which the hosts are considered to be out of sync.

Additional Resources for Puppet

Using Salt for Configuration Management

Use this section as a guide to configuring orcharhino to use Salt for configuration management on managed hosts.

Introduction to Salt

Salt can be used as a configuration management tool similar to Ansible or Puppet.

This guide describes how to use Salt for configuration management in orcharhino. This guide contains information about how to install the Salt plugin, how to integrate orcharhino with an existing Salt Master, and how to configure managed hosts with Salt.

Salt offers two distinct modes of operation: clientless operation using SSH or agent-based operation using the Salt Minion client software.

The orcharhino Salt plugin supports only the Salt Minion approach.

Salt Architecture

You need a Salt Master that either runs on your orcharhino server or orcharhino Proxy with the Salt plugin enabled. You can also use an existing Salt Master by installing and configuring the relevant orcharhino Proxy features on the existing Salt Master host.

For more information on installing a Salt Master, consult the official Salt documentation. Alternatively, use the bootstrap instructions found in the official Salt package repository.

Terminology
  • Managed hosts are referred to as Salt Minions.

  • Information in the form of key-value pairs gathered from Salt Minions is referred to as Salt Grains.

  • Configuration templates are referred to as Salt States.

  • Bundles of Salt States are referred to as Salt Environments.

Use the same Salt version on the Salt Master as you are using on your Salt Minions. You can use orcharhino’s content management to provide managed hosts with the correct version of the Salt Minion client software.

Table 3. Required Ports from Salt Master to Salt Minions

Port           Protocol  Service  Required For
4505 and 4506  TCP       HTTPS    Salt Master to Salt Minions
9191           TCP       HTTPS    Salt API

Installing Salt Plugin

To configure managed hosts with Salt, you need to install the Salt plugin.

Select Salt as a configuration management system during step five of the main orcharhino installation steps. Choosing this option installs and configures both the Salt plugin and a Salt Master on your orcharhino.

Procedure
  1. Connect to your orcharhino using SSH:

    # ssh root@orcharhino.example.com
  2. Install the Salt plugin:

    # foreman-installer \
    --enable-foreman-plugin-salt \
    --enable-foreman-proxy-plugin-salt

Configuring Salt

After you have installed the Salt plugin, you need to connect it to a Salt Master. This is required when adding Salt support to an existing orcharhino installation, when adding an existing Salt Master to orcharhino, or when setting up Salt on orcharhino Proxy.

If you select Salt during the main orcharhino installation steps, the installer automatically performs those steps on your orcharhino. Use the following sections as a reference for installing Salt manually on your orcharhino or orcharhino Proxy.

Perform all actions on your Salt Master unless noted otherwise. This is either your orcharhino server or orcharhino Proxy with Salt enabled.

Configuring orcharhino

You need to configure orcharhino to use the Salt plugin.

Procedure
  1. Connect to your orcharhino using SSH:

    # ssh root@orcharhino.example.com
  2. Extend the /etc/sudoers file to allow the foreman-proxy user to run Salt:

    Cmd_Alias SALT = /usr/bin/salt, /usr/bin/salt-key
    foreman-proxy ALL = NOPASSWD: SALT
    Defaults:foreman-proxy !requiretty
  3. Add a user called saltuser to access the Salt API:

    # adduser --no-create-home --shell /bin/false --home-dir / saltuser
    # passwd saltuser

    Enter the password for the Salt user twice.

    The command adduser saltuser -p password does not work. Using it prevents you from importing Salt States.

Configuring Salt Master

Configure your Salt Master to configure managed hosts using Salt.

Procedure
  1. Connect to your Salt Master using SSH:

    # ssh root@salt-master.example.com
  2. Set orcharhino as an external node classifier in /etc/salt/master for Salt:

    master_tops:
      ext_nodes: /usr/bin/foreman-node
  3. Enable Salt Pillar data for use with orcharhino:

    ext_pillar:
      - puppet: /usr/bin/foreman-node
  4. Add a Salt Environment called base that is associated with the /srv/salt directory:

    file_roots:
      base:
        - /srv/salt
  5. Use the saltuser user for the Salt API and specify the connection settings in /etc/salt/master:

    external_auth:
      pam:
        saltuser:
          - '@runner'
    
    rest_cherrypy:
      port: 9191
      host: 0.0.0.0
      ssl_key: /etc/puppetlabs/puppet/ssl/private_keys/orcharhino.example.com.pem
      ssl_crt: /etc/puppetlabs/puppet/ssl/certs/orcharhino.example.com.pem
  6. Optional: Use Salt as a remote execution provider.

    You can run arbitrary commands on managed hosts by using the existing connection from your Salt Master to Salt Minions. Configure the foreman-proxy user to run Salt commands in /etc/salt/master:

    publisher_acl:
      foreman-proxy:
        - state.template_str

Authenticating Salt Minions Using Salt Autosign Grains

Configure orcharhino to automatically accept Salt Minions using Salt autosign Grains.

Procedure
  1. Configure the Grains key file on the Salt Master and add a reactor in the /etc/salt/master file:

    autosign_grains_dir: /var/lib/foreman-proxy/salt/grains
    
    reactor:
      - 'salt/auth':
        - /var/lib/foreman-proxy/salt/reactors/foreman_minion_auth.sls

    The Grains file holds the acceptable keys when you deploy Salt Minions. The reactor initiates an interaction with the Salt plugin if the Salt Minion is successfully authenticated.

  2. Add the reactor to the salt/auth event.

  3. Copy the Salt runners into your file_roots runners directory. The directory depends on your /etc/salt/master config. If it is configured to use /srv/salt, create the runners folder /srv/salt/_runners and copy the Salt runners into it.

    # mkdir -p /srv/salt/_runners
    # cp /var/lib/foreman-proxy/salt/runners/* /srv/salt/_runners/
  4. Set the host name of your orcharhino server in /var/lib/foreman-proxy/salt/reactors/foreman_minion_auth.sls:

    [...]
    remove_autosign_key_custom_runner:
      runner.foreman_https.query_cert:
        - method: PUT
        - host: orcharhino.example.com
        - path: /salt/api/v2/salt_autosign_auth?name={{ data['id'] }}
        - cert: /etc/pki/katello/puppet/puppet_client.crt
        - key: /etc/pki/katello/puppet/puppet_client.key
        - port: 443
    [...]
  5. Restart the Salt Master service:

    # systemctl restart salt-master
  6. Enable the Salt reactors and runners in your Salt Environment:

    # salt-run saltutil.sync_all

Authenticating Salt Minions Using Host Names

Configure orcharhino to authenticate Salt Minions based on their host names. This relies on the autosign.conf file that stores the host names of Salt Minions the Salt Master accepts.

Procedure
  1. Connect to your Salt Master using SSH:

    # ssh root@salt-master.example.com
  2. Enable the autosign.conf file in /etc/salt/master:

    autosign_file: /etc/salt/autosign.conf
    permissive_pki_access: True
  3. Create the /etc/salt/autosign.conf file and set appropriate ownership and permissions:

    # touch /etc/salt/autosign.conf
    # chgrp foreman-proxy /etc/salt/autosign.conf
    # chmod 660 /etc/salt/autosign.conf
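
    The file accepts host names and glob patterns, one per line, for example (the entries below are placeholders):

    host01.example.com
    *.dev.example.com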

Enabling Salt Grains Upload

Managed hosts running the Salt Minion client software can upload Salt Grains to orcharhino server or orcharhino Proxy. Salt Grains are collected system properties, for example the operating system or IP address of a Salt Minion.

Procedure
  1. Connect to your Salt Master using SSH:

    # ssh root@salt-master.example.com
  2. Edit /etc/salt/foreman.yaml on your Salt Master:

    :proto: https
    :host: orcharhino.example.com
    :port: 443
    :ssl_ca: "/etc/puppetlabs/puppet/ssl/ssl_ca.pem"
    :ssl_cert: "/etc/puppetlabs/puppet/ssl/client_cert.pem"
    :ssl_key: "/etc/puppetlabs/puppet/ssl/client_key.pem"
    :timeout: 10
    :salt: /usr/bin/salt
    :upload_grains: true

Configuring Salt API

Configure the Salt API in /etc/foreman-proxy/settings.d/salt.yml.

Procedure
  1. Connect to your Salt Master using SSH:

    # ssh root@salt-master.example.com
  2. Edit /etc/foreman-proxy/settings.d/salt.yml:

    :use_api: true
    :api_auth: pam
    :api_url: https://orcharhino.example.com:9191
    :api_username: saltuser
    :api_password: password

    Ensure that you use the password of the previously created saltuser.

Activating Salt

Procedure
  1. Connect to your Salt Master using SSH:

    # ssh root@salt-master.example.com
  2. Restart the Salt services:

    # systemctl restart salt-master salt-api
  3. Connect to your orcharhino using SSH:

    # ssh root@orcharhino.example.com
  4. Restart the orcharhino services:

    # foreman-maintain service restart
  5. Refresh your orcharhino Proxy features.

    1. In the orcharhino management UI, navigate to Infrastructure > Smart Proxies.

    2. Click Refresh for the relevant orcharhino Proxy.

Setting up Salt Minions

Salt Minions require the Salt Minion client software to interact with your Salt Master.

Using orcharhino's Content Management

Provide managed hosts with the required Salt Minion client software using orcharhino.

Procedure
  1. Create a product called Salt.

  2. Create a repository within the Salt product for each operating system supported by orcharhino that you want to install the Salt Minion client software on.

    Add the operating system to the name of the repository, for example Salt for Ubuntu 20.04.

    You can find the upstream URL for the Salt packages on the official Salt package repository. The URL depends on both the Salt version and the operating system, for example https://repo.saltproject.io/py3/ubuntu/20.04/amd64/3003/. For a CLI example of steps 1 and 2, see the sketch after this procedure.

  3. Synchronize the previously created products.

  4. Create a Content View for each repository.

  5. Create a Composite Content View for each major version of each operating system to make the new content available.

  6. Add each of your operating system specific Salt Content Views to your main Composite Content View for that operating system and version.

  7. Publish a new version of the Composite Content View from the previous step.

  8. Promote the Content View from the previous step to your lifecycle environments as appropriate.

  9. Optional: Create activation keys for your Composite Content View and lifecycle environment combinations.
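
As referenced in the procedure above, a minimal hammer sketch of steps 1 and 2 might look as follows (the organization, repository name, and upstream URL are examples; adjust them to your Salt version and operating system):

# hammer product create \
--name "Salt" \
--organization "Your_Organization"

# hammer repository create \
--organization "Your_Organization" \
--product "Salt" \
--name "Salt for CentOS 7" \
--content-type yum \
--url https://repo.saltproject.io/py3/redhat/7/x86_64/3003/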

Creating a Host Group With Salt

To bundle provisioning and configuration settings, you can create a host group with Salt enabled for managed hosts.

Procedure
  1. In the orcharhino management UI, navigate to Configure > Host Groups.

  2. Click Create Host Group.

  3. Click the Host Group tab and select a Salt Environment and a Salt Master.

  4. Click the Salt States tab and assign Salt States to your host group.

  5. Click the Activation Keys tab and select an activation key containing the Salt Minion client software.

  6. Click Submit to save your host group.

Managed hosts deployed using this host group automatically install and configure the required Salt Minion client software and register with your Salt Master. For more information, see creating a Host Group in the Managing Hosts guide.

Deploying Minion Hosts

Deploy managed hosts that are fully provisioned and configured for Salt usage.

Prerequisites to deploy Salt Minions
  1. A Salt Master

  2. A Salt Environment

  3. A Content View containing the required Salt Minion client software

  4. An activation key

  5. A lifecycle environment

Procedure
  1. In the orcharhino management UI, navigate to Hosts > Create Host.

  2. Select the previously created host group with Salt enabled.

  3. Click Submit to deploy a host.

Verifying the Connection between Salt Master and Salt Minions

Verify the connection between your Salt Master and Salt Minions.

Procedure
  1. Connect to your Salt Master using SSH:

    # ssh root@salt-master.example.com
  2. Ping your Salt Minions:

    # salt "*" test.ping
  3. Display all Salt Grains of all connected Salt Minions:

    # salt "*" grains.items

Salt Usage

Salt Minions managed by orcharhino are associated with a Salt Master and a Salt Environment. The associated Salt Environment within orcharhino must match the actual Salt Environment from the file_roots option in the /etc/salt/master file. You can configure managed hosts with Salt once they are associated with your orcharhino server or orcharhino Proxy and the Salt Minion client software is installed.

Importing Salt States

A Salt State configures parts of a host, for example, a service or the installation of a package. You can import Salt States from your Salt Master to orcharhino. The Salt Master configuration in this guide uses a Salt Environment called base that includes the Salt States stored in /srv/salt/.
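
For illustration, a minimal Salt State that ensures a package is installed might look as follows (a sketch; the package name is an example):

# /srv/salt/vim/init.sls
vim:
  pkg.installed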

Procedure
  1. In the orcharhino management UI, navigate to Configure > Salt States and click Import from FQDN.

  2. Optional: Click Edit to assign Salt States to Salt Environments.

  3. Optional: Click Delete to remove a Salt State from your orcharhino. This only removes the Salt State from orcharhino, not from the disk of your Salt Master.

  4. Click Submit to import the Salt States.

After you have imported Salt States, you can assign them to hosts or Host Groups. Salt applies these Salt States to any hosts they are assigned to every time you run state.highstate.

Configure the paths for Salt States and Salt Pillars in /etc/salt/master. By default, Salt States are located in /srv/salt and Salt Pillars in /srv/pillar.
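
For example, the settings in /etc/salt/master that match these defaults look similar to the following:

file_roots:
  base:
    - /srv/salt

pillar_roots:
  base:
    - /srv/pillar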

Viewing Salt Autosign Keys

The Salt Keys page lists hosts and their Salt keys. You can manually accept, reject, or delete keys.

Use the Salt autosign feature to automatically accept signing requests from managed hosts. By default, managed hosts are supplied with a Salt key during host provisioning.

This feature only covers the autosign via autosign.conf authentication procedure.

Procedure
  1. In the orcharhino management UI, navigate to Infrastructure > Smart Proxies.

  2. Select an orcharhino Proxy.

  3. In the Actions drop down menu, click Salt Keys.

Viewing Salt Reports

Salt reports are uploaded to orcharhino every ten minutes using the /etc/cron.d/smart_proxy_salt cron job. You can trigger this action manually on your Salt Master:

# /usr/sbin/upload-salt-reports
Procedure
  1. To view Salt reports, in the orcharhino management UI, navigate to Monitor > Config Management Reports.

  2. To view Salt reports associated with an individual host, in the orcharhino management UI, navigate to Hosts > All Hosts and select a single host.

Enabling Salt Report Uploads

The Salt Master can directly upload Salt reports to orcharhino.

Procedure
  1. Connect to your Salt Master using SSH:

    # ssh root@salt-master.example.com
  2. Ensure the reactor is present:

    # file /var/lib/foreman-proxy/salt/reactors/foreman_report_upload.sls
  3. Copy the report upload script:

    # cp /var/lib/foreman-proxy/salt/runners/foreman_report_upload.py /srv/salt/_runners/
  4. Add the reactor to /etc/salt/master:

    reactor:
      - 'salt/auth':
        - /var/lib/foreman-proxy/salt/reactors/foreman_minion_auth.sls
      - 'salt/job/*/ret/*':
        - /var/lib/foreman-proxy/salt/reactors/foreman_report_upload.sls
  5. Restart the Salt Master:

    # systemctl restart salt-master
  6. Enable the new runner:

    # salt-run saltutil.sync_all
  7. If you use a cron job to upload facts from your Salt Master to orcharhino, disable the cron job:

    # rm -f /etc/cron.d/smart_proxy_salt

Alternatively, you can upload Salt reports from your Salt Master to orcharhino manually:

# /usr/sbin/upload-salt-reports

Salt Variables

You can configure Salt Variables within orcharhino. The configured values are available as Salt Pillar data.

For more information about configuring the variables' data, see the Puppet smart class parameters section.

Viewing ENC Parameters

You can use orcharhino as an external node classifier for Salt. Click Salt ENC on the host overview page to view assigned Salt States. This shows a list of parameters that are made available for Salt usage as Salt Pillar data.

You can check what parameters are truly available on the Salt side by completing the following procedure.

Procedure
  1. Connect to your Salt Master using SSH:

    # ssh root@salt-master.example.com
  2. View available ENC parameters:

    # salt '*' pillar.items
  3. Optional: Refresh the Salt Pillar data if a parameter is missing:

    # salt '*' saltutil.refresh_pillar

Running Salt

You can run arbitrary Salt functions using orcharhino’s remote execution features.

Most commonly, you run state.highstate on one or more Salt Minions. This applies all relevant Salt States on your managed hosts.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > All Hosts.

  2. Select one or multiple hosts.

  3. Click Run Salt from the actions drop down menu.

    Alternatively, you can click Run Salt from the Schedule Remote Job drop down menu on the host overview page.

Triggering a remote execution job takes you to the corresponding job overview page.

If you select a host from the list, you can see the output of the remote job in real time.

To view remote jobs, in the orcharhino management UI, navigate to Monitor > Jobs.

Running state.highstate generates a Salt report. Note that reports are uploaded to orcharhino every ten minutes.

You can use orcharhino’s remote execution features to run arbitrary Salt functions. This includes an optional test mode that tells you what would happen without actually changing anything.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > All Hosts.

  2. Select a host.

  3. Click Schedule Remote Job.

  4. For the Job category field, select Salt-Call from the drop down menu.

  5. For the Job template field, select Salt Run function from the drop down menu.

  6. In the Function field, enter the name of the function that you want to trigger, for example, pillar.items or test.ping.

  7. Optional: Click Display advanced fields and enable test-run.

  8. Click Submit to run the job.

You can also schedule remote jobs or run them recurrently. For more information, see Configuring and Setting up Remote Jobs in the Managing Hosts guide.

Alternatively, you can define recurrent actions using the native Salt way. For example, you can schedule hourly state.highstate runs on individual Salt Minions by extending /etc/salt/minion:

schedule:
  highstate:
    function: state.highstate
    minutes: 60

Salt Example

This example uses a Salt State to manage the /etc/motd file on one or more Salt Minions. It demonstrates the use of orcharhino as an external node classifier and the use of Salt Grains.

Procedure
  1. Create a global parameter called vendor_name with the string orcharhino as its value.

  2. Add a new Salt State called motd to your Salt Master.

  3. Create the /srv/salt/motd/ directory:

    # mkdir -p /srv/salt/motd/
  4. Create /srv/salt/motd/init.sls as a Salt State file:

    /etc/motd:
      file.managed:
        - user: root
        - group: root
        - mode: 0644
        - source: salt://motd/motd.template
        - template: jinja
  5. Create /srv/salt/motd/motd.template as a template referenced by the Salt State file:

    Welcome to {{ grains['fqdn'] }} Powered by {{ salt['pillar.get']('vendor_name') }}

    This template accesses the fqdn Salt Grain and retrieves the vendor_name parameter from the Salt Pillar.

  6. Import the motd Salt State into orcharhino.

  7. Verify that Salt has been given access to the vendor_name parameter by running either of the following commands on your Salt Master:

    # salt '*' pillar.items | grep -A 1 vendor_name
    # salt '*' pillar.get vendor_name

    If the output does not include the value of the vendor_name parameter, you must refresh the Salt Pillar data first:

    # salt '*' saltutil.refresh_pillar

    For information about how to refresh Salt Pillar data, see Salt ENC parameters.

  8. Add the motd Salt State to your Salt Minions or a host group.

  9. Apply the Salt State by running state.highstate.

  10. Optional: Verify the contents of /etc/motd on a Salt Minion.

Additional Resources for Salt

Configuring and Setting up Remote Jobs

Use this section as a guide to configuring orcharhino to execute jobs on remote hosts.

Any command that you want to apply to a remote host must be defined as a job template. After you have defined a job template you can execute it multiple times.
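
For illustration, a minimal SSH job template might look as follows (a sketch that follows the metadata layout of the default templates; the template name, category, and input are examples):

<%#
kind: job_template
name: Restart Service - Example
job_category: Services
provider_type: SSH
template_inputs:
- name: service
  input_type: user
  required: true
%>
systemctl restart <%= input('service') %>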

About Running Jobs on Hosts

You can run jobs on hosts remotely from orcharhino Proxies using shell scripts or Ansible tasks and playbooks. This is referred to as remote execution.

For custom Ansible roles that you create, or roles that you download, you must install the package containing the roles on the orcharhino Proxy base operating system. Before you can use Ansible roles, you must import the roles into orcharhino from the orcharhino Proxy where they are installed.

Communication occurs through orcharhino Proxy, which means that orcharhino server does not require direct access to the target host, and can scale to manage many hosts. Remote execution uses the SSH service that must be enabled and running on the target host. Ensure that the remote execution orcharhino Proxy has access to port 22 on the target hosts.

orcharhino uses ERB syntax job templates. For more information, see Template Writing Reference in the Managing Hosts guide.

Several job templates for shell scripts and Ansible are included by default. For more information, see Setting up Job Templates.

Any orcharhino Proxy base operating system is a client of orcharhino server’s internal orcharhino Proxy, and therefore this section applies to any type of host connected to orcharhino server, including orcharhino Proxies.

You can run jobs on multiple hosts at once, and you can use variables in your commands for more granular control over the jobs you run. You can use host facts and parameters to populate the variable values.

In addition, you can specify custom values for templates when you run the command.

For more information, see Executing a Remote Job.

Remote Execution Workflow

When you run a remote job on hosts, for every host, orcharhino performs the following actions to find a remote execution orcharhino Proxy to use.

orcharhino searches only for orcharhino Proxies that have the remote execution feature enabled.

  1. orcharhino finds the host’s interfaces that have the Remote execution check box selected.

  2. orcharhino finds the subnets of these interfaces.

  3. orcharhino finds remote execution orcharhino Proxies assigned to these subnets.

  4. From this set of orcharhino Proxies, orcharhino selects the orcharhino Proxy that has the least number of running jobs. By doing this, orcharhino ensures that the jobs load is balanced between remote execution orcharhino Proxies.

  5. If orcharhino does not find a remote execution orcharhino Proxy at this stage, and if the Fallback to Any orcharhino Proxy setting is enabled, orcharhino adds another set of orcharhino Proxies to select the remote execution orcharhino Proxy from. orcharhino selects the most lightly loaded orcharhino Proxy from the following types of orcharhino Proxies that are assigned to the host:

    • DHCP, DNS and TFTP orcharhino Proxies assigned to the host’s subnets

    • DNS orcharhino Proxy assigned to the host’s domain

    • Realm orcharhino Proxy assigned to the host’s realm

    • Puppet server orcharhino Proxy

    • Puppet CA orcharhino Proxy

    • OpenSCAP orcharhino Proxy

  6. If orcharhino does not find a remote execution orcharhino Proxy at this stage, and if the Enable Global orcharhino Proxy setting is enabled, orcharhino selects the most lightly loaded remote execution orcharhino Proxy from the set of all orcharhino Proxies in the host’s organization and location to execute a remote job.

Permissions for Remote Execution

You can control which users can run which jobs within your infrastructure, including which hosts they can target. The remote execution feature provides two built-in roles:

  • Remote Execution Manager: This role allows access to all remote execution features and functionality.

  • Remote Execution User: This role only allows running jobs; it does not provide permission to modify job templates.

You can clone the Remote Execution User role and customize its filter for increased granularity. If you adjust the filter with the view_job_templates permission, the user can only see and trigger jobs based on matching job templates. You can use the view_hosts and view_smart_proxies permissions to limit which hosts or orcharhino Proxies are visible to the role.

The execute_template_invocation permission is a special permission that is checked immediately before execution of a job begins. This permission defines which job template you can run on a particular host. This allows for even more granularity when specifying permissions. For more information on working with roles and permissions, see Creating and Managing Roles in the Administering orcharhino guide.

The following example shows filters for the execute_template_invocation permission:

name = Reboot and host.name = staging.example.com
name = Reboot and host.name ~ *.staging.example.com
name = "Restart service" and host_group.name = webservers

The first line in this example permits the user to apply the Reboot template to one selected host. The second line defines a pool of hosts with names ending with .staging.example.com. The third line binds the template with a host group.

Permissions assigned to users can change over time. If a user has already scheduled some jobs to run in the future, and the permissions have changed, this can result in execution failure because the permissions are checked immediately before job execution.

Creating a Job Template

Use this procedure to create a job template. To use the CLI instead of the orcharhino management UI, see the CLI procedure.

Procedure
  1. Navigate to Hosts > Job templates.

  2. Click New Job Template.

  3. Click the Template tab, and in the Name field, enter a unique name for your job template.

  4. Select Default to make the template available for all organizations and locations.

  5. Create the template directly in the template editor or upload it from a text file by clicking Import.

  6. Optional: In the Audit Comment field, add information about the change.

  7. Click the Job tab, and in the Job category field, enter your own category or select from the default categories listed in Default Job Template Categories.

  8. Optional: In the Description Format field, enter a description template. For example, Install package %package_name. You can also use %template_name and %job_category in your template.

  9. From the Provider Type list, select SSH for shell scripts and Ansible for Ansible tasks or playbooks.

  10. Optional: In the Timeout to kill field, enter a timeout value to terminate the job if it does not complete.

  11. Optional: Click Add Input to define an input parameter. Parameters are requested when executing the job and do not have to be defined in the template. For examples, see the Help tab.

  12. Optional: Click Foreign input set to include other templates in this job.

  13. Optional: In the Effective user area, configure a user if the command cannot use the default remote_execution_effective_user setting.

  14. Optional: If this template is a snippet to be included in other templates, click the Type tab and select Snippet.

  15. Click the Location tab and add the locations where you want to use the template.

  16. Click the Organizations tab and add the organizations where you want to use the template.

  17. Click Submit to save your changes.

You can extend and customize job templates by including other templates in the template syntax. For more information, see the appendices in the Managing Hosts guide.

CLI procedure
  1. To create a job template using a template-definition file, enter the following command:

    # hammer job-template create \
    --file "path_to_template_file" \
    --name "template_name" \
    --provider-type SSH \
    --job-category "category_name"

Configuring the Fallback to Any orcharhino Proxy Remote Execution Setting in orcharhino

You can enable the Fallback to Any orcharhino Proxy setting to configure orcharhino to search for remote execution orcharhino Proxies from the list of orcharhino Proxies that are assigned to hosts. This can be useful if you need to run remote jobs on hosts that have no subnets configured or if the hosts' subnets are assigned to orcharhino Proxies that do not have the remote execution feature enabled.

If the Fallback to Any orcharhino Proxy setting is enabled, orcharhino adds another set of orcharhino Proxies to select the remote execution orcharhino Proxy from. orcharhino also selects the most lightly loaded orcharhino Proxy from the set of all orcharhino Proxies assigned to the host, such as the following:

  • DHCP, DNS and TFTP orcharhino Proxies assigned to the host’s subnets

  • DNS orcharhino Proxy assigned to the host’s domain

  • Realm orcharhino Proxy assigned to the host’s realm

  • Puppet server orcharhino Proxy

  • Puppet CA orcharhino Proxy

  • OpenSCAP orcharhino Proxy

Procedure
  1. In the orcharhino management UI, navigate to Administer > Settings.

  2. Click RemoteExecution.

  3. Configure the Fallback to Any orcharhino Proxy setting.

CLI procedure

Enter the hammer settings set command on orcharhino to configure the Fallback to Any orcharhino Proxy setting. For example, to set the value to true, enter the following command:

# hammer settings set --name=remote_execution_fallback_proxy --value=true

Configuring the Global orcharhino Proxy Remote Execution Setting in orcharhino

By default, orcharhino searches for remote execution orcharhino Proxies in hosts' organizations and locations regardless of whether orcharhino Proxies are assigned to hosts' subnets or not. You can disable the Enable Global orcharhino Proxy setting if you want to limit the search to the orcharhino Proxies that are assigned to hosts' subnets.

If the Enable Global orcharhino Proxy setting is enabled, orcharhino adds another set of orcharhino Proxies to select the remote execution orcharhino Proxy from. orcharhino also selects the most lightly loaded remote execution orcharhino Proxy from the set of all orcharhino Proxies in the host’s organization and location to execute a remote job.

Procedure
  1. In the orcharhino management UI, navigate to Administer > Settings.

  2. Click RemoteExecution.

  3. Configure the Enable Global orcharhino Proxy setting.

CLI procedure
  • Enter the hammer settings set command on orcharhino to configure the Enable Global orcharhino Proxy setting. For example, to set the value to true, enter the following command:

    # hammer settings set --name=remote_execution_global_proxy --value=true

Configuring orcharhino to Use an Alternative Directory to Execute Remote Jobs on Hosts

By default, orcharhino uses the /var/tmp directory on the client system to execute remote execution jobs. If the client system has noexec set for the /var/ volume or file system, you must configure orcharhino to use an alternative directory; otherwise, the remote execution job fails because the script cannot run.

Procedure
  1. Create a new directory, for example /remote_working_dir:

    # mkdir /remote_working_dir
  2. Copy the SELinux context from the default var directory:

    # chcon --reference=/var /remote_working_dir
  3. Configure the system:

    # foreman-installer --foreman-proxy-plugin-remote-execution-ssh-remote-working-dir /remote_working_dir

Distributing SSH Keys for Remote Execution

To use SSH keys for authenticating remote execution connections, you must distribute the public SSH key from orcharhino Proxy to its attached hosts that you want to manage. Ensure that the SSH service is enabled and running on the hosts. Configure any network or host-based firewalls to enable access to port 22.

Use one of the methods described in the following sections to distribute the public SSH key from orcharhino Proxy to target hosts.

orcharhino distributes SSH keys for the remote execution feature to the hosts provisioned from orcharhino by default.

If the hosts are running on Amazon Web Services, enable password authentication. For more information, see https://aws.amazon.com/premiumsupport/knowledge-center/new-user-accounts-linux-instance.

Distributing SSH Keys for Remote Execution Manually

To distribute SSH keys manually, complete the following steps:

Procedure
  1. Enter the following command on orcharhino Proxy. Repeat for each target host you want to manage:

    # ssh-copy-id -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy.pub root@target.example.com
  2. To confirm that the key was successfully copied to the target host, enter the following command on orcharhino Proxy:

    # ssh -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy root@target.example.com

Using the orcharhino API to Obtain SSH Keys for Remote Execution

To use the orcharhino API to download the public key from orcharhino Proxy, complete this procedure on each target host.

Procedure
  1. On the target host, create the ~/.ssh directory to store the SSH key:

    # mkdir ~/.ssh
  2. Download the SSH key from orcharhino Proxy:

    # curl https://orcharhino-proxy.example.com:8443/ssh/pubkey >> ~/.ssh/authorized_keys
  3. Configure permissions for the ~/.ssh directory:

    # chmod 700 ~/.ssh
  4. Configure permissions for the authorized_keys file:

    # chmod 600 ~/.ssh/authorized_keys

Configuring a Kickstart Template to Distribute SSH Keys during Provisioning

You can add a remote_execution_ssh_keys snippet to your custom kickstart template to deploy SSH Keys to hosts during provisioning. Kickstart templates that orcharhino ships include this snippet by default. Therefore, orcharhino copies the SSH key for remote execution to the systems during provisioning.

Procedure
  • To include the public key in newly-provisioned hosts, add the following snippet to the Kickstart template that you use:

    <%= snippet 'remote_execution_ssh_keys' %>

Configuring a keytab for Kerberos Ticket Granting Tickets

Use this procedure to configure orcharhino to use a keytab to obtain Kerberos ticket granting tickets. If you do not set up a keytab, you must manually retrieve tickets.

Procedure
  1. Find the ID of the foreman-proxy user:

    # id -u foreman-proxy
  2. Modify the umask value so that new files have the permissions 600:

    # umask 077
  3. Create the directory for the keytab:

    # mkdir -p "/var/kerberos/krb5/user/USER_ID"
  4. Create a keytab or copy an existing keytab to the directory:

    # cp your_client.keytab /var/kerberos/krb5/user/USER_ID/client.keytab
  5. Change the directory owner to the foreman-proxy user:

    # chown -R foreman-proxy:foreman-proxy "/var/kerberos/krb5/user/USER_ID"
  6. Ensure that the keytab file is read-only:

    # chmod -wx "/var/kerberos/krb5/user/USER_ID/client.keytab"
  7. Restore the SELinux context:

    # restorecon -RvF /var/kerberos/krb5

Configuring Kerberos Authentication for Remote Execution

You can use Kerberos authentication to establish an SSH connection for remote execution on orcharhino hosts.

Prerequisites
  • Enroll orcharhino server on the Kerberos server

  • Enroll the orcharhino target host on the Kerberos server

  • Configure and initialize a Kerberos user account for remote execution

  • Ensure that the foreman-proxy user on orcharhino has a valid Kerberos ticket granting ticket

Procedure
  1. To install and enable Kerberos authentication for remote execution, enter the following command:

    # foreman-installer --scenario katello \
     --foreman-proxy-plugin-remote-execution-ssh-ssh-kerberos-auth true
  2. To edit the default user for remote execution, in the orcharhino management UI, navigate to Administer > Settings and click the RemoteExecution tab. In the SSH User row, edit the second column and add the user name for the Kerberos account.

  3. Navigate to remote_execution_effective_user and edit the second column to add the user name for the Kerberos account.

To confirm that Kerberos authentication is ready to use, run a remote job on the host.
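
For example, the following is a minimal check using a default job template; the template name, input, and search query are assumptions to adapt to your environment:

# hammer job-invocation create \
--job-template "Run Command - SSH Default" \
--inputs command="id" \
--search-query "name = target.example.com"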

Setting up Job Templates

orcharhino provides default job templates that you can use for executing jobs. To view the list of job templates, navigate to Hosts > Job templates. If you want to use a template without making changes, proceed to Executing a Remote Job.

You can use default templates as a base for developing your own. Default job templates are locked for editing. Clone the template and edit the clone.

Procedure
  1. To clone a template, in the Actions column, select Clone.

  2. Enter a unique name for the clone and click Submit to save the changes.

Job templates use the Embedded Ruby (ERB) syntax. For more information about writing templates, see the Template Writing Reference in the Managing Hosts guide.

Ansible Considerations

To create an Ansible job template, use the previous procedure, but use YAML syntax instead of ERB syntax. Begin the template with ---. You can embed an Ansible playbook YAML file into the job template body. You can also add ERB syntax to customize your YAML Ansible template. You can also import Ansible playbooks in orcharhino. For more information, see Synchronizing Template Repositories in the Managing Hosts guide.
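
For illustration, the following is a minimal sketch of an Ansible job template body that embeds a small playbook and reads a job input; the input name service_name and the task itself are assumptions, not a shipped template:

---
- hosts: all
  tasks:
    - name: Restart a service selected at job invocation
      service:
        name: "<%= input('service_name') %>"
        state: restarted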

Parameter Variables

At run time, job templates can accept parameter variables that you define for a host. Note that only the parameters visible on the Parameters tab at the host’s edit page can be used as input parameters for job templates. If you do not want your Ansible job template to accept parameter variables at run time, in the orcharhino management UI, navigate to Administer > Settings and click the Ansible tab. In the Top level Ansible variables row, change the Value parameter to No.
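
For example, assuming a host parameter named ntp_server is defined on the host's Parameters tab, a job template could read it as follows (a sketch, not a shipped template):

echo "NTP server for this host: <%= host_param('ntp_server') %>"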

Executing a Remote Job

You can execute a job that is based on a job template against one or more hosts.

To use the CLI instead of the orcharhino management UI, see the CLI procedure.

Procedure
  1. Navigate to Hosts > All Hosts and select the target hosts on which you want to execute a remote job. You can use the search field to filter the host list.

  2. From the Select Action list, select Schedule Remote Job.

  3. On the Job invocation page, define the main job settings:

  4. Select the Job category and the Job template you want to use.

  5. Optional: Select a stored search string in the Bookmark list to specify the target hosts.

  6. Optional: Further limit the targeted hosts by entering a Search query. The Resolves to line displays the number of hosts affected by your query. Use the refresh button to recalculate the number after changing the query. The preview icon lists the targeted hosts.

  7. The remaining settings depend on the selected job template. See Creating a Job Template for information on adding custom parameters to a template.

  8. Optional: To configure advanced settings for the job, click Display advanced fields. Some of the advanced settings depend on the job template; the following settings are general:

    • Effective user defines the user for executing the job. By default, this is the SSH user.

    • Concurrency level defines the maximum number of jobs executed at once. This can prevent overloading system resources when executing the job on a large number of hosts.

    • Timeout to kill defines the time interval in seconds after which the job is killed if it has not finished. A task that cannot start within the defined interval, for example because the previous task took too long to finish, is canceled.

    • Type of query defines when the search query is evaluated. This helps to keep the query up to date for scheduled tasks.

    • Execution ordering determines the order in which the job is executed on hosts: alphabetical or randomized.

      Concurrency level and Timeout to kill settings enable you to tailor job execution to fit your infrastructure hardware and needs.

  9. To run the job immediately, ensure that Schedule is set to Execute now. You can also define a one-time future job, or set up a recurring job. For recurring tasks, you can define start and end dates, number and frequency of runs. You can also use cron syntax to define repetition.

  10. Click Submit. This displays the Job Overview page, and when the job completes, also displays the status of the job.

CLI procedure
  • Enter the following command on orcharhino:

# hammer settings set --name=remote_execution_global_proxy --value=false

To execute a remote job with custom parameters, complete the following steps:

  1. Find the ID of the job template you want to use:

    # hammer job-template list
  2. Show the template details to see parameters required by your template:

    # hammer job-template info --id template_ID
  3. Execute a remote job with custom parameters:

    # hammer job-invocation create \
    --job-template "template_name" \
    --inputs key1="value",key2="value",... \
    --search-query "query"

    Replace query with the filter expression that defines hosts, for example "name ~ rex01". For more information about executing remote commands with hammer, enter hammer job-template --help and hammer job-invocation --help.

Monitoring Jobs

You can monitor the progress of the job while it is running. This can help in any troubleshooting that may be required.

Ansible jobs run in batches of 100 hosts, so you cannot cancel a job running on a specific host. A job completes only after the Ansible playbook runs on all hosts in the batch.

Procedure
  1. Navigate to the Job page. This page is automatically displayed if you triggered the job with the Execute now setting. To monitor scheduled jobs, navigate to Monitor > Jobs and select the job run you wish to inspect.

  2. On the Job page, click the Hosts tab. This displays the list of hosts on which the job is running.

  3. In the Host column, click the name of the host that you want to inspect. This displays the Detail of Commands page where you can monitor the job execution in real time.

  4. Click Back to Job at any time to return to the Job Details page.

CLI procedure

To monitor the progress of a job while it is running, complete the following steps:

  1. Find the ID of a job:

    # hammer job-invocation list
  2. Monitor the job output:

    # hammer job-invocation output \
    --id job_ID \
    --host host_name
  3. Optional: to cancel a job, enter the following command:

    # hammer job-invocation cancel \
    --id job_ID

Synchronizing Template Repositories

In orcharhino, you can synchronize repositories of job templates, provisioning templates, report templates, and partition table templates between orcharhino server and a version control system or local directory. In this chapter, a Git repository is used for demonstration purposes.

This section details the workflow for:

  • installing and configuring the TemplateSync plug-in

  • performing exporting and importing tasks

Enabling the TemplateSync plug-in

  1. To enable the plug-in on your orcharhino server, enter the following command:

    # foreman-installer --enable-foreman-plugin-templates
  2. To verify that the plug-in is installed correctly, ensure Administer > Settings includes the TemplateSync menu.

Configuring the TemplateSync plug-in

In the orcharhino management UI, navigate to Administer > Settings > TemplateSync to configure the plug-in. The following table explains the behavior of the attributes. Note that some attributes are used only for importing or exporting tasks.

Table 4. Synchronizing Templates Plug-in configuration

Associate (API parameter: associate)
  Accepted values: always, new, never
  On importing: Associates templates with OS, Organization, and Location based on metadata.
  On exporting: N/A

Branch (API parameter: branch)
  On importing: Specifies the default branch in the Git repository to read from.
  On exporting: Specifies the default branch in the Git repository to write to.

Dirname (API parameter: dirname)
  On importing: Specifies the subdirectory under the repository to read from.
  On exporting: Specifies the subdirectory under the repository to write to.

Filter (API parameter: filter)
  On importing: Imports only templates with names that match this regular expression.
  On exporting: Exports only templates with names that match this regular expression.

Force import (API parameter: force)
  On importing: Imported templates overwrite locked templates with the same name.
  On exporting: N/A

Lock templates (API parameter: lock)
  On importing: Does not overwrite existing templates when you import a new template with the same name, unless Force import is enabled.
  On exporting: N/A

Metadata export mode (API parameter: metadata_export_mode)
  Accepted values: refresh, keep, remove
  On importing: N/A
  On exporting: Defines how metadata is handled when exporting:
    • Refresh: removes existing metadata from the template content and generates new metadata based on current assignments and attributes.
    • Keep: retains the existing metadata.
    • Remove: exports the template without metadata. Useful if you want to add metadata manually.

Negate (API parameter: negate)
  Accepted values: true, false
  On importing: Imports templates ignoring the filter attribute.
  On exporting: Exports templates ignoring the filter attribute.

Prefix (API parameter: prefix)
  On importing: Adds the specified string to the beginning of the template name if the name does not start with the prefix already.
  On exporting: N/A

Repo (API parameter: repo)
  On importing: Defines the path to the repository to synchronize from.
  On exporting: Defines the path to the repository to export to.

Verbosity (API parameter: verbose)
  Accepted values: true, false
  On importing: Enables writing verbose messages to the logs for this action.
  On exporting: N/A
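
You can also set these attributes from the command line with hammer. The following is a sketch, assuming the plug-in registers them as template_sync_* settings; verify the exact setting names under Administer > Settings > TemplateSync:

# hammer settings set --name=template_sync_branch --value=production
# hammer settings set --name=template_sync_filter --value='.*Ansible Default$'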

Importing and Exporting Templates

You can import and export templates using the orcharhino management UI, Hammer CLI, or orcharhino API. orcharhino API calls use the role-based access control system, which enables the tasks to be executed as any user. You can synchronize templates with a version control system, such as Git, or a local directory.

Importing Templates

You can import templates from a repository of your choice. You can use different protocols to point to your repository, for example /tmp/dir, git://example.com, https://example.com, and ssh://example.com.

Prerequisites
  • Each template must contain the location and organization that the template belongs to. This applies to all template types. Before you import a template, ensure that you add the following section to the template:

    <%#
    kind: provision
    name: My Kickstart File
    oses:
    - RedHat 7
    - RedHat 6
    locations:
    - First Location
    - Second Location
    organizations:
    - Default Organization
    - Extra Organization
    %>
Procedure
  1. In the orcharhino management UI, navigate to Hosts > Sync Templates.

  2. Click Import.

  3. Each field is populated with values configured in Administer > Settings > TemplateSync. Change the values as required for the templates you want to import. For more information about each field, see Configuring the TemplateSync plug-in.

  4. Click Submit.

The orcharhino management UI displays the status of the import. The status is not persistent; if you leave the status page, you cannot return to it.

CLI procedure
  • To import a template from a repository, enter the following command:

    $ hammer import-templates \
        --prefix '[Custom Index] ' \
        --filter '.*Template Name$' \
        --repo https://github.com/examplerepo/exampledirectory \
        --branch my_branch \
        --organization 'Default Organization'

    For better indexing and management of your templates, use --prefix to set a category for your templates. To select certain templates from a large repository, use --filter to define the title of the templates that you want to import. For example --filter '.*Ansible Default$' imports various Ansible Default templates.

Exporting Templates

You can export templates to a version control server, such as a Git repository.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > Sync Templates.

  2. Click Export.

  3. Each field is populated with values configured in Administer > Settings > TemplateSync. Change the values as required for the templates you want to export. For more information about each field, see Configuring the TemplateSync plug-in.

  4. Click Submit.

The orcharhino management UI displays the status of the export. The status is not persistent; if you leave the status page, you cannot return to it.

CLI procedure
  1. Clone a local copy of your Git repository:

    $ git clone https://github.com/theforeman/community-templates /custom/templates
  2. Change the owner of your local directory to the foreman user, and change the SELinux context with the following commands:

    # chown -R foreman:foreman /custom/templates
    # chcon -R -t httpd_sys_rw_content_t /custom/templates
  3. To export the templates to your local repository, enter the following command:

    $ hammer export-templates --organization 'Default Organization' --repo /custom/templates

    When exporting templates, avoid temporary directories like /tmp or /var/tmp because the backend service runs with systemd private temporary directories.

Synchronizing Templates Using the orcharhino API

Prerequisites
  • Each template must contain the location and organization that the template belongs to. This applies to all template types. Before you import a template, ensure that you add the following section to the template:

    <%#
    kind: provision
    name: My Kickstart File
    oses:
    - RedHat 7
    - RedHat 6
    locations:
    - First Location
    - Second Location
    organizations:
    - Default Organization
    - Extra Organization
    %>
Procedure
  1. Configure a version control system that uses SSH authorization, for example gitosis, gitolite, or git daemon.

  2. Configure the TemplateSync plug-in settings on a TemplateSync tab.

    1. Change the Branch setting to match the target branch on a Git server.

    2. Change the Repo setting to match the Git repository. For example, for the repository located at git@git.example.com/templates.git, set the value to ssh://git@git.example.com/templates.git.

  3. Accept Git SSH host key as the foreman user:

    # sudo -u foreman ssh git.example.com

    The Permission denied, please try again. message in the output is expected because the SSH connection cannot yet succeed.

  4. Create an SSH key pair if you do not already have one. Do not specify a passphrase.

    # sudo -u foreman ssh-keygen
  5. Configure your version control server with the public key from your orcharhino, which resides in /usr/share/foreman/.ssh/id_rsa.pub.

  6. Export templates from your orcharhino server to the version control repository specified in the TemplateSync menu:

    $ curl -H "Accept:application/json,version=2" \
    -H "Content-Type:application/json" \
    -u login:password \
    -k https://orcharhino.example.com/api/v2/templates/export \
    -X POST
    
    {"message":"Success"}
  7. Import templates to orcharhino server after their content was changed:

    $ curl -H "Accept:application/json,version=2" \
    -H "Content-Type:application/json" \
    -u login:password \
    -k https://orcharhino.example.com/api/v2/templates/import \
    -X POST
    
    {"message":"Success"}

    Note that templates provided by orcharhino are locked and you cannot import them by default. To override this behavior, change the Force import setting in the TemplateSync menu to yes or add the force parameter -d '{ "force": "true" }' to the import command.

Synchronizing Templates with a Local Directory Using the orcharhino API

Synchronizing templates with a local directory is useful if you have configured a version control repository in the local directory. That way, you can edit templates and track the history of edits in the directory. You can also synchronize changes to orcharhino server after editing the templates.

Prerequisites
  • Each template must contain the location and organization that the template belongs to. This applies to all template types. Before you import a template, ensure that you add the following section to the template:

    <%#
    kind: provision
    name: My Kickstart File
    oses:
    - RedHat 7
    - RedHat 6
    locations:
    - First Location
    - Second Location
    organizations:
    - Default Organization
    - Extra Organization
    %>
Procedure
  1. Create the directory where templates are stored and apply appropriate permissions and SELinux context:

    # mkdir -p /usr/share/templates_dir/
    # chown foreman /usr/share/templates_dir/
    # chcon -t httpd_sys_rw_content_t /usr/share/templates_dir/ -R
  2. Change the Repo setting on the TemplateSync tab to match the export directory /usr/share/templates_dir/.

  3. Export templates from your orcharhino server to a local directory:

    $ curl -H "Accept:application/json,version=2" \
    -H "Content-Type:application/json" \
    -u login:password \
    -k https://orcharhino.example.com/api/v2/templates/export \
    -X POST
    
    {"message":"Success"}
  4. Import templates to orcharhino server after their content was changed:

    $ curl -H "Accept:application/json,version=2" \
    -H "Content-Type:application/json" \
    -u login:password \
    -k https://orcharhino.example.com/api/v2/templates/import \
    -X POST
    
    {"message":"Success"}

    Note that templates provided by orcharhino are locked and you cannot import them by default. To override this behavior, change the Force import setting in the TemplateSync menu to yes or add the force parameter -d '{ "force": "true" }' to the import command.

You can override default API settings by specifying them in the request with the -d parameter. The following example exports templates to the git.example.com/templates repository:

$ curl -H "Accept:application/json,version=2" \
-H "Content-Type:application/json" \
-u login:password \
-k https://orcharhino.example.com/api/v2/templates/export \
-X POST \
-d "{\"repo\":\"git.example.com/templates\"}"

Advanced Git Configuration

You can perform additional Git configuration for the TemplateSync plug-in using the command line or editing the .gitconfig file.

Accepting a self-signed Git certificate

If you use self-signed certificate authentication on your Git server, validate the certificate with the git config http.sslCAPath command.

For example, the following command verifies a self-signed certificate stored in /cert/cert.pem:

# sudo -u foreman git config --global http.sslCAPath /cert/cert.pem
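
If you prefer editing the .gitconfig file directly, the equivalent entry in the foreman user's ~/.gitconfig would look roughly like the following sketch, using the same example certificate location:

[http]
    sslCAPath = /cert/cert.pem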

For a complete list of advanced options, see the git-config manual page.

Uninstalling the plug-in

To avoid errors after removing the foreman_templates plugin:

  1. Disable the plug-in using the orcharhino installer:

    # foreman-installer --no-enable-foreman-plugin-templates
  2. Clean custom data of the plug-in. The command does not affect any templates that you created.

    # foreman-rake templates:cleanup
  3. Uninstall the plug-in:

    # yum remove foreman-plugin-templates

Appendix A: Template Writing Reference

Embedded Ruby (ERB) is a tool for generating text files based on templates that combine plain text with Ruby code. orcharhino uses ERB syntax in the following cases:

Provisioning templates

For more information, see Creating Provisioning Templates in the Provisioning Guide.

Remote execution job templates

For more information, see Configuring and Setting up Remote Jobs.

Report templates

For more information, see Using Report Templates to Monitor Hosts.

Templates for partition tables

For more information, see Creating Partition Tables in the Provisioning Guide.

This section provides an overview of orcharhino-specific macros and variables that can be used in ERB templates along with some usage examples. Note that the default templates provided by orcharhino (Hosts > Provisioning templates, Hosts > Job templates, Monitor > Report Templates) also provide a good source of ERB syntax examples.

When provisioning a host or running a remote job, the code in the ERB tags is executed and the variables are replaced with host-specific values. This process is referred to as rendering. orcharhino server has the safemode rendering option enabled by default, which prevents any harmful code from being executed from templates.

Accessing the Template Writing Reference in the orcharhino management UI

You can access the template writing reference document in the orcharhino management UI.

  1. Log in to the orcharhino management UI.

  2. Navigate to Administer > About.

  3. Click the Templates DSL link in the Support section.

Writing ERB Templates

The following tags are the most important and commonly used in ERB templates:

<% %>

All Ruby code is enclosed within <% %> in an ERB template. The code is executed when the template is rendered. It can contain Ruby control flow structures as well as orcharhino-specific macros and variables. For example:

<% if @host.operatingsystem.family == "Redhat" && @host.operatingsystem.major.to_i > 6 -%>
systemctl <%= input("action") %> <%= input("service") %>
<% else -%>
service <%= input("service") %> <%= input("action") %>
<% end -%>

Note that this template silently performs an action with a service and produces no output.

<%= %>

This provides the same functionality as <% %> but when the template is executed, the code output is inserted into the template. This is useful for variable substitution, for example:

Example input:

echo <%= @host.name %>

Example rendering:

host.example.com

Example input:

<% server_name = @host.fqdn %>
<%= server_name %>

Example rendering:

host.example.com

Note that if you enter an incorrect variable, no output is returned. However, if you try to call a method on an incorrect variable, the following error message is returned:

Example input:

<%= @example_incorrect_variable.fqdn -%>

Example rendering:

undefined method `fqdn' for nil:NilClass
<% -%>, <%= -%>

By default, a newline character is inserted after a Ruby block if it is closed at the end of a line:

Example input:

<%= "line1" %>
<%= "line2" %>

Example rendering:

line1
line2

To change the default behavior, modify the enclosing mark with -%>:

Example input:

<%= "line1" -%>
<%= "line2" %>

Example rendering:

line1line2

This is used to reduce the number of lines, where Ruby syntax permits, in rendered templates. White spaces in ERB tags are ignored.

An example of how this would be used in a report template to remove unnecessary newlines between an FQDN and an IP address:

Example input:

<%= @host.fqdn -%>
<%= @host.ip -%>

Example rendering:

host.example.com10.10.181.216
<%# %>

Encloses a comment that is ignored during template rendering:

Example input:

<%# A comment %>

This generates no output.

Indentation in ERB templates

Because of the varying lengths of the ERB tags, indenting the ERB syntax might seem messy. ERB syntax ignores white space. One method of handling the indentation is to declare the ERB tag at the beginning of each new line and then use white space within the ERB tag to outline the relationships within the syntax, for example:

<%- load_hosts.each do |host| -%>
<%-   if host.build? %>
<%=     host.name %> build is in progress
<%-   end %>
<%- end %>

Troubleshooting ERB Templates

The orcharhino management UI provides two ways to verify the template rendering for a specific host:

  • Directly in the template editor – when editing a template (under Hosts > Partition tables, Hosts > Provisioning templates, or Hosts > Job templates), on the Template tab click Preview and select a host from the list. The template then renders in the text field using the selected host’s parameters. Preview failures can help to identify issues in your template.

  • At the host’s details page – select a host at Hosts > All hosts and click the Templates tab to list templates associated with the host. Select Review from the list next to the selected template to view its rendered version.

Generic orcharhino-Specific Macros

This section lists orcharhino-specific macros for ERB templates.

You can use the macros listed in the following table across all kinds of templates.

Table 5. Generic Macros
Name Description

indent(n)

Indents the block of code by n spaces, useful when using a snippet template that is not indented.

foreman_url(kind)

Returns the full URL to host-rendered templates of the given kind. For example, templates of the "provision" type usually reside at http://HOST/unattended/provision.

snippet(name)

Renders the specified snippet template. Useful for nesting provisioning templates.

snippets(file)

Renders the specified snippet found in the Foreman database and attempts to load it from the unattended/snippets/ directory if it is not found in the database.

snippet_if_exists(name)

Renders the specified snippet, skips if no snippet with the specified name is found.

Templates Macros

If you want to write custom templates, you can use some of the following macros.

Depending on the template type, some of the following macros have different requirements.

For more information about the available macros for report templates, in the orcharhino management UI, navigate to Monitor > Report Templates, and click Create Template. In the Create Template window, click the Help tab.

For more information about the available macros for job templates, in the orcharhino management UI, navigate to Hosts > Job Templates, and click New Job Template. In the New Job Template window, click the Help tab.

input

Using the input macro, you can customize the input data that the template can work with. You can define the input name, type, and the options that are available for users. For report templates, you can only use user inputs. When you define a new input and save the template, you can then reference the input in the ERB syntax of the template body.

<%= input('cpus') %>

This loads the value from user input cpus.

load_hosts

Using the load_hosts macro, you can generate a complete list of hosts.

<%- load_hosts().each_record do |host| -%>
<%=     host.name %>
<%- end -%>

Use the load_hosts macro with the each_record macro to load records in batches of 1000 to reduce memory consumption.

If you want to filter the list of hosts for the report, you can add the option search: input('Example_Host'):

<% load_hosts(search: input('Example_Host')).each_record do |host| -%>
<%=  host.name %>
<% end -%>

In this example, you first create an input that you then use to refine the search criteria that the load_hosts macro retrieves.

report_row

Using the report_row macro, you can create a formatted report for ease of analysis. The report_row macro requires the report_render macro to generate the output.

Example input:
<%- load_hosts(search: input('Example_Host')).each_record do |host| -%>
<%-   report_row(
        'Server FQDN': host.name
      ) -%>
<%- end -%>
<%= report_render -%>
Example rendering:
Server FQDN
host1.example.com
host2.example.com
host3.example.com
host4.example.com
host5.example.com
host6.example.com

You can add extra columns to the report by adding another header. The following example adds IP addresses to the report:

Example input:
<%- load_hosts(search: input('host')).each_record do |host| -%>
<%-   report_row(
        'Server FQDN': host.name,
        'IP': host.ip
      ) -%>
<%- end -%>
<%= report_render -%>
Example rendering:
Server FQDN,IP
host1.example.com,10.8.30.228
host2.example.com,10.8.30.227
host3.example.com,10.8.30.226
host4.example.com,10.8.30.225
host5.example.com,10.8.30.224
host6.example.com,10.8.30.223
report_render

This macro is available only for report templates.

Using the report_render macro, you create the output for the report. During the template rendering process, you can select the format that you want for the report. YAML, JSON, HTML, and CSV formats are supported.

<%= report_render -%>
render_template()

This macro is available only for job templates.

Using this macro, you can render a specific template. You can also enable and define arguments that you want to pass to the template.
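
For example, a job template can reuse one of the default templates referenced in Appendix B and pass arguments to it; the service name sshd below is an arbitrary example:

<%= render_template 'Service Action - SSH Default', :action => 'status', :service_name => 'sshd' %>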

Host-Specific Variables

The following variables enable using host data within templates. Note that job templates accept only @host variables.

Table 6. Host Specific Variables and Macros
Name Description

@host.architecture

The architecture of the host.

@host.bond_interfaces

Returns an array of all bonded interfaces. See Parsing Arrays.

@host.capabilities

The method of system provisioning, which can be either build (for example kickstart) or image.

@host.certname

The SSL certificate name of the host.

@host.diskLayout

The disk layout of the host. Can be inherited from the operating system.

@host.domain

The domain of the host.

@host.environment

The Puppet environment of the host.

@host.facts

Returns a Ruby hash of facts from Facter. For example, to access the 'ipaddress' fact from the output, specify @host.facts['ipaddress'].

@host.grub_pass

Returns the host’s GRUB password.

@host.hostgroup

The host group of the host.

host_enc['parameters']

Returns a Ruby hash containing information on host parameters. For example, use host_enc['parameters']['lifecycle_environment'] to get the life cycle environment of a host.

@host.image_build?

Returns true if the host is provisioned using an image.

@host.interfaces

Contains an array of all available host interfaces including the primary interface. See Parsing Arrays.

@host.interfaces_with_identifier('IDs')

Returns an array of interfaces with the given identifier. You can pass an array of multiple identifiers as an input, for example @host.interfaces_with_identifier(['eth0', 'eth1']). See Parsing Arrays.

@host.ip

The IP address of the host.

@host.location

The location of the host.

@host.mac

The MAC address of the host.

@host.managed_interfaces

Returns an array of managed interfaces (excluding BMC and bonded interfaces). See Parsing Arrays.

@host.medium

The assigned operating system installation medium.

@host.name

The full name of the host.

@host.operatingsystem.family

The operating system family.

@host.operatingsystem.major

The major version number of the assigned operating system.

@host.operatingsystem.minor

The minor version number of the assigned operating system.

@host.operatingsystem.name

The assigned operating system name.

@host.operatingsystem.boot_files_uri(medium_provider)

Full path to the kernel and initrd, returns an array.

@host.os.medium_uri(@host)

The URI used for provisioning (path configured in installation media).

host_param('parameter_name')

Returns the value of the specified host parameter.

host_param_false?('parameter_name')

Returns false if the specified host parameter evaluates to false.

host_param_true?('parameter_name')

Returns true if the specified host parameter evaluates to true.

@host.primary_interface

Returns the primary interface of the host.

@host.provider

The compute resource provider.

@host.provision_interface

Returns the provisioning interface of the host. Returns an interface object.

@host.ptable

The partition table name.

@host.puppet_ca_server

The Puppet CA server the host must use.

@host.puppetmaster

The Puppet server the host must use.

@host.pxe_build?

Returns true if the host is provisioned using the network or PXE.

@host.shortname

The short name of the host.

@host.sp_ip

The IP address of the BMC interface.

@host.sp_mac

The MAC address of the BMC interface.

@host.sp_name

The name of the BMC interface.

@host.sp_subnet

The subnet of the BMC network.

@host.subnet.dhcp

Returns true if a DHCP proxy is configured for this host.

@host.subnet.dns_primary

The primary DNS server of the host.

@host.subnet.dns_secondary

The secondary DNS server of the host.

@host.subnet.gateway

The gateway of the host.

@host.subnet.mask

The subnet mask of the host.

@host.url_for_boot(:initrd)

Full path to the initrd image associated with this host. Not recommended, as it does not interpolate variables.

@host.url_for_boot(:kernel)

Full path to the kernel associated with this host. Not recommended, as it does not interpolate variables; prefer boot_files_uri instead.

@provisioning_type

Equals 'host' or 'hostgroup', depending on the type of provisioning.

@static

Returns true if the network configuration is static.

@template_name

Name of the template being rendered.

grub_pass

Returns the GRUB password wrapped in the md5pass argument, that is, --md5pass=#{@host.grub_pass}.

ks_console

Returns a string assembled using the port and the baud rate of the host which can be added to a kernel line. For example console=ttyS1,9600.

root_pass

Returns the root password configured for the system.

The majority of common Ruby methods can be applied to host-specific variables. For example, to extract the last segment of the host’s IP address, you can use:

<% @host.ip.split('.').last %>

Kickstart-Specific Variables

The following variables are designed to be used within kickstart provisioning templates.

Table 7. Kickstart Specific Variables
Name Description

@arch

The host architecture name, same as @host.architecture.name.

@dynamic

Returns true if the partition table being used is a %pre script (has the #Dynamic option as the first line of the table).

@epel

A command which will automatically install the correct version of the epel-release rpm. Use in a %post script.

@mediapath

The full kickstart line to provide the URL command.

@osver

The operating system major version number, same as @host.operatingsystem.major.
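
As an illustration, the following is a minimal sketch of how these variables might appear in a custom kickstart template; the surrounding kickstart structure is an assumption:

<%= @mediapath %>

%post
<%= @epel %>
%end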

Conditional Statements

In your templates, you might perform different actions depending on which value exists. To achieve this, you can use conditional statements in your ERB syntax.

In the following example, the ERB syntax searches for a specific host name and returns an output depending on the value it finds:

Example input:

<% load_hosts().each_record do |host| -%>
<% if host.name == "host1.example.com" -%>
<%      result="positive" -%>
<%  else -%>
<%      result="negative" -%>
<%  end -%>
<%= host.name %>
<%= result -%>
<% end -%>

Example rendering:

host1.example.com
positive

Parsing Arrays

While writing or modifying templates, you might encounter variables that return arrays. For example, host variables related to network interfaces, such as @host.interfaces or @host.bond_interfaces, return interface data grouped in an array. To extract a parameter value of a specific interface, use Ruby methods to parse the array.
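
For example, the following is a minimal sketch that reads a single value from the array using methods allowed in safe mode; see the procedure below for how to find the allowed methods:

<%= @host.interfaces.first.ip %>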

Finding the Correct Method to Parse an Array

The following procedure is an example that you can use to find the relevant methods to parse arrays in your template. In this example, a report template is used, but the steps are applicable to other templates.

  1. In this example, render the @host.interfaces variable to retrieve the NIC array of a content host. The output shows the class names that you can then use to find methods to parse the array.

    Example input:
    <%= @host.interfaces -%>
    Example rendering:
    <Nic::Base::ActiveRecord_Associations_CollectionProxy:0x00007f734036fbe0>
  2. In the Create Template window, click the Help tab and search for the ActiveRecord_Associations_CollectionProxy and Nic::Base classes.

  3. For ActiveRecord_Associations_CollectionProxy, in the Allowed methods or members column, you can view the following methods to parse the array:

    [] each find_in_batches first map size to_a
  4. For Nic::Base, in the Allowed methods or members column, you can view the following method to parse the array:

    alias? attached_devices attached_devices_identifiers attached_to bond_options children_mac_addresses domain fqdn identifier inheriting_mac ip ip6 link mac managed? mode mtu nic_delay physical? primary provision shortname subnet subnet6 tag virtual? vlanid
  5. To iterate through an interface array, add the relevant methods to the ERB syntax:

    Example input:
    <% load_hosts().each_record do |host| -%>
    <%= host.name %>
    <%    host.interfaces.each do |iface| -%>
      iface.alias?: <%= iface.alias? %>
      iface.attached_to: <%= iface.attached_to %>
      iface.bond_options: <%= iface.bond_options %>
      iface.children_mac_addresses: <%= iface.children_mac_addresses %>
      iface.domain: <%= iface.domain %>
      iface.fqdn: <%= iface.fqdn %>
      iface.identifier: <%= iface.identifier %>
      iface.inheriting_mac: <%= iface.inheriting_mac %>
      iface.ip: <%= iface.ip %>
      iface.ip6: <%= iface.ip6 %>
      iface.link: <%= iface.link %>
      iface.mac: <%= iface.mac %>
      iface.managed?: <%= iface.managed? %>
      iface.mode: <%= iface.mode %>
      iface.mtu: <%= iface.mtu %>
      iface.physical?: <%= iface.physical? %>
      iface.primary: <%= iface.primary %>
      iface.provision: <%= iface.provision %>
      iface.shortname: <%= iface.shortname %>
      iface.subnet: <%= iface.subnet %>
      iface.subnet6: <%= iface.subnet6 %>
      iface.tag: <%= iface.tag %>
      iface.virtual?: <%= iface.virtual? %>
      iface.vlanid: <%= iface.vlanid %>
    <%- end -%>
    <%- end -%>
    Example rendering:
    host1.example.com
      iface.alias?: false
      iface.attached_to:
      iface.bond_options:
      iface.children_mac_addresses: []
      iface.domain:
      iface.fqdn: host1.example.com
      iface.identifier: ens192
      iface.inheriting_mac: 00:50:56:8d:4c:cf
      iface.ip: 10.10.181.13
      iface.ip6:
      iface.link: true
      iface.mac: 00:50:56:8d:4c:cf
      iface.managed?: true
      iface.mode: balance-rr
      iface.mtu:
      iface.physical?: true
      iface.primary: true
      iface.provision: true
      iface.shortname: host1.example.com
      iface.subnet:
      iface.subnet6:
      iface.tag:
      iface.virtual?: false
      iface.vlanid:

Example Template Snippets

Checking if a Host Has Puppet and Puppetlabs Enabled

The following example checks if the host has the Puppet and Puppetlabs repositories enabled:

<%
pm_set = @host.puppetmaster.empty? ? false : true
puppet_enabled = pm_set || host_param_true?('force-puppet')
puppetlabs_enabled = host_param_true?('enable-puppetlabs-repo')
%>
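
A sketch of how such flags might then gate later parts of the template; the snippet name puppet_setup is a hypothetical placeholder:

<% if puppet_enabled -%>
<%= snippet_if_exists 'puppet_setup' %>
<% end -%>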
Capturing Major and Minor Versions of a Host’s Operating System

The following example shows how to capture the minor and major version of the host’s operating system, which can be used for package related decisions:

<%
os_major = @host.operatingsystem.major.to_i
os_minor = @host.operatingsystem.minor.to_i
%>

<% if ((os_minor < 2) && (os_major < 14)) -%>
...
<% end -%>
Importing Snippets to a Template

The following example imports the subscription_manager_registration snippet to the template and indents it by four spaces:

<%= indent 4 do
snippet 'subscription_manager_registration'
end %>
Conditionally Importing a Kickstart Snippet

The following example imports the kickstart_networking_setup snippet if the host’s subnet has the DHCP boot mode enabled:

<% subnet = @host.subnet %>
<% if subnet.respond_to?(:dhcp_boot_mode?) -%>
<%= snippet 'kickstart_networking_setup' %>
<% end -%>
Parsing Values from Host Custom Facts

You can use the host.facts variable to parse values from a host’s facts and custom facts.

In this example luks_stat is a custom fact that you can parse in the same manner as dmi::system::serial_number, which is a host fact:

'Serial': host.facts['dmi::system::serial_number'],
'Encrypted': host.facts['luks_stat'],

In this example, you can customize the Applicable Errata report template to parse for custom information about the kernel version of each host:

<%-     report_row(
          'Host': host.name,
          'Operating System': host.operatingsystem,
          'Kernel': host.facts['uname::release'],
          'Environment': host.lifecycle_environment,
          'Erratum': erratum.errata_id,
          'Type': erratum.errata_type,
          'Published': erratum.issued,
          'Applicable since': erratum.created_at,
          'Severity': erratum.severity,
          'Packages': erratum.package_names,
          'CVEs': erratum.cves,
          'Reboot suggested': erratum.reboot_suggested,
        ) -%>

Appendix B: Job Template Examples and Extensions

Use this section as a reference to help modify, customize, and extend your job templates to suit your requirements.

Customizing Job Templates

When creating a job template, you can include an existing template in the template editor field. This way you can combine templates, or create more specific templates from the general ones.

The following template combines default templates to install and start the httpd service on Red Hat Enterprise Linux systems:

<%= render_template 'Package Action - SSH Default', :action => 'install', :package => 'httpd' %>
<%= render_template 'Service Action - SSH Default', :action => 'start', :service_name => 'httpd' %>

The above template specifies parameter values for the rendered template directly. It is also possible to use the input() method to allow users to define input for the rendered template on job execution. For example, you can use the following syntax:

<%= render_template 'Package Action - SSH Default', :action => 'install', :package => input("package") %>

With the above template, you have to import the parameter definition from the rendered template. To do so, navigate to the Jobs tab, click Add Foreign Input Set, and select the rendered template from the Target template list. You can import all parameters or specify a comma separated list.

Default Job Template Categories

Job template category Description

Packages

Templates for performing package related actions. Install, update, and remove actions are included by default.

Puppet

Templates for executing Puppet runs on target hosts.

Power

Templates for performing power related actions. Restart and shutdown actions are included by default.

Commands

Templates for executing custom commands on remote hosts.

Services

Templates for performing service related actions. Start, stop, restart, and status actions are included by default.

Katello

Templates for performing content related actions. These templates are used mainly from different parts of the orcharhino management UI (for example bulk actions UI for content hosts), but can be used separately to perform operations such as errata installation.

Example restorecon Template

This example shows how to create a template called Run Command - restorecon that restores the default SELinux context for all files in the selected directory on target hosts.

  1. Navigate to Hosts > Job templates. Click New Job Template.

  2. Enter Run Command - restorecon in the Name field. Select Default to make the template available to all organizations. Add the following text to the template editor:

    restorecon -RvF <%= input("directory") %>

    The <%= input("directory") %> string is replaced by a user-defined directory during job invocation.

  3. On the Job tab, set Job category to Commands.

  4. Click Add Input to allow job customization. Enter directory in the Name field. The input name must match the value specified in the template editor.

  5. Click Required so that the command cannot be executed without the user specified parameter.

  6. Select User input from the Input type list. Enter a description to be shown during job invocation, for example Target directory for restorecon.

  7. Click Submit.

See Executing a restorecon Template on Multiple Hosts for information on how to execute a job based on this template.

Rendering a restorecon Template

This example shows how to create a template derived from the Run Command - restorecon template created in Example restorecon Template. This template does not require user input on job execution; it restores the SELinux context in all files under the /home/ directory on target hosts.

Create a new template as described in Setting up Job Templates, and specify the following string in the template editor:

<%= render_template("Run Command - restorecon", :directory => "/home") %>

Executing a restorecon Template on Multiple Hosts

This example shows how to run a job based on the template created in Example restorecon Template on multiple hosts. The job restores the SELinux context in all files under the /home/ directory.

  1. Navigate to Hosts > All hosts and select target hosts. Select Schedule Remote Job from the Select Action list.

  2. In the Job invocation page, select the Commands job category and the Run Command - restorecon job template.

  3. Type /home in the directory field.

  4. Set Schedule to Execute now.

  5. Click Submit. You are taken to the Job invocation page where you can monitor the status of job execution.

Including Power Actions in Templates

This example shows how to set up a job template for performing power actions, such as reboot. This procedure prevents orcharhino from interpreting the disconnect exception upon reboot as an error, and consequently, remote execution of the job works correctly.

Create a new template as described in Setting up Job Templates, and specify the following string in the template editor:

<%= render_template("Power Action - SSH Default", :action => "restart") %>

The text and illustrations on this page are licensed by ATIX AG under a Creative Commons Attribution–Share Alike 3.0 Unported ("CC-BY-SA") license. This page also contains text from the official Foreman documentation which uses the same license ("CC-BY-SA").