Managing Hosts Guide
Overview of Hosts in orcharhino
A host is any Linux client that orcharhino manages. Hosts can be physical or virtual. Virtual hosts can be deployed on any platform supported by orcharhino, such as Amazon EC2, Google Compute Engine, libvirt, Microsoft Azure, Oracle Linux Virtualization Manager, oVirt, Proxmox, RHV, and VMware vSphere.
orcharhino enables host management at scale, including monitoring, provisioning, remote execution, configuration management, software management, and subscription management. You can manage your hosts from the orcharhino management UI or from the command line.
In the orcharhino management UI, you can browse all hosts recognized by orcharhino Server, grouped by type:
-
All Hosts - a list of all hosts recognized by orcharhino Server.
-
Discovered Hosts - a list of bare-metal hosts detected on the provisioning network by the Discovery plug-in.
-
Content Hosts - a list of hosts that manage tasks related to content and subscriptions.
-
Host Collections - a list of user-defined collections of hosts used for bulk actions such as errata installation.
To search for a host, type in the Search field, and use an asterisk (*) to perform a partial string search.
For example, if searching for a content host named dev-node.example.com, click the Content Hosts page and type dev-node* in the Search field. Alternatively, *node* will also find the content host dev-node.example.com.
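You can run the same kind of partial search from the command line. The following is a minimal sketch using the hammer CLI; the host name is an example:
# hammer host list --search "name ~ dev-node"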
orcharhino Server is listed as a host itself even if it is not self-registered. Do not delete orcharhino Server from the list of hosts.
For a better understanding of managing hosts using orcharhino, see our glossary for terminology and key terms. Key terms include patch and release management, configuration management using Ansible, Puppet, or Salt, and host groups. For more information, see the Managing Hosts guide.
Administering Hosts
This chapter describes creating, registering, administering, and removing hosts.
Creating a Host in orcharhino
Use this procedure to create a host in orcharhino. To use the CLI instead of the orcharhino management UI, see the CLI procedure.
-
In the orcharhino management UI, click Hosts > Create Host.
-
On the Host tab, enter the required details.
-
Click the Ansible Roles tab, and from the Ansible Roles list, select one or more roles that you want to add to the host. Use the arrow icon to manage the roles that you add or remove.
-
On the Puppet Classes tab, select the Puppet classes you want to include.
-
On the Interfaces tab:
-
For each interface, click Edit in the Actions column and configure the following settings as required:
-
Type — For a Bond or BMC interface, use the Type list and select the interface type.
-
MAC address — Enter the MAC address.
-
DNS name — Enter the DNS name that is known to the DNS server. This is used for the host part of the FQDN.
-
Domain — Select the domain name of the provisioning network. This automatically updates the Subnet list with a selection of suitable subnets.
-
IPv4 Subnet — Select an IPv4 subnet for the host from the list.
-
IPv6 Subnet — Select an IPv6 subnet for the host from the list.
-
IPv4 address — If IP address management (IPAM) is enabled for the subnet, the IP address is automatically suggested. Alternatively, you can enter an address. The address can be omitted if provisioning tokens are enabled, if the domain does not manage DNS, if the subnet does not manage reverse DNS, or if the subnet does not manage DHCP reservations.
-
IPv6 address — If IP address management (IPAM) is enabled for the subnet, the IP address is automatically suggested. Alternatively, you can enter an address.
-
Managed — Select this check box to configure the interface during provisioning to use the orcharhino Proxy provided DHCP and DNS services.
-
Primary — Select this check box to use the DNS name from this interface as the host portion of the FQDN.
-
Provision — Select this check box to use this interface for provisioning. This means TFTP boot will take place using this interface, or in case of image-based provisioning, the script to complete the provisioning will be executed through this interface. Note that many provisioning tasks, such as downloading RPMs by Anaconda or Puppet setup in a %post script, use the primary interface.
-
Virtual NIC — Select this check box if this interface is not a physical device. This setting has two options:
-
Tag — Optionally set a VLAN tag. If unset, the tag will be the VLAN ID of the subnet.
-
Attached to — Enter the device name of the interface this virtual interface is attached to.
-
-
-
Click OK to save the interface configuration.
-
Optionally, click Add Interface to include an additional network interface. For more information, see Adding Network Interfaces.
-
Click Submit to apply the changes and exit.
-
-
On the Operating System tab, enter the required details. For Red Hat operating systems, select Synced Content for Media Selection. If you want to use non-Red Hat operating systems, select All Media, then select the installation media from the Media Selection list. You can select a partition table from the list or enter a custom partition table in the Custom partition table field. You cannot specify both.
-
On the Parameters tab, click Add Parameter to add any parameter variables that you want to pass to job templates at run time. This includes all Puppet class parameters, Ansible playbook parameters, and host parameters that you want to associate with the host. To use a parameter variable with an Ansible job template, you must add a Host Parameter. A CLI sketch for setting a host parameter follows this procedure.
-
On the Additional Information tab, enter additional information about the host.
-
Click Submit to complete your provisioning request.
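If you prefer the CLI for host parameters, you can also set a parameter on an existing host with hammer. This is a minimal sketch; the host, parameter name, and value are placeholders:
# hammer host set-parameter \
--host "My_Host_Name" \
--name "My_Parameter" \
--value "My_Value"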
-
To create a host associated to a host group, enter the following command:
# hammer host create \ --name "My_Host_Name" \ --hostgroup "My_Host_Group" \ --interface="primary=true, \ provision=true, \ mac=mac_address, \ ip=ip_address" \ --organization "My_Organization" \ --location "My_Location" \ --ask-root-password yes
This command prompts you to specify the root password. You must specify the host’s IP and MAC address. Other properties of the primary network interface can be inherited from the host group or set using the --subnet and --domain parameters. You can set additional interfaces using the --interface option, which accepts a list of key-value pairs. For the list of available interface settings, enter the hammer host create --help command.
Cloning Hosts
You can clone existing hosts.
-
In the orcharhino management UI, navigate to Hosts > All Hosts.
-
In the Actions menu, click Clone.
-
On the Host tab, ensure that you provide a Name different from that of the original host.
-
On the Interfaces tab, ensure that you provide a different IP address.
-
Click Submit to clone the host.
For more information, see Creating a Host in orcharhino.
Changing a Module Stream for a Host
If you have a host running Red Hat Enterprise Linux 8 or a clone operating system, you can modify the module streams for the repositories you install.
After you create the host, you can enable, disable, install, update, and remove module streams from your host in the orcharhino management UI.
-
In the orcharhino management UI, navigate to Hosts > Content Hosts and click the name of the host that contains the modules you want to change.
-
Click the Module Streams tab.
-
From the Available Module Streams list, locate the module that you want to change. You can use the Filter field to refine the list entries. You can also use the Filter Status list to search for modules with a specific status.
-
From the Actions list, select the change that you want to make to the module.
-
In the Job Invocation window, ensure that the job information is accurate. Change any details that you require, and then click Submit.
Creating a Host Group
If you create a high volume of hosts, many of the hosts can have common settings and attributes. Adding these settings and attributes for every new host is time consuming. If you use host groups, you can apply common attributes to hosts that you create.
A host group functions as a template for common host settings, containing many of the same details that you provide to hosts. When you create a host with a host group, the host inherits the defined settings from the host group. You can then provide additional details to individualize the host.
To use the CLI instead of the orcharhino management UI, see the CLI procedure.
You can create a hierarchy of host groups. Aim to have one base level host group that represents all hosts in your organization and provides general settings, and then nested groups to provide specific settings. For example, you can have a base level host group that defines the operating system, and two nested host groups that inherit the base level host group:
-
Hostgroup:
Base
(Red Hat Enterprise Linux 7.6)-
Hostgroup:
Webserver
(applies thehttpd
Puppet class)-
Host:
webserver1.example.com
(web server) -
Host:
webserver2.example.com
(web server)
-
-
Hostgroup:
Storage
(applies thenfs
Puppet class)-
Host:
storage1.example.com
(storage server) -
Host:
storage2.example.com
(storage server)
-
-
Host:
custom.example.com
(custom host)
-
In this example, all hosts use Red Hat Enterprise Linux 7.6 as their operating system because of their inheritance of the Base
host group.
The two web server hosts inherit the settings from the Webserver
host group, which includes the httpd
Puppet class and the settings from the Base
host group.
The two storage servers inherit the settings from the Storage
host group, which includes the nfs
Puppet class and the settings from the Base
host group.
The custom host only inherits the settings from the Base
host group.
-
In the orcharhino management UI, navigate to Configure > Host Groups and click Create Host Group.
-
If you have an existing host group that you want to inherit attributes from, you can select a host group from the Parent list. If you do not, leave this field blank.
-
Enter a Name for the new host group.
-
Enter any further information that you want future hosts to inherit.
-
Click the Ansible Roles tab, and from the Ansible Roles list, select one or more roles that you want to add to the host. Use the arrow icon to manage the roles that you add or remove.
-
Click the additional tabs and add any details that you want to attribute to the host group.
Puppet fails to retrieve the Puppet CA certificate while registering a host with a host group that is associated with a Puppet environment created inside a Production environment.
To create a suitable Puppet environment to be associated with a host group, manually create a directory and change the owner:
# mkdir /etc/puppetlabs/code/environments/example_environment # chown apache /etc/puppetlabs/code/environments/example_environment
-
Click Submit to save the host group.
-
Create the host group with the
hammer hostgroup create
command. For example:# hammer hostgroup create --name "Base" \ --architecture "My_Architecture" \ --content-source-id _My_Content_Source_ID_ \ --content-view "_My_Content_View_" \ --domain "_My_Domain_" \ --lifecycle-environment "_My_Lifecycle_Environment_" \ --locations "_My_Location_" \ --medium-id _My_Installation_Medium_ID_ \ --operatingsystem "_My_Operating_System_" \ --organizations "_My_Organization_" \ --partition-table "_My_Partition_Table_" \ --puppet-ca-proxy-id _My_Puppet_CA_Proxy_ID_ \ --puppet-environment "_My_Puppet_Environment_" \ --puppet-proxy-id _My_Puppet_Proxy_ID_ \ --root-pass "My_Password" \ --subnet "_My_Subnet_"
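To build the hierarchy from the earlier example on the command line, you can create nested host groups by passing the parent group. A minimal sketch, assuming the Base host group already exists; Puppet classes and other attributes can be added in the orcharhino management UI or with additional options:
# hammer hostgroup create \
--name "Webserver" \
--parent "Base" \
--organizations "My_Organization" \
--locations "My_Location"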
Creating a Host Group for Each Lifecycle Environment
Use this procedure to create a host group for the Library lifecycle environment and add nested host groups for other lifecycle environments.
To create a host group for each life cycle environment, run the following Bash script:
MAJOR="My_Major_OS_Version"
ARCH="My_Architecture"
ORG="My_Organization"
LOCATIONS="My_Location"
PTABLE_NAME="My_Partition_Table"
DOMAIN="My_Domain"
hammer --output csv --no-headers lifecycle-environment list --organization "${ORG}" | cut -d ',' -f 2 | while read LC_ENV; do
[[ ${LC_ENV} == "Library" ]] && continue
hammer hostgroup create --name "rhel-${MAJOR}server-${ARCH}-${LC_ENV}" \
--architecture "${ARCH}" \
--partition-table "${PTABLE_NAME}" \
--domain "${DOMAIN}" \
--organizations "${ORG}" \
--query-organization "${ORG}" \
--locations "${LOCATIONS}" \
--lifecycle-environment "${LC_ENV}"
done
Changing the Host Group of a Host
Use this procedure to change the Host Group of a host.
If you reprovision a host after changing the host group, the fresh values that the host inherits from the host group will be applied.
-
In the orcharhino management UI, navigate to Hosts > All hosts.
-
Select the check box of the host you want to change.
-
From the Select Action list, select Change Group. A new option window opens.
-
From the Host Group list, select the group that you want for your host.
-
Click Submit.
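You can also change the host group of an existing host with hammer. A minimal sketch; the host and host group names are placeholders:
# hammer host update \
--name "My_Host_Name" \
--hostgroup "My_Host_Group"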
Changing the Environment of a Host
Use this procedure to change the environment of a host.
-
In the orcharhino management UI, navigate to Hosts > All hosts.
-
Select the check box of the host you want to change.
-
From the Select Action list, select Change Environment. A new option window opens.
-
From the Environment list, select the new environment for your host.
-
Click Submit.
Changing the Managed Status of a Host
Hosts provisioned by orcharhino are Managed by default. When a host is set to Managed, you can configure additional host parameters from orcharhino Server. These additional parameters are listed on the Operating System tab. If you change any settings on the Operating System tab, they will not take effect until you set the host to build and reboot it.
If you need to obtain reports about configuration management on systems using an operating system not supported by orcharhino, set the host to Unmanaged.
-
In the orcharhino management UI, navigate to Hosts > All hosts.
-
Select the host.
-
Click Edit.
-
Click Manage host or Unmanage host to change the host’s status.
-
Click Submit.
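If you manage hosts from the CLI, you can likewise toggle the managed flag with hammer host update. This is a sketch, assuming your hammer version exposes the --managed option on host update:
# hammer host update \
--name "My_Host_Name" \
--managed false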
Assigning a Host to a Specific Organization
Use this procedure to assign a host to a specific organization. For general information about organizations and how to configure them, see Managing Organizations in the Content Management Guide.
-
In the orcharhino management UI, navigate to Hosts > All hosts.
-
Select the check box of the host you want to change.
-
From the Select Action list, select Assign Organization. A new option window opens.
-
From the Select Organization list, select the organization that you want to assign your host to. Select the check box Fix Organization on Mismatch.
A mismatch happens if a resource associated with a host, such as a domain or subnet, is not also associated with the organization you want to assign the host to. The Fix Organization on Mismatch option adds such a resource to the organization and is therefore the recommended choice. The Fail on Mismatch option always results in an error message; for example, reassigning a host from one organization to another fails even if there is no actual mismatch in settings.
-
Click Submit.
Assigning a Host to a Specific Location
Use this procedure to assign a host to a specific location. For general information about locations and how to configure them, see Creating a Location in the Content Management Guide.
-
In the orcharhino management UI, navigate to Hosts > All hosts.
-
Select the check box of the host you want to change.
-
From the Select Action list, select Assign Location. A new option window opens.
-
Navigate to the Select Location list and choose the location that you want for your host. Select the check box Fix Location on Mismatch.
A mismatch happens if a resource associated with a host, such as a domain or subnet, is not also associated with the location you want to assign the host to. The Fix Location on Mismatch option adds such a resource to the location and is therefore the recommended choice. The Fail on Mismatch option always results in an error message; for example, reassigning a host from one location to another fails even if there is no actual mismatch in settings.
-
Click Submit.
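Organization and location assignments can also be changed from the CLI. The following is a minimal sketch with hammer host update, assuming the organization and location already exist; it simply sets both values on the host:
# hammer host update \
--name "My_Host_Name" \
--organization "My_Organization" \
--location "My_Location"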
Removing a Host from orcharhino
Use this procedure to remove a host from orcharhino.
-
In the orcharhino management UI, navigate to Hosts > All hosts or Hosts > Content Hosts. It makes no difference from which page you remove a host; in both cases, orcharhino removes the host completely.
-
Select the hosts that you want to remove.
-
From the Select Action list, select Delete Hosts.
-
Click Submit to remove the host from orcharhino permanently.
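To remove a host from the CLI instead, a single hammer command is sufficient; the host name is a placeholder:
# hammer host delete --name "My_Host_Name"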
By default, deleting a host does not delete the associated virtual machine on the compute resource. To also delete the virtual machine on the compute resource when you delete a host, navigate to Administer > Settings, select the Provisioning tab, and adjust the corresponding setting.
Disassociating A Virtual Machine from orcharhino without Removing It from a Hypervisor
-
In the orcharhino management UI, navigate to Hosts > All Hosts and select the check box to the left of the hosts to be disassociated.
-
From the Select Action list, select Disassociate Hosts.
-
Optional: Select the check box to keep the hosts for future action.
-
Click Submit.
Registering Hosts to orcharhino
There are three main methods for registering hosts to orcharhino Server or orcharhino Proxy:
-
(Default method) Generate a
curl
command from orcharhino and run this command on any number of hosts to register them using the global registration template. This method is suited for hosts that have not been registered to orcharhino yet. By using this method, you can also deploy orcharhino SSH keys to hosts during registration to enable remote execution jobs on those hosts. For more information about remote execution jobs, see Configuring and Setting up Remote Jobs.
By using this method you can also configure hosts with Red Hat Insights during registration to orcharhino. For more information about using Insights with orcharhino hosts, see Monitoring Hosts Using Red Hat Insights.
-
(Deprecated) Download and install the consumer RPM
orcharhino.example.com/pub/katello-ca-consumer-latest.noarch.rpm
and then run Subscription Manager. This method is suited for hosts that haven’t been provisioned through orcharhino. -
(Deprecated) Download and run the bootstrap script (
orcharhino.example.com/pub/bootstrap.py
). This method is suited for hosts that haven’t been provisioned through orcharhino.
You can also register Atomic Hosts to orcharhino Server or orcharhino Proxy.
Use one of the following procedures to register a host:
Use the following procedures to install and configure host tools:
Registering Hosts
Hosts can be registered to orcharhino by generating a curl
command on orcharhino and running this command on hosts.
This method uses two templates: global registration
template and host initial configuration
template.
That gives you complete control over the host registration process.
You can set default templates by navigating to Administer > Settings, and clicking the Provisioning tab.
-
The orcharhino user that generates the
curl
command must have thecreate_hosts
permission. -
You must have root privileges on the host that you want to register.
-
You must have an activation key created.
-
Optional: If you want to register hosts to Red Hat Insights, you must synchronize the
rhel-7-server-rpms
repository and make it available in the activation key that you use. This is required to install theinsights-client
package on hosts. -
orcharhino Server, any orcharhino Proxys, and all hosts must be synchronized with the same NTP server, and have a time synchronization tool enabled and running.
-
The daemon rhsmcertd must be running on the hosts.
-
An activation key must be available for the host. For more information, see Managing Activation Keys in the Content Management Guide.
-
Subscription Manager must be version 1.10 or later. The package is available in the standard Red Hat Enterprise Linux repository.
-
Optional: If you want to register hosts through orcharhino Proxy, ensure that the Registration and Templates features are enabled on this orcharhino Proxy.
In the orcharhino management UI, navigate to Infrastructure > orcharhino Proxies, click the orcharhino Proxy that you want to use, and locate the Registration feature in the Active features list.
Optional: If the Registration feature is not enabled on your orcharhino Proxy, enter the following command on the orcharhino Proxy to enable it:
# foreman-installer --foreman-proxy-registration \ --foreman-proxy-templates \ --foreman-proxy-template-url 'http://orcharhino-proxy.example.com'
-
In the orcharhino management UI, navigate to Hosts > Register Host.
-
Optional: Select a different Organization.
-
Optional: Select a different Location.
-
Optional: From the Host Group list, select the host group to associate the hosts with. The Operating system, Activation Keys, and Lifecycle environment fields inherit their values from the host group.
-
Optional: From the orcharhino Proxy list, select the orcharhino Proxy to register hosts through.
-
Optional: From the Operating system list, select the operating system of hosts that you want to register. Specifying an operating system is required when you register machines without
subscription-manager
, such as Debian or Ubuntu. -
Optional: Select the Insecure option if you want to make the first call insecure. During this first call, hosts download the CA file from orcharhino. Hosts will use this CA file to connect to orcharhino for all future calls, making them secure.
It is recommended to avoid insecure calls.
If an attacker, located in the network between orcharhino and a host, fetches the CA file from the first insecure call, the attacker will be able to access the content of the API calls to and from the registered host and the JSON Web Tokens (JWT). Therefore, if you have chosen to deploy SSH keys during registration, the attacker will be able to access the host using the SSH key.
Instead, you can manually copy and install the CA file on each host before registering the host.
To do this, find where orcharhino stores the CA file by navigating to Administer > Settings > Authentication and locating the value of the SSL CA file setting.
Copy the CA file to the
/etc/pki/ca-trust/source/anchors/
directory on hosts and enter the following commands:# update-ca-trust enable # update-ca-trust
Then register the hosts with a secure
curl
command, such as:# curl -sS https://orcharhino.example.com/register ...
The following is an example of the
curl
command with the--insecure
option:# curl -sS --insecure https://orcharhino.example.com/register ...
-
Select the Advanced tab.
-
From the Setup REX list, select whether you want to deploy orcharhino SSH keys to hosts or not.
If set to
Yes
, public SSH keys will be installed on the registered host. The inherited value is based on thehost_registration_remote_execution
parameter. It can be inherited, for example from a host group, an operating system, or an organization. When overridden, the selected value will be stored on host parameter level. -
From the Setup Insights list, select whether you want to install
insights-client
and register the hosts to Insights.The Insights tool is available for Red Hat Enterprise Linux only. It has no effect on other operating systems.
You must enable the following repositories on a registered machine:
-
RHEL 6:
rhel-6-server-rpms
-
RHEL 7:
rhel-7-server-rpms
-
RHEL 8:
rhel-8-for-x86_64-appstream-rpms
The
insights-client
package is installed by default on RHEL 8, except in environments where RHEL 8 was deployed with the "Minimal Install" option.
-
-
Optional: In the Token lifetime (hours) field, change the validity duration of the JSON Web Token (JWT) that orcharhino uses for authentication. The duration of this token defines how long the generated
curl
command works. You can set the duration to 0 - 999 999 hours or unlimited. Note that orcharhino applies the permissions of the user who generates the
curl
command when authorizing hosts. If the user loses or gains permissions, the permissions of the JWT change too. Therefore, do not delete, block, or change permissions of the user during the token duration. The scope of the JWT is limited to the registration endpoints only; it cannot be used anywhere else.
-
Optional: In the Remote Execution Interface field, enter the identifier of a network interface that hosts must use for the SSH connection. If you keep this field blank, orcharhino uses the default network interface.
-
Optional: In the Install packages field, list the packages (separated with spaces) that you want to install on the host upon registration. This can be set by the
host_packages
parameter. -
Optional: Select the Update packages option to update all packages on the host upon registration. This can be set by the
host_update_packages
parameter. -
Optional: In the Repository field, enter a repository to be added before the registration is performed. For example, it can be useful to make the
subscription-manager
package available for the purpose of the registration. For Red Hat family distributions, enter the URL of the repository, for examplehttp://rpm.example.com/
. For Debian OS families, enter the whole line of list file content, for exampledeb http://deb.example.com/ buster 1.0
. -
Optional: In the Repository GPG key URL field, specify the public key to verify the signatures of GPG-signed packages. It needs to be specified in the ASCII form with the GPG public key header.
-
In the Activation Keys field, enter one or more activation keys to assign to hosts.
-
Optional: Select the Lifecycle environment.
-
Optional: Select the Ignore errors option if you want to ignore subscription manager errors.
-
Optional: Select the Force option if you want to remove any
katello-ca-consumer
rpms before registration and runsubscription-manager
with the--force
argument. -
Click the Generate button.
-
Copy the generated
curl
command. -
On the hosts that you want to register, run the
curl
command asroot
.
For Red Hat Enterprise Linux 6.3 hosts, the release version defaults to Red Hat Enterprise Linux 6 Server and needs to be pointed to the 6.3 repository.
Customizing the Registration Templates
Use information in this section if you want to customize the registration process.
Note that all default templates in orcharhino are locked. If you want to customize the registration process, you need to clone the default templates and edit the clones. Then, in Administer > Settings > Provisioning, change the Default Global registration template and Default 'Host initial configuration' template settings to point to your custom templates.
The registration process uses the following registration templates:
-
The Global Registration template contains steps for registering hosts to orcharhino. This template renders when hosts access the
/register
endpoint. -
The Linux host_init_config default template contains steps for initial configuration of hosts after they are registered.
You can configure the following global parameters by navigating to Configure > Global Parameters:
-
The host_registration_remote_execution parameter is used in the remote_execution_ssh_keys snippet; the default value is true.
-
The host_registration_insights parameter is used in the insights snippet; the default value is false.
-
The host_packages parameter is for installing packages on the host.
-
The remote_execution_ssh_keys, remote_execution_ssh_user, remote_execution_create_user, and remote_execution_effective_user_method parameters are used in the remote_execution_ssh_keys snippet. For more details, see the snippet.
-
The encrypt_grub parameter enables setting an encrypted bootloader password for the host; the default value is false. To actually set the password, use the grub_pass macro in your template.
The following snippets are used in the Linux host_init_config default template:
-
The remote_execution_ssh_keys snippet deploys SSH keys to the host only when the host_registration_remote_execution parameter is true.
-
The insights snippet downloads and installs the Red Hat Insights client when the host_registration_insights global parameter is set to true.
-
The puppetlabs_repo and puppet_setup snippets download and install the Puppet agent on the host (only when a Puppet master is present).
-
The host_init_config_post snippet is empty; it is intended for custom user actions during host initial configuration.
The Global Registration template uses variables that correspond to the options of the registration form. They cover:
-
The current authenticated user object.
-
The host group and operating system of the host.
-
Overrides for the remote execution and Insights setup parameters.
-
The default interface of the host for remote execution.
-
Packages to install upon registration.
-
A repository and a repository GPG key to add on the host.
-
Host activation keys.
-
Removal of any katello-ca-consumer RPMs before registration.
-
Ignoring subscription-manager errors.
-
The lifecycle environment ID.
Registering an Atomic Host to orcharhino
Use the following procedure to register an Atomic Host to orcharhino.
-
Log in to the Atomic Host as the
root
user. -
Retrieve
katello-rhsm-consumer
from orcharhino Server:# wget http://orcharhino.example.com/pub/katello-rhsm-consumer
-
Change the mode of
katello-rhsm-consumer
to make it executable:# chmod +x katello-rhsm-consumer
-
Run
katello-rhsm-consumer
:# ./katello-rhsm-consumer
-
Register with Subscription Manager:
# subscription-manager register
The Katello agent is not supported on Atomic Hosts. |
Registering a Host to orcharhino Using the Bootstrap Script
Deprecated: Use registration via the global registration feature instead.
Use the bootstrap script to automate content registration and Puppet configuration. You can use the bootstrap script to register new hosts, or to migrate existing hosts to orcharhino from orcharhino 5, RHN, SAM, or RHSM.
The katello-client-bootstrap
package is installed by default on orcharhino Server’s base operating system.
The bootstrap.py
script is installed in the /var/www/html/pub/
directory to make it available to hosts at orcharhino.example.com/pub/bootstrap.py
.
The script includes documentation in the /usr/share/doc/katello-client-bootstrap-version/README.md
file.
To use the bootstrap script, you must install it on the host.
As the script is only required once, and only for the root
user, you can place it in /root
or /usr/local/sbin
and remove it after use.
This procedure uses /root
.
-
You have an orcharhino user with the permissions required to run the bootstrap script. The examples in this procedure specify the
admin
user. If this is not acceptable to your security policy, create a new role with the minimum permissions required and add it to the user that will run the script. For more information, see Setting Permissions for the Bootstrap Script. -
You have an activation key for your hosts with the orcharhino client repository enabled. For information on configuring activation keys, see Managing Activation Keys in the Content Management Guide.
-
You have created a host group. For more information about creating host groups, see Creating a Host Group.
If a host group is associated with a Puppet environment created inside a Production
environment, Puppet fails to retrieve the Puppet CA certificate while registering a host from that host group.
To create a suitable Puppet environment to be associated with a host group, follow these steps:
-
Manually create a directory and change the owner:
# mkdir /etc/puppetlabs/code/environments/example_environment # chown apache /etc/puppetlabs/code/environments/example_environment
-
In the orcharhino management UI, navigate to Configure > Environments and click Import environment from. The button name includes the FQDN of the internal or external orcharhino Proxy.
-
Choose the created directory and click Update.
-
Log in to the host as the
root
user. -
Download the script:
# curl -O http://orcharhino.example.com/pub/bootstrap.py
-
Make the script executable:
# chmod +x bootstrap.py
-
Confirm that the script is executable by viewing the help text:
-
On Red Hat Enterprise Linux 8:
# /usr/libexec/platform-python bootstrap.py -h
-
On other Red Hat Enterprise Linux versions:
# ./bootstrap.py -h
-
-
Enter the bootstrap command with values suitable for your environment.
For the
--server
option, specify the FQDN of orcharhino Server or an orcharhino Proxy. For the
,--organization
, and--hostgroup
options, use quoted names, not labels, as arguments to the options. For advanced use cases, see Advanced Bootstrap Script Configuration.-
On Red Hat Enterprise Linux 8, enter the following command:
# /usr/libexec/platform-python bootstrap.py \ --login=admin \ --server orcharhino.example.com \ --location="Example Location" \ --organization="Example Organization" \ --hostgroup="Example Host Group" \ --activationkey=activation_key
-
On Red Hat Enterprise Linux 5, 6, or 7, enter the following command:
# ./bootstrap.py --login=admin \ --server orcharhino.example.com \ --location="Example Location" \ --organization="Example Organization" \ --hostgroup="Example Host Group" \ --activationkey=activation_key
-
-
Enter the password of the orcharhino user you specified with the
--login
option. The script sends notices of progress to stdout.
-
When prompted by the script, approve the host’s Puppet certificate. In the orcharhino management UI, navigate to Infrastructure > orcharhino Proxies and find the orcharhino Server or orcharhino Proxy you specified with the
--server
option. -
From the list in the Actions column, select Certificates.
-
In the Actions column, click Sign to approve the host’s Puppet certificate.
-
Return to the host to see the remainder of the bootstrap process completing.
-
In the orcharhino management UI, navigate to Hosts > All hosts and ensure that the host is connected to the correct host group.
-
Optional: After the host registration is complete, remove the script:
# rm bootstrap.py
Setting Permissions for the Bootstrap Script
Use this procedure to configure an orcharhino user with the permissions required to run the bootstrap script. To use the CLI instead of the orcharhino management UI, see the CLI procedure.
-
In the orcharhino management UI, navigate to Administer > Users.
-
Select an existing user by clicking the required Username. A new pane opens with tabs to modify information about the selected user. Alternatively, create a new user specifically for the purpose of running this script.
-
Click the Roles tab.
-
Select Edit hosts and Viewer from the Roles list.
The Edit hosts role allows the user to edit, delete, and add hosts. If this is not acceptable to your security policy, create a new role with the following permissions and assign it to the user:
-
view_organizations
-
view_locations
-
view_domains
-
view_hostgroups
-
view_hosts
-
view_architectures
-
view_ptables
-
view_operatingsystems
-
create_hosts
-
-
Click Submit.
-
Create a role with the minimum permissions required by the bootstrap script. This example creates a role with the name Bootstrap:
# ROLE='Bootstrap' hammer role create --name "$ROLE" hammer filter create --role "$ROLE" --permissions view_organizations hammer filter create --role "$ROLE" --permissions view_locations hammer filter create --role "$ROLE" --permissions view_domains hammer filter create --role "$ROLE" --permissions view_hostgroups hammer filter create --role "$ROLE" --permissions view_hosts hammer filter create --role "$ROLE" --permissions view_architectures hammer filter create --role "$ROLE" --permissions view_ptables hammer filter create --role "$ROLE" --permissions view_operatingsystems hammer filter create --role "$ROLE" --permissions create_hosts
-
Assign the new role to an existing user:
# hammer user add-role --id user_id --role Bootstrap
Alternatively, you can create a new user and assign this new role to them. For more information on creating users with Hammer, see Managing Users and Roles in the Administering orcharhino guide.
Advanced Bootstrap Script Configuration
This section has more examples for using the bootstrap script to register or migrate a host.
These examples specify the admin orcharhino user.
Migrating a Host From One orcharhino to Another orcharhino
Use the script with --force
to remove the katello-ca-consumer-*
packages from the old orcharhino and install the katello-ca-consumer-*
packages on the new orcharhino.
-
On Red Hat Enterprise Linux 8, enter the following command:
# /usr/libexec/platform-python bootstrap.py \ --login=admin \ --server orcharhino.example.com \ --location="Example Location" \ --organization="Example Organization" \ --hostgroup="Example Host Group" \ --activationkey=activation_key \ --force
-
On Red Hat Enterprise Linux 5, 6, or 7, enter the following command:
# bootstrap.py --login=admin \ --server orcharhino.example.com \ --location="Example Location" \ --organization="Example Organization" \ --hostgroup="Example Host Group" \ --activationkey=activation_key \ --force
Registering a Host to orcharhino without Puppet
By default, the bootstrap script configures the host for content management and configuration management.
If you have an existing configuration management system and do not want to install Puppet on the host, use --skip-puppet
.
-
On Red Hat Enterprise Linux 8, enter the following command:
# /usr/libexec/platform-python bootstrap.py \ --login=admin \ --server orcharhino.example.com \ --location="Example Location" \ --organization="Example Organization" \ --hostgroup="Example Host Group" \ --activationkey=activation_key \ --skip-puppet
-
On Red Hat Enterprise Linux 5, 6, or 7, enter the following command:
# bootstrap.py --login=admin \ --server orcharhino.example.com \ --location="Example Location" \ --organization="Example Organization" \ --hostgroup="Example Host Group" \ --activationkey=activation_key \ --skip-puppet
Registering a Host to orcharhino for Content Management Only
To register a system as a content host, and omit the provisioning and configuration management functions, use --skip-foreman
.
-
On Red Hat Enterprise Linux 8, enter the following command:
# /usr/libexec/platform-python bootstrap.py \ --server orcharhino.example.com \ --organization="Example Organization" \ --activationkey=activation_key \ --skip-foreman
-
On Red Hat Enterprise Linux 5, 6, or 7, enter the following command:
# bootstrap.py --server orcharhino.example.com \ --organization="Example Organization" \ --activationkey=activation_key \ --skip-foreman
Changing the Method the Bootstrap Script Uses to Download the Consumer RPM
By default, the bootstrap script uses HTTP to download the consumer RPM from http://orcharhino.example.com/pub/katello-ca-consumer-latest.noarch.rpm
.
In some environments, you might want to allow HTTPS only between the host and orcharhino.
Use --download-method
to change the download method from HTTP to HTTPS.
-
On Red Hat Enterprise Linux 8, enter the following command:
# /usr/libexec/platform-python bootstrap.py \ --login=admin \ --server orcharhino.example.com \ --location="Example Location" \ --organization="Example Organization" \ --hostgroup="Example Host Group" \ --activationkey=activation_key \ --download-method https
-
On Red Hat Enterprise Linux 5, 6, or 7, enter the following command:
# bootstrap.py --login=admin \ --server orcharhino.example.com \ --location="Example Location" \ --organization="Example Organization" \ --hostgroup="Example Host Group" \ --activationkey=activation_key \ --download-method https
Providing the host’s IP address to orcharhino
On hosts with multiple interfaces or multiple IP addresses on one interface, you might need to override the auto-detection of the IP address and provide a specific IP address to orcharhino.
Use --ip
.
-
On Red Hat Enterprise Linux 8, enter the following command:
# /usr/libexec/platform-python bootstrap.py \ --login=admin \ --server orcharhino.example.com \ --location="Example Location" \ --organization="Example Organization" \ --hostgroup="Example Host Group" \ --activationkey=activation_key \ --ip 192.x.x.x
-
On Red Hat Enterprise Linux 5, 6, or 7, enter the following command:
# bootstrap.py --login=admin \ --server orcharhino.example.com \ --location="Example Location" \ --organization="Example Organization" \ --hostgroup="Example Host Group" \ --activationkey=activation_key \ --ip 192.x.x.x
Enabling Remote Execution on the Host
Use --rex
and --rex-user
to enable remote execution and add the required SSH keys for the specified user.
-
On Red Hat Enterprise Linux 8, enter the following command:
# /usr/libexec/platform-python bootstrap.py \ --login=admin \ --server orcharhino.example.com \ --location="Example Location" \ --organization="Example Organization" \ --hostgroup="Example Host Group" \ --activationkey=activation_key \ --rex \ --rex-user root
-
On Red Hat Enterprise Linux 5, 6, or 7, enter the following command:
# bootstrap.py --login=admin \ --server orcharhino.example.com \ --location="Example Location" \ --organization="Example Organization" \ --hostgroup="Example Host Group" \ --activationkey=activation_key \ --rex \ --rex-user root
Creating a Domain for a Host During Registration
To create a host record, the DNS domain of a host needs to exist in orcharhino prior to running the script.
If the domain does not exist, add it using --add-domain
.
-
On Red Hat Enterprise Linux 8, enter the following command:
# /usr/libexec/platform-python bootstrap.py \ --login=admin \ --server orcharhino.example.com \ --location="Example Location" \ --organization="Example Organization" \ --hostgroup="Example Host Group" \ --activationkey=activation_key \ --add-domain
-
On Red Hat Enterprise Linux 5, 6, or 7, enter the following command:
# bootstrap.py --login=admin \ --server orcharhino.example.com \ --location="Example Location" \ --organization="Example Organization" \ --hostgroup="Example Host Group" \ --activationkey=activation_key \ --add-domain
Providing an Alternative FQDN for the Host
If the host’s host name is not an FQDN, or is not RFC-compliant (containing a character such as an underscore), the script will fail at the host name validation stage. If you cannot update the host to use an FQDN that is accepted by orcharhino, you can use the bootstrap script to specify an alternative FQDN.
-
Set
create_new_host_when_facts_are_uploaded
andcreate_new_host_when_report_is_uploaded
to false using Hammer:# hammer settings set \ --name create_new_host_when_facts_are_uploaded \ --value false # hammer settings set \ --name create_new_host_when_report_is_uploaded \ --value false
-
Use
--fqdn
to specify the FQDN that will be reported to orcharhino:-
On Red Hat Enterprise Linux 8, enter the following command:
# /usr/libexec/platform-python bootstrap.py --login=admin \ --server orcharhino.example.com \ --location="Example Location" \ --organization="Example Organization" \ --hostgroup="Example Host Group" \ --activationkey=activation_key \ --fqdn node100.example.com
-
On Red Hat Enterprise Linux 5, 6, or 7, enter the following command:
# bootstrap.py --login=admin \ --server orcharhino.example.com \ --location="Example Location" \ --organization="Example Organization" \ --hostgroup="Example Host Group" \ --activationkey=activation_key \ --fqdn node100.example.com
-
Installing the Katello Agent
You can install the Katello agent to remotely update orcharhino clients.
The Katello agent is deprecated and will be removed in a future orcharhino version. Migrate your processes to use the remote execution feature to update clients remotely. For more information, see Migrating from Katello Agent to Remote Execution in the Managing Hosts Guide.
The katello-agent
package depends on the gofer
package that provides the goferd
service.
-
You have enabled the orcharhino client repository on the client.
-
Install the
katello-agent
package:# yum install katello-agent
-
Start the
goferd
service:# systemctl start goferd
Installing Tracer
Use this procedure to install Tracer on orcharhino and access Traces. Tracer displays a list of services and applications that are outdated and need to be restarted. Traces is the output generated by Tracer in the orcharhino management UI.
-
The host must be registered to orcharhino.
-
The ATIX AG orcharhino client repository must be enabled and synchronized on orcharhino Server, and enabled on the host.
-
On the content host, install the
katello-host-tools-tracer
RPM package:# yum install katello-host-tools-tracer
-
Enter the following command:
# katello-tracer-upload
-
In the orcharhino management UI, navigate to Hosts > All hosts, then click the required host name.
-
In the Properties tab, examine the Properties table and find the Traces item. If you cannot find a Traces item in the Properties table, then Tracer is not installed.
-
In the orcharhino management UI, navigate to Hosts > Content Hosts, then click the required host name.
-
Click the Traces tab to view Traces.
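Traces only reflect the state at the last upload. If you want the data refreshed regularly rather than only after package operations, one option is to schedule the upload command, for example with a cron entry like the following sketch (the file name and hourly schedule are assumptions, not requirements):
# cat /etc/cron.d/katello-tracer-upload
0 * * * * root katello-tracer-upload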
Migrating From Katello Agent to Remote Execution
Remote Execution
is the preferred way to manage package content on hosts.
The Katello Agent is deprecated and will be removed in a future orcharhino version.
Follow these steps to switch to Remote Execution.
-
You have previously installed the
katello-agent
package on content hosts.
-
Stop the goferd service on content hosts:
# systemctl stop goferd.service
-
Disable the goferd service on content hosts:
# systemctl disable goferd.service
-
Remove the Katello agent on content hosts:
If your host is installed on oVirt version 4.4 or lower, do not remove the katello-agent package because the removed dependencies corrupt the host.
# yum remove katello-agent
-
Distribute the remote execution SSH keys to the content hosts. For more information, see Distributing SSH Keys for Remote Execution.
-
In the orcharhino management UI, navigate to Administer > Settings.
-
Select the Content tab.
-
Set the Use remote execution by default parameter to Yes.
orcharhino Server now manages hosts by remote execution instead of the Katello agent.
The following table shows the remote execution equivalent commands to perform specific package actions.
See hammer job-invocation create --help
to learn how to specify search queries to determine the target hosts or host collections.
Action | Katello Agent | Remote Execution |
---|---|---|
Install a package | hammer host package install | hammer job-invocation create --feature katello_package_install |
Install a package (host collection) | hammer host-collection package install | hammer job-invocation create --feature katello_package_install |
Remove a package | hammer host package remove | hammer job-invocation create --feature katello_package_remove |
Remove a package (host collection) | hammer host-collection package remove | hammer job-invocation create --feature katello_package_remove |
Update a package | hammer host package upgrade | hammer job-invocation create --feature katello_package_update |
Update a package (host collection) | hammer host-collection package update | hammer job-invocation create --feature katello_package_update |
Update all packages | hammer host package update | hammer job-invocation create --feature katello_package_update |
Install errata | hammer host errata apply | hammer job-invocation create --feature katello_errata_install |
Install errata (host collection) | hammer host-collection errata install | hammer job-invocation create --feature katello_errata_install |
Install a package group | hammer host package-group install | hammer job-invocation create --feature katello_group_install |
Install a package group (host collection) | hammer host-collection package-group install | hammer job-invocation create --feature katello_group_install |
Remove a package group | hammer host package-group remove | hammer job-invocation create --feature katello_group_remove |
Remove a package group (host collection) | hammer host-collection package-group remove | hammer job-invocation create --feature katello_group_remove |
Update a package group | hammer host package-group update | hammer job-invocation create --feature katello_group_update |
Update a package group (host collection) | hammer host-collection package-group update | hammer job-invocation create --feature katello_group_update |
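For example, installing a single package on one host through remote execution might look like the following sketch; the package input name and the host name are assumptions based on the default Katello job templates:
# hammer job-invocation create \
--feature katello_package_install \
--inputs package="vim" \
--search-query "name = host.example.com"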
Adding Network Interfaces
orcharhino supports specifying multiple network interfaces for a single host. You can configure these interfaces when creating a new host as described in Creating a Host in orcharhino or when editing an existing host.
There are several types of network interfaces that you can attach to a host. When adding a new interface, select one of:
-
Interface: Allows you to specify an additional physical or virtual interface. There are two types of virtual interfaces you can create. Use VLAN when the host needs to communicate with several (virtual) networks using a single interface, when these networks are not accessible to each other. Use alias to add an additional IP address to an existing interface.
For more information about adding a physical interface, see Adding a Physical Interface.
For more information about adding a virtual interface, see Adding a Virtual Interface.
-
Bond: Creates a bonded interface. NIC bonding is a way to bind multiple network interfaces together into a single interface that appears as a single device and has a single MAC address. This enables two or more network interfaces to act as one, increasing the bandwidth and providing redundancy. For more information, see Adding a Bonded Interface in the Managing Hosts guide.
-
BMC: Baseboard Management Controller (BMC) allows you to remotely monitor and manage the physical state of machines. For more information about BMC, see Enabling Power Management on Managed Hosts in Installing orcharhino Server. For more information about configuring BMC interfaces, see Adding a Baseboard Management Controller (BMC) Interface.
Additional interfaces have the Managed flag enabled by default, which means the new interface is configured automatically during provisioning by the DNS and DHCP orcharhino Proxies associated with the selected subnet.
This requires a subnet with correctly configured DNS and DHCP orcharhino Proxies.
If you use a Kickstart method for host provisioning, configuration files are automatically created for managed interfaces in the post-installation phase.
Virtual and bonded interfaces currently require a MAC address of a physical device. Therefore, the configuration of these interfaces works only on bare-metal hosts.
Adding a Physical Interface
Use this procedure to add an additional physical interface to a host.
-
In the orcharhino management UI, navigate to Hosts > All hosts.
-
Click Edit next to the host you want to edit.
-
On the Interfaces tab, click Add Interface.
-
Keep the Interface option selected in the Type list.
-
Specify a MAC address. This setting is required.
-
Specify the Device Identifier, for example eth0. The identifier is used to specify this physical interface when creating bonded interfaces, VLANs, and aliases.
-
Specify the DNS name associated with the host’s IP address. orcharhino saves this name on the orcharhino Proxy associated with the selected domain (the "DNS A" field) and on the orcharhino Proxy associated with the selected subnet (the "DNS PTR" field). A single host can therefore have several DNS entries.
-
Select a domain from the Domain list. To create and manage domains, navigate to Infrastructure > Domains.
-
Select a subnet from the Subnet list. To create and manage subnets, navigate to Infrastructure > Subnets.
-
Specify the IP address. Managed interfaces with an assigned DHCP orcharhino Proxy require this setting for creating a DHCP lease. DHCP-enabled managed interfaces are automatically provided with a suggested IP address.
-
Select whether the interface is Managed. If the interface is managed, configuration is pulled from the associated orcharhino Proxy during provisioning, and DNS and DHCP entries are created. If using kickstart provisioning, a configuration file is automatically created for the interface.
-
Select whether this is the Primary interface for the host. The DNS name from the primary interface is used as the host portion of the FQDN.
-
Select whether this is the Provision interface for the host. TFTP boot takes place using the provisioning interface. For image-based provisioning, the script to complete the provisioning is executed through the provisioning interface.
-
Select whether to use the interface for Remote execution.
-
Leave the Virtual NIC check box clear.
-
Click OK to save the interface configuration.
-
Click Submit to apply the changes to the host.
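Interfaces can also be added from the CLI with the hammer host interface subcommand. The following is a minimal sketch for a physical interface; the host name, MAC address, and identifier are placeholders:
# hammer host interface create \
--host "My_Host_Name" \
--type "interface" \
--mac "aa:bb:cc:dd:ee:f0" \
--identifier "eth1" \
--managed true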
Adding a Virtual Interface
Use this procedure to configure a virtual interface for a host. This can be either a VLAN or an alias interface.
An alias interface is an additional IP address attached to an existing interface.
An alias interface automatically inherits a MAC address from the interface it is attached to; therefore, you can create an alias without specifying a MAC address.
The interface must be specified in a subnet with boot mode set to static
.
-
In the orcharhino management UI, navigate to Hosts > All hosts.
-
Click Edit next to the host you want to edit.
-
On the Interfaces tab, click Add Interface.
-
Keep the Interface option selected in the Type list.
-
Specify the general interface settings. The applicable configuration options are the same as for the physical interfaces described in Adding a Physical Interface.
Specify a MAC address for managed virtual interfaces so that the configuration files for provisioning are generated correctly. However, a MAC address is not required for virtual interfaces that are not managed.
If you create a VLAN, specify the ID in the form eth1.10 in the Device Identifier field. If you create an alias, use an ID in the form eth1:10.
-
Select the Virtual NIC check box. Additional configuration options specific to virtual interfaces are appended to the form:
-
Tag: Optionally set a VLAN tag to trunk a network segment from the physical network through to the virtual interface. If you do not specify a tag, managed interfaces inherit the VLAN tag of the associated subnet. User-specified entries from this field are not applied to alias interfaces.
-
Attached to: Specify the identifier of the physical interface to which the virtual interface belongs, for example eth1. This setting is required.
-
-
Click OK to save the interface configuration.
-
Click Submit to apply the changes to the host.
Adding a Bonded Interface
Use this procedure to configure a bonded interface for a host. To use the CLI instead of the orcharhino management UI, see the CLI procedure.
-
In the orcharhino management UI, navigate to Hosts > All hosts.
-
Click Edit next to the host you want to edit.
-
On the Interfaces tab, click Add Interface.
-
Select Bond from the Type list. Additional type-specific configuration options are appended to the form.
-
Specify the general interface settings. The applicable configuration options are the same as for the physical interfaces described in Adding a Physical Interface.
Bonded interfaces use IDs in the form of bond0 in the Device Identifier field.
A single MAC address is sufficient.
-
Specify the configuration options specific to bonded interfaces:
-
Mode: Select the bonding mode that defines a policy for fault tolerance and load balancing. See Bonding Modes Available in orcharhino for a brief description of each bonding mode.
-
Attached devices: Specify a comma-separated list of identifiers of attached devices. These can be physical interfaces or VLANs.
-
Bond options: Specify a space-separated list of configuration options, for example miimon=100.
-
-
Click OK to save the interface configuration.
-
Click Submit to apply the changes to the host.
-
To create a host with a bonded interface, enter the following command:
# hammer host create --name bonded_interface \ --hostgroup-id 1 \ --ip=192.168.100.123 \ --mac=52:54:00:14:92:2a \ --subnet-id=1 \ --managed true \ --interface="identifier=eth1, \ mac=52:54:00:62:43:06, \ managed=true, \ type=Nic::Managed, \ domain_id=1, \ subnet_id=1" \ --interface="identifier=eth2, \ mac=52:54:00:d3:87:8f, \ managed=true, \ type=Nic::Managed, \ domain_id=1, \ subnet_id=1" \ --interface="identifier=bond0, \ ip=172.25.18.123, \ type=Nic::Bond, \ mode=active-backup, \ attached_devices=[eth1,eth2], \ managed=true, \ domain_id=1, \ subnet_id=1" \ --organization "Your_Organization" \ --location "Your_Location" \ --ask-root-password yes
Bonding Mode | Description |
---|---|
balance-rr | Transmissions are received and sent sequentially on each bonded interface. |
active-backup | Transmissions are received and sent through the first available bonded interface. Another bonded interface is only used if the active bonded interface fails. |
balance-xor | Transmissions are based on the selected hash policy. In this mode, traffic destined for specific peers is always sent over the same interface. |
broadcast | All transmissions are sent on all bonded interfaces. |
802.3ad | Creates aggregation groups that share the same settings. Transmits and receives on all interfaces in the active group. |
balance-tlb | The outgoing traffic is distributed according to the current load on each bonded interface. |
balance-alb | Receive load balancing is achieved through Address Resolution Protocol (ARP) negotiation. |
Adding a Baseboard Management Controller (BMC) Interface
Use this procedure to configure a baseboard management controller (BMC) interface for a host that supports this feature.
-
The
ipmitool
package is installed. -
You know the MAC address, IP address, and other details of the BMC interface on the host, and the appropriate credentials for that interface.
You only need the MAC address of the BMC interface if the interface is managed, so that orcharhino can create a DHCP reservation for it.
-
Enable BMC on the orcharhino Proxy server if it is not already enabled:
-
Configure BMC power management on orcharhino Proxy by running the
foreman-installer
script with the following options:
# foreman-installer --foreman-proxy-bmc=true \
--foreman-proxy-bmc-default-provider=ipmitool
-
In the orcharhino management UI, navigate to Infrastructure > orcharhino Proxies.
-
From the list in the Actions column, click Refresh. The list in the Features column should now include BMC.
-
-
In the orcharhino management UI, navigate to Hosts > All hosts.
-
Click Edit next to the host you want to edit.
-
On the Interfaces tab, click Add Interface.
-
Select BMC from the Type list. Type-specific configuration options are appended to the form.
-
Specify the general interface settings. The applicable configuration options are the same as for the physical interfaces described in Adding a Physical Interface.
-
Specify the configuration options specific to BMC interfaces:
-
Username and Password: Specify any authentication credentials required by BMC.
-
Provider: Specify the BMC provider.
-
-
Click OK to save the interface configuration.
-
Click Submit to apply the changes to the host.
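If you prefer the CLI, the following command is a minimal sketch of adding a BMC interface to an existing host with hammer; the MAC address, IP address, credentials, and provider value are placeholders, and the exact options may vary with your hammer version:
# hammer host interface create \
--host "host.example.com" \
--type bmc \
--mac "52:54:00:14:92:2a" \
--ip "192.168.100.99" \
--username admin \
--password My_BMC_Password \
--provider IPMI \
--managed true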
Host Management and Monitoring Using Web Console
Web Console is an interactive web interface that you can use to perform actions and monitor Red Hat Enterprise Linux hosts. You can enable a remote-execution feature to integrate orcharhino with Web Console. When you install Web Console on a host that you manage with orcharhino, you can view the Web Console dashboards of that host from within the orcharhino management UI. You can also use the features that are integrated with Web Console, for example, Lorax Composer.
Enabling Web Console on orcharhino
By default, Web Console integration is disabled in orcharhino. If you want to access Web Console features for your hosts from within orcharhino, you must first enable Web Console integration on orcharhino Server.
-
On orcharhino Server, run
foreman-installer
with the--enable-foreman-plugin-remote-execution-cockpit
option:# foreman-installer --enable-foreman-plugin-remote-execution-cockpit
Managing and Monitoring Hosts Using Web Console
You can access the Web Console web UI through the orcharhino management UI and use the functionality to manage and monitor hosts in orcharhino.
-
Web Console is enabled in orcharhino.
-
Web Console is installed on the host that you want to view:
-
orcharhino or orcharhino Proxy can authenticate to the host with SSH keys. For more information, see Distributing SSH Keys for Remote Execution.
-
In the orcharhino management UI, navigate to Hosts > All Hosts and select the host that you want to manage and monitor with Web Console.
-
In the upper right of the host window, click Web Console.
You can now access the full range of features available for host monitoring and management, for example, Lorax Composer, through the Web Console.
Using Report Templates to Monitor Hosts
You can use report templates to query orcharhino data to obtain information about, for example, host status, registered hosts, applicable errata, applied errata, subscription details, and user activity. You can use the report templates that ship with orcharhino or write your own custom report templates to suit your requirements. The reporting engine uses the embedded Ruby (ERB) syntax. For more information about writing templates and ERB syntax, see Template Writing Reference.
You can create a template, or clone a template and edit the clone. For help with the template syntax, click a template and click the Help tab.
Generating Host Monitoring Reports
To view the report templates in the orcharhino management UI, navigate to Monitor > Report Templates. To schedule reports, configure a cron job or use the orcharhino management UI.
-
In the orcharhino management UI, navigate to Monitor > Report Templates.
-
To the right of the report template that you want to use, click Generate.
-
Optional: To schedule a report, to the right of the Generate at field, click the icon to select the date and time at which you want to generate the report.
-
Optional: To send a report to an e-mail address, select the Send report via e-mail check box, and in the Deliver to e-mail addresses field, enter the required e-mail address.
-
Optional: Apply search query filters. To view all available results, do not populate the filter field with any values.
-
Click Submit. A CSV file that contains the report is downloaded. If you have selected the Send report via e-mail check box, the host monitoring report is sent to your e-mail address.
-
List all available report templates:
# hammer report-template list
-
Generate a report:
# hammer report-template generate --id My_Template_ID
This command waits until the report is fully generated before completing. If you want to generate the report as a background task, you can use the
hammer report-template schedule
command.
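For example, the following sketch schedules the report with ID My_Template_ID as a background task instead of waiting for it to finish:
# hammer report-template schedule --id My_Template_ID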
Creating a Report Template
In orcharhino, you can create a report template and customize the template to suit your requirements. You can import existing report templates and further customize them with snippets and template macros.
Report templates use Embedded Ruby (ERB) syntax. To view information about working with ERB syntax and macros, in the orcharhino management UI, navigate to Monitor > Report Templates, and click Create Template, and then click the Help tab.
When you create a report template in orcharhino, safe mode is enabled by default. For more information about safe mode, see Report Template Safe Mode.
For more information about writing templates, see the Template Writing Reference. For more information about macros you can use in report templates, see Templates Macros.
-
In the orcharhino management UI, navigate to Monitor > Report Templates, and click Create Template.
-
In the Name field, enter a unique name for your report template.
-
If you want the template to be available to all locations and organizations, select Default.
-
Create the template directly in the template editor or import a template from a text file by clicking Import. For more information about importing templates, see Importing Report Templates.
-
Optional: In the Audit Comment field, you can add any useful information about this template.
-
Click the Input tab, and in the Name field, enter a name for the input that you can reference in the template in the following format:
input('name')
. Note that you must save the template before you can reference this input value in the template body. -
Select whether the input value is mandatory. If the input value is mandatory, select the Required check box.
-
From the Value Type list, select the type of input value that the user must input.
-
Optional: If you want to use facts for template input, select the Advanced check box.
-
Optional: In the Options field, define the options that the user can select from. If this field remains undefined, the users receive a free-text field in which they can enter the value they want.
-
Optional: In the Default field, enter a value, for example, a host name, that you want to set as the default template input.
-
Optional: In the Description field, you can enter information that you want to display as inline help about the input when you generate the report.
-
Optional: Click the Type tab, and select whether this template is a snippet to be included in other templates.
-
Click the Location tab and add the locations where you want to use the template.
-
Click the Organizations tab and add the organizations where you want to use the template.
-
Click Submit to save your changes.
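If you prefer the CLI, the following sketch creates a report template from a local .erb file; the file path is a placeholder and option names can differ between hammer versions:
# hammer report-template create \
--name "My_Report_Template" \
--file /path/to/My_Report_Template.erb \
--organizations "My_Organization" \
--locations "My_Location"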
Exporting Report Templates
You can export report templates that you create in orcharhino.
-
In the orcharhino management UI, navigate to Monitor > Report Templates.
-
Locate the template that you want to export, and from the list in the Actions column, select Export.
-
Repeat this action for every report template that you want to download.
An .erb
file that contains the template downloads.
-
To view the report templates available for export, enter the following command:
# hammer report-template list
Note the template ID of the template that you want to export in the output of this command.
-
To export a report template, enter the following command:
# hammer report-template dump --id My_Template_ID > example_export.erb
Exporting Report Templates Using the orcharhino API
You can use the orcharhino report_templates
API to export report templates from orcharhino.
-
Use the following request to retrieve a list of available report templates:
Example request:
$ curl --insecure --user admin:password \
--request GET \
https://orcharhino.example.com/api/report_templates \
| json_reformat
In this example, the
json_reformat
tool is used to format the JSON output.
Example response:
{
  "total": 6,
  "subtotal": 6,
  "page": 1,
  "per_page": 20,
  "search": null,
  "sort": {
    "by": null,
    "order": null
  },
  "results": [
    {
      "created_at": "2019-11-20 17:49:52 UTC",
      "updated_at": "2019-11-20 17:49:52 UTC",
      "name": "Applicable errata",
      "id": 112
    },
    {
      "created_at": "2019-11-20 17:49:52 UTC",
      "updated_at": "2019-11-20 17:49:52 UTC",
      "name": "Applied Errata",
      "id": 113
    },
    {
      "created_at": "2019-11-30 16:15:24 UTC",
      "updated_at": "2019-11-30 16:15:24 UTC",
      "name": "Hosts - complete list",
      "id": 158
    },
    {
      "created_at": "2019-11-20 17:49:52 UTC",
      "updated_at": "2019-11-20 17:49:52 UTC",
      "name": "Host statuses",
      "id": 114
    },
    {
      "created_at": "2019-11-20 17:49:52 UTC",
      "updated_at": "2019-11-20 17:49:52 UTC",
      "name": "Registered hosts",
      "id": 115
    },
    {
      "created_at": "2019-11-20 17:49:52 UTC",
      "updated_at": "2019-11-20 17:49:52 UTC",
      "name": "Subscriptions",
      "id": 116
    }
  ]
}
-
Note the
id
of the template that you want to export, and use the following request to export the template:
Example request:
$ curl --insecure --output /tmp/Example_Export_Template.erb \
--user admin:password \
--request GET \
https://orcharhino.example.com/api/report_templates/My_Template_ID/export
Replace My_Template_ID with the ID of the template that you want to export, for example 158 for the Hosts - complete list template. In this example, the exported template is written to /tmp/Example_Export_Template.erb.
Importing Report Templates
You can import a report template into the body of a new template that you want to create. Note that using the orcharhino management UI, you can only import templates individually. For bulk actions, use the orcharhino API. For more information, see Importing Report Templates Using the orcharhino API.
-
You must have exported templates from orcharhino to import them to use in new templates. For more information see Exporting Report Templates.
-
In the orcharhino management UI, navigate to Monitor > Report Templates.
-
In the upper right of the Report Templates window, click Create Template.
-
On the upper right of the Editor tab, click the folder icon, and select the
.erb
file that you want to import. -
Edit the template to suit your requirements.
-
Click Submit.
For more information about customizing your new template, see Template Writing Reference.
Importing Report Templates Using the orcharhino API
You can use the orcharhino API to import report templates into orcharhino. Importing report templates using the orcharhino API automatically parses the report template metadata and assigns organizations and locations.
-
Create a template using
.erb
syntax or export a template from another orcharhino.For more information about writing templates, see Template Writing Reference.
For more information about exporting templates from orcharhino, see Exporting Report Templates Using the orcharhino API.
-
Use the following example to format the template that you want to import to a
.json
file:
# cat Example_Template.json
{
  "name": "Example Template Name",
  "template": "Enter ERB Code Here"
}
Example JSON File with ERB Template:
{
  "name": "Hosts - complete list",
  "template": "<%#
name: Hosts - complete list
snippet: false
template_inputs:
- name: host
  required: false
  input_type: user
  advanced: false
  value_type: plain
  resource_type: Katello::ActivationKey
model: ReportTemplate
-%>
<% load_hosts(search: input('host')).each_record do |host| -%>
<%   report_row(
       'Server FQDN': host.name
     ) -%>
<% end -%>
<%= report_render %>
"
}
-
Use the following request to import the template:
$ curl --insecure --user admin:password \
--data @Example_Template.json \
--header "Content-Type:application/json" \
--request POST \
https://orcharhino.example.com/api/report_templates/import
-
Use the following request to retrieve a list of report templates and validate that you can view the template in orcharhino:
$ curl --insecure --user admin:password \
--request GET \
https://orcharhino.example.com/api/report_templates \
| json_reformat
Report Template Safe Mode
When you create report templates in orcharhino, safe mode is enabled by default. Safe mode limits the macros and variables that you can use in the report template. Safe mode prevents rendering problems and enforces best practices in report templates. The list of supported macros and variables is available in the orcharhino management UI.
To view the macros and variables that are available, in the orcharhino management UI, navigate to Monitor > Report Templates and click Create Template. In the Create Template window, click the Help tab and expand Safe mode methods.
While safe mode is enabled, if you try to use a macro or variable that is not listed in Safe mode methods, the template editor displays an error message.
To view the status of safe mode in orcharhino, in the orcharhino management UI, navigate to Administer > Settings and click the Provisioning tab. Locate the Safemode rendering row to check the value.
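As a sketch, you can also inspect or change this setting from the CLI; the setting name safemode_render is an assumption based on the underlying Foreman setting:
# hammer settings list --search "name = safemode_render"
# hammer settings set --name safemode_render --value true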
Configuring Host Collections
A host collection is a group of content hosts. This feature enables you to perform the same action on multiple hosts at once. These actions can include the installation, removal, and update of packages and errata, change of assigned life cycle environment, and change of Content View. You can create host collections to suit your requirements, and those of your company. For example, group hosts in host collections by function, department, or business unit.
Creating a Host Collection
The following procedure shows how to create host collections.
-
In the orcharhino management UI, navigate to Hosts > Host Collections.
-
Click New Host Collection.
-
Add the Name of the host collection.
-
Clear Unlimited Content Hosts, and enter the desired maximum number of hosts in the Limit field.
-
Add the Description of the host collection.
-
Click Save.
-
To create a host collection, enter the following command:
# hammer host-collection create \ --name "My_Host_Collection" \ --organization "My_Organization"
Cloning a Host Collection
The following procedure shows how to clone a host collection.
-
In the orcharhino management UI, navigate to Hosts > Host Collections.
-
On the left hand panel, click the host collection you want to clone.
-
Click Copy Collection.
-
Specify a name for the cloned collection.
-
Click Create.
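If your hammer version provides the host-collection copy subcommand, the following sketch clones a host collection from the CLI:
# hammer host-collection copy \
--name "My_Host_Collection" \
--new-name "My_Host_Collection_Copy" \
--organization "My_Organization"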
Removing a Host Collection
The following procedure shows how to remove a host collection.
-
In the orcharhino management UI, navigate to Hosts > Host Collections.
-
Choose the host collection to be removed.
-
Click Remove. An alert box appears:
Are you sure you want to remove host collection Host Collection Name?
-
Click Remove.
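To remove a host collection using the CLI, a sketch:
# hammer host-collection delete \
--name "My_Host_Collection" \
--organization "My_Organization"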
Adding a Host to a Host Collection
The following procedure shows how to add hosts to host collections.
A host must be registered to orcharhino to add it to a Host Collection.
Note that if you add a host to a host collection, the orcharhino auditing system does not log the change.
-
In the orcharhino management UI, navigate to Hosts > Host Collections.
-
Select the host collection where the host should be added.
-
On the Hosts tab, select the Add subtab.
-
Select the hosts to be added from the table and click Add Selected.
-
To add multiple hosts to a host collection, enter the following command:
# hammer host-collection add-host \ --host-ids My_Host_ID_1,My_Host_ID_2 \ --id My_Host_Collection_ID
Removing a Host From a Host Collection
The following procedure shows how to remove hosts from host collections.
Note that if you remove a host from a host collection, the host collection record in the database is not modified so the orcharhino auditing system does not log the change.
-
In the orcharhino management UI, navigate to Hosts > Host Collections.
-
Choose the desired host collection.
-
On the Hosts tab, select the List/Remove subtab.
-
Select the hosts you want to remove from the host collection and click Remove Selected.
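To remove multiple hosts from a host collection using the CLI, a sketch that mirrors the add-host example above:
# hammer host-collection remove-host \
--host-ids My_Host_ID_1,My_Host_ID_2 \
--id My_Host_Collection_ID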
Adding Content to a Host Collection
These steps show how to add content to host collections in orcharhino.
Adding Packages to a Host Collection
The following procedure shows how to add packages to host collections.
-
The content to be added should be available in one of the existing repositories or added prior to this procedure.
-
Content should be promoted to the environment where the hosts are assigned.
-
In the orcharhino management UI, navigate to Hosts > Host Collections.
-
Select the host collection where the package should be added.
-
On the Collection Actions tab, click Package Installation, Removal, and Update.
-
To update all packages, click the Update All Packages button to use the default method. Alternatively, select the drop-down icon to the right of the button to select a method to use. Selecting the via remote execution - customize first menu entry will take you to the Job invocation page where you can customize the action.
-
Select the Package or Package Group radio button as required.
-
In the field provided, specify the package or package group name. Then click:
-
Install - to install a new package using the default method. Alternatively, select the drop-down icon to the right of the button and select a method to use. Selecting the via remote execution - customize first menu entry will take you to the Job invocation page where you can customize the action.
-
Update - to update an existing package in the host collection using the default method. Alternatively, select the drop-down icon to the right of the button and select a method to use. Selecting the via remote execution - customize first menu entry will take you to the Job invocation page where you can customize the action.
-
Adding Errata to a Host Collection
The following procedure shows how to add errata to host collections.
-
The errata to be added should be available in one of the existing repositories or added prior to this procedure.
-
Errata should be promoted to the environment where the hosts are assigned.
-
In the orcharhino management UI, navigate to Hosts > Host Collections.
-
Select the host collection where the errata should be added.
-
On the Collection Actions tab, click Errata Installation.
-
Select the errata you want to add to the host collection and click the Install Selected button to use the default method. Alternatively, select the drop-down icon to the right of the button to select a method to use. Selecting the via remote execution - customize first menu entry takes you to the Job invocation page where you can customize the action.
Removing Content From a Host Collection
The following procedure shows how to remove packages from host collections.
-
Click Hosts > Host Collections.
-
Click the host collection where the package should be removed.
-
On the Collection Actions tab, click Package Installation, Removal, and Update.
-
Select the Package or Package Group radio button as required.
-
In the field provided, specify the package or package group name.
-
Click the Remove button to remove the package or package group using the default method. Alternatively, select the drop-down icon to the right of the button and select a method to use. Selecting the via remote execution - customize first menu entry will take you to the Job invocation page where you can customize the action.
Changing the Life Cycle Environment or Content View of a Host Collection
The following procedure shows how to change the assigned life cycle environment or Content View of host collections.
-
In the orcharhino management UI, navigate to Hosts > Host Collections.
-
Select the host collection where the life cycle environment or Content View should be changed.
-
On the Collection Actions tab, click Change assigned Life Cycle Environment or Content View.
-
Select the life cycle environment to be assigned to the host collection.
-
Select the required Content View from the list.
-
Click Assign.
The changes take effect in approximately 4 hours. To make the changes take effect immediately, on the host, enter the following command:
# subscription-manager refresh
You can use remote execution to run this command on multiple hosts at the same time.
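For example, the following job invocation is a sketch of refreshing all hosts in a host collection at once; it assumes the default SSH remote execution setup and the standard Run Command - SSH Default job template:
# hammer job-invocation create \
--job-template "Run Command - SSH Default" \
--inputs command="subscription-manager refresh" \
--search-query "host_collection = \"My_Host_Collection\""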
Using Puppet for Configuration Management
Use this section as a guide to configuring orcharhino to use Puppet for configuration management on managed hosts.
Introduction to Puppet
Puppet is a configuration management tool using a declarative language to describe the state of managed hosts.
Puppet increases your productivity as you can administer multiple hosts simultaneously. At the same time, it decreases your configuration effort as Puppet makes it easy to verify and possibly correct the state hosts are in.
Puppet Architecture
Puppet uses a server-client architecture with managed hosts running the Puppet agent and orcharhino or any orcharhino Proxies acting as a Puppet server. The Puppet agent enforces the declared state on managed hosts. It sends facts to the Puppet server, receives catalogs that are compiled from Puppet manifests, and sends reports to your orcharhino. Differences between the declared state and the actual state are called drifts in Puppet terminology.
Puppet facts are information about the hosts running the Puppet agent.
Running puppet facts
on a host displays Puppet facts in JSON format.
They are collected by the Puppet agent and reported to the Puppet server on each run.
orcharhino users can define Puppet parameters as key-value pairs which behave similar to host parameters or Ansible variables.
As an example, this section uses a Puppet module to install, configure, and manage the ntp service.
Prerequisites for Puppet Agent
The following requirements apply to running the Puppet agent on managed hosts:
-
Enable Puppet integration in your orcharhino. For more information, see Enabling Puppet Integration with orcharhino.
-
Sync the Puppet agent from puppet.com. For example, Puppet 6 for CentOS 7 on x86_64 or Puppet 6 for Ubuntu 20.04 with release
focal
, componentpuppet6
, and architectureamd64
.Manage these repositories by using products, sync plans, content views, composite content views, and lifecycle environments.
-
Provide the Puppet agent to the managed host using its activation key.
-
Add either
enable-puppet5
orenable-puppet6
as parameter to your host or host group. Set the parameter type toboolean
and its value totrue
. -
Set a Puppet environment, Puppet server, and Puppet CA when provisioning a host.
Continue either with installing the Puppet agent manually or by using the puppet_setup
snippet to install the Puppet agent automatically during provisioning.
Enabling Puppet Integration with orcharhino
By default, orcharhino does not have any Puppet integration configured. You need to enable the integration as appropriate for your situation. orcharhino can be configured to manage and deploy Puppet server on orcharhino Server or on orcharhino Proxy. Additionally, Puppet server can be deployed externally to orcharhino and integrated for reporting, facts, and external node classification (ENC).
-
Enable Puppet server on orcharhino Server before you enable it on orcharhino Proxies:
# foreman-installer --enable-foreman-plugin-puppet \
--enable-foreman-cli-puppet \
--foreman-proxy-puppet true \
--foreman-proxy-puppetca true \
--foreman-proxy-content-puppet true \
--enable-puppet \
--puppet-server true \
--puppet-server-foreman-ssl-ca /etc/pki/katello/puppet/puppet_client_ca.crt \
--puppet-server-foreman-ssl-cert /etc/pki/katello/puppet/puppet_client.crt \
--puppet-server-foreman-ssl-key /etc/pki/katello/puppet/puppet_client.key
-
Enable Puppet server on orcharhino Proxies:
# foreman-installer --foreman-proxy-puppet true \
--foreman-proxy-puppetca true \
--foreman-proxy-content-puppet true \
--enable-puppet \
--puppet-server true \
--puppet-server-foreman-ssl-ca /etc/pki/katello/puppet/puppet_client_ca.crt \
--puppet-server-foreman-ssl-cert /etc/pki/katello/puppet/puppet_client.crt \
--puppet-server-foreman-ssl-key /etc/pki/katello/puppet/puppet_client.key
Installing Puppet Agent during Host Provisioning
You can install Puppet Agent during the host provisioning process.
-
Navigate to Hosts > Provisioning Templates.
-
Select a provisioning template depending on your host provisioning method. For more information, see Types of Provisioning Templates in the Provisioning guide.
-
Ensure the
puppet_setup
snippet is included as follows:<%= snippet 'puppet_setup' %>
-
Enable Puppet using a host parameter for a single host or host group:
Add a host parameter named
enable-puppet6
as type boolean set totrue
. -
Optional: To install the Puppet Agent directly from yum.puppet.com, add a host parameter named
enable-puppetlabs-puppet6-repo
as type boolean set totrue
. Only use this if you don’t provide Puppet Agent to the host using its activation key.
Installing and Configuring the Puppet Agent
Use this procedure to install and configure the Puppet agent on a host.
-
The host must have a Puppet environment assigned to it.
-
The ATIX AG orcharhino client repository must be enabled and synchronized to orcharhino Server, and enabled on the host.
-
Log in to the host as the
root
user. -
Install the Puppet agent package.
-
On hosts running Enterprise Linux:
# yum install puppet-agent
-
On hosts running Debian:
# apt-get install puppet-agent
-
On hosts running SUSE Linux Enterprise Server:
# zypper install puppet-agent
-
-
Append the following server and environment settings to the
/etc/puppetlabs/puppet/puppet.conf
file. Set theenvironment
parameter to the name of the Puppet environment to which the host belongs:environment = My_Lifecycle_Environment server = orcharhino.example.com
-
Configure the Puppet agent to start on boot:
# systemctl enable --now puppet
-
Run the Puppet agent on the host:
# puppet ssl bootstrap
-
In the orcharhino management UI, navigate to Infrastructure > orcharhino Proxies.
-
From the list in the Actions column for the required orcharhino Proxy, select Certificates.
-
Click Sign to the right of the required host to sign the SSL certificate for the Puppet client.
You can continue to use Puppet for Configuration Management.
Installing a Puppet Module from Puppet Forge
The Puppet Forge is a repository for pre-built Puppet modules. Puppet modules flagged as supported are officially supported and tested by Puppet Inc.
This example shows how to add the ntp module to managed hosts.
-
Navigate to forge.puppet.com and search for
ntp
. One of the first modules is puppetlabs/ntp. -
Connect to your orcharhino Server using SSH and install the Puppet module:
# puppet module install puppetlabs-ntp -i /etc/puppetlabs/code/environments/production/modules
Use the
-i
parameter to specify the path and Puppet environment, for exampleproduction
.Once the installation is completed, the output looks as follows:
Notice: Preparing to install into /etc/puppetlabs/code/environments/production/modules ...
Notice: Created target directory /etc/puppetlabs/code/environments/production/modules
Notice: Downloading from https://forgeapi.puppet.com ...
Notice: Installing -- do not interrupt ...
/etc/puppetlabs/code/environments/production/modules
└─┬ puppetlabs-ntp (v8.3.0)
  └── puppetlabs-stdlib (v4.25.1) [/etc/puppetlabs/code/environments/production/modules]
An alternative way to install a Puppet module is to copy a folder containing the Puppet module to the module path as mentioned above. Ensure to resolve its dependencies manually.
Updating a Puppet Module
You can update an existing Puppet module using the puppet
command.
-
Connect to your Puppet server using SSH and find out where the Puppet modules are located:
# puppet config print modulepath
This returns output as follows:
/etc/puppetlabs/code/environments/production/modules:/etc/puppetlabs/code/environments/common:/etc/puppetlabs/code/modules:/opt/puppetlabs/puppet/modules:/usr/share/puppet/modules
-
If the module is located in the path as displayed above, the following command updates a module:
# puppet module upgrade My_Module_Name
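For example, to upgrade the ntp module used in this guide to a specific version (a sketch; adjust the module name and version as required):
# puppet module upgrade puppetlabs-ntp --version 8.3.0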
Importing a Puppet Module
Import the Puppet module to orcharhino or any attached orcharhino Proxy before you assign any of its classes to managed hosts. Ensure that you select Any Organization and Any Location as the context; otherwise, the import might fail.
-
In the orcharhino management UI, navigate to Configure > Classes.
-
Click the Import button in the upper right corner and select which orcharhino Proxy you want to import modules from. Typically, you can choose between your orcharhino Server and any attached orcharhino Proxy.
-
Select the Puppet module you want to import:
-
Select the classes of your choice and check the box on the left.
-
Click the Update button to import the Puppet classes to orcharhino.
-
-
Importing a Puppet module results in a notification as follows:
Successfully updated environments and Puppet classes from the on-disk Puppet installation
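If your hammer version provides the proxy import-classes subcommand, the following sketch triggers the same import from the CLI; 1 is a placeholder for the ID of your orcharhino Server or orcharhino Proxy:
# hammer proxy import-classes --id 1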
Configuring a Puppet Class
You can configure a Puppet class after you have imported it to orcharhino Server. This example overrides the default list of ntp servers.
-
In the orcharhino management UI, navigate to Configure > Classes and select the ntp Puppet module to change its configuration.
-
Select the Smart Class Parameter tab and search for servers.
-
Ensure the Override check box is selected.
-
Set the Parameter Type drop down menu to array.
-
Insert a list of ntp servers as Default Value:
["0.de.pool.ntp.org","1.de.pool.ntp.org","2.de.pool.ntp.org","3.de.pool.ntp.org"]
An alternative way to describe the array is the
yaml
syntax:
- 0.de.pool.ntp.org
- 1.de.pool.ntp.org
- 2.de.pool.ntp.org
- 3.de.pool.ntp.org
-
Click the Submit button after adding the values. This changes the default configuration of the Puppet module
ntp
.
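The same override can be sketched from the CLI using smart class parameters; option names are assumptions and may differ between hammer versions:
# hammer sc-param update \
--puppet-class ntp \
--name servers \
--override true \
--parameter-type array \
--default-value '["0.de.pool.ntp.org","1.de.pool.ntp.org","2.de.pool.ntp.org","3.de.pool.ntp.org"]'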
Managing Puppet Modules with r10k
You can manage Puppet modules and environments using r10k.
r10k uses a list of Puppet roles from a Puppetfile
to download Puppet modules from the Puppet Forge or git repositories.
It does not handle dependencies between Puppet modules.
A Puppetfile looks as follows:
forge "https://forge.puppet.com"
mod "puppet-nginx",
:git => "https://github.com/voxpupuli/puppet-nginx.git",
:ref => "master"
mod "puppetlabs/apache"
mod "puppetlabs/ntp", "8.3.0"
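To resolve such a Puppetfile into a Puppet environment, a minimal sketch, assuming r10k is installed on your Puppet server and the Puppetfile is stored in the environment directory:
# cd /etc/puppetlabs/code/environments/production
# r10k puppetfile install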
Using Puppet Classes
There are three different ways to use Puppet classes within orcharhino. You can assign Puppet classes to either a single host or multiple hosts using host groupings such as a location based context or Puppet config groups in combination with host groups.
A Puppet module typically consists of several Puppet classes. You can only assign Puppet classes to hosts.
Editing Puppet Classes
You can edit Puppet Classes within your orcharhino.
-
In the orcharhino management UI, navigate to Configure > Puppet Classes.
-
Select a Puppet Class from the list of Puppet Classes.
-
On the Puppet Class tab, assign the Puppet Class to one or multiple host groups.
-
On the Smart Class Parameter tab, select the Override check box to change the default parameter.
-
Set a Parameter Type and Default Value to overwrite the default value from the Puppet Class.
-
Click Submit to save your changes.
Creating Puppet Environments
You can create a Puppet Environment within your orcharhino.
-
In the orcharhino management UI, navigate to Configure > Puppet Environments.
-
Click Create Puppet Environment to create a Puppet Environment.
-
Enter a Name.
-
Optional: Set a location context.
-
Optional: Set an organization context.
-
Click Submit to create the Puppet Environment.
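A CLI sketch of the same step; depending on your hammer version, the subcommand may be named environment or puppet-environment:
# hammer environment create \
--name My_Puppet_Environment \
--organizations "My_Organization" \
--locations "My_Location"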
Assigning a Puppet Class to Individual Hosts
-
In the orcharhino management UI, navigate to Hosts > All hosts and click on the edit button of the host you want to add the
ntp
Puppet class to. -
Select the Puppet classes tab and look for the ntp class.
-
Click the + symbol next to
ntp
to add the ntp submodule to the list of included classes. -
Click the Submit button at the bottom to save your changes.
If the Puppet classes tab of an individual host is empty, check if it’s assigned to the proper Puppet environment.
Select the YAML button on the host overview page to verify the Puppet configuration. This produces output similar as follows:
---
parameters:
// shortened YAML output
classes:
ntp:
servers: '["0.de.pool.ntp.org","1.de.pool.ntp.org","2.de.pool.ntp.org","3.de.pool.ntp.org"]'
environment: production
...
Connect to your host using SSH and check the content of /etc/ntp.conf
to verify the ntp configuration.
This example assumes your host is running CentOS 7.
Other operating systems may store the ntp config file in a different path.
You may need to run the Puppet agent on your host by executing the following command: # puppet agent -t |
Running the following command on the host checks which ntp servers are used for clock synchronization:
# cat /etc/ntp.conf
This returns output similar as follows:
# ntp.conf: Managed by puppet.
server 0.de.pool.ntp.org
server 1.de.pool.ntp.org
server 2.de.pool.ntp.org
server 3.de.pool.ntp.org
You now have a working ntp module which you can add to a host or group of hosts to roll out your ntp configuration automatically.
Assigning a Puppet Class to a Host Group
Use a host group to assign the ntp Puppet class to multiple hosts at once. Every host you deploy based on this host group has this Puppet class installed.
-
In the orcharhino management UI, navigate to Configure > Host Groups to create a host group or edit an existing one.
-
Set the following parameters:
-
The Lifecycle Environment describes the stage in which certain versions of content are available to hosts.
-
The Content View is comprised of products and allows for version control of content repositories.
-
The Environment allows you to supply a group of hosts with their own dedicated configuration.
-
-
In the orcharhino management UI, navigate to Configure > Classes.
-
Add the Puppet class to the included classes or to the included config groups if a Puppet config group is configured.
-
Click Submit to save the changes.
Creating a Puppet Config Group
A Puppet config group is a named list of Puppet classes that allows you to combine their capabilities and assign them to managed hosts at a click. This is equivalent to the concept of profiles in pure Puppet.
-
In the orcharhino management UI, navigate to Configure > Config Groups and select the New Config Group button.
-
Select the classes you want to add to the config group.
-
Choose a meaningful Name for the Puppet config group.
-
Add selected Puppet classes to the Included Classes field.
-
-
Click Submit to save the changes.
Puppet Parameter Hierarchy
Puppet parameters are structured hierarchically. Parameters with a higher placement override parameters with lower placements:
-
Global parameters
-
Organization parameters
-
Location parameters
-
Host group parameters
-
Host parameters
For example, host specific parameters override any other parameter and location parameters only override any organization or global parameters. This feature is especially useful when grouping hosts with location or organization contexts.
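For example, the following sketch sets a hypothetical ntp_region parameter globally and then overrides it for a single host; the host value wins because host parameters sit at the top of the hierarchy:
# hammer global-parameter set --name ntp_region --value "europe"
# hammer host set-parameter --host host.example.com --name ntp_region --value "germany"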
Overriding Puppet Parameters on Individual Hosts
You can override parameters on individual hosts. This is recommended if you have multiple hosts and only want to make changes to a single one.
-
In the orcharhino management UI, navigate to Hosts > All Hosts and select a host.
-
Click Edit and select the Parameters tab.
-
Click the Override button to edit the Puppet parameter.
-
Click Submit to save the changes.
Overriding Puppet Parameters Location Based
You can use groups of hosts to override Puppet parameters for multiple hosts at once. The following example chooses the location context to illustrate setting context-based parameters.
-
In the orcharhino management UI, navigate to Configure > Classes and select the Smart Class Parameter tab.
-
Choose a parameter.
-
Use the Order list to define the hierarchy of the Puppet parameters. The individual host (
fqdn
) marks the most and the location context (location
) the least relevant. -
Check Merge Overrides if you want to add all further matched parameters after finding the first match.
-
Check Merge Default if you want to also include the default value even if there are more specific values defined.
-
Check Avoid Duplicates if you want to create a list of unique values for the selected parameter.
-
The matcher field requires an attribute type from the order list. For example, you can choose
Paris
as location context and set the value to French ntp servers. -
Use the Add Matcher button to add more matchers.
-
Click Submit to save the changes.
Overriding Puppet Parameters Organization Based
You can use groups of hosts to override Puppet parameters for multiple hosts at once. The following example chooses the organization context to illustrate setting context based parameters.
Note that organization based Puppet parameters are overridden by location based Puppet parameters.
-
In the orcharhino management UI, navigate to Configure > Classes and select the Smart Class Parameter tab.
-
Choose a parameter.
-
Use the Order list to define the hierarchy of the Puppet parameters. The individual host (
fqdn
) marks the most and the organization context (organization
) the least relevant. -
Check Merge Overrides if you want to add all further matched parameters after finding the first match.
-
Check Merge Default if you want to also include the default value even if there are more specific values defined.
-
Check Avoid Duplicates if you want to create a list of unique values for the selected parameter.
-
The matcher field requires an attribute type from the order list.
-
Use the Add Matcher button to add more matchers.
-
Click Submit to save the changes.
Puppet Run Once Using SSH
Assign the proper job template to the Run Puppet Once feature to run Puppet on managed hosts.
-
In the orcharhino management UI, navigate to Administer > Remote Execution Features.
-
Select the
puppet_run_host
remote execution feature. -
Assign the
Run Puppet Once - SSH Default
job template.
Run Puppet on managed hosts by running a job and selecting category Puppet
and template Run Puppet Once - SSH Default
.
Alternatively, click the Run Puppet Once button in the Schedule Remote Job drop down menu on the host details page.
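Alternatively, a sketch of triggering the same remote execution feature from the CLI:
# hammer job-invocation create \
--feature puppet_run_host \
--search-query "name = host.example.com"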
Setting Out of Sync Time for Puppet Managed Hosts
orcharhino considers hosts managed by Puppet to be out of sync if the last Puppet report is older than the combined values of outofsync_interval
and puppet_interval
set in minutes.
By default, the Puppet agent on managed hosts runs every 30 minutes.
The puppet_interval
is set to 35 minutes and the global outofsync_interval
is set to 30 minutes.
The effective time after which hosts are considered out of sync is the sum of outofsync_interval
and puppet_interval
.
Setting the global outofsync_interval
to 30 and the puppet_interval
to 60 results in a total of 90 minutes after which the host status changes to out of sync
.
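For example, the following sketch sets these two values from the CLI so that hosts change to out of sync after 90 minutes:
# hammer settings set --name outofsync_interval --value 30
# hammer settings set --name puppet_interval --value 60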
Setting the Puppet Agent Run Interval
Set the interval when the Puppet Agent runs and sends reports to orcharhino.
-
Connect to your managed host using SSH.
-
Add the Puppet Agent run interval to
/etc/puppetlabs/puppet/puppet.conf
, for exampleruninterval = 1h
.
Setting the Puppet Interval
-
In the orcharhino management UI, navigate to Administer > Settings, and click the Puppet tab.
-
In the Puppet interval field, set the duration, in minutes, after which hosts reporting using Puppet are considered to be out of sync.
Setting the Out of Sync Interval
-
In the orcharhino management UI, navigate to Administer > Settings.
-
On the General tab, edit Out of sync interval. Set a duration in minutes after which hosts are considered to be out of sync.
You can also overwrite this on individual hosts or host groups by adding the
outofsync_interval
parameter.
Additional Resources for Puppet
-
The official Puppet documentation is a good starting point.
-
The Puppet Forge is a repository for pre-built Puppet modules.
Using Salt for Configuration Management
Use this section as a guide to configuring orcharhino to use Salt for configuration management on managed hosts.
Introduction to Salt
This guide describes how to use Salt for configuration management in orcharhino. This guide contains information about how to install the Salt plugin, how to integrate orcharhino with an existing Salt Master, and how to configure managed hosts with Salt.
Salt offers two distinct modes of operation: clientless over SSH, or using the Salt Minion client software. orcharhino's Salt plugin exclusively supports the Salt Minion approach. |
Salt Architecture
You need a Salt Master that either runs on your orcharhino Server or orcharhino Proxy with the Salt plugin enabled. You can also use an existing Salt Master by installing and configuring the relevant orcharhino Proxy features on the existing Salt Master host.
For more information on installing a Salt Master, consult the official Salt documentation. Alternatively, use the bootstrap instructions found in the official Salt package repository.
-
Managed hosts are referred to as Salt Minions.
-
Information in form of key-value pairs gathered from Salt Minions is referred to as Salt Grains.
-
Configuration templates are referred to as Salt States.
-
Bundles of Salt States are referred to as Salt Environments.
Use the same Salt version on the Salt Master as you are using on your Salt Minions. You can use orcharhino’s content management to provide managed hosts with the correct version of the Salt Minion client software.
Port | Protocol | Service | Required For
---|---|---|---
4505 and 4506 | TCP | HTTPS | Salt Master to Salt Minions
9191 | TCP | HTTPS | Salt API
Installing Salt Plugin
To configure managed hosts with Salt, you need to install the Salt plugin.
Select Salt as a configuration management system during step five of the main orcharhino installation steps. Choosing this option installs and configures both the Salt plugin and a Salt Master on your orcharhino. |
-
Connect to your orcharhino using SSH:
# ssh root@orcharhino.example.com
-
Install the Salt plugin:
# foreman-installer \ --enable-foreman-plugin-salt \ --enable-foreman-proxy-plugin-salt
Configuring Salt
After you have installed the Salt plugin, you need to connect it to a Salt Master. This is required when adding Salt support to an existing orcharhino installation, when adding an existing Salt Master to orcharhino, or when setting up Salt on orcharhino Proxy.
If you select Salt during the main orcharhino installation steps, the installer automatically performs those steps on your orcharhino. Use the following sections as a reference for installing Salt manually on your orcharhino or orcharhino Proxy. |
Perform all actions on your Salt Master unless noted otherwise. This is either your orcharhino Server or orcharhino Proxy with Salt enabled.
Configuring orcharhino
You need to configure orcharhino to use the Salt plugin.
-
Connect to your orcharhino using SSH:
# ssh root@orcharhino.example.com
-
Extend the
/etc/sudoers
file to allow theforeman-proxy
user to run Salt:Cmd_Alias SALT = /usr/bin/salt, /usr/bin/salt-key foreman-proxy ALL = NOPASSWD: SALT Defaults:foreman-proxy !requiretty
-
Add a user called
saltuser
to access the Salt API:# adduser --no-create-home --shell /bin/false --home-dir / saltuser # passwd saltuser
Enter the password for the Salt user twice.
The command
adduser saltuser -p password
does not work. Using it prevents you from importing Salt States.
Configuring Salt Master
Configure your Salt Master to configure managed hosts using Salt.
-
Connect to your Salt Master using SSH:
# ssh root@salt-master.example.com
-
Set orcharhino as an external node classifier in
/etc/salt/master
for Salt:master_tops: ext_nodes: /usr/bin/foreman-node
-
Enable Salt Pillar data for use with orcharhino:
ext_pillar: - puppet: /usr/bin/foreman-node
-
Add a Salt Environment called
base
that is associated with the/srv/salt
directory:file_roots: base: - /srv/salt
-
Use the
saltuser
user for the Salt API and specify the connection settings in/etc/salt/master
:external_auth: pam: saltuser: - '@runner' rest_cherrypy: port: 9191 host: 0.0.0.0 ssl_key: /etc/puppetlabs/puppet/ssl/private_keys/orcharhino.example.com.pem ssl_crt: /etc/puppetlabs/puppet/ssl/certs/orcharhino.example.com.pem
-
Optional: Use Salt as a remote execution provider.
You can run arbitrary commands on managed hosts by using the existing connection from your Salt Master to Salt Minions. Configure the
foreman-proxy
user to run Salt commands in/etc/salt/master
:publisher_acl: foreman-proxy: - state.template_str
Authenticating Salt Minions Using Salt Autosign Grains
Configure orcharhino to automatically accept Salt Minions using Salt autosign Grains.
-
Configure the Grains key file on the Salt Master and add a reactor in the
/etc/salt/master
file:
autosign_grains_dir: /var/lib/foreman-proxy/salt/grains

reactor:
  - 'salt/auth':
    - /var/lib/foreman-proxy/salt/reactors/foreman_minion_auth.sls
The Grains file holds the acceptable keys when you deploy Salt Minions. The reactor initiates an interaction with the Salt plugin if the Salt Minion is successfully authenticated.
-
Add the reactor to the
salt/auth
event. -
Copy the Salt runners into your
file_roots
runners directory. The directory depends on your/etc/salt/master
config. If it is configured to use/srv/salt
, create the runners folder/srv/salt/_runners
and copy the Salt runners into it.# mkdir -p /srv/salt/_runners # cp /var/lib/foreman-proxy/salt/runners/* /srv/salt/_runners/
-
Restart the Salt Master service:
# systemctl restart salt-master
-
Enable the Salt reactors and runners in your Salt Environment:
# salt-run saltutil.sync_all
Enabling Salt Grains Upload
Managed hosts running the Salt Minion client software can upload Salt Grains to orcharhino Server or orcharhino Proxy. Salt Grains are collected system properties, for example the operating system or IP address of a Salt Minion.
-
Connect to your Salt Master using SSH:
# ssh root@salt-master.example.com
-
Edit
/etc/salt/foreman.yaml
on your Salt Master:
:proto: https
:host: orcharhino.example.com
:port: 443
:ssl_ca: "/etc/puppetlabs/puppet/ssl/ssl_ca.pem"
:ssl_cert: "/etc/puppetlabs/puppet/ssl/client_cert.pem"
:ssl_key: "/etc/puppetlabs/puppet/ssl/client_key.pem"
:timeout: 10
:salt: /usr/bin/salt
:upload_grains: true
Configuring Salt API
Configure the Salt API in /etc/foreman-proxy/settings.d/salt.yml
.
-
Connect to your Salt Master using SSH:
# ssh root@salt-master.example.com
-
Edit
/etc/foreman-proxy/settings.d/salt.yml
:
:use_api: true
:api_auth: pam
:api_url: https://orcharhino.example.com:9191
:api_username: saltuser
:api_password: password
Ensure to use the password of the previously created
saltuser
.
Activating Salt
-
Connect to your Salt Master using SSH:
# ssh root@salt-master.example.com
-
Restart the Salt services:
# systemctl restart salt-master salt-api
-
Connect to your orcharhino using SSH:
# ssh root@orcharhino.example.com
-
Restart the orcharhino services:
# foreman-maintain service restart
-
Refresh your orcharhino Proxy features.
-
In the orcharhino management UI, navigate to Infrastructure > Smart Proxies.
-
Click Refresh for the relevant orcharhino Proxy.
-
Setting up Salt Minions
Salt Minions require the Salt Minion client software to interact with your Salt Master.
Using orcharhino's Content Management
Provide managed hosts with the required Salt Minion client software using orcharhino.
-
Create a product called
Salt
. -
Create a repository within the
Salt
product for each operating system supported by orcharhino that you want to install the Salt Minion client software on.Add the operating system to the name of the repository, for example
Salt for Ubuntu 20.04
.You can find the upstream URL for the Salt packages on the official Salt package repository. The URL depends on both the Salt version and the operating system, for example
https://repo.saltproject.io/py3/ubuntu/20.04/amd64/3003/
. -
Synchronize the previously created products.
-
Create a Content View for each repository.
-
Create a Composite Content View for each major version of each operating system to make the new content available.
-
Add each of your operating system specific Salt Content Views to your main Composite Content View for that operating system and version.
-
Publish a new version of the Composite Content View from the previous step.
-
Promote the Content View from the previous step to your lifecycle environments as appropriate.
-
Optional: Create activation keys for your Composite Content View and lifecycle environment combinations.
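As a sketch, the first steps can also be performed from the CLI; the repository URL targets the RPM variant for CentOS 7 and all values are examples only:
# hammer product create \
--name "Salt" \
--organization "My_Organization"
# hammer repository create \
--organization "My_Organization" \
--product "Salt" \
--name "Salt for CentOS 7" \
--content-type yum \
--url https://repo.saltproject.io/py3/redhat/7/x86_64/3003/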
Creating a Host Group With Salt
To bundle provisioning and configuration settings, you can create a host group with Salt enabled for managed hosts.
-
In the orcharhino management UI, navigate to Configure > Host Groups.
-
Click Create Host Group.
-
Click the Host Group tab and select a Salt Environment and a Salt Master.
-
Click the Salt States tab and assign Salt States to your host group.
-
Click the Activation Keys tab and select an activation key containing the Salt Minion client software.
-
Click Submit to save your host group.
Managed hosts deployed using this host group automatically install and configure the required Salt Minion client software and register with your Salt Master. For more information, see Creating a Host Group in the Managing Hosts guide.
Deploying Minion Hosts
Deploy managed hosts that are fully provisioned and configured for Salt usage.
-
A Salt Master
-
A Salt Environment
-
A Content View containing the required Salt Minion client software
-
An activation key
-
A lifecycle environment
-
In the orcharhino management UI, navigate to Hosts > Create Host.
-
Select the previously created host group with Salt enabled.
-
Click Submit to deploy a host.
Verifying the Connection between Salt Master and Salt Minions
Verify the connection between your Salt Master and Salt Minions.
-
Connect to your Salt Master using SSH:
# ssh root@salt-master.example.com
-
Ping your Salt Minions:
# salt "*" test.ping
-
Display all Salt Grains of all connected Salt Minions:
# salt "*" grains.items
Salt Usage
Salt Minions managed by orcharhino are associated with a Salt Master and a Salt Environment.
The associated Salt Environment within orcharhino must match the actual Salt Environment from the file_roots
option in the /etc/salt/master
file.
You can configure managed hosts with Salt once they are associated with your orcharhino Server or orcharhino Proxy and the Salt Minion client software is installed.
Using the Salt Hammer CLI
You can use Hammer CLI to configure managed hosts using Salt.
Run hammer --help
for more information.
-
Install
hammer_cli_foreman_salt
on your orcharhino Server
-
Creating a Salt State:
# hammer salt-state create \ --name My_Salt_State
-
Viewing information about a Salt Minion:
# hammer salt-minion info \ --name salt-minion.example.com
-
Adding Salt States to a Salt Minion:
# hammer salt-minion update \ --name salt-minion.example.com \ --salt-states My_Salt_State
Using the Salt API
orcharhino Salt extends the orcharhino REST API with Salt-specific features.
View the full API documentation on your orcharhino Server at http://orcharhino.example.com/apidoc/v2.html
.
-
Using curl to get a list of Salt keys from orcharhino-proxy.example.com:
# curl -u My_User_Name:My_Password \ -H "Accept: version=2,application/json" \ https://orcharhino.example.com/salt/api/v2/salt_keys/orcharhino-proxy.example.com
Importing Salt States
A Salt State configures parts of a host, for example, a service or the installation of a package.
You can import Salt States from your Salt Master to orcharhino.
The Salt Master configuration in this guide uses a Salt Environment called base
that includes the Salt States stored in /srv/salt/
.
-
In the orcharhino management UI, navigate to Configure > Salt States and click Import from FQDN.
-
Optional: Click Edit to assign Salt States to Salt Environments.
-
Optional: Click Delete to remove a Salt State from your orcharhino. This only removes the Salt State from orcharhino, not from the disk of your Salt Master.
-
Click Submit to import the Salt States.
After you have imported Salt States, you can assign them to hosts or Host Groups.
Salt applies these Salt States to any hosts they are assigned to every time you run state.highstate
.
Configure the paths for Salt States and Salt Pillars in |
Viewing Salt Autosign Keys
The Salt Keys page lists hosts and their Salt keys. You can manually accept, reject, or delete keys.
Use the Salt autosign feature to automatically accept signing requests from managed hosts. By default, managed hosts are supplied with a Salt key during host provisioning.
This feature only covers the autosign via |
-
In the orcharhino management UI, navigate to Infrastructure > Smart Proxies.
-
Select a orcharhino Proxy.
-
In the Actions drop down menu, click Salt Keys.
Viewing Salt Reports
Salt reports are uploaded to orcharhino every ten minutes using the /etc/cron.d/smart_proxy_salt
cron job.
You can trigger this action manually on your Salt Master:
# /usr/sbin/upload-salt-reports
-
To view Salt reports, in the orcharhino management UI, navigate to Monitor > Config Management Reports.
-
To view Salt reports associated with an individual host, in the orcharhino management UI, navigate to Hosts > All Hosts and select a single host.
Enabling Salt Report Uploads
The Salt Master can directly upload Salt reports to orcharhino.
-
Connect to your Salt Master using SSH:
# ssh root@salt-master.example.com
-
Ensure the reactor is present:
# file /var/lib/foreman-proxy/salt/reactors/foreman_report_upload.sls
-
Copy report upload script:
# cp /var/lib/foreman-proxy/salt/runners/foreman_report_upload.py /srv/salt/_runners/
-
Add the reactor to
/etc/salt/master
:
reactor:
  - 'salt/auth':
    - /var/lib/foreman-proxy/salt/reactors/foreman_minion_auth.sls
  - 'salt/job/*/ret/*':
    - /var/lib/foreman-proxy/salt/reactors/foreman_report_upload.sls
-
Restart the Salt Master:
# systemctl restart salt-master
-
Enable the new runner:
# salt-run saltutil.sync_all
-
If you use a cron job to upload facts from your Salt Master to orcharhino, disable the cron job:
# rm -f /etc/cron.d/smart_proxy_salt
Alternatively, you can upload Salt reports from your Salt Master to orcharhino manually:
# /usr/sbin/upload-salt-reports
Salt Variables
You can configure Salt Variables within orcharhino. The configured values are available as Salt Pillar data.
For more information about how to configure the variables' data, see the Puppet smart class parameters section.
Viewing ENC Parameters
You can use orcharhino as an external node classifier for Salt. Click Salt ENC on the host overview page to view assigned Salt States. This shows a list of parameters that are made available for Salt usage as Salt Pillar data.
You can check what parameters are truly available on the Salt side by completing the following procedure.
-
Connect to your Salt Master using SSH:
# ssh root@salt-master.example.com
-
View available ENC parameters:
# salt '*' pillar.items
-
Optional: Refresh the Salt Pillar data if a parameter is missing:
# salt '*' saltutil.refresh_pillar
Running Salt
You can run arbitrary Salt functions using orcharhino’s remote execution features.
Most commonly, you can run state.highstate
on one or more Salt Minions.
This applies all relevant Salt States on your managed hosts.
-
In the orcharhino management UI, navigate to Hosts > All Hosts.
-
Select one or multiple hosts.
-
Click Run Salt from the actions drop down menu.
Alternatively, you can click Run Salt from the Schedule Remote Job drop down menu on the host overview page.
Triggering a remote execution job takes you to the corresponding job overview page.
If you select a host from the list, you can see the output of the remote job in real time.
To view remote jobs, in the orcharhino management UI, navigate to Monitor > Jobs.
Running state.highstate
generates a Salt report.
Note that reports are uploaded to orcharhino every ten minutes.
You can use orcharhino’s remote execution features to run arbitrary Salt functions. This includes an optional test mode that tells you what would happen without actually changing anything.
-
In the orcharhino management UI, navigate to Hosts > All Hosts
-
Select a host.
-
Click Schedule Remote Job.
-
For the Job category field, select Salt-Call from the drop down menu.
-
For the Job template field, select Salt Run function from the drop down menu.
-
In the Function field, enter the name of the function that you want to trigger, for example,
pillar.items
or test.ping
. -
Optional: Click Display advanced fields and enable test-run.
-
Click Submit to run the job.
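For comparison, a roughly equivalent test run with the native Salt CLI on your Salt Master might look as follows; test=True reports what would change without applying anything:
# salt 'dev-node.example.com' state.highstate test=True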
You can also schedule remote jobs or run them recurrently. For more information, see Configuring and Setting up Remote Jobs in the Managing Hosts guide.
Alternatively, you can define recurrent actions using the native Salt way.
For example, you can schedule hourly state.highstate
runs on individual Salt Minions by extending /etc/salt/minion
:
schedule:
  highstate:
    function: state.highstate
    minutes: 60
Salt Example
This example uses a Salt State to manage the /etc/motd
file on one or more Salt Minions.
It demonstrates the use of orcharhino as an external node classifier and the use of Salt Grains.
-
Create a global parameter called
vendor_name
with the stringorcharhino
as its value. -
Add a new Salt State called
motd
to your Salt Master. -
Create the
/srv/salt/motd/
directory:# mkdir -p /srv/salt/motd/
-
Create
/srv/salt/motd/init.sls
as a Salt State file:
/etc/motd:
  file.managed:
    - user: root
    - group: root
    - mode: 0644
    - source: salt://motd/motd.template
    - template: jinja
-
Create
/srv/salt/motd/motd.template
as a template referenced by the Salt State file:
Welcome to {{ grains['fqdn'] }}
Powered by {{ salt['pillar.get']('vendor_name') }}
This template accesses the
fqdn
Salt Grain and retrieves the vendor_name
parameter from the Salt Pillar. -
Import the
motd
Salt State into orcharhino. -
Verify that Salt has been given access to the
vendor_name
parameter by running either of the following commands on your Salt Master:# salt '*' pillar.items | grep -A 1 vendor_name # salt '*' pillar.get vendor_name
If the output does not include the value of the
vendor_name
parameter, you must refresh the Salt Pillar data first:# salt '*' saltutil.refresh_pillar
For information about how to refresh Salt Pillar data, see Salt ENC parameters.
-
Add the
motd
Salt State to your Salt Minions or a host group. -
Apply the Salt State by running
state.highstate
. -
Optional: Verify the contents of
/etc/motd
on a Salt Minion.
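For example, assuming dev-node.example.com is one of your Salt Minions, you can read the rendered file from the Salt Master:
# salt 'dev-node.example.com' cmd.run 'cat /etc/motd'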
Additional Resources for Salt
-
The official Salt documentation is a good entry point when starting with Salt.
-
You can download the Salt packages from the official Salt package repository.
Configuring and Setting Up Remote Jobs
Use this section as a guide to configuring orcharhino to execute jobs on remote hosts.
Any command that you want to apply to a remote host must be defined as a job template. After you have defined a job template, you can execute it multiple times.
About Running Jobs on Hosts
You can run jobs on hosts remotely from orcharhino Proxies using shell scripts or Ansible tasks and playbooks. This is referred to as remote execution.
For custom Ansible roles that you create, or roles that you download, you must install the package containing the roles on the orcharhino Proxy base operating system. Before you can use Ansible roles, you must import the roles into orcharhino from the orcharhino Proxy where they are installed.
Communication occurs through orcharhino Proxy, which means that orcharhino Server does not require direct access to the target host, and can scale to manage many hosts. Remote execution uses the SSH service that must be enabled and running on the target host. Ensure that the remote execution orcharhino Proxy has access to port 22 on the target hosts.
orcharhino uses ERB syntax job templates. For more information, see Template Writing Reference in the Managing Hosts guide.
Several job templates for shell scripts and Ansible are included by default. For more information, see Setting up Job Templates.
Any orcharhino Proxy base operating system is a client of orcharhino Server’s internal orcharhino Proxy, and therefore this section applies to any type of host connected to orcharhino Server, including orcharhino Proxies. |
You can run jobs on multiple hosts at once, and you can use variables in your commands for more granular control over the jobs you run. You can use host facts and parameters to populate the variable values.
In addition, you can specify custom values for templates when you run the command.
For more information, see Executing a Remote Job.
Remote Execution Workflow
When you run a remote job on hosts, for every host, orcharhino performs the following actions to find a remote execution orcharhino Proxy to use.
orcharhino searches only for orcharhino Proxies that have the remote execution feature enabled.
-
orcharhino finds the host’s interfaces that have the Remote execution check box selected.
-
orcharhino finds the subnets of these interfaces.
-
orcharhino finds remote execution orcharhino Proxies assigned to these subnets.
-
From this set of orcharhino Proxies, orcharhino selects the orcharhino Proxy that has the least number of running jobs. By doing this, orcharhino ensures that the jobs load is balanced between remote execution orcharhino Proxies.
-
If orcharhino does not find a remote execution orcharhino Proxy at this stage, and if the Fallback to Any orcharhino Proxy setting is enabled, orcharhino adds another set of orcharhino Proxies to select the remote execution orcharhino Proxy from. orcharhino selects the most lightly loaded orcharhino Proxy from the following types of orcharhino Proxies that are assigned to the host:
-
DHCP, DNS and TFTP orcharhino Proxies assigned to the host’s subnets
-
DNS orcharhino Proxy assigned to the host’s domain
-
Realm orcharhino Proxy assigned to the host’s realm
-
Puppet server orcharhino Proxy
-
Puppet CA orcharhino Proxy
-
OpenSCAP orcharhino Proxy
-
-
If orcharhino does not find a remote execution orcharhino Proxy at this stage, and if the Enable Global orcharhino Proxy setting is enabled, orcharhino selects the most lightly loaded remote execution orcharhino Proxy from the set of all orcharhino Proxies in the host’s organization and location to execute a remote job.
Permissions for Remote Execution
You can control which users can run which jobs within your infrastructure, including which hosts they can target. The remote execution feature provides two built-in roles:
-
Remote Execution Manager: This role allows access to all remote execution features and functionality.
-
Remote Execution User: This role only allows running jobs; it does not provide permission to modify job templates.
You can clone the Remote Execution User role and customize its filter for increased granularity.
If you adjust the filter with the view_job_templates
permission, the user can only see and trigger jobs based on matching job templates.
You can use the view_hosts
and view_smart_proxies
permissions to limit which hosts or orcharhino Proxies are visible to the role.
The execute_template_invocation
permission is a special permission that is checked immediately before execution of a job begins.
This permission defines which job template you can run on a particular host.
This allows for even more granularity when specifying permissions.
For more information about working with roles and permissions, see Creating and Managing Roles in the Administering orcharhino guide.
The following example shows filters for the execute_template_invocation
permission:
name = Reboot and host.name = staging.example.com
name = Reboot and host.name ~ *.staging.example.com
name = "Restart service" and host_group.name = webservers
The first line in this example permits the user to apply the Reboot template to one selected host. The second line defines a pool of hosts with names ending with .staging.example.com. The third line binds the template with a host group.
Permissions assigned to users can change over time. If a user has already scheduled some jobs to run in the future, and the permissions have changed, this can result in execution failure because the permissions are checked immediately before job execution. |
Creating a Job Template
Use this procedure to create a job template. To use the CLI instead of the orcharhino management UI, see the CLI procedure.
-
In the orcharhino management UI, navigate to Hosts > Job templates.
-
Click New Job Template.
-
Click the Template tab, and in the Name field, enter a unique name for your job template.
-
Select Default to make the template available for all organizations and locations.
-
Create the template directly in the template editor or upload it from a text file by clicking Import.
-
Optional: In the Audit Comment field, add information about the change.
-
Click the Job tab, and in the Job category field, enter your own category or select from the default categories listed in Default Job Template Categories.
-
Optional: In the Description Format field, enter a description template. For example,
Install package %package_name
. You can also use%template_name
and%job_category
in your template. -
From the Provider Type list, select SSH for shell scripts and Ansible for Ansible tasks or playbooks.
-
Optional: In the Timeout to kill field, enter a timeout value to terminate the job if it does not complete.
-
Optional: Click Add Input to define an input parameter. Parameters are requested when executing the job and do not have to be defined in the template. For examples, see the Help tab.
-
Optional: Click Foreign input set to include other templates in this job.
-
Optional: In the Effective user area, configure a user if the command cannot use the default
remote_execution_effective_user
setting. -
Optional: If this template is a snippet to be included in other templates, click the Type tab and select Snippet.
-
Click the Location tab and add the locations where you want to use the template.
-
Click the Organizations tab and add the organizations where you want to use the template.
-
Click Submit to save your changes.
You can extend and customize job templates by including other templates in the template syntax. For more information, see the appendices in the Managing Hosts guide.
-
To create a job template using a template-definition file, enter the following command:
# hammer job-template create \ --file "path_to_template_file" \ --name "template_name" \ --provider-type SSH \ --job-category "category_name"
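To verify that the template exists and to review its details, you can, for example, run:
# hammer job-template list
# hammer job-template info --name "template_name"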
Configuring the Fallback to Any orcharhino Proxy Remote Execution Setting in orcharhino
You can enable the Fallback to Any orcharhino Proxy setting to configure orcharhino to search for remote execution orcharhino Proxies from the list of orcharhino Proxies that are assigned to hosts. This can be useful if you need to run remote jobs on hosts that have no subnets configured or if the hosts' subnets are assigned to orcharhino Proxies that do not have the remote execution feature enabled.
If the Fallback to Any orcharhino Proxy setting is enabled, orcharhino adds another set of orcharhino Proxies to select the remote execution orcharhino Proxy from. orcharhino also selects the most lightly loaded orcharhino Proxy from the set of all orcharhino Proxies assigned to the host, such as the following:
-
DHCP, DNS and TFTP orcharhino Proxies assigned to the host’s subnets
-
DNS orcharhino Proxy assigned to the host’s domain
-
Realm orcharhino Proxy assigned to the host’s realm
-
Puppet server orcharhino Proxy
-
Puppet CA orcharhino Proxy
-
OpenSCAP orcharhino Proxy
-
In the orcharhino management UI, navigate to Administer > Settings.
-
Click RemoteExecution.
-
Configure the Fallback to Any orcharhino Proxy setting.
Enter the hammer settings set
command on orcharhino to configure the Fallback to Any orcharhino Proxy setting.
For example, to set the value to true
, enter the following command:
# hammer settings set --name=remote_execution_fallback_proxy --value=true
Configuring the Global orcharhino Proxy Remote Execution Setting in orcharhino
By default, orcharhino searches for remote execution orcharhino Proxies in hosts' organizations and locations regardless of whether orcharhino Proxies are assigned to hosts' subnets or not. You can disable the Enable Global orcharhino Proxy setting if you want to limit the search to the orcharhino Proxies that are assigned to hosts' subnets.
If the Enable Global orcharhino Proxy setting is enabled, orcharhino adds another set of orcharhino Proxies to select the remote execution orcharhino Proxy from. orcharhino also selects the most lightly loaded remote execution orcharhino Proxy from the set of all orcharhino Proxies in the host’s organization and location to execute a remote job.
-
In the orcharhino management UI, navigate to Administer > Settings.
-
Click RemoteExecution.
-
Configure the Enable Global orcharhino Proxy setting.
-
Enter the
hammer settings set
command on orcharhino to configure theEnable Global orcharhino Proxy
setting. For example, to set the value totrue
, enter the following command:# hammer settings set --name=remote_execution_global_proxy --value=true
Configuring orcharhino to Use an Alternative Directory to Execute Remote Jobs on Hosts
By default, orcharhino uses the /var/tmp
directory on the client system to execute the remote execution jobs.
If the client system has noexec
set for the /var/
volume or file system, you must configure orcharhino to use an alternative directory; otherwise, the remote execution job fails because the script cannot run.
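To check whether the volume backing /var/tmp is mounted with the noexec option, you can, for example, run the following on the client system:
# findmnt --target /var/tmp --output TARGET,OPTIONS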
-
Create a new directory, for example /remote_working_dir:
# mkdir /remote_working_dir
-
Copy the SELinux context from the default
var
directory:# chcon --reference=/var /remote_working_dir
-
Configure the system:
# foreman-installer --foreman-proxy-plugin-remote-execution-ssh-remote-working-dir /remote_working_dir
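To confirm that the new directory carries the same SELinux context as /var, you can, for example, compare both directories:
# ls -dZ /var /remote_working_dir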
Distributing SSH Keys for Remote Execution
To use SSH keys for authenticating remote execution connections, you must distribute the public SSH key from orcharhino Proxy to its attached hosts that you want to manage. Ensure that the SSH service is enabled and running on the hosts. Configure any network or host-based firewalls to enable access to port 22.
Use one of the following methods to distribute the public SSH key from orcharhino Proxy to target hosts:
-
Using the orcharhino API to Obtain SSH Keys for Remote Execution.
-
Configuring a Kickstart Template to Distribute SSH Keys during Provisioning.
-
For new orcharhino hosts, you can deploy SSH keys to orcharhino hosts during registration using the global registration template. For more information, see Registering a Host to orcharhino Using the Global Registration Template.
orcharhino distributes SSH keys for the remote execution feature to the hosts provisioned from orcharhino by default.
If the hosts are running on Amazon Web Services, enable password authentication. For more information, see https://aws.amazon.com/premiumsupport/knowledge-center/new-user-accounts-linux-instance.
Distributing SSH Keys for Remote Execution Manually
To distribute SSH keys manually, complete the following steps:
-
Enter the following command on orcharhino Proxy. Repeat for each target host you want to manage:
# ssh-copy-id -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy.pub root@target.example.com
-
To confirm that the key was successfully copied to the target host, enter the following command on orcharhino Proxy:
# ssh -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy root@target.example.com
Using the orcharhino API to Obtain SSH Keys for Remote Execution
To use the orcharhino API to download the public key from orcharhino Proxy, complete this procedure on each target host.
-
On the target host, create the
~/.ssh
directory to store the SSH key:# mkdir ~/.ssh
-
Download the SSH key from orcharhino Proxy:
# curl https://orcharhino-proxy.example.com:8443/ssh/pubkey >> ~/.ssh/authorized_keys
-
Configure permissions for the
~/.ssh
directory:# chmod 700 ~/.ssh
-
Configure permissions for the
authorized_keys
file:# chmod 600 ~/.ssh/authorized_keys
Configuring a Kickstart Template to Distribute SSH Keys during Provisioning
You can add a remote_execution_ssh_keys
snippet to your custom kickstart template to deploy SSH Keys to hosts during provisioning.
Kickstart templates that orcharhino ships include this snippet by default.
Therefore, orcharhino copies the SSH key for remote execution to the systems during provisioning.
-
To include the public key in newly-provisioned hosts, add the following snippet to the Kickstart template that you use:
<%= snippet 'remote_execution_ssh_keys' %>
Configuring a keytab for Kerberos Ticket Granting Tickets
Use this procedure to configure orcharhino to use a keytab to obtain Kerberos ticket granting tickets. If you do not set up a keytab, you must manually retrieve tickets.
-
Find the ID of the
foreman-proxy
user:# id -u foreman-proxy
-
Modify the
umask
value so that new files have the permissions600
:# umask 077
-
Create the directory for the keytab:
# mkdir -p "/var/kerberos/krb5/user/USER_ID"
-
Create a keytab or copy an existing keytab to the directory:
# cp your_client.keytab /var/kerberos/krb5/user/USER_ID/client.keytab
-
Change the directory owner to the
foreman-proxy
user:# chown -R foreman-proxy:foreman-proxy "/var/kerberos/krb5/user/USER_ID"
-
Ensure that the keytab file is read-only:
# chmod -wx "/var/kerberos/krb5/user/USER_ID/client.keytab"
-
Restore the SELinux context:
# restorecon -RvF /var/kerberos/krb5
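To verify that the keytab is readable and contains the expected principal, you can, for example, list its entries:
# klist -k /var/kerberos/krb5/user/USER_ID/client.keytab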
Configuring Kerberos Authentication for Remote Execution
You can use Kerberos authentication to establish an SSH connection for remote execution on orcharhino hosts.
-
Enroll orcharhino Server on the Kerberos server
-
Enroll the orcharhino target host on the Kerberos server
-
Configure and initialize a Kerberos user account for remote execution
-
Ensure that the foreman-proxy user on orcharhino has a valid Kerberos ticket granting ticket
-
To install and enable Kerberos authentication for remote execution, enter the following command:
# foreman-installer --scenario katello \ --foreman-proxy-plugin-remote-execution-ssh-ssh-kerberos-auth true
-
To edit the default user for remote execution, in the orcharhino management UI, navigate to Administer > Settings and click the RemoteExecution tab. In the SSH User row, edit the second column and add the user name for the Kerberos account.
-
Navigate to remote_execution_effective_user and edit the second column to add the user name for the Kerberos account.
To confirm that Kerberos authentication is ready to use, run a remote job on the host.
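You can also check whether the foreman-proxy user currently holds a valid ticket granting ticket, for example:
# sudo -u foreman-proxy klist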
Setting up Job Templates
orcharhino provides default job templates that you can use for executing jobs. To view the list of job templates, navigate to Hosts > Job templates. If you want to use a template without making changes, proceed to Executing a Remote Job.
You can use default templates as a base for developing your own. Default job templates are locked for editing. Clone the template and edit the clone.
-
To clone a template, in the Actions column, select Clone.
-
Enter a unique name for the clone and click Submit to save the changes.
Job templates use the Embedded Ruby (ERB) syntax. For more information about writing templates, see the Template Writing Reference in the Managing Hosts guide.
To create an Ansible job template, use the following procedure and instead of ERB syntax, use YAML syntax.
Begin the template with ---
.
You can embed an Ansible playbook YAML file into the job template body.
You can also add ERB syntax to customize your YAML Ansible template.
You can also import Ansible playbooks in orcharhino.
For more information, see Synchronizing Repository Templates in the Managing Hosts guide.
At run time, job templates can accept parameter variables that you define for a host. Note that only the parameters visible on the Parameters tab at the host’s edit page can be used as input parameters for job templates. If you do not want your Ansible job template to accept parameter variables at run time, in the orcharhino management UI, navigate to Administer > Settings and click the Ansible tab. In the Top level Ansible variables row, change the Value parameter to No.
Executing a Remote Job
You can execute a job that is based on a job template against one or more hosts.
To use the CLI instead of the orcharhino management UI, see the CLI procedure.
-
In the orcharhino management UI, navigate to Hosts > All Hosts and select the target hosts on which you want to execute a remote job. You can use the search field to filter the host list.
-
From the Select Action list, select Schedule Remote Job.
-
On the Job invocation page, define the main job settings:
-
Select the Job category and the Job template you want to use.
-
Optional: Select a stored search string in the Bookmark list to specify the target hosts.
-
Optional: Further limit the targeted hosts by entering a Search query. The Resolves to line displays the number of hosts affected by your query. Use the refresh button to recalculate the number after changing the query. The preview icon lists the targeted hosts.
-
The remaining settings depend on the selected job template. See Creating a Job Template for information on adding custom parameters to a template.
-
Optional: To configure advanced settings for the job, click Display advanced fields. Some of the advanced settings depend on the job template; the following settings are general:
-
Effective user defines the user who executes the job. By default, this is the SSH user.
-
Concurrency level defines the maximum number of jobs executed at once. This can prevent overloading system resources when you execute the job on a large number of hosts.
-
Timeout to kill defines the time interval in seconds after which the job is killed if it has not finished. A task that could not be started within the defined interval, for example because the previous task took too long to finish, is canceled.
-
Type of query defines when the search query is evaluated. This helps to keep the query up to date for scheduled tasks.
-
Execution ordering determines the order in which the job is executed on hosts: alphabetical or randomized.
Concurrency level and Timeout to kill settings enable you to tailor job execution to fit your infrastructure hardware and needs.
-
-
To run the job immediately, ensure that Schedule is set to Execute now. You can also define a one-time future job or set up a recurring job. For recurring tasks, you can define start and end dates, and the number and frequency of runs. You can also use cron syntax to define repetition.
-
Click Submit. This displays the Job Overview page, and when the job completes, also displays the status of the job.
-
Enter the following command on orcharhino:
# hammer settings set --name=remote_execution_global_proxy --value=false
To execute a remote job with custom parameters, complete the following steps:
-
Find the ID of the job template you want to use:
# hammer job-template list
-
Show the template details to see parameters required by your template:
# hammer job-template info --id template_ID
-
Execute a remote job with custom parameters:
# hammer job-invocation create \ --job-template "template_name" \ --inputs key1="value",key2="value",... \ --search-query "query"
Replace query with the filter expression that defines hosts, for example
"name ~ rex01"
. For more information about executing remote commands with hammer, enterhammer job-template --help
andhammer job-invocation --help
.
Monitoring Jobs
You can monitor the progress of the job while it is running. This can help in any troubleshooting that may be required.
Ansible jobs run in batches of 100 hosts, so you cannot cancel a job running on a specific host. A job completes only after the Ansible playbook runs on all hosts in the batch.
-
In the orcharhino management UI, navigate to Monitor > Jobs. This page is automatically displayed if you triggered the job with the
Execute now
setting. To monitor scheduled jobs, navigate to Monitor > Jobs and select the job run you wish to inspect. -
On the Job page, click the Hosts tab. This displays the list of hosts on which the job is running.
-
In the Host column, click the name of the host that you want to inspect. This displays the Detail of Commands page where you can monitor the job execution in real time.
-
Click Back to Job at any time to return to the Job Details page.
To monitor the progress of a job while it is running, complete the following steps:
-
Find the ID of a job:
# hammer job-invocation list
-
Monitor the job output:
# hammer job-invocation output \ --id job_ID \ --host host_name
-
Optional: to cancel a job, enter the following command:
# hammer job-invocation cancel \ --id job_ID
Host Status in orcharhino
In orcharhino, each host has a global status that indicates which hosts need attention. Each host also has sub-statuses that represent the status of a particular feature. Whenever a sub-status changes, the global status is recalculated, and the result is determined by the statuses of all sub-statuses.
Host Global Status Overview
The global status represents the overall status of a particular host. The status can have one of three possible values: OK, Warning, or Error. You can find the global status on the Hosts Overview page. The status is displayed as a small icon next to the host name and has a color that corresponds with the status. Hovering over the icon renders a tooltip with sub-status information so that you can quickly find out more details. To view the global status for a host, in the orcharhino management UI, navigate to Hosts > All Hosts.
- OK
-
No errors were reported by any sub-status. This status is highlighted with the color green.
- Warning
-
While no error was detected, some sub-status raised a warning. For example, there are no configuration management reports for the host even though the host is configured to send reports. It is a good practice to investigate any warnings to ensure that your deployment remains healthy. This status is highlighted with the color yellow.
- Error
-
Some sub-status reports a failure. For example, a run contains some failed resources. This status is highlighted with the color red.
If you want to search for hosts according to their status, use the syntax for searching in orcharhino that is outlined in the Searching and Bookmarking chapter of the Administering orcharhino guide, and then build your searches out using the following status-related examples:
To search for hosts that have an OK status:
global_status = ok
To search for all hosts that deserve attention:
global_status = error or global_status = warning
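You can use the same search syntax with Hammer, for example, to list all hosts that deserve attention:
# hammer host list --search "global_status = error or global_status = warning"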
Host Sub-status Overview
A sub-status monitors only part of a host’s capabilities.
Currently, orcharhino ships only with Build and Configuration sub-statuses. There can be more sub-statuses depending on which plugins you add to your orcharhino.
The build sub-status is relevant for managed hosts and when orcharhino runs in unattended mode.
The configuration sub-status is only relevant if orcharhino uses a configuration management system like Ansible, Puppet, or Salt.
To view the sub-statuses for a host, in the orcharhino management UI, navigate to Hosts > All Hosts and click the host whose full status you want to inspect. You can also view sub-status information in the hover help for each host.
In the Properties table of the host details page, you can view both the global host status and all sub-statuses.
Each sub-status can define its own set of possible values that are mapped to the three global status values.
The Build sub-status has two possible values, pending and built, which are both mapped to the global OK value.
The Configuration status has more possible values that map to the global status as follows:
- Active
-
During the last run, some resources were applied.
- Pending
-
During the last run, some resources would be applied but your configuration management integration was configured to run in
noop
mode. - No changes
-
During the last run, nothing changed.
- No reports
-
This sub-status can map to either Warning or OK. It occurs when there are no reports for the host. If the host uses, for example, an associated configuration management proxy, or if the
always_show_configuration_status
setting is set totrue
, it maps to Warning.
- Error
-
This indicates an error during configuration, for example, a run failed to install a package.
- Out of sync
-
A configuration report was not received within the expected interval, based on the
outofsync_interval
. Reports are identified by an origin and can have different intervals based upon it. - No reports
-
When your host uses a configuration management system but orcharhino does not receive reports, it maps to Warning. Otherwise it is mapped to OK.
If you want to search for hosts according to their sub-status, use the syntax for searching in orcharhino that is outlined in the Searching and Bookmarking chapter of the Administering orcharhino guide, and then build your searches out using the following status-related examples:
You can search for hosts' configuration sub-statuses based on their last reported state.
For example, to find hosts that have at least one pending resource:
status.pending > 0
To find hosts that restarted some service during last run:
status.restarted > 0
To find hosts that have an interesting last run that might indicate something has happened:
status.interesting = true
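These sub-status queries also work with Hammer, for example:
# hammer host list --search "status.pending > 0"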
Synchronizing Template Repositories
In orcharhino, you can synchronize repositories of job templates, provisioning templates, report templates, and partition table templates between orcharhino Server and a version control system or local directory. In this chapter, a Git repository is used for demonstration purposes.
This section details the workflow for installing and configuring the TemplateSync plug-in and performing exporting and importing tasks.
Enabling the TemplateSync Plug-in
-
To enable the plug-in on your orcharhino Server, enter the following command:
# foreman-installer --enable-foreman-plugin-templates
-
To verify that the plug-in is installed correctly, ensure Administer > Settings includes the TemplateSync menu.
Configuring the TemplateSync Plug-in
In the orcharhino management UI, navigate to Administer > Settings > TemplateSync to configure the plug-in. The following table explains the behavior of the attributes. Note that some attributes are used only for importing or exporting tasks.
Parameter | API parameter name | Meaning on importing | Meaning on exporting |
---|---|---|---|
Associate | Accepted values: | Associates templates with OS, Organization, and Location based on metadata. | N/A |
Branch | | Specifies the default branch in the Git repository to read from. | Specifies the default branch in the Git repository to write to. |
Dirname | | Specifies the subdirectory under the repository to read from. | Specifies the subdirectory under the repository to write to. |
Filter | | Imports only templates with names that match this regular expression. | Exports only templates with names that match this regular expression. |
Force import | | Imported templates overwrite locked templates with the same name. | N/A |
Lock templates | | Do not overwrite existing templates when you import a new template with the same name, unless Force import is enabled. | N/A |
Metadata export mode | Accepted values: | N/A | Defines how metadata is handled when exporting. |
Negate | Accepted values: | Imports templates ignoring the filter attribute. | Exports templates ignoring the filter attribute. |
Prefix | | Adds the specified string to the beginning of the template name if the template name does not start with the prefix already. | N/A |
Repo | | Defines the path to the repository to synchronize from. | Defines the path to a repository to export to. |
Verbosity | Accepted values: | Enables writing verbose messages to the logs for this action. | N/A |
Importing and Exporting Templates
You can import and export templates using the orcharhino management UI, Hammer CLI, or orcharhino API. orcharhino API calls use the role-based access control system, which enables the tasks to be executed as any user. You can synchronize templates with a version control system, such as Git, or a local directory.
Importing Templates
You can import templates from a repository of your choice.
You can use different protocols to point to your repository, for example /tmp/dir
, git://example.com
, https://example.com
, and ssh://example.com
.
-
Each template must contain the location and organization that the template belongs to. This applies to all template types. Before you import a template, ensure that you add the following section to the template:
<%# kind: provision name: My_Provisioning_Template oses: - My_first_OS - My_second_OS locations: - My_first_Location - My_second_Location organizations: - My_first_Organization - My_second_Organization %>
-
In the orcharhino management UI, navigate to Hosts > Sync Templates.
-
Click Import.
-
Each field is populated with values configured in Administer > Settings > TemplateSync. Change the values as required for the templates you want to import. For more information about each field, see Configuring the TemplateSync Plug-in.
-
Click Submit.
The orcharhino management UI displays the status of the import. The status is not persistent; if you leave the status page, you cannot return to it.
-
To import a template from a repository, enter the following command:
$ hammer import-templates \ --branch "My_Branch" \ --filter '.*Template Name$' \ --organization "My_Organization" \ --prefix "[Custom Index] " \ --repo "https://git.example.com/path/to/repository"
For better indexing and management of your templates, use
--prefix
to set a category for your templates. To select certain templates from a large repository, use--filter
to define the title of the templates that you want to import. For example--filter '.*Ansible Default$'
imports various Ansible Default templates.
Exporting Templates
You can export templates to a version control server, such as a Git repository.
-
In the orcharhino management UI, navigate to Hosts > Sync Templates.
-
Click Export.
-
Each field is populated with values configured in Administer > Settings > TemplateSync. Change the values as required for the templates you want to export. For more information about each field, see Configuring the TemplateSync Plug-in.
-
Click Submit.
The orcharhino management UI displays the status of the export. The status is not persistent; if you leave the status page, you cannot return to it.
-
Clone a local copy of your Git repository:
$ git clone https://github.com/theforeman/community-templates /custom/templates
-
Change the owner of your local directory to the
foreman
user, and change the SELinux context with the following commands:# chown -R foreman:foreman /custom/templates # chcon -R -t httpd_sys_rw_content_t /custom/templates
-
To export the templates to your local repository, enter the following command:
hammer export-templates --organization 'Default Organization' --repo /custom/templates
When exporting templates, avoid temporary directories like
/tmp
or/var/tmp
because the backend service runs with systemd private temporary directories.
Synchronizing Templates Using the orcharhino API
-
Each template must contain the location and organization that the template belongs to. This applies to all template types. Before you import a template, ensure that you add the following section to the template:
<%# kind: provision name: My_Provisioning_Template oses: - My_first_OS - My_second_OS locations: - My_first_Location - My_second_Location organizations: - My_first_Organization - My_second_Organization %>
-
Configure a version control system that uses SSH authorization, for example gitosis, gitolite, or git daemon.
-
Configure the TemplateSync plug-in settings on a TemplateSync tab.
-
Change the Branch setting to match the target branch on a Git server.
-
Change the Repo setting to match the Git repository. For example, for the repository located in
git@git.example.com/templates.git
set the setting to ssh://git@git.example.com/templates.git
.
-
-
Accept Git SSH host key as the
foreman
user:# sudo -u foreman ssh git.example.com
You can see the
Permission denied, please try again.
message in the output, which is expected, because the SSH connection cannot succeed yet. -
Create an SSH key pair if you do not already have one. Do not specify a passphrase.
# sudo -u foreman ssh-keygen
-
Configure your version control server with the public key from your orcharhino, which resides in
/usr/share/foreman/.ssh/id_rsa.pub
. -
Export templates from your orcharhino Server to the version control repository specified in the TemplateSync menu:
$ curl -H "Accept:application/json,version=2" \ -H "Content-Type:application/json" \ -u login:password \ -k https://_orcharhino.example.com/api/v2/templates/export \ -X POST {"message":"Success"}
-
Import templates to orcharhino Server after their content was changed:
$ curl -H "Accept:application/json,version=2" \ -H "Content-Type:application/json" \ -u login:password \ -k https://_orcharhino.example.com/api/v2/templates/import \ -X POST {“message”:”Success”}
Note that templates provided by orcharhino are locked and you cannot import them by default. To override this behavior, change the Force import setting in the TemplateSync menu to yes, or add the force parameter -d '{ "force": "true" }' to the import command.
Synchronizing Templates with a Local Directory Using the orcharhino API
Synchronizing templates with a local directory is useful if you have configured a version control repository in the local directory. That way, you can edit templates and track the history of edits in the directory. You can also synchronize changes to orcharhino Server after editing the templates.
-
Each template must contain the location and organization that the template belongs to. This applies to all template types. Before you import a template, ensure that you add the following section to the template:
<%# kind: provision name: My_Provisioning_Template oses: - My_first_OS - My_second_OS locations: - My_first_Location - My_second_Location organizations: - My_first_Organization - My_second_Organization %>
-
Create the directory where templates are stored and apply appropriate permissions and SELinux context:
# mkdir -p /usr/share/templates_dir/ # chown foreman /usr/share/templates_dir/ # chcon -t httpd_sys_rw_content_t /usr/share/templates_dir/ -R
-
Change the Repo setting on the TemplateSync tab to match the export directory
/usr/share/templates_dir/
. -
Export templates from your orcharhino Server to a local directory:
$ curl -H "Accept:application/json,version=2" \ -H "Content-Type:application/json" \ -u login:password \ -k https://_orcharhino.example.com/api/v2/templates/export \ -X POST \ {"message":"Success"}
-
Import templates to orcharhino Server after their content was changed:
$ curl -H "Accept:application/json,version=2" \ -H "Content-Type:application/json" \ -u login:password \ -k https://_orcharhino.example.com/api/v2/templates/import \ -X POST {“message”:”Success”}
Note that templates provided by orcharhino are locked and you cannot import them by default. To override this behavior, change the Force import setting in the TemplateSync menu to yes, or add the force parameter -d '{ "force": "true" }' to the import command.
You can override default API settings by specifying them in the request with the $ curl -H "Accept:application/json,version=2" \ -H "Content-Type:application/json" \ -u login:password \ -k https://orcharhino.example.com/api/v2/templates/export \ -X POST \ -d "{\"repo\":\"git.example.com/templates\"}" |
Advanced Git Configuration
You can perform additional Git configuration for the TemplateSync plug-in using the command line or editing the .gitconfig
file.
If you are using self-signed certificate authentication on your Git server, validate the certificate with the git config http.sslCAPath
command.
For example, the following command verifies a self-signed certificate stored in /cert/cert.pem
:
# sudo -u foreman git config --global http.sslCAPath /cert/cert.pem
For a complete list of advanced options, see the git-config
manual page.
Uninstalling the Foreman Templates Plug-in
To avoid errors after removing the foreman_templates plugin:
-
Disable the plug-in using the orcharhino installer:
# foreman-installer --no-enable-foreman-plugin-templates
-
Clean custom data of the plug-in. The command does not affect any templates that you created.
# foreman-rake templates:cleanup
-
Uninstall the plug-in:
# yum remove foreman-plugin-templates
Template Writing Reference
Embedded Ruby (ERB) is a tool for generating text files based on templates that combine plain text with Ruby code. orcharhino uses ERB syntax in the following cases:
- Provisioning templates
-
For more information, see Creating Provisioning Templates in the Provisioning Guide.
- Remote execution job templates
-
For more information, see Configuring and Setting Up Remote Jobs.
- Report templates
-
For more information, see Using Report Templates to Monitor Hosts.
- Templates for partition tables
-
For more information, see Creating Partition Tables in the Provisioning Guide.
This section provides an overview of orcharhino-specific macros and variables that can be used in ERB templates along with some usage examples. Note that the default templates provided by orcharhino (Hosts > Provisioning templates, Hosts > Job templates, Monitor > Report Templates) also provide a good source of ERB syntax examples.
When provisioning a host or running a remote job, the code in the ERB is executed and the variables are replaced with the host specific values. This process is referred to as rendering. orcharhino Server has the safemode rendering option enabled by default, which prevents any harmful code being executed from templates.
Accessing the Template Writing Reference in the orcharhino management UI
You can access the template writing reference document in the orcharhino management UI.
-
Log in to the orcharhino management UI.
-
In the orcharhino management UI, navigate to Administer > About.
-
Click the Templates DSL link in the Support section.
Writing ERB Templates
The following tags are the most important and commonly used in ERB templates:
All Ruby code is enclosed within <% %>
in an ERB template.
The code is executed when the template is rendered.
It can contain Ruby control flow structures as well as orcharhino-specific macros and variables.
For example:
<% if @host.operatingsystem.family == "Redhat" && @host.operatingsystem.major.to_i > 6 -%> systemctl <%= input("action") %> <%= input("service") %> <% else -%> service <%= input("service") %> <%= input("action") %> <% end -%>
Note that this template silently performs an action with a service and returns no output.
The <%= %> tag provides the same functionality as <% %>, but when the template is executed, the code output is inserted into the template.
but when the template is executed, the code output is inserted into the template.
This is useful for variable substitution, for example:
Example input:
echo <%= @host.name %>
Example rendering:
host.example.com
Example input:
<% server_name = @host.fqdn %> <%= server_name %>
Example rendering:
host.example.com
Note that if you enter an incorrect variable, no output is returned. However, if you try to call a method on an incorrect variable, the following error message returns:
Example input:
<%= @example_incorrect_variable.fqdn -%>
Example rendering:
undefined method `fqdn' for nil:NilClass
By default, a newline character is inserted after a Ruby block if it is closed at the end of a line:
Example input:
<%= "line1" %> <%= "line2" %>
Example rendering:
line1
line2
To change the default behavior, modify the enclosing mark with -%>
:
Example input:
<%= "line1" -%> <%= "line2" %>
Example rendering:
line1line2
This is used to reduce the number of lines, where Ruby syntax permits, in rendered templates. White spaces in ERB tags are ignored.
An example of how this would be used in a report template to remove unnecessary newlines between a FQDN and IP address:
Example input:
<%= @host.fqdn -%>
<%= @host.ip -%>
Example rendering:
host.example.com10.10.181.216
The <%# %> tag encloses a comment that is ignored during template rendering:
Example input:
<%# A comment %>
This generates no output.
Because of the varying lengths of the ERB tags, indenting the ERB syntax might seem messy. ERB syntax ignores white space. One method of handling the indentation is to declare the ERB tag at the beginning of each new line and then use white space within the ERB tag to outline the relationships within the syntax, for example:
<%- load_hosts.each do |host| -%>
<%-   if host.build? %>
<%=     host.name %> build is in progress
<%-   end %>
<%- end %>
Troubleshooting ERB Templates
The orcharhino management UI provides two ways to verify the template rendering for a specific host:
-
Directly in the template editor – when editing a template (under Hosts > Partition tables, Hosts > Provisioning templates, or Hosts > Job templates), on the Template tab click Preview and select a host from the list. The template then renders in the text field using the selected host’s parameters. Preview failures can help to identify issues in your template.
-
At the host’s details page – select a host at Hosts > All hosts and click the Templates tab to list templates associated with the host. Select Review from the list next to the selected template to view its rendered version.
Generic orcharhino-Specific Macros
This section lists orcharhino-specific macros for ERB templates. You can use the macros listed in the following table across all kinds of templates.
Name | Description |
---|---|
indent(n) |
Indents the block of code by n spaces, useful when using a snippet template that is not indented. |
foreman_url(kind) |
Returns the full URL to host-rendered templates of the given kind.
For example, templates of the "provision" type usually reside at |
snippet(name) |
Renders the specified snippet template. Useful for nesting provisioning templates. |
snippets(file) |
Renders the specified snippet found in the Foreman database, attempts to load it from the |
snippet_if_exists(name) |
Renders the specified snippet, skips if no snippet with the specified name is found. |
Templates Macros
If you want to write custom templates, you can use some of the following macros. Depending on the template type, some of the following macros have different requirements.
For more information about the available macros for report templates, in the orcharhino management UI, navigate to Monitor > Report Templates, and click Create Template. In the Create Template window, click the Help tab.
For more information about the available macros for job templates, in the orcharhino management UI, navigate to Hosts > Job Templates, and click the New Job Template. In the New Job Template window, click the Help tab.
- input
-
Using the
input
macro, you can customize the input data that the template can work with. You can define the input name, type, and the options that are available for users. For report templates, you can only use user inputs. When you define a new input and save the template, you can then reference the input in the ERB syntax of the template body.
<%= input('cpus') %>
This loads the value from user input
cpus
. - load_hosts
-
Using the
load_hosts
macro, you can generate a complete list of hosts.
<%- load_hosts().each_record do |host| -%>
<%= host.name %>
<%- end -%>
Use the
load_hosts
macro with theeach_record
macro to load records in batches of 1000 to reduce memory consumption.If you want to filter the list of hosts for the report, you can add the option
search: input('Example_Host')
:
<% load_hosts(search: input('Example_Host')).each_record do |host| -%>
<%= host.name %>
<% end -%>
In this example, you first create an input that you then use to refine the search criteria that the
load_hosts
macro retrieves. - report_row
-
Using the
report_row
macro, you can create a formatted report for ease of analysis. Thereport_row
macro requires thereport_render
macro to generate the output.
Example input:
<%- load_hosts(search: input('Example_Host')).each_record do |host| -%>
<%- report_row( 'Server FQDN': host.name ) -%>
<%- end -%>
<%= report_render -%>
Example rendering:
Server FQDN
host1.example.com
host2.example.com
host3.example.com
host4.example.com
host5.example.com
host6.example.com
You can add extra columns to the report by adding another header. The following example adds IP addresses to the report:
Example input:
<%- load_hosts(search: input('host')).each_record do |host| -%>
<%- report_row( 'Server FQDN': host.name, 'IP': host.ip ) -%>
<%- end -%>
<%= report_render -%>
Example rendering:
Server FQDN,IP
host1.example.com,10.8.30.228
host2.example.com,10.8.30.227
host3.example.com,10.8.30.226
host4.example.com,10.8.30.225
host5.example.com,10.8.30.224
host6.example.com,10.8.30.223
- report_render
-
This macro is available only for report templates.
Using the
report_render
macro, you create the output for the report. During the template rendering process, you can select the format that you want for the report. YAML, JSON, HTML, and CSV formats are supported.<%= report_render -%>
- render_template()
-
This macro is available only for job templates.
Using this macro, you can render a specific template. You can also enable and define arguments that you want to pass to the template.
Host-Specific Variables
The following variables enable using host data within templates.
Note that job templates accept only @host
variables.
Name | Description |
---|---|
@host.architecture |
The architecture of the host. |
@host.bond_interfaces |
Returns an array of all bonded interfaces. See Parsing Arrays. |
@host.capabilities |
The method of system provisioning, can be either build (for example kickstart) or image. |
@host.certname |
The SSL certificate name of the host. |
@host.diskLayout |
The disk layout of the host. Can be inherited from the operating system. |
@host.domain |
The domain of the host. |
@host.environment |
The Puppet environment of the host. |
@host.facts |
Returns a Ruby hash of facts from Facter. For example to access the 'ipaddress' fact from the output, specify @host.facts['ipaddress']. |
@host.grub_pass |
Returns the host’s bootloader password. |
@host.hostgroup |
The host group of the host. |
host_enc['parameters'] |
Returns a Ruby hash containing information on host parameters. For example, use host_enc['parameters']['lifecycle_environment'] to get the life cycle environment of a host. |
@host.image_build? |
Returns |
@host.interfaces |
Contains an array of all available host interfaces including the primary interface. See Parsing Arrays. |
@host.interfaces_with_identifier('IDs') |
Returns array of interfaces with given identifier. You can pass an array of multiple identifiers as an input, for example @host.interfaces_with_identifier(['eth0', 'eth1']). See Parsing Arrays. |
@host.ip |
The IP address of the host. |
@host.location |
The location of the host. |
@host.mac |
The MAC address of the host. |
@host.managed_interfaces |
Returns an array of managed interfaces (excluding BMC and bonded interfaces). See Parsing Arrays. |
@host.medium |
The assigned operating system installation medium. |
@host.name |
The full name of the host. |
@host.operatingsystem.family |
The operating system family. |
@host.operatingsystem.major |
The major version number of the assigned operating system. |
@host.operatingsystem.minor |
The minor version number of the assigned operating system. |
@host.operatingsystem.name |
The assigned operating system name. |
@host.operatingsystem.boot_files_uri(medium_provider) |
Full path to the kernel and initrd, returns an array. |
@host.os.medium_uri(@host) |
The URI used for provisioning (path configured in installation media). |
host_param('parameter_name') |
Returns the value of the specified host parameter. |
host_param_false?('parameter_name') |
Returns |
host_param_true?('parameter_name') |
Returns |
@host.primary_interface |
Returns the primary interface of the host. |
@host.provider |
The compute resource provider. |
@host.provision_interface |
Returns the provisioning interface of the host. Returns an interface object. |
@host.ptable |
The partition table name. |
@host.puppet_ca_server |
The Puppet CA server the host must use. |
@host.puppetmaster |
The Puppet server the host must use. |
@host.pxe_build? |
Returns |
@host.shortname |
The short name of the host. |
@host.sp_ip |
The IP address of the BMC interface. |
@host.sp_mac |
The MAC address of the BMC interface. |
@host.sp_name |
The name of the BMC interface. |
@host.sp_subnet |
The subnet of the BMC network. |
@host.subnet.dhcp |
Returns |
@host.subnet.dns_primary |
The primary DNS server of the host. |
@host.subnet.dns_secondary |
The secondary DNS server of the host. |
@host.subnet.gateway |
The gateway of the host. |
@host.subnet.mask |
The subnet mask of the host. |
@host.url_for_boot(:initrd) |
Full path to the initrd image associated with this host. Not recommended, as it does not interpolate variables. |
@host.url_for_boot(:kernel) |
Full path to the kernel associated with this host. Not recommended, as it does not interpolate variables, prefer boot_files_uri. |
@provisioning_type |
Equals to 'host' or 'hostgroup' depending on type of provisioning. |
@static |
Returns |
@template_name |
Name of the template being rendered. |
grub_pass |
Returns a bootloader argument to set the encrypted bootloader password, such as --md5pass=#{@host.grub_pass}. |
ks_console |
Returns a string assembled using the port and the baud rate of the host which can be added to a kernel line. For example console=ttyS1,9600. |
root_pass |
Returns the root password configured for the system. |
The majority of common Ruby methods can be applied on host-specific variables. For example, to extract the last segment of the host’s IP address, you can use:
<% @host.ip.split('.').last %>
Kickstart-Specific Variables
The following variables are designed to be used within kickstart provisioning templates.
Name | Description |
---|---|
@arch |
The host architecture name, same as @host.architecture.name. |
@dynamic |
Returns |
@epel |
A command which will automatically install the correct version of the epel-release rpm. Use in a %post script. |
@mediapath |
The full kickstart line to provide the URL command. |
@osver |
The operating system major version number, same as @host.operatingsystem.major. |
Conditional Statements
In your templates, you might perform different actions depending on which value exists. To achieve this, you can use conditional statements in your ERB syntax.
In the following example, the ERB syntax searches for a specific host name and returns an output depending on the value it finds:
<% load_hosts().each_record do |host| -%>
<% if host.name == "host1.example.com" -%>
<% result="positive" -%>
<% else -%>
<% result="negative" -%>
<% end -%>
<%= result -%>
<% end -%>
host1.example.com positive
Parsing Arrays
While writing or modifying templates, you might encounter variables that return arrays.
For example, host variables related to network interfaces, such as @host.interfaces
or @host.bond_interfaces
, return interface data grouped in an array.
To extract a parameter value of a specific interface, use Ruby methods to parse the array.
The following procedure is an example that you can use to find the relevant methods to parse arrays in your template. In this example, a report template is used, but the steps are applicable to other templates.
-
To retrieve the NIC of a content host, in this example, use the
@host.interfaces
variable. It returns class values that you can then use to find methods to parse the array.
Example input:
<%= @host.interfaces -%>
Example rendering:
<Nic::Base::ActiveRecord_Associations_CollectionProxy:0x00007f734036fbe0>
-
In the Create Template window, click the Help tab and search for the ActiveRecord_Associations_CollectionProxy and Nic::Base classes.
-
For ActiveRecord_Associations_CollectionProxy, in the Allowed methods or members column, you can view the following methods to parse the array:
[] each find_in_batches first map size to_a
-
For Nic::Base, in the Allowed methods or members column, you can view the following methods to parse the array:
alias? attached_devices attached_devices_identifiers attached_to bond_options children_mac_addresses domain fqdn identifier inheriting_mac ip ip6 link mac managed? mode mtu nic_delay physical? primary provision shortname subnet subnet6 tag virtual? vlanid
-
To iterate through an interface array, add the relevant methods to the ERB syntax:
Example input:
<% load_hosts().each_record do |host| -%>
<%= host.name %>
<% host.interfaces.each do |iface| -%>
iface.alias?: <%= iface.alias? %>
iface.attached_to: <%= iface.attached_to %>
iface.bond_options: <%= iface.bond_options %>
iface.children_mac_addresses: <%= iface.children_mac_addresses %>
iface.domain: <%= iface.domain %>
iface.fqdn: <%= iface.fqdn %>
iface.identifier: <%= iface.identifier %>
iface.inheriting_mac: <%= iface.inheriting_mac %>
iface.ip: <%= iface.ip %>
iface.ip6: <%= iface.ip6 %>
iface.link: <%= iface.link %>
iface.mac: <%= iface.mac %>
iface.managed?: <%= iface.managed? %>
iface.mode: <%= iface.mode %>
iface.mtu: <%= iface.mtu %>
iface.physical?: <%= iface.physical? %>
iface.primary: <%= iface.primary %>
iface.provision: <%= iface.provision %>
iface.shortname: <%= iface.shortname %>
iface.subnet: <%= iface.subnet %>
iface.subnet6: <%= iface.subnet6 %>
iface.tag: <%= iface.tag %>
iface.virtual?: <%= iface.virtual? %>
iface.vlanid: <%= iface.vlanid %>
<%- end -%>
<%- end -%>
Example rendering:
host1.example.com
iface.alias?: false
iface.attached_to:
iface.bond_options:
iface.children_mac_addresses: []
iface.domain:
iface.fqdn: host1.example.com
iface.identifier: ens192
iface.inheriting_mac: 00:50:56:8d:4c:cf
iface.ip: 10.10.181.13
iface.ip6:
iface.link: true
iface.mac: 00:50:56:8d:4c:cf
iface.managed?: true
iface.mode: balance-rr
iface.mtu:
iface.physical?: true
iface.primary: true
iface.provision: true
iface.shortname: host1.example.com
iface.subnet:
iface.subnet6:
iface.tag:
iface.virtual?: false
iface.vlanid:
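If you only need a few values instead of a full dump, you can combine the allowed methods listed above. The following sketch, suitable for a template where @host is set, prints the identifier and IP address of each interface of the host; each, identifier, and ip are all among the allowed methods shown in the Help tab:
<% @host.interfaces.each do |iface| -%>
<%= iface.identifier %>: <%= iface.ip %>
<% end -%>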
Example Template Snippets
The following example checks whether Puppet and the Puppetlabs repository are enabled for the host:
<%
  pm_set = @host.puppetmaster.empty? ? false : true
  puppet_enabled = pm_set || host_param_true?('force-puppet')
  puppetlabs_enabled = host_param_true?('enable-puppetlabs-repo')
%>
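As a follow-up sketch, these variables might then be used later in the same template, for example to log the outcome in a %post script; the log file path is only an example:
<% if puppet_enabled -%>
echo "Puppet is enabled for <%= @host.name %>" >> /root/provision-info.log
<% end -%>
<% if puppetlabs_enabled -%>
echo "The Puppetlabs repository was requested via a host parameter" >> /root/provision-info.log
<% end -%>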
The following example shows how to capture the major and minor version of the host’s operating system, which can be used for package-related decisions:
<%
  os_major = @host.operatingsystem.major.to_i
  os_minor = @host.operatingsystem.minor.to_i
%>

<% if ((os_minor < 2) && (os_major < 14)) -%>
...
<% end -%>
The following example imports the subscription_manager_registration snippet to the template and indents it by four spaces:
<%= indent 4 do
snippet 'subscription_manager_registration'
end %>
The following example imports the kickstart_networking_setup
snippet if the host’s subnet has the DHCP boot mode enabled:
<% subnet = @host.subnet %>
<% if subnet.respond_to?(:dhcp_boot_mode?) -%>
<%= snippet 'kickstart_networking_setup' %>
<% end -%>
You can use the host.facts
variable to parse values from a host’s facts and custom facts.
In this example, luks_stat
is a custom fact that you can parse in the same manner as dmi::system::serial_number
, which is a host fact:
'Serial': host.facts['dmi::system::serial_number'], 'Encrypted': host.facts['luks_stat'],
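Put into context, a minimal report template sketch using these fields might look as follows; the luks_stat custom fact must exist on your hosts for the Encrypted column to contain data:
<%- load_hosts().each_record do |host| -%>
<%- report_row(
      'Host': host.name,
      'Serial': host.facts['dmi::system::serial_number'],
      'Encrypted': host.facts['luks_stat'],
) -%>
<%- end -%>
<%= report_render -%>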
In this example, you can customize the Applicable Errata report template to parse for custom information about the kernel version of each host:
<%- report_row(
      'Host': host.name,
      'Operating System': host.operatingsystem,
      'Kernel': host.facts['uname::release'],
      'Environment': host.lifecycle_environment,
      'Erratum': erratum.errata_id,
      'Type': erratum.errata_type,
      'Published': erratum.issued,
      'Applicable since': erratum.created_at,
      'Severity': erratum.severity,
      'Packages': erratum.package_names,
      'CVEs': erratum.cves,
      'Reboot suggested': erratum.reboot_suggested,
) -%>
Job Template Examples and Extensions
Use this section as a reference to help modify, customize, and extend your job templates to suit your requirements.
Customizing Job Templates
When creating a job template, you can include an existing template in the template editor field. This way you can combine templates, or create more specific templates from the general ones.
The following template combines default templates to install and start the httpd service on Red Hat Enterprise Linux systems:
<%= render_template 'Package Action - SSH Default', :action => 'install', :package => 'httpd' %>
<%= render_template 'Service Action - SSH Default', :action => 'start', :service_name => 'httpd' %>
The above template specifies parameter values for the rendered template directly. It is also possible to use the input() method to allow users to define input for the rendered template on job execution. For example, you can use the following syntax:
<%= render_template 'Package Action - SSH Default', :action => 'install', :package => input("package") %>
With the above template, you have to import the parameter definition from the rendered template. To do so, navigate to the Jobs tab, click Add Foreign Input Set, and select the rendered template from the Target template list. You can import all parameters or specify a comma-separated list.
Default Job Template Categories
Job template category | Description |
---|---|
Packages | Templates for performing package-related actions. Install, update, and remove actions are included by default. |
Puppet | Templates for executing Puppet runs on target hosts. |
Power | Templates for performing power-related actions. Restart and shutdown actions are included by default. |
Commands | Templates for executing custom commands on remote hosts. |
Services | Templates for performing service-related actions. Start, stop, restart, and status actions are included by default. |
Katello | Templates for performing content-related actions. These templates are used mainly from different parts of the orcharhino management UI (for example, the bulk actions UI for content hosts), but can also be used separately to perform operations such as errata installation. |
Example restorecon Template
This example shows how to create a template called Run Command - restorecon that restores the default SELinux context for all files in the selected directory on target hosts.
-
In the orcharhino management UI, navigate to Hosts > Job templates. Click New Job Template.
-
Enter Run Command - restorecon in the Name field. Select Default to make the template available to all organizations. Add the following text to the template editor:
restorecon -RvF <%= input("directory") %>
The
<%= input("directory") %>
string is replaced by a user-defined directory during job invocation.
-
On the Job tab, set Job category to
Commands
.
-
Click Add Input to allow job customization. Enter
directory
to the Name field. The input name must match the value specified in the template editor.
-
Click Required so that the command cannot be executed without the user-specified parameter.
-
Select User input from the Input type list. Enter a description to be shown during job invocation, for example
Target directory for restorecon
.
-
Click Submit.
See Executing a restorecon Template on Multiple Hosts for information on how to execute a job based on this template.
Rendering a restorecon Template
This example shows how to create a template derived from the Run Command - restorecon template created in Example restorecon Template. This template does not require user input on job execution; it restores the SELinux context of all files under the /home/ directory on target hosts.
Create a new template as described in Setting up Job Templates, and specify the following string in the template editor:
<%= render_template("Run Command - restorecon", :directory => "/home") %>
Executing a restorecon Template on Multiple Hosts
This example shows how to run a job based on the template created in Example restorecon Template on multiple hosts. The job restores the SELinux context in all files under the /home/ directory.
-
In the orcharhino management UI, navigate to Hosts > All hosts and select target hosts. Select Schedule Remote Job from the Select Action list.
-
In the Job invocation page, select the
Commands
job category and the Run Command - restorecon
job template.
-
Type
/home
in the directory field.
-
Set Schedule to
Execute now
.
-
Click Submit. You are taken to the Job invocation page where you can monitor the status of job execution.
Including Power Actions in Templates
This example shows how to set up a job template for performing power actions, such as reboot. This procedure prevents orcharhino from interpreting the disconnect exception upon reboot as an error, so that remote execution of the job works correctly.
Create a new template as described in Setting up Job Templates, and specify the following string in the template editor:
<%= render_template("Power Action - SSH Default", :action => "restart") %>
The text and illustrations on this page are licensed by ATIX AG under a Creative Commons Attribution–Share Alike 3.0 Unported ("CC-BY-SA") license. This page also contains text from the official Foreman documentation which uses the same license ("CC-BY-SA"). |