Configuring Hosts Using Ansible
Ansible is an automation engine and configuration management tool. It works without a client or daemon, relying solely on Python and SSH. Ansible consists of a control node, for example a notebook, a workstation, or a server, and managed nodes, that is, the hosts in its inventory. You can use Ansible to configure hosts similarly to Puppet and Salt.
ATIX offers Ansible trainings for beginners and advanced users on how to use Ansible as a configuration management tool. This helps you manage your infrastructure more efficiently using Ansible roles. The trainings cover how to create, use, and maintain Ansible roles, inventories, and playbooks based on best practices. Refer to the Ansible trainings website for more information or contact us.
The Ansible plug-in for configuration management relies on Ansible roles, which are assigned to hosts or host groups. If you want to use Ansible playbooks, have a look at the application centric deployment (ACD) plug-in. ACD helps you deploy and configure multi-host applications using an Ansible playbook and an application definition. For more information, see Application Centric Deployment.
If you want to integrate orcharhino with AWX, Ansible Automation Platform, or Oracle Linux Automation Manager, have a look at Integrating orcharhino and AWX, Ansible Automation Platform, or Oracle Linux Automation Manager.
Getting started with Ansible in orcharhino
Use this guide to configure orcharhino to use Ansible for remote execution.
Supported Ansible versions
orcharhino uses Ansible as provided by the base operating system of orcharhino Server or any orcharhino Proxies for remote execution. Therefore, the supported version of Ansible depends on your base OS configuration.
Configuring your orcharhino to run Ansible roles
In orcharhino, you can import Ansible roles to help with automation of routine tasks. To enable Ansible on orcharhino Server, see Enabling Ansible integration with orcharhino.
orcharhino imports and runs Ansible roles from the following paths:
-
/etc/ansible/roles
-
/usr/share/ansible/roles
-
/etc/ansible/collections
-
/usr/share/ansible/collections
Roles and collections from installed packages are placed under /usr/share/ansible.
If you want to add custom roles or collections, place them under /etc/ansible.
The paths are configured by orcharhino.
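For illustration, the following commands stage a minimal custom role in a scratch directory; the role name my_custom_role and the staging directory are hypothetical, and on orcharhino Server you would create the same structure directly under /etc/ansible/roles:

```shell
# Stage a minimal role skeleton; the scratch directory stands in
# for /etc/ansible/roles on a real orcharhino Server.
ROLES_DIR="${TMPDIR:-/tmp}/demo_ansible_roles"
mkdir -p "$ROLES_DIR/my_custom_role/tasks"
cat > "$ROLES_DIR/my_custom_role/tasks/main.yml" <<'EOF'
---
# Entry point of the role; replace with your own tasks
- name: Ensure a marker file exists
  ansible.builtin.file:
    path: /tmp/my_custom_role.marker
    state: touch
EOF
find "$ROLES_DIR" -type f
```

A role placed in this layout becomes usable by orcharhino only after you import it, as described in Importing Ansible roles and variables.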
-
Add the roles to a directory in an Ansible path on orcharhino Server and on all orcharhino Proxies from where you want to use the roles. If you want to use custom or third-party Ansible roles, ensure that you configure an external version control system to synchronize roles between orcharhino Server and orcharhino Proxies.
-
On all orcharhino Proxies that you want to use to run Ansible roles on hosts, enable the Ansible plugin:
# orcharhino-installer \
  --enable-foreman-proxy-plugin-ansible
-
Distribute SSH keys to enable orcharhino Proxies to connect to hosts using SSH. For more information, see Distributing SSH Keys for Remote Execution in Managing Hosts. orcharhino runs Ansible roles the same way it runs remote execution jobs.
-
Import the Ansible roles into orcharhino.
-
Proceed to Using Ansible Roles to Automate Repetitive Tasks on Clients.
Enabling Ansible integration with orcharhino
Perform the following procedure to enable the Ansible plugin on your orcharhino Server.
-
Enable the Ansible plugin on your orcharhino Server:
# orcharhino-installer \
  --enable-foreman-plugin-ansible \
  --enable-foreman-proxy-plugin-ansible
Importing Ansible roles and variables
You can import Ansible roles and variables from the Ansible paths on orcharhino Server or on orcharhino Proxies that have Ansible enabled.
-
Ensure that the roles and variables that you import are located in the Ansible paths on all orcharhino Proxies from where you want to use the roles.
-
In the orcharhino management UI, navigate to Configure > Ansible > Roles.
-
Click Import to select the orcharhino Proxy from which you want to import.
-
Select the roles that you want to import.
-
Click Submit.
Overriding Ansible variables in orcharhino
If you run Ansible roles in orcharhino, you can use orcharhino to override Ansible variables for those roles.
The following procedure refers to hosts and host groups. For more information, see Managing Hosts.
If you use an Ansible role to run a task as a user that is not the Effective User, there is a strict order of precedence for overriding Ansible variables.
To ensure the variable that you override follows the correct order of precedence, see Variable precedence: Where should I put a variable?
-
You must have Ansible variables in orcharhino. For more information, see Importing Ansible Roles and Variables.
-
To use overridden Ansible variables, a user must have a role that allows them to see the attributes that are matched against hosts.
-
In the orcharhino management UI, navigate to Configure > Ansible > Variables.
-
Select the Ansible variable that you want to override and manage with orcharhino.
-
In the Default Behavior area, select the Override checkbox.
-
In the Parameter Type field, select the value type for validation such as string or boolean. The types array and hash have further options for handling upon a variable match. For more information, see the Prioritize Attribute Order area below.
-
In the Default Value field, enter the default value that you want to use if there is no match for the variable.
-
Optional: If you do not want to display the value of the variable as plain text in the orcharhino management UI, select the Hidden Value checkbox to display the content of the variable as asterisks. This is useful for sensitive values such as passwords or secret tokens.
-
Optional: Expand the Optional Input Validator area and specify conditions that will be used to validate concrete values of the variable:
-
Select Required if you want to enforce users to fill in this variable.
-
In the Validator Type field, select how the value will be validated:
-
list – The value will be validated against an enumeration of allowed values.
-
regex – The value will be validated against a regular expression pattern.
-
Optional: In the Prioritize Attribute Order area, specify the order of priority to match an override with a host by host attributes. Order at the top takes higher precedence. The first match wins.
You can combine multiple attributes into a single matcher key using a comma as the AND operation. For example, the matcher key of hostgroup, environment would expect matchers such as hostgroup = "web servers" AND environment = production.
If you use the parameter type array or hash, you can further set:
-
Merge Overrides – Merges members of the arrays/hashes instead of replacing the whole array or hash. If the hashes contain the same key, the value is overwritten by the value of the host.
-
Merge Default – Adds the default value to the array or hash.
-
Avoid Duplicates – Ensures that the values in the array or hash are unique.
-
Optional: Expand the Specify Matchers area and specify criteria for selecting the hosts on which the variable override applies.
-
To save the override settings, click Submit.
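As an illustration of the precedence rules above, the following sketch (hypothetical code, not orcharhino's implementation) resolves an overridden variable: matchers are tried in attribute-order priority, the first match wins, and for array types the merge options change the result:

```python
# Hypothetical sketch of override resolution; not orcharhino source code.
def resolve(host, matchers, default, merge_overrides=False,
            merge_default=False, avoid_duplicates=False):
    """Resolve a variable for `host`.

    `matchers` is an ordered list of (key, expected_value, override) tuples.
    A key may combine attributes with a comma, which acts as AND,
    for example "hostgroup,environment".
    """
    matched = []
    for key, expected, override in matchers:
        attrs = [a.strip() for a in key.split(",")]
        wanted = [w.strip() for w in expected.split(",")]
        if all(host.get(a) == w for a, w in zip(attrs, wanted)):
            matched.append(override)
            if not merge_overrides:
                break  # first match wins
    if not matched:
        return default
    if not merge_overrides:
        return matched[0]
    # Array semantics: merge members instead of replacing the whole array.
    result = [item for override in matched for item in override]
    if merge_default:
        result += default          # Merge Default: append the default value
    if avoid_duplicates:           # Avoid Duplicates: keep values unique
        result = list(dict.fromkeys(result))
    return result

host = {"hostgroup": "web servers", "environment": "production"}
matchers = [
    ("hostgroup,environment", "web servers,production", ["nginx"]),
    ("hostgroup", "web servers", ["httpd", "nginx"]),
]
print(resolve(host, matchers, default=["vim"]))
# → ['nginx'] (first match wins)
print(resolve(host, matchers, default=["vim"], merge_overrides=True,
              merge_default=True, avoid_duplicates=True))
# → ['nginx', 'httpd', 'vim']
```

The second call shows how Merge Overrides, Merge Default, and Avoid Duplicates combine all matching arrays with the default value while keeping entries unique.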
To use the Ansible variable, add the variable as a parameter to your host or host group, or add the variable as a global parameter.
-
In the orcharhino management UI, navigate to Hosts > All Hosts and select the host that you want to use.
-
Click the Ansible tab, and in the Variables area, click the pencil icon to edit the value of the variable.
-
Click the tick icon to accept the value of the changed variable or the cross icon to cancel the change.
-
In the orcharhino management UI, navigate to Configure > Host Groups, and select the host group that you want to use.
-
Click the Parameters tab, and in the Host Group Parameters area, click Add Parameter.
-
In the Name field, add the Ansible variable name.
-
From the Type list, select the type of the variable for validation.
-
In the Value field, enter the value for the variable.
-
In the orcharhino management UI, navigate to Configure > Global Parameters, and click Create Parameter.
-
In the Name field, add the Ansible variable name.
-
From the Type list, select the type of the variable for validation.
-
In the Value field, enter the value for the variable.
-
Optional: If you do not want to display the Ansible variable in plain text, select the Hidden Values checkbox to display the content of the variable as asterisks in the orcharhino management UI.
Synchronizing Ansible Collections
On orcharhino, you can synchronize your Ansible Collections from Ansible Galaxy endpoints and other orcharhino instances. After the synchronization, Ansible Collections appear in the orcharhino management UI as a new repository type under Content.
-
In the orcharhino management UI, navigate to Content > Products.
-
In the Products window, select the name of a product that you want to create a repository for.
-
Click the Repositories tab, and then click New Repository.
-
In the Name field, enter a name for the repository.
The Label field is populated automatically based on the name.
-
From the Type list, select ansible collection.
-
In the Upstream URL field, enter the URL for the upstream collections repository.
The URL can be any Ansible Galaxy endpoint. For example, https://galaxy.ansible.com.
-
Optional: In the Requirements.yml field, you can specify the list of collections you want to sync from the endpoint, as well as their versions.
If you do not specify the list of collections, everything from the endpoint will be synced.
---
collections:
  - name: my_namespace.my_collection
    version: 1.2.3
For more information, see Installing roles and collections from the same requirements.yml file in the Galaxy User Guide.
-
Authenticate.
-
To sync orcharhino from Private Automation Hub, enter your token in the Auth Token field.
-
To sync orcharhino from console.redhat.com, enter your token in the Auth Token field and enter your SSO URL in the Auth URL field.
-
To sync orcharhino from orcharhino, leave both authentication fields blank.
-
Click Save.
-
Navigate to the Ansible Collections repository.
-
From the Select Action menu, select Sync Now.
Customizing Ansible configuration
orcharhino manages essential Ansible configuration that is required for Ansible integration with orcharhino. However, you can customize other Ansible configuration options as usual.
orcharhino stores the essential Ansible configuration as environment variables in /etc/foreman-proxy/ansible.env. This file is managed by orcharhino-installer.
Ansible reads configuration from a configuration file and the environment provided by orcharhino Proxy.
If you need to customize Ansible configuration, you can do so in the system-wide /etc/ansible/ansible.cfg file or in the /usr/share/foreman-proxy/.ansible.cfg file in the home directory of the foreman-proxy user.
Note that if you use /usr/share/foreman-proxy/.ansible.cfg, Ansible spawned by orcharhino ignores configuration in /etc/ansible/ansible.cfg.
Note that environment variables take precedence over values in ansible.cfg, which ensures that the essential configuration required by orcharhino is retained.
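The precedence rule can be sketched as follows. This is an illustration with an assumed variable mapping, not how Ansible itself is implemented: the value from ansible.cfg is used unless the corresponding ANSIBLE_* environment variable is set:

```python
# Sketch of environment-over-file precedence; illustrative only.
import configparser

# Assumed mapping of environment variables to [defaults] keys
ENV_TO_KEY = {
    "ANSIBLE_ROLES_PATH": "roles_path",
    "ANSIBLE_COLLECTIONS_PATHS": "collections_paths",
}

def effective_value(cfg_text, key, environ):
    """Return the value Ansible would see for `key` in [defaults]."""
    parser = configparser.ConfigParser()
    parser.read_string(cfg_text)
    value = parser.get("defaults", key, fallback=None)
    for env_name, cfg_key in ENV_TO_KEY.items():
        if cfg_key == key and env_name in environ:
            value = environ[env_name]  # the environment wins
    return value

cfg = "[defaults]\nroles_path = /custom/roles\n"
print(effective_value(cfg, "roles_path", {}))
# → /custom/roles
print(effective_value(cfg, "roles_path",
                      {"ANSIBLE_ROLES_PATH": "/etc/ansible/roles"}))
# → /etc/ansible/roles
```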
The following table lists the essential Ansible configuration options managed by orcharhino.
Environment Variable | Config Key | Description |
---|---|---|
ANSIBLE_CALLBACKS_ENABLED | callbacks_enabled | Enables callback to orcharhino; equivalent to ANSIBLE_CALLBACK_WHITELIST |
ANSIBLE_CALLBACK_WHITELIST | callback_whitelist | Enables callback to orcharhino; equivalent to ANSIBLE_CALLBACKS_ENABLED |
ANSIBLE_COLLECTIONS_PATHS | collections_paths | List of paths to Ansible collections |
ANSIBLE_HOST_KEY_CHECKING | host_key_checking | Disables checking of host keys during SSH connection |
ANSIBLE_LOCAL_TEMP | local_tmp | Temporary directory on orcharhino Proxy |
ANSIBLE_ROLES_PATH | roles_path | List of paths to Ansible roles |
ANSIBLE_SSH_ARGS | ssh_args | Arguments passed to SSH connection |
Using Ansible Vault with orcharhino
You can encrypt sensitive Ansible data files using the Ansible Vault tool and configure Ansible to access the encrypted files using a password stored in a file.
-
If you customized /etc/ansible/ansible.cfg, copy your configuration from /etc/ansible/ansible.cfg to /usr/share/foreman-proxy/.ansible.cfg.
-
Encrypt the sensitive file using the ansible-vault command:
# ansible-vault encrypt /etc/ansible/roles/Role_Name/vars/main.yml
Note that ansible-vault changes the file permissions to 600.
-
Change the group and permissions of the encrypted file to ensure that the foreman-proxy user can read it:
# chgrp foreman-proxy /etc/ansible/roles/Role_Name/vars/main.yml
# chmod 0640 /etc/ansible/roles/Role_Name/vars/main.yml
-
Create the /usr/share/foreman-proxy/.ansible_vault_password file and enter the Vault password into it.
-
Change the user and permissions of the .ansible_vault_password file to ensure that only the foreman-proxy user can read it:
# chown foreman-proxy:foreman-proxy /usr/share/foreman-proxy/.ansible_vault_password
# chmod 0400 /usr/share/foreman-proxy/.ansible_vault_password
-
Add the path of the Vault password file to the [defaults] section in /usr/share/foreman-proxy/.ansible.cfg:
[defaults]
vault_password_file = /usr/share/foreman-proxy/.ansible_vault_password
The path to the Vault password file must be absolute.
Using Ansible roles to automate repetitive tasks on clients
Assigning Ansible roles to an existing host
You can use Ansible roles for remote management of orcharhino clients.
-
Ensure that you have configured and imported Ansible roles.
-
In the orcharhino management UI, navigate to Hosts > All Hosts.
-
Select the host and click Edit.
-
On the Ansible Roles tab, select the role that you want to add from the Available Ansible Roles list.
-
Click the + icon to add the role to the host. You can add more than one role.
-
Click Submit.
After you assign Ansible roles to hosts, you can use Ansible for remote execution. For more information, see Distributing SSH Keys for Remote Execution.
On the Parameters tab, click Add Parameter to add any parameter variables that you want to pass to job templates at run time. This includes all Ansible playbook parameters and host parameters that you want to associate with the host. To use a parameter variable with an Ansible job template, you must add a Host Parameter.
Removing Ansible roles from a host
Use the following procedure to remove Ansible roles from a host.
-
In the orcharhino management UI, navigate to Hosts > All Hosts.
-
Select the host and click Edit.
-
Select the Ansible Roles tab.
-
In the Assigned Ansible Roles area, click the - icon to remove the role from the host. Repeat to remove more roles.
-
Click Submit.
Changing the order of Ansible roles
Use the following procedure to change the order of Ansible roles applied to a host.
-
In the orcharhino management UI, navigate to Hosts > All Hosts.
-
Select a host.
-
Select the Ansible Roles tab.
-
In the Assigned Ansible Roles area, you can change the order of the roles by dragging and dropping the roles into the preferred position.
-
Click Submit to save the order of the Ansible roles.
Running Ansible roles on a host
You can run Ansible roles on a host through the orcharhino management UI.
-
You must configure your deployment to run Ansible roles. For more information, see Configuring your orcharhino to run Ansible roles.
-
You must have assigned the Ansible roles to the host.
-
In the orcharhino management UI, navigate to Hosts > All Hosts.
-
Select the checkbox of the host that contains the Ansible role you want to run.
-
From the Select Action list, select Run all Ansible roles.
You can view the status of your Ansible job on the Run Ansible roles page. To rerun a job, click Rerun.
Assigning Ansible roles to a host group
You can use Ansible roles for remote management of orcharhino clients.
-
You must configure your deployment to run Ansible roles. For more information, see Configuring your orcharhino to run Ansible roles.
-
In the orcharhino management UI, navigate to Configure > Host Groups.
-
Click the host group name to which you want to assign an Ansible role.
-
On the Ansible Roles tab, select the role that you want to add from the Available Ansible Roles list.
-
Click the + icon to add the role to the host group. You can add more than one role.
-
Click Submit.
Running Ansible roles on a host group
You can run Ansible roles on a host group through the orcharhino management UI.
-
You must configure your deployment to run Ansible roles. For more information, see Configuring your orcharhino to run Ansible roles.
-
You must have assigned the Ansible roles to the host group.
-
You must have at least one host in your host group.
-
In the orcharhino management UI, navigate to Configure > Host Groups.
-
From the list in the Actions column for the host group, select Run all Ansible roles.
You can view the status of your Ansible job on the Run Ansible roles page. Click Rerun to rerun a job.
Running Ansible roles in check mode
You can run Ansible roles in check mode through the orcharhino management UI.
-
You must configure your deployment to run Ansible roles. For more information, see Configuring your orcharhino to run Ansible roles.
-
You must have assigned the Ansible roles to the host group.
-
You must have at least one host in your host group.
-
In the orcharhino management UI, navigate to Hosts > All Hosts.
-
Click Edit for the host you want to enable check mode for.
-
In the Parameters tab, ensure that the host has a parameter named ansible_roles_check_mode with type boolean set to true.
-
Click Submit.
Running an Ansible playbook from orcharhino
You can run an Ansible playbook on a host or host group by executing a remote job in orcharhino.
-
Ansible plugin in orcharhino is enabled.
-
Remote job execution is configured. For more information, see Configuring and Setting Up Remote Jobs.
-
You have an Ansible playbook ready to use.
-
In the orcharhino management UI, navigate to Monitor > Jobs.
-
Click Run Job.
-
In Job category, select Ansible Playbook.
-
In Job template, select Ansible - Run playbook.
-
Click Next.
-
Select the hosts on which you want to run the playbook.
-
In the playbook field, paste the content of your Ansible playbook.
-
Follow the wizard to complete setting the remote job. For more information, see executing a remote job.
-
Click Submit to run the Ansible playbook on your hosts.
-
Alternatively, you can import Ansible playbooks from orcharhino Proxies. For more information, see Importing an Ansible playbook by name and Importing all available Ansible playbooks.
Configuring and setting up remote jobs
orcharhino supports remote execution of commands on hosts. Using remote execution, you can perform various tasks on multiple hosts simultaneously.
About running jobs on hosts
You can run jobs on hosts remotely from orcharhino Proxies using shell scripts or Ansible tasks and playbooks. This is referred to as remote execution.
For custom Ansible roles that you create, or roles that you download, you must install the package containing the roles on the orcharhino Proxy base operating system. Before you can use Ansible roles, you must import the roles into orcharhino from the orcharhino Proxy where they are installed.
Communication occurs through orcharhino Proxy, which means that orcharhino Server does not require direct access to the target host, and can scale to manage many hosts. For more information, see transport modes for remote execution.
orcharhino uses ERB syntax job templates. For more information, see Template Writing Reference in Managing Hosts.
Several job templates for shell scripts and Ansible are included by default. For more information, see Setting up Job Templates in Managing Hosts.
Any orcharhino Proxy base operating system is a client of orcharhino Server’s internal orcharhino Proxy, and therefore this section applies to any type of host connected to orcharhino Server, including orcharhino Proxies.
You can run jobs on multiple hosts at once, and you can use variables in your commands for more granular control over the jobs you run. You can use host facts and parameters to populate the variable values.
In addition, you can specify custom values for templates when you run the command.
For more information, see Executing a Remote Job in Managing Hosts.
Remote execution workflow
When you run a remote job on hosts, for every host, orcharhino performs the following actions to find a remote execution orcharhino Proxy to use.
orcharhino searches only for orcharhino Proxies that have the remote execution feature enabled.
-
orcharhino finds the host’s interfaces that have the Remote execution checkbox selected.
-
orcharhino finds the subnets of these interfaces.
-
orcharhino finds remote execution orcharhino Proxies assigned to these subnets.
-
From this set of orcharhino Proxies, orcharhino selects the orcharhino Proxy that has the least number of running jobs. By doing this, orcharhino ensures that the jobs load is balanced between remote execution orcharhino Proxies.
If you have enabled Prefer registered through orcharhino Proxy for remote execution, orcharhino runs the REX job using the orcharhino Proxy the host is registered to.
By default, Prefer registered through orcharhino Proxy for remote execution is set to No.
To enable it, in the orcharhino management UI, navigate to Administer > Settings, and on the Content tab, set Prefer registered through orcharhino Proxy for remote execution to Yes.
This ensures that orcharhino performs REX jobs on hosts through the orcharhino Proxy to which they are registered.
If orcharhino does not find a remote execution orcharhino Proxy at this stage, and if the Fallback to Any orcharhino Proxy setting is enabled, orcharhino adds another set of orcharhino Proxies to select the remote execution orcharhino Proxy from. orcharhino selects the most lightly loaded orcharhino Proxy from the following types of orcharhino Proxies that are assigned to the host:
-
DHCP, DNS and TFTP orcharhino Proxies assigned to the host’s subnets
-
DNS orcharhino Proxy assigned to the host’s domain
-
Realm orcharhino Proxy assigned to the host’s realm
-
Puppet server orcharhino Proxy
-
Puppet CA orcharhino Proxy
-
OpenSCAP orcharhino Proxy
If orcharhino does not find a remote execution orcharhino Proxy at this stage, and if the Enable Global orcharhino Proxy setting is enabled, orcharhino selects the most lightly loaded remote execution orcharhino Proxy from the set of all orcharhino Proxies in the host’s organization and location to execute a remote job.
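The selection logic above can be sketched as follows. This is a simplified illustration with an assumed data model, not orcharhino's actual implementation:

```python
# Simplified sketch of remote execution proxy selection.
def select_rex_proxy(host, proxies, fallback_to_any=False):
    """Pick the least loaded remote execution proxy for `host`."""
    rex = [p for p in proxies if "remote_execution" in p["features"]]
    # Proxies assigned to the subnets of the host's remote execution interfaces
    candidates = [p for p in rex if p["id"] in host["subnet_proxy_ids"]]
    if not candidates and fallback_to_any:
        # Fallback to Any orcharhino Proxy: other proxies assigned to the host
        candidates = [p for p in rex if p["id"] in host["assigned_proxy_ids"]]
    if not candidates:
        return None
    # Balance the job load between remote execution proxies
    return min(candidates, key=lambda p: p["running_jobs"])

proxies = [
    {"id": 1, "features": ["remote_execution"], "running_jobs": 4},
    {"id": 2, "features": ["remote_execution"], "running_jobs": 1},
    {"id": 3, "features": ["dns"], "running_jobs": 0},
]
host = {"subnet_proxy_ids": [1, 2], "assigned_proxy_ids": [1, 2, 3]}
print(select_rex_proxy(host, proxies)["id"])
# → 2 (the least loaded proxy with the remote execution feature)
```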
Permissions for remote execution
You can control which roles can run which jobs within your infrastructure, including which hosts they can target. The remote execution feature provides two built-in roles:
-
Remote Execution Manager: Can access all remote execution features and functionality.
-
Remote Execution User: Can only run jobs.
You can clone the Remote Execution User role and customize its filter for increased granularity.
If you adjust the filter with the view_job_templates
permission on a customized role, you can only see and trigger jobs based on matching job templates.
You can use the view_hosts
and view_smart_proxies
permissions to limit which hosts or orcharhino Proxies are visible to the role.
The execute_template_invocation
permission is a special permission that is checked immediately before execution of a job begins.
This permission defines which job template you can run on a particular host.
This allows for even more granularity when specifying permissions.
You can run remote execution jobs against orcharhino and orcharhino Proxy registered as hosts to orcharhino with the execute_jobs_on_infrastructure_hosts
permission.
Standard Manager and Site Manager roles have this permission by default.
If you use either the Manager or Site Manager role, or if you use a custom role with the execute_jobs_on_infrastructure_hosts
permission, you can execute remote jobs against registered orcharhino and orcharhino Proxy hosts.
For more information on working with roles and permissions, see Creating and Managing Roles in Administering orcharhino.
The following example shows filters for the execute_template_invocation
permission:
name = Reboot and host.name = staging.example.com
name = Reboot and host.name ~ *.staging.example.com
name = "Restart service" and host_group.name = webservers
Use the first line in this example to apply the Reboot template to one selected host.
Use the second line to define a pool of hosts with names ending with .staging.example.com.
Use the third line to bind the template with a host group.
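The wildcard in the second filter works like shell-style pattern matching. As an illustration only (orcharhino uses its own search syntax, not Python), fnmatch shows which host names such a pattern selects:

```python
import fnmatch

hosts = [
    "app01.staging.example.com",
    "app02.staging.example.com",
    "db01.example.com",
]
# Hosts whose names end with .staging.example.com
pool = [h for h in hosts if fnmatch.fnmatch(h, "*.staging.example.com")]
print(pool)
# → ['app01.staging.example.com', 'app02.staging.example.com']
```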
Permissions assigned to users with these roles can change over time. If you have already scheduled some jobs to run in the future, and the permissions change, this can result in execution failure because permissions are checked immediately before job execution.
Transport modes for remote execution
You can configure your orcharhino to use two different modes of transport for remote job execution.
On orcharhino Proxies in ssh
mode, remote execution uses the SSH service to transport job details.
This is the default transport mode.
The SSH service must be enabled and active on the target hosts.
The remote execution orcharhino Proxy must have access to the SSH port on the target hosts.
Unless you have a different setting, the standard SSH port is 22.
To set ssh mode explicitly on orcharhino Proxy, run:
# orcharhino-installer \
  --foreman-proxy-plugin-remote-execution-script-mode=ssh
On orcharhino Proxies in pull-mqtt
mode, remote execution uses Message Queueing Telemetry Transport (MQTT) to publish jobs it receives from orcharhino Server.
The host subscribes to the MQTT broker on orcharhino Proxy for job notifications using the yggdrasil
pull client.
After the host receives a notification, it pulls job details from orcharhino Proxy over HTTPS, runs the job, and reports results back to orcharhino Proxy.
To use the pull-mqtt
mode, you must enable it on orcharhino Proxy and configure the pull client on the target hosts.
-
To enable pull mode on orcharhino Proxy, see Configuring Remote Execution for Pull Client in Installing orcharhino Proxy.
-
To enable pull mode on an existing host, continue with Configuring a Host to Use the Pull Client.
-
To enable pull mode on a new host, use one of the host registration procedures in Managing Hosts.
Configuring a host to use the pull client
For orcharhino Proxies configured to use pull-mqtt
mode, hosts can subscribe to remote jobs using the remote execution pull client.
Hosts do not require an SSH connection from their orcharhino Proxy.
-
You have registered the host to orcharhino.
-
The host’s orcharhino Proxy is configured to use
pull-mqtt
mode. For more information, see Configuring Remote Execution for Pull Client in Installing orcharhino Proxy. -
The ATIX AG orcharhino Client for Debian repository is enabled and synchronized on orcharhino Server, and enabled on the host.
-
The host is able to communicate with its orcharhino Proxy over MQTT using port 1883.
-
The host is able to communicate with its orcharhino Proxy over HTTPS.
-
Install the
katello-pull-transport-migrate
package on your host:-
On Debian hosts:
# apt-get install katello-pull-transport-migrate
The package installs foreman_ygg_worker and yggdrasil as dependencies and enables the pull mode on the host. The host’s subscription-manager configuration and consumer certificates are used to configure the yggdrasil client on the host, and the pull mode client worker is started.
-
Check the status of the yggdrasild service:
# systemctl status yggdrasild
Creating a job template
Use this procedure to create a job template. To use the CLI instead of the orcharhino management UI, see the CLI procedure.
-
In the orcharhino management UI, navigate to Hosts > Templates > Job templates.
-
Click New Job Template.
-
Click the Template tab, and in the Name field, enter a unique name for your job template.
-
Select Default to make the template available for all organizations and locations.
-
Create the template directly in the template editor or upload it from a text file by clicking Import.
-
Optional: In the Audit Comment field, add information about the change.
-
Click the Job tab, and in the Job category field, enter your own category or select from the default categories listed in Default Job Template Categories in Managing Hosts.
-
Optional: In the Description Format field, enter a description template. For example, Install package %{package_name}. You can also use %{template_name} and %{job_category} in your template.
-
From the Provider Type list, select SSH for shell scripts and Ansible for Ansible tasks or playbooks.
-
Optional: In the Timeout to kill field, enter a timeout value to terminate the job if it does not complete.
-
Optional: Click Add Input to define an input parameter. Parameters are requested when executing the job and do not have to be defined in the template. For examples, see the Help tab.
-
Optional: Click Foreign input set to include other templates in this job.
-
Optional: In the Effective user area, configure a user if the command cannot use the default remote_execution_effective_user setting.
-
Optional: If this template is a snippet to be included in other templates, click the Type tab and select Snippet.
-
Optional: If you use the Ansible provider, click the Ansible tab. Select Enable Ansible Callback to allow hosts to send facts, which are used to create configuration reports, back to orcharhino after a job finishes.
-
Click the Location tab and add the locations where you want to use the template.
-
Click the Organizations tab and add the organizations where you want to use the template.
-
Click Submit to save your changes.
You can extend and customize job templates by including other templates in the template syntax. For more information, see Template Writing Reference and Job Template Examples and Extensions in Managing Hosts.
-
To create a job template using a template-definition file, enter the following command:
# hammer job-template create \
  --file "Path_to_My_Template_File" \
  --job-category "My_Category_Name" \
  --name "My_Template_Name" \
  --provider-type SSH
Importing an Ansible playbook by name
You can import Ansible playbooks by name to orcharhino from collections installed on orcharhino Proxy.
orcharhino creates a job template from the imported playbook and places the template in the Ansible Playbook - Imported
job category.
If you have a custom collection, place it in /etc/ansible/collections/ansible_collections/My_Namespace/My_Collection.
-
Ansible plugin is enabled.
-
Your orcharhino account has a role that grants the
import_ansible_playbooks
permission.
-
Fetch the available Ansible playbooks by using the following API request:
# curl -X GET -H 'Content-Type: application/json' https://orcharhino.example.com/ansible/api/v2/ansible_playbooks/fetch?proxy_id=My_orcharhino-proxy_ID
-
Select the Ansible playbook you want to import and note its name.
-
Import the Ansible playbook by its name:
# curl -X PUT -H 'Content-Type: application/json' -d '{ "playbook_names": ["My_Playbook_Name"] }' https://orcharhino.example.com/ansible/api/v2/ansible_playbooks/sync?proxy_id=My_orcharhino-proxy_ID
You get a notification in the orcharhino management UI after the import completes.
-
You can run the playbook by executing a remote job from the created job template. For more information, see executing a remote job.
Importing all available Ansible playbooks
You can import all the available Ansible playbooks to orcharhino from collections installed on orcharhino Proxy.
orcharhino creates job templates from the imported playbooks and places the templates in the Ansible Playbook - Imported
job category.
If you have a custom collection, place it in /etc/ansible/collections/ansible_collections/My_Namespace/My_Collection.
-
The Ansible plugin is enabled.
-
Your orcharhino account has a role that grants the import_ansible_playbooks permission.
-
Import the Ansible playbooks by using the following API request:
# curl -X PUT -H 'Content-Type: application/json' https://orcharhino.example.com/ansible/api/v2/ansible_playbooks/sync?proxy_id=My_orcharhino-proxy_ID
You get a notification in the orcharhino management UI after the import completes.
-
You can run the playbooks by executing a remote job from the created job templates. For more information, see Executing a remote job.
Configuring the fallback to any orcharhino Proxy remote execution setting in orcharhino
You can enable the Fallback to Any orcharhino Proxy setting to configure orcharhino to search for remote execution orcharhino Proxies from the list of orcharhino Proxies that are assigned to hosts. This can be useful if you need to run remote jobs on hosts that have no subnets configured or if the hosts' subnets are assigned to orcharhino Proxies that do not have the remote execution feature enabled.
If the Fallback to Any orcharhino Proxy setting is enabled, orcharhino adds another set of orcharhino Proxies to select the remote execution orcharhino Proxy from. orcharhino also selects the most lightly loaded orcharhino Proxy from the set of all orcharhino Proxies assigned to the host, such as the following:
-
DHCP, DNS and TFTP orcharhino Proxies assigned to the host’s subnets
-
DNS orcharhino Proxy assigned to the host’s domain
-
Realm orcharhino Proxy assigned to the host’s realm
-
Puppet server orcharhino Proxy
-
Puppet CA orcharhino Proxy
-
OpenSCAP orcharhino Proxy
-
In the orcharhino management UI, navigate to Administer > Settings.
-
Click Remote Execution.
-
Configure the Fallback to Any orcharhino Proxy setting.
-
Enter the hammer settings set command on orcharhino to configure the Fallback to Any orcharhino Proxy setting. To set the value to true, enter the following command:
# hammer settings set \
--name=remote_execution_fallback_proxy \
--value=true
Configuring the global orcharhino Proxy remote execution setting in orcharhino
By default, orcharhino searches for remote execution orcharhino Proxies in hosts' organizations and locations regardless of whether orcharhino Proxies are assigned to hosts' subnets or not. You can disable the Enable Global orcharhino Proxy setting if you want to limit the search to the orcharhino Proxies that are assigned to hosts' subnets.
If the Enable Global orcharhino Proxy setting is enabled, orcharhino adds another set of orcharhino Proxies to select the remote execution orcharhino Proxy from. orcharhino also selects the most lightly loaded remote execution orcharhino Proxy from the set of all orcharhino Proxies in the host’s organization and location to execute a remote job.
-
In the orcharhino management UI, navigate to Administer > Settings.
-
Click Remote Execution.
-
Configure the Enable Global orcharhino Proxy setting.
-
Enter the hammer settings set command on orcharhino to configure the Enable Global orcharhino Proxy setting. To set the value to true, enter the following command:
# hammer settings set \
--name=remote_execution_global_proxy \
--value=true
Setting an alternative directory for remote execution jobs in push mode
By default, orcharhino uses the /var/tmp directory on hosts for remote execution jobs in push mode. If the /var/tmp directory on your host is mounted with the noexec flag, orcharhino cannot execute remote execution job scripts in this directory. You can use orcharhino-installer to set an alternative directory for executing remote execution jobs in push mode.
-
On your host, create a new directory:
# mkdir /My_Remote_Working_Directory
-
Copy the SELinux context from the default /var/tmp directory:
# chcon --reference=/var/tmp /My_Remote_Working_Directory
-
Configure your orcharhino Server or orcharhino Proxy to use the new directory:
# orcharhino-installer \
--foreman-proxy-plugin-remote-execution-script-remote-working-dir /My_Remote_Working_Directory
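If you want to see what a prepared working directory looks like before running the installer, the steps above can be sketched as follows. The temporary path is a stand-in for /My_Remote_Working_Directory, and the chcon step is omitted here because it requires an SELinux-enabled host:

```shell
# Stand-in path so the sketch is safe to run anywhere.
workdir="$(mktemp -d)/remote_working_dir"

# Create the directory and give it the same mode as /var/tmp
# (world-writable with the sticky bit).
mkdir -p "$workdir"
chmod 1777 "$workdir"

# Verify the resulting mode.
stat -c '%a' "$workdir"
```

On a real host, follow this with the chcon and orcharhino-installer steps from the procedure above.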
Setting an alternative directory for remote execution jobs in pull mode
By default, orcharhino uses the /run directory on hosts for remote execution jobs in pull mode. If the /run directory on your host is mounted with the noexec flag, orcharhino cannot execute remote execution job scripts in this directory. You can use the yggdrasild service to set an alternative directory for executing remote execution jobs in pull mode.
On your host, perform these steps:
-
Create a new directory:
# mkdir /My_Remote_Working_Directory
-
Access the yggdrasild service configuration:
# systemctl edit yggdrasild
-
Specify the alternative directory by adding the following line to the configuration:
Environment=FOREMAN_YGG_WORKER_WORKDIR=/My_Remote_Working_Directory
-
Restart the yggdrasild service:
# systemctl restart yggdrasild
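The systemctl edit command opens an override file for the service. After saving, the resulting drop-in should look like the following sketch; the override path is the systemd default for edited units, and note that the Environment line must sit inside a [Service] section for systemd to accept it:

```ini
# /etc/systemd/system/yggdrasild.service.d/override.conf
[Service]
Environment=FOREMAN_YGG_WORKER_WORKDIR=/My_Remote_Working_Directory
```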
Altering the privilege elevation method
By default, push-based remote execution uses sudo to switch from the SSH user to the effective user that executes the script on your host. In some situations, you might need to use another method, such as su or dzdo. You can globally configure an alternative method in your orcharhino settings.
-
Your user account has a role assigned that grants the view_settings and edit_settings permissions.
-
If you want to use dzdo for Ansible jobs, ensure the community.general Ansible collection, which contains the required dzdo become plugin, is installed. For more information, see Installing collections in Ansible documentation.
-
Navigate to Administer > Settings.
-
Select the Remote Execution tab.
-
Click the value of the Effective User Method setting.
-
Select the new value.
-
Click Submit.
Distributing SSH keys for remote execution
For orcharhino Proxies in ssh mode, remote execution connections are authenticated using SSH. The public SSH key from orcharhino Proxy must be distributed to its attached hosts that you want to manage.
Ensure that the SSH service is enabled and running on the hosts. Configure any network or host-based firewalls to enable access to port 22.
Use one of the following methods to distribute the public SSH key from orcharhino Proxy to target hosts:
-
Using the orcharhino API to obtain SSH keys for remote execution.
-
Configuring a Preseed template to distribute SSH keys during provisioning.
-
For new orcharhino hosts, you can deploy SSH keys to orcharhino hosts during registration using the global registration template. For more information, see Registering a Host to orcharhino Using the Global Registration Template in Managing Hosts.
orcharhino distributes SSH keys for the remote execution feature to the hosts provisioned from orcharhino by default.
If the hosts are running on Amazon Web Services, enable password authentication. For more information, see New User Accounts.
Distributing SSH keys for remote execution manually
To distribute SSH keys manually, complete the following steps:
-
Copy the public SSH key from your orcharhino Proxy to your target host:
# ssh-copy-id -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy.pub root@client.example.com
Repeat this step for each target host you want to manage.
-
To confirm that the key was successfully copied to the target host, enter the following command on orcharhino Proxy:
# ssh -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy root@client.example.com
Adding a passphrase to SSH key used for remote execution
By default, orcharhino Proxy uses a non-passphrase protected SSH key to execute remote jobs on hosts. You can protect the SSH key with a passphrase by following this procedure.
-
On your orcharhino Server or orcharhino Proxy, use ssh-keygen to add a passphrase to your SSH key:
# ssh-keygen -p -f ~foreman-proxy/.ssh/id_rsa_foreman_proxy
-
Users now must use a passphrase when running remote execution jobs on hosts.
Using the orcharhino API to obtain SSH keys for remote execution
To use the orcharhino API to download the public key from orcharhino Proxy, complete this procedure on each target host.
-
On the target host, create the ~/.ssh directory to store the SSH key:
# mkdir ~/.ssh
-
Download the SSH key from orcharhino Proxy:
# curl https://orcharhino-proxy.network2.example.com:9090/ssh/pubkey >> ~/.ssh/authorized_keys
-
Configure permissions for the ~/.ssh directory:
# chmod 700 ~/.ssh
-
Configure permissions for the authorized_keys file:
# chmod 600 ~/.ssh/authorized_keys
Configuring a Preseed template to distribute SSH keys during provisioning
You can add a remote_execution_ssh_keys snippet to your custom Preseed template to deploy SSH keys to hosts during provisioning. Preseed templates that orcharhino ships include this snippet by default. orcharhino copies the SSH key for remote execution to the systems during provisioning.
-
To include the public key in newly-provisioned hosts, add the following snippet to the Preseed template that you use:
<%= snippet 'remote_execution_ssh_keys' %>
Configuring a keytab for Kerberos ticket granting tickets
Use this procedure to configure orcharhino to use a keytab to obtain Kerberos ticket granting tickets. If you do not set up a keytab, you must manually retrieve tickets.
-
Find the ID of the foreman-proxy user:
# id -u foreman-proxy
-
Modify the umask value so that new files have the permissions 600:
# umask 077
-
Create the directory for the keytab:
# mkdir -p "/var/kerberos/krb5/user/My_User_ID"
-
Create a keytab or copy an existing keytab to the directory:
# cp My_Client.keytab /var/kerberos/krb5/user/My_User_ID/client.keytab
-
Change the directory owner to the foreman-proxy user:
# chown -R foreman-proxy:foreman-proxy "/var/kerberos/krb5/user/My_User_ID"
-
Ensure that the keytab file is read-only:
# chmod -wx "/var/kerberos/krb5/user/My_User_ID/client.keytab"
-
Restore the SELinux context:
# restorecon -RvF /var/kerberos/krb5
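As a quick sanity check of the umask step, the following runnable sketch shows why umask 077 yields mode 600 files and mode 700 directories; a temporary directory stands in for /var/kerberos/krb5/user/My_User_ID:

```shell
tmp="$(mktemp -d)"

# Run in a subshell so the umask change does not leak into your session.
(
  umask 077
  touch "$tmp/client.keytab"   # new files get 666 & ~077 = 600
  mkdir "$tmp/keytabs"         # new directories get 777 & ~077 = 700
)

stat -c '%a' "$tmp/client.keytab" "$tmp/keytabs"
```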
Configuring Kerberos authentication for remote execution
You can use Kerberos authentication to establish an SSH connection for remote execution on orcharhino hosts.
-
Enroll orcharhino Server on the Kerberos server
-
Enroll the orcharhino target host on the Kerberos server
-
Configure and initialize a Kerberos user account for remote execution
-
Ensure that the foreman-proxy user on orcharhino has a valid Kerberos ticket granting ticket
-
To install and enable Kerberos authentication for remote execution, enter the following command:
# orcharhino-installer \
--foreman-proxy-plugin-remote-execution-script-ssh-kerberos-auth true
-
To edit the default user for remote execution, in the orcharhino management UI, navigate to Administer > Settings and click the Remote Execution tab. In the SSH User row, edit the second column and add the user name for the Kerberos account.
-
Navigate to remote_execution_effective_user and edit the second column to add the user name for the Kerberos account.
-
To confirm that Kerberos authentication is ready to use, run a remote job on the host. For more information, see Executing a Remote Job in Managing Hosts.
Setting up job templates
orcharhino provides default job templates that you can use for executing jobs. To view the list of job templates, navigate to Hosts > Templates > Job templates. If you want to use a template without making changes, proceed to Executing a Remote Job in Managing Hosts.
You can use default templates as a base for developing your own. Default job templates are locked for editing. Clone the template and edit the clone.
-
To clone a template, in the Actions column, select Clone.
-
Enter a unique name for the clone and click Submit to save the changes.
Job templates use the Embedded Ruby (ERB) syntax. For more information about writing templates, see the Template Writing Reference in Managing Hosts.
To create an Ansible job template, use the same procedure, but write the template body in YAML syntax instead of ERB syntax. Begin the template with ---. You can embed an Ansible playbook YAML file into the job template body, and you can add ERB syntax to customize your YAML Ansible template. You can also import Ansible playbooks in orcharhino. For more information, see Synchronizing Repository Templates in Managing Hosts.
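For illustration, a minimal Ansible job template body might look like the following sketch. The package input name and the task itself are assumptions for this example, not a template that orcharhino ships:

```yaml
---
- hosts: all
  tasks:
    # The ERB input() call is resolved by orcharhino before the playbook
    # reaches Ansible; "package" must be defined as a template input
    # on the Job tab.
    - name: Ensure a user-selected package is installed
      ansible.builtin.package:
        name: "<%= input('package') %>"
        state: present
```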
At run time, job templates can accept parameter variables that you define for a host. Note that only the parameters visible on the Parameters tab at the host’s edit page can be used as input parameters for job templates.
Executing a remote job
You can execute a job that is based on a job template against one or more hosts.
To use the CLI instead of the orcharhino management UI, see the CLI procedure.
-
In the orcharhino management UI, navigate to Monitor > Jobs and click Run job.
-
Select the Job category and the Job template you want to use, then click Next.
-
Select hosts on which you want to run the job. If you do not select any hosts, the job will run on all hosts you can see in the current context.
If you want to select a host group and all of its subgroups, it is not sufficient to select the host group as the job would only run on hosts directly in that group and not on hosts in subgroups. Instead, you must either select the host group and all of its subgroups or use this search query:
hostgroup_fullname ~ "My_Host_Group*"
Replace My_Host_Group with the name of the top-level host group.
-
If required, provide inputs for the job template. Different templates have different inputs and some templates do not have any inputs. After entering all the required inputs, click Next.
-
Optional: To configure advanced settings for the job, fill in the Advanced fields.
-
Click Next.
-
Schedule time for the job.
-
To execute the job immediately, keep the pre-selected Immediate execution.
-
To execute the job at a later time, select Future execution.
-
To execute the job on a regular basis, select Recurring execution.
-
-
Optional: If you selected future or recurring execution, select the Query type, otherwise click Next.
-
Static query means that the job executes on the exact list of hosts that you provided.
-
Dynamic query means that the list of hosts is evaluated just before the job is executed. If you entered the list of hosts based on some filter, the results can be different from when you first used that filter.
Click Next after you have selected the query type.
-
-
Optional: If you selected future or recurring execution, provide additional details:
-
For Future execution, enter the Starts at date and time. You also have the option to select the Starts before date and time. If the job cannot start before that time, it will be canceled.
-
For Recurring execution, select the start date and time, frequency, and the condition for ending the recurring job. You can choose the recurrence to never end, end at a certain time, or end after a given number of repetitions. You can also add Purpose - a special label for tracking the job. There can only be one active job with a given purpose at a time.
Click Next after you have entered the required information.
-
-
Review job details. You have the option to return to any part of the job wizard and edit the information.
-
Click Submit to schedule the job for execution.
-
Enter the following command on orcharhino:
# hammer settings set \
--name=remote_execution_global_proxy \
--value=false
-
Find the ID of the job template you want to use:
# hammer job-template list
-
Show the template details to see parameters required by your template:
# hammer job-template info --id My_Template_ID
-
Execute a remote job with custom parameters:
# hammer job-invocation create \
--inputs My_Key_1="My_Value_1",My_Key_2="My_Value_2",... \
--job-template "My_Template_Name" \
--search-query "My_Search_Query"
Replace My_Search_Query with the filter expression that defines hosts, for example "name ~ My_Pattern". For more information about executing remote commands with hammer, enter hammer job-template --help and hammer job-invocation --help.
Scheduling a recurring Ansible job for a host
You can schedule a recurring job to run Ansible roles on hosts.
-
Ensure you have the view_foreman_tasks, view_job_invocations, and view_recurring_logics permissions.
-
In the orcharhino management UI, navigate to Hosts > All Hosts and select the target host on which you want to execute a remote job.
-
On the Ansible tab, select Jobs.
-
Click Schedule recurring job.
-
Define the repetition frequency, start time, and date of the first run in the Create New Recurring Ansible Run window.
-
Click Submit.
-
Optional: View the scheduled Ansible job in host overview or by navigating to Ansible > Jobs.
Scheduling a recurring Ansible job for a host group
You can schedule a recurring job to run Ansible roles on host groups.
-
In the orcharhino management UI, navigate to Configure > Host groups.
-
In the Actions column, select Configure Ansible Job for the host group you want to schedule an Ansible roles run for.
-
Click Schedule recurring job.
-
Define the repetition frequency, start time, and date of the first run in the Create New Recurring Ansible Run window.
-
Click Submit.
Monitoring jobs
You can monitor the progress of a job while it is running. This can help in any troubleshooting that may be required.
Ansible jobs run in batches of 100 hosts, so you cannot cancel a job running on a specific host. A job completes only after the Ansible playbook runs on all hosts in the batch.
-
In the orcharhino management UI, navigate to Monitor > Jobs. This page is automatically displayed if you triggered the job with the Execute now setting. To monitor scheduled jobs, navigate to Monitor > Jobs and select the job run you wish to inspect.
-
On the Job page, click the Hosts tab. This displays the list of hosts on which the job is running.
-
In the Host column, click the name of the host that you want to inspect. This displays the Detail of Commands page where you can monitor the job execution in real time.
-
Click Back to Job at any time to return to the Job Details page.
-
Find the ID of a job:
# hammer job-invocation list
-
Monitor the job output:
# hammer job-invocation output \
--host My_Host_Name \
--id My_Job_ID
-
Optional: To cancel a job, enter the following command:
# hammer job-invocation cancel \
--id My_Job_ID
Using Ansible provider for package and errata actions
By default, orcharhino is configured to use the Script provider templates for remote execution jobs. If you prefer using Ansible job templates for your remote jobs, you can configure orcharhino to use them by default for remote execution features associated with them.
Remember that Ansible job templates only work when remote execution is configured for
-
In the orcharhino management UI, navigate to Administer > Remote Execution Features.
-
Find each feature whose name contains by_search.
-
Change the job template for these features from Katello Script Default to Katello Ansible Default.
-
Click Submit.
orcharhino now uses Ansible provider templates for the remote execution jobs that perform package and errata actions. This applies to job invocations from the orcharhino management UI as well as to jobs created with hammer job-invocation create using the same remote execution features that you have changed.
Job template examples and extensions
Use this section as a reference to help modify, customize, and extend your job templates to suit your requirements.
Customizing job templates
When creating a job template, you can include an existing template in the template editor field. This way you can combine templates, or create more specific templates from the general ones.
The following template combines default templates to install and start the nginx service on clients:
<%= render_template 'Package Action - SSH Default', :action => 'install', :package => 'nginx' %>
<%= render_template 'Service Action - SSH Default', :action => 'start', :service_name => 'nginx' %>
The above template specifies parameter values for the rendered template directly. It is also possible to use the input() method to allow users to define input for the rendered template on job execution. For example, you can use the following syntax:
<%= render_template 'Package Action - SSH Default', :action => 'install', :package => input("package") %>
With the above template, you have to import the parameter definition from the rendered template. To do so, navigate to the Jobs tab, click Add Foreign Input Set, and select the rendered template from the Target template list. You can import all parameters or specify a comma separated list.
Default job template categories
Job template category | Description
---|---
Packages | Templates for performing package related actions. Install, update, and remove actions are included by default.
Puppet | Templates for executing Puppet runs on target hosts.
Power | Templates for performing power related actions. Restart and shutdown actions are included by default.
Commands | Templates for executing custom commands on remote hosts.
Services | Templates for performing service related actions. Start, stop, restart, and status actions are included by default.
Katello | Templates for performing content related actions. These templates are used mainly from different parts of the orcharhino management UI (for example bulk actions UI for content hosts), but can be used separately to perform operations such as errata installation.
Example restorecon template
This example shows how to create a template called Run Command - restorecon that restores the default SELinux context for all files in the selected directory on target hosts.
-
In the orcharhino management UI, navigate to Hosts > Templates > Job templates.
-
Click New Job Template.
-
Enter Run Command - restorecon in the Name field. Select Default to make the template available to all organizations. Add the following text to the template editor:
restorecon -RvF <%= input("directory") %>
The <%= input("directory") %> string is replaced by a user-defined directory during job invocation.
-
On the Job tab, set Job category to Commands.
-
Click Add Input to allow job customization. Enter directory in the Name field. The input name must match the value specified in the template editor.
-
Click Required so that the command cannot be executed without the user specified parameter.
-
Select User input from the Input type list. Enter a description to be shown during job invocation, for example Target directory for restorecon.
-
Click Submit. For more information, see Executing a restorecon Template on Multiple Hosts in Managing Hosts.
Rendering a restorecon template
This example shows how to create a template derived from the Run command - restorecon template created in Example restorecon Template.
This template does not require user input on job execution; it restores the SELinux context in all files under the /home/ directory on target hosts.
Create a new template as described in Setting up Job Templates, and specify the following string in the template editor:
<%= render_template("Run Command - restorecon", :directory => "/home") %>
Executing a restorecon template on multiple hosts
This example shows how to run a job based on the template created in Example restorecon Template on multiple hosts.
The job restores the SELinux context in all files under the /home/ directory.
-
In the orcharhino management UI, navigate to Monitor > Jobs and click Run job.
-
Select Commands as Job category and Run Command - restorecon as Job template and click Next.
-
Select the hosts on which you want to run the job. If you do not select any hosts, the job will run on all hosts you can see in the current context.
-
In the directory field, provide a directory, for example /home, and click Next.
-
Optional: To configure advanced settings for the job, fill in the Advanced fields. When you are done entering the advanced settings or if it is not required, click Next.
-
Schedule time for the job.
-
To execute the job immediately, keep the pre-selected Immediate execution.
-
To execute the job at a later time, select Future execution.
-
To execute the job on a regular basis, select Recurring execution.
-
-
Optional: If you selected future or recurring execution, select the Query type, otherwise click Next.
-
Static query means that the job executes on the exact list of hosts that you provided.
-
Dynamic query means that the list of hosts is evaluated just before the job is executed. If you entered the list of hosts based on some filter, the results can be different from when you first used that filter.
Click Next after you have selected the query type.
-
-
Optional: If you selected future or recurring execution, provide additional details:
-
For Future execution, enter the Starts at date and time. You also have the option to select the Starts before date and time. If the job cannot start before that time, it will be canceled.
-
For Recurring execution, select the start date and time, frequency, and condition for ending the recurring job. You can choose the recurrence to never end, end at a certain time, or end after a given number of repetitions. You can also add Purpose - a special label for tracking the job. There can only be one active job with a given purpose at a time.
Click Next after you have entered the required information.
-
-
Review job details. You have the option to return to any part of the job wizard and edit the information.
-
Click Submit to schedule the job for execution.
Including power actions in templates
This example shows how to set up a job template for performing power actions, such as reboot. This procedure prevents orcharhino from interpreting the disconnect exception upon reboot as an error, and consequently, remote execution of the job works correctly.
Create a new template as described in Setting up Job Templates, and specify the following string in the template editor:
<%= render_template("Power Action - SSH Default", :action => "restart") %>
The text and illustrations on this page are licensed by ATIX AG under a Creative Commons Attribution Share Alike 4.0 International ("CC BY-SA 4.0") license. This page also contains text from the official Foreman documentation which uses the same license ("CC BY-SA 4.0"). |