Configuring and setting up remote jobs

orcharhino supports remote execution of commands on hosts. Using remote execution, you can perform various tasks on multiple hosts simultaneously.

About running jobs on hosts

You can run jobs on hosts remotely from orcharhino Proxies using shell scripts or Ansible tasks and playbooks. This is referred to as remote execution.

For custom Ansible roles that you create, or roles that you download, you must install the package containing the roles on the orcharhino Proxy base operating system. Before you can use Ansible roles, you must import the roles into orcharhino from the orcharhino Proxy where they are installed.

Communication occurs through orcharhino Proxy, which means that orcharhino Server does not require direct access to the target host, and can scale to manage many hosts. For more information, see transport modes for remote execution.

orcharhino uses job templates written in Embedded Ruby (ERB) syntax. For more information, see Template Writing Reference in Managing Hosts.

Several job templates for shell scripts and Ansible are included by default. For more information, see Setting up Job Templates in Managing Hosts.

Any orcharhino Proxy base operating system is a client of orcharhino Server’s internal orcharhino Proxy, and therefore this section applies to any type of host connected to orcharhino Server, including orcharhino Proxies.

You can run jobs on multiple hosts at once, and you can use variables in your commands for more granular control over the jobs you run. You can use host facts and parameters to populate the variable values.

In addition, you can specify custom values for templates when you run the command.

For more information, see Executing a Remote Job in Managing Hosts.

Remote execution workflow

When you run a remote job on hosts, orcharhino performs the following actions for every host to find a remote execution orcharhino Proxy to use.

orcharhino searches only for orcharhino Proxies that have the remote execution feature enabled.

  1. orcharhino finds the host’s interfaces that have the Remote execution checkbox selected.

  2. orcharhino finds the subnets of these interfaces.

  3. orcharhino finds remote execution orcharhino Proxies assigned to these subnets.

  4. From this set of orcharhino Proxies, orcharhino selects the orcharhino Proxy that has the least number of running jobs. By doing this, orcharhino ensures that the jobs load is balanced between remote execution orcharhino Proxies.
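The selection in step 4 can be sketched as a "fewest running jobs wins" choice. The following is an illustrative sketch only, not orcharhino code; the proxy names and job counts are invented:

```shell
# Given "proxy running_jobs" pairs on stdin, select the proxy with the
# fewest running jobs, as the load-balancing step does.
pick_least_loaded() {
  sort -k2 -n | head -n1 | awk '{print $1}'
}

printf '%s\n' \
  'proxy1.example.com 12' \
  'proxy2.example.com 3' \
  'proxy3.example.com 7' | pick_least_loaded
# prints proxy2.example.com
```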

If you have enabled Prefer registered through orcharhino Proxy for remote execution, orcharhino runs the remote execution job using the orcharhino Proxy to which the host is registered.

By default, Prefer registered through orcharhino Proxy for remote execution is set to No. To enable it, in the orcharhino management UI, navigate to Administer > Settings, and on the Content tab, set Prefer registered through orcharhino Proxy for remote execution to Yes. This ensures that orcharhino runs remote execution jobs on hosts through the orcharhino Proxy to which they are registered.
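You can also toggle this setting from the CLI. The setting name below is an assumption based on the usual remote execution setting naming, so verify it on your system first:

```shell
# Assumed setting name; verify with: hammer settings list | grep prefer
hammer settings set \
  --name=remote_execution_prefer_registered_through_proxy \
  --value=true
```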

If orcharhino does not find a remote execution orcharhino Proxy at this stage, and if the Fallback to Any orcharhino Proxy setting is enabled, orcharhino adds another set of orcharhino Proxies to select the remote execution orcharhino Proxy from. orcharhino selects the most lightly loaded orcharhino Proxy from the following types of orcharhino Proxies that are assigned to the host:

  • DHCP, DNS and TFTP orcharhino Proxies assigned to the host’s subnets

  • DNS orcharhino Proxy assigned to the host’s domain

  • Realm orcharhino Proxy assigned to the host’s realm

  • Puppet server orcharhino Proxy

  • Puppet CA orcharhino Proxy

  • OpenSCAP orcharhino Proxy

If orcharhino does not find a remote execution orcharhino Proxy at this stage, and if the Enable Global orcharhino Proxy setting is enabled, orcharhino selects the most lightly loaded remote execution orcharhino Proxy from the set of all orcharhino Proxies in the host’s organization and location to execute a remote job.

Permissions for remote execution

You can control which roles can run which jobs within your infrastructure, including which hosts they can target. The remote execution feature provides two built-in roles:

  • Remote Execution Manager: Can access all remote execution features and functionality.

  • Remote Execution User: Can only run jobs.

You can clone the Remote Execution User role and customize its filter for increased granularity. If you adjust the filter with the view_job_templates permission on a customized role, you can only see and trigger jobs based on matching job templates. You can use the view_hosts and view_smart_proxies permissions to limit which hosts or orcharhino Proxies are visible to the role.

The execute_template_invocation permission is a special permission that is checked immediately before execution of a job begins. This permission defines which job template you can run on a particular host. This allows for even more granularity when specifying permissions.

You can run remote execution jobs against orcharhino and orcharhino Proxy registered as hosts to orcharhino with the execute_jobs_on_infrastructure_hosts permission. Standard Manager and Site Manager roles have this permission by default. If you use either the Manager or Site Manager role, or if you use a custom role with the execute_jobs_on_infrastructure_hosts permission, you can execute remote jobs against registered orcharhino and orcharhino Proxy hosts.

For more information on working with roles and permissions, see Creating and Managing Roles in Administering orcharhino.

The following example shows filters for the execute_template_invocation permission:

name = Reboot and host.name = staging.example.com
name = Reboot and host.name ~ *.staging.example.com
name = "Restart service" and host_group.name = webservers

Use the first line in this example to apply the Reboot template to one selected host. Use the second line to define a pool of hosts with names ending with .staging.example.com. Use the third line to bind the template with a host group.

Permissions assigned to users with these roles can change over time. If you have already scheduled some jobs to run in the future, and the permissions change, this can result in execution failure because permissions are checked immediately before job execution.

Transport modes for remote execution

You can configure your orcharhino to use two different modes of transport for remote job execution.

On orcharhino Proxies in ssh mode, remote execution uses the SSH service to transport job details. This is the default transport mode. The SSH service must be enabled and active on the target hosts, and the remote execution orcharhino Proxy must have access to the SSH port on the target hosts. The standard SSH port is 22 unless you have configured a different port.

On orcharhino Proxies in pull-mqtt mode, remote execution uses Message Queueing Telemetry Transport (MQTT) to publish jobs it receives from orcharhino Server. The host subscribes to the MQTT broker on orcharhino Proxy for job notifications using the yggdrasil pull client. After the host receives a notification, it pulls job details from orcharhino Proxy over HTTPS, runs the job, and reports results back to orcharhino Proxy.

To use the pull-mqtt mode, you must enable it on orcharhino Proxy and configure the pull client on the target hosts.

Configuring a host to use the pull client

For orcharhino Proxies configured to use pull-mqtt mode, hosts can subscribe to remote jobs using the remote execution pull client. Hosts do not require an SSH connection from their orcharhino Proxy.

Prerequisites
  • You have registered the host to orcharhino.

  • The host’s orcharhino Proxy is configured to use pull-mqtt mode. For more information, see Configuring Remote Execution for Pull Client in Installing orcharhino Proxy.

  • The ATIX AG orcharhino Client for CentOS repository is enabled and synchronized on orcharhino Server, and enabled on the host.

  • The host is able to communicate with its orcharhino Proxy over MQTT using port 1883.

  • The host is able to communicate with its orcharhino Proxy over HTTPS.

Procedure
  1. Install the katello-pull-transport-migrate package on your host:

    • On CentOS 8 and CentOS 9 hosts:

      # dnf install katello-pull-transport-migrate
    • On CentOS 7 hosts:

      # yum install katello-pull-transport-migrate

    The package installs foreman_ygg_worker and yggdrasil as dependencies and enables the pull mode on the host. The host’s subscription-manager configuration and consumer certificates are used to configure the yggdrasil client on the host, and the pull mode client worker is started.

  2. Optional: To verify that the pull client is running and configured properly, check the status of the yggdrasild service:

    # systemctl status yggdrasild

Creating a job template

Use this procedure to create a job template. To use the CLI instead of the orcharhino management UI, see the CLI procedure.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > Templates > Job templates.

  2. Click New Job Template.

  3. Click the Template tab, and in the Name field, enter a unique name for your job template.

  4. Select Default to make the template available for all organizations and locations.

  5. Create the template directly in the template editor or upload it from a text file by clicking Import.

  6. Optional: In the Audit Comment field, add information about the change.

  7. Click the Job tab, and in the Job category field, enter your own category or select from the default categories listed in Default Job Template Categories in Managing Hosts.

  8. Optional: In the Description Format field, enter a description template. For example, Install package %{package_name}. You can also use %{template_name} and %{job_category} in your template.

  9. From the Provider Type list, select SSH for shell scripts and Ansible for Ansible tasks or playbooks.

  10. Optional: In the Timeout to kill field, enter a timeout value to terminate the job if it does not complete.

  11. Optional: Click Add Input to define an input parameter. Parameters are requested when executing the job and do not have to be defined in the template. For examples, see the Help tab.

  12. Optional: Click Foreign input set to include other templates in this job.

  13. Optional: In the Effective user area, configure a user if the command cannot use the default remote_execution_effective_user setting.

  14. Optional: If this template is a snippet to be included in other templates, click the Type tab and select Snippet.

  15. Optional: If you use the Ansible provider, click the Ansible tab. Select Enable Ansible Callback to allow hosts to send facts, which are used to create configuration reports, back to orcharhino after a job finishes.

  16. Click the Location tab and add the locations where you want to use the template.

  17. Click the Organizations tab and add the organizations where you want to use the template.

  18. Click Submit to save your changes.

You can extend and customize job templates by including other templates in the template syntax. For more information, see Template Writing Reference and Job Template Examples and Extensions in Managing Hosts.

CLI procedure
  • To create a job template using a template-definition file, enter the following command:

    # hammer job-template create \
    --file "Path_to_My_Template_File" \
    --job-category "My_Category_Name" \
    --name "My_Template_Name" \
    --provider-type SSH

Changing the associated template

Procedure
  1. In the orcharhino management UI, navigate to Administer > Remote Execution Features.

  2. Select the remote execution feature of which you want to change the associated template.

  3. In the Job Template drop down menu, select a different job template.

    Changing the default job templates might break existing orcharhino workflows. Evaluate carefully why you want to change the default job template.

    Cloned templates do not receive updates when upgrading orcharhino.

Importing an Ansible playbook by name

You can import Ansible playbooks by name to orcharhino from collections installed on orcharhino Proxy.

Prerequisites
  • Ansible plug-in in orcharhino is enabled

Procedure
  1. Fetch the available Ansible playbooks using the following API request:

    # curl -X GET -H 'Content-Type: application/json' https://orcharhino.example.com/ansible/api/v2/ansible_playbooks/fetch?proxy_id=My_orcharhino_Proxy_ID
  2. Select the Ansible playbook you want to import and note its name.

  3. Import the Ansible playbook using its name:

    # curl -X PUT -H 'Content-Type: application/json' -d '{ "playbook_names": ["My_Playbook_Name"] }' https://orcharhino.example.com/ansible/api/v2/ansible_playbooks/sync?proxy_id=My_orcharhino_Proxy_ID

    You get a notification in the orcharhino management UI after the import completes.

Importing all available Ansible playbooks

You can import all the available Ansible playbooks to orcharhino from collections installed on orcharhino Proxy.

Prerequisites
  • Ansible plug-in in orcharhino is enabled

Procedure
  • Import the Ansible playbooks using the following API request:

    # curl -X PUT -H 'Content-Type: application/json' https://orcharhino.example.com/ansible/api/v2/ansible_playbooks/sync?proxy_id=My_orcharhino_Proxy_ID

    You get a notification in the orcharhino management UI after the import completes.

Configuring the fallback to any orcharhino Proxy remote execution setting in orcharhino

You can enable the Fallback to Any orcharhino Proxy setting to configure orcharhino to search for remote execution orcharhino Proxies from the list of orcharhino Proxies that are assigned to hosts. This can be useful if you need to run remote jobs on hosts that have no subnets configured or if the hosts' subnets are assigned to orcharhino Proxies that do not have the remote execution feature enabled.

If the Fallback to Any orcharhino Proxy setting is enabled, orcharhino adds another set of orcharhino Proxies to select the remote execution orcharhino Proxy from. orcharhino also selects the most lightly loaded orcharhino Proxy from the set of all orcharhino Proxies assigned to the host, such as the following:

  • DHCP, DNS and TFTP orcharhino Proxies assigned to the host’s subnets

  • DNS orcharhino Proxy assigned to the host’s domain

  • Realm orcharhino Proxy assigned to the host’s realm

  • Puppet server orcharhino Proxy

  • Puppet CA orcharhino Proxy

  • OpenSCAP orcharhino Proxy

Procedure
  1. In the orcharhino management UI, navigate to Administer > Settings.

  2. Click Remote Execution.

  3. Configure the Fallback to Any orcharhino Proxy setting.

CLI procedure
  • Enter the hammer settings set command on orcharhino to configure the Fallback to Any orcharhino Proxy setting. To set the value to true, enter the following command:

    # hammer settings set \
    --name=remote_execution_fallback_proxy \
    --value=true

Configuring the global orcharhino Proxy remote execution setting in orcharhino

By default, orcharhino searches for remote execution orcharhino Proxies in hosts' organizations and locations regardless of whether orcharhino Proxies are assigned to hosts' subnets or not. You can disable the Enable Global orcharhino Proxy setting if you want to limit the search to the orcharhino Proxies that are assigned to hosts' subnets.

If the Enable Global orcharhino Proxy setting is enabled, orcharhino adds another set of orcharhino Proxies to select the remote execution orcharhino Proxy from. orcharhino also selects the most lightly loaded remote execution orcharhino Proxy from the set of all orcharhino Proxies in the host’s organization and location to execute a remote job.

Procedure
  1. In the orcharhino management UI, navigate to Administer > Settings.

  2. Click Remote Execution.

  3. Configure the Enable Global orcharhino Proxy setting.

CLI procedure
  • Enter the hammer settings set command on orcharhino to configure the Enable Global orcharhino Proxy setting. To set the value to true, enter the following command:

    # hammer settings set \
    --name=remote_execution_global_proxy \
    --value=true

Configuring orcharhino to use an alternative directory to execute remote jobs on hosts

By default, orcharhino uses the /var/tmp directory on the client system to execute remote execution jobs. If the client system has noexec set for the /var volume or file system, you must configure orcharhino to use an alternative directory; otherwise, the remote execution job fails because the script cannot be run.

Procedure
  1. Create a new directory:

    # mkdir /My_Remote_Working_Directory
  2. Copy the SELinux context from the default var directory:

    # chcon --reference=/var /My_Remote_Working_Directory
  3. Configure the system:

    # orcharhino-installer \
    --foreman-proxy-plugin-remote-execution-script-remote-working-dir /My_Remote_Working_Directory

Altering the privilege elevation method

By default, push-based remote execution uses sudo to switch from the SSH user to the effective user that executes the script on your host. In some situations, you might need to use another method, such as su or dzdo. You can globally configure an alternative method in your orcharhino settings.

Prerequisites
  • Your user account has a role assigned that grants the view_settings and edit_settings permissions.

  • If you want to use dzdo for Ansible jobs, ensure the community.general Ansible collection, which contains the required dzdo become plug-in, is installed. For more information, see Installing collections in Ansible documentation.

Procedure
  1. Navigate to Administer > Settings.

  2. Select the Remote Execution tab.

  3. Click the value of the Effective User Method setting.

  4. Select the new value.

  5. Click Submit.
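The same change can be made with hammer. The setting name remote_execution_effective_user_method is an assumption, so verify it in your settings list first:

```shell
# Assumed setting name; verify with: hammer settings list | grep effective_user
hammer settings set \
  --name=remote_execution_effective_user_method \
  --value=su
```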

Distributing SSH keys for remote execution

For orcharhino Proxies in ssh mode, remote execution connections are authenticated using SSH. The public SSH key from orcharhino Proxy must be distributed to its attached hosts that you want to manage.

Ensure that the SSH service is enabled and running on the hosts. Configure any network or host-based firewalls to enable access to port 22.

Use one of the following methods to distribute the public SSH key from orcharhino Proxy to target hosts:

  1. Distributing SSH keys for remote execution manually.

  2. Using the orcharhino API to obtain SSH keys for remote execution.

  3. Configuring a Kickstart template to distribute SSH keys during provisioning.

  4. For new orcharhino hosts, you can deploy SSH keys to orcharhino hosts during registration using the global registration template. For more information, see Registering a Host to orcharhino Using the Global Registration Template in Managing Hosts.

orcharhino distributes SSH keys for the remote execution feature to the hosts provisioned from orcharhino by default.

If the hosts are running on Amazon Web Services, enable password authentication. For more information, see New User Accounts.

Distributing SSH keys for remote execution manually

To distribute SSH keys manually, complete the following steps:

Procedure
  • Copy the public SSH key from your orcharhino Proxy to your target host:

    # ssh-copy-id -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy.pub root@client.example.com

    Repeat this step for each target host you want to manage.

Verification
  • To confirm that the key was successfully copied to the target host, enter the following command on orcharhino Proxy:

    # ssh -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy root@client.example.com

Adding a passphrase to SSH key used for remote execution

By default, orcharhino Proxy uses a non-passphrase protected SSH key to execute remote jobs on hosts. You can protect the SSH key with a passphrase by following this procedure.

Procedure
  • On your orcharhino Server or orcharhino Proxy, use ssh-keygen to add a passphrase to your SSH key:

    # ssh-keygen -p -f ~foreman-proxy/.ssh/id_rsa_foreman_proxy
Next steps
  • Users now must use a passphrase when running remote execution jobs on hosts.

Using the orcharhino API to obtain SSH keys for remote execution

To use the orcharhino API to download the public key from orcharhino Proxy, complete this procedure on each target host.

Procedure
  1. On the target host, create the ~/.ssh directory to store the SSH key:

    # mkdir ~/.ssh
  2. Download the SSH key from orcharhino Proxy:

    # curl https://orcharhino-proxy.network2.example.com:443/ssh/pubkey >> ~/.ssh/authorized_keys
  3. Configure permissions for the ~/.ssh directory:

    # chmod 700 ~/.ssh
  4. Configure permissions for the authorized_keys file:

    # chmod 600 ~/.ssh/authorized_keys

Configuring a Kickstart template to distribute SSH keys during provisioning

You can add a remote_execution_ssh_keys snippet to your custom Kickstart template to deploy SSH keys to hosts during provisioning. Kickstart templates that orcharhino ships include this snippet by default. orcharhino copies the SSH key for remote execution to the systems during provisioning.

Procedure
  • To include the public key in newly-provisioned hosts, add the following snippet to the Kickstart template that you use:

    <%= snippet 'remote_execution_ssh_keys' %>

Configuring a keytab for Kerberos ticket granting tickets

Use this procedure to configure orcharhino to use a keytab to obtain Kerberos ticket granting tickets. If you do not set up a keytab, you must manually retrieve tickets.

Procedure
  1. Find the ID of the foreman-proxy user:

    # id -u foreman-proxy
  2. Modify the umask value so that new files have the permissions 600:

    # umask 077
  3. Create the directory for the keytab:

    # mkdir -p "/var/kerberos/krb5/user/My_User_ID"
  4. Create a keytab or copy an existing keytab to the directory:

    # cp My_Client.keytab /var/kerberos/krb5/user/My_User_ID/client.keytab
  5. Change the directory owner to the foreman-proxy user:

    # chown -R foreman-proxy:foreman-proxy "/var/kerberos/krb5/user/My_User_ID"
  6. Ensure that the keytab file is read-only:

    # chmod -wx "/var/kerberos/krb5/user/My_User_ID/client.keytab"
  7. Restore the SELinux context:

    # restorecon -RvF /var/kerberos/krb5

Configuring Kerberos authentication for remote execution

You can use Kerberos authentication to establish an SSH connection for remote execution on orcharhino hosts.

Prerequisites
  • Enroll orcharhino Server on the Kerberos server

  • Enroll the orcharhino target host on the Kerberos server

  • Configure and initialize a Kerberos user account for remote execution

  • Ensure that the foreman-proxy user on orcharhino has a valid Kerberos ticket granting ticket

Procedure
  1. To install and enable Kerberos authentication for remote execution, enter the following command:

    # orcharhino-installer \
    --foreman-proxy-plugin-remote-execution-script-ssh-kerberos-auth true
  2. To edit the default user for remote execution, in the orcharhino management UI, navigate to Administer > Settings and click the Remote Execution tab. In the SSH User row, edit the second column and add the user name for the Kerberos account.

  3. Navigate to remote_execution_effective_user and edit the second column to add the user name for the Kerberos account.

Verification
  • To confirm that Kerberos authentication is ready to use, run a remote job on the host. For more information, see Executing a Remote Job in Managing Hosts.

Setting up job templates

orcharhino provides default job templates that you can use for executing jobs. To view the list of job templates, navigate to Hosts > Templates > Job templates. If you want to use a template without making changes, proceed to Executing a Remote Job in Managing Hosts.

You can use default templates as a base for developing your own. Default job templates are locked for editing. Clone the template and edit the clone.

Procedure
  1. To clone a template, in the Actions column, select Clone.

  2. Enter a unique name for the clone and click Submit to save the changes.

Job templates use the Embedded Ruby (ERB) syntax. For more information about writing templates, see the Template Writing Reference in Managing Hosts.

Ansible considerations

To create an Ansible job template, use the following procedure and instead of ERB syntax, use YAML syntax. Begin the template with ---. You can embed an Ansible playbook YAML file into the job template body. You can also add ERB syntax to customize your YAML Ansible template. You can also import Ansible playbooks in orcharhino. For more information, see Synchronizing Repository Templates in Managing Hosts.

Parameter variables

At run time, job templates can accept parameter variables that you define for a host. Note that only the parameters visible on the Parameters tab at the host’s edit page can be used as input parameters for job templates.
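As a minimal sketch, a job template can combine a template input with a host parameter; both names below are invented for illustration, while input and host_param are standard template macros:

```erb
<%# "package_name" is a template input; "ntp_server" is a host parameter -%>
yum install -y <%= input('package_name') %>
echo "Using NTP server <%= host_param('ntp_server') %>"
```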

Executing a remote job

You can execute a job that is based on a job template against one or more hosts.

To use the CLI instead of the orcharhino management UI, see the CLI procedure.

Procedure
  1. In the orcharhino management UI, navigate to Monitor > Jobs and click Run job.

  2. Select the Job category and the Job template you want to use, then click Next.

  3. Select hosts on which you want to run the job. If you do not select any hosts, the job will run on all hosts you can see in the current context.

    If you want to run the job on a host group and all of its subgroups, it is not sufficient to select only the host group, because the job would run only on hosts directly in that group and not on hosts in subgroups. Instead, select the host group and all of its subgroups, or use this search query:

    hostgroup_fullname ~ "My_Host_Group*"

    Replace My_Host_Group with the name of the top-level host group.

  4. If required, provide inputs for the job template. Different templates have different inputs and some templates do not have any inputs. After entering all the required inputs, click Next.

  5. Optional: To configure advanced settings for the job, fill in the Advanced fields.

  6. Click Next.

  7. Schedule time for the job.

    • To execute the job immediately, keep the pre-selected Immediate execution.

    • To execute the job at a later time, select Future execution.

    • To execute the job on a regular basis, select Recurring execution.

  8. Optional: If you selected future or recurring execution, select the Query type, otherwise click Next.

    • Static query means that the job executes on the exact list of hosts that you provided.

    • Dynamic query means that the list of hosts is evaluated just before the job is executed. If you entered the list of hosts based on some filter, the results can be different from when you first used that filter.

    Click Next after you have selected the query type.

  9. Optional: If you selected future or recurring execution, provide additional details:

    • For Future execution, enter the Starts at date and time. You also have the option to select the Starts before date and time. If the job cannot start before that time, it will be canceled.

    • For Recurring execution, select the start date and time, frequency, and the condition for ending the recurring job. You can choose the recurrence to never end, to end at a certain time, or to end after a given number of repetitions. You can also add a Purpose, a special label for tracking the job. There can only be one active job with a given purpose at a time.

    Click Next after you have entered the required information.

  10. Review job details. You have the option to return to any part of the job wizard and edit the information.

  11. Click Submit to schedule the job for execution.

CLI procedure
  1. Enter the following command on orcharhino:

    # hammer settings set \
    --name=remote_execution_global_proxy \
    --value=false
  2. Find the ID of the job template you want to use:

    # hammer job-template list
  3. Show the template details to see parameters required by your template:

    # hammer job-template info --id My_Template_ID
  4. Execute a remote job with custom parameters:

    # hammer job-invocation create \
    --inputs My_Key_1="My_Value_1",My_Key_2="My_Value_2",... \
    --job-template "My_Template_Name" \
    --search-query "My_Search_Query"

    Replace My_Search_Query with the filter expression that defines hosts, for example "name ~ My_Pattern". For more information about executing remote commands with hammer, enter hammer job-template --help and hammer job-invocation --help.

Advanced settings in the job wizard

Some job templates require you to enter advanced settings. Some of the advanced settings are only visible to certain job templates. Below is the list of general advanced settings.

SSH user

A user to be used for connecting to the host through SSH.

Effective user

A user to be used for executing the job. By default it is the SSH user. If it differs from the SSH user, su or sudo, depending on your settings, is used to switch the accounts.

  • If you set an effective user in the advanced settings, Ansible sets ansible_become_user to your input value and ansible_become to true. This means that if you use the parameters become: true and become_user: My_User within a playbook, these will be overwritten by orcharhino.

  • If your SSH user and effective user are identical, orcharhino does not overwrite the become_user. Therefore, you can set a custom become_user in your Ansible playbook.
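For example, in a playbook task like the following (the user name deploy is illustrative), become_user is kept only when the SSH user and the effective user are identical:

```yaml
---
- hosts: all
  tasks:
    - name: Run a command as a custom user
      ansible.builtin.command: whoami
      become: true
      become_user: deploy  # overwritten by orcharhino if a differing effective user is set
```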

Description

A description template for the job.

Timeout to kill

Time in seconds from the start of the job after which the job should be killed if it is not finished already.

Time to pickup

Time in seconds after which the job is canceled if it is not picked up by a client. This setting only applies to hosts using pull-mqtt transport.

Password

Used if the SSH authentication method is a password instead of an SSH key.

Private key passphrase

Used if the SSH key is protected by a passphrase.

Effective user password

Used if the effective user is different from the SSH user.

Concurrency level

Defines the maximum number of jobs executed at once. This can prevent overload of system resources when you execute the job on a large number of hosts.

Execution ordering

Determines the order in which the job is executed on hosts. It can be alphabetical or randomized.

For more information about remote execution using custom users with Ansible, see become_user not working as expected in the ATIX Service Portal.

Using extended cron lines

When scheduling a cron job with remote execution, you can use an extended cron line to specify the cadence of the job. The standard cron line contains five fields that specify minute, hour, day of the month, month, and day of the week. For example, 0 5 * * * stands for every day at 5 AM.
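The five fields of a standard cron line can be made explicit with a small shell illustration (illustrative only, not part of orcharhino):

```shell
# Split a standard cron line into its five fields.
cronline='0 5 * * *'
set -f                # disable globbing so the literal * fields survive
set -- $cronline
echo "minute=$1 hour=$2 day_of_month=$3 month=$4 day_of_week=$5"
# prints: minute=0 hour=5 day_of_month=* month=* day_of_week=*
```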

The extended cron line provides the following features:

You can use # to specify a concrete week day in a month

For example:

  • 0 0 * * mon#1 specifies first Monday of the month

  • 0 0 * * fri#3,fri#4 specifies 3rd and 4th Fridays of the month

  • 0 7 * * fri#-1 specifies the last Friday of the month at 07:00

  • 0 7 * * fri#L also specifies the last Friday of the month at 07:00

  • 0 23 * * mon#2,tue specifies the 2nd Monday of the month and every Tuesday, at 23:00

You can use % to specify every n-th day of the month

For example:

  • 9 0 * * sun%2 specifies every other Sunday at 00:09

  • 0 0 * * sun%2+1 specifies every odd Sunday

  • 9 0 * * sun%2,tue%3 specifies every other Sunday and every third Tuesday

You can use & to specify that the day of the month has to match the day of the week

For example:

  • 0 0 30 * 1& specifies the 30th day of the month, but only if it is a Monday
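The day-of-week anchors above can be checked with ordinary date arithmetic. This sketch, which assumes GNU date is available, computes the date that fri#-1 (the last Friday of a month) resolves to:

```shell
# Illustrative sketch: find the last Friday of a given month (YYYY-MM).
last_friday() {
  for day in 31 30 29 28; do
    if LC_ALL=C date -d "$1-$day" '+%a' 2>/dev/null | grep -q '^Fri$'; then
      LC_ALL=C date -d "$1-$day" '+%Y-%m-%d'
      return
    fi
  done
}

last_friday 2024-05   # prints 2024-05-31, a Friday
```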

Scheduling a recurring Ansible job for a host

You can schedule a recurring job to run Ansible roles on hosts.

Prerequisites
  • Ensure you have the view_foreman_tasks, view_job_invocations, and view_recurring_logics permissions.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > All Hosts and select the target host on which you want to execute a remote job.

  2. On the Ansible tab, select Jobs.

  3. Click Schedule recurring job.

  4. Define the repetition frequency, start time, and date of the first run in the Create New Recurring Ansible Run window.

  5. Click Submit.

  6. Optional: View the scheduled Ansible job in host overview or by navigating to Ansible > Jobs.

Scheduling a recurring Ansible job for a host group

You can schedule a recurring job to run Ansible roles on host groups.

Procedure
  1. In the orcharhino management UI, navigate to Configure > Host groups.

  2. In the Actions column, select Configure Ansible Job for the host group you want to schedule an Ansible roles run for.

  3. Click Schedule recurring job.

  4. Define the repetition frequency, start time, and date of the first run in the Create New Recurring Ansible Run window.

  5. Click Submit.

Monitoring jobs

You can monitor the progress of a job while it is running. This can help in any troubleshooting that may be required.

Ansible jobs run in batches of 100 hosts, so you cannot cancel a job running on one specific host. A job completes only after the Ansible playbook has run on all hosts in the batch.

Procedure
  1. In the orcharhino management UI, navigate to Monitor > Jobs. This page is automatically displayed if you triggered the job with the Execute now setting. To monitor scheduled jobs, navigate to Monitor > Jobs and select the job run you wish to inspect.

  2. On the Job page, click the Hosts tab. This displays the list of hosts on which the job is running.

  3. In the Host column, click the name of the host that you want to inspect. This displays the Detail of Commands page where you can monitor the job execution in real time.

  4. Click Back to Job at any time to return to the Job Details page.

CLI procedure
  1. Find the ID of a job:

    # hammer job-invocation list
  2. Monitor the job output:

    # hammer job-invocation output \
    --host My_Host_Name \
    --id My_Job_ID
  3. Optional: To cancel a job, enter the following command:

    # hammer job-invocation cancel \
    --id My_Job_ID

Setting the job rate limit on orcharhino Proxy

You can limit the maximum number of active jobs on an orcharhino Proxy at a time to prevent performance spikes. A job is active from the time orcharhino Proxy first tries to notify the host about the job until the job finishes on the host.

The job rate limit only applies to pull-mqtt based jobs.

The optimal maximum number of active jobs depends on the computing resources of your orcharhino Proxy. By default, the maximum number of active jobs is unlimited.

Procedure
  • Set the maximum number of active jobs using orcharhino-installer:

    # orcharhino-installer \
    --foreman-proxy-plugin-remote-execution-script-mqtt-rate-limit MAX_JOBS_NUMBER

    For example:

    # orcharhino-installer \
    --foreman-proxy-plugin-remote-execution-script-mqtt-rate-limit 200

The text and illustrations on this page are licensed by ATIX AG under a Creative Commons Attribution–Share Alike 3.0 Unported ("CC-BY-SA") license. This page also contains text from the official Foreman documentation which uses the same license ("CC-BY-SA").