orcharhino Security Guide

orcharhino uses SELinux and firewalld.

Firewall Configuration

This is not an exhaustive introduction or documentation of firewalld. Please refer to the official firewalld documentation or the firewalld documentation of your operating system vendor before making any changes to your firewall configuration.

orcharhino uses firewalld. firewalld is enabled by default on every Kickstart installation and is available on CentOS, Oracle Linux, and Red Hat Enterprise Linux version 7 or newer. Enable the firewall daemon by running systemctl enable firewalld and start or stop it with systemctl start firewalld or systemctl stop firewalld. Running systemctl status firewalld shows whether the firewall service is currently running; firewall-cmd --help shows the built-in documentation. firewalld relies on zones and services:

  • Zones define the level of trust of a connection. An interface, source, or connection belongs to exactly one zone; one zone can be used by many interfaces, sources, or connections. By default, zone example files are located in /usr/lib/firewalld/zones/, with each file describing an individual zone. firewall-cmd --list-all-zones lists all available zones on your machine. Refer to the official firewalld documentation or the firewalld documentation of your operating system vendor for more information about zones.

  • Services bundle ports and protocols and allow the corresponding traffic. By default, the SSH, DNS, HTTP, and HTTPS services are added to the public zone. A tailor-made orcharhino service is defined in /usr/lib/firewalld/services/orcharhino.xml and specifies several ports and protocols, for example port 80 with the TCP protocol for the orcharhino web server.

    Do not overwrite /usr/lib/firewalld/services/orcharhino.xml because this file is managed by orcharhino itself and is replaced with every upgrade.
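
You can inspect the zones and services firewalld knows about, including the shipped orcharhino service definition, with the following commands:

# firewall-cmd --list-all-zones
# firewall-cmd --get-services
# cat /usr/lib/firewalld/services/orcharhino.xml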

Enabling orcharhino firewalld Service

You can enable the orcharhino service in firewalld on your orcharhino Server manually:

# firewall-cmd --permanent --zone=internal --add-service=orcharhino

The internal firewall zone is created by the orcharhino installer. Run firewall-cmd --get-active-zones to see all currently active zones. The --permanent option makes your change persistent across reboots; note that permanent changes only take effect after reloading the firewalld configuration.
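
A minimal sequence to enable the service and verify the result might look as follows:

# firewall-cmd --permanent --zone=internal --add-service=orcharhino
# firewall-cmd --reload
# firewall-cmd --zone=internal --list-services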

Adding a Custom Service

firewall-cmd --zone=internal --list-all lists all services of the internal zone. firewall-cmd --permanent --zone=work --add-service=orcharhino enables the orcharhino service for the work zone permanently, that is, persistent across reboots. You can bind a network interface to the zone with the --add-interface=<INTERFACE> option, for example --add-interface=eth0. Reload the firewalld configuration afterwards: firewall-cmd --complete-reload. Revert the change by running firewall-cmd --permanent --zone=work --remove-service=orcharhino. You can add or remove arbitrary services like HTTP in the same way. Refer to the official firewalld documentation or the firewalld documentation of your operating system vendor for more information about services.
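
For example, the complete workflow for the work zone could look as follows; the interface name eth0 is only an example and depends on your system:

# firewall-cmd --permanent --zone=work --add-service=orcharhino
# firewall-cmd --permanent --zone=work --add-interface=eth0
# firewall-cmd --complete-reload
# firewall-cmd --zone=work --list-all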

Opening a Custom Port

You can open or close specific ports on your orcharhino Server. For example, firewall-cmd --permanent --remove-port=5000/tcp closes port 5000 with the TCP protocol, which Katello uses for its Docker registry. Run firewall-cmd --complete-reload to reload the firewalld configuration. Revert removing a port by using the --add-port=<PORT>/<PROTOCOL> option.
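
For example, to open port 5000 with the TCP protocol in the internal zone and activate the change:

# firewall-cmd --permanent --zone=internal --add-port=5000/tcp
# firewall-cmd --complete-reload

Closing the port again uses the matching --remove-port option:

# firewall-cmd --permanent --zone=internal --remove-port=5000/tcp
# firewall-cmd --complete-reload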

Altering the Firewall Configuration

You can change the firewall configuration by adding a custom service. Write a custom service definition and import it by running firewall-cmd --permanent --new-service-from-file=my_service.xml --name=my_service. Then add the new service to the internal zone with firewall-cmd --permanent --zone=internal --add-service=my_service and reload the firewalld configuration: firewall-cmd --complete-reload.
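
A minimal custom service definition could look as follows; the service name my_service and port 8443 are examples only:

<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>my_service</short>
  <description>Custom service for an example application on port 8443</description>
  <port protocol="tcp" port="8443"/>
</service>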

Do not remove the SSH service unless you have other means of accessing your orcharhino Server.

Restricting User Access

You can restrict user access to orcharhino via roles and filters.

You can also restrict access using a location and organization context. This limits the user’s access to orcharhino’s resources, e.g. hosts and job templates.

Dedicated Remote Execution User

If you want to run job templates on the orcharhino Server or orcharhino Proxies, orcharhino's remote execution feature requires SSH access to them. To simplify setting up the required SSH access, ATIX provides the orcharhino-rex-user RPM package. For more information, see configuring and setting up remote jobs in the Managing Hosts guide.

Installing the orcharhino-rex-user RPM package allows any sufficiently privileged orcharhino management UI user to run arbitrary commands against the orcharhino server as root. By installing this package, you are opting in to the security implications.

Mitigation strategies include:

  • Restricting job template access for any regular orcharhino users.

  • Adopting more restrictive sudo rules for the or-rex user in /etc/sudoers.d/or_rex_user. Note that future orcharhino updates may revert your local changes.

  • Uninstalling the orcharhino-rex-user package after use (yum remove orcharhino-rex-user). This will completely revert the changes introduced by installing the package.
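
As an example of the second mitigation, a more restrictive rule in /etc/sudoers.d/or_rex_user could limit or-rex to a fixed set of commands instead of unrestricted root access; the command paths below are hypothetical and depend on which job templates you actually run:

or-rex ALL=(root) NOPASSWD: /usr/bin/systemctl restart httpd, /usr/bin/dnf -y update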

Install the package to set up the or-rex Linux system user, along with sudo rules and the required authorized_keys file:

# yum install -y orcharhino-rex-user

You will also need to set a host parameter for orcharhino’s remote execution features to use the new user:

  1. Navigate to Hosts > All Hosts.

  2. Select the host you have installed the user on, e.g. orcharhino.example.com or orcharhino-proxy.example.com.

  3. Click the Edit button.

  4. On the Parameters tab, add the host parameter remote_execution_ssh_user of type string and with value or-rex.

Your sufficiently privileged orcharhino users can now run job templates against the relevant target host.
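
Alternatively, if the hammer command line tool is installed, you can set the host parameter from the command line; the host name is an example:

# hammer host set-parameter --host orcharhino.example.com --name remote_execution_ssh_user --value or-rex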

Running Remote Execution Jobs on orcharhino Server

When you execute a remote job targeting your orcharhino Server, ensure that your orcharhino Server is in a subnet with the correct smart proxy set for remote execution. This prevents remote execution jobs that run on the orcharhino Server itself from trying every other orcharhino Proxy first. It saves a lot of time, especially in scenarios with multiple orcharhino Proxies, because you no longer have to wait until the remote job has run into the SSH timeout for each orcharhino Proxy.

Procedure
  1. Navigate to Hosts > All Hosts and select your orcharhino Server.

  2. On the Interfaces tab, click Edit.

  3. Select a subnet in the IPv4 Subnet drop down menu.

  4. Navigate to Infrastructure > Subnets and select the subnet of your orcharhino Server.

  5. On the Remote Execution tab, select your orcharhino Server.

  6. Append the public SSH key of the foreman-proxy user of the orcharhino Server to the list of authorized keys of the root user of the orcharhino Server:

    # cat /var/lib/foreman-proxy/ssh/id_rsa_foreman_proxy.pub >> /root/.ssh/authorized_keys

    This allows all orcharhino users who can run job templates to execute arbitrary commands as the root user on your orcharhino Server.

  7. Remove the public SSH key from the /root/.ssh/authorized_keys file after running the job template:

    # sed -i /foreman-proxy@/d /root/.ssh/authorized_keys

The following procedure allows you to run remote execution on hosts without a subnet configuration.

Procedure
  1. Navigate to Administer > Settings.

  2. On the RemoteExecution tab, ensure Enable Global Proxy and Fallback to Any Proxy are set to their default values.

    If you allow a fallback to any orcharhino Proxy, remote execution jobs try every smart proxy with remote execution enabled. For example, if you have ten orcharhino Proxies, remote execution attempts to connect to all ten of them, and only after each orcharhino Proxy has reported whether it can reach the target host does it select a working orcharhino Proxy and start the job. This means that the remote execution job has to wait for each unsuccessful connection to run into a timeout.