Installing orcharhino Proxy

The documentation refers to orcharhino Server and orcharhino Proxies to describe the machines running orcharhino, for example orcharhino.example.com, and their attached orcharhino Proxies, for example orcharhino-proxy.network2.example.com. Both orcharhino Server and orcharhino Proxy incorporate smart proxy functionality depending on their configuration.

orcharhino Proxies allow you to manage hosts in additional networks. You can configure your orcharhino Proxy to deliver content to managed hosts by either mirroring synchronized content from your orcharhino Server or by streaming content from orcharhino Server as requested by content hosts. Your orcharhino Proxy can also function as DNS, DHCP, and TFTP server and provide CA capabilities.

The term Smart Proxy is used throughout the orcharhino management UI. It describes the smart proxy functionality present on both your orcharhino Server and orcharhino Proxies.

When looking at the upstream documentation for Foreman and Katello, you might come across the terms Smart Proxy, Foreman proxy, or Katello proxy. These terms are used ambiguously to describe the software component or the attached host with smart proxy functionality in different variations. By contrast, Red Hat names their Foreman downstream product Satellite and any attached smart proxies Capsules.

Additionally, there are HTTP proxies as a way of relaying network traffic from one machine to another. They are often part of more advanced internal network architectures.

This guide describes the installation of an orcharhino Proxy to go along with your orcharhino Server. If you want to use orcharhino to manage hosts in additional networks, you need an orcharhino Proxy installed in each network you want to manage. This allows you to orchestrate host management across different networks, that is, networks spanning different data centres and regions.

Your orcharhino Server comes bundled with internal smart proxy functionality. It is sufficient for managing hosts in the same network your orcharhino Server is in.

Usage Scenario

There are several reasons to use an orcharhino Proxy:

  • To manage infrastructure that is not part of the same network as your orcharhino Server.

  • To manage traffic across firewalls and deploy hosts into isolated networks. This helps you simplify firewall rules and makes your setup more robust, as only the orcharhino Proxy needs to be accessible from outside its network.

  • To centrally manage infrastructure differentiated by location.

  • To reduce traffic and latency.

There are two ways to deliver content to managed hosts from orcharhino Proxies: an orcharhino Proxy can either mirror the synchronized content from orcharhino Server or relay requested content. This results in a trade-off between storage space and network traffic:

  • By default, orcharhino Proxies mirror the synchronized content from your orcharhino Server. This means that your orcharhino Proxy stores a mirrored copy of the content from orcharhino Server locally, which requires additional storage capacity comparable to that of your orcharhino Server. On the other hand, there is significantly less traffic between orcharhino Proxies and your orcharhino Server.

  • Optionally, you can enable the streamed download policy for your orcharhino Proxy. orcharhino Proxies then download content as requested by content hosts without storing it locally. Instead, all content comes directly from your orcharhino Server and is purely relayed to the managed hosts in its network. This results in additional traffic between your orcharhino Server and orcharhino Proxy, but orcharhino Proxies do not use additional disk space to cache content locally. For more information, see Changing the Download Policy for orcharhino Proxies.

Optionally, your orcharhino Proxy can also handle DHCP, DNS, and TFTP for its network.

System and Network Requirements

Your orcharhino Proxy depends on a working orcharhino Server installation. ATIX AG recommends using the same operating system for orcharhino Server and orcharhino Proxy. subscription-manager must be installed on both machines and an activation key must be available.

This guide presumes a working orcharhino installation as well as a suitable host for the orcharhino Proxy. Regardless of whether it will run on virtualised hardware or on bare metal, the orcharhino Proxy machine must meet the following requirements:

  • OS: AlmaLinux 8, Oracle Linux 8, Red Hat Enterprise Linux 8, or Rocky Linux 8. For more information, see OS requirements.

  • CPU: 4 cores minimum, 8 cores recommended

  • RAM: 20 GiB minimum, 32 GiB recommended

  • HDD 1 (/): 30 GiB minimum, 50 GiB recommended

  • HDD 2 (/var): ~ 40 GiB for each Enterprise Linux distribution; ~ 80 GiB for each Debian or Ubuntu distribution; ~ 500 GiB (or as appropriate) if you plan to maintain additional repositories or keep multiple versions of packages

If you use the streamed download policy for your orcharhino Proxy, the second disk for /var can be as small as 10 GiB.

ATIX AG does not support using third party repositories on your orcharhino Server or orcharhino Proxies. Resolving package conflicts or other issues due to third party or custom repositories is not part of your orcharhino support subscription. Please contact us if you have any questions.

Using Multiple DNS Names

If your orcharhino Proxy has different DNS names for orcharhino Server and managed hosts, you have to ensure that hostname -f shows the FQDN towards orcharhino Server. Generate the certificates archive for your orcharhino Proxy on orcharhino Server and specify all FQDNs when creating the certificates:

# foreman-proxy-certs-generate \
--foreman-proxy-fqdn My_FQDN_Towards_orcharhino_Server \
--foreman-proxy-cname My_FQDN_Towards_Managed_Hosts \
--certs-tar /root/orcharhino-proxy-certs.tar

On your orcharhino Proxy, ensure that:

  • hostname -f shows the FQDN towards orcharhino Server.

  • The --foreman-proxy-registration-url parameter of the orcharhino-installer command is set to the FQDN used by managed hosts.

  • You use --certs-cname My_FQDN_Towards_Managed_Hosts or modify the foreman-proxy-content-answers.yaml file to add additional cnames.
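As a minimal sketch, the --foreman-proxy-registration-url and --certs-cname parameters can be combined in a single installer run. The HTTPS URL format is an assumption, and the placeholder FQDN must be replaced with your own:

```shell
# orcharhino-installer \
--foreman-proxy-registration-url "https://My_FQDN_Towards_Managed_Hosts" \
--certs-cname "My_FQDN_Towards_Managed_Hosts"
```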

In addition to the system requirements, your orcharhino Proxy also needs its own domain and subnet, which you can set up in the orcharhino management UI. The network configuration and firewall rules must allow for communication from your orcharhino Server to your orcharhino Proxy and vice versa. For more information, see Port and firewall requirements in Installing orcharhino Server.

Table 1. Managed Hosts to orcharhino Proxy

Port   Protocol    SSL   Required for
53     TCP & UDP   no    DNS Services
67     UDP         no    DHCP Service
69     UDP         no    PXE boot
80     TCP         no    Anaconda, yum, templates, iPXE
443    TCP         yes   yum, Katello
5000   TCP         yes   Katello for Docker registry
8000   TCP         yes   Anaconda for downloading Kickstart templates, iPXE
8140   TCP         yes   Puppet agent to Puppet master
8443   TCP         yes   Subscription Management
9090   TCP         yes   OpenSCAP reports

orcharhino Server and orcharhino Proxy need to be reachable by name. If they are not yet resolvable through DNS, add them to each other's /etc/hosts file as follows. If you are using an HTTP proxy, ensure that it is properly configured as well.

This guide uses network1 (192.168.50.0) for your orcharhino Server (called orcharhino) and network2 (192.168.60.0) for your orcharhino Proxy (called orcharhino-proxy). orcharhino Server has the IP address 192.168.50.10. orcharhino Proxy has the IP address 192.168.60.10 in the default scenario and 192.168.70.10 in network3 when opting for streamed content.

To do so, edit the /etc/hosts file on both your orcharhino Server and your orcharhino Proxy and add the following lines:

192.168.50.10 orcharhino.network1.example.com
192.168.60.10 orcharhino-proxy.network2.example.com orcharhino-proxy

In case your orcharhino Server uses an HTTP proxy itself, it is necessary to add your orcharhino Proxy to the no_proxy entries.

  1. Edit the last line in /etc/sysconfig/httpd:

    no_proxy='localhost,127.0.0.1,orcharhino.network1.example.com,orcharhino,.network1.example.com,orcharhino-proxy.network2.example.com,.network2.example.com'
  2. Edit the last line in /etc/sysconfig/foreman-proxy:

    no_proxy='localhost,127.0.0.1,orcharhino.network1.example.com,orcharhino,.network1.example.com,orcharhino-proxy.network2.example.com,.network2.example.com'
  3. Edit the settings in orcharhino:

    To do so, click on Administer > Settings and edit HTTP(S) proxy except hosts:

    [ localhost, 127.0.0.1, orcharhino.network1.example.com, orcharhino, *.network1.example.com, orcharhino-proxy.network2.example.com, *.network2.example.com ]

If problems persist, check your networking and firewall settings.

Preparing Content for orcharhino Proxy

There are two ways to configure your orcharhino Proxy: You can either edit foreman-proxy-content-answers.yaml or pass arguments to the orcharhino-installer command.
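As a sketch of the two equivalent approaches, using the TFTP option from later in this guide as an illustration (the tftp key in the answers file is an assumption based on the sample configuration shown further below):

```shell
# Either set the value in the answers file, for example "tftp: true" in
# /etc/foreman-installer/scenarios.d/foreman-proxy-content-answers.yaml,
# and rerun the installer without arguments:
# orcharhino-installer

# Or pass the equivalent argument directly on the command line:
# orcharhino-installer --foreman-proxy-tftp true
```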

Your orcharhino Proxy requires the same packages as orcharhino Server. You can get a list of repositories by running subscription-manager repos --list-enabled on your orcharhino Server. Add these repositories to products and publish them as a composite content view. This content must be available on the host you are installing your orcharhino Proxy on.
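For example, to list the enabled repositories on your orcharhino Server:

```shell
# subscription-manager repos --list-enabled
```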

Alternatively, you can run a script on orcharhino Server to automatically create the necessary content view. It bundles the required repositories and content credentials to a product specifically made for orcharhino Proxies and creates an activation key. The script takes repositories from /etc/yum.repos.d/redhat.repo and reuses HTTP proxy settings.

Ensure that you have the necessary Oracle Linux or Red Hat Enterprise Linux subscription if you want to install orcharhino Proxy on Oracle Linux or Red Hat Enterprise Linux. Your orcharhino subscription does not include any Oracle Linux or Red Hat Enterprise Linux subscriptions. Please contact us if you need help obtaining the relevant subscriptions or have questions on how to use your existing subscriptions.

Procedure
  1. Set all necessary variables for the content for your orcharhino Proxy:

    # vim /etc/orcharhino-ansible/or_orcharhino_proxy_content_vars.yaml
  2. On your orcharhino Server, run the following script:

    # /opt/orcharhino/automation/play_orcharhino_proxy_content.sh

    You can pass extra variables and Ansible parameters when running the shell script:

    # /opt/orcharhino/automation/play_orcharhino_proxy_content.sh -e use_sca=true -v
  3. Optional: In the orcharhino management UI, navigate to Content > Products and verify that the Smart Proxy Atix product contains the packages required to install orcharhino Proxies.

This section describes the steps necessary to attach an orcharhino Proxy to your orcharhino. It consists of the following steps: installing the orcharhino Proxy packages, creating a .tar file containing certificates, copying the certificates to the orcharhino Proxy, installing the orcharhino Proxy, and setting the appropriate organization and location context.

Installing the orcharhino Proxy Packages

You can install orcharhino Proxy using an Ansible role or by manually configuring orcharhino Proxy and installing the required packages.

Using the Ansible Role or_proxy_installation
  1. Install the or_proxy_installation Ansible role on your Ansible control node, for example under /root/.ansible/roles/. You can find the Ansible role on your orcharhino Server under /usr/share/orcharhino-ansible/roles/.

  2. Create an Ansible inventory file for your orcharhino Server and orcharhino Proxy and provide an orcharhino and a proxy alias in the Ansible inventory. For more information, see Inventory aliases in the Ansible documentation.

    • If you use your orcharhino Server as Ansible control node, create an Ansible inventory file as follows:

      # cat > inventory <<EOF
      [all]
      orcharhino  ansible_connection=local
      proxy       ansible_host=orcharhino-proxy.network2.example.com
      EOF
    • If you run the Ansible playbook on another host, create an Ansible inventory file as follows:

      # cat > inventory <<EOF
      [all]
      orcharhino  ansible_host=orcharhino.example.com
      proxy       ansible_host=orcharhino-proxy.network2.example.com
      EOF
  3. Create an Ansible playbook to run the or_proxy_installation Ansible role, for example:

    # cat > playbook.yaml <<EOF
    - hosts: proxy
      vars:
        My_Optional_Ansible_Variables
      gather_facts: false
      roles:
        - "or_proxy_installation"
    EOF

    You can configure your orcharhino Proxy using Ansible variables. For a list of available variables, see /usr/share/orcharhino-ansible/roles/or_proxy_installation/README.md on your orcharhino Server.

  4. Run the Ansible playbook:

    # ansible-playbook \
    -i inventory ./playbook.yaml \
    -e "or_fqdn=orcharhino.example.com" \
    -e "or_organization=My_Organization" \
    -e "or_proxy_fqdn=orcharhino-proxy.network2.example.com"
Manual Procedure
  1. Enable the DNF modules on your orcharhino Proxy:

    # dnf module switch-to ruby:2.7
    # dnf module enable pki-core
    # dnf module enable python36 python38 python39
    # subscription-manager repo-override \
    --add module_hotfixes:1 \
    --repo ATIX_Smart_Proxy_Atix_ATIX_orcharhino_Server_EL8_orcharhino_6_9
  2. Install the orcharhino-proxy meta package to pull in the required dependencies:

    # dnf install orcharhino-proxy
  3. Create a tar file with certificates. In case your organization already has certificates in place, you will need the following four files on your orcharhino:

    • A private key: orcharhino-proxy.key

    • A certificate: orcharhino-proxy.cert

    • A certificate signing request: orcharhino-proxy.csr

    • The certificate authority’s certificate: orcharhino-proxy.ca

      In case you have no certificate signing request, you may create one with the private key and certificate by running the following command:

      # openssl x509 -x509toreq -in orcharhino-proxy.cert -out orcharhino-proxy.csr -signkey orcharhino-proxy.key

      Run the following command to compile the tar file:

      # foreman-proxy-certs-generate \
      --certs-tar "/root/orcharhino-proxy.network2.example.com-certs.tar" \
      --foreman-proxy-fqdn "orcharhino-proxy.network2.example.com" \
      --server-ca-cert /root/certs/orcharhino-proxy.ca \
      --server-cert /root/certs/orcharhino-proxy.cert \
      --server-cert-req /root/certs/orcharhino-proxy.csr \
      --server-key /root/certs/orcharhino-proxy.key

      In case there are no existing certificates, you can create them by running the following command on your orcharhino:

      # foreman-proxy-certs-generate \
      --certs-tar "orcharhino-proxy.network2.example.com-certs.tar" \
      --foreman-proxy-fqdn "orcharhino-proxy.network2.example.com"

      Save the standard output describing how to import certificates. This helps later on when installing the orcharhino Proxy.

  4. Copy the generated certificates from your orcharhino Server to your orcharhino Proxy. This allows for a secure certificate based connection:

    # scp /root/orcharhino-proxy.network2.example.com-certs.tar root@orcharhino-proxy.network2.example.com:/root/orcharhino-proxy.network2.example.com-certs.tar
  5. On your orcharhino Server, extract the OAuth token for orcharhino Proxy:

    # grep oauth_consumer_ /etc/foreman-installer/scenarios.d/katello-answers.yaml | sort -u
  6. Install orcharhino Proxy with the proper certificates:

    # orcharhino-installer \
    --certs-tar-file "/root/orcharhino-proxy.network2.example.com-certs.tar" \
    --foreman-proxy-foreman-base-url "https://orcharhino.network1.example.com" \
    --foreman-proxy-oauth-consumer-key "mGZxEukJVvdL89ySA6Ymk3bAuGSF7jJj" \
    --foreman-proxy-oauth-consumer-secret "g2tNnBR7qNHg9Gptt4XMcsFnh9c3kDSg" \
    --foreman-proxy-register-in-foreman "true" \
    --foreman-proxy-trusted-hosts "orcharhino-proxy.network2.example.com" \
    --foreman-proxy-trusted-hosts "orcharhino.network1.example.com" \
    --puppet-server-foreman-url "https://orcharhino.network1.example.com" \
    --scenario foreman-proxy-content
  7. Set the organization and location context.

    To add the organization and location to the orcharhino Proxy, go to Infrastructure > Smart Proxies and select orcharhino-proxy.network2.example.com.

    • Add Organization: your organization

    • Add Location: your location

    • Add Lifecycle Environment: Production

    • Download Policy: Immediate

      One way to distinguish between orcharhino administrators and regular users is to place your orcharhino Server and any attached orcharhino Proxies into a separate location and/or organization context.

      Alternatively, you can achieve a fine grained permissions concept using roles and filters.

  8. If you want to manage hosts running Debian or Ubuntu through your orcharhino Proxy, copy the public pulp_deb_signing.key file from orcharhino Server to your orcharhino Proxy:

    # scp /var/www/html/pub/pulp_deb_signing.key root@orcharhino-proxy.network2.example.com:/var/www/html/pub/pulp_deb_signing.key

By now, you have a fully functional orcharhino Proxy. Optionally, you can now activate DHCP and DNS services on your orcharhino Proxy.

Optional: Installing Puppet on orcharhino Proxies

Puppet is an optional plug-in for orcharhino Server and orcharhino Proxies. If you use Puppet to configure hosts, you need to install the Puppet plug-in on your orcharhino Proxies. For more information, see Enabling Puppet Integration with orcharhino.

Optional: Removing Puppet on orcharhino Proxies

Puppet is an optional plug-in for orcharhino Server and orcharhino Proxies. If you do not use Puppet to configure hosts, you can remove the Puppet plug-in from your orcharhino Proxies.

Prerequisites
  • The Puppet plug-in is installed on your orcharhino Proxy.

Procedure
  • On your orcharhino Proxy, disable the Puppet plug-in:

    # orcharhino-installer \
    --foreman-proxy-puppet false \
    --foreman-proxy-puppetca false

Changing the Download Policy for orcharhino Proxies

There are four ways in which orcharhino Proxies can handle the download of content from orcharhino Server:

  • Immediate: Synchronize content immediately from orcharhino Server onto orcharhino Proxies.

  • On Demand: Download content at time of request by a managed host onto orcharhino Proxies.

  • Inherit from Repository: Synchronize content as specified on a per-repository basis onto orcharhino Proxies.

  • Streamed: Relay content requested by content hosts from orcharhino Server without saving it on orcharhino Proxies.

Procedure
  1. Navigate to Infrastructure > Smart Proxies.

  2. Select an orcharhino Proxy and click Edit.

  3. On the Smart Proxy tab, adjust the Download Policy.

  4. Click Submit to save your changes.

Depending on your setting of Sync Smart Proxies after Content View promotion, content is automatically synchronized to orcharhino Proxies after promoting a content view. Alternatively, you can manually synchronize content from orcharhino Server to orcharhino Proxies:

Procedure
  1. Navigate to Infrastructure > Smart Proxies.

  2. Select your orcharhino Proxy.

  3. On the Overview tab, click Synchronize.

    • Select Optimized Sync to bypass any unnecessary synchronization steps.

    • Select Complete Sync to synchronize repositories even if the metadata appears to be unchanged.

Optional: Activating DHCP, DNS, and TFTP

ATIX AG recommends having orcharhino Proxies provide DHCP, DNS, and TFTP services on their respective networks.

This subsection describes the steps necessary to activate DHCP, DNS, and TFTP on orcharhino Proxies. You can enable TFTP capabilities on your orcharhino Proxy using the following command:

# orcharhino-installer --foreman-proxy-tftp true

This activates TFTP on your orcharhino Proxy, which is necessary for provisioning hosts using PXE.

To activate DHCP and DNS on your orcharhino Proxy, configure /etc/foreman-installer/scenarios.d/foreman-proxy-content-answers.yaml. Ensure that you activate DHCP and DNS simultaneously. This is a sample configuration:

#DHCP:
dhcp: true
dhcp_gateway: 192.168.60.254
dhcp_range: 192.168.60.50 192.168.60.200
dhcp_option_domain: network2.example.com

#DNS:
dns: true
dns_zone: network2.example.com
dns_reverse:
- 60.168.192.in-addr.arpa
dns_forwarder:
- 192.168.50.10

Rerun orcharhino-installer to automatically fetch the configuration from the edited YAML file. After that, you can set up your orcharhino Proxy as DHCP and DNS server for the corresponding subnet and domain.
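Because the installer fetches the edited answers file automatically, rerunning it without additional arguments is sufficient:

```shell
# orcharhino-installer
```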

Navigate to Infrastructure > Domains and select network2.example.com. Select the new orcharhino Proxy orcharhino-proxy.network2.example.com.

On Infrastructure > Subnets, select vlan60 and change the primary DNS to 192.168.60.10, IPAM to DHCP, and Boot Mode to DHCP. Under Proxies, set all entries except the last to orcharhino-proxy.network2.example.com.

Optional: Activating Remote Execution via SSH

On orcharhino Server, the remote execution plug-in is installed by default. You can install it on orcharhino Proxies as follows:

Procedure
  1. Install the plugin on your orcharhino Proxy:

    # orcharhino-installer --enable-foreman-proxy-plugin-remote-execution-script
  2. In the orcharhino management UI, navigate to Infrastructure > Subnets.

  3. Select the subnet of your orcharhino Proxy.

  4. On the Remote Execution tab, activate the orcharhino Proxy and click Submit.

    You can now schedule remote jobs on hosts in the subnet of your orcharhino Proxy.

Performing additional configuration on orcharhino Proxy

Use this chapter to configure additional settings on your orcharhino Proxy.

Configuring orcharhino Proxy for host registration and provisioning

Use this procedure to configure orcharhino Proxy so that you can register and provision hosts using your orcharhino Proxy instead of your orcharhino Server.

Procedure
  • On orcharhino Server, add the orcharhino Proxy to the list of trusted proxies.

    This is required for orcharhino to recognize hosts' IP addresses forwarded over the X-Forwarded-For HTTP header set by orcharhino Proxy. For security reasons, orcharhino recognizes this HTTP header only from localhost by default. You can enter trusted proxies as valid IPv4 or IPv6 addresses of orcharhino Proxies, or network ranges.

    Do not use a network range that is too wide, because that poses a potential security risk.

    Enter the following command. Note that the command overwrites the list that is currently stored in orcharhino. Therefore, if you have set any trusted proxies previously, you must include them in the command as well:

    # orcharhino-installer \
    --foreman-trusted-proxies "127.0.0.1/8" \
    --foreman-trusted-proxies "::1" \
    --foreman-trusted-proxies "My_IP_address" \
    --foreman-trusted-proxies "My_IP_range"

    The localhost entries are required; do not omit them.

Verification
  1. List the current trusted proxies using the full help of the orcharhino installer:

    # orcharhino-installer --full-help | grep -A 2 "trusted-proxies"
  2. Ensure that the listing contains all trusted proxies you require.

Configuring remote execution for pull client

By default, Remote Execution uses SSH as the transport mechanism for the Script provider. However, Remote Execution also offers pull-based transport, which you can use if your infrastructure prohibits outgoing connections from orcharhino Proxy to hosts.

It consists of pull-mqtt mode on orcharhino Proxy combined with a pull client running on hosts.

The pull-mqtt mode works only with the Script provider. Ansible and other providers will continue to use their default transport settings.

The mode is configured per orcharhino Proxy: some orcharhino Proxies can use pull-mqtt mode while others use SSH. In that case, one remote job on a given host might use the pull client while the next job on the same host uses SSH. To avoid this scenario, configure all orcharhino Proxies to use the same mode.

Procedure
  1. Enable the pull-based transport on each relevant orcharhino Proxy:

    # orcharhino-installer --foreman-proxy-plugin-remote-execution-script-mode pull-mqtt
  2. Configure the firewall to allow MQTT service:

    # firewall-cmd --add-service=mqtt
  3. Make the changes persistent:

    # firewall-cmd --runtime-to-permanent
  4. In pull-mqtt mode, hosts subscribe for job notifications to the orcharhino Proxy through which they are registered. Therefore, it is recommended to ensure that orcharhino Server sends remote execution jobs to that same orcharhino Proxy. To do this, in the orcharhino management UI, navigate to Administer > Settings. On the Content tab, set the value of Prefer registered through orcharhino Proxy for remote execution to Yes.

  5. After you set up the pull-based transport on orcharhino Proxy, you must also configure it on each host. For more information, see Transport Modes for Remote Execution in Managing Hosts.

Enabling OpenSCAP on orcharhino Proxies

On orcharhino Server and the integrated orcharhino Proxy of your orcharhino Server, OpenSCAP is enabled by default. To use the OpenSCAP plugin and content on external orcharhino Proxies, you must enable OpenSCAP on each orcharhino Proxy.

Procedure
  • To enable OpenSCAP, enter the following command:

    # orcharhino-installer \
    --enable-foreman-proxy-plugin-openscap \
    --foreman-proxy-plugin-openscap-ansible-module true \
    --foreman-proxy-plugin-openscap-puppet-module true

    If you want to use Puppet to deploy compliance policies, you must enable it first. For more information, see Configuring Hosts Using Puppet.

Adding lifecycle environments to orcharhino Proxies

If your orcharhino Proxy has the content functionality enabled, you must add an environment so that orcharhino Proxy can synchronize content from orcharhino Server and provide content to host systems.

Do not assign the Library lifecycle environment to your orcharhino Proxy because it triggers an automated orcharhino Proxy sync every time the CDN updates a repository. This might consume significant system resources on orcharhino Proxies, network bandwidth between orcharhino Server and orcharhino Proxies, and available disk space on orcharhino Proxies.

You can use Hammer CLI on orcharhino Server or the orcharhino management UI.

Procedure
  1. In the orcharhino management UI, navigate to Infrastructure > orcharhino Proxies, and select the orcharhino Proxy that you want to add a lifecycle environment to.

  2. Click Edit and click the Lifecycle Environments tab.

  3. From the left menu, select the lifecycle environments that you want to add to orcharhino Proxy and click Submit.

  4. To synchronize the content on the orcharhino Proxy, click the Overview tab and click Synchronize.

  5. Select either Optimized Sync or Complete Sync.

    For definitions of each synchronization type, see Recovering a Repository.

CLI procedure
  1. To display a list of all orcharhino Proxies, on orcharhino Server, enter the following command:

    # hammer proxy list

    Note the ID of the orcharhino Proxy to which you want to add a lifecycle environment.

  2. Using the ID, verify the details of your orcharhino Proxy:

    # hammer proxy info \
    --id My_orcharhino_Proxy_ID
  3. To view the lifecycle environments available for your orcharhino Proxy, enter the following command and note the ID and the organization name:

    # hammer proxy content available-lifecycle-environments \
    --id My_orcharhino_Proxy_ID
  4. Add the lifecycle environment to your orcharhino Proxy:

    # hammer proxy content add-lifecycle-environment \
    --id My_orcharhino_Proxy_ID \
    --lifecycle-environment-id My_Lifecycle_Environment_ID \
    --organization "My_Organization"

    Repeat for each lifecycle environment you want to add to orcharhino Proxy.

  5. Synchronize the content from orcharhino to orcharhino Proxy.

    • To synchronize all content from your orcharhino Server environment to orcharhino Proxy, enter the following command:

      # hammer proxy content synchronize \
      --id My_orcharhino_Proxy_ID
    • To synchronize a specific lifecycle environment from your orcharhino Server to orcharhino Proxy, enter the following command:

      # hammer proxy content synchronize \
      --id My_orcharhino_Proxy_ID \
      --lifecycle-environment-id My_Lifecycle_Environment_ID
    • To synchronize all content from your orcharhino Server to your orcharhino Proxy without checking metadata:

      # hammer proxy content synchronize \
      --id My_orcharhino_Proxy_ID \
      --skip-metadata-check true

      This is equivalent to selecting Complete Sync in the orcharhino management UI.

Enabling power management on hosts

To perform power management tasks on hosts using the intelligent platform management interface (IPMI) or a similar protocol, you must enable the baseboard management controller (BMC) module on orcharhino.

Procedure
  • To enable BMC, enter the following command:

    # orcharhino-installer \
    --foreman-proxy-bmc "true" \
    --foreman-proxy-bmc-default-provider "freeipmi"

Configuring DNS, DHCP, and TFTP on orcharhino

To configure the DNS, DHCP, and TFTP services on orcharhino, use the orcharhino-installer command with the options appropriate for your environment.

Any changes to the settings require entering the orcharhino-installer command again. You can enter the command multiple times and each time it updates all configuration files with the changed values.

Prerequisites
  • You must have the correct network name (dns-interface) for the DNS server.

  • You must have the correct interface name (dhcp-interface) for the DHCP server.

  • Contact your network administrator to ensure that you have the correct settings.

Procedure
  • Enter the orcharhino-installer command with the options appropriate for your environment. The following example shows configuring full provisioning services:

    # orcharhino-installer \
    --foreman-proxy-dns true \
    --foreman-proxy-dns-managed true \
    --foreman-proxy-dns-zone example.com \
    --foreman-proxy-dns-reverse 2.0.192.in-addr.arpa \
    --foreman-proxy-dhcp true \
    --foreman-proxy-dhcp-managed true \
    --foreman-proxy-dhcp-range "192.0.2.100 192.0.2.150" \
    --foreman-proxy-dhcp-gateway 192.0.2.1 \
    --foreman-proxy-dhcp-nameservers 192.0.2.2 \
    --foreman-proxy-tftp true \
    --foreman-proxy-tftp-managed true \
    --foreman-proxy-tftp-servername 192.0.2.3

You can monitor the progress of the orcharhino-installer command displayed in your prompt. You can view the logs in /var/log/foreman-installer/katello.log.

Additional resources
  • For more information about the orcharhino-installer command, enter orcharhino-installer --help.

  • For more information about configuring DHCP, DNS, and TFTP services, see Configuring Network Services in Provisioning Hosts.

You can configure orcharhino Proxy to manage DHCP, DNS, and TFTP. For more information, see Configuring External Services.

orcharhino Proxy scalability considerations

The maximum number of orcharhino Proxies that orcharhino Server can support has no fixed limit. Tests have shown that an orcharhino Server can support 17 orcharhino Proxies with 2 vCPUs. However, scalability is highly variable, especially when managing Puppet clients.

orcharhino Proxy scalability when managing Puppet clients depends on the number of CPUs, the run-interval distribution, and the number of Puppet managed resources. orcharhino Proxy has a limitation of 100 concurrent Puppet agents running at any single point in time. Running more than 100 concurrent Puppet agents results in a 503 HTTP error.

For example, assuming that Puppet agent runs are evenly distributed with fewer than 100 concurrent Puppet agents running at any single point during a run-interval, an orcharhino Proxy with 4 CPUs supports a maximum of 1250 – 1600 Puppet clients with a moderate workload of 10 Puppet classes assigned to each Puppet client. Depending on the number of Puppet clients required, the orcharhino installation can scale out the number of orcharhino Proxies to support them.

If you want to scale your orcharhino Proxy when managing Puppet clients, the following assumptions are made:

  • There are no external Puppet clients reporting directly to the integrated orcharhino Proxy of your orcharhino Server.

  • All other Puppet clients report directly to an external orcharhino Proxy.

  • There is an evenly distributed run-interval of all Puppet agents.

Deviating from the even distribution increases the risk of overloading orcharhino Server. The limit of 100 concurrent requests applies.
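As a rough, back-of-the-envelope sketch of how the 100-concurrent-agent limit bounds the client count: with evenly distributed agent runs, concurrency is approximately the number of clients multiplied by the run duration and divided by the run interval. The run interval and per-run duration below are illustrative assumptions, not values stated in this guide:

```shell
#!/bin/sh
# Estimate an upper bound on Puppet clients per orcharhino Proxy,
# assuming agent runs are evenly distributed over the run interval.
run_interval=1800      # seconds; the Puppet default run interval is 30 minutes
run_duration=60        # seconds per agent run (assumed, workload-dependent)
concurrent_limit=100   # hard limit of concurrent Puppet agents per proxy
# concurrency ~ clients * run_duration / run_interval, solved for clients:
max_clients=$(( concurrent_limit * run_interval / run_duration ))
echo "$max_clients"
```

Longer agent runs (for example, more Puppet managed resources per host) lower this bound, which matches the downward trend in the tables below.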

The following table describes the scalability limits using the recommended 4 CPUs.

Table 2. Puppet scalability using 4 CPUs

Puppet Managed Resources per Host   Run-Interval Distribution
1                                   3000 – 2500
10                                  2400 – 2000
20                                  1700 – 1400

The following table describes the scalability limits using the minimum 2 CPUs.

Table 3. Puppet scalability using 2 CPUs

Puppet Managed Resources per Host   Run-Interval Distribution
1                                   1700 – 1450
10                                  1500 – 1250
20                                  850 – 700

The text and illustrations on this page are licensed by ATIX AG under a Creative Commons Attribution Share Alike 4.0 International ("CC BY-SA 4.0") license. This page also contains text from the official Foreman documentation which uses the same license ("CC BY-SA 4.0").