Tuning orcharhino

Introduction to performance tuning

This document provides guidelines for tuning orcharhino for performance and scalability.

Performance tuning quickstart

You can tune your orcharhino Server based on expected host counts and hardware allocation using the built-in tuning profiles that are available through the orcharhino-installer --tuning flag. For more information, see Tuning orcharhino Server with Predefined Profiles in Installing orcharhino Server.

There are five sizes provided, based on estimates of the number of hosts that your orcharhino manages. You can find the specific tuning settings for each profile in the configuration files contained in /usr/share/foreman-installer/config/foreman.hiera/tuning/sizes.

Name                Number of hosts   Recommended RAM   Recommended Cores
default             0 – 5000          20 GiB            4
medium              5000 – 10000      32 GiB            8
large               10000 – 20000     64 GiB            16
extra-large         20000 – 60000     128 GiB           32
extra-extra-large   60000+            256 GiB+          48


  1. Select an installation size: default, medium, large, extra-large, or extra-extra-large. The default value is default.

  2. Run orcharhino-installer:

    # orcharhino-installer --tuning "My_Installation_Size"
  3. Optional: Run a health check. For more information, see Applying configurations.

  4. Optional: Tune the Puma application server directly. For more information, see Puma tunings.

System requirements for tuning

You can find the hardware and software requirements in Preparing your Environment for Installation in Installing orcharhino Server.

Determining hardware and OS configuration


CPU

The more physical cores available to orcharhino, the higher the throughput it can achieve for its tasks. Some orcharhino components, such as Puppet and PostgreSQL, are CPU-intensive applications and benefit greatly from a higher number of available CPU cores.


Memory

The more memory available on the system running orcharhino, the better the response times for orcharhino operations. Because orcharhino uses PostgreSQL as its database solution, any additional memory, coupled with the tunings below, improves application response times due to increased data retention in memory.


Disk

Because orcharhino performs heavy IOPS due to repository synchronization, package data retrieval, and high-frequency database updates for the subscription records of content hosts, install orcharhino on high-speed SSDs to avoid performance bottlenecks caused by increased disk reads and writes. orcharhino requires disk IO at or above 60 – 80 MB/s of average throughput for read operations. Anything below this value can have severe implications for the operation of orcharhino. orcharhino components such as PostgreSQL benefit from SSDs due to their lower latency compared to HDDs.


Network

The communication between orcharhino Server and orcharhino Proxies is impacted by network performance. A reliable network with minimal jitter and low latency is required for hassle-free operations such as orcharhino Server and orcharhino Proxy synchronization (at a minimum, ensure it is not causing connection resets or similar issues).

Server Power Management

Your server is likely configured to conserve power by default. While this is a good approach to keep maximum power consumption in check, it has the side effect of lowering the performance that orcharhino can achieve. For a server running orcharhino, set the BIOS so that the system runs in performance mode to boost the maximum performance levels that orcharhino can achieve.
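
In addition to the BIOS setting, you can check from the operating system whether a performance-oriented CPU frequency policy is in effect. This is an illustrative check only and assumes the cpupower utility from the kernel-tools package is installed:

# cpupower frequency-info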

Benchmarking disk performance

We are working to update orcharhino-maintain so that it warns users only when its internal quick storage benchmark results in numbers below our recommended throughput.

We are also working on an updated benchmark script that you can run (and that will likely be integrated into orcharhino-maintain in the future) to get more accurate, real-world storage information.

  • You may have to temporarily reduce the RAM in order to run the I/O benchmark. For example, if your orcharhino Server has 256 GiB RAM, the test would require 512 GiB of storage to run. As a workaround, you can add the mem=20G kernel option in grub during system boot to temporarily reduce the size of the RAM; see the example after this list. The benchmark creates a file twice the size of the RAM in the specified directory and executes a series of storage I/O tests against it. The size of the file ensures that the test is not just testing the filesystem caching. If you benchmark other filesystems, for example smaller volumes such as the PostgreSQL storage, you might have to reduce the RAM size as described above.

  • If you are using different storage solutions such as SAN or iSCSI, you can expect different performance.

  • ATIX AG recommends that you stop all services before executing this script; you will be prompted to do so.
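
As an illustrative example only (assuming an Enterprise Linux system with grubby; adapt to your boot loader), you could apply and later remove the mem=20G kernel option like this:

# grubby --update-kernel=DEFAULT --args="mem=20G"
# reboot

After the benchmark, remove the option again and reboot:

# grubby --update-kernel=DEFAULT --remove-args="mem=20G"
# reboot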

This test does not use direct I/O and will utilize file caching as normal operations would.

You can find the first version of our storage-benchmark script. To execute it, download the script to your orcharhino Server, make it executable, and run:

# ./storage-benchmark /var/lib/pulp

As noted in the README block in the script, you generally want to see an average of 100 MB/s or higher in the tests below:

  • Local SSD-based storage should give values of 600 MB/s or higher.

  • Spinning disks should give values in the range of 100 – 200 MB/s or higher.

If you see values below this, please open a support ticket for assistance.

Enabling tuned profiles

On bare metal, ATIX AG recommends running the throughput-performance tuned profile on orcharhino Server and orcharhino Proxies. On virtual machines, ATIX AG recommends running the virtual-guest profile.

  1. Check if tuned is running:

    # systemctl status tuned
  2. If tuned is not running, enable it:

    # systemctl enable --now tuned
  3. Optional: View a list of available tuned profiles:

    # tuned-adm list
  4. Enable a tuned profile depending on your scenario:

    # tuned-adm profile "My_Tuned_Profile"

Disable Transparent Hugepage

Transparent Hugepage is a memory management technique used by the Linux kernel to reduce the overhead of using the Translation Lookaside Buffer (TLB) by using larger memory pages. Because databases tend to have sparse rather than contiguous memory access patterns, database workloads often perform poorly when Transparent Hugepage is enabled. To improve PostgreSQL and Redis performance, disable Transparent Hugepage. In deployments where the databases run on separate servers, there may be a small benefit to using Transparent Hugepage on the orcharhino Server only.
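
To check the current Transparent Hugepage setting, and as one possible way to disable it persistently (an example assuming a grubby-managed boot loader; a reboot is required for the kernel option to take effect):

# cat /sys/kernel/mm/transparent_hugepage/enabled
# grubby --update-kernel=ALL --args="transparent_hugepage=never"
# reboot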

Configuring orcharhino for performance

orcharhino comes with a number of components that communicate with each other. You can tune these components independently of each other to achieve the maximum possible performance for your scenario.

You will see no significant performance difference between orcharhino installed on Enterprise Linux 7 and Enterprise Linux 8.

Applying configurations

The following sections suggest various tunables and how to apply them. Always test changes in a non-production environment first, with a valid backup and a proper outage window, because in most cases an orcharhino restart is required.

It is also good practice to set up monitoring before applying any change, as it allows you to evaluate the effect of the change. Our testing environment might differ from what you will see, although we try hard to mimic a real-world environment.

Changing systemd service files

If you have changed a systemd service file, notify the systemd daemon to reload the configuration:

# systemctl daemon-reload

Restart orcharhino services:

# orcharhino-maintain service restart
Changing configuration files

If you have changed a configuration file such as /etc/foreman-installer/custom-hiera.yaml, rerun the installer to apply your changes:

# orcharhino-installer
Running installer with additional options

If you need to rerun the installer with new options added:

# orcharhino-installer new options
Checking basic sanity of the setup

Optional: After any change, run this quick orcharhino health-check:

# orcharhino-maintain health check

Puma tunings

Puma is a Ruby application server used to serve Foreman-related requests to clients. For any orcharhino configuration that is expected to handle a large number of clients or frequent operations, it is important that Puma is tuned appropriately.

Puma threads

The number of Puma threads (per Puma worker) is configured using two values: threads_min and threads_max.

The value of threads_min determines how many threads each worker spawns at worker start. Then, as concurrent requests arrive and more threads are needed, each worker spawns more threads, up to the threads_max limit.

We recommend setting threads_min to the same value as threads_max, because having fewer minimum Puma threads leads to higher memory usage on your orcharhino Server.

For example, we compared two setups using a concurrent registrations test. Both ran on an orcharhino VM with 8 CPUs and 40 GiB RAM; they differed only in the threads_min setting (0 versus 16).

Setting the minimum Puma threads to 16 results in about 12% less memory usage as compared to threads_min=0.

Puma workers and threads auto-tuning

If you do not provide any Puma workers and threads values to orcharhino-installer, or they are not present in your orcharhino configuration, orcharhino-installer configures a balanced number of workers. It follows this formula:

min(CPU_COUNT * 1.5, RAM_IN_GB - 1.5)

This should be fine for most cases, but with some usage patterns tuning is needed, for example to limit the amount of resources dedicated to Puma so that other orcharhino components can use them. Each Puma worker consumes approximately 1 GiB of RAM.
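
For example, on an orcharhino Server with 8 CPUs and 40 GiB RAM, the formula yields min(8 * 1.5, 40 - 1.5) = min(12, 38.5) = 12 Puma workers, which at roughly 1 GiB per worker leaves most of the memory available for the other orcharhino components.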

View your current orcharhino Server settings
# cat /etc/systemd/system/foreman.service.d/installer.conf
View the currently active Puma workers
# systemctl status foreman

Manually tuning Puma workers and threads count

If you decide not to rely on Puma workers and threads auto-tuning, you can apply custom numbers for these tunables. In the example below, we are using 2 workers with 5 minimum and 5 maximum threads:

# orcharhino-installer \
--foreman-foreman-service-puma-workers=2 \
--foreman-foreman-service-puma-threads-min=5 \
--foreman-foreman-service-puma-threads-max=5

Apply your changes to orcharhino Server. For more information, see Applying configurations.

Puma workers and threads recommendations

To recommend thread and worker configurations for the different tuning profiles, we conducted Puma tuning testing on orcharhino. The main test was concurrent registration with different numbers of workers and threads. Our recommendation is based purely on concurrent registration performance, so it might not reflect your exact use case. For example, if your setup is very content oriented with many publishes and promotes, you might want to limit the resources consumed by Puma in favor of Pulp and PostgreSQL.

Name                Number of hosts   RAM        Cores   Recommended Puma Threads (min & max)   Recommended Puma Workers
default             0 – 5000          20 GiB     4       16                                     4 – 6
medium              5000 – 10000      32 GiB     8       16                                     8 – 12
large               10000 – 20000     64 GiB     16      16                                     12 – 18
extra-large         20000 – 60000     128 GiB    32      16                                     16 – 24
extra-extra-large   60000+            256 GiB+   48      16                                     20 – 26

Tuning the number of workers is the more important aspect here, and in some cases we have seen up to a 52% performance increase. Although the installer uses 5 min/max threads by default, we recommend 16 threads with all the tuning profiles in the table above, because we have seen up to a 23% performance increase with 16 threads (14% for 8 threads and 10% for 32 threads) compared to a setup with 4 threads.

To arrive at these recommendations, we used a concurrent registrations test case, which is a very specific use case. Results can differ on your orcharhino, which might have a more balanced use case (not only registrations). Keeping the default of 5 min/max threads is a good choice as well.

These recommendations are based on measurements that compared a fixed 4 workers with 4, 8, 16, and 32 threads.

Use 4 – 6 workers on a default setup (4 CPUs). We have seen about 25% higher performance with 5 workers compared to 2 workers, but 8% lower performance with 8 workers compared to 2 workers. The measured configurations used 2, 4, 6, and 8 workers, each with 16 threads.

Use 8 – 12 workers on a medium setup (8 CPUs). The measured configurations used 2, 4, 8, 12, and 16 workers, each with 16 threads.

Use 16 – 24 workers on a 32 CPU setup. This was tested on a 90 GiB RAM machine, and memory turned out to be a limiting factor because the system started swapping; a proper extra-large setup should have 128 GiB. A higher number of workers was problematic at the higher registration concurrency levels we tested, so we cannot recommend it. The measured configurations used 4, 8, 16, 24, 32, and 48 workers, each with 16 threads; the 32-worker and 48-worker configurations produced too many failures.

Configuring Puma workers

If you have enough CPUs, adding more workers adds more performance. For example, we have compared orcharhino setups with 8 and 16 CPUs:

Table 1. orcharhino-installer options used to test the effect of worker count: one orcharhino VM with 8 CPUs and 40 GiB RAM and one orcharhino VM with 16 CPUs and 40 GiB RAM.

In the 8 CPU setup, changing the number of workers from 2 to 16 improved concurrent registration time by 36%. In the 16 CPU setup, the same change resulted in a 55% improvement.

Adding more workers can also help with the total registration concurrency orcharhino can handle. In our measurements, a setup with 2 workers was able to handle up to 480 concurrent registrations; adding more workers improved the situation.

Configuring Puma threads

More threads allow hosts to be registered in parallel in a shorter time. For example, we compared two setups that both ran on an orcharhino VM with 8 CPUs and 40 GiB RAM; they used different combinations of workers and threads while keeping the same total number of threads.

Using more workers with the same total number of threads results in about an 11% speedup in a highly concurrent registration scenario. Moreover, adding more workers did not consume more CPU and RAM but delivered more performance.

Configuring Puma DB pool

The effective value of $db_pool is automatically set to equal $foreman::foreman_service_puma_threads_max. More precisely, it is the maximum of $foreman::db_pool and $foreman::foreman_service_puma_threads_max, but both default to 5, so any increase of the maximum threads above 5 automatically increases the database connection pool by the same amount.

If you encounter the error ActiveRecord::ConnectionTimeoutError: could not obtain a connection from the pool within 5.000 seconds (waited 5.006 seconds); all pooled connections were in use in /var/log/foreman/production.log, you might want to increase this value.

View current db_pool setting
# grep pool /etc/foreman/database.yml
  pool: 5
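
As an illustrative check (not part of the official procedure), you can also see how many connections are currently open per database, which helps judge whether the pool is being exhausted:

# su - postgres -c "psql -c 'SELECT datname, count(*) FROM pg_stat_activity GROUP BY datname;'"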

Manually tuning db_pool

If you decide not to rely on the automatically configured value, you can apply a custom number like this:

# orcharhino-installer --foreman-db-pool 10

Apply your changes to orcharhino Server. For more information, see Applying configurations.

Apache HTTPD performance tuning

Apache httpd forms a core part of orcharhino and acts as the web server handling requests made through the orcharhino management UI or the exposed APIs. To increase the concurrency of operations, httpd is the first point where tuning can help boost the performance of your orcharhino.

Configuring the open files limit for Apache HTTPD

With the tuning in place, Apache httpd can easily open a large number of file descriptors on the server, which may exceed the default limit on most Linux systems. To avoid issues caused by exceeding the maximum open files limit on the system, create the following file and directory and set the contents of the file:

  1. Set the maximum open files limit in /etc/systemd/system/httpd.service.d/limits.conf, as shown in the example after this procedure.

  2. Apply your changes to orcharhino Server. For more information, see Applying configurations.
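
A minimal drop-in file might look like the following; the LimitNOFILE value shown here is only an example and should be sized according to your expected load:

[Service]
LimitNOFILE=640000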

Tuning Apache httpd child processes

By default, httpd uses the event request handling mechanism. When the number of requests to httpd exceeds the maximum number of child processes that can be launched to handle the incoming connections, httpd raises an HTTP 503 Service Unavailable error. While httpd is out of processes to handle incoming connections, this can also result in failures of multiple orcharhino components, because some components depend on the availability of httpd processes.

You can adapt the configuration of httpd event to handle more concurrent requests based on your expected peak load.

Configuring these numbers in custom-hiera.yaml locks them: if you later change them using orcharhino-installer --tuning=My_Tuning_Option, your custom-hiera.yaml will overwrite that setting. Set your own numbers only if you have a specific need for them.

  1. Modify the number of concurrent requests in /etc/foreman-installer/custom-hiera.yaml by changing or adding the following lines:

    apache::mod::event::serverlimit: 64
    apache::mod::event::maxrequestworkers: 1024
    apache::mod::event::maxrequestsperchild: 4000

    The example is identical to running orcharhino-installer --tuning=medium or higher on orcharhino Server.

  2. Apply your changes to orcharhino Server. For more information, see Applying configurations.

Dynflow tuning

Dynflow is the workflow management system and task orchestrator. It is an orcharhino plugin and is used to execute orcharhino tasks in an out-of-order execution manner. When there are many clients checking in on orcharhino and running a number of tasks, Dynflow can benefit from additional tuning that specifies how many executors it can launch.

For more information about the tunings involved related to Dynflow, see https://orcharhino.example.com/foreman_tasks/sidekiq.

Increase number of Sidekiq workers

orcharhino contains a Dynflow service called dynflow-sidekiq that performs tasks scheduled by Dynflow. Sidekiq workers can be grouped into various queues to ensure that a large number of tasks of one type does not block the execution of tasks of another type.

ATIX AG recommends increasing the number of Sidekiq workers to scale the Foreman tasking system for bulk concurrent tasks, for example multiple content view publications and promotions, content synchronizations, and synchronizations to orcharhino Proxies. There are two options available:

  • You can increase the number of threads used by a worker (the worker’s concurrency). This has limited impact for values larger than five due to the Ruby implementation of thread concurrency.

  • You can increase the number of workers, which is recommended.

  1. Increase the number of workers, for example from one worker to three, while keeping five threads (the concurrency) for each:

    # orcharhino-installer --foreman-dynflow-worker-instances 3    # optionally, add --foreman-dynflow-worker-concurrency 5
  2. Optional: Check if there are three worker services:

    # systemctl -a | grep dynflow-sidekiq@worker-[0-9]
    dynflow-sidekiq@worker-1.service        loaded    active   running   Foreman jobs daemon - worker-1 on sidekiq
    dynflow-sidekiq@worker-2.service        loaded    active   running   Foreman jobs daemon - worker-2 on sidekiq
    dynflow-sidekiq@worker-3.service        loaded    active   running   Foreman jobs daemon - worker-3 on sidekiq

Pull-based REX transport tuning

orcharhino has a pull-based transport mode for remote execution. This transport mode uses MQTT as its messaging protocol and includes an MQTT client running on each host. For more information, see Transport Modes for Remote Execution in Managing Hosts.

Increasing host limit for pull-based REX transport

You can tune the mosquitto MQTT server and increase the number of hosts connected to it.

  1. Enable pull-based remote execution on your orcharhino Server or orcharhino Proxy:

    # orcharhino-installer --foreman-proxy-plugin-remote-execution-script-mode pull-mqtt

    Note that your orcharhino Server or orcharhino Proxy can only use one transport mode, either SSH or MQTT.

  2. Create a config file to increase the default number of hosts accepted by the MQTT service:

    cat >/etc/systemd/system/mosquitto.service.d/limits.conf <<EOF
    [Service]
    LimitNOFILE=5000
    EOF

    This example sets the limit to allow the mosquitto service to handle 5000 hosts.

  3. Run the following commands to apply your changes:

    # systemctl daemon-reload
    # systemctl restart mosquitto.service

Decreasing performance impact of the pull-based REX transport

When orcharhino Server is configured with the pull-based transport mode for remote execution jobs using the Script provider, orcharhino Proxy sends notifications about new jobs to clients through MQTT. This notification does not include the actual workload that the client is supposed to execute. After a client receives a notification about a new remote execution job, it queries orcharhino Proxy for its actual workload. During the job, the client periodically sends outputs of the job to orcharhino Proxy, further increasing the number of requests to orcharhino Proxy.

These requests to orcharhino Proxy, together with the high concurrency allowed by the MQTT protocol, can exhaust the available connections on orcharhino Proxy. Some requests might fail, making some child tasks of remote execution jobs unresponsive. This also depends on the actual job workload, because some jobs cause additional load on orcharhino Server, making it compete for resources if clients are registered directly to orcharhino Server.

To avoid this, configure your orcharhino Server and orcharhino Proxy with the following parameters:

  • MQTT Time To Live – Time interval in seconds given to the host to pick up the job before considering the job undelivered

  • MQTT Resend Interval – Time interval in seconds at which the notification should be re-sent to the host until the job is picked up or cancelled

  • MQTT Rate Limit – Number of jobs that are allowed to run at the same time. You can limit the concurrency of remote execution by lowering the rate limit, which reduces the load on your orcharhino.

  • Tune the MQTT parameters on your orcharhino Server:

# orcharhino-installer \
--foreman-proxy-plugin-remote-execution-script-mqtt-rate-limit My_MQTT_Rate_Limit \
--foreman-proxy-plugin-remote-execution-script-mqtt-resend-interval My_MQTT_Resend_Interval \
--foreman-proxy-plugin-remote-execution-script-mqtt-ttl My_MQTT_Time_To_Live

orcharhino Proxy logs are in /var/log/foreman-proxy/proxy.log. orcharhino Proxy uses Webrick HTTP server (no httpd or Puma involved), so there is no simple way to increase its capacity.

PostgreSQL tuning

PostgreSQL is the primary SQL-based database used by orcharhino to store persistent context across the wide variety of tasks that orcharhino performs. The database is used extensively to provide orcharhino with the data it needs for smooth functioning. This makes PostgreSQL a heavily used process which, if tuned, can bring a number of benefits to the overall operational response of orcharhino.

The PostgreSQL authors recommend disabling Transparent Hugepage on servers running PostgreSQL. For more information, see Disable Transparent Hugepage.

You can apply a set of tunings to PostgreSQL to improve its response times, which will modify the postgresql.conf file.

  1. Append the following to /etc/foreman-installer/custom-hiera.yaml to tune PostgreSQL:

    postgresql::server::config_entries:
      max_connections: 1000
      shared_buffers: 2GB
      work_mem: 8MB
      autovacuum_vacuum_cost_limit: 2000

    You can use this to effectively tune down your orcharhino instance irrespective of a tuning profile.

  2. Apply your changes to orcharhino Server. For more information, see Applying configurations.

In the above tuning configuration, we have altered the following keys:

  • max_connections: This key defines the maximum number of concurrent connections that the running PostgreSQL process accepts.

  • shared_buffers: The shared buffers define the memory used by all active connections inside PostgreSQL to store data for the different database operations. An optimal value varies between 2 GiB and a maximum of 25% of your total system memory, depending on the frequency of the operations conducted on orcharhino.

  • work_mem: The work_mem is memory allocated per PostgreSQL process and is used to store intermediate results of the operations that the process performs. Setting this value to 8 MB should be more than enough for most of the intensive operations on orcharhino.

  • autovacuum_vacuum_cost_limit: This key defines the cost limit value for the vacuuming operation inside the autovacuum process, which cleans up dead tuples inside the database relations. The cost limit defines the number of tuples that can be processed in a single run by the process. ATIX AG recommends setting the value to 2000, as used by the medium, large, extra-large, and extra-extra-large profiles, based on the general load that orcharhino puts on the PostgreSQL server process.

Benchmarking raw DB performance

The pgbench utility might be used to measure PostgreSQL performance on your system (note that you may need to resize the PostgreSQL data directory /var/lib/pgsql to 100 GiB, or whatever the benchmark requires to run); an example invocation is shown after the notes below. Use dnf install postgresql-contrib to install it.

The choice of filesystem for the PostgreSQL data directory might matter as well.

  • Never do any testing on a production system, and never test without a valid backup.

  • Before you start testing, check how big the database files are. Testing with a really small database would not produce any meaningful results. For example, if the database is only 20 GiB and the buffer pool is 32 GiB, it will not show problems with a large number of connections because the data will be completely buffered.
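
The following is an illustrative sketch only; the database name, scale factor, client count, and duration are assumptions that you should adapt to your hardware:

# su - postgres
$ createdb pgbench_test
$ pgbench -i -s 1000 pgbench_test
$ pgbench -c 50 -j 4 -T 300 pgbench_test
$ dropdb pgbench_test

The -i option initializes the test tables with a scale factor of 1000 (roughly 15 GiB of data), and the second invocation runs 50 client sessions across 4 worker threads for 300 seconds, reporting transactions per second at the end.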

Redis tuning

Redis is an in-memory data store. It is used by multiple services in orcharhino. The Dynflow and Pulp tasking systems use it to track their tasks. Given the way orcharhino uses Redis, its memory consumption should be stable.

The Redis authors recommend disabling Transparent Hugepage on servers running Redis. For more information, see Disable Transparent Hugepage.

orcharhino Proxy configuration tuning

orcharhino Proxies are meant to offload part of the orcharhino Server load and to provide access to different networks for distributing content to clients, but they can also be used to execute remote execution jobs. What they cannot help with is anything that extensively uses the orcharhino API, such as host registration or package profile updates.

orcharhino Proxy performance tests

We have measured multiple test cases on multiple orcharhino Proxy configurations:

orcharhino Proxy HW configuration   CPUs   RAM
minimal                             4      12 GiB
large                               8      24 GiB
extra large                         16     46 GiB

Content delivery use case

In a download test where we concurrently downloaded a 40 MB repository of 2000 packages on 100, 200, … 1000 hosts, we saw roughly a 50% improvement in average download duration each time we doubled the orcharhino Proxy resources. For more precise numbers, see the table below.

Comparison                                                       Average improvement
Minimal (4 CPU, 12 GiB RAM) → Large (8 CPU, 24 GiB RAM)          ~ 50% (e.g. for 700 concurrent downloads, on average 9 seconds vs. 4.4 seconds per package)
Large (8 CPU, 24 GiB RAM) → Extra Large (16 CPU, 46 GiB RAM)     ~ 40% (e.g. for 700 concurrent downloads, on average 4.4 seconds vs. 2.5 seconds per package)
Minimal (4 CPU, 12 GiB RAM) → Extra Large (16 CPU, 46 GiB RAM)   ~ 70% (e.g. for 700 concurrent downloads, on average 9 seconds vs. 2.5 seconds per package)

When we compared download performance from orcharhino Server with download performance from orcharhino Proxy, we saw only about a 5% speedup, but that is expected: the main benefit of orcharhino Proxy is getting content closer to geographically distributed clients (or clients in different networks) and handling part of the load that orcharhino Server would otherwise have to handle itself. In some smaller hardware configurations (8 CPUs and 24 GiB RAM), orcharhino Server was not able to handle downloads from more than 500 concurrent clients, while an orcharhino Proxy with the same hardware configuration was able to serve more than 1000, and possibly even more.

Concurrent registrations use case

For concurrent registrations, the bottleneck is usually CPU speed, but all configurations were able to handle even high concurrency without swapping. The hardware resources used for orcharhino Proxy have only minimal impact on registration performance. For example, an orcharhino Proxy with 16 CPUs and 46 GiB RAM showed at most a 9% registration speed improvement compared to an orcharhino Proxy with 4 CPUs and 12 GiB RAM. During periods of very high concurrency, you might experience timeouts in the orcharhino Proxy to orcharhino Server communication. You can alleviate this by increasing the default timeout using the following tunable in /etc/foreman-installer/custom-hiera.yaml:

apache::mod::proxy::proxy_timeout: 600
Remote execution use case

We have tested executing remote execution jobs via both the SSH and Ansible backends on 500, 2000, and 4000 hosts. All configurations were able to handle all of the tests without errors, except for the smallest configuration (4 CPUs and 12 GiB memory), which failed to finish on all 4000 hosts.

Content sync use case

In a sync test where we synced Enterprise Linux 6, 7, 8 BaseOS, and 8 AppStream repositories, we did not see significant differences among orcharhino Proxy configurations. This will be different when syncing a higher number of content views in parallel.

The text and illustrations on this page are licensed by ATIX AG under a Creative Commons Attribution Share Alike 4.0 International ("CC BY-SA 4.0") license. This page also contains text from the official Foreman documentation which uses the same license ("CC BY-SA 4.0").