Glossary of terms used in orcharhino

orcharhino is a complete lifecycle management tool for physical hosts, virtual machines, and cloud instances. Key features include automated host provisioning, configuration management, and content management including patch and errata management. You can automate tasks and quickly provision hosts, all through a single unified interface.

This alphabetically ordered glossary provides an overview of orcharhino-related technical terms.

Activation key

Activation keys are used by Subscription Manager to register hosts to orcharhino. They define content view and lifecycle environment associations, content overrides, system purpose attributes, and other parameters to be associated with a newly created host.

They are associated with exactly one lifecycle environment and exactly one content view, though this may be a composite content view. You can use them on multiple machines and they behave like configuration information rather than traditional software license keys. You can also use multiple activation keys with a single host. When you register a host using an activation key, certain content from orcharhino is provided to the host. The content that is made available depends on the content in the activation key’s content view and lifecycle environment, any content overrides present, any repository-level restrictions such as operating system or architecture, and system purpose attributes such as release version.
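
For example, registering a host with an activation key by using Subscription Manager; the organization and key names are illustrative:

```shell
# Register a host to orcharhino using an activation key
# (organization and activation key names are examples).
subscription-manager register \
  --org="Example_Org" \
  --activationkey="ak-production"
```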

Ansible

Ansible is an agentless open-source automation engine. For hosts running Linux, Ansible uses SSH to connect to hosts. For hosts running Microsoft Windows, Ansible relies on WinRM. It uses playbooks and roles to describe and bundle tasks. Within orcharhino, you can use Ansible to configure hosts and perform remote execution.

For more information about using Ansible to configure hosts, see Managing Hosts using Ansible. For more information about automating orcharhino using the orcharhino Ansible collection, see Managing orcharhino with Ansible collections in Administering orcharhino.
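
As an illustration, a minimal Ansible playbook describing a desired host state; the package and service names are examples only:

```yaml
# Minimal illustrative playbook: ensure a package is installed
# and its service is running on all targeted hosts.
- name: Ensure chrony is installed and running
  hosts: all
  become: true
  tasks:
    - name: Install chrony
      ansible.builtin.package:
        name: chrony
        state: present

    - name: Start and enable chronyd
      ansible.builtin.service:
        name: chronyd
        state: started
        enabled: true
```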

Answer file

A configuration file that defines settings for an installation scenario. Answer files are defined in the YAML format and stored in the /etc/foreman-installer/scenarios.d/ directory.
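
An illustrative excerpt of such an answer file; the available keys and their values vary by scenario and installer version:

```yaml
# Illustrative answer file excerpt (values are examples only).
foreman:
  initial_admin_username: admin
  initial_organization: Example Org
  initial_location: Munich
```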

ARF report

Asset Reporting Format (ARF) reports are the result of OpenSCAP compliance scans on hosts that have a policy assigned. They summarize the security compliance of hosts managed by orcharhino and list compliance criteria along with whether the scanned host has passed or failed.

Audits
Provide a report on changes made by a specific user. Audits can be viewed in the orcharhino management UI under Monitor > Audits.

Baseboard management controller (BMC)

Enables remote power management of bare-metal hosts. In orcharhino, you can create a BMC interface to manage selected hosts.

Boot disk

An ISO image used for PXE-less provisioning. This ISO enables the host to connect to orcharhino Server, boot the installation media, and install the operating system. There are several kinds of boot disks: host image, full host image, generic image, and subnet image.

Catalog
A document that describes the desired system state for one specific host managed by Puppet. It lists all of the resources that need to be managed, as well as any dependencies between those resources. Catalogs are compiled by a Puppet server from Puppet Manifests and data from Puppet Agents.

Candlepin
A service within Katello responsible for subscription management.

Compliance policy

Compliance policies refer to the application of SCAP content to hosts by using orcharhino with its OpenSCAP plugin. You can create compliance policies by using the orcharhino management UI, Hammer CLI, or API. A compliance policy requires selecting a specific XCCDF profile from SCAP content, optionally using a tailoring file. You can set up scheduled tasks on orcharhino that check your hosts for compliance against SCAP content. When a compliance policy scan completes, the host sends an ARF report to orcharhino.

Compute profile

Specifies default attributes for new virtual machines on a compute resource.

Compute resource

A compute resource is an external virtualization or cloud infrastructure that you can attach to orcharhino. orcharhino can provision, configure, and manage hosts within attached compute resources. Examples of compute resources include virtualization providers such as VMware or libvirt and cloud providers such as Microsoft Azure or Amazon EC2.

Configuration Management

Configuration management describes the task of configuring and maintaining hosts. In orcharhino, you can use Ansible, Puppet, and Salt to configure and maintain hosts with orcharhino as a single source of infrastructure truth.

Container (Docker container)

An isolated application sandbox that contains all runtime dependencies required by an application. orcharhino supports container provisioning on a dedicated compute resource.

Container image

A static snapshot of the container’s configuration. orcharhino supports various methods of importing container images as well as distributing images to hosts through content views.

Content
A general term for everything orcharhino distributes to hosts. Content includes software packages such as .rpm packages, errata, or Docker images. Content is synchronized into the Library and then promoted into lifecycle environments using content views so that it can be consumed by hosts.

Content delivery network (CDN)

The mechanism used to deliver Red Hat content to orcharhino Server.

Content view

Content views are named and versioned collections of repositories. When you publish a content view, orcharhino creates a new content view version. This content view version is a frozen snapshot of the current state of the repositories within the content view. Any subsequent changes to the underlying repositories will no longer affect the published content view version. Once a content view is published, it can be promoted through the lifecycle environment path, or modified using incremental upgrades.
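
The snapshot semantics can be sketched in Python; this is an illustrative model of the behavior, not orcharhino code:

```python
import copy

# Illustrative model of content view versioning: publishing freezes
# the current repository state into a new, immutable version.
class ContentView:
    def __init__(self, name):
        self.name = name
        self.repositories = {}   # repository name -> list of packages
        self.versions = []       # published, frozen snapshots

    def publish(self):
        # Deep-copy so later repository changes cannot affect the version.
        snapshot = copy.deepcopy(self.repositories)
        self.versions.append(snapshot)
        return len(self.versions)  # 1-based version number

cv = ContentView("example-cv")
cv.repositories["baseos"] = ["bash-5.1", "openssl-3.0"]
v1 = cv.publish()

# A subsequent repository change does not alter the published version.
cv.repositories["baseos"].append("openssl-3.1")

print(cv.versions[v1 - 1]["baseos"])  # the frozen snapshot is unchanged
```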

Composite content view

Composite content views contain other content views, allowing a more modular approach to managing and versioning content. You can choose which version of each content view is used in a composite content view.

Discovered host

A bare-metal host detected on the provisioning network by the Discovery plugin.

Discovery image

Refers to the minimal operating system based on AlmaLinux that is PXE-booted on hosts to acquire initial hardware information and to communicate with orcharhino Server before starting the provisioning process.

Discovery plugin

Enables automatic bare-metal discovery of unknown hosts on the provisioning network. The plugin consists of three components: services running on orcharhino Server and orcharhino Proxy, and the Discovery image running on the host.

Discovery rule

A set of predefined provisioning rules that assign a host group to discovered hosts and trigger provisioning automatically.

Docker tag

A mark used to differentiate container images, typically by the version of the application stored in the image. In the orcharhino management UI, you can filter images by tag under Content > Docker Tags.

Enterprise Linux

An umbrella term for the following Red Hat Enterprise Linux-like operating systems:

  • AlmaLinux

  • CentOS Linux

  • CentOS Stream

  • Oracle Linux

  • Red Hat Enterprise Linux

  • Rocky Linux

ERB
Embedded Ruby (ERB) is a template syntax used in provisioning and job templates.
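
A short illustrative snippet in the style of orcharhino provisioning templates; @host and the host_param macro are template helpers, while the ntp-server parameter name is made up for this example:

```erb
<%# Illustrative ERB snippet: render host data and an optional parameter. %>
Hostname: <%= @host.name %>
<% if host_param('ntp-server') -%>
NTP server: <%= host_param('ntp-server') %>
<% end -%>
```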

Errata

Updated packages containing security fixes, bug fixes, and enhancements. In relation to a host, an erratum is applicable if it updates a package installed on the host and installable if it is present in the host’s content view (which means it is accessible for installation on the host).

External node classifier (ENC)

A construct that provides additional data for a server to use when configuring hosts. When Puppet obtains information about nodes from an external source instead of its own database, that source is called an external node classifier. If the Puppet plugin is installed, orcharhino can act as an external node classifier to Puppet servers in an orcharhino deployment.

Facter
A program that provides information (facts) about the system on which it is run; for example, Facter can report total memory, operating system version, architecture, and more. Puppet modules enable specific configurations based on host data gathered by Facter.

Fact
Host parameters such as total memory, operating system version, or architecture. Facts are reported by Facter and used by Puppet.

Foreman
Foreman is an open-source component to provision and manage hosts. Foreman is the core upstream component of orcharhino.

Foreman hook

An executable that is automatically triggered when an orchestration event occurs, such as when a host is created or when provisioning of a host has completed.

Foreman hook functionality is deprecated and will be removed in a future orcharhino version.

Full host image

A boot disk used for PXE-less provisioning of a specific host. The full host image contains an embedded Linux kernel and init RAM disk of the associated operating system installer.

Generic image

A boot disk for PXE-less provisioning that is not tied to a specific host. The generic image sends the host’s MAC address to orcharhino Server, which matches it against the host entry.

Hammer
Hammer is a command-line interface tool for orcharhino. You can execute Hammer commands from the command line or utilize it in scripts. You can use Hammer to automate certain recurring tasks as an alternative to orcharhino Ansible collection or orcharhino API.
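
For example, listing hosts and creating an activation key from the command line; the organization and object names are illustrative:

```shell
# List hosts of one organization (names are examples only).
hammer host list --organization "Example Org"

# Create an activation key tied to a lifecycle environment
# and a content view.
hammer activation-key create \
  --name "ak-production" \
  --organization "Example Org" \
  --lifecycle-environment "Production" \
  --content-view "cv-base"
```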

Host
A host is a physical, virtual, or cloud instance registered to orcharhino.

Host collection

A user-defined group of one or more hosts used for bulk actions such as errata installation.

Host group

A host group is a template to build hosts that holds shared parameters, such as subnet or lifecycle environment. It helps to unify configuration management in Ansible, Puppet, and Salt by grouping hosts. You can nest host groups to create a hierarchical structure. For more information, see Working with host groups in Managing Hosts.

Host image

A boot disk used for PXE-less provisioning of a specific host. The host image only contains the boot files necessary to access the installation media on orcharhino Server.

Incremental upgrade (of a content view)

The act of creating a new (minor) content view version in a lifecycle environment. Incremental upgrades provide a way to make in-place modifications of an already published content view, which is useful for rapid updates, for example when applying security errata.

Installation Media

Installation media are sets of installation files used to install the base operating system during the provisioning process. An installation medium in orcharhino represents the installation files for one or more operating systems, which must be accessible over the network, either through a URL or an NFS server location. It is usually either a mirror or a CD or DVD image. Pointing the URL of the installation medium to a local copy, for example, can improve provisioning time and reduce network load.

Every operating system is associated with exactly one installation medium path, whereas an installation medium path may serve multiple operating systems at the same time. You can use operating system parameters such as $version, $major, and $minor to parameterize the URL.
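
For example, a parameterized installation medium URL for an AlmaLinux mirror might look like this; the mirror hostname is illustrative:

```
# orcharhino expands $major per operating system version,
# so one path can serve AlmaLinux 8 and 9.
http://mirror.example.com/almalinux/$major/BaseOS/x86_64/os/
```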

Job
A command executed remotely on a host from orcharhino Server. Every job is defined in a job template.

Katello
Katello is an open-source plugin to perform content management and subscription handling. It depends on Pulp for content management, which fetches software from repositories and stores various versions of it. It also depends on Candlepin for host registration and managing subscription manifests. The Katello plugin is always installed on orcharhino.

Lazy sync

The ability to change the default download policy of a repository from Immediate to On Demand. The On Demand setting saves storage space and synchronization time by only downloading the packages when requested by a host.

Location
A collection of default settings that represent a physical place. Location is a tag mostly used for geographical separation of hosts within orcharhino. Examples include different cities or different data centers.

Library
A container for content from all synchronized repositories on orcharhino Server. Libraries exist by default for each organization as the root of every lifecycle environment path and the source of content for every content view.

Lifecycle environment

A lifecycle environment represents a step in the lifecycle environment path. It defines the stage in which certain versions of content are available to hosts, such as development, testing, and production. This way, new versions of software can be developed and tested before being deployed in a production environment, thus reducing the risk of disruption by prematurely rolled out updates. Content moves through lifecycle environments by publishing and promoting content views.

Lifecycle environment path

A sequence of lifecycle environments through which content views are promoted. You can promote a content view through a typical promotion path, for example, from development to test to production. All lifecycle environment paths originate from the Library environment, which is always present by default.

Manifest (Red Hat subscription manifest)

A mechanism for transferring subscriptions from the Red Hat Customer Portal to orcharhino. Do not confuse with Puppet manifest.

Migrating orcharhino

The process of moving an existing orcharhino installation to a new instance.

OpenSCAP
A project implementing security compliance auditing according to the Security Content Automation Protocol (SCAP). OpenSCAP is integrated in orcharhino to provide compliance auditing for hosts.

orcharhino Customer Center (OCC)

orcharhino Customer Center provides all content from ATIX AG to install, update, and run orcharhino Server and orcharhino Proxies. It also provides all orcharhino Clients. For more information, see Registering orcharhino Server to OCC and Adding orcharhino Clients manually in the ATIX Service Portal.

orcharhino Proxy

orcharhino Proxies provide DHCP, DNS, and TFTP services and act as an Ansible control node, Puppet server, or Salt Master in separate networks. They interact with orcharhino Server in a client-server model. orcharhino Server always comes bundled with an integrated orcharhino Proxy.

orcharhino Proxies are required in orcharhino deployments that manage IT infrastructure spanning multiple networks and are useful for orcharhino deployments across various geographical locations.

Organization
An isolated collection of systems, content, and other functionality within orcharhino. Organization is a tag used for organizational separation of hosts within orcharhino. Examples include different teams or business units.

Parameter
Defines the behavior of orcharhino components during provisioning. Depending on the parameter scope, we distinguish between global, domain, host group, and host parameters. Depending on the parameter complexity, we distinguish between simple parameters (key-value pair) and smart parameters (conditional arguments, validation, overrides).

Parametrized class (smart class parameter)

A parameter created by importing a class from a Puppet server.

Patch and release management

Patch and release management describes the process of acquiring, managing, and installing patches and software updates to your infrastructure. Using orcharhino, you can keep control of the package versions available to your hosts and deliver applicable errata.

Permission
Defines an action related to a selected part of orcharhino infrastructure (resource type). Each resource type is associated with a set of permissions, for example the Architecture resource type has the following permissions: view_architectures, create_architectures, edit_architectures, and destroy_architectures. You can group permissions into roles and associate them with users or user groups.

Product
Products are named collections of one or more repositories. If you manage Red Hat content and upload a Red Hat manifest, orcharhino automatically groups Red Hat content within products. If you manage SUSE content using SCC Manager plugin, orcharhino automatically groups SUSE content within products. For more information, see Managing SUSE content in Managing Content.

Promote (a content view)

The act of moving a content view from one lifecycle environment to another. For more information, see Promoting a content view in Managing Content.

Provisioning

The provisioning of a host is the deployment of the base operating system on the host and registration of the host to orcharhino. Optionally, the process continues with the supply of content and configuration. This process is ideally automated. Provisioned hosts run on compute resources or bare metal, never on orcharhino Server or orcharhino Proxies.

Provisioning template

Provisioning templates are templates that automate deployment of an operating system on hosts. orcharhino contains provisioning templates for all supported host operating system families:

  • AutoYaST for SUSE Linux Enterprise Server

  • Kickstart for AlmaLinux, Amazon Linux, CentOS, Oracle Linux, Red Hat Enterprise Linux, and Rocky Linux

  • Preseed files for Debian and Ubuntu

Publish (a content view)

The act of making a content view version available in a lifecycle environment and usable by hosts.

Pulp
A service within Katello responsible for repository and content management.

Puppet
Puppet is a configuration management tool utilizing a declarative language in a server-client architecture. For more information about using Puppet to configure hosts, see Configuring Hosts Using Puppet.

Puppet agent

A service running on a host that applies configuration changes to that host.

Puppet environment

An isolated set of Puppet Agent nodes that can be associated with a specific set of Puppet Modules.

Puppet manifest

Refers to Puppet scripts, which are files with the .pp extension. The files contain code to define a set of necessary resources, such as packages, services, files, users and groups, and so on, using a set of key-value pairs for their attributes.
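
A short illustrative manifest managing a package, a configuration file, and a service, with dependencies expressed through the require and subscribe metaparameters; the package and file names are examples:

```puppet
# Illustrative Puppet manifest (for example, ntp.pp).
package { 'chrony':
  ensure => installed,
}

file { '/etc/chrony.conf':
  ensure  => file,
  content => "pool pool.ntp.org iburst\n",
  require => Package['chrony'],
}

service { 'chronyd':
  ensure    => running,
  enable    => true,
  subscribe => File['/etc/chrony.conf'],
}
```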

Puppet server

An orcharhino Proxy component that provides Puppet Manifests to hosts for execution by the Puppet Agent.

Puppet module

A self-contained bundle of code (Puppet Manifests) and data (facts) that you can use to manage resources such as users, files, and services.

PXE

PXE stands for Preboot Execution Environment and is used to boot operating systems received from the network rather than from a local disk. It requires a compatible network interface card (NIC) and relies on DHCP and TFTP.

Recurring logic

A job executed automatically according to a schedule. In the orcharhino management UI, you can view those jobs under Monitor > Recurring logics.

Registry
An archive of container images. orcharhino supports importing images from local and external registries. orcharhino itself can act as an image registry for hosts. However, hosts cannot push changes back to the registry.

Remote execution (REX)

Remote execution is the process of using orcharhino to run commands on registered hosts.

Repository
A repository is a single source and the smallest unit of content in orcharhino. You can either synchronize a repository with a URL or manually upload content to orcharhino. orcharhino supports multiple content types. For more information, see Content types in orcharhino in Managing Content. One or more repositories form a product.

Resource type

Refers to a part of orcharhino infrastructure, for example host, orcharhino Proxy, or architecture. Used in permission filtering.

Role
Specifies a collection of permissions that are applied to a set of resources, such as hosts. Roles can be assigned to users and user groups. orcharhino provides a number of predefined roles.

Salt

Salt is a configuration management tool used to maintain hosts in defined states, for example with certain packages installed or services running. It is designed to be idempotent. For more information about using Salt to configure hosts, see Configuring Hosts Using Salt.

SCAP content

SCAP stands for Security Content Automation Protocol and refers to .xml files containing the configuration and security baseline against which hosts are checked. orcharhino uses SCAP content in compliance policies.

Subnet image

A type of generic image for PXE-less provisioning that communicates through orcharhino Proxy.

Subscription
An entitlement for receiving content and service from Red Hat.

Subscription Manager

Subscription Manager is a client application to register hosts to orcharhino. subscription-manager uses activation keys to consume content on hosts.

SUSE Subscription

You can use orcharhino to manage SUSE content. For more information, see Managing SUSE content in Managing Content.

Synchronization

Synchronization describes the process of fetching content from external repositories into orcharhino Server.

Sync plan

Sync plans describe the scheduled synchronization of content from external content sources.

Tailoring files

Tailoring files specify a set of modifications to existing SCAP content. They adapt SCAP content to your particular needs without changing the original SCAP content itself.

Task

A background process executed on orcharhino Server or orcharhino Proxy, such as repository synchronization or content view publishing. You can monitor the task status in the orcharhino management UI under Monitor > orcharhino Tasks > Tasks.

Updating orcharhino

The process of advancing your orcharhino Server and orcharhino Proxy installations from a z-stream release to the next, for example orcharhino 6.9.0 to orcharhino 6.9.1.

Upgrading orcharhino

The process of advancing your orcharhino Server and orcharhino Proxy installations from a y-stream release to the next, for example orcharhino 6.8 to orcharhino 6.9.

User group

A collection of roles which can be assigned to a collection of users.

User

Anyone registered to use orcharhino. Authentication and authorization are possible through built-in logic, through external resources (LDAP, Identity Management, or Active Directory), or with Kerberos.

Virtualization
Virtualization describes the process of running multiple operating systems with various applications on a single hardware host using hypervisors like VMware, Proxmox, or libvirt. It facilitates scalability and cost savings. You can attach virtualization infrastructure as compute resources to orcharhino. Enable appropriate plugins to access this feature.

virt-who
An agent for retrieving IDs of virtual machines from the hypervisor. When used with orcharhino, virt-who reports those IDs to orcharhino Server so that it can provide subscriptions for hosts provisioned on virtual machines.

XCCDF profiles

Extensible Configuration Checklist Description Format (XCCDF) profiles are a component of SCAP content. XCCDF is a language for writing security checklists and benchmarks. An XCCDF file contains security configuration rules for lists of hosts.

Simple Content Access (SCA)

Simple content access (SCA) simplifies subscription management. In organizations which have simple content access enabled, content hosts do not have to be subscribed to a product to access its content. Instead, content hosts automatically consume all repositories published within their content view and lifecycle environment.

Subscriptions
Without simple content access (SCA), subscriptions are closely tied to a product and contain the right to access certain content. The usage of Red Hat and SUSE products is limited by the number of available subscriptions.

Advanced subscription management is generally only needed for Red Hat and SUSE products. Other products generally have an unlimited number of subscriptions available, and each subscribed content host is automatically given one.