Content Management Guide

Introduction to Content Management

In the context of orcharhino, content is defined as the software installed on systems. This includes, but is not limited to, the base operating system, middleware services, and end-user applications. With orcharhino, you can manage the various types of content at every stage of the software life cycle.

orcharhino manages the following content:

Subscription management

This provides organizations with a method to manage their Red Hat subscription information.

Content management

This provides organizations with a method to store APT and YUM content and organize it in various ways.

For a better understanding of orcharhino’s content management, see the glossary for key terms such as product, repository, content view, lifecycle environment, and activation key.

For more information, see the Content Management guide.

Managing Organizations

Organizations divide orcharhino resources into logical groups based on ownership, purpose, content, security level, or other divisions. You can create and manage multiple organizations through orcharhino, then divide and assign subscriptions to each individual organization. This provides a method of managing the content of several individual organizations under one management system. Here are some examples of organization management:

Single Organization

A small business with a simple system administration chain. In this case, you can create a single organization for the business and assign content to it.

Multiple Organizations

A large company that owns several smaller business units. For example, a company with separate system administration and software development groups. In this case, you can create organizations for the company and each of the business units it owns. This keeps the system infrastructure for each separate. You can then assign content to each organization based on its needs.

External Organizations

A company that manages external systems for other organizations. For example, a company offering cloud computing and web hosting resources to customers. In this case, you can create an organization for the company’s own system infrastructure and then an organization for each external business. You can then assign content to each organization where necessary.

New Users

If a new user is not assigned a default organization, their access is limited. To grant a user system rights, assign the user to a default organization. The next time the user logs in to orcharhino, the account has the correct system rights.

Creating an Organization

Use this procedure to create an organization. To use the CLI instead of the orcharhino management UI, see CLI procedure.

Procedure
  1. In the orcharhino management UI, navigate to Administer > Organizations.

  2. Click New Organization.

  3. In the Name field, enter a name for the organization.

  4. In the Label field, enter a unique identifier for the organization. This is used for creating and mapping certain assets, such as directories for content storage. Use letters, numbers, underscores, and dashes, but no spaces.

  5. Optional: in the Description field, enter a description for the organization.

  6. Click Submit.

  7. If you have hosts with no organization assigned, select the hosts that you want to add to the organization, then click Proceed to Edit.

  8. In the Edit page, assign the infrastructure resources that you want to add to the organization. This includes networking resources, installation media, kickstart templates, and other parameters. You can return to this page at any time by navigating to Administer > Organizations and then selecting an organization to edit.

  9. Click Submit.

CLI procedure
  1. To create an organization, enter the following command:

    # hammer organization create \
    --name "My_Organization" \
    --label "My_Organization_Label" \
    --description "My_Organization_Description"
  2. Optional: To edit an organization, enter the hammer organization update command. For example, the following command assigns a compute resource to the organization:

    # hammer organization update \
    --name "My_Organization" \
    --compute-resource-ids 1

Setting the Organization Context

An organization context defines the organization to use for a host and its associated resources.

Procedure

The organization menu is the first menu item in the menu bar, on the upper left of the orcharhino management UI. If you have not selected a current organization, the menu says Any Organization. Click the Any Organization button and select the organization to use.

CLI procedure

While using the CLI, include either --organization "My_Organization" or --organization-label "My_Organization_Label" as an option. For example:

# hammer subscription list \
--organization "My_Organization"

This command outputs the subscriptions allocated to My_Organization.

Creating an Organization Debug Certificate

If you require a debug certificate for your organization, use the following procedure.

Procedure
  1. In the orcharhino management UI, navigate to Administer > Organizations.

  2. Select an organization that you want to generate a debug certificate for.

  3. Click Generate and Download.

  4. Save the certificate file in a secure location.

Debug Certificates for Provisioning Templates

Debug Certificates are automatically generated for provisioning template downloads if they do not already exist in the organization for which they are being downloaded.

Browsing Repository Content Using an Organization Debug Certificate

You can view an organization’s repository content using a web browser or using the API if you have a debug certificate for that organization.

Prerequisites
  1. Create and download an organization certificate as described in Creating an Organization Debug Certificate.

  2. Open the X.509 certificate, for example, for the default organization:

    $ vi 'Default Organization-key-cert.pem'
  3. Copy the contents of the file from -----BEGIN RSA PRIVATE KEY----- to -----END RSA PRIVATE KEY----- into a key.pem file.

  4. Copy the contents of the file from -----BEGIN CERTIFICATE----- to -----END CERTIFICATE----- into a cert.pem file.
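The two copy steps can also be scripted with sed, which prints everything between a pair of matching markers. This is a minimal sketch that assumes the combined certificate file uses exactly the BEGIN/END markers shown above:

```shell
# extract_pem: write key.pem and cert.pem from a combined key-cert PEM file.
# Assumes the RSA private key markers shown in the steps above.
extract_pem() {
  sed -n '/-----BEGIN RSA PRIVATE KEY-----/,/-----END RSA PRIVATE KEY-----/p' "$1" > key.pem
  sed -n '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p' "$1" > cert.pem
}

# Usage:
# extract_pem 'Default Organization-key-cert.pem'
```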

Procedure

To use a browser, you must first convert the X.509 certificate to a format your browser supports and then import the certificate.

For Firefox Users

To use an organization debug certificate in Firefox, complete the following steps:

  1. To create a PKCS12 format certificate, enter the following command:

    $ openssl pkcs12 -keypbe PBE-SHA1-3DES -certpbe PBE-SHA1-3DES -export -in cert.pem -inkey key.pem -out organization_label.pfx -name My_Organization
  2. In the Firefox browser, navigate to Edit > Preferences > Advanced Tab.

  3. Select View Certificates, and click the Your Certificates tab.

  4. Click Import and select the .pfx file to load.

  5. In the address bar, enter a URL in the following format to browse for repositories:

    http://orcharhino.example.com/pulp/repos/organization_label

    Pulp uses the organization label, so you must enter the organization label, not the organization name, in the URL.
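A sketch of how the URL is built from an organization name, assuming the label was auto-generated by replacing spaces with underscores (verify the actual label under Administer > Organizations before relying on this):

```shell
# Build the Pulp repository URL from an organization name, assuming an
# auto-generated label (spaces replaced with underscores).
ORG_NAME="Default Organization"
ORG_LABEL="$(printf '%s' "${ORG_NAME}" | tr ' ' '_')"
REPO_URL="http://orcharhino.example.com/pulp/repos/${ORG_LABEL}"
echo "${REPO_URL}"   # http://orcharhino.example.com/pulp/repos/Default_Organization
```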

For curl Users

To use the organization debug certificate with curl, enter the following command:

$ curl -k --cert cert.pem --key key.pem \
http://orcharhino.example.com/pulp/repos/Default_Organization/Library/content/dist/rhel/server/7/7Server/x86_64/sat-tools/5.11/os/

Ensure that the paths to cert.pem and key.pem are correct absolute paths; otherwise, the command fails silently.

Deleting an Organization

You can delete an organization if the organization is not associated with any life cycle environments or host groups. If there are any life cycle environments or host groups associated with the organization you are about to delete, remove them by navigating to Administer > Organizations and clicking the relevant organization. Do not delete the default organization created during installation because the default organization is a placeholder for any unassociated hosts in the orcharhino environment. There must be at least one organization in the environment at any given time.

Procedure
  1. In the orcharhino management UI, navigate to Administer > Organizations.

  2. From the list to the right of the name of the organization you want to delete, select Delete.

  3. Click OK to delete the organization.

CLI procedure
  1. Enter the following command to retrieve the ID of the organization that you want to delete:

    # hammer organization list

    From the output, note the ID of the organization that you want to delete.

  2. Enter the following command to delete an organization:

    # hammer organization delete --id Organization_ID

Managing Locations

Locations function similarly to organizations: they provide a method to group resources and assign hosts. Organizations and locations have the following conceptual differences:

  • Locations are based on physical or geographical settings.

  • Locations have a hierarchical structure.

Creating a Location

Use this procedure to create a location so that you can manage your hosts and resources by location. To use the CLI instead of the orcharhino management UI, see the CLI procedure.

Procedure
  1. In the orcharhino management UI, navigate to Administer > Locations.

  2. Click New Location.

  3. Optional: from the Parent list, select a parent location. This creates a location hierarchy.

  4. In the Name field, enter a name for the location.

  5. Optional: in the Description field, enter a description for the location.

  6. Click Submit.

  7. If you have hosts with no location assigned, add any hosts that you want to assign to the new location, then click Proceed to Edit.

  8. Assign any infrastructure resources that you want to add to the location. This includes networking resources, installation media, kickstart templates, and other parameters. You can return to this page at any time by navigating to Administer > Locations and then selecting a location to edit.

  9. Click Submit to save your changes.

CLI procedure
  • Enter the following command to create a location:

    # hammer location create \
    --description "My_Location_Description" \
    --name "My_Location" \
    --parent-id "My_Location_Parent_ID"

Creating Multiple Locations

The following example Bash script creates three locations (London, Munich, and Boston) and assigns them to the Example Organization.

ORG="Example Organization"
LOCATIONS="London Munich Boston"

for LOC in ${LOCATIONS}
do
  hammer location create --name "${LOC}"
  hammer location add-organization --name "${LOC}" --organization "${ORG}"
done

Setting the Location Context

A location context defines the location to use for a host and its associated resources.

Procedure

The location menu is the second menu item in the menu bar, on the upper left of the orcharhino management UI. If you have not selected a current location, the menu displays Any Location. Click Any Location and select the location to use.

CLI procedure

While using the CLI, include either --location "My_Location" or --location-id "My_Location_ID" as an option. For example:

# hammer subscription list \
--location "My_Location"

This command outputs the subscriptions allocated to My_Location.

Deleting a Location

You can delete a location if the location is not associated with any life cycle environments or host groups. If there are any life cycle environments or host groups associated with the location you are about to delete, remove them by navigating to Administer > Locations and clicking the relevant location. Do not delete the default location created during installation because the default location is a placeholder for any unassociated hosts in the orcharhino environment. There must be at least one location in the environment at any given time.

Procedure
  1. In the orcharhino management UI, navigate to Administer > Locations.

  2. Select Delete from the list to the right of the name of the location you want to delete.

  3. Click OK to delete the location.

CLI procedure
  1. Enter the following command to retrieve the ID of the location that you want to delete:

    # hammer location list

    From the output, note the ID of the location that you want to delete.

  2. Enter the following command to delete the location:

    # hammer location delete --id Location_ID

Managing Red Hat Subscriptions

orcharhino can import content from the Red Hat Content Delivery Network (CDN). orcharhino requires a Red Hat Subscription Manifest to find, access, and download content from the corresponding repositories. You must have a Red Hat Subscription Manifest containing a subscription allocation for each organization on orcharhino Server. All subscription information is available in your Red Hat Customer Portal account.

Before you can complete the tasks in this chapter, you must create a Red Hat Subscription Manifest in the Customer Portal.

Use this chapter to import a Red Hat Subscription Manifest and manage the manifest within the orcharhino management UI.

Subscription Allocations and Organizations

You can manage more than one organization if you have more than one subscription allocation. orcharhino requires a single allocation for each organization configured in orcharhino Server. The advantage of this is that each organization maintains separate subscriptions so that you can support multiple organizations, each with their own Red Hat accounts.

Future-Dated Subscriptions

You can use future-dated subscriptions in a subscription allocation. When you add future-dated subscriptions to content hosts before the expiry date of the existing subscriptions, you can have uninterrupted access to repositories.

Manually attach the future-dated subscriptions to your content hosts before the current subscriptions expire. Do not rely on the auto-attach method because this method is designed for a different purpose and might not work. For more information, see Attaching Red Hat Subscriptions to Content Hosts.

Importing a Red Hat Subscription Manifest into orcharhino Server

Use the following procedure to import a Red Hat Subscription Manifest into orcharhino Server.

Prerequisites
  • You must have a Red Hat Subscription Manifest file exported from the Customer Portal.

Procedure
  1. In the orcharhino management UI, ensure the context is set to the organization you want to use.

  2. In the orcharhino management UI, navigate to Content > Subscriptions and click Manage Manifest.

  3. In the Manage Manifest window, click Browse.

  4. Navigate to the location that contains the Red Hat Subscription Manifest file, then click Open. If the Manage Manifest window does not close automatically, click Close to return to the Subscriptions window.

CLI procedure
  1. Copy the Red Hat Subscription Manifest file from your client to orcharhino Server:

    $ scp ~/manifest_file.zip root@orcharhino.example.com:~/.
  2. Log in to orcharhino Server as the root user and import the Red Hat Subscription Manifest file:

    # hammer subscription upload \
    --file ~/manifest_file.zip \
    --organization "My_Organization"

You can now enable repositories and import Red Hat content. For more information, see Importing Content in the Content Management guide.

Locating a Red Hat Subscription

When you import a Red Hat Subscription Manifest into orcharhino Server, the subscriptions from your manifest are listed in the Subscriptions window. If you have a high volume of subscriptions, you can filter the results to find a specific subscription.

Procedure
  1. In the orcharhino management UI, ensure the context is set to the organization you want to use.

  2. In the orcharhino management UI, navigate to Content > Subscriptions.

  3. In the Subscriptions window, click the Search field to view the list of search criteria for building your search query.

  4. Select search criteria to display further options.

  5. When you have built your search query, click the search icon.

For example, if you place your cursor in the Search field and select expires, then press the space bar, another list appears with the options of placing a >, <, or = character. If you select > and press the space bar, another list of automatic options appears. You can also enter your own criteria.
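For instance, a completed query to list subscriptions that expire after a given date might look like the following (the date and its format are illustrative; prefer the suggestions the Search field offers):

```
expires > "2026-01-01"
```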

Adding Red Hat Subscriptions to Subscription Allocations

Use the following procedure to add Red Hat subscriptions to a subscription allocation in the orcharhino management UI.

Procedure
  1. In the orcharhino management UI, ensure the context is set to the organization you want to use.

  2. In the orcharhino management UI, navigate to Content > Subscriptions.

  3. In the Subscriptions window, click Add Subscriptions.

  4. On the row of each subscription you want to add, enter the quantity in the Quantity to Allocate column.

  5. Click Submit.

Removing Red Hat Subscriptions from Subscription Allocations

Use the following procedure to remove Red Hat subscriptions from a subscription allocation in the orcharhino management UI.

Manifests must not be deleted. If you delete the manifest from the Red Hat Customer Portal or in the orcharhino management UI, all of the entitlements for all of your content hosts will be removed.

Procedure
  1. In the orcharhino management UI, ensure the context is set to the organization you want to use.

  2. In the orcharhino management UI, navigate to Content > Subscriptions.

  3. On the row of each subscription you want to remove, select the corresponding check box.

  4. Click Delete, and then confirm deletion.

Updating and Refreshing Red Hat Subscription Manifests

Every time that you change a subscription allocation, you must refresh the manifest to reflect these changes. For example, you must refresh the manifest if you take any of the following actions:

  • Renewing a subscription

  • Adjusting subscription quantities

  • Purchasing additional subscriptions

You can refresh the manifest directly in the orcharhino management UI. Alternatively, you can import an updated manifest that contains the changes.

Procedure
  1. In the orcharhino management UI, ensure the context is set to the organization you want to use.

  2. In the orcharhino management UI, navigate to Content > Subscriptions.

  3. In the Subscriptions window, click Manage Manifest.

  4. In the Manage Manifest window, click Refresh.

Attaching Red Hat Subscriptions to Content Hosts

Using activation keys is the main method to attach subscriptions to content hosts during the provisioning process. However, an activation key cannot update an existing host. If you need to attach new or additional subscriptions, such as future-dated subscriptions, to one host, use the following procedure.

For more information about updating multiple hosts, see Updating Red Hat Subscriptions on Multiple Hosts. For more information about activation keys, see Managing Activation Keys.

Smart Management Subscriptions

In orcharhino, you must maintain a Red Hat Enterprise Linux Smart Management subscription for every Red Hat Enterprise Linux host that you want to manage.

However, you are not required to attach Smart Management subscriptions to each content host. Smart Management subscriptions cannot attach automatically to content hosts in orcharhino because they are not associated with any product certificates. Adding a Smart Management subscription to a content host does not provide any content or repository access. If you want, you can add a Smart Management subscription to a manifest for your own recording or tracking purposes.

Prerequisite
  • You must have a Red Hat Subscription Manifest file imported to orcharhino Server.

Procedure
  1. In the orcharhino management UI, ensure the context is set to the organization you want to use.

  2. In the orcharhino management UI, navigate to Hosts > Content Hosts.

  3. On the row of each content host whose subscription you want to change, select the corresponding check box.

  4. From the Select Action list, select Manage Subscriptions.

  5. Optionally, enter a key and value in the Search field to filter the subscriptions displayed.

  6. Select the check box to the left of the subscriptions that you want to add or remove and click Add Selected or Remove Selected as required.

  7. Click Done to save the changes.

CLI procedure
  1. Connect to orcharhino Server as the root user, and then list the available subscriptions:

    # hammer subscription list \
    --organization-id 1
  2. Attach a subscription to the host:

    # hammer host subscription attach \
    --host host_name \
    --subscription-id subscription_id

Updating Red Hat Subscriptions on Multiple Hosts

Use this procedure for post-installation changes to multiple content hosts at the same time.

Procedure
  1. In the orcharhino management UI, ensure the context is set to the organization you want to use.

  2. In the orcharhino management UI, navigate to Hosts > Content Hosts.

  3. On the row of each content host whose subscription you want to change, select the corresponding check box.

  4. From the Select Action list, select Manage Subscriptions.

  5. Optionally, enter a key and value in the Search field to filter the subscriptions displayed.

  6. Select the check box to the left of the subscriptions to be added or removed and click Add Selected or Remove Selected as required.

  7. Click Done to save the changes.

Managing SUSE Repositories

Using the SCC Manager plug-in, you can comfortably manage SUSE content on your orcharhino.

Preparing SUSE Installation Media

Extract the SUSE installation medium on your orcharhino Server to perform offline installations. This example prepares the installation medium for SUSE Linux Enterprise Server 15 SP3.

Procedure
  1. Download the SLES 15 SP3 installation medium from suse.com/download/sles.

  2. Download the signature and checksum from suse.com.

  3. Transfer the .iso image, the signature, and the checksum from your local machine to orcharhino Server using scp:

    # scp SLE-15-SP3-Full-x86_64-GM-Media1.iso root@orcharhino.example.com:/tmp/
    # scp SLE-15-SP3-Full-x86_64-GM-Media1.iso.sha256 root@orcharhino.example.com:/tmp/
    # scp sles_15_sp3.signature root@orcharhino.example.com:/tmp/
  4. On your orcharhino Server, verify the integrity and authenticity of the .iso image using GPG public keys:

    # cd /tmp
    # gpg --verify sles_15_sp3.signature SLE-15-SP3-Full-x86_64-GM-Media1.iso.sha256
    # echo '2a6259fc849fef6ce6701b8505f64f89de0d2882857def1c9e4379d26e74fa56 SLE-15-SP3-Full-x86_64-GM-Media1.iso' | sha256sum --check
  5. Mount the .iso image:

    # mount SLE-15-SP3-Full-x86_64-GM-Media1.iso /mnt
  6. Copy the content of the mounted installation medium to the pub directory:

    # mkdir -p /var/www/html/pub/installation_media/sles/15sp3
    # cp -a /mnt/* /var/www/html/pub/installation_media/sles/15sp3/
  7. Unmount and delete the .iso image:

    # cd
    # umount /mnt
    # rm -f /tmp/SLE-15-SP3-Full-x86_64-GM-Media1.iso
    # rm -f /tmp/SLE-15-SP3-Full-x86_64-GM-Media1.iso.sha256
    # rm -f /tmp/sles_15_sp3.signature

Use the path to the local installation media files when creating an installation medium, for example http://orcharhino.example.com/pub/installation_media/sles/15sp3/.
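The echo … | sha256sum --check pattern in step 4 pipes a "HASH  FILENAME" line to sha256sum on stdin. Because sha256sum --check reads the same format from a file, you can also point it at the downloaded checksum file, provided it uses that format. A self-contained sketch of the pattern, demonstrated on a small demo file instead of the installation .iso:

```shell
# Demonstrate the checksum verification pattern on a demo file.
printf 'demo payload\n' > demo.iso
# Record the "HASH  FILENAME" line, as the downloaded .sha256 file provides it.
sha256sum demo.iso > demo.iso.sha256
# Verify; prints "demo.iso: OK" on success, fails on any mismatch.
sha256sum --check demo.iso.sha256
```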

Creating a SUSE Operating System

Create an installation medium and an operating system entry for SLES 15 SP3.

Prerequisite
  • An extracted .iso image on orcharhino or an HTTP server that can be reached during the host provisioning process.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > Installation Media and click Create Medium.

  2. Set a Name for the installation medium for SLES 15 SP3. Consider including local in the name because the path of the extracted .iso image points to your orcharhino.

  3. Set the Path of the extracted .iso image. The content of /var/www/html/pub/ is publicly available on http://orcharhino.example.com/pub/. If you extract the .iso image to /var/www/html/pub/installation_media/sles/15sp3/, the path is http://orcharhino.example.com/pub/installation_media/sles/15sp3/.

  4. Set the Operating System Family to SUSE for all SLES systems.

  5. Set a location and organization context for the installation media.

  6. Click Submit to save the installation media entry for SLES 15 SP3.

  7. In the orcharhino management UI, navigate to Hosts > Operating Systems and click Create Operating System.

  8. Set the Name of the operating system. Choose the name that Ansible, Puppet, or Salt reports as a fact.

  9. Set the Major Version of SLES, for example 15.

  10. Set the Minor Version of SLES, for example 3 for SLES 15 SP3.

  11. Optional: Add an arbitrary Description.

  12. Set the Family to SUSE for all SLES systems.

  13. Set the Root Password Hash to SHA256 for SLES 15 SP3.

  14. Assign the Architectures to SLES 15 SP3.

  15. Click Submit to save the operating system entry.

  16. In the orcharhino management UI, navigate to Hosts > Partition Tables and click Create Partition Table. The partition tables are stored in the /usr/share/foreman/app/views/unattended/partition_tables_templates/ directory on your orcharhino Server.

    For more information, see Partition Tables in the Provisioning guide.

  17. In the orcharhino management UI, navigate to Hosts > Provisioning Templates and click Create Template. The provisioning templates are stored in the /usr/share/foreman/app/views/unattended/provisioning_templates/ directory on your orcharhino Server.

    For more information, see Provisioning Templates in the Provisioning guide.

  18. In the orcharhino management UI, navigate to Hosts > Operating Systems.

  19. Select the previously created operating system.

  20. On the Partition Table tab, select the previously created partition table.

  21. On the Templates tab, select the previously created provisioning template.

  22. Click Submit to save the operating system entry.

Installing the SCC Manager Plug-in

Perform the following steps to install the SCC Manager plug-in on your orcharhino.

Procedure
  1. Connect to your orcharhino Server using SSH:

    # ssh root@orcharhino.example.com
  2. Install the SCC Manager plug-in on your orcharhino:

    # yum install -y tfm-rubygem-foreman_scc_manager
  3. Run database migrations on your orcharhino:

    # foreman-rake db:migrate
    # foreman-rake db:seed
  4. Restart orcharhino services:

    # foreman-maintain service restart

Continue with adding your SCC account to orcharhino.

Adding SCC Accounts to orcharhino

Procedure
  1. In the orcharhino management UI, navigate to Content > Content Credentials and click Create Content Credential.

    Add the GPG public key for SLES 15 SP3 from suse.com. For more information, see content credentials.

  2. In the orcharhino management UI, navigate to Content > SUSE Subscriptions.

  3. Click Add SCC Account.

  4. Enter your account name and password.

  5. Optional: Set a Sync interval to periodically update the SCC authentication tokens. Note that this does not refer to synchronizing content to orcharhino.

  6. Assign a GPG key for SUSE products to the SCC products. zypper automatically verifies the signature of each software package to ensure its authenticity.

    You can also set the GPG public key for SUSE repositories at a later stage. However, changing it does not affect already synchronized products. If you already have synchronized products in orcharhino, navigate to Content > Products and replace the GPG key in each respective product.

  7. Click Test connection to verify your account information. Note that you have to re-enter your password if you are trying to check the connection to an SCC account that has already been saved to orcharhino.

  8. Click Submit to save your SCC account to orcharhino.

  9. In the orcharhino management UI, navigate to Content > SUSE Subscriptions, select the previously added SCC account, and click Sync to fetch a list of products associated with your SCC account.

Continue with importing products from your newly added SCC account.

Switching SCC Accounts

You can switch your SCC account by changing the SCC credentials saved on orcharhino.

If you want to switch your SCC account and retain the synchronized content, do not immediately delete your old SCC account, even if it is expired. If you delete your old SCC account, you cannot reuse existing repositories, products, content views, and composite content views.

Procedure
  1. In the orcharhino management UI, navigate to Content > SUSE Subscriptions.

  2. Select your SCC account.

  3. Enter a new Login and Password.

  4. Click Submit to change your SCC account.

Your new SCC account reuses existing products and repositories from SUSE. Content that is no longer available for your new SCC account cannot be synchronized anymore.

Importing SUSE Products

Procedure
  1. In the orcharhino management UI, navigate to Content > SUSE Subscriptions.

  2. Click Select products on your previously synchronized SCC account.

  3. Select all SUSE products you want to synchronize to orcharhino. This guide assumes you have access to and select the SUSE Linux Enterprise Server 15 SP3 x86_64 product.

  4. Click Submit to create the selected SUSE products in orcharhino.

Synchronizing SUSE Content

You can use orcharhino to synchronize SUSE content to deploy, attach, and serve content to managed hosts.

Procedure
  1. In the orcharhino management UI, navigate to Content > Products.

  2. Select the SUSE Linux Enterprise Server 15 SP3 x86_64 product and click Sync Now to synchronize the SUSE repositories for SLES 15 SP3 to orcharhino.

  3. In the orcharhino management UI, navigate to Content > Content Views.

  4. Create a Content View called SLES 15 SP3 comprising the SLES repositories created in the SLES 15 SP3 product and a Content View called SLES 15 SP3 orcharhino client comprising the orcharhino client repository created in the SLES 15 SP3 orcharhino client product.

    For more information, see creating a Content View.

  5. Publish a new version of both Content Views.

    For more information, see promoting a Content View.

  6. In the orcharhino management UI, navigate to Content > Content Views.

  7. Click Create Content View to create a Composite Content View called Composite SLES 15 SP3 comprising the previously published SLES 15 SP3 Content View, the SLES 15 SP3 orcharhino client Content View, and optionally further Content Views of your choice, for example a Content View containing Puppet. For information on how to synchronize the orcharhino client repository for SLES 15 SP3, see adding an orcharhino client; for the necessary upstream URL, see the ATIX Service Portal. For more information, see creating a Composite Content View.

  8. Publish a new version and promote this version to the Life Cycle Environment of your choice.

  9. In the orcharhino management UI, navigate to Content > Activation Keys.

  10. Click Create Activation Key to create an Activation Key called sles-15-sp3.

    For more information, see creating an Activation Key.

  11. On the Details tab, select a Life Cycle Environment and Composite Content View.

  12. On the Subscriptions tab, select the necessary subscriptions, for example SLES 15 SP3, SLES 15 SP3 orcharhino client, and Puppet.

You can now create a Host Group and assign the previously created Activation Key to it to deploy hosts running SLES 15 SP3 more comfortably.

Importing Content

This chapter outlines how you can import different types of custom content to orcharhino. Whether you import RPM or DEB packages, files, or Puppet modules, the procedures in this chapter are largely the same.

For example, you can use the following chapters for information on specific types of custom content, but the underlying procedures are the same.

Using Products and Repositories in orcharhino

Content from upstream sources such as Canonical, Oracle, Red Hat, and SUSE, as well as custom content, is handled similarly in orcharhino:

  • The relationship between a product and its repositories is the same and the repositories still require synchronization.

  • Custom products require a subscription for clients to access, similar to subscriptions to Red Hat Products. orcharhino creates a subscription for each custom product you create.

Red Hat content is already organized into Products. For example, Red Hat Enterprise Linux Server is a Product in orcharhino. The repositories for that Product consist of different versions, architectures, and add-ons. For Red Hat repositories, products are created automatically after enabling the repository. See Enabling Red Hat Repositories.

Other custom content can be organized into Products however you want. For example, you might create an EPEL (Extra Packages for Enterprise Linux) product and add an "EPEL 7 x86_64" repository to it.
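As a sketch, such an EPEL product and repository could be created with Hammer as follows; the upstream URL is illustrative and the organization name is a placeholder:

```
# hammer product create \
--name "EPEL" \
--organization "My_Organization"

# hammer repository create \
--content-type "yum" \
--name "EPEL 7 x86_64" \
--product "EPEL" \
--organization "My_Organization" \
--url "https://dl.fedoraproject.org/pub/epel/7/x86_64/"
```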

Importing Custom SSL Certificates

Before you synchronize custom content from an external source, you might need to import SSL certificates into your custom product. This might include client certificates and keys or CA certificates for the upstream repositories that you want to synchronize.

If you require SSL certificates and keys to download RPMs, you can add them to orcharhino.

Procedure
  1. In the orcharhino management UI, navigate to Content > Content Credentials. In the Content Credentials window, click Create Content Credential.

  2. In the Name field, enter a name for your SSL certificate.

  3. From the Type list, select SSL Certificate.

  4. In the Content Credentials Content field, paste your SSL certificate, or click Browse to upload your SSL certificate.

  5. Click Save.
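If you prefer the CLI, a Hammer equivalent might look as follows; the `cert` content type and the file path are assumptions to verify against your hammer version:

```
# hammer content-credentials create \
--content-type cert \
--name "My_SSL_Certificate" \
--organization "My_Organization" \
--path ~/my_ssl_certificate.pem
```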

Creating a Custom Product

Use this procedure to create a custom product that you can then add repositories to. To use the CLI instead of the orcharhino management UI, see the CLI procedure.

Procedure
  1. In the orcharhino management UI, navigate to Content > Products and click Create Product.

  2. In the Name field, enter a name for the product. orcharhino automatically completes the Label field based on what you have entered for Name.

  3. Optional: From the GPG Key list, select the GPG key for the product.

  4. Optional: From the SSL CA Cert list, select the SSL CA certificate for the product.

  5. Optional: From the SSL Client Cert list, select the SSL client certificate for the product.

  6. Optional: From the SSL Client Key list, select the SSL client key for the product.

  7. Optional: From the Sync Plan list, select an existing sync plan or click Create Sync Plan and create a sync plan for your product requirements.

  8. In the Description field, enter a description of the product.

  9. Click Save.

CLI procedure

To create the product, enter the following command:

# hammer product create \
--name "My_Product" \
--sync-plan "Example Plan" \
--description "Content from My Repositories" \
--organization "My_Organization"

Adding Custom RPM Repositories

Use this procedure to add custom RPM repositories in orcharhino. To use the CLI instead of the orcharhino management UI, see the CLI procedure.

The Products window in the orcharhino management UI also provides a Repo Discovery function that finds all repositories from a URL and you can select which ones to add to your custom product. For example, you can use the Repo Discovery to search http://yum.postgresql.org/9.5/redhat/ and list all repositories for different Red Hat Enterprise Linux versions and architectures. This helps users save time importing multiple repositories from a single source.

Procedure
  1. In the orcharhino management UI, navigate to Content > Products and select the product that you want to use, and then click New Repository.

  2. In the Name field, enter a name for the repository. orcharhino automatically completes the Label field based on what you have entered for Name.

  3. Optional: In the Description field, enter a description for the repository.

  4. From the Type list, select yum as the type of repository.

  5. Optional: From the Restrict to Architecture list, select an architecture. To make the repository available to all hosts regardless of architecture, select No restriction.

  6. Optional: From the Restrict to OS Version list, select the OS version. To make the repository available to all hosts regardless of OS version, select No restriction.

  7. In the Upstream URL field, enter the URL of the external repository to use as a source.

    • Optional: Check the Ignore SRPMs check box to exclude source RPM packages from being synchronized to orcharhino.

  8. Select the Verify SSL check box if you want to verify that the upstream repository’s SSL certificates are signed by a trusted CA.

  9. In the Upstream Username field, enter the user name for the upstream repository if required for authentication. Clear this field if the repository does not require authentication.

  10. In the Upstream Password field, enter the corresponding password for the upstream repository. Clear this field if the repository does not require authentication.

  11. From the Download Policy list, select the type of synchronization orcharhino Server performs. For more information, see Download Policies Overview.

  12. Select the Mirror on Sync check box. This ensures that content that is no longer part of the upstream repository is removed during synchronization.

  13. Optional: In the HTTP Proxy Policy field, select an HTTP proxy.

  14. From the Checksum list, select the checksum type for the repository.

  15. Optional: Clear the Publish via HTTP check box to prevent this repository from being published through HTTP.

  16. Optional: From the GPG Key list, select the GPG key for the product.

  17. Optional: In the SSL CA Cert field, select the SSL CA Certificate for the repository.

  18. Optional: In the SSL Client cert field, select the SSL Client Certificate for the repository.

  19. Optional: In the SSL Client Key field, select the SSL Client Key for the repository.

  20. Click Save to create the repository.

CLI procedure
  1. Enter the following command to create the repository:

    # hammer repository create \
    --arch "My_Architecture" \
    --content-type "yum" \
    --gpg-key-id My_GPG_Key_ID \
    --name "My_Repository" \
    --organization "My_Organization" \
    --os-version "My_OS_Version" \
    --product "My_Product" \
    --publish-via-http true \
    --url My_Upstream_URL

Extracting GPG Public Key Fingerprints from a Release File

You can use GPG public keys to verify the authenticity of APT repositories by checking the signature of the Release file. This example verifies the signature of the Release file from Debian 11.

Procedure
  1. Download the Release and Release.gpg files:

    $ wget https://ftp.debian.org/debian/dists/bullseye/Release
    $ wget https://ftp.debian.org/debian/dists/bullseye/Release.gpg
  2. Verify the signature:

    $ gpg --verify Release.gpg Release
  3. Download the required public GPG key:

    $ gpg --keyserver hkp://keyserver.ubuntu.com --recv-keys 0146DC6D4A0B2914BDED34DB648ACFD622F3D138
    $ gpg --keyserver hkp://keyserver.ubuntu.com --recv-keys A7236886F3CCCAAD148A27F80E98404D386FA1D9
    $ gpg --keyserver hkp://keyserver.ubuntu.com --recv-keys A4285295FC7B1A81600062A9605C66F00D6C9793
  4. Optional: Verify the signature again:

    $ gpg --verify Release.gpg Release
  5. Export the ASCII-armored GPG public keys to a file:

    $ gpg --armor --export 0146DC6D4A0B2914BDED34DB648ACFD622F3D138 A7236886F3CCCAAD148A27F80E98404D386FA1D9 A4285295FC7B1A81600062A9605C66F00D6C9793 > debian_11.txt

    Upload this .txt file to orcharhino as described below.

Continue to import the custom GPG public key into orcharhino.

Adding DEB Repositories

Use this procedure to add DEB repositories in orcharhino. To use the CLI instead of the orcharhino management UI, see the CLI procedure.

Procedure
  1. In the orcharhino management UI, navigate to Content > Products and select the product that you want to use, and then click New Repository.

  2. In the Name field, enter a name for the repository. orcharhino automatically completes the Label field based on what you have entered for Name.

  3. Optional: In the Description field, enter a description for the repository.

  4. From the Type list, select deb as the type of repository.

  5. In the Upstream URL field, enter the URL of the external repository to use as a source. You can find the upstream URLs on Debian-based systems in /etc/apt/sources.list.

  6. Optional: In the Releases field, set one or multiple releases.

    For official Debian repositories, set a codename in the Releases field, for example buster for Debian 10 or bullseye for Debian 11. Avoid using stable or testing because the codename they reference changes over time, which can cause drastic changes once a new Debian version is released. To keep repositories easy to manage and to avoid potential performance and network issues during synchronization, create one repository per release in orcharhino. For official Ubuntu repositories, use the Ubuntu suite, for example focal or focal-updates.

  7. Optional: In the Components field, enter a component. This indicates the licensing terms of the software packages.

    In Debian, the archive is divided into the main, contrib, and non-free components. For official Debian or Ubuntu repositories, ATIX AG recommends leaving this field empty to synchronize all available components. Note that some third-party Debian repositories use components in ways that may require an explicit selection.

    Ensure that you enter both Releases and Components exactly as they are in an /etc/apt/sources.list file.

  8. Optional: In the Architectures field, enter one or multiple architectures to limit the synchronization of packages to a subset of the available architectures. Leave this field empty to synchronize all available architectures.

  9. Optional: In the Errata URL field, enter the URL of an errata service.

  10. Select the Verify SSL check box if you want to verify that the upstream repository’s SSL certificates are signed by a trusted CA.

  11. In the Upstream Username field, enter the user name for the upstream repository if required for authentication. Clear this field if the repository does not require authentication.

  12. In the Upstream Password field, enter the corresponding password for the upstream repository. Clear this field if the repository does not require authentication.

  13. Select the Mirror on Sync check box. This ensures that content that is no longer part of the upstream repository is removed during synchronization.

  14. Optional: In the HTTP Proxy Policy field, select an HTTP proxy.

  15. Optional: Clear the Publish via HTTP check box to prevent this repository from being published through HTTP.

  16. Optional: From the GPG Key list, select the GPG key if you want to verify the signatures of the Release files associated with the Debian repository.

  17. Optional: In the SSL CA Cert field, select the SSL CA Certificate for the repository.

  18. Optional: In the SSL Client cert field, select the SSL Client Certificate for the repository.

  19. Optional: In the SSL Client Key field, select the SSL Client Key for the repository.

  20. Click Save to create the repository.

CLI procedure
  1. Enter the following command to create the repository:

    # hammer repository create \
    --content-type "deb" \
    --deb-architectures "My_DEB_Architectures" \
    --deb-components "My_DEB_Components" \
    --deb-releases "My_DEB_Releases" \
    --gpg-key-id "My_GPG_Key_ID" \
    --name "My_Repository" \
    --organization "My_Organization" \
    --product "My_Product" \
    --publish-via-http true \
    --url My_Upstream_URL
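The Releases, Components, and Architectures options map directly to the parts of a standard /etc/apt/sources.list entry. The following mapping is illustrative:

```
# /etc/apt/sources.list entry:
deb [arch=amd64] http://ftp.debian.org/debian/ bullseye main contrib

# corresponding repository fields:
#   Upstream URL:  http://ftp.debian.org/debian/
#   Releases:      bullseye
#   Components:    main contrib
#   Architectures: amd64
```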

Adding DEB Repository Example for Debian 11

This example creates a product and repositories for Debian 11.

Prerequisites
  1. Extract the GPG public keys from the Release file

  2. Import the content credentials into orcharhino

Procedure
  1. In the orcharhino management UI, navigate to Content > Products.

  2. Click Create Product to create a product named Debian 11.

  3. On the Repositories tab, click New Repository to create three repositories of type deb as follows:

    • Debian 11 main

      • URL: http://ftp.debian.org/debian/

      • Releases: bullseye

      • Component: main

      • Architecture: amd64

    • Debian 11 updates

      • URL: http://ftp.debian.org/debian/

      • Releases: bullseye-updates

      • Component: main

      • Architecture: amd64

    • Debian 11 security

      • URL: http://deb.debian.org/debian-security/

      • Releases: bullseye/updates

      • Component: main

      • Architecture: amd64

  4. Click Create Product to create a product named Debian 11 client.

  5. On the Repositories tab, click New Repository to create a repository of type deb as follows:

    • Debian 11 client
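Using the Hammer CLI instead, the Debian 11 main repository from this example could be created as follows (a sketch; the organization name is a placeholder):

```
# hammer repository create \
--content-type "deb" \
--name "Debian 11 main" \
--product "Debian 11" \
--organization "My_Organization" \
--url "http://ftp.debian.org/debian/" \
--deb-releases "bullseye" \
--deb-components "main" \
--deb-architectures "amd64"
```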

Enabling Red Hat Repositories

If outside network access requires an HTTP proxy, configure a default HTTP proxy for your server. See Adding a default HTTP Proxy.

To select the repositories to synchronize, you must first identify the product that contains the repository, and then enable that repository based on the relevant release version and base architecture. For Red Hat Enterprise Linux 8, you must enable both AppStream and BaseOS repositories.

Repository Versioning

The difference between associating the Red Hat Enterprise Linux operating system with 7 Server repositories or 7.X repositories is that 7 Server repositories contain all the latest updates, while Red Hat Enterprise Linux 7.X repositories stop receiving updates after the next minor version release. Note that Kickstart repositories only have minor versions.

For Red Hat Enterprise Linux 8 Clients

To provision Red Hat Enterprise Linux 8 clients, you require the Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMS) and Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs) repositories.

For Red Hat Enterprise Linux 7 Clients

To provision Red Hat Enterprise Linux 7 clients, you require the Red Hat Enterprise Linux 7 Server (RPMs) repository.

Procedure
  1. In the orcharhino management UI, navigate to Content > Red Hat Repositories.

  2. To find repositories, either enter the repository name, or toggle the Recommended Repositories button to the on position to view a list of repositories that you require.

  3. In the Available Repositories pane, click a repository to expand the repository set.

  4. Click the Enable icon next to the base architecture and release version that you want.

CLI procedure
  1. To search for your product, enter the following command:

    # hammer product list --organization "My_Organization"
  2. List the repository set for the product:

    # hammer repository-set list \
    --product "Red Hat Enterprise Linux Server" \
    --organization "My_Organization"
  3. Enable the repository using either the name or ID number. Include the release version, for example 7Server, and the base architecture, for example x86_64:

    # hammer repository-set enable \
    --name "Red Hat Enterprise Linux 7 Server (RPMs)" \
    --releasever "7Server" \
    --basearch "x86_64" \
    --product "Red Hat Enterprise Linux Server" \
    --organization "My_Organization"

Synchronizing Repositories

Procedure
  1. In the orcharhino management UI, navigate to Content > Products and select the product that contains the repositories that you want to synchronize.

  2. Select the repositories that you want to synchronize and click Sync Now.

To view the progress of the synchronization in the orcharhino management UI, navigate to Content > Sync Status and expand the corresponding product or repository tree.

CLI procedure
  • Synchronize an entire Product:

    # hammer product synchronize \
    --name "My_Product" \
    --organization "My_Organization"
  • Synchronize an individual repository:

    # hammer repository synchronize \
    --name "My_Repository" \
    --organization "My_Organization" \
    --product "My Product"

The synchronization duration depends on the size of each repository and the speed of your network connection. The following table provides estimates of how long it would take to synchronize content, depending on the available Internet bandwidth:

Bandwidth       Single Package (10Mb)   Minor Release (750Mb)    Major Release (6Gb)
256 Kbps        5 Mins 27 Secs          6 Hrs 49 Mins 36 Secs    2 Days 7 Hrs 55 Mins
512 Kbps        2 Mins 43.84 Secs       3 Hrs 24 Mins 48 Secs    1 Day 3 Hrs 57 Mins
T1 (1.5 Mbps)   54.33 Secs              1 Hr 7 Mins 54.78 Secs   9 Hrs 16 Mins 20.57 Secs
10 Mbps         8.39 Secs               10 Mins 29.15 Secs       1 Hr 25 Mins 53.96 Secs
100 Mbps        0.84 Secs               1 Min 2.91 Secs          8 Mins 35.4 Secs
1000 Mbps       0.08 Secs               6.29 Secs                51.54 Secs
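These estimates follow from dividing the content size by the available bandwidth, taking 1 Mb in the table as 2^20 bytes and 1 Kbps as 1,000 bits per second (which matches the table's figures). For example, a 750 Mb minor release over a 256 Kbps link:

```shell
# 750 MB expressed in bits, divided by 256,000 bits per second,
# yields the transfer time in seconds: 24576 s,
# that is, 6 hours 49 minutes 36 seconds.
echo $(( 750 * 1024 * 1024 * 8 / 256000 ))
```

The other table entries follow from the same arithmetic.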

Create a sync plan to ensure updates on a regular basis. See Creating a Sync Plan.

Synchronizing All Repositories in an Organization

Use this procedure to synchronize all repositories within an organization.

Procedure

To synchronize all repositories within an organization, run the following Bash script on your orcharhino Server:

ORG="Your_Organization"

for i in $(hammer --no-headers --csv repository list --organization "$ORG" | awk -F, '{print $1}')
do
  hammer repository synchronize --id "${i}" --organization "$ORG" --async
done
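The script extracts the first CSV column, the repository ID, from each line of the `hammer` output. The following demonstrates that extraction on fabricated sample lines (the CSV content below is made up for illustration; real `--csv` output has the repository ID in the first column):

```shell
# Extract the first CSV field (the repository ID) from sample lines.
printf '%s\n' \
  '1,Debian 11 main,Debian 11,deb' \
  '2,EPEL 7 x86_64,EPEL,yum' |
awk -F, '{print $1}'
# Prints:
# 1
# 2
```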

Download Policies Overview

orcharhino provides multiple download policies for synchronizing RPM and DEB content. For example, you might want to download only the content metadata while deferring the actual content download for later.

orcharhino Server has the following policies:

Immediate

orcharhino Server downloads all metadata and packages during synchronization.

On Demand

orcharhino Server downloads only the metadata during synchronization. orcharhino Server only fetches and stores packages on the file system when orcharhino Proxies or directly connected clients request them. This setting has no effect if you set a corresponding repository on an orcharhino Proxy to Immediate because orcharhino Server is then forced to download all the packages.

The On Demand policy acts as a lazy synchronization feature because it saves time when synchronizing content. Use the lazy synchronization feature only for yum repositories. You can add the packages to Content Views and promote them to life cycle environments as normal.

orcharhino Proxy has the following policies:

Immediate

orcharhino Proxy downloads all metadata and packages during synchronization. Do not use this setting if the corresponding repository on orcharhino Server is set to On Demand as orcharhino Server is forced to download all the packages.

On Demand

orcharhino Proxy only downloads the metadata during synchronization. orcharhino Proxy fetches and stores packages only on the file system when directly connected clients request them. When you use an On Demand download policy, content is downloaded from orcharhino Server if it is not available on orcharhino Proxy.

Inherit

orcharhino Proxy inherits the download policy for the repository from the corresponding repository on orcharhino Server.

Changing the Default Download Policy

You can set the default download policy that orcharhino applies to repositories that you create in all organizations.

Depending on whether it is a Red Hat, SUSE, or custom repository, orcharhino uses separate settings. Changing the default value does not change existing settings.

Procedure
  1. In the orcharhino management UI, navigate to Administer > Settings.

  2. Click the Content tab.

  3. Change the default download policy depending on your requirements:

    • To change the default download policy for a Red Hat repository, change the value of the Default Red Hat Repository download policy setting.

    • To change the default download policy for a non-Red Hat custom repository, change the value of the Default Custom Repository download policy setting.

CLI procedure
  • To change the default download policy for Red Hat repositories to either immediate or on_demand, enter the following command:

    # hammer settings set \
    --name default_redhat_download_policy \
    --value immediate
  • To change the default download policy for a custom repository to either immediate or on_demand, enter the following command:

    # hammer settings set \
    --name default_download_policy \
    --value immediate

Changing the Download Policy for a Repository

You can set the download policy for a repository.

Procedure
  1. In the orcharhino management UI, navigate to Content > Products.

  2. Select the required product name.

  3. On the Repositories tab, click the required repository name, locate the Download Policy field, and click the edit icon.

  4. From the list, select the required download policy and then click Save.

CLI procedure
  1. List the repositories for an organization:

    # hammer repository list \
    --organization-label organization-label
  2. Change the download policy for a repository to immediate or on_demand:

    # hammer repository update \
    --organization-label organization-label  \
    --product "Red Hat Enterprise Linux Server" \
    --name "Red Hat Enterprise Linux 7 Server Kickstart x86_64 7.5"  \
    --download-policy immediate

Uploading Content to Custom RPM Repositories

You can upload individual RPMs and source RPMs to custom RPM repositories. You can upload RPMs using the orcharhino management UI or the Hammer CLI. You must use the Hammer CLI to upload source RPMs.

Procedure
  1. In the orcharhino management UI, click Content > Products.

  2. Click the name of the custom product.

  3. In the Repositories tab, click the name of the custom RPM repository.

  4. Under Upload Package, click Browse…​ and select the RPM you want to upload.

  5. Click Upload.

To view all RPMs in this repository, click the number next to Packages under Content Counts.

CLI procedure
  • Enter the following command to upload an RPM:

    # hammer repository upload-content \
    --id Repository_ID \
    --path /path/to/example-package.rpm
  • Enter the following command to upload a source RPM:

    # hammer repository upload-content \
    --content-type srpm \
    --id Repository_ID \
    --path /path/to/example-package.src.rpm

    When the upload is complete, you can view information about a source RPM by using the commands hammer srpm list and hammer srpm info --id srpm_ID.

Configuring SELinux to Permit Content Synchronization on Custom Ports

SELinux permits orcharhino to access content for synchronization only on specific ports. By default, connecting to web servers running on the following ports is permitted: 80, 81, 443, 488, 8008, 8009, 8443, and 9000.

Procedure
  1. On orcharhino, to verify the ports that are permitted by SELinux for content synchronization, enter a command as follows:

    # semanage port -l | grep ^http_port_t
    http_port_t     tcp      80, 81, 443, 488, 8008, 8009, 8443, 9000
  2. To configure SELinux to permit a port for content synchronization, for example 10011, enter a command as follows:

    # semanage port -a -t http_port_t -p tcp 10011

Recovering a Corrupted Repository

In case of repository corruption, you can recover it by using an advanced synchronization, which has three options:

Optimized Sync

Synchronizes the repository, bypassing RPMs that have no detected differences from the upstream RPMs.

Complete Sync

Synchronizes all RPMs regardless of detected changes. Use this option if specific RPMs could not be downloaded to the local repository even though they exist in the upstream repository.

Validate Content Sync

Synchronizes all RPMs and then verifies the checksum of all RPMs locally. If the checksum of an RPM differs from the upstream, it re-downloads the RPM. This option is relevant only for yum repositories. Use this option if you have one of the following errors:

  • Specific RPMs cause a 404 error while synchronizing with yum.

  • Package does not match intended download error, which means that specific RPMs are corrupted.

Procedure
  1. In the orcharhino management UI, navigate to Content > Products.

  2. Select the product containing the corrupted repository.

  3. Select the name of a repository you want to synchronize.

  4. From the Select Action menu, select Advanced Sync.

  5. Select the option and click Sync.

CLI procedure
  1. Obtain a list of repository IDs:

    # hammer repository list \
    --organization "My_Organization"
  2. Synchronize a corrupted repository using the necessary option:

    • For the optimized synchronization:

      # hammer repository synchronize \
      --id My_ID
    • For the complete synchronization:

      # hammer repository synchronize \
      --id My_ID \
      --skip-metadata-check true
    • For the validate content synchronization:

      # hammer repository synchronize \
      --id My_ID \
      --validate-contents true

Adding an HTTP Proxy

Use this procedure to add HTTP proxies to orcharhino. You can then specify which HTTP proxy to use for Products, repositories, and supported compute resources.

If orcharhino Server uses a proxy to communicate with subscription.rhsm.redhat.com, subscription.rhn.redhat.com, or cdn.redhat.com, the proxy must not perform SSL inspection on these communications.

To use the CLI instead of the orcharhino management UI, see the CLI procedure.

Procedure
  1. In the orcharhino management UI, navigate to Infrastructure > HTTP Proxies and select New HTTP Proxy.

  2. In the Name field, enter a name for the HTTP proxy.

  3. In the URL field, enter the URL for the HTTP proxy, including the port number. The following host names are available:

    Host name                     Port  Protocol
    subscription.rhsm.redhat.com  443   HTTPS
    subscription.rhn.redhat.com   443   HTTPS
    cdn.redhat.com                443   HTTPS

  4. If your HTTP proxy requires authentication, enter a Username and Password.

  5. Optional: In the Test URL field, enter the HTTP proxy URL, then click Test Connection to ensure that you can connect to the HTTP proxy from orcharhino.

  6. Click the Locations tab and add a location.

  7. Click the Organization tab and add an organization.

  8. Click Submit.

CLI procedure
  • On orcharhino Server, enter the following command to add a new HTTP proxy:

    # hammer http-proxy create --name proxy-name \
    --url proxy-URL:port-number

    If your HTTP proxy requires authentication, add the --username name and --password password options.

Changing the HTTP Proxy Policy for a Product

For granular control over network traffic, you can set an HTTP proxy policy for each Product. A Product’s HTTP proxy policy applies to all repositories in the Product, unless you set a different policy for individual repositories.

To set an HTTP proxy policy for individual repositories, see Changing the HTTP Proxy Policy for a Repository.

Procedure
  1. In the orcharhino management UI, navigate to Content > Products and select the check box next to each of the Products that you want to change.

  2. From the Select Action list, select Manage HTTP Proxy.

  3. Select an HTTP Proxy Policy from the list:

    • Global Default: Use the global default proxy setting.

    • No HTTP Proxy: Do not use an HTTP proxy, even if a global default proxy is configured.

    • Use specific HTTP Proxy: Select an HTTP Proxy from the list. You must add HTTP proxies to orcharhino before you can select a proxy from this list. For more information, see Adding an HTTP Proxy.

  4. Click Update.

Changing the HTTP Proxy Policy for a Repository

For granular control over network traffic, you can set an HTTP proxy policy for each repository. To use the CLI instead of the orcharhino management UI, see the CLI procedure.

To set the same HTTP proxy policy for all repositories in a Product, see Changing the HTTP Proxy Policy for a Product.

Procedure
  1. In the orcharhino management UI, navigate to Content > Products and click the name of the Product that contains the repository.

  2. In the Repositories tab, click the name of the repository.

  3. Locate the HTTP Proxy field and click the edit icon.

  4. Select an HTTP Proxy Policy from the list:

    • Global Default: Use the global default proxy setting.

    • No HTTP Proxy: Do not use an HTTP proxy, even if a global default proxy is configured.

    • Use specific HTTP Proxy: Select an HTTP Proxy from the list. You must add HTTP proxies to orcharhino before you can select a proxy from this list. For more information, see Adding an HTTP Proxy.

  5. Click Save.

CLI procedure
  • On orcharhino Server, enter the following command, specifying the HTTP proxy policy you want to use:

    # hammer repository update --id repository-ID \
    --http-proxy-policy policy

    Specify one of the following options for --http-proxy-policy:

    • none: Do not use an HTTP proxy, even if a global default proxy is configured.

    • global_default_http_proxy: Use the global default proxy setting.

    • use_selected_http_proxy: Specify an HTTP proxy using either --http-proxy proxy-name or --http-proxy-id proxy-ID. To add a new HTTP proxy to orcharhino, see Adding an HTTP Proxy.
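For example, to assign a previously created proxy to a repository (the repository ID and proxy name are placeholders):

```
# hammer repository update --id repository-ID \
--http-proxy-policy use_selected_http_proxy \
--http-proxy proxy-name
```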

Creating a Sync Plan

A sync plan checks and updates the content at a scheduled date and time. In orcharhino, you can create a sync plan and assign products to the plan.

To use the CLI instead of the orcharhino management UI, see the CLI procedure.

Procedure
  1. In the orcharhino management UI, navigate to Content > Sync Plans and click New Sync Plan.

  2. In the Name field, enter a name for the plan.

  3. Optional: In the Description field, enter a description of the plan.

  4. From the Interval list, select the interval at which you want the plan to run.

  5. From the Start Date and Start Time lists, select when to start running the synchronization plan.

  6. Click Save.

CLI procedure
  1. To create the synchronization plan, enter the following command:

    # hammer sync-plan create \
    --name "Red Hat Products 2" \
    --description "Example Plan for Red Hat Products" \
    --interval daily \
    --sync-date "2016-02-01 01:00:00" \
    --enabled true \
    --organization "My_Organization"
  2. View the available sync plans for an organization to verify that the sync plan has been created:

    # hammer sync-plan list --organization "Default Organization"

Assigning a Sync Plan to a Product

A sync plan checks and updates the content at a scheduled date and time. In orcharhino, you can assign a sync plan to products to update content regularly.

To use the CLI instead of the orcharhino management UI, see the CLI procedure.

Procedure
  1. In the orcharhino management UI, navigate to Content > Products.

  2. Select a product.

  3. On the Details tab, select a Sync Plan from the drop down menu.

CLI procedure
  1. Assign a sync plan to a product:

    # hammer product set-sync-plan \
    --name "Product_Name" \
    --sync-plan "Sync_Plan_Name" \
    --organization "My_Organization"

Assigning a Sync Plan to Multiple Products

Use this procedure to assign a sync plan to the products in an organization that have been synchronized at least once and contain at least one repository.

Procedure
  1. Run the following Bash script:

    ORG="Your_Organization"
    SYNC_PLAN="daily_sync_at_3_a.m"
    
    hammer sync-plan create --name "$SYNC_PLAN" --interval daily --sync-date "2018-06-20 03:00:00" --enabled true --organization "$ORG"
    
    for i in $(hammer --no-headers --csv product list --organization "$ORG" --per-page 999 | grep -vi not_synced | awk -F, '{ if ($5!=0) print $1 }')
    do
      hammer product set-sync-plan --sync-plan "$SYNC_PLAN" --organization "$ORG" --id "$i"
    done
  2. After executing the script, view the products assigned to the sync plan:

    # hammer product list --organization $ORG --sync-plan $SYNC_PLAN

Limiting Synchronization Concurrency

By default, each Repository Synchronization job can fetch up to ten files at a time. This can be adjusted on a per repository basis.

Increasing the limit may improve performance, but can cause the upstream server to be overloaded or start rejecting requests. If you are seeing Repository syncs fail due to the upstream servers rejecting requests, you may want to try lowering the limit.

CLI procedure
# hammer repository update \
--download-concurrency 5 \
--id Repository_ID \
--organization "My_Organization"

Importing a Custom GPG Key

When clients are consuming signed custom content, ensure that the clients are configured to validate the installation of RPMs with the appropriate GPG Key. This helps to ensure that only packages from authorized sources can be installed.

Red Hat content is already configured with the appropriate GPG key and thus GPG Key management of Red Hat Repositories is not supported.

To use the CLI instead of the orcharhino management UI, see the CLI procedure.

Prerequisites

Ensure that you have a copy of the GPG key used to sign the RPM content that you want to use and manage in orcharhino. Most RPM distribution providers provide their GPG Key on their website. You can also extract this manually from an RPM:

  1. Download a copy of the version specific repository package to your client system:

    $ wget http://www.example.com/9.5/example-9.5-2.noarch.rpm
  2. Extract the RPM file without installing it:

    $ rpm2cpio example-9.5-2.noarch.rpm | cpio -idmv

The GPG key is located relative to the extraction at etc/pki/rpm-gpg/RPM-GPG-KEY-EXAMPLE-95.

Procedure
  1. In the orcharhino management UI, navigate to Content > Content Credentials and in the upper-right of the window, click Create Content Credential.

  2. Enter the name of your repository and select GPG Key from the Type list.

  3. Either paste the GPG key into the Content Credential Contents field, or click Browse and select the GPG key file that you want to import.

    If your custom repository contains content signed by multiple GPG keys, you must enter all required GPG keys in the Content Credential Contents field with new lines between each key, for example:

    -----BEGIN PGP PUBLIC KEY BLOCK-----
    
    mQINBFy/HE4BEADttv2TCPzVrre+aJ9f5QsR6oWZMm7N5Lwxjm5x5zA9BLiPPGFN
    4aTUR/g+K1S0aqCU+ZS3Rnxb+6fnBxD+COH9kMqXHi3M5UNzbp5WhCdUpISXjjpU
    XIFFWBPuBfyr/FKRknFH15P+9kLZLxCpVZZLsweLWCuw+JKCMmnA
    =F6VG
    -----END PGP PUBLIC KEY BLOCK-----
    
    -----BEGIN PGP PUBLIC KEY BLOCK-----
    
    mQINBFw467UBEACmREzDeK/kuScCmfJfHJa0Wgh/2fbJLLt3KSvsgDhORIptf+PP
    OTFDlKuLkJx99ZYG5xMnBG47C7ByoMec1j94YeXczuBbynOyyPlvduma/zf8oB9e
    Wl5GnzcLGAnUSRamfqGUWcyMMinHHIKIc1X1P4I=
    =WPpI
    -----END PGP PUBLIC KEY BLOCK-----
  4. Click Save.

CLI procedure
  1. Copy the GPG key to your orcharhino Server:

    $ scp ~/etc/pki/rpm-gpg/RPM-GPG-KEY-EXAMPLE-95 root@orcharhino.example.com:~/.
  2. Upload the GPG key to orcharhino:

    # hammer gpg create \
    --key ~/RPM-GPG-KEY-EXAMPLE-95 \
    --name "My_Repository" \
    --organization "My_Organization"

Best Practices for Products and Repositories

  • We recommend keeping products as small as possible. For example, one product each for PostgreSQL-7.0, PostgreSQL-7.1, and PostgreSQL-8.1 with one repository each is better than a single PostgreSQL product including multiple repositories for different versions. Likewise, we recommend separate products for things like CentOS-7.1, Apache 2.0, MySQL-5.1, and PostgreSQL 7.1 rather than a single WebServer product. Because products correspond to subscriptions, small products give you greater control over which subscriptions, and by extension which content, are used via your activation keys.

  • Separate products by major release, for example Oracle Linux 7 and Oracle Linux 8 or Ubuntu 18.04 and Ubuntu 20.04.

  • Use one content type per product and content view, for example deb or yum only.

  • Make file repositories available through HTTP. Using TLS currently only works with a global debugging certificate.

  • Add sync plans to products.

  • Spread the start times of sync plans over a large time span. We do not recommend starting to synchronize many repositories at the same time.

  • Synchronize your content regularly to avoid overly large changes in repositories. We recommend synchronizing content each night rather than only once a month.

  • Use file repositories for installation media. File repositories require a Pulp Manifest file, which you can create using pulp-manifest /path/to/files. In orcharhino, use local repositories with file:///path/to/files as the upstream URL. Alternatively, place the installation media file repositories on a web server that hosts can reach during provisioning.

    ATIX provides file repositories for installation media. Once synchronized, you can perform offline installations using local installation media. See the ATIX Service Portal for the upstream URL.

    ATIX provides the following installation media as file repositories:

    • Debian 9 and 10

    • Ubuntu 18.04

    • Ubuntu 20.04

  • Use a Hammer script or Ansible playbook to create many products and repositories when starting with a fresh orcharhino installation.

  • For SUSE and RHEL content, use the SCC Manager Plugin or import the Red Hat Manifest file to orcharhino.
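The small-products layout recommended above can be scripted with Hammer. A minimal sketch; the product name, repository name, and URL below are placeholders, not real upstream sources:

```shell
# Sketch: one small product with a single yum repository
# (names and the URL are placeholders)
ORG="My_Organization"

hammer product create \
  --name "PostgreSQL-13" \
  --organization "$ORG"

hammer repository create \
  --name "PostgreSQL 13 EL8 x86_64" \
  --product "PostgreSQL-13" \
  --content-type yum \
  --url "https://download.example.com/pgsql/13/el8/x86_64/" \
  --organization "$ORG"
```

Loop over a list of such name/URL pairs to bootstrap many products on a fresh installation.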

Best Practices for Sync Plans

  • Use sync plans to periodically synchronize content to orcharhino.

  • Sync plans are associated with products.

  • Use a cron line to specify recurring synchronization with cron expressions.

  • We recommend distributing synchronization tasks over several hours by creating multiple sync plans with different cron lines. This reduces the task load. A single sync plan starts all sync tasks at the same time, which results in a high load that significantly lowers the performance of your orcharhino and might occasionally cause difficulties in correctly assigning Pulp tasks.

    Table 1. Cron Expression Examples

    Cron Expression    Explanation
    0 22 * * 1-5       every night at 22:00, Monday through Friday
    30 3 * * 6,0       at 03:30 on Saturday and Sunday
    15 1-9/2 * * *     at 15 minutes past every second hour from 01:00 through 09:00
    30 2 8-14 * *      at 02:30 on every day of the month from the 8th through the 14th
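A sync plan following the first cron expression above can be sketched with Hammer. This assumes your orcharhino version supports the custom cron interval; the plan name is a placeholder:

```shell
# Sketch: nightly sync at 22:00, Monday through Friday
# ("Nightly Weekday Sync" and "My_Organization" are placeholders)
hammer sync-plan create \
  --name "Nightly Weekday Sync" \
  --interval "custom cron" \
  --cron-expression "0 22 * * 1-5" \
  --enabled true \
  --organization "My_Organization"
```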

Managing Application Life Cycles

This chapter outlines the application life cycle in orcharhino and how to create and remove application life cycles for orcharhino and orcharhino Proxy.

Introduction to Application Life Cycle

The application life cycle is a concept central to orcharhino’s content management functions. The application life cycle defines how a particular system and its software look at a particular stage. For example, an application life cycle might be simple; you might only have a development stage and production stage. In this case the application life cycle might look like this:

  • Development

  • Production

However, a more complex application life cycle might have further stages, such as a phase for testing or a beta release. This adds extra stages to the application life cycle:

  • Development

  • Testing

  • Beta Release

  • Production

orcharhino provides methods to customize each application life cycle stage so that it suits your specifications.

Each stage in the application life cycle is called an environment in orcharhino. Each environment uses a specific collection of content. orcharhino defines these content collections as Content Views. Each Content View acts as a filter where you can define which repositories and packages to include in a particular environment. This provides a method for you to designate specific sets of content to each environment.

For example, an email server might only require a simple application life cycle where you have a production-level server for real-world use and a test server for trying out the latest mail server packages. When the test server passes the initial phase, you can set the production-level server to use the new packages.

Another example is a development life cycle for a software product. To develop a new piece of software in a development environment, test it in a quality assurance environment, pre-release as a beta, then release the software as a production-level application.

The orcharhino Application Life Cycle
Figure 1. The orcharhino Application Life Cycle

Promoting Content across the Application Life Cycle

In the application life cycle chain, when content moves from one environment to the next, this is called promotion.

Example Content Promotion Across orcharhino Life Cycle Environments

Each environment contains a set of systems registered to orcharhino. These systems only have access to repositories relevant to their environment. When you promote packages from one environment to the next, the target environment’s repositories receive new package versions. As a result, each system in the target environment can update to the new package versions.

This example uses a .rpm package, but you can promote any type of content across the application life cycle.

For example, the Development team creates a patch, example_software 1.1, in the Development environment. The application life cycle then contains the following package versions in each environment:

Development   example_software-1.1-0.noarch.rpm
Testing       example_software-1.0-0.noarch.rpm
Production    example_software-1.0-0.noarch.rpm

After completing development on the patch, you promote the package to the Testing environment so the Quality Engineering team can review the patch. The application life cycle then contains the following package versions in each environment:

Development   example_software-1.1-0.noarch.rpm
Testing       example_software-1.1-0.noarch.rpm
Production    example_software-1.0-0.noarch.rpm

While the Quality Engineering team reviews the patch, the Development team starts work on example_software 2.0. This results in the following application life cycle:

Development   example_software-2.0-0.noarch.rpm
Testing       example_software-1.1-0.noarch.rpm
Production    example_software-1.0-0.noarch.rpm

The Quality Engineering team completes their review of the patch. Now example_software 1.1 is ready to release. You promote 1.1 to the Production environment:

Development   example_software-2.0-0.noarch.rpm
Testing       example_software-1.1-0.noarch.rpm
Production    example_software-1.1-0.noarch.rpm

The Development team completes their work on example_software 2.0 and promotes it to the Testing environment:

Development   example_software-2.0-0.noarch.rpm
Testing       example_software-2.0-0.noarch.rpm
Production    example_software-1.1-0.noarch.rpm

Finally, the Quality Engineering team reviews the package. After a successful review, promote the package to the Production environment:

Development   example_software-2.0-0.noarch.rpm
Testing       example_software-2.0-0.noarch.rpm
Production    example_software-2.0-0.noarch.rpm

For more information, see Promoting a Content View.

Creating a Life Cycle Environment Path

To create an application life cycle for developing and releasing software, use the Library environment as the initial environment to create environment paths. Then optionally add additional environments to the environment paths.

Procedure
  1. In the orcharhino management UI, navigate to Content > Lifecycle Environments.

  2. Click New Environment Path to start a new application life cycle.

  3. In the Name field, enter a name for your environment.

  4. In the Description field, enter a description for your environment.

  5. Click Save.

  6. Optional: To add an environment to the environment path, click Add New Environment, complete the Name and Description fields, and select the prior environment from the Prior Environment list.

CLI procedure
  1. To create an environment path, enter the hammer lifecycle-environment create command and specify the Library environment with the --prior option:

    # hammer lifecycle-environment create \
    --name "Environment Path Name" \
    --description "Environment Path Description" \
    --prior "Library" \
    --organization "My_Organization"
  2. Optional: To add an environment to the environment path, enter the hammer lifecycle-environment create command and specify the parent environment with the --prior option:

    # hammer lifecycle-environment create \
    --name "Environment Name" \
    --description "Environment Description" \
    --prior "Prior Environment Name" \
    --organization "My_Organization"
  3. To view the chain of the life cycle environment, enter the following command:

    # hammer lifecycle-environment paths --organization "My_Organization"

Removing Life Cycle Environments from orcharhino Server

Use this procedure to remove a life cycle environment.

Procedure
  1. In the orcharhino management UI, navigate to Content > Life Cycle Environments.

  2. Click the name of the life cycle environment that you want to remove, and then click Remove Environment.

  3. Click Remove to remove the environment.

CLI procedure
  1. List the life cycle environments for your organization and note the name of the life cycle environment you want to remove:

    # hammer lifecycle-environment list \
    --organization "My_Organization"
  2. Use the hammer lifecycle-environment delete command to remove an environment:

    # hammer lifecycle-environment delete \
    --name "My_Environment" \
    --organization "My_Organization"

Removing Life Cycle Environments from orcharhino Proxy

When life cycle environments are no longer relevant to the host system or environments are added incorrectly to orcharhino Proxy, you can remove the life cycle environments from orcharhino Proxy.

You can use both the orcharhino management UI and the Hammer CLI to remove life cycle environments from orcharhino Proxy.

Procedure
  1. In the orcharhino management UI, navigate to Infrastructure > orcharhino Proxies, and select the orcharhino Proxy that you want to remove a life cycle from.

  2. Click Edit and click the Life Cycle Environments tab.

  3. From the right menu, select the life cycle environments that you want to remove from orcharhino Proxy, and then click Submit.

  4. To synchronize orcharhino Proxy’s content, click the Overview tab, and then click Synchronize.

  5. Select either Optimized Sync or Complete Sync.

CLI procedure
  1. Select the orcharhino Proxy that you want from the list and note its ID:

    # hammer capsule list
  2. To verify orcharhino Proxy’s details, enter the following command:

    # hammer capsule info \
    --id orcharhino_Proxy_ID
  3. Verify the list of life cycle environments currently attached to orcharhino Proxy and take note of the Environment ID:

    # hammer capsule content lifecycle-environments \
    --id orcharhino_Proxy_ID
  4. Remove the life cycle environment from orcharhino Proxy:

    # hammer capsule content remove-lifecycle-environment \
    --id orcharhino_Proxy_ID \
    --lifecycle-environment-id Lifecycle_Environment_ID

    Repeat this step for every life cycle environment that you want to remove from orcharhino Proxy.

  5. Synchronize the content from orcharhino Server’s environment to orcharhino Proxy:

    # hammer capsule content synchronize \
    --id orcharhino_Proxy_ID

Adding Life Cycle Environments to orcharhino Proxies

If your orcharhino Proxy has the content functionality enabled, you must add an environment so that orcharhino Proxy can synchronize content from orcharhino Server and provide content to host systems.

Do not assign the Library lifecycle environment to your orcharhino Proxy, because doing so triggers an automated orcharhino Proxy sync every time the CDN updates a repository. This can consume significant system resources on orcharhino Proxies, network bandwidth between orcharhino and orcharhino Proxies, and available disk space on orcharhino Proxies.

You can use Hammer CLI on orcharhino Server or the orcharhino management UI.

Procedure
  1. In the orcharhino management UI, navigate to Infrastructure > orcharhino Proxies, and select the orcharhino Proxy that you want to add a life cycle to.

  2. Click Edit and click the Life Cycle Environments tab.

  3. From the left menu, select the life cycle environments that you want to add to orcharhino Proxy and click Submit.

  4. To synchronize the content on the orcharhino Proxy, click the Overview tab and click Synchronize.

  5. Select either Optimized Sync or Complete Sync.

    For definitions of each synchronization type, see Recovering a Repository.

CLI procedure
  1. To display a list of all orcharhino Proxies, enter the following command on orcharhino Server:

    # hammer capsule list

    Note the orcharhino Proxy ID of the orcharhino Proxy that you want to add a life cycle to.

  2. Using the ID, verify the details of your orcharhino Proxy:

    # hammer capsule info --id capsule_id
  3. To view the life cycle environments available for your orcharhino Proxy, enter the following command and note the ID and the organization name:

    # hammer capsule content available-lifecycle-environments --id capsule_id
  4. Add the life cycle environment to your orcharhino Proxy:

    # hammer capsule content add-lifecycle-environment \
    --id capsule_id --organization "My_Organization" \
    --lifecycle-environment-id lifecycle-environment_id

    Repeat for each life cycle environment you want to add to orcharhino Proxy.

  5. Synchronize the content from orcharhino to orcharhino Proxy.

    • To synchronize all content from your orcharhino Server environment to orcharhino Proxy, enter the following command:

      # hammer capsule content synchronize --id capsule_id
    • To synchronize a specific life cycle environment from your orcharhino Server to orcharhino Proxy, enter the following command:

      # hammer capsule content synchronize --id external_capsule_id \
      --lifecycle-environment-id lifecycle-environment_id

Best Practices for Lifecycle Environments

  • Use multiple lifecycle environment paths to implement multiple sequential stages of content consumption. Each stage contains a defined set of content, for example in lifecycle environment Production.

  • Default use case: Fixed stages in each lifecycle environment paths, for example Development, Test, and Production.

    • Promote content views to lifecycle environments.

    • To make content from Test available in Production, promote the content view version from Test to Production. All content hosts consuming this content view or composite content view are able to install packages from the Production lifecycle environment. Note that these packages are not installed or updated automatically.

    • If you encounter errors during patching content hosts, attach the host to a previous version of the content view. This only affects the availability of packages but does not downgrade installed packages.

  • Alternative use case: Using stages in lifecycle environments for fixed content, for example quarterly updates.

    • Promote the content view to a specific stage, for example 2021-Q1 and only ever publish new minor versions when adding incremental updates from errata.

    • When patching content hosts, change the lifecycle environment from 2020-Q4 to 2021-Q1 using the management UI, the API, a Hammer script, or an activation key.

    • Advantage: You can directly see which software packages a host receives by looking at its lifecycle environment.

    • Disadvantage: Promoting content is less dynamic without clearly defined stages such as Development, Test, and Production.

  • Use multiple lifecycle environment paths to define multiple stages for different environments, for example to decouple web server and database hosts.

  • orcharhino Proxies running Pulp use lifecycle environments to synchronize content. They synchronize content more efficiently if you split content into multiple lifecycle environment paths. If a specific orcharhino Proxy only serves content for one operating system in a single lifecycle environment path, it only synchronizes actually required content.
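Switching a host between the quarterly stages described above can be sketched with Hammer. Host, content view, and environment names are placeholders, and option names may vary between versions (older versions use hammer content-host update instead):

```shell
# Sketch: move a content host from lifecycle environment 2020-Q4 to 2021-Q1
# (all names are placeholders)
hammer host update \
  --name "host.example.com" \
  --lifecycle-environment "2021-Q1" \
  --content-view "My_Content_View" \
  --organization "My_Organization"
```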

Best Practices for Patching Content Hosts

  • Attaching hosts to orcharhino requires the orcharhino client packages, which contain subscription-manager, katello-host-tools, and their dependencies. Use the bootstrap.py script to attach existing hosts. When attaching hosts manually, ensure that you provide the necessary configuration files and certificates.

  • Use the management UI to install, upgrade, and remove packages from managed hosts. You can also update content hosts using job templates via SSH and Ansible.

  • Apply errata on content hosts using the management UI.

  • Modify or replace job templates to add custom steps. This allows you to run commands or execute scripts on managed hosts.

  • When running bulk actions on managed hosts, we recommend bundling them by the same major operating system version, especially when upgrading packages.

  • Select via remote execution - customize first to define the time when patches are applied on managed hosts when performing bulk actions.

  • When patching packages on managed hosts using the default package manager, orcharhino receives a list of packages and repositories to recalculate applicable errata and available updates.

  • You cannot apply errata to packages that are not part of the repositories on orcharhino and the attached content view.

  • Modifications to installed packages using rpm or dpkg are sent to orcharhino with the next run of apt, yum, or zypper.
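A bulk patch action via remote execution can be sketched with Hammer. The job template name, search query, and package below are placeholders that depend on your setup:

```shell
# Sketch: update a package on all hosts matching a search query
# (template name, query, and package are placeholders)
hammer job-invocation create \
  --job-template "Update Package - SSH Default" \
  --search-query "os = CentOS 7" \
  --inputs package=example_software
```

Bundling the query by major operating system version, as recommended above, keeps the resulting job homogeneous.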

Managing Content Views

orcharhino uses Content Views to create customized repositories from your existing repositories. To do this, you define which repositories to use and then apply filters to the content. These filters include package filters, package group filters, errata filters, and module stream filters. You can use Content Views to define which software versions a particular environment uses. For example, a Production environment might use a Content View containing older package versions, while a Development environment might use a Content View containing newer package versions.

Each Content View creates a set of repositories across each environment, which orcharhino Server stores and manages. When you promote a Content View from one environment to the next environment in the application life cycle, the respective repository on orcharhino Server updates and publishes the packages.

Content View Version and Contents

Development   Version 2 - example_software-1.1-0.noarch.rpm
Testing       Version 1 - example_software-1.0-0.noarch.rpm
Production    Version 1 - example_software-1.0-0.noarch.rpm

The repositories for Testing and Production contain the example_software-1.0-0.noarch.rpm package. If you promote Version 2 of the Content View from Development to Testing, the repository for Testing regenerates and then contains the example_software-1.1-0.noarch.rpm package:

Content View Version and Contents

Development   Version 2 - example_software-1.1-0.noarch.rpm
Testing       Version 2 - example_software-1.1-0.noarch.rpm
Production    Version 1 - example_software-1.0-0.noarch.rpm

This ensures systems are designated to a specific environment but receive updates when that environment uses a new version of the Content View.

The general workflow for creating Content Views for filtering and creating snapshots is as follows:

  1. Create a Content View.

  2. Add one or more repositories that you want to the Content View.

  3. Optionally, create one or more filters to refine the content of the Content View.

  4. Optionally, resolve any package dependencies for a Content View.

  5. Publish the Content View.

  6. Optionally, promote the Content View to another environment.

  7. Attach the content host to the Content View.

If a repository is not associated with the Content View, the file /etc/yum.repos.d/redhat.repo remains empty on hosts registered to that Content View, and those hosts cannot receive updates.

Hosts can only be associated with a single Content View. To associate a host with multiple Content Views, create a composite Content View. For more information, see Creating a Composite Content View.

Package Dependency Resolution

Package dependency resolution is a complicated aspect of package management. For more information about how to manage package dependencies within Content Views, see Resolving Package Dependencies.

Creating a Content View

Use this procedure to create a simple Content View. To use the CLI instead of the orcharhino management UI, see the CLI procedure.

Prerequisites

While you can stipulate whether to resolve package dependencies on a per Content View basis, you might want to change the default orcharhino settings to enable or disable package dependency resolution for all Content Views. For more information, see Resolving Package Dependencies.

Procedure
  1. In the orcharhino management UI, navigate to Content > Content Views and click Create New View.

  2. In the Name field, enter a name for the view. orcharhino automatically completes the Label field from the name you enter.

  3. In the Description field, enter a description of the view.

  4. Optional: If you want to solve dependencies automatically every time you publish this Content View, select the Solve Dependencies checkbox. Dependency solving slows down publishing and might ignore any Content View filters you use. It can also cause errors when resolving dependencies for errata.

  5. Click Save to create the Content View.

  6. In the Repository Selection area, select the repositories that you want to add to your Content View, then click Add Repositories.

  7. Click Publish New Version and in the Description field, enter information about the version to log changes.

  8. Click Save.

  9. Optional: To force metadata regeneration on Yum repositories, select Regenerate Repository Metadata from the Actions list for your Content View version.

You can view the Content View in the Content Views window. To view more information about the Content View, click the Content View name.

To register a host to your content view, see Registering Hosts in the Managing Hosts guide.

CLI procedure
  1. Obtain a list of repository IDs:

    # hammer repository list --organization "My_Organization"
  2. Create the Content View and add repositories:

    # hammer content-view create \
    --name "Example_Content_View" \
    --description "Example Content View" \
    --repository-ids 1,2 \
    --organization "My_Organization"

    For the --repository-ids option, you can find the IDs in the output of the hammer repository list command.

  3. Publish the view:

    # hammer content-view publish \
    --name "Example_Content_View" \
    --description "Example Content View" \
    --organization "My_Organization"
  4. Optional: To add a repository to an existing Content View, enter the following command:

    # hammer content-view add-repository \
    --name "Example_Content_View" \
    --organization "My_Organization" \
    --repository-id repository_ID

orcharhino Server creates the new version of the view and publishes it to the Library environment.

Viewing Module Streams

In orcharhino, you can view the module streams of the repositories in your Content Views.

Procedure
  1. In the orcharhino management UI, navigate to Content > Content Views, and select the Content View that contains the modules you want to view.

  2. Click the Versions tab and select the Content View version that you want to view.

  3. Click the Module Streams tab to view the module streams that are available for the Content View.

  4. Use the Filter field to refine the list of modules.

  5. To view the information about the module, click the module.
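Module stream information is also available on the command line. A sketch, assuming your orcharhino version ships the hammer module-stream subcommand; the organization name and ID are placeholders:

```shell
# Sketch: list module streams and inspect one by ID
# ("My_Organization" and the ID are placeholders)
hammer module-stream list --organization "My_Organization"
hammer module-stream info --id 1 --organization "My_Organization"
```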

Promoting a Content View

Use this procedure to promote Content Views across different lifecycle environments. To use the CLI instead of the orcharhino management UI, see the CLI procedure.

Permission Requirements for Content View Promotion

Non-administrator users require two permissions to promote a Content View to an environment:

  1. promote_or_remove_content_views

  2. promote_or_remove_content_views_to_environment

The promote_or_remove_content_views permission restricts which Content Views a user can promote.

The promote_or_remove_content_views_to_environment permission restricts the environments to which a user can promote Content Views.

With these permissions, you can allow users to promote certain Content Views to certain environments, but not to others. For example, you can limit a user so that they are permitted to promote to test environments but not to production environments.

You must assign both permissions to a user to allow them to promote Content Views.
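Granting these permissions can be sketched with Hammer filters. The role name below is a placeholder, and scoping the second permission to specific environments is then configured on the filter, for example in the management UI:

```shell
# Sketch: attach both promotion permissions to an existing role
# ("Example_Role" is a placeholder)
hammer filter create \
  --role "Example_Role" \
  --permissions promote_or_remove_content_views

hammer filter create \
  --role "Example_Role" \
  --permissions promote_or_remove_content_views_to_environment
```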

Procedure
  1. In the orcharhino management UI, navigate to Content > Content Views and select the Content View that you want to promote.

  2. Click the Versions tab for the Content View.

  3. Select the version that you want to promote and, in the Actions column, click Promote.

  4. Select the environment where you want to promote the Content View, for example Development, and click Promote Version.

  5. Click Promote again. This time, select the Testing environment and click Promote Version.

  6. Finally, click Promote once more, select the Production environment, and click Promote Version.

Now the repository for the Content View appears in all environments.

CLI procedure
  • Promote the Content View using the hammer content-view version promote command for each life cycle environment:

    # hammer content-view version promote \
    --content-view "Database" \
    --version 1 \
    --to-lifecycle-environment "Development" \
    --organization "My_Organization"
    # hammer content-view version promote \
    --content-view "Database" \
    --version 1 \
    --to-lifecycle-environment "Testing" \
    --organization "My_Organization"
    # hammer content-view version promote \
    --content-view "Database" \
    --version 1 \
    --to-lifecycle-environment "Production" \
    --organization "My_Organization"

    Now the database content is available in all environments.

To register a host to your content view, see Registering Hosts in the Managing Hosts guide.

Promoting a Content View Across All Life Cycle Environments within an Organization

Use this procedure to promote Content Views across all lifecycle environments within an organization.

Procedure
  1. To promote a selected Content View version from Library across all life cycle environments within an organization, run the following Bash script:

    ORG="Your_Organization"
    CVV_ID=3
    
    for i in $(hammer --no-headers --csv lifecycle-environment list --organization $ORG | awk -F, '{ print $1 }' | sort -n)
    do
       hammer content-view version promote --organization $ORG --to-lifecycle-environment-id $i --id $CVV_ID
    done
  2. Display information about your Content View version to verify that it is promoted to the required lifecycle environments:

    # hammer content-view version info --id 3

Composite Content Views Overview

A Composite Content View combines the content from several Content Views. For example, you might have separate Content Views to manage an operating system and an application individually. You can use a Composite Content View to merge the contents of both Content Views into a new repository. The repositories for the original Content Views still exist but a new repository also exists for the combined content.

For example, suppose you want to develop an application, example_software, that supports different database servers. The example_software stack consists of the following layers:

  • Application

  • Database

  • Operating System

Example of four separate Content Views:

  • Red Hat Enterprise Linux (Operating System)

  • PostgreSQL (Database)

  • MariaDB (Database)

  • example_software (Application)

From the previous Content Views, you can create two Composite Content Views.

Example Composite Content View for a PostgreSQL database:

Composite Content View 1 - example_software on PostgreSQL

example_software (Application)

PostgreSQL (Database)

Red Hat Enterprise Linux (Operating System)

Example Composite Content View for a MariaDB database:

Composite Content View 2 - example_software on MariaDB

example_software (Application)

MariaDB (Database)

Red Hat Enterprise Linux (Operating System)

Each Content View is then managed and published separately. When you create a version of the application, you publish a new version of the Composite Content Views. You can also select the Auto Publish option when creating a Composite Content View, and then the Composite Content View is automatically republished when a Content View it includes is republished.

Repository Restrictions

You cannot include the same repository more than once in a Composite Content View. For example, if you attempt to include two Content Views that use the same repository in a Composite Content View, orcharhino Server reports an error.

Creating a Composite Content View

Use this procedure to create a composite content view. To use the CLI instead of the orcharhino management UI, see the CLI procedure.

Procedure
  1. In the orcharhino management UI, navigate to Content > Content Views and click Create New View.

  2. In the Name field, enter a name for the view. orcharhino automatically completes the Label field from the name you enter.

  3. In the Description field, enter a description of the view.

  4. Select the Composite View? check box to create a Composite Content View.

  5. Optional: select the Auto Publish check box if you want the Composite Content View to be republished automatically when a Content View is republished.

  6. Click Save.

  7. In the Add Content Views area, select the Content Views that you want to add to the Composite Content View, and then click Add Content Views.

  8. Click Publish New Version to publish the Composite Content View. In the Description field, enter a description and click Save.

  9. Click Promote and select the lifecycle environments to promote the Composite Content View to, enter a description, and then click Promote Version.

CLI procedure
  1. Before you create the Composite Content Views, list the version IDs for your existing Content Views:

    # hammer content-view version list \
    --organization "My_Organization"
  2. Create a new Composite Content View. When the --auto-publish option is set to yes, the Composite Content View is automatically republished when a Content View it includes is republished:

    # hammer content-view create \
    --composite \
    --auto-publish yes \
    --name "Example_Composite_Content_View" \
    --description "Example Composite Content View" \
    --organization "My_Organization"
  3. Add a component Content View to the Composite Content View. Use the --latest option to include the latest version of the component Content View. To include multiple component Content Views in the Composite Content View, repeat this step for every Content View that you want to include:

    # hammer content-view component add \
    --component-content-view-id Content_View_ID \
    --latest \
    --composite-content-view "Example_Composite_Content_View"
  4. Publish the Composite Content View:

    # hammer content-view publish \
    --name "Example_Composite_Content_View" \
    --description "Initial version of Composite Content View" \
    --organization "My_Organization"
  5. Promote the Composite Content View across all environments:

    # hammer content-view version promote \
    --content-view "Example_Composite_Content_View" \
    --version 1 \
    --to-lifecycle-environment "Development" \
    --organization "My_Organization"
    # hammer content-view version promote \
    --content-view "Example_Composite_Content_View" \
    --version 1 \
    --to-lifecycle-environment "Testing" \
    --organization "My_Organization"
    # hammer content-view version promote \
    --content-view "Example_Composite_Content_View" \
    --version 1 \
    --to-lifecycle-environment "Production" \
    --organization "My_Organization"

Content Filter Overview

Content Views also use filters to include or restrict certain RPM content. Without these filters, a Content View includes everything from the selected repositories.

There are two types of content filters:

Table 2. Filter Types
Filter Type Description

Include

You start with no content, then select which content to add from the selected repositories. Use this filter to combine multiple content items.

Exclude

You start with all content from selected repositories, then select which content to remove. Use this filter when you want to use most of a particular content repository but exclude certain packages, such as blacklisted packages. The filter uses all content in the repository except for the content you select.

Include and Exclude Filter Combinations

If you use a combination of Include and Exclude filters, publishing a Content View triggers the Include filters first, then the Exclude filters. In this situation, select which content to include, then which content to exclude from the inclusive subset.
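The ordering can be thought of as a set operation: the Include filters build a subset from the repositories, then the Exclude filters remove content from that subset. The following bash sketch illustrates the principle with hypothetical package names; the real filtering happens inside orcharhino during publication:

```shell
#!/usr/bin/env bash
# Hypothetical repository content and filter rules, for illustration only.
repository=(kernel bash vim emacs httpd)
include=(kernel bash vim)   # Include filter: start with nothing, add these
exclude=(vim)               # Exclude filter: remove these from the included subset

published=()
for pkg in "${include[@]}"; do          # Include filters are applied first,
  keep=true
  for ex in "${exclude[@]}"; do         # then Exclude filters remove content
    [[ $pkg == "$ex" ]] && keep=false   # from the inclusive subset.
  done
  $keep && published+=("$pkg")
done
echo "${published[@]}"    # kernel bash
```

Note that emacs and httpd never enter the result: content not selected by any Include filter is absent from the start.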

Content Types

There are also five types of content to filter:

Table 3. Content Types
Content Type Description

Package

Filter packages based on their name and version number. The Package option filters non-modular RPM packages and errata.

Package Group

Filter packages based on package groups. The list of package groups is based on the repositories added to the Content View.

Erratum (by ID)

Select which specific errata to add to the filter. The list of Errata is based on the repositories added to the Content View.

Erratum (by Date and Type)

Select an issued or updated date range and errata types (Bugfix, Enhancement, or Security) to add to the filter.

Module Streams

Select whether to include or exclude specific module streams. The Module Streams option filters modular RPMs and errata, but does not filter non-modular content that is associated with the selected module stream.

Resolving Package Dependencies

In orcharhino, you can use the package dependency resolution feature to ensure that any dependencies that packages have within a Content View are added to the dependent repository as part of the Content View publication process.

You can select to resolve package dependencies for any Content View that you want, or you can change the default setting to enable or disable resolving package dependencies for all new Content Views.

Note that resolving package dependencies can cause significant delays to Content View promotion. The package dependency resolution feature does not consider packages that are installed on your system independently of the Content View or solve dependencies across repositories.

Resolving Package Dependencies and Filters

Filters do not resolve any dependencies of the packages listed in the filters. This might require some level of testing to determine what dependencies are required.

If you add a filter that excludes some packages that are required and the Content View has dependency resolution enabled, orcharhino ignores the rules you create in your filter in favor of resolving the package dependency.

If you create a content filter for security purposes, to resolve a package dependency, orcharhino can add packages that you might consider insecure.

Procedure

To resolve package dependencies by default, complete the following steps:

  1. In the orcharhino management UI, navigate to Administer > Settings and click the Content tab.

  2. Locate the Content View Dependency Solving Default, and select Yes.

You can also set the default level of dependency resolution that you want. You can select between adding packages to solve dependencies only if the required package does not exist, or to add the latest packages to resolve the dependency even if the package exists in the repository.

To set the default level of dependency resolution, complete the following steps:

  1. In the orcharhino management UI, navigate to Administer > Settings and click the Content tab.

  2. Locate the Content View Dependency Solving Algorithm and select one of the following options:

    • To add the package that resolves the dependency only if it does not exist in the repository, select Conservative.

    • To add the package that resolves the dependency regardless of whether or not it exists in the repository, select Greedy.
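The difference between the two algorithms can be sketched as follows. This is an illustration of the decision logic with made-up package names, not orcharhino's actual resolver:

```shell
#!/usr/bin/env bash
# Illustration of the two dependency-solving algorithms, with hypothetical
# packages. The repository already contains an older version of the
# dependency that the resolver needs to satisfy.
repository=(example-app-2.0 openssl-1.0)
latest_dependency=openssl-1.1

resolve() {
  case $1 in
    conservative)
      # Add the dependency only if no version of it exists in the repository.
      for pkg in "${repository[@]}"; do
        [[ $pkg == openssl-* ]] && { echo "keep $pkg"; return; }
      done
      echo "add $latest_dependency"
      ;;
    greedy)
      # Always add the latest package that resolves the dependency,
      # even if a version already exists in the repository.
      echo "add $latest_dependency"
      ;;
  esac
}

resolve conservative   # keep openssl-1.0
resolve greedy         # add openssl-1.1
```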

Content Filter Examples

Use any of the following examples with the procedure that follows to build custom content filters.

Example 1

Create a repository with the base Red Hat Enterprise Linux packages. This filter requires a Red Hat Enterprise Linux repository added to the Content View.

Filter:

  • Inclusion Type: Include

  • Content Type: Package Group

  • Filter: Select only the Base package group

Example 2

Create a repository that excludes all errata, except for security updates, after a certain date. This is useful if you want to perform system updates on a regular basis with the exception of critical security updates, which must be applied immediately. This filter requires a Red Hat Enterprise Linux repository added to the Content View.

Filter:

  • Inclusion Type: Exclude

  • Content Type: Erratum (by Date and Type)

  • Filter: Select only the Bugfix and Enhancement errata types, and clear the Security errata type. Set the Date Type to Updated On. Set the Start Date to the date you want to restrict errata. Leave the End Date blank to ensure any new non-security errata is filtered.

Example 3

A combination of Example 1 and Example 2 where you only require the operating system packages and want to exclude recent bug fix and enhancement errata. This requires two filters attached to the same Content View. The Content View processes the Include filter first, then the Exclude filter.

Filter 1:

  • Inclusion Type: Include

  • Content Type: Package Group

  • Filter: Select only the Base package group

Filter 2:

  • Inclusion Type: Exclude

  • Content Type: Erratum (by Date and Type)

  • Filter: Select only the Bugfix and Enhancement errata types, and clear the Security errata type. Set the Date Type to Updated On. Set the Start Date to the date you want to restrict errata. Leave the End Date blank to ensure any new non-security errata is filtered.

Example 4

Filter a specific module stream in a Content View.

Filter 1:

  • Inclusion Type: Include

  • Content Type: Module Stream

  • Filter: Select only the specific module stream that you want for the Content View, for example ant, and click Add Module Stream.

Filter 2:

  • Inclusion Type: Exclude

  • Content Type: Package

  • Filter: Add a rule to filter any non-modular packages that you want to exclude from the Content View. If you do not filter the packages, the Content View filter includes all non-modular packages associated with the module stream ant. Add a rule to exclude all * packages, or specify the package names that you want to exclude.

Creating a Content Filter

Use this procedure to create a content filter. To use the CLI instead of the orcharhino management UI, see the CLI procedure.

For examples of how to build a filter, see Content Filter Examples.

Procedure
  1. In the orcharhino management UI, navigate to Content > Content Views and select a Content View.

  2. Select Yum Content > Filters and click New Filter.

  3. In the Name field, enter a name for your filter.

  4. From the Content Type list, select the content type that you want to filter. Depending on what you select for the new filter’s content type, different options appear.

  5. From the Inclusion Type list, select either Include or Exclude.

  6. In the Description field, enter a description for the filter, and click Save.

  7. Depending on what you enter for Content Type, add rules to create the filter that you want.

  8. Click the Affected repositories tab to select which specific repositories use this filter.

  9. Click Publish New Version to publish the filtered repository. In the Description field, enter a description of the changes, and click Save.

You can promote this Content View across all environments.

CLI procedure
  1. Add a filter to the Content View. Use the --inclusion false option to set the filter to an Exclude filter:

    # hammer content-view filter create \
    --name "Errata Filter" \
    --type erratum --content-view "Example_Content_View" \
    --description "My latest filter" \
    --inclusion false \
    --organization "My_Organization"
  2. Add a rule to the filter:

    # hammer content-view filter rule create \
    --content-view "Example_Content_View" \
    --content-view-filter "Errata Filter" \
    --start-date "YYYY-MM-DD" \
    --types enhancement,bugfix \
    --date-type updated \
    --organization "My_Organization"
  3. Publish the Content View:

    # hammer content-view publish \
    --name "Example_Content_View" \
    --description "Adding errata filter" \
    --organization "My_Organization"
  4. Promote the view across all environments:

    # hammer content-view version promote \
    --content-view "Example_Content_View" \
    --version 1 \
    --to-lifecycle-environment "Development" \
    --organization "My_Organization"
    # hammer content-view version promote \
    --content-view "Example_Content_View" \
    --version 1 \
    --to-lifecycle-environment "Testing" \
    --organization "My_Organization"
    # hammer content-view version promote \
    --content-view "Example_Content_View" \
    --version 1 \
    --to-lifecycle-environment "Production" \
    --organization "My_Organization"

Best Practices for Content Views

  • There are two basic approaches to content views: either create content views that contain everything needed for a certain type of host, for example a WebServer content view that includes repositories for CentOS-7.1, Apache-2.0, and PostgreSQL-7.1; or create one content view each for CentOS-7.1, Apache-2.0, and PostgreSQL-7.1 along with a composite content view WebServer that includes all of these content views.

  • If you require daily updated content, use the content view Default Organization View. This contains the latest synchronized content from all repositories and is available in lifecycle environment Library.

  • Use content views and composite content views to assemble and structure lists of repositories. You can match different versions of content views or update specific content views when bundling them to a composite content view.

  • We recommend using composite content views only if you require more flexibility.

  • Splitting content into too many small content views results in more manual work.

  • When using composite content views, publish the component content views first, then the composite content view. The more content views you bundle, the more work is required to make changes or update content. We recommend keeping your setup as simple as possible and as complex as necessary.

  • Setting a lifecycle environment for content views is unnecessary if they are solely bundled to a composite content view.

  • Automate creating and publishing composite content views and lifecycle environments using Hammer scripts or Ansible Playbooks. Use cron jobs or recurring logics for more visibility. Recurring logics require SSH based access to orcharhino and the remote execution plugin.

  • We recommend adding the changes and date to the description of each published content view or composite content view version. The date shown in the management UI shows the latest action, for example promoting content to a different lifecycle environment, which is independent from the latest change to the actual content.

  • Publishing a new content view or composite content view creates a new major version. Incremental errata updates increment the minor version. Note that you cannot change or reset this counter.
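The version counter described in the last point behaves as sketched below. This is bookkeeping only, illustrating how publishing and incremental errata updates affect the version number:

```shell
#!/usr/bin/env bash
# Sketch of content view version numbering: publishing creates a new major
# version and resets the minor version, while an incremental errata update
# increments the minor version. The counter cannot be changed or reset.
major=1 minor=0

publish()            { major=$((major + 1)); minor=0; }
incremental_update() { minor=$((minor + 1)); }

echo "$major.$minor"                       # 1.0
incremental_update; echo "$major.$minor"   # 1.1
publish;            echo "$major.$minor"   # 2.0
```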

Synchronizing Content Between orcharhino Servers

orcharhino uses Inter-Server Synchronization (ISS) to synchronize content between two orcharhino Servers, including servers that are air-gapped.

Scenarios

  • If you want to copy some but not all content from your orcharhino Server to other orcharhino Servers. For example, your IT department consumes content from certain Content Views on orcharhino Server, and you want to copy content from those Content Views to other orcharhino Servers.

  • If you want to copy all Library content from your orcharhino Server to another orcharhino Server. For example, your IT department consumes content from Products and repositories in the Library of orcharhino Server, and you want to copy all Products and repositories in that organization to another orcharhino Server.

  • If you have both connected and disconnected orcharhino Servers, and want to copy content from the connected servers to the disconnected servers. For example, you require complete isolation of your management infrastructure for security or other purposes.

These scenarios require at least two orcharhino Servers: one orcharhino Server that is connected to the outside world and contains the content to export, and a disconnected orcharhino Server that you want to import the content into.

The orcharhino ISS use cases
Figure 2. The orcharhino ISS use cases

You cannot use ISS to synchronize content from orcharhino Server to orcharhino Proxy. orcharhino Proxy supports synchronization natively. For more information, see orcharhino Proxy Overview in Planning for orcharhino.

Configuration Definitions

There are different ways of using ISS to synchronize content between two orcharhino Servers.

Connected

In a connected scenario, two orcharhino Servers are able to communicate with each other.

The orcharhino ISS Connected use case
Figure 3. The orcharhino ISS Connected use case
Disconnected

In a disconnected scenario, there is the following setup:

  • Two orcharhino Servers are connected to each other.

  • One orcharhino Server is connected to the Internet while the other orcharhino Server is not connected to the Internet.

    • This is referred to as the disconnected orcharhino Server.

  • The disconnected orcharhino Server is completely isolated from all external networks but can communicate with a connected orcharhino Server.

The orcharhino ISS Non Airgapped Disconnected use case
Figure 4. The orcharhino ISS Non Airgapped Disconnected use case
Air-gapped Disconnected

In this setup, both orcharhino Servers are isolated from each other. The only way for air-gapped orcharhino Servers to receive content updates is through the import/export workflow.

The orcharhino ISS Airgapped Disconnected use case
Figure 5. The orcharhino ISS Airgapped Disconnected use case

Exporting from a Connected orcharhino Server

Connected orcharhino Server as a Content Store

In this scenario, you have one connected orcharhino Server that is receiving content from an external source like the CDN. The rest of your infrastructure is completely isolated, including another disconnected orcharhino Server. The connected orcharhino Server is mainly used as a content store for updates. The disconnected orcharhino Server serves as the main orcharhino Server for managing content for all infrastructure behind the isolated network.

On the connected orcharhino Server
  1. Ensure that repositories use the Immediate download policy, in one of the following ways:

    1. For existing repositories using On Demand, change their download policy to Immediate on the repository details page.

    2. For new repositories, ensure that the Default Red Hat Repository download policy setting is set to Immediate before enabling Red Hat repositories, and that the Default download policy is set to Immediate for custom repositories. For more information, see Download Policies Overview.

  2. Enable the content that you want to use. For more information, see Enabling Red Hat Repositories.

  3. Synchronize the enabled content.

  4. For the first export, perform a complete Library export so that all the synchronized content is exported. This generates content archives that you can later import into one or more disconnected orcharhino Servers. For more information on performing a complete Library export, see Exporting the Library Environment.

  5. Export all future updates from the connected orcharhino Server incrementally. This generates leaner content archives that contain only the changes from the most recent set of updates. For example, if you enable and synchronize a new repository, the next exported content archive contains content only from the newly enabled repository. For more information on performing an incremental Library export, see Incremental Export.

On the disconnected orcharhino Server
  1. Copy the exported content archive that you want from the connected orcharhino Server.

  2. Import content to an organization using the procedure outlined in Importing into the Library Environment. You can then manage content using Content Views or Lifecycle Environments as you require on the disconnected orcharhino Server.

Connected orcharhino Server for Managing Content View Versions

In this scenario, your connected orcharhino Server is receiving content from an external source like the CDN. The rest of your infrastructure is completely isolated, including a disconnected orcharhino Server. However, the connected orcharhino Server is not only used as a content store, but also is used to manage content. Updates coming from the CDN are curated into Content Views and Lifecycle Environments. Once the content has been promoted to a designated Lifecycle Environment, the content can be exported and imported into the disconnected orcharhino Server.

On the connected orcharhino Server
  1. Ensure that repositories use the Immediate download policy:

    1. For existing repositories using On Demand, change their download policy to Immediate on the repository details page.

    2. For new repositories, ensure that the Default Red Hat Repository download policy setting is set to Immediate before enabling Red Hat repositories, and that the Default download policy is set to Immediate for custom repositories. For more information, see Download Policies Overview.

  2. Enable any content that you want to use. For more information, see Enabling Red Hat Repositories.

  3. Synchronize the enabled content.

  4. When you have new content, republish the Content Views that include this content. For more information, see Managing Content Views. This creates a new Content View Version with the appropriate content to export.

  5. For the first export, perform a complete version export on the Content View Version that you want to export. For more information, see Exporting a Content View Version. This generates content archives that you can import into one or more disconnected orcharhino Servers.

  6. Export all future updates from the connected orcharhino Server incrementally. This generates leaner content archives that contain only the changes from the most recent set of updates. For example, if your Content View has a new repository, the exported content archive contains only the latest changes. For more information on performing an incremental export on Content View Versions, see Incremental Export.

On the disconnected orcharhino Server
  1. Copy the exported content archive from the connected orcharhino Server.

  2. Import the content to the organization that you want. For more information, see Importing a Content View Version. This creates a Content View version from the exported content archives and then imports the content appropriately.

Keeping Track of Your Exports

If you are exporting content to several disconnected orcharhino Servers, then using the --destination-server option provides a way to organize and maintain a record of which versions were exported to a given destination. For more information, see Examining the Exports.

This option is available for all content-export operations. You can use the --destination-server option to:

  • Query what was previously exported to a given destination.

  • Generate incremental exports automatically to the given destination server.

Exporting a Content View Version

You can export a version of a Content View to an archive file from orcharhino Server and use this archive file to create the same Content View version on another orcharhino Server or in another orcharhino Server organization. orcharhino exports composite Content Views as normal Content Views; the composite nature is not retained. On importing the exported archive, a regular Content View is created or updated on your disconnected orcharhino Server. The exported archive file contains the following data:

  • A JSON file containing Content View version metadata

  • An archive file containing all the repositories included in the Content View version

orcharhino Server exports only RPM and kickstart files added to a version of a Content View. orcharhino does not export the following content:

  • Docker content

  • Content View definitions and metadata, such as package filters.

Prerequisites

To export a Content View, ensure that the orcharhino Server from which you want to export meets the following conditions:

  • Ensure that the export directory has free storage space to accommodate the export.

  • Ensure that the /var/lib/pulp/exports directory has free storage space equivalent to the size of the repositories being exported for temporary files created during the export process.

  • Ensure that you set download policy to Immediate for all repositories within the Content View you export. For more information, see Download Policies Overview.

  • Ensure that you synchronize Products that you export to the required date.

  • Ensure that the user exporting the content has the Content Exporter role.

To Export a Content View Version
  1. List versions of the Content View that are available for export:

    # hammer content-view version list \
    --content-view="My_Content_View" \
    --organization="My_Organization"
    
    ---|----------|---------|-------------|-----------------------
    ID | NAME     | VERSION | DESCRIPTION | LIFECYCLE ENVIRONMENTS
    ---|----------|---------|-------------|-----------------------
    5  | view 3.0 | 3.0     |             | Library
    4  | view 2.0 | 2.0     |             |
    3  | view 1.0 | 1.0     |             |
    ---|----------|---------|-------------|-----------------------
Export a Content View version
  1. Note the version number of the desired version. The following example targets version 1.0 for export:

    # hammer content-export complete version \
    --content-view="Content_View_Name" \
    --version=1.0 \
    --organization="My_Organization"
  2. Verify that the archive containing the exported version of a Content View is located in the export directory:

    # ls -lh /var/lib/pulp/exports/My_Organization/Content_View_Name/1.0/2021-02-25T18-59-26-00-00/

You require all three files, that is, the tar.gz archive file, toc.json, and metadata.json, to import the content successfully.
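Before copying an export, you can check that the export directory actually contains all three required files. The following is a generic bash sketch; the default directory path is only an example, so pass your own export path as the first argument:

```shell
#!/usr/bin/env bash
# Verify that an export directory contains the tar.gz archive, toc.json,
# and metadata.json required for a successful import.
# The default path below is an example; pass your own as the first argument.
export_dir=${1:-/var/lib/pulp/exports/My_Organization/Content_View_Name/1.0/2021-02-25T18-59-26-00-00}

check_export() {
  local dir=$1 missing=0
  compgen -G "$dir/*.tar.gz" > /dev/null || { echo "missing archive (*.tar.gz)"; missing=1; }
  [[ -f $dir/toc.json ]]      || { echo "missing toc.json"; missing=1; }
  [[ -f $dir/metadata.json ]] || { echo "missing metadata.json"; missing=1; }
  (( missing == 0 )) && echo "export looks complete"
  return $missing
}

check_export "$export_dir" || echo "copy a complete export before importing"
```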

Export with chunking

In many cases, the exported archive content can be several gigabytes in size. You might want to split it into smaller chunks. You can use the --chunk-size-gb option with the hammer content-export command to handle this. The following example uses --chunk-size-gb=2 to split the archive into 2 GB chunks.

# hammer content-export complete version \
--chunk-size-gb=2 \
--content-view="Content_View_Name" \
--organization="My_Organization" \
--version=1.0
# ls -lh /var/lib/pulp/exports/My_Organization/view/1.0/2021-02-25T21-15-22-00-00/

Examining the Exports

When importing content to several orcharhino Servers, the --destination-server option is especially useful for keeping track of which content was exported to which server.

You can use this option to let the exporting orcharhino Server keep track of content exported to specific servers. The --destination-server option indicates the destination server that your content is imported to. The following example uses --destination-server=mirror1 to record mirror1 as the destination. The archive is created on the exporting orcharhino Server, and a record of each export is also maintained. This is particularly useful when exporting incrementally.

# hammer content-export complete version \
--content-view="Content_View_Name" \
--destination-server=mirror1 \
--organization="My_Organization" \
--version=1.0
Incremental Export

Exporting complete versions can be a very expensive operation on storage space and resources. The size of the exported Content View versions depends on the number of products. Content View versions that have multiple Red Hat Enterprise Linux trees can occupy several gigabytes of the space on orcharhino Server.

You can use the Incremental Export functionality to help reduce demands on your infrastructure. Incremental Export exports only content that has changed since the previously exported version. Incremental changes are generally smaller than full exports. In the following example, version 1.0 has already been exported, so the command targets version 2.0 for export:

# hammer content-export incremental version \
--content-view="Content_View_Name" \
--organization="My_Organization" \
--version=2.0

# ls -lh /var/lib/pulp/exports/My_Organization/view/2.0/2021-02-25T21-45-34-00-00/

Viewing Content Export Reports

You can query the exports that you have previously created by using the hammer content-export list command.

# hammer content-export list --organization="My_Organization"

---|--------------------|------------------------------------------------------------------------------|-------------|----------------------|-------------------------|-------------------------|------------------------
ID | DESTINATION SERVER | PATH                                                                         | TYPE        | CONTENT VIEW VERSION | CONTENT VIEW VERSION ID | CREATED AT              | UPDATED AT
---|--------------------|------------------------------------------------------------------------------|-------------|----------------------|-------------------------|-------------------------|------------------------
1  |                    | /var/lib/pulp/exports/My_Organization/view/1.0/2021-02-25T18-59-26-00-00 | complete    | view 1.0             | 3                       | 2021-02-25 18:59:30 UTC | 2021-02-25 18:59:30 UTC
2  |                    | /var/lib/pulp/exports/My_Organization/view/1.0/2021-02-25T21-15-22-00-00 | complete    | view 1.0             | 3                       | 2021-02-25 21:15:26 UTC | 2021-02-25 21:15:26 UTC
3  |                    | /var/lib/pulp/exports/My_Organization/view/2.0/2021-02-25T21-45-34-00-00 | incremental | view 2.0             | 4                       | 2021-02-25 21:45:37 UTC | 2021-02-25 21:45:37 UTC
---|--------------------|------------------------------------------------------------------------------|-------------|----------------------|-------------------------|-------------------------|------------------------

Importing a Content View Version

You can use the archive that the hammer content-export command outputs to create a version of a Content View with the same content as the exported Content View version. For more information about exporting a Content View version, see Exporting a Content View Version.

When you import a Content View version, it has the same major and minor version numbers and contains the same repositories with the same packages and errata. The Custom Repositories, Products and Content Views are automatically created if they do not exist in the importing organization.

Prerequisites

To import a Content View, ensure that the orcharhino Server to which you want to import meets the following conditions:

  1. The exported archive must be in a directory under /var/lib/pulp/imports.

  2. The directory must have pulp:pulp permissions so that Pulp can read and write the .json files in that directory.

  3. If there are any Red Hat repositories in the export archive, the importing organization’s manifest must contain subscriptions for the products contained within the export.

  4. The user importing the content view version must have the 'Content Importer' Role.
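The ownership requirement above can be verified with a small helper before you start the import. This is a sketch, not an orcharhino tool; the helper name is hypothetical and it assumes GNU coreutils (stat -c):

```shell
# Hypothetical helper: succeeds when a path is owned by the given user:group.
check_owner() {
  # usage: check_owner <path> <user:group>
  [ "$(stat -c '%U:%G' "$1")" = "$2" ]
}

# Example usage against the import directory from the procedure:
#   check_owner /var/lib/pulp/imports/2021-02-25T21-15-22-00-00 pulp:pulp
```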

Procedure
  1. Copy the archived file with the exported Content View version to the /var/lib/pulp/imports directory on the orcharhino Server where you want to import it.

  2. Set the user:group permission of the archive files to pulp:pulp.

    # chown -R pulp:pulp /var/lib/pulp/imports/2021-02-25T21-15-22-00-00/
  3. Verify that the permissions have changed:

    # ls -lh /var/lib/pulp/imports/2021-02-25T21-15-22-00-00/
  4. To import the Content View version to orcharhino Server, enter the following command:

    # hammer content-import version \
    --organization="My_Organization" \
    --path=/var/lib/pulp/imports/2021-02-25T21-15-22-00-00/

    Note that you must enter the full path /var/lib/pulp/imports/<path>. Relative paths do not work.

  5. To verify that you imported the Content View version successfully, list the Content View versions for your organization:

    # hammer content-view version list \
    --content-view="My_Content_View" \
    --organization="My_Organization"
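Because relative paths do not work with hammer content-import, a minimal guard (a sketch with a hypothetical helper name) can catch the mistake before you run the command:

```shell
# Hypothetical helper: succeeds only for absolute paths.
is_absolute() {
  # usage: is_absolute <path>
  case "$1" in
    /*) return 0 ;;
    *)  return 1 ;;
  esac
}

# Example:
#   is_absolute "$IMPORT_PATH" || echo "use a full /var/lib/pulp/imports path" >&2
```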

Exporting the Library Environment

You can export the contents of all Yum repositories in the Library environment of an organization to an archive file from orcharhino Server and use this archive file to create the same repositories on another orcharhino Server or in another orcharhino Server organization. The exported archive file contains the following data:

  • A JSON file containing Content View version metadata

  • An archive file containing all the repositories from the Library environment of the organization.

orcharhino Server exports only RPM and kickstart files included in a Content View version. orcharhino does not export the following content:

  • Docker content

Prerequisites

To export the contents of the Library lifecycle environment of the organization, ensure that the orcharhino Server from which you want to export meets the following conditions:

  • Ensure that the export directory has free storage space to accommodate the export.

  • Ensure that the /var/lib/pulp/exports directory has free storage space equivalent to the size of the repositories being exported for temporary files created during the export process.

  • Ensure that you set download policy to Immediate for all repositories within the Library lifecycle environment you export. For more information, see Download Policies Overview.

  • Ensure that you synchronize Products that you export to the required date.

To Export the Library Content of an Organization:

Use the organization name or ID to export.

# hammer content-export complete library --organization="My_Organization"
  1. Verify that the archive containing the exported version of a Content View is located in the export directory:

    # ls -lh /var/lib/pulp/exports/My_Organization/Export-Library/1.0/2021-03-02T03-35-24-00-00
    total 68M
    -rw-r--r--. 1 pulp pulp 68M Mar  2 03:35 export-1e25417c-6d09-49d4-b9a5-23df4db3d52a-20210302_0335.tar.gz
    -rw-r--r--. 1 pulp pulp 333 Mar  2 03:35 export-1e25417c-6d09-49d4-b9a5-23df4db3d52a-20210302_0335-toc.json
    -rw-r--r--. 1 root root 443 Mar  2 03:35 metadata.json
  2. You require all three files, that is, the tar.gz, the toc.json, and the metadata.json files, to be able to import the content.

  3. A new Content View called Export-Library is created in the organization. This Content View contains all the repositories belonging to this organization. A new version of this Content View is published and exported automatically.
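The completeness check from step 2 can be scripted. export_complete is a hypothetical helper, not part of orcharhino; it succeeds only when the archive, toc, and metadata files are all present:

```shell
# Hypothetical helper: verify an export directory is complete before copying
# it to the importing server.
export_complete() {
  # usage: export_complete <dir>
  local dir="$1"
  ls "$dir"/*.tar.gz >/dev/null 2>&1 &&
  ls "$dir"/*-toc.json >/dev/null 2>&1 &&
  [ -f "$dir/metadata.json" ]
}
```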

Export with chunking

In many cases, the exported archive may be several gigabytes in size. If you want to split it into smaller chunks, use the --chunk-size-gb flag directly in the export command. In the following example, --chunk-size-gb=2 splits the archive into 2 GB chunks.

# hammer content-export complete library \
--chunk-size-gb=2 \
--organization="My_Organization"

Generated /var/lib/pulp/exports/My_Organization/Export-Library/2.0/2021-03-02T04-01-25-00-00/metadata.json

# ls -lh /var/lib/pulp/exports/My_Organization/Export-Library/2.0/2021-03-02T04-01-25-00-00/
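A chunked export writes multiple archive files next to the toc and metadata files. The following sketch (a hypothetical helper, assuming GNU du) sums the chunk sizes so you can confirm the expected total:

```shell
# Hypothetical helper: print the combined apparent size of files, in bytes.
total_bytes() {
  # usage: total_bytes <file>...
  du -cb "$@" | tail -n1 | cut -f1
}

# Example against the chunked export directory above:
#   total_bytes /var/lib/pulp/exports/My_Organization/Export-Library/2.0/2021-03-02T04-01-25-00-00/*.tar.gz*
```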

Incremental Export

Exporting Library content can be a very expensive operation in terms of space and resources. Organizations that have multiple RHEL trees may occupy several gigabytes of space on orcharhino Server.

orcharhino Server offers Incremental Export to help with this scenario. An incremental export contains only the content that has changed since the previous export, so it is typically smaller than a full export. The following example incrementally exports the changes since the previous export of all the repositories in the Library lifecycle environment.

# hammer content-export incremental library --organization="My_Organization"

Generated /var/lib/pulp/exports/My_Organization/Export-Library/3.0/2021-03-02T04-22-14-00-00/metadata.json

# ls -lh /var/lib/pulp/exports/My_Organization/Export-Library/3.0/2021-03-02T04-22-14-00-00/
total 172K
-rw-r--r--. 1 pulp pulp 161K Mar  2 04:22 export-436882d8-de5a-48e9-a30a-17169318f908-20210302_0422.tar.gz
-rw-r--r--. 1 pulp pulp  333 Mar  2 04:22 export-436882d8-de5a-48e9-a30a-17169318f908-20210302_0422-toc.json
-rw-r--r--. 1 root root  492 Mar  2 04:22 metadata.json
  1. Because nothing changed in the organization’s Library environment between the previous export and now, the exported files are very small.

Importing into the Library Environment

You can use the archive that the hammer content-export command outputs to import into the Library lifecycle environment of another organization. For more information about exporting contents from the Library environment, see Exporting the Library Environment.

Prerequisites

To import into an organization’s Library lifecycle environment, ensure that the orcharhino Server where you want to import meets the following conditions:

  1. The exported archive must be in a directory under /var/lib/pulp/imports.

  2. The directory must have pulp:pulp permissions so that Pulp can read and write the .json files in that directory.

  3. If there are any Red Hat repositories in the export archive, the importing organization’s manifest must contain subscriptions for the products contained within the export.

  4. The user importing the content must have the 'Content Importer' Role.

Procedure
  1. Copy the archived file with the exported Content View version to the /var/lib/pulp/imports directory on orcharhino Server where you want to import.

  2. Set the permission of the archive files to pulp:pulp.

    # chown -R pulp:pulp /var/lib/pulp/imports/2021-03-02T03-35-24-00-00
    # ls -lh /var/lib/pulp/imports/2021-03-02T03-35-24-00-00
    total 68M
    -rw-r--r--. 1 pulp pulp 68M Mar  2 04:29 export-1e25417c-6d09-49d4-b9a5-23df4db3d52a-20210302_0335.tar.gz
    -rw-r--r--. 1 pulp pulp 333 Mar  2 04:29 export-1e25417c-6d09-49d4-b9a5-23df4db3d52a-20210302_0335-toc.json
    -rw-r--r--. 1 pulp pulp 443 Mar  2 04:29 metadata.json
  3. On the orcharhino Server where you want to import, create or enable repositories with the same name and label as the exported content.

  4. In the orcharhino management UI, navigate to Content > Products, click the Yum content tab and add the same Yum content that the exported Content View version includes.

  5. Identify the organization that you want to import into.

  6. To import the Library content to orcharhino Server, enter the following command:

    # hammer content-import library \
    --organization="My_Organization" \
    --path=/var/lib/pulp/imports/2021-03-02T03-35-24-00-00

    Note that you must enter the full path /var/lib/pulp/imports/<path>. Relative paths do not work.

  7. To verify that you imported the Library content, check the contents of the Product and Repositories. A new Content View called Import-Library is created in the target organization. This Content View is used to facilitate the library content import.

Import/Export Cheat Sheet

Table 4. Export
Intent Command

Fully Export a content view version

hammer content-export complete version --content-view=view --version=1.0 --organization="Default_Organization"

Incrementally Export a content view version (assuming you have already exported something)

hammer content-export incremental version --content-view=view --version=2.0 --organization="Default_Organization"

Fully Export an Organization’s Library

hammer content-export complete library --organization="Default_Organization"

Incrementally Export an Organization’s Library (assuming you have already exported something)

hammer content-export incremental library --organization="Default_Organization"

Export a content view version promoted to the Dev Environment

hammer content-export complete version --content-view=view --organization="Default_Organization" --lifecycle-environment="Dev"

Export a content view in smaller chunks (2 GB chunks)

hammer content-export complete version --content-view=view --version=1.0 --organization="Default_Organization" --chunk-size-gb=2

Get a list of exports

hammer content-export list --content-view=view --organization="Default_Organization"

Table 5. Import
Intent Command

Import to a content view version

hammer content-import version --organization="Default_Organization" --path='/var/lib/pulp/imports/dump_dir'

Import to an Organization’s Library

hammer content-import library --organization="Default_Organization" --path='/var/lib/pulp/imports/dump_dir'

Managing Activation Keys

Activation keys provide a method to automate system registration and subscription attachment. You can create multiple keys and associate them with different environments and Content Views. For example, you might create a basic activation key with a subscription for Red Hat Enterprise Linux workstations and associate it with Content Views from a particular environment.

You can use activation keys during content host registration to improve the speed, simplicity and consistency of the process. Note that activation keys are used only when hosts are registered. If changes are made to an activation key, it is applicable only to hosts that are registered with the amended activation key in the future. The changes are not made to existing hosts.

Activation keys can define the following properties for content hosts:

  • Associated subscriptions and subscription attachment behavior

  • Available products and repositories

  • A life cycle environment and a Content View

  • Host collection membership

  • System purpose

Content View Conflicts between Host Creation and Registration

When you provision a host, orcharhino uses provisioning templates and other content from the Content View that you set in the host group or host settings. When the host is registered, the Content View from the activation key overwrites the original Content View from the host group or host settings. Then orcharhino uses the Content View from the activation key for every future task, for example, rebuilding a host.

When you rebuild a host, ensure that you set the Content View that you want to use in the activation key and not in the host group or host settings.

Using the Same Activation Key with Multiple Content Hosts

You can apply the same activation key to multiple content hosts if it contains enough subscriptions. However, activation keys set only the initial configuration for a content host. When the content host is registered to an organization, the organization’s content can be attached to the content host manually.

Using Multiple Activation Keys with a Content Host

A content host can be associated with multiple activation keys that are combined to define the host settings. In case of conflicting settings, the last specified activation key takes precedence. You can specify the order of precedence by setting a host group parameter as follows:

$ hammer hostgroup set-parameter \
--hostgroup "My_Host_Group" \
--name "My_Activation_Key" \
--value "name_of_first_key,name_of_second_key,..."

Creating an Activation Key

You can use activation keys to define a specific set of subscriptions to attach to hosts during registration. The subscriptions that you add to an activation key must be available within the associated Content View.

Subscription Manager attaches subscriptions differently depending on the following factors:

  • Are there any subscriptions associated with the activation key?

  • Is the auto-attach option enabled?

  • For Red Hat Enterprise Linux 8 hosts: Is there system purpose set on the activation key?

Note that orcharhino automatically attaches subscriptions only for the products installed on a host. For subscriptions that do not list products installed on Red Hat Enterprise Linux by default, such as the Extended Update Support (EUS) subscription, use an activation key that specifies the required subscriptions and has auto-attach disabled.

Based on the previous factors, there are three possible scenarios for subscribing with activation keys:

  1. Activation key that attaches subscriptions automatically.

    With no subscriptions specified and auto-attach enabled, hosts using the activation key search for the best fitting subscription from the ones provided by the Content View associated with the activation key. This is similar to entering the subscription-manager --auto-attach command. For Red Hat Enterprise Linux 8 hosts, you can configure the activation key to set system purpose on hosts during registration to enhance automatic subscription attachment.

  2. Activation key providing a custom set of subscriptions for auto-attach.

    If there are subscriptions specified and auto-attach is enabled, hosts using the activation key select the best fitting subscription from the list specified in the activation key. Setting system purpose on the activation key does not affect this scenario.

  3. Activation key with the exact set of subscriptions.

    If there are subscriptions specified and auto-attach is disabled, hosts using the activation key are associated with all subscriptions specified in the activation key. Setting system purpose on the activation key does not affect this scenario.

Custom Products

If a custom product, typically containing content not provided by Red Hat, is assigned to an activation key, this product is always enabled for the registered content host regardless of the auto-attach setting.

To use the CLI instead of the orcharhino management UI, see the CLI procedure.

Procedure
  1. In the orcharhino management UI, navigate to Content > Activation keys and click Create Activation Key.

  2. In the Name field, enter the name of the activation key.

  3. If you want to set a limit, clear the Unlimited hosts check box, and in the Limit field, enter the maximum number of systems you can register with the activation key. If you want unlimited hosts to register with the activation key, ensure the Unlimited Hosts check box is selected.

  4. Optional: In the Description field, enter a description for the activation key.

  5. From the Environment list, select the environment to use.

  6. From the Content View list, select a Content View to use. If you intend to use the deprecated Katello Agent instead of Remote Execution, the Content View must contain the orcharhino client repository because it contains the katello-agent package.

  7. Click Save.

  8. Optional: For Red Hat Enterprise Linux 8 hosts, in the System Purpose section, you can configure the activation key with system purpose to set on hosts during registration and enhance automatic subscription attachment.

CLI procedure
  1. Create the activation key:

    # hammer activation-key create \
    --name "My_Activation_Key" \
    --unlimited-hosts \
    --description "Example Stack in the Development Environment" \
    --lifecycle-environment "Development" \
    --content-view "Stack" \
    --organization "My_Organization"
  2. Optional: For Red Hat Enterprise Linux 8 hosts, enter the following command to configure the activation key with system purpose to set on hosts during registration and enhance automatic subscription attachment:

    # hammer activation-key update \
    --organization "My_Organization" \
    --name "My_Activation_Key" \
    --service-level "Standard" \
    --purpose-usage "Development/Test" \
    --purpose-role "Red Hat Enterprise Linux Server" \
    --purpose-addons "addons"
  3. Obtain a list of your subscription IDs:

    # hammer subscription list --organization "My_Organization"
  4. Attach the Red Hat Enterprise Linux subscription UUID to the activation key:

    # hammer activation-key add-subscription \
    --name "My_Activation_Key" \
    --subscription-id My_Subscription_ID \
    --organization "My_Organization"
  5. List the product content associated with the activation key:

    # hammer activation-key product-content \
    --name "My_Activation_Key" \
    --organization "My_Organization"
  6. Override the default auto-enable status for the orcharhino client repository. The default status is set to disabled. To enable the repository, enter the following command:

    # hammer activation-key content-override \
    --name "My_Activation_Key" \
    --content-label "orcharhino client" \
    --value 1 \
    --organization "My_Organization"

Updating Subscriptions Associated with an Activation Key

Use this procedure to change the subscriptions associated with an activation key. To use the CLI instead of the orcharhino management UI, see the CLI procedure.

Note that changes to an activation key apply only to machines provisioned after the change. To update subscriptions on existing content hosts, see Updating Red Hat Subscriptions on Multiple Hosts.

Procedure
  1. In the orcharhino management UI, navigate to Content > Activation keys and click the name of the activation key.

  2. Click the Subscriptions tab.

  3. To remove subscriptions, select List/Remove, select the check boxes to the left of the subscriptions that you want to remove, and then click Remove Selected.

  4. To add subscriptions, select Add, select the check boxes to the left of the subscriptions that you want to add, and then click Add Selected.

  5. Click the Repository Sets tab and review the repositories' status settings.

  6. To enable or disable a repository, select the check box for a repository and then change the status using the Select Action list.

  7. Click the Details tab, select a Content View for this activation key, and then click Save.

CLI procedure
  1. List the subscriptions that the activation key currently contains:

    # hammer activation-key subscriptions \
    --name My_Activation_Key \
    --organization "My_Organization"
  2. Remove the required subscription from the activation key:

    # hammer activation-key remove-subscription \
    --name "My_Activation_Key" \
    --subscription-id ff808181533518d50152354246e901aa \
    --organization "My_Organization"

    For the --subscription-id option, you can use either the UUID or the ID of the subscription.

  3. Attach the new subscription to the activation key:

    # hammer activation-key add-subscription \
    --name "My_Activation_Key" \
    --subscription-id ff808181533518d50152354246e901aa \
    --organization "My_Organization"

    For the --subscription-id option, you can use either the UUID or the ID of the subscription.

  4. List the product content associated with the activation key:

    # hammer activation-key product-content \
    --name "My_Activation_Key" \
    --organization "My_Organization"
  5. Override the default auto-enable status for the required repository:

    # hammer activation-key content-override \
    --name "My_Activation_Key" \
    --content-label content_label \
    --value 1 \
    --organization "My_Organization"

    For the --value option, enter 1 to enable or 0 to disable the repository.

Using Activation Keys for Host Registration

You can use activation keys to complete the following tasks:

  • Registering new hosts during provisioning through orcharhino. The kickstart provisioning templates in orcharhino contain commands to register the host using an activation key that is defined when creating a host.

  • Registering existing Red Hat Enterprise Linux hosts. Configure Subscription Manager to use orcharhino Server for registration and specify the activation key when running the subscription-manager register command.

You can register hosts with orcharhino using the host registration feature, the orcharhino API, or Hammer CLI.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > Register Host.

  2. Click Generate to create the registration command.

  3. Click on the files icon to copy the command to your clipboard.

  4. Log on to the host that you want to register and run the previously generated command.

  5. Check the /etc/yum.repos.d/redhat.repo file and ensure that the appropriate repositories have been enabled.
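To script the check in the last step, you can extract the repository IDs that are enabled in the INI-style .repo files that the package manager reads. This is a sketch using standard awk; the helper name is hypothetical:

```shell
# Hypothetical helper: print section names of INI-style .repo files
# where enabled=1.
enabled_repos() {
  # usage: enabled_repos <file>...
  awk -F'[][]' '/^\[/ { sec = $2 }
                /^enabled[ \t]*=[ \t]*1[ \t]*$/ { print sec }' "$@"
}

# Example:
#   enabled_repos /etc/yum.repos.d/*.repo
```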

CLI procedure
  1. Generate the host registration command using the Hammer CLI:

    # hammer host-registration generate-command \
    --activation-keys "My_Activation_Key"
  2. Log on to the host that you want to register and run the previously generated command.

  3. Check the /etc/yum.repos.d/redhat.repo file and ensure that the appropriate repositories have been enabled.

API procedure
  1. Generate the host registration command using the orcharhino API:

    # curl -X POST https://orcharhino.example.com/api/registration_commands \
    --user "My_User_Name" \
    -H 'Content-Type: application/json' \
    -d '{ "registration_command": { "activation_keys": ["My_Activation_Key_1", "My_Activation_Key_2"] }}'

    Use an activation key to simplify specifying the environments. For more information, see Managing Activation Keys in the Content Management guide.

    To enter a password as command line argument, use username:password syntax. Keep in mind this can save the password in the shell history.

    For more information about registration see Registering a Host to orcharhino.

  2. Log on to the host that you want to register and run the previously generated command.

  3. Check the /etc/yum.repos.d/redhat.repo file and ensure that the appropriate repositories have been enabled.

Multiple Activation Keys

You can use multiple activation keys when registering a content host. You can then create activation keys for specific subscription sets and combine them according to content host requirements. For example, the following command registers a content host to your organization with both VDC and OpenShift subscriptions:

# subscription-manager register --org="My_Organization" \
--activationkey="ak-VDC,ak-OpenShift"

Settings Conflicts

If there are conflicting settings in activation keys, the rightmost key takes precedence.

  • Settings that conflict: Service Level, Release Version, Environment, Content View, and Product Content.

  • Settings that do not conflict and the host gets the union of them: Subscriptions and Host Collections.

  • Settings that influence the behavior of the key itself and not the host configuration: Content Host Limit and Auto-Attach.
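Because the rightmost key wins on conflicts, the order in which you list keys matters. The following sketch (a hypothetical helper, not an orcharhino command) assembles the registration command so that later keys take precedence; it only builds the command string and does not run it:

```shell
# Hypothetical helper: build a subscription-manager register command where
# keys listed later override earlier ones for conflicting settings.
build_register_cmd() {
  # usage: build_register_cmd <org> <key>...
  local org="$1"; shift
  local keys
  # Join the remaining arguments with commas, preserving their order.
  keys=$(IFS=,; printf '%s' "$*")
  printf 'subscription-manager register --org="%s" --activationkey="%s"\n' "$org" "$keys"
}
```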

Enabling Auto-Attach

When auto-attach is enabled on an activation key and there are subscriptions associated with the key, the subscription management service selects and attaches the best-matched associated subscriptions based on a set of criteria like currently installed products, architecture, and preferences like service level.

You can enable auto-attach and have no subscriptions associated with the key. This type of key is commonly used to register virtual machines when you do not want the virtual machine to consume a physical subscription, but to inherit a host-based subscription from the hypervisor.

Auto-attach is enabled by default. Disable the option if you want to force attach all subscriptions associated with the activation key.

Procedure
  1. In the orcharhino management UI, navigate to Content > Activation Keys.

  2. Click the activation key name that you want to edit.

  3. Click the Subscriptions tab.

  4. Click the edit icon next to Auto-Attach.

  5. Select or clear the check box to enable or disable auto-attach.

  6. Click Save.

CLI procedure
  • Enter the following command to enable auto-attach on the activation key:

    # hammer activation-key update --name "My_Activation_Key" \
    --organization "My_Organization" --auto-attach true

Setting the Service Level

You can configure an activation key to define a default service level for new hosts created with the activation key. Setting a default service level selects only the matching subscriptions to be attached to the host. For example, if the default service level on an activation key is set to Premium, only subscriptions with premium service levels are attached to the host upon registration.

Procedure
  1. In the orcharhino management UI, navigate to Content > Activation Keys.

  2. Click the activation key name you want to edit.

  3. Click the edit icon next to Service Level.

  4. Select the required service level from the list. The list only contains service levels available to the activation key.

  5. Click Save.

CLI procedure
  • Enter the following command to set a default service level to Premium on the activation key:

    # hammer activation-key update --name "My_Activation_Key" \
    --organization "My_Organization" --service-level premium

Best Practices for Activation Keys

  • There are two basic approaches to activation keys: either create a single activation key for a specific use case, for example corporate_webserver_prod, which uses the WebServer Content View and selects all subscriptions necessary to run the service; or create separate activation keys, for example basic_centos_7_prod, webserver_prod, and database_mysql_prod, which all use the same Content View, that is WebServer, with different sets of subscriptions.

  • Register hosts to orcharhino with an activation key. You can add additional content using content views, lifecycle environments, subscriptions, and repository sets using the management UI, the API, Hammer scripts, or Ansible.

  • You can attach multiple activation keys to a host as long as all activation keys belong to the same lifecycle environment and content view. This allows you to separately activate additional repositories at a later stage.

  • We recommend using meaningful names for activation keys to indicate the content and lifecycle environment, for example ubuntu_20_webserver.

Managing Errata

Red Hat Enterprise Linux users receive errata for each release of official Red Hat RPMs. SUSE Linux Enterprise Server users receive errata for each release of official SUSE RPMs. Additionally, ATIX provides Debian and Ubuntu users with errata. There are three types of advisories (in order of importance):

Security Advisory

Describes fixed security issues found in the package. The security impact of the issue can be Low, Moderate, Important, or Critical.

Bug Fix Advisory

Describes bug fixes for the package.

Product Enhancement Advisory

Describes enhancements and new features added to the package.

For Red Hat Enterprise Linux, orcharhino imports errata information when synchronizing repositories with Red Hat’s Content Delivery Network (CDN). For SUSE Linux Enterprise Server, orcharhino imports errata information when synchronizing repositories with SUSE Customer Center. For Alma Linux and Rocky Linux, orcharhino imports errata information when synchronizing their upstream repositories. orcharhino also provides tools to inspect and filter errata, allowing for precise update management. This way, you can select relevant updates and propagate them through Content Views to selected content hosts.

For Debian and Ubuntu, you can use the errata service provided by ATIX. For more information, see creating a repository of type deb.

The ATIX Debian and Ubuntu Errata service provides errata for Debian and Ubuntu. When creating a repository of type deb, point the Errata URL at ATIX’s Debian and Ubuntu Errata Parser server. Use https://dep.atix.de/dep/api/v1/debian for Debian and https://dep.atix.de/dep/api/v1/ubuntu for Ubuntu.

An erratum contains the information which packages have to be updated to fix a security issue. Debian and Ubuntu errata are derived from the Debian security announcements (DSA) and the Ubuntu security notices (USN).

Ensure that you only add Debian and Ubuntu errata to Debian and Ubuntu repositories that contain the packages needed to apply the errata. For Debian, you need the <debian_release>/updates repository, for example bullseye/updates. For Ubuntu, you need the <ubuntu_release>-security repository, for example focal-security.
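The repository naming convention above can be captured in a small helper. This is a sketch for illustration only; the helper name is hypothetical:

```shell
# Hypothetical helper: print the repository that carries the packages
# needed to apply Debian or Ubuntu errata for a given release.
security_repo() {
  # usage: security_repo <debian|ubuntu> <release>
  case "$1" in
    debian) printf '%s/updates\n' "$2" ;;
    ubuntu) printf '%s-security\n' "$2" ;;
  esac
}

# Example:
#   security_repo debian bullseye   # bullseye/updates
#   security_repo ubuntu focal      # focal-security
```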

Email notifications can help you keep track of available errata for specific hosts. Navigate to the Users page to enable email notifications.


Errata are labeled according to the most important advisory type they contain. Therefore, errata labeled as Product Enhancement Advisory can contain only enhancement updates, while Bug Fix Advisory errata can contain both bug fixes and enhancements, and Security Advisory can contain all three types.

In orcharhino, there are two keywords that describe an erratum’s relationship to the available content hosts:

Applicable

An erratum that applies to one or more content hosts, which means it updates packages present on the content host. Although these errata apply to content hosts, until their state changes to Installable, the errata are not ready to be installed. Installable errata are automatically applicable.

Installable

An erratum that applies to one or more content hosts and is available to install on the content host. Installable errata are available to a content host from the life cycle environment and the associated Content View, but are not yet installed.

This chapter shows how to manage errata and apply them to either a single host or multiple hosts.

Inspecting Available Errata

The following procedure describes how to view and filter the available errata and how to display metadata of the selected advisory. To use the CLI instead of the orcharhino management UI, see the CLI procedure.

Procedure
  1. In the orcharhino management UI, navigate to Content > Errata to view the list of available errata.

  2. Use the filtering tools at the top of the page to limit the number of displayed errata:

    • Select the repository to be inspected from the list. All Repositories is selected by default.

    • The Applicable check box is selected by default to view only errata applicable to the selected repository. Select the Installable check box to view only errata marked as installable.

    • To search the table of errata, type the query in the Search field in the form of:

      parameter operator value

      See Parameters Available for Errata Search for the list of parameters available for search. Find the list of applicable operators in Supported Operators for Granular Search in Administering orcharhino. Automatic suggestion works as you type. You can also combine queries with the use of and and or operators. For example, to display only security advisories related to the kernel package, type:

      type = security and package_name = kernel

      Press Enter to start the search.

  3. Click the Errata ID of the erratum you want to inspect:

    • The Details tab contains the description of the updated package as well as documentation of important fixes and enhancements provided by the update.

    • On the Content Hosts tab, you can apply the erratum to selected content hosts as described in Applying Errata to Multiple Hosts.

    • The Repositories tab lists repositories that already contain the erratum. You can filter repositories by the environment and Content View, and search for them by the repository name.

CLI procedure
  • To view errata that are available for all organizations, enter the following command:

    # hammer erratum list
  • To view details of a specific erratum, enter the following command:

    # hammer erratum info --id erratum_ID
  • You can search errata by entering a query with the --search option. For example, to view applicable errata for a selected product that contain the specified bugs, ordered so that security errata are displayed on top, enter the following command:

    # hammer erratum list \
    --product-id 7 \
    --search "bug = 1213000 or bug = 1207972" \
    --errata-restrict-applicable 1 \
    --order "type desc"

Subscribing to Errata Notifications

You can configure email notifications for orcharhino users. Users receive a summary of applicable and installable errata, as well as notifications on Content View promotion or after repository synchronization. For more information, see Configuring Email Notifications in the Administering orcharhino guide.

Limitations to Repository Dependency Resolution

With orcharhino, using incremental updates to your Content Views solves some repository dependency problems. However, dependency resolution at a repository level remains problematic on occasion.

When a repository update becomes available with a new dependency, orcharhino retrieves the newest version of the package to solve the dependency, even if older versions are available in the existing repository. This can create further dependency resolution problems when installing packages.

Example scenario

A repository on your client has the package example_repository-1.0 with the dependency example_repository-libs-1.0. The repository also has another package example_tools-1.0.

A security erratum becomes available with the package example_tools-1.1. The example_tools-1.1 package requires the example_repository-libs-1.1 package as a dependency.

After an incremental Content View update, the packages example_tools-1.1, example_tools-1.0, and example_repository-libs-1.1 are now in the repository. The repository also has the packages example_repository-1.0 and example_repository-libs-1.0. Note that the incremental update to the Content View did not add the package example_repository-1.1. Because you can install all these packages using yum, no potential problem is detected. However, when the client installs the example_tools-1.1 package, a dependency resolution problem occurs because example_repository-libs-1.0 and example_repository-libs-1.1 cannot both be installed.

There is currently no workaround for this problem. The larger the time frame, and the more major Y releases, between the base set of RPMs and the errata being applied, the higher the chance of a dependency resolution problem.
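The failure mode above can be sketched in plain shell. The package names are the hypothetical ones from the example scenario; the script only reports names present in more than one version after the incremental update.

```shell
# Packages left in the repository after the incremental Content View update
# (hypothetical names from the scenario above).
repo='example_tools-1.0
example_tools-1.1
example_repository-1.0
example_repository-libs-1.0
example_repository-libs-1.1'

# Strip the trailing version and report names present in more than one
# version. Two versions of example_tools are harmless because yum picks the
# newer one, but example_tools-1.1 requires example_repository-libs-1.1
# while example_repository-libs-1.0 is already installed, and both library
# versions cannot be installed at once.
echo "$repo" | sed 's/-[0-9.]*$//' | sort | uniq -d
```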

Creating a Content View Filter for Errata

You can use content filters to limit errata. Such filters include:

  • ID - Select specific errata to allow into your resulting repositories.

  • Date Range - Define a date range and include a set of errata released during that date range.

  • Type - Select the type of errata to include such as bug fixes, enhancements, and security updates.

Create a content filter to exclude errata after a certain date. This ensures that the production systems in your application life cycle are kept up to date up to that point. You can then move the filter’s start date forward to introduce new errata into your testing environment and test the compatibility of new packages with your application life cycle.

To use the CLI instead of the orcharhino management UI, see the CLI procedure.

Prerequisites
  • A Content View with the repositories that contain required errata is created. For more information, see Creating a Content View.

Procedure
  1. In the orcharhino management UI, navigate to Content > Content Views and select a Content View that you want to use for applying errata.

  2. Select Yum Content > Filters and click New Filter.

  3. In the Name field, enter Errata Filter.

  4. From the Content Type list, select Erratum - Date and Type.

  5. From the Inclusion Type list, select Exclude.

  6. In the Description field, enter Exclude errata items from YYYY-MM-DD.

  7. Click Save.

  8. For Errata Type, select the check boxes of errata types you want to exclude. For example, select the Enhancement and Bugfix check boxes and clear the Security check box to exclude enhancement and bugfix errata after a certain date, but include all security errata.

  9. For Date Type, select one of two check boxes:

    • Issued On for the issued date of the erratum.

    • Updated On for the date of the erratum’s last update.

  10. Select the Start Date to exclude all errata on or after the selected date.

  11. Leave the End Date field blank.

  12. Click Save.

  13. Click Publish New Version to publish the resulting repository.

  14. Enter Adding errata filter in the Description field.

  15. Click Save.

    When the Content View completes publication, notice that the Content column reports a reduced number of packages and errata compared to the initial repository. This means the filter successfully excluded all non-security errata issued after the start date.

  16. Click the Versions tab.

  17. Click Promote to the right of the published version.

  18. Select the environments you want to promote the Content View version to.

  19. In the Description field, enter a description for the promotion.

  20. Click Promote Version to promote this Content View version across the required environments.

CLI procedure
  1. Create a filter for the errata:

    # hammer content-view filter create --name "Filter Name" \
    --description "Exclude errata items from YYYY-MM-DD" \
    --content-view "CV Name" --organization "Default Organization" \
    --type "erratum"
  2. Create a filter rule to exclude all errata on or after the Start Date that you want to set:

    # hammer content-view filter rule create --start-date "YYYY-MM-DD" \
    --content-view "CV Name" --content-view-filter="Filter Name" \
    --organization "Default Organization" --types=security,enhancement,bugfix
  3. Publish the Content View:

    # hammer content-view publish --name "CV Name" \
    --organization "Default Organization"
  4. Promote the Content View to the lifecycle environment so that the included errata are available to that lifecycle environment:

    # hammer content-view version promote \
    --content-view "CV Name" \
    --organization "Default Organization" \
    --to-lifecycle-environment "Lifecycle Environment Name"

Adding Errata to an Incremental Content View

If errata are available but not installable, you can create an incremental Content View version to add the errata to your content hosts. For example, if the Content View is version 1.0, it becomes Content View version 1.1, and when you publish, it becomes Content View version 2.0.

To use the CLI instead of the orcharhino management UI, see the CLI procedure.

Procedure
  1. In the orcharhino management UI, navigate to Content > Errata.

  2. From the Errata list, click the name of the erratum that you want to apply.

  3. Select the content hosts that you want to apply the errata to, and click Apply to Hosts. This creates the incremental update to the Content View.

  4. If you want to apply the errata to the content host, select the Apply Errata to Content Hosts immediately after publishing check box.

  5. Click Confirm to apply the errata.

CLI procedure
  1. List the errata and their corresponding IDs:

    # hammer erratum list
  2. List the Content View versions and their corresponding IDs:

    # hammer content-view version list
  3. Apply a single erratum to a Content View version. You can add more IDs in a comma-separated list.

    # hammer content-view version incremental-update \
    --content-view-version-id 319 --errata-ids 34068b

Applying Errata to a Host

Use these procedures to review and apply errata to a host.

Prerequisites
  • Register the host to an environment and Content View on orcharhino Server. For more information, see Registering Hosts in the Managing Hosts guide.

  • Configure the host for remote execution. For more information about running remote execution jobs, see Configuring and Setting up Remote Jobs in the Managing Hosts guide.

If the host is already configured to receive content updates with the deprecated Katello Agent, migrate to remote execution instead. For more information, see Migrating from Katello Agent to Remote Execution in the Managing Hosts guide.

The procedure to apply an erratum to a managed host depends on its operating system.

Applying Errata to a Host running CentOS 7

Use these procedures to review and apply errata to a host running CentOS 7.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > Content Hosts and select the host you want to apply errata to.

  2. Navigate to the Errata tab to see the list of errata.

  3. Select the errata to apply and click Apply Selected. In the confirmation window, click Apply.

  4. After the task to update all packages associated with the selected errata completes, click the Details tab to view the updated packages.

CLI procedure
  1. List all errata for the host:

    # hammer host errata list \
    --host client.example.com
  2. Apply the most recent erratum to the host. Identify the erratum to apply using the erratum ID:

    # hammer job-invocation create \
    --feature katello_errata_install \
    --inputs errata=ERRATUM_ID1,ERRATUM_ID2 \
    --search-query "name = client.example.com"

Applying Errata to a Host running Debian or Ubuntu

Use these procedures to review and apply errata to a host running Debian or Ubuntu.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > Content Hosts and select the host you want to apply errata to.

  2. Navigate to the Errata tab to see the list of errata.

  3. Select the errata to apply and click Apply Selected. In the confirmation window, click Apply.

  4. After the task to update all packages associated with the selected errata completes, click the Details tab to view the updated packages.

CLI procedure
  1. List all errata for the host:

    # hammer host errata list \
    --host client.example.com
  2. Apply the most recent erratum to the host. Identify the erratum to apply using the erratum ID.

    Using Remote Execution

    # hammer job-invocation create \
    --feature katello_errata_install \
    --inputs errata=ERRATUM_ID1,ERRATUM_ID2 \
    --search-query "name = client.example.com"

Applying Errata to a Host running Red Hat Enterprise Linux 7

Use these procedures to review and apply errata to a host running Red Hat Enterprise Linux 7.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > Content Hosts and select the host you want to apply errata to.

  2. Navigate to the Errata tab to see the list of errata.

  3. Select the errata to apply and click Apply Selected. In the confirmation window, click Apply.

  4. After the task to update all packages associated with the selected errata completes, click the Details tab to view the updated packages.

CLI procedure
  1. List all errata for the host:

    # hammer host errata list \
    --host client.example.com
  2. Apply the most recent erratum to the host. Identify the erratum to apply using the erratum ID.

    Using Remote Execution

    # hammer job-invocation create \
    --feature katello_errata_install \
    --inputs errata=ERRATUM_ID1,ERRATUM_ID2 \
    --search-query "name = client.example.com"

    Using Katello Agent (deprecated)

    # hammer host errata apply --host "client.example.com" \
    --errata-ids ERRATUM_ID1,ERRATUM_ID2...

Applying Errata to a Host running Red Hat Enterprise Linux 8

Use this procedure to review and apply errata to a host running Red Hat Enterprise Linux 8.

CLI procedure
  1. On orcharhino, list all errata for the host:

    # hammer host errata list \
    --host client.example.com
  2. Find the module stream an erratum belongs to:

    # hammer erratum info --id ERRATUM_ID
  3. On the host, update the module stream:

    # yum update Module_Stream_Name

Applying Errata to a Host Running SLES 15

Use this procedure to review and apply errata to a host running SUSE Linux Enterprise Server 15.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > Content Hosts and select the host you want to apply errata to.

  2. Navigate to the Errata tab to view the list of errata.

  3. Select the errata to apply and click Apply Selected. In the confirmation window, click Apply.

  4. After the task to update all packages associated with the selected errata completes, click the Details tab to view the updated packages.

CLI procedure
  1. List all errata for the host:

    # hammer host errata list \
    --host client.example.com
  2. Apply the most recent erratum to the host. Identify the erratum to apply using the erratum ID:

    # hammer job-invocation create \
    --feature katello_errata_install \
    --inputs errata=ERRATUM_ID1,ERRATUM_ID2 \
    --search-query "name = client.example.com"

Applying Errata to Multiple Hosts

Use these procedures to review and apply errata to multiple RHEL 7 hosts.

Prerequisites
  • Synchronize orcharhino repositories with the latest errata available from Red Hat. For more information, see Synchronizing Repositories.

  • Register the hosts to an environment and Content View on orcharhino Server. For more information, see Registering Hosts in the Managing Hosts guide.

  • Configure the host for remote execution. For more information about running remote execution jobs, see Configuring and Setting up Remote Jobs in the Managing Hosts guide.

If the host is already configured to receive content updates with the deprecated Katello Agent, migrate to remote execution instead. For more information, see Migrating from Katello Agent to Remote Execution in the Managing Hosts guide.

Procedure
  1. In the orcharhino management UI, navigate to Content > Errata.

  2. Click the name of an erratum you want to apply.

  3. Click the Content Hosts tab.

  4. Select the hosts you want to apply errata to and click Apply to Hosts.

  5. Click Confirm.

CLI procedure
  1. List all installable errata:

    # hammer erratum list \
    --errata-restrict-installable true \
    --organization "Default Organization"
  2. Apply one of the errata to multiple hosts:

    Using Remote Execution

    # hammer job-invocation create \
    --feature katello_errata_install \
    --inputs errata=ERRATUM_ID \
    --search-query "applicable_errata = ERRATUM_ID"

    Using Katello Agent (deprecated)

    Identify the erratum you want to use and list the hosts that this erratum is applicable to:

    # hammer host list \
    --search "applicable_errata = ERRATUM_ID" \
    --organization "Default Organization"

    The following Bash script applies an erratum to each host for which this erratum is available:

    for HOST in $(hammer --csv --csv-separator "|" host list --search "applicable_errata = ERRATUM_ID" --organization "Default Organization" | tail -n+2 | awk -F "|" '{ print $2 }') ;
    do
      echo "== Applying to $HOST ==" ; hammer host errata apply --host $HOST --errata-ids ERRATUM_ID1,ERRATUM_ID2 ;
    done

    This command identifies all hosts for which the erratum is applicable and then applies the erratum to each of these hosts.

  3. To see if an erratum is applied successfully, find the corresponding task in the output of the following command:

    # hammer task list
  4. View the state of a selected task:

    # hammer task progress --id task_ID

Applying Errata to a Host Collection

Using Remote Execution
# hammer job-invocation create \
--feature katello_errata_install \
--inputs errata=ERRATUM_ID1,ERRATUM_ID2,... \
--search-query "host_collection = HOST_COLLECTION_NAME"
Using Katello Agent (deprecated)
# hammer host-collection erratum install \
--errata "erratum_ID1,erratum_ID2,..." \
--name "host_collection_name" \
--organization "Your_Organization"

Best Practices for Errata

  • Errata are classified in several distinct types and contain information about what is fixed in which version. Applying errata updates the described packages.

  • orcharhino shows which errata are applicable for each content host by comparing package lists with installed and available packages.

  • View errata on the content hosts page and choose between errata from the current content view and lifecycle environment and the Library lifecycle environment which contains the latest synchronized packages.

    You can only apply errata that are included in the Content View version of the lifecycle environment of your managed host. Consider this a reason to promote your Content Views regularly.

  • You can subsequently add synchronized errata that are applicable to at least one content host to your Content Views.

    • Find errata under Content > Errata by filtering for a specific repository or the name of an erratum.

    • Applicable errata indicate that a host contains packages that have errata available.

    • Installable errata indicate that a host has updated packages available in its content view ready to be installed.

    • If an erratum is applicable but not installable, select the erratum and click Apply. This displays information about which hosts are affected by the erratum. Trying to install the erratum prompts you with the option to create an incremental Content View version and install the erratum afterwards.

    • An incremental update publishes a new minor version of the content view and promotes it to the necessary lifecycle environment. Use bulk actions to apply errata to multiple content hosts at once.

    • This allows you to add patches for security issues to a frozen set of content without unnecessarily updating other unaffected packages.

    • You cannot manually increment a content view without adding errata.

  • Your orcharhino subscription contains access to security errata for Debian and Ubuntu provided by ATIX.

Managing Container Images

With orcharhino, you can import container images from various sources and distribute them to external containers using Content Views.

Importing Container Images

You can import container image repositories from any container image registry.

This procedure uses repository discovery to find container images and import them as repositories. For more information about creating a product and repository manually, see Importing Content.

To use the CLI instead of the orcharhino management UI, see the CLI procedure.

Procedure
  1. In the orcharhino management UI, navigate to Content > Products and click Repo Discovery.

  2. From the Repository Type list, select Container Images.

  3. In the Registry to Discover field, enter the URL of the registry to import images from.

  4. In the Registry Username field, enter the name that corresponds with your user name for the container image registry.

  5. In the Registry Password field, enter the password that corresponds with the user name that you enter.

  6. In the Registry Search Parameter field, enter any search criteria that you want to use to filter your search, and then click Discover.

  7. Optional: To further refine the Discovered Repository list, in the Filter field, enter any additional search criteria that you want to use.

  8. From the Discovered Repository list, select any repositories that you want to import, and then click Create Selected.

  9. Optional: If you want to create a product, from the Product list, select New Product.

  10. In the Name field, enter a product name.

  11. Optional: In the Repository Name and Repository Label columns, you can edit the repository names and labels.

  12. Click Run Repository Creation.

  13. When repository creation is complete, you can click each new repository to view more information.

  14. Optional: To filter the content you import to a repository, click a repository, and then navigate to Limit Sync Tags. Click the edit icon and add any tags to limit the content that synchronizes to orcharhino.

  15. In the orcharhino management UI, navigate to Content > Products and select the name of your product.

  16. Select the new repositories and then click Sync Now to start the synchronization process.

To view the progress of the synchronization, navigate to Content > Sync Status and expand the repository tree.

When the synchronization completes, you can click Container Image Manifests to list the available manifests. From the list, you can also remove any manifests that you do not require.

CLI procedure
  1. Create the custom Red Hat Container Catalog product:

    # hammer product create \
    --description "Red Hat Container Catalog content" \
    --name "Red Hat Container Catalog" \
    --organization "My_Organization" \
    --sync-plan "My_Sync_Plan"
  2. Create the repository for the container images:

    # hammer repository create \
    --name "RHEL7" \
    --content-type "docker" \
    --url "http://registry.access.redhat.com/" \
    --docker-upstream-name "rhel7" \
    --product "Red Hat Container Catalog" \
    --organization "My_Organization"
  3. Synchronize the repository:

    # hammer repository synchronize \
    --name "RHEL7" \
    --product "Red Hat Container Catalog" \
    --organization "My_Organization"

Managing Container Name Patterns

When you use orcharhino to create and manage your containers, the container name changes at each stage as the container moves through Content View versions and different stages of the orcharhino lifecycle environment. For example, if you synchronize a container image with the name ssh from an upstream repository, when you add it to an orcharhino product and organization and then publish it as part of a Content View, the container image can have the following name: my_organization_production-custom_spin-my_product-custom_ssh. This can create problems when you want to pull a container image because container registries can contain only one instance of a container name. To avoid problems with orcharhino naming conventions, you can set a registry name pattern that overrides the default name and ensures that your container name is clear for future use.
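The default name in the example above is a concatenation of the organization, lifecycle environment, Content View, product, and repository labels. A minimal sketch, using the hypothetical labels from the example:

```shell
# Hypothetical labels from the example above.
org=my_organization
env=production
cv=custom_spin
product=my_product
repo=custom_ssh   # "custom_" prefix plus the upstream name "ssh"

# Compose the default published container name.
echo "${org}_${env}-${cv}-${product}-${repo}"
# → my_organization_production-custom_spin-my_product-custom_ssh
```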

Limitations

Because registry name patterns must generate globally unique names, you might experience naming conflicts if you use a registry name pattern to manage container naming conventions. For example:

  • If you set the repository.docker_upstream_name registry name pattern, you cannot publish or promote Content Views with container content with identical repository names to the Production lifecycle.

  • If you set the lifecycle_environment.name registry name pattern, this can prevent the creation of a second container repository with the identical name.

You must proceed with caution when defining registry naming patterns for your containers.

Procedure

To manage container naming with a registry name pattern, complete the following steps:

  1. In the orcharhino management UI, navigate to Content > Lifecycle Environments, and either create a lifecycle environment or select a lifecycle environment to edit.

  2. In the Container Image Registry area, click the edit icon to the right of Registry Name Pattern area.

  3. Use the list of variables and examples to determine which registry name pattern you require.

  4. In the Registry Name Pattern field, enter the registry name pattern that you want to use. For example, to use the repository.docker_upstream_name:

    <%= repository.docker_upstream_name %>
  5. Click Save.
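Patterns can also combine several variables. For example, the following pattern, a sketch built from the two variables named in the Limitations section above, prefixes the upstream repository name with the lifecycle environment; check the variable list on the page for the exact names available:

```
<%= lifecycle_environment.name %>-<%= repository.docker_upstream_name %>
```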

Managing Container Registry Authentication

You can manage the authentication settings for accessing container images from orcharhino. By default, users must authenticate to access container images in orcharhino.

You can specify whether you want users to authenticate to access container images in orcharhino in a lifecycle environment. For example, you might want to permit users to access container images from the Production lifecycle without any authentication requirement and restrict access to the Development and QA environments to authenticated users.

Procedure
  1. In the orcharhino management UI, navigate to Content > Lifecycle Environments.

  2. Select the lifecycle environment that you want to manage authentication for.

  3. To permit unauthenticated access to the containers in this lifecycle environment, select the Unauthenticated Pull check box. To restrict unauthenticated access, clear the Unauthenticated Pull check box.

  4. Click Save.

Using Container Registries

You can use Podman or Docker to fetch content from container registries.

Procedure

Logging in to the container registry:

# podman login orcharhino.example.com

Listing container images:

# podman search orcharhino.example.com/

Pulling container images:

# podman pull orcharhino.example.com/my-image:<optional_tag>

Managing ISO Images

You can use orcharhino to store ISO images, either from Red Hat’s Content Delivery Network or other sources. You can also upload other files, such as virtual machine images, and publish them in repositories.

Importing ISO Images from Red Hat

The Red Hat Content Delivery Network provides ISO images for certain products. The procedure for importing this content is similar to the procedure for enabling repositories for RPM content.

To use the CLI instead of the orcharhino management UI, see the CLI procedure.

Procedure
  1. In the orcharhino management UI, navigate to Content > Red Hat Repositories.

  2. In the Search field, enter an image name, for example, Red Hat Enterprise Linux 7 Server (ISOs).

  3. In the Available Repositories window, expand Red Hat Enterprise Linux 7 Server (ISOs).

  4. For the x86_64 7.2 entry, click the Enable icon to enable the repositories for the image.

  5. In the orcharhino management UI, navigate to Content > Products and click Red Hat Enterprise Linux Server.

  6. Click the Repositories tab of the Red Hat Enterprise Linux Server window, and click Red Hat Enterprise Linux 7 Server ISOs x86_64 7.2.

  7. In the upper right of the Red Hat Enterprise Linux 7 Server ISOs x86_64 7.2 window, click Select Action and select Sync Now.

To view the Synchronization Status
  • In the orcharhino management UI, navigate to Content > Sync Status and expand Red Hat Enterprise Linux Server.

CLI procedure
  1. Locate the Red Hat Enterprise Linux Server product for file repositories:

    # hammer repository-set list \
    --product "Red Hat Enterprise Linux Server" \
    --organization "My_Organization" | grep "file"
  2. Enable the file repository for Red Hat Enterprise Linux 7.2 Server ISO:

    # hammer repository-set enable \
    --product "Red Hat Enterprise Linux Server" \
    --name "Red Hat Enterprise Linux 7 Server (ISOs)" \
    --releasever 7.2 \
    --basearch x86_64 \
    --organization "My_Organization"
  3. Locate the repository in the product:

    # hammer repository list \
    --product "Red Hat Enterprise Linux Server" \
    --organization "My_Organization"
  4. Synchronize the repository in the product:

    # hammer repository synchronize \
    --name "Red Hat Enterprise Linux 7 Server ISOs x86_64 7.2" \
    --product "Red Hat Enterprise Linux Server" \
    --organization "My_Organization"

Importing Individual ISO Images and Files

Use this procedure to manually import ISO content and other files to orcharhino Server. To import files, you can complete the following steps in the orcharhino management UI or using the Hammer CLI. However, if the size of the file that you want to upload is larger than 15 MB, you must use the Hammer CLI to upload it to a repository.

To use the CLI instead of the orcharhino management UI, see the CLI procedure.

Procedure
  1. In the orcharhino management UI, navigate to Content > Products, and in the Products window, click Create Product.

  2. In the Name field, enter a name to identify the product. This name populates the Label field.

  3. Optional: In the GPG Key field, enter a GPG Key for the product.

  4. Optional: From the Sync Plan list, select a synchronization plan for the product.

  5. Optional: In the Description field, enter a description of the product.

  6. Click Save.

  7. In the Products window, click the new product and then click Create Repository.

  8. In the Name field, enter a name for the repository. This automatically populates the Label field.

  9. From the Type list, select file.

  10. In the Upstream URL field, enter the URL of the registry to use as a source. Add a corresponding user name and password in the Upstream Username and Upstream Password fields.

  11. Click Save.

  12. Select the new repository.

  13. Navigate to Upload File and click Browse.

  14. Select the .iso file and click Upload.

CLI procedure
  1. Create the custom product:

    # hammer product create \
    --name "My_ISOs" \
    --sync-plan "Example Plan" \
    --description "My_Product" \
    --organization "My_Organization"
  2. Create the repository:

    # hammer repository create \
    --name "My_ISOs" \
    --content-type "file" \
    --product "My_Product" \
    --organization "My_Organization"
  3. Upload the ISO file to the repository:

    # hammer repository upload-content \
    --path ~/bootdisk.iso \
    --id repo_ID \
    --organization "My_Organization"

Managing Custom File Type Content

In orcharhino, you might require methods of managing and distributing SSH keys and source code files or larger files such as virtual machine images and ISO files. To achieve this, custom products in orcharhino include repositories for custom file types. This provides a generic method to incorporate arbitrary files in a product.

You can upload files to the repository and synchronize files from an upstream orcharhino Server. When you add files to a custom file type repository, you can use the normal orcharhino management functions, such as adding a specific version to a Content View to provide version control and making the repository of files available on various orcharhino Proxies. Clients download the files over HTTP or HTTPS, for example using curl -O.
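For example, a client might fetch a published file with curl. The URL below is hypothetical; the actual content path depends on your organization, lifecycle environment, Content View, product, and repository labels:

```shell
# Hypothetical URL; substitute your own server name and content path.
curl -O https://orcharhino.example.com/pulp/content/My_Organization/Library/custom/My_File_Product/My_Files/my_file.txt
```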

You can create a file type repository in orcharhino Server only in a custom product, but there is flexibility in how you create the file type repository. You can create an independent file type repository in a directory on the system where orcharhino is installed, or on a remote HTTP server, and then synchronize the contents of that directory into orcharhino. This method is useful when you have multiple files to add to a orcharhino repository.

Creating a Custom File Type Repository

The procedure for creating a custom file type repository is the same as the procedure for creating any custom content, except that when you create the repository, you select the file type. You must create a product and then add a custom repository.

To use the CLI instead of the orcharhino management UI, see the CLI procedure.

Procedure

To create a custom product, complete the following procedure:

  1. In the orcharhino management UI, navigate to Content > Products, click Create Product, and enter the following details:

  2. In the Name field, enter a name for the product. orcharhino automatically completes the Label field based on what you have entered for Name.

  3. Optional: From the GPG Key list, select the GPG key for the product.

  4. Optional: From the Sync Plan list, select a synchronization plan for the product.

  5. In the Description field, enter a description of the product, and then click Save.

To create a repository for your custom product, complete the following procedure:

  1. In the Products window, select the name of a product that you want to create a repository for.

  2. Click the Repositories tab, and then click New Repository.

  3. In the Name field, enter a name for the repository. orcharhino automatically completes the Label field based on the name.

  4. From the Type list, select file.

  5. In the Upstream URL field, enter the URL of the upstream repository to use as a source.

  6. Select the Verify SSL check box if you want to verify that the upstream repository’s SSL certificates are signed by a trusted CA.

  7. In the Upstream Username field, enter the user name for the upstream repository if required for authentication. Clear this field if the repository does not require authentication.

  8. In the Upstream Password field, enter the corresponding password for the upstream repository. Clear this field if the repository does not require authentication.

  9. Optional: In the Upstream Authentication Token field, provide the token of the upstream repository user for authentication. Leave this field empty if the repository does not require authentication.

  10. Optional: Check the Mirror on Sync checkbox to have this repository mirror the source repository during synchronization. It defaults to true (checked).

  11. Optional: In the HTTP Proxy Policy field, select the desired HTTP proxy. The default value is Global Default.

  12. Optional: Check the Publish via HTTP checkbox to have this repository published using HTTP during synchronization. It defaults to true (checked).

  13. Optional: In the GPG Key field, select the GPG key for the repository.

  14. Optional: In the SSL CA Cert field, select the SSL CA Certificate for the repository.

  15. Optional: In the SSL Client cert field, select the SSL Client Certificate for the repository.

  16. Optional: In the SSL Client Key field, select the SSL Client Key for the repository.

  17. Click Save.

CLI procedure
  1. Create a custom product:

    # hammer product create \
    --description "My_Files" \
    --name "My_File_Product" \
    --organization "My_Organization" \
    --sync-plan "My_Sync_Plan"
    Table 7. Optional Parameters for the hammer product create Command

      --gpg-key gpg_key_name          Key name to search by
      --gpg-key-id gpg_key_id         GPG key numeric identifier
      --sync-plan sync_plan_name      Sync plan name to search by
      --sync-plan-id sync_plan_id     Sync plan numeric identifier

  2. Create a file type repository:

    # hammer repository create \
    --content-type "file" \
    --name "My_Files" \
    --organization "My_Organization" \
    --product "My_File_Product"
    Table 8. Optional Parameters for the hammer repository create Command

      --checksum-type sha_version                Repository checksum; currently 'sha1' and 'sha256' are supported
      --download-policy policy_name              Download policy for yum repositories (either 'immediate' or 'on_demand')
      --gpg-key gpg_key_name                     Key name to search by
      --gpg-key-id gpg_key_id                    GPG key numeric identifier
      --mirror-on-sync boolean                   Whether to mirror this repository from the source and remove stale RPMs when synced; set to true or false, yes or no, 1 or 0
      --publish-via-http boolean                 Whether to also publish this repository using HTTP; set to true or false, yes or no, 1 or 0
      --upstream-username repository_username    Upstream repository user, if required for authentication
      --upstream-password repository_password    Password for the upstream repository user
      --url source_repo_url                      URL of the source repository
      --verify-ssl-on-sync boolean               Whether Katello verifies that the upstream URL's SSL certificates are signed by a trusted CA; set to true or false, yes or no, 1 or 0

Creating a Custom File Type Repository in a Local Directory

You can use the pulp-manifest command to create a custom file type repository from a directory of files on the base system where orcharhino is installed. You can then synchronize the files into orcharhino Server. When you add files to a file type repository, you can work with the files as with any other repository.

Use this procedure to configure a repository in a directory on the base system where orcharhino is installed. To create a file type repository in a directory on a remote server, see Creating a Remote File Type Repository.

Procedure

To create a file type repository in a local directory, complete the following procedure:

  1. Ensure that the Red Hat Enterprise Linux Server and orcharhino client repositories are enabled.

  2. Install the Pulp Manifest package:

    # yum install python3-pulp_manifest
  3. Create a directory that you want to use as the file type repository:

    # mkdir my_file_repo
  4. Add files to the directory or create a test file:

    # touch my_file_repo/test.txt
  5. Enter the Pulp Manifest command to create the manifest:

    # pulp-manifest my_file_repo
  6. Verify the manifest was created:

    # ls my_file_repo
    PULP_MANIFEST test.txt
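
The PULP_MANIFEST file written by pulp-manifest is a plain comma-separated listing with one line per file: relative path, SHA-256 checksum, and size in bytes. As a rough sketch (the exact field order is an assumption about Pulp's manifest format; use pulp-manifest for real repositories), you can approximate it with standard shell tools:

```shell
# Sketch only: approximate a Pulp-style manifest (path,sha256,size).
# The field order is an assumption; pulp-manifest is authoritative.
mkdir -p my_file_repo
printf 'hello\n' > my_file_repo/test.txt
(
  cd my_file_repo
  for f in *; do
    # Skip the manifest itself so it is not listed as content
    [ "$f" = "PULP_MANIFEST" ] && continue
    printf '%s,%s,%s\n' "$f" "$(sha256sum "$f" | cut -d' ' -f1)" "$(stat -c%s "$f")"
  done
) > my_file_repo/PULP_MANIFEST
cat my_file_repo/PULP_MANIFEST
```

Each entry lets the synchronization process verify the integrity and size of a file after transfer.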

Importing Files from a File Type Repository

To import files from a file type repository in a local directory, complete the following procedure:

  1. Ensure a custom product exists in orcharhino Server.

  2. In the orcharhino management UI, navigate to Content > Products.

  3. Select the name of a product.

  4. Click the Repositories tab and select New Repository.

  5. In the Name field, enter a name for the repository. orcharhino automatically completes the Label field based on what you enter for Name.

  6. From the Type list, select file.

  7. In the Upstream URL field, enter the local directory with the repository to use as the source, in the form file:///my_file_repo.

  8. Select the Verify SSL check box if you want to verify the SSL certificates of the repository, or clear the check box if not.

  9. Optional: In the Upstream Username field, enter the upstream user name that you require.

  10. Optional: In the Upstream Password field, enter the corresponding password for your upstream user name.

  11. Optional: In the Upstream Authentication Token field, provide the token of the upstream repository user for authentication. Leave this field empty if the repository does not require authentication.

  12. Optional: Check the Mirror on Sync checkbox to have this repository mirror the source repository during synchronization. It defaults to true (checked).

  13. Optional: In the HTTP Proxy Policy field, select the desired HTTP proxy. The default value is Global Default.

  14. Optional: Check the Publish via HTTP checkbox to have this repository published using HTTP during synchronization. It defaults to true (checked).

  15. Optional: In the GPG Key field, select the GPG key for the repository.

  16. Optional: In the SSL CA Cert field, select the SSL CA Certificate for the repository.

  17. Optional: In the SSL Client cert field, select the SSL Client Certificate for the repository.

  18. Optional: In the SSL Client Key field, select the SSL Client Key for the repository.

  19. Select Save to save this repository entry.

Updating a File Type Repository

To update the file type repository, complete the following steps:

  1. In the orcharhino management UI, navigate to Content > Products.

  2. Select the name of a product.

  3. Select the name of the repository you want to update.

  4. From the Select Action menu, select Sync Now.

  5. Visit the URL where the repository is published to see the files.

Creating a Remote File Type Repository

You can create a custom file type repository from a directory of files that is external to orcharhino Server using the pulp-manifest command. You can then synchronize the files into orcharhino Server over HTTP or HTTPS. When you add files to a file type repository, you can work with the files as with any other repository.

Use this procedure to configure a repository in a directory on a remote server. To create a file type repository in a directory on the base system where orcharhino Server is installed, see Creating a Custom File Type Repository in a Local Directory.

Prerequisites

Before you create a remote file type repository, ensure the following conditions exist:

  • You have a Red Hat Enterprise Linux 7 server registered to your orcharhino or the Red Hat CDN.

  • You have installed an HTTP server.

Procedure

To create a file type repository in a remote directory, complete the following procedure:

  1. On your remote server, ensure that the Red Hat Enterprise Linux Server and orcharhino client repositories are enabled.

  2. Install the Pulp Manifest package:

    # yum install python3-pulp_manifest
  3. Create a directory that you want to use as the file type repository in the HTTP server’s public folder:

    # mkdir /var/www/html/pub/my_file_repo
  4. Add files to the directory or create a test file:

    # touch /var/www/html/pub/my_file_repo/test.txt
  5. Enter the Pulp Manifest command to create the manifest:

    # pulp-manifest /var/www/html/pub/my_file_repo
  6. Verify the manifest was created:

    # ls /var/www/html/pub/my_file_repo
    PULP_MANIFEST test.txt

Importing Files from a Remote File Type Repository

To import files from a remote file type repository, complete the following procedure:

  1. Ensure a custom product exists in orcharhino Server, or create a custom product. For more information, see Creating a Custom File Type Repository.

  2. In the orcharhino management UI, navigate to Content > Products.

  3. Select the name of a product.

  4. Click the Repositories tab and select New Repository.

  5. In the Name field, enter a name for the repository. orcharhino automatically completes the Label field based on what you enter for Name.

  6. From the Type list, select file.

  7. In the Upstream URL field, enter the URL of the upstream repository to use as a source.

  8. Select the Verify SSL check box if you want to verify that the upstream repository’s SSL certificates are signed by a trusted CA.

  9. In the Upstream Username field, enter the user name for the upstream repository if required for authentication. Clear this field if the repository does not require authentication.

  10. In the Upstream Password field, enter the corresponding password for the upstream repository. Clear this field if the repository does not require authentication.

  11. Optional: In the Upstream Authentication Token field, provide the token of the upstream repository user for authentication. Leave this field empty if the repository does not require authentication.

  12. Optional: Check the Mirror on Sync checkbox to have this repository mirror the source repository during synchronization. It defaults to true (checked).

  13. Optional: In the HTTP Proxy Policy field, select the desired HTTP proxy. The default value is Global Default.

  14. Optional: Check the Publish via HTTP checkbox to have this repository published using HTTP during synchronization. It defaults to true (checked).

  15. Optional: In the GPG Key field, select the GPG key for the repository.

  16. Optional: In the SSL CA Cert field, select the SSL CA Certificate for the repository.

  17. Optional: In the SSL Client cert field, select the SSL Client Certificate for the repository.

  18. Optional: In the SSL Client Key field, select the SSL Client Key for the repository.

  19. Click Save.

  20. To update the file type repository, navigate to Content > Products. Select the name of a product that contains the repository that you want to update.

  21. In the product’s window, select the name of the repository you want to update.

  22. From the Select Action menu, select Sync Now.

Visit the URL where the repository is published to view the files.

Uploading Files to a Custom File Type Repository

Use this procedure to upload files to a custom file type repository.

Procedure
  1. In the orcharhino management UI, navigate to Content > Products.

  2. Select a custom product by name.

  3. Select a file type repository by name.

  4. Click Browse to search and select the file you want to upload.

  5. Click Upload to upload the selected file to orcharhino Server.

  6. Visit the URL where the repository is published to see the file.

CLI procedure
# hammer repository upload-content \
--id repo_ID \
--organization "My_Organization" \
--path example_file

The --path option can indicate a file, a directory of files, or a glob expression of files. Glob expressions must be enclosed in single or double quotes so that they are not expanded by the shell.
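
The quoting matters because an unquoted glob is expanded by your shell before hammer ever sees it, whereas a quoted glob reaches hammer as a literal pattern. A small illustration with echo standing in for the hammer command (the paths are made up for this demo):

```shell
# Demo of shell glob expansion vs. quoting; echo stands in for hammer.
mkdir -p /tmp/upload_demo
touch /tmp/upload_demo/a.txt /tmp/upload_demo/b.txt

# Unquoted: the shell expands the pattern into a file list first.
echo /tmp/upload_demo/*.txt
# -> /tmp/upload_demo/a.txt /tmp/upload_demo/b.txt

# Quoted: the literal pattern is passed through unchanged, which is
# what hammer repository upload-content --path expects for globs.
echo "/tmp/upload_demo/*.txt"
# -> /tmp/upload_demo/*.txt
```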

Downloading Files to a Host from a Custom File Type Repository

You can download files to a client over HTTPS using curl -O, and optionally over HTTP if the Publish via HTTP repository option is selected.

Procedure
  1. In the orcharhino management UI, navigate to Content > Products.

  2. Select a custom product by name.

  3. Select a file type repository by name.

  4. Check whether Publish via HTTP is enabled. If it is not, clients require the appropriate certificates to download the files over HTTPS.

  5. Copy the URL where the repository is published.

CLI procedure
  1. List the file type repositories.

    # hammer repository list --content-type file
    ---|------------|-------------------|--------------|----
    ID | NAME       | PRODUCT           | CONTENT TYPE | URL
    ---|------------|-------------------|--------------|----
    7  | My_Files   | My_File_Product   | file         |
    ---|------------|-------------------|--------------|----
  2. Display the repository information.

    # hammer repository info \
    --name "My_Files" \
    --organization-id My_Organization_ID \
    --product "My_File_Product"

    If HTTP is enabled, the output is similar to this:

    Publish Via HTTP:   yes
    Published At:       http://orcharhino.example.com/pulp/isos/uuid/

    If HTTP is not enabled, the output is similar to this:

    Publish Via HTTP:   no
    Published At:       https://orcharhino.example.com/pulp/isos/uuid/
  3. On the client, enter a command in the appropriate format for HTTP or HTTPS:

    For HTTP:

    # curl -O http://orcharhino.example.com/pulp/isos/uuid/my_file

    For HTTPS:

    # curl -O --cert ./Default\ Organization-key-cert.pem --cacert katello-server-ca.crt https://orcharhino.example.com/pulp/isos/uuid/my_file

Appendix A: Using an NFS Share for Content Storage

Your environment requires adequate disk space for content storage. In some situations, it is useful to use an NFS share to store this content. This appendix shows how to mount the NFS share on the content management component of your orcharhino Server.

Use high-bandwidth, low-latency storage for the /var/lib/pulp file system. orcharhino performs many I/O-intensive operations; high-latency, low-bandwidth storage might cause performance degradation.
Procedure
  1. Create the NFS share. This example uses a share at nfs.example.com:/orcharhino/pulp. Ensure this share provides the appropriate permissions to orcharhino Server and its apache user.

  2. Stop the foreman-maintain services on the orcharhino host:

    # foreman-maintain service stop
  3. Ensure orcharhino Server has the nfs-utils package installed:

    # yum install nfs-utils
  4. You need to copy the existing contents of /var/lib/pulp to the NFS share. First, mount the NFS share to a temporary location:

    # mkdir /mnt/temp
    # mount -o rw nfs.example.com:/orcharhino/pulp /mnt/temp

    Copy the existing contents of /var/lib/pulp to the temporary location:

    # cp -r /var/lib/pulp/* /mnt/temp/.
  5. Set the permissions for all files on the share to use the pulp user.

  6. Unmount the temporary storage location:

    # umount /mnt/temp
  7. Remove the existing contents of /var/lib/pulp:

    # rm -rf /var/lib/pulp/*
  8. Edit the /etc/fstab file and add the following line:

    nfs.example.com:/orcharhino/pulp    /var/lib/pulp   nfs    rw,hard,intr,context="system_u:object_r:pulpcore_var_lib_t:s0"

    This makes the mount persistent across system reboots. Ensure that you include the SELinux context.

  9. Enable the mount:

    # mount -a
  10. Confirm that the NFS share mounts to /var/lib/pulp:

    # df
    Filesystem                         1K-blocks     Used Available Use% Mounted on
    ...
    nfs.example.com:/orcharhino/pulp 309506048 58632800 235128224  20% /var/lib/pulp
    ...

    Also confirm that the existing content exists at the mount point /var/lib/pulp:

    # ls /var/lib/pulp
  11. Start the foreman-maintain services on the orcharhino host:

    # foreman-maintain service start

orcharhino Server now uses the NFS share to store content. Run a content synchronization to ensure the NFS share works as expected. For more information, see Synchronizing Repositories.

Appendix B: Importing Content ISOs into a Connected orcharhino

Even if orcharhino Server can connect directly to the Red Hat Customer Portal, you can perform the initial synchronization from locally mounted content ISOs. When the initial synchronization is completed from the content ISOs, you can switch back to downloading content through the network connection. To accomplish this, download the Content ISOs for orcharhino from the Red Hat Customer Portal and import them into orcharhino Server. For locations with bandwidth limitations, using an On Demand download policy might be more efficient than downloading and importing Content ISOs.

Note that Content ISOs previously hosted at redhat.com for import into orcharhino Server have been deprecated and will be removed in the next orcharhino version.

You can only import content ISO images for Red Hat Enterprise Linux 8 because the repodata checksum from the CDN does not match the repodata checksum from the content ISO images for Red Hat Enterprise Linux 7 and earlier.

Note that if you synchronize a Red Hat Enterprise Linux ISO, all minor versions of Red Hat Enterprise Linux also synchronize. You require adequate storage on your orcharhino to account for this.

This section is not required if your orcharhino Server is connected to the Internet.

This example procedure performs the first synchronization of the Red Hat Enterprise Linux 8 repository from content ISO images.

Procedure
  1. Log in to the Red Hat Customer Portal at access.redhat.com.

  2. In the upper left of the window, click Downloads and select Red Hat Satellite.

  3. Click the Content ISOs tab. This page lists all the products that are available in your subscription.

  4. Click the link for the product name, such as RHEL 8 (x86_64), to reveal links to download ISO images.

  5. Download the ISO images.

  6. On orcharhino, create a directory to act as a temporary store for all of the required orcharhino content ISO images. This example uses /tmp/isos/rhel8:

    # mkdir -p /tmp/isos/rhel8
  7. On your workstation, copy the ISO files to orcharhino Server:

    $ scp ~/Downloads/iso_file root@orcharhino.example.com:/tmp/isos/rhel8
  8. On orcharhino Server, create a directory to serve as a mount point for the ISOs:

    # mkdir /mnt/iso
  9. Create a working directory to store ISO images:

    # mkdir /mnt/rhel8
  10. Temporarily mount the first ISO image:

    # mount -o loop /tmp/isos/iso_file /mnt/iso
  11. Recursively copy the contents of the first ISO to the working directory:

    # cp -ruv /mnt/iso/* /mnt/rhel8/
  12. Unmount the ISO image:

    # umount /mnt/iso
  13. Repeat the mount, copy, and unmount steps for each ISO image until you have copied all the data from the content ISO images into /mnt/rhel8.

  14. If required, remove the empty directory used as the mount point:

    # rmdir /mnt/iso
  15. If required, remove the temporary working directory and its contents to regain space:

    # rm -rf /tmp/isos/
  16. Set the owner and the SELinux context for the directory and its contents to be the same as /var/lib/pulp:

    # chcon -R --reference /var/lib/pulp  /mnt/rhel8/
    # chown -R apache:apache /mnt/rhel8/
  17. Create or edit the /etc/pulp/content/sources/conf.d/local.conf file. Append the following text into the file:

    [rhel-8-server]
    enabled: 1
    priority: 0
    expires: 3d
    name: Red Hat Enterprise Linux 8
    type: yum
    base_url: file:///mnt/rhel8/content/dist/rhel/server/8/x86_64/os/

    The base_url path might differ in your content ISO. The directory specified in base_url must contain the repodata directory; otherwise, the synchronization fails. To synchronize multiple repositories, create a separate entry for each of them in the configuration file /etc/pulp/content/sources/conf.d/local.conf.

  18. In the orcharhino management UI, navigate to Content > Red Hat Repositories and enable the following repositories:

    • Red Hat Enterprise Linux 8 for x86_64 - BaseOS RPMs 8

    • Red Hat Enterprise Linux 8 for x86_64 - AppStream RPMs 8

  19. Under Content > Sync Status select the repositories to be synchronized and click Synchronize Now.
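
The local.conf file from step 17 can hold one entry per repository to synchronize. As a hypothetical example with two entries (the appstream base_url below is an assumption for illustration; point each base_url at a directory in your ISO tree that contains repodata):

```
[rhel-8-server-baseos]
enabled: 1
priority: 0
expires: 3d
name: Red Hat Enterprise Linux 8 BaseOS
type: yum
base_url: file:///mnt/rhel8/content/dist/rhel/server/8/x86_64/os/

[rhel-8-server-appstream]
enabled: 1
priority: 0
expires: 3d
name: Red Hat Enterprise Linux 8 AppStream
type: yum
base_url: file:///mnt/rhel8/content/dist/rhel/server/8/x86_64/appstream/os/
```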

Note that the orcharhino management UI does not indicate which source is being used. In case of problems with a local source, orcharhino pulls content through the network. To monitor the process, enter the following command on orcharhino:

# journalctl -f -l SYSLOG_IDENTIFIER=pulp | grep -v worker[\-,\.]heartbeat

The above command follows the log output as it is written. First, orcharhino Server connects to the Red Hat Customer Portal to download and process repository metadata. Then, the local repository is loaded. In case of any errors, cancel the synchronization in the orcharhino management UI and verify your configuration.

After successful synchronization you can detach the local source by removing its entry from /etc/pulp/content/sources/conf.d/local.conf.

The text and illustrations on this page are licensed by ATIX AG under a Creative Commons Attribution–Share Alike 3.0 Unported ("CC-BY-SA") license. This page also contains text from the official Foreman documentation which uses the same license ("CC-BY-SA").