Configuring the Discovery service

orcharhino can detect hosts on a network that are not in your orcharhino inventory. These hosts boot the discovery image that performs hardware detection and relays this information back to orcharhino Server. This method creates a list of ready-to-provision hosts in orcharhino Server without needing to enter the MAC address of each host.

When you boot a blank bare-metal host, the boot menu has two options: local and discovery. If you select discovery, the host boots the Discovery image. After a few minutes, the Discovery image completes booting and displays a status screen.

The Discovery service is enabled by default on orcharhino Server. However, the default setting of the global templates is to boot from the local hard drive. To change the default setting, in the orcharhino management UI, navigate to Administer > Settings, and click the Provisioning tab. Locate the Default PXE global template entry row, and in the Value column, enter discovery. Then build the PXE default configuration. Navigate to Hosts > Templates > Provisioning Templates and click Build PXE Default. orcharhino distributes the PXE configuration to any TFTP Proxies.
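If you prefer the CLI, you can make the same change with hammer. The following is a sketch that assumes the internal setting name default_pxe_item_global and a hammer version that provides the template build-pxe-default subcommand; verify both against your installation:

$ hammer settings set --name default_pxe_item_global --value discovery
$ hammer template build-pxe-default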

PXE mode

Ensure that the DHCP ranges of all subnets that you plan to use for discovery do not intersect with the DHCP lease pool configured for the managed DHCP service. The DHCP range is set in the orcharhino management UI, while the lease pool range is set using the orcharhino-installer command. For example, in a 10.1.0.0/16 network, the range 10.1.0.0 to 10.1.127.255 could be allocated for leases and the range 10.1.128.0 to 10.1.255.254 could be allocated for reservations.
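For example, to restrict the managed DHCP lease pool to the lower half of that network, you could pass the range to the installer. This is a sketch; combine it with the other DHCP options that your environment requires:

$ orcharhino-installer \
--foreman-proxy-dhcp true \
--foreman-proxy-dhcp-range "10.1.0.0 10.1.127.255"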

The Foreman Discovery image provided with orcharhino is based on CentOS Stream 8.

Procedure
  1. Install Discovery on orcharhino Server:

    $ orcharhino-installer \
    --enable-foreman-plugin-discovery \
    --enable-foreman-proxy-plugin-discovery
  2. Install orcharhino-fdi:

    $ dnf install orcharhino-fdi

    The orcharhino-fdi package installs the Discovery ISO to the /usr/share/foreman-discovery-image/ directory. You can build a PXE boot image from this ISO by using the livecd-iso-to-pxeboot tool. The tool saves this PXE boot image in the /var/lib/tftpboot/boot directory. For more information, see Building a Discovery Image.
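    If you need to regenerate the PXE boot files manually, the tool typically takes the ISO as its only argument and writes a tftpboot directory to the current working directory. This is a sketch; the exact ISO file name depends on the installed version:

    $ livecd-iso-to-pxeboot /usr/share/foreman-discovery-image/foreman-discovery-image-*.iso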

When the installation completes, you can view the new menu option by navigating to Hosts > Discovered Hosts.

Installing the Host Discovery Plugin

During the installation of orcharhino with the orcharhino installer, select the host discovery plugin at the plugin installation step. This automatically installs and configures the host discovery plugin, and the following installation steps are then not necessary.

Procedure
  1. Run the following command to add the host discovery plugin to your orcharhino:

    $ orcharhino-installer --enable-foreman-plugin-discovery
  2. On your orcharhino Proxy, run the following command:

    $ orcharhino-installer --enable-foreman-proxy-plugin-discovery
  3. If you install the host discovery plugin manually, install the Foreman Discovery Image by using the following command:

    $ dnf install orcharhino-fdi

Provisioning template PXELinux Discovery snippet

For BIOS provisioning, the PXELinux global default template in the Hosts > Templates > Provisioning Templates window contains the snippet pxelinux_discovery. The snippet has the following lines:

LABEL discovery
  MENU LABEL Foreman Discovery Image
  KERNEL boot/fdi-image/vmlinuz0
  APPEND initrd=boot/fdi-image/initrd0.img rootflags=loop root=live:/fdi.iso rootfstype=auto ro rd.live.image acpi=force rd.luks=0 rd.md=0 rd.dm=0 rd.lvm=0 rd.bootif=0 rd.neednet=0 nomodeset proxy.url=<%= foreman_server_url %> proxy.type=foreman
  IPAPPEND 2

The KERNEL and APPEND options boot the Discovery image and ramdisk. The APPEND option contains a proxy.url parameter, with the foreman_server_url macro as its argument. This macro resolves to the full URL of orcharhino Server.

For UEFI provisioning, the PXEgrub2 global default template in the Hosts > Templates > Provisioning Templates window contains the snippet pxegrub2_discovery:

menuentry 'Foreman Discovery Image' --id discovery {
  linuxefi boot/fdi-image/vmlinuz0 rootflags=loop root=live:/fdi.iso rootfstype=auto ro rd.live.image acpi=force rd.luks=0 rd.md=0 rd.dm=0 rd.lvm=0 rd.bootif=0 rd.neednet=0 nomodeset proxy.url=<%= foreman_server_url %> proxy.type=foreman BOOTIF=01-$mac
  initrdefi boot/fdi-image/initrd0.img
}

To use orcharhino Proxy to proxy the discovery steps, edit /var/lib/tftpboot/pxelinux.cfg/default or /var/lib/tftpboot/grub2/grub.cfg and change the URL to the FQDN of the orcharhino Proxy that you want to use.
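For example, a modified entry might contain the following, where the FQDN is a placeholder for your own orcharhino Proxy:

proxy.url=https://orcharhino-proxy.example.com:9090 proxy.type=proxy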

The global template is available on orcharhino Server and all orcharhino Proxies that have the TFTP feature enabled.

Automatic contexts for discovered hosts

orcharhino Server assigns an organization and location to discovered hosts automatically according to the following sequence of rules:

  1. If a discovered host uses a subnet defined in orcharhino, the host uses the first organization and location associated with the subnet.

  2. If the default location and organization are configured in global settings, discovered hosts are placed in this organization and location. To configure these settings, navigate to Administer > Settings > Discovery and select values for the Discovery organization and Discovery location settings. Ensure that the subnet of the discovered host also belongs to the selected organization and location, otherwise orcharhino refuses to set it for security reasons.

  3. If none of the previous conditions is met, orcharhino assigns the first organization and location ordered by name.

You can change the organization or location manually by using the bulk actions on the Discovered Hosts page. Select the discovered hosts to modify and, from the Select Action menu, select Assign Organization or Assign Location.

Discovery template and snippet settings

To use the Discovery service, you must configure provisioning settings to set Discovery as the default service and set the templates that you want to use.

Setting Discovery service as default

For both BIOS and UEFI, to set the Discovery service as the default service that boots for hosts that are not present in your current orcharhino inventory, complete the following steps:

  1. In the orcharhino management UI, navigate to Administer > Settings and click the Provisioning tab.

  2. For the Default PXE global template entry, in the Value column, enter discovery.

To use a template, in the orcharhino management UI, navigate to Administer > Settings, click the Provisioning tab, and set the templates that you want to use.

Customizing templates and snippets

Templates and snippets are locked to prevent changes. If you want to edit a template or snippet, clone it, save it with a unique name, and then edit the clone.

When you change the template or a snippet it includes, the changes must be propagated to orcharhino Server’s default PXE template.

In the orcharhino management UI, navigate to Hosts > Templates > Provisioning Templates and click Build PXE Default.

This refreshes the default PXE template on orcharhino Server.

Additional settings
The proxy.url argument

During the orcharhino installation process, if you use the default option --enable-foreman-plugin-discovery, you can edit the proxy.url argument in the template to set the URL of the orcharhino Proxy that provides the Discovery service. You can change the proxy.url argument to the IP address or FQDN of another provisioning orcharhino Proxy that you want to use, but ensure that you append the port number, for example, 9090. If you use an alternative port number with the --foreman-proxy-ssl-port option during orcharhino installation, you must add that port number. You can also edit the proxy.url argument to use an orcharhino Server IP address or FQDN so that the discovered hosts communicate directly with orcharhino Server.

The proxy.type argument

If you use an orcharhino Proxy Server FQDN for the proxy.url argument, ensure that you set the proxy.type argument to proxy. If you use an orcharhino Server FQDN, update the proxy.type argument to foreman.

proxy.url=https://orcharhino-proxy.network2.example.com:9090 proxy.type=proxy
Rendering the orcharhino Proxy’s host name

orcharhino deploys the same template to all TFTP orcharhino Proxies, and there is no variable or macro available to render the orcharhino Proxy’s host name. The hard-coded proxy.url does not work with two or more TFTP orcharhino Proxies. As a workaround, edit the configuration file in the TFTP directory by using SSH every time you click Build PXE Default, or use a common DNS alias for the appropriate subnets.

Tagged VLAN provisioning

If you want to use tagged VLAN provisioning, and you want the discovery service to send a discovery request, add the following parameter to the kernel command line in the discovery template:

fdi.vlan.primary=example_VLAN_ID
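For example, in the PXELinux discovery snippet, the parameter is appended to the existing kernel options on the APPEND line. The VLAN ID 20 below is a placeholder:

APPEND initrd=boot/fdi-image/initrd0.img rootflags=loop root=live:/fdi.iso rootfstype=auto ro rd.live.image acpi=force rd.luks=0 rd.md=0 rd.dm=0 rd.lvm=0 rd.bootif=0 rd.neednet=0 nomodeset proxy.url=<%= foreman_server_url %> proxy.type=foreman fdi.vlan.primary=20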

Creating hosts from discovered hosts

Provisioning discovered hosts follows a provisioning process that is similar to PXE provisioning. The main difference is that instead of manually entering the host’s MAC address, you can select the host to provision from the list of discovered hosts.

To use the CLI instead of the orcharhino management UI, see the CLI procedure.

Prerequisites
  • Configure a domain and subnet on orcharhino. For more information about networking requirements, see Configuring Networking.

  • Configure the discovery service on orcharhino. For more information, see Installing the Discovery Service.

  • A bare-metal host or a blank VM.

  • Provide the installation medium for the operating systems that you want to use to provision hosts. You can use synchronized content repositories for Red Hat Enterprise Linux. For more information, see Syncing Repositories in Managing Content.

  • Provide an activation key for host registration. For more information, see Creating An Activation Key in Managing Content.

For information about the security tokens, see Configuring the Security Token Validity Duration.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > Discovered Hosts. Select the host you want to use and click Provision to the right of the list.

  2. Select from one of the two following options:

    • To provision a host from a host group, select a host group, organization, and location, and then click Create Host.

    • To provision a host with further customization, click Customize Host and enter the additional details you want to specify for the new host.

  3. Verify that the fields are populated with values. Note in particular:

    • The Name from the Host tab becomes the DNS name.

    • orcharhino Server automatically assigns an IP address for the new host.

    • orcharhino Server automatically populates the MAC address from the Discovery results.

  4. Ensure that orcharhino Server automatically selects the Managed, Primary, and Provision options for the first interface on the host. If not, select them.

  5. Click the Operating System tab, and verify that all fields contain values. Confirm each aspect of the operating system.

  6. Click Resolve in Provisioning template to check that the new host can identify the correct provisioning templates to use. The host must resolve to the following provisioning templates:

    • kexec Template: Discovery Red Hat kexec

    • provision Template: orcharhino Kickstart Default

      For more information about associating provisioning templates, see provisioning templates.

  7. Click Submit to save the host details.

When the host provisioning is complete, the discovered host becomes a content host. To view the host, navigate to Hosts > Content Hosts.

CLI procedure
  1. Identify the discovered host to use for provisioning:

    $ hammer discovery list
  2. Select a host and provision it using a host group. Set a new host name with the --new-name option:

    $ hammer discovery provision \
    --build true \
    --enabled true \
    --hostgroup "My_Host_Group" \
    --location "My_Location" \
    --managed true \
    --name "My_Host_Name" \
    --new-name "My_New_Host_Name" \
    --organization "My_Organization"

    This removes the host from the discovered host listing and creates a host entry with the provisioning settings. The Discovery image automatically resets the host so that it can boot to PXE. The host detects the DHCP service on orcharhino Server’s integrated orcharhino Proxy and starts installing the operating system. The rest of the process is identical to the normal PXE workflow described in Creating Hosts with Unattended Provisioning.

Creating Discovery rules

As a method of automating the provisioning process for discovered hosts, orcharhino provides a feature to create discovery rules. These rules define how discovered hosts automatically provision themselves, based on the assigned host group. For example, you can automatically provision hosts with a high CPU count as hypervisors. Likewise, you can provision hosts with large hard disks as storage servers.

To use the CLI instead of the orcharhino management UI, see the CLI procedure.

NIC considerations

Auto provisioning does not currently allow configuring NICs; all systems are provisioned with the NIC configuration that was detected during discovery. However, you can set the NIC in the OS installer recipe scriptlet, by using a script, or by using configuration management at a later stage.
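For example, a kickstart-based OS installer recipe could set a static configuration for a specific interface. This is a minimal sketch; the device name and addresses are placeholders:

network --device=eth0 --bootproto=static --ip=192.168.140.25 --netmask=255.255.255.0 --gateway=192.168.140.1 --nameserver=192.168.140.2 --activate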

Procedure
  1. In the orcharhino management UI, navigate to Configure > Discovery rules, and select Create Rule.

  2. In the Name field, enter a name for the rule.

  3. In the Search field, enter the rules to determine whether to provision a host. This field provides suggestions for values you enter and allows operators for multiple rules. For example: cpu_count > 8.

  4. From the Host Group list, select the host group to use as a template for this host.

  5. In the Hostname field, enter the pattern to determine host names for multiple hosts. This uses the same ERB syntax that provisioning templates use. The host name can use the @host attribute for host-specific values and the rand macro for a random number or the sequence_hostgroup_param_next macro for incrementing the value. For more information about provisioning templates, see provisioning templates and the API documentation.

    • myhost-<%= sequence_hostgroup_param_next("EL7/MyHostgroup", 10, "discovery_host") %>

    • myhost-<%= rand(99999) %>

    • abc-<%= @host.facts['bios_vendor'] %>-<%= rand(99999) %>

    • xyz-<%= @host.hostgroup.name %>

    • srv-<%= @host.discovery_rule.name %>

    • server-<%= @host.ip.gsub('.','-') + '-' + @host.hostgroup.subnet.name %>

      When creating host name patterns, ensure that the resulting host names are unique, do not start with numbers, and do not contain underscores or dots. A good approach is to use unique information provided by Facter, such as the MAC address, BIOS, or serial ID.

  6. In the Hosts limit field, enter the maximum number of hosts that you can provision with the rule. Enter 0 for unlimited.

  7. In the Priority field, enter a number to set the precedence the rule has over other rules. Rules with lower values have a higher priority.

  8. From the Enabled list, select whether you want to enable the rule.

  9. To set a different provisioning context for the rule, click the Organizations and Locations tabs and select the contexts you want to use.

  10. Click Submit to save your rule.

  11. In the orcharhino management UI, navigate to Hosts > Discovered Hosts and select one of the following two options:

    • From the Discovered hosts list on the right, select Auto-Provision to automatically provision a single host.

    • On the upper right of the window, click Auto-Provision All to automatically provision all hosts.

CLI procedure
  1. Create the rule with the hammer discovery-rule create command:

    $ hammer discovery-rule create \
    --enabled true \
    --hostgroup "My_Host_Group" \
    --hostname "hypervisor-<%= rand(99999) %>" \
    --hosts-limit 5 \
    --name "My_Hypervisor" \
    --priority 5 \
    --search "cpu_count  > 8"
  2. Automatically provision a host with the hammer discovery auto-provision command:

    $ hammer discovery auto-provision --name "macabcdef123456"

Implementing PXE-less Discovery

orcharhino provides a PXE-less Discovery service that operates without the need for PXE-based services (DHCP and TFTP). You accomplish this by using orcharhino Server’s Discovery image. Once a discovered node is scheduled for installation, it uses the kexec command to reload the Linux kernel with the OS installer without rebooting the node.

Known issues

The console might freeze during the process. On some hardware, you might experience graphical hardware problems.

PXE-less mode

If you have not yet installed the Discovery service or image, see Building a Discovery Image.

Attended use
  1. Copy the Discovery ISO to a CD, DVD, or USB stick. For example, to copy to a USB stick at /dev/sdb:

    $ dd bs=4M \
    if=/usr/share/foreman-discovery-image/foreman-discovery-image-3.4.4-5.iso \
    of=/dev/sdb
  2. Insert the Discovery boot media into a bare-metal host, start the host, and boot from the media. The Discovery Image displays an option for either Manual network setup or Discovery with DHCP:

    If you select Manual network setup, the Discovery image requests a set of network options. This includes the primary network interface that connects to orcharhino Server. The Discovery image also asks for network interface configuration options, such as an IPv4 Address, IPv4 Gateway, and an IPv4 DNS server.

  3. After entering these details, select Next.

  4. If you select Discovery with DHCP, the Discovery image requests only the primary network interface that connects to orcharhino Server. It attempts to automatically configure the network interface by using a DHCP server, such as one that an orcharhino Proxy Server provides.

  5. After the primary interface configuration, the Discovery image requests the Server URL, which is the URL of orcharhino Server or orcharhino Proxy Server offering the Discovery service. For example, to use the integrated orcharhino Proxy on orcharhino Server, use the following URL:

    https://orcharhino.example.com:9090
  6. Set the Connection type to Proxy, then select Next.

  7. Optional: The Discovery image also provides a set of fields to input Custom facts for the Facter tool to relay back to orcharhino Server. These are entered in a name-value format. Provide any custom facts you require and select Confirm to continue.

orcharhino reports a successful communication with orcharhino Server’s Discovery service. In the orcharhino management UI, navigate to Hosts > Discovered Hosts and view the newly discovered host.

For more information about provisioning discovered hosts, see Creating Hosts from Discovered Hosts.

Building a Discovery image

The Discovery image is a minimal operating system that is PXE-booted on hosts to acquire initial hardware information and to check in with orcharhino. Discovered hosts keep running the Discovery image until they are rebooted into Anaconda, which then initiates the provisioning process.

Run the following procedure on Red Hat Enterprise Linux 8.

The Discovery image is based on CentOS Stream 8.

Use this procedure to build an orcharhino Discovery image or to rebuild an image if you change configuration files.

Do not use this procedure on your production orcharhino or orcharhino Proxy. Use either a dedicated environment or copy the synchronized repositories and a kickstart file to a separate server.

Prerequisites
  • Ensure that hardware virtualization is available on your host.

  • Install the following packages:

    $ dnf install git-core lorax anaconda pykickstart wget qemu-kvm
  • Clone the foreman-discovery-image repository:

    $ git clone https://github.com/theforeman/foreman-discovery-image.git
    $ cd foreman-discovery-image
Procedure
  1. Replace the contents of the 00-repos-centos8.ks file with the following:

    url --mirrorlist=http://mirrorlist.centos.org/?release=8&arch=$basearch&repo=baseos
    repo --name="AppStream" --mirrorlist=http://mirrorlist.centos.org/?release=8&arch=$basearch&repo=appstream
    repo --name="foreman-el8" --baseurl=http://yum.theforeman.org/releases/6.10/el8/$basearch/
    repo --name="foreman-plugins-el8" --baseurl=http://yum.theforeman.org/plugins/6.10/el8/$basearch/
    module --name=ruby --stream=2.7
    module --name=postgresql --stream=12
    module --name=foreman --stream=el8
  2. Prepare the kickstart file:

    $ ./build-livecd fdi-centos8.ks
  3. Build the ISO image:

    $ sudo ./build-livecd-root custom ./result "nomodeset nokaslr"

    The nomodeset and nokaslr options instruct the Discovery image kernel to disable kernel mode setting and to disable KASLR (kernel address space layout randomization).

  4. Verify that your ./result/fdi-custom-XXXXXXX.iso file is created:

    $ ls -h ./result/*.iso

Use the following command to build a fully automated Discovery image with a static network configuration:

$ sudo ./build-livecd-root custom ./result \
"nomodeset nokaslr \
fdi.pxip=192.168.140.20/24 fdi.pxgw=192.168.140.1 \
fdi.pxdns=192.168.140.2 proxy.url=https://orcharhino.example.com:8443 \
proxy.type=foreman fdi.pxmac=52:54:00:be:8e:8c fdi.pxauto=1"

For more information about configuration options, see Unattended use, customization, and image remastering.

When you create the .iso file, you can boot the .iso file over a network or locally. Complete one of the following procedures.

If you want to boot the .iso file over a network, complete the following steps:

Procedure
  1. Create a directory to store your boot files:

    $ mkdir /var/lib/tftpboot/boot/myimage
  2. Copy the ./tftpboot/initrd0.img and ./tftpboot/vmlinuz0 files to your new directory.

  3. Edit the KERNEL and APPEND entries in the /var/lib/tftpboot/pxelinux.cfg/default file to add the information about your own initial ramdisk and kernel files.
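    For example, assuming the boot/myimage directory from step 1, the entries would reference the copied files while keeping the other kernel options from the existing discovery entry:

    KERNEL boot/myimage/vmlinuz0
    APPEND initrd=boot/myimage/initrd0.img rootflags=loop root=live:/fdi.iso rootfstype=auto ro rd.live.image acpi=force rd.luks=0 rd.md=0 rd.dm=0 rd.lvm=0 rd.bootif=0 rd.neednet=0 nomodeset proxy.url=<%= foreman_server_url %> proxy.type=foreman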

If you want to create a hybrid .iso file for booting locally, complete the following steps:

Procedure
  1. To convert the .iso file to an .iso hybrid file for PXE provisioning, enter the following command:

    $ isohybrid --partok fdi.iso

    If you have grub2 packages installed, you can also use the following command to install a grub2 bootloader:

    $ isohybrid --partok --uefi fdi.iso
  2. To add an md5 checksum to the .iso file so that it can pass installation media validation tests in orcharhino, enter the following command:

    $ implantisomd5 fdi.iso
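    To verify the embedded checksum later, you can use the checkisomd5 tool from the same isomd5sum package:

    $ checkisomd5 fdi.iso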

Unattended use, customization, and image remastering

You can create a customized Discovery ISO to automate the image configuration process after booting. The Discovery image uses a Linux kernel for the operating system, which passes kernel parameters to configure the discovery service. These kernel parameters include the following entries:

fdi.cachefacts

Number of fact uploads without caching. By default, orcharhino does not cache any uploaded facts.

fdi.countdown

Number of seconds to wait until the text-user interface is refreshed after the initial discovery attempt. This value defaults to 45 seconds. Increase this value if the status page reports the IP address as N/A.

fdi.dhcp_timeout

NetworkManager DHCP timeout. The default value is 300 seconds.

fdi.dns_nameserver

Nameserver to use for DNS SRV record.

fdi.dns_ndots

ndots option to use for DNS SRV record.

fdi.dns_search

Search domain to use for DNS SRV record.

fdi.initnet

By default, the image initializes all network interfaces (value all). When this setting is set to bootif, only the network interface it was network-booted from will be initialized.

fdi.ipv4.method

By default, the NetworkManager IPv4 method setting is set to auto. This option overrides it; set it to ignore to disable the IPv4 stack. This option works only in DHCP mode.

fdi.ipv6.method

By default, the NetworkManager IPv6 method setting is set to auto. This option overrides it; set it to ignore to disable the IPv6 stack. This option works only in DHCP mode.

fdi.ipwait

Duration in seconds to wait for an IP address to be available during HTTP proxy SSL certificate start. By default, orcharhino waits for 120 seconds.

fdi.nmwait

The nmcli --wait option for NetworkManager. By default, nmcli waits for 120 seconds.

fdi.proxy_cert_days

Number of days the self-signed HTTPS cert is valid for. By default, the certificate is valid for 999 days.

fdi.pxauto

To set automatic or semi-automatic mode. If set to 0, the image uses semi-automatic mode, which allows you to confirm your choices through a set of dialog options. If set to 1, the image uses automatic mode and proceeds without any confirmation.

fdi.pxfactname1, fdi.pxfactname2 …​ fdi.pxfactnameN

Use to specify custom fact names.

fdi.pxfactvalue1, fdi.pxfactvalue2 …​ fdi.pxfactvalueN

The values for each custom fact. Each value corresponds to a fact name. For example, fdi.pxfactvalue1 sets the value for the fact named with fdi.pxfactname1.

fdi.pxip, fdi.pxgw, fdi.pxdns

Manually configures the IP address (fdi.pxip), the gateway (fdi.pxgw), and the DNS server (fdi.pxdns) for the primary network interface. If you omit these parameters, the image uses DHCP to configure the network interface. You can add multiple DNS entries in a comma-separated [1] list, for example fdi.pxdns=192.168.1.1,192.168.200.1.

fdi.pxmac

The MAC address of the primary interface in the format AA:BB:CC:DD:EE:FF. This is the interface that you intend to use for communicating with orcharhino Proxy Server. In automated mode, the first NIC (using network identifiers in alphabetical order) with a link is used. In semi-automated mode, a screen appears and requests you to select the correct interface.

fdi.rootpw

By default, the root account is locked. Use this option to set a root password. You can enter both clear-text and encrypted passwords.

fdi.ssh

By default, the SSH service is disabled. Set this to 1 or true to enable SSH access.

fdi.uploadsleep

Duration in seconds between facter runs. By default, facter runs every 30 seconds.

fdi.vlan.primary

VLAN tagging ID to set for the primary interface.

fdi.zips

Filenames with extensions to be downloaded and started during boot. For more information, see Extending the Discovery image.

fdi.zipserver

TFTP server to use to download extensions from. For more information, see Extending the Discovery image.

proxy.type

The proxy type. This is usually set to proxy to connect to orcharhino Proxy Server. This parameter also supports a legacy foreman option, where communication goes directly to orcharhino Server instead of an orcharhino Proxy Server.

proxy.url

The URL of orcharhino Proxy Server or orcharhino Server providing the Discovery service.

Using the discovery-remaster tool to remaster an OS image
$ discovery-remaster ~/iso/foreman-discovery-image-3.4.4-5.iso \
"fdi.pxip=192.168.140.20/24 fdi.pxgw=192.168.140.1 \
fdi.pxdns=192.168.140.2 proxy.url=https://orcharhino.example.com:9090 \
proxy.type=proxy fdi.pxfactname1=customhostname fdi.pxfactvalue1=myhost fdi.pxmac=52:54:00:be:8e:8c fdi.pxauto=1"

Copy the resulting media to a CD, DVD, or USB stick. For example, to copy to a USB stick at /dev/sdb:

$ dd bs=4M \
if=/usr/share/foreman-discovery-image/foreman-discovery-image-3.4.4-5.iso \
of=/dev/sdb

Insert the Discovery boot media into a bare-metal host, start the host, and boot from the media.

For more information about provisioning discovered hosts, see Creating Hosts from Discovered Hosts.

Extending the Discovery image

You can extend the orcharhino Discovery image with custom facts, software, or device drivers. You can also provide a compressed archive file containing extra code for the image to use.

Procedure
  1. Create the following directory structure:

    .
    ├── autostart.d
    │   └── 01_zip.sh
    ├── bin
    │   └── ntpdate
    ├── facts
    │   └── test.rb
    └── lib
        ├── libcrypto.so.1.0.0
        └── ruby
            └── test.rb
    • The autostart.d directory contains scripts that are executed in POSIX order by the image when it starts, but before the host is registered to orcharhino. An example script is shown after this procedure.

    • The bin directory is added to the $PATH variable; you can place binary files in this directory and use them in the autostart scripts.

    • The facts directory is added to the FACTERLIB variable so that custom facts can be configured and sent to orcharhino.

    • The lib directory is added to the LD_LIBRARY_PATH variable and lib/ruby is added to the RUBYLIB variable, so that the binary files in the bin directory can be executed correctly.

  2. After creating the directory structure, create a .zip file archive with the following command:

    $ zip -r my_extension.zip .
  3. To inform the Discovery image of the extensions it must use, place your zip files on your TFTP server with the Discovery image, and then update the APPEND line of the PXELinux template with the fdi.zips option where the paths are relative to the TFTP root. For example, if you have two archives at $TFTP/zip1.zip and $TFTP/boot/zip2.zip, use the following syntax:

    fdi.zips=zip1.zip,boot/zip2.zip
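For reference, the following is a minimal sketch of what the 01_zip.sh script from the directory structure in step 1 might contain. The NTP server name is a placeholder:

#!/bin/bash
# Executed by the Discovery image before the host registers with orcharhino.
# Binaries placed in the archive's bin directory, such as ntpdate, are already on $PATH.
ntpdate -u pool.ntp.org || true
echo "custom extension applied" >> /tmp/extension.log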

You can append new directives and options to the existing environment variables (PATH, LD_LIBRARY_PATH, RUBYLIB, and FACTERLIB). The contents of the .zip files are extracted to the /opt/extension directory on the image, so you can also reference that path explicitly in your scripts.

You can create multiple .zip files, but be aware that they are extracted to the same location on the Discovery image. Files extracted from later .zip files overwrite earlier versions if they have the same file name.

Troubleshooting Discovery

If a machine is not listed in the orcharhino management UI in Hosts > Discovered Hosts, inspect the following configuration areas to help isolate the error:

  • In the orcharhino management UI, navigate to Hosts > Templates > Provisioning Templates.

    1. Click Build PXE Default to redeploy the default PXELinux template.

  • Verify the pxelinux.cfg/default configuration file on the TFTP orcharhino Proxy.

  • Ensure adequate network connectivity between hosts, orcharhino Proxy Server, and orcharhino Server.

  • Check the PXELinux template in use and determine the PXE discovery snippet it includes. Snippets are named as follows: pxelinux_discovery, pxegrub_discovery, or pxegrub2_discovery. Verify the proxy.url and proxy.type options in the PXE discovery snippet.

  • Ensure that DNS is working correctly for the discovered nodes, or use an IP address in the proxy.url option in the PXE discovery snippet included in the PXELinux template you are using.

  • Ensure that the DHCP server is delivering IP addresses to the booted image correctly.

  • Ensure the discovered host or virtual machine has at least 1200 MB of memory. Less memory can lead to various random kernel panic errors because the image is extracted in-memory.

To gather important system facts, use the discovery-debug command. It prints system logs, the network configuration, a list of facts, and other information to standard output. The typical use case is to redirect this output to a file and copy it with the scp command for further investigation.
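For example, assuming SSH access is enabled as described below, you could capture and retrieve the output as follows; the host address is a placeholder:

# on the discovered host
$ discovery-debug > /tmp/discovery-debug.log

# on your workstation
$ scp root@192.168.140.20:/tmp/discovery-debug.log .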

The first virtual console on the discovered host is reserved for systemd logs. Particularly useful system logs are tagged as follows:

  • discover-host — initial facts upload

  • foreman-discovery — facts refresh, reboot remote commands

  • nm-prepare — boot script which pre-configures NetworkManager

  • NetworkManager — networking information

Use TTY2 or higher to log in to a discovered host. The root account and SSH access are disabled by default, but you can enable SSH and set the root password using the following kernel command-line options in the Default PXELinux template on the APPEND line:

fdi.ssh=1 fdi.rootpw=My_Password

The text and illustrations on this page are licensed by ATIX AG under a Creative Commons Attribution Share Alike 4.0 International ("CC BY-SA 4.0") license. This page also contains text from the official Foreman documentation which uses the same license ("CC BY-SA 4.0").


1. NetworkManager expects ; as a list separator but currently also accepts ,. For more information, see man nm-settings-keyfile and Shell-like scripting in GRUB.