Configuring the Discovery Service
orcharhino can detect hosts on a network that are not in your orcharhino inventory. These hosts boot the discovery image that performs hardware detection and relays this information back to orcharhino Server. This method creates a list of ready-to-provision hosts in orcharhino Server without needing to enter the MAC address of each host.
When you boot a blank bare-metal host, the boot menu has two options: local and discovery. If you select discovery to boot the Discovery image, the image completes booting after a few minutes and displays a status screen.
The Discovery service is enabled by default on orcharhino Server. However, the default setting of the global templates is to boot from the local hard drive. To change the default setting, in the orcharhino management UI, navigate to Administer > Settings, and click the Provisioning tab. Locate the Default PXE global template entry row, and in the Value column, enter discovery.
Ensure that the DHCP range of each subnet that you plan to use for discovery does not intersect with the DHCP lease pool configured for the managed DHCP service.
The DHCP range is set in the orcharhino management UI, while the lease pool range is set using the orcharhino-installer command.
For example, in a 10.1.0.0/16 network, the range 10.1.0.0 to 10.1.127.255 could be allocated for leases and the range 10.1.128.0 to 10.1.255.254 could be allocated for reservations.
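Before configuring the ranges, you can sanity-check that a planned split is disjoint. The following sketch uses the Python standard library and the example addresses above; it is illustrative only and not part of orcharhino:

```python
import ipaddress

def ranges_overlap(a_start, a_end, b_start, b_end):
    """Return True if two inclusive IP address ranges intersect."""
    a0, a1 = ipaddress.ip_address(a_start), ipaddress.ip_address(a_end)
    b0, b1 = ipaddress.ip_address(b_start), ipaddress.ip_address(b_end)
    return a0 <= b1 and b0 <= a1

# DHCP lease pool vs. the range reserved for discovery/reservations:
print(ranges_overlap("10.1.0.0", "10.1.127.255",
                     "10.1.128.0", "10.1.255.254"))  # False: safe to use
```

A result of False means the lease pool and the reservation range do not collide.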
-
Install Discovery on orcharhino Server:
$ orcharhino-installer \
--enable-foreman-plugin-discovery \
--enable-foreman-proxy-plugin-discovery
The orcharhino-fdi package installs the Discovery ISO to the /usr/share/foreman-discovery-image/ directory. You can build a PXE boot image from this ISO using the livecd-iso-to-pxeboot tool. The tool saves this PXE boot image in the /var/lib/tftpboot/boot directory. For more information, see Building a Discovery Image.
-
Install orcharhino-fdi:
$ dnf install orcharhino-fdi
When the installation completes, you can view the new menu option by navigating to Hosts > Discovered Hosts.
Installing the Host Discovery Plugin
During the installation of orcharhino with the orcharhino installer, select the host discovery plugin at the plugin installation step. This automatically installs and configures the host discovery plugin, and the following installation steps are then unnecessary.
-
Run the following command to add the host discovery plugin to your orcharhino:
$ orcharhino-installer --enable-foreman-plugin-discovery
-
On your orcharhino Proxy, run the following command:
$ orcharhino-installer --enable-foreman-proxy-plugin-discovery
-
If you install the host discovery plugin manually, use the following command to install the Foreman Discovery Image:
$ dnf install orcharhino-fdi
Provisioning Template PXELinux Discovery Snippet
For BIOS provisioning, the PXELinux global default template in the Hosts > Provisioning Templates window contains the snippet pxelinux_discovery. The snippet has the following lines:
LABEL discovery
MENU LABEL Foreman Discovery Image
KERNEL boot/fdi-image/vmlinuz0
APPEND initrd=boot/fdi-image/initrd0.img rootflags=loop root=live:/fdi.iso rootfstype=auto ro rd.live.image acpi=force rd.luks=0 rd.md=0 rd.dm=0 rd.lvm=0 rd.bootif=0 rd.neednet=0 nomodeset proxy.url=<%= foreman_server_url %> proxy.type=foreman
IPAPPEND 2
The KERNEL and APPEND options boot the Discovery image and ramdisk. The APPEND option contains a proxy.url parameter with the foreman_server_url macro as its argument. This macro resolves to the full URL of orcharhino Server.
For UEFI provisioning, the PXEgrub2 global default template in the Hosts > Provisioning Templates window contains the snippet pxegrub2_discovery:
menuentry 'Foreman Discovery Image' --id discovery {
linuxefi boot/fdi-image/vmlinuz0 rootflags=loop root=live:/fdi.iso rootfstype=auto ro rd.live.image acpi=force rd.luks=0 rd.md=0 rd.dm=0 rd.lvm=0 rd.bootif=0 rd.neednet=0 nomodeset proxy.url=<%= foreman_server_url %> proxy.type=foreman BOOTIF=01-$mac
initrdefi boot/fdi-image/initrd0.img
}
To use orcharhino Proxy to proxy the discovery steps, edit /var/lib/tftpboot/pxelinux.cfg/default or /var/lib/tftpboot/grub2/grub.cfg and change the URL to the FQDN of the orcharhino Proxy you want to use.
The global template is available on orcharhino Server and all orcharhino Proxies that have the TFTP feature enabled.
Automatic Contexts for Discovered Hosts
orcharhino Server assigns an organization and location to discovered hosts according to the following sequence of rules:
-
If a discovered host uses a subnet defined in orcharhino, the host uses the first organization and location associated with the subnet.
-
If the discovery_organization or discovery_location fact values are set, the discovered host uses these fact values as its organization and location. To set these fact values, navigate to Administer > Settings > Discovered, and add these facts to the Default organization and Default location fields. Ensure that the discovered host’s subnet also belongs to the organization and location set by the fact; otherwise, orcharhino refuses to set it for security reasons.
-
If none of the previous conditions exist, orcharhino assigns the first organization and location ordered by name.
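The sequence of rules above amounts to a simple fallback chain. The following sketch models it in Python; it is illustrative only, not orcharhino code, and names such as subnet_context and fact_context are hypothetical:

```python
def assign_context(subnet_context, fact_context, all_contexts):
    """Pick an (organization, location) pair for a discovered host.

    subnet_context: context associated with the host's subnet, or None
    fact_context:   context from the discovery_organization /
                    discovery_location facts, or None
    all_contexts:   every (organization, location) pair, used as last resort
    """
    if subnet_context is not None:
        return subnet_context        # rule 1: the subnet's context wins
    if fact_context is not None:
        return fact_context          # rule 2: the discovery facts
    return min(all_contexts)         # rule 3: first pair ordered by name

print(assign_context(None, None, [("Org B", "Loc B"), ("Org A", "Loc A")]))
# ('Org A', 'Loc A')
```

Note that rule 2 carries the extra validation described above: the host's subnet must belong to the fact-selected context, which this simplified sketch omits.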
You can change the organization or location using the bulk actions menu of the Discovered hosts page. Select the discovered hosts to modify and select Assign Organization or Assign Location from the Select Action menu.
Note that the foreman_organization and foreman_location facts are no longer valid values for assigning context for discovered hosts. You can still use these facts to configure the host for Puppet runs. To do this, navigate to the Administer > Settings > Puppet section and add the foreman_organization and foreman_location facts to the Default organization and Default location fields.
Discovery Templates and Snippets Settings
To use the Discovery service, you must configure provisioning settings to set Discovery as the default service and set the templates that you want to use.
For both BIOS and UEFI, to set the Discovery service as the default service that boots for hosts that are not present in your current orcharhino inventory, complete the following steps:
-
In the orcharhino management UI, navigate to Administer > Settings and click the Provisioning tab.
-
For the Default PXE global template entry, in the Value column, enter discovery.
To use a template, in the orcharhino management UI, navigate to Administer > Settings, click the Provisioning tab, and set the templates that you want to use.
Templates and snippets are locked to prevent changes. If you want to edit a template or snippet, clone it, save it with a unique name, and then edit the clone.
When you change the template or a snippet it includes, the changes must be propagated to orcharhino Server’s default PXE template.
In the orcharhino management UI, navigate to Hosts > Provisioning Templates and click Build PXE Default.
This refreshes the default PXE template on orcharhino Server.
- The proxy.url argument
-
During the orcharhino installation process, if you use the default option --enable-foreman-plugin-discovery, you can edit the proxy.url argument in the template to set the URL of the orcharhino Proxy that provides the discovery service. You can change the proxy.url argument to the IP address or FQDN of another provisioning orcharhino Proxy that you want to use, but ensure that you append the port number, for example, 9090. If you used an alternative port number with the --foreman-proxy-ssl-port option during orcharhino installation, you must add that port number. You can also edit the proxy.url argument to use an orcharhino Server IP address or FQDN so that the discovered hosts communicate directly with orcharhino Server.
- The proxy.type argument
-
If you use an orcharhino Proxy FQDN for the proxy.url argument, ensure that you set the proxy.type argument to proxy. If you use an orcharhino Server FQDN, set the proxy.type argument to foreman. For example:
proxy.url=https://orcharhino-proxy.network2.example.com:9090 proxy.type=proxy
orcharhino deploys the same template to all TFTP orcharhino Proxies, and there is no variable or macro available to render the orcharhino Proxy’s host name. The hard-coded proxy.url does not work with two or more TFTP orcharhino Proxies. As a workaround, edit the configuration file in the TFTP directory using SSH every time you click Build PXE Default, or use a common DNS alias for the appropriate subnets.
If you want to use tagged VLAN provisioning, and you want the Discovery service to send a discovery request, add the following information to the KERNEL option in the discovery template:
fdi.vlan.primary=example_VLAN_ID
Creating Hosts from Discovered Hosts
Provisioning discovered hosts follows a provisioning process that is similar to PXE provisioning. The main difference is that instead of manually entering the host’s MAC address, you can select the host to provision from the list of discovered hosts.
To use the CLI instead of the orcharhino management UI, see the CLI procedure.
-
Configure a domain and subnet on orcharhino. For more information about networking requirements, see Configuring Networking.
-
Configure the discovery service on orcharhino. For more information, see Installing the Discovery Service.
-
A bare metal host or a blank VM.
-
Provide the installation medium for the operating systems that you want to use to provision hosts. You can use synchronized content repositories for Red Hat Enterprise Linux. For more information, see Syncing Repositories in Managing Content.
-
Provide an activation key for host registration. For more information, see Creating An Activation Key in Managing Content.
For information about the security tokens, see Configuring the Security Token Validity Duration.
-
In the orcharhino management UI, navigate to Hosts > Discovered Hosts. Select the host you want to use and click Provision to the right of the list.
-
Select from one of the two following options:
-
To provision a host from a host group, select a host group, organization, and location, and then click Create Host.
-
To provision a host with further customization, click Customize Host and enter the additional details you want to specify for the new host.
-
-
Verify that the fields are populated with values. Note in particular:
-
The Name from the Host tab becomes the DNS name.
-
orcharhino Server automatically assigns an IP address for the new host.
-
orcharhino Server automatically populates the MAC address from the Discovery results.
-
-
Ensure that orcharhino Server automatically selects the Managed, Primary, and Provision options for the first interface on the host. If not, select them.
-
Click the Operating System tab, and verify that all fields contain values. Confirm each aspect of the operating system.
-
Click Resolve in Provisioning template to check that the new host can identify the right provisioning templates to use. The host must resolve to the following provisioning templates:
-
kexec Template: Discovery Red Hat kexec
-
provision Template: orcharhino Kickstart Default
For more information about associating provisioning templates, see provisioning templates.
-
-
Click Submit to save the host details.
When the host provisioning is complete, the discovered host becomes a content host. To view the host, navigate to Hosts > Content Hosts.
-
Identify the discovered host to use for provisioning:
$ hammer discovery list
-
Select a host and provision it using a host group. Set a new host name with the --new-name option:
$ hammer discovery provision \
--build true \
--enabled true \
--hostgroup "My_Host_Group" \
--location "My_Location" \
--managed true \
--name "My_Host_Name" \
--new-name "My_New_Host_Name" \
--organization "My_Organization"
This removes the host from the discovered host listing and creates a host entry with the provisioning settings. The Discovery image automatically resets the host so that it can boot to PXE. The host detects the DHCP service on orcharhino Server’s integrated orcharhino Proxy and starts installing the operating system. The rest of the process is identical to normal PXE workflow described in Creating Hosts with Unattended Provisioning.
Creating Discovery Rules
As a method of automating the provisioning process for discovered hosts, orcharhino provides a feature to create discovery rules. These rules define how discovered hosts automatically provision themselves, based on the assigned host group. For example, you can automatically provision hosts with a high CPU count as hypervisors. Likewise, you can provision hosts with large hard disks as storage servers.
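Conceptually, rule-based auto-provisioning tries each enabled rule in priority order, and the first rule whose search expression matches a host's facts decides its host group. The sketch below is an illustrative model only, with plain Python predicates standing in for orcharhino's search syntax; all names are made up:

```python
def first_matching_rule(host_facts, rules):
    """Return the name of the first enabled rule whose predicate matches.

    rules: list of dicts with 'name', 'priority' (lower value = higher
           priority), 'enabled', and 'matches' (a predicate over facts).
    """
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["enabled"] and rule["matches"](host_facts):
            return rule["name"]
    return None  # no match: the host stays in the discovered-hosts list

rules = [
    {"name": "storage", "priority": 10, "enabled": True,
     "matches": lambda f: f["disk_gb"] >= 1000},      # large disks
    {"name": "hypervisor", "priority": 5, "enabled": True,
     "matches": lambda f: f["cpu_count"] > 8},        # high CPU count
]

print(first_matching_rule({"cpu_count": 16, "disk_gb": 2000}, rules))
# hypervisor  (priority 5 is tried before priority 10)
```

Even though the host matches both rules, the lower priority value wins, which is why assigning distinct priorities to overlapping rules matters.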
To use the CLI instead of the orcharhino management UI, see the CLI procedure.
Auto provisioning does not currently allow configuring NICs; all systems are provisioned with the NIC configuration that was detected during discovery. However, you can set the NIC in the OS installer recipe scriptlet, using a script, or using configuration management at a later stage.
-
In the orcharhino management UI, navigate to Configure > Discovery rules, and select Create Rule.
-
In the Name field, enter a name for the rule.
-
In the Search field, enter the rules to determine whether to provision a host. This field provides suggestions for values you enter and allows operators for multiple rules. For example: cpu_count > 8.
-
From the Host Group list, select the host group to use as a template for this host.
-
In the Hostname field, enter the pattern to determine host names for multiple hosts. This uses the same ERB syntax that provisioning templates use. The host name can use the @host attribute for host-specific values, the rand macro for a random number, or the sequence_hostgroup_param_next macro for an incrementing value. For more information about provisioning templates, see provisioning templates and the API documentation.
-
myhost-<%= sequence_hostgroup_param_next("EL7/MyHostgroup", 10, "discovery_host") %>
-
myhost-<%= rand(99999) %>
-
abc-<%= @host.facts['bios_vendor'] %>-<%= rand(99999) %>
-
xyz-<%= @host.hostgroup.name %>
-
srv-<%= @host.discovery_rule.name %>
-
server-<%= @host.ip.gsub('.','-') + '-' + @host.hostgroup.subnet.name %>
When creating host name patterns, ensure that the resulting host names are unique, do not start with numbers, and do not contain underscores or dots. A good approach is to use unique information provided by Facter, such as the MAC address, BIOS, or serial ID.
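The naming constraints above can be checked mechanically once a pattern has been rendered. A minimal sketch in Python (illustrative only, not part of orcharhino), assuming short host names without a domain part:

```python
import re

# Letters, digits, and hyphens only; the first character must be a letter,
# which rules out names starting with a number, underscores, and dots.
VALID_SHORTNAME = re.compile(r"^[A-Za-z][A-Za-z0-9-]*$")

def valid_hostnames(names):
    """Check that generated short host names are well formed and unique."""
    return (all(VALID_SHORTNAME.match(n) for n in names)
            and len(set(names)) == len(names))

print(valid_hostnames(["myhost-10", "myhost-20"]))   # True
print(valid_hostnames(["1host", "my_host", "a.b"]))  # False
```

Running a check like this against a sample of rendered names catches leading digits, underscores, dots, and duplicate names before provisioning fails.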
-
-
In the Hosts limit field, enter the maximum number of hosts that you can provision with the rule. Enter 0 for unlimited.
-
In the Priority field, enter a number to set the precedence the rule has over other rules. Rules with lower values have a higher priority.
-
From the Enabled list, select whether you want to enable the rule.
-
To set a different provisioning context for the rule, click the Organizations and Locations tabs and select the contexts you want to use.
-
Click Submit to save your rule.
-
In the orcharhino management UI, navigate to Hosts > Discovered Hosts and select one of the following two options:
-
From the Discovered hosts list on the right, select Auto-Provision to automatically provision a single host.
-
On the upper right of the window, click Auto-Provision All to automatically provision all hosts.
-
-
Create the rule with the hammer discovery-rule create command:
$ hammer discovery-rule create \
--enabled true \
--hostgroup "My_Host_Group" \
--hostname "hypervisor-<%= rand(99999) %>" \
--hosts-limit 5 \
--name "My_Hypervisor" \
--priority 5 \
--search "cpu_count > 8"
-
Automatically provision a host with the hammer discovery auto-provision command:
$ hammer discovery auto-provision --name "macabcdef123456"
Implementing PXE-less Discovery
orcharhino provides a PXE-less Discovery service that operates without the need for PXE-based services (DHCP and TFTP).
You accomplish this using orcharhino Server’s Discovery image.
Once a discovered node is scheduled for installation, it uses the kexec command to load the Linux kernel with the OS installer without rebooting the node.
The console might freeze during the process. On some hardware, you might experience graphical hardware problems.
If you have not yet installed the Discovery service or image, see Building a Discovery Image.
-
Copy this media to either a CD, DVD, or a USB stick. For example, to copy to a USB stick at /dev/sdb:
$ dd bs=4M \
if=/usr/share/foreman-discovery-image/foreman-discovery-image-3.4.4-5.iso \
of=/dev/sdb
-
Insert the Discovery boot media into a bare metal host, start the host, and boot from the media. The Discovery Image displays an option for either Manual network setup or Discovery with DHCP:
If you select Manual network setup, the Discovery image requests a set of network options. This includes the primary network interface that connects to orcharhino Server. The Discovery image also asks for network interface configuration options, such as an IPv4 address, an IPv4 gateway, and an IPv4 DNS server.
-
After entering these details, select Next.
-
If you select Discovery with DHCP, the Discovery image requests only the primary network interface that connects to orcharhino Server. It attempts to automatically configure the network interface using a DHCP server, such as one that an orcharhino Proxy provides.
-
After the primary interface configuration, the Discovery image requests the Server URL, which is the URL of orcharhino Server or orcharhino Proxy offering the Discovery service. For example, to use the integrated orcharhino Proxy on orcharhino Server, use the following URL:
https://orcharhino.example.com:9090
-
Set the Connection type to Proxy, then select Next.
-
Optional: The Discovery image also provides a set of fields to input Custom facts for the Facter tool to relay back to orcharhino Server. These are entered in a name-value format. Provide any custom facts you require and select Confirm to continue.
orcharhino reports a successful communication with orcharhino Server’s Discovery service. In the orcharhino management UI, navigate to Hosts > Discovered Hosts and view the newly discovered host.
For more information about provisioning discovered hosts, see Creating Hosts from Discovered Hosts.
Building a Discovery Image
The Discovery image is a minimal operating system that is PXE-booted on hosts to acquire initial hardware information and to check in with orcharhino. Discovered hosts keep running the Discovery image until they are rebooted into Anaconda, which then initiates the provisioning process.
Run the following procedure on Red Hat Enterprise Linux 8.
The Discovery image is based on CentOS Stream 8.
Use this procedure to build a orcharhino discovery image or rebuild an image if you change configuration files.
Do not use this procedure on your production orcharhino or orcharhino Proxy. Use either a dedicated environment or copy the synchronized repositories and a kickstart file to a separate server.
-
Ensure that hardware virtualization is available on your host.
-
Install the following packages:
$ dnf install git-core lorax anaconda pykickstart wget qemu-kvm
-
Clone the foreman-discovery-image repository:
$ git clone https://github.com/theforeman/foreman-discovery-image.git
$ cd foreman-discovery-image
-
Replace the contents of the 00-repos-centos8.ks file with the following:
url --mirrorlist=http://mirrorlist.centos.org/?release=8&arch=$basearch&repo=baseos
repo --name="AppStream" --mirrorlist=http://mirrorlist.centos.org/?release=8&arch=$basearch&repo=appstream
repo --name="foreman-el8" --baseurl=http://yum.theforeman.org/releases/6.7/el8/$basearch/
repo --name="foreman-plugins-el8" --baseurl=http://yum.theforeman.org/plugins/6.7/el8/$basearch/
module --name=ruby --stream=2.7
module --name=postgresql --stream=12
module --name=foreman --stream=el8
-
Prepare the kickstart file:
$ ./build-livecd fdi-centos8.ks
-
Build the ISO image:
$ sudo ./build-livecd-root custom ./result "nomodeset nokaslr"
The options nomodeset and nokaslr instruct the Discovery image kernel to disable mode setting and to disable address space layout randomization (ASLR).
-
Verify that your ./result/fdi-custom-XXXXXXX.iso file is created:
$ ls -h ./result/*.iso
Use the following command to build a fully automated Discovery image with a static network configuration:
$ sudo ./build-livecd-root custom ./result \
"nomodeset nokaslr \
fdi.pxip=192.168.140.20/24 fdi.pxgw=192.168.140.1 \
fdi.pxdns=192.168.140.2 proxy.url=https://orcharhino.example.com:8443 \
proxy.type=foreman fdi.pxmac=52:54:00:be:8e:8c fdi.pxauto=1"
For more information about configuration options, see Unattended Use Customization and Image Remastering.
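Because the fdi.* options are ordinary kernel command-line parameters, the Discovery image receives them as space-separated key=value pairs. The following sketch of how such a command line decomposes is illustrative only, not orcharhino code:

```python
def parse_cmdline(cmdline):
    """Split a kernel command line into a dict of key=value options."""
    opts = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        opts[key] = value if sep else True  # bare flags become True
    return opts

opts = parse_cmdline(
    "nomodeset nokaslr fdi.pxip=192.168.140.20/24 "
    "proxy.url=https://orcharhino.example.com:8443 proxy.type=foreman"
)
print(opts["fdi.pxip"])    # 192.168.140.20/24
print(opts["nomodeset"])   # True
```

This is also a convenient way to double-check a long remaster argument string for typos before building the image.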
When you create the .iso file, you can boot it over a network or locally. Complete one of the following procedures.
If you want to boot the .iso file over a network, complete the following steps:
-
Create a directory to store your boot files:
$ mkdir /var/lib/tftpboot/boot/myimage
-
Copy the ./tftpboot/initrd0.img and ./tftpboot/vmlinuz0 files to your new directory.
-
Edit the KERNEL and APPEND entries in the /var/lib/tftpboot/pxelinux.cfg file to add the information about your own initial ramdisk and kernel files.
If you want to create a hybrid .iso file for booting locally, complete the following steps:
-
To convert the .iso file to an .iso hybrid file for PXE provisioning, enter the following command:
$ isohybrid --partok fdi.iso
If you have grub2 packages installed, you can also use the following command to install a grub2 bootloader:
$ isohybrid --partok --uefi fdi.iso
-
To add an md5 checksum to the .iso file so that it can pass installation media validation tests in orcharhino, enter the following command:
$ implantisomd5 fdi.iso
Unattended Use, Customization, and Image Remastering
You can create a customized Discovery ISO to automate the image configuration process after booting. The Discovery image uses a Linux kernel for the operating system, which passes kernel parameters to configure the discovery service. These kernel parameters include the following entries:
- fdi.cachefacts
-
Number of fact uploads without caching. By default, orcharhino does not cache any uploaded facts.
- fdi.countdown
-
Number of seconds to wait until the text-based user interface is refreshed after the initial discovery attempt. This value defaults to 45 seconds. Increase this value if the status page reports the IP address as N/A.
- fdi.dhcp_timeout
-
NetworkManager DHCP timeout. The default value is 300 seconds.
- fdi.dns_nameserver
-
Nameserver to use for DNS SRV record.
- fdi.dns_ndots
-
The ndots option to use for DNS SRV record.
- fdi.dns_search
-
Search domain to use for DNS SRV record.
- fdi.initnet
-
By default, the image initializes all network interfaces (value all). When this setting is set to bootif, only the network interface from which the host was network-booted is initialized.
- fdi.ipv4.method
-
By default, the NetworkManager IPv4 method setting is set to auto. This option overrides it; set it to ignore to disable the IPv4 stack. This option works only in DHCP mode.
- fdi.ipv6.method
-
By default, the NetworkManager IPv6 method setting is set to auto. This option overrides it; set it to ignore to disable the IPv6 stack. This option works only in DHCP mode.
- fdi.ipwait
-
Duration in seconds to wait for an IP address to become available at HTTP proxy SSL certificate start. By default, orcharhino waits for 120 seconds.
- fdi.nmwait
-
The nmcli --wait option for NetworkManager. By default, nmcli waits for 120 seconds.
- fdi.proxy_cert_days
-
Number of days the self-signed HTTPS cert is valid for. By default, the certificate is valid for 999 days.
- fdi.pxauto
-
To set automatic or semi-automatic mode. If set to 0, the image uses semi-automatic mode, which allows you to confirm your choices through a set of dialog options. If set to 1, the image uses automatic mode and proceeds without any confirmation.
- fdi.pxfactname1, fdi.pxfactname2 … fdi.pxfactnameN
-
Use to specify custom fact names.
- fdi.pxfactvalue1, fdi.pxfactvalue2 … fdi.pxfactvalueN
-
The values for each custom fact. Each value corresponds to a fact name. For example, fdi.pxfactvalue1 sets the value for the fact named with fdi.pxfactname1.
- fdi.pxip, fdi.pxgw, fdi.pxdns
-
Manually configures the IP address (fdi.pxip), the gateway (fdi.pxgw), and the DNS server (fdi.pxdns) for the primary network interface. If you omit these parameters, the image uses DHCP to configure the network interface.
- fdi.pxmac
-
The MAC address of the primary interface in the format AA:BB:CC:DD:EE:FF. This is the interface you aim to use for communicating with orcharhino Proxy. In automated mode, the first NIC (using network identifiers in alphabetical order) with a link is used. In semi-automated mode, a screen appears and requests you to select the correct interface.
- fdi.rootpw
-
By default, the root account is locked. Use this option to set a root password. You can enter both clear-text and encrypted passwords.
- fdi.ssh
-
By default, the SSH service is disabled. Set this to 1 or true to enable SSH access.
- fdi.uploadsleep
-
Duration in seconds between facter runs. By default, facter runs every 30 seconds.
- fdi.vlan.primary
-
VLAN tagging ID to set for the primary interface.
- fdi.zips
-
Filenames with extensions to be downloaded and started during boot. For more information, see Extending the Discovery Image.
- fdi.zipserver
-
TFTP server from which to download extensions. For more information, see Extending the Discovery Image.
- proxy.type
-
The proxy type. This is usually set to proxy to connect to orcharhino Proxy. This parameter also supports a legacy foreman option, where communication goes directly to orcharhino Server instead of an orcharhino Proxy.
- proxy.url
-
The URL of orcharhino Proxy or orcharhino Server providing the Discovery service.
The discovery-remaster Tool to Remaster an OS Image
$ discovery-remaster ~/iso/foreman-discovery-image-3.4.4-5.iso \
"fdi.pxip=192.168.140.20/24 fdi.pxgw=192.168.140.1 \
fdi.pxdns=192.168.140.2 proxy.url=https://orcharhino.example.com:9090 \
proxy.type=proxy fdi.pxfactname1=customhostname fdi.pxfactvalue1=myhost fdi.pxmac=52:54:00:be:8e:8c fdi.pxauto=1"
Copy this media to either a CD, DVD, or a USB stick.
For example, to copy to a USB stick at /dev/sdb:
$ dd bs=4M \
if=/usr/share/foreman-discovery-image/foreman-discovery-image-3.4.4-5.iso \
of=/dev/sdb
Insert the Discovery boot media into a bare metal host, start the host, and boot from the media.
For more information about provisioning discovered hosts, see Creating Hosts from Discovered Hosts.
Extending the Discovery Image
You can extend the orcharhino Discovery image with custom facts, software, or device drivers. You can also provide a compressed archive file containing extra code for the image to use.
-
Create the following directory structure:
.
├── autostart.d
│   └── 01_zip.sh
├── bin
│   └── ntpdate
├── facts
│   └── test.rb
└── lib
    ├── libcrypto.so.1.0.0
    └── ruby
        └── test.rb
-
The autostart.d directory contains scripts that are executed in POSIX order by the image when it starts, but before the host is registered to orcharhino.
-
The bin directory is added to the $PATH variable; you can place binary files in this directory and use them in the autostart scripts.
-
The facts directory is added to the FACTERLIB variable so that custom facts can be configured and sent to orcharhino.
-
The lib directory is added to the LD_LIBRARY_PATH variable and lib/ruby is added to the RUBYLIB variable, so that binary files in /bin can be executed correctly.
-
-
After creating the directory structure, create a .zip file archive with the following command:
$ zip -r my_extension.zip .
-
To inform the Discovery image of the extensions it must use, place your zip files on your TFTP server with the Discovery image, and then update the APPEND line of the PXELinux template with the fdi.zips option, where the paths are relative to the TFTP root. For example, if you have two archives at $TFTP/zip1.zip and $TFTP/boot/zip2.zip, use the following syntax:
fdi.zips=zip1.zip,boot/zip2.zip
You can append new directives and options to the existing environment variables (PATH, LD_LIBRARY_PATH, RUBYLIB, and FACTERLIB).
If you want to specify the path explicitly in your scripts, the .zip file contents are extracted to the /opt/extension directory on the image.
You can create multiple .zip files, but be aware that they are extracted to the same location on the Discovery image. Files extracted from later .zip files overwrite earlier versions if they have the same file name.
Troubleshooting Discovery
If a machine is not listed in the orcharhino management UI in Hosts > Discovered Hosts, inspect the following configuration areas to help isolate the error:
-
In the orcharhino management UI, navigate to Hosts > Provisioning Templates and redeploy the default PXELinux template using the Build PXE Default button.
-
Verify the pxelinux.cfg/default configuration file on the TFTP orcharhino Proxy.
-
Ensure adequate network connectivity between hosts, orcharhino Proxy, and orcharhino Server.
-
Check the PXELinux template in use and determine the PXE discovery snippet it includes. Snippets are named as follows: pxelinux_discovery, pxegrub_discovery, or pxegrub2_discovery. Verify the proxy.url and proxy.type options in the PXE discovery snippet.
-
Ensure that DNS is working correctly for the discovered nodes, or use an IP address in the proxy.url option in the PXE discovery snippet included in the PXELinux template you are using.
-
Ensure that the DHCP server is delivering IP addresses to the booted image correctly.
-
Ensure the discovered host or virtual machine has at least 1200 MB of memory. Less memory can lead to various random kernel panic errors because the image is extracted in-memory.
For gathering important system facts, use the discovery-debug command. It prints system logs, network configuration, a list of facts, and other information on the standard output. The typical use case is to redirect this output and copy it with the scp command for further investigation.
The first virtual console on the discovered host is reserved for systemd logs. Particularly useful system logs are tagged as follows:
-
discover-host — initial facts upload
-
foreman-discovery — facts refresh, reboot remote commands
-
nm-prepare — boot script that pre-configures NetworkManager
-
NetworkManager — networking information
Use TTY2 or higher to log in to a discovered host. The root account and SSH access are disabled by default, but you can enable SSH and set the root password using the following kernel command-line options in the Default PXELinux template on the APPEND line:
fdi.ssh=1 fdi.rootpw=My_Password
The text and illustrations on this page are licensed by ATIX AG under a Creative Commons Attribution–Share Alike 3.0 Unported ("CC-BY-SA") license. This page also contains text from the official Foreman documentation which uses the same license ("CC-BY-SA").