Configuring Provisioning Resources
Provisioning Contexts
A provisioning context is the combination of an organization and location that you specify for orcharhino components. The organization and location that a component belongs to sets the ownership and access for that component.
Organizations divide orcharhino components into logical groups based on ownership, purpose, content, security level, and other divisions. You can create and manage multiple organizations through orcharhino and assign components to each individual organization. This ensures orcharhino Server provisions hosts within a certain organization and only uses components that are assigned to that organization. For more information about organizations, see Managing Organizations in Managing Organizations and Locations.
Locations function similarly to organizations. The difference is that locations are based on physical or geographical settings. Users can nest locations in a hierarchy. For more information about locations, see Managing Locations in Managing Organizations and Locations.
Setting the Provisioning Context
When you set a provisioning context, you define which organization and location to use for provisioning hosts.
The organization and location menus are located in the menu bar, on the upper left of the orcharhino management UI. If you have not selected an organization and location to use, the menu displays: Any Organization and Any Location.
-
Click Any Organization and select the organization.
-
Click Any Location and select the location to use.
Each user can set their default provisioning context in their account settings. Click the user name in the upper right of the orcharhino management UI and select My account to edit your user account settings.
-
When using the CLI, include either --organization or --organization-label and --location or --location-id as an option. For example:
# hammer host list --organization "My_Organization" --location "My_Location"
This command outputs hosts allocated to My_Organization and My_Location.
Creating Operating Systems
An operating system is a collection of resources that define how orcharhino Server installs a base operating system on a host. Operating system entries combine previously defined resources, such as installation media, partition tables, provisioning templates, and others.
You can also add custom operating systems using the following procedure. To use the CLI instead of the orcharhino management UI, see the CLI procedure.
-
In the orcharhino management UI, navigate to Hosts > Operating systems and click New Operating system.
-
In the Name field, enter a name to represent the operating system entry.
-
In the Major field, enter the number that corresponds to the major version of the operating system.
-
In the Minor field, enter the number that corresponds to the minor version of the operating system.
-
In the Description field, enter a description of the operating system.
-
From the Family list, select the operating system’s family.
-
From the Root Password Hash list, select the encoding method for the root password.
-
From the Architectures list, select the architectures that the operating system uses.
-
Click the Partition table tab and select the possible partition tables that apply to this operating system.
-
Click the Installation Media tab and enter the information for the installation media source. For more information, see Adding Installation Media to orcharhino.
-
Click the Templates tab and select a PXELinux template, a Provisioning template, and a Finish template for your operating system to use. You can select other templates, for example an iPXE template, if you plan to use iPXE for provisioning.
-
Click Submit to save your provisioning template.
-
Create the operating system using the hammer os create command:
# hammer os create \
--architectures "x86_64" \
--description "My_Custom_Operating_System" \
--family "Red Hat" \
--major 9 \
--media "Red Hat" \
--minor 1 \
--name "Oracle Linux" \
--partition-tables "My_Partitioning_Table" \
--provisioning-templates "My_Provisioning_Template"
Updating the Details of Multiple Operating Systems
Use this procedure to update the details of multiple operating systems.
This example shows you how to assign each operating system a partition table called Kickstart default, a configuration template called Kickstart default PXELinux, and a provisioning template called Kickstart default.
-
On orcharhino Server, run the following Bash script:
PARTID=$(hammer --csv partition-table list | grep "Kickstart default," | cut -d, -f1)
PXEID=$(hammer --csv template list --per-page=1000 | grep "Kickstart default PXELinux" | cut -d, -f1)
SATID=$(hammer --csv template list --per-page=1000 | grep "provision" | grep ",Kickstart default" | cut -d, -f1)

for i in $(hammer --no-headers --csv os list | awk -F, {'print $1'})
do
   hammer partition-table add-operatingsystem --id="${PARTID}" --operatingsystem-id="${i}"
   hammer template add-operatingsystem --id="${PXEID}" --operatingsystem-id="${i}"
   hammer os set-default-template --id="${i}" --config-template-id=${PXEID}
   hammer os add-config-template --id="${i}" --config-template-id=${SATID}
   hammer os set-default-template --id="${i}" --config-template-id=${SATID}
done
-
Display information about the updated operating system to verify that the operating system is updated correctly:
# hammer os info --id 1
Creating Architectures
An architecture in orcharhino represents a logical grouping of hosts and operating systems. Architectures are created by orcharhino automatically when hosts check in with Puppet. Basic i386 and x86_64 architectures are already preset in orcharhino.
Use this procedure to create an architecture in orcharhino.
-
In the orcharhino management UI, navigate to Hosts > Architectures and click Create Architecture.
-
In the Name field, enter a name for the architecture.
-
From the Operating Systems list, select an operating system. If none are available, you can create and assign them under Hosts > Operating Systems.
-
Click Submit.
-
Enter the hammer architecture create command to create an architecture. Specify its name and the operating systems that include this architecture:
# hammer architecture create \
--name "My_Architecture" \
--operatingsystems "My_Operating_System"
Creating Hardware Models
Use this procedure to create a hardware model in orcharhino so that you can specify which hardware model a host uses.
-
In the orcharhino management UI, navigate to Hosts > Hardware Models and click Create Model.
-
In the Name field, enter a name for the hardware model.
-
Optionally, in the Hardware Model and Vendor Class fields, you can enter corresponding information for your system.
-
In the Info field, enter a description of the hardware model.
-
Click Submit to save your hardware model.
-
Create a hardware model using the hammer model create command. The only required parameter is --name. Optionally, enter the hardware model with the --hardware-model option, a vendor class with the --vendor-class option, and a description with the --info option:
# hammer model create \
--hardware-model "My_Hardware_Model" \
--info "My_Description" \
--name "My_Hardware_Model_Name" \
--vendor-class "My_Vendor_Class"
Using a Synced Kickstart Repository for a Host’s Operating System
orcharhino contains a set of synchronized kickstart repositories that you use to install the provisioned host’s operating system. For more information about adding repositories, see Syncing Repositories in Managing Content.
Use this procedure to set up a kickstart repository.
You must enable both BaseOS and AppStream Kickstart repositories before provisioning.
-
Add the synchronized kickstart repository that you want to use to the existing Content View, or create a new Content View and add the kickstart repository.
For Oracle Linux 8, ensure that you add both Oracle Linux 8 for x86_64 - AppStream Kickstart x86_64 8 and Oracle Linux 8 for x86_64 - BaseOS Kickstart x86_64 8 repositories.
If you use a disconnected environment, you must import the Kickstart repositories from an Oracle Linux binary DVD. For more information, see Importing Kickstart Repositories in Managing Content.
-
Publish a new version of the Content View where the kickstart repository is added and promote it to a required lifecycle environment. For more information, see Managing Content Views in Managing Content.
-
When you create a host, in the Operating System tab, for Media Selection, select the Synced Content checkbox.
To view the kickstart tree, enter the following command:
# hammer medium list --organization "My_Organization"
Adding Installation Media to orcharhino
Installation media are sources of packages that orcharhino Server uses to install a base operating system on a machine from an external repository.
Installation media must be in the format of an operating system installation tree and must be accessible to the machine hosting the installer through an HTTP URL. You can view installation media by navigating to the Hosts > Installation Media menu.
To improve download performance when using installation media to install operating systems on multiple hosts, modify the Path of the installation medium to point to the closest mirror or a local copy.
To use the CLI instead of the orcharhino management UI, see the CLI procedure.
-
In the orcharhino management UI, navigate to Hosts > Installation Media and click Create Medium.
-
In the Name field, enter a name to represent the installation media entry.
-
In the Path field, enter the URL or NFS share that contains the installation tree. You can use the following variables in the path to represent multiple different system architectures and versions:
-
$arch – The system architecture.
-
$version – The operating system version.
-
$major – The operating system major version.
-
$minor – The operating system minor version.
Example HTTP path:
http://download.example.com/oracle_linux/$version/Server/$arch/os/
Example NFS path:
nfs://download.example.com:/oracle_linux/$version/Server/$arch/os/
Synchronized content on orcharhino Proxies always uses an HTTP path. orcharhino Proxy managed content does not support NFS paths.
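The variable substitution described above can be sketched in a few lines. This is illustrative Python, not orcharhino code; the operating system values and the exact form of $version are example assumptions:

```python
# Illustrative expansion of installation medium path variables.
# orcharhino performs this substitution itself; this sketch only shows
# the intended result for a hypothetical operating system entry.
def expand_medium_path(path, arch, major, minor):
    version = f"{major}.{minor}" if minor else str(major)
    return (path
            .replace("$version", version)
            .replace("$major", str(major))
            .replace("$minor", str(minor))
            .replace("$arch", arch))

url = expand_medium_path(
    "http://download.example.com/oracle_linux/$version/Server/$arch/os/",
    arch="x86_64", major=9, minor="1",
)
print(url)
# http://download.example.com/oracle_linux/9.1/Server/x86_64/os/
```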
-
From the Operating system family list, select the distribution or family of the installation medium. Oracle Linux belongs to the Red Hat family.
-
Click the Organizations and Locations tabs to change the provisioning context. orcharhino Server adds the installation medium to the set provisioning context.
-
Click Submit to save your installation medium.
ATIX AG provides file repositories for installation media. You can synchronize them to provision hosts in disconnected environments by using local installation media. For a list of the installation media available as file repositories and their upstream URLs, see Using file repositories for installation media for Debian/Ubuntu in the ATIX Service Portal.
-
Create the installation medium using the hammer medium create command:
# hammer medium create \
--locations "My_Location" \
--name "My_OS" \
--organizations "My_Organization" \
--os-family "Redhat" \
--path "http://download.example.com/oracle_linux/$version/Server/$arch/os/"
Creating Partition Tables
A partition table is a type of template that defines the way orcharhino Server configures the disks available on a new host.
A partition table uses the same ERB syntax as provisioning templates.
orcharhino contains a set of default partition tables to use, including Kickstart default.
You can also edit partition table entries to configure the preferred partitioning scheme, or create a partition table entry and add it to the operating system entry.
To use the CLI instead of the orcharhino management UI, see the CLI procedure.
-
In the orcharhino management UI, navigate to Hosts > Partition Tables and click Create Partition Table.
-
In the Name field, enter a name for the partition table.
-
Select the Default checkbox if you want to set the template to automatically associate with new organizations or locations.
-
Select the Snippet checkbox if you want to identify the template as a reusable snippet for other partition tables.
-
From the Operating System Family list, select the distribution or family of the partitioning layout. Oracle Linux belongs to the Red Hat family.
-
In the Template editor field, enter the layout for the disk partition. For example:
zerombr
clearpart --all --initlabel
autopart
You can also use the Template file browser to upload a template file.
The format of the layout must match that for the intended operating system. For example, Oracle Linux requires a layout that matches a kickstart file.
-
In the Audit Comment field, add a summary of changes to the partition layout.
-
Click the Organizations and Locations tabs to add any other provisioning contexts that you want to associate with the partition table. orcharhino adds the partition table to the current provisioning context.
-
Click Submit to save your partition table.
-
Before you create a partition table with the CLI, create a plain text file that contains the partition layout. This example uses the ~/my_partition_table file.
-
Create the partition table using the hammer partition-table create command:
# hammer partition-table create \
--file "path/to/my_partition_table" \
--locations "My_Location" \
--name "My_Partition_Table" \
--organizations "My_Organization" \
--os-family "Redhat" \
--snippet false
Dynamic Partition Example
Using an Anaconda kickstart template, the following section instructs Anaconda to erase the whole disk, automatically partition, enlarge one partition to maximum size, and then proceed to the next sequence of events in the provisioning process:
zerombr
clearpart --all --initlabel
autopart <%= host_param('autopart_options') %>
Dynamic partitioning is executed by the installation program. Therefore, you can write your own rules to specify how you want to partition disks according to runtime information from the node, for example, disk sizes, number of drives, vendor, or manufacturer.
If you want to provision servers and use dynamic partitioning, add the following example as a template.
When the #Dynamic entry is included, the content of the template loads into a %pre shell scriptlet and creates a /tmp/diskpart.cfg file that is then included in the Kickstart partitioning section.
#Dynamic (do not remove this line)

MEMORY=$((`grep MemTotal: /proc/meminfo | sed 's/^MemTotal: *//'|sed 's/ .*//'` / 1024))
if [ "$MEMORY" -lt 2048 ]; then
    SWAP_MEMORY=$(($MEMORY * 2))
elif [ "$MEMORY" -lt 8192 ]; then
    SWAP_MEMORY=$MEMORY
elif [ "$MEMORY" -lt 65536 ]; then
    SWAP_MEMORY=$(($MEMORY / 2))
else
    SWAP_MEMORY=32768
fi

cat <<EOF > /tmp/diskpart.cfg
zerombr yes
clearpart --all --initlabel
part /boot --fstype ext4 --size 200 --asprimary
part swap --size "$SWAP_MEMORY"
part / --fstype ext4 --size 1024 --grow
EOF
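The swap sizing rule used by this script can be isolated and tried on its own. The following is a plain Bash sketch, not part of the template; the memory value is passed in MiB as an argument instead of being read from /proc/meminfo:

```shell
#!/usr/bin/env bash
# Sizing rule from the dynamic partitioning example above:
# under 2 GiB -> 2x RAM, under 8 GiB -> 1x RAM,
# under 64 GiB -> RAM/2, otherwise capped at 32 GiB.
swap_for() {
    local memory=$1   # total RAM in MiB
    if [ "$memory" -lt 2048 ]; then
        echo $((memory * 2))
    elif [ "$memory" -lt 8192 ]; then
        echo "$memory"
    elif [ "$memory" -lt 65536 ]; then
        echo $((memory / 2))
    else
        echo 32768
    fi
}

swap_for 1024    # prints 2048
swap_for 16384   # prints 8192
```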
Provisioning Templates
A provisioning template defines the way orcharhino Server installs an operating system on a host.
orcharhino includes many template examples. In the orcharhino management UI, navigate to Hosts > Provisioning templates to view them. You can create a template or clone a template and edit the clone. For help with templates, navigate to Hosts > Provisioning templates > Create Template > Help.
Templates accept the Embedded Ruby (ERB) syntax. For more information, see Template Writing Reference in Managing Hosts.
You can download provisioning templates. Before you can download the template, you must create a debug certificate. For more information, see Creating an Organization Debug Certificate in Managing Organizations and Locations.
You can synchronize templates between orcharhino Server and a Git repository or a local directory. For more information, see Synchronizing Templates Repositories in Managing Hosts.
To view the history of changes applied to a template, navigate to Hosts > Provisioning templates, select one of the templates, and click History. Click Revert to override the content with the previous version. You can also revert to an earlier change. Click Show Diff to see information about a specific change:
-
The Template Diff tab displays changes in the body of a provisioning template.
-
The Details tab displays changes in the template description.
-
The History tab displays the user who made a change to the template and date of the change.
Types of Provisioning Templates
There are various types of provisioning templates:
- Provision
-
The main template for the provisioning process. For example, a kickstart template.
- PXELinux, PXEGrub, PXEGrub2
-
PXE-based templates that deploy to the template orcharhino Proxy associated with a subnet to ensure that the host uses the installer with the correct kernel options. For BIOS provisioning, select a PXELinux template. For UEFI provisioning, select a PXEGrub2 template.
- Finish
-
Post-configuration scripts to execute using an SSH connection when the main provisioning process completes. You can use Finish templates only for image-based provisioning in virtual or cloud environments that do not support user_data. Do not confuse an image with a Foreman discovery ISO, which is sometimes called a Foreman discovery image. An image in this context is an install image in a virtualized environment for easy deployment.
When a finish script exits with the return code 0, orcharhino treats this as a success and the host exits build mode. Note that a few finish scripts use a callback HTTP call instead. These scripts are not used for image-based provisioning, but for post-configuration of operating system installations such as Debian, Ubuntu, and BSD.
- user_data
-
Post-configuration scripts for providers that accept custom data, also known as seed data. You can use the user_data template to provision virtual machines in cloud or virtualized environments only. This template does not require orcharhino to be able to reach the host; the cloud or virtualization platform is responsible for delivering the data to the image.
Ensure that the image that you want to provision has the software to read the data installed and set to start during boot. For example, cloud-init, which expects YAML input, or ignition, which expects JSON input.
- cloud_init
-
Some environments, such as VMware, either do not support custom data or have their own data format that limits what can be done during customization. In this case, you can configure a cloud-init client with the foreman plug-in, which attempts to download the template directly from orcharhino over HTTP or HTTPS. This technique can be used in any environment, preferably virtualized.
Ensure that you meet the following requirements to use the cloud_init template:
-
Ensure that the image that you want to provision has the software to read the data installed and set to start during boot.
-
A provisioned host is able to reach orcharhino from the IP address that matches the host’s provisioning interface IP.
Note that cloud-init does not work behind NAT.
-
- Bootdisk
-
Templates for PXE-less boot methods.
- Kernel Execution (kexec)
-
Kernel execution templates for PXE-less boot methods.
- Script
-
An arbitrary script not used by default but useful for custom tasks.
- ZTP
-
Zero Touch Provisioning templates.
- POAP
-
PowerOn Auto Provisioning templates.
- iPXE
-
Templates for iPXE or gPXE environments to use instead of PXELinux.
Creating Provisioning Templates
A provisioning template defines the way orcharhino Server installs an operating system on a host. Use this procedure to create a new provisioning template.
-
In the orcharhino management UI, navigate to Hosts > Provisioning Templates and click Create Template.
-
In the Name field, enter a name for the provisioning template.
-
Fill in the rest of the fields as required. The Help tab provides information about the template syntax and details the available functions, variables, and methods that can be called on different types of objects within the template.
-
Before you create a template with the CLI, create a plain text file that contains the template. This example uses the ~/my-template file.
-
Create the template using the hammer template create command and specify the type with the --type option:
# hammer template create \
--file ~/my-template \
--locations "My_Location" \
--name "My_Provisioning_Template" \
--organizations "My_Organization" \
--type provision
Cloning Provisioning Templates
A provisioning template defines the way orcharhino Server installs an operating system on a host. Use this procedure to clone a template and add your updates to the clone.
-
In the orcharhino management UI, navigate to Hosts > Provisioning Templates and search for the template that you want to use.
-
Click Clone to duplicate the template.
-
In the Name field, enter a name for the provisioning template.
-
Select the Default checkbox to set the template to associate automatically with new organizations or locations.
-
In the Template editor field, enter the body of the provisioning template. You can also use the Template file browser to upload a template file.
-
In the Audit Comment field, enter a summary of changes to the provisioning template for auditing purposes.
-
Click the Type tab and if your template is a snippet, select the Snippet checkbox. A snippet is not a standalone provisioning template, but a part of a provisioning template that can be inserted into other provisioning templates.
-
From the Type list, select the type of the template. For example, Provisioning template.
-
Click the Association tab and from the Applicable Operating Systems list, select the names of the operating systems that you want to associate with the provisioning template.
-
Optionally, click Add combination and select a host group from the Host Group list or an environment from the Environment list to associate the provisioning template with host groups and environments.
-
Click the Organizations and Locations tabs to add any additional contexts to the template.
-
Click Submit to save your provisioning template.
Creating Custom Provisioning Snippets
Custom provisioning snippets allow you to execute custom code during host provisioning. You can run code before and/or after the provisioning process.
-
Depending on your provisioning template, multiple custom snippet hooks exist which you can use to include custom provisioning snippets. Ensure that you check your provisioning template first to verify which custom snippets you can use.
-
In the orcharhino management UI, navigate to Hosts > Provisioning Templates and click Create Template.
-
In the Name field, enter a name for your custom provisioning snippet. The name has to start with the name of a provisioning template that supports including custom provisioning snippets:
-
Append ` custom pre` to the name of a provisioning template to run code before provisioning a host.
-
Append ` custom post` to the name of a provisioning template to run code after provisioning a host.
-
-
On the Type tab, select Snippet.
-
Click Submit to create your custom provisioning snippet.
-
Before you create a template with the CLI, create a plain text file that contains your custom snippet.
-
Create the template using hammer:
# hammer template create \
--file "/path/to/My_Snippet" \
--locations "My_Location" \
--name "My_Template_Name_custom_pre" \
--organizations "My_Organization" \
--type snippet
Custom Provisioning Snippet Example for Enterprise Linux
You can use custom post snippets to call external APIs from within the provisioning template directly after provisioning a host.
Kickstart default finish custom post example for Enterprise Linux:
echo "Calling API to report successful host deployment"
yum install -y curl ca-certificates
curl -X POST \
-H "Content-Type: application/json" \
-d '{"name": "<%= @host.name %>", "operating_system": "<%= @host.operatingsystem.name %>", "status": "provisioned"}' \
"https://api.example.com/"
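Because the -d payload passed to curl must be valid JSON (JSON forbids trailing commas, for example), it can be worth checking what the snippet renders to. A quick illustrative check in Python; the field values below stand in for the rendered ERB output of a hypothetical host:

```python
import json

# A payload as the snippet would render it for a hypothetical host.
payload = ('{"name": "host01.example.com", '
           '"operating_system": "OracleLinux 9.1", '
           '"status": "provisioned"}')

data = json.loads(payload)  # raises ValueError on malformed JSON
print(data["status"])       # prints: provisioned
```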
Installing katello-host-tools-tracer during Provisioning
The katello-host-tools package is responsible for the connection between managed hosts and content from orcharhino Server and orcharhino Proxies. It uploads currently used repositories and installed packages to orcharhino.
The katello-host-tools-tracer package signals orcharhino if services such as nginx have to be restarted or the managed host itself has to be rebooted after updating packages such as the Linux kernel.
-
In the orcharhino management UI, navigate to the host group you are using to provision your host.
Note that you can add the parameter to any level in the parameter hierarchy, for example to individual hosts, host groups, or operating systems.
-
Add a parameter with the name redhat_install_host_tracer_tools, type boolean, and value true.
-
In the orcharhino management UI, navigate to the provisioning template you are using to provision your host.
-
Ensure that the template references the snippet that installs katello-host-tools-tracer:
<%= snippet 'redhat_register' %>
After the deployment, orcharhino automatically installs both katello-host-tools and katello-host-tools-tracer on your managed host.
Installing katello-host-tools-tracer Manually
The katello-host-tools package is responsible for the connection between managed hosts and content from orcharhino Server and orcharhino Proxies. It uploads currently used repositories and installed packages to orcharhino.
The katello-host-tools-tracer package signals orcharhino if services such as nginx have to be restarted or the managed host itself has to be rebooted after updating packages such as the Linux kernel.
-
In the orcharhino management UI, navigate to Monitor > Jobs.
-
Click Run Job.
-
In the Job category drop down menu, select Commands.
-
In the Job template drop down menu, select Run Command - SSH Default.
-
In the Search Query field, filter your managed hosts, for example all hosts running Oracle Linux:
os = OracleLinux
Alternatively, you can also filter for specific hosts using name = my-host.example.com.
-
In the Command field, enter the command to install the katello-host-tools-tracer package:
# dnf install -y katello-host-tools-tracer
-
Click Submit to install the package.
Enabling Ksplice
The following additional steps are required to deploy hosts that are configured to have ksplice installed and are capable of running uptrack-updates right after deployment:
-
Create a provisioning template snippet called install_ksplice containing the following commands.
-
On Oracle Linux 8 and 9:
# dnf install uptrack-offline
# dnf install uptrack-updates-$(uname -r)
# dnf install ksplice
-
On Oracle Linux 7:
# yum install uptrack-offline
# yum install uptrack-updates-$(uname -r)
# yum install ksplice
-
-
Add the snippet to the provisioning template:
<% if host_param_true?('install_ksplice') -%>
<%= snippet 'install_ksplice' -%>
<% end -%>
Add this before the following line:
# touch /tmp/foreman_built
We recommend not editing a locked template; instead, clone it and edit the clone. Remember to add the cloned template to your operating systems.
Enable the snippet by setting the default value to true:
- install_ksplice: boolean (default=true)
Add this before enabling the Puppet repository:
- enable-puppetlabs-puppet7-repo: boolean (default=false)
Create the parameter install_ksplice as a boolean in your Oracle Linux 8 ULN host group and set it to true.
In case an older version of the Linux kernel has been active during deployment, run the following command after rebooting the host.
-
On Oracle Linux 8 and 9:
# dnf install uptrack-updates-$(uname -r)
-
On Oracle Linux 7:
# yum install uptrack-updates-$(uname -r)
Creating Compute Profiles
You can use compute profiles to predefine virtual machine hardware details such as CPUs, memory, and storage. A default installation of orcharhino contains three predefined profiles:
-
1-Small
-
2-Medium
-
3-Large
-
In the orcharhino management UI, navigate to Infrastructure > Compute Profiles and click Create Compute Profile.
-
In the Name field, enter a name for the profile.
-
Click Submit. A new window opens with the name of the compute profile.
-
In the new window, click the name of each compute resource and edit the attributes you want to set for this compute profile.
Setting a Default Encrypted Root Password for Hosts
If you do not want to set a plain text default root password for the hosts that you provision, you can use a default encrypted password.
The default root password can be inherited by a host group and consequently by hosts in that group.
If you change the password and reprovision the hosts in the group that inherits the password, the password is overwritten on the hosts.
-
Generate an encrypted password:
# python -c 'import crypt,getpass;pw=getpass.getpass(); print(crypt.crypt(pw)) if (pw==getpass.getpass("Confirm: ")) else exit()'
-
Copy the password for later use.
-
In the orcharhino management UI, navigate to Administer > Settings.
-
On the Settings page, select the Provisioning tab.
-
In the Name column, navigate to Root password, and click Click to edit.
-
Paste the encrypted password, and click Save.
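The python -c command in step 1 relies on Python's crypt module, which is deprecated since Python 3.11 and removed in Python 3.13. If it is unavailable on your system, openssl can produce an equivalent SHA-512 hash. This assumes OpenSSL 1.1.1 or later; the fixed salt below is only to make the example reproducible, omit -salt in practice to get a random one:

```shell
# Generate a SHA-512 (type $6$) password hash with openssl instead of
# Python's crypt module. Replace My_Password with your own password.
HASH=$(openssl passwd -6 -salt examplesalt "My_Password")
echo "$HASH"
```

Paste the resulting string, beginning with $6$, into the Root password setting exactly as the procedure above describes.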
Using noVNC to Access Virtual Machines
You can use your browser to access the VNC console of VMs created by orcharhino.
orcharhino supports using noVNC on the following virtualization platforms:
-
VMware
-
Libvirt
-
oVirt
-
You must have a virtual machine created by orcharhino.
-
For existing virtual machines, ensure that the Display type in the Compute Resource settings is VNC.
-
You must import the Katello root CA certificate into your browser. Adding a security exception in the browser is not enough for using noVNC. For more information, see Installing the Katello Root CA Certificate in Administering orcharhino.
-
On the VM host system, configure the firewall to allow VNC service on ports 5900 to 5930:
-
On operating systems with the iptables command:
# iptables -A INPUT -p tcp --dport 5900:5930 -j ACCEPT
# service iptables save
-
On operating systems with the firewalld service:
# firewall-cmd --add-port=5900-5930/tcp
# firewall-cmd --add-port=5900-5930/tcp --permanent
-
-
In the orcharhino management UI, navigate to Infrastructure > Compute Resources and select the name of a compute resource.
-
In the Virtual Machines tab, select the name of a VM host. Ensure the machine is powered on and then select Console.
Host Parameter Hierarchy
You can access host parameters when provisioning hosts. Hosts inherit their parameters from the following locations, in order of increasing precedence:
Parameter Level | Set in orcharhino management UI
---|---
Globally defined parameters | Configure > Global parameters
Organization-level parameters | Administer > Organizations
Location-level parameters | Administer > Locations
Domain-level parameters | Infrastructure > Domains
Subnet-level parameters | Infrastructure > Subnets
Operating system-level parameters | Hosts > Operating Systems
Host group-level parameters | Configure > Host Groups
Host parameters | Hosts > All Hosts
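The precedence rules in this table amount to an ordered merge, with more specific levels overriding more general ones. The following is an illustrative Python sketch, not orcharhino code; the parameter names and values are examples:

```python
# Parameter levels from most general to most specific; later entries win.
levels = [
    ("global",       {"ntp-server": "ntp.example.com", "timezone": "UTC"}),
    ("organization", {"timezone": "Europe/Berlin"}),
    ("host group",   {"ntp-server": "ntp.internal.example.com"}),
    ("host",         {"timezone": "Europe/Paris"}),
]

def resolve(levels):
    resolved = {}
    for _name, params in levels:
        resolved.update(params)   # more specific level overrides
    return resolved

print(resolve(levels))
# {'ntp-server': 'ntp.internal.example.com', 'timezone': 'Europe/Paris'}
```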
The text and illustrations on this page are licensed by ATIX AG under a Creative Commons Attribution–Share Alike 3.0 Unported ("CC-BY-SA") license. This page also contains text from the official Foreman documentation which uses the same license ("CC-BY-SA").