ATIX offers Puppet training for beginners and advanced users on using Puppet as a configuration management tool. The training helps you automate your infrastructure with Puppet and covers how to create, use, and maintain modules based on best practices. Refer to the Puppet training website for more information or contact us.
This guide is intended to collect Puppet related documentation in a single place. It is meant to complement relevant subsections of the management UI chapter:
Puppet is a configuration management tool using a declarative language to describe the state hosts should be in.
Puppet increases your productivity as you can administer multiple hosts at the same time and decreases your configuration effort as Puppet makes it easy to verify and possibly correct the state hosts are in. This increases reliability, scalability, performance, and service availability.
Puppet uses a server-client architecture: managed hosts run the Puppet agent, and your orcharhino or any orcharhino proxy acts as the Puppet master. Puppet lets you define the desired state of your hosts, and the Puppet agent enforces this state on them. On each run, the Puppet agent sends facts to your Puppet master, receives a catalog compiled from Puppet manifests, enforces the desired configuration, and sends a report to your orcharhino.
Any differences between your desired state and the actual state are called drifts in Puppet terminology.
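You can inspect drift without changing anything by running the agent in no-op mode. This is a sketch using a standard Puppet agent option, not an orcharhino-specific command:

```shell
# Compare the actual state against the catalog and report drift,
# but do not apply any changes
puppet agent --test --noop
```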
Puppet facts are information about the hosts running the Puppet agent.
Running facter on a host displays the available Puppet facts as key-value pairs.
They are collected by the Puppet agent and reported to the Puppet master on each run.
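For example, you can query individual facts directly on a host. The output depends on the host, so the values shown in the comments are purely illustrative:

```shell
# Query two facts; facter prints them as key-value pairs
facter os.family fqdn
# e.g.:
# fqdn => host01.example.com
# os.family => RedHat
```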
orcharhino users can define Puppet parameters as key-value pairs. They behave similarly to host parameters or Ansible variables: each has a data type, and you can create and edit them.
This guide uses a Puppet module to install, configure, and manage the ntp service.
There are four requirements to run the Puppet agent on managed hosts:
Provide the managed host with the Puppet agent via its activation key.
Adjust the provisioning templates to have the Puppet agent installed automatically when provisioning a new host:
```
<% pm_set = @host.puppetmaster.empty? ? false : true
   puppet_enabled = pm_set || host_param_true?('force-puppet') -%>
<% if puppet_enabled %>
<%= snippet 'puppet_setup' %>
<% end -%>
```
Setting a Puppet master automatically installs Puppet on the managed host. Alternatively, add the enable-puppet6 parameter to your new host or host group. Set the parameter type to boolean and its value to true.
Set a Puppet environment, Puppet master, and Puppet CA when provisioning a host.
In this example, we are going to add the ntp module to our hosts.
To accomplish this, navigate to forge.puppet.com and search for ntp.
One of the first modules is puppetlabs/ntp:
The Puppet Forge is a repository for pre-built Puppet modules. We recommend using modules flagged as supported as shown above.
Connect to your orcharhino via SSH and install the module via the following command:
```shell
puppet module install puppetlabs-ntp -i /etc/puppetlabs/code/environments/production/modules
```

Use the -i parameter to specify the path the module should be installed to, that is, the modules directory of the proper Puppet environment (here: production).
Once the installation is completed, the output looks as follows:
```
Notice: Preparing to install into /etc/puppetlabs/code/environments/production/modules ...
Notice: Created target directory /etc/puppetlabs/code/environments/production/modules
Notice: Downloading from https://forgeapi.puppet.com ...
Notice: Installing -- do not interrupt ...
/etc/puppetlabs/code/environments/production/modules
└─┬ puppetlabs-ntp (v8.3.0)
  └── puppetlabs-stdlib (v4.25.1)
```
An alternative way to install a Puppet module is to copy a folder containing the Puppet module to the module path as mentioned above. Note that you will have to manage its dependencies by hand.
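When you copy a module manually, you can check afterwards whether its dependencies are satisfied. This sketch uses a stock Puppet command:

```shell
# List installed modules as a dependency tree;
# unmet or mismatched dependencies are flagged in the output
puppet module list --tree
```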
There are two steps necessary to properly upgrade a Puppet module:
Connect to your Puppet master via SSH and run the following command to find out where your Puppet modules are located:
puppet config print modulepath
This returns the module paths configured on your Puppet master.
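The original output was not preserved here; on a default installation it typically resembles the following (the exact paths may differ on your system):

```shell
$ puppet config print modulepath
/etc/puppetlabs/code/environments/production/modules:/etc/puppetlabs/code/modules:/opt/puppetlabs/puppet/modules
```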
If the module is located in the path as displayed above, the following command will upgrade the module:
puppet module upgrade puppetlabs-ntp
Navigate to Configure > Classes.
You have to import a Puppet module into orcharhino before you can assign any of its classes to hosts. Make sure to select Any Organization and Any Location as context otherwise the import might fail.
Click the Import button in the top right corner and select which machine you want to import modules from. You may typically choose between your orcharhino and any attached orcharhino proxies.
This will show the Puppet module to be imported:
Select the classes of your choice and check the box on the left (1).
Click the Update button (2) to import the Puppet classes to orcharhino.
This will result in a notification as follows:
Successfully updated environments and Puppet classes from the on-disk Puppet installation
After importing the Puppet class to orcharhino, we override the list of ntp servers with specific servers to use. As ATIX is based in Garching near Munich, we choose the German ntp server of ntppool.org in this example.
Navigate to Configure > Classes and select the ntp Puppet module to change its default configuration.
Select the Smart Class Parameter tab (1) and search for servers (2).
Make sure the Override checkbox (3) is checked.
The Parameter Type drop down menu (4) has to be set to array.
Insert a list of your desired ntp servers as Default Value (5):
An alternative way to describe the array is YAML syntax:

```yaml
- 0.de.pool.ntp.org
- 1.de.pool.ntp.org
- 2.de.pool.ntp.org
- 3.de.pool.ntp.org
```
After the values have been added, click the submit button at the bottom. This changes the default configuration of the ntp Puppet module.
r10k allows you to manage Puppet modules and environments.
It uses a list of Puppet modules in a so-called Puppetfile to download those modules from the Puppet Forge or from git repositories.
r10k does not handle dependencies between Puppet modules.
A Puppetfile may look as follows:
```
forge "https://forge.puppet.com"

mod "puppet-nginx",
  :git => "https://github.com/voxpupuli/puppet-nginx.git",
  :ref => "master"

mod "puppetlabs/apache"
mod "puppetlabs/ntp", "8.3.0"
```
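With a Puppetfile like the one above in place, r10k can be invoked as follows. This is a sketch assuming r10k is installed and run from the directory containing the Puppetfile:

```shell
# Install the modules listed in the Puppetfile into ./modules
r10k puppetfile install

# Deploy all environments (one per branch of your control repository),
# including the modules from each environment's Puppetfile
r10k deploy environment --puppetfile
```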
There are three different ways to use a Puppet class within orcharhino. You can assign Puppet classes to either a single host or multiple hosts via host groupings such as a location based context or Puppet config groups in combination with orcharhino host groups.
A Puppet module consists of several Puppet classes. You can only assign Puppet classes to hosts.
Navigate to Hosts > All hosts and click on the edit button of the host you want to add the ntp class to. Go to the Puppet classes tab as shown below and look for the ntp class:
Click the + symbol next to ntp to add the ntp class to the list of included classes.
Clicking the submit button on the bottom will save your changes to orcharhino.
If the Puppet classes tab of an individual host is empty, check if it’s assigned a proper Puppet environment as follows:
Select the YAML button on the host overview page to verify the proper Puppet configuration. This should produce output similar to the following:
```yaml
---
parameters:
  <SHORTENED YAML OUTPUT>
classes:
  ntp:
    servers: '["0.de.pool.ntp.org","1.de.pool.ntp.org","2.de.pool.ntp.org","3.de.pool.ntp.org"]'
environment: production
...
```
Connect to your host via SSH and check the content of
/etc/ntp.conf to verify the ntp configuration.
This example assumes your host is running CentOS 7.
Other operating systems may store the ntp config file in a different path.
You may need to trigger a Puppet agent run on your desired host manually.
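The command itself is missing from this text; the standard way to trigger a run with stock Puppet (an assumption, not orcharhino-specific tooling) is:

```shell
# Run the Puppet agent once in the foreground and print the applied changes
puppet agent --test
```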
Running the following command on the host will check which ntp servers are used for clock synchronization:
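The exact command is not included in this text; one way to inspect the configured servers, assuming the default config path on CentOS 7, is:

```shell
# Print the NTP configuration that Puppet manages
cat /etc/ntp.conf
```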
This will return output similar to the following:
```
# ntp.conf: Managed by puppet.
server 0.de.pool.ntp.org
server 1.de.pool.ntp.org
server 2.de.pool.ntp.org
server 3.de.pool.ntp.org
```
You now have a working ntp module which you can add to a host or group of hosts to roll out your ntp configuration automatically.
To assign the ntp Puppet class to multiple hosts at once, assign the class to a host group and then assign this host group to an organization and location. You can also configure orcharhino so that every host deployed into a specific host group, organization, or location receives this Puppet class automatically.
The following parameters have to be set:
A Lifecycle Environment (1) describes the stage in which certain versions of content are available to hosts.
A Content View (2) is comprised of products and allows for version control of content repositories.
A Puppet Environment (3) allows you to supply a group of hosts with their own dedicated configuration.
Then, go to Configure > Classes and add the Puppet class to the included classes or to the included config groups if a Puppet config group is configured, as we did previously for a single host.
Click submit and assign the host group to your organization or location (Administer > Organizations/Locations > Your Organization/Location > Host Groups).
A Puppet config group is a named list of Puppet classes that allows you to combine their capabilities and assign them to a host at a click. This is equivalent to the concept of profiles in pure Puppet.
To create a Puppet config group, navigate to Configure > Config Groups > New Config Group and select the classes you want to add to the config group.
Choose a meaningful Name (1) for the Puppet config group.
Add any desired Puppet classes to the Included Classes field (2).
Clicking the Submit button (3) saves your Puppet config group to orcharhino.
This allows you to assign the same config group to several servers without configuring each server individually.
Parameters can be overridden at different layers. Parameters are overwritten in the following order, from weakest to strongest:

Global parameters

Organization parameters

Location parameters

Host group parameters

Host parameters
This means host-specific parameters override any other parameters, while location parameters, for example, only override organization or global parameters.
This feature is especially useful when grouping hosts with location or organization contexts.
This is recommended if you have multiple machines and only want to make changes to a single one.
You can override parameters on specific hosts by navigating to Hosts > All Hosts, edit a host and select the parameters tab.
Clicking the override button as highlighted above allows you to edit the Puppet parameter in the text area next to it where you can insert a host specific value.
You can use groups of hosts to override Puppet parameters for multiple hosts at once.
The following example uses the location context to illustrate setting context-based parameters.
Navigate to Configure > Classes and go to the smart class parameter tab. Select the desired parameter (e.g. servers as detailed above).
The Order list (1) allows you to define the hierarchy of the Puppet parameters. The individual host (fqdn) is the most relevant and the location context (location) the least relevant.
There are multiple options on how to handle different values (2): Merge Overrides allows you to add all further matched parameters after finding the first match. Merge Default allows you to also include the default value even if there are more specific values defined. Avoid Duplicates allows you to create a list of unique values for the selected parameter.
The matcher field (3) requires an attribute type from the order list. This example chooses the location context, which is set to Paris. The value is set to French ntp servers.
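As a sketch, the matcher and its value might look as follows; the French pool server names are an assumption for illustration:

```yaml
# Matcher: location = Paris
# Value (parameter type: array):
- 0.fr.pool.ntp.org
- 1.fr.pool.ntp.org
- 2.fr.pool.ntp.org
- 3.fr.pool.ntp.org
```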
The Add Matcher button (4) allows you to add more matchers.
Remember to click the submit button on the bottom to save your changes to orcharhino.
Overriding an organization-based context is essentially the same as overriding a location-based context, except that you select an organization, and that organization-based Puppet parameters are overridden by location-based Puppet parameters, as described in the Puppet parameter hierarchy.
You need to assign the proper job template to the Run Puppet Once feature to run Puppet on managed hosts.
Navigate to Administer > Remote Execution Features.
Click the puppet_run_host remote execution feature.

Select the Run Puppet Once - SSH Default job template.