Provisioning virtual machines on KVM (libvirt)
Kernel-based Virtual Machines (KVMs) use an open source virtualization daemon and API called libvirt running on AlmaLinux. orcharhino can connect to the libvirt API on a KVM server, provision hosts on the hypervisor, and control certain virtualization functions.
You can manage only virtual machines that were created through orcharhino. Virtual machines that use storage pool types other than directory are not supported.
You can use KVM provisioning to create hosts over a network connection or from an existing image.
-
Provide the installation medium for the operating systems that you want to use to provision hosts. For more information, see Syncing Repositories in Managing Content.
-
Provide an activation key for host registration. For more information, see Creating An Activation Key in Managing Content.
-
An orcharhino Proxy Server managing a network on the KVM server. Ensure no other DHCP services run on this network to avoid conflicts with orcharhino Proxy Server. For more information about network service configuration for orcharhino Proxy Servers, see Configuring Networking in Provisioning Hosts.
-
A server running KVM virtualization tools (libvirt daemon).
-
A virtual network running on the libvirt server. Only NAT and isolated virtual networks can be managed through orcharhino.
-
An existing virtual machine image if you want to use image-based provisioning. Ensure that this image exists in a storage pool on the KVM host. The default storage pool is usually located in /var/lib/libvirt/images. Only directory pool storage types can be managed through orcharhino. For a way to check the pool and network types, see the example after this list.
-
Optional: The examples in these procedures use the root user for KVM. If you want to use a non-root user on the KVM server, you must add the user to the libvirt group on the KVM server:
$ usermod -a -G libvirt non_root_user
-
For a list of permissions a non-admin user requires to provision hosts, see permissions required to provision hosts.
-
You can configure orcharhino to remove the associated virtual machine when you delete a host. For more information, see removing a virtual machine upon host deletion.
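To check whether an existing storage pool and virtual network meet these requirements, you can inspect their XML definitions on the KVM server. This is an optional check and not part of the official procedure; the pool name default and the network name examplenetwork are placeholders for your own names.
$ virsh pool-dumpxml default | grep '<pool'    # a directory pool reports type='dir'
$ virsh net-dumpxml examplenetwork | grep '<forward'    # expect mode='nat'; an isolated network has no forward element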
Configuring orcharhino Server for KVM connections
Before adding the KVM connection, create an SSH key pair for the foreman user to ensure a secure connection between orcharhino Server and KVM.
-
On orcharhino Server, switch to the foreman user:
$ su foreman -s /bin/bash
-
Generate the key pair:
$ ssh-keygen
-
Copy the public key to the KVM server:
$ ssh-copy-id root@kvm.example.com
-
Exit the bash shell for the foreman user:
$ exit
-
Install the libvirt-client package:
$ dnf install libvirt-client
-
Use the following command to test the connection to the KVM server:
$ su foreman -s /bin/bash -c 'virsh -c qemu+ssh://root@kvm.example.com/system list'
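If you configured a non-root user on the KVM server as described in the prerequisites, you can also test the connection with that account; non_root_user is a placeholder for your own user name.
$ su foreman -s /bin/bash -c 'virsh -c qemu+ssh://non_root_user@kvm.example.com/system list'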
Adding a KVM connection to orcharhino Server
Use this procedure to add KVM as a compute resource in orcharhino. To use the CLI instead of the orcharhino management UI, see the CLI procedure.
-
In the orcharhino management UI, navigate to Infrastructure > Compute Resources and click Create Compute Resource.
-
In the Name field, enter a name for the new compute resource.
-
From the Provider list, select Libvirt.
-
In the Description field, enter a description for the compute resource.
-
In the URL field, enter the connection URL to the KVM server. For example:
qemu+ssh://root@kvm.example.com/system
-
From the Display type list, select either VNC or Spice.
-
Optional: To secure console access for new hosts with a randomly generated password, select the Set a randomly generated password on the display connection checkbox. You can retrieve the password for accessing the VNC console of the guest virtual machine from the graphics element in the output of the following command executed on the KVM server:
$ virsh edit your_VM_name
For example: <graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0' passwd='your_randomly_generated_password'>
The password is randomly generated every time the console for the virtual machine is opened, for example, with virt-manager. For a way to read the password without opening an editor, see the example after this procedure.
-
Click Test Connection to ensure that orcharhino Server connects to the KVM server without fault.
-
Verify that the Locations and Organizations tabs are automatically set to your current context. If you want, add additional contexts to these tabs.
-
Click Submit to save the KVM connection.
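As a non-interactive alternative to virsh edit for reading the generated display password, you can dump the domain XML with security-sensitive attributes included. This sketch assumes that your libvirt version supports the --security-info flag of virsh dumpxml; your_VM_name is a placeholder.
$ virsh dumpxml --security-info your_VM_name | grep passwd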
-
To create a compute resource, enter the hammer compute-resource create command:
$ hammer compute-resource create --name "My_KVM_Server" \
--provider "Libvirt" --description "KVM server at kvm.example.com" \
--url "qemu+ssh://root@kvm.example.com/system" --locations "New York" \
--organizations "My_Organization"
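Optionally, verify that the compute resource was created and that its details are correct. This check is not part of the official procedure; My_KVM_Server matches the name used in the example above.
$ hammer compute-resource list
$ hammer compute-resource info --name "My_KVM_Server"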
Adding KVM images to orcharhino Server
To create hosts using image-based provisioning, you must add information about the image, such as access details and the image location, to your orcharhino Server.
Note that you can manage only directory pool storage types through orcharhino.
To use the CLI instead of the orcharhino management UI, see the CLI procedure.
-
In the orcharhino management UI, navigate to Infrastructure > Compute Resources and click the name of the KVM connection.
-
Click Create Image.
-
In the Name field, enter a name for the image.
-
From the Operating System list, select the base operating system of the image.
-
From the Architecture list, select the operating system architecture.
-
In the Username field, enter the SSH user name for image access. This is normally the root user.
-
In the Password field, enter the SSH password for image access.
-
In the Image path field, enter the full path that points to the image on the KVM server. For example:
/var/lib/libvirt/images/TestImage.qcow2
-
Optional: Select the User Data checkbox if the image supports user data input, such as cloud-init data.
-
Click Submit to save the image details.
-
Create the image with the hammer compute-resource image create command. Use the --uuid option to store the full path of the image location on the KVM server:
$ hammer compute-resource image create \
--name "KVM Image" \
--compute-resource "My_KVM_Server" --operatingsystem "RedHat version" \
--architecture "x86_64" \
--username root \
--user-data false \
--uuid "/var/lib/libvirt/images/KVMimage.qcow2"
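Optionally, confirm that the image is now associated with the compute resource; My_KVM_Server matches the name used in the examples above.
$ hammer compute-resource image list --compute-resource "My_KVM_Server"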
Adding KVM details to a compute profile
Use this procedure to add KVM hardware settings to a compute profile. When you create a host on KVM using this compute profile, these settings are automatically populated.
To use the CLI instead of the orcharhino management UI, see the CLI procedure.
-
In the orcharhino management UI, navigate to Infrastructure > Compute Profiles.
-
In the Compute Profiles window, click the name of an existing compute profile, or click Create Compute Profile, enter a Name, and click Submit.
-
Click the name of the KVM compute resource.
-
In the CPUs field, enter the number of CPUs to allocate to the new host.
-
In the Memory field, enter the amount of memory to allocate to the new host.
-
From the Image list, select the image to use if performing image-based provisioning.
-
From the Network Interfaces list, select the network parameters for the host’s network interface. You can create multiple network interfaces. However, at least one interface must point to an orcharhino Proxy-managed network.
-
In the Storage area, enter the storage parameters for the host. You can create multiple volumes for the host.
-
Click Submit to save the settings to the compute profile.
-
To create a compute profile, enter the following command:
$ hammer compute-profile create --name "Libvirt CP"
-
To add the values for the compute profile, enter the following command:
$ hammer compute-profile values create --compute-profile "Libvirt CP" \
--compute-resource "My_KVM_Server" \
--interface "compute_type=network,compute_model=virtio,compute_network=examplenetwork" \
--volume "pool_name=default,capacity=20G,format_type=qcow2" \
--compute-attributes "cpus=1,memory=1073741824"
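In the --compute-attributes option, memory is specified in bytes, so 1073741824 corresponds to 1 GiB. Optionally, review the values stored in the compute profile; this check is not part of the official procedure.
$ hammer compute-profile info --name "Libvirt CP"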
Creating hosts on KVM
In orcharhino, you can use KVM provisioning to create hosts over a network connection or from an existing image:
-
If you want to create a host over a network connection, the new host must be able to access either orcharhino Server’s integrated orcharhino Proxy or an external orcharhino Proxy Server on a KVM virtual network, so that the host has access to PXE provisioning services. This new host entry triggers the KVM server to create and start a virtual machine. If the virtual machine detects the defined orcharhino Proxy Server through the virtual network, the virtual machine boots to PXE and begins to install the chosen operating system.
-
If you want to create a host with an existing image, the new host entry triggers the KVM server to create the virtual machine using a pre-existing image as a basis for the new volume.
To use the CLI instead of the orcharhino management UI, see the CLI procedure.
For network-based provisioning, if you use a virtual network on the KVM server for provisioning, select a network that does not provide DHCP assignments, because a network that assigns DHCP leases causes DHCP conflicts with orcharhino Server when booting new hosts.
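To check whether a virtual network on the KVM server provides DHCP assignments, you can inspect its XML definition; examplenetwork is a placeholder for your network name.
$ virsh net-dumpxml examplenetwork | grep -A1 '<dhcp'    # no output means the network does not assign DHCP leases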
-
In the orcharhino management UI, navigate to Hosts > Create Host.
-
In the Name field, enter a name for the host.
-
Optional: Click the Organization tab and change the organization context to match your requirement.
-
Optional: Click the Location tab and change the location context to match your requirement.
-
From the Host Group list, select a host group that you want to assign your host to. That host group will populate the form.
-
From the Deploy on list, select the KVM connection.
-
From the Compute Profile list, select a profile to use to automatically populate virtual machine settings. The KVM-specific fields are populated with settings from your compute profile. Modify these settings if required.
-
Click the Interfaces tab, and on the interface of the host, click Edit.
-
Verify that the fields are populated with values. Note in particular:
-
orcharhino automatically assigns an IP address for the new host.
-
Ensure that the MAC address field is blank. KVM assigns a MAC address to the host during provisioning.
-
The Name from the Host tab becomes the DNS name.
-
Ensure that orcharhino automatically selects the Managed, Primary, and Provision options for the first interface on the host. If not, select them.
-
Click OK to save. To add another interface, click Add Interface. You can select only one interface for Provision and Primary.
-
Click the Operating System tab, and confirm that all fields automatically contain values.
-
Select the Provisioning Method that you want to use:
-
For network-based provisioning, click Network Based.
-
For image-based provisioning, click Image Based.
-
Click Resolve in Provisioning templates to check that the new host can identify the right provisioning templates to use.
-
Click the Virtual Machine tab and confirm that these settings are populated with details from the host group and compute profile. Modify these settings to suit your needs.
-
Click the Parameters tab, and ensure that a parameter exists that provides an activation key. If not, add an activation key. For an example of setting this parameter from the CLI, see the sketch after this procedure.
-
Click Submit to save the host entry.
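The activation key is usually supplied as a host parameter. In Katello-based installations such as orcharhino, this parameter is typically named kt_activation_keys, but verify the name against your provisioning templates. A minimal sketch using hammer, with My_Host_Name and My_Activation_Key as placeholders:
$ hammer host set-parameter --host "My_Host_Name" --name "kt_activation_keys" --value "My_Activation_Key"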
-
To use network-based provisioning, create the host with the hammer host create command and include --provision-method build. Replace the values in the following example with the appropriate values for your environment.
$ hammer host create \
--build true \
--compute-attributes="cpus=1,memory=1073741824" \
--compute-resource "My_KVM_Server" \
--enabled true \
--hostgroup "My_Host_Group" \
--interface "managed=true,primary=true,provision=true,compute_type=network,compute_network=examplenetwork" \
--location "My_Location" \
--managed true \
--name "My_Host_Name" \
--organization "My_Organization" \
--provision-method "build" \
--root-password "My_Password" \
--volume="pool_name=default,capacity=20G,format_type=qcow2"
-
To use image-based provisioning, create the host with the hammer host create command and include --provision-method image. Replace the values in the following example with the appropriate values for your environment.
$ hammer host create \
--compute-attributes="cpus=1,memory=1073741824" \
--compute-resource "My_KVM_Server" \
--enabled true \
--hostgroup "My_Host_Group" \
--image "My_KVM_Image" \
--interface "managed=true,primary=true,provision=true,compute_type=network,compute_network=examplenetwork" \
--location "My_Location" \
--managed true \
--name "My_Host_Name" \
--organization "My_Organization" \
--provision-method "image" \
--volume="pool_name=default,capacity=20G,format_type=qcow2"
For more information about additional host creation parameters for this compute resource, enter the hammer host create --help command.
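After you create the host entry, you can optionally check its build status and compute resource association; My_Host_Name is a placeholder and might need to include the domain, for example My_Host_Name.example.com.
$ hammer host info --name "My_Host_Name"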
The text and illustrations on this page are licensed by ATIX AG under a Creative Commons Attribution Share Alike 4.0 International ("CC BY-SA 4.0") license. This page also contains text from the official Foreman documentation which uses the same license ("CC BY-SA 4.0").