Provisioning Virtual Machines on KVM (libvirt)

Kernel-based Virtual Machines (KVMs) use an open source virtualization daemon and API called libvirt running on Red Hat Enterprise Linux. orcharhino can connect to the libvirt API on a KVM server, provision hosts on the hypervisor, and control certain virtualization functions.

orcharhino can manage only virtual machines that it created. Virtual machines that use a storage pool type other than directory are not supported.

You can use KVM provisioning to create hosts over a network connection or from an existing image.

Prerequisites
  • Provide the installation medium for the operating systems that you want to use to provision hosts. You can use synchronized content repositories for Red Hat Enterprise Linux. For more information, see Syncing Repositories in Managing Content.

  • Provide an activation key for host registration. For more information, see Creating An Activation Key in Managing Content.

  • An orcharhino Proxy managing a network on the KVM server. Ensure that no other DHCP services run on this network to avoid conflicts with orcharhino Proxy. For more information about network service configuration for orcharhino Proxies, see Configuring Networking in Provisioning Hosts.

  • A server running KVM virtualization tools (libvirt daemon).

  • A virtual network running on the libvirt server. Only NAT and isolated virtual networks can be managed through orcharhino.

  • An existing virtual machine image if you want to use image-based provisioning. Ensure that this image exists in a storage pool on the KVM host. The default storage pool is usually located in /var/lib/libvirt/images. Only directory pool storage types can be managed through orcharhino.

  • Optional: The examples in these procedures use the root user for KVM. If you want to use a non-root user on the KVM server, you must add the user to the libvirt group on the KVM server:

    $ usermod -a -G libvirt non_root_user
  • An orcharhino user account with a custom role that grants the following permissions:

    • view_compute_resources

    • destroy_compute_resources_vms

    • power_compute_resources_vms

    • create_compute_resources_vms

    • view_compute_resources_vms

    • view_locations

    • view_subnets

      For more information about creating roles, see Creating a Role in Administering orcharhino. For more information about adding permissions to a role, see Adding Permissions to a Role in Administering orcharhino.
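Two of the prerequisites above, the virtual network mode and the storage pool type, can be verified from the libvirt XML definitions: `virsh net-dumpxml` reports the network's forward mode, and `virsh pool-dumpxml` reports the pool type. The sketch below runs the same extraction against sample XML; the network and pool definitions are illustrative stand-ins for real `virsh` output on your KVM server.

```shell
# Sample stand-ins for `virsh net-dumpxml examplenetwork` and
# `virsh pool-dumpxml default` output (both illustrative).
net_xml="<network>
  <name>examplenetwork</name>
  <forward mode='nat'/>
</network>"
pool_xml="<pool type='dir'>
  <name>default</name>
  <target><path>/var/lib/libvirt/images</path></target>
</pool>"

# NAT networks report mode 'nat'; isolated networks have no <forward> element.
echo "$net_xml" | grep -o "mode='[a-z]*'" | cut -d"'" -f2

# orcharhino supports only pools of type 'dir' (directory storage).
echo "$pool_xml" | grep -o "type='[a-z]*'" | cut -d"'" -f2
```

On a real KVM server, pipe the output of the actual `virsh net-dumpxml` and `virsh pool-dumpxml` commands into the same `grep` filters instead of the sample variables.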

Configuring orcharhino Server for KVM Connections

Before adding the KVM connection, create an SSH key pair for the foreman user to ensure a secure connection between orcharhino Server and KVM.

Procedure
  1. On orcharhino Server, switch to the foreman user:

    $ su foreman -s /bin/bash
  2. Generate the key pair:

    $ ssh-keygen
  3. Copy the public key to the KVM server:

    $ ssh-copy-id root@kvm.example.com
  4. Exit the bash shell for the foreman user:

    $ exit
  5. Install the libvirt-client package:

    $ dnf install libvirt-client
  6. Use the following command to test the connection to the KVM server:

    $ su foreman -s /bin/bash -c 'virsh -c qemu+ssh://root@kvm.example.com/system list'

Adding a KVM Connection to orcharhino Server

Use this procedure to add KVM as a compute resource in orcharhino. To use the CLI instead of the orcharhino management UI, see the CLI procedure.

Procedure
  1. In the orcharhino management UI, navigate to Infrastructure > Compute Resources and click Create Compute Resource.

  2. In the Name field, enter a name for the new compute resource.

  3. From the Provider list, select Libvirt.

  4. In the Description field, enter a description for the compute resource.

  5. In the URL field, enter the connection URL to the KVM server. For example:

     qemu+ssh://root@kvm.example.com/system
  6. From the Display type list, select either VNC or Spice.

  7. Optional: To secure console access for new hosts with a randomly generated password, select the Set a randomly generated password on the display connection checkbox. You can retrieve the password for the VNC console to access the guest virtual machine console from the output of the following command executed on the KVM server:

    $ virsh dumpxml your_VM_name
    <graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0' passwd='your_randomly_generated_password'>

    The password is randomly generated every time the console for the virtual machine is opened, for example, with virt-manager.

  8. Click Test Connection to ensure that orcharhino Server connects to the KVM server without fault.

  9. Verify that the Locations and Organizations tabs are automatically set to your current context. If you want, add additional contexts to these tabs.

  10. Click Submit to save the KVM connection.
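The password retrieval in step 7 can also be scripted rather than read from the full XML dump: the `passwd` attribute of the `<graphics>` element can be extracted with standard tools. The sketch below runs against a sample `<graphics>` element; the element and the password value are hypothetical stand-ins for real `virsh dumpxml` output.

```shell
# Sample <graphics> element standing in for part of the
# `virsh dumpxml your_VM_name` output; the password value is hypothetical.
graphics="<graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0' passwd='s3cret'>"

# On a real KVM server, replace the echo with:
#   virsh dumpxml your_VM_name
echo "$graphics" | grep -o "passwd='[^']*'" | cut -d"'" -f2
```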

CLI procedure
  • To create a compute resource, enter the hammer compute-resource create command:

    $ hammer compute-resource create --name "My_KVM_Server" \
    --provider "Libvirt" --description "KVM server at kvm.example.com" \
    --url "qemu+ssh://root@kvm.example.com/system" --locations "New York" \
    --organizations "My_Organization"

Adding KVM Images to orcharhino Server

To create hosts using image-based provisioning, you must add information about the image, such as access details and the image location, to your orcharhino Server.

Note that you can manage only directory pool storage types through orcharhino.

To use the CLI instead of the orcharhino management UI, see the CLI procedure.

Procedure
  1. In the orcharhino management UI, navigate to Infrastructure > Compute Resources and click the name of the KVM connection.

  2. Click Create Image.

  3. In the Name field, enter a name for the image.

  4. From the Operating System list, select the base operating system of the image.

  5. From the Architecture list, select the operating system architecture.

  6. In the Username field, enter the SSH user name for image access. This is normally the root user.

  7. In the Password field, enter the SSH password for image access.

  8. In the Image path field, enter the full path that points to the image on the KVM server. For example:

     /var/lib/libvirt/images/TestImage.qcow2
  9. Optional: Select the User Data checkbox if the image supports user data input, such as cloud-init data.

  10. Click Submit to save the image details.

CLI procedure
  • Create the image with the hammer compute-resource image create command. Use the --uuid option to store the full path of the image location on the KVM server.

    $ hammer compute-resource image create \
    --name "KVM Image" \
    --compute-resource "My_KVM_Server" \
    --operatingsystem "RedHat version" \
    --architecture "x86_64" \
    --username root \
    --user-data false \
    --uuid "/var/lib/libvirt/images/KVMimage.qcow2"

Adding KVM Details to a Compute Profile

Use this procedure to add KVM hardware settings to a compute profile. When you create a host on KVM using this compute profile, these settings are automatically populated.

To use the CLI instead of the orcharhino management UI, see the CLI procedure.

Procedure
  1. In the orcharhino management UI, navigate to Infrastructure > Compute Profiles.

  2. In the Compute Profiles window, click the name of an existing compute profile, or click Create Compute Profile, enter a Name, and click Submit.

  3. Click the name of the KVM compute resource.

  4. In the CPUs field, enter the number of CPUs to allocate to the new host.

  5. In the Memory field, enter the amount of memory to allocate to the new host.

  6. From the Image list, select the image to use if performing image-based provisioning.

  7. From the Network Interfaces list, select the network parameters for the host’s network interface. You can create multiple network interfaces. However, at least one interface must point to an orcharhino Proxy-managed network.

  8. In the Storage area, enter the storage parameters for the host. You can create multiple volumes for the host.

  9. Click Submit to save the settings to the compute profile.

CLI procedure
  1. To create a compute profile, enter the following command:

    $ hammer compute-profile create --name "Libvirt CP"
  2. To add the values for the compute profile, enter the following command:

    $ hammer compute-profile values create --compute-profile "Libvirt CP" \
    --compute-resource "My_KVM_Server" \
    --interface "compute_type=network,compute_model=virtio,compute_network=examplenetwork" \
    --volume "pool_name=default,capacity=20G,format_type=qcow2" \
    --compute-attributes "cpus=1,memory=1073741824"
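Note that --compute-attributes takes the memory value in bytes: 1073741824 in the example above is 1 GiB. A quick shell conversion for other sizes:

```shell
# GiB to bytes, e.g. for a 4 GiB host:
echo $((4 * 1024 * 1024 * 1024))
```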

Creating Hosts on KVM

In orcharhino, you can use KVM provisioning to create hosts over a network connection or from an existing image:

  • If you want to create a host over a network connection, the new host must be able to access either orcharhino Server’s integrated orcharhino Proxy or an external orcharhino Proxy on a KVM virtual network, so that the host has access to PXE provisioning services. This new host entry triggers the KVM server to create and start a virtual machine. If the virtual machine detects the defined orcharhino Proxy through the virtual network, the virtual machine boots to PXE and begins to install the chosen operating system.

  • If you want to create a host with an existing image, the new host entry triggers the KVM server to create the virtual machine using a pre-existing image as a basis for the new volume.

To use the CLI instead of the orcharhino management UI, see the CLI procedure.

DHCP Conflicts

For network-based provisioning, if you use a virtual network on the KVM server, select a network that does not provide DHCP assignments. Otherwise, DHCP assignments from the virtual network conflict with orcharhino Server when booting new hosts.
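You can check whether a libvirt network hands out its own DHCP leases by looking for a `<dhcp>` block in its XML definition. The sketch below runs against sample `virsh net-dumpxml` output; the network name and definition are illustrative.

```shell
# Sample stand-in for `virsh net-dumpxml examplenetwork` output.
net_xml="<network>
  <name>examplenetwork</name>
  <forward mode='nat'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.2' end='192.168.100.254'/>
    </dhcp>
  </ip>
</network>"

# A <dhcp> element means the network serves its own leases, which would
# conflict with the DHCP service that orcharhino Proxy provides.
if echo "$net_xml" | grep -q '<dhcp>'; then
  echo "network provides DHCP: disable it before provisioning"
fi
```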

Procedure
  1. In the orcharhino management UI, navigate to Hosts > Create Host.

  2. In the Name field, enter a name for the host.

  3. Click the Organization and Location tabs to ensure that the provisioning context is automatically set to the current context.

  4. From the Host Group list, select the host group that you want to use to populate the form.

  5. From the Deploy on list, select the KVM connection.

  6. From the Compute Profile list, select a profile to use to automatically populate virtual machine settings.

  7. Click the Interface tab and click Edit on the host’s interface.

  8. Verify that the fields are automatically populated, particularly the following items:

    • The Name from the Host tab becomes the DNS name.

    • orcharhino Server automatically assigns an IP address for the new host.

    • The MAC address field is blank. The KVM server assigns a MAC address to the host.

    • The Managed, Primary, and Provision options are automatically selected for the first interface on the host. If not, select them.

    • The KVM-specific fields are populated with settings from your compute profile. Modify these settings if required.

  9. Click the Operating System tab, and confirm that all fields automatically contain values.

  10. Select the Provisioning Method that you want to use:

    • For network-based provisioning, click Network Based.

    • For image-based provisioning, click Image Based.

  11. Click Resolve in Provisioning templates to check that the new host can identify the correct provisioning templates to use.

  12. Click the Virtual Machine tab and confirm that these settings are populated with details from the host group and compute profile. Modify these settings to suit your needs.

  13. Click the Parameters tab, and ensure that a parameter exists that provides an activation key. If not, add an activation key.

  14. Click Submit to save the host entry.

CLI procedure
  • To use network-based provisioning, create the host with the hammer host create command and include --provision-method build. Replace the values in the following example with the appropriate values for your environment.

    $ hammer host create \
    --name "kvm-host1" \
    --organization "My_Organization" \
    --location "New York" \
    --hostgroup "Base" \
    --compute-resource "My_KVM_Server" \
    --provision-method build \
    --build true \
    --enabled true \
    --managed true \
    --interface "managed=true,primary=true,provision=true,compute_type=network,compute_network=examplenetwork" \
    --compute-attributes="cpus=1,memory=1073741824" \
    --volume="pool_name=default,capacity=20G,format_type=qcow2" \
    --root-password "password"
  • To use image-based provisioning, create the host with the hammer host create command and include --provision-method image. Replace the values in the following example with the appropriate values for your environment.

    $ hammer host create \
    --name "kvm-host2" \
    --organization "My_Organization" \
    --location "New York" \
    --hostgroup "Base" \
    --compute-resource "My_KVM_Server" \
    --provision-method image \
    --image "KVM Image" \
    --enabled true \
    --managed true \
    --interface "managed=true,primary=true,provision=true,compute_type=network,compute_network=examplenetwork" \
    --compute-attributes="cpus=1,memory=1073741824" \
    --volume="pool_name=default,capacity=20G,format_type=qcow2"

For more information about additional host creation parameters for this compute resource, enter the hammer host create --help command.

The text and illustrations on this page are licensed by ATIX AG under a Creative Commons Attribution–Share Alike 3.0 Unported ("CC-BY-SA") license. This page also contains text from the official Foreman documentation which uses the same license ("CC-BY-SA").