Provisioning Virtual Machines in Proxmox

Proxmox is an open source virtualization platform based on the KVM hypervisor and LXC containers. orcharhino can interact with Proxmox to create virtual machines and to control their power management states.

Installing the Proxmox Plug-in

Install the Proxmox plug-in to attach a Proxmox compute resource provider to orcharhino. This allows you to manage and deploy hosts to Proxmox.

Procedure
  1. Install the Proxmox plug-in package on your orcharhino Server:

    $ dnf install rubygem-foreman_fog_proxmox
  2. Enable the Proxmox plug-in on your orcharhino Server:

    $ orcharhino-installer
  3. Optional: In the orcharhino management UI, navigate to Administer > About and select the Compute Resources tab to verify the installation of the Proxmox plug-in.
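
As an alternative to the check in the management UI, you can verify the installation on the command line of your orcharhino Server. A minimal sketch that only assumes the package name from step 1:

    $ rpm -q rubygem-foreman_fog_proxmox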

Adding a Proxmox Connection to orcharhino Server

Use this procedure to add a Proxmox connection to orcharhino.

Prerequisites
  • Ensure that the host and network-based firewalls are configured to allow communication from orcharhino to Proxmox on TCP port 443.

  • Verify that orcharhino Server is able to resolve the host name of your Proxmox compute resource and that Proxmox is able to resolve the host name of orcharhino Server.
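
You can check both prerequisites from your orcharhino Server before you continue. This is a minimal sketch; proxmox.example.com is a placeholder for the host name of your Proxmox server:

    $ getent hosts proxmox.example.com

If name resolution works, verify that the Proxmox API port accepts connections. Any HTTP response confirms reachability:

    $ curl -k -I https://proxmox.example.com:443/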

Procedure
  1. In the orcharhino management UI, navigate to Infrastructure > Compute Resources.

  2. Click Create Compute Resource.

  3. In the Name field, enter a name for the compute resource.

  4. From the Provider list, select Proxmox.

  5. Optional: In the Description field, enter a description for the compute resource.

  6. Optional: In the Authentication method list, select User token if you do not want to use access tickets to authenticate on Proxmox.

  7. In the Url field, enter the IP address or host name of your Proxmox server. Ensure that you specify its port and append /api2/json to the path.

  8. In the Username field, enter the user name with permission to access the Proxmox server.

  9. In the Password field, enter the password for the user.

  10. Optional: Select the SSL verify peer checkbox to verify the certificate used for the encrypted connection from orcharhino to Proxmox.

  11. In the X509 Certificate Authorities field, enter the certificate of a certificate authority or a correctly ordered chain of CA certificates.

  12. Optional: Click Test Connection to verify the URL and its corresponding credentials.

  13. Click Submit to save the connection.
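
If Test Connection fails, you can verify the URL and credentials independently of orcharhino by requesting an access ticket from the Proxmox API. A hedged sketch, with proxmox.example.com, the port, the user name, and My_Password as placeholders:

    $ curl -k -d "username=root@pam" -d "password=My_Password" https://proxmox.example.com:443/api2/json/access/ticket

A JSON response that contains a ticket confirms that the URL and credentials are valid.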

Adding Proxmox Images to orcharhino Server

Proxmox uses templates as images for creating new virtual machines. When you use image-based provisioning to create new hosts, you need to add Proxmox template details to your orcharhino Server. This includes access details and the template name.

Procedure
  1. In the orcharhino management UI, navigate to Infrastructure > Compute Resources.

  2. Select your Proxmox compute resource.

  3. Click Create Image.

  4. In the Name field, enter a name for the image.

  5. From the Operating System list, select the base operating system of the image.

  6. From the Architecture list, select the operating system architecture.

  7. In the Username field, enter the SSH user name for image access. By default, this is set to root. This is only necessary if the image has an inbuilt user account.

  8. From the User data list, select whether you want the images to support user data input, such as cloud-init data.

  9. In the Password field, enter the SSH password for image access.

  10. In the Image field, enter the name of the template in your Proxmox environment.

  11. Click Submit to save the image details.
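
orcharhino can only reference templates that already exist on Proxmox. If you still need to create one, you can convert a prepared virtual machine into a template on your Proxmox node. A minimal sketch, assuming the hypothetical VMID 9000:

    $ qm list

This lists the virtual machines and their VMIDs on the node. Then convert the prepared virtual machine into a template:

    $ qm template 9000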

Adding Proxmox Details to a Compute Profile

You can predefine certain hardware settings for virtual machines on Proxmox by adding these hardware settings to a compute profile.

Note that this procedure assumes you are using KVM/qemu-based virtualization.

Procedure
  1. In the orcharhino management UI, navigate to Infrastructure > Compute Profiles.

  2. Select a compute profile.

  3. Select a Proxmox compute resource.

  4. In the Type list, select the virtualization technology that hosts created with this compute profile will run on. On Proxmox, you can choose between LXC (Linux containers) and KVM/QEMU (kernel-based virtual machines). LXC focuses on isolating applications and reuses the host kernel, whereas KVM runs its own kernel and lets you boot various guest operating systems.

  5. In the Node list, select the Proxmox node you want to deploy your hosts to. This is relevant if you run a Proxmox cluster with more than one node.

  6. If you want to automatically start virtual machines after creation, select the Start after creation? checkbox. You can override this setting for individual hosts when you create them.

  7. In the Image list, select an image that is available on your Proxmox compute resource.

  8. In the Pool list, select a resource allocation pool.

  9. Optional: You can select the Main options, CPU, Memory, and OS checkboxes to configure the compute profile in more detail.

  10. In the Identifier field, enter the name of the network interface of your managed host. The default value is net0.

  11. From the Card list, select the type of network interface card of your managed host.

  12. Optional: From the Bridge list, select the bridge that you want to connect your host to.

  13. Optional: In the VLAN tag field, enter the tag of the VLAN that you want to assign your host to.

  14. Optional: In the Rate limit field, you can limit the network traffic of hosts.

  15. Optional: In the Multiqueue field, you can specify the number of packet queues so that multiple virtual CPUs can process network packets in parallel.

  16. Optional: If you want to use the integrated firewall in Proxmox, select the Firewall checkbox.

  17. Optional: If you want to disconnect your managed host from the network, select the Disconnect checkbox.

  18. Optional: Click Add Interface to add another network interface.

  19. From the Storage list, select the name of the storage pool within Proxmox.

  20. In the Controller list, select the type of disk controller of your managed host in Proxmox. ATIX AG recommends using either SCSI or VirtIO Block.

  21. In the Device field, enter the number of the disk.

  22. In the Cache list, select the type of disk cache you want to use on your managed host in Proxmox. ATIX AG does not recommend using caching within Proxmox.

  23. In the Size (GB) field, enter the size of the disk.

  24. Click Submit to save the compute profile.
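
After you provision a host with this compute profile, you can inspect the resulting virtual machine settings on your Proxmox node. A sketch, assuming the hypothetical VMID 142; the output shown is illustrative, and the exact keys and values depend on your profile settings:

    $ qm config 142
    cores: 2
    memory: 2048
    net0: virtio=My_MAC_Address,bridge=vmbr0,tag=42,firewall=1
    scsihw: virtio-scsi-pci
    scsi0: local-lvm:vm-142-disk-0,size=32G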

Creating Hosts on Proxmox

The Proxmox provisioning process provides the option to create hosts over a network connection or using an existing image.

For network-based provisioning, you must create a host with access to either the integrated orcharhino Proxy or an external orcharhino Proxy on a Proxmox virtual network, so that the host has access to PXE provisioning services. The new host entry triggers the Proxmox server to create the virtual machine. If the virtual machine detects the defined orcharhino Proxy through the virtual network, the virtual machine boots to PXE and begins to install the chosen operating system.

DHCP Conflicts

If you use a virtual network on the Proxmox server for provisioning, ensure that you select a virtual network that does not provide DHCP assignments. A virtual network with its own DHCP service causes DHCP conflicts with orcharhino Server when booting new hosts.

For image-based provisioning, use the pre-existing image as a basis for the new volume.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > Create Host.

  2. In the Name field, enter a host name.

  3. On the Organization and Location tabs, ensure that the provisioning context is automatically set to the current context.

  4. From the Host Group list, select the host group that you want to use to populate the form.

  5. From the Deploy on list, select your Proxmox compute resource.

  6. From the Compute Profile list, select a profile to use to automatically populate virtual machine-based settings.

  7. Click the Interface tab and click Edit on the host’s interface.

  8. Verify that the fields are automatically populated with values. Note in particular:

    • The Name from the Host tab becomes the DNS name.

    • orcharhino Server automatically assigns an IP address for the new host.

  9. Ensure that the MAC address field is blank. The Proxmox server assigns one to the host.

  10. Verify that the Managed, Primary, and Provision options are automatically selected for the first interface on the host. If not, select them.

  11. In the interface window, review the Proxmox-specific fields that are populated with settings from your compute profile. Modify these settings to suit your needs.

  12. On the Operating System tab, confirm that all fields automatically contain values.

  13. Select the Provisioning Method that you want:

    • For network-based provisioning, click Network Based.

    • For image-based provisioning, click Image Based.

    • If the foreman_bootdisk plug-in is installed and you want to use boot-disk provisioning, click Boot disk based.

  14. Click Resolve in Provisioning templates to check that the new host can identify the correct provisioning templates to use.

  15. Click the Virtual Machine tab and confirm that these settings are populated with details from the host group and compute profile. Modify these settings to suit your requirements.

  16. On the Parameters tab, ensure that a parameter exists that provides an activation key. If not, add an activation key.

  17. Click Submit to provision your host on Proxmox.
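
If you prefer the command line, you can create a comparable host with hammer on your orcharhino Server. A hedged sketch, assuming hammer is configured and that all names are placeholders for objects that exist in your orcharhino:

    $ hammer host create \
      --name "proxmox-host01" \
      --organization "My_Organization" \
      --location "My_Location" \
      --hostgroup "My_Host_Group" \
      --compute-resource "My_Proxmox_Compute_Resource" \
      --compute-profile "My_Compute_Profile" \
      --provision-method build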

Deleting a Virtual Machine on Proxmox

You can delete virtual machines running on Proxmox from within orcharhino.

Procedure
  1. In the orcharhino management UI, navigate to Infrastructure > Compute Resources.

  2. Select your Proxmox compute resource.

  3. On the Virtual Machines tab, click Delete from the Actions menu. This deletes the virtual machine from your Proxmox compute resource while retaining any associated hosts within orcharhino.

  4. Optional: If you want to delete the orphaned host, navigate to Hosts > All Hosts and delete the host manually.
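
Alternatively, you can delete the orphaned host with hammer on your orcharhino Server. A minimal sketch; the host name is a placeholder:

    $ hammer host delete --name my-host.example.com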

Importing a Virtual Machine from Proxmox into orcharhino

You can import existing virtual machines running on Proxmox into orcharhino.

Procedure
  1. In the orcharhino management UI, navigate to Infrastructure > Compute Resources.

  2. Select your Proxmox compute resource.

  3. On the Virtual Machines tab, click Import as managed Host or Import as unmanaged Host from the Actions menu. The page that opens is identical to the Create Host page, with the compute resource already selected. For more information, see Creating a Host in orcharhino in Managing Hosts.

  4. Click Submit to import the virtual machine into orcharhino.
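
To review the result, you can list all hosts that are associated with your Proxmox compute resource with hammer on your orcharhino Server. A hedged sketch; My_Proxmox_Compute_Resource is a placeholder:

    $ hammer host list --search "compute_resource = My_Proxmox_Compute_Resource"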

Adding a Remote Console Connection for a Managed Host on Proxmox

Use this procedure to add a remote console connection using VNC for a managed host on Proxmox.

Limitation
  • This procedure only works for hosts based on QEMU. It does not work for LXC containers on Proxmox.

Procedure
  1. On your Proxmox, open the required port for VNC:

    $ firewall-cmd --add-port=My_Port/tcp --permanent

    The port is the sum of 5900 and the VMID of your managed host. You can view the VMID on the VM tab for your managed host in the orcharhino management UI. For example, if your managed host has the VMID 142, open port 6042 on your Proxmox.

    To enable a remote console connection for multiple managed hosts, open a range of ports using firewall-cmd --add-port=5900-My_Upper_Port_Range_Limit/tcp --permanent.

  2. Reload the firewall configuration:

    $ firewall-cmd --reload
  3. On your Proxmox, enable VNC for your managed host in /etc/pve/nodes/My_Proxmox_Instance/qemu-server/My_VMID.conf:

    args: -vnc 0.0.0.0:My_VMID

    For more information, see VNC Client Access in the Proxmox documentation.

  4. Restart your managed host on Proxmox.
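
     For example, assuming the hypothetical VMID 142, you can restart the virtual machine on the command line of your Proxmox node:

    $ qm shutdown 142
    $ qm start 142

     A full stop and start ensures that Proxmox reapplies the changed configuration file.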

  5. Optional: Verify that the required port for VNC is open:

    $ netstat -ltunp
  6. On your orcharhino Server, allow the websockify capability in SELinux:

    $ setsebool -P websockify_can_connect_all on
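
     You can verify the new value of the SELinux boolean:

    $ getsebool websockify_can_connect_all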
  7. If you use a custom SSL certificate, import the SSL certificate from orcharhino Server into your browser. For more information, see Importing the Katello Root CA Certificate in Administering orcharhino.

     The remote console connection to your managed host fails if you only temporarily accept the certificate in your browser instead of importing it.

The text and illustrations on this page are licensed by ATIX AG under a Creative Commons Attribution–Share Alike 3.0 Unported ("CC-BY-SA") license. This page also contains text from the official Foreman documentation which uses the same license ("CC-BY-SA").