Provisioning virtual machines in Proxmox
Proxmox is open source software for virtualization using the KVM hypervisor and LXC containers. orcharhino can interact with Proxmox, including creating virtual machines and controlling their power management states.
Installing the Proxmox plugin
Install the Proxmox plugin to attach a Proxmox compute resource provider to orcharhino. This allows you to manage and deploy hosts to Proxmox.
-
Install the Proxmox plugin on your orcharhino Server:
# orcharhino-installer --enable-foreman-plugin-proxmox
-
Optional: In the orcharhino management UI, navigate to Administer > About and select the Compute Resources tab to verify the installation of the Proxmox plugin.
Adding a Proxmox connection to orcharhino Server
Use this procedure to add a Proxmox connection to orcharhino. If your Proxmox instance consists of multiple nodes, you have to create a Proxmox connection to orcharhino for each node.
-
Ensure that the host and network-based firewalls are configured to allow communication from orcharhino to Proxmox on TCP port 443.
-
Verify that orcharhino Server is able to resolve the host name of your Proxmox compute resource and Proxmox is able to resolve the host name of orcharhino Server.
-
In the orcharhino management UI, navigate to Infrastructure > Compute Resources.
-
Click Create Compute Resource.
-
In the Name field, enter a name for the compute resource.
-
From the Provider list, select Proxmox.
-
Optional: In the Description field, enter a description for the compute resource.
-
Optional: In the Authentication method list, select User token if you do not want to use access tickets to authenticate on Proxmox.
-
In the Url field, enter the IP address or host name of your Proxmox node. Ensure that you specify its port and append /api2/json to the path.
-
In the Username field, enter the user name to access Proxmox.
-
In the Password field, enter the password for the user.
-
Optional: Select the SSL verify peer checkbox to verify the certificate used for the encrypted connection from orcharhino to Proxmox.
-
In the X509 Certificate Authorities field, enter a certificate authority or a correctly ordered chain of certificate authorities.
-
Optional: Click Test Connection to verify the URL and its corresponding credentials.
-
Click Submit to save the connection.
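As a sketch, the expected value for the Url field can be assembled as follows, assuming the default Proxmox API port 8006 and a hypothetical host name; replace both with the details of your Proxmox node:

```shell
#!/bin/sh
# Hypothetical values for illustration; replace with your Proxmox details.
PROXMOX_HOST="proxmox.example.com"   # assumption: example host name
PROXMOX_PORT=8006                    # default Proxmox web/API port
# The Url field expects the API path /api2/json appended:
echo "https://${PROXMOX_HOST}:${PROXMOX_PORT}/api2/json"
```

If Test Connection fails, verify this URL first, because a missing port or a missing /api2/json path segment is a common cause.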
Adding Proxmox images to orcharhino Server
Proxmox uses templates as images for creating new virtual machines. When you use image-based provisioning to create new hosts, you need to add Proxmox template details to your orcharhino Server. This includes access details and the template name.
-
In the orcharhino management UI, navigate to Infrastructure > Compute Resources.
-
Select your Proxmox compute resource.
-
Click Create Image.
-
In the Name field, enter a name for the image.
-
From the Operating System list, select the base operating system of the image.
-
From the Architecture list, select the operating system architecture.
-
In the Username field, enter the SSH user name for image access. By default, this is set to root. This is only necessary if the image has a built-in user account.
-
From the User data list, select whether you want the images to support user data input, such as cloud-init data.
-
In the Password field, enter the SSH password for image access.
-
In the Image field, enter the relative path and name of the template on the Proxmox environment. Do not include the data center in the relative path.
-
Click Submit to save the image details.
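If you are unsure of the template name to enter in the Image field, you can look it up on the Proxmox node itself. The following sketch prints the commands rather than executing them, so it runs anywhere; the VMID 900 is a hypothetical example:

```shell
#!/bin/sh
# Sketch: commands to run on the Proxmox node to identify a template.
# VMID 900 is a hypothetical example. The commands are printed, not
# executed, so this sketch itself has no Proxmox dependency.
CMDS='# List all virtual machines and templates with their VMIDs:
qm list
# Inspect a candidate; templates show "template: 1" in their configuration:
qm config 900'
printf '%s\n' "$CMDS"
```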
Adding Proxmox details to a compute profile
You can predefine certain hardware settings for virtual machines on Proxmox by adding these hardware settings to a compute profile.
Note that this procedure assumes you are using KVM/qemu-based virtualization.
-
In the orcharhino management UI, navigate to Infrastructure > Compute Profiles.
-
Select a compute profile.
-
Select a Proxmox compute resource.
-
In the Type list, select the kind of virtualization solution hosts will run on when created using this compute profile. In Proxmox, you can choose between LXC container (Linux containers) and KVM/Qemu server (kernel-based virtual machines). LXC focuses more on isolating applications and reusing the host kernel. KVM runs its own kernel and lets you boot various guest operating systems.
-
Click on the General tab.
-
In the Node list, select the Proxmox node you want to deploy your hosts to. This is relevant if you run a Proxmox cluster.
-
In the Image list, select an image that is available on your Proxmox compute resource.
-
In the Pool list, select a resource allocation pool.
-
Optional: Click on Advanced Options, Hardware, Network Interfaces, or Storage and follow the orcharhino management UI to configure the compute profile.
-
On the Storage tab, ATIX AG recommends using either SCSI or VirtIO Block in the Controller field as the type of the disk controller of your host in Proxmox.
-
On the Storage tab, ATIX AG recommends not using caching within Proxmox.
-
Click Submit to save the compute profile.
Creating hosts on Proxmox
The Proxmox provisioning process provides the option to create hosts over a network connection or using an existing image.
For network-based provisioning, you must create a host to access either integrated orcharhino Proxy or an external orcharhino Proxy Server on a Proxmox virtual network, so that the host has access to PXE provisioning services. The new host entry triggers the Proxmox node to create the virtual machine. If the virtual machine detects the defined orcharhino Proxy Server through the virtual network, the virtual machine boots to PXE and begins to install the chosen operating system.
If you use a virtual network on the Proxmox node for provisioning, ensure that you select a virtual network that does not provide DHCP assignments. Otherwise, DHCP conflicts with orcharhino Server occur when booting new hosts.
For image-based provisioning, use the pre-existing image as a basis for the new volume.
-
In the orcharhino management UI, navigate to Hosts > Create Host.
-
In the Name field, enter a name for the host.
-
Optional: Click the Organization tab and change the organization context to match your requirement.
-
Optional: Click the Location tab and change the location context to match your requirement.
-
From the Host Group list, select a host group that you want to assign your host to. That host group will populate the form.
-
From the Deploy on list, select your Proxmox compute resource.
-
From the Compute Profile list, select a profile to use to automatically populate virtual machine-based settings.
-
Click the Interfaces tab, and on the interface of the host, click Edit.
-
Verify that the fields are populated with values. Note in particular:
-
orcharhino automatically assigns an IP address for the new host.
-
Ensure that the MAC address field is blank. Proxmox assigns a MAC address to the host during provisioning.
-
The Name from the Host tab becomes the DNS name.
-
Ensure that orcharhino automatically selects the Managed, Primary, and Provision options for the first interface on the host. If not, select them.
-
-
Click OK to save. To add another interface, click Add Interface. You can select only one interface for Provision and Primary.
-
On the Operating System tab, confirm that all fields automatically contain values.
-
Select the Provisioning Method that you want:
-
For network-based provisioning, click Network Based.
-
For image-based provisioning, click Image Based.
-
If the foreman_bootdisk plugin is installed and you want to use boot-disk provisioning, click Boot disk based.
-
-
Click Resolve in Provisioning templates to verify that the new host can identify the correct provisioning templates to use.
-
Click the Virtual Machine tab and confirm that these settings are populated with details from the host group and compute profile. Modify these settings to suit your requirements.
-
On the Parameters tab, ensure that a parameter exists that provides an activation key. If not, add an activation key.
-
Click Submit to provision your host on Proxmox.
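The host creation above can also be driven from the command line with hammer. The following sketch prints an example invocation rather than executing it, so it runs anywhere; all names are hypothetical, and the available options may vary with your hammer version:

```shell
#!/bin/sh
# Sketch: image-based host creation via the hammer CLI. All names below
# are hypothetical placeholders. The command is printed, not executed.
HAMMER_CMD='hammer host create \
  --name "proxmox-host01" \
  --hostgroup "My_Host_Group" \
  --compute-resource "My_Proxmox_Compute_Resource" \
  --provision-method "image" \
  --image "My_Proxmox_Image"'
printf '%s\n' "$HAMMER_CMD"
```

For network-based provisioning, you would pass --provision-method "build" and omit the --image option.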
Deleting a virtual machine on Proxmox
You can delete virtual machines running on Proxmox from within orcharhino.
-
In the orcharhino management UI, navigate to Infrastructure > Compute Resources.
-
Select your Proxmox compute resource.
-
On the Virtual Machines tab, click Delete from the Actions menu. This deletes the virtual machine from your Proxmox compute resource while retaining any associated hosts within orcharhino.
-
Optional: If you want to delete the orphaned host, navigate to Hosts > All Hosts and delete the host manually.
Importing a virtual machine from Proxmox into orcharhino
You can import existing virtual machines running on Proxmox into orcharhino.
-
In the orcharhino management UI, navigate to Infrastructure > Compute Resources.
-
Select your Proxmox compute resource.
-
On the Virtual Machines tab, click Import as managed Host or Import as unmanaged Host from the Actions menu. The page that follows is identical to the host creation form, with the compute resource preselected. For more information, see Creating a host in orcharhino in Managing Hosts.
-
Click Submit to import the virtual machine into orcharhino.
Adding a remote console connection for a host on Proxmox
Use this procedure to add a remote console connection using VNC for a host on Proxmox.
-
This procedure only works for hosts based on qemu. It does not work for LXC containers on Proxmox.
-
Ensure that your host has Standard VGA selected from the VGA list.
-
On your Proxmox, open the required port for VNC:
# firewall-cmd --add-port=My_Port/tcp --permanent
The port is the sum of 5900 and the VMID of your host. You can view the VMID on the VM tab for your host in the orcharhino management UI. For example, if your host has the VMID 142, open port 6042 on your Proxmox. To enable a remote console connection for multiple hosts, open a range of ports using firewall-cmd --add-port=5900-My_Upper_Port_Range_Limit/tcp --permanent.
-
Reload the firewall configuration:
# firewall-cmd --reload
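The port calculation above can be sketched as follows; the VMID 142 matches the example in the text, so replace it with the VMID of your host:

```shell
#!/bin/sh
# The VNC port is 5900 plus the VMID of the host.
VMID=142                       # assumption: example VMID from the text
VNC_PORT=$((5900 + VMID))
echo "open TCP port ${VNC_PORT} on the Proxmox node"
# then, on the Proxmox node:
#   firewall-cmd --add-port=${VNC_PORT}/tcp --permanent && firewall-cmd --reload
```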
-
On your Proxmox, enable VNC for your host in /etc/pve/nodes/My_Proxmox_Instance/qemu-server/My_VMID.conf:
args: -vnc 0.0.0.0:My_VMID
For more information, see VNC Client Access in the Proxmox documentation.
-
Restart your host on Proxmox.
-
Optional: Verify that the required port for VNC is open:
# netstat -ltunp
-
On your orcharhino Server, allow the websockify capability in SELinux:
# setsebool -P foreman_rails_can_connect_all on
# setsebool -P websockify_can_connect_all on
-
If you use a custom SSL certificate, import the SSL certificate from orcharhino Server into your browser. For more information, see Importing the Katello Root CA Certificate in Administering orcharhino.
The remote console connection to your host will fail if you choose to temporarily accept not checking the certificates in your browser.
The text and illustrations on this page are licensed by ATIX AG under a Creative Commons Attribution Share Alike 4.0 International ("CC BY-SA 4.0") license. This page also contains text from the official Foreman documentation which uses the same license ("CC BY-SA 4.0").