Provisioning virtual machines in VMware vSphere

VMware vSphere is an enterprise-level virtualization platform from VMware. orcharhino can interact with the vSphere platform, including creating new virtual machines and controlling their power management states.

Installing the VMware plug-in

Install the VMware plug-in to attach a VMware compute resource provider to orcharhino. This allows you to manage and deploy hosts to VMware.

Procedure
  1. Install the VMware compute resource provider on your orcharhino Server:

    # orcharhino-installer --enable-foreman-compute-vmware
  2. Optional: In the orcharhino management UI, navigate to Administer > About and select the compute resources tab to verify the installation of the VMware plug-in.

Prerequisites for VMware provisioning

The requirements for VMware vSphere provisioning include:

  • A supported version of VMware vCenter Server. The following versions have been fully tested with orcharhino:

    • vCenter Server 8.0

    • vCenter Server 7.0

    • vCenter Server 6.7 (EOL)

    • vCenter Server 6.5 (EOL)

  • An orcharhino Proxy managing a network in the vSphere environment. Ensure that no other DHCP services run on this network to avoid conflicts with orcharhino Proxy. For more information, see [Configuring_Networking_provisioning].

  • An existing VMware template if you want to use image-based provisioning.

  • Provide the installation medium for the operating systems that you want to use to provision hosts. For more information, see Syncing Repositories in Managing Content.

  • Provide an activation key for host registration. For more information, see Creating An Activation Key in Managing Content.

Creating a VMware user

The VMware vSphere server requires an administration-like user for orcharhino Server communication. For security reasons, do not use the administrator user for such communication. Instead, create a user with the following permissions:

For VMware vCenter Server version 6.7, set the following permissions:

  • All Privileges → Datastore → Allocate Space, Browse datastore, Update Virtual Machine files, Low level file operations

  • All Privileges → Network → Assign Network

  • All Privileges → Resource → Assign virtual machine to resource pool

  • All Privileges → Virtual Machine → Change Config (All)

  • All Privileges → Virtual Machine → Interaction (All)

  • All Privileges → Virtual Machine → Edit Inventory (All)

  • All Privileges → Virtual Machine → Provisioning (All)

  • All Privileges → Virtual Machine → Guest Operations (All)

Note that the same steps also apply to VMware vCenter Server version 7.0.

For VMware vCenter Server version 6.5, set the following permissions:

  • All Privileges → Datastore → Allocate Space, Browse datastore, Update Virtual Machine files, Low level file operations

  • All Privileges → Network → Assign Network

  • All Privileges → Resource → Assign virtual machine to resource pool

  • All Privileges → Virtual Machine → Configuration (All)

  • All Privileges → Virtual Machine → Interaction (All)

  • All Privileges → Virtual Machine → Inventory (All)

  • All Privileges → Virtual Machine → Provisioning (All)

  • All Privileges → Virtual Machine → Guest Operations (All)

Adding a VMware connection to orcharhino Server

Use this procedure to add a VMware vSphere connection in orcharhino Server’s compute resources. To use the CLI instead of the orcharhino management UI, see the CLI procedure.

Prerequisites
  • Ensure that the host and network-based firewalls are configured to allow communication from orcharhino Server to vCenter on TCP port 443.

  • Verify that orcharhino Server and vCenter can resolve each other’s host names.
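You can check both prerequisites from orcharhino Server before you continue. The following is a minimal sketch that assumes Bash and the `timeout` utility are available; `vcenter.example.com` stands in for your vCenter host name. Run the reverse name-resolution check from vCenter as well.

```shell
# Pre-flight checks before adding the vCenter connection.
# Assumption: vcenter.example.com stands in for your vCenter host name.
VCENTER=vcenter.example.com

# Name resolution from orcharhino Server to vCenter.
getent hosts "$VCENTER" || echo "DNS lookup for $VCENTER failed"

# TCP reachability on port 443, using Bash's /dev/tcp with a 5-second timeout.
if timeout 5 bash -c "exec 3<>/dev/tcp/$VCENTER/443" 2>/dev/null; then
  echo "Port 443 on $VCENTER is reachable"
else
  echo "Cannot reach $VCENTER on port 443 - check host and network firewalls"
fi
```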

Procedure
  1. In the orcharhino management UI, navigate to Infrastructure > Compute Resources, and in the Compute Resources window, click Create Compute Resource.

  2. In the Name field, enter a name for the resource.

  3. From the Provider list, select VMware.

  4. In the Description field, enter a description for the resource.

  5. In the VCenter/Server field, enter the IP address or host name of the vCenter server.

  6. In the User field, enter the user name with permission to access the vCenter’s resources.

  7. In the Password field, enter the password for the user.

  8. Click Load Datacenters to populate the list of data centers from your VMware vSphere environment.

  9. From the Datacenter list, select the data center to manage.

  10. Ensure that the Fingerprint field is populated with the fingerprint from the data center.

  11. From the Display Type list, select a console type, for example, VNC or VMRC. Note that VNC consoles are unsupported on VMware ESXi 6.5 and later.

  12. Optional: In the VNC Console Passwords field, select the Set a randomly generated password on the display connection checkbox to secure console access for new hosts with a randomly generated password. You can retrieve the password for the VNC console, used to access the guest virtual machine console from the libvirtd host, from the output of the following command:

    # virsh edit your_VM_name
    <graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0' passwd='your_randomly_generated_password'>

    A new password is randomly generated every time the console for the virtual machine opens, for example, with virt-manager.

  13. From the Enable Caching list, you can select whether to enable caching of compute resources. For more information, see Caching of Compute Resources.

  14. Click the Locations and Organizations tabs and verify that the values are automatically set to your current context. You can also add additional contexts.

  15. Click Submit to save the connection.

CLI procedure
  • Create the connection with the hammer compute-resource create command. Select Vmware as the --provider and set the instance UUID of the data center as the --uuid:

    # hammer compute-resource create \
    --datacenter "My_Datacenter" \
    --description "vSphere server at vsphere.example.com" \
    --locations "My_Location" \
    --name "My_vSphere" \
    --organizations "My_Organization" \
    --password "My_Password" \
    --provider "Vmware" \
    --server "vsphere.example.com" \
    --user "My_User"
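After creating the connection, you can confirm it from the CLI. This sketch assumes the compute resource name from the example above; listing the virtual machines also exercises the stored credentials against vCenter:

```shell
# Show the stored connection details for the new compute resource.
hammer compute-resource info --name "My_vSphere"

# List the virtual machines visible through this connection; a successful
# listing confirms that the stored user and password work against vCenter.
hammer compute-resource virtual-machines --name "My_vSphere"
```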

Adding VMware images to orcharhino Server

VMware vSphere uses templates as images for creating new virtual machines. If using image-based provisioning to create new hosts, you need to add VMware template details to your orcharhino Server. This includes access details and the template name.

To use the CLI instead of the orcharhino management UI, see the CLI procedure.

Procedure
  1. In the orcharhino management UI, navigate to Infrastructure > Compute Resources.

  2. Select your Vmware compute resource.

  3. Click Create Image.

  4. In the Name field, enter a name for the image.

  5. From the Operating System list, select the base operating system of the image.

  6. From the Architecture list, select the operating system architecture.

  7. In the Username field, enter the SSH user name for image access. By default, this is set to root.

  8. If your image supports user data input such as cloud-init data, select the User data checkbox.

  9. Optional: In the Password field, enter the SSH password to access the image.

  10. From the Image list, select an image from VMware.

  11. Click Submit to save the image details.

CLI procedure
  • Create the image with the hammer compute-resource image create command. Use the --uuid field to store the relative template path on the vSphere environment:

    # hammer compute-resource image create \
    --architecture "My_Architecture" \
    --compute-resource "My_VMware" \
    --name "My_Image" \
    --operatingsystem "My_Operating_System" \
    --username root \
    --uuid "My_UUID"
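You can verify the image entry afterwards. This sketch assumes the compute resource and image names from the example above:

```shell
# List all images registered for the compute resource.
hammer compute-resource image list --compute-resource "My_VMware"

# Show the details of one image, including the stored template path (UUID).
hammer compute-resource image info \
  --compute-resource "My_VMware" \
  --name "My_Image"
```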

Adding VMware details to a compute profile

You can predefine certain hardware settings for virtual machines on VMware vSphere. You achieve this through adding these hardware settings to a compute profile. To use the CLI instead of the orcharhino management UI, see the CLI procedure.

Procedure
  1. In the orcharhino management UI, navigate to Infrastructure > Compute Profiles.

  2. Select a compute profile.

  3. Select a Vmware compute resource.

  4. In the CPUs field, enter the number of CPUs to allocate to the host.

  5. In the Cores per socket field, enter the number of cores to allocate to each CPU.

  6. In the Memory field, enter the amount of memory in MiB to allocate to the host.

  7. From the Firmware list, select either BIOS or UEFI as the firmware for the host. By default, this is set to automatic.

  8. In the Cluster list, select the name of the target host cluster on the VMware environment.

  9. From the Resource pool list, select an available resource allocation for the host.

  10. In the Folder list, select the folder to organize the host.

  11. From the Guest OS list, select the operating system you want to use in VMware vSphere.

  12. From the Virtual H/W version list, select the underlying VMware hardware abstraction to use for virtual machines.

  13. If you want to add more memory while the virtual machine is powered on, select the Memory hot add checkbox.

  14. If you want to add more CPUs while the virtual machine is powered on, select the CPU hot add checkbox.

  15. If you want to add a CD-ROM drive, select the CD-ROM drive checkbox.

  16. From the Boot order list, define the order in which the virtual machine tries to boot.

  17. Optional: In the Annotation Notes field, enter an arbitrary description.

  18. If you use image-based provisioning, select the image from the Image list.

  19. From the SCSI controller list, select the disk access method for the host.

  20. If you want to use eager zero thick provisioning, select the Eager zero checkbox. By default, the disk uses lazy zero thick provisioning.

  21. From the Network Interfaces list, select the network parameters for the host’s network interface. At least one interface must point to an orcharhino Proxy-managed network.

  22. Optional: Click Add Interface to create another network interface.

  23. Click Submit to save the compute profile.

CLI procedure
  1. Create a compute profile:

    # hammer compute-profile create --name "My_Compute_Profile"
  2. Set VMware details to a compute profile:

    # hammer compute-profile values create \
    --compute-attributes "cpus=1,corespersocket=2,memory_mb=1024,cluster=MyCluster,path=MyVMs,start=true" \
    --compute-profile "My_Compute_Profile" \
    --compute-resource "My_VMware" \
    --interface "compute_type=VirtualE1000,compute_network=mynetwork" \
    --volume "size_gb=20G,datastore=Data,name=myharddisk,thin=true"
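To review the stored values, you can query the profile afterwards. This sketch assumes the profile name from the example above:

```shell
# Display the compute attributes, interfaces, and volumes stored in the
# profile for each associated compute resource.
hammer compute-profile info --name "My_Compute_Profile"
```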

Creating hosts on VMware

The VMware vSphere provisioning process provides the option to create hosts over a network connection or using an existing image.

For network-based provisioning, the host must be able to reach either orcharhino Server’s integrated orcharhino Proxy or an external orcharhino Proxy on a VMware vSphere virtual network so that the host has access to PXE provisioning services. The new host entry triggers the VMware vSphere server to create the virtual machine. If the virtual machine detects the defined orcharhino Proxy through the virtual network, the virtual machine boots to PXE and begins to install the chosen operating system.

DHCP conflicts

If you use a virtual network on the VMware vSphere server for provisioning, ensure that you select a virtual network that does not provide DHCP assignments. A virtual network that serves its own DHCP leases causes DHCP conflicts with orcharhino Server when booting new hosts.

For image-based provisioning, use the pre-existing image as a basis for the new volume.

To use the CLI instead of the orcharhino management UI, see the CLI procedure.

Procedure
  1. In the orcharhino management UI, navigate to Hosts > Create Host.

  2. In the Name field, enter a name for the host.

  3. Optional: Click the Organization tab and change the organization context to match your requirement.

  4. Optional: Click the Location tab and change the location context to match your requirement.

  5. From the Host Group list, select a host group that you want to assign your host to. That host group will populate the form.

  6. From the Deploy on list, select the VMware vSphere connection.

  7. From the Compute Profile list, select a profile to use to automatically populate virtual machine-based settings.

  8. Click the Interfaces tab, and on the interface of the host, click Edit.

  9. Verify that the fields are populated with values. Note in particular:

    • orcharhino automatically assigns an IP address for the new host.

    • Ensure that the MAC address field is blank. VMware assigns a MAC address to the host during provisioning.

    • The Name from the Host tab becomes the DNS name.

    • Ensure that orcharhino automatically selects the Managed, Primary, and Provision options for the first interface on the host. If not, select them.

  10. In the interface window, review the VMware-specific fields that are populated with settings from your compute profile. Modify these settings to suit your needs.

  11. Click OK to save. To add another interface, click Add Interface. You can select only one interface for Provision and Primary.

  12. Click the Operating System tab, and confirm that all fields automatically contain values.

  13. Select the Provisioning Method that you want:

    • For network-based provisioning, click Network Based.

    • For image-based provisioning, click Image Based.

    • If the foreman_bootdisk plug-in is installed, and you want to use boot-disk provisioning, click Boot disk based.

  14. Click Resolve in Provisioning templates to verify that the new host can identify the correct provisioning templates to use.

  15. Click the Virtual Machine tab and confirm that these settings are populated with details from the host group and compute profile. Modify these settings to suit your requirements.

  16. Click the Parameters tab and ensure that a parameter exists that provides an activation key. If a parameter does not exist, click + Add Parameter. In the field Name, enter kt_activation_keys. In the field Value, enter the name of the activation key used to register the Content Hosts.

  17. Click Submit to provision your host on VMware.

CLI procedure
  • Create the host from a network with the hammer host create command and include --provision-method build to use network-based provisioning:

    # hammer host create \
    --build true \
    --compute-attributes="cpus=1,corespersocket=2,memory_mb=1024,cluster=MyCluster,path=MyVMs,start=true" \
    --compute-resource "My_VMware" \
    --enabled true \
    --hostgroup "My_Host_Group" \
    --interface "managed=true,primary=true,provision=true,compute_type=VirtualE1000,compute_network=mynetwork" \
    --location "My_Location" \
    --managed true \
    --name "My_Host" \
    --organization "My_Organization" \
    --provision-method build \
    --volume="size_gb=20G,datastore=Data,name=myharddisk,thin=true"
  • Create the host from an image with the hammer host create command and include --provision-method image to use image-based provisioning:

    # hammer host create \
    --compute-attributes="cpus=1,corespersocket=2,memory_mb=1024,cluster=MyCluster,path=MyVMs,start=true" \
    --compute-resource "My_VMware" \
    --enabled true \
    --hostgroup "My_Host_Group" \
    --image "My_VMware_Image" \
    --interface "managed=true,primary=true,provision=true,compute_type=VirtualE1000,compute_network=mynetwork" \
    --location "My_Location" \
    --managed true \
    --name "My_Host" \
    --organization "My_Organization" \
    --provision-method image \
    --volume="size_gb=20G,datastore=Data,name=myharddisk,thin=true"

For more information about additional host creation parameters for this compute resource, enter the hammer host create --help command.
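Once the host is created, you can follow the build from the CLI. A sketch, assuming the host is named My_Host.example.com (orcharhino appends the domain of the primary interface to the name you chose):

```shell
# Show the host entry, including its IP address and build state.
hammer host info --name "My_Host.example.com"

# Show the build and power status reported for the host.
hammer host status --name "My_Host.example.com"
```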

Using VMware cloud-init and userdata templates for provisioning

You can use VMware with the Cloud-init and Userdata templates to insert user data into the new virtual machine, to make further VMware customization, and to enable the VMware-hosted virtual machine to call back to orcharhino.

You can use the same procedures to set up a VMware compute resource within orcharhino, with a few modifications to the workflow.

Figure 1. VMware cloud-init provisioning overview

When you set up the compute resource and images for VMware provisioning in orcharhino, the following sequence of provisioning events occurs:

  • The user provisions one or more virtual machines using the orcharhino management UI, API, or hammer

  • orcharhino calls the VMware vCenter to clone the virtual machine template

  • orcharhino userdata provisioning template adds customized identity information

  • When provisioning completes, the Cloud-init provisioning template instructs the virtual machine to call back to orcharhino Proxy when cloud-init runs

  • VMware vCenter clones the template to the virtual machine

  • VMware vCenter applies customization for the virtual machine’s identity, including the host name, IP, and DNS

  • The virtual machine builds; cloud-init is invoked and calls back to orcharhino on port 80, which then redirects to 443

Prerequisites
  • Configure port and firewall settings to open any necessary connections. Because of the cloud-init service, the virtual machine always calls back to orcharhino even if you register the virtual machine to orcharhino Proxy.

  • If you want to use orcharhino Proxies instead of your orcharhino Server, ensure that you have configured your orcharhino Proxies accordingly. For more information, see Configuring orcharhino Proxy for Host Registration and Provisioning in Installing orcharhino Proxy.

  • Back up the following configuration files:

    • /etc/cloud/cloud.cfg.d/01_network.cfg

    • /etc/cloud/cloud.cfg.d/10_datasource.cfg

    • /etc/cloud/cloud.cfg

Associating the Userdata and Cloud-init templates with the operating system
  1. In the orcharhino management UI, navigate to Hosts > Templates > Provisioning Templates.

  2. Search for the CloudInit default template and click its name.

  3. Click the Association tab.

  4. Select all operating systems to which the template applies and click Submit.

  5. Repeat the steps above for the UserData open-vm-tools template.

  6. Navigate to Hosts > Provisioning Setup > Operating Systems.

  7. Select the operating system that you want to use for provisioning.

  8. Click the Templates tab.

  9. From the Cloud-init template list, select CloudInit default.

  10. From the User data template list, select UserData open-vm-tools.

  11. Click Submit to save the changes.

Preparing an image to use the cloud-init template

To prepare an image, you must first configure the settings that you require on a virtual machine that you can then save as an image to use in orcharhino.

To use the cloud-init template for provisioning, you must configure a virtual machine so that cloud-init is installed, enabled, and configured to call back to orcharhino Server.

For security purposes, you must install a CA certificate to use HTTPS for all communication. This procedure includes steps to clean the virtual machine so that no unwanted information transfers to the image you use for provisioning.

If you have an image with cloud-init, you must still follow this procedure to enable cloud-init to communicate with orcharhino because cloud-init is disabled by default.

These instructions are for CentOS or Fedora; follow similar steps for other Linux distributions.

Procedure
  1. On the virtual machine that you use to create the image, install the required packages:

    # dnf install cloud-init open-vm-tools perl-interpreter perl-File-Temp
  2. Disable network configuration by cloud-init:

    # cat << EOM > /etc/cloud/cloud.cfg.d/01_network.cfg
    network:
      config: disabled
    EOM
  3. Configure cloud-init to fetch data from orcharhino:

    # cat << EOM > /etc/cloud/cloud.cfg.d/10_datasource.cfg
    datasource_list: [NoCloud]
    datasource:
      NoCloud:
        seedfrom: https://orcharhino.example.com/userdata/
    EOM

    If you intend to provision through orcharhino Proxy, use the URL of your orcharhino Proxy in the seedfrom option, such as https://orcharhino-proxy.network2.example.com:9090/userdata/.

  4. Configure modules to use in cloud-init:

    # cat << EOM > /etc/cloud/cloud.cfg
    cloud_init_modules:
     - bootcmd
     - ssh
    
    cloud_config_modules:
     - runcmd
    
    cloud_final_modules:
     - scripts-per-once
     - scripts-per-boot
     - scripts-per-instance
     - scripts-user
     - phone-home
    
    system_info:
      distro: rhel
      paths:
        cloud_dir: /var/lib/cloud
        templates_dir: /etc/cloud/templates
      ssh_svcname: sshd
    EOM
  5. Enable the CA certificates for the image:

    # update-ca-trust enable
  6. Download the katello-server-ca.crt file from orcharhino Server:

    # wget -O /etc/pki/ca-trust/source/anchors/cloud-init-ca.crt https://orcharhino.example.com/pub/katello-server-ca.crt

    If you intend to provision through orcharhino Proxy, download the file from your orcharhino Proxy, such as https://orcharhino-proxy.network2.example.com/pub/katello-server-ca.crt.

  7. Update the record of certificates:

    # update-ca-trust extract
  8. Clean the image:

    # systemctl stop rsyslog
    # systemctl stop auditd
    # package-cleanup --oldkernels --count=1
    # dnf clean all
  9. Reduce logspace, remove old logs, and truncate logs:

    # logrotate -f /etc/logrotate.conf
    # rm -f /var/log/*-???????? /var/log/*.gz
    # rm -f /var/log/dmesg.old
    # rm -rf /var/log/anaconda
    # cat /dev/null > /var/log/audit/audit.log
    # cat /dev/null > /var/log/wtmp
    # cat /dev/null > /var/log/lastlog
    # cat /dev/null > /var/log/grubby
  10. Remove udev hardware rules:

    # rm -f /etc/udev/rules.d/70*
  11. Remove the ifcfg scripts related to existing network configurations:

    # rm -f /etc/sysconfig/network-scripts/ifcfg-ens*
    # rm -f /etc/sysconfig/network-scripts/ifcfg-eth*
  12. Remove the SSH host keys:

    # rm -f /etc/ssh/ssh_host_*
  13. Remove root user’s SSH history:

    # rm -rf ~root/.ssh/known_hosts
  14. Remove root user’s shell history:

    # rm -f ~root/.bash_history
    # unset HISTFILE
  15. Create an image from this virtual machine.

  16. Add your image to orcharhino.
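Before you save the virtual machine as an image, you can check that the callback path works. A hedged sketch: it assumes that the CA trust from the earlier steps is in place and that the NoCloud seed URL serves files following the datasource convention; adjust the URLs if you provision through orcharhino Proxy.

```shell
# Verify that the CA trust update took effect: this must succeed without
# curl's -k/--insecure flag.
curl -sSf -o /dev/null https://orcharhino.example.com/pub/ \
  && echo "TLS trust to orcharhino Server OK"

# Check that the seed URL from 10_datasource.cfg is reachable. orcharhino
# may answer with an error for hosts it does not yet recognize, so any HTTP
# status code printed here shows that the URL resolves and TLS works.
curl -s -o /dev/null -w '%{http_code}\n' \
  https://orcharhino.example.com/userdata/user-data
```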

Deleting a VM on VMware

You can delete VMs running on VMware from within orcharhino.

Procedure
  1. In the orcharhino management UI, navigate to Infrastructure > Compute Resources.

  2. Select your VMware provider.

  3. On the Virtual Machines tab, click Delete from the Actions menu. This deletes the virtual machine from the VMware compute resource while retaining any associated hosts within orcharhino. If you want to delete the orphaned host, navigate to Hosts > All Hosts and delete the host manually.

Importing a virtual machine from VMware into orcharhino

You can import existing virtual machines running on VMware into orcharhino.

Procedure
  1. In the orcharhino management UI, navigate to Infrastructure > Compute Resources.

  2. Select your VMware compute resource.

  3. On the Virtual Machines tab, click Import as managed Host or Import as unmanaged Host from the Actions menu. The page that follows is identical to the host creation form, with the compute resource already selected. For more information, see Creating a host in orcharhino in Managing Hosts.

  4. Click Submit to import the virtual machine into orcharhino.

Caching of compute resources

Caching of compute resources speeds up rendering of VMware information.

Enabling caching of compute resources

To enable or disable caching of compute resources:

Procedure
  1. In the orcharhino management UI, navigate to Infrastructure > Compute Resources.

  2. Click the Edit button to the right of the VMware server you want to update.

  3. Select the Enable caching checkbox.

Refreshing the compute resources cache

Refresh the cache of compute resources to update compute resources information.

Procedure
  1. In the orcharhino management UI, navigate to Infrastructure > Compute Resources.

  2. Select the VMware server whose compute resources cache you want to refresh and click Refresh Cache.

CLI procedure
  • Use this API call to refresh the compute resources cache:

    # curl -H "Accept:application/json" \
    -H "Content-Type:application/json" -X PUT \
    -u username:password -k \
    https://orcharhino.example.com/api/compute_resources/compute_resource_id/refresh_cache

    Use hammer compute-resource list to determine the ID of the VMware server whose compute resources cache you want to refresh.
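The lookup and the refresh can be combined in one step. A sketch assuming a compute resource named My_vSphere and a hammer version that supports the --fields option:

```shell
# Look up the compute resource ID, then refresh its cache via the API.
CR_ID=$(hammer --output csv --no-headers compute-resource list \
  --search 'name = "My_vSphere"' --fields Id)

curl -H "Accept:application/json" -H "Content-Type:application/json" \
  -X PUT -u username:password -k \
  "https://orcharhino.example.com/api/compute_resources/${CR_ID}/refresh_cache"
```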

The text and illustrations on this page are licensed by ATIX AG under a Creative Commons Attribution–Share Alike 3.0 Unported ("CC-BY-SA") license. This page also contains text from the official Foreman documentation which uses the same license ("CC-BY-SA").