- GPU node runs the same virt-handler and virt-launcher as every other node
- No native way to use GPU-optimised images for GPU workloads
- Workarounds: separate installs (overhead), bloated images, or external webhooks (kubevirt-aie-webhook)
- One workaround: build a single image containing all specialised components
- Every node runs the GPU-optimised images, even nodes without GPUs
- Unnecessary image bloat, larger attack surface, slower pulls
- Another workaround: run separate KubeVirt installations per node pool
- Doubles operational overhead, upgrades, and configuration management
- Prevents resource sharing between pools
- The kubevirt-aie-webhook project already exists to solve the virt-launcher image problem
- It uses an external MutatingAdmissionWebhook to replace virt-launcher images based on VMI device and label selectors
- Only mutates virt-launcher, cannot customise virt-handler
- External dependency with its own lifecycle, upgrades, and failure modes
- Webhook failures block VMI creation entirely
- This VEP proposes folding that capability natively into KubeVirt and extending it to also support per-pool virt-handler customisation
- A single KubeVirt installation manages all nodes
- Primary virt-handler DaemonSet covers standard nodes with anti-affinity to avoid MegaSlop nodes
- virt-handler-megaslop DaemonSet targets only nodes with the MegaSlop-9000 label
- No duplicated operational overhead, no image bloat, no external webhook dependency
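The node placement above could be expressed through standard DaemonSet scheduling. A minimal sketch, assuming the node label key `superduper.io/MegaSlop-9000` and the DaemonSet names from this proposal (the exact label key and generated object shapes are illustrative, not final API):

```yaml
# Pool DaemonSet: runs only on MegaSlop-9000 nodes
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: virt-handler-megaslop
spec:
  selector:
    matchLabels: {kubevirt.io: virt-handler-megaslop}
  template:
    metadata:
      labels: {kubevirt.io: virt-handler-megaslop}
    spec:
      nodeSelector:
        superduper.io/MegaSlop-9000: "true"   # only MegaSlop nodes
      containers:
        - name: virt-handler
          image: virt-handler:v1.9.0-megaslop  # illustrative tag
---
# Primary DaemonSet: anti-affinity keeps it off MegaSlop nodes
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: virt-handler
spec:
  selector:
    matchLabels: {kubevirt.io: virt-handler}
  template:
    metadata:
      labels: {kubevirt.io: virt-handler}
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: superduper.io/MegaSlop-9000
                    operator: DoesNotExist    # skip nodes claimed by the pool
      containers:
        - name: virt-handler
          image: virt-handler:v1.9.0          # illustrative tag
```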
- New virtHandlerPools field in the KubeVirt CR
- Each pool deploys an additional virt-handler DaemonSet
- Custom virtHandlerImage and/or virtLauncherImage per pool
- VMIs matched to pools transparently via deviceNames or vmLabels
- No changes required from VM users
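Putting the pieces above together, the new field might look like the following in the KubeVirt CR. This is a sketch only: the placement under `spec.configuration` and the exact field shapes are assumptions to be settled in the API design, while the field names (`virtHandlerPools`, `virtHandlerImage`, `virtLauncherImage`, `deviceNames`, `vmLabels`, `nodeSelector`) come from this proposal:

```yaml
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    virtHandlerPools:
      - name: megaslop                        # pool name (illustrative)
        nodeSelector:
          superduper.io/MegaSlop-9000: "true" # assumed node label key
        virtHandlerImage: virt-handler:v1.9.0-megaslop
        virtLauncherImage: virt-launcher:v1.9.0-megaslop
        deviceNames:
          - superduper.io/MegaSlop_9000       # VMIs requesting this device match the pool
        # vmLabels: {...}                     # alternative selector on VMI labels
```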
- VM user submits a VMI as normal, no pool-specific configuration needed
- virt-controller evaluates VMI against pool selectors
- Matches the VMI's superduper.io/MegaSlop_9000 device request against the megaslop pool's deviceNames
- Selects virt-launcher:v1.9.0-megaslop as the launcher image
- Merges pool's nodeSelector into virt-launcher pod node affinity
- VMI lands on Node C with the correct virt-launcher image
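The VM user's side of this flow is an ordinary VMI that simply requests the device; the pool match happens in virt-controller. A sketch (the VMI name is illustrative; the GPU device request uses the standard `spec.domain.devices.gpus` field):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: training-vm                           # illustrative name
spec:
  domain:
    devices:
      gpus:
        - name: gpu0
          deviceName: superduper.io/MegaSlop_9000  # matched against pool deviceNames
    resources:
      requests:
        memory: 4Gi
```

No pool-specific annotation or selector appears anywhere in the spec, which is the point: the user workflow is unchanged.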
- VEP #97 introduces per-cluster hypervisor configuration via the hypervisor abstraction layer
- Virt-handler pools are a natural mechanism for extending this to per-pool hypervisor backends
- Example: KVM on some nodes, MSHV on others within the same cluster
- A future iteration could add an optional hypervisor field to VirtHandlerPoolConfig
- Out of scope for the initial implementation but a clear extension point
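As a sketch of that extension point only (hypothetical, explicitly out of scope here): a pool entry could gain an optional `hypervisor` field alongside its selectors. The field name and the `mshv` image tag below are assumptions:

```yaml
# Hypothetical future VirtHandlerPoolConfig entry, not part of the
# initial implementation proposed in this VEP
virtHandlerPools:
  - name: mshv-pool                           # illustrative pool name
    hypervisor: mshv                          # would select the MSHV backend (VEP #97)
    vmLabels:
      kubevirt.io/hypervisor: mshv            # VMIs with this label match the pool
    virtLauncherImage: virt-launcher:v1.9.0-mshv  # illustrative tag
```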
- VM user labels their VMI with kubevirt.io/hypervisor: mshv
- virt-controller matches the VMI to the mshv pool via its vmLabels selector
- Selects the MSHV-specific virt-launcher image
- Merges pool nodeSelector into pod affinity ensuring it lands on an MSHV node
- The VMI runs under the MSHV hypervisor transparently
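From the user's perspective, the flow above is triggered by nothing more than a label on the VMI. A minimal sketch (the VMI name is illustrative; the label key comes from this proposal):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: mshv-vm                               # illustrative name
  labels:
    kubevirt.io/hypervisor: mshv              # matched by the pool's vmLabels selector
spec:
  domain:
    resources:
      requests:
        memory: 2Gi
```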