Like many of you, I’m dual booting on my Framework laptop, with Linux and Windows installed on the same SSD. I’ve found a method for booting Windows in a VM in Linux and vice versa, so I’ll describe that here.
In a nutshell, an OS installed on a disk can be booted in a VM fairly easily as long as each OS on the disk has its own EFI System Partition (ESP). I used VMware Workstation Player due to familiarity, but something similar should be possible with VirtualBox.
I started with fresh installs of each OS, but if I hadn’t, I would’ve backed up my data before starting. I suggest you do the same. If you’d like to reproduce my setup, you can follow my instructions below. In my case, I actually used Kubuntu (and later did it again with Fedora), but the instructions use Ubuntu as an example.
Installation of Windows and Ubuntu
The TL;DR for this section: if you install Linux second, you can create a second ESP during installation by selecting manual/custom/advanced installation. Windows will not mount this ESP. This is necessary for the VM configuration I describe in the following sections.
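To make that concrete, the disk might end up laid out something like this (device names, partition numbers, and ordering are just an example; yours may differ):

```
nvme0n1       the shared SSD
├─nvme0n1p1   EFI System Partition (Windows ESP, created by the Windows installer)
├─nvme0n1p2   Microsoft reserved
├─nvme0n1p3   Windows (C:)
├─nvme0n1p4   Windows recovery
├─nvme0n1p5   EFI System Partition (Linux ESP, created manually during the Ubuntu install)
└─nvme0n1p6   Ubuntu root (/)
```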
Running an Ubuntu VM (off of the disk partition) in Windows
Though VirtualBox and VMware Workstation Player both have the necessary features, I use VMware here.
The Ubuntu ESP must not be used to boot Ubuntu in the VM because it references Windows partitions (if the above steps were followed). An alternative is to use a bootloader ISO.
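For example, a small bootloader ISO such as Super GRUB2 Disk, attached as the VM’s CD drive, can find and boot the installed Ubuntu directly. A minimal hand-written GRUB entry doing the same thing might look roughly like this; the root device is a placeholder you’d adjust to your own layout:

```
menuentry "Ubuntu (installed on disk)" {
    # locate the partition holding the kernel symlink
    search --set=root --file /vmlinuz
    linux /vmlinuz root=/dev/sda6 ro quiet
    initrd /initrd.img
}
```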
Running a Windows VM (off of the disk partition) in Ubuntu
Fortunately, the Windows ESP is perfectly suitable for booting a Windows VM.
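For context, VMware represents a raw-disk mapping with a small .vmdk descriptor file, which Workstation generates when you add a physical disk through the GUI. A descriptor of that kind looks roughly like the following; treat every value here (sector count, device path, adapter type, hardware version) as illustrative rather than something to copy:

```
# Disk DescriptorFile
version=1
CID=fffffffe
parentCID=ffffffff
createType="fullDevice"

# Extent description: the whole physical disk, read/write
RW 1000215216 FLAT "/dev/nvme0n1" 0

# The Disk Data Base
ddb.adapterType = "lsilogic"
ddb.virtualHWVersion = "18"
```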
Additional advice
It would be wise to disable hibernate/suspend-to-disk on both OSes as soon as possible. Saving the contents of memory on one set of hardware (physical or virtual) and then restoring it on another is obviously a terrible idea, and the on-disk filesystem state can change underneath the hibernated OS. Expect data loss.
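Concretely, the usual commands are something like the following (the first in an elevated Windows Command Prompt, the second on the Ubuntu side):

```shell
# Windows (admin Command Prompt): disable hibernation and, with it,
# Fast Startup, both of which leave memory-backed state on disk
powercfg /h off

# Linux (Ubuntu, systemd): prevent suspend-to-disk entirely
sudo systemctl mask hibernate.target hybrid-sleep.target
```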
Have you ever tried booting the Windows partition from within Linux using KVM/QEMU? Last time I checked you could only do it if Windows occupied an entire disk, but maybe there is a workaround.
This is genius! I was going to dual boot, but I didn’t know you could boot one OS in a VM from the other. I’m definitely going to do this. Thank you for putting together this guide!!
That’s a super clever workaround! Is there a way to use this method to also utilize as much hardware power as possible from my computer? Running a Windows VM in Ubuntu that can also reliably work with CPU/GPU-heavy Adobe stuff like After Effects would be a dream.
Although this IS a VMware tutorial, so I completely understand if this isn’t a replacement for KVM/VFIO/whatever-it’s-called
I have this set up with Windows running under KVM. Other than Windows complaining about needing to be activated when running in the VM (it is activated when booted directly into Windows), it works fine. If you need to do really heavy lifting in Windows, you’d likely want to restart into native Windows. Otherwise, you can assign the VM as many CPUs and as much RAM as you think you can spare from your host system. I have other setups where the GPU and other hardware are passed through to the VM, but if that’s feasible on these laptops with only integrated graphics, I haven’t figured out how to do it. I don’t think it is feasible, though.
Unfortunately, I haven’t. Starting to experiment with it now, though! Maybe @lbkNhubert would be willing to share how this is done? Sounds like they have a great setup.
@vaioware Yeah … might need to figure out how to do this with KVM and related tools. Using an entire drive for Windows seems doable, but I haven’t figured out how to use specific partitions.
This document from Intel indicates that it is possible to use Iris Xe graphics in the guest OS, but the guest there is Ubuntu, so it’s not quite an answer.
Going from memory here, but basically: you set up the VM but do not start it yet. I set up the “bare” drive as a SATA device; if memory serves, it doesn’t work as a virtio device. Next, edit the raw XML for the device. Here’s what I have set up (edited to anonymize):
I also grabbed the UUID from Windows and replaced what had been generated at the top of the full XML (overview, XML view). You can do all of this from a text editor and use virsh to create/import, but I find it easier to do from within virt-manager.
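For anyone following along, a generic libvirt disk stanza of the kind being described might look something like this; the source path and target name are placeholders, not the anonymized XML from the post above:

```xml
<!-- Illustrative example: attach a whole physical disk as a SATA device -->
<disk type="block" device="disk">
  <driver name="qemu" type="raw"/>
  <!-- placeholder path: prefer a stable /dev/disk/by-id/ path in practice -->
  <source dev="/dev/disk/by-id/ata-EXAMPLE-DISK"/>
  <target dev="sda" bus="sata"/>
</disk>
```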
You’ll wind up with extra devices, some used in the VM, some when booting natively, but it seems to work OK. I’m by no means an expert at this; hopefully this is helpful to you. Good luck, and let us know how it goes!
Forgot to note - from the article that you linked, it looks like the machine being referenced has two GPUs, the Iris Xe and the Iris Xe MAX - I believe that’s why you can pass one through to the VM. That’s what I have set up on the server where I have that configured: integrated graphics for the base OS, discrete GPU passed through to the VM.
Thanks - I’ll be trying KVM again when I get my Framework. I’ve used it in the past but haven’t found a way to dual boot Windows and Linux from the same physical device AND boot the Windows partition(s) as a VM guest from a Linux host. I can’t see your XML but I’ll tinker once I get it.
Unfortunately, as @lbkNhubert said, this is for the Iris Xe MAX which is a discrete GPU. Still planning on passing through an eGPU (did this with really good success on my Thinkpad X1C6).
@Enjewneer - thanks, I edited the post; I spaced that the XML would get eaten without putting it in a code block.
I think that you can point to a specific partition on the drive. In my case I have Windows on a separate device, so I set the config to the entire drive. With EFI it might be tricky to pull off. I’ll try to dig up an old MicroSD card that I had set up with multiple VMs on it and look at how I had it configured.
OK, found it. The VM is set up as EFI and pointed at the whole drive (in that case /dev/sdX), even though there are multiple OSes on the card.
I found a tutorial for doing exactly what you’re looking for (booting the Windows partition in a dual boot system with a single SSD in KVM/QEMU): Boot Your Windows Partition from Linux using KVM
The juicy bits are after the heading “building your virtual disk,” which also links to a more in-depth tutorial elsewhere. However, this method requires rebuilding the virtual disk with a script every time you reboot Windows, which is a little inconvenient.
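The core trick there, roughly sketched below. All device names, sizes, and partition numbers are illustrative, and the linked tutorial additionally writes a partition table into the image so the assembled array looks like a complete disk:

```shell
# Create a small image to act as a VM-only boot area for Windows
dd if=/dev/zero of=win-boot.img bs=1M count=100

# mdadm needs block devices, so expose the image via a loop device
sudo losetup /dev/loop10 win-boot.img

# Concatenate the fake boot area with the real Windows partitions
# into one linear array that presents as a standalone disk
sudo mdadm --build /dev/md0 --level=linear --raid-devices=3 \
    /dev/loop10 /dev/nvme0n1p2 /dev/nvme0n1p3

# Point the VM at /dev/md0. Because the pieces (and their offsets)
# can shift between boots, the array has to be rebuilt each time,
# which is why the tutorial wraps all of this in a script.
```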
Thanks so much for digging this up! Coincidentally, I actually found this about 5 hours ago. I’ve got experience with mdadm and it’s a really clever solution. I’ll try it once I get the Framework.
Not too inconvenient if I use QEMU hooks upon booting that specific machine!
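For reference, libvirt runs /etc/libvirt/hooks/qemu (if present and executable) with the domain name and operation as arguments, so the rebuild can be triggered automatically. A minimal sketch, where the domain name `win10` and the rebuild script path are both assumptions:

```shell
#!/bin/sh
# /etc/libvirt/hooks/qemu -- invoked by libvirtd as:
#   qemu <domain-name> <operation> <sub-operation> <extra>
DOMAIN="$1"
OPERATION="$2"

# "prepare" runs before any resources are allocated for the VM,
# so it's the right moment to (re)build the virtual disk.
if [ "$DOMAIN" = "win10" ] && [ "$OPERATION" = "prepare" ]; then
    /usr/local/sbin/rebuild-windows-md.sh   # hypothetical rebuild script
fi
```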
Thank you for this guide! I haven’t tried booting either OS as a VM yet, but I successfully got through the installation process yesterday (with the added quirk of making sure the Ubuntu install was LUKS-encrypted, which meant that besides the “physical volume for encryption” for the OS and the Linux EFI partition, I needed an unencrypted ext4 /boot partition, and I had to break out to the command-line to set up the LVM logical volumes and then restart the installer).
There’s one thing that wasn’t entirely clear to me from your writeup, though. When installing Ubuntu after Windows has already been successfully installed, where’s the best place to install the bootloader? Apparently the possibilities are /dev/nvme0n1 (the whole drive), the Windows EFI partition (probably /dev/nvme0n1p1 if the Windows install was default), or the new Linux EFI partition I just set up (/dev/nvme0n1p5 in my case).
I chose my new Linux EFI partition, and that seemed to work fine: I get a GRUB menu that lets me boot successfully into Xubuntu or into Windows. I think this means the installer configured the system to use the Linux EFI partition by default, and the Windows EFI partition isn’t being used at all, even when I choose to boot into Windows, though I’m not 100% sure. I’m also not entirely sure it was the best thing to do. It probably was, since the point of having a separate Linux EFI partition is that you never have to touch the Windows one and can use it for booting into a VM, but it might be useful to clarify that step.
I followed mainly this guide to set up LUKS and LVM on the command-line before doing the Linux install.
Looking forward to trying to run Windows as a guest OS! In the meantime, dual boot works smoothly.
This doesn’t really get mentioned because, in my instructions, creating the ESP and selecting it (to be /boot/efi) occur in the same step; that’s what I’m trying to imply in step 4 under “Installing Ubuntu”, though I ought to flesh that out. It appears that you had some extra steps in between. Installing the bootloader in the new ESP was the right thing to do.
I actually went back and did this as well, so I can confirm that both OSes are able to host each other in VMware just fine when using Windows 10. However, I initially couldn’t get either VM to work after upgrading to Windows 11. Update: after a VMware update and fixing a careless mistake of mine, it works the same as before with Windows 11!
By the way, it’s a Wiki post, so feel free to edit to add and/or clarify!
So Linux is the host OS, and Windows is the guest? I get the same error if I skip this step:
Are you saying that you had it working before, but it stopped working? You may need to follow the instructions again. Kernel updates seem to mess it up.
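In case it helps, the usual fix on Secure Boot systems is to sign VMware’s kernel modules (vmmon and vmnet) with your own Machine Owner Key and enroll it; the signing step has to be repeated after kernel updates. A sketch following VMware’s documented procedure, with paths assuming an Ubuntu-style kernel layout:

```shell
# Generate a signing key pair (one time)
openssl req -new -x509 -newkey rsa:2048 -keyout MOK.priv -outform DER \
    -out MOK.der -nodes -days 36500 -subj "/CN=VMware module signing/"

# Sign the VMware modules for the running kernel
sudo /usr/src/linux-headers-$(uname -r)/scripts/sign-file sha256 \
    ./MOK.priv ./MOK.der $(modinfo -n vmmon)
sudo /usr/src/linux-headers-$(uname -r)/scripts/sign-file sha256 \
    ./MOK.priv ./MOK.der $(modinfo -n vmnet)

# Enroll the key (one time): you'll set a password here, then
# confirm the enrollment in the MOK manager on the next reboot
sudo mokutil --import MOK.der
```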
I tried this and it just says “Failed to enroll new keys”
I had it working on my old laptop: Linux Mint, kernel 5.04 I think, legacy boot, no Secure Boot.
This is my new Framework running Linux Mint Edge, with a newer kernel and Secure Boot enabled.
FYI: I’ve now taken Secure Boot off my Framework to get VMware working for now. Why a piece of software designed to run another piece of software in isolation needs such deep access to your system is confusing/worrying.
Can you update the guide for Windows 11? I tried, but it said no operating system was found. I removed the CD/DVD SATA device too (the one you’d put the ISO in for a clean VMware Win 11 install, I guess).