sakaki

Run another OS on your RPi3 as a virtualized QEMU guest under KVM (64-bit)

Thu Oct 04, 2018 1:34 pm

Hello,

as the current gentoo-on-rpi3-64bit image (for the RPi3 B/B+) has KVM enabled in the kernel, a number of people have emailed me asking how to use this to efficiently run virtualized OSes on their system.

Since the steps involved aren't necessarily immediately obvious, I thought it'd be worth posting a short walkthrough here. For the sake of concreteness, I'll show how to start up the following (obviously, you can adapt for your particular interests):
  • hardware: RPi3 (B or B+);
  • 'host' OS: >= v1.3.0 of my bootable gentoo-on-rpi3-64bit image;
  • 'guest' OS: the latest official 64-bit aarch64 cloud image of Ubuntu Server 18.04 LTS ('bionic beaver') (cloud images are deliberately minimal, so well suited for our needs);
  • virtualizer: app-emulation/qemu-3.0.0
  • BIOS: tianocore aarch64 EFI;
  • cores: 2 out of the 4 available;
  • memory: 256 MiB memory allocation (from the RPi3's 1GiB);
  • console-only setup, no graphics, SPICE etc. (easy to add if you want);
  • pass-through networking enabled (so you can apt-get from the guest, etc.);
  • cloud-init 'NoCloud' data source set up, to provide the initial machine name, ubuntu user's password etc.
  • running as the regular user ('demouser'), not root.
OK then, start, if you haven't already done so, by downloading - and writing to microSD card - a copy of the latest gentoo-on-rpi3-64bit image (full instructions are on the GitHub page just linked). Boot your RPi3 with it, and wait for the graphical desktop to come up.

Then, ensure you have network connectivity (instructions also on the GitHub page), open a terminal, and issue (I assume you are working as the regular user 'demouser' unless otherwise specified):

Code: Select all

mkdir -p qemu-test && cd qemu-test

Now we can collect the various pieces of software we need in order to boot the guest. The first is an EFI BIOS. Here, we'll use the latest aarch64 (aka arm64) tianocore image from Linaro. Issue:

Code: Select all

wget -c http://snapshots.linaro.org/components/kernel/leg-virt-tianocore-edk2-upstream/latest/QEMU-AARCH64/RELEASE_CLANG35/QEMU_EFI.fd
The firmware image is 2MiB so shouldn't take long to download.

Note that for simplicity, we'll not set up the ability for persistent EFI variables in this demo (see here for instructions on how to do so).
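For the curious, the usual recipe looks something like this (not used in this walkthrough; flash0.img / flash1.img are just names I've picked, and you should check the linked instructions for the definitive version). The idea is to pad the firmware, plus an empty variable store, out to the 64MiB that each of the 'virt' machine's pflash banks expects, and then pass those via -drive if=pflash instead of -bios:

Code: Select all

# sketch only - pad the firmware and create an empty varstore (64MiB each)
cp QEMU_EFI.fd flash0.img
truncate -s 64M flash0.img
truncate -s 64M flash1.img
# then, when invoking qemu-system-aarch64, replace '-bios QEMU_EFI.fd' with:
#   -drive if=pflash,format=raw,file=flash0.img,readonly=on \
#   -drive if=pflash,format=raw,file=flash1.img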

Next, download the latest Ubuntu 'bionic' (18.04 LTS server) arm64 cloud image. Issue:

Code: Select all

wget -c https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-arm64.img
You can of course use a different OS or version if you like. The exact size of this image will depend upon what is 'current' when you try it, of course, but at the time of writing it was ~300MiB, so the above download may take a little time, depending on the speed of your network connection.

Note that although the image is already in QEMU QCOW2 format, we won't use it for booting directly, as we may want to start various fresh instances (and the copy we boot will be modified once used, since it encapsulates a writeable root filesystem). Instead, we'll make a copy, using this image as the basis. The gentoo-on-rpi3-64bit image already contains app-emulation/qemu pre-installed, so issue:

Code: Select all

qemu-img create -f qcow2 -b bionic-server-cloudimg-arm64.img bionic-image-01.img
to create a derivative 'instance' QCOW2 image (bionic-image-01.img), backed by the 'master' copy you just downloaded. (You can do this as many times as you like.)
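If you'd like to satisfy yourself that the new image really is just a thin overlay, you can inspect it (this step is purely optional):

Code: Select all

qemu-img info bionic-image-01.img
The output should list bionic-server-cloudimg-arm64.img as the backing file, and show a very small 'disk size' (the overlay only stores blocks that differ from the master).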

Now, as is common with such 'cloud' images (not just from Ubuntu), no 'out of the box' login credentials, hostname etc. are configured. The chicken-and-egg problem this obviously creates is solved through the use of the cloud-init service (pre-configured to run on boot). Inter alia, this will look for configuration data stored in a specially named (viz.: 'cidata') iso9660 filesystem, and, if found, uses it to set up initial passwords and so forth.

Following these notes, we'll just create a bare-minimum 'NoCloud' data source here. Issue:

Code: Select all

{ echo instance-id: kvm-bionic-01; echo local-hostname: kvm-bionic; } > meta-data
printf "#cloud-config\npassword: passw0rd\nchpasswd: { expire: False }\nssh_pwauth: True\n" > user-data
If you run "tail -n +1 *-data", you should now see:

Code: Select all

==> meta-data <==
instance-id: kvm-bionic-01
local-hostname: kvm-bionic

==> user-data <==
#cloud-config
password: passw0rd
chpasswd: { expire: False }
ssh_pwauth: True
These two files (when suitably packaged) will instruct cloud-init to:
  • Set up an instance called kvm-bionic-01, with hostname kvm-bionic;
  • Set the password for the 'ubuntu' (default) user to passw0rd (adapt if desired), ensure it has no expiry, and allow it to be used for login via ssh.
More sophisticated configs are possible of course (setting up public keys for ssh login etc.) but this isn't a tutorial on cloud-init, so we won't use them ^-^
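For reference only (we won't use it here): if you did want key-based ssh access, a user-data along the following lines should work - the ssh-rsa line is of course just a placeholder for your own public key:

Code: Select all

#cloud-config
password: passw0rd
chpasswd: { expire: False }
ssh_pwauth: True
ssh_authorized_keys:
  - ssh-rsa AAAA...replace-with-your-own-public-key... demouser@rpi3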

Next, to be able to package the config data, we need a utility called mkisofs; this is part of the app-cdr/cdrtools package (covered by virtual/cdrtools), which is not shipped with the gentoo-on-rpi3-64bit image by default, but is available on the binhost. So to install it, issue:

Code: Select all

sudo emerge --verbose  --noreplace virtual/cdrtools
This shouldn't take long to complete. Once done, you can proceed to build the specially named iso9660 image; issue:

Code: Select all

mkisofs -o seed-kvm-bionic-01.iso -V cidata -J -rock user-data meta-data
Hint: on some distributions (although not Gentoo, yet) mkisofs has been replaced by genisoimage, which has a slightly different invocation syntax.
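For reference, on such a distribution the equivalent invocation would be something like (untested here):

Code: Select all

genisoimage -output seed-kvm-bionic-01.iso -volid cidata -joliet -rock user-data meta-data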

If you run "ls" in your qemu-test directory, you should now see:

Code: Select all

QEMU_EFI.fd          bionic-server-cloudimg-arm64.img  seed-kvm-bionic-01.iso
bionic-image-01.img  meta-data                         user-data
That's the preparation over; we can now boot the image! Make sure your RPi3 isn't too heavily loaded, then run (still as the regular user, in the qemu-test directory):

Code: Select all

qemu-system-aarch64 -M virt -cpu host \
  -m 256M -smp 2  -nographic \
  -bios QEMU_EFI.fd \
  -cdrom seed-kvm-bionic-01.iso \
  -drive if=none,file=bionic-image-01.img,id=hd0 -device virtio-blk-device,drive=hd0 \
  -device virtio-net-device,netdev=vmnic -netdev user,id=vmnic  \
  -accel kvm 2>/dev/null
  
Most of these options should be self explanatory (see the qemu docs for more details). Note in particular that we:
  • allocate 256MiB of memory and restrict to two processors, no graphics (second line); and
  • turn on KVM acceleration (last line) and specify the 'host' cpu type, which requires KVM to be enabled (first line).
If you see a grub boot screen displayed, just press Enter to continue. A small number of error messages may also be shown, but after a few seconds the bionic image should start booting; you will see its kernel output followed by standard systemd traces, printed to the same console window in which you issued the above qemu-system-aarch64 call.

Shortly thereafter, if all is well, you should be greeted by a login prompt. Log in as user ubuntu, password passw0rd.

Once in, you can then play around with your Ubuntu system! Here's a screenshot from one of my RPi3B+'s:

[Image: screenshot showing a Gentoo host console (top) and a virtualized Ubuntu guest console (bottom), running different kernels]

Note how the Gentoo (top console) and Ubuntu (bottom console) instances are running different kernels - this is not simply a chroot. And also that the system load is very low, due to the efficiency of the kvm virtualization.

One point: with networking set up as here, you can't ping from inside the guest system, but you can use wget etc., and so apt-get works. This networking limitation can easily be resolved, but this isn't a detailed qemu tutorial.
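For example, to convince yourself that the guest's outbound networking really is fine despite ping failing, you could run something like this inside the guest (any reachable HTTP site will do; archive.ubuntu.com is just a convenient choice):

Code: Select all

wget -qO /dev/null http://archive.ubuntu.com/ && echo "network OK"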

On the text console, you can use Ctrl-a then c to switch between the guest console (e.g. the bash prompt) and the qemu monitor prompt (e.g. see the top of the lower console window in the above screenshot).
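A few monitor commands worth knowing (press Ctrl-a then c again to return to the guest console): "info kvm" confirms that KVM acceleration is actually in use, "system_powerdown" asks the guest to shut down cleanly, and "quit" kills the VM immediately (unclean, so use with care):

Code: Select all

(qemu) info kvm
(qemu) system_powerdown
(qemu) quit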

have fun ^-^

sakaki

PS: there's nothing particularly Gentoo-specific about the above; if you have a 64-bit kernel with the appropriate options set, and a modern build of qemu, you should be able to try it on another distro. The current kernel config may be viewed here, diff against the bcmrpi3_defconfig here.

sakaki

Re: Run another OS on your RPi3 as a virtualized QEMU guest under KVM (64-bit)

Mon Oct 08, 2018 7:39 pm

Hello,

In my previous post, I ran through how to run a second, guest OS on your 64-bit RPi3, under KVM, with the host OS running an xfce4 desktop, and the guest in console-only mode.

But what if you wanted a GUI on your guest OS also? Resources are super-tight on the RPi3 platform, but it is just about possible ^-^ So, in this follow-up guide I'll show you a few ways to go about it, picking up from where we left off last time. (To see a screenshot of the final result, scroll to the end of this post.)

As before, we'll be targeting the same hardware, host image and guest OS as last time. You can of course adapt these instructions to your own requirements (most other 64-bit aarch64 OSes can be switched in as the guest, and you could use a non-Gentoo host if you wished; I have chosen that particular image because it ships with KVM support in its kernel, and I happen to maintain it ^-^).

Just before we dive in, a brief introduction to terminology may be in order. KVM (here) stands for "Kernel-based Virtual Machine": a virtualization infrastructure for the Linux kernel that turns it into a hypervisor. This technology, together with some userspace "glue" (here, QEMU), allows two (or more) distinct operating systems to efficiently, and securely, share a common SoC. Unlike emulation, both host and guest run almost all instructions natively (without translation). And unlike a chroot, this arrangement allows both host and guest to run distinct kernels (and init systems). A short intro to KVM on ARM may be found here (this is for 32-bit v7, but the 64-bit v8 code is not too different, so the concepts are still relevant).
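Incidentally, if you want to check that the kernel you're running really does expose KVM before proceeding, the device node should be present on the host (the second line is optional, and assumes your kernel publishes its config via /proc/config.gz):

Code: Select all

ls -l /dev/kvm
zgrep -E 'CONFIG_KVM=|CONFIG_KVM_ARM_HOST=' /proc/config.gz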

OK, I'm going to assume you have already completed the setup from the previous post; so if you haven't, begin by doing that first.

Then, if you're currently running the guest VM, shut it down (you can do so by running "sudo shutdown" as the ubuntu user). Next, restart it by issuing the following (as your regular user "demouser", working on the gentoo-on-rpi3-64bit image booted on an RPi3 B or B+) from a console in the qemu-test directory:

Code: Select all

qemu-system-aarch64 -M virt -cpu host \
  -m 384M -smp 2  -nographic \
  -bios QEMU_EFI.fd \
  -cdrom seed-kvm-bionic-01.iso \
  -drive if=none,file=bionic-image-01.img,id=hd0 -device virtio-blk-device,drive=hd0 \
  -device virtio-net-device,netdev=vmnic -netdev user,id=vmnic,hostfwd=tcp::5555-:22  \
  -accel kvm 2>/dev/null
Hint: if this fails with an exception, simply issue "pkill qemu" from another window and try again. This happens sometimes when booting the UEFI firmware.

This is almost the same as last time, but with two changes:
  • We have allocated more memory (384MiB vs 256MiB) to the guest, a bare minimum to run a GUI; and
  • We have requested QEMU forward port 5555/tcp on the host to port 22/tcp on the guest; this will allow us to connect via ssh.
As before, once this is run, the guest will start up (you may need to press Enter a few times at the GRUB boot stage). A minute or so later you should see an Ubuntu login prompt (in the same terminal as you issued the qemu-system-aarch64 command, since -nographic was specified).

Now open a new terminal window on your gentoo-on-rpi3-64bit desktop, and (as "demouser") issue:

Code: Select all

ssh ubuntu@127.0.0.1 -p 5555
Hint: if your browser shows "email protected" in place of the login specifier in this (or any subsequent) ssh command, that's just the board's anti-spam system being a bit over-zealous: the specifier is ubuntu at-sign 127.0.0.1 (see the screenshot at the end of this post).

Enter the password for the "ubuntu" user (passw0rd) when prompted, and you should be in, connected independently from the QEMU console link (which is in the window where you issued qemu-system-aarch64, just a moment ago), via localhost port 5555 (which QEMU forwards to port 22 - the standard ssh port - on the guest).

Now that you're logged in as ubuntu, take the opportunity to update your system, reboot, then install some necessary software on the guest. Working within the ssh window (as the "ubuntu" user), issue:

Code: Select all

sudo apt-get update
sudo apt-get -y upgrade
sudo reboot
Once the system comes back up again within QEMU, re-establish the ssh/ubuntu terminal connection as before, then issue (from that ssh terminal, as the "ubuntu" user):

Code: Select all

sudo apt-get install -y mousepad 
mousepad is a simple editor app which can run on X11. It will pull in a number of additional dependency libraries, as the image starts with no GUI support at all, so please be patient.

Once the install is complete, you can try out a first approach to using a GUI from your guest: X11 forwarding onto your host's X server. To do so, open a fresh terminal on your (host) desktop and issue (as "demouser"):

Code: Select all

ssh -f -T ubuntu@127.0.0.1 -p 5555 -Y mousepad /etc/os-release 2>/dev/null
The -f tells ssh to background after asking for a password, and the -T disables pseudo-terminal allocation. The -Y enables (for convenience) trusted X11 forwarding, allowing guest applications to use your host's X11 server for graphical I/O.

Enter ubuntu's password (passw0rd) when prompted, and then, if all is well, you should find that an editor window opens on your host desktop, but the underlying mousepad application (and os-release file it is editing, as you can see from its content) is on the guest system.

This is a highly efficient way to use the guest's GUI applications, since only one X server is running (your host's). However, it has a number of drawbacks. There are security implications to opening up your host's X11 server (see these notes, for example), and while it is possible to work around these (using e.g. Xephyr inside firejail on the host), it's still not an ideal solution for a full guest desktop.

So, while this approach is handy to keep in mind for quick one-off access to individual apps, let's next put together a full "remote desktop" for our guest.

There are various ways to approach this issue, but since QEMU on aarch64 on the RPi3 does not currently support the QXL paravirtual graphics card (which would be the default route on x86_64), we'll instead run a virtual framebuffer (Xvfb)-backed X11 server on the guest, run a lightweight desktop on that (xfce4), and forward the resulting (otherwise invisible) desktop to the host via VNC.

OK, to begin, install the necessary software on your guest. Running as the ubuntu user, within the ssh terminal again, issue:

Code: Select all

sudo apt-get install -y xfce4 xvfb x11vnc xfce4-taskmanager xfce4-cpugraph-plugin xfce4-terminal links2
I don't recommend using --no-install-recommends with xfce4; you'll end up missing things like dbus-x11 which make it essentially unusable.

Let this run to completion (it will take a while: ~80MiB of archives are downloaded, which take up ~400MiB when installed - the qcow2 disk image has sufficient space though). In the above:
  • xfce4 is a relatively lightweight desktop system for X11 (you could just run a windowing manager, like openbox, but we're trying to push the envelope here ^-^);
  • xvfb is a virtual framebuffer for X11 (a pretend graphics card that renders to a memory buffer);
  • x11vnc is a VNC server for X11 (we'll use this in preference to QEMU's bundled VNC server, which does not always work correctly with aarch64);
  • xfce4-taskmanager is a simple process monitor app for xfce4; installing it is optional for a minimal setup, but recommended;
  • xfce4-cpugraph-plugin is a panel plugin for xfce4 that displays a running CPU load graph; installing it is optional (but nice to have);
  • xfce4-terminal is a nice terminal emulator for xfce4; optional (but nicer than xterm!); and
  • links2 is a super-light-weight web browser that can run in text or X11 mode; installing it is optional (but having some sort of web-browsing capability is nice when configuring a system).
Once the above has completed, you can start xfce4 on your guest!

Begin by creating an X11 virtual framebuffer on display :1, and putting it into the background. We'll make this 800x600 pixels at 24-bit depth; you can vary this as desired (but don't go crazy, there isn't a lot of memory to play with here ^-^). Working as the ubuntu user in the ssh terminal, issue:

Code: Select all

export DISPLAY=:1
Xvfb $DISPLAY -screen 0 800x600x24 &
Hint - you can use nohup with these commands if desired.
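For example, the nohup form of the Xvfb command above would look like this:

Code: Select all

export DISPLAY=:1
nohup Xvfb $DISPLAY -screen 0 800x600x24 >/dev/null 2>&1 &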

Now start the xfce4 desktop itself, for the ubuntu user! Still in the same terminal, issue:

Code: Select all

startxfce4 &>/dev/null&
Apart from a bit of CPU activity, nothing will apparently happen, but that's because the desktop is being rendered to our new virtual framebuffer only (this is the same trick sometimes used to run GUIs on headless cloud VM images etc.).

Next, start up the x11vnc server, serving the same screen and display. Still as the ubuntu user, in the same console window, issue:

Code: Select all

x11vnc -display $DISPLAY -bg -nopw -listen localhost -xkb &>/dev/null
The -bg instructs the server to background itself after setup; the -nopw disables the "no password" warning; the -listen localhost directive instructs the server to only accept connections on (guest) 127.0.0.1; and -xkb uses the XKEYBOARD extension, hopefully avoiding most keymapping problems.

With that done, you now have an xfce4 desktop running over an X11 server on the guest, rendering to a virtual Xvfb framebuffer, and available for remote viewing via VNC on (guest) 127.0.0.1, port 5900/tcp (the port number is by convention; you can specify a different one if you like).
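If you'd like to double-check that the VNC server really is up and listening where expected, you can (still as the ubuntu user, on the guest) inspect the listening sockets:

Code: Select all

ss -tln | grep 5900
You should see a LISTEN entry bound to 127.0.0.1:5900.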

We're almost there now, but two problems remain.

The first issue is that the gentoo-on-rpi3-64bit image does not ship with a VNC client pre-installed. Fortunately, net-misc/tigervnc is available on the binhost (as a binary package). To install it, issue (as the "demouser" user on a terminal in the host desktop):

Code: Select all

sudo emerge --verbose --noreplace net-misc/tigervnc
This shouldn't take long. The program it installs may be launched from the commandline as "vncviewer" (and will also appear in the desktop Applications-> Internet menu, as "TigerVNC Viewer").

The second issue is that the guest's localhost port 5900 isn't visible on the host system by default. There are various ways around this, but to avoid having to set up multiple network cards on QEMU, here we'll just take advantage of another neat / scary ssh feature, port forwarding. Using this, we can request ssh transparently forward traffic from a given port on the host system to another on the remote side (including replies).

To do so here, working in the same host terminal, and still as demouser, issue:

Code: Select all

ssh -f -N -T -L 5910:localhost:5900 ubuntu@127.0.0.1 -p 5555
The -f and -T options we've seen before; the -N option tells ssh there is no command payload. The -L component sets up the port forwarding.

Enter ubuntu's password (passw0rd) when prompted. Nothing will appear to happen, but the tunnel for VNC traffic is now in place between host and guest.

All that remains is to open a viewer! Still working as demouser, in the same host terminal, issue:

Code: Select all

vncviewer 127.0.0.1:5910 &>/dev/null&
And with luck (and OOM-killer permitting ^-^) a window should open on the host, showing the guest desktop. Here's a screenshot of one of my gentoo-on-rpi3-64bit RPi3's, on which the above steps have been run:

[Image: screenshot of the host xfce4 desktop, showing the guest's xfce4 desktop in a VNC viewer window (left), alongside the QEMU console, ssh terminal and X-forwarded mousepad windows (right)]

Note that for this setup, I changed the icons (using the Xfce Applications -> Settings -> Appearance tool) after launch to the Ubuntu-Mono-Light set, downloaded (using the links browser!) an Ubuntu 18.04 desktop png for appearance's sake, and installed the cpugraph plugin. But everything else is vanilla.

If you look at the above screenshot, you'll notice that:
  • There are four different connections to the guest in use: the bottom right QEMU terminal (which I have here rotated into QEMU monitor mode using Ctrl-a c; do this again and you'd get a synthetic serial console login prompt); the ssh terminal connection (top right, here already logged in as the ubuntu user); the X-forwarded mousepad editor (one from bottom on the right) and the VNC desktop itself (the large window on the left).
  • The host and guest are running different kernels - this is not simply a chroot. Compare the output from "uname -a" in the Gentoo terminal (one from top on the right) and in the terminal window in the VNC guest desktop (and also in the ssh terminal).
  • You can't see it here, but they're running different init systems too: OpenRC on the host, and systemd on the guest.
  • System load is low (see the cpu graph plugins, in the top horizontal panel, on host and guest). KVM virtualization imposes very low overhead, as most code runs natively on both guest and host.
  • The guest (as we specified when invoking qemu-system-aarch64) has only 2 cpu cores available, whereas the host has 4 (see the same graph plugins). You can do fun things with cpu affinity if you really want to minimize the latter stepping on the toes of the former (see the sketch just after this list), but I haven't for this simple example.
  • You can launch apps etc. on the guest as you wish - it is a full xfce4 system. So in the above, I've opened the thunar file browser and an xfce4-terminal.
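As a sketch of the cpu affinity idea mentioned above (untested in this walkthrough): you could launch QEMU pinned to two specific host cores, so the guest's vcpu threads never compete with whatever you keep on the other two. For example, using taskset (from util-linux), with the same options as before:

Code: Select all

# pin QEMU (and hence the guest's vcpu threads) to host cores 2 and 3
taskset -c 2,3 qemu-system-aarch64 -M virt -cpu host \
  -m 384M -smp 2 -nographic \
  -bios QEMU_EFI.fd \
  -cdrom seed-kvm-bionic-01.iso \
  -drive if=none,file=bionic-image-01.img,id=hd0 -device virtio-blk-device,drive=hd0 \
  -device virtio-net-device,netdev=vmnic -netdev user,id=vmnic,hostfwd=tcp::5555-:22 \
  -accel kvm 2>/dev/null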
Of course, this is only a proof-of-concept/demo setup. If you used such a thing in a production scenario, you'd have services to launch the various components, with restart-on-failure etc. But hopefully it shows what can be done.

That's it for this time. I may follow up with one more post about SPICE in the future, if this is proving useful to anyone ^-^

best, sakaki

PS: there is the small question of why any sane person would want to do any of this in the first place, of course ^-^. It is perfectly possible to chroot most guest systems (even 32-bit guest on a 64-bit host), provided you are comfortable sharing a kernel, and there aren't any init system-expectation mismatches. KVM does provide pretty strong isolation I suppose, so if you had a server component you wanted to absolutely lock down (tor, for example) you could put it in a firejail chroot on a hardened guest OS, and then pipe traffic to it from the host... mostly though, on such a resource-limited system as the RPi3, it's for fun ^-^

PPS: forcing the X11 and VNC interaction over an ssh tunnel locally will involve encryption / decryption overhead. If you wanted to do the above in a production system, it'd be better to set up another virtual network card on QEMU shared by the host and guest, and vector traffic over that. Or use a paravirtualized graphics card (but unfortunately these don't seem to be available for vc4 / aarch64 at present, although I'd be happy to be proven wrong on that point).

gilius

Re: Run another OS on your RPi3 as a virtualized QEMU guest under KVM (64-bit)

Mon Nov 26, 2018 1:04 am

Very nice topic and glad to see the KVM modules are still maintained in Gentoo! It's useful as I haven't been able to get Ubuntu 18.04 to work properly when running natively on the Rpi3B+.

Just a shame we can't (yet) get Windows on ARM to run due to some problems (deliberate?) with KVM:
https://github.com/virtio-win/kvm-guest ... issues/177
(but I need to follow up on what mariobalanica mentioned today about using the right USB controller)

code_exec

Re: Run another OS on your RPi3 as a virtualized QEMU guest under KVM (64-bit)

Mon Jan 28, 2019 7:38 am

Unfortunately, KVM under 1GB RAM is quite restrictive. At most, you'll be able to allocate 600MB RAM to your VM and that will make your host system very slow. We'll need a Pi with at least 2GB RAM before we can make more use out of KVM. But good tutorial. I'm assuming that this can also be done on an x64 host without using KVM acceleration.
Ubuntu 18.04 LTS desktop images for the Raspberry Pi 3.

https://github.com/CodeExecution/Ubuntu-ARM64-RPi
