Projects like this and Docker make me seriously wonder where software engineering is going. Don't get me wrong, I don't mean to criticize Docker or Toro in particular. It's the increasing dependency on such approaches that bothers me.
Docker was conceived to solve the problem of things "working on my machine", and not anywhere else. This was generally caused by the differences in the configuration and versions of dependencies. Its approach was simple: bundle both of these together with the application in unified images, and deploy these images as atomic units.
Somewhere along the line, however, the problem has mutated into "works on my container host". How is that possible? Turns out that with larger modular applications, the configuration and dependencies naturally demand separation. This results in them moving up a layer, in this case creating a network of inter-dependent containers that you now have to put together for the whole thing to start... and we're back to square one, with way more bloat in between.
Now hardware virtualization. I like how AArch64 generalizes this: there are 4 levels of privilege baked into the architecture. Each level has control over the ones below it and can call into the one immediately above to request a service. Simple. Let's narrow our focus to the lowest three: EL0 (classically the user space), EL1 (the kernel), EL2 (the hypervisor). EL0, in most operating systems, isn't capable of doing much on its own; its sole purpose is to do raw computation and request I/O from EL1. EL1, on the other hand, has the power to talk directly to the hardware.
Everyone is happy, until the complexity of EL1 grows out of control and becomes a huge attack surface, difficult to secure and easy to exploit from EL0. Not good. The naive solution? Go a level above, and create a layer that will constrain EL1, or actually, run multiple, per-application EL1s, and punch some holes through for them to still be able to do the job—create a hypervisor. But then, as those vaguely defined "holes", also called system calls and hypercalls, grow, won't the attack surface grow with them?
Or in other words, with the user space shifting to EL1, will our hypervisor become the operating system, just like docker-compose became a dynamic linker?
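To make those "holes" concrete, here's a rough sketch of the lower one (my own illustration, AArch64-only, and assuming the Linux convention where the syscall number goes in x8): EL0 asks EL1 for a service with `svc`, and EL1 asks EL2 the same way with `hvc`.

```c
/* Sketch: the EL0 -> EL1 "hole" on AArch64, using the Linux convention
 * (syscall number in x8, return value in x0). The EL1 -> EL2 hole is the
 * same shape, one level up, via `hvc` instead of `svc`. */
#include <stdio.h>

static long getpid_via_svc(void)
{
    register long nr  asm("x8") = 172;   /* __NR_getpid on arm64 */
    register long ret asm("x0");
    asm volatile("svc #0"                /* trap into the EL1 kernel */
                 : "=r"(ret)
                 : "r"(nr)
                 : "memory");
    return ret;
}

int main(void)
{
    printf("pid via svc: %ld\n", getpid_via_svc());
    /* From EL1 the pattern repeats, one level up:
     *     mov x0, #<hypercall id agreed with the EL2 hypervisor>
     *     hvc #0
     * Every svc/hvc number the neighbouring layers agree on is another
     * hole punched through the boundary. */
    return 0;
}
```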
I see a number of assumptions in your post that don't match my view of the picture.
Containers arose as a way to solve the dependency problems created by traditional Unix. They grew out of tools like chroot, BSD jails, and Solaris Zones. Containers make it possible to deploy dependencies that cannot be simultaneously installed on a traditional Unix host system. It's not a UNIX architecture limitation but rather a result of POSIX + tradition; e.g. Nix also solves this, but differently.
Containers (like chroot and jail before them) also help ensure that a running service does not depend on the parts of the filesystem it wasn't given access to. Additionally, containers can limit network access, and process tree access.
These limitations are not a proper security boundary, but definitely a dependency boundary, helping avoid spaghetti-style dependencies, and surprises like "we never realized that our ${X} depends on ${Y}".
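To make the "dependency boundary" point concrete, a minimal sketch of the primitives involved (my own, Linux-only, assuming unprivileged user namespaces are enabled; real runtimes do much more, e.g. pivot_root and cgroups):

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* Ask the kernel for private namespaces. The user namespace comes
     * first so the rest works without root. */
    if (unshare(CLONE_NEWUSER | CLONE_NEWNS | CLONE_NEWPID | CLONE_NEWNET) != 0) {
        perror("unshare");
        return 1;
    }

    pid_t child = fork();          /* first child becomes PID 1 inside */
    if (child == 0) {
        /* A private process tree: getpid() reports 1 here. The new net
         * namespace starts with just a down loopback, and the new mount
         * namespace lets a runtime swap the root filesystem out from
         * under the host view (pivot_root omitted in this sketch). */
        printf("inside: pid=%d\n", (int)getpid());
        _exit(0);
    }
    waitpid(child, NULL, 0);
    return 0;
}
```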
Then, there's the Fundamental Theorem of Software Engineering [1], which states: "We can solve any problem by introducing an extra level of indirection." So yes, expect the number of levels of indirection to grow everywhere in the stack. A wise engineer can expect to merge or remove some levels here and there, when the need for them is gone, but they would never expect that new levels of indirection should stop emerging.
[1]: https://en.wikipedia.org/wiki/Fundamental_theorem_of_softwar...
To be honest, I've read your response 3 times and I still don't see where we disagree, assuming that we do.
I've mostly focused on the worst Docker horrors I've seen in production, extrapolating that to the future of containers, as pulling in new "containerized" dependencies will inevitably become just as effortless as it currently is with regular dependencies in the new-style high-level programming languages. You've primarily described a relatively fresh or well-managed Docker deployment, while admitting that spaghetti-style dependencies have become the norm and new layers will pile up (and by extension, make things hard to manage).
I think our points of view don't actually collide.
Containers got popular at a time when an increasing number of people were finding it hard to install software on their systems locally - especially if you were, for instance, having to juggle multiple versions of Ruby or multiple versions of Python, and those linked to various major versions of C libraries.
Unfortunately containers have always had an absolutely horrendous security story and they degrade performance by quite a lot.
The hypervisor is not going away anytime soon - it is what the entire public cloud is built on.
While you are correct that containers do add more layers - unikernels go the opposite direction and actively remove those layers. Also, imo the "attack surface" is by far the smallest security benefit - other architectural concepts, such as the complete lack of an interactive userland, are far more beneficial when you consider what an attacker actually wants to do after landing on your box (e.g. run their own software).
When you deploy to AWS you have two layers of Linux - one that AWS runs and one that you run - but you don't really need that second layer and you can have much faster/safer software without it.
I can understand the public cloud argument; if the cloud provider insists on you delivering an entire operating system to run your workloads, a unikernel indeed slashes the number of layers you have to care about.
Suppose you control the entire stack though, from the bare metal up. (Correct me if I'm wrong, but) Toro doesn't seem to run on real hardware, you have to run it atop QEMU or Firecracker. In that case, what difference does it make if your application makes I/O requests through paravirtualized interfaces of the hypervisor or talks directly to the host via system calls? Both ultimately lead to the host OS servicing the request. There isn't any notable difference between the kernel/hypervisor and the user/kernel boundary in modern processors either; most of the time, privilege escalations come from errors in the software running in the privileged modes of the processor.
Technically, in the former case, besides exploiting the application, a hypothetical attacker will also have to exploit a flaw in QEMU to start processes or gain further privileges on the host, but that's just due to a layer of indirection. You can accomplish this without resorting to hardware virtualization. Once in QEMU, the entire assortment of your host's system calls and services is exposed, just as if you ran your code as a regular user space process.
This is the level at which you want to block exec() and other functionality your application doesn't need, so that neither QEMU nor your code run directly can do anything outside its scope. Adding a layer of indirection while still leaving the user/kernel, or unikernel/hypervisor, junction points unsupervised will only stop unmotivated attackers looking for low-hanging fruit.
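What I mean by "block exec() at this level", as a minimal sketch using libseccomp (link with -lseccomp; the allow-by-default profile with a kill-on-exec rule is just an illustration, not a recommendation for any particular workload — QEMU's -sandbox option and systemd's SystemCallFilter= build on the same mechanism):

```c
#include <seccomp.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ALLOW);    /* default: allow */
    if (ctx == NULL)
        return 1;

    /* Kill the process the moment it tries to start another program. */
    seccomp_rule_add(ctx, SCMP_ACT_KILL, SCMP_SYS(execve), 0);
    seccomp_rule_add(ctx, SCMP_ACT_KILL, SCMP_SYS(execveat), 0);

    if (seccomp_load(ctx) != 0) {
        perror("seccomp_load");
        return 1;
    }

    printf("filter installed; exec is now off the table\n");
    execlp("sh", "sh", "-c", "echo this should never print", (char *)NULL);
    return 1;   /* not reached: the execve above gets the process killed */
}
```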
> In that case, what difference does it make if your application makes I/O requests through paravirtualized interfaces of the hypervisor or talks directly to the host via system calls?
Hypervisors expose a much smaller API surface area to their tenants than an operating system does to its processes which makes them much easier to secure.
That is an artifact of implementation. Monolithic operating systems with tons of shared services expose lots to their tenants. Austere hypervisors, the ones with small API surface areas, basically implement a microkernel interface, yet both expose significantly more surface area and offer a significantly worse guest experience than microkernels. That is why high-security systems designed for multi-level security for shared tenants that need to protect against state actors use microkernels instead of hypervisors.
I can't speak for all the various projects but imo these aren't made for bare metal - if you want true bare metal (metal you can physically touch) use Linux.
One of the things that might not be so apparent is that when you deploy these to something like AWS, all the users/process mgmt/etc. gets shifted up and out of the instance you control and put into the cloud layer - I feel that would be hard to do with physical boxen because it becomes a slippery slope of certain operations (such as updates) needing auth, for instance.
> other architectural concepts such as the complete lack of an interactive userland is far more beneficial when you consider what an attacker actually wants to do after landing on your box
What does that have to do with unikernel vs more traditional VMs? You can build a rootfs that doesn't have any interactive userland. Lots of container images do that already.
I am not a security researcher, but I wouldn't think it would be too hard to load your own shell into memory once you get access to it. At least, compared to pulling off an exploit in the first place.
I would think that merging kernel and user address spaces in a unikernel would, if anything, make it more vulnerable than a design using similar kernel options that did not attempt to merge everything into the kernel. Since now every application exploit is a kernel exploit.
A shell by design is explicitly made to run other programs. You type in 'ls', 'cd', 'cat', etc. but those are all different programs. A "webshell" can work to a degree as you could potentially upload files, cat files, write to files, etc. but you aren't running other programs under these conditions - that'd be code you're executing - scripting languages make this vastly easier than compiled ones. It's a lot more than just slapping a heavy-handed seccomp profile on your app.
Also merging the address space is not a necessity. In fact - 64-bit (which is essentially all modern cloud software) mandates virtual memory to begin with and many unikernel projects support ELF loading.
I think they were talking more about the degraded performance.
In terms of the security aspects though, how do security holes in a layer that restricts things more than not having it degrade security? Seems like saying that CVEs in a browser's JavaScript sandbox degrade the browser's security more than just not having a sandbox.
Duplicating a networking and storage layer on top of the existing storage/networking layers that containers, and orchestrators such as k8s, provide absolutely degrades performance - full stop. No one runs containers raw (w/out an underlying vm) in the cloud - they always exist on top of vms.
The problem with "container" security is that even in this thread many people seem to think that it is a security barrier of some kind when it was never designed to be one. The V8 sandbox was specifically created to deal with sandboxing. It still has issues but at least it was thought about and a lot of engineering went into it. Container runtimes are not exported via the kernel. Unshare is not named 'create_container'. A lot of the container issues we see are runtime issues. There are over a half-dozen different namespaces that are used in different manners and expose hard-to-understand gotchas. The various container runtimes decide themselves how to deal with these, and they have to deal with all the issues in their code when using them. A very common class of bug that these runtimes get hit by is TOCTOU (time of check to time of use) vulns that get exposed in these runtimes.
Right now there is a conversation about the upcoming change to systemd that runs sshd on vsock by default (you literally have to disable it via a kernel cli flag - systemd.ssh_auto=no) - guess what one of the concerns is? Vsock isn't bound to a network namespace. This is not itself a vulnerability but it most definitely is going to get taken advantage of in the future.
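For anyone unfamiliar with the TOCTOU pattern mentioned above, a minimal sketch (the path and the "fix" are purely illustrative, not taken from any real runtime; real ones go further, e.g. openat2() with RESOLVE_NO_SYMLINKS):

```c
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

/* The racy check-then-use pattern: between lstat() and open(), something
 * inside the container can swap the path for a symlink to a host file. */
int open_config_racy(const char *path)
{
    struct stat st;
    if (lstat(path, &st) != 0 || !S_ISREG(st.st_mode))
        return -1;                   /* "check": looks like a plain file */
    return open(path, O_RDONLY);     /* "use": the path may have changed */
}

/* Safer: open first, refuse a trailing symlink, then validate the object
 * we actually hold -- fstat() on the fd cannot be raced the same way. */
int open_config_safer(const char *path)
{
    int fd = open(path, O_RDONLY | O_NOFOLLOW);
    if (fd < 0)
        return -1;
    struct stat st;
    if (fstat(fd, &st) != 0 || !S_ISREG(st.st_mode)) {
        close(fd);
        return -1;
    }
    return fd;
}

int main(void)
{
    int fd = open_config_safer("/tmp/some-container-rootfs/etc/resolv.conf");
    if (fd >= 0)
        close(fd);
    return 0;
}
```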
With a standard Windows Server license you are only allowed to have two Hyper-V virtual machines but unlimited "windows containers". The design is similar to Linux, with namespaces bolted onto the main kernel, so they don't provide any better security guarantees than Linux namespaces.
Very useful if you are packaging trusted software and don't want to upgrade your Windows Server license.
Windows containers are actually quite nice once you get past a few issues. Perf is the biggest, as it seems to run in a VM on Windows 11.
Perf is much better on Windows server. It's actually really pleasant to get your office appliances (a build agent etc) in a container on a beefy Windows machine running Windows server.
>what an attacker actually wants to do after landing on your box.
Aren't there ways of overwriting the existing kernel memory/extending it to contain a new application if an attacker is able to attack the running unikernel?
What protections are provided by the unikernel to prevent this?
To be clear there are still numerous attacks one might lob at you. For instance, if you are running a node app and the attacker uploads a new JS file that they can have the interpreter execute, that's still an issue. However, you won't be able to start running random programs like curling down some cryptominer or something - it'd all need to be contained within that code.
What becomes harder is if you have a binary that forces you to rewrite the program in memory as you suggest. That's where classic page protections come into play such as not exec'ing rodata, not writing to txt, not exec'ing heap/stack, etc. Just to note that not all unikernel projects have this and even if they do it might be trivial to turn them off. The kernel I'm involved with (Nanos) has other features such as 'exec protection' which prevents that app from exec-mapping anything not already explicitly mapped exec.
Running arbitrary programs, which is what a lot of exploit payloads try to achieve, is pretty different than having to stuff whatever they want to run inside the payload itself. For example if you look at most malware it's not just one program that gets run - it's like 30. Droppers exist solely to load third party programs on compromised systems.
> The kernel I'm involved with (Nanos) has other features such as 'exec protection' which prevents that app from exec-mapping anything not already explicitly mapped exec.
Does this mean JIT (and I guess most binary instrumentation (debuggers) / virtualization / translation tech) won't run as expected?
If the stack and heap are non-executable and page tables can't be modified then it's hard to inject code. Whether unikernels actually apply this hardening is another matter.
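A minimal sketch of the injection step that such hardening has to refuse (stock Linux will normally allow the mprotect() below; a W^X policy, or the "exec protection" described upthread, works by denying it):

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    /* Step 1: ordinary writable, non-executable memory. */
    unsigned char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED)
        return 1;

    /* Step 2: the "payload" arrives like any other data. */
    memset(buf, 0xC3, 16);           /* x86-64 `ret` opcodes, as a stand-in */

    /* Step 3: ask for the page to become executable. This call is the
     * chokepoint -- refuse PROT_EXEC on anything that was ever writable
     * and the injected bytes stay inert. */
    if (mprotect(buf, 4096, PROT_READ | PROT_EXEC) != 0) {
        perror("mprotect");
        puts("denied: the injected bytes can never run");
        return 0;
    }
    puts("allowed: on this system the page is now executable");
    return 0;
}
```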
I always thought of Docker as a "fuck it" solution. It's the epitome of giving up. Instead of some department at a company releasing a libinference.so.3 and a libinference-3.0.0.x86_64.deb they ship some docker image that does inference and call it a microservice. They write that they launched, get a positive performance review, get promoted, and the Docker containers continue to multiply.
Python package management is a disaster. There should be ways of having multiple versions of a package coexist in /usr/lib/python, nicely organized by package name and version number, and import the exact version your script wants, without containerizing everything.
Electron applications are the other type of "fuck it" solution. There should be ways of writing good-looking native apps in JavaScript without actually embedding a full browser. JavaScript is actually a nice language to write front-ends in.
> Python package management is a disaster. There should be ways of having multiple versions of a package coexist in /usr/lib/python, nicely organized by package name and version number, and import the exact version your script wants, without containerizing everything.
I think there's merit to your criticisms of the way docker is used, but it also seems like it provides substantial benefits for application developers. They don't need to beg OS maintainers to update the package, and they don't need to maintain builds for different (OS, version) targets any more.
They can just say "here's the source code, here's a container where it works, the rest is the OS maintainer's job, and if Debian users running 10 year old software bug me I'm just gonna tell them to use the container"
Yeah I'm not against Docker in its entirety. I think it is good for development purposes to emulate multiple different environments and test things inside them, just not as a way to ship stuff.
There is a vast amount of complexity involved in rolling things from scratch today in this fractured ecosystem and providing the same experience for everyone.
Sometimes, the reduction of development friction is the only reason a product ends up in your hands.
I say this as someone whose professional toolkit includes Docker, Python and Electron; Not necessarily tools of choice, but I'm one guy trying to build a lot of things and life is short. This is not a free lunch and the optimizer within me screams out whenever performance is left on the table, but everything is a tradeoff. And I'm always looking for better tools, and keep my eyes on projects such as Tauri.
Agree on all fronts. The advent of Dockerfiles as a poor man's packaging system and the per-language package managers has set the industry back several years in some areas IMHO.
> This results in them moving up a layer, in this case creating a network of inter-dependent containers that you now have to put together for the whole thing to start... and we're back to square one, with way more bloat in between.
The difference is that you can move that whole bunch of interlinked containers to another machine and it will work. You don't get that when running on bare hardware. The technology of "containers" is ultimately about having the kernel expose a cleaned up "namespaced" interface to userspace running inside the container, that abstracts away the details of the original machine. This is very much not intended as "sandboxing" in a security sense, but for most other system administration purposes it gets pretty darn close.
> This results in them moving up a layer, in this case creating a network of inter-dependent containers that you now have to put together for the whole thing to start... and we're back to square one, with way more bloat in between.
Yea, with unneeded bloat like rule-based access controls, ACS and secret management. Some comments on this site.
At some point, few people even understand the whole system and whether all these layers are actually accomplishing anything.
It’s especially bad when the code running at rarified levels is developed by junior engineers and “sold” as an opaque closed source thing. It starts to actually weaken security in some ways but nobody is willing to talk about that.
> This results in them moving up a layer, in this case creating a network of inter-dependent containers that you now have to put together for the whole thing to start... and we're back to square one, with way more bloat in between.
I think you're over-egging the pudding. In reality, you're unlikely to use more than 2 types of container host (local dev and normal deployment maybe), so I think we've moved way beyond square 1. Config is normally very similar, just expressed differently, and being able to encapsulate dependencies removes a ton of headaches.
Nix is where we're going. Maybe not with the configuration language that annoys python devs, but declarative reproducible system closures are a joy to work with at scale.
Reproducible can have a lot of meanings. Nix guarantees that your build environment + commands are the same. It still uses all the usual build tools and it would be trivial to create a non-reproducible binary (--impure).
I've been running either Qubes OS or KVM/QEMU based VMs as my desktop daily driver for 10 years. Nothing runs on bare metal except for the host kernel/hypervisor and virt stack.
I've achieved near-native performance for intensive activities like gaming, music and visual production. Hardware acceleration is kind of a mess but using tricks like GPU passthrough for multiple cards, dedicated audio cards and block device passthrough, I can achieve great latency and performance.
One benefit of this is that my desktop acts as a mainframe, and streaming machines to thin clients is easy.
My model for a long time has been not to trust anything I run, and this allows me to keep both my own and my client's work reasonably safe from a drive-by NPM install or something of that caliber.
Now that I also use an Apple Silicon MacBook as a daily driver, I very much miss the comfort of a fully virtualized system. I do stream in virtual machines from my mainframe. But the way Tahoe is shaping up, I might soon put Asahi on this machine and go back to a fully virtualized system.
I think this is the ideal way to do things; however, it will need to operate mostly transparently to an end user or they will quickly get security fatigue; the sacrifices involved today are not for those who lack patience.
Also, relevant XKCDs:
https://www.explainxkcd.com/wiki/index.php/2044:_Sandboxing_...
https://www.explainxkcd.com/wiki/index.php/2166:_Stack
I think it's fine if you do it for yourself. It's a bit of a poor man's Linux-turned-microkernel solution. In fact, I work like this too, and this extends to my Apple Silicon Mac. The separation does have big security advantages, especially when different pieces of hardware are exclusively passed to the different, closed-off "partitions" of the system and the layer orchestrating everything is as minimal as it gets, or at least as guarded against the guests as it gets.
What worries me is when this model escalates from being cobbled together by a system administrator with limited resources to becoming baked into the design of software; the appropriation of the hypervisor layer by software developers who are reluctant to untangle the mess they've created at the user/kernel boundary of their program and instead start building on top of hardware virtualization for "security", only to go on and pollute the hypervisor as the level of host OS access proves insufficient. This is beautifully portrayed by the first XKCD you've linked. I don't want to lose the ability to securely run VMs as the interface between the host and the guest OSes grows just as unmanageable as that of Linux and BSD system calls, and new software starts demanding that I let it use the entirety of it, just like some software already insists that I let it run as root because privilege dropping was never implemented.
If you develop software, you should know what kind of operating system access it needs to function and sandbox it appropriately, using the operating system's sandboxing facilities, not the tools reserved for system administrators.
At a practical level, I think the thesis holds true that "good" process isolation systems (aka, not hosted on Linux) build on years of development that unikernels will struggle to replace.
At a conceptual level I really disagree with this piece, though:
> one cannot play up Linux kernel vulnerabilities as a silent menace while simultaneously dismissing hypervisor vulnerabilities as imaginary.
One can reasonably recognize Linux kernel vulnerabilities as extant and pervasive while acknowledging that hypervisors can be vulnerable. One can also realize that the surface area exposed by Linux is fundamentally much larger than that exposed by most hypervisors, and that the Linux `unshare` mechanism is insecure by default. It's kind of funny - the invocation of Linux really undermines this argument; there's no _reason_ a process / container isolation based system should be completely broken, but Linux _is_, and so it becomes a very weak opponent.
I really don't think I can agree with the debugging argument here at a conceptual level, either. Issues with debugging unikernels are caused by poor outside-in tooling, but with good outside-in tooling, a unikernel should be _easier_ to debug than a container or OS process, because the VM-owner / hypervisor will often already have a way to inspect the unikernel-machine's entire state from the outside, without additional struggle of trying to maintain, juggle, and restore multiple contexts within a running system. There is essentially an ISP/ICE debugging probe attached to the entire system end to end by default, in the form of the hypervisor.
For example, there is no reason a hosting hypervisor could not provide DTrace in a way which is completely transparent to the unikernel guest, and this would be much easier to implement than DTrace self-hosted in a running kernel!
If done properly, this way a uni-application basically becomes debugging-agnostic: it doesn't need cooperative tracepoints or self-modifying patches (and all of the state juggling that comes with that, think like Kprobe), because the hypervisor can do the tracing externally. The unikernel does not need to grow (in security surface area, debug-size, blast radius, etc.) to add more trace and debug capability.
Agree with your points and I think fundamentally the issue with unikernels at this point comes down to: nobody has really done it right yet.
By which I mean I see two variants:
1- Exotic and interesting and constrained, but probably not applicable for most people, in the form of e.g. MirageOS. Not applicable because OCaml just isn't mainstream enough.
2- Or other systems which allow much easier porting of existing systems by providing a libc and an extended set of "porting" libraries, which end up recreating huge swathes of what the operating system is doing already anyway, in order to make the existing application just cross-compile and "feel at home". But in reality probably always in an incomplete or odd way, and now you're using someone's hand-crafted set of compatibility libraries instead of a battle-tested operating system.
I just think we haven't seen the right system yet, which would probably be some specific application developed mostly from the ground up in the context of a unikernel, not the other way around. Potentially a set of constrained and targeted Rust etc. crates built from no_std up + some services. I kept looking for a MirageOS for Rust and haven't seen one; instead I saw stuff more like 2.
In Qubes you use VMs to separate your banking environment from the one where you pull npm dependencies and the one where you open untrusted PDFs.
Networking also happens in its own VM, and you can have multiple VMs dedicated to networking.
Much lower memory footprint running mirage firewall, and an attack surface orders of magnitude smaller (compared to a VM running a Linux distribution purely for networking).
damn… i am a big fan of bryan and i thought i was a big fan of unikernels… well, i still am, but all the points he makes are absolutely well-founded. i will say, in contraposition to the esteemed (and hilarious) mr. cantrill, that it is quite incredible to get to the end of an article about unikernels without seeing any mention of how the “warmup” time for a unikernel is subsecond whereas the warmup time for, say, containers is… let’s just call it longer than the warmup time for the water i am heating to make some pourover coffee after i finish my silly post. to dismiss this as a profound advantage is to definitely sell the idea more than a little short.
but at the same time i do think it is fair at this juncture to compare the tech to things like wasm, with which unikernels are much more of a direct competitor than containers. it is ironic because i can already hear in my head the hilarious tirade he would unleash about how horrific docker is in every way, debugging especially, but yet somehow this is worse for containers than for unikernels. my view at the present is that unikernels are fantastic for software you trust well enough to compile down to the studs, and the jury is still out on their efficacy beyond that. but holy fuck i swear to god i have spent more time fucking with docker than many professional programmers have spent learning their craft in toto, and i have nothing to show for it. it sucks every time, no matter what. my only gratitude for that experience revolves around (1) saving other people's time on my team (same goes for git, but git is, indisputably, a "good" technology, all things considered, which redeems it entirely), and (2) it motivated me to learn about all the features that linux, systemd, et al. have (chroot jails, namespaces, etc.) in a way that surely exceeds my natural interest level.
> the “warmup” time for a unikernel is subsecond whereas the warmup time for, say, containers is… let’s just call it longer than the warmup time for the water i am heating to make some pourover coffee after i finish my silly post. to dismiss this as a profound advantage is to definitely sell the idea more than a little short.
I'm surprised to read that unikernels would start up much faster than containers. It seems like a unikernel needs to do more work (load kernel, and load app), in a more restricted way (hypervisor) than simply loading the app in a cgroup + namespace and letting it rip.
Are you sure this is an apples to apples comparison of similarly optimized images?
> to dismiss this as a profound advantage is to definitely sell the idea more than a little short.
Nah not really what he's saying. He's saying that if you throw out all the security affordances provided by page tables and virtual memory, it outweighs the "profound advantage" (which as he mentions, is arguable anyway since user/kernel context switch is a negligible cost in most modern systems).
You're selling a great deal in order to buy not much. It's a poor tradeoff.
Just to save people from wasting their time reading this drivel:
> If this approach seems fringe, things get much further afield with language-specific unikernels like MirageOS that deeply embed a particular language runtime. On the one hand, allowing implementation only in a type-safe language allows for some of the acute reliability problems of unikernels to be circumvented. On the other hand, hope everything you need is in OCaml!
ToroKernel is written in freepascal.
All of the text before and after is completely irrelevant
Cantrill is far smarter and accomplished than me, but this article feels a bit strawman and hit and run?
I think unikernels potentially have their place, but as he points out, they remain mostly untried, so that's fair. We should analyze why that is.
On performance: I think the kernel makes many general assumptions that some specialized domains may want to short-circuit entirely. In particular I am thinking of how there's a whole body of research on database buffer pool management basically having elaborate work-arounds for kernel virtual memory subsystem page management techniques, and I suspect there's wins there in unikernel world. Same likely goes for inference engines for LLMs.
The Linux kernel is a general purpose utility optimizing for the entire range of "normal things" people do with their Linux machines. It naturally has to make compromises that might impact individual domains.
That and startup times, big world of difference.
Is it going to help people better sling silly old web pages and whatever it is people do with computers conventionally? Yeah, I'd expect not.
On security, I don't think it's unreasonable or pure "security theatre" to go removing an attack surface entirely by simply not having it if you don't need it (no users, no passwords, no filesystem, whatever). I feel like he was a bit dismissive here? That is also the principle behind capability-passing security to some degree.
I would hate to see people close the door on a whole world of potentials based on this kind of summary dismissal. I think people should be encouraged to explore this domain, at least in terms of research.
If your software has no bugs then unikernels are a straight upgrade. If your software has bugs then the blast radius for issues is now much larger. When was the last time you needed a kernel debugger for a misbehaving application?
> On performance: ... In particular I am thinking how there's a whole body of research of database buffer pool management
Why? The solution thus far has been to turn off what the kernel does and do those things in userspace, not move everything into the kernel? Where are these performance gains to be had?
> The Linux kernel is a general purpose utility optimizing for the entire range of "normal things" people do with their Linux machines.
Yeah, like logging and debugging. Perhaps you say: "Oh we just add that logging and debugging to the blob we run". Well isn't that now another thing that can take down the system, when before it was a separate process?
> That and startup times, big world of difference.
Perhaps in this very narrow instance, this is useful, but what is it useful for? Can't Linux or another OS be optimized for this use case without having to throw the baby out with the bathwater? Can't one snapshot a Firecracker VM and reach even faster startup times?
> On security, I don't think it's unreasonable or pure "security theatre" to go removing an attack surface entirely
Isn't perhaps the most serious problem removing any and all protection domains? Like between apps and the kernel and between the apps themselves?
I mean -- sure maybe remove the filesystem, but isn't no memory protection what makes it a unikernel? And, even then, a filesystem is usually a useful abstraction! When have I found myself wanting less filesystem? Usually, I want more -- like ZFS.
This is all just to say -- you're right -- there may be a use case for such systems, but no one has really adequately described what that actually is, and therefore this feels like systems autoeroticism.
> Why? The solution thus far has been to turn off what the kernel does, and, do those things in userspace, not move everything in the kernel? Where are these performance gains to be had?
There's all sorts of jankin' about trying to squeeze ounces of performance out of the kernel's page management, specifically for buffer pools.
Page management isn't really a thing we can do well "in user space". And the kernel has strong ideas about how this stuff works, which work very well in the general case. But a DB (or other system level things like garbage collectors, etc) are special cases, often with special needs.
LeanStore, Umbra, etc. do tricks with VMM overcommit and the like to fiddle around with this, and the above paper even proposes custom kernel modules for the purpose (There's a github repo associated, I'd have to go look).
And then, further, a DB goes and basically implements its own equivalent of a filesystem, managing its own storage. Often fighting with the OS about the semantics of fsync/durability, etc.
I don't think it's an unreasonable mental leap for people to start thinking: "I'm by necessity [cuz cloud] in a VM. Now I'm inside an OS in a VM, and the OS is sometimes getting in my way, and I'm doing things to get around the OS... Why?"
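A rough sketch of the overcommit trick being alluded to (in the spirit of those buffer managers, not their actual code; the file name is made up): reserve far more virtual address space than RAM, fault pages in by reading from storage, and hand memory back with madvise() instead of keeping a classic page table of your own.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define PAGE_SZ    4096UL
#define POOL_PAGES (1UL << 20)               /* 4 GiB of *virtual* space */

int main(void)
{
    int fd = open("dbfile.bin", O_RDONLY);   /* hypothetical data file */

    /* Reserve address space only; no physical memory is committed yet. */
    unsigned char *pool = mmap(NULL, POOL_PAGES * PAGE_SZ,
                               PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE,
                               -1, 0);
    if (fd < 0 || pool == MAP_FAILED)
        return 1;

    /* "Pin" page 42: physical memory appears on first touch. */
    unsigned long pageno = 42;
    if (pread(fd, pool + pageno * PAGE_SZ, PAGE_SZ,
              (off_t)(pageno * PAGE_SZ)) < 0)
        perror("pread");

    /* ... use the page ... */

    /* "Evict" it: the kernel frees the physical page but the virtual slot
     * stays reserved for next time. */
    madvise(pool + pageno * PAGE_SZ, PAGE_SZ, MADV_DONTNEED);

    close(fd);
    return 0;
}
```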
> Page management isn't really a thing we can do well "in user space".
But it is what most high-performance OLTP DBMSs, the ones most of us are aware of, do? I'm also not sure your cite is relevant here. Or it is at least niche. The comparison is made to LeanStore, which AFAICT is not feature complete, and a research prototype?
Your cite does not describe a unikernel use case, but instead in-kernel helper modules. Your cite is about leveraging in-kernel virtual memory for the DB buffer cache, and thus one wonders how sophisticated the VM subsystem is in most unikernels? That is -- how is this an argument for unikernels? Seems as though your cite is making the opposite argument -- for a more complex relationship between the DB and the kernel, not a pared-down one.
> And then, further, a DB goes and basically implements its own equivalent of a filesystem, managing its own storage. Often fighting with the OS about the semantics of fsync/durability, etc.
The fights you're describing are what have thus far been the problems of ceding control of the buffer cache to the kernel, via mmap, especially re: transactional safety.
If your argument is kernels may need a bottom up redesign to make ideas like this work, I suppose that makes sense. However, again, I'm not sure that makes unikernels more of an answer here than anywhere else, though.
> I don't think it's an unreasonable mental leap for people to start thinking: "I'm by necessity [cuz cloud] in a VM. Now I'm inside an OS in a VM, and the OS is sometimes getting in my way, and I'm doing things to get around the OS... Why?"
I think that's a fair thought to have, but the problem is how it actually works in practice. As in, less code seems really enticing, the problem is what abstractions are you throwing away. If the abstraction is less memory protection, maybe this is not a good tradeoff in practice.
I've been using unikraft (https://unikraft.org/) unikernels for a while and the startup times are quite impressive (easily sub-second for our Rust application).
Fast boot up means nothing if your agent/app is slow at runtime (due to virtualization tax or QEMU emulation). Fast boot up is a PR term, which can easily be optimized for, compared to designing a better virtualization layer that performs near bare metal.
Wouldn't faster boot times mean that scale-out can be done on-demand? Whether this is preferable or not over poorer runtime performance is up to the domain, no?
When scaling out, edge latency will overshadow kernel boot-up times: speeding up boot-up from 1.5s to 150ms will not have any perceived impact on app performance when scaling on edge to meet the demand.
I use LXD + LXC, wondering if this is worth trying or if the overhead of accessing (network, etc) would be too much to deal with/care about.
Also always a little wary of projects that have bad typos or grammar problems in a README - in particular on one of the headings (though it's possible these are on purpose?). But that's just me :\
My last name is finally on the front page of HN as a project name, look mah!
I was not expecting Pascal, that's an interesting choice. One thing I do like is that Free Pascal has one of the better ways of making GUIs, meanwhile every other language has decided that just letting JavaScript build UIs is the way.
I don't want the observability of my applications to be bound to the applications themselves; it's kind of a real pain. I'm all for microvm images without excess dependencies, but coupling the kernel and diagnostic tools to rapidly developing application code can be a real nightmare as soon as the sun stops shining.
Smaller than containers seems unlikely since a container doesn't have any kernel at all, while these microvms have to reproduce at least the amount of kernel they would otherwise need (e.g., a networking stack). I'm sure some will be inclined to compare an optimized microvm to an application binary slapped into an Ubuntu container image, but that's obviously apples/oranges.
Faster might be possible without the context switching between kernel and app? And maybe additional opportunities for the compiler to optimize the entire thing (e.g., LTO)?
Presumably to avoid the cost of context switches or copying between kernel/user address spaces? Looks to be the opposite of userspace networking like DPDK: kernel space application programming.
Not really. Separation from (type 1) hypervisor (or rather distrust of the host [0]) requires hardware support; ex: ARM CCA / AMD SEV-SNP / Intel TDX.
For separation from the supervisor, Android developed a peculiar approach in "pKVM" for ARM where the host (supervisor) is partitioned away from the guest [1].
Neither of those "separations" is something Toro provides on its own; the Toro unikernel would totally be under the control of the host, from what I can tell. That said, what Toro (or any unikernel, really) does is reduce the attack surface area, as the (guest) supervisor is pruned to run just one particular application (more code to partition things up will eliminate a class of attacks but may result in new attack vectors [2]).
I think one reason unikernels can be different is that they can allow more isolation, or perhaps run user-generated code inside the unikernel with proper isolation, whereas I don't think actors can do that.
> Also merging the address space is not a necessity. In fact - 64-bit (which is essentially all modern cloud software) mandates virtual memory to begin with and many unikernel projects support ELF loading.
This is demonstrably untrue.
The story is quite different in HP-UX, AIX, Solaris, BSD, Windows, IBM i, z/OS,...
There are AppContainers. Those have existed for a while and are mostly targeted at developers intending to secure their legacy applications.
https://learn.microsoft.com/en-us/windows/win32/secauthz/app...
There's also Docker for Windows, with native Windows container support. This one is new-ish:
https://learn.microsoft.com/en-us/virtualization/windowscont...
Doesn’t “virtualization-based security” mean everything does, container or no? Or are they actually VMs even with VBS disabled?
> Python package management is a disaster.
Have you tried uv?
Curious, can you expand on this?
“It has electrolytes…”
From what I read, I gather nixpkgs are more hermetic (as in Bazel [0]) & not reproducible? https://discourse.nixos.org/t/nixos-is-not-reproducible/4268... / https://archive.vn/mXeih
[0] https://bazel.build/basics/hermeticity
Are you for real? Tell us you've never worked on a mainframe without telling us you've never worked on a mainframe.
[0]: https://www.tritondatacenter.com/blog/unikernels-are-unfit-f...
At a conceptual level I really disagree with this piece, though:
> one cannot play up Linux kernel vulnerabilities as a silent menace while simultaneously dismissing hypervisor vulnerabilities as imaginary.
One can reasonably recognize Linux kernel vulnerabilities as extant and pervasive while acknowledging that hypervisors can be vulnerable. One can also realize that the surface area exposed by Linux is fundamentally much larger than that exposed by most hypervisors, and that the Linux `unshare` mechanism is insecure by default. It's kind of funny - the invocation of Linux really undermines this argument; there's no _reason_ a process / container isolation based system should be completely broken, but Linux _is_, and so it becomes a very weak opponent.
I really don't think I can agree with the debugging argument here at a conceptual level, either. Issues with debugging unikernels are caused by poor outside-in tooling, but with good outside-in tooling, a unikernel should be _easier_ to debug than a container or OS process, because the VM-owner / hypervisor will often already have a way to inspect the unikernel-machine's entire state from the outside, without the additional struggle of trying to maintain, juggle, and restore multiple contexts within a running system. There is essentially an ISP/ICE debugging probe attached to the entire system end to end by default, in the form of the hypervisor.
For example, there is no reason a hosting hypervisor could not provide DTrace in a way which is completely transparent to the unikernel guest, and this would be much easier to implement than DTrace self-hosted in a running kernel!
Done properly, a uni-application basically becomes debugging-agnostic this way: it doesn't need cooperative tracepoints or self-modifying patches (and all of the state juggling that comes with those; think kprobes), because the hypervisor can do the tracing externally. The unikernel does not need to grow (in security surface area, debug size, blast radius, etc.) to add more trace and debug capability.
By which I mean I see two variants:
1- Exotic, interesting, and constrained, but probably not applicable for most people, in the form of e.g. MirageOS; not applicable because OCaml just isn't mainstream enough.
2- Or other systems which allow much easier porting of existing software by providing a libc and an extended set of "porting" libraries, which end up recreating huge swathes of what the operating system already does anyway, in order to make the existing application just cross-compile and "feel at home". But in reality it's probably always done in an incomplete or odd way, and now you're using someone's hand-crafted set of compatibility libraries instead of a battle-tested operating system.
I just think we haven't seen the right system yet, which would probably be a specific application developed mostly from the ground up in the context of a unikernel, not the other way around. Potentially a set of constrained and targeted Rust crates built up from no_std, plus some services. I kept looking for a MirageOS for Rust and haven't found one; instead I saw things more like 2.
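For what it's worth, the rough shape I have in mind, as a sketch only (the runtime crate named in the comments is hypothetical, not a real project):

```rust
// Rough shape only: a freestanding Rust entry point where nothing is linked in
// except what the application pulls in explicitly.
#![no_std]
#![no_main]

use core::panic::PanicInfo;

// Hypothetical runtime crate: boot shim, virtio-net driver, tiny TCP stack.
// use toy_unikernel_rt::net;

#[no_mangle]
pub extern "C" fn _start() -> ! {
    // In a real unikernel the boot shim hands over a memory map and the
    // virtio devices here; the application then owns the whole machine.
    loop {
        // Poll the NIC, serve requests; nothing else exists in the image.
    }
}

#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}
```

The application would be written against those crates from day one, rather than ported onto a compatibility layer after the fact.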
Unikernels don't work for him; there are many of us who are very thankful for them.
Why? Can you explain, in light of the article, and for those of us who may not be familiar with qubes-mirage-firewall, why?
Networking also happens in its own VM, and you can have multiple VMs dedicated to networking.
Much lower memory footprint running mirage firewall, and an attack surface orders of magnitude smaller (compared to a VM running a Linux distribution purely for networking).
but at the same time i do think it is fair at this juncture to compare the tech to things like wasm, with which unikernels are much more of a direct competitor than containers. it is ironic because i can already hear in my head the hilarious tirade he would unleash about how horrific docker is in every way, debugging especially, but yet somehow this is worse for containers than for unikernels. my view at present is that unikernels are fantastic for software you trust well enough to compile down to the studs, and the jury is still out on their efficacy beyond that. but holy fuck i swear to god i have spent more time fucking with docker than many professional programmers have spent learning their craft in toto, and i have nothing to show for it. it sucks every time, no matter what. my only gratitude for that experience revolves around (1) saving other people's time on my team (same goes for git, but git is, indisputably, a "good" technology, all things considered, which redeems it entirely), and (2) it motivated me to learn about all the features that linux, systemd, et al. have (chroot jails, namespaces, etc.) in a way that surely exceeds my natural interest level.
I'm surprised to read that unikernels would start up much faster than containers. It seems like a unikernel needs to do more work (load kernel, and load app), in a more restricted way (hypervisor) than simply loading the app in a cgroup + namespace and letting it rip.
Are you sure this is an apples to apples comparison of similarly optimized images?
Nah, not really what he's saying. He's saying that if you throw out all the security affordances provided by page tables and virtual memory, it outweighs the "profound advantage" (which, as he mentions, is arguable anyway since a user/kernel context switch is a negligible cost in most modern systems).
You're selling a great deal in order to buy not much. It's a poor tradeoff.
> If this approach seems fringe, things get much further afield with language-specific unikernels like MirageOS that deeply embed a particular language runtime. On the one hand, allowing implementation only in a type-safe language allows for some of the acute reliability problems of unikernels to be circumvented. On the other hand, hope everything you need is in OCaml!
ToroKernel is written in Free Pascal.
All of the text before and after is completely irrelevant
I think unikernels potentially have their place, but as he points out, they remain mostly untried, so that's fair. We should analyze why that is.
On performance: I think the kernel makes many general assumptions that some specialized domains may want to short-circuit entirely. In particular I am thinking of how there's a whole body of database buffer pool management research that is basically elaborate workarounds for the kernel virtual memory subsystem's page management techniques, and I suspect there are wins there in the unikernel world. The same likely goes for inference engines for LLMs.
The Linux kernel is a general purpose utility optimizing for the entire range of "normal things" people do with their Linux machines. It naturally has to make compromises that might impact individual domains.
That and startup times, big world of difference.
Is it going to help people better sling silly old web pages and whatever it is people do with computers conventionally? Yeah, I'd expect not.
On security, I don't think it's unreasonable or pure "security theatre" to go removing an attack surface entirely by simply not having it if you don't need it (no users, no passwords, no filesystem, whatever). I feel like he was a bit dismissive here? That is also the principle behind capability-passing security to some degree.
I would hate to see people close the door on a whole world of potentials based on this kind of summary dismissal. I think people should be encouraged to explore this domain, at least in terms of research.
Why? Hasn't the solution thus far been to turn off what the kernel does and do those things in userspace, not to move everything into the kernel? Where are these performance gains to be had?
> The Linux kernel is a general purpose utility optimizing for the entire range of "normal things" people do with their Linux machines.
Yeah, like logging and debugging. Perhaps you say: "Oh we just add that logging and debugging to the blob we run". Well isn't that now another thing that can take down the system, when before it was a separate process?
> That and startup times, big world of difference.
Perhaps in this very narrow instance, this is useful, but what is it useful for? Can't Linux or another OS be optimized for this use case without having to throw the baby out with the bathwater? Can't one snapshot a Firecracker VM and reach even faster startup times?
> On security, I don't think it's unreasonable or pure "security theatre" to go removing an attack surface entirely
Isn't perhaps the most serious problem removing any and all protection domains? Like between apps and the kernel and between the apps themselves?
I mean -- sure maybe remove the filesystem, but isn't no memory protection what makes it a unikernel? And, even then, a filesystem is usually a useful abstraction! When have I found myself wanting less filesystem? Usually, I want more -- like ZFS.
This is all just to say -- you're right -- there may be a use case for such systems, but no one has really adequately described what that actually is, and therefore this feels like systems autoeroticism.
There's all sorts of jankin' about trying to squeeze ounces of performance out of the kernel's page management, specifically for buffer pools.
e.g. https://www.cs.cit.tum.de/fileadmin/w00cfj/dis/_my_direct_up...
Page management isn't really a thing we can do well "in user space". And the kernel has strong ideas about how this stuff works, which work very well in the general case. But a DB (or other system-level things like garbage collectors, etc.) is a special case, often with special needs.
LeanStore, Umbra, etc. do tricks with VMM overcommit and the like to fiddle around with this, and the above paper even proposes custom kernel modules for the purpose (There's a github repo associated, I'd have to go look).
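For a flavour of the overcommit trick, here's a minimal sketch assuming Linux and the Rust libc crate (not LeanStore's actual code): reserve a huge virtual arena up front and manage page residency by hand, so buffer-pool pointers stay stable.

```rust
// Minimal sketch of the MAP_NORESERVE / madvise approach to a buffer pool arena.
// Assumes Linux and the `libc` crate; page and arena sizes are hypothetical.
use std::ptr;

const ARENA_BYTES: usize = 1 << 40;  // 1 TiB of virtual address space, far more than RAM
const PAGE_BYTES: usize = 64 * 1024; // hypothetical 64 KiB database page

fn main() {
    unsafe {
        // Physical pages are only materialized when we actually touch them.
        let arena = libc::mmap(
            ptr::null_mut(),
            ARENA_BYTES,
            libc::PROT_READ | libc::PROT_WRITE,
            libc::MAP_PRIVATE | libc::MAP_ANONYMOUS | libc::MAP_NORESERVE,
            -1,
            0,
        );
        assert_ne!(arena, libc::MAP_FAILED);

        // Faulting a page in: just write to it.
        let page0 = arena as *mut u8;
        ptr::write_bytes(page0, 0, PAGE_BYTES);

        // "Evicting" it: drop the physical backing but keep the virtual address,
        // so pointers into the buffer pool never move.
        libc::madvise(page0 as *mut libc::c_void, PAGE_BYTES, libc::MADV_DONTNEED);
    }
}
```

The kernel still decides when and how those faults are serviced, which is exactly where the friction with general-purpose paging policy comes from.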
And then, further, a DB goes and basically implements its own equivalent of a filesystem, managing its own storage. Often fighting with the OS about the semantics of fsync/durability, etc.
I don't think it's an unreasonable mental leap for people to start thinking: "I'm by necessity [cuz cloud] in a VM. Now I'm inside an OS in a VM, and the OS is sometimes getting in my way, and I'm doing things to get around the OS... Why?"
But isn't that what most high-performance OLTP DBMSs most of us are aware of already do? I'm also not sure your cite is relevant here, or it is at least niche. The comparison is made to LeanStore, which, AFAICT, is not feature-complete and is a research prototype?
Your cite does not describe a unikernel use case, but instead in-kernel helper modules. It is about leveraging in-kernel virtual memory for the DB buffer cache, and thus one wonders how sophisticated the VM subsystem is in most unikernels. That is -- how is this an argument for unikernels? It seems as though your cite is making the opposite argument -- for a more complex relationship between the DB and the kernel, not a pared-down one.
> And then, further, a DB goes and basically implements its own equivalent of a filesystem, managing its own storage. Often fighting with the OS about the semantics of fsync/durability, etc.
The fights you're describing have thus far been the problems of ceding control of the buffer cache to the kernel via mmap, especially re: transactional safety.
If your argument is kernels may need a bottom up redesign to make ideas like this work, I suppose that makes sense. However, again, I'm not sure that makes unikernels more of an answer here than anywhere else, though.
> I don't think it's an unreasonable mental leap for people to start thinking: "I'm by necessity [cuz cloud] in a VM. Now I'm inside an OS in a VM, and the OS is sometimes getting in my way, and I'm doing things to get around the OS... Why?"
I think that's a fair thought to have, but the problem is how it actually works in practice. As in, less code seems really enticing; the problem is which abstractions you're throwing away. If what you're giving up is memory protection, maybe this is not a good tradeoff in practice.
Still difficult to see how the unikernel could be slower, but I doubt the difference would be huge? Don't have anything to back that up though.
Also always a little wary of projects that have bad typos or grammar problems in a README - in particular on one of the headings (though it's possible these are on purpose?). But that's just me :\
As much as I'm nostalgic about Pascal and my childhood... I'd personally prefer OCaml.
Neat.
I was not expecting Pascal, that's an interesting choice. One thing I do like is that Free Pascal has one of the better ways of making GUIs, while just about every other language has decided that letting JavaScript build UIs is the way.
Now I write Javascript and SQL.
:)
What use cases would Toro fit? Pros and cons?
Plus these might be smaller and might run faster than containers too.
Faster might be possible without the context switching between kernel and app? And maybe additional opportunities for the compiler to optimize the entire thing (e.g., LTO)?
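For the LTO part, the sort of thing I mean, sketched for a Rust application (just a standard Cargo release profile; nothing unikernel-specific is assumed):

```toml
# Cargo.toml: whole-program optimization across the app and the crates it links.
[profile.release]
lto = "fat"        # cross-crate link-time optimization
codegen-units = 1  # hand LLVM the whole program as one unit
panic = "abort"    # drop unwinding machinery entirely
```

With the application and its "kernel" libraries statically linked into one image, the optimizer sees the whole program at once; whether that buys much in practice would need measuring.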
Not really. Separation from (type 1) hypervisor (or rather distrust of the host [0]) requires hardware support; ex: ARM CCA / AMD SEV-SNP / Intel TDX.
For separation from the supervisor, Android developed a peculiar approach in "pKVM" for ARM where the host (supervisor) is partitioned away from the guest [1].
Both those "separations" is not something Toro provides on its own; the Toro unikernel would totally be under the control of the host, from what I can tell. That said, what Toro (or any unikernel, really) does is reduce the attack surface area, as the (guest) supervisor is pruned to run just one particular application (more code to partition things up will eliminate a class of attacks but may result in new attack vectors [2]).
[0] ex: https://news.ycombinator.com/item?id=44678249
[1] Protected KVM on Arm64: A Technical Deep Dive - Quentin Perret, Google https://www.youtube.com/watch?v=9npebeVFbFw (2023)
[2] Mitigations are attack surface, too https://projectzero.google/2020/02/mitigations-are-attack-su... (2020)
file sharing is complex too it seems
would be good to see a benchmark or something showing where it shines