[Microkernel-devroom] [TALK-PROPOSAL] The microkernel overhead

Martin Decky martin at decky.cz
Fri Dec 30 23:11:38 CET 2011


Hello folks,

please see below my proposal for a talk at our devroom. It would be great to cooperate with anybody who is interested in this topic, because it is not HelenOS-specific. Just let me know.


Regards

M.D.


[Title]
The microkernel overhead

[Full name]
Martin Děcký

[Short bio]
Martin has been with the HelenOS team for more than 6 years now. A life-long operating systems enthusiast, he enjoys everything from bare metal programming and submitting Linux kernel patches to designing the most progressive microkernel and working on its formal verification as a researcher at Charles University in Prague.

[Estimated duration]
60 - 90 min

[Abstract]
Since the famous Tanenbaum-Torvalds debate [1], the general public has stuck to a golden rule of thumb: microkernel systems, while nice and elegant, are just academic toys. Due to the infamous communication overhead and other self-imposed limitations, they will never be as useful for general use (in terms of performance) as the good old monolithic systems.
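The "communication overhead" can be felt even without a microkernel at hand. The following sketch (my own illustration, not part of the talk material) contrasts a plain in-process function call with a one-byte request/reply round-trip between two processes over pipes, which is a very rough stand-in for the context switches and copying behind message-based IPC. It assumes a POSIX system (it uses fork).

```python
# Illustrative microbenchmark: in-process call vs. cross-process round-trip.
# This is a rough analogy for microkernel IPC cost, not a faithful model.
import os
import time

def bench_calls(n=100_000):
    """Average cost of a trivial function call, in seconds per call."""
    def step(x):
        return x + 1
    t0 = time.perf_counter()
    acc = 0
    for _ in range(n):
        acc = step(acc)
    return (time.perf_counter() - t0) / n

def bench_ipc(n=1_000):
    """Average cost of a 1-byte request/reply over two pipes,
    in seconds per round-trip."""
    p2c_r, p2c_w = os.pipe()   # parent -> child
    c2p_r, c2p_w = os.pipe()   # child -> parent
    pid = os.fork()
    if pid == 0:               # child: a tiny echo "server"
        os.close(p2c_w)
        os.close(c2p_r)
        while True:
            b = os.read(p2c_r, 1)
            if not b:          # parent closed its end: shut down
                os._exit(0)
            os.write(c2p_w, b)
    os.close(p2c_r)
    os.close(c2p_w)
    t0 = time.perf_counter()
    for _ in range(n):
        os.write(p2c_w, b"x")  # request
        os.read(c2p_r, 1)      # reply
    dt = (time.perf_counter() - t0) / n
    os.close(p2c_w)            # child sees EOF and exits
    os.waitpid(pid, 0)
    return dt

if __name__ == "__main__":
    print(f"plain call     : {bench_calls() * 1e9:8.1f} ns/op")
    print(f"pipe round-trip: {bench_ipc() * 1e9:8.1f} ns/op")
```

On a typical machine the pipe round-trip is orders of magnitude more expensive than the function call; whether that gap matters in practice is exactly the question the talk wants to revisit.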

Since the 1990s, many researchers (especially the people around L4) have struggled to lower the overhead using the most extraordinary tricks. Others have tried to apply the microkernel design to mission-critical, safety-critical and other niche targets, where the benefits of the microkernel design clearly outweigh the drawbacks. And still other folks have spent years creating hybrid systems to get the best (and hopefully not the worst) of both worlds.

But are the drawbacks of microkernels fundamental?

The way computers are designed, the way programmers think and the way the IT economy works have changed profoundly over the last 20 years. We no longer try hard to save every single CPU cycle and every single byte of RAM in every single routine. We acknowledge that spending 20 % more on a faster CPU and more RAM to run intelligibly designed software is a better idea than spending 20 % more each year on maintaining software with tons of ugly performance hacks and quirks. Our machines are massively concurrent and we tend to (or are forced to) think more in terms of effective parallel algorithms than just plain sequential throughput.

So perhaps it is time to reconsider the true impact of the microkernel overhead given the present conditions and requirements.

Key topics:
* Reasons for the microkernel overhead
* Qualitative and quantitative analysis of the overhead
* Ways to minimize it
* Ways to live with it
* Ways to embrace it

[1] http://en.wikipedia.org/wiki/Tanenbaum–Torvalds_debate
