I'll be posting here until I either pay for another year of Radio Userland hosting or figure out how to upstream to somewhere else
Wednesday, November 12, 2003
Ever get that "not-so-fresh feeling" from your real-time microkernel? Up until recently, our CS452 project had a major flaw that made it not fit the definition of a real-time system.
The kernel a UW student writes in Real-Time (CS452) is a lot different from what you saw in CS354 "Operating Systems" because it's a microkernel. The microkernel philosophy says to minimize the amount of stuff that gets done in kernel mode by implementing things that you would consider operating system services as regular processes. These system servers communicate with each other and with their userland clients using regular IPC. This leaves you with just a small chunk of code that runs with elevated privileges, which makes it easy to verify its correctness. Since the kernel should be small and Spartan, we simplify our lives (a lot) by leaving hardware interrupts turned off while executing in it. This means no worrying about concurrency/reentrancy and all the headaches that come with it. In fact, the criterion for whether a feature belongs in the kernel is: does it need exclusive access to shared data structures, or does it require interrupts to be off? If so, it must be in the kernel. If not, put it in a system server process and let it run with interrupts on.
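For concreteness, here is roughly what that kind of IPC can look like from user space. The Send/Receive/Reply names, the signatures, and the clock-server example below are assumptions made for this post, not a definitive API:

    /*
     * A minimal sketch of a message-passing interface plus a client of it.
     * Names, signatures, and the clock-server example are illustrative
     * assumptions, not the actual course API.
     */
    #include <stddef.h>

    /* Client side: copy msg to the server, block until it replies. */
    int Send(int server_tid, const void *msg, size_t msg_len,
             void *reply, size_t reply_len);

    /* Server side: block for the next request, then answer it. */
    int Receive(int *client_tid, void *msg, size_t msg_len);
    int Reply(int client_tid, const void *reply, size_t reply_len);

    /* A clock server used through nothing but ordinary IPC. */
    struct clock_request { int op; };       /* op 0 = "what time is it?" (made up) */
    struct clock_reply   { int ticks; };

    int clock_time(int clock_server_tid)
    {
        struct clock_request req = { 0 };
        struct clock_reply   rep;
        if (Send(clock_server_tid, &req, sizeof req, &rep, sizeof rep) < 0)
            return -1;                      /* couldn't reach the server */
        return rep.ticks;
    }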
Hardware interrupts indicate to the computer that some external asynchronous event has occurred. For example, a hardware interrupt might indicate a key has been pressed on the keyboard, or that the hardware clock has ticked, or that a train event needs processing. This interaction with the outside world is common to most computer systems, but what makes a real-time system real-time is that there are constraints on how long you can take to deal with these external events. When was the last time you clicked on something in Windows and waited more than a couple seconds for the system to respond? The system was busy doing something else and you just had to wait until it decided to turn its attention to you. This kind of delay happens all the time in non real-time systems, but in a real-time system that mouse click could be a message from the track indicating two trains are about to collide, or a message from the engine indicating that the air/fuel mixture is too rich, or (insert usual real-time space/medical life critical example here) blah blah blah. The point is that the system must respond to the external event not at some unknown time in the future when it's convenient, but before a fixed amount of real time has elapsed, or else your system has failed.
Turning interrupts off means that we can't respond immediately to a request from the outside world because we don't notice that it has occurred. Since we want to design a system that is guaranteed to respond to these events within a bounded amount of real time, the amount of stuff we do with interrupts off must also be bounded. When writing code that runs in kernel mode, this means all your algorithms must run in O(1) time, that is, take less than a fixed number of cycles no matter what the input.
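As an example of the kind of data structure that keeps kernel work constant-time, here is a sketch of a fixed-priority ready queue where both enqueueing a task and picking the next one to run cost the same no matter how many tasks exist. The sizes and names are illustrative, and the bit scan uses a GCC/Clang builtin:

    /*
     * Sketch of an O(1) scheduler queue: one FIFO per priority level plus a
     * bitmask of non-empty levels.  Illustrative only; sizes and names are
     * assumptions.
     */
    #include <stdint.h>
    #include <stddef.h>

    #define NPRIO 32                       /* priority 0 is the highest */

    struct task {
        struct task *next;
        int priority;
    };

    struct fifo { struct task *head, *tail; };

    static struct fifo ready[NPRIO];
    static uint32_t    nonempty;           /* bit i set when ready[i] has a task */

    void ready_enqueue(struct task *t)     /* O(1): append to the tail */
    {
        struct fifo *q = &ready[t->priority];
        t->next = NULL;
        if (q->tail) q->tail->next = t; else q->head = t;
        q->tail = t;
        nonempty |= 1u << t->priority;
    }

    struct task *ready_dequeue(void)       /* O(1): highest non-empty level, pop head */
    {
        if (nonempty == 0)
            return NULL;
        int p = __builtin_ctz(nonempty);   /* lowest set bit = highest priority */
        struct fifo *q = &ready[p];
        struct task *t = q->head;
        q->head = t->next;
        if (q->head == NULL) { q->tail = NULL; nonempty &= ~(1u << p); }
        return t;
    }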
If you follow the "cookbook" approach to writing your kernel that is taught in the lectures, this isn't too hard. Everything you need to do can be easily done in constant time. However, along with another group, we implemented hardware page translation, something which is not part of the CS452 kernel specification. Paging lets every process run in its own virtual address space, separated from every other running process. Every modern operating system uses page translation, because it lets you run programs independently of each other, and it avoids fragmentation of physical memory because you can give a process memory that is physically non-contiguous. The problem is that doing paging means setting up page tables that define the virtual-to-physical mapping of addresses, which takes O(n) time where n is the number of pages in the process image. In other words, the time it takes to create a new process is not fixed; rather, it's proportional to the size of the process you're creating. If you create a process of size 5, it will take 5 units of time; if you create a process of size 43, 43 units of time will pass before you're done. Therefore, putting this feature in the kernel where it's most natural and easiest to implement is not an option if you want to maintain an upper bound on the amount of time that hardware interrupts are turned off. That is, if process creation with page table setup is done by the kernel, we can't make guarantees about interrupt service time anymore because we might be busy creating the process for an unbounded amount of time. But since paging is so cool and it's just natural to put it in the kernel (for microkernel newbies), I just went ahead and dropped it in there anyway, deciding to worry about it later.
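To make the O(n) part concrete, here is a sketch of the loop that has to run when a paged process is created: one page-table entry per page of the image, so the work grows with the image size. The constants follow the usual 32-bit x86 layout; the function and variable names are made up for this post:

    /*
     * Sketch of why paged process creation is O(n): every page of the new
     * process's image needs its own page-table entry.  Constants follow the
     * usual 32-bit x86 layout; names are illustrative, not our kernel's code.
     */
    #include <stdint.h>
    #include <stddef.h>

    #define PAGE_SIZE   4096u
    #define PTE_PRESENT 0x1u
    #define PTE_WRITE   0x2u
    #define PTE_USER    0x4u

    /*
     * Fill in mappings for an image of `npages` contiguous physical pages
     * starting at pbase.  This loop is the problem: its running time is
     * proportional to the size of the process being created, so it can't
     * run with interrupts off if you want bounded interrupt latency.
     */
    void map_image(uint32_t *page_table, uint32_t pbase, size_t npages)
    {
        for (size_t i = 0; i < npages; i++)
            page_table[i] = ((pbase + (uint32_t)i * PAGE_SIZE) & ~0xfffu)
                          | PTE_PRESENT | PTE_WRITE | PTE_USER;
    }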
Later came quickly when the next assignment asked us to give an upper bound on the length of time that hardware interrupts can be turned off. Because I didn't really know what to do, we just put something lame like "Since the amount of physical memory in the machine is an upper bound on the size of any one process, process creation is O(1)" (where "1" is apparently quite large). This is like saying, "since every physical computer has finite storage capacity, any algorithm that terminates runs in constant time because there are only a constant number of states the computer can iterate through." (Remember the proof of the pumping lemma?) Too bad that constant is 2 to the power of the number of bits of memory you have, say 2^1,000,000,000.
Anyway, after a lot of teeth gnashing and consulting with the other group who already did this, I wrote a memory management server that does all the slow parts of process creation in user space. I can now feel good about my RTOS again.
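The shape of the fix, sketched below with hypothetical names (Receive, Reply, KernelInstallTask, and the helpers are assumptions for illustration, not our actual interfaces): the server does the slow O(n) work in user space with interrupts on, and the kernel is only asked for one constant-time step at the end.

    /*
     * Sketch of the memory server idea.  All names here are hypothetical and
     * only illustrate the split: O(n) work out here with interrupts on,
     * one O(1) kernel call at the end.
     */
    #include <stdint.h>
    #include <stddef.h>

    struct create_request { uint32_t entry; uint32_t npages; int priority; };
    struct create_reply   { int tid; };

    /* Assumed IPC and kernel primitives (declarations only). */
    int Receive(int *client_tid, void *msg, size_t msg_len);
    int Reply(int client_tid, const void *reply, size_t reply_len);
    int KernelInstallTask(uint32_t *page_table, uint32_t entry, int priority); /* O(1) */

    /* Assumed helpers living entirely in the server's address space. */
    uint32_t *alloc_page_table(void);
    uint32_t  alloc_frames(uint32_t npages);       /* physical base of n free frames */
    void map_image(uint32_t *pt, uint32_t pbase, uint32_t npages); /* the O(n) loop */

    void memory_server(void)
    {
        for (;;) {
            int client;
            struct create_request req;
            struct create_reply   rep;

            if (Receive(&client, &req, sizeof req) < 0)
                continue;

            uint32_t *pt   = alloc_page_table();   /* slow work, interrupts on */
            uint32_t  phys = alloc_frames(req.npages);
            map_image(pt, phys, req.npages);       /* O(n) in the image size */

            rep.tid = KernelInstallTask(pt, req.entry, req.priority); /* O(1) syscall */
            Reply(client, &rep, sizeof rep);
        }
    }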
Tearing working code out of your system and replacing it with something that is newly written and full of bugs sucks, but you can choose to do things the easy way or the hard way. The obvious way to go about this was: step 1, delete the process creation code from the kernel and then step 2, re-implement it as its own user space server from scratch. The problem with this approach is the little space between steps one and two. Unless your coding is divinely inspired, your programs do not work the first time you run them. So you'll do step one, erase all your working but wrong-headed code from the kernel, and replace it with a new version rewritten as a process. Then you'll boot your system up and watch it crash because during your major open-heart system surgery you put at least a dozen bugs on the critical path between system startup and the first character even being printed. Oh, and in this course, when you make a mistake in your program, the computer typically just reboots. There is no SEGFAULT or debugger to break into. The screen goes black and the computer restarts while you make the long walk back to your terminal to guess at what went wrong. So rewriting something as fundamental to system operation as process creation from scratch would take me straight back to the bad-old-days when debugging was just guesswork in the dark.
(Actually this was only true for us until we got our stuff running in Bochs, which is a software x86 emulator. When things crash, it tells you what you did wrong, so you'll either fix your bugs faster, or just get more ambitious about the features your kernel has and still end up spending the same amount of time on CS452, and by this I mean all of it.)
So whenever you write code you'll have bugs; that's inevitable. But in this case, we could choose to fix them with a still-working kernel (eyes open) or after tearing out all the working process creation code, which is like doing watch repair blindfolded with your hands tied behind your back. On the advice of a colleague, I left the working in-kernel version of process creation alone while implementing the new version, and slowly moved things over to the new version as I fixed the bugs in it. If you write perfect code, this method of incremental transition is actually slower because you have to make all the intermediate (but ultimately useless) phases work (like having a shell that can create processes the old way or the new way), but for the real-world flesh-and-blood programmer, the expected time of the "refactoring" method is shorter.
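In code terms, the incremental route amounts to keeping both paths callable and flipping call sites over as the new one earns trust; a sketch with made-up names:

    /*
     * Sketch of the incremental switch-over.  KernelCreate and ServerCreate
     * are stand-ins for the old in-kernel path and the new memory-server
     * path; the names and signatures are made up for this post.
     */
    #include <stdint.h>
    #include <stdbool.h>

    int KernelCreate(uint32_t entry, uint32_t npages, int priority);  /* old, working */
    int ServerCreate(int mem_server_tid,
                     uint32_t entry, uint32_t npages, int priority);  /* new, buggy   */

    static bool use_new_path = false;  /* flip once the new path survives testing */

    int create_process(int mem_server_tid, uint32_t entry, uint32_t npages, int priority)
    {
        if (use_new_path)
            return ServerCreate(mem_server_tid, entry, npages, priority);
        return KernelCreate(entry, npages, priority);
    }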