- SECD: about
- Part I and Part Ia describe a high-level, formal definition of the SECD machine;
- Part II discusses the low-level details of the SECD implementation in C;
- Part III (TODO): how to compile a Scheme subset into SECD code.
A few days ago I was bored and stumbled across https://hackerrank.com. It's a nice site for a programmer who wants to kill some time and take a break from Serious Tasks. When I got even more tired and stupid, I found a task that perfectly suited my mood: the FizzBuzz challenge.
Here it goes:
Write a program that prints (to STDOUT) the numbers from 1 to 100. But for multiples of three print “Fizz” instead of the number and for the multiples of five print “Buzz”. For numbers which are multiples of both three and five print “FizzBuzz”.
Pretty easy, huh? Not when you try to squeeze this task into as few characters as possible. Here comes the fun!
Disclaimer: this post describes the experience of a Happstack newbie.
A very simple (but already useful!) server in Happstack may be written like this:
```haskell
import Happstack.Server
import System.Environment (getArgs)
import Control.Monad (when)

main :: IO ()
main = do
    args <- getArgs
    -- one argument is required: the directory to serve
    when (null args) $
        error "Usage: ./HelloHTTP <a directory to serve>"
    simpleHTTP nullConf $
        serveDirectory EnableBrowsing ["index.htm"] (head args)
```
This has been pretty sufficient for my simple home page for some time, but now I want to log who is visiting the page.
Let’s do this!
Now we're starting the most interesting part: adding functions to our functional virtual machine. First of all, a function is usually tied to its environment: a pair of a function definition (arguments and body) and the environment it was created in is called a closure. So don't be afraid of the semantic rule for opcode LDF (the newly made closure is the pair on top of the resulting stack):
(s, e, LDF.(args body).c, d) => (((args body).e).s, e, c, d)
I want to stress that the environment is paired with the function definition on the stack, producing a closure (args body).e, which may be saved, passed as an argument to another function (thus bound to a symbol), and so on.
The closure on top of the stack may be called using opcode AP:
(((argnames body).e').argvals.s, e, AP.c, d)
=> ( (), new-frame(argnames, argvals).e', body, s.e.c.d)
Formally, it is just a tuple of four lists with a rigidly defined set of operations on it. The SECD semantics describes a stack-based virtual machine designed to run pure functions; as seen by a computer scientist, it is one of the first denotational semantics for a pure functional language, where operations are transitions between possible machine states.
The components of the tuple are named (unsurprisingly): S for the Stack, E for the Environment, C for the Control list, and D for the Dump.
Perhaps a lot of people studying functional programming languages write their own simplistic Lisp interpreter at some point. First of all, it's quite easy with a modern functional language: parsing is primitive, the runtime maps straightforwardly onto the host language's runtime, etc.
Writing it in C is a more complicated task, since garbage collection and the nitty-gritty implementation details are suddenly all up to you. I went crazy when I thought about the endless conses created with every operation. I don't want to say that writing a Lisp runtime in C is fundamentally complex, but it is still quite hard to implement something you haven't quite grokked yet.
I am fond of Haskell, and GHC's inner workings often strike me as magic: weaving terms into complex patterns that are reduced to normal forms at run time. To understand that, I started to study compiler theory. A classic book for understanding functional programming is Functional Programming: Application and Implementation by Peter Henderson. Although it may seem very old (published in 1980), it's still a good and clear introduction to the functional way of thinking. When I came across the SECD machine description, I was amused by its simplicity: it is easy to encode in a low-level language and, at the same time, easy to map a decent functional language onto it (let's forget about types for a while; we're just studying). So I started to write my little SECD interpreter in C.
So I wrote a simple SECD interpreter in C and a self-hosting Scheme-to-SECD compiler in a subset of Scheme. The code may be found here.
Maybe it's a bit trivial, but I want to say it again: the technical savviness of Linux users may be one reason why Linux applications are often so poor.
What do I mean by the allegation above? I use a Juniper VPN at work (yes, I know, proprietary sucks), and I happen to run 64-bit Kubuntu (*irony*: what a weird choice, huh?), which means that in order to install the VPN client I have to install a 32-bit JRE and a 32-bit Firefox. In my case this is not a big problem: I use Chrome and virtually never use Java, so the compromise is acceptable. If I were a Java developer, though, this would paralyze my workflow.
But why does nobody care about this pitiful situation? Maybe because Linux users are savvy enough to install all that stuff and shut up ("it works for me")?
Update: my brain also explodes every time I see this:
I can't trust an UNKNOWN publisher at all, yet I must do this every day!
The last two weeks were a time of revelations for me: I dropped out of the stream of IT news and focused on math and lectures much more than before, from time to time arguing with my university lecturers. They seem to live in the computer world of the late 90s (it's Ukraine, yeah): they still moan about "redundant" gigahertz and processor cores, and about unnecessary Windows features (lolwhut, who uses it nowadays?) put together intentionally by malevolent software vendors to make us buy new hardware, and so on (to me, having been exposed to the HN crowd for some time, this sounds pretty ridiculous).
One of the most outrageous claims was that "8 cores are enough for anything; increasing the count won't gain anything". My first reaction was "lol, do you still live in the single-core world?!". I tried to argue: "Modern systems have hundreds of threads running in parallel; wouldn't it be nice for each of them to have a core?", but this was easily refuted: "Are they doing anything most of the time?" – and I had to agree that most of the time they're blocked by some kind of IO (even now, on my 8-core Core i7 CPU, I see only two tasks running). So I proceeded: "Yes, you are right, those computing resources are not required for mundane user activities. And the market reflects this: it steadily diversifies, with non-fastidious users moving to tablets, and gamers and professionals using the desktop for work, and so on". I felt proud: I was bringing a new vision to these old-school people.
Then I was discussing my programming projects with a peer, showing them on GitHub on my Nexus 7 (my laptop broke recently, so I'm forced to use the tablet). I showed Terminal IDE, c4droid, the Limbo PC emulator running Kolibri OS, and my own COSEC inside it. The feeling was: it's so much of a toy, not a real productive environment; it can do so much, and yet it is still very far from good enough to do anything creative. I returned home and read the news: Intel abandons motherboard manufacturing, Dell shows an "office" tablet.
The brave new world of tablet computing is awfully hostile to the usual programmer's activities (keyboard, you old clunky mechanical keyboard with mechanical buttons, I really, really miss you on slick modern gadgets). Do you want to write a program? You have to go through the pain of a software keyboard (even if it's Hacker's Keyboard); you have to do your programming inside a separate application, with no hope of doing anything system-wide until it's packed into an APK and installed (coming from Linux, where programs breathe cooperation and glue together, Android apps look like a bunch of black boxes, each on its own); you have to sign every program to comply with the walled-garden rules; you can't easily install another operating system on your gadget (free bootloaders, anyone?) – it seems the brave new world of tablets does everything to make hacking and fiddling around more complicated and hindered.
The problem is: how will the new generation of hackers emerge? How do we sustain the desire to know what's inside complex systems (be it hardware or software) in a generation that will never have seen a desktop computer? How do we grow the ability to tinker? How do we save it from suffocation in a world of fences and locks?
There are a lot of Haskell OpenGL tutorials on the web introducing basic OpenGL drawing in Haskell. I've read about a dozen of them, but none of them addresses an important issue properly: how do you react to user input? The tutorials either omit the topic or fall back to ugly, non-idiomatic IORefs to return data from callbacks (even the Haskell wiki goes this way), although there is a much nicer way to handle GLUT/GLFW callbacks, which is explained below.
This spring my assembly coding experiments grew into something more: I moved from x86 real mode to my first steps in the protected-mode environment, which seemed hostile and quite unusual. The intricate and arcane assembly code needed to exploit the possibilities of full 32-bit mode really puzzled me at first, but my slow progress was persistent enough to run the first C code examples and to see a "Hello world" message in an OS-less emulator.
I've been really impressed by Linux and its kernel and, like a child, wanted to know how this thing works, so writing a Linux-like kernel was a pleasant and useful challenge for my programming skills. My guidelines are not fancy: system clarity and simplicity, minimal POSIX compatibility, and experiments inspired by Plan 9 from Bell Labs, like network transparency and unifying resources through files. I also lean toward a microkernel design, as far as it does not complicate the picture.
For a long time I didn't have a definite goal for the project; I fluctuated between considering this OS just a training ground for system programming and exaggerated expectations of a new OS architecture with language-based features (resembling the Singularity software stack), although my experience with managed platforms and with writing interpreters/virtual machines is miserable (and this is my first large project in pure C). Now I have come to a definite goal: I want a minimal self-hosting POSIX-like OS in C (roughly corresponding to Linux 0.01 in functionality, but a bit slimmer; there are two decades of history since Linux's creation).
My system is still quite nascent, but the experiments now involve much more mature things. Currently my tiny OS contains:
The system is still a single binary with the simplest built-in kernel shell (userspace is waiting for a file system implementation in order to be able to host a functional shell). It is loaded by GRUB according to the GNU Multiboot specification, though I also have my own bootloader and switch-to-protected-mode code snippets.
Plans for the future are:
Here is the source code:
I am a programmer from Ukraine. My interests are: