Category Archives: misc

SECD implementation in C: general design


A SECD machine is a stack-based abstract machine built around lists. That determines the design of its implementation. 
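
In other words, almost everything the machine manipulates can live in one small cell type, and the S, E, C and D registers are then just heads of lists of such cells. As a rough illustration only (the tag names and exact layout below are assumptions for this sketch, not necessarily what the actual implementation uses), a tagged cell in C might look like this:

    /* Illustrative sketch: one tagged cell type covers atoms and pairs alike. */
    enum cell_tag { TAG_INT, TAG_SYM, TAG_CONS };

    struct cell {
        enum cell_tag tag;
        union {
            long num;                  /* TAG_INT:  a number                */
            const char *sym;           /* TAG_SYM:  an interned symbol      */
            struct {                   /* TAG_CONS: a pair of cell pointers */
                struct cell *car;
                struct cell *cdr;
            } cons;
        } as;
    };

    /* The machine state is then nothing more than four list heads. */
    struct secd {
        struct cell *stack;    /* S */
        struct cell *env;      /* E */
        struct cell *control;  /* C */
        struct cell *dump;     /* D */
    };

A single uniform cell type like this keeps allocation and garbage collection simple, at the cost of some wasted space inside atoms.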

Memory layout

Continue reading

Anatomy of Yet Another Haskell FizzBuzz

A few days ago I was bored and stumbled across https://hackerrank.com. It’s a nice site for a programmer who wants to kill some time and get some rest from Serious Tasks. When I got even more tired and stupid, I found a task which perfectly suited my mood: the FizzBuzz challenge.

Here it goes:

Write a program that prints (to STDOUT) the numbers from 1 to 100. But for multiples of three print “Fizz” instead of the number and for the multiples of five print “Buzz”. For numbers which are multiples of both three and five print “FizzBuzz”.

Pretty easy, huh? Not when you try to squeeze this task into as few characters as possible. Here comes the fun!
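
For reference, a plain, ungolfed Haskell solution to the task might look roughly like the sketch below; this is only a readable baseline for comparison, not the golfed program the rest of the post is about:

    -- Straightforward FizzBuzz: print 1..100, substituting words for multiples of 3, 5 and 15.
    main :: IO ()
    main = mapM_ (putStrLn . fizzbuzz) [1 .. 100]
      where
        fizzbuzz :: Int -> String
        fizzbuzz n
          | n `mod` 15 == 0 = "FizzBuzz"
          | n `mod` 3  == 0 = "Fizz"
          | n `mod` 5  == 0 = "Buzz"
          | otherwise       = show n

Golfing is then mostly a matter of collapsing those guards into shorter arithmetic and list tricks.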
Continue reading

Yet another Linux rant

Maybe it’s a bit trivial, but I want to say it again: the technical savviness of Linux users may be one of the reasons why Linux applications are often so poor.

What do I mean by the allegation above? I have to use a Juniper VPN at work (yes, I know, proprietary sucks), and I happen to run 64-bit Kubuntu (*irony*: what a weird choice, huh?), which means that in order to install the VPN client I have to install a 32-bit JRE and a 32-bit Firefox. In my case this is not much trouble: I use Chrome and hardly ever use Java, so the compromise is acceptable. But if I were a Java developer, this would paralyze my workflow.

But why does nobody seem to care about this pitiful situation? Maybe because Linux users are savvy enough to install all that stuff and shut up (“it works for me”)?

Update: my brain also explodes every time I see this:

[Screenshot: a security warning dialog where the publisher is listed as UNKNOWN]

I can’t trust an UNKNOWN publisher at all, yet I have to do exactly that every day!

Brave new world of tablet computing

The last two weeks were a time of revelations for me: I dropped out of the stream of IT news and focused on math and lectures much more than before, from time to time arguing with my university lecturers. They seem to live in the computer world of the late 90s (it’s Ukraine, yeah): they still moan about “redundant” gigahertz and processor cores, about unnecessary Windows features (lolwhut, who uses it nowadays?) put together intentionally by malevolent software vendors to make us buy new hardware, and so on (to me, having been exposed to the HN crowd for some time, this sounds pretty ridiculous).

One of the most outrageous claims was that “8 cores are enough for anything, increasing the amount won’t gain anything”. My first reaction was “lol, do you still live in the single-core world?!”, and I tried to argue that “modern systems have hundreds of threads running in parallel, wouldn’t it be nice for each of them to have its own core?”, but this was easily refuted: “Are they doing anything most of the time?” And I had to agree that most of the time they are blocked on some kind of IO (even now, on my 8-core Core i7, I see only two tasks actually running). So I proceeded: “Yes, you are right, those computing resources are not required for mundane user activities. And the market reflects this: it is steadily diversifying, with non-fastidious users moving to tablets, while gamers and professionals keep using the desktop for work, and so on.” I felt proud: I was bringing a new vision to those old-school people.

Then I was discussing my programming projects with a peer, showing them on GitHub from my Nexus 7 (my laptop broke recently, so I am forced to use the tablet). I showed Terminal IDE, c4droid, and the Limbo PC emulator with KolibriOS and my own COSEC inside. The feeling was: it’s very much a toy, not a real productive environment; it can do so much, yet it is still far from good enough for doing anything creative. I returned home and read the news: Intel is abandoning motherboard manufacturing, Dell is showing an “office” tablet.

The brave new world of tablet computing is awfully hostile to the usual programmer’s activities (keyboard, you old clunky mechanical keyboard with mechanical buttons, I really, really miss you on slick modern gadgets). Do you want to write a program? You have to go through the pain of a software keyboard (even if it’s Hacker’s Keyboard); you have to do your programming inside a separate application, with no hope of doing anything system-wide until it is packed into an APK and installed (coming from Linux, where programs breathe cooperation and glue together, Android apps are a bunch of black boxes, each on its own); you have to sign every program to comply with the walled-garden rules; and you can’t easily install another operating system on your gadget (free bootloaders, anyone?). It seems that the brave new world of tablets does everything to make hacking and fiddling around more complicated and hindered.

The problem is: how will the new generation of hackers emerge? How do we sustain the desire to know what is inside complex systems (be it hardware or software) in a generation that will never have seen a desktop computer? How do we grow the ability to tinker around? How do we save it from suffocating in a world of fences and locks?

Welcome

Hello.

I am a programmer from Ukraine. My interests are:

  • low-level, systems programming (x86, x86_64, ARM): C and assembly languages for the aforementioned platforms;
  • OS theory, design and implementation (though I am not that much of a hardware fan), distributed and managed operating systems;
  • operating systems: Unix of all kinds, especially Linux (I’m currently getting familiar with its kernel), Inferno, Plan 9 from Bell Labs; QNX, L4; BeOS, KolibriOS; COSEC (written by me);
  • modern (and timeless) scripting and functional programming languages like Lisp/Scheme and Python (I plan to get closer to Haskell and OCaml);
  • a bit of natural language processing and practical AI;
  • once upon a time I had some experience with C++ and OpenGL.
If these are not just empty words to you, welcome to my blog.