Yesterday I wrote a simple C program to perform median filtering on a 4352 × 7712 pixel grayscale image (taken with a Nokia Lumia 1020 41MP camera). My MacBook Pro, with a 2.5 GHz Intel Core i5 processor and 8GB of memory, took between 200 and 500 milliseconds to produce the filtered image. I then ran the same code on my Raspberry Pi (Model B), which has a 700 MHz ARM1176JZF-S processor and 256MB of memory.
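The original program isn't shown, but a minimal sketch of a median filter in this shape (the 3×3 window, function names, and use of qsort are my assumptions, not the original code) might look like:

```c
#include <stdlib.h>
#include <string.h>

/* Comparator for sorting the bytes of the filter window. */
static int cmp_byte(const void *a, const void *b) {
    return (int)*(const unsigned char *)a - (int)*(const unsigned char *)b;
}

/* Hypothetical 3x3 median filter over an 8-bit grayscale image.
   dst and src are w*h buffers; border pixels are copied unfiltered. */
void median3x3(unsigned char *dst, const unsigned char *src, int w, int h) {
    memcpy(dst, src, (size_t)w * h);
    for (int y = 1; y < h - 1; y++) {
        for (int x = 1; x < w - 1; x++) {
            unsigned char win[9];
            int k = 0;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++)
                    win[k++] = src[(y + dy) * w + (x + dx)];
            qsort(win, 9, 1, cmp_byte);
            dst[y * w + x] = win[4];  /* median of the 9 samples */
        }
    }
}
```

Note that this style needs two full-size buffers, one for the source and one for the destination, which is exactly what matters in the story below.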
What happened? The Pi killed the program. I was a little befuddled to say the least; I had never had an OS kill a program. I had even used the heap, mainly because the C program wouldn’t run at all with static arrays of 33,562,624 pixels × 4 bytes (int) ≈ 128MB each. That was probably my downfall on the Pi: I created two of these images, which would have eaten up the whole 256MB of memory. Of course, the Pi just gives you the message “Killed”. Was it an algorithmic homicide? To investigate further, I looked in the kernel log, /var/log/kern.log. What did I find?
raspberrypi kernel: [ 2164.651521] Out of memory: Kill process 1558 (a.out) score 905 or sacrifice child
And so it ends. Things that we take for granted on “normal” systems don’t work so well on constrained systems such as the Raspberry Pi, and that forces us to rethink the way an algorithm is designed. The obvious solution is to reduce the size of the image – but is that realistic? A better approach may be to process the image in parts, manage memory more carefully, or write the processed image to file rather than holding it in memory.
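One way to sketch the “process the image in parts” idea (this is an assumption about how I might restructure the code, not the original program): stream the raw image through in horizontal bands, so only a couple of small band buffers are resident instead of two full ~128MB images. The band filter below is an identity placeholder; a real median filter would also keep one overlap row on each side of the band.

```c
#include <stdio.h>
#include <stdlib.h>

enum { W = 7712, BAND_ROWS = 64 };  /* ~0.5 MB per band buffer */

/* Placeholder: copy the band through; real code would apply
   the median window here. in/out hold rows*W bytes. */
static void filter_band(unsigned char *out, const unsigned char *in,
                        size_t rows) {
    for (size_t i = 0; i < rows * W; i++)
        out[i] = in[i];
}

/* Reads W-byte rows from src, filters band by band, writes to dst.
   Peak memory: two band buffers, instead of two whole images. */
int filter_stream(FILE *src, FILE *dst) {
    unsigned char *in  = malloc((size_t)BAND_ROWS * W);
    unsigned char *out = malloc((size_t)BAND_ROWS * W);
    if (!in || !out) { free(in); free(out); return -1; }

    size_t rows;
    while ((rows = fread(in, W, BAND_ROWS, src)) > 0) {
        filter_band(out, in, rows);
        if (fwrite(out, W, rows, dst) != rows) break;
    }
    free(in);
    free(out);
    return 0;
}
```

The trade-off is more I/O and a little bookkeeping at band boundaries, but the working set drops from hundreds of megabytes to about a megabyte, well within the Pi's reach.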