Raspberry Pi Lego Rack Designs

A few people requested that we describe the design of our racks.

The truth is, each rack is slightly different and the final build is not the one we’d planned. A couple of reasons for this: we didn’t quite receive the Lego pieces we were expecting, and we had to tweak the designs to make things fit better. So this is a somewhat retrospective design document…

We have four racks, each containing 14 Raspberry Pis and composed of two adjacent towers. Between the towers sit two USB hubs. The design is such that the front provides access to the SD card slot and micro-USB power supply, so we can easily change SD cards and reset the Pis. We can also slide the Pis out, which is incredibly useful as we tend to cannibalise Pis quite often, or else swap them around for testing. The back of each rack has space to reach the Ethernet port, and each rack has a dedicated Netgear GS116E switch.

Each rack sits on a green 32×32-stud (25 cm × 25 cm) Lego baseplate.

Ten studs' worth of space is left in front of the rack, and eight studs at the back. There's a gap of two studs on either side of the rack.

For the most part, the towers follow a repeating pattern of four simple layers of Lego, corresponding to a snug fit for a single Raspberry Pi. The only exceptions to these layers are (a) connecting struts between the two towers to hold them together and (b) the tops of the racks, which are frankly all of differing design depending on the academic or student who built them! Particularly as the available Lego ran out, the designs became more improvised. So here I'll just show the three distinct layer designs that make up the towers and a few examples of the improvisations people chose.

For the first layer, we use 2×4 Lego bricks to create “feet” that protrude into the space for the Pi. The Pi actually sits on these feet, giving it space below for the SD card and allowing airflow underneath the Pis (the extra piece in the centre is to keep the USB hub in place):


Next, we simply add two layers of Lego that do not overlap the feet, just building the perimeter wall with enough room for the Pi:


Here are a few horizontal shots to clarify:


The fourth layer is more or less the same as the first, but we add a long strut at the back instead of the two 2×4 Lego pieces, to strengthen the structure:


Shots from the side and the back:



Now we simply repeat the second, third and fourth layers on top of this until we eventually have enough room for seven Pis in each tower, with just a couple of exceptions:

Struts are added at intervals in the rack to strengthen the towers. For example, see the one halfway up the red tower in the image below:

The tops of the racks are a little improvised. Here’s one example:


Raspberry Pi Cloud status update

Our Glasgow Raspberry Pi cloud system is an academic project, which means it will be a never-ending work in progress. In the past few days we have had lots of publicity (thanks (merci) guys!), so we want to give a quick status update so people know what we've done so far:


We have 56 Pi boards in 4 Lego mini-racks. Sadly these are 256MB model B boards, not the newer 512MB version. We have 56 because each rack has a Top-of-Rack switch with 16 Ethernet ports: we use 14 for the Pi boards and the remainder for inter-switch connections.

Software Stack

We run Raspbian Linux on each Pi board. We have three LXC containers on each Pi, each running a Linux instance. There is no resource isolation or accounting yet, so we don’t make any guarantees about utilization for individual containers.

We have experimented with more adventurous technology, including libvirt (we are hacking on this, but don't yet have full Raspberry Pi support working) and Docker (we've had discussions with the developers; watch this space).

Hosted Software

Within each container, we run simple workloads such as lighttpd. We also use artificial workloads like lookbusy for our experiments. We are currently working with Hadoop, although at present this runs on the native Linux instance rather than in an LXC container.

Management Layer

Our project student (Richard) built a nice AWS-like web management console for the Glasgow Raspberry Pi Cloud. Here are some screenshots.


If/when we get libvirt working, we hope to be able to use standard tools like oVirt.

Edit (22 June 2013): The Glasgow Raspberry Pi Cloud is entirely distinct from PiCloud, as the PiCloud folks requested us to say…

Running Hadoop Java and C++ Word Count example on Raspberry Pi

Various blog posts report that Hadoop is incredibly slow on the Pi, and yes, please lower your expectations: the speed really is appalling. But it is very interesting to see just how slow it can be.

This post assumes you already have Hadoop installed and configured on your Pi. Before we start, we need to increase the swap file size if your Pi is the 256MB version; otherwise it will run out of memory.

1. Increase the swap file size (I stole this from David’s post)

hduser@raspberrypi ~ $ pico /etc/dphys-swapfile
change the CONF_SWAPSIZE value to 500 (MB)
hduser@raspberrypi ~ $ sudo dphys-swapfile setup
hduser@raspberrypi ~ $ sudo reboot
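If you'd rather not open an editor, the same change can be made non-interactively. This is a sketch assuming the stock Raspbian /etc/dphys-swapfile, which sets the size via a CONF_SWAPSIZE variable:

```shell
# replace the CONF_SWAPSIZE line with a 500 MB setting
sudo sed -i 's/^CONF_SWAPSIZE=.*/CONF_SWAPSIZE=500/' /etc/dphys-swapfile
# confirm the change
grep CONF_SWAPSIZE /etc/dphys-swapfile
```

You still need to run `sudo dphys-swapfile setup` and reboot afterwards, as above.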

2. Download the example file

Go to http://www.gutenberg.org/ebooks/20417 and download the plain text e-book. Assuming you have downloaded the file to your home directory, we then copy it to HDFS.

hduser@raspberrypi ~ $ start-all.sh
hduser@raspberrypi ~ $ hadoop dfs -copyFromLocal pg20417.txt /user/hduser/wordcount/pg20417.txt 

You can then check that the file exists, much like the ls command:

hduser@raspberrypi ~ $ hadoop dfs -ls /user/hduser/wordcount

3. Run the Java wordcount example

hduser@raspberrypi ~ $ hadoop jar /usr/local/hadoop/hadoop-examples-1.1.2.jar wordcount /user/hduser/wordcount /user/hduser/wordcount-output

Now, be patient! It will take approximately 8 minutes to complete…

4. Check execution result

hduser@raspberrypi ~ $ hadoop dfs -cat /user/hduser/wordcount-output/part-r-00000

5. C++ wordcount example

Getting Hadoop Pipes to run on the Pi needs a little more effort (hacking?), as we will need to build some Pi-compatible libraries. In particular, we'll want libhdfs, libhadooppipes and libhadooputils.

Let’s get the build environment ready first.

hduser@raspberrypi ~ $ sudo apt-get install libssl-dev

Go to /usr/local/hadoop/src/c++/libhdfs/ and edit the configure file so that it will run without errors.

In the configure file, find and comment out the following two lines:

as_fn_error $? "Unsupported CPU architecture \"$host_cpu\"" "$LINENO" 5;;


define size_t unsigned int

That's all the hacking we need to do. Next:
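If you want to script the first of those edits rather than doing it by hand, a sed one-liner can prefix the offending check with a comment character (assuming the line still contains the same error message):

```shell
cd /usr/local/hadoop/src/c++/libhdfs
cp configure configure.bak   # keep a backup
# comment out the line that aborts on unknown CPU architectures
sed -i '/Unsupported CPU architecture/s/^/#/' configure
```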

hduser@raspberrypi ~ $ ./configure --prefix=/usr/local/hadoop/c++/Linux-i386-32
hduser@raspberrypi ~ $ make
hduser@raspberrypi ~ $ make install

We're almost done; just do the same for pipes and utils. Once finished, you'll have Pi-compatible libraries, and can build wordcount.cpp with the Makefile given below.


#include <algorithm>
#include <limits>
#include <string>
#include <vector>

#include "stdint.h"  // <--- to prevent uint64_t errors!

#include "hadoop/Pipes.hh"
#include "hadoop/TemplateFactory.hh"
#include "hadoop/StringUtils.hh"

using namespace std;

class WordCountMapper : public HadoopPipes::Mapper {
public:
  // constructor: does nothing
  WordCountMapper( HadoopPipes::TaskContext& context ) {
  }

  // map function: receives a line, outputs (word,"1")
  // to reducer.
  void map( HadoopPipes::MapContext& context ) {
    //--- get line of text ---
    string line = context.getInputValue();

    //--- split it into words ---
    vector< string > words =
      HadoopUtils::splitString( line, " " );

    //--- emit each word tuple (word, "1" ) ---
    for ( unsigned int i=0; i < words.size(); i++ ) {
      context.emit( words[i], HadoopUtils::toString( 1 ) );
    }
  }
};

class WordCountReducer : public HadoopPipes::Reducer {
public:
  // constructor: does nothing
  WordCountReducer(HadoopPipes::TaskContext& context) {
  }

  // reduce function
  void reduce( HadoopPipes::ReduceContext& context ) {
    int count = 0;

    //--- get all tuples with the same key, and count their numbers ---
    while ( context.nextValue() ) {
      count += HadoopUtils::toInt( context.getInputValue() );
    }

    //--- emit (word, count) ---
    context.emit(context.getInputKey(), HadoopUtils::toString( count ));
  }
};

int main(int argc, char *argv[]) {
  return HadoopPipes::runTask(HadoopPipes::TemplateFactory<
                              WordCountMapper, WordCountReducer >() );
}

CC = g++
HADOOP_INSTALL = /usr/local/hadoop
PLATFORM = Linux-i386-32
CPPFLAGS = -I$(HADOOP_INSTALL)/c++/$(PLATFORM)/include

# NB: the recipe lines below must be indented with a tab
wordcount: wordcount.cpp
	$(CC) $(CPPFLAGS) $< -Wall -L$(HADOOP_INSTALL)/c++/$(PLATFORM)/lib -lhadooppipes \
	-lhadooputils -lpthread -lcrypto -lssl -g -O2 -o $@

Remark: on my 256MB model B Pi, the C++ wordcount takes about 10 minutes to finish.
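For completeness, here is how a compiled Pipes binary is typically launched: the binary is first copied into HDFS so the task trackers can fetch it, then run with `hadoop pipes`. This is a sketch; the HDFS paths here are assumptions, so adjust them to taste.

```shell
# put the compiled wordcount binary into HDFS
hadoop dfs -copyFromLocal wordcount /user/hduser/bin/wordcount

# run it via Hadoop Pipes, letting Java handle record reading/writing
hadoop pipes \
  -D hadoop.pipes.java.recordreader=true \
  -D hadoop.pipes.java.recordwriter=true \
  -program /user/hduser/bin/wordcount \
  -input /user/hduser/wordcount \
  -output /user/hduser/wordcount-cpp-output
```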


[1] http://cs.smith.edu/dftwiki/index.php/Hadoop_Tutorial_2.2_–_Running_C%2B%2B_Programs_on_Hadoop

[2] http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/#Copy_local_example_data_to_HDFS

Getting hadoop to run on the Raspberry Pi

Hadoop is implemented in Java, so getting it to run on the Pi is just as easy as doing so on x86 servers. First of all, we need a JVM for the Pi. You can either get OpenJDK or Oracle's JDK 8 for ARM Early Access. I would personally recommend JDK 8, as it is just slightly faster than OpenJDK, though OpenJDK is easier to install.

1. Install Java

Installing OpenJDK is easy: just run the following and wait.

pi@raspberrypi ~ $ sudo apt-get install openjdk-7-jdk
pi@raspberrypi ~ $ java -version
java version "1.7.0_07"
OpenJDK Runtime Environment (IcedTea7 2.3.2) (7u7-2.3.2a-1+rpi1)
OpenJDK Zero VM (build 22.0-b10, mixed mode)

Alternatively, you can install Oracle's JDK 8 for ARM Early Access (some say it is optimized for the Pi).
First get it from here: https://jdk8.java.net/fxarmpreview/index.html

pi@raspberrypi ~ $ sudo tar zxvf jdk-8-ea-b36e-linux-arm-hflt-*.tar.gz -C /opt
pi@raspberrypi ~ $ sudo update-alternatives --install "/usr/bin/java" "java" "/opt/jdk1.8.0/bin/java" 1
pi@raspberrypi ~ $ java -version
java version "1.8.0-ea"
Java(TM) SE Runtime Environment (build 1.8.0-ea-b36e)
Java HotSpot(TM) Client VM (build 25.0-b04, mixed mode)

If you have both versions installed, you can switch between them with:

sudo update-alternatives --config java

2. Create a hadoop system user

pi@raspberrypi ~ $ sudo addgroup hadoop
pi@raspberrypi ~ $ sudo adduser --ingroup hadoop hduser
pi@raspberrypi ~ $ sudo adduser hduser sudo

3. Setup SSH

pi@raspberrypi ~ $ su - hduser
hduser@raspberrypi ~ $ ssh-keygen -t rsa -P ""

This creates an RSA key pair with an empty password, which stops Hadoop prompting for a passphrase when it talks to its nodes.

hduser@raspberrypi ~ $ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

Now SSH access to your local machine is enabled with this newly created key:

hduser@raspberrypi ~ $ ssh localhost

You should be able to log in without a password.

4. Download and install Hadoop
Download Hadoop from http://www.apache.org/dyn/closer.cgi/hadoop/core

hduser@raspberrypi ~ $ wget http://mirror.catn.com/pub/apache/hadoop/core/hadoop-1.1.2/hadoop-1.1.2.tar.gz
hduser@raspberrypi ~ $ sudo tar vxzf hadoop-1.1.2.tar.gz -C /usr/local
hduser@raspberrypi ~ $ cd /usr/local
hduser@raspberrypi /usr/local $ sudo mv hadoop-1.1.2 hadoop
hduser@raspberrypi /usr/local $ sudo chown -R hduser:hadoop hadoop

Hadoop is now installed, though not quite ready to roll. Edit the .bashrc in your home directory and append the following lines:

export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-armhf
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin

Modify JAVA_HOME accordingly if you use Oracle's version.

Reboot Pi and verify the installation:

hduser@raspberrypi ~ $ hadoop version
Hadoop 1.1.2
Subversion https://svn.apache.org/repos/asf/hadoop/common/branches/
branch-1.1 -r 1440782
Compiled by hortonfo on Thu Jan 31 02:03:24 UTC 2013
From source with checksum c720ddcf4b926991de7467d253a79b8b

5. Configure Hadoop
NOTE: this how-to covers only a minimal single-node Hadoop configuration.

The configuration files live in /usr/local/hadoop/conf/; we will need to edit core-site.xml, hdfs-site.xml and mapred-site.xml.
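The XML snippets appear to have been lost from this post. For reference, the minimal single-node settings from Michael Noll's tutorial (reference [2] below) look like the following; the port numbers are his defaults and are assumptions here, while the tmp directory matches the one created in the next step. In core-site.xml, inside the <configuration> element:

```xml
<property>
  <name>hadoop.tmp.dir</name>
  <value>/fs/hadoop/tmp</value>
</property>
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
</property>
```

In mapred-site.xml:

```xml
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
</property>
```

In hdfs-site.xml (replication of 1, since we have a single node):

```xml
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
```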

OK, we’re almost done, one last step.

hduser@raspberrypi ~ $ sudo mkdir -p /fs/hadoop/tmp
hduser@raspberrypi ~ $ sudo chown hduser:hadoop /fs/hadoop/tmp
hduser@raspberrypi ~ $ sudo chmod 750 /fs/hadoop/tmp
hduser@raspberrypi ~ $ hadoop namenode -format


If you use JDK 8 for Hadoop, you need to force the DataNode to run in JVM client mode, as the ARM JDK 8 preview does not yet support the server VM. Go to /usr/local/hadoop/bin and edit the hadoop file (please create a backup first). Assuming you're using nano, the procedure is as follows: open the file with nano hadoop, press ctrl-w to search for the “-server” argument, delete “-server”, then save and exit.
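The same edit can be scripted with sed, assuming “-server” appears in that launcher script only as the JVM flag:

```shell
cd /usr/local/hadoop/bin
cp hadoop hadoop.bak   # keep a backup
# strip the "-server" JVM flag so the client VM is used
sed -i 's/-server//g' hadoop
```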

The single-node Hadoop system is now ready. Below are some useful commands.

1. jps           // will report the local VM identifier
2. start-all.sh  // will start all hadoop processes
3. stop-all.sh   // will stop all hadoop processes



[1] http://raspberrypi.stackexchange.com/questions/4683/how-to-install-java-jdk-on-raspberry-pi
[2] http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/

Creating an LXC Container on the Raspberry Pi

This post assumes you’ve followed the instructions in our post “Building an LXC-friendly Kernel on the Raspberry Pi” to get kernel support working and install the lxc tools from the LXC git repository.

Mount a cgroup

# pico /etc/fstab

# add the line "lxc /sys/fs/cgroup cgroup defaults 0 0"

# mount -a

Create a Directory to Store Guests

# mkdir -p /var/lxc/guests

Create a File System for the Container

Let’s create a container called “test”.

First, create a filesystem for the container. This may take some time.

# apt-get install debootstrap

# mkdir -p /var/lxc/guests/test

# debootstrap wheezy /var/lxc/guests/test/fs/ http://archive.raspbian.org/raspbian

Modify the Container’s File System

# chroot /var/lxc/guests/test/fs/

Change the root password.

# passwd

Change the hostname as you wish.

# pico /etc/hostname

Exit the chroot.

# exit

Create a Minimal Configuration File

# pico /var/lxc/guests/test/config

Enter the following:

lxc.utsname = test

lxc.tty = 2

lxc.rootfs = /var/lxc/guests/test/fs

Create the Container

# lxc-create -f /var/lxc/guests/test/config -n test

Test the Container

# lxc-start -n test -d

[wait for a while, a few minutes]

# lxc-console -n test -t 1

Building an LXC-friendly Kernel for the Raspberry Pi

This post is heavily based on Yohei Kuga's post on Google+.


We’ve spent a lot of time installing LXC in various ways and using different configurations. Recently Yohei Kuga posted a neat and minimal process, which was certainly more streamlined than our approach. I’ve expanded his notes and tested everything to ensure it works.

Download Raspbian and bake an SD Card



We used the 2013-02-09-wheezy-raspbian.zip image.

We made no changes to this image other than those described below.

Switch to Root User on the Pi

These commands must be run as root. You can use “su”, or for convenience (I know):

# sudo su root

Expand to fill SD Card

Expand to fill SD card and reboot after entering:

# raspi-config

Update Raspbian

# apt-get update

# apt-get dist-upgrade

Install git

# apt-get install git-core

Update Firmware

The clone will take a while. You might consider cloning on a desktop machine to save time. Just transfer the firmware/boot and modules/ directories from your desktop PC to the Pi after the checkout.

# cd /opt

# git clone git://github.com/raspberrypi/firmware.git

# cd firmware/boot

# cp * /boot

# cd ../modules

# cp -r * /lib/modules

# reboot

Increase the Swap File Size

I found that, on the 256MB Pi, you'll need a larger swap file in order to check out the source; otherwise it will run out of RAM during the checkout (failing with fatal: index-pack failed).

# pico /etc/dphys-swapfile

# change CONF_SWAPSIZE to 500 (MB)

# sudo dphys-swapfile setup

# reboot

Prepare to Build Kernel

The clone will take a while. Again, you may consider using a desktop PC. Of course, if you do that, you’ll need to issue the “zcat” command from your Pi and copy the resulting “.config” file to the “linux” directory on your desktop PC.

# cd /opt

# mkdir raspberrypi

# cd raspberrypi

# git clone git://github.com/raspberrypi/linux.git

# cd linux

# zcat /proc/config.gz > .config

Decrease the Swap Space File

# pico /etc/dphys-swapfile

# change CONF_SWAPSIZE to 100 (MB)

# dphys-swapfile setup

# reboot

Install Packages for Kernel Compilation

# apt-get install ncurses-dev

Kernel Options

You’ll now need to set some kernel options to support LXC, via the menu config tool.

# cd /opt/raspberrypi/linux

# make menuconfig

You need to enable these options:

* General -> Control Group Support -> Memory Resource Controller for Control Groups (*and its three child options*)

(this has high overhead; only enable it if you really need it, or else enable it and remember to disable it with the kernel command-line option “cgroup_disable=memory”)

* General -> Control Group Support -> cpuset support

* Device Drivers -> Character Devices -> Support multiple instances of devpts

* Device Drivers -> Network Device Support -> Virtual ethernet pair device

Build Kernel 

# make

# make modules_install

# cd /opt/raspberrypi

# git clone git://github.com/raspberrypi/tools.git

# cd tools/mkimage

# python ./imagetool-uncompressed.py /opt/raspberrypi/linux/arch/arm/boot/Image

# cp /boot/kernel.img /boot/kernel-old.img

# cp kernel.img /boot/

# reboot

Download Latest LXC

The LXC tools provided with Raspbian are out-of-date.

# mkdir /opt/lxc

# cd /opt/lxc

# git clone https://github.com/lxc/lxc.git

# apt-get install automake libcap-dev

# cd lxc

# ./autogen.sh && ./configure && make && make install

Testing the Install

Check LXC is happy with your kernel:

# lxc-checkconfig

User namespace should be “missing” (it checks for a kernel option that no longer exists) and Cgroup namespace should say “required”.

Photos from Demofest

A few photos from our recent outing to the SICSA Demofest in Edinburgh.