Kubernetes Dashboard – Run as Docker Image on Raspberry Pi 2.

The docker image has now been fixed for ARMv7 by the Google Kubernetes Dashboard team as of 19th January 2016.

Due to the change in the code, you can now build and run Dashboard in a Docker image, rather than having to serve it or use ‘Screen’ as per my previous blog post here.

So first things first, we need to update our code base to the latest version of Kubernetes Dashboard from their Github by running the following command inside our Dashboard directory:

git pull https://github.com/kubernetes/dashboard.git

This then downloads and checks the latest files against the currently installed files and updates them accordingly, as shown in the below screenshot.

(screenshot: KubeDashUpdate)

In order to build and run the Docker image, we need to do the following:

gulp docker-image

(screenshot: gulpDockerImage)

You can then check that the Docker image has been built by running:

docker images

(screenshot: dockerImage)

Finally, to run the Docker image, we use the following command:

docker run -d -p 9090:9090 kubernetes/dashboard --apiserver-host=http://192.168.1.201:8080

You will need to change the IP address of the API server host to match your Master node; in our case, our Master is on 192.168.1.201.
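If you run this against more than one cluster, it can help to keep the master address in a variable rather than hard-coding it. A minimal sketch (MASTER_IP here is our example address; substitute your own, and note we only echo the command for illustration):

```shell
# Sketch: parameterise the API server address instead of hard-coding it.
# MASTER_IP is our example master node; substitute your own address.
MASTER_IP=192.168.1.201
DASH_CMD="docker run -d -p 9090:9090 kubernetes/dashboard --apiserver-host=http://${MASTER_IP}:8080"
echo "$DASH_CMD"
```

Changing the master later then means editing one line rather than hunting through the command.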

The dashboard should now be running in a small Docker container, without the need for Screen or an active running terminal.

You can check this with:

docker ps

(screenshot: dockerPS)

Thank you to Piotr Bryk for updating the build code and fixing this issue!
Keep an eye on the Kubernetes GitHub repository here for further updates; they seem to be coming in thick and fast, and you can update your Dashboard at any time using this method.

Kubernetes Dashboard on Raspberry Pi 2

Google are currently working on a Dashboard UI for their Kubernetes orchestration tool. This is still a work in progress and not yet ready for production; however, they aim to have a beta release at the end of January and a stable release towards the end of February. We have managed to get this working, ready for the beta release.
This will replace the current kube-ui.

What you will need (at least):
2 x SD cards loaded with Arch Linux ARM & Kubernetes. (Covered in our previous blog post here)
2 x Raspberry Pi 2 Model B

Preparation
In order to get the Dashboard working, there are a number of software dependencies that need to be installed and configured first.
• Docker 1.3+
• Go 1.5+
• NodeJS 4.2.2+
• NPM 1.3+
• Etcd 2.2+
• Java 7+
• Gulp 3.9+
• Python
• Python2
• Make
• Bower
• GCC

Installing the Dependencies
Docker should already be installed, and we will be leaving Go till last. I prefer doing this in steps, as mirror connections sometimes fail, but there is also a single combined command, shown below the steps.
1. Open a terminal on your Master Kubernetes Node.
2. Log into the root account using su and the root password.
3. Install Node JS and NPM.

pacman -S nodejs npm

4. Install Java 7. When we tested with later versions of Java, we ran into issues compiling the dashboard, therefore we stuck with JDK 7.

pacman -S jdk7-openjdk

5. Install Gulp

pacman -S gulp

6. Install Python and Python 2

pacman -S python python2

7. Install Make

pacman -S make

8. Install Bower

pacman -S bower

9. Install GCC

pacman -S gcc

To install all the dependencies in one go, you can use the following command:

pacman -S nodejs npm jdk7-openjdk gulp python python2 make bower gcc

Now, we are going to build Go. At the time of writing, there is not a working version of Go 1.5 for ARMv7, therefore we have to firstly install the older 1.4.3 version from the public repository and then manually build 1.5 on an ARMv7 platform.

1. Clone the “Go” repository.

git clone https://go.googlesource.com/go

2. Change directory to “Go”.

cd go

3. Checkout “Go 1.4.3”.

git checkout go1.4.3

4. Change directory to “src”.

cd src

5. Run the bash script in order to build Go 1.4.3.

./all.bash

6. As Go is installed in our user’s home directory, we need to add its bin directory to the user’s $PATH so that the go command can be found.

nano /home/alarm/.bashrc

and input at the bottom of the file:

export GOROOT=$HOME/go

export PATH=$PATH:$GOROOT/bin
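Before logging out, you can sanity-check the .bashrc change by exporting the same variables in the current shell and confirming the Go bin directory is on the path; a quick sketch:

```shell
# Sketch: mirror the .bashrc additions in the current shell and verify them.
export GOROOT="$HOME/go"
export PATH="$PATH:$GOROOT/bin"

# Wrap PATH in colons so we can match the entry exactly.
case ":$PATH:" in
  *":$GOROOT/bin:"*) echo "PATH ok" ;;
  *)                 echo "PATH missing $GOROOT/bin" ;;
esac
```

If this prints "PATH ok", the same lines in .bashrc will take effect on the next login.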

7. Log out and back in to the shell as user “alarm”, not “root”.
8. Check that the correct version of “Go” is installed and that the $PATH has been set up.

go version

This should display the following:
go version go1.4.3 linux/arm

9. To check that “Go” is now installed and working correctly, we created a simple “Hello, World” script:

nano helloWorld.go

and pasted this into the file:

package main

import "fmt"

func main() {
	fmt.Printf("hello, world\n")
}

save and exit.

10. Run the file.

go run helloWorld.go

This should return the following:

hello, world
If this is not displayed and you receive an error saying “Go not found”, go back and ensure that “Go” has been added to your $PATH variable.

11. Now that we have “Go 1.4.3” installed, we have to upgrade to “Go 1.5” for the Kubernetes Dashboard. You can do this the long way round by creating your own bootstrap, which requires another machine, or you can use the public ARM bootstrap that Dave Cheney has published on his website, which is what we will be using. To use Dave Cheney’s bootstrap, either run the below commands one by one, or alternatively create a script to run them by doing the following:

nano upgradeGo

and paste this script into the file:

#!/bin/bash
cd $HOME
curl http://dave.cheney.net/paste/go-linux-arm-bootstrap-c788a8e.tbz | tar xj
curl https://storage.googleapis.com/golang/go1.5.src.tar.gz | tar xz
ulimit -s 1024     # set the thread stack limit to 1mb
ulimit -s          # check that it worked
cd $HOME/go/src
env GO_TEST_TIMEOUT_SCALE=10 GOROOT_BOOTSTRAP=$HOME/go-linux-arm-bootstrap ./all.bash

12. Save the script and give it execute permissions.

chmod +x upgradeGo

13. Run the script as root. Be aware, this does take a while to run, and there are times when no updates are displayed. It may also fail on the tests, but running /root/go/bin/go version should return “go version go1.5 linux/arm” once complete.

./upgradeGo

14. As we ran the upgradeGo script as root, we now need to move the result from root’s home folder, where it was built, to the alarm home folder.
First, remove the old 1.4.3 Go directory from the home directory of alarm.

rm -rf /home/alarm/go

Then, move the folder from /root/go to /home/alarm/go.

mv /root/go /home/alarm/go

Finally, to check that the folder has moved correctly:

ls /home/alarm/go

If the folder has been moved correctly, it should display the following:
(screenshot: alarmGoMove)

Log out of the root account.

15. To check that Go still runs correctly, test it again using the file we created earlier.

go run helloWorld.go

16. Also, recheck the Go version to ensure it now reports version 1.5.

go version

With the full preparation done, we moved onto building the Kubernetes Dashboard.

Building the Dashboard

1. For security reasons, the dashboard should be run as a standard user, not as root.  Therefore, as alarm, we start by cloning the latest Git release for Kubernetes Dashboard

git clone https://github.com/kubernetes/dashboard.git

2. Change to the dashboard directory

cd dashboard

3. Install the dashboard packages. This takes around 3 hours and requires the terminal to stay active throughout the process. If the terminal becomes inactive or a remote shell is lost, you will have to restart the process; fortunately, it will continue from where it stopped. You may, however, get an error stating that “node-sass” cannot be downloaded. If you receive this error, follow the step below to manually download it once the main install has finished.

npm install

4. Optional: Install node-sass if you received the above-mentioned error. This can take up to 15 minutes.

npm install node-sass

5. Log in as root and install bower.  This has to be done as root, but once you have installed bower, remember to exit the superuser.

bower install --allow-root

6. We are now ready to run the dashboard. To launch it:

gulp serve

Remember, this should be done in a standard user account.

(screenshot: gulpServe)

7. You can see that this is up and running, as our screenshot displays the access URLs. To view it in a browser, simply open your web browser and go to your Pi’s IP address and port, as shown in your terminal. In our case above, the address is listed as http://192.168.1.201:9090, but yours may vary. This will show the test/debug version of the Kubernetes dashboard.

You can access the debugging console by going to the UI address listed in the terminal.  In our case, this was http://192.168.1.201:3001

8. To remove the debugging console and run a production version of Kubernetes dashboard:

gulp build

9. Once this is built, it is placed in the dist folder. To run the production version of Kubernetes dashboard:

gulp serve:prod

10. Open your web browser and go to your Pi’s IP address.  Again, this will vary, ours was http://192.168.1.201:9090 so check the output in the terminal window to find the IP Address if you are unsure.

Kubernetes dashboard should now be up and running.

(screenshot: dash)

At present, the Kubernetes dashboard requires an active terminal to run. To bypass this, we used “Screen”, which runs a detached, non-active terminal in the background, meaning we didn’t need a running active terminal.

1. Install Screen

pacman -Sy screen

2. Open the dashboard directory

cd dashboard

3. Run screen

screen -fa -d -m gulp serve:prod

This will run in the background.

4. To see the process

screen -r

This will either reattach the detached terminal running the Kubernetes Dashboard, or list all the running screen sessions. To reattach a specific screen session, you will need to include the screen ID produced by the above command.
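When several sessions are listed, screen prints lines like "12345.pts-0.alarmpi (Detached)"; the leading number is the ID to pass to screen -r. A small sketch of extracting that ID (the session line below is a hypothetical example of screen's output format):

```shell
# Sketch: pull the session ID out of a "screen -ls"-style line.
# "line" is a hypothetical example of what screen prints per session.
line="12345.pts-0.alarmpi (Detached)"
session_id="${line%%.*}"   # everything before the first dot
echo "$session_id"
# You would then reattach with: screen -r "$session_id"
```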

To detach from the screen and leave it running, press: ctrl + a + d

To completely end the process and stop the dashboard from running, reattach the session and press ctrl + c (or press ctrl + a, then k, to kill the screen session).

Quick recap:
At present, the dashboard is still a work in progress. Some links may not work, and functionality is limited at the time of writing this post. Keep your eye on the Kubernetes Dashboard project on GitHub for further releases.

 

 

HOW TO: Kubernetes Multi-node on Raspberry Pi 2s

Google’s Kubernetes is a powerful orchestration tool for containerised applications across multiple hosts. We achieved the first fully running implementation of Kubernetes on Raspberry Pi 2 today, and thanks to the ease of docker, you can too.

You will need:

At least 2 Raspberry Pi 2s

Two SD cards loaded with Arch Linux | ARM

First, we need to install Docker and NTP on all the machines (the Pis need to have the correct time to download Docker images):

pacman -S docker ntp

Just hit y to continue. I recommend that you reboot your Pis after this so that both services come up cleanly. Now we need to create a setup implementing this:

(diagram: k8s-docker)

Select a Pi to be the master, and ssh in. I recommend that you su to root for the following. Then run this command to bring up docker-bootstrap.


sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'

Then we need to bring up etcd, the key value store used by Kubernetes. This command and any other docker run command with a new container might take a little while when first running, as docker will need to download the container. I’m working on shrinking the images to make this less of a pain.


docker -H unix:///var/run/docker-bootstrap.sock run --net=host -d andrewpsuedonym/etcd:2.1.1 /bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data

Then we should reserve a CIDR range for flannel


docker -H unix:///var/run/docker-bootstrap.sock run --net=host andrewpsuedonym/etcd:2.1.1 etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'

Now we need to stop docker so that we can reconfigure it to use flannel.

systemctl stop docker

Run flannel itself on docker-bootstrap. This command should print a long hash, which is the id of the container


docker -H unix:///var/run/docker-bootstrap.sock run -d --net=host --privileged -v /dev/net:/dev/net andrewpsuedonym/flanneld flanneld

Then we need to get its subnet information.

docker -H unix:///var/run/docker-bootstrap.sock exec <long-hash-from-above-here> cat /run/flannel/subnet.env

This should print out something like this


FLANNEL_SUBNET=10.1.78.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false

Now we need to configure docker to use this subnet, which is very simple. All we need to do is edit the docker.service file.


nano /usr/lib/systemd/system/docker.service

Then change the line which starts with ExecStart to include the flags --bip and --mtu, using the values from subnet.env above. It should end up looking something like this.

ExecStart=/usr/bin/docker --bip=FLANNEL_SUBNET --mtu=FLANNEL_MTU -d -H fd://

Replace FLANNEL_SUBNET and FLANNEL_MTU with the actual values from your subnet.env, e.g. --bip=10.1.78.1/24 --mtu=1472.
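The substitution step can be sketched in shell: source flannel's subnet file and print the flags to paste into ExecStart. Here we recreate subnet.env locally for illustration; on the Pi you would source /run/flannel/subnet.env instead.

```shell
# Sketch: derive the --bip/--mtu flags from flannel's subnet file.
# We write a local copy of subnet.env for illustration only; on the Pi,
# source the real file at /run/flannel/subnet.env.
cat > subnet.env <<'EOF'
FLANNEL_SUBNET=10.1.78.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
EOF

. ./subnet.env
echo "--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}"
# prints: --bip=10.1.78.1/24 --mtu=1472
```

This avoids transcription errors when copying the values into docker.service by hand.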

Now we need to take down the network bridge docker0.


/sbin/ifconfig docker0 down
brctl delbr docker0

Then we can start Docker up again


systemctl start docker

Now it’s time to launch kubernetes!
This launches the master


docker run --net=host --privileged -d -v /sys:/sys:ro -v /var/run/docker.sock:/var/run/docker.sock andrewpsuedonym/hyperkube hyperkube kubelet --api-servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable-server --hostname-override=127.0.0.1 --config=/etc/kubernetes/manifests-multi --pod-infra-container-image=andrewpsuedonym/pause

And then this launches the proxy


docker run -d --net=host --privileged andrewpsuedonym/hyperkube:v1.0.1 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2

You should now have a functioning one node cluster. Download the kubectl binary from here, and then if you run


./kubectl get nodes

You should see your node appear. Now for the first worker node.
These instructions can be applied as many times as necessary to add however many worker nodes you need.
We’ll need a docker-bootstrap again for flannel.


sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'

Then we should stop docker


systemctl stop docker

And add flanneld. This node doesn’t need etcd running on it, because it will use the running etcd from the master node.


docker -H unix:///var/run/docker-bootstrap.sock run -d --net=host --privileged -v /dev/net:/dev/net andrewpsuedonym/flanneld flanneld --etcd-endpoints=http://MASTER_IP:4001

The master IP address is the IP address of the first node we set up. You can check that you have the right IP by running


curl MASTER_IP:4001

You should get a 404 response.

As before, we need to get the subnet information.


docker -H unix:///var/run/docker-bootstrap.sock exec <long-hash-from-above-here> cat /run/flannel/subnet.env

and edit the /usr/lib/systemd/system/docker.service file to include --bip=FLANNEL_SUBNET --mtu=FLANNEL_MTU when launching docker, just like we did before.
Now we bring down docker’s network bridge and reload it.


/sbin/ifconfig docker0 down
brctl delbr docker0
systemctl daemon-reload
systemctl start docker

This Pi is now ready for Kubernetes.


docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock andrewpsuedonym/hyperkube hyperkube kubelet --api-servers=http://${MASTER_IP}:8080 --v=2 --address=0.0.0.0 --enable-server --hostname-override=$(hostname -i) --pod-infra-container-image=andrewpsuedonym/pause

docker run -d --net=host --privileged andrewpsuedonym/hyperkube hyperkube proxy --master=http://${MASTER_IP}:8080 --v=2

Running kubectl get nodes on the original Pi should now return both nodes.
