Thursday, March 26, 2015

Pi-oneering on the Raspberry Pi 2 - part 1

Raspberry and Docker (Part 1)

I recently got my hands on a Raspberry Pi 2, the one with 1 GB of RAM. I was as excited as a little kid, and have been playing with it together with the youngest branch of the Stam family. Ideally I want to run CentOS 7, but the CentOS 7/RedSleeve project is not quite there yet for the ARMv7 architecture. So instead I tried the recent Fedora 21 images for the Pi 2. This distribution is very nice, but Docker has not yet been packaged as an rpm for it. After trying ArchLinux, I finally settled on using Hypriot. Both distros are nice, but I'm more familiar with 'apt-get' than I am with 'pacman'.

Fedora 21 on the Pi 2

In the following thread, Clive Messer talks about the Pi2B Fedora Remix images he is providing. If you are running OS X like me and want to write one of these images onto your Pi's SD card, you need to download one of the compressed raw images, and you will need 'The Unarchiver', which you can download from the App Store. While you are extracting the .xz file, you can insert your memory card into your Mac and follow the directions on the Raspberry Pi site to get your card ready. I followed the 'mostly graphical' directions:
  • Connect the SD card reader with the SD card inside. Note that it must be formatted in FAT32.
  • From the Apple menu, choose About This Mac, then click on More info...; if you are using Mac OS X 10.8.x Mountain Lion or newer then click on System Report.
  • Click on USB (or Card Reader if using a built-in SD card reader) then search for your SD card in the upper right section of the window. Click on it, then search for the BSD name in the lower right section; it will look something like 'diskn' where n is a number (for example, disk4). Make sure you take a note of this number.
  • Unmount the partition so that you will be allowed to overwrite the disk; to do this, open Disk Utility and unmount it (do not eject it, or you will have to reconnect it). Note that on Mac OS X 10.8.x Mountain Lion, "Verify Disk" (before unmounting) will display the BSD name as "/dev/disk1s1" or similar, allowing you to skip the previous two steps.
  • From the terminal run:
    sudo dd bs=1m if=path_of_your_image.raw of=/dev/diskn
    Remember to replace n with the number that you noted before
Note that this is an 8GB raw file, so your card should be at least 8GB. Also note that this can take a while; for me it took about an hour. You can check the progress of 'dd' by sending it a SIGINFO signal:

sudo kill -INFO <pid-of-dd>

The 'dd' process will briefly pause to write its progress to the console and then resume writing. Once completed, eject the card, stick the micro SD card into your Pi 2, and power it up. It is really pretty amazing to see a full Fedora 21 installation appear! I tried building Docker from source, but in the end I wasn't very happy with my results. I think I'm just going to wait till Clive creates the rpm for it.
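The progress check can be sketched end-to-end. Note that the signal differs by platform, and the pgrep lookup is just one assumed way of finding the dd process; the demo below runs a short, harmless dd so the mechanics can be seen without touching an SD card:

```shell
# On macOS/BSD, dd reports progress on SIGINFO:  sudo kill -INFO "$(pgrep -x dd)"
# On Linux, the equivalent signal is SIGUSR1:    sudo kill -USR1 "$(pgrep -x dd)"
# Harmless demo: copy zeroes to /dev/null and poke the running dd once.
dd if=/dev/zero of=/dev/null bs=1M count=2048 &
DD_PID=$!
sleep 0.2
kill -USR1 "$DD_PID" 2>/dev/null   # dd prints records in/out to stderr and keeps going
wait "$DD_PID"
status=$?
echo "dd finished with status $status"
```

(On macOS you can also just press Ctrl-T in the terminal where dd is running, which sends SIGINFO to the foreground process.)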

ArchLinux on the Pi 2

To install ArchLinux on your Pi, you can follow their instructions, but note that the Pi 2 has an ARMv7 processor, so make sure to pick an ARMv7 distribution. These images also extract to 8GB, and like the Fedora 21 remix they were slow to copy onto my SD card. The base image is very small and runs just enough services to support Docker, which is great because it leaves as many resources as possible for the containers. I ran into an issue with the systemd bus when I tried starting httpd inside a container, which ultimately led me to try Hypriot.

Hypriot on the Pi 2

"Heavily ARMed after major upgrade: Raspberry Pi with Docker 1.5.0"

To install Hypriot, download the image, which includes Docker. Extract the zip file and follow the instructions above to copy the img to the SD card using 'dd'. The copy will be pretty fast since it's only about 1 GB; eject and stick it into the Pi slot. At the boot prompt log in with user "pi" and password "raspberry" (or with the privileged user "root" and password "hypriot"). Note that sshd is running, so you can log in over the network as long as you know the Pi's IP address. A nice touch is that the hostname of the Pi is set to 'black-pearl'.

One thing that is still worth mentioning is that you need special ARM-compatible Docker images.
Standard x86-64 Docker images from the Docker Hub won't work. That's the reason Hypriot created a number of ARM-compatible Docker images to get you started. You will find these images and more on Docker Hub. After booting your image on the Pi, these base images are just a "docker pull" away, for example "docker pull hypriot/rpi-node". If you are missing things like 'vi' or 'git', then run

apt-get update
apt-get upgrade
apt-get install vim
apt-get install git

or whatever else you want to install. So far I've been very happy with Hypriot. Note that it does not come with a graphical environment, but in this case that's exactly what I like, so that as many resources as possible are available to the Docker containers.

Adding swapspace

I'd like to run a lot of Docker containers, and I expect that not many will be active at the same time. This means I'm likely going to be limited by available RAM, which at the moment of writing is up to 950 MB. If yours still shows 760 MB, then boot with the default Raspbian and run 'rpi-update'; I believe this also updates your firmware. Anyway, after rebooting my memory showed up as 950 MB. However, by default Hypriot does not configure any swap space, which in most cases is the right thing to do, as swapping to the SD card will apparently wear out the card pretty fast ("you can 'write' to flash only so many times"). In this case, however, I want to make sure I can run lots of containers, and most of them will most likely be dormant most of the time, so I really do want swap space. Some people suggest using an SSD drive, but from what I can tell SSD really means 'no moving parts' and the chips used are also flash-based, so I'm not sure whether swap on an SSD is bad too. Regardless, it seems like a good idea to keep the swap off the SD card, and on a dedicated drive that can be disposed of if needed. In short, I came up with the idea to stick a fast USB 3.0 stick in a fast (externally powered) USB 3.0 hub and to create swap files there for each of my Pis. Then I'll see what happens to it over time, and I can easily replace it with another USB-based storage device. To test this out I put an 8GB stick right in one of the USB ports, following an article from theurbanpenguin, running the commands below as root.

1. Format the Drive
fdisk -l

lists all the drives and mine is called /dev/sda:

Disk /dev/sda: 8029 MB, 8029470208 bytes

Double check you have the right drive before proceeding, as you will wipe all the data on the device.

fdisk /dev/sda

Then from the interactive options we can select

d

If you have just a single partition then that partition will be deleted. If you have more than one partition you will be prompted as to which partition should be removed. We can then enter

n

to create a new partition. Choose

p

for primary, and then

1

to indicate partition number 1 should be created. We will be partitioning the complete drive in this example, so hit enter at the starting and ending position prompts; this defaults to the complete drive. Finally we need to save this back to the metadata in the disk's master boot record. We do this by entering:

w

This will also have the effect of exiting the fdisk program. Finally, format the new partition with an ext4 filesystem:

mkfs.ext4 -L DATA /dev/sda1
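The interactive session above can also be driven non-interactively by feeding fdisk the same keystrokes on stdin. As a safe illustration, this sketch partitions a throwaway image file instead of /dev/sda (no root needed); point it at the real device only after triple-checking the name:

```shell
PATH="$PATH:/sbin:/usr/sbin"        # fdisk often lives outside a user's default PATH
truncate -s 64M disk.img            # throwaway image standing in for /dev/sda
fdisk disk.img >/dev/null <<'EOF'
n
p
1


w
EOF
fdisk -l disk.img | grep disk.img1  # the new primary partition should be listed
```

(The fresh image has nothing to delete, so the 'd' step is skipped here; with a real, already-partitioned device you would include it exactly as in the interactive walk-through. The two blank lines accept the default start and end sectors.)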

Now, if we want to mount the drive at '/data', we create the mount point with 'mkdir /data' and add the following to the /etc/fstab file:

LABEL=DATA  /data  ext4  defaults 0 2

and then issue a

mount -a

and you may have to set the permissions, use

chmod 1777 /data

Now the drive should be ready to use.
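For reference, the six whitespace-separated fields in that fstab entry are: the device (here selected by label), the mount point, the filesystem type, the mount options, the dump flag, and the fsck pass number. A tiny shell sketch splitting the line:

```shell
entry='LABEL=DATA  /data  ext4  defaults 0 2'
set -- $entry    # word-split the entry into its six positional fields
printf 'device=%s mountpoint=%s type=%s options=%s dump=%s fsck_pass=%s\n' \
       "$1" "$2" "$3" "$4" "$5" "$6"
```

A pass number of 2 means the filesystem is checked at boot after the root filesystem, which is the usual choice for a data drive.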

2. Create a swapfile

I don't want to use the entire drive for swap; instead I really only want to use a 1 GB file. So let's create a 1 GB file using 'dd':

dd if=/dev/zero of=/data/swapfile bs=1M count=1024

Now, to activate the swap, we issue

mkswap /data/swapfile
swapon /data/swapfile

Now you can check with 'top' that you have 1 GB of swap space:

KiB Mem:    947468 total,   913096 used,    34372 free,    69884 buffers
KiB Swap:  1048572 total,      484 used,  1048088 free,   667248 cached
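One refinement to the creation step that's worth adding: restrict the swapfile's permissions before running mkswap, since recent versions warn about (and swapon may refuse) a world-readable swapfile. A sketch against a small throwaway file — the real one would be /data/swapfile with count=1024:

```shell
PATH="$PATH:/sbin:/usr/sbin"                  # mkswap may live outside the user PATH
dd if=/dev/zero of=demo-swapfile bs=1M count=8 2>/dev/null
chmod 600 demo-swapfile                       # root-only access, as mkswap expects
mkswap demo-swapfile                          # writes the swap signature into the file
# as root:  swapon demo-swapfile  (then verify with 'swapon' or the totals in 'top')
```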

To make the swap permanent between reboots add

/data/swapfile none swap sw 0 0

to your /etc/fstab file. Finally, in '/etc/sysctl.conf' I've set the swappiness to 10 to make the system less 'swappy':

vm.swappiness = 10

This means it will only start using swap when RAM usage gets over 90%.
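The setting can be inspected and applied without a reboot (assuming a Linux host; the write needs root):

```shell
cat /proc/sys/vm/swappiness        # current value; 60 is the usual default
# sudo sysctl -w vm.swappiness=10  # apply immediately; the sysctl.conf line makes it stick
```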

3. Running some armv7 containers

You can search the Docker registry for armv7 and hypriot images, and they all seem to run fine. I could not find a Hypriot image for httpd, but when I did find httpd on the armv7/armhf-fedora image and tried 'apachectl start' I got 'Failed to get D-Bus connection: Unknown error -1'. I'm not quite sure if this is an issue with httpd on Fedora or with systemd on Fedora in general. The ArchLinux-based greatfox/tomcat8-armv7 image runs great.

That's it for today.

Wednesday, January 14, 2015

API Management on Fabric8


The Fabric8 project comes with an HTTP-Gateway to create a single entry point to all micro services hosted by a particular Fabric8 deployment, making it easier to route all service traffic through an external firewall. It also allows you to create URL mappings using a template. On top of the regular gateway features, the Fabric8 HTTP-Gateway offers API Management features; this article is an introduction to how to use them. Both the HTTP-Gateway and the API Management capabilities are fully asynchronous.

It is assumed that you are already familiar with fabric8 version 2. If you are not, please take a look at the fabric8 docs first.


API Management

The Fabric8 HTTP Gateway leverages the apiman project, which 'brings an open source development methodology to API Management, coupling a rich API design & configuration layer with a blazingly fast runtime'. A popular trend in enterprise software development these days is to design applications to be very decoupled and to use APIs to connect them. This approach provides an excellent way to reuse functionality across various applications and business units. Another great benefit of API usage in enterprises is the ability to create those APIs using a variety of disparate technologies. However, this approach also introduces its own pitfalls and disadvantages. Some of those disadvantages include things like:
  • Difficulty discovering or sharing existing APIs
  • Difficulty sharing common functionality across API implementations
  • Tracking of API usage/consumption
API Management is a technology that addresses these and other issues by providing an API Manager to track APIs and configure governance policies, as well as an API Gateway that sits between the API and the client. This API Gateway is responsible for applying the policies configured during management. Therefore an API management system tends to provide the following features:
  • Centralized governance policy configuration
  • Tracking of APIs and the consumers of those APIs
  • Easy sharing and discovery of APIs
  • Leveraging common policy configuration across different APIs
For more information on apiman, see also their user guide.


Common Use Cases

Some common use cases most developers encounter are:

Throttling/Quotas - Limit the number of requests consumers of your APIs can make within a given time period (per service contract or per end-user).
Centralized Security - Add authentication and IP filtering capabilities in a central location, freeing your back-end services to focus on functionality.
Billing and Metrics - Easily get metrics for all your APIs so you can see what's popular or charge your consumers for their usage.

APIMan in Fabric8

The Fabric8 HTTP-Gateway contains this runtime 'embedded API manager engine'. As can be seen in Figure 1, the HTTP-Gateway opens port 9000 on xUbe, where xUbe refers to either Kube (Kubernetes, OpenShift 3 and Docker) or Jube. By default the HTTP-Gateway has API Management turned on. This means that you will have to configure services in apiman and then 'Publish' them before they are live on the gateway. When a service is published, an external request to this service on the gateway is routed through the apiman engine, where policies are applied before the xUbe service is called. An example of such a service is the CxfCdi quickstart example, which runs as a Java Main process. xUbe relays the request to one or more pods. When running Kube, the pod will typically open the same port number on the container as the service is running under. With Jube everything runs on the same machine, which would result in a port conflict, so in that case an open port is chosen by Jube. In Figure 1 this port number is marked as 'xxxxx' since it is not predetermined.

Figure 1. Fabric8 API Management 

APIMan Console

The apiman console is shipped with fabric8 as a default application that can be run. The apiman application is deployed onto a WildFly 8.1 container. From the console a service can be published to the Fabric8 HTTP-Gateway using the REST management endpoints that run on port 8999 on the gateway. The console itself runs on http://<host>:9092/apiman-manager/. It is a self-service console where service creators and consumers can set up service contracts between them, determining the terms of usage for a certain service.


Set up a rate-limiting policy on the CxfCdi service in 5 minutes

From the Hawtio console (Kube: http://localhost:8484, Jube: http://localhost:8585), under "Runtime > Apps", click the green "Run..." button. Note that in this example I am using Jube, which means that my host is "localhost". This brings up a screen from which you should select two applications, "HTTP Gateway" and "ApiMan Console", and one Quickstart/Java, "Quickstart : CXF JAX-RS CDI". Then click the green "Run App" button. After a little while, on the "Runtime > Apps" tab you should see three green "1" icons in the 'Pods' column, one for each of these applications, as shown in Figure 2.

Figure 2. Hawtio Console, with running Gateway.

Now open the apiman console by navigating to http://<host>:9092/apiman-manager/, and log in using admin/admin123! (You should change this password in the Keycloak console.)

Figure 3. Login into the apiman console.
We will be using the admin user, but on this screen new users would register themselves. Once logged in you should see the apiman home screen as shown in Figure 4 below.

Figure 4. The apiman console home screen.
The first thing you need to do is navigate to "Manage Gateways", remove the existing "The Gateway" gateway, and create a new "Fabric8Gateway". The Fabric8Gateway should have an endpoint of "http://<host>:8999/rest/apimanager". Add any credentials; they are not (yet) used.

Figure 5. Create a new Fabric8Gateway.
Next you need to add an Organization called "Fabric8Org", and now we are ready to create the CxfCdi service under this organization. We are going to create a "Public" service so we don't need to create a contract or any additional users - let's get the simple case working first! So click on "Create a new Service" from the Home screen. On the New Service screen select organization "Fabric8Org", set the name "CxfCdi", version "1.0" and description "Cxf Cdi Quickstart Demo", then click the "Create Service" button.
Figure 6. Service Implementation.
Make sure you can reach the CxfCdi service endpoint by doing a GET on http://<host>:9002/quickstart-java-cxf-cdi/cxfcdi/customerservice/customers/123. This should return a small XML structure containing the customer id 123 and the name John. Now, as shown in Figure 6, enter http://<host>:9002/quickstart-java-cxf-cdi in the API Endpoint box and select type "REST". Click Save, and under Plans select "Make this service public". Then under Policies select "Add Policy" and choose a "Rate Limiting Policy" with 5 requests per service per minute.

Figure 7. Add Rate Limiting Policy.
You are now ready to publish the service to the Fabric8Gateway, so go to the "Overview" tab and click "Publish". Figure 8 shows that the status will go to "Published".
Figure 8. Publish Service to the Fabric8Gateway.
The service is now exposed on the gateway, and it should show up in the mapping page at http://<host>:9000, which responds with a JSON structure showing the mapping.
We should now be able to reach the service through the gateway at http://<host>:9000/quickstart-java-cxf-cdi, so go ahead and do a GET on the customers/123 URL via the gateway, which should respond with the same small XML structure. However, the 6th time within a minute it will respond with a 403 instead, stating that the rate limit was exceeded.
Figure 9. Rate policy triggered.
After a minute the service should become responsive again. You are now ready to set up more complex contracts in apiman.
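The policy's behavior can be modeled in a few lines of plain shell — a toy counter standing in for apiman's per-minute window, just to make the expected request/response pattern concrete (the real enforcement of course happens inside the gateway):

```shell
# toy model of the "5 requests per service per minute" policy
limit=5
count=0
for i in 1 2 3 4 5 6; do
  count=$((count + 1))
  if [ "$count" -le "$limit" ]; then
    echo "request $i -> 200 OK"
  else
    echo "request $i -> 403 rate limit exceeded"
  fi
done
```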



Thursday, October 16, 2014

Fabric8 V2

Fabric8 v2 integrates with OpenShift v3 and Docker.

  • Fabric8 V2 documentation:
  • When an app is started as a Java Main, Docker uses the following Dockerfile:
  • Use NSEnter to Connect to a running docker container:
  • Link to a running hawtio on v2: http://dockerhost:8484/hawtio/kubernetes/pods
  • See docker output: "docker logs <container-id>"
  • if the docker container image's entry point is java $MAIN,
    then 'docker run -Pit mynewlygeneratedimage' will run the Java main

Thursday, July 24, 2014

Hacking on Fabric8 Gateway

I'm currently adding an MBean to the Fabric8 gateway, and one of the things I learned about is SCR annotations. These annotations are developed by the Apache Felix project and they allow resource injection.

Something else I learned is that fabric8 reads its libraries straight from Maven and caches them in your local .m2 Maven repository. So in development all you need to do is enable watching your repo using

Fabric8:karaf@root> fabric:watch *
Then when you build your jar using 'mvn install', it will automatically be picked up by Karaf without needing to restart the fabric8 server.

Tuesday, July 22, 2014

Extend a logical volume on a encrypted disk

My encrypted disk shows up as a 'double' volume in the 'Disk Utility' as shown in the following figure

You can see both volumes show up as 253 GB. I think it means the encrypted volume is using the LVM2 Physical Volume. The encrypted volume contains 5 mapped volumes, which are listed under 'Peripheral Devices'. Initially they were:

  • Home: 4GB
  • NotBackedUp: 8GB
  • Root: 15 GB
  • Swap: 4 GB
  • VirtualMachines: 29GB
This totals about 60 GB, and 'vgdisplay HelpDeskRHEL6' shows:

# vgdisplay HelpDeskRHEL6
  --- Volume group ---
  VG Name               HelpDeskRHEL6
  System ID            
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  6
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                5
  Open LV               5
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               235.47 GiB
  PE Size               4.00 MiB
  Total PE              60280
  Alloc PE / Size       15346 / 59.95 GiB
  Free  PE / Size       44934 / 175.52 GiB
  VG UUID               zXN22u-w74k-PxAn-RhB0-QZib-Pg0e-apCHx0

This shows an Alloc Size of 59.95 GiB, and it also shows I still have 175.52 GiB of Free Size. I want to use part of that for my 'NotBackedUp' logical volume, so I use lvextend to grow it to 150GB:

# lvextend -L 150G /dev/HelpDeskRHEL6/NotBackedUp
  Extending logical volume NotBackedUp to 150.00 GiB
  Logical volume NotBackedUp successfully resized

Next, to actually use the extra space, we need to resize the filesystem using resize2fs:

# resize2fs /dev/HelpDeskRHEL6/NotBackedUp
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/HelpDeskRHEL6/NotBackedUp is mounted on /NotBackedUp; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 10
Performing an on-line resize of /dev/HelpDeskRHEL6/NotBackedUp to 39321600 (4k) blocks.
The filesystem on /dev/HelpDeskRHEL6/NotBackedUp is now 39321600 blocks long.
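As a sanity check, the block count resize2fs reports matches the size we asked lvextend for:

```shell
# 39321600 blocks of 4 KiB should be exactly the 150 GiB we extended to
blocks=39321600
block_size=4096
bytes=$((blocks * block_size))
gib=$((bytes / 1024 / 1024 / 1024))
echo "${bytes} bytes = ${gib} GiB"
```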

and to verify we can see the larger NotBackedUp volume I use df:

# df -h
Filesystem            Size  Used Avail Use% Mounted on
                       15G  7.5G  6.2G  55% /
tmpfs                 7.7G  600K  7.7G   1% /dev/shm
                      148G  7.5G  133G   6% /NotBackedUp
                       29G  8.2G   20G  30% /VirtualMachines
/dev/sda1             3.0G   93M  2.8G   4% /boot
                      4.0G  260M  3.5G   7% /home


Friday, June 20, 2014

Notes on adding the Fuse Cartridge using RPM.

Fuse can be used on OpenShift using the Fuse cartridge, the code for which can be found on GitHub. During the setup phase of a Fuse-based application on OpenShift, it downloads a big zipfile. I've been working on an rpm that includes this zip file to avoid the big download, to shorten the application setup, and so it doesn't depend on a network connection to the outside.

At the moment the rpm is built from the openshift-enterprise-rpm-6.1 branch, and the finished product is placed in Nexus.

If you want to build the rpm by hand, use 'rpmdev-setuptree', copy the zip to ~/rpmbuild/SOURCES (this needs to be the cartridge zip containing the, copy the spec file to ~/rpmbuild/SPECS, and then build the rpm using 'rpmbuild -ba openshift-origin-cartridge-fuse.spec'.

Next, copy the rpm to your OpenShift node and install it using 'rpm -U <rpm-file>'. Once it's done, restart the
ruby193-mcollective service on the node and import the cartridge on the broker:
 #oo-admin-ctl-cartridge -c import-node --obsolete  --activate 
You can check that the fuse cartridge is listed with 'rhc cartridge-list'. Note that I only found the oo-admin-ctl-cartridge command on OpenShift Enterprise and not on Origin.

At this point the Fuse application should also be listed on the application page.

You can create a new gear from there or you can use rhc:
rhc app create fuse fuse -g medium
It should now have created a running Fuse gear. Please note that on my little laptop it took roughly 15 minutes for it to show up, and rhc and the console were sort of in a frozen state in the meantime.

Add Medium Gear Capability on OpenShift

The OpenShift virtual machine download is pre-configured with one node, in one district, with a small gear profile. There are two ways to add another gear size:
  • add another node, in a new district with medium gears, or, easier,
  • remove the district restriction and allow both small and medium gears.
Edit the /etc/openshift/plugins.d/openshift-origin-msg-broker-mcollective.conf, and set

Next, edit the /etc/openshift/broker.conf and set

Then restart the node to let the changes take effect. If you have an existing user you may need to add the medium capability explicitly, using something like 'oo-admin-ctl-user -l demo --addgearsize medium'.
Then restart the node to let the changed take effect. If you have an existing user you may need to add the medium capability explicitly  using something like oo-admin-ctl-user -l demo --addgearsize medium.