tag:blogger.com,1999:blog-24246225160128598532024-01-19T11:54:17.066-08:00Fresh EspressoOn all things Java, SOA, and Technology in general.erthttp://www.blogger.com/profile/07418191492358888029noreply@blogger.comBlogger67125tag:blogger.com,1999:blog-2424622516012859853.post-21406255140144353522015-03-26T14:08:00.000-07:002015-03-26T14:08:00.565-07:00Pi-oneering on the Raspberry Pi 2 - part 1<h2>
Raspberry and Docker (Part 1)</h2>
<i>Summary</i> <br />
I recently got my hands on a Raspberry Pi 2, the one with 1 GB of RAM. I was excited like a little kid, and have been playing with it together with the youngest branch of the Stam family. Ideally I want to run CentOS 7, but the CentOS 7/RedSleeve project is not quite there yet for the ARMv7 architecture. So instead I have tried the recent <a href="http://www.digitaldreamtime.co.uk/images/Fidora/21/">Fedora 21 images for the Pi 2</a>. This distribution is very nice, but Docker has not yet been included as an RPM. After trying ArchLinux, I finally settled on using Hypriot. Both distros are nice, but I'm more familiar with 'apt-get' than I am with 'pacman'.<br />
<br />
<h3>
Fedora 21 on the Pi 2</h3>
In the <a href="https://lists.fedoraproject.org/pipermail/arm/2015-February/009054.html">following thread</a>, Clive Messer talks about the Pi2B Fedora Remix images he is providing. If you are running OS X like me and want to write one of these images onto your Pi SD card, you need to download one of these <a href="http://www.digitaldreamtime.co.uk/images/Fidora/21/">compressed raw images</a>, and you will need '<a href="http://unarchiver.c3.cx/">The Unarchiver</a>', which you can download from the <a href="https://itunes.apple.com/us/app/the-unarchiver/id425424353?mt=12">App Store</a>. While you are extracting the .xz file, you can insert your memory card into your Mac and follow the <a href="http://www.raspberrypi.org/documentation/installation/installing-images/mac.md">directions on the Raspberry Pi site</a> to get your card ready. I followed the 'mostly graphical' directions:<br />
<ul>
<li><span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">Connect the SD card reader with the SD card inside. Note that it must be formatted in FAT32.</span></span></li>
<li><span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">From the Apple menu, choose About This Mac, then click on More
info...; if you are using Mac OS X 10.8.x Mountain Lion or newer then
click on System Report.</span></span></li>
<li><span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">Click on USB (or Card Reader if using a built-in SD card reader)
then search for your SD card in the upper right section of the window.
Click on it, then search for the BSD name in the lower right section; it
will look something like 'diskn' where n is a number (for example,
disk4). Make sure you take a note of this number.</span></span></li>
<li><span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">Unmount the partition so that you will be allowed to overwrite the
disk; to do this, open Disk Utility and unmount it (do not eject it, or
you will have to reconnect it). Note that On Mac OS X 10.8.x Mountain
Lion, "Verify Disk" (before unmounting) will display the BSD name as
"/dev/disk1s1" or similar, allowing you to skip the previous two steps.</span></span></li>
<li><span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">
</span></span><span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">From the terminal run:</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">
</span></span><pre><span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;"><code>sudo dd bs=1m if=path_of_your_image.raw of=/dev/diskn</code></span></span></pre>
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">
</span></span><span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">Remember to replace <code>n</code> with the number that you noted before</span></span>
</li>
</ul>
Note that this is an 8 GB raw file, so your card needs to be at least 8 GB. Also note that this can take a while; for me it took about an hour. You can check the progress of 'dd' by using<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">kill -INFO &lt;pid&gt;</span><br />
<br />
The 'dd' process will briefly pause to write its progress to the console and then resume writing. Once completed, eject the card, stick the micro SD card into your Pi 2, and power it up. It is really pretty amazing to see a full Fedora 21 installation appear! I tried building Docker from source, but in the end I wasn't very happy with my results. I think I'm just going to wait till Clive creates the RPM for it.<br />
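Putting that together, here is a small sketch you can try from a second shell; the throwaway dd to /dev/null is just a stand-in for the real image write, and on Linux the signal is USR1 rather than INFO:

```shell
# Kick off a stand-in copy (1 GiB of zeros to /dev/null) in the background.
dd if=/dev/zero of=/dev/null bs=1M count=1024 2>/tmp/dd.log &
DD_PID=$!

# Ask dd to report progress: SIGINFO on OS X/BSD, SIGUSR1 on Linux.
case "$(uname)" in
  Darwin) kill -INFO "$DD_PID" ;;
  *)      kill -USR1 "$DD_PID" ;;
esac

wait "$DD_PID"
cat /tmp/dd.log   # progress line(s) plus dd's final summary
```

The same trick works on the real 'dd' writing your SD card; just substitute its pid.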
<br />
<h3>
ArchLinux on the Pi 2</h3>
To install ArchLinux on your Pi, you can follow <a href="http://archlinuxarm.org/platforms/armv6/raspberry-pi">their instructions</a>, but note that the Pi 2 has an ARMv7 processor, so make sure to take an ARMv7 distribution. These images also extract to 8 GB and, similar to the Fedora 21 remix, were slow to copy onto my SD card. The base image is very small and runs just enough services to run Docker, which is great because it leaves as many resources as possible for the containers. I ran into an issue with the systemd bus when I tried starting httpd inside a container, which ultimately led me to try Hypriot.<br />
<br />
<h3>
Hypriot on the Pi 2</h3>
<h4>
<a href="http://blog.hypriot.com/heavily-armed-after-major-upgrade-raspberry-pi-with-docker-1-dot-5-0">"Heavily ARMed after major upgrade: Raspberry Pi with Docker 1.5.0"</a></h4>
To install <a href="http://blog.hypriot.com/heavily-armed-after-major-upgrade-raspberry-pi-with-docker-1-dot-5-0">Hypriot</a>, download the <a href="http://assets.hypriot.com/hypriot-rpi-20150301-140537.img.zip">image which includes Docker</a>. Extract the zip file and follow the instructions above to copy the img to the SD card using 'dd'. The copy will be pretty fast since it's only about 1 GB; eject it and stick it into the Pi slot. At the boot prompt log in with user "<b>pi</b>" and password "<b>raspberry</b>" (or with the privileged user "<b>root</b>" and password "<b>hypriot</b>"). Note that sshd is running, so you can log in over the network as long as you know the Pi's IP address. A nice touch is that the hostname of the Pi is set to 'black-pearl'.<br />
<br />
One thing that is still worth mentioning is that you need <b>special ARM-compatible Docker images</b>.<br />Standard x86-64 Docker images from the Docker Hub won't work. That's the reason Hypriot created a number of ARM-compatible Docker images to get you started. You will find these images and more on <a href="https://registry.hub.docker.com/search?q=hypriot&searchfield=">Docker Hub</a>. After booting the image on your Pi these base images are just a "docker pull" away, for example "docker pull hypriot/rpi-node". If you are missing things like 'vi' or 'git' then run<br />
<br />
<span style="font-size: small;"><span style="font-family: "Courier New",Courier,monospace;">apt-get update</span></span><br />
<span style="font-size: small;"><span style="font-family: "Courier New",Courier,monospace;">apt-get upgrade</span></span><br />
<span style="font-size: small;"><span style="font-family: "Courier New",Courier,monospace;">apt-get install vim</span></span><br />
<span style="font-size: small;"><span style="font-family: "Courier New",Courier,monospace;">apt-get install git</span></span><br />
<br />
or whatever else you want to install. So far I've been very happy with Hypriot. Note that it does not come with a graphical environment, but in this case that's exactly what I like, so that as many resources as possible are available to the Docker containers.<br />
<br />
<h4>
Adding swapspace</h4>
I'd like to run a lot of Docker containers, and I expect that not many will be active at the same time. This means I'm most likely going to be limited by available RAM, which at the moment of writing is up to 950 MB. If yours still shows 760 MB then boot with the default Raspbian and <a href="http://www.reddit.com/r/raspberry_pi/comments/2vk2tl/my_raspberry_pi_2_is_showing_less_than_1gb_ram_is/">run 'rpi-update'</a>; I believe this also updates your firmware. Anyway, after rebooting my memory showed up as 950 MB. By default, however, Hypriot does not configure any swap space, which in most cases is the right thing to do, as swapping to the SD card will apparently wear out the card pretty fast ("you can 'write' to flash only so many times"). In this case, however, I want to make sure I can run lots of containers, most of which will likely be dormant most of the time, so I really do want swap space.<br /><br />Some people suggest using an SSD drive. From what I can tell, SSD really just means 'no moving parts' and the chips used are also flash-based, so I'm not sure whether swap on an SSD drive is bad too. Regardless, it seems like a good idea to keep the swap off the SD card and on a dedicated drive that can be disposed of if needed. In short, my idea is to stick a fast USB 3.0 stick in a fast (externally powered) USB 3.0 hub and to create swap files for each of my Pis, then see what happens to it over time; I can easily replace it with another USB-based storage device. To test this out I put an 8 GB stick right in one of the USB ports, following an article from <a href="http://theurbanpenguin.com/wp/?p=2429">theurbanpenguin</a>, as root.<br />
<br />
<br />
<strong>1. Format the Drive</strong><br />
<strong> </strong><br />
<span style="font-family: "Courier New",Courier,monospace;">fdisk -l</span><br />
<strong> </strong><br />
This lists all the drives; mine is called /dev/sda:<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">Disk /dev/sda: 8029 MB, 8029470208 bytes</span><br />
<br />
<i><b>Double check you have the right drive before proceeding, as you will wipe all the data off the device.</b></i><br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">fdisk /dev/sda</span><br />
<br />
Then from the interactive options we can select<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">d</span><br />
<br />
If you have just a single partition then that partition will be deleted. If you have more than one partition you will be prompted as to which partition should be removed. We can then enter<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">n</span><br />
<br />
to create a new partition. Choose<br />
<span style="font-family: "Courier New",Courier,monospace;">p</span><br />
<br />
for primary, and then<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">1 </span><br />
<br />
to indicate partition number 1 should be created. We will be partitioning the complete drive in this example, so we hit enter at both the starting and ending position prompts, which default to the complete drive. Finally we need to save this back to the metadata in the disk's master boot record. We do this by entering:<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">w</span><br />
<br />
This will also have the effect of exiting the fdisk program. Finally, format the new partition with an ext4 file system:<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">mkfs.ext4 -L DATA /dev/sda1</span><br />
<br />
Now, if we want to mount the drive at '/data', we need to add the following to the /etc/fstab file:<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">LABEL=DATA /data ext4 defaults 0 2</span><br />
<br />
and then issue a<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">mount -a</span><br />
<br />
and you may have to set the permissions:<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">chmod 1777 /data</span><br />
<br />
Now the drive should be ready to use.<br />
<br />
<br />
<strong>2. Create a swapfile</strong><br />
<br />
I don't want to use the entire drive for swap; I really only want to use a 1 GB file. So let's create a 1 GB file using 'dd':<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">dd if=/dev/zero of=/data/swapfile bs=1M count=1024</span><br />
<br />
Now, to activate the swap, we need to issue<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">mkswap /data/swapfile</span><br />
<span style="font-family: "Courier New",Courier,monospace;">swapon /data/swapfile</span><br />
<br />
Now you can check with 'top' that you have 1 GB of swap space:<br />
<br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">KiB Mem: 947468 total, 913096 used, 34372 free, 69884 buffers<br />KiB Swap: 1048572 total, 484 used, 1048088 free, 667248 cached</span></span><br />
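The same total can be read straight from /proc/meminfo (just a convenience sketch; 'top' as above works fine too, and the value on your system will differ):

```shell
# Print the kernel's view of total swap, in kB, from /proc/meminfo.
awk '/^SwapTotal:/ {print "swap total: " $2 " " $3}' /proc/meminfo
```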
<br />
To make the swap permanent between reboots add<br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">/data/swapfile none swap sw 0 0</span><br />
<br />
to your /etc/fstab file. Finally, in '<code>/etc/sysctl.conf</code>' I've set the swappiness to 10 to make the system not very 'swappy':<br />
<br />
<pre><code>vm.swappiness = 10</code></pre>
<br />
This means it will only start using swap when RAM usage gets above 90%.<br />
<br />
<br /><b>3. Running some armv7 containers</b><br />
<br />
You can search the Docker registry for <a href="https://registry.hub.docker.com/search?q=armv7&searchfield=">armv7</a> and <a href="https://registry.hub.docker.com/search?q=hypriot&searchfield=">hypriot</a> images, and they all seem to run fine. I could not find a package for httpd on Hypriot, but when I did find it on armv7/armhf-fedora and tried 'apachectl start' I got 'Failed to get D-Bus connection: Unknown error -1'. I'm not quite sure if this is an issue with httpd on Fedora or with systemd on Fedora in general. The ArchLinux-based greatfox/tomcat8-armv7 image runs great.<br />
<br />
That's it for today.<br />
erthttp://www.blogger.com/profile/07418191492358888029noreply@blogger.com0tag:blogger.com,1999:blog-2424622516012859853.post-79122853260368995082015-01-14T08:32:00.002-08:002015-01-14T09:46:09.475-08:00API Management on Fabric8Summary<br />
<br />
<i>The Fabric8 project comes with an HTTP-Gateway to create a single entry point to all Micro Services hosted by a particular Fabric8 deployment, making it easier to route all service traffic through an external firewall. It also allows you to create URL mappings using a template. On top of the regular gateway features, the Fabric8 HTTP-Gateway offers API Management features. This article is an introduction to how to use these. Both the HTTP-Gateway and the API Management capabilities are fully asynchronous.</i><br />
<br />
It is assumed that you are already familiar with fabric8 version 2. If you are not, please take a look at the <a href="http://fabric8.io/docs/index.html">fabric8 docs</a> first.<br />
<h3>
</h3>
<h3>
API Management</h3>
The Fabric8 HTTP Gateway leverages the <a href="http://apiman.io/">apiman.io</a> project, which 'brings an open source development methodology to API Management, coupling a rich API design & configuration layer with a blazingly fast runtime'. A popular trend in enterprise software development these days is to design applications to be very decoupled and to use APIs to connect them. This approach provides an excellent way to reuse functionality across various applications and business units. Another great benefit of API usage in enterprises is the ability to create those APIs using a variety of disparate technologies. However, this approach also introduces its own pitfalls and disadvantages, including:<br />
<div class="itemizedlist">
<ul>
<li>Difficulty discovering or sharing existing APIs</li>
<li>Difficulty sharing common functionality across API implementations</li>
<li>Tracking of API usage/consumption</li>
</ul>
</div>
API Management is a technology that addresses these and other issues by providing an API Manager to track
APIs and configure governance policies, as well as an API Gateway that sits between the API and the client.
This API Gateway is responsible for applying the policies configured during management. Therefore an API management system tends to provide the following features:<br />
<div class="itemizedlist">
<ul>
<li>Centralized governance policy configuration</li>
<li>Tracking of APIs and consumers of those APIs</li>
<li>Easy sharing and discovery of APIs</li>
<li>Leveraging common policy configuration across different APIs</li>
</ul>
</div>
<span style="font-weight: normal;">For</span> more information on apiman see also their <a href="http://www.apiman.io/latest/user-guide.html#_introduction">user guide</a>.<br />
<h3>
</h3>
<h3>
Common Use Cases</h3>
<div class="ucitem">
Some common use cases most developers encounter are:<br />
<b> </b><br />
<i>Throttling/Quotas</i> - Limit the number of requests consumers of your APIs can make within a given time period (per service contract or per end-user).
</div>
<div class="ucitem">
<i>Centralized Security</i> - Add authentication and IP filtering capabilities in
a central location, freeing your back-end services to focus on
functionality.</div>
<div class="ucitem">
<i>Billing and Metrics</i> - Easily get metrics for all your APIs so you can see what's popular or charge your consumers for their usage.<br />
<br />
<h3>
APIMan in Fabric8 </h3>
</div>
The Fabric8 HTTP-Gateway embeds this runtime as an 'embedded API manager engine'. As can be seen in Figure 1, the HTTP-Gateway opens port 9000 on xUbe, where xUbe refers to either Kube (Kubernetes, OpenShift 3 and Docker) or Jube. By default the HTTP-Gateway has API Management turned on. This means that you will have to configure services in apiman and then 'Publish' them before they are live on the gateway. When a service is published, an external request to this service on the gateway is routed through the apiman engine, where policies are applied before the xUbe service is called. An example of such a service is the CxfCdi quickstart, which runs as a Java Main process. xUbe relays the request to one or more pods. When running Kube, the pod will typically open the same port number on the container as the service is running under. With Jube everything runs on the same machine, so this would result in a port conflict; in this case an open port is chosen by Jube. In Figure 1, the port number is marked as 'xxxxx' as this port is not predetermined. <br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiCWMRaasCz3Li2KzslEv_BNGN8LcLR1WUnsbvzjqHuGkvxHnq-MJogeUmT7YU-lbAMzvXP_pEYRg3T59xdRFGUUdtmliE1utFBs0WebLopdTe6vNuXGHBLVLakEMVxBYbTwhXTN68cR1lo/s1600/Fabric8+Gateway(2).png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiCWMRaasCz3Li2KzslEv_BNGN8LcLR1WUnsbvzjqHuGkvxHnq-MJogeUmT7YU-lbAMzvXP_pEYRg3T59xdRFGUUdtmliE1utFBs0WebLopdTe6vNuXGHBLVLakEMVxBYbTwhXTN68cR1lo/s1600/Fabric8+Gateway(2).png" height="501" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 1. Fabric8 API Management </td></tr>
</tbody></table>
<h3>
APIMan Console</h3>
The apiman console is shipped with fabric8 as a default application that can be run. The apiman application is deployed onto a WildFly 8.1 container. From the console a service can be published to the Fabric8 HTTP-Gateway using REST management services that run on port 8999 on the gateway. The console itself runs on http://&lt;openshifthost&gt;:9092/apiman-manager/. It is a self-service console where service creators and consumers can set up service contracts between them, determining the terms of usage for a certain service.<br />
<h3>
</h3>
<h3>
Setup a rate limiting policy on the CxfCdi service in 5 minutes</h3>
From the Hawtio console (Kube: http://localhost:8484, Jube: http://localhost:8585), under "Runtime > Apps", click the green "Run..." button. Note that in this example I am using Jube, which means that my &lt;openshifthost&gt; is "localhost". This brings up a screen from which you should select two applications, "HTTP Gateway" and "ApiMan Console", and one Quickstart/Java app, "<span class="contained c-max"><span class="app-name ng-binding">Quickstart : CXF JAX-RS CDI</span>". Then click the green "Run App" button. After a little while, on the "Runtime > Apps" tab you should see three green "1" icons in the 'Pods' column for each of these applications, as shown in Figure 2. </span><br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhHLyZfm2vxo4CvYl5L1yrH_Tjqq0y0_5iMJuK-sLDtYDmthzuJMr6iT94PV2w58rcgOg3dkIv6BkmAJYOG1bR6z51VTyhgaiakuJDmnvve0PgmOvCpqCjjkaTjGQgkxUvUP-v96lPb2N8C/s1600/Screenshot+2015-01-14+10.42.41.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhHLyZfm2vxo4CvYl5L1yrH_Tjqq0y0_5iMJuK-sLDtYDmthzuJMr6iT94PV2w58rcgOg3dkIv6BkmAJYOG1bR6z51VTyhgaiakuJDmnvve0PgmOvCpqCjjkaTjGQgkxUvUP-v96lPb2N8C/s1600/Screenshot+2015-01-14+10.42.41.png" height="266" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 2. Hawtio Console, with running Gateway.</td></tr>
</tbody></table>
<span class="contained c-max"><br /></span>
Now open the apiman console by navigating to <a href="http://127.0.0.1:9092/apiman-manager/">http://127.0.0.1:9092/apiman-manager/</a>, and you can log in using admin/admin123! You should change the password in the Keycloak console at <a href="http://127.0.0.1:9092/auth/">http://127.0.0.1:9092/auth/</a>. <br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3OPhKP0G6VMogKPKlmIJNGloI4EDzZQAg7T2uk9u3mMVdIOroDdHu_fo1uyWebDte7Pm4L1QdDa4Bamiz3vbU4p7pW_5UMMKH2zL20_CFakZH7vNq70G5-KhfaxT9BrlYSTsYaYiwzATk/s1600/Screenshot+2015-01-14+10.51.22.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3OPhKP0G6VMogKPKlmIJNGloI4EDzZQAg7T2uk9u3mMVdIOroDdHu_fo1uyWebDte7Pm4L1QdDa4Bamiz3vbU4p7pW_5UMMKH2zL20_CFakZH7vNq70G5-KhfaxT9BrlYSTsYaYiwzATk/s1600/Screenshot+2015-01-14+10.51.22.png" height="244" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 3. Login into the apiman console.</td></tr>
</tbody></table>
We will be using the admin user, but on this screen new users would register themselves. Once logged in you should see the apiman home screen as shown in Figure 4 below.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-VTwsLqqHapvwt8wlK2aEGeDkeDp335stQTKtJ6t-SiUBn4gUNG_gkkOhtmWJlgjcKZOkGMTKtUZ5x8rW4p8GfmKzbBNz4jnHqvm088OCunGi9UCI1bAD32cnUzCEoqGBy4aJvXidGaso/s1600/Screenshot+2015-01-14+10.53.57.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-VTwsLqqHapvwt8wlK2aEGeDkeDp335stQTKtJ6t-SiUBn4gUNG_gkkOhtmWJlgjcKZOkGMTKtUZ5x8rW4p8GfmKzbBNz4jnHqvm088OCunGi9UCI1bAD32cnUzCEoqGBy4aJvXidGaso/s1600/Screenshot+2015-01-14+10.53.57.png" height="544" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 4. The apiman console home screen.</td></tr>
</tbody></table>
The first thing you need to do is navigate to "Manage Gateways", remove the existing "The Gateway" gateway, and create a new "Fabric8Gateway". The Fabric8Gateway should have an endpoint of "http://&lt;openshifthost&gt;:8999/rest/apimanager". Add any credentials, as they are not (yet) used.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEirV2pRCWJqdzFte4rymuivVy_9zMtfXN0lkBK8Twv8O6Xy4NBRmrJdwtDx23kP1GKQZDmfjbqR0etH3v-HBYpACJyOYG5DBbD03sj6b_P_XNL2MBKKP6pLwYosReMulXH8EVFEx74-BrqI/s1600/Screenshot+2015-01-14+10.57.17.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEirV2pRCWJqdzFte4rymuivVy_9zMtfXN0lkBK8Twv8O6Xy4NBRmrJdwtDx23kP1GKQZDmfjbqR0etH3v-HBYpACJyOYG5DBbD03sj6b_P_XNL2MBKKP6pLwYosReMulXH8EVFEx74-BrqI/s1600/Screenshot+2015-01-14+10.57.17.png" height="428" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 5. Create a new Fabric8Gateway.</td></tr>
</tbody></table>
Next you need to add an Organization called "Fabric8Org", and then we are ready to create the CxfCdi service under this organization. We are going to create a "Public" service, so we don't need to create a contract or any additional users - let's get the simple case working first! Click "Create a new Service" from the Home screen. On the New Service screen select organization "Fabric8Org", set the name to "CxfCdi", version to "1.0" and description to "Cxf Cdi Quickstart Demo", then click the "Create Service" button. <br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj9HFhSYqDL88ICrbOauOIL_rvCzzSlT7NUVAb000YiCgW00y5fKsdyn0wM0_atVG6RTCNG-l5d-VkCNpx0jSujtyYJjzwnRmrKOMHKJ5OgDEkLGTcFZRqucAD0xqVzsFHOJNAc8itWaxOG/s1600/Screenshot+2015-01-14+11.09.23.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj9HFhSYqDL88ICrbOauOIL_rvCzzSlT7NUVAb000YiCgW00y5fKsdyn0wM0_atVG6RTCNG-l5d-VkCNpx0jSujtyYJjzwnRmrKOMHKJ5OgDEkLGTcFZRqucAD0xqVzsFHOJNAc8itWaxOG/s1600/Screenshot+2015-01-14+11.09.23.png" height="336" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 6. Service Implementation.</td><td class="tr-caption" style="text-align: center;"><br /></td></tr>
</tbody></table>
Make sure you can reach the CxfCdi service endpoint by doing a GET on <a href="http://127.0.0.1:9002/quickstart-java-cxf-cdi/cxfcdi/customerservice/customers/123">http://&lt;openshifthost&gt;:9002/quickstart-java-cxf-cdi/cxfcdi/customerservice/customers/123</a>. This should return a small XML structure: &lt;customer&gt;&lt;id&gt;123&lt;/id&gt;&lt;name&gt;John&lt;/name&gt;&lt;/customer&gt;. Now, as shown in Figure 6, add http://&lt;openshifthost&gt;:9002/quickstart-java-cxf-cdi in the API Endpoint box and select type "REST". Click Save, and under Plans select "Make this service public". Then under Policies select "Add Policy" and add a "Rate Limiting Policy" with <b>5</b> requests per <b>service</b> per <b>minute</b>.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhHaCEoURJtllJvysu70yFrCuFxhOsbRXIBHVOWeFlh0-FgTeC4VFXFUlnPlgmIKDFLfJC0VNMvp6rmjdS7OeWbnO12OunZCbUiCjaqdikhk811o6_qMNHyLcRhUhPfcYNHlBg6z2nL8oN6/s1600/Screenshot+2015-01-14+11.18.25.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhHaCEoURJtllJvysu70yFrCuFxhOsbRXIBHVOWeFlh0-FgTeC4VFXFUlnPlgmIKDFLfJC0VNMvp6rmjdS7OeWbnO12OunZCbUiCjaqdikhk811o6_qMNHyLcRhUhPfcYNHlBg6z2nL8oN6/s1600/Screenshot+2015-01-14+11.18.25.png" height="296" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 7. Add Rate Limiting Policy.</td></tr>
</tbody></table>
You are now ready to publish the service to the Fabric8Gateway, so go to the "Overview" tab and click "Publish". Figure 8 shows that the status will go to "Published".<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgwkymId6_KLiV5N9rzbPiYZRLkYJS7C56m2LS8kEGflB0t6gYd9zIEjjjT5ZsYbAILfx8YluvYLSlKOsCCmQ8VdSwggAfwuBYNxXD3p_0bDJXB8cQL2CDOrp6uFFmuB-Wxq3UXVPBhf-3C/s1600/Screenshot+2015-01-14+11.22.03.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgwkymId6_KLiV5N9rzbPiYZRLkYJS7C56m2LS8kEGflB0t6gYd9zIEjjjT5ZsYbAILfx8YluvYLSlKOsCCmQ8VdSwggAfwuBYNxXD3p_0bDJXB8cQL2CDOrp6uFFmuB-Wxq3UXVPBhf-3C/s1600/Screenshot+2015-01-14+11.22.03.png" height="320" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 8. Publish Service to the Fabric8Gateway.</td></tr>
</tbody></table>
The service is now exposed on the gateway, and it should show up in the mapping page at http://&lt;openshifthost&gt;:9000, which should respond with a JSON structure showing the mapping<br />
<pre>{"/quickstart-java-cxf-cdi":["http://localhost:9002/quickstart-java-cxf-cdi"]}</pre>
<div class="separator" style="clear: both; text-align: center;">
</div>
and now we should be able to reach the service through the gateway at "http://&lt;openshifthost&gt;:9000/quickstart-java-cxf-cdi". So go ahead and do a GET on "http://127.0.0.1:9000/quickstart-java-cxf-cdi/cxfcdi/customerservice/customers/123", which should respond with the same small XML structure &lt;customer&gt;&lt;id&gt;123&lt;/id&gt;&lt;name&gt;John&lt;/name&gt;&lt;/customer&gt;. However, the 6th time within a minute it will instead respond with a 403, stating that the rate limit was exceeded.<br />
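To watch the policy trip, you can fire a quick burst of requests from the shell and print just the status codes (same URL as above; against the published service the first five should come back 200 and the sixth 403, while 000 means curl could not connect at all):

```shell
# Hit the gateway six times in a row and show each HTTP status code.
URL="http://127.0.0.1:9000/quickstart-java-cxf-cdi/cxfcdi/customerservice/customers/123"
for i in 1 2 3 4 5 6; do
  # --write-out prints the status code even when the request fails
  curl -s -o /dev/null -w "request $i -> HTTP %{http_code}\n" "$URL" || true
done
```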
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhp7PGU22B0qb4KcTwnMDiPillST07ArAfTPn0Si4TFlkFOYVgQr2jPtnG4DRiw4mG6OBtNI8D7jzvmS2ALjnY5X55DiB-knYQsn_yKVVZ-MhysG0uJbHPLT37X9PZTh2q0CGN39vlHRhhV/s1600/Screenshot+2015-01-14+11.29.43.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhp7PGU22B0qb4KcTwnMDiPillST07ArAfTPn0Si4TFlkFOYVgQr2jPtnG4DRiw4mG6OBtNI8D7jzvmS2ALjnY5X55DiB-knYQsn_yKVVZ-MhysG0uJbHPLT37X9PZTh2q0CGN39vlHRhhV/s1600/Screenshot+2015-01-14+11.29.43.png" height="362" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 9. Rate policy triggered.</td></tr>
</tbody></table>
After a minute the service should become responsive again. You are now ready to set up more complex contracts in apiman. <br />
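What the gateway is doing here is a classic fixed-window rate limit. A minimal shell sketch of the idea (the 5-requests-per-minute policy is inferred from the 403 on the sixth request; the script only simulates the counter, it does not call the gateway):<br />
<br />
```shell
# Simulate a fixed window of 5 requests per minute: requests 1-5 pass,
# request 6 in the same window is rejected with a 403.
limit=5
count=0
for i in 1 2 3 4 5 6; do
  count=$((count + 1))
  if [ "$count" -le "$limit" ]; then status=200; else status=403; fi
  echo "request $i -> HTTP $status"
done
# request 6 -> HTTP 403
# When the window rolls over (a minute later), the count resets to 0.
```

Against the real gateway you would see the same pattern by issuing six quick GETs on the customer URL above.<br />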
<br />
Cheers!<br />
<br />
--Kurterthttp://www.blogger.com/profile/07418191492358888029noreply@blogger.com0tag:blogger.com,1999:blog-2424622516012859853.post-68484454791796811832014-10-16T16:35:00.000-07:002014-10-16T16:35:22.483-07:00Fabric8 V2<a href="https://github.com/fabric8io/fabric8/tree/2.0">Fabric8 v2</a> integrates with OpenShift v3 and Docker.<br />
<br />
<ul>
<li>Fabric8 V2 documentation: http://fabric8.io/v2/mavenPlugin.html#example </li>
<li>When an app is started as a Java main, Docker uses the following Dockerfile: https://github.com/fabric8io/java-docker/blob/master/Dockerfile</li>
<li>Use <a href="https://github.com/jpetazzo/nsenter">nsenter</a> to connect to a running docker container: http://ro14nd.de/NSEnter-with-Boot2Docker/</li>
<li>Link to a running hawtio on v2: http://dockerhost:8484/hawtio/kubernetes/pods</li>
<li>See docker output: "docker logs &lt;containerid&gt;"</li>
<li>If the Docker image's entry point is "java $MAIN", then "docker run -Pit mynewlygeneratedimage" will run the Java main.</li>
</ul>
<br />erthttp://www.blogger.com/profile/07418191492358888029noreply@blogger.com0tag:blogger.com,1999:blog-2424622516012859853.post-64329100353805042812014-07-24T12:08:00.001-07:002014-07-24T12:11:19.943-07:00Hacking on Fabric8 GatewayI'm currently adding an MBean to the Fabric8 gateway, and one of the things I learned about is <a href="http://felix.apache.org/documentation/subprojects/apache-felix-maven-scr-plugin/scr-annotations.html">SCR annotations</a>. These annotations are developed by the Apache Felix project, and they allow declarative injection of service references and resources.<br />
<br />
Something else I learned is that <a href="http://fabric8.io/gitbook/developer.html#rad-workflow">fabric8 reads its libraries straight from maven</a>, caching them in your local .m2 maven repository. So in development all you need to do is enable watching your repo using<br />
<br />
<pre><code>Fabric8:karaf@root> fabric:watch *</code></pre>
Then, when you build your jar with 'mvn install', it is automatically picked up by Karaf without needing to restart the fabric8 server.
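For reference, the local repository path that gets watched follows Maven's standard layout; here is a small helper to compute it (the coordinates below are just an example, not a real fabric8 artifact):<br />
<br />
```shell
# Map Maven coordinates (groupId artifactId version) to the artifact's
# path under ~/.m2/repository: dots in the groupId become directories.
m2_path() {
  group_dir=$(printf '%s' "$1" | tr '.' '/')
  printf '%s/%s/%s/%s-%s.jar\n' "$group_dir" "$2" "$3" "$2" "$3"
}

m2_path io.fabric8 gateway-core 1.0.0
# io/fabric8/gateway-core/1.0.0/gateway-core-1.0.0.jar
```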
<br />
<br />
<br />erthttp://www.blogger.com/profile/07418191492358888029noreply@blogger.com0tag:blogger.com,1999:blog-2424622516012859853.post-19438021827813940852014-07-22T07:10:00.000-07:002014-07-22T07:11:29.468-07:00Extend a logical volume on an encrypted diskMy encrypted disk shows up as a 'double' volume in 'Disk Utility', as shown in the following figure:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjFE-Rw7k_Ms4ntcTPrKRAJ8_Tns4ZUwJ7H-_RbjrdSnX6vA71y6A_LXRZAvK7R27zjbKmKjFwUX1J7GJ0-BdyFwIIaOyGqLZiW-qjBOKMwiuz7aJBTScYvWt4iwaRnNq9MpwBksaX37z2c/s1600/Screenshot.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjFE-Rw7k_Ms4ntcTPrKRAJ8_Tns4ZUwJ7H-_RbjrdSnX6vA71y6A_LXRZAvK7R27zjbKmKjFwUX1J7GJ0-BdyFwIIaOyGqLZiW-qjBOKMwiuz7aJBTScYvWt4iwaRnNq9MpwBksaX37z2c/s1600/Screenshot.png" height="213" width="320" /></a></div>
<br />
You can see both volumes show up as 253 GB; I think this means the encrypted volume is using the LVM2 physical volume. The encrypted volume then contains 5 mapped volumes, which are listed under 'Peripheral Devices'. Initially they were:<br />
<br />
<ul>
<li>Home: 4GB</li>
<li>NotBackedUp: 8GB</li>
<li>Root: 15 GB</li>
<li>Swap: 4 GB</li>
<li>VirtualMachines: 29GB</li>
</ul>
This totals about 60 GB, and 'vgdisplay HelpDeskRHEL6' shows:<br />
<br />
<pre># vgdisplay HelpDeskRHEL6
  --- Volume group ---
  VG Name               HelpDeskRHEL6
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  6
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                5
  Open LV               5
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               235.47 GiB
  PE Size               4.00 MiB
  Total PE              60280
  Alloc PE / Size       15346 / 59.95 GiB
  Free  PE / Size       44934 / 175.52 GiB
  VG UUID               zXN22u-w74k-PxAn-RhB0-QZib-Pg0e-apCHx0</pre>
<br />
This shows an Alloc Size of 59.95 GiB, and a Free Size of 175.52 GiB. I want to use part of that free space for my 'NotBackedUp' logical volume, so I use lvextend to grow it to 150 GiB:<br />
<br />
<pre># lvextend -L 150G /dev/HelpDeskRHEL6/NotBackedUp
  Extending logical volume NotBackedUp to 150.00 GiB
  Logical volume NotBackedUp successfully resized</pre>
<br />
Next, to actually use the extra space, we need to grow the filesystem using resize2fs:<br />
<br />
<pre># resize2fs /dev/HelpDeskRHEL6/NotBackedUp
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/HelpDeskRHEL6/NotBackedUp is mounted on /NotBackedUp; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 10
Performing an on-line resize of /dev/HelpDeskRHEL6/NotBackedUp to 39321600 (4k) blocks.
The filesystem on /dev/HelpDeskRHEL6/NotBackedUp is now 39321600 blocks long.</pre>
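The figures are easy to sanity-check; here is a quick shell calculation using the numbers from the vgdisplay and resize2fs output (4 MiB physical extents, 4 KiB filesystem blocks):<br />
<br />
```shell
# 4 MiB extents: 60280 total = the 235.47 GiB VG, 15346 allocated = 59.95 GiB
vg_gib=$(awk 'BEGIN { printf "%.2f", 60280 * 4 / 1024 }')
alloc_gib=$(awk 'BEGIN { printf "%.2f", 15346 * 4 / 1024 }')
# resize2fs grew the filesystem to 39321600 4 KiB blocks = exactly 150 GiB
fs_gib=$(( 39321600 * 4096 / 1024 / 1024 / 1024 ))
echo "VG ${vg_gib} GiB, allocated ${alloc_gib} GiB, filesystem ${fs_gib} GiB"
# VG 235.47 GiB, allocated 59.95 GiB, filesystem 150 GiB
```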
To verify that we can see the enlarged NotBackedUp volume, I use df:<br />
<br />
<pre># df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/HelpDeskRHEL6-Root
                       15G  7.5G  6.2G  55% /
tmpfs                 7.7G  600K  7.7G   1% /dev/shm
/dev/mapper/HelpDeskRHEL6-NotBackedUp
                      148G  7.5G  133G   6% /NotBackedUp
/dev/mapper/HelpDeskRHEL6-VirtualMachines
                       29G  8.2G   20G  30% /VirtualMachines
/dev/sda1             3.0G   93M  2.8G   4% /boot
/dev/mapper/HelpDeskRHEL6-Home
                      4.0G  260M  3.5G   7% /home</pre>
erthttp://www.blogger.com/profile/07418191492358888029noreply@blogger.com0tag:blogger.com,1999:blog-2424622516012859853.post-26401994174243904462014-06-20T03:45:00.000-07:002014-07-23T08:26:04.211-07:00Notes on adding the Fuse Cartridge using RPM.Fuse can be used on OpenShift using the Fuse Cartridge, the code for which can be found on github: <a href="https://github.com/jboss-fuse/fuse-openshift-cartridge">https://github.com/jboss-fuse/fuse-openshift-cartridge</a>. During the setup phase of a Fuse-based application, OpenShift downloads a big zip file. I've been working on an rpm that includes the zip file to avoid this big download, to shorten the application setup, and so that it doesn't depend on an outside network connection.<br />
<br />
At the moment the rpm is built from the openshift-enterprise-rpm-6.1 branch <a href="https://github.com/jboss-fuse/fuse-openshift-cartridge/tree/openshift-enterprise-rpm-6.1">https://github.com/jboss-fuse/fuse-openshift-cartridge/tree/openshift-enterprise-rpm-6.1</a>, and the finished product is placed in <a href="http://repository.jboss.org/nexus/content/groups/ea/org/jboss/fuse/fuse-openshift-cartridge-openshift-enterprise-rpm/">nexus</a>. <br />
<br />
If you want to build the rpm by hand, run 'rpmdev-setuptree', copy the zip to ~/rpmbuild/SOURCES (this needs to be the cartridge zip containing the fuse.zip), copy the spec file to ~/rpmbuild/SPECS, and then build the rpm using 'rpmbuild -ba openshift-origin-cartridge-fuse.spec'.<br />
<br />
Next, copy the rpm to your OpenShift node and install it using 'rpm -U &lt;rpmname&gt;'. Once that's done, restart the<br />
<div wrap="">
<span style="font-size: small;"><span style="font-family: Times,"Times New Roman",serif;">ruby193-mcollective service on the node and import the cartridge on the broker:</span></span> </div>
<pre wrap=""> #oo-admin-ctl-cartridge -c import-node --obsolete --activate </pre>
<div wrap="">
<span style="font-size: small;"><span style="font-family: Times,"Times New Roman",serif;">You can check that the fuse cartridge is listed with 'rhc cartridge-list'. Note that I only found the oo-admin-ctl-cartridge command on OpenShift Enterprise and not on Origin.</span></span></div>
<div wrap="">
<br /></div>
<div wrap="">
<span style="font-size: small;"><span style="font-family: Times,"Times New Roman",serif;">At this point the Fuse application should also be listed on the application page.</span></span><br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<span style="font-size: small;"><span style="font-family: Times,"Times New Roman",serif;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg8MxWe9tCtC_uDxrS5edxtCythaACv1m2Fm0bl5zUOLjwdDC-9lfDbYDXhrO14pnkjA5_UF4mMyizb56T3NV0VorFBWBnHq5BbCSnQ65JEZWalWydP-Y3KdL8kkyXxfrGvafqt62cW7sX8/s1600/Screen+Shot+2014-06-20+at+12.51.46+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg8MxWe9tCtC_uDxrS5edxtCythaACv1m2Fm0bl5zUOLjwdDC-9lfDbYDXhrO14pnkjA5_UF4mMyizb56T3NV0VorFBWBnHq5BbCSnQ65JEZWalWydP-Y3KdL8kkyXxfrGvafqt62cW7sX8/s1600/Screen+Shot+2014-06-20+at+12.51.46+PM.png" height="232" width="320" /></a></span></span></div>
<br />
<span style="font-size: small;"><span style="font-family: Times,"Times New Roman",serif;">You can create a new gear from there or you can use rhc:</span></span></div>
<pre class="bz_comment_text
bz_wrap_comment_text" id="comment_text_0">rhc app create fuse fuse -g medium</pre>
<div class="bz_comment_text
bz_wrap_comment_text" id="comment_text_0">
It should now have created a running Fuse gear. Please note that on my little laptop it took roughly 15 minutes to show up, during which rhc and the console were more or less frozen.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEipf2ZqIeoxiLvQyt8NsEXoiHIGYmNNSAUuc_j9NSEFwU5S1NXfw32uqmvGhSJUndsloKbSNinhkuAmgcrs2S_c0lTDZDVmqU3kGwF2mqrEF9vs3g6b5hgnZjHPUJGWpa9iRSOYoQPB6O2l/s1600/Screenshot+2014-06-20+12.52.33.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEipf2ZqIeoxiLvQyt8NsEXoiHIGYmNNSAUuc_j9NSEFwU5S1NXfw32uqmvGhSJUndsloKbSNinhkuAmgcrs2S_c0lTDZDVmqU3kGwF2mqrEF9vs3g6b5hgnZjHPUJGWpa9iRSOYoQPB6O2l/s1600/Screenshot+2014-06-20+12.52.33.png" height="202" width="320" /></a></div>
</div>
<div wrap="">
<br /></div>
<br />
<br />erthttp://www.blogger.com/profile/07418191492358888029noreply@blogger.com0tag:blogger.com,1999:blog-2424622516012859853.post-24167128907229227692014-06-20T03:06:00.002-07:002014-06-20T03:46:40.291-07:00Add Medium Gear Capablity on OpenShiftThe OpenShift Virtual Machine download is pre-configured with one node, in one district with a small gear profile. There are two ways to add another gear size<br />
<ul>
<li>add another node in a new district with a medium gear profile, or, easier:</li>
<li>remove the district restriction and allow both small and medium gears.</li>
</ul>
Edit /etc/openshift/plugins.d/openshift-origin-msg-broker-mcollective.conf and set:<br />
<pre>DISTRICTS_ENABLED=false
DISTRICTS_REQUIRE_FOR_APP_CREATE=false
NODE_PROFILE_ENABLED=false
ZONES_REQUIRE_FOR_APP_CREATE=false</pre>
<br />
Next, edit /etc/openshift/broker.conf and set:<br />
<pre>VALID_GEAR_SIZES="small,medium"
DEFAULT_GEAR_CAPABILITIES="small,medium"</pre>
<br />
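Since these are all plain KEY=VALUE lines, the edits are easy to script; here is a sketch that demonstrates the idea on a scratch file rather than the live configs (the helper itself is illustrative and assumes GNU sed):<br />
<br />
```shell
# Rewrite a KEY=VALUE line in a config file in place (GNU sed).
set_key() { sed -i "s|^$2=.*|$2=$3|" "$1"; }

conf=$(mktemp)                     # scratch copy, not the live broker config
printf 'DISTRICTS_ENABLED=true\nVALID_GEAR_SIZES="small"\n' > "$conf"
set_key "$conf" DISTRICTS_ENABLED false
set_key "$conf" VALID_GEAR_SIZES '"small,medium"'
out=$(cat "$conf") && rm -f "$conf"
echo "$out"
# DISTRICTS_ENABLED=false
# VALID_GEAR_SIZES="small,medium"
```

Point the same helper at the two files above (after backing them up) to apply the settings.<br />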
Then restart the node to let the changes take effect. If you have an existing user, you may need to add the medium capability explicitly using something like <span class="st">oo-admin-ctl-user -l demo --addgearsize medium.</span>erthttp://www.blogger.com/profile/07418191492358888029noreply@blogger.com1tag:blogger.com,1999:blog-2424622516012859853.post-49256103422402734092014-06-20T00:10:00.000-07:002014-06-20T03:48:05.597-07:00Notes on Getting started with OpenShiftThere are two versions of the RedHat cloud: OpenShift Origin (the community project) and OpenShift Enterprise. I needed the completely stable version, so I downloaded OpenShift Enterprise from <a href="https://access.redhat.com/home">https://access.redhat.com/home</a>. Navigate to the <a href="https://rhn.redhat.com/rhn/software/channel/downloads/Download.do?cid=21355">OpenShift Enterprise</a> page, as shown in Figure 1. <br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgNQA5fyVimunDkopNlHbP2YRJn1_-3lINyFRbSXLW1KMEWoHvtYciAvmhFECKUM5Iqh46Ys5w1XvARTfzACW8EVQY0ZtgjoXJxFLHK3bAhfKoUVup0LYaA3UzTfHaaU12DdZkxwjrDxazm/s1600/Screen+Shot+2014-06-19+at+10.30.54+PM.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgNQA5fyVimunDkopNlHbP2YRJn1_-3lINyFRbSXLW1KMEWoHvtYciAvmhFECKUM5Iqh46Ys5w1XvARTfzACW8EVQY0ZtgjoXJxFLHK3bAhfKoUVup0LYaA3UzTfHaaU12DdZkxwjrDxazm/s1600/Screen+Shot+2014-06-19+at+10.30.54+PM.png" height="242" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Figure 1. Download OSEoD Virtual Machine</td><td class="tr-caption" style="text-align: center;"><br /></td><td class="tr-caption" style="text-align: center;"><br /></td></tr>
</tbody></table>
From here I downloaded the OSEoD Virtual Machine, which is a (large) vmdk file. This VM has a 10 GB hard drive with only a few hundred MB of free space on it. On OSX I'm using VirtualBox, and to grow this drive you will need to convert it to the vdi format first. There is a <a href="http://www.ifusio.com/blog/resize-your-sda1-disk-of-your-vagrant-virtualbox-vm">great detailed writeup by Thomas Vial</a> on this.<br />
<br />
1. Convert to vdi: <br />
<pre>VBoxManage clonehd box-disk1.vmdk box-disk1.vdi --format vdi</pre>
<br />
2. Extend to 30 Gb: <br />
<pre>VBoxManage modifyhd box-disk1.vdi --resize 30720 </pre>
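Note that the --resize argument is in megabytes, which is where the 30720 comes from (using the 1 GB = 1024 MB convention):<br />
<br />
```shell
# 30 GB expressed in MB for 'VBoxManage modifyhd --resize'
new_mb=$((30 * 1024))
echo "VBoxManage modifyhd box-disk1.vdi --resize $new_mb"
# VBoxManage modifyhd box-disk1.vdi --resize 30720
```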
<span style="font-size: small;"><span style="font-family: Times,"Times New Roman",serif;">3. Now create a new virtual machine and hook it up to this new, larger disk file. Then, under the settings for the CD/DVD drive, point it to the</span></span> <a href="http://gparted.sourceforge.net/download.php">GParted ISO</a> on your disk. When you now start the OpenShift VM, GParted is started and you can extend the sda2 partition to use the newly added 20 GB. Apply the changes to the file system, then disconnect the GParted ISO and restart.<br />
<br />
4. When the OpenShift VM comes up, sudo to root; the new space now needs to be added to the LVM. Check with 'df' which LVM partition needs to be extended. Use 'lvm vgdisplay' to get the Free PE, and 'lvm lvdisplay &lt;lvmpath&gt;' to find the Current LE; Current LE + Free PE gives the new volume size. Use<br />
<pre>lvm lvresize -l &lt;new volume size&gt; &lt;lvmpath&gt;</pre>
to resize. Finally, use 'resize2fs &lt;lvmpath&gt;' to make the filesystem use the new size. Verify with 'df' that the LVM is now larger. For more details see the <a href="http://wiki.centos.org/TipsAndTricks/ExpandLV">centos docs</a>.<br />
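The arithmetic in step 4 is just extent addition; a sketch with made-up extent counts (the real Current LE and Free PE come from your own lvdisplay/vgdisplay output, and the volume path is only an example):<br />
<br />
```shell
current_le=3840   # hypothetical: 'Current LE' from 'lvm lvdisplay <lvmpath>'
free_pe=5120      # hypothetical: 'Free PE' from 'lvm vgdisplay'
new_size=$((current_le + free_pe))
echo "lvm lvresize -l $new_size /dev/VolGroup/lv_root"   # path is an example
# lvm lvresize -l 8960 /dev/VolGroup/lv_root
```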
<br />
<span style="font-size: small;"><span style="font-family: Times,"Times New Roman",serif;">5. Finally, there are two small issues to work around. First, /etc/resolv.conf is recreated on every reboot of the server, so you need to manually add 'nameserver 127.0.0.1' back in. Second, the hard drive is set to dynamically allocate space; this slowed down application creation for me to the point that OpenShift began to lock up. Copying a huge file into the filesystem and then deleting it, to force the allocation up front, fixed that for me.</span></span><br />
<br />
<br />
<span style="font-size: small;"><span style="font-family: Times,"Times New Roman",serif;"><br /></span></span>erthttp://www.blogger.com/profile/07418191492358888029noreply@blogger.com0tag:blogger.com,1999:blog-2424622516012859853.post-74219222718569097282011-03-18T10:07:00.000-07:002011-03-18T12:17:27.715-07:00As if Time isn't difficult enough! I say this is a huge bug in java.sql.TimestampBACKGROUND<br />I found a bug in java.sql.Timestamp that was making my unit tests fail once in a while, when the planets aligned just the right way. I was storing java.util.Date values to a DB using Hibernate, and when they come back out they are represented as java.sql.Timestamp, which is a small wrapper around java.util.Date.<br /><br />ISSUE<br />In Timestamp, the time is stored in whole seconds, with the remaining nanos kept separately. Timestamp.compareTo calls the super class first (passing the whole-seconds part only), so methods like before() and after() give wrong answers when milliseconds count. 
Check out the following code:<br /><br /><pre class="java" name="code" ><br />/*<br /> * Copyright 2001-2009 The Apache Software Foundation.<br /> * <br /> * Licensed under the Apache License, Version 2.0 (the "License");<br /> * you may not use this file except in compliance with the License.<br /> * You may obtain a copy of the License at<br /> * http://www.apache.org/licenses/LICENSE-2.0<br /> * <br /> * Unless required by applicable law or agreed to in writing, software<br /> * distributed under the License is distributed on an "AS IS" BASIS,<br /> * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.<br /> * See the License for the specific language governing permissions and<br /> * limitations under the License.<br /> */<br />package org.apache.juddi;<br /><br />import java.sql.Timestamp;<br />import java.util.Date;<br /><br />import junit.framework.Assert;<br /><br />import org.junit.Test;<br /><br /><br />public class TimeStampDateTest {<br /> <br /> @Test<br /> public void testDates() {<br /> <br /> java.util.Date startDate = new Date(1300286601055l);<br /> java.util.Date modifiedDate = new Date(1300286601334l);<br /> Timestamp timeStamp = new Timestamp(modifiedDate.getTime());<br /> <br /> System.out.println(startDate.getTime() + " startDate " + startDate);<br /> System.out.println(modifiedDate.getTime() + " modifiedDate " + modifiedDate);<br /> System.out.println(timeStamp.getTime() + " modifiedDtDB " + timeStamp);<br /> <br /> System.out.print("DT:" + startDate.getTime() + " is before " + modifiedDate.getTime() + ": "); <br /> if (startDate.before(modifiedDate)) {<br /> System.out.println("before");<br /> } else {<br /> System.out.println("after ******* WRONG!!!!");<br /> }<br /> Assert.assertTrue(startDate.before(modifiedDate));<br /> <br /> System.out.print("DB:" + startDate.getTime() + " is before " + timeStamp.getTime() + ": ");<br /> if (startDate.before(timeStamp)) {<br /> System.out.println("before");<br /> } else {<br /> 
System.out.println("after ******* WRONG!!!!");<br /> System.out.println("The reason is a bug in Timestamp: it stores the time (whole seconds) and the nanos separately, " +<br /> "and when running a compareTo it only uses the nanos when the time part is inconclusive; however, " +<br /> "it is not inconclusive, because the compareTo from the super class (java.util.Date) DOES " +<br /> "already take the millis into account!");<br /> }<br /> Assert.assertTrue(startDate.before(timeStamp));<br /> }<br /> <br />}<br /><br /></pre><br /><br />The second time, the date comparison will fail and it will print out 'after ******* WRONG!!!!'. <br /><br />Yes, really!<br /><br />Then I was told to read the <a href="http://download.oracle.com/javase/6/docs/api/java/sql/Timestamp.html">javadoc</a>:<br /><br />"<span style="font-style:italic;">Note: This type is a composite of a java.util.Date and a separate nanoseconds value. Only integral seconds are stored in the java.util.Date component. The fractional seconds - the nanos - are separate. The Timestamp.equals(Object) method never returns true when passed an object that isn't an instance of java.sql.Timestamp, because the nanos component of a date is unknown. As a result, the Timestamp.equals(Object) method is not symmetric with respect to the java.util.Date.equals(Object) method. Also, the hashcode method uses the underlying java.util.Date implementation and therefore does not include nanos in its computation.<br /><br />Due to the differences between the Timestamp class and the java.util.Date class mentioned above, it is recommended that code not view Timestamp values generically as an instance of java.util.Date. The inheritance relationship between Timestamp and java.util.Date really denotes implementation inheritance, and not type inheritance.</span>"<br /><br />Ahh, that makes it all better! No no no, this is just <span style="font-weight:bold;">crap</span> and so easy to fix. 
Time is screwed up enough already; please don't do this, Snoracle!<br /><br />There go a few hours of my life that I will never get back. The issue is that my unit test failed when the build machine was fast enough to make milliseconds count when comparing dates!<br /><br />GRRRR.. <br /><br />--Kurterthttp://www.blogger.com/profile/07418191492358888029noreply@blogger.com3tag:blogger.com,1999:blog-2424622516012859853.post-61347430553595341462011-02-14T18:14:00.001-08:002011-02-14T18:17:14.145-08:00IPhone 3Gs Battery Drain - part 2OK, so after removing all my apps it was still draining the battery. Turns out the VPN I had installed was causing it. Not sure why it suddenly went sideways; it had been working just fine for about a year. <br /><br />Now I'm busy putting all my apps back. It's kind of refreshing though to start with a clean phone :).<br /><br />--Kurterthttp://www.blogger.com/profile/07418191492358888029noreply@blogger.com0tag:blogger.com,1999:blog-2424622516012859853.post-66443700951700977082011-02-13T11:12:00.000-08:002011-02-14T06:53:16.800-08:00IPhone 3Gs Battery Drain - part 1I've had my iPhone for a little over a year, and since last Friday it drains the battery in about 5 hours, whereas the day before I could go 48 hours between charges. I also noticed it being warmish. I figured an app was stuck, however<br /><br />- a hard reset does not alleviate the issue, and<br />- doing a full restore does not help either.<br /><br />I figured that before going over to the Apple store I would restore it to the default OS/firmware only. The idea being that if it still drains in this configuration, then the phone is just defective. Well I did, and.. my phone is fine now! So while this is good news, it must mean that an app is somehow causing the issue. How am I going to find which app it is? I guess I have the following options:<br /><br />1. Leave it the way it is and slowly add apps back as I need them<br />2. 
Do a full restore and use bisection on app deletion/addition.<br />3. Try to figure out which apps I upgraded last week and delete those, hoping the culprit is one of last week's updates.<br />4. There is no taskmon app, right? Wouldn't it be great to simply see which process is eating my CPU cycles/battery..<br /><br />Love to hear your ideas!<br /><br />Cheers,<br /><br />--Kurterthttp://www.blogger.com/profile/07418191492358888029noreply@blogger.com0tag:blogger.com,1999:blog-2424622516012859853.post-35756603358030680082010-09-11T15:14:00.000-07:002010-11-08T16:39:37.152-08:00Guvnor JCR Implementation Support: JackRabbit and ModeShapeI have started working on the <a href="http://jboss.org/guvnor">Guvnor</a> project recently. Guvnor manages a SOA repository and sits on top of a JCR implementation. Up until now there really were not too many <a href="http://en.wikipedia.org/wiki/Content_repository_API_for_Java">JCR</a> implementations available; <a href="http://jackrabbit.apache.org/">JackRabbit</a> was the only viable open source gig in town. This has changed now that the <a href="http://www.jboss.org/modeshape">ModeShape</a> project has been picking up steam. They recently completed implementing the JCR 2.0 specification. ModeShape has been designed from the ground up with scalability in mind, and it can run in a clustered environment (using <a href="http://www.jgroups.org/">JGroups</a>). 
<br /><br />--Kurterthttp://www.blogger.com/profile/07418191492358888029noreply@blogger.com0tag:blogger.com,1999:blog-2424622516012859853.post-70500943082542756662010-07-20T11:22:00.000-07:002010-07-20T11:23:35.171-07:00Matt Ridley explains how open source worksAlthough he doesn't know it, he is explaining the inner workings of open source, and why it will be more successful as the number of people with computers increases.<br /><br /><a href="http://www.ted.com/talks/matt_ridley_when_ideas_have_sex.html">http://www.ted.com/talks/matt_ridley_when_ideas_have_sex.html</a><br /><br />--Kurterthttp://www.blogger.com/profile/07418191492358888029noreply@blogger.com0tag:blogger.com,1999:blog-2424622516012859853.post-33952417155903552882010-07-08T07:51:00.000-07:002010-07-08T07:53:49.466-07:00OSX and slow DNS issueI just 'fixed' my MacBook's terribly slow DNS lookup performance by turning off IPv6 in both of my network adapters (switch from automatic to off). The difference is not funny.<br /><br />Cheers,<br /><br />--Kurterthttp://www.blogger.com/profile/07418191492358888029noreply@blogger.com0tag:blogger.com,1999:blog-2424622516012859853.post-90888606145024859932010-05-13T15:09:00.000-07:002010-05-19T11:48:55.604-07:00Guerrilla SOA in the Cloud.This evening I'm listening to a presentation by John Graham, "From the ESB to REST and Clouds an Open Source", who is talking about <a href="http://www.nejug.org/events/show/110">ESB, REST and Clouds</a> at the SUN campus in Burlington, MA. It is a nice overview of the evolution of system architectures, with the conclusion that as systems became more distributed we developed different integration solutions: from CORBA and asynchronous messaging to the ESB. The talk is spiced up by some good word plays ("Being thrown under the ESB"), quizzes and code examples. 
It's entertaining.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj30mh2a4IRlPRUBsPSXuaw5Uaolz8-BJId5WKT4bnirA-GXA4Aee44hNgkY3monWelOAmEIId-1A9sSApOvaFbzBZkkx_4CakrHuq5QTQ4Q7nmTlToE7G76Uq6c7fqnglC6aGFn-tH6-3-/s1600/photo.jpg"><img style="display: block; margin: 0px auto 10px; text-align: center; cursor: pointer; width: 400px; height: 300px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj30mh2a4IRlPRUBsPSXuaw5Uaolz8-BJId5WKT4bnirA-GXA4Aee44hNgkY3monWelOAmEIId-1A9sSApOvaFbzBZkkx_4CakrHuq5QTQ4Q7nmTlToE7G76Uq6c7fqnglC6aGFn-tH6-3-/s400/photo.jpg" alt="" id="BLOGGER_PHOTO_ID_5473045903786391874" border="0" /></a><br /><br />After a little while we arrive at <a href="http://www.infoq.com/interviews/jim-webber-qcon-london">Guerrilla SOA</a>, a term coined by Jim Webber for a lighter style of SOA. Finally we arrive at how REST can be used in a cloud environment, in a complementary fashion, to manage ESBs.<br /><br />Cheers,<br /><br />--Kurterthttp://www.blogger.com/profile/07418191492358888029noreply@blogger.com0tag:blogger.com,1999:blog-2424622516012859853.post-80548322250568004962010-05-07T11:48:00.000-07:002010-05-07T12:01:35.096-07:00Match made in Heaven: Maven Multi-Project and Eclipse Working SetsRecently Maven dropped support for 'nested modules', which was nice from an organizational point of view, but since all the maven modules shared one classpath in this case, it wasn't really working. So I started using 'import maven modules' after checking out the main project, which creates a separate project for each maven module. This works well, but it explodes the number of projects in my project explorer. I had been looking for a way to organize this, and today I tripped over <a href="http://eclipse.dzone.com/articles/categorise-projects-package">this article from Byron</a> about categorization of projects in Eclipse using 'Working Sets'. 
This is what I had been looking for, and it was there all this time!<br /><br />Cheers,<br /><br />--Kurterthttp://www.blogger.com/profile/07418191492358888029noreply@blogger.com0tag:blogger.com,1999:blog-2424622516012859853.post-88404166134945281182010-05-04T10:18:00.000-07:002010-05-04T10:22:43.051-07:00JBoss Quick Reference CardThis is the best <a href="http://www.osconsulting.org/code-fragments/rc097-010d-jbosseap_0.pdf">Getting Started Guide on JBoss</a> I have seen! Short and to the point.<br /><br />--Kurterthttp://www.blogger.com/profile/07418191492358888029noreply@blogger.com0tag:blogger.com,1999:blog-2424622516012859853.post-24084849343594614462010-05-04T10:13:00.000-07:002010-05-07T11:46:47.083-07:00jBPM Developer Guide Book (P)Review<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://www.packtpub.com/jboss-business-process-management-jbpm-developer-guide/book"><img style="float: right; margin: 0pt 0pt 10px 10px; cursor: pointer; width: 125px; height: 152px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhpDLAJSMZr2Nk8g2GCsgTOiNt4ldV95dwYL2ni3HkUJXuiGxxmLlsYU3os81HkR8jOIE64JQkvsSWYCU6p5YJp4WCayjxUcXA6zgn4VGYBAmWHAIsWckqrkQJ4u0lrVm75KwNOTsmkEzyN/s320/jBPM+Dev+Guide.png" alt="" id="BLOGGER_PHOTO_ID_5467464544290221922" border="0" /></a>When I returned from our spring vacation trip one of the things I found waiting for me on the porch was a copy of the book "jBPM Developer Guide". After my last <a href="http://kurtstam.blogspot.com/2008/04/book-review-business-process-management.html">jBPM book review</a> I mentioned that the target audience for that book was not the jBPM developer. Well that seems to be addressed now. 
Here is a <a href="http://www.osconsulting.org/code-fragments/5685-jbpm-developer-guide-sample-chapter-5-getting-your-hands-dirty-with-jpdl.pdf">link to chapter 5</a>, in case you're interested while I'm reading the book myself.<br /><br />Cheers,<br /><br />--Kurterthttp://www.blogger.com/profile/07418191492358888029noreply@blogger.com0tag:blogger.com,1999:blog-2424622516012859853.post-6980866382021387262010-03-17T16:55:00.000-07:002010-03-25T05:47:14.382-07:00JBossESB MBean Service Deployment OrderIn JBossESB, MBeans can be deployed inside of .esb archives. The deployment is pretty much standard, and the MBean configuration lives in the jbossesb-service.xml right in the root of the .esb archive. If you need the MBean to be up before the rest of the .esb deployment, you can reference the MBean in the META-INF/deployment.xml. If the MBean depends on another .esb service archive being deployed first, then you need to add that dependency to the jbossesb-service.xml.<br /><br />If you're looking for an example, check out the jbossesb.esb.<br /><br />--Kurterthttp://www.blogger.com/profile/07418191492358888029noreply@blogger.com1tag:blogger.com,1999:blog-2424622516012859853.post-79439237386145170642010-02-07T08:29:00.000-08:002010-02-07T09:19:54.754-08:00Cool MySQL Database Queries1. Select a count of all rows in your database:<br /><pre class="SQL" name="code"><br />SELECT SUM(table_rows) FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'database_name';<br /></pre><br /><br />2. Select the count of rows for each table in your database:<br /><pre class="SQL" name="code"><br />SELECT table_name, table_rows FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'database_name';<br /></pre><br /><br />3. 
Get stats on a per-hour basis, written out to CSV:<br /><pre class="SQL" name="code"><br />SELECT DATE_FORMAT(date, '%m/%d/%Y %H:00:00') AS date, DATE_FORMAT(date, '%Y%m%d%H') AS hour, count(id) AS requests, max(time), avg(time), count(distinct userName) INTO OUTFILE '/tmp/usage.csv' FIELDS TERMINATED BY ',' FROM statslog GROUP BY hour ORDER BY hour ASC;<br /></pre>erthttp://www.blogger.com/profile/07418191492358888029noreply@blogger.com1tag:blogger.com,1999:blog-2424622516012859853.post-6813900861932457982009-06-05T16:44:00.000-07:002009-06-06T06:31:21.481-07:00jUDDI Release-3.0.0.betaI'm proud to announce the release of <a href="http://ws.apache.org/juddi/">jUDDI-3.0.0.beta</a>. Since the alpha release the implementation has shown stability and performance, and it implements the final two UDDI APIs targeted for the 3.0.0 release: "Subscription" and "Custody Transfer". Subscriptions allow you to register for updates in the registry. The registry will send out the notifications by calling an endpoint defined at registration time. The generic UDDI client now supports InVM transport, to allow jUDDI to run in embedded mode. 
For a complete overview of what went into this release, see the release notes:<br /><a href="http://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=10401&styleName=Html&version=12313630">http://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=10401&styleName=Html&version=12313630<br /></a><br />Finally, we also started work on the console.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhu31D68hIzpJc8QzigGgJkyIZJOi79d69BRYLBGiqb8bF06S4oDx5JuuLKGvsbswnb8wES_uJ3bOYTOIB3x6_sAY8TfCDgSelugS8VRn9NWyJoUnSGM74kg9dO86WCLYvSLf_MY7UhTKK5/s1600-h/Picture+3.png"><img style="display:block; margin:0px auto 10px; text-align:left;cursor:pointer; cursor:hand;width: 400px; height: 227px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhu31D68hIzpJc8QzigGgJkyIZJOi79d69BRYLBGiqb8bF06S4oDx5JuuLKGvsbswnb8wES_uJ3bOYTOIB3x6_sAY8TfCDgSelugS8VRn9NWyJoUnSGM74kg9dO86WCLYvSLf_MY7UhTKK5/s400/Picture+3.png" border="0" alt="" id="BLOGGER_PHOTO_ID_5339009974506558642" /></a><br />The console is a Pluto portal which plugs in the uddi-portlets. The portlets are GWT-based. We'll have one each to Publish, Search, Browse, Subscribe, etc. Right now you can see a tree of services under the publisher you log in as. You can download the portal bundle from the following URL if you want to see it all in action.<br /><a href="http://www.apache.org/dist/ws/juddi/3_0/juddi-portal-bundle-3.0.0.beta.zip">http://www.apache.org/dist/ws/juddi/3_0/juddi-portal-bundle-3.0.0.beta.zip</a><br /><br />--Kurterthttp://www.blogger.com/profile/07418191492358888029noreply@blogger.com0tag:blogger.com,1999:blog-2424622516012859853.post-45439649700811310152009-06-05T11:14:00.000-07:002009-06-05T11:26:44.212-07:00Synergy rocks (for once)Whenever companies merge, great "Synergies" are promised by upper management. I hate that word. It's a red flag for me. 
So for that reason alone I had not tried out <a href="http://synergy2.sourceforge.net/">Synergy</a> until yesterday. However I really like working on the Mac, and this way I still can: I simply move my mouse across to the other machines. This is really pretty sweet. Finally a good use of the word synergy! Great work guys. Thx.<br /><br />--Kurterthttp://www.blogger.com/profile/07418191492358888029noreply@blogger.com0tag:blogger.com,1999:blog-2424622516012859853.post-60989433807281587272009-06-03T17:07:00.000-07:002009-06-03T20:17:21.130-07:00Setup of a Wireless Connection on a Lenovo T400 running CentOS 5.3I recently installed CentOS 5.3 on my Lenovo T400, and everything pretty much worked except for the wireless connection. I had a hard time finding instructions on how to get it working, but <a href="http://zgambitx.wordpress.com/2009/05/11/connecting-a-lenovo-t61-to-a-wireless-network-with-centos-5-3-gnome/">this blog entry</a> was most helpful.<br /><br />First make sure the RPMForge repo is installed:<br /><pre name="code">
rpm -Uhv http://apt.sw.be/redhat/el5/en/x86_64/rpmforge/RPMS/rpmforge-release-0.3.6-1.el5.rf.x86_64.rpm
yum clean all
yum update
</pre><br />Next install the driver for the Intel PRO/Wireless 5100 AGN [Shiloh] card:<br /><pre name="code">
yum install iwl5000-firmware
</pre><br />The wireless device should now be working, and you can enable the NetworkManager to start using it:<br /><pre name="code">
chkconfig NetworkManager on
service NetworkManager start
</pre><br />You may also want to save yourself some wait time at boot time by disabling the network service, since you don't need it anymore:<br /><pre name="code">
chkconfig network off
</pre><br />Next you need to configure your wireless connection settings by going to System > Preferences > More Preferences > Network Connections<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" 
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjmCQJtGaRHINcsPx7UHqNMygE71gurZ52ME_NnqIjBE3iCiBP6ns15DQg3sIazNCQof33hFCAFwcPeZillbIM9zf9ScHjS2lynKADEVCxxq_7ECqhyMV0IVS0vsCzEFdcHWhS9vD-DoGPl/s1600-h/wireless-connection.png"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 325px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjmCQJtGaRHINcsPx7UHqNMygE71gurZ52ME_NnqIjBE3iCiBP6ns15DQg3sIazNCQof33hFCAFwcPeZillbIM9zf9ScHjS2lynKADEVCxxq_7ECqhyMV0IVS0vsCzEFdcHWhS9vD-DoGPl/s400/wireless-connection.png" border="0" alt="" id="BLOGGER_PHOTO_ID_5343262130936308098" /></a><br /><br />Select the Wireless tab, click on Add, and fill out your connection settings.erthttp://www.blogger.com/profile/07418191492358888029noreply@blogger.com2tag:blogger.com,1999:blog-2424622516012859853.post-70244674856046049642009-05-23T06:16:00.000-07:002009-06-06T06:32:19.973-07:00jUDDI v3.0.0 SNAPSHOTThe <a href="http://ws.apache.org/juddi/">jUDDI</a> project has seen a lot of activity lately in the ramp-up to the jUDDIv3 beta release. The biggest change since the alpha release is that for the beta the Subscription API will be fully implemented. One of the missing features of jUDDI has always been a good console. After the beta release we will work hard to get that work completed. 
However it is very exciting that we already have the beginnings of the console.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhu31D68hIzpJc8QzigGgJkyIZJOi79d69BRYLBGiqb8bF06S4oDx5JuuLKGvsbswnb8wES_uJ3bOYTOIB3x6_sAY8TfCDgSelugS8VRn9NWyJoUnSGM74kg9dO86WCLYvSLf_MY7UhTKK5/s1600-h/Picture+3.png"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 227px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhu31D68hIzpJc8QzigGgJkyIZJOi79d69BRYLBGiqb8bF06S4oDx5JuuLKGvsbswnb8wES_uJ3bOYTOIB3x6_sAY8TfCDgSelugS8VRn9NWyJoUnSGM74kg9dO86WCLYvSLf_MY7UhTKK5/s400/Picture+3.png" border="0" alt="" id="BLOGGER_PHOTO_ID_5339009974506558642" /></a><br /><br />You can download a ready-to-go bundle (3.0.0.SNAPSHOT) from the <a href="http://people.apache.org/repo/m2-snapshot-repository/org/apache/juddi/juddi-portal-bundle/3.0.0.SNAPSHOT/">repos</a>.<br /><br />--Kurterthttp://www.blogger.com/profile/07418191492358888029noreply@blogger.com0