September 14, 2011

Directory and File System ... differences, similarities

As is commonly the case with many technologies, LDAP products suffer from a really bad image. They are perceived as obsolete servers: convoluted, inadequate as general-purpose stores, and purpose-built for certain "niche" use cases because the data layout is hierarchical and thus, weird. Yet file systems are almost always hierarchical and somehow do not suffer from the same perception.

Let me try to draw a parallel between a file system (you know, the place where you trustingly organize all your work documents, family pictures and music files) and a good old LDAP Directory Server.

  • the structure is the same: it's a tree!

Above, Microsoft Windows folders (on NTFS)
Below, LDAP entries (UnboundID Directory Server)

  • items can be manipulated the same way!

  • in LDAP, every object can have children. That is, it's like every file could also be a folder.
  • in LDAP, every object is characterized by a class. It is like the file type, except that a class can inherit characteristics from a parent class. Imagine that a Word 2007 document inherited characteristics common to all documents, say, a revision number. Another document, say an Excel spreadsheet, could then also have a revision number: even though the Word and Excel documents' contents are very different in nature, they share characteristics that can be described in a common "structure". That is, in essence, what the hierarchy of object classes achieves.
  • In a file system, files can be journaled or versioned. I don't know of any LDAP server supporting this as-is, but LDAP servers usually have some sort of changelog that can keep track of data changes for some time. This typically enables robust replication, conflict resolution and the repair of most administrative errors with respect to data handling. Think of it as an integrated time machine.
  • LDAP offers mechanisms a file system does not; a big one is a strong authentication system, which has effectively made directory servers prime candidates for ... operating system (and thus file system) authentication
  • LDAP supports grouping mechanisms
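The object-class inheritance described above can be sketched with ordinary classes (hypothetical Document/Word/Excel types for illustration, not a real LDAP schema):

```python
# A parent "object class" carries attributes common to all documents;
# concrete types inherit them, just as LDAP object classes inherit
# attributes from their superior class.

class Document:
    """Common characteristics every document shares."""
    def __init__(self, revision):
        self.revision = revision

class WordDocument(Document):
    def __init__(self, revision, page_count):
        super().__init__(revision)
        self.page_count = page_count

class ExcelSpreadsheet(Document):
    def __init__(self, revision, sheet_count):
        super().__init__(revision)
        self.sheet_count = sheet_count

# Contents differ completely, yet both carry the inherited revision.
report = WordDocument(revision=3, page_count=12)
budget = ExcelSpreadsheet(revision=7, sheet_count=4)
```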
This is obviously not completely exhaustive, but at least it gives you an idea of the similarities between the contents of an LDAP server and those of a file system, and how to manipulate them: pretty much the same thing, just named differently.
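To make the tree analogy concrete: a file path and a DN both name a node by listing its chain of ancestors, just read in opposite directions. A toy converter (illustrative only; real DNs use typed RDNs such as ou= and dc= as dictated by the schema):

```python
# Map a POSIX-style path onto a DN-style name: same tree, reversed order.

def path_to_dn(path, suffix="dc=example,dc=com"):
    """Turn /home/user/docs into cn=docs,cn=user,cn=home,<suffix>."""
    parts = [p for p in path.split("/") if p]
    rdns = ["cn=" + p for p in reversed(parts)]
    return ",".join(rdns + [suffix])

print(path_to_dn("/home/user/docs"))
# cn=docs,cn=user,cn=home,dc=example,dc=com
```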

September 12, 2011

Counters done right.

Terry has posted two articles, on assertions and "increment", that I thought were good stepping stones to show how a Directory Server may solve a tough problem better than you would think: concurrently keeping track of counters in your applications.

Let's take a simple application:
Get a value, decrement it, save it back. Simple. It works. Until ....
Shoot. We lost 1. This type of concurrency issue is well known to developers who have to concurrently keep track of sessions, for example. The proposed solution is usually locking. Here's the issue with locking:

Why the warning sign? Well, locking works. Fine actually. But the client waits. In a lot of cases you do not even have much of a choice. But for counters, you do, and most likely you must. Here's an alternative:
OK, I know, the server-side decrement isn't exactly prime news and has been around for a while. No surprise! It's way easier from the application side and lets the developer focus solely on business "value-added" logic. So why is it not used more? I don't know. Maybe because not many people know to ask the right question, so they never get the right answer?
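To see why the read-modify-write pattern loses updates, here is a deterministic sketch of the race (a plain dict stands in for the server; nothing UnboundID-specific here):

```python
# Two clients read the same value before either one writes back:
# one decrement is silently lost.
store = {"minutes": 10}

a = store["minutes"]        # client A reads 10
b = store["minutes"]        # client B reads 10
store["minutes"] = a - 1    # A writes 9
store["minutes"] = b - 1    # B writes 9: A's decrement is lost
lost_update_result = store["minutes"]   # 9, though 8 was expected

# Server-side decrement: the store applies the delta itself, so the
# interleaving above cannot happen.
store = {"minutes": 10}

def decrement(key):
    store[key] -= 1         # one atomic step on the "server" side

decrement("minutes")
decrement("minutes")
atomic_result = store["minutes"]        # 8, as expected
```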
With MySQL or other relational databases, you usually have the ability to do so with a pseudo syntax like:
UPDATE plan SET minutes = minutes - 1 WHERE subscriber_id = '(555)123-4567';

Let's make that even better with some LDAP. First, let's take a look at how to decrement the "minutesLeft" counter stored in our user.19 profile:
dn: uid=user.19,ou=people,dc=example,dc=com
changetype: modify
increment: minutesLeft
minutesLeft: -1

So, functionally, our application will attempt to decrement the minutesLeft counter and, depending on whether or not it succeeds, it will let the user use his/her plan for a minute. Additionally, we'll make sure that there are indeed minutes left on the plan for the decrement to be successful. That's where the assertion comes into the picture. The --assertionFilter option can be used on the CLI tool to test it manually. In your client, the UnboundID LDAP SDK provides full programmatic control.

This is how it looks when it is successful (there ARE minutes left!)
C:\UnboundID-DS\bat>ldapmodify -a -c --assertionFilter "(minutesLeft>=1)" --postReadAttributes minutesLeft -f Decrement.ldif
# Processing MODIFY request for uid=user.19,ou=people,dc=example,dc=com
# MODIFY operation successful for DN uid=user.19,ou=people,dc=example,dc=com
# Target entry after the operation:
# dn: uid=user.19,ou=People,dc=example,dc=com
# minutesleft: 2

Note that you can request the value before and/or after (only after in this example) to use the write operation as a read as well.
When our subscriber has depleted his plan completely, the server will return:
C:\UnboundID-DS\bat>ldapmodify -a -c --assertionFilter "(minutesLeft>=1)" --postReadAttributes minutesLeft -f Decrement.ldif

# Processing MODIFY request for uid=user.19,ou=people,dc=example,dc=com
MODIFY operation failed
Result Code:  122 (Assertion Failed)
Diagnostic Message:  Entry uid=user.19,ou=people,dc=example,dc=com cannot be modified because the request contained an LDAP assertion control and the associated filter did not match the contents of that entry

So there you have it: an elegant, concurrency-friendly way to keep track of your counters while avoiding extra server round-trips, keeping the user experience nice thanks to consistently low-latency requests.

September 9, 2011

Sync speed ... part 2

In the earlier post about Sync performance I had only tested on a small machine (namely, my trusty old laptop), simply to prove -or disprove- that the experiment was valid.
I only recently got around to testing on something more realistic, although still not up to date: a Dell R610, two-socket Intel Xeon E5520 @ 2.27GHz.

But let's look at our test setup:
In essence, on the source side (left), the test extension to the server simulates a database that has 1,000 new changes every time the server polls it, and the server is configured to poll every 1ms. On the destination side, the test extension gets the changes and returns immediately without doing anything with them.

This setup has 2 main advantages:

  • it rules out network latency to isolate just the box the Synchronization Server runs on
  • it eliminates any latency due to either the source or the destination
That is, to date, the best way to test the absolute best performance the Synchronization Server (or any piece of software, really) can achieve on a particular rig.

I'm going to cut to the chase, since this is part 2 of the series, and show you 2 things:

and the less sexy capture:
[root@r610-02 UnboundID-Sync]# ./get-pipes-throughput
getting first measurement for all started sync pipes...
first measurement acquired. Reading: 3430978
Waiting for 10 seconds...
Getting second measurement...
Second measurement acquired. Reading: 4315672
884694 operations processed in 10 seconds
Throughput 88469 Sync/sec

So there you go: we have 8 physical cores on this platform, and the rule of thumb is that you can get about 10,000 transactions per second per core with Sync (11,058 in this case). Note too that Sync scales really well vertically and is able to take advantage of all CPUs on the machine: there is only 7.74% CPU idle overall on this machine at the time these metrics were taken!
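For the record, the arithmetic behind the capture and the rule of thumb:

```python
# Two readings of the monotonically increasing sync-pipe counter,
# taken 10 seconds apart (numbers from the capture above).
first, second = 3430978, 4315672
interval_s = 10

ops = second - first            # 884694 operations in the interval
throughput = ops // interval_s  # 88469 sync/sec

cores = 8                       # physical cores on the Dell R610
per_core = throughput / cores   # ~11058 sync/sec per core
```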

In reality, there are going to be a few things in the way of achieving such numbers:
  • Network latency, which can hit you on the source side, the destination side or both if your Sync server is colocated with neither
  • Source latency. For example, if you query an RDBMS source, there will be an inherent lag to the source engine processing the request and serving the results to the sync engine.
  • Destination latency. Same as the source except the destination is actually written to, which can take an even longer amount of time.
In Part 3, I will get to how we deal with these hurdles and what you can tune to help keep the synchronization as fast as possible.

September 8, 2011

1 JeeLink and 2 JeeNodes go to a bar ...

I find myself in an interesting situation where I have one house near Denver and one apartment in Steamboat Springs. Our family is going to live in the apartment the whole year, with an occasional trip down to Denver. To be able to keep an eye on things and reduce the power footprint while retaining as much comfort as possible when we go back, I set out to automate a few things in the house. I had looked around and had tinkered with PIC MCUs in college, but I had never done much with them except a water fountain game for the kids located under our deck. More on that later.

One of the main issues is cost of course, as with most tinkerers, this is not serious enough that I would want to invest in a commercial solution or expensive electronics platforms. I looked at the arduino platform because it's obviously very popular and so modular that a 4 year-old could put something together without knowing anything about electronics. Unfortunately, I did not find anything "arduino" that met another one of my requirements: wireless simplicity.

Enter JeeLabs.

I stumbled upon the work of Jean-Claude Wippler (widely known simply as jcw) and the JeeLabs community at large, which counts lots of very active and really helpful members. What I really like about this community is that they are not going to belittle the newcomer who knows little about the platform; au contraire, they will teach you how to fish. This has been one of my most agreeable and educational experiences to date.

The merits of the JeeLabs platform are many but let me name a few that really helped me get things done instead of spending countless hours figuring things out:

  • It is somewhat "standard" as it builds on the arduino strengths and makes things even easier
  • Arduino libraries are functional right out of the box, you only need to know pin numbers are shifted on your JeeNode compared to the Arduino
  • It has an ULTRA simple, fairly reliable and pretty good range radio module making your setup instantly accessible over wireless. THAT alone is awesome. The radio module it sports isn't as reliable as other more expensive solutions are (XBee comes to mind) but it is absolutely good enough for most applications around the house. To draw a parallel, I would use this one: over IP networks, the same difference exists between TCP and UDP. TCP is reliable. UDP is not. It does not mean that UDP is UNRELIABLE. Get the nuance?
  • It is rock bottom cheap. You can get a JeeNode on ModernDevice for $22. That's with radio. C'mon. How's that even possible...
  • JeeLabs makes lots of simple yet useful "plugs", so there is very little you will actually need to do yourself. It's more like Lego than electronics really; there is no glory in putting something that works together, they have already done all the work of making it easy. Literally plug and play.
  • There's also a nice JeeLink which is nothing more than a JeeNode in a USB stick format. That thing is sweet! Stick it in your USB port, start talking wirelessly to the other nodes or write your Perl/python/ script/program to drive all the nodes on your network.
Here's what I had in mind for my particular situation: I needed to be able to control the heater / cooler in my house so I could turn the heat up a couple of days before going home, so we wouldn't find ourselves sleeping in a house at 5C (40F).

So I set out on an experiment and bought a JeeLink and JeeNode with 2 relay plugs to see if I could make a JeeNode turn the heat/cool/fan on remotely.
Unsurprisingly, the hardware part of it took all of 2 hours to solder the components on the boards and debug the lousy soldering points by resoldering a couple of times.
What was more surprising is that even though I had not written any C since 2004 (so 7 years give or take) it was very easy to find examples that I could tweak to do what I needed. So in about a day's worth of work, I had a way to remotely control the HVAC.

But I needed to make this whole contraption a wee bit smarter so it could replace the regular thermostat for good. I bought a second JeeNode that I rigged with 3 temperature sensors:
  1. indoor temperature: the sensor sits right on the JeeNode board. This temperature is used to trigger the HVAC in the appropriate mode.
  2. outside temperature: the sensor sits on a window sill outside. This temperature is used mostly for monitoring purposes, and it allows the central software to be a little smarter than the regular unit by avoiding, for example, turning the AC on in summer if the outside temperature drops below the target temperature, which frequently happens at night in summer in Colorado.
  3. duct temperature: the sensor is tucked in the air vent where it can measure the output air temperature of the HVAC unit. This is helpful as a feedback mechanism to make sure that the heater or cooler actually work when turned on. If it doesn't, the central software will send me an SMS with Twilio.
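A sketch of the decision logic the three sensors enable (function names, thresholds and the deadband are my own illustration, not the actual software):

```python
def hvac_action(indoor_c, outdoor_c, target_c, deadband=0.5):
    """Pick a mode from the indoor reading; use the outdoor reading
    to skip cooling when nature will do it for free."""
    if indoor_c < target_c - deadband:
        return "heat"
    if indoor_c > target_c + deadband:
        # A Colorado summer night often drops below the target:
        # no point running the AC then.
        if outdoor_c < target_c:
            return "off"
        return "cool"
    return "off"

def duct_ok(action, duct_c, indoor_c):
    """Feedback check on the duct sensor: heating should blow air
    warmer than the room, cooling colder; otherwise alert (SMS)."""
    if action == "heat":
        return duct_c > indoor_c
    if action == "cool":
        return duct_c < indoor_c
    return True
```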
All in all, it took me an entire week, working off hours, after work or on the two weekends. Most of the time was actually spent on the software part: I had to contribute fixes to Java libraries for Pachube, where I put my temperature metrics, and figure out some timing conditions using RXTX, but overall it was a great learning experience.

So: what will you do with your JeeNodes?

August 11, 2011

Family reunion: JeeNode and JeeLink get to talk

So as I said before leaving for vacation, I did receive the JeeLink and JeeNode from ModernDevice. It took a good half hour to solder the JeeNode together.

First Impressions
  The jeeNode PCB has lots of really good little touches showing that jcw has an unusual attention to detail. The outline of the components is actually printed on the PCB to help position the polarized ones the correct way. I had dreaded soldering the RF12, but it all went very smoothly, so don't get too worked up like I did; it's a piece of cake. The instructions on the jeelabs site to assemble the jeenode are good, but I didn't like the order in which they tell you to solder the components. It should be from lowest to highest profile, so you can push each component down against the PCB. The instructions do say that, but one of the capacitors from Modern Device actually had a higher profile than the port connectors, which made it a little tricky to solder the ports correctly. So be warned: the cap should go last.

  Being Arduino, I went straight to download the Arduino IDE. As much as I hate eclipse, I must say that they sure made it simple and straightforward for Arduino first time users! Once you have the IDE running, simply select the COM port your hardware is on and its model and you're in business! Sweet!
OK, not quite, you still need to get the libraries. So, for the JeeNode, you will need 2 core libraries, the rf12 and the ports libraries. You can get them there:

Check them both out into the Arduino "libraries" folder. That is IT!
Having never dealt with anything Arduino, I did not know what to expect, but I somehow expected greater complexity.

Making it work: Hello World?
Though I sometimes like the obligatory hello world tutorial, it usually doesn't actually DO anything, which makes it kind of moot unless the environment itself is what you need to get used to. In this case, I think something a little meatier was in order so, since I had a DS18B20 on hand, I decided the better example would actually be to have the jeeNode send the temperature readings from that sensor to the jeeLink attached to my laptop. Maybe later I can make something useful with those wireless readings but at the moment, the intent is mostly to check that a) I did all the soldering right and b) I can wrap my head around the jeeLabs and Arduino goodness.

  I did need help from Google to figure it out, but basically, you can find libraries to make your life easier for a lot of common components. That indeed includes the Dallas DS18B20 I was planning to use. It's a 1-wire component and, guess what, there's a OneWire library. And lo and behold, there is even a DallasTemperature library that thinly wraps around OneWire to provide convenience functions for the temperature sensor.

Setting it all up:
 First I had to set the jeeLink up, which means I simply had to give it a node number. You do that by opening a terminal to your jeeLink COM port and simply typing '1n'. This set my jeeLink up as node 1.
Pretty simple eh? That's because the jeeLink comes loaded with a sketch called rf12demo and it allows for this dynamic configuration.

First test:
 ok so now the jeelink is listening; I will load a sketch onto the jeenode and have it send the temperature readings. The DS18B20 is plugged into port 4 on the jeeNode, but there is a little trick: pin numbers are shifted by 3, so port 1 on the jeeNode is pin 4 on the Arduino. Consequently, when we code for the jeenode, since we do it from an Arduino software perspective, our port 4 will really be pin 7 in Arduino lingo. Unless you know this, it actually is weird; once you do, well, whatever. So, here's a sketch that sends the temperature:


// Data wire is plugged into port 4 on the JeeNode; since port 1 of
// the JeeNode maps to Arduino pin 4, port 4 on the JeeNode is
// pin 7 in Arduino lingo
#define ONE_WIRE_BUS 7

#include <Ports.h>
#include <RF12.h>
#include <OneWire.h>
#include <DallasTemperature.h>

// Setup a oneWire instance to communicate with any OneWire devices
// (not just Maxim/Dallas temperature ICs)
OneWire oneWire(ONE_WIRE_BUS);

// Pass our oneWire reference to Dallas Temperature.
DallasTemperature sensors(&oneWire);

char payload[10];
float temperature;

void setup(void) {
  // start serial port
  Serial.begin(57600);
  Serial.println("Remote Temperature Sensor v0.1a Starting...");

  // Start up the library
  sensors.begin();

  // this node is node 2; band and group must match the rf12demo
  // settings on the listening jeeLink
  rf12_initialize(2, RF12_915MHZ, 33);
}

void loop(void) {
  // call sensors.requestTemperatures() to issue a global temperature
  // request to all devices on the bus
  sensors.requestTemperatures(); // Send the command to get temperatures
  temperature = sensors.getTempCByIndex(0);
  Serial.print("Temperature: ");
  Serial.print(temperature);
  Serial.println(" C");
  sprintf(payload, "%d", int(temperature));
  Serial.print("Sending temperature: ");
  Serial.println(payload);
  if (trySending(payload)) {
    Serial.println("Successfully sent.");
  } else {
    Serial.println("Sending failed");
  }
  delay(1000); // one reading per second is plenty here
}

boolean trySending(char message[]) {
  boolean result = false;
  if (rf12_canSend()) {
    rf12_sendStart(1, message, strlen(message));
    result = true;
  }
  return result;
}

And guess what, we do get the temperature on the jeeLink: yeeehaaaw

Oh no!
 The problem with this sketch is that if rf12_canSend() returns false (which it does every now and then), then nothing gets sent until the next loop, meaning we can lose readings.

Second Try
 the solution I found, at least for now, is to implement a retry mechanism like so:

boolean trySendingHarder(char message[], int retries){
  boolean result = false;
  int i = 0;
  while ((i++ < retries) && !result){
    result = trySending(message);
  }
  return result;
}

All I have to change now is the call to trySending(payload) to trySendingHarder(payload,3) and it will retry up to 3 times. It has now been running for 24 hours and hasn't missed a beat!

I hope this will help someone out there getting started with jeestuff.

August 10, 2011

Sync speed ... part 1

I've been surprised a number of times now by the expectations of people we introduce to our Sync technology. Let's back up a bit.

UnboundID offers a synchronization technology that has the particularity of doing near-real-time synchronization of two end-points, A and B. It does it by building an "object" from data gathered at both end points, comparing the two objects and applying the minimum changes to the destination so that the destination matches the source. The way we do bi-directional sync is to set up one pipe from A to B, and one pipe from B to A, taking care of ignoring changes applied by the synchronization engine itself to avoid infinite loops.
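The "compare the two objects and apply the minimum changes" idea can be sketched with plain dictionaries standing in for the objects built at each end point (illustrative only, not the Sync engine's actual API):

```python
# Diff two attribute maps and keep only what must change at the
# destination: adds/replaces carry the new value, deletions are
# marked with None.

def minimum_changes(source, destination):
    """Return the modifications needed so destination matches source."""
    changes = {}
    for attr, value in source.items():
        if destination.get(attr) != value:
            changes[attr] = value            # add or replace
    for attr in destination:
        if attr not in source:
            changes[attr] = None             # delete marker
    return changes

src = {"cn": "user.19", "mail": "u19@example.com", "minutes": "8"}
dst = {"cn": "user.19", "mail": "u19@example.com", "minutes": "9",
       "tempFlag": "x"}
print(minimum_changes(src, dst))
# {'minutes': '8', 'tempFlag': None}
```

Identical end points produce an empty change set, which is what lets the engine ignore its own writes and avoid infinite loops in bi-directional mode.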

When we offer this to solve some tough business problems, we're often asked about the performance of Sync. Will it be as fast as my current ?
Most likely? Yes. I just wrote a couple of mock end-points simulating "ideal" end points to see how fast the sync engine is at processing the changes. I don't have hard numbers to share mainly because I tested on my laptop and I still need to iron out some of the wrinkles in my code to make it nicer but I just want to point out that:

  1. the sync core is really efficient. It could probably be improved slightly here and there but as it stands, it's pretty lean already.
  2. in the unlikely event that you have some mythical database outpacing Sync, we could scale out by using Sync on multiple machines, each sync managing its own subset of the data.
  3. More than likely, either or both your end-points are the issue.
The extensions run inside Sync, so this also rules out network latency. In a later version, I plan to add the ability to simulate network latency too, in order to better determine the impact of the network on the system as a whole.
Stay tuned...

July 29, 2011

Oh The Joy!

I JUST got my new toys in!
check it out:

the BUB2, a jeelink and a jeenode! Yiiiipiiiie! can't wait!
I'll let you know what I do with it.

July 28, 2011

How SMALL can you get?

As part of a project of my own, I have to fit my server (UnboundID Directory Server to be precise) on a small machine. How small? Well, that's exactly the question I'm trying to answer so I can figure out what to buy for that particular project. Options are many, from fitPC to Trim-Slice and Atom-based netbooks, but the question is how small I can make the server and still get meaningful use out of it. The experiment needs to be able to store data from 14 sensors every 16ms, for a total inbound throughput of 875 writes per second. More important than the sheer throughput -which will be coming from an external microcontroller in bursts- the server needs small response times in order to process all the writes before the next burst arrives and keep up with the constant flow of sensor data.
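Spelling out the numbers (the per-write budget is my own back-of-the-envelope, but it explains why the response times discussed below matter so much):

```python
# 14 sensor readings arrive in a burst every 16 ms.
sensors = 14
burst_interval_ms = 16

# Sustained inbound write rate:
writes_per_second = sensors * (1000 / burst_interval_ms)  # 875.0

# To drain a full burst before the next one arrives, the average
# write must complete within 16/14 ms:
budget_ms = burst_interval_ms / sensors                   # ~1.14 ms
```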

Fiddling with it a bit today, I managed to fit the server in a 48MB footprint on a test laptop -which is one of the candidates, as it makes my life easier on the battery side of things, provided it can sustain the cold temperatures; more on that in a later post- and I am happy to report that I could comfortably achieve over 1,600 writes per second. Note that while I was doing these tests, not only was I running DS in a very small heap, but the laptop was also running the UnboundID Synchronization Server in an equally small JVM and, in order to investigate the performance, I was also running NetBeans, jvisualvm and some other tools (Apache Derby, FirebirdSQL and jEdit to be precise). So you can see we're not nearly in a realistic environment, but it does mean that there is a lot MORE stress on the machine at the moment, which is good.
So 1,600 writes per second on this extremely small and contrived setup is good, right? Not quite: the response times are hovering at 1.2ms, which is too much for the application I need DS for. So I throttled it down a bit to see what I could get without pushing the server to the limit. Here's what I got:
Column 1: recent throughput (writes per second)
Column 2: recent response time (in millisecond)
Column 3: recent error rate
Column 4: average throughput (writes per second)
Column 5: average response time (in millisecond)

So there you go, ~0.8ms write response time on an old laptop that looks like a good candidate.

I pushed the experiment even further by attaching the sensor board and looking at the REAL traffic going into the DS, and the response times fell further to around ~0.7ms, which gives me a ~0.3ms window of headroom.

Great, I might be able to do some extra crunching "in flight".

July 26, 2011

Engineering: It's an addiction...

I have sort of an addiction to "doing" stuff; solving problems in particular is something I really get a kick out of. It's a bug I caught a long time ago, when my grandfather would (try to) teach me how to correctly bend wood beams to make the structure of wooden boats. I found it captivating, as it takes a bit of understanding of the basic "fabric" of wood and how the fibers are disposed in the beam to bend it, what temperature it requires and such. An absolutely fascinating art.
Later, my college buddy Matthieu Bourgeois totally infected me with the chess bug. I never quite got even nearly good at it but I did read a lot of books he recommended and found the mechanics, tactics and even philosophies behind it truly mesmerizing. 
More recently, I had the privilege to work a little bit with Kohsuke Kawaguchi (of Hudson and now Jenkins fame) who is an engineer in a league of his own. He decided to create an eXtreme Feedback Device for his Hudson builds and I soon followed suit with an XFD of my own. That introduced me to 2 things:
  - the USB Bit Whacker, which is an ingenious and cheap little device making it easy to go from the logical world of software engineering to the "magical" world of physical computing.
  - where he had sourced the Bit Whacker. And THAT was the beginning of an eye-opening journey for me into the world of electronics, which had to that point seemed way over my head.

Being such a hacker by nature, I soon started to abuse the Bit Whacker to do other things than simple XFD duty. I created a "weather station" for OpenDS, the project I was officially working on at the time. I will try to post some pictures and a description of this weather station, as my original blog just got wiped by Oracle (nice, btw).

I then got so into it that I bought a couple of PICAXEs, because it's a microcontroller that's easy to start with AND it's roughly $8 with taxes, so no harm if I can't wrap my head around making it work. With the first PICAXE, I made a water game for my kids that sprays water from underneath my deck. There are four zones of sprayer heads, 4 heads per zone. I'll post a video of it running tomorrow. All that to say that this was really the start of a great journey.
I soon wanted to do more advanced stuff and looked into arduino. The only issue is that wireless is a pain, most options being way too expensive for the occasional DIYer.

Enter JeeLabs. Once again I ran into an awe-inspiring engineer, driven and getting stuff done. I have been addicted to his daily weblog, where he shares his experience with the various projects he's working on or goes about explaining the basics of electronics in his "easy electrons" series, which in itself is worth the read for the software engineer I am. We're so "removed" from the physical world that it's really helpful; if you're considering starting an electronics project, I recommend checking out his easy electrons blog posts.
Anyway, what's amazing is not WHAT he does but HOW he goes about it, trying to actually understand what's going on. He doesn't claim to know everything and admits when he doesn't, and you can walk the path with him to get to the point that it all makes sense. It's all about the attitude!
That prompted a purchase from moderndevice to source his electronics goodness and get crackin' with the next project.
Props JCW!
Keep posting, I'd be a total downer if you stop ...

July 25, 2011

Strap in, the "lab" is ramping up

You may not have seen the announcement (at Cloud Identity Summit), but there is a new "department" at UnboundID dubbed UnboundID Labs, and you can find it here. It may have been somewhat quiet, and there is but one project in there for now, SCIM (Simple Cloud Identity Management), but that doesn't mean we don't have other stuff on the shelves ready to be given some visibility. So stay tuned, and in the meantime, check out SCIM: probably the fastest a consortium (or technical group or committee or whatever you want to call it) has ever gone from agreeing on a spec to putting out a reference implementation, all thanks to the awe-inspiring engineers who built it.

July 22, 2011

Audit your environment in 10 seconds

So you got servers up and running, how do you make sure their configurations are in sync ?
The Meat
Simple: we provide a tool called ldap-diff that compares two trees, and it can be used to compare two servers' configurations.
For example:

$ ldap-diff --outputLDIF sourcetotargetdiff.ldif --baseDN cn=config --sourceBindDN "cn=directory manager" --sourceBindPassword admin123 --sourcePort 1389 --sourceHost sourceDSIP --targetPort 1389 --targetHost targetDSIP --targetBindDN "cn=directory manager" --targetBindPassword admin123 --searchFilter '(!(objectclass=*replication*))' --numPasses 1 "^userPassword" "+" "*" "^modifyTimestamp" "^modifiersName" "^ds-entry-checksum" "^ds-update-time" "^ds-create-time" "^ds-entry-unique-id" "^creatorsName" "^createTimestamp" "^entryUUID" "^ds-cfg-password" "^pwdChangedTime"

Put that in a loop to iterate across your servers, that's it!

February 8, 2011

Tools for monitoring JVM memory usage

This is more like a note to self than anything else because I have found over time that I often come back to my own posts to fish for little details.

OK, so let's get right to it: you've got a Java-based app running somewhere and you need to take a look at it. The 2 options I like, in this order, are:

  1. VisualVM
  2. JConsole

Visual VM

I like this one best because more work has gone into its UI, which does make a difference. Also, when you have an issue that you suspect will be difficult to reproduce if you bounce the app, VisualVM is nice because it doesn't require restarting the application. To get started, just start jstatd; it comes standard with your JDK. All you need is to create a file like /tmp/my.policy and put the following contents in it:


grant codebase "file:${java.home}/../lib/tools.jar" {
   permission java.security.AllPermission;
};

Then start jstatd by doing something like this:

nohup $JAVA_HOME/bin/jstatd -J-Djava.security.policy=/tmp/my.policy 2>&1 &


So the convenient part about this is that we didn't have to touch our application and we are now able to observe it.

Fire up VisualVM and point it to the host where you started jstatd and observe!

There is however a really nice feature in VisualVM that shows where CPU time is spent in the observed application, but it requires JMX access to the application, which means that you will need to add some JVM options. Here they are:
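Something like the following (these are the standard com.sun.management flags; the port number is arbitrary, and disabling authentication and SSL is only acceptable in a sandbox; yourapp.jar stands in for your application):

```
java -Dcom.sun.management.jmxremote \
     -Dcom.sun.management.jmxremote.port=9999 \
     -Dcom.sun.management.jmxremote.authenticate=false \
     -Dcom.sun.management.jmxremote.ssl=false \
     -jar yourapp.jar
```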

It's not ideal and definitely not production worthy but it'll get you going when you want to tinker with your perfs in a sandbox.


JConsole

JConsole is handy because of its ubiquity, but it isn't nearly as nice as VisualVM.

First, it always needs the JMX JVM options shown above. That may be a showstopper in and of itself, especially for a production app that wasn't started with the JMX options for security reasons. In a lot of cases, you'll be flown in to put out a fire in a shop where best practices are in place to block everything that isn't well understood (instead of trying to understand the uses and weigh the pros versus the cons), and you may also see situations where an issue is occurring on a live instance but you know that restarting the app will make it go away for an unknown amount of time: 8 hours, a week, a month, no clue.

Second, when you start a remote JConsole instance to observe the application, you need to know the port. Now, that was designed by guys who never had to put out fires in real production situations with SLAs. Production or operations people NEVER let you get your hands on a live machine. In a lot of cases, you have to submit a request even if you only need to cat a config file to find out what JMX port was configured some years back. At least with VisualVM, jstatd relays that information, so it is able to discover all running Java processes on the machine it runs on and give you direct access to them. Sweet.

That was all for today ...

January 28, 2011

Introducing UnboundID Server SDK - Future-proof your investment

UnboundID released the UnboundID Server SDK to future-proof your investment in your technology choice for your identity and application data. There will always be something new the business has to react to: a new device that didn't exist 18 months ago that your identity platform will absolutely need to support to hit its targets, a new application you must bring to market within the quarter; things that were unforeseeable when the decision was made on which technology to use for your platform. But there is one thing that you can choose: a technology partner that will commercially support your growing and changing needs, with a product line that can grow with you and shoulder the complexity of keeping pace with innovation.

The Meat
The Server SDK is a development kit for all current and coming UnboundID products, empowering organizations with greater control over their most precious asset and industrial tool: the application data store and identity repository. It allows you to extend and enrich the functionality of the products with custom-tailored business logic, to better interact with existing systems or simply to deliver needed functionality closest to the data.

So in short, the UnboundID Server SDK brings an unprecedented tool to the IT architect's tool box. A tool that changes everything. You don't have to architect a monolithic platform so rigid that your application development teams have to do all the hard work of keeping your organization nimble and your business in the race in a competitive market place. You can rationalize this into the platform, by providing a rock solid, near real time platform that can adapt to new demands.

This is especially important when organizations typically have hundreds of applications, all with different needs and uses for the data, tapping into one common infrastructure for rapidly changing data like location, presence, identity (authentication, authorization) and such.

Let the platform do it, save on development costs by factoring changes in one central trusted place. Instead of having to modify each application when an enhancement is needed, which is often the case today, let the platform handle it. The platform naturally brings features that would be costly to implement in each application, most notably: distribution of data, security policies including authentication and authorization but also data encryption, tamper detection, reliable replication, breach detection and notification.
You need a single entity to manage all these aspects to let each application focus on the added value they bring to the business, the end user or the customer.

I'm going to use two examples to illustrate how powerful controlling the platform is:

  • In the lead-up to the recent events in Tunisia, Facebook realized that the government of Tunisia was eavesdropping (an "Eve" attack in crypto parlance) in order to harvest Tunisian Facebook users' credentials, which it would later use to delete those users' accounts. Because Facebook has written their platform from the ground up, they have absolute control over it, and they could quickly react to this new situation by introducing a new connection policy (forcing HTTPS) and a new password policy (asking users to identify friends from their network in a social, captcha-like scheme). All that with no interruption of service or any inconvenience to users anywhere.
  • A banking institution operates a large identity infrastructure serving both customers and collaborators. Amid rising security concerns, they decide to implement a new feature for their customers: a one-time password they can use in riskier-than-usual situations (checking balances over an airport's free wifi hotspot, for example). They do so by writing the adequate extension and rolling it out without taking the service down. Not only are customers not inconvenienced, but all the applications they know and love immediately benefit from the new feature without a single change on their part: everything is still the same from where they sit. Additionally, the banking institution implements a two-factor extension for collaborators; once again, the usual tools they have to use remain unchanged, avoiding costly and lengthy internal staff training.
These are just two examples. There are a lot more.

I hear the open-source supporters say: "Well with open source software, I control the platform since I have the source, I can do anything I want!"

While this is true in the absolute, stop a second and ask yourself:
  • does my team actually have the knowledge required?
    In many cases, open source software comes with support only for vetted releases; make any alterations and you are on your own. Also, without a clean interface for building extensions to the core product, you may need to delve into the product more deeply than you'd like. Do you have the time to learn the product THIS well? The desire? When an organization gets to this level of involvement in an open source project, it often means becoming a contributor to the project, with resources on staff dedicated to it. Bang goes the cost... it just exploded.
  • Where can I get commercial support for my much needed extension?
    For open source projects that offer commercial support, an extension is typically only covered by that commercial support if it was developed by the company backing the open source project.
  • Indemnification?
    I will let you make up your own mind on that.
So how is UnboundID Server SDK different?

  • Extensions can be written by any organization's staff who have undergone our qualification and training.
  • Writing extensions, as you will see in upcoming posts, is very easy. You may write extensions delivering powerful features that the business may have delayed projects over in the past, in as little as a week including testing and sandbox roll-outs. The complexity of the core is well insulated inside the core; the API to enhance the server shields the developer from it.
  • Extensions are supported. Period.
  • The Server SDK comes with our products. Use it to extend any product to do nearly anything you'd like.

Controlling the platform is a great business enabler, as Google and Facebook can attest, and that's precisely what UnboundID brings to the identity sphere with its Server SDK.

Get it.

The official UnboundID Server SDK page can be found here.

January 24, 2011

Bullet Proofing UnboundID DS - part 2: Securing Connections

The rationale is pretty straightforward here: you want to secure connections from clients to your infrastructure in an effort to reduce the risk of compromised access.
The Meat
  The flow
This helps understand how we will secure the connection without compromising the user credentials: the credentials are actually transmitted over the connection AFTER it has been secured with TLS, which is the whole point, honestly.

Now comes the trick: we cannot completely block out unencrypted connections, or the startTLS extended operation itself could never be transmitted. Because the connection has to be initially unsecured, what we can do instead is only allow a BIND along with the startTLS extended operation; anything else will bounce until startTLS has secured the connection.
It will look like this:
Now the "secure policy" client connection policy will allow any operation for an authenticated user over the secured connection.
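From the client's perspective, the flow above can be exercised with the ldapsearch tool that ships with the server; the host name, DNs, password file and trust store path below are placeholders for your own environment:

```shell
# connect in the clear on the LDAP port, immediately promote the
# connection with startTLS, then bind and search over the encrypted link
ldapsearch --hostname ds.example.com --port 389 \
           --useStartTLS --trustStorePath config/truststore \
           --bindDN "cn=Directory Manager" --bindPasswordFile pwd.txt \
           --baseDN dc=example,dc=com --searchScope sub "(uid=jdoe)"
```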

Here is how:
  1. create a "secure connection" connection criteria requiring secure-only communication, a user authentication type of simple, SASL or internal, and a secure-only authentication security level.
  2. create a "secure policy" client connection policy with a priority of 1 referring to the "secure connection" connection criteria
  3. update the "default" client connection policy to only allow the Bind and Extended operations, restricting the allowed extended operation OID to startTLS
Here is how to do it through our 3.0.0 GUI:

Here is how to do it through our dsconfig CLI:

and finally, here are the dsconfig commands achieving the same result (from config-audit.log):

# Undo command: dsconfig delete-connection-criteria --criteria-name "secure connection"
dsconfig create-connection-criteria --criteria-name "secure connection" --type simple --set communication-security-level:secure-only --set user-auth-type:internal --set user-auth-type:sasl --set user-auth-type:simple --set authentication-security-level:secure-only

# Undo command: dsconfig delete-client-connection-policy --policy-name "secure policy"
dsconfig create-client-connection-policy --policy-name "secure policy" --set enabled:true --set evaluation-order-index:1 --set "connection-criteria:secure connection"

# Undo command: dsconfig set-client-connection-policy-prop --policy-name default --add allowed-operation:abandon --add allowed-operation:add --add allowed-operation:compare --add allowed-operation:delete --add allowed-operation:modify --add allowed-operation:modify-dn --add allowed-operation:search --remove allowed-extended-operation:
dsconfig set-client-connection-policy-prop --policy-name default --remove allowed-operation:abandon --remove allowed-operation:add --remove allowed-operation:compare --remove allowed-operation:delete --remove allowed-operation:modify --remove allowed-operation:modify-dn --remove allowed-operation:search --add allowed-extended-operation:
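A quick way to verify the lockdown, once again with ldapsearch (placeholders as usual): the same search should bounce in the clear and succeed once the connection is promoted with startTLS.

```shell
# rejected: search is not among the default policy's allowed operations
ldapsearch --hostname ds.example.com --port 389 \
           --baseDN dc=example,dc=com "(objectClass=*)"

# accepted: the connection is secured with startTLS and the bind is
# performed over it, so the "secure policy" applies
ldapsearch --hostname ds.example.com --port 389 --useStartTLS \
           --trustStorePath config/truststore \
           --bindDN "cn=Directory Manager" --bindPasswordFile pwd.txt \
           --baseDN dc=example,dc=com "(objectClass=*)"
```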

January 18, 2011

Bullet Proofing UnboundID DS - part 1: Certificates

We're in the business of making you safe. Here is a quick step by step guide to making your authoritative source of authentication and authorization so safe it will bring peace and tranquility to CIOs and CTOs across the globe.
The Meat
  1. TLS vs SSL
    I am only going to cover TLS here because I believe it is going to be the most useful. In fact, while there are arguments going for SSL, TLS is better. Security-wise, the encryption is the same; as a matter of fact, SSL and TLS are very much the same thing. The difference here is that SSL implies that the connection is encrypted first and LDAP traffic is then carried over the encrypted link. With TLS, a connection is established to the directory server and then, at the client's request, its security can be elevated to a TLS-encrypted one with the cunning use of the startTLS extended operation. This means that TLS allows the flexibility to go to and from a secure environment based on mutual agreement by both the server and the client, and that clients only need to know a single port for both secure and insecure transactions. It greatly simplifies deployment and maintenance, and that too counts as part of improved security.
  2. Certificates
    Dealing with certificates certainly seems like the biggest hurdle to most people who have to go about securing their environment. In the enterprise, this usually means that you need your own PKI. For now, I will just show how trivial it is to get started.
    1. Server Side
      1. choose a password and store it in a read protected file
        echo password1 > key.pwd
        echo password2 > keystore.pwd
        echo password3 > truststore.pwd
        chmod 400 *.pwd
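Depending on your JDK, keytool can read passwords straight from such files via the :file modifier, which keeps them off the command line and out of your shell history. The key pair generation from the next step would then look like this:

```shell
# same key pair generation as below, but the store and key passwords
# are read from the protected files instead of being typed inline
keytool -genkeypair -keyalg RSA -keysize 2048 \
        -alias server -keystore server.keystore \
        -storepass:file keystore.pwd -keypass:file key.pwd
```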
      2. generate a key pair for the server
        #keytool -genkeypair -keyalg RSA -keysize 2048 -keypass password1 -alias server -keystore server.keystore -storepass password2
      3. export the server public key
        We will need the public key later to add it in the trust store on the client side.
        #keytool -exportcert -alias server -keystore server.keystore -storepass password2 -file server.cer
      4. generate a certificate signature request
        #keytool -certreq -alias server -keystore server.keystore -storepass password2 -keypass password1 -file server.csr
      5. submit the CSR to your certificate authority (CA); you will get a signed certificate in return. To install it, simply import it with the same alias you used when you created the key pair
        #keytool -importcert -alias server -keystore server.keystore -storepass password2 -file yoursignedservercertificate.cer
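A caveat: depending on your keytool version, this import may fail with a "Failed to establish chain from reply" error when keytool cannot build the full certificate chain. Importing the CA (and any intermediate) certificates into the keystore first, or passing -trustcacerts, usually clears it:

```shell
# make the chain available before importing the signed certificate
keytool -importcert -trustcacerts -alias ca \
        -keystore server.keystore -storepass password2 -file ca.cer
keytool -importcert -alias server \
        -keystore server.keystore -storepass password2 -file yoursignedservercertificate.cer
```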
      6. We will now create a trust store. The trust store is usually a good place to store the public keys that you trust and allows you to avoid polluting your keystore. Your key store contains private information. Your trust store contains information that is public. It allows for a clean separation of duties. You can also share your trust store on an NFS share to make it available to all servers while typically the key store would be more protected and specific to each server. In the example below, we create a trust store containing the public key for our CA and intermediate CA:
        #keytool -importcert -alias ca -keystore server.truststore -storepass password3 -file ca.cer
        #keytool -importcert -alias intermediateca -keystore server.truststore -storepass password3 -file intermediateca.cer
    2. Client Side
      Here it is much simpler, we will simply generate a self-signed certificate on the client side. Here's how:
      #keytool -genkey -alias client -keystore client.keystore -keyalg RSA -keysize 2048 -storepass password4 -keypass password4

      and now let's export the public key so we can later import it in the server trust store:
      #keytool -exportcert -alias client -keystore client.keystore -storepass password4 -file client.cer
    3. Mutual trust
      Here's how the server knows to trust the client and vice-versa. We need to import the client public key in the server trust store and the server public key in the client trust store.
      1. Server Side
        #keytool -importcert -alias client -keystore server.truststore -storepass password3 -file client.cer
      2. Client Side
        #keytool -importcert -alias server -keystore client.truststore -storepass password5 -file server.cer
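To sanity-check the whole setup, list both trust stores and make sure the expected aliases are present:

```shell
# server side: expect the "client" alias (plus the CA entries)
keytool -list -keystore server.truststore -storepass password3

# client side: expect the "server" alias
keytool -list -keystore client.truststore -storepass password5
```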