I only recently got around to testing on something more realistic, although still not quite up to date: a two-socket Dell R610 with Intel Xeon E5520 CPUs @ 2.27GHz.
But let's look at our test setup:
In essence, on the source side (left), the test extension to the server simulates a database that has 1,000 new changes every time the server polls it, and the server is configured to poll every 1ms. On the destination side, the test extension receives the changes and returns immediately without doing anything with them.
This setup has 2 main advantages:
- it rules out network latency to isolate just the box the Synchronization Server runs on
- it eliminates any latency due to either the source or the destination
That is, to date, the best way to measure the absolute best performance the Synchronization Server (or any piece of software, really) can achieve on a particular rig.
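To put the setup in perspective, a quick back-of-the-envelope calculation shows the offered load the simulated source generates (the numbers come straight from the setup above; the variable names are just for illustration):

```shell
# 1,000 new changes per poll, one poll every 1ms => 1,000 polls per second
changes_per_poll=1000
polls_per_sec=1000
echo $(( changes_per_poll * polls_per_sec ))  # changes/sec offered to the engine
```

So the source side offers 1,000,000 changes per second, comfortably more than the engine can drain, which guarantees the engine itself is the bottleneck being measured.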
I'm going to cut to the chase, since this is part 2 of the series, and show you 2 things:
and the less sexy capture:
[root@r610-02 UnboundID-Sync]# ./get-pipes-throughput
getting first measurement for all started sync pipes...
first measurement acquired. Reading: 3430978
Waiting for 10 seconds...
Getting second measurement...
Second measurement acquired. Reading: 4315672
884694 operations processed in 10 seconds
Throughput 88469 Sync/sec
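The internals of `get-pipes-throughput` aren't shown here, but the capture suggests a simple counter-delta measurement: read a monotonically increasing operations counter twice, a fixed interval apart, and divide the difference by the interval. A minimal sketch, using the two readings from the capture above:

```shell
# Two readings of the cumulative ops counter, taken 10 seconds apart
first=3430978
second=4315672
interval=10

ops=$(( second - first ))
echo "$ops operations processed in $interval seconds"
echo "Throughput $(( ops / interval )) Sync/sec"
```

This reproduces the 884,694 operations and 88,469 Sync/sec shown in the capture.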
So there you go: we have 8 physical cores on this platform, and the rule of thumb is that you can get about 10,000 transactions per second per core with Sync (11,058 in this case). Note too that Sync scales really well vertically and is able to take advantage of all CPUs on the machine: there was only 7.74% CPU idle overall on this machine at the time these metrics were taken!
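The per-core figure is just the measured throughput divided by the core count:

```shell
measured=88469   # Sync/sec from the capture above
cores=8          # physical cores on the R610
echo $(( measured / cores ))  # per-core throughput, ~11,058 Sync/sec/core
```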
In reality, there are going to be a few things in the way of achieving such numbers:
- Network latency, which can hit you on the source side, the destination side, or both if your Sync server is colocated with neither
- Source latency. For example, if you query an RDBMS source, there will be an inherent lag to the source engine processing the request and serving the results to the sync engine.
- Destination latency. Same as the source, except the destination is actually written to, which can take even longer.
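To get a feel for how much these latencies matter, a back-of-the-envelope bound follows from Little's law: sustained throughput can't exceed the number of operations in flight divided by the per-operation round-trip latency. The values below are purely illustrative, not measurements:

```shell
in_flight=100   # hypothetical: concurrent sync operations in flight
latency_ms=2    # hypothetical: network + destination write latency per op

# throughput bound = in_flight / latency_seconds
awk -v c="$in_flight" -v l="$latency_ms" \
    'BEGIN { printf "%d ops/sec max\n", c / (l / 1000) }'
```

With those example numbers the ceiling is 50,000 ops/sec, already well below the 88,469 Sync/sec measured on the loopback setup; even a couple of milliseconds of latency per operation changes the picture significantly unless concurrency is raised to compensate.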
In Part 3, I will get to how we deal with these hurdles and what you can tune to help keep the synchronization as fast as possible.