UnboundID offers a synchronization technology designed for near-real-time synchronization between two end points, A and B. It works by building an "object" from data gathered at both end points, comparing the two objects, and applying the minimum set of changes to the destination so that the destination matches the source. Bi-directional sync is done by setting up one pipe from A to B and one pipe from B to A, taking care to ignore changes applied by the synchronization engine itself to avoid infinite loops.
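To make the model concrete, here is a minimal conceptual sketch in Python. This is not UnboundID's actual API; the `diff`, `sync_entry`, and `_modifier` names are hypothetical, and the loop-avoidance tag stands in for however the real engine recognizes its own writes.

```python
# Conceptual sketch of one sync pass for a single entry: compute the
# minimal difference between source and destination, apply only that.
META = {"_modifier"}  # hypothetical bookkeeping attribute, ignored when comparing

def diff(source: dict, dest: dict) -> dict:
    """Return only the attributes that must change for dest to match source."""
    changes = {}
    for key, value in source.items():
        if key not in META and dest.get(key) != value:
            changes[key] = value          # add or modify
    for key in dest:
        if key not in source and key not in META:
            changes[key] = None           # None marks a deletion
    return changes

def sync_entry(source: dict, dest: dict, marker: str = "synced-by-engine") -> dict:
    """Apply the minimal changes so dest matches source, then tag the write
    so the reverse pipe (B -> A) can ignore the engine's own changes."""
    for key, value in diff(source, dest).items():
        if value is None:
            dest.pop(key, None)
        else:
            dest[key] = value
    dest["_modifier"] = marker            # loop-avoidance tag (hypothetical)
    return dest
```

The point of the diff step is that an unchanged entry costs almost nothing: if `diff` returns an empty dict, no write is sent to the destination at all.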
When we offer this to solve some tough business problems, we're often asked about the performance of Sync. Will it be able to keep up with my current system?
Most likely? Yes. I recently wrote a couple of mock end points simulating "ideal" end points to see how fast the sync engine processes changes. I don't have hard numbers to share, mainly because I tested on my laptop and still need to iron out some wrinkles in my code, but I can already point out that:
- The sync core is really efficient. It could probably be improved slightly here and there, but as it stands it's already quite lean.
- In the unlikely event that you have some mythical database outpacing Sync, we could scale out by running Sync on multiple machines, each instance managing its own subset of the data.
- More than likely, one or both of your end points are the bottleneck.
Because the mock end points run as extensions inside Sync itself, network latency is also taken out of the measurement. In a later version, I plan to add the ability to simulate network latency, to better determine the impact of the network on the system as a whole.
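A mock end point of this kind can be sketched in a few lines. This is an illustrative stand-in, not the actual extension code: the `MockEndpoint` class and its `latency_ms`/`jitter_ms` parameters are assumptions showing how the planned latency simulation could slot in.

```python
import random
import time

class MockEndpoint:
    """A hypothetical 'ideal' end point: an in-memory store that can
    optionally sleep on each call to simulate network round-trip latency."""

    def __init__(self, latency_ms: float = 0.0, jitter_ms: float = 0.0):
        self.data = {}
        self.latency_ms = latency_ms
        self.jitter_ms = jitter_ms

    def _delay(self):
        # With latency_ms == 0 (the "ideal" case), calls return immediately,
        # so the benchmark measures only the sync engine's own overhead.
        if self.latency_ms:
            time.sleep((self.latency_ms + random.uniform(0, self.jitter_ms)) / 1000.0)

    def fetch(self, key):
        self._delay()
        return self.data.get(key)

    def apply(self, key, value):
        self._delay()
        self.data[key] = value
```

With latency left at zero, the end points are effectively infinitely fast, which isolates the cost of the sync core; dialing `latency_ms` up would then show how much of real-world throughput is network-bound rather than engine-bound.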