We currently have a consolidated database with a number of remotes and are using dbremote.exe in batch mode with the FILE method of replication. It is controlled by a custom synchronisation tool: an executable that automates a three stage synchronisation process to replicate changes between them.

[Original Process]

Stage 1: Remote database send and receive

Stage 2: Consolidated database send and receive

Stage 3: Remote database send and receive

Stage 1: dbremote.exe -b -s -r -c "uid=dba;pwd=***;dbkey=***;dbn=RemoteDatabaseName;eng=RemoteEngineName;commlinks=sharedmemory"
Stage 2: dbremote.exe -b -s -r -c "uid=dba;pwd=***;dbkey=***;dbn=ConsolidatedDatabaseName;eng=ConsolidatedEngineName;commlinks=tcp(serverport=2638;host=ServerIPAddress)"
Stage 3: dbremote.exe -b -s -r -c "uid=dba;pwd=***;dbkey=***;dbn=RemoteDatabaseName;eng=RemoteEngineName;commlinks=sharedmemory"

This three stage process was put in place many years ago, when we were still using SQLA7, and its original design has not since been re-evaluated for optimality. We are now using SQLA17 and, as part of the update to the synchronisation tool, we are questioning whether this is actually the most effective way to ensure all changes are passed between the consolidated database and all remotes. In particular, we have observed that the current process often leaves an unprocessed message from the remote waiting to be applied to the consolidated database on the next synchronisation run.

Therefore, I am proposing changing the 3 stage process to the following:

[Proposed Process Change A]

Stage 1: Consolidated database send only

Stage 2: Remote database send and receive

Stage 3: Consolidated database receive only

Stage 1: dbremote.exe -b -s -c "uid=dba;pwd=***;dbkey=***;dbn=ConsolidatedDatabaseName;eng=ConsolidatedEngineName;commlinks=tcp(serverport=2638;host=ServerIPAddress)"
Stage 2: dbremote.exe -b -s -r -c "uid=dba;pwd=***;dbkey=***;dbn=RemoteDatabaseName;eng=RemoteEngineName;commlinks=sharedmemory"
Stage 3: dbremote.exe -b -r -c "uid=dba;pwd=***;dbkey=***;dbn=ConsolidatedDatabaseName;eng=ConsolidatedEngineName;commlinks=tcp(serverport=2638;host=ServerIPAddress)"

In theory, this should be more efficient and never leave any unapplied messages waiting for the next synchronisation run. However, it then occurred to me that the same should be true of the following three stage process, which is more similar to the original:

[Proposed Process Change B]

Stage 1: Remote database send only

Stage 2: Consolidated database send and receive

Stage 3: Remote database receive only

Stage 1: dbremote.exe -b -s -c "uid=dba;pwd=***;dbkey=***;dbn=RemoteDatabaseName;eng=RemoteEngineName;commlinks=sharedmemory"
Stage 2: dbremote.exe -b -s -r -c "uid=dba;pwd=***;dbkey=***;dbn=ConsolidatedDatabaseName;eng=ConsolidatedEngineName;commlinks=tcp(serverport=2638;host=ServerIPAddress)"
Stage 3: dbremote.exe -b -r -c "uid=dba;pwd=***;dbkey=***;dbn=RemoteDatabaseName;eng=RemoteEngineName;commlinks=sharedmemory"

Are there any advantages/disadvantages, in terms of data consistency and efficiency, between the original process and either of the newly proposed processes that I have failed to consider?

The goal is to maximise data consistency between databases and make the process as efficient as possible (prioritising consistency over efficiency), while ideally staying within a three stage process and leaving no unapplied messages in the consolidated and remote replication folders.

asked 12 Nov, 10:33 by robertwigley

edited 13 Nov, 13:46

Without an understanding of exactly how (and why) the "custom synchronization tool" changes the way dbremote works, it is unlikely that anyone here can offer an opinion. In other words, why are the three stages even necessary?

(12 Nov, 12:38) Breck Carter

FWIW, have you also considered switching to MobiLink, as that allows you to "guarantee" consistent and fresh data after a synchronization?

In other words, after a successful sync, the cons has fresh data from the remote and vice versa, there are no messages left in someone's inbox...

(12 Nov, 13:08) Volker Barth

Switching to MobiLink would require a fairly significant redesign of the application. We are simply looking to tighten up the existing SQL Remote process as much as possible. It has generally been working well for nearly 20 years, albeit with the occasional complaint that not all data comes across immediately, which is what we are trying to tighten up on.

(12 Nov, 13:27) robertwigley

The "custom synchronisation tool" is a simple executable file that wraps dbremote.exe to be run three times in order to automate the process. I have updated the question to be clearer and included the actual commands being run.

The three stages are needed to send the messages out to the remotes from the consolidated and then apply them to the remotes and send them back etc. Is this not normal for all SQL Remote operations?

(12 Nov, 13:29) robertwigley

Just a few more questions:

  • So you have full control over when each remote and the cons run dbremote? According to your batch, it seems you run dbremote on the same machine for both the remote and the cons? (That somewhat surprises me; I usually think of a SQL Remote setup built from machines that are not directly connected, just sharing their messages via a message transport. In such a setup, you usually cannot directly influence when dbremote is run on both sides.)
  • Does the cons modify the data it gets from the remotes (such as adding a "consolidated" timestamp or a "registration key" to remote data, or doing other kinds of "central calculations"), so there is a need to "bounce that back" to the remotes?
  • What about data sharing: do the remotes have overlapping data, so that data from one remote has to be sent to other remotes, too?
  • Are there possible update conflicts that make the cons send data back to the remote?
  • Does the cons have to spread centrally managed data, such as product lists or the like, or is it just a collector of remote data?
  • Do you want each message to be confirmed as applied by the receiver, i.e. the sender expects to get the according "ping message" immediately? (That will not work when you expect constant modifications.)
(13 Nov, 03:39) Volker Barth

Yes, replication is manually instigated by the users running the synchronisation tool when they are connected to the network hosting the consolidated database. The tool wraps the three dbremote.exe commands. You can duplicate the behaviour by putting those three commands in a batch file. The tool additionally provides a nice GUI to the user and does some error handling by checking the logs for each dbremote.exe command.
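For illustration, a minimal batch file reproducing what the tool automates might look like the following sketch; the connection strings are the placeholders from the question, and the -o log files (the file names are hypothetical) give the error-handling step something to inspect:

```bat
@echo off
rem Stage 1: remote database receive and send
dbremote.exe -b -s -r -o remote_stage1.log -c "uid=dba;pwd=***;dbkey=***;dbn=RemoteDatabaseName;eng=RemoteEngineName;commlinks=sharedmemory"
if errorlevel 1 goto failed

rem Stage 2: consolidated database receive and send
dbremote.exe -b -s -r -o cons_stage2.log -c "uid=dba;pwd=***;dbkey=***;dbn=ConsolidatedDatabaseName;eng=ConsolidatedEngineName;commlinks=tcp(serverport=2638;host=ServerIPAddress)"
if errorlevel 1 goto failed

rem Stage 3: remote database receive and send again, applying the cons reply
dbremote.exe -b -s -r -o remote_stage3.log -c "uid=dba;pwd=***;dbkey=***;dbn=RemoteDatabaseName;eng=RemoteEngineName;commlinks=sharedmemory"
if errorlevel 1 goto failed

exit /b 0

:failed
echo A synchronisation stage failed - check the dbremote log files.
exit /b 1
```

The actual tool additionally parses the log contents rather than relying only on the exit code, but the stage ordering is the same.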

Replication is via the FILE method, putting the messages into a file share on the server hosting the consolidated database.

I am not aware of the consolidated database modifying the received data in any way that would require a 'bounce back' to the remotes, unless SYS.SYSREMOTEUSER being updated with the timestamp of the last time received etc would cause that?

All data from all remotes is shared with the consolidated database and all other remotes, so that all databases are identical.

Yes, it is possible that an update conflict could be generating the unprocessed message we are seeing at the end of a synchronisation run. That had occurred to me as a possible explanation for its existence, but the message appears fairly regularly (most likely every time), and I would not expect that if it were caused by conflict resolution.

By using the three stage sync process, according to my understanding of SQL Remote replication, this should mean that both the consolidated and the remote database should be fully up-to-date after a synchronisation run, as all messages should be sent, received and applied at both ends?

(13 Nov, 05:27) robertwigley

> By using the three stage sync process, according to my understanding of SQL Remote replication, this should mean that both the consolidated and the remote database should be fully up-to-date after a synchronisation run, as all messages should be sent, received and applied at both ends?

Is it possible that there are data modifications by other transactions appearing between the first and last run of such a batch, say, can the cons be forced to replicate from a second remote while the first is within its batch?

A general remark: you talk of "database send and receive", but it is actually the other way round: the receiving phase takes place before the sending phase when both are combined within one dbremote run.

As to your proposed changes, the database in stage 2 would not get confirmation that the messages it has sent have been applied by the counterpart, and in case it is a remote, it would not get "bounced back" update conflict messages within that batch; so, for the latter case, data is not guaranteed to be identical even when there is no impact from other transactions.


We have used some logic to detect "ping messages" (1) and react to them in a special way: if a database has only "ping messages" in its inbox, they would just update the confirmation status of the counterpart, so you might want to ignore these (i.e. not trigger dbremote -r then).

(1) "Ping messages" do not contain DML statements but just confirm that the incoming messages have been applied, or they "ask" a remote whether it is "still alive" when the cons runs in batch mode and has not had to generate messages for the according remote for some runs. You can usually identify them by their short message file size.
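As a hypothetical sketch of that logic for the FILE transport: before triggering the receive phase, inspect the sizes of the message files waiting in the database's inbox folder on the share, and skip the -r run if every file is small enough to be a ping. The share path and the 1024-byte threshold are assumptions that would need calibrating against your own message files:

```bat
@echo off
rem Sketch: skip the receive phase when only (presumed) ping messages wait.
rem \\Server\Share\inbox\REMOTE1 and the 1024-byte threshold are placeholders.
set RECEIVE=0
for %%F in (\\Server\Share\inbox\REMOTE1\*) do (
    if %%~zF GTR 1024 set RECEIVE=1
)
if %RECEIVE%==1 (
    dbremote.exe -b -r -c "uid=dba;pwd=***;dbkey=***;dbn=RemoteDatabaseName;eng=RemoteEngineName;commlinks=sharedmemory"
) else (
    echo Only small message files found - treating them as pings and skipping -r.
)
```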

(13 Nov, 06:39) Volker Barth

It is possible that another remote might be in the synchronisation process at the same time, but it is unlikely and we would expect additional messages to be produced if this were the case. The additional message appears when we know that there is only one synchronisation process running. Additionally, only one instance of dbremote.exe can be run at a time on each database. So if two synchronisation processes collide on the consolidated stage, dbremote.exe produces the following error:

"Cannot register ''sybase.asa.dbremote.both'' since another exclusive instance is running"

Yes, sorry, I know the receiving phase happens before the sending one when both are run in the same command. To be technically correct, I should really be talking about receiving and sending.

That's really interesting regarding "ping messages". That could well be what we are seeing in that case. Could a "ping message" literally exist to update the SYS.SYSREMOTEUSER table to say they have been received by the remote?

So, if I understand you correctly, to get a complete synchronisation, including making sure both ends have received the "ping messages", we would potentially need to run a four stage process, or split out the receive and send parts into separate dbremote.exe commands?

(13 Nov, 07:15) robertwigley

> Is this not normal for all SQL Remote operations?

No, not by any means.

Is it correct to assume you are using SQL Remote to implement "request and reply" conversations between the consolidated and remote?

What about frequent "broadcasts" of data coming from one remote and being immediately forwarded to other remotes?

Both kinds of operations might need repeated dbremote runs, and neither are common.

(13 Nov, 07:53) Breck Carter

FWIW a dbremote run that doesn't have anything to do also doesn't take much time to run.

(13 Nov, 07:56) Breck Carter

Yes, I think you would need a four part process in order to achieve that:

  1. DB1: send
  2. DB2: receive and send
  3. DB1: receive and send
  4. DB2: receive

As stated, if one site has nothing to send (except its confirmation), you might skip the next step.

But as usual, testing should help to assure that, and I would recommend running dbremote with -v and -o to log the contents of the applied messages - that will help you find out about possible update conflicts (or the results of dbremote hook procedures, if you use those) and the like.
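For instance, a verbose, logged run against the consolidated database could look like this (the log file name is a placeholder; the connection string is as in the question):

```bat
rem -v logs the contents of applied messages; -o appends dbremote output to a file
dbremote.exe -b -s -r -v -o cons_verbose.log -c "uid=dba;pwd=***;dbkey=***;dbn=ConsolidatedDatabaseName;eng=ConsolidatedEngineName;commlinks=tcp(serverport=2638;host=ServerIPAddress)"
```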

(13 Nov, 08:03) Volker Barth
(13 Nov, 08:47) Breck Carter

Yes, we have thought about it and are potentially considering this for the consolidated database, in order to speed up stage 2.

(13 Nov, 13:37) robertwigley

Thank you for confirming what I had concluded from your previous comment. We will do some tests, but it does sound very much like this is the potential issue with the way it has always been run before (i.e. the original process). I am now pretty confident that this new four stage process will be optimal and fulfil our wish to tighten this up.

(13 Nov, 13:38) robertwigley

Yes, you could say it is a 'request and reply' type system that is run on demand by the users. There are no regular broadcasts of data as it is currently always run on demand in the way described by the original process.

(13 Nov, 13:44) robertwigley

Thinking about this some more, based on what you have said, it actually sounds like the following would be the optimal process to make sure that both ends are fully updated and aware of any conflict resolution bounce back and ping messages, leaving no unprocessed messages in the replication folders:

Stage 1: Remote database send only

Stage 2: Consolidated database send and receive

Stage 3: Remote database send and receive

Stage 4: Consolidated database receive only
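Mirroring the command style used in the question, that four stage run would presumably look as follows; the connection strings are the same placeholders, and the -s/-r splits simply follow the stage descriptions above:

```bat
rem Stage 1: remote database send only
dbremote.exe -b -s -c "uid=dba;pwd=***;dbkey=***;dbn=RemoteDatabaseName;eng=RemoteEngineName;commlinks=sharedmemory"
rem Stage 2: consolidated database receive and send
dbremote.exe -b -s -r -c "uid=dba;pwd=***;dbkey=***;dbn=ConsolidatedDatabaseName;eng=ConsolidatedEngineName;commlinks=tcp(serverport=2638;host=ServerIPAddress)"
rem Stage 3: remote database receive and send
dbremote.exe -b -s -r -c "uid=dba;pwd=***;dbkey=***;dbn=RemoteDatabaseName;eng=RemoteEngineName;commlinks=sharedmemory"
rem Stage 4: consolidated database receive only
dbremote.exe -b -r -c "uid=dba;pwd=***;dbkey=***;dbn=ConsolidatedDatabaseName;eng=ConsolidatedEngineName;commlinks=tcp(serverport=2638;host=ServerIPAddress)"
```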


answered 13 Nov, 08:28 by robertwigley

converted 03 Dec, 10:49
