
We use SQL Remote on Sybase 10 to replicate a consolidated database against several tens of remotes. We have a very fast-growing table on the consolidated database, and some of the remotes work on a seasonal basis, so the consolidated database has to collect and keep the messages for such remotes for several weeks. We then run into the problem that the number of messages exceeds the maximum.

Question: is there any way to increase the length of the message file extension? Currently it is 3 characters, so we can have at most 36^3 = 46656 messages (which is sometimes not enough). At the moment the only workaround I see is to increase the maximum size of messages (DBREMOTE -l).

One more question: does DBREMOTE send the transactions exactly as they appear in the transaction log files? I.e., if I execute an insert statement on a table 10 times, would DBREMOTE then repeat the table name and the column names 10 times?

I've synchronized (SYNCHRONIZE SUBSCRIPTION TO...) the server against this same remote (for which the 46656 messages were not enough) and it could then synchronize using only 700 messages!

What made the difference? Does the synchronization send compressed messages (compressing the messages after generating them)? Or does it generate the messages with a different logic (maybe table by table, so that it doesn't need to mention column names any more)?

BTW, the mentioned fast-growing table has long column names and several columns as primary key, and most transactions are INSERT statements with very few UPDATE/DELETE.

asked 01 Apr '19, 03:55 by Baron

edited 01 Apr '19, 05:04

More or less some hints, not full answers to your questions:

As Reg has stated, one way to maximize the "overall message capacity" is to increase the maximum message size. However, that won't help if the different messages are created by different runs of SQL Remote.

AFAIK, a SYNCHRONIZE SUBSCRIPTION statement does not use the transaction log but the current database contents to get all subscribed rows, therefore it can send them in compact form and use the maximum message size per message. For a row that has been changed several times, it will only send one instance of that row whereas a normal replication will send each INSERT/UPDATE/DELETE as its own operation. Based on transaction boundaries that may mean that many of the regular messages only contain a few operations.

Note that SQL Remote also uses "ping messages" to check whether remotes that have not "answered" for a while are still responding, even if there is no fresh data for them.


answered 01 Apr '19, 06:11 by Volker Barth

edited 01 Apr '19, 06:16

> **"For a row that has been changed several times, it will only send one instance of that row whereas a normal replication will send each INSERT/UPDATE/DELETE as its own operation. Based on transaction boundaries that may mean that many of the regular messages only contain a few operations"**

I understand this, but most of the transactions on this (fast-growing) table come from INSERT statements, which means that even with normal replication there is (mostly) only one instance of each row!

(01 Apr '19, 06:27) Baron

"Note that SQL Remote also uses "ping messages" to check whether remotes that have not "answered" for a while are still responding, even if there is no fresh data for them."

This is not our case, because the cons. has fresh data for this remote, but the remote is turned off (or at least DBREMOTE is not running and the message exchange is not taking place for this remote).

(01 Apr '19, 06:45) Baron

My question is then: does the length of the column names have an impact on the length of the generated messages or not? I mean, which of the following statements does DBREMOTE use to deliver an INSERT statement?

Insert into table1 (longcolumnname11, longcolumnname2, longcolumnname3) values (1,2,3).


Insert into table1 (?,?,?) values (1,2,3).

or maybe even

Insert into table1 values (1,2,3).

(01 Apr '19, 06:48) Baron
Replies hidden

AFAIK, when you set the SQL Remote COMPRESSION option to 0 (on a test system, I strongly suggest) you get readable messages. That may answer your question. - However, I do not know whether compression will modify the contents itself.

When using -v to log applied messages, the statements look like your first sample, i.e. with owner name and full column name list, not parametrized. And I guess the column list is required for SQL Remote because it is allowed to publish just a subset of the columns of a table, so a generic INSERT without a column list could fail. - Nevertheless, the fact that DBREMOTE logs operations that way does not necessarily mean the (compressed) message contents are identical to that format.

(01 Apr '19, 08:05) Volker Barth

Thank you for the hint about COMPRESSION = 0. Now I can see that the column names are repeated with each INSERT statement:

values (1,2,3)?SINSERT INTO table1 (longcolumnname11, longcolumnname2, longcolumnname3)

Yes, I was reading the replicated transactions from the output of DBREMOTE -v (but this, as you said, is a translated form of the original message).

(01 Apr '19, 08:24) Baron

Volker was correct that your best option is to increase your message size, but note that this will need to be done at all remote databases and involves a change to the SQL Remote command line. You could also consider decreasing the send frequency on the consolidated database to better take advantage of the larger message size. If you send every 30 minutes, and every 30 minutes you generate 40 KB of SQL Remote messages, then increasing your maximum message size from the default of 50 KB to 500 KB will not change anything, since you will continue to generate 40 KB messages every 30 minutes.
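The interaction between send frequency and maximum message size described above can be sketched numerically. This is only an illustration of the arithmetic; `data_kb` and `max_msg_kb` are hypothetical values, not SQL Remote parameters:

```python
import math

def messages_per_interval(data_kb, max_msg_kb):
    """Roughly how many message files one send phase would produce
    for a given amount of generated data (ignoring compression)."""
    return max(1, math.ceil(data_kb / max_msg_kb))

# 40 KB generated every 30 minutes: raising the cap from 50 KB to
# 500 KB changes nothing, because each interval already fits in one message.
print(messages_per_interval(40, 50))    # 1
print(messages_per_interval(40, 500))   # 1

# A backlog of 20 MB for a seasonal remote is where the cap matters:
print(messages_per_interval(20_000, 50))   # 400
print(messages_per_interval(20_000, 500))  # 40
```

A larger maximum message size only reduces the file count when a single send phase actually has more data than one message can hold.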

The maximum number of messages is a little larger than 36^3; it's actually 47988. SQL Remote also uses files with one and two characters in the extension, so you can add an extra 36 + 36x36 messages.
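The capacity figure works out as follows, assuming base-36 extensions (digits 0-9 and letters a-z) of one, two, or three characters:

```python
# Message-file capacity per remote: every possible one-, two-, and
# three-character base-36 extension.
one_char = 36            # .0 .. .z
two_char = 36 ** 2       # .00 .. .zz  -> 1296
three_char = 36 ** 3     # .000 .. .zzz -> 46656

total = one_char + two_char + three_char
print(total)  # 47988
```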

The SQL Statements generated by SQL Remote include the column names, so a long list of long column names will increase the size of the message. Ten inserts on t1 will generate ten SQL statements that include all the column names ten times.

When you execute a SYNCHRONIZE SUBSCRIPTION, SQL Remote sends down deletes for all rows in the remote database followed by the syntax of the select statement executed to generate the data that follows. SQL Remote on the receiving end then translates the raw data after the SQL statement in the message into individual inserts at the remote database.

I. 2019-04-01 08:23:44. Received message from "cons" (0-0000720411-0000720656-0)
I. 2019-04-01 08:23:44. Applying message from "cons" (0-0000720411-0000720656-0)
I. 2019-04-01 08:23:44. DELETE FROM cons.child WHERE '1' IN (select "rep_id" from "cons"."parent" where "cons"."parent".
I. 2019-04-01 08:23:44. DELETE FROM cons.parent WHERE '1' = "rep_id"
I. 2019-04-01 08:23:44. DELETE FROM cons.admin
I. 2019-04-01 08:23:44. SELECT c_pkey, p_pkey, textcol FROM cons.child
I. 2019-04-01 08:23:44. 57 rows synchronized
I. 2019-04-01 08:23:44. SELECT p_pkey, rep_id, textcol FROM cons.parent
I. 2019-04-01 08:23:44. 19 rows synchronized
I. 2019-04-01 08:23:44. SELECT a_pkey, textcol FROM cons.admin
I. 2019-04-01 08:23:44. 40 rows synchronized
I. 2019-04-01 08:23:44. COMMIT

Ping messages should only be an issue if during a particular send phase of SQL Remote there is no new data for remote user "X", but SQL Remote has not received a confirmation of messages it has previously sent to remote user "X". It sounds like in your case, every time SQL Remote runs, because of your fast growing table, there will always be new data for remote user "X".

One final thing I should point out: when SQL Remote is receiving messages, the names of the files don't matter. If you have a single remote database that is not picking up messages, and you can see that it currently has ~40K messages waiting to be applied, physically go into the directory where the files reside and rename them with a command similar to

for %i in (*.*) do rename %i 190401_%i



answered 01 Apr '19, 08:49 by Reg Domaratzki

edited 01 Apr '19, 08:57

One final thing I should point out is that when SQL Remote is receiving messages, the names of the files don't matter...

In my understanding, as SQL Remote picks up files in the usual file name order, it is helpful here to use a prefix that sorts before the names of your regular message files (named after the sender, IIRC). SQL Remote will guarantee that it applies files in the correct order, but I guess this is an easier task if the file name order remains unchanged after the renaming.

(01 Apr '19, 09:52) Volker Barth
Replies hidden

Correct. Renaming the files will likely make the process of receiving and applying messages less optimal, but a less optimized process is better than a process that doesn't work. My suggestion to rename the files based on the current date (YYMMDD_%i) will likely ensure we still pick up messages in the order they are created, but I didn't actually test that.

I glossed over a lot of the details on renaming the files in this post since we're dealing with the directory for a remote database, which is typically only populated with messages from the consolidated database. The risk of creating a file that might be used later by SQL Remote is greatly reduced. More care is needed if you are renaming files in the directory used for incoming messages for the consolidated database.


(01 Apr '19, 10:31) Reg Domaratzki

Thanks for the answers. Decreasing the send frequency is clear, but it does not help in our case: our max. message size is 200K, the frequency is every 2 hours, and every 2 hours I see about 100 messages generated. This would mean that most of those 100 messages really use the full 200K, and also that increasing the max. message size will really help in our case. Am I right?

Will renaming the message files really make a difference? At which point should this renaming take place (after DBREMOTE finishes)?

I had a scenario where the remote was off for 3 months, and once started, there were over 47000 messages waiting, and the server had already started logging that the maximum count was reached. The remote got those messages and (for some reason) could not apply them, then ordered the messages again. Now the cons. is in a situation where it must generate, say, 50K messages within one round of DBREMOTE, and this will always end with exceeding the max count.

I think that over the years, when the cons. consumes all 47988 available file names (given that the remote reads and applies them regularly), the cons. automatically starts from the beginning again (i.e., if the cons. generated the message 'server.zzz' then the next one will be 'server.0'). But the problem is if one of the remotes needs, at once, information from the cons. that takes more than 47988 files to write. (Does the renaming help in such a case too?)

(02 Apr '19, 05:42) Baron
Replies hidden

So you are basically saying one remote might need to get data resent that cannot be contained in 47K files of 200K each? Wow! That would be around 9.4 GB in my book...
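The "around 9.4 GB" estimate is simply the file-count limit multiplied by the maximum message size (using decimal KB):

```python
# Back-of-the-envelope check: ~47,000 message files at up to
# 200 KB (200,000 bytes) each.
total_bytes = 47_000 * 200_000
print(total_bytes / 10**9)  # 9.4 (GB)
```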

If every remote gets around 100 messages per single DBREMOTE run, then I think that increasing the message size (at all sites, as Reg has pointed out!) is really very reasonable. Apparently your transport system must already be able to deal with larger data, so using fewer files for the same amount of data should be helpful.

(02 Apr '19, 06:06) Volker Barth

Actually the generated files are not 200K: although I am calling DBREMOTE with (-l 200000), in each run of DBREMOTE I see ~100 files generated for each remote, but none of them exceeds a file size of 20 KB.

Does -l 200000 mean 20 KB or 200 KB? And why do I even see some files of only 1 KB? Since all the message files are generated within the same run, what can be the reason for such 'fragmented messages'?

(02 Apr '19, 06:40) Baron

The message size you set is the pre-compressed message size. There is no 100% guarantee that compression will actually reduce the size of a message, although in practice it does. A large number of inserts into the same table with long column names in the same message will compress very well, which explains the 1 KB message sizes you see from time to time. As an experiment, if you set compression=0, you will likely see file sizes for messages much closer to the value specified with the -l switch.
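Why repeated INSERTs with long column names compress so dramatically can be illustrated with a generic DEFLATE compressor. Note this is only an analogy: zlib stands in for whatever compression scheme SQL Remote actually uses, and the table and column names below are made up to resemble the ones in the thread:

```python
import zlib

# Build a message-like payload: many INSERT statements repeating the
# same long column-name list, as happens with a fast-growing table.
row = ("INSERT INTO table1 (longcolumnname11, longcolumnname2, "
       "longcolumnname3) VALUES ({0},{1},{2})\n")
payload = "".join(row.format(i, i + 1, i + 2) for i in range(2000)).encode()

compressed = zlib.compress(payload, level=9)
print(len(payload), len(compressed))
# The repeated column lists compress to a small fraction of the input,
# which is why a 200 KB pre-compression message can land on disk as ~20 KB.
```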

Again, echoing the same response from Volker : Wow, ~10GB of messages. Given that every two hours you generate 100 messages with -l 20K, increasing the message size will definitely help.

I would consider renaming message files for a remote database as a manual process after dbremote has finished running, if you see that the message directory for a given remote database is getting close to the ~48K message limit. It could be a Monday morning entry on your to-do list to check the number of messages for each remote database.


(02 Apr '19, 09:31) Reg Domaratzki
question asked: 01 Apr '19, 03:55

question was seen: 1,287 times

last updated: 02 May '19, 09:29