Please be aware that the SAP SQL Anywhere Forum will be shut down on August 29th, 2024, when all its content will be migrated to the SAP Community.

We are using SQL Anywhere v12.01.4231

Here is the update from the remote database's translated log file:

UPDATE DBA.ps_customer_product_line
   SET modified_datetime='2016-11-04 11:36:27.29',
       brand_description='LEINENKUGEL HONEY WEISS'
VERIFY (modified_datetime,
VALUES ('2016-04-12 16:22:12.734','Bills','LEINENKUGEL SUMMER SHANDY')
 WHERE company=3
   AND created_datetime='2010-12-31 11:24:28.0'
   AND created_user='Dellquist'

Here is the update from the translated log file as applied at the consolidated database:

UPDATE DBA.ps_customer_product_line
   SET modified_datetime='2016-11-04 11:36:27.29',
       brand_description='LEINENKUGEL HONEY WEISS'
VERIFY (modified_datetime,
VALUES ('2016-04-12 16:22:12.734','Dbremote_new','LEINENKUGEL SUMMER SHANDY')
 WHERE company=3
   AND created_datetime='2010-12-31 11:24:28.0'
   AND created_user='Dellquist'

This particular statement was then replicated back to the remote database. The issue is that there were about 4 updates after this one that were applied at the consolidated database (from the remote) that also did not bounce back, so that the data is now different between the consolidated and the remote. It is my understanding that statements should not bounce back to the original sender. How does the system know not to send statements back, and why did it send back this statement in this example?

Thanks, Brian Greiman

asked 11 Jul '17, 11:39 by bgreiman

edited 11 Jul '17, 11:54 by Volker Barth

This is a general SQL Remote feature: Operations are not sent back to the originator, unless the consolidated notices and handles an update conflict. So I would assume one has happened here for that particular row. If you have not implemented custom update conflict handling, I would assume the VERIFY clause has triggered that "bouncing back" in order to assure the last update "wins" at all sites.

(11 Jul '17, 12:00) Volker Barth

For this table, we have columns defined in both consolidated databases for modified_datetime and modified_user (these are needed for our MobiLink implementation) that have default values of timestamp and last user so that they are updated anytime the row is updated.

In the remote database, if user Bill updates a row with a new value for column x, and he was the last person to have updated the row previously, I believe only the update for column x will be written to the log file. dbremote will then send the consolidated database "update ps_customer_product_line set x = (new value) where primary key matches". When the consolidated db receives this update, it will update column x to the new value, modified_datetime to the value passed, and modified_user to dbremote_new (the user that connects to the db in the dbremote call). If a 2nd user later makes another update to this row on the remote db, the update statement will pass changes to column x, modified_datetime, and modified_user, and since modified_user does not match the consolidated value for modified_user (which is now dbremote_new), I assume a conflict is triggered, which then sends the row back to the remote.

Does this scenario sound correct?

If so, how to avoid. Stop replication of modified_datetime, modified_user columns in the publication setup?

Any other thoughts?

Thanks, Brian

(11 Jul '17, 13:40) bgreiman

> default values of timestamp and last user so that they are updated anytime the row is updated

Generally, those are prime candidates for exclusion when designing a synchronization setup.

But... your post raises two other red flags: First, you mention both "dbremote" and "MobiLink", and second, you use the phrase "both consolidated databases"...

As Desi Arnaz once said,

(11 Jul '17, 15:20) Breck Carter

We have one consolidated database at our headquarters that facilitates replication between headquarters and our branch locations using dbremote. This consolidated database also facilitates mobilink synchronization between our headquarters and local databases on laptops/tablets.

(11 Jul '17, 15:23) bgreiman

OK, you have one central database that acts as consolidated database for both SQL Remote and MobiLink, got it, that's fairly common.

The following is partly guesswork, since I haven't studied your setup...

The modified_datetime and modified_user columns should probably be excluded from the SQL Remote publications... changes to those columns should probably not be responsible for generating replication traffic in either direction via transaction log entries.

In fact, those two columns probably shouldn't even exist on the remote databases at all, for either MobiLink or SQL Remote purposes... they probably only exist for the purposes of MobiLink download_cursor WHERE clauses which execute only on the consolidated.

(11 Jul '17, 15:36) Breck Carter

...if I'm right (50% chance of that), there's a good chance (90%?) I've suggested a change that's gonna be really hard to implement since those columns already exist everywhere... apologies in advance :)

(11 Jul '17, 15:39) Breck Carter

We have a 3-tier setup in which each branch database acts as a consolidated for MobiLink local databases on tablets/laptops:

Headquarters DB - replicates with branches via dbremote; syncs with laptops/tablets via MobiLink.
Branch DB - replicates with HQ via dbremote; syncs with laptops/tablets via MobiLink.
Laptops/Tablets - sync with the local branch using MobiLink.

We need the modified_datetime columns at HQ and the branches, but could exclude them from the laptop/tablet local databases.

I think the next step for us is to remove the modified_datetime and modified_user columns from the publications affecting dbremote.

(11 Jul '17, 15:44) bgreiman

Without real testing, I think your reasoning is correct: SQL Anywhere treats a column as unmodified when the new and old values are exactly the same - I have stumbled over that when trying to prevent a DEFAULT TIMESTAMP column from being modified - see that FAQ.

Instead of trying to drop that "modified_user" column from the publication(s), which will be almost impossible methinks, particularly in a combined SR/ML setup, you might try to force a real update on that "modified_user" column when the previous and the new user are the same. You might do this with a doubled update, say, first setting the column to a fictitious/empty/null value, and then setting it to the real user. That will prevent SQL Anywhere from "ignoring" that modification, and since SQL Remote applies each operation separately (instead of "merging" them like ML does), I would think the modification should then really be delivered by SR as two different update statements.
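To make the reasoning behind the doubled update concrete, here is a toy model in Python (my own sketch for illustration, not SQL Anywhere internals; the function name and values are made up). It assumes only one rule: a column whose new value equals its old value is treated as unmodified and never reaches the transaction log.

```python
def log_column_change(log, column, old, new):
    """Record a column change in the transaction log, but skip it
    when the value did not actually change (the SQL Anywhere rule)."""
    if old != new:
        log.append((column, old, new))

# Same user updates the row again: old == new, nothing is logged,
# so SQL Remote has nothing to replicate for this column.
single = []
log_column_change(single, "modified_user", "Bill", "Bill")
assert single == []

# Doubled update: go through a fictitious/NULL value first, then back
# to the real user - both operations are now real changes and get logged.
log = []
log_column_change(log, "modified_user", "Bill", None)
log_column_change(log, "modified_user", None, "Bill")
assert len(log) == 2
```

Under this model the doubled update leaves two log entries for SQL Remote to deliver, where the single same-value update would leave none.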

Note: You might be able to do that within a BEFORE UPDATE trigger, as SQL Remote will usually replicate the actions of triggers, in contrast to other replication technologies. You might also need an IF CURRENT REMOTE USER IS NULL branch to prevent doubled updates...

(11 Jul '17, 16:42) Volker Barth

How do you set the "modified_user" column, explicitly or via a LAST USER column default?

(11 Jul '17, 16:46) Volker Barth

Last User column default

(11 Jul '17, 17:35) bgreiman

This is a classic SQL Remote problem that centers around conflict resolution. It is quite reasonable for an update to bounce back to the remote if a conflict occurs.

Let's imagine that at remote "X", my name is changed from "Reggie" to "Reg", and at the consolidated my name is changed from "Reggie" to "Reginald". SQL Remote runs at remote "X" and sends the change Reggie->Reg. SQL Remote runs at the consolidated and applies the change, overwriting "Reginald". It detects a conflict, since the old value at the remote is not the same as the current value at the consolidated. SQL Remote now needs to send operations to all remote databases. Everybody except remote "X" will get two updates: first Reggie->Reginald, and then Reginald->Reg. However, the remote that sent the update Reggie->Reg would normally get only one update (Reggie->Reginald), because updates sent from remote "X" are not echoed back to remote "X" (infinite loops are bad). This would result in all databases except remote "X" having the value "Reg", while remote "X" would have "Reginald". For this reason, updates ARE echoed back to the remote database if a conflict occurred when applying the change at the consolidated. This results in all databases having "Reg" once they have applied their messages.
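The convergence argument above can be checked with a few lines of Python (a simplified simulation of my own, not SQL Remote itself; the function name is made up). It replays the Reggie/Reg/Reginald scenario with and without the echo-on-conflict rule:

```python
def replay_at_remote_x(echo_on_conflict):
    """Simulate the scenario from remote "X"'s point of view
    and return its final value."""
    consolidated = "Reginald"          # consolidated's local edit Reggie->Reginald
    old, new = "Reggie", "Reg"         # the update sent by remote "X"
    conflict = (consolidated != old)   # old value no longer matches: conflict
    consolidated = new                 # the update is applied anyway

    remote_x = "Reg"                   # remote "X" after its own edit
    inbox = [("Reggie", "Reginald")]   # the consolidated's own change
    if echo_on_conflict and conflict:
        inbox.append((old, new))       # echoed copy of X's conflicting update
    for _, value in inbox:
        remote_x = value
    return remote_x

assert replay_at_remote_x(echo_on_conflict=False) == "Reginald"  # X is left behind
assert replay_at_remote_x(echo_on_conflict=True) == "Reg"        # all sites converge
```

Without the echo, remote "X" is the one site left holding "Reginald"; with it, every database ends at "Reg".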

How this can get you into trouble:

Let's now imagine that remote "X" changes the row twice: first from Reggie->Rig, then, noticing the typo, from Rig->Reg. The consolidated database changes it from Reggie->Reginald. SQL Remote is run at remote "X", and then SQL Remote is run at the consolidated. The updates generated by SQL Remote always include a VERIFY clause that contains the old value at the remote, so that a conflict can be detected at the consolidated. The two updates will look like:

UPDATE customer SET firstName='Rig' VERIFY ('Reggie') where pkey=whatever;
UPDATE customer SET firstName='Reg' VERIFY ('Rig') where pkey=whatever;

When the first update in the message is applied, a conflict will be detected because the current value at the consolidated is 'Reginald', and not 'Reggie'. A conflict is flagged, and this update will be echoed back to remote "X". When the second update in the message is applied, no conflict is detected, because the old value at the remote was 'Rig' and the current value in the consolidated is 'Rig'. The second update is NOT echoed back to remote "X", resulting in inconsistent data in the databases.
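The same kind of toy simulation (again my own simplified model, not actual SQL Remote code) shows the divergence: only the first update fails its VERIFY check and is echoed, so remote "X" and the consolidated end up with different values.

```python
def apply_at_consolidated(current, updates):
    """Apply a batch of (old, new) updates from remote "X" and echo
    back only those that conflicted (VERIFY value didn't match)."""
    echoes = []
    for old, new in updates:
        if current != old:       # VERIFY failure: conflict, echo this update
            echoes.append((old, new))
        current = new            # the update is applied regardless
    return current, echoes

# Consolidated holds "Reginald" after its local edit; X sends two updates.
cons, echoes = apply_at_consolidated(
    "Reginald", [("Reggie", "Rig"), ("Rig", "Reg")])
assert cons == "Reg"
assert echoes == [("Reggie", "Rig")]   # the second update is NOT echoed

# Remote "X" (currently "Reg") applies the consolidated's own change
# plus whatever was echoed, in log order:
remote_x = "Reg"
for _, value in [("Reggie", "Reginald")] + echoes:
    remote_x = value
assert remote_x == "Rig"               # diverged from the consolidated's "Reg"
```

Remote "X" finishes at "Rig" while the consolidated and all other remotes hold "Reg", which matches the inconsistency described above.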

How do you work around this issue:

If this is a serious issue for you, consider adding a replicated column that tracks the last time the row was modified, but don't just use "timestamp default timestamp" at both the consolidated and the remote. That's the default you should use at the remote, but not at the consolidated. If both defaults are the same, the default won't be used at the consolidated when applying messages, because the value is explicitly given in the update from the remote. Use a before update trigger to update this value at the consolidated instead.

  textcol char(64) NULL,
  last_mod timestamp NULL default NULL,
  PRIMARY KEY (a_pkey)

create trigger au_admin before update on admin
referencing new as new_row for each row
begin
  set new_row.last_mod = CURRENT TIMESTAMP;
end;

Now, the second update in the message in the above scenario will also be flagged as a conflict, because the last_mod column does not match, as it was changed when the first update was applied.
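Extending the toy simulation one last time (same caveats: a simplified model of my own, with an integer "clock" standing in for CURRENT TIMESTAMP), a replicated last-modified column that the consolidated's trigger stamps with a fresh value makes every overwritten update fail its VERIFY, so both updates are echoed:

```python
def apply_with_last_mod(row, updates, clock):
    """row is (name, last_mod). Each incoming update carries the remote's
    old (name, last_mod) pair in its VERIFY clause; the consolidated's
    before-update trigger stamps a fresh last_mod on every applied update."""
    echoes = []
    for old_pair, new_name in updates:
        if row != old_pair:            # VERIFY now compares both columns
            echoes.append((old_pair, new_name))
        clock += 1                     # trigger: fresh CURRENT TIMESTAMP
        row = (new_name, clock)
    return row, echoes

# Consolidated edited the row locally at "time" 10 -> ("Reginald", 10).
row, echoes = apply_with_last_mod(
    ("Reginald", 10),
    [(("Reggie", 1), "Rig"),    # X's first edit, carrying X's old timestamp
     (("Rig", 2), "Reg")],      # X's second edit
    clock=10)
assert row[0] == "Reg"
assert len(echoes) == 2         # both updates conflict and are echoed,
                                # so remote "X" also ends up with "Reg"
```

Because the consolidated's trigger changed last_mod between the two incoming updates, the second update's VERIFY no longer matches either, both updates bounce back, and remote "X" converges to "Reg" like every other database.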


answered 18 Jul '17, 15:47 by Reg Domaratzki
