From the documentation on Processing deletes after inserts and updates.

Deletes are applied to the consolidated database after all inserts and updates are applied. When deletes are applied, tables are processed in the opposite order from the way they appear in the upload. When a row being deleted references a row in another table that is also being deleted, this order of operations ensures that the referencing row is deleted before the referenced row is deleted.
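As a hypothetical illustration of that ordering (the table names are made up): assume OrderLine has a foreign key to OrderHeader and the upload lists OrderHeader before OrderLine; the deletes would then be applied as:

DELETE FROM OrderLine WHERE order_id = 42;     -- referencing (child) rows first
DELETE FROM OrderHeader WHERE order_id = 42;   -- referenced (parent) row last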

Problem:

We constantly run into a scenario where a customer deletes a row from the remote database and then reinserts a row with an identical value in a unique index. Since the primary key is an autoincrement value, the remote database does not treat this as a row update when uploading the data to the consolidated database. We have tried to avoid the problem by setting the synchronization interval as low as 1 minute, but it still recurs, and since there are 15 databases synchronizing each minute I don't think we can use a tighter schedule.

The remote database uses a log-based synchronization subscription.

Simple repro

A synchronized table, InsertThenDelete, with two columns: aPrimaryKey (INT, PRIMARY KEY) and someUniqueData (VARCHAR, UNIQUE).
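A minimal DDL sketch of that repro table (the VARCHAR length is an assumption; the question only gives the data types and constraints):

CREATE TABLE InsertThenDelete (
   aPrimaryKey     INT NOT NULL PRIMARY KEY,
   someUniqueData  VARCHAR ( 100 ) UNIQUE );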

INSERT INTO InsertThenDelete(aPrimaryKey, someUniqueData) VALUES (1, 'FIRST');

Perform a synchronization.

Then:

DELETE FROM InsertThenDelete WHERE aPrimaryKey = 1;
INSERT INTO InsertThenDelete(aPrimaryKey, someUniqueData) VALUES (2, 'FIRST');

This is the result when synchronizing:

E. 2012-11-24 23:11:47. Error code from MobiLink server: -10002
E. 2012-11-24 23:11:47. Server error: Message: ODBC: [Sybase][ODBC Driver][SQL Anywhere]Index 'InsertThenDelete UNIQUE (someUniqueData)' for table 'InsertThenDelete' would not be unique (ODBC State = 23000, Native error code = -196).  Table Name: InsertThenDelete.  Primary Key(s):  2

Any tips for how I may work around this? I really need that unique index to be there, and I cannot expect the users to know when the data are synchronized so they can time a delete and an insert to land in two separate synchronization batches.

asked 24 Nov '12, 17:15 by OskarEmil

edited 25 Nov '12, 11:39 by Breck Carter

We have used a technique similar to the one Breck has suggested to "delay the insert" until all deletes are processed. Another method would be to switch to transactional uploads (dbmlsync -tu):

When you use -tu, the order of transactions on the remote database is always preserved on the consolidated database.

Note, however, that this affects all changes sent to the consolidated database, and that MobiLink still coalesces all changes to a single row within one transaction into one insert/update/delete, so you should use small transactions on the client, too. In your sample, you would have to add a COMMIT between the DELETE and the INSERT (or use AUTO COMMIT mode).
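A minimal sketch of that on the remote, reusing the statements from the repro above (a sketch only; the exact transaction boundaries depend on how the application commits):

-- Commit between the DELETE and the INSERT so they end up in two transactions
DELETE FROM InsertThenDelete WHERE aPrimaryKey = 1;
COMMIT;
INSERT INTO InsertThenDelete(aPrimaryKey, someUniqueData) VALUES (2, 'FIRST');
COMMIT;

The remote would then be synchronized with dbmlsync -tu, so that each committed transaction is applied as its own transaction on the consolidated, and the DELETE is committed before the conflicting INSERT arrives.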

answered 25 Nov '12, 15:21 by Volker Barth

edited 25 Nov '12, 15:42

Thanks, if the application commits between deletes and inserts I'll stick with this one :-)

(25 Nov '12, 15:28) OskarEmil

Please be sure to test the performance impact when switching to transaction uploads. When using transaction uploads, each COMMIT in the transaction log results in a separate synchronization. Since you've already said you are synchronizing each minute, you're probably OK, but I'd be wary of using -tu on a remote database that had not synchronized in a week and had 1000 commits to process.

(26 Nov '12, 08:56) Reg Domaratzki

@Reg: To clarify:

When using transaction uploads, each COMMIT in the transaction log results in a separate synchronization.

vs. the docs:

Specifies that each transaction on the remote database should be uploaded as a separate transaction within one synchronization.

Is the whole sync (including the download phase) done once per transaction or are only the upload events repeated per transaction?

(Yes, I'm aware that -tu comes at a price... maybe I'm just too SQL Remote-biased:))

(26 Nov '12, 09:51) Volker Barth

It is only the upload events that are repeated.

(26 Nov '12, 12:01) Bill Somers

Try changing the upload_insert script to INSERT into a GLOBAL TEMPORARY table instead of the base table, then copy the rows in the end_upload event.

Here's an excerpt from a MobiLink setup that tested OK...

CREATE TABLE t1 (
   pkey          UNSIGNED BIGINT NOT NULL PRIMARY KEY,
   ukey          VARCHAR ( 10 ) NOT NULL DEFAULT '' UNIQUE,
   data          VARCHAR ( 100 ) NOT NULL DEFAULT '',
   last_updated  TIMESTAMP NOT NULL DEFAULT TIMESTAMP );

-- Staging table that receives uploaded inserts instead of the base table
CREATE GLOBAL TEMPORARY TABLE temp_t1 (
   pkey          UNSIGNED BIGINT PRIMARY KEY,
   ukey          VARCHAR ( 10 ) NOT NULL,
   data          VARCHAR ( 100 ) NOT NULL,
   last_updated  TIMESTAMP NOT NULL DEFAULT TIMESTAMP )
   NOT TRANSACTIONAL;

-- Redirect uploaded inserts for t1 into the staging table
CALL ml_add_table_script  ( 'v1', 't1', 'upload_insert', '
INSERT temp_t1
       ( pkey,
         ukey,
         data )
VALUES ( {ml r."pkey"},
         {ml r."ukey"},
         {ml r."data"} )' );

CALL ml_add_connection_script ( 'v1', 'begin_upload', '
CALL ml_begin_upload ( 
   {ml s.username} )' );

-- Clear the staging table at the start of each upload
CREATE PROCEDURE ml_begin_upload (
   IN @username   VARCHAR ( 128 ) )
BEGIN
   DELETE temp_t1;
END;

CALL ml_add_connection_script ( 'v1', 'end_upload', '
CALL ml_end_upload ( 
   {ml s.username} )' );

-- Copy the staged rows into the base table after all upload rows (including deletes) have been applied
CREATE PROCEDURE ml_end_upload (
   IN @username   VARCHAR ( 128 ) )
BEGIN
   INSERT t1 SELECT * FROM temp_t1;
END;
answered 25 Nov '12, 15:06 by Breck Carter

Thanks, I can use this if Volker Barth's tip does not help.

If this table is referenced by other tables, I guess I have to use this method on those as well, so the data are inserted in the correct table order?

(25 Nov '12, 15:30) OskarEmil

Your sample uses the "re-use" of a primary key. That's a no-go in a key-based replication/synchronization setup - but I'm sure you don't use that in your real data model.

If you have to delay modifications of UNIQUE keys (i.e. the users can delete a row and insert a new row with a different PK but the same UNIQUE key), and there are other tables referencing the new row, then you might need to delay those inserts as well, unless you can delay the FK check itself. With a SQL Anywhere consolidated database, you can delay the FK check by setting the wait_for_commit option or by using the CHECK ON COMMIT clause of the corresponding FK definition.

That would work because, when not using transactional uploads, the whole upload stream is handled as one big transaction, so the commit only happens after all operations have been applied.
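For illustration, a sketch of both options on a SQL Anywhere consolidated (the child/parent tables and columns are made up):

-- Option 1: delay all FK checks on this connection until COMMIT
SET TEMPORARY OPTION wait_for_commit = 'On';

-- Option 2: delay the check for one particular foreign key only
CREATE TABLE child (
   pkey        UNSIGNED BIGINT NOT NULL PRIMARY KEY,
   parent_key  UNSIGNED BIGINT NOT NULL,
   FOREIGN KEY ( parent_key ) REFERENCES parent ( pkey ) CHECK ON COMMIT );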

(25 Nov '12, 15:39) Volker Barth

The wait_for_commit option mentions foreign key integrity. Does it apply to unique index integrity as well?

(26 Nov '12, 02:44) OskarEmil

No, AFAIK unique and primary keys are checked immediately. For the same reason they don't support a clause like CHECK ON COMMIT.

(26 Nov '12, 03:31) Volker Barth

The source of your problem is two unique indexes on the same table.

I understand that schema changes are non-trivial, but in the long term, I'd suggest making the unique index you currently have on the table the primary key and removing the auto-increment primary key, as long as you never update the unique index column.
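Applied to the repro table, a hypothetical sketch of that change (assuming someUniqueData is never updated; the VARCHAR length is only a placeholder):

CREATE TABLE InsertThenDelete (
   someUniqueData  VARCHAR ( 100 ) NOT NULL PRIMARY KEY );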

answered 26 Nov '12, 08:57 by Reg Domaratzki

I think this is not a solution for all cases.

Obviously, I can't speak to Oskar Emil's requirements; however, I do think there are good reasons to use unique keys (with a "real world meaning", visible to users) and still use surrogate keys as PKs on the same table.

A very simple example would be a common lookup table with a numeric PK and an additional unique "sort number" that is used to enforce a particular ordering of the lookup values. In such a situation, when adding a new lookup value, one would sometimes need to adapt the sort numbers of existing entries. That might lead to ML problems as well (although here because of the order of inserts/updates, not of deletes).

Not to mention UNIQUE KEYs that are used to mark a combination of two columns or a relationship as UNIQUE - for cases in which these combinations may change in the future (whereas the PKs mustn't change)...
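A hedged sketch of such a lookup table (all names are made up):

CREATE TABLE lookup_value (
   pkey         INTEGER NOT NULL DEFAULT AUTOINCREMENT PRIMARY KEY,
   sort_number  INTEGER NOT NULL UNIQUE,
   label        VARCHAR ( 50 ) NOT NULL );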

(26 Nov '12, 09:42) Volker Barth

Sometimes a table has several unique indexes, for both business and performance reasons.

(26 Nov '12, 11:59) Breck Carter

I first started working with databases 6 years ago, and in that time I have already learned to avoid user-defined primary keys at all costs. They say it is a static value, but it is only a matter of time before they want to change that value.

Those values have business logic attached to them, and healthy businesses change over time while the underlying system that handles the business remains the same.

That is why I think primary keys should, whenever possible, consist of either an autoincremented number or a GUID hidden from the users.

(27 Nov '12, 04:00) OskarEmil

Glenn has blogged on artificial PKs (and their preferred format) here...

FWIW: Have you already found out whether -tu has solved the problem (or has led to other problems, like worse performance)?

(27 Nov '12, 04:10) Volker Barth