
We have a consolidated database and remote database(s) running SQL Anywhere 12, using SQL Remote for replication. In this example both tables are identical at each site and are members of the publication.

TableA is used by our applications, so rows are added directly to this table. In addition we have a TableB used by another application. Both tables reside in the same database.

My scenario is a trigger-based synchronization (comment: trigger-based synchronization might not be the best option; I am open to other solutions) from TableB to TableA. When a row is inserted, updated, or deleted in TableB, the same change should appear in TableA.

Problem: TableA's PK is a GLOBAL AUTOINCREMENT value, so an insert into this table will generate a new ID, say 1. When SQL Remote transfers this to the other database, the trigger on the other database will generate a new ID, say 2. We then have the situation where the same row has two different primary keys in those databases. Add 40 remote databases and the same row will have 42 different primary keys, one for each database. TableB does have its own primary key, but since this is a generic integration logic we don't want to reuse primary keys from other systems.
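To make the failure mode concrete, here is a minimal sketch of the kind of trigger involved (the column names c1/c2 are assumed for illustration, not taken from the real schema). The trigger fires again at every site where the TableB insert is applied, so each site's GLOBAL AUTOINCREMENT default hands out its own, different pkA value:

```sql
create trigger ai_TableB after insert on TableB
referencing new as nr for each row
begin
    -- pkA is omitted, so DEFAULT GLOBAL AUTOINCREMENT fills it in
    -- locally -- a different value at every database the trigger runs in
    insert into TableA( c1 ) values ( nr.c2 );
end;
```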

I am aware of the option to replicate trigger-based transactions, but I cannot see how that will solve the problem; as far as I can see, the only result is that both row 1 (from the trigger insert at the remote site) and row 2 (from the trigger insert at the local site) will exist in the database.

I am trying to write down some logic that allows me to keep the AUTOINCREMENT value generated at the remote site and use this for the trigger-generated INSERT at the local site.

asked 12 Dec '11, 08:02

OskarEmil

edited 12 Dec '11, 08:03

Replicating trigger-generated transactions (dbremote -t) and wrapping the trigger body in IF CURRENT REMOTE USER IS NULL did the trick :)
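In sketch form (trigger and column names assumed as before), the combination looks like this: dbremote -t makes SQL Remote replicate the operations the trigger itself performs, and the CURRENT REMOTE USER guard keeps the trigger body from running a second time while SQL Remote is applying received operations:

```sql
create trigger ai_TableB after insert on TableB
referencing new as nr for each row
begin
    -- CURRENT REMOTE USER is NULL when the insert comes from a local
    -- application, and non-NULL while SQL Remote applies a message;
    -- only in the first case do we clone the row.
    if current remote user is null then
        insert into TableA( c1 ) values ( nr.c2 );
    end if;
end;
```

With dbremote run with -t, the INSERT above (including its generated pkA) is sent to the other sites as part of the replicated transaction, so every database ends up with the same key.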


answered 13 Dec '11, 04:35

OskarEmil

On the assumption that TableA and TableB are both in the publication, the simplest solution might be to move the logic that inserts the row into TableA from the insert trigger of TableB into your application, or possibly into a stored procedure.

Old Schema:

create table tableA (
  pkA integer primary key default global autoincrement,
  c1  integer not null
);

create table tableB (
  pkB integer primary key default global autoincrement,
  pkA integer not null references tableA,
  c2  integer not null
);

create trigger ai_tableA after insert on tableA
referencing new as nr for each row
begin
  insert into tableB( pkA, c2 ) values ( nr.pkA, nr.c1 );
end;

New Schema (note table definitions unchanged):

create table tableA (
  pkA integer primary key default global autoincrement,
  c1  integer not null
);

create table tableB (
  pkB integer primary key default global autoincrement,
  pkA integer not null references tableA,
  c2  integer not null
);

create procedure insert_tableA ( in @c1 integer )
begin
  insert into tableA( c1 ) values ( @c1 );
  insert into tableB( pkA, c2 ) values ( @@identity, @c1 );
end;

So, in your application, instead of executing "insert into tableA(c1) values (10)", you would execute "call insert_tableA( 10 )".


answered 12 Dec '11, 16:34

Reg Domaratzki

There seems to be a misunderstanding here:

(GLOBAL) AUTOINCREMENT is a column DEFAULT - i.e. if you add a new row and don't specify a value for that column (say, pkA), it will get a value automatically, and you will usually then use "SELECT @@identity" to get the freshly created value. However, if you insert a row and specify a value for that column, exactly that value will be inserted (provided there is no PK violation, of course).
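A short illustration of that point (values made up for the example):

```sql
insert into tableA( c1 ) values ( 10 );   -- pkA filled in by the default
select @@identity;                        -- returns the freshly generated pkA

-- An explicit value overrides the default entirely:
insert into tableA( pkA, c1 ) values ( 4711, 10 );  -- stored with pkA = 4711
```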

When SQL Remote runs, it sends "complete SQL Statements" to the remote databases (unless you have omitted the pkA column from the article definition), including the generated default values, and therefore the generated pkA value in the first database will be inserted in the remote database as is. As a consequence, the row should be stored with identical values at each site it belongs to.

Your replication setup needs to have the same rows identified by the same PK; otherwise something strange is happening...

The reason to use DEFAULT GLOBAL AUTOINCREMENT with distinct database IDs is just to assure that two different databases won't create the same PK values for their rows. By design, rows created at different remotes are different rows.
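For reference, the partitioning works roughly like this (the partition size and database ID below are example values): with partition size p and database ID n, a database generates values from n*p + 1 up to (n+1)*p, so no two databases with distinct IDs can collide.

```sql
-- Example values only: partition size 100000, database ID 10
create table demo (
    pk integer primary key default global autoincrement(100000),
    c1 integer not null
);

set option public.global_database_id = 10;

-- This database now generates pk values 1000001 .. 1100000;
-- a database with ID 11 would generate 1100001 .. 1200000.
```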


answered 12 Dec '11, 08:17

Volker Barth

edited 12 Dec '11, 08:20

Just ignore most of my above statements (although they should be correct as such) - I completely missed the "trigger-generated" part - sorry!

That being said, I have to note that trigger-generated actions usually are not replicated (unless you're using DBREMOTE -t).

So if I understand your description right, it's the trigger logic running at each remote that creates these unwanted distinct PKs?

Then you might delay the trigger logic on the original site - e.g. by putting the logic into a SQL Remote hook procedure. That would mean you don't do the insert into table A via a trigger; instead you somehow "make a note" of which table B rows have to be "cloned", and then you use a SQL Remote hook procedure (like sp_hook_dbremote_receive_end) to actually build these missing rows in table A.

As these operations are not done within a trigger, they are replicated as usual - and as wanted here.
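A rough sketch of that idea follows; the pending table, its columns, and the guard are assumptions for illustration, while sp_hook_dbremote_receive_end is the hook name suggested above. The pending table should not be part of the publication: the trigger only records work to do, and the hook then performs the real INSERTs, which replicate as ordinary operations.

```sql
create table tableB_pending (
    pkB integer primary key references tableB
);

create trigger ai_tableB after insert on tableB
referencing new as nr for each row
begin
    -- Assumption: only note locally created rows; rows applied by
    -- SQL Remote will receive their TableA counterpart via replication.
    if current remote user is null then
        insert into tableB_pending( pkB ) values ( nr.pkB );
    end if;
end;

create procedure sp_hook_dbremote_receive_end()
begin
    -- Build the missing TableA rows outside any trigger, so the
    -- generated PKs replicate like any other operation.
    insert into tableA( c1 )
        select b.c2
        from tableB b join tableB_pending p on b.pkB = p.pkB;
    delete from tableB_pending;
end;
```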

(12 Dec '11, 08:25) Volker Barth

Yes, it's the trigger logic at each site that will create one unwanted PK and ignore the wanted PK.

The SQL Remote hook procedures could be a viable option, though SQL Remote runs at certain intervals to ensure that all messages have arrived from the remote/consolidated site before replication is triggered. With a hook procedure the data will not be present in our system until the next time SQL Remote runs, but that might not be a problem after all.

I will check this with the design team and see how they respond.

Thank you.

(12 Dec '11, 09:08) OskarEmil

Obviously, adapting all triggers (i.e. adding if current remote user...) and using DBREMOTE -t is another option.

Whether this is more or less work will surely depend on the number of triggers on replicated tables compared to the number of such particular trigger-clones-rows triggers...

(12 Dec '11, 09:11) Volker Barth

I think that using triggers one way or another is the only solution.

If we perform a table scan with a scheduled event or SQL Remote hook procedures, there is still a chance that the receiving database will get the insert into TableB but not the one into TableA because of missing messages (our clients are vessels on GSM/VSAT links, so that happens a lot). The TableB insert will be accepted since it is a separate transaction, and the receiving database might then generate a new TableA row on its own (on replication or event schedule) while waiting for the missing message to arrive again.

(12 Dec '11, 09:30) OskarEmil

question was seen: 4,136 times

last updated: 13 Dec '11, 04:35