We have three tables to be synchronized using MobiLink 16: Table1 and Table3 reside in a consolidated Oracle 11g database, and Table2 in a remote SQL Anywhere 16 database. Table1 and Table3 reside in different schemas in the consolidated database.

We perform download synchronizations from Table1 to Table2 with one publication and upload synchronizations from Table2 to Table3 with a second publication. The upload synchronization will only successfully send an update from Table2 to Table3 once; afterwards, MobiLink does not detect any additional changes in the database logs to send to Table3. However, there are 10+ other tables in the same upload publication for which it successfully sends updates (these involve only one table in the consolidated database and one table at the remote)!

Help!

asked 10 Jun '14, 06:35

SymphiaTech


If an operation was downloaded and applied to the remote database by dbmlsync, dbmlsync will NOT consider that operation for upload to the MobiLink Server, even if it's part of a different publication.

answered 10 Jun '14, 08:37

Reg Domaratzki

Thanks for the response.

I thought that might be the case. So I duplicated Table2: I now have Table2a used for syncing with Table1 and Table2b used for syncing with Table3. Application data changes made in Table2a are applied to Table2b via triggers. Table1 and Table2a are in one publication (consolidated to remote), and Table2b and Table3 are in the second publication (remote to consolidated). Changes are coming through on the download. However, the upload only sends one update; afterwards, MobiLink does not find any additional changes to upload.
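For illustration, a minimal sketch of the trigger setup described above; the column names (pk, val) are assumptions:

```sql
-- Mirror changes from Table2a into the shadow table Table2b.
-- Column names (pk, val) are hypothetical.
CREATE TRIGGER trg_copy_2a_to_2b
AFTER INSERT, UPDATE ON Table2a
REFERENCING NEW AS n
FOR EACH ROW
BEGIN
    -- SQL Anywhere's INSERT ... ON EXISTING UPDATE acts as an upsert
    INSERT INTO Table2b ( pk, val )
    ON EXISTING UPDATE
    VALUES ( n.pk, n.val );
END;
```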

(10 Jun '14, 10:00) SymphiaTech

However, the upload only sends one update;

What exactly does that mean? Does it mean a row inserted by an application (or directly via SQL) in Table2 is uploaded once, but following modifications (i.e. UPDATE/DELETE) on that row are no longer uploaded?

Or are all modifications to Table2 made from a remote site (i.e., downloaded from the consolidated Table1)?

(10 Jun '14, 10:15) Volker Barth

Sorry, I should have been more clear in my first response.

When dbmlsync scans the transaction log to determine which operations need to be uploaded, it will NOT consider for upload any operations that were made on the database connection that dbmlsync itself uses.

Your trigger actions that keep Table2a and Table2b in sync are happening on the dbmlsync connection, so they are not considered.

(10 Jun '14, 10:16) Reg Domaratzki

That is correct. Only the first synchronization will effect a change on Table3; afterwards, all further modifications are not uploaded. The modifications include an application making changes to Table2 and additional download synchronizations from Table1.

(10 Jun '14, 11:26) SymphiaTech

It appears you may be right. I have made multiple changes in publications (including moving tables to additional publications) and the end result is the same. However, application updates made outside of the dbmlsync connections are also not being uploaded. I do notice a discrepancy between the publication details in the remote's SYSSYNC view and the consolidated's ML_SUBSCRIPTION, including the progress offsets. This is a dev environment to assess MobiLink's capabilities, so there have been many publication updates and syncs. I am going to drop and rebuild the MobiLink setup tables. I will let you know how I fare.

(10 Jun '14, 11:38) SymphiaTech

You are indeed correct, Reg. After rebuilding the MobiLink setup tables, the discrepancies between the remote and consolidated progress offsets were resolved. However, no modifications performed against Table2 are propagated to Table3 (whether from download synchronization or from the application). I will need to build a separate process (perhaps a scheduled copy event), independent of the dbmlsync runs, to update Table2b.

(10 Jun '14, 13:36) SymphiaTech

Any insight on how to perform a download to a remote table and then a subsequent upload to a different table in the consolidated database is welcome! Thanks.

(10 Jun '14, 13:41) SymphiaTech

In case this relates to your new table layout with a "shadow table" Table2b that is filled by triggers:

AFAIK, modifications made by triggers are not included in the default upload. You may have to set the "SendTriggers" extended option to include them. However, that option comes with some important caveats...
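A hedged example of what enabling that option might look like on the dbmlsync command line; the DSN, credentials, and publication name are placeholders:

```
dbmlsync -c "DSN=remote_dsn;UID=dba;PWD=sql" -e "SendTriggers=on" -n pub_upload
```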

(11 Jun '14, 03:34) Volker Barth

You're using dbmlsync in a way that isn't intended. Your situation is similar to trying to define two different consolidated databases for a single remote database.

One possibility to automate the process with dbmlsync would be to have an sp_hook_dbmlsync_end stored procedure that triggers an event. The code for the event would keep table2a and table2b in sync, and because the event fires on a different connection, it won't be filtered from the upload during the next sync. The event can be coded in a few different ways. It could be code that simply looks at the state of tables 2a and 2b and keeps them in sync. Alternatively, you could code a trigger on table2a that populates another table with SQL to execute against table2b to mimic the operations applied during download. The event would then execute the SQL in that table and clear the commands out.
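A rough sketch of that first possibility; the event name and the column names (pk, val) are assumptions, and the reconciliation logic is deliberately simplified:

```sql
-- The event runs on its own connection, so its operations are
-- NOT filtered out of the next dbmlsync upload.
CREATE EVENT sync_shadow
HANDLER
BEGIN
    -- Simplified reconciliation: upsert rows from table2a into table2b.
    INSERT INTO table2b ( pk, val )
    ON EXISTING UPDATE
    SELECT pk, val FROM table2a;
    -- (DELETE handling omitted for brevity)
END;

-- dbmlsync calls this hook automatically at the end of each synchronization
CREATE PROCEDURE sp_hook_dbmlsync_end()
BEGIN
    TRIGGER EVENT sync_shadow;
END;
```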

A second possibility is to define the publication on table2 that uploads data to table3 in the consolidated database using scripted upload. A scripted upload publication does not scan the transaction log, but relies on stored procedures that you write to return result sets that define the upload stream. In this situation, you would likely need triggers on table2 that flagged rows that needed to be uploaded to table3 in the consolidated, so the stored procedure you wrote would be able to figure out what rows to put in the upload.
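A sketch of the flagging approach for the scripted-upload option; the table, column, and procedure names are all assumptions, and the publication definition and script registration are omitted (see the scripted upload documentation for those):

```sql
-- Queue of primary keys whose changes still need uploading
CREATE TABLE upload_queue (
    pk INTEGER PRIMARY KEY,
    op CHAR(1)              -- 'I', 'U' or 'D'
);

CREATE TRIGGER trg_table2_queue
AFTER UPDATE ON Table2
REFERENCING NEW AS n
FOR EACH ROW
BEGIN
    INSERT INTO upload_queue ( pk, op )
    ON EXISTING UPDATE
    VALUES ( n.pk, 'U' );
END;

-- Scripted upload calls a procedure you write; it returns the
-- changed rows as a result set that defines the upload stream.
CREATE PROCEDURE table2_upload_update()
BEGIN
    SELECT t.pk, t.val
      FROM Table2 t JOIN upload_queue q ON q.pk = t.pk
     WHERE q.op = 'U';
END;
```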

A third possibility is to somehow do all this at the consolidated instead. I don't know much about your schema or your business rules, but maybe defining triggers on table1 at the consolidated to update table3 is a simpler solution.
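For the third possibility, a hedged PL/SQL sketch on the consolidated side; the schema names (schema1, schema2) and columns (pk, val) are placeholders:

```sql
CREATE OR REPLACE TRIGGER schema1.trg_table1_to_table3
AFTER INSERT OR UPDATE ON schema1.Table1
FOR EACH ROW
BEGIN
    -- Upsert the changed row into Table3 in the other schema
    MERGE INTO schema2.Table3 t3
    USING ( SELECT :NEW.pk AS pk, :NEW.val AS val FROM dual ) src
       ON ( t3.pk = src.pk )
     WHEN MATCHED THEN UPDATE SET t3.val = src.val
     WHEN NOT MATCHED THEN INSERT ( pk, val ) VALUES ( src.pk, src.val );
END;
/
```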

(11 Jun '14, 10:04) Reg Domaratzki

@Reg: Aside: IIRC, with SQL Remote, one possibility to force SQL Remote to replicate data modifications that are made by dbremote itself is to use hook procedures, in a fashion similar to your first possibility: within the normal applying of messages, "note" those operations (say, via a trigger) that need to be "replicated back", and then use an sp_hook_dbremote_receive_end hook to execute those operations with a delay, so they become part of the transactions to send...

Do MobiLink hook procedures offer the same possibility without having to use an event? (If so, I guess I would prefer such a solution, as it would avoid the asynchronous and therefore less controllable nature of events...)

(11 Jun '14, 11:07) Volker Barth

@Reg: I already had audit tables against table2a. They came in handy as I leveraged those in updating table2b with a scheduled event. It seems to work well. However, using sp_hook_dbmlsync_end seems like a more elegant approach. I will revisit the implementation when I have more cycles.

I have to say that you have been very helpful and very timely with your responses. I have appreciated all the help. Thanks.

(12 Jun '14, 10:54) SymphiaTech

Using a scheduled copy event that implements Table2a transactions against Table2b seems to resolve my issue.

@Volker: thanks for your input!

(12 Jun '14, 10:57) SymphiaTech