I have a large replication system with over 2,000 remotes. I need to make a massive update to one of the tables in the system, which would generate well over 5,000 messages to process. That is not realistic, but I am pretty sure I have a solution and would like confirmation.
The table I am dealing with is set up for one-way replication to 99% of the remotes: it is maintained by a handful of users, and its data is replicated out to all the remotes.
The solution: We have an application that replicates the database and then mandates an update to the program files. Within this update we will send a data file containing all 1 million rows of the table. A script will run a TRUNCATE TABLE BigTable followed by LOAD TABLE BigTable.... Since these statements are non-replicatable, I believe this will work. Because the app mandates replication prior to this update, I am confident that BigTable on the remotes will be identical to BigTable on the consolidated, with a clean messaging system for this table. Also, during this process there will be no data changes to BigTable on any of the databases (remotes or consolidated) until it is confirmed that the bulk load has completed on all of them.
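For what it's worth, a minimal sketch of what such a remote-side refresh script might look like (the file name is hypothetical). Both statements are minimally logged in SQL Anywhere - TRUNCATE TABLE's fast form does not log individual row deletes, and LOAD TABLE by default does not write the loaded rows to the transaction log - which is why dbremote should not pick them up:

```sql
-- Refresh BigTable from the shipped data file.
-- Neither statement writes per-row entries to the transaction log,
-- so SQL Remote will not replicate these operations.
TRUNCATE TABLE BigTable;
LOAD TABLE BigTable FROM 'bigtable.dat';
```

Note that because the individual rows are not logged, recovery and replication both depend on the data file still being available if the log has to be replayed.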
The Question: Am I missing anything with this approach...
asked 12 Feb '14, 13:01
It's been several years since I've dealt with dbremote, but I've done a lot of manual re-syncs while I was doing it.
I'm assuming the remotes will break the replication link once the truncate/load table command gets replicated down. Your program is then intending to manually load the table, then "reset the replication tape" back to the consolidated transaction log offset directly following the LOAD TABLE command? This may work as long as all the tables involved in replication are brought into the correct state at that precise transaction log offset.
If there are any changes made to the consolidated database after that offset, then those transactions will be found and applied once replication is brought up again (assuming the logs are still available).
Test the heck out of this, however you can. It's really hard to remotely bring two databases back into sync when you don't have access to both of them.
answered 12 Feb '14, 17:17
I do not fully understand your requirements, and besides all thoughts and planning, I would surely recommend testing this heavily in a test environment.
CORRECTION: The following paragraphs are completely bogus; it's really the other way around: SQL Remote hooks can be used to modify data within the SQL Remote run, and those modifications will be replicated back to the sender... - i.e. you can use a hook like sp_hook_dbremote_receive_end to update a row that has just been inserted by SQL Remote's receiving phase, and the update will be replicated as well, i.e. the UPDATE operation will be part of the next sending phase. That would normally only happen on the consolidated in case of update conflicts.
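To illustrate the corrected behaviour, here's a hedged sketch of such a hook (table and column names are made up). Any DML the hook performs is logged normally, so it becomes part of the next sending phase:

```sql
-- Hypothetical hook: runs after dbremote's receive phase in the same run.
-- The UPDATE below is logged like any other DML, so it will be
-- replicated back out during the next send phase.
CREATE PROCEDURE sp_hook_dbremote_receive_end()
BEGIN
    UPDATE BigTable           -- hypothetical table
       SET row_status = 'acknowledged'   -- hypothetical column
     WHERE row_status = 'new';
END;
```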
At least that's my understanding - well, I guess my memory has been restored partially:)
So, apparently SQL Remote hooks won't help here in the way I had thought of.
A different approach would be to incorporate the massive data update into the SQL Remote run via PASSTHROUGH MODE:
AFAIK - and I hope my memory does not fail this time - DML operations in a SQL Remote passthrough session will be part of the transaction log but will not be replicated back to the sender. Therefore I guess you would not depend here on the nasty "delete/load mustn't write to the log" problem.
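A minimal sketch of what such a passthrough session at the consolidated might look like (subscriber name and file path are hypothetical). With PASSTHROUGH ONLY, the statements are not executed locally but are sent as-is in the message stream and executed at the listed subscribers:

```sql
-- Send the refresh statements to a remote via the message stream.
-- ONLY = do not execute these statements at the consolidated itself.
PASSTHROUGH ONLY FOR "remote_user1";
TRUNCATE TABLE BigTable;
LOAD TABLE BigTable FROM 'bigtable.dat';
PASSTHROUGH STOP;
```

The obvious caveat is that the data file referenced by LOAD TABLE must already exist at each remote when the passthrough statements are applied there.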
If the data file is not available on site, you might also use a web client procedure or the like to load the data from a web resource - it's quite easy IMHO with SQL Anywhere.
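For example, a hedged sketch of a SQL Anywhere web client procedure (procedure name and URL are hypothetical) - calling it returns the HTTP response as a result set, which a wrapper could then write out to the file LOAD TABLE expects:

```sql
-- Hypothetical web client procedure: fetches the data file over HTTP.
-- The response body comes back as a result set when the procedure is called.
CREATE PROCEDURE DownloadBigTableData()
URL 'http://central.example.com/updates/bigtable.dat'
TYPE 'HTTP:GET';

-- Usage:
CALL DownloadBigTableData();
```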
Aside: We have traditionally tried to decouple database schema updates from application updates, by incorporating the schema updates into the message stream via passthrough mode and by making sure a new program version cannot be used until the schema upgrade has taken place. That has avoided the painful need to "pause the system until all users have upgraded the application". However, it obviously requires that the new database schema is compatible with the current application. - The above suggestion would follow that update principle.