
I have a large replication system with over 2000 remotes. I need to make a massive update on one of the tables in the system, which would generate well over 5000 messages to process. That is not realistic, but I am pretty sure I have a solution; I would like confirmation.

The table I am dealing with is set up for one-way replication to 99% of the remotes. It is maintained by a handful of users, and the data is replicated to all the remotes.

The solution: We have an application that replicates the database and then mandates an update to the program files. Within this update we will send a data file containing all 1 million rows of the table. A script will do a TRUNCATE TABLE BIGTABLE followed by a LOAD TABLE BIGTABLE.... Since these statements are non-replicatable, I believe this will work. Basically, since the app mandates the replication prior to this update, I am confident that BIGTABLE in the remotes will be identical to BIGTABLE in the consolidated, with a clean messaging system for this table. Also, during this process there will be no data changes to BIGTABLE on any of the databases (remotes or consolidated) until it is confirmed that the bulk load has completed on all databases.
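
To make the plan concrete, here is a minimal sketch of such a script, assuming the data file ships with the program-file update (the file path is a placeholder, and the logging clause is my assumption - see the LOAD TABLE discussion in the answers below):

    -- Runs at each remote after the mandated replication pass.
    -- Assumes c:\update\bigtable.dat was unloaded from the consolidated.
    TRUNCATE TABLE BIGTABLE;

    -- WITH FILE NAME LOGGING records only the statement, not the rows,
    -- in the transaction log (verify against your version's defaults).
    LOAD TABLE BIGTABLE
        FROM 'c:\\update\\bigtable.dat'
        WITH FILE NAME LOGGING;
    COMMIT;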

The Question: Am I missing anything in this approach?

asked 12 Feb '14, 13:01

TPS

Have you considered STOP and START SUBSCRIPTION?
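
A rough sketch of that idea, with hypothetical publication and remote user names (run at the consolidated database):

    -- Suspend replication of the publication for this remote user,
    -- reload the table out-of-band, then re-establish the subscription.
    STOP SUBSCRIPTION TO bigtable_pub FOR remote_user1;
    -- ... truncate/reload at the remote by other means ...
    START SUBSCRIPTION TO bigtable_pub FOR remote_user1;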

(13 Feb '14, 08:38) Breck Carter

It's been several years since I dealt with dbremote, but I did a lot of manual re-syncs back when I was working with it.

I'm assuming the remotes will break the replication link once the truncate/load table command gets replicated down. Your program is then intending to manually load the table, then "reset the replication tape" back to the consolidated transaction log offset directly following the LOAD TABLE command? This may work as long as all the tables involved in replication are brought into the correct state at that precise transaction log offset.

If there are any changes made to the consolidated database after that offset, then those transactions will be found and applied once replication is brought up again (assuming the logs are still available).

Test the heck out of this, however you can. It's really hard to remotely bring two databases back into sync when you don't have access to both of them.
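
If the offsets do have to be realigned manually, one statement worth a look is REMOTE RESET, which restarts all subscriptions for a remote user at the current log offset. A hedged sketch, with a placeholder user name:

    -- Run at the consolidated database for the remote user in question.
    -- Restarts all of that user's subscriptions at the current log
    -- offset; note that REMOTE RESET does not commit on its own.
    REMOTE RESET remote_user1;
    COMMIT;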

answered 12 Feb '14, 17:17

Erik Anderson

Am I correct that the TRUNCATE TABLE and LOAD TABLE commands do not create any entries in the transaction logs? This is critical for my solution. If that is the case, I feel confident that what I plan to do will work. All databases that can make updates to the table I am trying to manually sync are shut down, so this table is in a static mode. The remotes that have this table in a one-way replication scheme will replicate to make sure that all transactions related to this table are up to date. After this replication occurs, I will do the TRUNCATE TABLE followed by the LOAD TABLE command (loading data from a dump of the table from the consolidated database). Replication for the remotes will never be turned off, since I do not think the log offsets would be impacted by the TRUNCATE and LOAD TABLE statements...
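
For the dump from the consolidated, something along these lines could be used (the file name is a placeholder):

    -- Run on the consolidated database; writes the table's rows to a
    -- data file that the remotes can later LOAD TABLE from.
    UNLOAD TABLE BIGTABLE TO 'c:\\consolidated\\bigtable.dat';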

(12 Feb '14, 19:28) TPS

This is what I'm seeing in my v12.0.1 manual:

For TRUNCATE:

...the row deletions are not entered into the transaction log and therefore are not synchronized or replicated. This can lead to inconsistencies that can cause synchronization or replication to fail.

For LOAD TABLE:

WITH { FILE NAME | ROW | CONTENT } LOGGING Use this clause to control the level of detail logged in the transaction log during a load operation. The levels of logging are as follows:

WITH FILE NAME LOGGING clause The WITH FILE NAME LOGGING clause causes only the LOAD TABLE statement to be recorded in the transaction log. To guarantee consistent results when the transaction log is used during recovery, the file used for the original load operation must be present in its original location, and must contain the original data. This level of logging has the best performance; however, you should not use it if your database is involved in mirroring or synchronization. Also, this level can not be used when loading from an expression or a client file.

When you do not specify a logging level in the LOAD TABLE statement, WITH ROW LOGGING is the default level when specifying:

  • FROM filename-expression
  • USING FILE filename-expression

WITH ROW LOGGING clause The WITH ROW LOGGING clause causes each row that is loaded to be recorded in the transaction log as an INSERT statement. This level of logging is recommended for databases involved in synchronization and is the default for database mirroring when using FROM filename-expression or USING FILE filename-expression. However, when loading large amounts of data, this logging type can impact performance, and results in a much longer transaction log.

This level is also ideal for databases where the table being loaded into contains non-deterministic values, such as computed columns, or CURRENT TIMESTAMP defaults. If there are no non-deterministic values, WITH CONTENT LOGGING likely results in better performance.

WITH CONTENT LOGGING clause The WITH CONTENT LOGGING clause causes the database server to copy the input file to the transaction log in chunks. These chunks can be reconstituted into a copy of the input file later, for example during recovery from the transaction log. When loading large amounts of data, this logging type has a very low impact on performance, and offers increased data protection, but it does result in a longer transaction log. This level of logging is recommended for databases involved in mirroring, or where it is desirable to not maintain the original data files for later recovery provided there are no non-deterministic values.

The WITH CONTENT LOGGING clause cannot be used if the database is involved in synchronization. The WITH CONTENT LOGGING clause is required if the table is being loaded from a client file.

When you do not specify a logging level in the LOAD TABLE statement, WITH CONTENT LOGGING is the default level when specifying:

  • USING CLIENT FILE client-filename-expression
  • USING VALUE value-expression
  • USING COLUMN column-expression
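
For reference, here are the three quoted levels spelled out as statements (table and file names are placeholders):

    -- Only the statement is logged; the original data file must stay
    -- available for recovery from the transaction log.
    LOAD TABLE BIGTABLE FROM 'bigtable.dat' WITH FILE NAME LOGGING;

    -- Every loaded row is logged as an INSERT (the default for
    -- FROM filename-expression and USING FILE).
    LOAD TABLE BIGTABLE FROM 'bigtable.dat' WITH ROW LOGGING;

    -- The input file is copied into the transaction log in chunks
    -- (not allowed for databases involved in synchronization).
    LOAD TABLE BIGTABLE FROM 'bigtable.dat' WITH CONTENT LOGGING;
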
(12 Feb '14, 22:10) Erik Anderson

The docs are somewhat inconsistent w.r.t. TRUNCATE TABLE, as this can be done internally in both a "fast form" and a "slow form".

For the "fast form", the cited doc page says:

If a fast truncation is carried out, individual DELETEs are not recorded in the transaction log, and a COMMIT is carried out before and after the operation. Fast truncation cannot be used within snapshot transactions.

In my understanding, that may mean that a "slow truncation", in contrast, might be recorded in the transaction log.

In other paragraphs of the same page (like the one Erik has quoted), that restriction to "fast truncation" is not mentioned.
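
One way to find out for a given case: run the statement against a test database, then translate its transaction log with the dbtran utility and check whether the operations show up in the generated SQL (file names are placeholders):

    dbtran test.log test.sql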

(13 Feb '14, 03:43) Volker Barth

I do not fully understand your requirements, and besides all thoughts and planning, I would surely recommend testing this heavily in a test environment.

CORRECTION: The following paragraphs are completely bogus; it's really the other way around: SQL Remote hooks can be used to modify data within the SQL Remote run that will be replicated back to the sender... I.e., you can use a hook like sp_hook_dbremote_receive_end to update a new row that has just been inserted by SQL Remote's receiving phase, and the update will be replicated as well, i.e. the UPDATE operation will be part of the next sending phase. That would normally only happen on the consolidated in case of update conflicts.

Besides that, I'd like to note that [SQL Remote hook procedures](http://dcx.sybase.com/index.html#1201/en/sqlremote/client-event-hook-ml-ref.html) can be used to let SQL statements run remotely that do not get replicated - i.e. if you have a published table MyTable, you can use a hook procedure like "sp_hook_dbremote_message_apply_end" to do DML statements on MyTable that will not get replicated back to the consolidated.

At least that's my understanding - well, I guess my memory has been restored partially:)

So, just in case the TRUNCATE TABLE/LOAD TABLE approach won't work as expected (i.e. it does write the operations to the TL - I am not sure whether it does or not), you might be able to use it within a hook procedure, so that the corresponding operations in the TL will not be replicated. Needless to say, that approach needs intense testing as well.

So, apparently SQL Remote hooks won't help here in the way I had thought of.

answered 13 Feb '14, 04:05

Volker Barth

edited 13 Feb '14, 05:45

A different approach would be to incorporate the massive data update into the SQL Remote run via PASSTHROUGH MODE:

  • Make sure the data file is available at the remote site (via your application update).
  • Use passthrough mode to create a stored procedure, call that procedure once and drop it afterwards.
  • Within that procedure, truncate the table and load the data from the data file into the table.

AFAIK - and I hope my memory does not fail this time - DML operations in a SQL Remote passthrough session will be part of the transaction log but will not be replicated back to the sender. Therefore I guess here you would not be dependent on the nasty "delete/load mustn't write to the log" problem.

For step 1, if the data file is not available on site, you might also use a web client procedure or the like within the suggested procedure to load the data from a web resource - it's quite easy IMHO with SQL Anywhere.
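
A sketch of steps 2 and 3, with placeholder names (I use PASSTHROUGH ONLY here so the statements are sent to the remotes without also being executed on the consolidated - verify the exact passthrough handling against the SQL Remote docs):

    -- Run at the consolidated; everything until PASSTHROUGH STOP is
    -- sent through the message stream to the listed remote users.
    PASSTHROUGH ONLY FOR remote_user1, remote_user2;

    CREATE PROCEDURE reload_bigtable()
    BEGIN
        TRUNCATE TABLE BIGTABLE;
        LOAD TABLE BIGTABLE FROM 'c:\\update\\bigtable.dat';
        COMMIT;
    END;

    CALL reload_bigtable();
    DROP PROCEDURE reload_bigtable;

    PASSTHROUGH STOP;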


Aside: We have traditionally tried to de-couple database schema updates from application updates, by incorporating the schema updates into the message stream via passthrough mode and by making sure a new program version cannot be used until the schema upgrade has taken place. That has avoided the painful need to "pause the system until all users have upgraded the application". However, it obviously requires that the new database schema is compatible with the current application. The above suggestion would follow that update principle.

answered 13 Feb '14, 10:10

Volker Barth

edited 13 Feb '14, 10:11
