...under those certain circumstances, I should say... I just ran into a strange replication issue: we use SQL Remote 8.0.3 for a sales force application and generate PKs with GLOBAL AUTOINCREMENT. When extracting remotes, we use a customized reload.sql file. AFAIK, the max values for global autoincrement columns in SYSCOLUMN.max_identity are initially 0 and get updated by the corresponding LOAD TABLE statements.

When re-extracting remotes, we use a custom tool that allows re-using the GLOBAL DATABASE ID, which is what we usually want. Generally, that works fine. There's only a small problem with the re-organisation of sales persons:

Say we have a sales person S1 whose data is part of remote1 using DbId 1. Consequently, the data entries for S1 are entered with PKs in the DbId 1 range. If S1 now moves to a different sales group (and uses remote2 from then on), and we then re-extract remote1, the data of S1 won't get extracted for remote1. As stated above, the max_identity values will be taken from the extracted data. Now imagine that the consolidated contains a table in which the max PK (say 1000) for the DbId 1 range was entered by S1. That PK value won't be extracted to remote1 and therefore won't be used as max_identity in remote1. As a consequence, remote1 might now generate PK values for that table starting with, say, 800. Those PKs are fine in remote1 but will lead to PK conflicts when inserted into the consolidated.

Question: How can I avoid this unwanted situation while still re-using GLOBAL DATABASE IDs when re-extracting remotes?
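For readers unfamiliar with the mechanism, here is a minimal setup sketch (the table, column names, and the partition size of 1000 are made-up illustrations, not our schema):

```sql
-- Each database gets a distinct ID; with a partition size of 1000,
-- database ID n owns the key values n*1000+1 .. (n+1)*1000.
SET OPTION PUBLIC.global_database_id = '1';

CREATE TABLE customer (
   pk   BIGINT DEFAULT GLOBAL AUTOINCREMENT( 1000 ) PRIMARY KEY,
   name VARCHAR( 40 ) );

-- The first INSERT on this database generates pk = 1001; the counter
-- behind it is what SYSCOLUMN.max_identity tracks.
INSERT INTO customer( name ) VALUES( 'some customer of S1' );
```

So DbId 1 is expected to generate only values in 1001..2000, and the problem above arises when the counter is rebuilt from extracted data that no longer contains the highest value ever issued in that range.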
After studying several newsgroup threads (and finding it sooooo hard to search for several keywords), the following seems to be a solution (cf. the thread "Inconsistent autoincrement behavior" in the general newsgroup, particularly Breck's answer from 24 Oct 2007):

As part of the customized re-extraction, one could check for the maximum PK value in the particular GLOBAL DATABASE ID range of all consolidated tables that get extracted. Then one could include sa_reset_identity() calls in the reload.sql script (after the LOAD TABLE parts) and set the max_identity columns accordingly. That way it can be guaranteed that the remote won't re-use PK values that have been generated before, even if the relevant rows don't "belong" to that remote anymore.

Speaking of sooooo hard to search the newsgroups... is that port coming anytime soon?

@Jen: I don't know, I'm just an ordinary SQLA user. You might want to post that as a separate question here...
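For a single table, the fix can be sketched like this (the table name "customer", owner "DBA", DbId 1, and the partition size of 1000 are assumptions for illustration):

```sql
-- On the consolidated, while re-extracting the remote with DbId 1
-- (partition size 1000 assumed, so DbId 1 owns PK values 1001..2000):
-- find the highest key ever issued in that remote's range.
SELECT MAX( pk )
  FROM DBA.customer
 WHERE pk BETWEEN 1001 AND 2000;

-- If that returns, say, 1974, emit the following into the remote's
-- reload.sql AFTER the LOAD TABLE statements, so the remote's counter
-- starts beyond every value it has ever generated:
CALL sa_reset_identity( 'customer', 'DBA', 1974 );
```

The point is that the range maximum is computed from the consolidated's full data set, not just from the rows that happen to be extracted for that remote.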
FWIW, just some sample code I have found useful: I have added a stored procedure and a function to our consolidated (somewhat different from these simplified samples) that can be used to list the maximum PK values for a particular GlobalDBId and to create an appropriate script with sa_reset_identity() calls. Such a script can then easily be integrated into the reload phase, possibly via a READ statement in the customized reload.sql file. It runs both on ASA 8.0.3.5574 and SA 12.0.0.2566. Note: the system catalog is accessed through the ASA 8 base system tables - AFAIK they are preserved as compatibility views in SA 10 and above. Assumptions:
Feel free to correct/comment:)
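As a rough sketch of the idea (this is NOT my original code: the procedure name, the DBA owner, the fixed PK column name "pk", and the output file name are all assumptions), such a generator could look like:

```sql
-- Sketch only: builds sa_reset_identity() calls for all DBA base tables,
-- assuming each table's GLOBAL AUTOINCREMENT PK column is named "pk".
-- @dbid and @partsize define the remote's key range, e.g. DbId 1 with
-- a partition size of 1000 owns the values 1001..2000.
CREATE PROCEDURE MakeResetIdentityScript(
   IN @dbid     INTEGER,
   IN @partsize BIGINT )
BEGIN
   DECLARE @low  BIGINT;
   DECLARE @high BIGINT;
   DECLARE LOCAL TEMPORARY TABLE ScriptLine (
      line_no INTEGER DEFAULT AUTOINCREMENT,
      line    LONG VARCHAR )
      NOT TRANSACTIONAL;

   SET @low  = @dbid * @partsize + 1;
   SET @high = ( @dbid + 1 ) * @partsize;

   -- connection-scope variable so the dynamic SELECT ... INTO can see it
   CREATE VARIABLE @maxpk BIGINT;

   FOR lp AS cr CURSOR FOR
      SELECT table_name AS tn
        FROM SYS.SYSTABLE
       WHERE creator = USER_ID( 'DBA' )
         AND table_type = 'BASE'
   DO
      EXECUTE IMMEDIATE
         'SELECT MAX( pk ) INTO @maxpk FROM DBA."' || tn ||
         '" WHERE pk BETWEEN ' || @low || ' AND ' || @high;
      IF @maxpk IS NOT NULL THEN
         INSERT INTO ScriptLine( line )
         VALUES( 'CALL sa_reset_identity( ''' || tn ||
                 ''', ''DBA'', ' || @maxpk || ' );' );
      END IF;
   END FOR;

   -- UNLOAD writes the line delimiter native to the server's platform
   UNLOAD SELECT line FROM ScriptLine ORDER BY line_no
      TO 'ResetIdentity.sql' QUOTES OFF ESCAPES OFF;

   DROP VARIABLE @maxpk;
END;
```

My real version differs (it also handles tables whose PK column is named differently, via the catalog), but the overall shape is the same.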
Generates a file like:
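For illustration (table names and values are made up, here for DbId 1):

```sql
CALL sa_reset_identity( 'customer', 'DBA', 1974 );
CALL sa_reset_identity( 'order_hdr', 'DBA', 1811 );
CALL sa_reset_identity( 'order_item', 'DBA', 1999 );
```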
Just to add: The sample could use some more comments, I guess, but for the moment I'd like to publish it AS IS... Another addition: I used a local temporary table to gather the string output (instead of concatenating lines and using xp_writefile()) simply because the UNLOAD SELECT statement seems to be the only one that handles newlines platform-independently, cf. this question: http://sqlanywhere-forum.sap.com/questions/1127.
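That output step alone looks like this (the temp table ScriptLine and the file name are assumptions from the sketch above):

```sql
-- ScriptLine holds one generated statement per row; UNLOAD emits the
-- line delimiter native to the server platform, which manual string
-- concatenation plus xp_writefile() would not do for you.
UNLOAD
   SELECT line FROM ScriptLine ORDER BY line_no
   TO 'ResetIdentity.sql'
   QUOTES OFF ESCAPES OFF;
```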