
We have an issue with our main consolidated database, which means we need to rebuild it from the remote sites' data and then re-extract the remote sites.

I plan to copy the remote site DBs over to our consolidated server, then unload and reload them into the new copy of the consolidated DB. My question is: can this be done in one operation? Or can I run a command to unload the remote DB data and then load it into the new copy of the consolidated DB?

Also, will I need to copy the log files from the remote sites to complete this operation?

Many thanks from a stressed guy !

Mark

asked 05 Nov, 04:20

ThelmaCottage

edited 05 Nov, 06:01

Volker Barth

We have an issue with our main consolidated database which means we need to re-build it

Just to clarify: You cannot restore from a backup of the consolidated and then "just re-apply" missing changes/messages from remotes that have sync'ed/replicated in between?

Is that a SQL Remote or MobiLink setup?

Note, I have no "quick" answer to your rebuild plans...

(05 Nov, 04:34) Volker Barth

Yes, that's correct. I have to rebuild the consolidated from the remote site DBs (we have SQL Remote).

My plan was to stop the remote site DB and copy it over to the CONSOL server. Then unload the remote site DB into a data file and load this data file into the CONSOL DB. Then extract the remote site USER DB, copy it over to the remote site and start it up. This should then begin replicating.

I wanted to know if there was a way of unloading a remote site DB and then loading the data into CONSOL in one hit?

Thanks in advance

Mark (still stressed !)

(05 Nov, 05:30) ThelmaCottage

Well, usually DBUNLOAD -ac -d can be used to unload into an existing database with identical schema in one go, however when building a new consolidated from a bunch of SQL Remotes I see several issues:

  • I don't know how SQL Remote-specific values (i.e. the contents of the corresponding system tables, or the "max_identity" values for GLOBAL AUTOINCREMENT columns) are handled here.
  • Unless your remotes each contain fully distinct data sets, you will have problems with duplicate data when simply reloading from all remotes: if a row has been shared between more than one remote, the unload from the second remote and the underlying LOAD TABLE will fail with a duplicate key violation.
  • Statistics etc. may be wrong.
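For reference, a one-step unload/reload would look roughly like the sketch below. All server names, paths and credentials here are made up, so adjust to your setup; -d unloads data only, and -ac reloads directly into an existing database in the same run.

```shell
REM Sketch only (names, paths and credentials are placeholders):
REM unload the data of a remote .db and reload it straight into
REM the running consolidated in one operation.
dbunload -c "UID=dba;PWD=sql;DBF=c:\rebuild\store01.db" -d ^
         -ac "UID=dba;PWD=sql;SERVER=consol;DBN=consol"
```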

Just a few thoughts. Sorry, I do assume you are not only stressed but somewhat running out of time, so I hope the real SQL Remote experts like Reg have time to help you further.

(05 Nov, 05:56) Volker Barth

Each remote DB has unique data and keys, so that shouldn't be too much of a problem.

I presume any record that has an autoincrement value will keep the value from the originally stored data?

Mark

(05 Nov, 06:06) ThelmaCottage

Also, do I need to bring the LOG files across from the remote sites? (Some of them are very large and take time to copy.) Can I just run the DB on its own with no log file while I unload/reload?

Thanks

Mark

(05 Nov, 06:09) ThelmaCottage

When you are sure that these remotes must be re-extracted, you can cleanly shut them down and just copy and use the .db files to rebuild your consolidated.
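Something like the following (server names and paths below are placeholders, and the copy assumes a Windows share):

```shell
REM Sketch: cleanly shut down the remote's database server, then copy
REM only the .db file (not the log) across for the rebuild.
dbstop -y -c "UID=dba;PWD=sql;SERVER=store02"
copy \\store02\sdata\store02.db c:\rebuild\store02.db
```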

(05 Nov, 06:27) Volker Barth

I presume any record that has an autoincrement value will keep the value from the originally stored data?

Yes, as these rows have already got their autoincrement value, that value will be kept. (Aside: DBUNLOAD uses LOAD TABLE with the default clause DEFAULTS OFF, so a new default value would not be set even if the corresponding column were missing from the optional LOAD TABLE column list.)
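To illustrate (table and file names are made up): this is the kind of statement DBUNLOAD generates, and DEFAULTS OFF is what preserves the stored autoincrement values.

```shell
REM Sketch: reload one table's unloaded data file into the consolidated;
REM DEFAULTS OFF keeps the values already present in the data file.
dbisql -c "UID=dba;PWD=sql;SERVER=consol;DBN=consol" ^
  "LOAD TABLE sales FROM 'c:/unload/sales.dat' DEFAULTS OFF"
```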

(05 Nov, 06:33) Volker Barth

Okay, I'm just about to run some tests on some copies to see if it works ok.

Thanks, Mark (a little less stressed, 90%!)

(05 Nov, 06:51) ThelmaCottage

I presume I can run the new version of the consolidated with DBEng so no one can connect to it while we rebuild?

Can I just delete the old log files before we start the rebuild?

Mark

(05 Nov, 07:04) ThelmaCottage

so no one can connect to it

No one except from the same machine...
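I.e. the personal server only accepts same-machine connections, so (paths and the server name below are assumed):

```shell
REM Sketch: run the consolidated under the personal server (dbeng17),
REM which, unlike dbsrv17, accepts connections from this machine only.
dbeng17 -n consol c:\sdata\consol.db
```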

Can I just delete the old log files before we start the rebuild?

What files do you mean?

(05 Nov, 07:42) Volker Barth

The old consolidated .log files

(05 Nov, 08:17) ThelmaCottage

Well, if you build a new consolidated database, it will have its own series of transaction logs and won't be able to use older logs, so for replication purposes they are no longer needed.

However, in case your consolidated is not just a collector of remote data but does also have "its own data", you might check whether the old logs contain data that is not yet part of your new consolidated. For that purpose you might translate the old logs (possibly with the DBTRAN "-is Local" option) to check whether there are usable DML statements to run against the new database.
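Roughly like this (file names are placeholders):

```shell
REM Sketch: translate an old log into SQL, keeping only operations that
REM originated locally on the consolidated (not those replicated in).
dbtran -is Local c:\oldlogs\consol_old.log c:\oldlogs\consol_local.sql
```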

That being said, I would generally not immediately delete old logs but move them to a secure place, just in case...

Just my two cents, anyway.

(06 Nov, 03:23) Volker Barth

Hi Volker

I have extracted the first remote site, and when I try to connect at the remote site it comes up with the message below. We extracted the DB on a 17.0.7.3390 build - the remote site is 17.0.9.4803.

Thanks

[SAP][ODBC Driver][SQL Anywhere]Unable to start specified database: Server must be upgraded to start database C:\sdata\store08.db SQLCODE=-934 SQLSTATE=08001

(06 Nov, 08:29) ThelmaCottage

Hm, I would ask that as a new question...

FWIW, I would expect that error message under switched conditions, i.e. when the database file version is newer than the database server version.

You may have a look at the SYSHISTORY entries for that database (checked on a database server that can run it, apparently...).
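Something like this should show which server versions have started that file (credentials and path are placeholders, and the column list is from memory, so check against your docs):

```shell
REM Sketch: list the server versions that have touched this database.
dbisql -c "UID=dba;PWD=sql;DBF=c:\sdata\store08.db" ^
  "SELECT operation, version, platform, last_time FROM SYS.SYSHISTORY"
```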

(06 Nov, 08:54) Volker Barth

Okay, solved this - it was a PATH issue. We upgraded to the 64-bit version from .0.7 -> .0.9 and it was still looking at the old (x86) install.

(06 Nov, 10:05) ThelmaCottage

If I'm running dbeng and no one is connected, can I run extract users at a level lower than 3?

(07 Nov, 06:57) ThelmaCottage

Yes, isolation level 0 is sufficient if there are no connections that modify the contents of the published data during the extraction.
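If memory serves, the extraction utility takes an isolation-level switch, so an extraction at level 0 would look roughly like this (names, paths and the -l switch itself should be double-checked against your docs):

```shell
REM Sketch: extract remote user STORE02 at isolation level 0 into
REM c:\extract\store02 while the personal server is otherwise idle.
dbxtract -v -l 0 -c "UID=dba;PWD=sql;SERVER=consol;DBN=consol" ^
  c:\extract\store02 STORE02
```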

(07 Nov, 08:03) Volker Barth

Hi Volker

We have now extracted all users (apart from some less urgent ones). However, when I fire up the SQL Remote service on the consolidated, I get an error.

Not sure what to do here. All the reloads and extractions seem to work fine, and the remote sites are all working (including their SQL Remote services).

HELP

Mark

I. 2018-11-10 08:36:44. 
I. 2018-11-10 08:36:45. Scanning logs starting at offset 01178032419905
I. 2018-11-10 08:36:45. Transaction log "c:/sdata_log/consol.log" starts at offset 01178199173531
I. 2018-11-10 08:36:45. Processing transaction logs from directory "c:\sdata_log"
I. 2018-11-10 08:36:45. Retrieving offline transaction logs through the engine from "c:/sdata_log/"
E. 2018-11-10 08:36:45. No off-line transaction log file found and on-line transaction log starts at offset 01178199173531.
I. 2018-11-10 08:36:45. We are looking for a transaction log file starting at offset 01178032419905.
I. 2018-11-10 08:36:46. Execution completed
(10 Nov, 03:46) ThelmaCottage

I think I have made a mistake (see the post a bit further on) - I cleared the old consol DB of all data, did a dbreload, and then deleted the old log files by mistake. I then continued reloading data into the consolidated, then extracted all users.

However, when I now try to start dbremote it gives me an error - I guess it's still looking for the old files. Is there any way I can stop it looking for the older log files and just use the current one?

Otherwise I might have to start all over again ?

I. 2018-11-10 08:36:44. 
I. 2018-11-10 08:36:45. Scanning logs starting at offset 01178032419905
I. 2018-11-10 08:36:45. Transaction log "c:/sdata_log/consol.log" starts at offset 01178199173531
I. 2018-11-10 08:36:45. Processing transaction logs from directory "c:\sdata_log"
I. 2018-11-10 08:36:45. Retrieving offline transaction logs through the engine from "c:/sdata_log/"
E. 2018-11-10 08:36:45. No off-line transaction log file found and on-line transaction log starts at offset 01178199173531.
I. 2018-11-10 08:36:45. We are looking for a transaction log file starting at offset 01178032419905.
I. 2018-11-10 08:36:46. Execution completed
(11 Nov, 06:03) ThelmaCottage

I cleared the old consol db of all data, did a dbreload and then deleted the old log files by mistake. I then continued reloading data into the consolidated, then extracted all users.

Mark, so you did not create a new consolidated database but just deleted old data within the same database and entered new data? (If so, that's not commonly called a "reload" - A "reload" usually means you create a new database and fill it with schema and data of the previous one via dbunload.)

Well, if you build a new consolidated database, it will have its own series of transaction logs and won't be able to use older logs, so for replication purposes they are no longer needed.

That's what I had written some comments ago, clearly relating to two separate database instances, an old and a new one. By that I meant the new one had been freshly created and therefore has its "own" series of log files.

In case you simply keep using the original database with freshly reloaded data, then you need to retain the old logs until they are no longer needed by DBREMOTE. A comparison of the contents of the SYSREMOTEUSER table and the starting and ending offsets of your old and current log files should help to ensure that all needed logs are available.


Aside: I guess it would be way more helpful to contact technical support for further questions.

(11 Nov, 14:30) Volker Barth

Yes, I kept the original database. Is there any way I can get dbremote to ignore the old log files and just use the current one ?

(12 Nov, 03:23) ThelmaCottage

Well, if DBREMOTE asks for a particular old log file, it usually needs it because the log offsets from the old file have not yet been confirmed by all remotes.

As stated, you should check the contents of SYSREMOTEUSER. I'm quite sure the lowest value for "confirm_sent" is below the start offset of your current log (01178199173531 as to your snippet). If so, DBREMOTE will need an older log with that offset.
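I.e. something along these lines (credentials are placeholders; the offset is the current log's start offset from your DBREMOTE output):

```shell
REM Sketch: list remotes whose confirm_sent offset predates the start
REM of the current transaction log - these are the ones DBREMOTE still
REM needs old logs for.
dbisql -c "UID=dba;PWD=sql;SERVER=consol;DBN=consol" ^
  "SELECT address, log_sent, confirm_sent FROM SYS.SYSREMOTEUSER WHERE confirm_sent < 1178199173531 ORDER BY confirm_sent"
```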

If that is the case and you don't have the older log(s) available (note: I recommended to "move them", not to delete them...), you might get around the situation if you re-extract those remotes with a "confirm_sent" value that is below that from the current log.


That being said, as my help seems to have been misleading based on the flow of the discussion here, I would again recommend to call support - they can help you based on your actual data and situation, whereas within this forum, we can only answer what you have asked, without knowing all important details.

(12 Nov, 04:41) Volker Barth

Hi Volker

I think what I need to do is another rebuild, with the correct dbunload. If I use dbunload -an, this would create me a new database with its own fresh logs. I could then re-extract from that - is that correct?
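I.e. something like this (paths are placeholders), where -an both creates the new database file and reloads it in one step, and the new database starts its own fresh series of transaction logs:

```shell
REM Sketch: unload the current consolidated and rebuild it into a
REM brand-new database file in a single operation.
dbunload -c "UID=dba;PWD=sql;DBF=c:\sdata\consol.db" ^
  -an c:\sdata\consol_new.db
```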

Mark

(12 Nov, 04:57) ThelmaCottage

Hm, just re-extracting all remotes should do the trick with the current database. However, what does the mentioned system table reveal?

(2 days ago) Volker Barth

It looks like the remote users I have not yet extracted have these old log offsets. If I re-extract these, will it stop looking for these offsets? (So STORE04, STORE02, STORE20 and STORE99.)

user_id,consolidate,type_id,address,frequency,send_time,log_send,time_sent,log_sent,confirm_sent,send_count,resend_count,time_received,log_received,confirm_received,receive_count,rereceive_count,time_sent_utc,time_received_utc
102,'N',1,STORE01,'P',00:15:00,0,,1178224106971,1178224106971,0,0,,0,0,0,0,,
107,'N',1,STORE02,'P',00:15:00,0,,1178032419905,1178031277857,126250,0,,949831462,949831462,137142,0,,
108,'N',1,STORE04,'P',00:15:00,0,,1178032419905,1178031277857,112729,0,,283774642,283774642,108462,1,,
112,'N',1,STORE05,'P',00:15:00,0,,1178224192871,1178224192871,0,0,,0,0,0,0,,
103,'N',1,STORE08,'P',00:15:00,0,,1178199246896,1178199246896,0,0,,0,0,0,0,,
105,'N',1,STORE11,'P',00:15:00,0,,1178224019173,1178224019173,0,0,,0,0,0,0,,
110,'N',1,STORE14,'P',00:15:00,0,,1178207233823,1178207233823,0,0,,0,0,0,0,,
111,'N',1,STORE15,'P',00:15:00,0,,1178224019282,1178224019282,0,0,,0,0,0,0,,
104,'N',1,STORE17,'P',00:15:00,0,,1178224062475,1178224062475,0,0,,0,0,0,0,,
106,'N',1,STORE18,'P',00:15:00,0,,1178224293606,1178224293606,0,0,,0,0,0,0,,
109,'N',1,STORE19,'P',00:15:00,0,,1178224237359,1178224237359,0,0,,0,0,0,0,,
114,'N',1,STORE20,'P',00:15:00,0,,1178032419905,1178031277857,106792,0,,3645573912,3645573912,146892,1,,
115,'N',1,STORE21,'P',00:15:00,0,,1178224150547,1178224150547,0,0,,0,0,0,0,,
113,'N',1,STORE99,'P',00:15:00,0,,1178032419905,1178031277857,120104,0,,501524808,501524808,126738,5,,
(2 days ago) ThelmaCottage

then extracted all users.

Well, that's what you have written above... So that's not correct, and you have not already re-extracted all remotes?

Then re-extracting those missing remotes should solve the issue, methinks. But I should start to add: That is based on my understanding of the information you have provided, and I certainly may be wrong.

(2 days ago) Volker Barth