I had a hard-drive failure recently where I lost the database and transaction log.

Unfortunately we did not have a robust backup strategy and the most recent backup was about a week old.

I'm wondering what the best strategy is to recover the database, and the missing week's worth of data, using the remote database(s) and/or transaction logs. The main problem is that most of the remotes had replicated data during the last week of the database's life and therefore have confirmed/received offsets that are newer than what the restored database expects. (DBREMOTE reports "Message does not belong to me...")

We only have 7 or so remote sites, which have similar but different subscriptions, so it's not like I have to reconcile hundreds of remotes. However, I do not have direct access to the databases, but could set up a remote connection if necessary.

We are using SQL Remote 8.03 over FTP. There were multiple clients that had run dbremote between the date of the old backup (July 25) and the date of the database crash (July 30).

I know that I could use the "old" backup and then re-extract all the clients, which would get all the offsets in sync; however, that would mean that a week's worth of data would be lost.

Should I use the remote transaction log files and extract the SQL?

Is there a way to use the FTP messages on the server that were not picked up to restore data? (Some clients had not replicated in a while.)

I am not averse to using undocumented procedures to reconcile the expected sent/confirmed etc values.

thanks, michael

asked 05 Aug '11, 12:54 by MetaHertz

The good news: Call Sybase Technical Support... they might be able to help you.

The bad news: I give you a 0.01 probability of being able to restart replication without re-extracting all the remotes... the time you spend trying (and inevitably failing) to restart replication would be better spent taking the re-extract approach (which might include digging data out of the remote transaction logs and applying it to the consolidated before re-extracting).

The good news: There are people working at Sybase who are masters at this kind of thing, I stand in awe of them... I have worked with SQL Remote since version 5.5 and I still don't know how those folks do what they do. Whether you can get your problem in front of them, that's up to you.

(05 Aug '11, 14:39) Breck Carter

Thanks for the quick response. It always helps to get a push in the right direction as opposed to going through n(n+1) scenario trees...

Couple of questions on "digging data out of the remote transaction logs".

Background: The clients are using the -c "dsn=..." -b -x switches, so -x is relevant, as it renames and restarts the transaction log. The remotes do not have a backup log directory.

I have only been able to "remotely connect" to one client, and I only saw the aaa.db file and the aaa.log file. Shouldn't there be many log files? [Actually, this site may not have run dbremote yet.]

1) I'm assuming that it doesn't hurt to specify a backup directory, and maybe it's necessary to do so because of the many log files? dbtran -c "dsn=aaa" -m "c:\dir" -n "c:\dir\out.sql"

2) Can I use passthrough mode somehow to get the out.sql files ?

3) Is there a common, best practice to minimize the remote database activity between the time I re-extract the remotes and (re)start the subscriptions ?

4) Also, what is the best method to restart the subscriptions? I guess the extraction does that automatically, correct?

Thanks, Breck! Your many articles and websites have been great references over the years.

(05 Aug '11, 15:17) MetaHertz

What's the setting of the "delete_old_logs" option?

In case it's 'On' or 'Delay', it would not be uncommon for there to be no further (old) logs, as they might have been automatically deleted after all their transactions were confirmed by the consolidated.

(05 Aug '11, 15:34) Volker Barth

It is hard to recover from the remote databases without the old transaction log files. You could use passthrough, but you have to know what data operations to pass back to the consolidated database. By the way, if you still have some message files, you could run dbremote with -a -s -b -p -v -o file_name -c "uid=..." to get the operations from the message files.

(05 Aug '11, 15:40) Yufei Guo

Just to add:

Is there any chance to use the damaged database/log file? I remember a case when the consolidated database couldn't be started because of damaged files (in that case due to a software bug), but each file could be used on its own: i.e. the database file could be started with -f and, more importantly, the log could be translated with DBTRAN.

The latter approach helped to give us those operations (both from remotes and in the consolidated) that weren't in the last backup as SQL statements. We could then apply them to the backup and get the database up-to-date that way (though not w.r.t. the log offsets - so a few remotes still had to be re-extracted). But there was no data loss.

Just in case...

(05 Aug '11, 16:07) Volker Barth

Um... yeah, the damaged database/log file? It was on a hard drive that got corrupted (bad blocks, etc.). Fortunately I had made backups and was preparing a new, real server to host the DB, because I knew it was just a matter of time before something failed: get this, the virtual machine that was hosting the DB was also the location of the backups. So when I came in on a Monday morning and the VM file was no longer 15 GB but 0 KB, well, that was that.

I kinda wish that there was at least someone in the company who could empathize with or understand what kind of maelstromocity this was, and how it's really tough to fix these problems when it hasn't even been recognized and identified as a FUBAR event. Fortunately they hired a power builder. :)

(05 Aug '11, 16:22) MetaHertz

@MetaHertz: You have several good questions, I only have one poor answer: Bite the bullet, get on with re-extracting as quickly as possible, do not spend any time trying to dig data out of the logs before you do that. Get the users up and running, tell them they have to re-enter everything they lost. They will complain, but will they refuse? If they can refuse and get away with it without losing their jobs, then they didn't need the lost data in the first place so problem solved.

It's gangrene, the leg has to go... too bad, but the alternative is MORE lost data.

OK, really bad analogy. Maybe after the re-extract is done, you can help the squeaky wheels dig their data out of the logs. But it is going to be HARD ENOUGH to get the new remotes out into the field without having to worry about that at the same time. Divide and conquer, or go crazy and fail.

Or... give yourself a time limit: "I will spend x hours/days trying to do other things, then I will give up and do a full re-extract"

(06 Aug '11, 03:07) Breck Carter

I don't think there are any automated ways to recover the consolidated database from all the remote databases, even after you fix all the offset problems, because dbremote will not send operations back to a subscriber if those operations originally came from that subscriber. However, you may be able to recover part or all of the data from the transaction log files of all the remote databases manually, assuming the transaction log files of the remote databases have not been deleted. Here are the steps:

  1. Run dblog against the backup consolidated database; dblog will generate something like:

    Transaction log starting offset is O1

    SQL Remote truncation offset is T1

    Transaction log current relative offset is O2

    (Please don't try to start the database, because that would change the end log offset of the database.) Then add O1 and O2 together; the sum is the end log offset of the backup consolidated database;

  2. Go to a remote site and shut down the remote database engine;
  3. Make a directory, say logs, and copy all the transaction log files, including the online transaction log, to this newly created directory;
  4. Run dbtran against this directory using the following command line:

    dbtran -sr -is sr -y -m full_path\logs -n my.sql

    Dbtran will retrieve the operations that were applied by dbremote from all the log files stored in full_path\logs and save these operations to my.sql;

  5. Edit my.sql and look for the lines with the following style:

    --REMOTE-1016-0000439055-0102-RECEIVED-NA-0000443759

    In this line,

    1016: 16 is a connection ID;

    0000439055: current offset in the remote transaction log;

    0102: current user ID;

    0000443759: the commit offset in the subscriber's database.

    In this example, all the operations before this line in the my.sql file came from the subscriber, with log offsets less than 0000443759. Then remove from my.sql all the operations that are already in the consolidated database.
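To locate and decode these marker lines programmatically, a sketch like the following may help. Note that the marker format is inferred from the single example above, so verify it against your own my.sql files before relying on it:

```python
import re

# Assumed format, based on the example marker above:
# --REMOTE-<conn>-<remote log offset>-<user>-RECEIVED-NA-<commit offset>
MARKER = re.compile(r"^--REMOTE-(\d+)-(\d+)-(\d+)-RECEIVED-NA-(\d+)$")

def parse_marker(line: str):
    """Return the fields of a dbtran REMOTE marker line, or None."""
    m = MARKER.match(line.strip())
    if not m:
        return None
    conn, remote_offset, user, commit_offset = m.groups()
    return {
        "connection": conn,
        "remote_log_offset": int(remote_offset),
        "user": user,
        "commit_offset": int(commit_offset),  # offset in the subscriber's db
    }

info = parse_marker("--REMOTE-1016-0000439055-0102-RECEIVED-NA-0000443759")
print(info["commit_offset"])  # 443759
```

Scanning my.sql with this parser would let you find the last marker and everything before it without editing the file by hand.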

Then repeat steps 2) to 5) for each remote database. After you have the my.sql files from all the remote sites, you may need to check whether they contain any duplicate data. If there are none, then apply all the my.sql files against the consolidated database and then re-extract all the remotes.
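One rough way to check for duplicates across the per-site my.sql files is to compare the statements as normalized text. This is only a heuristic (identical-looking statements from different sites may be legitimate), and the inline snippets below are hypothetical stand-ins for real dbtran output:

```python
def statements(lines):
    """Split dbtran-style output into individual statements on 'go'
    separators, skipping '--' comment lines and normalizing whitespace."""
    stmts, current = set(), []
    for line in lines:
        if line.strip().lower() == "go":
            if current:
                stmts.add(" ".join(" ".join(current).split()))
            current = []
        elif not line.lstrip().startswith("--"):
            current.append(line.strip())
    return stmts

# Hypothetical snippets standing in for two per-site my.sql files
site1 = ["insert into t (id) values (1)", "go",
         "insert into t (id) values (2)", "go"]
site2 = ["-- comment line", "insert into t (id) values (2)", "go"]

common = statements(site1) & statements(site2)
print(f"{len(common)} identical statement(s) in both files")  # 1 identical...
```

Any overlap reported this way is worth inspecting by hand before applying the files to the consolidated database.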

Please try these steps on the test environment first.
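The offset arithmetic in step 1 can be sketched as follows. The parsing of dblog's output text and the sample offsets are assumptions for illustration only:

```python
import re

def end_log_offset(dblog_output: str) -> int:
    """End log offset of the backed-up consolidated database:
    starting offset (O1) plus current relative offset (O2)."""
    o1 = int(re.search(r"starting offset is (\d+)", dblog_output).group(1))
    o2 = int(re.search(r"current relative offset is (\d+)", dblog_output).group(1))
    return o1 + o2

# Hypothetical dblog output; the offsets are invented for illustration.
sample = (
    "Transaction log starting offset is 0000400000\n"
    "SQL Remote truncation offset is 0000410000\n"
    "Transaction log current relative offset is 0000043759\n"
)

print(end_log_offset(sample))  # 443759
```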

answered 05 Aug '11, 15:26 by Yufei Guo
