Using sqlanywhere 12, we run a full dbbackup daily, with -x option to truncate the active log. The backup is run on the same server, compressed, and ftp'd to a secondary server.
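For reference, the daily job amounts to something like this (a sketch; the server name, credentials, paths, and the compression/FTP tools are placeholders for whatever is actually in use):

    rem Full image backup of database + log, then truncate/restart the active log (-x)
    dbbackup -y -x -c "ENG=myserver;DBN=mydb;UID=dba;PWD=sql" C:\backup
    rem Compress the backup and ship it to the secondary server
    zip C:\backup\mydb.zip C:\backup\mydb.db C:\backup\mydb.log
    ftp -s:C:\scripts\put_backup.ftp secondary.example.com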

We want to add a continuous live backup to the secondary server, to cover the period between full backups.
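The live backup would be started on the secondary along these lines (again a sketch; the LINKS/ENG values are placeholders):

    rem Run on the secondary: maintain a live copy of the primary's transaction log
    dbbackup -l D:\livebackup\mydb.log -c "LINKS=tcpip(HOST=primary);ENG=myserver;DBN=mydb;UID=dba;PWD=sql"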

When the full backup runs, the live backup stops [EDIT: the live backup continues and the live log is truncated and restarted] (because the active log is truncated).

Is there a way to automatically restart the live log backup? [EDIT: It restarts automatically. How can we avoid losing the content of the live log when it is truncated? Surely we want to keep it until the new backup reaches the secondary server?]

Thanks!

asked 10 Feb '13, 22:48 by Mark

edited 11 Feb '13, 07:20

I'm on V10, and TBH I've never experienced such behavior. For me, the live backup truncates its log and continues when the main transaction log is truncated by the backup process.

(11 Feb '13, 03:46) Dmitri

Dmitri, thank you, you are correct. I was confused. The behaviour in v12 is as you say. The question is edited to fix this.

(11 Feb '13, 07:25) Mark

Well, do you need to truncate the log during the backup (or do so every time)?

I don't think there's an option to make a live backup preserve its log (i.e. not truncate it, or delay truncation) when the active log is truncated - and I don't think such an option would make sense if there were one: by design, the live backup must be usable to be run against the last database backup...

Shouldn't the local backup of the truncated log be enough for a restore?

(11 Feb '13, 07:26) Volker Barth

Thanks Volker. I assumed truncating the log is a good idea, because it prevents it from growing too large. Larger logs take longer to recover. The risk of log recovery failing increases the longer you wait before doing it.

Yes, live backup must run against latest backup. But what if the active server dies while transferring the latest full backup to the secondary server?

Can you please explain "the local backup of the truncated log"?

(11 Feb '13, 18:18) Mark

Larger logs take longer to recover. The risk of log recovery failing increases the longer you wait before doing it.

AFAIK, that's not necessarily true. When the database file is restored, it will look for the last log offset applied, and will only apply operations from the log that appear after that particular offset - so a "larger" log (i.e. one that has not been truncated lately) will contain an initial part with already-applied operations, but the restore process will skip over them. (Yes, of course it takes some time to skip over file contents, but I guess that's not too relevant.)

So, IMHO, the frequency of log backups matters much more than the log length. In that respect, lots of folks do full backups once in a while (say, daily or weekly) and do log (i.e. incremental or differential) backups more often (say, hourly). These types of backups only work when the log is not truncated (or is at least renamed) during the backup. - Obviously, you are using the live backup facility instead of scheduled log backups to have an even more up-to-date log backup.
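For illustration, such a scheduled log-only backup might look like this (switches as documented for dbbackup; connection details and paths are placeholders):

    rem Hourly: back up the transaction log only (-t), rename and restart it (-r),
    rem using the yymmddxx.log naming scheme (-n) so a series of old logs is kept
    dbbackup -y -t -r -n -c "ENG=myserver;DBN=mydb;UID=dba;PWD=sql" C:\logbackup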

Can you please explain "the local backup of the truncated log"?

My idea is that you have these "system components":

  1. The active local database
  2. A local full backup of the database (which will be compressed and FTPed)
  3. A full backup at the secondary location (the one from yesterday, until it is overwritten by the FTP process)
  4. A live backup at the secondary (which is truncated during the full backup).

In case the FTP dies, you may have lost both the previous and the current backup on the secondary. But shouldn't the full backup (2.) still be available (or is that only on the same hardware/drives as the active database, so it would be lost/unavailable along with the active database, too)?

(12 Feb '13, 04:53) Volker Barth

@Volker, thanks for your comments.

Restoring from log offset: So we could periodically stop the live log backup on secondary, apply to full backup and restart the live log backup?

You described our system components exactly.

Say the primary data centre is hit by a meteor. All primary disks are looking a bit rough. If the live backup was going to the secondary server in a different data centre, the db can be recovered in a few minutes, and is (almost) up to date.

Unless the meteor hits during the FTP. Then the secondary is 48 hours behind (assuming daily full backups that truncate the live log). We have a window of vulnerability proportional to the backup size and network speed.

It sounds like incrementals, periodically applying the live log, and/or avoiding truncation of the live log are required.

We used to have a server here (old and unused, luckily): http://www.computerworld.com/s/article/9092158/Explosion_has_data_center_scrambling_users_venting

(12 Feb '13, 20:33) Mark

The live backup is working exactly as designed here. A live backup's job is to keep a live copy of the active transaction log. When you perform your full backup (and I assume rename+restart the active transaction log), that will cause the transaction log being maintained by the live backup to also be truncated. There is no option to delay the truncation of the transaction log being maintained by the live backup while an external process does "something", which in this case is compressing + FTPing the backup.

Options :

  1. Run your full backup from the secondary server. The backed up database and log will be on another machine when the backup completes, at the time that the active transaction log (and transaction log being maintained by the live backup) are renamed.
  2. Take the full backup on the same machine, but to a different hard drive on the machine. This will at least keep you protected from media failure on a single device for the time that the backup is compressed and ftp'd to the secondary server.
  3. Your backup process could be to shut down the live backup, do your full backup, compress+ftp the backup, and then restart the live backup (see the sketch after this list). When you stop dbbackup -l, the transaction log will be left behind, so it should still be useful for recovery.
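As a rough batch sketch of option 3 (assuming the live backup is stopped by ending the dbbackup process; names, paths, and the compression/FTP tools are placeholders):

    rem On the secondary: stop the live backup
    taskkill /IM dbbackup.exe
    rem On the primary: full backup with log truncation, then compress and ship
    dbbackup -y -x -c "ENG=myserver;DBN=mydb;UID=dba;PWD=sql" C:\backup
    zip C:\backup\mydb.zip C:\backup\mydb.db C:\backup\mydb.log
    ftp -s:C:\scripts\put_backup.ftp secondary.example.com
    rem On the secondary: restart the live backup
    start dbbackup -l D:\livebackup\mydb.log -c "LINKS=tcpip(HOST=primary);ENG=myserver;DBN=mydb;UID=dba;PWD=sql"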

Reg


answered 11 Feb '13, 15:23 by Reg Domaratzki

Reg, thank you for your answer. I realize the live backup transaction log must be truncated if the active transaction log is truncated.

We restart but do not rename the active transaction log.

Here's the output of the live backup log on the secondary machine while a full backup on the active server takes place:

    ...
    (529 of estimated 529 pages, 100% complete)
    (529 of estimated 529 pages, 100% complete)
    Transaction log truncated by backup -- restarting ...
    (0 of estimated 1 pages, 0% complete)
    (1 of estimated 1 pages, 100% complete)
    Live backup of transaction log waiting for next page...

Regarding the options:

  1. Full backup from secondary: This looks like the best option to keep the secondary up to date (a sketch follows this list). I expect it would be slower and use more bandwidth, because what we FTP now is compressed to about half the size; but that might not be a problem.

  2. Different disk: If the disk holding the operating system failed after a full backup but during compression or FTP, we would lose everything since the previous full backup on the secondary, because the active server is unusable, the secondary log was truncated, and the latest full backup had not reached the secondary. Is that fair to say?

  3. Stop live backup before full backup: This looks good. In the event of failure we would stand to lose only changes that occurred during backup + compress + ftp.
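For option 1, as I understand it dbbackup writes its output on the machine where it runs, so started on the secondary the backup would land there directly (a sketch with placeholder names):

    rem Run on the secondary: the backup files are written locally on this machine
    dbbackup -y -x -c "LINKS=tcpip(HOST=primary);ENG=myserver;DBN=mydb;UID=dba;PWD=sql" D:\fullbackup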

Would it be possible for the live backup to do option 3 automatically? I.e., just before the live backup log is truncated, instead of discarding the data, save it to a timestamped file. If the active server died, the timestamped log plus the live log could be applied to the secondary backup. Wouldn't this be like incremental backups?

Sorry for my limited understanding and thanks for your advice. It's helpful because we were seeing 'device has a bad block' errors on a disk a few days ago, so we moved to a different server and are reconsidering our backup strategy.

(11 Feb '13, 19:59) Mark

We tried HA for some time, and I can't say I'm impressed ;). Currently we do full and live backups on the secondary server, so if the primary dies, all I need is to apply the live backup log to the latest full backup and start the server on the secondary.

(12 Feb '13, 05:11) Dmitri
  1. If the drive where the OS resides fails during the compress/ftp process, you don't need to recover from the secondary machine; you would recover from the full backup that was just taken and written to the other hard drive on the same machine. Your data loss would only be the transactions that occurred on the live database during the compress/ftp process.

  2. There is not a way for the live backup to do this automatically right now. I think it's a good suggestion, though: if you added the "-r -n" switches to dbbackup -l, it would keep an archive of old log files on the other machine as well.

No need to apologize. It's much better answering questions about backup and recovery as opposed to answering questions on why a customer can't recover their database after a disk failure.

(12 Feb '13, 08:47) Reg Domaratzki

@Dmitri: The basis of our SA On Demand product relies on the SQL Anywhere HA capabilities. A significant amount of work was put into v12.0.1 (and v16) for HA. If you liked the concept of HA, but were running into a few issues, it might be worth your time to re-visit the feature on the latest v12.0.1 EBF or in the forthcoming v16 release.

(12 Feb '13, 08:51) Reg Domaratzki

It's much better answering questions about backup and recovery as opposed...

That gave me a smile :) And I guess lots of folks share the "we were seeing <whatever> errors on a disk ... and are reconsidering our backup strategy" way of gaining insight...

(12 Feb '13, 09:06) Volker Barth
  1. I see, the backup is safe on the good drive. We need to find out how long it would take to access that drive again.

  2. Archived logs on secondary would be great!

Thanks Reg.

(12 Feb '13, 18:46) Mark

@Volker, thanks, we probably do need to consider it.

Live backup looks great because it offers an almost up-to-the-second replica in a different data centre that could be recovered within minutes, and it's reasonably simple to set up.

(12 Feb '13, 18:59) Mark

I think it's a good suggestion, though: if you added the "-r -n" switches to dbbackup -l, it would keep an archive of old log files on the other machine as well.

What about something like the "delete_old_logs" option, as used by SQL Remote and MobiLink clients? I.e. an option to tell the live backup to preserve old logs for n days or the like?

(13 Feb '13, 03:35) Volker Barth

FWIW multiple separate live backups can be created; e.g., local and offsite, in case of zombie apocalypse and the like.

(13 Feb '13, 08:51) Breck Carter

Restoring from log offset: So we could periodically stop the live log backup on secondary, apply to full backup and restart the live log backup?

EDIT: I have turned that previous comment on the question into an answer, and I hope the experts like Reg will confirm the following:

In my understanding, you should be able to do the following at the secondary site:

  1. Have the full backup available (with "truncate log" mode).
  2. Run a live backup from the secondary against the active database on the primary site.
  3. Stop the live backup and make a copy of the log (say, as liveCopy.log).
  4. Restart the live backup.
  5. Start a secondary server with a copy of the full backup - it will recover automatically to the end time of the full backup - and shut down that server.
  6. Start the secondary server again to apply the liveCopy.log (-a option) - and shut down that server.
  7. Now you should have a new "up-to-date" backup, consisting of the database and log file of the secondary server, current as of the time the live backup was stopped (liveCopy.log).

So, that way you could "freshen" your secondary site during the day.
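A rough sketch of steps 3 to 6 (assuming the live backup is stopped by ending the dbbackup process, a personal server dbeng12 is used for the recovery steps, and all names and paths are placeholders):

    rem 3. Stop the live backup and keep a copy of its log
    taskkill /IM dbbackup.exe
    copy D:\livebackup\mydb.log D:\refresh\liveCopy.log
    rem 4. Restart the live backup
    start dbbackup -l D:\livebackup\mydb.log -c "LINKS=tcpip(HOST=primary);ENG=myserver;DBN=mydb;UID=dba;PWD=sql"
    rem 5. Start a server on a copy of the full backup so it recovers, then stop it
    rem    (wait for recovery to finish before issuing dbstop)
    copy D:\fullbackup\mydb.db D:\refresh\mydb.db
    copy D:\fullbackup\mydb.log D:\refresh\mydb.log
    dbeng12 -n refresh D:\refresh\mydb.db
    dbstop -c "ENG=refresh;UID=dba;PWD=sql"
    rem 6. Apply the copied live log; with -a the server applies it and shuts down
    dbeng12 D:\refresh\mydb.db -a D:\refresh\liveCopy.log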

Note, however, that for a second "refresh", you will have to start with the full backup again; i.e. it will not be possible to "freshen" the database from step 7 with a more current live backup log, as that log fits the active database, and not the restored and already "freshened" one.

So, in case the primary site goes down, you now have three options:

  • use the full backup from step 1 with the current live backup log to restore the database (with possibly some recovery time for today's operations)
  • use the freshened database from step 7, which is immediately available (and accept losing the changes made after the "freshening")
  • use the freshened database from step 7 and use the current live backup to filter out the most recent contents (say, by running DBTRAN against the log, limited to the time after the "freshening"), then run those operations against the database to make it up-to-date... (this requires skills and some time to recover/apply the operations).

Option 1 is the normal usage of a live backup log, so I can't tell whether options 2 or 3 would be useful for you...
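For option 3, a very rough sketch (dbtran translates a transaction log into a SQL script; trimming the script down to the operations after the "freshening" is left as a manual step here - dbtran also offers filtering switches, so check your version's docs):

    rem Translate the current live backup log into a SQL script
    dbtran D:\livebackup\mydb.log D:\refresh\recent.sql
    rem After trimming recent.sql to the post-refresh window, apply it to the freshened db
    dbisql -c "ENG=refresh;UID=dba;PWD=sql" D:\refresh\recent.sql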


No, I have never used that technique myself.


answered 13 Feb '13, 09:52 by Volker Barth

edited 13 Feb '13, 17:06

How best to stop a live backup?

Steps 5 & 6 can be combined using -ad to recover using multiple logs.
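E.g., a sketch (placeholder paths; -ad makes the server apply, in offset order, every transaction log it finds in the named directory):

    dbeng12 D:\refresh\mydb.db -ad D:\refresh\logs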

The process sounds close to how incremental backups work.

Thanks Volker, your comments are helpful.

(14 Feb '13, 21:32) Mark

How best to stop a live backup?

I'd recommend asking that as a separate question... - and no, I don't have an answer...

(16 Feb '13, 05:25) Volker Barth