Using SQL Anywhere 12, we run a full dbbackup daily with the -x option to truncate the active log. The backup runs on the same server, is compressed, and is FTP'd to a secondary server. We want to add a continuous live backup to the secondary server to cover the period between full backups. When the full backup runs, the live backup stops [EDIT: the live backup continues, and the live log is truncated and restarted] (because the active log is truncated). Is there a way to automatically restart the live log backup? [EDIT: It restarts automatically. How can we avoid losing the content of the live log when it is truncated? Surely we want to keep it until the new backup reaches the secondary server?] Thanks!
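For reference, the setup described might be sketched as below, as a dry run that only prints the commands. The connection string and paths are hypothetical; as I read the v12 dbbackup docs, -x truncates (deletes and restarts) the active log after a full backup, and -l requests a live backup:

```shell
#!/bin/sh
# Dry-run sketch of the backup setup; commands are printed, not executed.
# Connection string and paths are hypothetical.
CONN="ENG=myserver;DBN=mydb;UID=DBA;PWD=***"

# Daily full backup on the primary; -x truncates the active transaction log.
FULL_CMD="dbbackup -y -x -c $CONN /backup/full"

# Continuous live backup, run from the secondary; it mirrors the active log
# into live.log page by page until stopped or the connection drops.
LIVE_CMD="dbbackup -l /backup/live/live.log -c $CONN"

echo "$FULL_CMD"
echo "$LIVE_CMD"
```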
The live backup is working exactly as designed here. A live backup's job is to keep a live copy of the active transaction log. When you perform your full backup (and, I assume, rename and restart the active transaction log), the transaction log being maintained by the live backup is also truncated. There is no option to delay the truncation of the transaction log maintained by the live backup while an external process does "something", which in this case is compressing and FTPing the backup. Options:
Reg, thank you for your answer. I realize the live backup transaction log must be truncated if the active transaction log is truncated. We restart, but do not rename, the active transaction log. Here's the output of the live backup log on the secondary machine while a full backup on the active server takes place:

... (529 of estimated 529 pages, 100% complete)
(529 of estimated 529 pages, 100% complete)
Transaction log truncated by backup -- restarting ...
(0 of estimated 1 pages, 0% complete)
(1 of estimated 1 pages, 100% complete)
Live backup of transaction log waiting for next page...

Regarding the options:
Would it be possible for the live backup to do option 3 automatically? I.e., just before the live backup log is truncated, instead of discarding the data, save it to a timestamped file. If the active server died, the timestamped log plus the live log could be applied to the secondary backup. Wouldn't this be like incremental backups? Sorry for my limited understanding, and thanks for your advice. It's helpful because we were seeing 'device has a bad block' errors on a disk a few days ago, so we moved to a different server and are reconsidering our backup strategy.
(11 Feb '13, 19:59)
Mark
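Sticking with that idea (option 3 automated), a minimal sketch of archiving the live log to a timestamped file just before the full backup truncates it might look like the following. Everything here is hypothetical: a temp directory and a stub live.log are used so the sketch is runnable anywhere, and the dbbackup command is only printed.

```shell
#!/bin/sh
set -e
# Sketch: save the live log under a timestamped name immediately before
# the full backup truncates it. Paths are hypothetical.
WORK=$(mktemp -d)
LIVE_LOG="$WORK/live.log"
ARCHIVE_DIR="$WORK/archive"
mkdir -p "$ARCHIVE_DIR"

# Stand-in for the live log normally written by "dbbackup -l" on the secondary.
printf 'log pages...' > "$LIVE_LOG"

# 1. Archive the current live log before it is truncated.
STAMP=$(date +%Y%m%d%H%M%S)
cp "$LIVE_LOG" "$ARCHIVE_DIR/live-$STAMP.log"

# 2. Only now kick off the full backup that truncates the active (and live)
#    log. Printed rather than executed; connection string is hypothetical.
echo dbbackup -y -x -c "ENG=myserver;UID=DBA;PWD=***" "$WORK/full"
```

If the primary then dies, the archived log(s) plus the current live log could be replayed against the last restored full backup.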
@Mark, have you also thought over installing a High Availability solution, as is available for SQL Anywhere 10?
(12 Feb '13, 04:56)
Volker Barth
We tried HA for some time, and I can't say I'm impressed ;). Currently we do full and live backups on the secondary server, so if primary dies, all I need is to apply live backup log to the latest full backup and start server on the secondary.
(12 Feb '13, 05:11)
Dmitri
No need to apologize. It's much better answering questions about backup and recovery as opposed to answering questions on why a customer can't recover their database after a disk failure.
(12 Feb '13, 08:47)
Reg Domaratzki
@Dmitri: The basis of our SA On Demand product relies on the SQL Anywhere HA capabilities. A significant amount of work was put into v12.0.1 (and v16) for HA. If you liked the concept of HA but ran into a few issues, it might be worth your time to revisit the feature on the latest v12.0.1 EBF or in the forthcoming v16 release.
(12 Feb '13, 08:51)
Reg Domaratzki
That gave me a smile:) And I guess lots of folks share the "we were seeing <whatever> errors on a disk .. and are reconsidering our backup strategy" way of gaining insight...
(12 Feb '13, 09:06)
Volker Barth
Thanks Reg.
(12 Feb '13, 18:46)
Mark
@Volker, thanks, we probably do need to consider it. Live backup looks great because it offers an almost up-to-the-second replica in a different data centre that could be recovered within minutes, and it's reasonably simple to set up.
(12 Feb '13, 18:59)
Mark
What about something like the "delete_old_logs" option, as used by SQL Remote and MobiLink clients? I.e. an option to tell the live backup to preserve old logs for n days or the like?
(13 Feb '13, 03:35)
Volker Barth
FWIW multiple separate live backups can be created; e.g., local and offsite, in case of zombie apocalypse and the like.
(13 Feb '13, 08:51)
Breck Carter
EDIT: I have turned that previous comment on the question into an answer, and I hope the experts like Reg will confirm the following. In my understanding, you should be able to do the following at the secondary site:
So, that way you could "freshen" your secondary site during the day. Note, however, that for a second "refresh" you will have to start with the full backup again, i.e. it should not be possible to "freshen" the database from step 7 with a more current live backup log, as that log corresponds to the active database, not to the restored and already "freshened" one. So, in case the primary site goes down, you now have three options:
Option 1 is the normal usage of a live backup log, so I can't tell whether options 2 or 3 would be useful for you... No, I have never used that technique myself. How best to stop a live backup? We can combine steps 5 and 6 using -ad to recover with multiple logs. The process sounds close to how incremental backups work. Thanks Volker, your comments are helpful.
(14 Feb '13, 21:32)
Mark
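For concreteness, combining steps 5 and 6 with -ad might look roughly like this. It's a dry run that only prints the commands; the paths and server name are hypothetical, and, as I understand the v12 docs, -ad replays every log in the named directory in log-offset order:

```shell
#!/bin/sh
# Dry-run recovery sketch: restore the last full backup, then apply the
# archived live logs plus the final live log in one pass with -ad.
# Paths and server name are hypothetical; commands are printed, not run.
CMD1="cp /backup/full/mydb.db /restore/mydb.db"
CMD2="dbsrv12 /restore/mydb.db -ad /restore/logs"   # replay all logs, then stop
CMD3="dbsrv12 -n myserver /restore/mydb.db"         # start normally for clients
echo "$CMD1"
echo "$CMD2"
echo "$CMD3"
```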
I'd recommend asking that as a separate question... - and no, I don't have an answer...
(16 Feb '13, 05:25)
Volker Barth
I'm on v10, and TBH I've never experienced such behaviour. For me, the live backup truncates its log and continues when the main transaction log is truncated by the backup process.
Dmitri, thank you, you are correct. I was confused. The behaviour in v12 is as you say. I have edited the question to fix this.
Well, do you need to truncate the log during the backup (or do so every time)?
I don't think there's an option to make a live backup preserve its log (i.e. not truncate it, or delay truncation) when the active log is truncated - and I don't think it would make sense if there were one: by design, the live backup must be usable against the last database backup...
Shouldn't the local backup of the truncated log be enough for a restore?
Thanks Volker. I assumed truncating the log is a good idea because it prevents it from growing too large. Larger logs take longer to recover, and the risk of log recovery failing increases the longer you wait before doing it.
Yes, the live backup must run against the latest backup. But what if the active server dies while transferring the latest full backup to the secondary server?
Can you please explain "the local backup of the truncated log"?
AFAIK, that's not necessarily true. When the database file is restored, it will look for the last log offset applied and will only replay operations from the log that appear after that particular offset - so a "larger" log (i.e. one that has not been truncated lately) will contain an initial part with already-applied operations, but the restore process will skip over them. (Yes, of course it will take some time to "skip over file contents", but I guess that's not too relevant.)
So, IMHO, the frequency of log backups matters much more than the log length. In that respect, lots of folks do full backups once in a while (say, daily or weekly) and do log (i.e. incremental or differential) backups more often (say, hourly). These types of backups only work when the log is not truncated (or at least is renamed) during the backup. - Obviously, you are using the live backup facility instead of scheduled log backups to have an even more up-to-date log backup.
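For comparison, a scheduled (say hourly) log-only backup of that kind might be sketched as below, again as a dry run with hypothetical values. As I read the v12 dbbackup docs, -t backs up only the transaction log, -r renames and restarts the active log, and -n gives the backup copy a date-based name so hourly copies don't overwrite each other:

```shell
#!/bin/sh
# Hourly incremental (log-only) backup sketch; command printed, not run.
# -t: transaction log only; -r: rename and restart the active log;
# -n: date-based name for the backup copy. All values are hypothetical.
CONN="ENG=myserver;UID=DBA;PWD=***"
LOG_CMD="dbbackup -y -t -r -n -c $CONN /backup/logs"
echo "$LOG_CMD"
```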
My idea is that you have these "system components":
In case the FTP dies, you may have lost both the previous and the current backup on the secondary. But shouldn't the full backup (2.) still be available (or is it only on the same hardware/drives as the active database, so it would be lost/unavailable along with the active database)?
@Volker, thanks for your comments.
Restoring from a log offset: so we could periodically stop the live log backup on the secondary, apply it to the full backup, and restart the live log backup?
You described our system components exactly.
Say primary data centre is hit by meteor. All primary disks are looking a bit rough. If live backup was going to the secondary server in a different data centre, the db can be recovered in a few minutes, and is up to date (almost).
Unless the meteor hits during the FTP. Then the secondary is up to 48 hours behind (assuming daily full backups that truncate the live log). We have a window of vulnerability proportional to the backup size and inversely proportional to the network speed.
It sounds like what's required is incrementals: periodically applying the live log and/or avoiding truncation of the live log.
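The periodic "apply the live log to the full backup" step discussed above might be sketched as follows, as a dry run with hypothetical paths. Note that, per Volker's answer, each refresh has to start from a fresh copy of the full backup, since an already-freshened copy no longer matches the live log's offsets:

```shell
#!/bin/sh
# Dry-run sketch: apply the current live log to a fresh copy of the last
# full backup on the secondary. Commands printed, not run; paths hypothetical.
COPY_DB="cp /backup/full/mydb.db /standby/mydb.db"
COPY_LOG="cp /backup/live/live.log /standby/mydb.log"
APPLY="dbsrv12 /standby/mydb.db -a /standby/mydb.log"  # replay log, then stop
echo "$COPY_DB"
echo "$COPY_LOG"
echo "$APPLY"
```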
We used to have a server here (old and unused, luckily): http://www.computerworld.com/s/article/9092158/Explosion_has_data_center_scrambling_users_venting