We are looking at using the DBTools API to kick off a "live" log backup. The documentation is all pretty clear and we think we see how to use the DBBackup method and the associated a_backup_db structure (putting the required log file name into the hotlog_filename member). However, what we can't see is how to later stop the backup when that is needed. Other types of backup just terminate when they are finished, but by its nature the live log backup continues indefinitely. Obviously we could use a Windows API call to kill the process, but that seems rather crude and maybe runs the risk of a damaged log file? Is there a clean way of terminating a live log backup process started by the DBBackup method? Many thanks
v16.0.0.2018
asked 07 Jan '16, 10:14 Justin Willey
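For orientation, here is a minimal sketch of the setup the question describes. The field names (connectparms, output_dir, hotlog_filename, backup_interrupted) mirror what the DBTools header exposes, but the struct below is a reduced stand-in, not the real a_backup_db from dbtools.h (which has many more members), and all connection strings and paths are hypothetical:

```c
#include <string.h>

/* Reduced stand-in for DBTools' a_backup_db; the real structure in
   dbtools.h has many more members. Field names mirror the SDK. */
typedef struct {
    const char   *connectparms;       /* e.g. "UID=...;PWD=...;ENG=..." */
    const char   *output_dir;         /* where backup files are written */
    const char   *hotlog_filename;    /* target file for the live log   */
    unsigned int  backup_interrupted; /* non-zero requests a stop       */
} a_backup_db_sketch;

/* Fill in the fields needed for a live ("hot") log backup. */
static void prepare_live_backup(a_backup_db_sketch *b,
                                const char *conn,
                                const char *dir,
                                const char *livelog)
{
    memset(b, 0, sizeof *b);
    b->connectparms    = conn;
    b->output_dir      = dir;
    b->hotlog_filename = livelog;  /* non-NULL selects a live log backup */
    b->backup_interrupted = 0;
}
```

In the real API this structure would then be passed to DBBackup via the DBTools entry points; the sketch only illustrates which members the question is concerned with.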
What you need to do is the following:
HTH
answered 07 Jan '16, 11:20 Mark Culp
Thanks Mark - that's clear.
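Mark's answer itself is not preserved in this thread. One mechanism consistent with the follow-up comments ("That seems to do it neatly, and immediately") is cooperative interruption: a controller sets an interrupt flag that the backup loop checks between pages, so the backup stops at a page boundary rather than mid-write. Whether DBTools exposes exactly such a flag on a_backup_db is an assumption here; the sketch below simulates the pattern with a stand-in structure:

```c
/* Stand-in for the relevant part of a_backup_db. DBBackup is assumed
   (not confirmed by this thread) to poll an interrupt flag between pages. */
typedef struct {
    volatile unsigned int backup_interrupted; /* set non-zero to stop  */
    unsigned int pages_backed_up;
} live_backup_state;

/* Simulated live-backup loop: copies pages until interrupted.
   `should_stop` stands in for another thread flipping the flag. */
static unsigned int run_live_backup(live_backup_state *s,
                                    int (*should_stop)(unsigned int pages))
{
    s->pages_backed_up = 0;
    for (;;) {
        if (s->backup_interrupted)
            break;                      /* clean, cooperative stop      */
        s->pages_backed_up++;           /* "copy" one log page          */
        if (should_stop && should_stop(s->pages_backed_up))
            s->backup_interrupted = 1;  /* what a controller would do   */
    }
    return s->pages_backed_up;
}

/* Example controller: request a stop after 10 pages. */
static int stop_after_ten(unsigned int pages) { return pages >= 10; }
```

The point of the pattern is that the loop only observes the flag between pages, so no page write is ever cut short - which is exactly the "damaged log file" worry in the question.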
(07 Jan '16, 11:57)
Justin Willey
It's rude to answer a question with a question, but why would you need to stop (or use) a live backup that is still running? Generally speaking, a live backup is only "needed" after the database fails, and when the database fails, the live backup stops automatically because it has lost its connection.
answered 07 Jan '16, 12:47 Breck Carter
It's the gap between my last copy of the live (backup) log and the full backup that I'm trying to minimise, in case my full backup fails. If I left the live backup running, it would get zapped at truncation time.
(07 Jan '16, 13:13)
Justin Willey
That gap can be minimized by running more frequent (but remote) incremental backups in tandem ... if that helps any. But, then again, if you are not doing incrementals ... what exactly do you mean by 'truncate'?
(07 Jan '16, 14:25)
Nick Elson S...
Thanks Nick - I mean using the -x option with dbbackup. The problem is that I have to truncate the log (on the live server) at some point because it will just get bigger and bigger. When that truncation happens my live log backup gets trashed - there not being an option to keep the old one and start a new one, as discussed here: http://sqlanywhere-forum.sap.com/questions/15476/how-to-run-a-live-backup
Are you suggesting that I could do a full but non-truncating backup, leaving the live backup running, and then do a separate transaction log backup with truncation? That would certainly minimize the exposure time.
(07 Jan '16, 15:01)
Justin Willey
If the full backup fails, you still have the original database and log files (even if you have truncated the log, presumably you have renamed the original). I am trying to understand the gap of which you speak since I have never experienced one.
(07 Jan '16, 15:37)
Breck Carter
Sorry, I'll try to explain more clearly: two servers in different locations, Live & Standby. On the Standby Server there is a full copy of the database made earlier and a "live backup" running. Periodically (say every 30 minutes) we take a copy of the "live backup" log and apply it to the Standby database. The live backup log gets steadily longer during the day, so the log application gets a bit slower as more and more of it is skipped. At some point in the 24 hours we start the Standby database in read-only mode and validate it.
At some point (could be daily, weekly etc.) we want to replace the standby copy with a new copy of the Live database. Not strictly necessary from my point of view, but customers and their auditors seem to insist. Either way we do have to truncate the log file of the live database (and therefore the "live backup" log) as they will grow too large to handle. When the truncate log instruction is issued to the Live Server (either by doing a full backup or a log backup only), the "live backup" log is immediately deleted by dbbackup and a new one started.
If we successfully retrieve the full backup (db & log file) from the Live Server then we have no problem - we apply the backed-up log to the backed-up database and then we can continue applying copies of the "live backup" log. However, if we lose the Live Server while the full backup is being copied across (which could take hours with a 200 GB database), there are transactions that happened between our last copy of the "live" transaction log and when the truncation happened that we don't have anywhere. Even though the new "live backup" log has transactions between the truncation and the loss of the Live Server, we can't use them because of the missing ones in the gap.
But if I understand it properly, Nick's point is that I can minimise the gap by doing the log backup with truncation separately from the main backup. That certainly seems like a good idea. Hope that makes more sense :)
(07 Jan '16, 16:08)
Justin Willey
Yes indeed ... In fact, that has been part of my standing suggestion on how to implement a cold/semi-warm stand-by configuration for the better part of 20 years now. I prefer renaming (-r) combined with matching (-n) over truncation (-x) myself. That gives you extra convenience with the backup identification, protects synchronization and replication scenarios, and gives you an extra degree of redundancy via the offline logs (which you will want to age off the system through a different mechanism if delete_old_logs is not operating).
(08 Jan '16, 08:29)
Nick Elson S...
Many thanks for this Nick. I hadn't realised that -r would also restart the "live backup" log in the same way as -x does. So if I use that with -n and carry out the backup direct to my Standby Server, the "live backup" won't be truncated until I've successfully received the one from the Live Server, which is great. (And I'll have an extra copy on the Live Server.) The only thing I'll have to work out then is how to manage the build-up of old logs on the Live Server. I'm not doing any replication, and setting the delete_old_logs option doesn't seem to have any effect. The answer is probably a database procedure to clear them out that I call from time to time - that way I won't have to grant any folder rights on the Live Server to the user on the Standby Server.
(08 Jan '16, 11:15)
Justin Willey
Shutting down the server works! {if that helps any?}
I don't believe there is any polite way to shut it down ... Any stopping of the live log will always result in an incomplete live log backup, but that aspect would become moot on the next incremental/full backup anyway ... so maybe the concern is a bit misplaced ... since you are assuming a need to stop live logging.
It might be helpful to ask in which scenarios you would find a need for this (¿feature?).
To me, I have been more concerned about how to auto-restart the live log backup after a server is [re-]started. {The issue there is that, since a live-log operation is best done across the network, how can/does the remote system detect when a service on another machine starts up and launch a new instance of the live-log backup?}
I don't have an answer, but compare using "DBBACKUP -l" instead of the DBTools API - how would you stop (and re-start) that utility then? If you would need to kill the process, killing the corresponding DBTools API thread would be no worse, methinks... :)
Interestingly enough, starting DBBACKUP -l in a CMD window and then using CTRL-C to stop the batch seems to take quite a while (almost one minute in my tests) to interrupt DBBACKUP until it states "Backup terminated by user".
Just another thought:
As the live backup process is just another database client, you could drop its database connection via "DROP CONNECTION n" from a different DBA connection. And you might monitor the messages returned via the corresponding a_backup_db callback (member msgrtn, I guess) and drop the connection while the backup process is idle, "waiting on the next page", to prevent a termination while the backup is doing writes.
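The idea above can be sketched: a message callback scans the backup's progress messages for the idle marker, and the controller only issues DROP CONNECTION while the backup is idle. The callback signature and the exact message text are assumptions here (the real msgrtn prototype is declared in dbtools.h, and the wording of DBBackup's messages should be checked against your version):

```c
#include <string.h>

/* Track whether the live backup is idle ("waiting on the next page")
   based on the messages the backup reports. The message text matched
   here is an assumption; verify what your DBBackup actually emits. */
typedef struct {
    int idle;  /* non-zero once the latest message indicated idleness */
} backup_monitor;

/* Stand-in for an a_backup_db msgrtn-style callback. */
static void on_backup_message(backup_monitor *m, const char *msg)
{
    m->idle = (strstr(msg, "waiting on the next page") != NULL);
}

/* The controller drops the connection only while the backup is idle,
   so it never interrupts an in-progress page write. */
static int safe_to_drop_connection(const backup_monitor *m)
{
    return m->idle;
}
```

A controller loop would poll safe_to_drop_connection() and, once it returns non-zero, execute DROP CONNECTION against the backup's connection id from a separate DBA connection.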
(That being said, the live log will still be incomplete and useless up to the next live log start...)
Very true - I was assuming that the "correct" method would be the equivalent of CTRL-C rather than just killing the process.
It is a fair point about what would you do with it anyway!
The scenario I imagined is
1 You are doing a live backup and the time has come to do your full backup.
2 You nicely stop the live backup and start the full backup (with log truncation).
3 You are unable to complete the full backup because of server failure on the live site / zombie apocalypse etc.
4 You can still apply your saved live log file to your standby database and carry on (zombies permitting), accepting the loss of transactions between stopping the live log backup and the failure of the live site.
It is only useful to cope with a failure of the full backup. The rest of the time it would be discarded after a successful full backup.
We are periodically applying the "live backup" file to the standby database during the course of the day. This is all currently done in batch files.
That seems to do it neatly, and immediately.