
We are looking at using dbtools API to kick off a "live" log backup.

The documentation is all pretty clear and we think we see how to use the DBBackup method and the associated a_backup_db structure (putting the required log file name into the hotlog_filename member). However, what we can't see is how to stop the backup later when that is needed. Other types of backup simply terminate when they finish, but by its nature the live log backup continues indefinitely.

Obviously we could use a Windows API call to kill the process, but that seems rather crude and maybe runs the risk of a damaged log file?

Is there a clean way of terminating a live log backup process started by the DBBackup method?

Many thanks

v16.0.0.2018

asked 07 Jan '16, 10:14


Justin Willey


Shutting down the server works! {if that helps any?}

I don't believe there is any polite way to shut it down ... Stopping the live log will always result in an incomplete live log backup, but that aspect becomes moot on the next incremental/full backup anyway ... so maybe the concern is a bit misplaced, since you are assuming a need to stop live logging.

It might be helpful to ask in which scenarios you would find a need for this (feature?).

For my part, I have been more concerned with how to auto-restart the live log backup after a server is [re-]started. (The issue there is that, since a live-log backup is best run across the network, how does the remote system detect that a service on another machine has started up, so it can launch a new instance of the live-log backup?)

(07 Jan '16, 10:23) Nick Elson S...

I don't have an answer, but compare using "DBBACKUP -l" instead of the DBTools API: how would you stop (and re-start) that utility? If you would need to kill the process, killing the corresponding DBTools API thread would be no worse, methinks... :)


Interestingly enough, when starting DBBACKUP -l in a CMD window and then using CTRL-C to stop it, it seems to take quite a while (almost one minute in my tests) before DBBACKUP is interrupted and states "Backup terminated by user".

(07 Jan '16, 10:37) Volker Barth

Just another thought:

As the live backup process is just another database client, you could drop its database connection via "DROP CONNECTION n" from a different DBA connection. And you might monitor the messages returned via the corresponding a_backup_db callback (the msgrtn member, I guess) and drop the connection only while the backup process is idle, "waiting on the next page", to avoid terminating it while the backup is doing writes.

(That being said, the live log will still be incomplete and useless up to the next live log start...)

(07 Jan '16, 11:11) Volker Barth

Very true - I was assuming that the "correct" method would be the equivalent of CTRL+C rather than just killing the process.

(07 Jan '16, 11:41) Justin Willey

It is a fair point about what you would do with it anyway!

The scenario I imagined is

1 You are doing a live backup and the time has come to do your full backup.

2 You nicely stop the live backup and start the full backup (with log truncation).

3 You are unable to complete the full backup because of server failure on the live site / zombie apocalypse etc

4 You can still apply your saved log live file to your standby database and carry on (zombies permitting), accepting the loss of transactions between stopping the live log backup and the failure of the live site.

It is only useful for coping with a failure of the full backup. The rest of the time it would be discarded after a successful full backup.

We are periodically applying the "live backup" file to the standby database during the course of the day. This is all currently done in batch files.

(07 Jan '16, 11:56) Justin Willey

That seems to do it neatly, and immediately.

(07 Jan '16, 13:10) Justin Willey

What you need to do is the following:

  • set the backup_interrupted field in the a_backup_db structure to zero before starting the live backup
  • set up a mechanism - e.g. a GUI control, a SIGINT handler, whatever - on the client that allows the user to specify when the live backup is to be terminated
  • start the live backup by calling the DBBackup( &your_backup_db_info ) entry point in dbtools
  • when the user wants the live backup to terminate, set the backup_interrupted field to 1, i.e. your_backup_db_info.backup_interrupted = 1;
  • the dbtools code will notice that backup_interrupted is non-zero, stop fetching log pages from the server, and the DBBackup() call will return.

HTH


answered 07 Jan '16, 11:20


Mark Culp

Thanks Mark - that's clear.

(07 Jan '16, 11:57) Justin Willey

It's rude to answer a question with a question, but why would you need to stop (or use) a live backup that is still running?

Generally speaking, a live backup is only "needed" after the database fails, and when the database fails, the live backup stops automatically because it has lost its connection.


answered 07 Jan '16, 12:47


Breck Carter

edited 07 Jan '16, 12:49

It's the gap between my last copy of the live (backup) log and the full backup that I'm trying to minimise, in case my full backup fails. If I left the live backup running, it would get zapped at truncation time.

(07 Jan '16, 13:13) Justin Willey

That gap can be minimized by running (more frequent, but remote) incremental backups in tandem ... if that helps any.

But, then again, if you are not doing incrementals ... what exactly do you mean by 'truncate'?
If you are truncating the log as part of your full backup, then you may be playing this ~'a little too close to the edge of the cliff there'~

(07 Jan '16, 14:25) Nick Elson S...

Thanks Nick - I mean using the -x option with dbbackup

The problem is that I have to truncate the log (on the live server) at some point because it will just get bigger and bigger. When that truncation happens my live log backup gets trashed - there being no option to keep the old one and start a new one, as discussed here: http://sqlanywhere-forum.sap.com/questions/15476/how-to-run-a-live-backup

Are you suggesting that I could do a full but non-truncating backup, leaving the live backup running, and then do a separate transaction log backup with truncation? That would certainly minimize the exposure time.

(07 Jan '16, 15:01) Justin Willey

If the full backup fails, you still have the original database and log files (even if you have truncated the log, presumably you have renamed the original). I am trying to understand the gap of which you speak since I have never experienced one.

(07 Jan '16, 15:37) Breck Carter

Sorry, I'll try to explain more clearly:

Two servers in different locations, Live & Standby.

On the Standby Server there is a full copy of the database made earlier and a "live backup" running. Periodically (say every 30 minutes) we take a copy of the "live backup" log and apply it to the Standby database. The live backup log gets steadily longer during the day, so applying it gets a bit slower as more and more of it has to be skipped.

At some point in the 24 hours we start the Standby database in read-only mode and validate it.

At some point (could be daily, weekly, etc.) we want to replace the standby copy with a new copy of the Live database. Not strictly necessary from my point of view, but customers and their auditors seem to insist. Either way, we do have to truncate the log file of the live database (and therefore the "live backup" log) as they will grow too large to handle.

When the truncate log instruction is issued to the Live Server (either by doing a full back up or log backup only) the "live backup" log is immediately deleted by dbbackup and a new one started.

If we successfully retrieve the full backup (db & log file) from the Live Server then we have no problem - we apply the backed up log to the backed up database and then we can continue applying copies of the "live backup" log.

However, if we lose the Live Server while the full backup is being copied across (which could take hours with a 200 GB database), there are transactions that happened between our last copy of the "live" transaction log and the truncation that we don't have anywhere. Even though the new "live backup" log has the transactions between the truncation and the loss of the Live Server, we can't use them because of the missing ones in the gap.

But if I understand it properly, Nick's point is that I can minimise the gap by doing the log backup with truncation separately from the main backup. That certainly seems like a good idea.

Hope that makes more sense :)

(07 Jan '16, 16:08) Justin Willey

Are you suggesting that I could . . . leaving the live backup running; then do a separate transaction log backup with truncation?

Yes indeed ... In fact, that has been part of my standing suggestion on how to implement a cold/semi-warm stand-by configuration for the better part of 20 years now.

I prefer renaming (-r) combined with matching (-n) over truncation (-x) myself. That gives you extra convenience with backup identification, protects synchronization and replication scenarios, and gives you an extra degree of redundancy via the offline logs (which you will want to age off the system through a different mechanism if delete_old_logs is not operating).

(08 Jan '16, 08:29) Nick Elson S...

Many thanks for this Nick.

I hadn't realised that -r would also restart the "live backup" log in the same way as -x does. So if I use that with -n and carry out the backup directly to my Standby Server, the "live backup" won't be truncated until I've successfully received the one from the Live Server, which is great. (And I'll have an extra copy on the Live Server.)

The only thing I'll have to work out then is how to manage the build-up of old logs on the Live Server. I'm not doing any replication, and setting the delete_old_logs option doesn't seem to have any effect. The answer is probably a database procedure that I call from time to time to clear them out - that way I won't have to grant any folder rights on the Live Server to the user on the Standby Server.

(08 Jan '16, 11:15) Justin Willey

last updated: 08 Jan '16, 11:15