
We have a SQL Anywhere 11 high availability cluster on site. Currently we are looking into disaster recovery, and we want to have a "30 seconds behind" database copy off site in two locations. The sole purpose of this is that "when our server room gets hit by a meteor" we will be able to move key personnel to our disaster recovery site and be back up and running with core functionality within a reasonable amount of time with minimal data loss. How do we achieve this?

Currently we are thinking about using either GlobalScape Continuous Data Protection or Microsoft Data Protection Manager to copy the database files to the disaster recovery locations. There is some fear that the database files might be in some sort of "in between" state and that we might not be able to start the database when using these solutions (which would obviously defeat the purpose). Alternatively, we are wondering what SQL Anywhere itself offers us in this area.

Is there anyone out there doing something like this? How are you set up, and what are the advantages and disadvantages? Any input is appreciated!

asked 04 Jan '11, 21:08 by RobertDD

edited 07 Jan '11, 22:49 by Volker Barth

If you can't move to Version 12, and you must locate the HA primary and secondary servers locally, then running dbbackup -l at a remote location may be your only option.

As I remember the joys of Version 9 dbbackup -l between Miami and Brussels, the big issues were these:

(1) shipping the giant full backup to get the setup established at the remote location,

(2) actually starting the database at the remote location after the dbbackup -l has run for a long time to accumulate a very large backup log.

Ideally, you should do step (1) frequently so that the log doesn't grow so large as to make remote startup (recovery mode) take forever.

In this respect, HA is live backup on steroids... rather than accumulating the log to be applied after a failure, the log is applied on a continuous basis... a big advance.

If you want to learn about dbbackup -l, just ask... it is rather funky, lots of gotchas.
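For reference, a live backup is just the dbbackup utility running (usually at the remote site) with the -l option against the primary. A minimal sketch; the host names, paths and credentials here are placeholder assumptions, not a tested setup:

```shell
# Run at the REMOTE site: maintain a continuously updated copy of the
# primary's transaction log. Host, paths and credentials are placeholders.
dbbackup -l /dr/livebackup/mydb.log \
  -c "ENG=primary;DBN=mydb;UID=dba;PWD=sql;LINKS=tcpip(HOST=primary.example.com)"

# Periodically (step 1 above): take a fresh full backup on the primary
# and ship mydb.db to the remote site, so the accumulated log stays small.
dbbackup -c "ENG=primary;DBN=mydb;UID=dba;PWD=sql" /backups/full
```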

Version 12 brings a feature called "read-only scale-out" so you could move your HA secondary to the remote location and establish a third "copy node" locally.

The reason third-party solutions may not work is that the copies they make of the database file and transaction log may not be in synch, and it may not be possible for the SQL Anywhere engine to bring them into synch when it starts up and goes through recovery mode. If you check their documentation, you may find that they have special "agents" or other modules or options dedicated to handling specific DBMS software like SQL Server, but no agent for SQL Anywhere. The definition of "in synch" is complex, given the existence of things like dirty pages, the checkpoint log, rollback logs and so on.

However, it would be easy to test: Just have them back up a very busy database and then see if you can start it at the remote location.

answered 07 Jan '11, 20:54 by Breck Carter

@Breck: I guess SA12's copy nodes aren't really an answer here: AFAIK there can be as many as you want, but they can't take the role of a mirror in an HA setup. What Robert seems to require is an HA system with truly 2 mirrors, and that's not available in SA today.

(07 Jan '11, 22:42) Volker Barth

@Volker: When it comes to failover, yes, HA currently makes you choose in advance where your one and only secondary is going to be located, and when your primary fails, that one single secondary becomes the primary. But BEFORE failover, during steady state, there's no difference between a secondary (which is read-only) and a copy node (which is read-only)... as I understand it. Caveat Emptor: I have not yet had any personal experience with read-only scale-out in a client production environment.

(08 Jan '11, 09:53) Breck Carter

Another Caveat Emptor: I have not had any personal experience with live backup using dbbackup -l coexisting with an HA setup. I can see some challenges; e.g., What do I do, or have to do, with the live backup after HA failover? OTOH, since I do know that multiple live backups in different geographical locations WERE possible with Version 9, in production, I see no reason why live backup could not coexist with HA. The "Watcom Does Things The Way They Should Be Done" rule might apply here :)

(08 Jan '11, 09:58) Breck Carter

@Breck: I remember an NNTP discussion (but don't have a reference at the moment) in which one of the adored engineers said that it should be possible. - I agree on your other points and have made similar remarks, cf. my comments on my answer :)

(08 Jan '11, 15:43) Volker Barth

@Breck: I would very much like to read up on DBBackup -l and its gotchas. It does seem to be the solution. Where should I start?

(11 Jan '11, 14:53) RobertDD

Just one question: is there a reason no one is mentioning replication?

(11 Jan '11, 14:54) RobertDD

@Robert: I guess the reason replication has not been mentioned is the focus on HA so far. HA has the idea that in case of a failover, the mirror automatically can take over ownership, and that can be switched back to the primary. So roles can switch. - However, in a replication setup, the 2nd site (your offsite db) would still be a remote in case of a disaster, and it will be difficult (though not impossible!) to turn it into a consolidated database and re-extract the primary site as a remote, and then switch roles again. At least this would be very time-consuming, methinks.

(11 Jan '11, 16:00) Volker Barth

@Volker: I don't think "re-extract" would be used to create or re-create a full-copy replicated database... that indeed would be time-consuming. A file copy or backup copy would probably be used instead. The trick might be in the timing: when to re-create/re-start the publications and subscriptions to guarantee no loss of data. MobiLink might be easier in that respect. In either case, you have latency delays, which may be why nobody has suggested either.

(12 Jan '11, 11:18) Breck Carter

@Robert: I had a quick look around for articles on live backup, and other than one page in my book, the Help is all there is. I did find the client docs and scripts from 2004 describing the actual ASA8 live backup setup of a 25G financial database from Miami to Brussels, and it brings back HORRIFYING memories of complexity and stability concerns. Apparently I lack the ability to remember pain. Bottom line: I do not recommend live backup, not in an age where HA and MobiLink are available. I am happy to answer questions, but I won't be writing any articles about it :)

(12 Jan '11, 11:29) Breck Carter

@Robert: The problems with live backup stemmed not from the software (it worked fine) but with the hardware and the wetware. Long-haul networks have problems which cause live backup to lose its connection and start over... you will be on vacation the night that happens ten times in a row. And satellite IT staff have been known to muck things up by, say, rebooting servers to upgrade software or deleting files they don't understand... that's the wetware component. Of all SQL Anywhere components, live backup is the most sensitive to its environment, more so than SQL Remote.

(12 Jan '11, 11:37) Breck Carter

@Breck: So what would you recommend now, if DBBACKUP -l seems not to be the preferred choice anymore?

(12 Jan '11, 20:39) Volker Barth

@Volker (and @Robert): Another day has passed, and the horror has faded :)... Perhaps use live backup locally, with the primary server as a source for the log records. That eliminates the long-haul network issues and perhaps the administrative issues as well. Put the HA secondary at the remote site. At this point, however, back-and-forth Q&A should perhaps be replaced with a live discussion :)

(13 Jan '11, 09:17) Breck Carter

@Breck and @Volker: Thank you guys so much for your input on this topic. It is highly appreciated. I am still looking for the "perfect" HA on site + backup off site solution, but we might be starting to consider HA over VPN as well, and moving the physical location of the secondary off site. That still leaves some questions, and I will post a new topic on that issue and would very much like your input there as well. @Breck: How does a live discussion work on SQLA?

(14 Jan '11, 18:17) RobertDD

@Robert: SQLA doesn't do chat, but Google does (chat AND voice).

(14 Jan '11, 20:34) Breck Carter

Database file copying most likely won't work. I'd consider live backup (dbbackup -l) instead.

answered 05 Jan '11, 20:50 by Dmitri

Hi Dmitri, what is the reason it would not work?

(06 Jan '11, 19:25) RobertDD

A database copy would very likely be inconsistent and hence unusable.

(06 Jan '11, 20:41) Dmitri

OK, yeah, that is exactly what I worry about.

(07 Jan '11, 13:59) RobertDD

I can't comment on HA as we have no production experience of using it. We do however have many installations with "live" back-up configurations which have proved very effective in genuine "disasters".

Unlike HA, you have to make the decision to start the replacement server, apply the logs and switch over - but batch files will do all the work. It does allow you to distinguish between a real emergency where a switchover is required and a brief outage of the main server.
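The decide-and-switch-over process described above can be captured in a small script. A hedged sketch only; the paths and server names are assumptions, not a tested procedure:

```shell
# Disaster switch-over at the DR site (run manually, once the decision is made).

# 1. Put the most recent full backup of the database file in place.
cp /dr/full/mydb.db /dr/run/mydb.db

# 2. Apply the accumulated live-backup transaction log (-a applies a log
#    and shuts down; repeat in sequence if there are several logs).
dbeng11 /dr/run/mydb.db -a /dr/livebackup/mydb.log

# 3. Start the recovered database for normal use.
dbsrv11 -n drserver /dr/run/mydb.db
```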

answered 07 Jan '11, 18:39 by Justin Willey

Somebody has to say it, and it's worth saying loudly (hence the separate answer):

High availability is not the same as backup, and it does not replace backup. A straightforward, local backup of the primary database should be implemented as a regular process, completely independent of the HA implementation.

The reason? You may want to restore to an earlier point, or go back and pluck data out of old backups, for technical or business or security or legal reasons that do NOT involve server failure or failover or availability.

Folks tend to concentrate on immediate failover reliability and performance when designing HA setups. That's as it should be... and a separate backup process that takes care of the "historical record" side of things lets you do that (concentrate on HA performance).

Note: A complete separation of regular backup and live backup processes is not possible; they are interrelated as discussed in the section "Live backups and regular backups" here. Like I said earlier, live backup is funky... and if memory serves, taking a full backup and truncating the transaction log necessitates sending the full backup to the remote live backup site because the remote live backup log file was truncated at the same instant as the primary log... and until that full backup gets there, the remote site is useless for failover.
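As a sketch of how the two interact: a scheduled full backup that truncates the transaction log might look like the following (the switch and paths are assumptions; check the dbbackup documentation before relying on them):

```shell
# Full image backup of the database file and transaction log; -x deletes
# the original transaction log and starts a new one after the backup.
# Truncating the log invalidates the remote live-backup log, so the new
# full backup must then be shipped to the DR site before it can fail over.
dbbackup -c "ENG=primary;DBN=mydb;UID=dba;PWD=sql" -x /backups/full
```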

Oh, and the hard part? Testing recovery. Not just testing HA failover but actually testing restore-from-backup. If you don't test recovery, it absolutely positively will not work when you need it.

Tip: Validate the backup. That makes validating the primary unnecessary, since if a backup is valid then by definition the primary was valid at the point the backup was taken. As well as being hard to do, validating the primary is completely pointless, and validating the backup is an absolute requirement... if you don't validate the backup then don't bother making one :)
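A sketch of one way to validate a backup; all names and paths here are placeholders, and the key point is to validate a throwaway copy, since starting a database modifies it:

```shell
# Work on a COPY of the backup, never the backup itself.
cp /backups/full/mydb.db  /tmp/validate/mydb.db
cp /backups/full/mydb.log /tmp/validate/mydb.log

# Start the copy, validate it, shut it down.
dbeng11 -n validate_copy /tmp/validate/mydb.db
dbvalid -c "ENG=validate_copy;DBN=mydb;UID=dba;PWD=sql"
dbstop  -c "ENG=validate_copy;UID=dba;PWD=sql"
```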

answered 08 Jan '11, 10:38 by Breck Carter

There are some applications (command and control?) where the historical record is completely unimportant, availability is everything, so HA without backup makes sense. SQL Anywhere is used in some very interesting environments... like running databases sealed and buried in the ground for years and years :)

(08 Jan '11, 10:42) Breck Carter

@Breck: we have a backup going every night right now. This backup is restored, and we actually use it to review historical data when needed, often to prove or disprove without a doubt that something "was/was not there yesterday". I believe we keep the daily backups for 14 days, then we keep Friday backups for half a year or so.

(11 Jan '11, 14:22) RobertDD

In case I understand your current situation right:

If you're already using SQL Anywhere HA on site, wouldn't the easiest thing be to extend it so that, instead of a local secondary server, you use one at a geographically different location (and put the arbiter somewhere else, too)?

AFAIK, HA is made for such purposes.

And you would not have to mess around with copying database files and the like (which won't work in a few seconds given a typical x GB database size).
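For reference, where the partners and the arbiter live is just a matter of where each server process runs. A minimal v11 mirroring startup sketch; the server names, hosts, ports and auth string are placeholder assumptions:

```shell
# Arbiter (could run at a third location):
dbsrv11 -n arb -x "tcpip(PORT=2639)" -xf /ha/arb.state \
  -xa "AUTH=secret;DBN=mydb"

# Partner 1 (primary site). Partner 2 is started the same way with the
# names swapped, and could run at the remote site instead of locally:
dbsrv11 -n part1 -x "tcpip(PORT=2638)" -xf /ha/p1.state \
  /ha/mydb.db -sn mirror_mydb \
  -xp "partner=(ENG=part2;LINKS=tcpip(HOST=remote.example.com;PORT=2638));auth=secret;arbiter=(ENG=arb;LINKS=tcpip(HOST=arb.example.com;PORT=2639));mode=sync"
```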

answered 06 Jan '11, 08:45 by Volker Barth

We want/need to have access to both DB machines in the HA setup in our primary building. The solution we are looking for should be one in addition to our current HA setup.

(06 Jan '11, 19:16) RobertDD

The two products I listed above do incremental backups, so we would only ever transport the new bits and not everything all the time. The amount of data that ends up going over the line will not be negligible, but it will likely also not be a bottleneck.

My main concern is that we might not be able to start from a copy of the DB file if it was copied while the DB was running.

(06 Jan '11, 19:24) RobertDD

Then I guess DBBACKUP -l would be the solution, as Dmitri and Breck are suggesting. And AFAIK, it should work in an HA environment, too. However, as DBBACKUP -l is then just another HA client app, you might have to add logic to cope with failover, i.e. what is DBBACKUP -l supposed to do when the current primary server goes down - will it automatically reconnect to the new primary? (Not that easily answered...)
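One way to let the live backup find whichever partner currently holds the primary role is to list both hosts in its connection string, as any HA client would. A sketch only; the hosts and mirror server name are assumptions, and the reconnect-after-failover behaviour would need testing:

```shell
# dbbackup connects like any client: naming the mirror server and both
# partner hosts lets it reach whichever partner is currently primary.
dbbackup -l /dr/livebackup/mydb.log \
  -c "ENG=mirror_mydb;DBN=mydb;UID=dba;PWD=sql;LINKS=tcpip(HOST=host1.example.com,host2.example.com)"
```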

(07 Jan '11, 22:36) Volker Barth

Or you might have to redefine the HA strategy: Could the current HA setup be reduced to a V12 "read-only scale-out" setup (i.e. you would have an active server and several read-only nodes in the main location), and could the mirror role be transferred abroad? Or could you switch the mirror role temporarily: you would usually use the remote location as the mirror, but when doing planned administrative tasks on the primary (say, OS setup or hardware modifications), you would switch to a local mirror to have direct access.

(07 Jan '11, 22:47) Volker Barth

@Volker: Our HA backup strategy does not change: primary goes down => secondary takes over. The off site backup does not come into play, ever. We bring the old primary back online ASAP and restore quorum.

When the disaster recovery site comes into play, our primary site has just been obliterated (the "hit by a meteor" scenario). We don't care about HA failover right now; we are counting on the fact that lightning doesn't strike twice. We just need to be up and running ASAP with minimal data loss. Losing a day's worth of data is too much. Losing at most 10 to 20 minutes is about what we want.

(11 Jan '11, 14:31) RobertDD

I would favor Breck's suggestion to use the "read-only scale-out" in addition to the HA scenario. But has anyone checked whether the resulting DB can be started and used independently of the scale-out if necessary?

answered 10 Jan '11, 08:38 by Martin

@Martin: I don't think looking at v12 is a likely solution for our company. We do not have the manpower available to upgrade our database again.

We are really looking for a v11 way.

(11 Jan '11, 15:01) RobertDD