
A technical question has been asked about this function by a potential customer: if the connection between the running server and the backup server is lost before a page is received, what data will be lost? In other words, is the live transaction log held in memory until a page is full and then written to disk, or is it only sent to the backup server once a complete page has been filled? The client is worried about whether data could be lost, and how much.

asked 17 Feb '14, 16:45 by SKova

edited 27 Feb '14, 03:17 by Volker Barth

Can't answer that question, but in case you want a backup server as up-to-date as possible, have you considered using the high availability feature aka "database mirroring" offered by SQL Anywhere 10 and above?

(18 Feb '14, 03:15) Volker Barth

We have, although this will take time to implement into our configurations and requires regulatory approvals in our line of business. That is why we are looking for the answer to this in the short term.

(18 Feb '14, 06:30) SKova

So you are asking whether the live backup would be similar to one of the asynchronous database mirroring modes (with "the primary server sends ... to the mirror" replaced by "the live backup process receives ... from the database server"), or to yet another method?

Asynchronous mode: In asynchronous mode, committed transactions are not guaranteed to be recorded on the mirror server. In this mode, the primary server sends transaction log pages to the mirror when a transaction is committed.

I.e. the live backup process would receive pages after a COMMIT.

Asyncfullpage mode: In asyncfullpage (or page) mode, pages are not sent on COMMIT; instead, they are sent when the page is full. This reduces the amount of traffic between the two database servers and improves the performance of the primary server.

I.e. the live backup process would receive pages after a page has been written to disk (no matter if it was written to disk because it was full or because there was a COMMIT - which always forces pages to be written to disk.)

As stated, I do not know that...

(18 Feb '14, 07:23) Volker Barth
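To make the difference between the two quoted modes concrete, here is a small hypothetical Python model (not SQL Anywhere code; the page size is the documented 4 KB default, but the operation sizes and function names are invented for illustration) of the two send policies:

```python
# Hypothetical model of the two HA log-shipping policies quoted above.
# NOT SQL Anywhere code; sizes and names are invented for illustration.

PAGE_SIZE = 4096  # default SQL Anywhere database/log page size in bytes

def bytes_shipped(operations, mode):
    """Simulate how many log bytes reach the mirror before a crash.

    operations: list of ("write", nbytes) or ("commit",) tuples.
    mode: "async" also ships the partial page on COMMIT;
          "asyncfullpage" ships a page only once it is completely full.
    """
    buffered = 0   # bytes sitting in the partially filled current page
    shipped = 0    # bytes the mirror has received
    for op in operations:
        if op[0] == "write":
            buffered += op[1]
            # Every completely filled page is shipped in both modes.
            while buffered >= PAGE_SIZE:
                shipped += PAGE_SIZE
                buffered -= PAGE_SIZE
        elif op[0] == "commit" and mode == "async":
            # async mode flushes the partial page on COMMIT as well.
            shipped += buffered
            buffered = 0
    return shipped

ops = [("write", 3000), ("commit",), ("write", 3000)]
print(bytes_shipped(ops, "async"))          # 3000: COMMIT pushed the partial page
print(bytes_shipped(ops, "asyncfullpage"))  # 4096: only the one full page went out
```

In this toy run, asyncfullpage leaves the committed 3000 bytes stranded in the partial page, which is exactly the exposure the question is about.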

Volker, we are trying to understand when the data for the pages is received. Also related (I forgot to mention): what is the size of those pages? That would show how much data could be lost if the primary server were to fail.

(18 Feb '14, 07:55) SKova

AFAIK, the transaction log uses the same database page size as the database file (by default 4 KB). Note, however, that a single transaction may span several pages, so a series of "uncommitted pages" with a missing "last page containing the COMMIT" could mean that more data than fits on a single TL page is lost.

Note, however, that you could use DBTRAN to translate the log to find any non-committed operations and apply them manually.

(Yes, I'm aware that this does not answer your original question.)

(18 Feb '14, 08:31) Volker Barth
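A back-of-the-envelope sketch of that multi-page point, assuming the 4 KB default page size and ignoring per-record log overhead (the helper name is invented):

```python
# Rough model only: assumes the default 4 KB page size and ignores
# per-record transaction log overhead.
import math

PAGE_SIZE = 4096  # the log uses the database page size (4 KB by default)

def log_pages_spanned(txn_bytes):
    """How many log pages a transaction of txn_bytes would roughly touch."""
    return math.ceil(txn_bytes / PAGE_SIZE)

# A ~100 KB blob insert spans ~25 log pages; if the final page carrying the
# COMMIT never reaches the backup, DBTRAN would show the whole transaction
# as uncommitted, so far more than one page's worth of data is at stake.
print(log_pages_spanned(100 * 1024))  # 25
```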

Thanks. I am hopeful that someone will know the details behind the question.

(18 Feb '14, 08:36) SKova

By my reading of the code, once a live-backup client has read all currently available data in the transaction log, it will block and wait until the next time we fully flush transaction log data to the physical disk (ie, an 'fsync' or equivalent operation). This is not equivalent to simply writing a log page. Typically, we will do an fsync when a COMMIT is performed; however, a single large transaction (such as the insertion of a blob) might write several pages before ever doing an fsync. Fsyncs are done in other cases such as when we grow the transaction log file too.

There's good reason to wait for the fsync rather than just a write operation. Data that is written without an fsync isn't guaranteed to be on stable storage and could disappear from the filesystem in the event of a power outage. If transaction log data were sent to the client whenever there was a write operation, the client could (in the event of a server failure) end up with more data in its log than the server has. Stated another way, the server (when restarted) could end up writing different data to pages that had already been sent to the client. So, we wait for the fsync.

BTW, note that this mechanism doesn't guarantee that the live-backup client is only ever behind by at most one committed transaction. When the client wakes up and gets a copy of the page it was waiting for (ie, after an fsync), it operates asynchronously from the server. There could be lots of activity at the server while the client is writing that newly acquired page to its local filesystem and the database server will not wait for the client to catch up. For that sort of thing, you need the high availability feature.
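The hand-off described in this answer can be sketched as a tiny Python model (hypothetical, not the server's actual code; the class and method names are invented): the live-backup reader only ever sees bytes up to the last fsync, so writes that have not been made durable never reach the client.

```python
# Hypothetical sketch of the fsync-gated hand-off described in the answer.
# NOT SQL Anywhere source; names and sizes are invented for illustration.

class TranLog:
    def __init__(self):
        self.written = 0   # bytes written to the OS, not yet durable
        self.synced = 0    # bytes made durable by the last fsync

    def write(self, nbytes):
        # A page write alone carries no durability guarantee.
        self.written += nbytes

    def fsync(self):
        # Done on COMMIT, on log growth, etc.; makes all written bytes durable.
        self.synced = self.written

    def live_backup_readable(self):
        # The live-backup client blocks until the next fsync, so it can
        # only ever read up to the last durable byte, never past it.
        return self.synced

log = TranLog()
log.write(4096)                    # large transaction writes a page...
print(log.live_backup_readable())  # 0: nothing durable yet, client still blocked
log.write(4096)
log.fsync()                        # the COMMIT forces the fsync
print(log.live_backup_readable())  # 8192: client can now catch up
```

The model also shows why the client can lag by more than one transaction: `written` can keep growing while the client is still copying the bytes it read at the last fsync.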


answered 25 Feb '14, 22:19 by John Smirnios

So a live-backup is more similar to HA's asynchronous mode than to the asyncfullpage mode (besides the general differences between live-backup and HA, and the differences between a poll-based and a push-based approach)?

(26 Feb '14, 06:42) Volker Barth

Yes, I think it is closer to asynchronous mode.

(26 Feb '14, 22:17) John Smirnios