
Hello,

we have worked with SQL Anywhere since the 5.5 versions and have also used dbremote-style replication with very good success.

Now the databases have grown, and the extraction process for new remotes takes 1-2 hours, which most of the time can be done during offline hours. But more and more often we need to extract new remotes during normal working hours. This of course blocks the consolidated database, which is not exactly optimal ;)

We have now tested the extraction against a copy of the db, and it works fine. But we did the following:

  1. Created/started the remote
  2. Shut the database down
  3. Made a copy of the db files
  4. Restarted the consolidated database

Of course this causes a problem in steps 2-4, since the consolidated DB is down during that time, which is not OK during working hours.

My question now is: is it possible to use dbbackup to make the backup? We tried it 2-3 times, but we then always got missing-message errors for the new remote.

I also looked at http://sqlanywhere-forum.sap.com/questions/7301/replication-extract-from-a-db-backup-or-similar and http://dcx.sybase.com/index.html#1201/en/sqlremote/sr-deploy-s-4842482.html

But it's not clear to me how we can get a copy of the db without shutting down the consolidated database, or which dbbackup options would be required...
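For reference, what we tried was essentially a plain full backup against the running server, along these lines (connection parameters and the target directory are placeholders):

dbbackup -c "ENG=cons;DBN=cons;UID=dba;PWD=***" -y d:\backup\forextract

(-y just replaces existing backup files without prompting.)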

asked 16 Aug '12, 06:29 by ASchild
accept rate: 14%

edited 16 Aug '12, 07:28 by Volker Barth


In my experience, taking a backup from the live database while leaving it running should work. We did so with v8.0.3 for several years before switching to a custom extraction process. The latter made the extraction much faster (sometimes by a factor of 100), and we could go back to extracting from the live database... (As such, running DBXTRACT -v to check the underlying extraction statements is recommended, to find out whether those statements may be suboptimal...)
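For example (connection parameters, output directory and remote name below are made up, and the exact syntax may differ between versions - check dbxtract -? on yours), a verbose run could look like:

dbxtract -v -c "ENG=cons;DBN=cons;UID=dba;PWD=***" c:\extract\myremote myremote

The verbose output lists the SELECT statements DBXTRACT uses to fill the remote database, and those are the ones worth inspecting.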

What we did was not using DBBACKUP but the BACKUP DATABASE statement, and with v8 it had way fewer options than now, so I can't tell which options are required. Immediately afterwards, we did a REMOTE RESET for all newly created remotes, followed by a COMMIT.

We made sure that the consolidated database would not run SQL Remote in the meantime.

FWIW, here is a SQL script that performs just the necessary steps on the live database - some of the names are German, but I bet that won't hurt you :)

The newly created remotes are filtered here via a user-defined table called T_Replikat.

-- Create a copy of the consolidated database
-- as the basis for extracting remotes,
-- so that normal operation of the original database is not disturbed.
--
-- The copy of the database is created at isolation level 3 (serializable, i.e. no phantom reads).
set option ISOLATION_LEVEL = 3;
checkpoint;
backup database directory 'D:\DATA\BACKUP\ZuDbExtract';

-- Then perform a REMOTE RESET in the current DB for all newly created/re-created
-- remotes, followed by a COMMIT.
begin
   for forNeueReplikate as crsNeueReplikate cursor
      for select user_name
            from sysremoteusers SRU inner join T_Replikat R on SRU.user_name = R.ReplikatName
            where flagIstFreigegeben = 0 and R.datLetzteAenderung > current date
      for read only
   do
      message 'Performing REMOTE RESET for remote "' || user_name || '"'
         type info to client;
      execute immediate 'remote reset "' || user_name || '";';
   end for;
   commit;
end;

set option ISOLATION_LEVEL = 0;
commit;
-- Afterwards the current DB may replicate again; the new remotes are created
-- from the copy.
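To extract from that copy, you would then start the backed-up files under a separate server name and point DBXTRACT at that server - roughly like this (engine executable, server name, database file name and remote name are made up; adjust to your version and setup):

dbeng12 -n ConsCopy "D:\DATA\BACKUP\ZuDbExtract\cons.db"
dbxtract -v -c "ENG=ConsCopy;DBN=cons;UID=dba;PWD=***" c:\extract\myremote myremote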
answered 16 Aug '12, 07:27 by Volker Barth
accept rate: 34%

edited 16 Aug '12, 07:31

Thanks, will give it a try. (And German is no problem for me, either.)

What are the steps for creating our own extraction process, and why is it so much faster?

(16 Aug '12, 11:39) ASchild

Well, that can't be answered in general: I don't think that a custom extraction process is generally faster. However, you might have a look at the statements DBXTRACT is using: they are built from the article and subscription definitions for that remote.

We had the situation that we used many subscriptions per table for the same remote user, and that turned into joins with WHERE clauses containing lots of " ... OR ... OR = ... " conditions, making them perform very badly, particularly when there are several parent/child levels.

Looking at those statements made it very easy to understand why the extraction would take so long, and to rewrite them to test improvements.

A custom extraction might simply use a temporary table to store the corresponding "subscribe by" values and then join directly with the required tables.
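As a purely hypothetical sketch (table and column names are made up), that could look like this:

-- Collect the remote's SUBSCRIBE BY values once in a temporary table.
create local temporary table SubscribeByValues (
   value varchar(40) not null primary key
) not transactional;

-- Fill it from wherever you keep those values (shown here simply as literals).
insert into SubscribeByValues values ('4711');
insert into SubscribeByValues values ('4712');

-- Then join directly instead of evaluating a long OR chain per row.
select A.*
  from T_Auftrag A inner join SubscribeByValues S on A.KundenNr = S.value;

That way each table is read with one straightforward join instead of a WHERE clause that repeats all subscription values.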

(I don't know if this is all too vague; I hope you get the idea.)


For further details, cf. this FAQ...

(16 Aug '12, 12:46) Volker Barth

Hello Volker,

yep, I got the idea of what can perhaps be optimized.

Thanks

André

(16 Aug '12, 15:25) ASchild