We have a setup with two DB instances, connected via SQL Remote and FTP.
One table had about 6 million rows, and we then deleted ~5 million of them on the remote site in blocks of ~500,000 rows (this took ~20 minutes).
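For reference, the blockwise deletion was done roughly like the following sketch (Watcom-SQL dialect; the table name `big_table`, key column `id`, and the filter condition are assumptions, not our actual schema):

```sql
-- Delete qualifying rows in blocks of ~500,000, committing after each block
-- so SQL Remote packages them into separate, bounded messages.
BEGIN
    DECLARE rows_deleted INTEGER;
    SET rows_deleted = 1;
    WHILE rows_deleted > 0 LOOP
        DELETE FROM big_table
        WHERE id IN ( SELECT TOP 500000 id
                      FROM big_table
                      WHERE created_at < '2010-01-01'   -- hypothetical filter
                      ORDER BY id );
        SET rows_deleted = @@ROWCOUNT;
        COMMIT;
    END LOOP;
END;
```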
Now we have ~2700 messages flowing back from the remote to the consolidated database.
Our problem is that it now takes about 50 minutes per FTP message to apply these deletes from the remote side, so applying all messages would take around 96 days... And of course all other "real" updates/inserts etc. are just queued up behind them in more messages...
We don't see much I/O traffic from the database, but it uses 100% CPU time on one of the 8 available cores.
What could we do to speed up applying these messages? We have already added the useful indexes, but that did not improve performance.
Is there a chance to make this process faster:
We simply can't afford to wait 96 days for the queue to clear up...
Or any other ideas how to speed up dbremote (using more than one CPU/core, for example)?
asked 09 Dec '10, 13:24
Lesson learned from this question:
The strong recommendation that all replicated tables should have a primary key is not only focused on avoiding inconsistent data (see doc note below) but also on avoiding miserable performance...
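The performance angle can be sketched as follows: without a primary key, the DELETE that SQL Remote replays has to identify each row by comparing all of its columns, which tends to cost a table scan per deleted row; with a primary key, each delete is a single index lookup. A minimal illustration (table and column names are hypothetical):

```sql
-- Without a PK, the replayed statement looks roughly like:
--   DELETE FROM t WHERE c1 = ? AND c2 = ? AND c3 = ?;   -- scan per row
-- With a PK, it can use:
--   DELETE FROM t WHERE pk = ?;                          -- index lookup per row

-- Adding a primary key to an existing table:
ALTER TABLE t ADD PRIMARY KEY ( pk );
```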
I wasn't aware of this problem - though, fortunately, I have never used SQL Remote without fitting PKs.
answered 10 Dec '10, 08:38
I have now done the following:
So the databases are now in sync and replicating again. What is still left to do is to deduplicate the unloaded data and then reload it into the table.
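The remaining deduplicate-and-reload step could look roughly like this sketch (staging table name and file path are assumptions; `LOAD TABLE` is the SQL Anywhere bulk-load statement):

```sql
-- 1. Create an empty staging table with the same schema as the target table t.
SELECT * INTO t_staging FROM t WHERE 1 = 0;

-- 2. Bulk-load the previously unloaded data file into the staging table.
LOAD TABLE t_staging FROM 'c:\\unload\\t.dat';

-- 3. Reload only one copy of each row into the real table, then clean up.
INSERT INTO t SELECT DISTINCT * FROM t_staging;
COMMIT;
DROP TABLE t_staging;
```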
Thanks Volker for your help.
answered 10 Dec '10, 06:36