This is under SQL Anywhere 8.0.3.5594 on Windows Server 2003. I have been trying to resolve an issue of poor performance when running a series of updates on a single table. Testing with a copy of the customer's database, I found with repeated tests that making no change other than removing the table from the MobiLink publication reduced the time for updating 800 records from over 7 minutes down to 2 seconds. This was very unexpected for me. Can anyone explain why this might be and how it might be resolved (the table can't be dropped from the publication on the customer system)?

Notes: No MobiLink client was running in my test setup. The database service was configured to restart the transaction log at each checkpoint (-m). The table had a somewhat complex WHERE clause in the publication.

Thanks for any ideas on this!

asked 17 Jul '13, 16:44

Bob Leviton

My money's on the "complex WHERE clause"... but I am completely baffled by your description...

This must be the MobiLink consolidated database, right? (because dbeng8 -m effectively prevents synchronization if used on the remote database).

On the other hand, you're talking about a publication, which is a remote thing... is this a three-tier synchronization setup, with this the middle level (both consolidated and remote)?

Still, dbeng8 -m seems impossible.

Why do you say "no MobiLink client was running"? You're talking about the consolidated database, right? ...or do you mean "no synchronization was running"?

What does the table look like? The publication? The ... everything? :)

Is there a corresponding difference in the increase in size of the transaction log during the update? e.g., much larger increase with the table in the publication.

(17 Jul '13, 17:17) Breck Carter

I completely agree with Breck, you seem to be talking about both a consolidated and a remote here.

The -m switch should never be used on a remote database, yet that is the only location you would have a publication.

I would suggest running dbunload -n (no schema) and posting the publications section from the reload.sql file so we can see how you have defined your publications and subscriptions.

You might as well throw in the dbmlsync command line too.

(18 Jul '13, 08:32) David Fishburn

Typo: "dbunload -n (no schema)": should have said "dbunload -n (no data, schema only)"

(18 Jul '13, 08:34) David Fishburn

Breck & David, Thanks for your replies and my apologies for not explaining better:

  • The database I am testing with is a remote database from the customer system. For testing purposes, in our lab, I am running it as a standalone system, without a consolidated database running and without attempting to synchronize. But the publication is there - it is set up as a MobiLink remote database.

  • Typically in our lab we add -m to keep the transaction logs from getting too large. That configuration is obviously not used on the live customer system. I've repeated my tests without the -m parameter and results are exactly the same as I reported above.

  • The transaction log grows by the same amount and logs the same transactions with and without the target table in the publication.

  • Here is the WHERE clause: WHERE (vms_vessel_id is null) or (asset_ak_unique = any(select cast(asset_id as varchar(12)) || ' ' || origin_site from rdm.rdm_resourcetracker_sitreps where DATEDIFF(day,position_dtg,current timestamp) < 3))
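
For reference, in a MobiLink remote database a row filter like the one quoted above lives inside the publication's article definition. A rough sketch of what that might look like follows; only the WHERE clause comes from the post, while the publication and table names are placeholders (the actual names never appear in this thread):

```sql
-- Hypothetical article definition: pub_vessels and dba.vessel_assets
-- are made-up names; the WHERE clause is the one quoted above.
CREATE PUBLICATION pub_vessels (
    TABLE dba.vessel_assets
    WHERE (vms_vessel_id IS NULL)
       OR (asset_ak_unique = ANY(
            SELECT CAST(asset_id AS VARCHAR(12)) || ' ' || origin_site
            FROM rdm.rdm_resourcetracker_sitreps
            WHERE DATEDIFF(day, position_dtg, CURRENT TIMESTAMP) < 3))
);
```

The correlated subquery against rdm_resourcetracker_sitreps is the part most likely to be expensive if it has to be evaluated per modified row.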

(18 Jul '13, 09:21) Bob Leviton

That's two where clauses all mooshed together :) ... please show us the actual article from the publication.

(18 Jul '13, 14:13) Breck Carter

Thanks for the offer, Breck, but now that we understand what's going on we've already come up with a way to optimize the filter to almost completely eliminate the performance impact. Essentially, as Jeff explained, having the where clause in the publication was giving us a sort of hidden trigger that we were unaware of until now.

(18 Jul '13, 14:19) Bob Leviton

A publication's WHERE clause is evaluated at INSERT/UPDATE/DELETE time, in order to mark the records in or out of the publication scope in the transaction log for synchronization or replication. The overhead of doing this record keeping is most likely the difference in the update times you're seeing.
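
As a rough illustration of the difference described here, compare an article with and without a row filter (all names below are hypothetical, not from the original post):

```sql
-- No row filter: marking a changed row for synchronization
-- costs little beyond the normal transaction-log write.
CREATE PUBLICATION pub_all ( TABLE t1 );

-- Row filter: the WHERE condition must be evaluated for each
-- inserted/updated/deleted row to decide whether it is in or
-- out of publication scope - effectively a hidden per-row trigger.
CREATE PUBLICATION pub_filtered (
    TABLE t1 WHERE expensive_flag = 1
);
```

If the filter contains a subquery over another table, that subquery's cost is paid on every qualifying DML statement against the published table.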


answered 18 Jul '13, 10:46

Jeff Albion

edited 18 Jul '13, 10:58

Thanks Jeff, I guess that explains it.

(18 Jul '13, 10:53) Bob Leviton
Replies hidden

With SQL Remote, DBTRAN -sr would show the additional record-keeping for publications. I don't know whether the same is true for MobiLink clients...

(18 Jul '13, 11:58) Volker Barth

So... it's turning out to be a standard "Query From Hell" problem :)

Try running a SELECT * FROM [table] WHERE [clause from publication] to make sure it really is slow.

If so, try obtaining a Graphical Plan With Statistics for the SELECT * FROM [table] WHERE [clause from publication].

Scroll down in that article to see the Version 8 instructions.

Save the plan as a *.xml file and post it here.
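
Concretely, the test query suggested above would look something like the following, assuming the posted filter belongs to a single-table article (published_table is a placeholder, since the actual table name never appears in the thread):

```sql
-- published_table stands in for the actual published table's name.
-- If this SELECT is slow on its own, the publication filter is the
-- likely cause of the slow updates.
SELECT *
FROM published_table
WHERE (vms_vessel_id IS NULL)
   OR (asset_ak_unique = ANY(
        SELECT CAST(asset_id AS VARCHAR(12)) || ' ' || origin_site
        FROM rdm.rdm_resourcetracker_sitreps
        WHERE DATEDIFF(day, position_dtg, CURRENT TIMESTAMP) < 3));
```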


answered 18 Jul '13, 14:18

Breck Carter

edited 19 Jul '13, 06:36

You might also want to get the plan as Breck has suggested above but when running an UPDATE statement, as that is the problematic statement.

(18 Jul '13, 14:53) David Fishburn