We are moving a complex application from v10.0.1 to v12.0.1. In general most things seem to work faster, but we are finding a few areas where performance is noticeably worse - in some cases dramatically worse. These problems appear to relate to certain user-defined functions where the execution times (as shown by profiling results) have increased dramatically.

The application makes extensive use of user-defined functions and procedures for areas of reasonably complex business logic, which are used widely in the application (and associated web services etc). In addition, in some areas they allow for the introduction of alternative custom rules for some customers.

We have been working with the Sybase support team, who have been able to give us some helpful suggestions, such as re-writing the simpler functions so that they can be in-lined, etc.
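For illustration - this is just a sketch with invented names, not our actual code - the kind of rewrite involved is reducing a function to a single RETURN expression, which the optimizer can then consider for in-lining, instead of procedural logic:

```
-- A function written with procedural logic is typically not a candidate for in-lining
CREATE FUNCTION net_price_v1( IN @price NUMERIC(12,2), IN @discount NUMERIC(5,2) )
RETURNS NUMERIC(12,2)
BEGIN
    DECLARE @result NUMERIC(12,2);
    IF @discount IS NULL THEN
        SET @result = @price;
    ELSE
        SET @result = @price * ( 1 - @discount );
    END IF;
    RETURN @result;
END;

-- Re-written as a single RETURN expression, which the optimizer may in-line into the calling query
CREATE FUNCTION net_price_v2( IN @price NUMERIC(12,2), IN @discount NUMERIC(5,2) )
RETURNS NUMERIC(12,2)
BEGIN
    RETURN @price * ( 1 - COALESCE( @discount, 0 ) );
END;
```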

The databases we are working with are quite large (30GB - 150GB) with numbers of simultaneous users in the 50 - 350 range.

It is hard to pin down, but as well as increased execution times, some things seem to take longer in v12 as the database size grows, whereas in v10 performance was unaffected by database size - so long as the hardware was adequate. (In both cases the hardware is identical: the available cache exceeds the database size, there are 24 virtual processors (two physical Intel multi-core processors), and the OS is Windows Server 2008 R2 64-bit.)

My question boils down to this - between v10 and v12, have there been changes to the way that SQL Anywhere works that might explain a general reduction in the efficiency of the execution of complex user-defined functions, or is it more likely to be simply something related to the particular things that we are doing? Additionally, is there something that would explain a fall-off in performance in v12 as database size grows, even when the number of rows in all the tables concerned in the process is the same?

asked 16 Feb '12, 15:41

Justin Willey

UPDATE - suggestions from support and clues from comments here got me looking at statistics - re-creating statistics for all tables seems to have changed the situation very markedly for the better - now re-testing everything!
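For reference, what I did was along these lines (the table names here are just placeholders for our own):

```
-- Rebuild the column statistics (histograms) for each table involved
DROP STATISTICS Orders;
CREATE STATISTICS Orders;

DROP STATISTICS OrderItems;
CREATE STATISTICS OrderItems;
```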

(17 Feb '12, 14:12) Justin Willey

Further update - it seems the gains were illusory. Even under light load, performance reverted to being very poor (things that took 2 seconds now take 30). Same database, same hardware, identical queries - we had to revert to v10 again. Performance was instantly restored.

(18 Mar '12, 20:08) Justin Willey

Is there a VM involved? Is dbsrv12.exe using all of the CPUs?

(19 Mar '12, 08:10) Breck Carter

No VM, all processors in use - at least as measured by Task Manager.

(19 Mar '12, 08:51) Justin Willey

There are differences between V10 and V12 with respect to the caching of access plans and the inlining of UDFs, though I would argue that the changes the engineering team have made are intended to correct known problems with the techniques implemented in V10, rather than to change the underlying technology significantly.

If I had to guess about the cause of the behaviour you are seeing, more than anything else my guess would be plan caching; to wit, the caching of plans that occasionally result in less-optimal strategies due to changes in volume or value distribution in the underlying tables. In V12 there are connection counters that you can query to count the number of requests that bypass optimization or use cached plans. You should monitor these counters and, along with Application Profiling, try to pinpoint where caching and/or inlining may be impacting your execution times.
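For example - the property names here are from memory, so check the documentation for your build - something along these lines can be run periodically on a connection of interest:

```
-- How many statements on this connection bypassed the optimizer entirely,
-- went through full optimization, or reused a cached plan
SELECT CONNECTION_PROPERTY( 'QueryBypassed' )  AS bypassed,
       CONNECTION_PROPERTY( 'QueryOptimized' ) AS optimized,
       CONNECTION_PROPERTY( 'QueryReused' )    AS reused_cached_plan;
```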

One thing you can do immediately is turn off plan caching altogether, and see what happens. This will cause the optimizer to re-optimize non-bypassed statements in all stored procedures and UDFs, but will avoid inefficient cached plans by not generating them in the first place.
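That is, as a sketch - scope it database-wide or to a single test connection as you prefer:

```
-- Database-wide: never cache execution plans
SET OPTION PUBLIC.max_plans_cached = 0;

-- Or only for the current connection, while testing
SET TEMPORARY OPTION max_plans_cached = 0;
```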

answered 16 Feb '12, 17:14

Glenn Paulley

Many thanks - I'll try that and see what effect it has.

(17 Feb '12, 06:26) Justin Willey

Initial tests with SET OPTION max_plans_cached =0; don't show any significant difference - but I'll check the counters.

(17 Feb '12, 08:18) Justin Willey

My guess is, it's you...

...it's something different you're doing.

At least, it's different from the Foxhound experience when upgrading from 9 to 10 to 11 to 12, and Foxhound has always used many user-defined functions... which are almost exclusively scalar functions that are called from SELECT lists, and almost never contain queries, complex or otherwise.

If you count result-set-returning stored procedures called from FROM clauses in other queries, there are even MORE of them, but the outer FROM clauses tend not to involve complex joins.
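A trivial, made-up example of that pattern, just to be clear about what I mean (the names are invented, not Foxhound's actual code):

```
-- A result-set-returning procedure...
CREATE PROCEDURE my_tables()
RESULT ( table_name VARCHAR(128) )
BEGIN
    SELECT table_name FROM SYS.SYSTAB WHERE creator = USER_ID();
END;

-- ...called from a FROM clause as if it were a table
SELECT t.table_name
  FROM my_tables() AS t
 ORDER BY t.table_name;
```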

No offense intended, but when Glenn Paulley warns small children about the dangers of user-defined functions, your situation may be what he is talking about (he warns them about me as well, but for entirely different reasons :)

Depending on the situation, you may find "divide and conquer" to be a helpful approach, where individual function references are pulled out of larger queries and used to populate temporary tables... when Glenn and his team speak, even though they speak only in Alpha, I get the impression the query engine has an easier time with joins than with function references.
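A rough sketch of what I mean, with invented table and function names (not anyone's real schema):

```
-- Before: the UDF is evaluated inside one large query
SELECT o.order_id,
       customer_discount( o.customer_id ) AS discount
  FROM orders AS o
  JOIN order_lines AS ol ON ol.order_id = o.order_id;

-- After: evaluate the function once per distinct input into a temporary table...
SELECT DISTINCT o.customer_id,
       customer_discount( o.customer_id ) AS discount
  INTO #customer_discounts
  FROM orders AS o;

-- ...then join the temporary table back into the larger query, which now contains no UDF calls
SELECT o.order_id, cd.discount
  FROM orders AS o
  JOIN #customer_discounts AS cd ON cd.customer_id = o.customer_id
  JOIN order_lines AS ol ON ol.order_id = o.order_id;
```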

Sometimes (often? usually?), 1000 lines of straightforward code will run ten times faster than 100 lines of complex code... easier to write, too :)


Having said all THAT, it's more fun dealing with actual code... if you post some, maybe someone will come up with an ACTUAL idea :)

answered 16 Feb '12, 16:37

Breck Carter

Depending on the situation, you may find "divide and conquer" to be a helpful approach, where individual function references are pulled out of larger queries and used to populate temporary tables.

There are certainly a few places where that could be done - so I can investigate.

I realise that there are dangers in user-defined functions - but used with care, they've been remarkably good in the past, v5 - v10!

(17 Feb '12, 06:30) Justin Willey

...I share this "remarkably good" experience - though extended to v5 - v12...:)

(17 Feb '12, 06:38) Volker Barth