
I'm currently doing some comparisons between a database running on v12, v16 and v17 (i.e. the latest builds of each) on a modest Windows 7 64-bit machine with 8 GB RAM and 4 logical processors. The database was rebuilt for each of these versions, and each instance has run some tests on the 64-bit engine (using all logical processors) with identical settings in a (hopefully) almost identical environment.

With the default max_query_tasks option set to 0 (to allow intra-query parallelism), it shows that v16 and v17 make heavy use of parallel operators - property('ExchangeTasksCompleted') lists about 1.5 million tasks - whereas v12 does not (below 100 tasks).
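(For reference, that counter can be read like this - a minimal sketch using the property name mentioned above:)

    -- Server-wide count of completed exchange (parallel) tasks
    SELECT property ( 'ExchangeTasksCompleted' );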

For the tested workload, comparisons have shown that intra-query parallelism unfortunately decreases performance noticeably (the complete tests take about 25 % longer), so with v16/v17 I would need to temporarily set max_query_tasks to 1 for the connections in question to prevent a significant performance degradation compared to v12. (The workload itself is quite "serial" by design, so I am not surprised that parallelism is not helpful here.)
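(That workaround can be sketched as follows - SET TEMPORARY OPTION keeps the change local to the current connection, so other connections remain unaffected:)

    -- Disable intra-query parallelism for the current connection only;
    -- the temporary setting vanishes when the connection closes.
    SET TEMPORARY OPTION max_query_tasks = 1;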

Question: Unless I have overlooked that information, the v16/v17 docs do not state that the usage of intra-query parallelism has been enhanced compared to v12. Is this test result an expected behaviour change?

(I won't be able to offer plans of the queries in question as they are embedded in stored procedures and functions... - It's just a general question about these versions.)

asked 11 Aug '15, 03:20


Volker Barth
accept rate: 32%

> I won't be able to offer plans of the according queries as they are embedded in stored procedures and functions

See Capturing and Storing Plans With GRAPHICAL_PLAN()

(...and now you know how you're going to spend the rest of the day :)

(11 Aug '15, 06:41) Breck Carter
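(A rough sketch of the idea behind that technique - the holding table and sample query here are hypothetical; GRAPHICAL_PLAN() returns the plan of the given statement as an XML string that can be stored for later inspection:)

    -- Hypothetical holding table for captured plans
    CREATE TABLE captured_plans (
        captured_at TIMESTAMP DEFAULT CURRENT TIMESTAMP,
        plan_xml    LONG VARCHAR );

    -- Capture the plan of a sample query as XML for later analysis
    INSERT INTO captured_plans ( plan_xml )
    SELECT GRAPHICAL_PLAN ( 'SELECT count(*) FROM SYSTAB' );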

Oh Breck, you made my day... to quote the summary from your elaborate list:

"3. When it's fast enough, stop."

(11 Aug '15, 06:52) Volker Barth

It might be that it is not the intra-query parallelism but the statistics that have changed, so that the optimiser now prefers the parallel execution.

(12 Aug '15, 10:56) Martin

The statistics are indeed powerful! :)

(12 Aug '15, 11:56) Breck Carter

Hm, I have rebuilt the databases with reloaded statistics, so I guess that should not matter here (unless identical stats would lead to different behaviour in newer versions).

(12 Aug '15, 15:48) Volker Barth

Dear SAP engineers, any official feedback is still highly appreciated :)

(26 Aug '15, 10:06) Volker Barth

As Ani mentioned, you can get the short plans by using request logging (use level "all"). This is the easiest option.
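(A minimal sketch of switching request-level logging on and off from SQL - the log file name is an assumption:)

    -- Direct the request log to a file, then log all statements (with plans)
    CALL sa_server_option ( 'RequestLogFile', 'reqlog.txt' );
    CALL sa_server_option ( 'RequestLogging', 'ALL' );
    -- ... run the workload to be analyzed ...
    CALL sa_server_option ( 'RequestLogging', 'NONE' );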

You can also use diagnostic tracing to get full graphical plans:

dbinit -dba tracinguser,tracingpassword tracing.db
dbsrv## -x tcpip tracing.db

From your own database:

  insert into sa_diagnostic_tracing_level ( scope, identifier,
          trace_type, trace_condition, value, enabled ) 
          values( null, null, 'plans_with_statistics', null, null, 1 );
  attach tracing to "uid=tracinguser;pwd=tracingpassword;eng=tracing";
  --run your procedure
  detach tracing with save;

Now open Sybase Central/SQL Central, choose Application Profiling mode, "Open an analysis file or connect to a tracing database", "Connect to a tracing database", and input the information for your tracing database. You should now see the statements run by your procedures; right-click each statement to see the plans.


answered 27 Aug '15, 10:26


Dan Farrar
accept rate: 80%

Sorry, my problem is the amount of data, as a huge number of requests is made by issuing just a few stored function calls. I had initially tried to generate a RLL with level "all" and got a really huge log file (as mentioned, with more than a million parallel requests - even the corresponding -o console log is several MB in size...). - That made me ask for more general information (which seems not to be available, according to your and Ani's answers).

I guess I will have to try to break things down in order to get something traceable. So a big thanks for the exact hints!

(27 Aug '15, 11:02) Volker Barth

@Volker: If you can add code to your application or stored procedures to "pick and choose" which queries to capture the plans for, you can use the techniques described here... it's what I've used inside Foxhound to capture The Plans From Hell from among gigabytes of SQL requests.

It might be a lot of work to set up, but the filtering can be (almost) perfect.

(27 Aug '15, 15:41) Breck Carter

Yes, that's exactly the problem: as stated, I'm primarily issuing a few stored functions that themselves call other functions - a quite complex hierarchy of calls and queries. It will take a while to get something reproducible... - and believe it or not, my personal throughput is usually best when I choose "set option Volker.max_query_tasks = 1", too...

(28 Aug '15, 01:50) Volker Barth

Ha ha! ...maybe you need an assistant :-)

(28 Aug '15, 07:12) Breck Carter

We need to see the plans and queries to comment on this. Could you please generate at least a request-level log with all plans and all statements for these particular tests?

Thanks! Ani


answered 26 Aug '15, 12:35


Nica _SAP
accept rate: 3%

Sorry for the very late reply - I guess I'm facing some kind of "Premature optimization" problem here, i.e. taking the time to modify the procedures/functions to gather the plans of their relevant queries would take much longer than the execution times of the - rarely used - maintenance tasks themselves...(*) - and that would not even include the time and effort to compare and analyse the plans...

As the database was migrated to v16 a while ago and I'm frequently running ad-hoc queries, I have learnt that temporarily setting max_query_tasks to 1 is often helpful when queries take longer than expected. So intra-query parallelism remains a "grab bag" for me :)

Anyway, thanks again for the helpful hints.

(*) Here's a - possibly - helpful chart:

Is It Worth the Time?


answered 19 Sep '16, 04:20


Volker Barth
accept rate: 32%

edited 19 Sep '16, 04:22
