
For some months we have been migrating our several dozen installations from SQL10 to SQL17. After several complaints from our clients that performance is now worse, we ran some tests, and it looks like SQL17 really is slower than SQL10:

I executed the following block on both SQL10 and SQL17, and SQL17 consistently needs about twice as long as SQL10 for both write_duration and read_duration.

begin
declare i int;
declare pos1 int;
declare pos2 int;
declare pos3 int;
declare pos4 int;
declare trans1 int;
declare trans2 int;
declare mychar varchar(4);
declare write_duration int;
declare read_duration int;
declare starttime datetime;
declare endtime datetime;

if exists (select 1 from systable where table_name = 'mytest') then
drop table mytest;
end if;

create table mytest (sn int, testname varchar (5));

set starttime = now();
set i = 0;

while (i<456976) loop
set pos1 = (i/17576);
set trans1 = mod(i, 17576);
set pos2 = (trans1/676);
set trans2 = mod(trans1,676);
set pos3 = (trans2/26);
set pos4 = mod(trans2,26);
set mychar = char(pos1+65) + char(pos2+65) + char(pos3+65) + char(pos4+65);
insert into mytest (sn, testname) values (i, mychar);
set i = i+1;
end loop;

set endtime = now();
set write_duration = datediff (millisecond, starttime, endtime);
set starttime = now();
select testname into mychar from mytest where testname = 'SAKO';
set endtime = now();
set read_duration = datediff (millisecond, starttime, endtime);

select write_duration, read_duration ;
end

P.S. Both databases were newly created with DBINIT (with the same parameters), and they run on the same machine (and on the same disk).

asked 18 Jun '20, 05:39 by Baron (accept rate: 48%); edited 18 Jun '20, 05:47


In terms of real-world applications, I'm not sure what conclusions can be drawn from these results.

V17 performs much better on a brand-new empty database, and V10 performs better after several test runs, even after the cache is flushed and the engine is restarted.

The V17 performance is consistent over all tests.

The "read_duration" measurement is too small to be useful.

10.0.1.4310 write_duration = 40586 read_duration = 528
17.0.9.4882 write_duration = 15682 read_duration = 83

10.0.1.4310 write_duration = 13027 read_duration = 144
17.0.9.4882 write_duration = 15130 read_duration = 75

10.0.1.4310 write_duration = 8269 read_duration = 47
17.0.9.4882 write_duration = 14868 read_duration = 61

ROLLBACK;
CHECKPOINT;
CALL sa_flush_cache();

10.0.1.4310 write_duration = 7710 read_duration = 49
17.0.9.4882 write_duration = 14493 read_duration = 62

Stop and restart database.

10.0.1.4310 write_duration = 8659 read_duration = 727
17.0.9.4882 write_duration = 14481 read_duration = 62

Delete and dbinit database.

10.0.1.4310 write_duration = 47181 read_duration = 2046
17.0.9.4882 write_duration = 16390 read_duration = 123


Here's the code that was run...

begin
declare i int;
declare pos1 int;
declare pos2 int;
declare pos3 int;
declare pos4 int;
declare trans1 int;
declare trans2 int;
declare mychar varchar(4);
declare write_duration int;
declare read_duration int;
declare starttime datetime;
declare endtime datetime;

if exists (select 1 from systable where table_name = 'mytest') then
drop table mytest;
end if;

create table mytest (sn int, testname varchar (5));

set starttime = now();
set i = 0;

while (i<456976) loop
set pos1 = (i/17576);
set trans1 = mod(i, 17576);
set pos2 = (trans1/676);
set trans2 = mod(trans1,676);
set pos3 = (trans2/26);
set pos4 = mod(trans2,26);
set mychar = char(pos1+65) + char(pos2+65) + char(pos3+65) + char(pos4+65);
insert into mytest (sn, testname) values (i, mychar);
set i = i+1;
end loop;

set endtime = now();
set write_duration = datediff (millisecond, starttime, endtime);
set starttime = now();
select testname into mychar from mytest where testname = 'SAKO';
set endtime = now();
set read_duration = datediff (millisecond, starttime, endtime);

MESSAGE STRING ( @@VERSION, ' write_duration = ', write_duration, ' read_duration = ', read_duration ) TO CONSOLE;

select write_duration, read_duration ;
end;

answered 18 Jun '20, 09:25 by Breck Carter (accept rate: 20%); edited 18 Jun '20, 09:26


...well, what about a test with the latest v17.0.10 build? :) - But I certainly agree that this test does not tell much about real-world applications.

In my earlier tests, intra-query parallelism was sometimes responsible for slower queries compared to older versions (more noticeable with v16/v17 than with v12).

(18 Jun '20, 09:34) Volker Barth
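For reference, one quick way to rule intra-query parallelism in or out is to cap it for the test connection and re-run the timings. A minimal sketch, assuming the max_query_tasks option is available in the server version being tested:

set temporary option max_query_tasks = 1;  -- assumed option: limit the optimizer to a single task, i.e. no intra-query parallelism
-- ... re-run the benchmark block here and compare write_duration / read_duration ...
set temporary option max_query_tasks = 0;  -- back to the default (0 = let the server decide)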

I watched all 12 runs in Foxhound and intra-query parallelism was not involved... although it could be a factor in the real workload.

Yeah, I could upgrade and retest... lemme get right on that :)

(18 Jun '20, 09:45) Breck Carter

First Run:

10.0.1.4213 write_duration = 4212, read_duration = 269

17.0.9.4803 write_duration = 10189, read_duration = 61

Second Run:

10.0.1.4213 write_duration = 3944, read_duration = 40

17.0.9.4803 write_duration = 10766, read_duration = 61

Why should this not tell us anything about real-world applications? In our real-world applications we also have several queries based on unindexed columns.

(18 Jun '20, 11:40) Baron

> we also have several queries based on unindexed columns

Do you know if those actual queries are slower in V17 than V10?

Your test retrieves exactly ONE row... read_duration is statistically meaningless.

It is very difficult to create representative benchmark tests. It is much easier to measure the performance of real applications.

(18 Jun '20, 13:09) Breck Carter
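One way to make the read timing statistically meaningful is to repeat the same unindexed search many times and measure the total. A minimal sketch in the style of the original test, reusing the mytest table it creates (the repeat count of 100 is arbitrary):

begin
declare j int;
declare mychar varchar(4);
declare starttime datetime;
declare endtime datetime;
set starttime = now();
set j = 0;
while (j < 100) loop
-- same single-row, unindexed search as in the original test
select testname into mychar from mytest where testname = 'SAKO';
set j = j + 1;
end loop;
set endtime = now();
-- total time for 100 searches, in milliseconds
select datediff (millisecond, starttime, endtime) as read_duration_100;
end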

> First Run: ... Second Run: ...

Show us the dbsrv10.exe and dbsrv17.exe command lines.

What client software did you use to run the tests? dbisql?

(18 Jun '20, 13:13) Breck Carter

> Why should this not tell about the real-world applications?

Real-world applications don't (very often) consist of a single connection that inserts 400,000+ rows without committing.

(20 Jun '20, 08:54) Breck Carter

Also, rows usually contain more than one int and 5 chars.

(20 Jun '20, 10:20) Volker Barth

You are right, but the write_duration is not what interests me; I care about the read_duration.

I only included the write process in my example in order to have a complete code snippet!

(21 Jun '20, 16:16) Baron

But does it make a difference if the row has a different combination of columns?

Our main problem (as reported by our clients) is that searching in our ARTICLES tables takes too long (for example, when they search on an unindexed column), and for this reason I wrote this example, which reflects our real problem.

(21 Jun '20, 16:19) Baron
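As an aside: if the clients keep searching on one particular ARTICLES column, the usual first remedy is an index on that column. A hypothetical sketch (the column name artname is invented for illustration):

-- hypothetical: index the column that is searched most often, then re-check the plan
create index ix_articles_artname on ARTICLES ( artname );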

I would recommend comparing query plans for those typical queries between v10 and v17. FWIW, "simple queries" like the one in your sample will usually "bypass" the query optimizer.

(22 Jun '20, 06:15) Volker Barth
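A minimal sketch of how such a comparison could be done from dbisql, assuming the EXPLANATION and GRAPHICAL_PLAN functions behave the same in v10 and v17 (run the same statement against both servers and compare the output):

-- short text plan for the problem query
select explanation( 'select testname from mytest where testname = ''SAKO''' );
-- full plan as XML; can be saved and opened in the dbisql plan viewer
select graphical_plan( 'select testname from mytest where testname = ''SAKO''' );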

The read_duration time is meaningless because (1) it is measuring the time spent for a SINGLE read, (2) the read is performed immediately after all the rows have been LOADED INTO THE CACHE, and (3) no attempt is made to exclude the SEPARATE OVERHEAD spent executing the code that calculates the read_duration time.

How do you EVEN KNOW where the problem lies? Do you have any proof that "queries based on unindexed columns" are the problem?... how do you know that the bottleneck isn't SOMEWHERE ELSE?

Have you run any tests to determine where the bottlenecks actually lie, in your actual application?

It is very difficult to create meaningful performance benchmark applications, and it is NEVER the first place to start when trying to improve application performance... studying the application is the best place to start.

(23 Jun '20, 10:23) Breck Carter
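One way to see where the time actually goes in the real application is request-level logging. A minimal sketch using the v17 option names (older versions used different names for these sa_server_option settings, so treat the exact values as an assumption; the log file name is arbitrary):

call sa_server_option( 'RequestLogFile', 'request_log.txt' );  -- where to write the log
call sa_server_option( 'RequestLogging', 'SQL+hostvars' );     -- log statements plus host variables
-- ... exercise the slow ARTICLES searches from the application ...
call sa_server_option( 'RequestLogging', 'NONE' );             -- turn logging off again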

I am sorry for my false suspicion.

We are using NativeDB (which comes with the option WantRowCounts set to True by default), and this was the bottleneck!!
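For anyone hitting something similar without NativeDB: SQL Anywhere itself has a row_counts option with a comparable effect; when it is on, the server works out the full row count when a cursor is opened, which can mean executing the query twice. Whether WantRowCounts in NativeDB maps onto this option is an assumption, but a minimal server-side precaution looks like this:

set temporary option row_counts = 'Off';  -- do not force the server to count all rows up front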


answered 23 Jun '20, 14:53 by Baron (accept rate: 48%); converted to an answer 23 Jun '20, 16:26 by Breck Carter


Thank you for posting your comment!

I converted your comment to an answer, so that other people might benefit.

(23 Jun '20, 16:28) Breck Carter

Experience shows that most workloads run faster with V17 than V10.

IMO your benchmark test proves little since in fact it sometimes runs faster on V17 and other times faster on V10... and the mix of instructions bears no relationship to a real workload.

With every RDBMS software upgrade (all versions of all products from all vendors) comes the very real possibility that some queries will be slower because of changes to the query optimizer and other components. It's up to you to find those bottlenecks and fix them... Foxhound can help with that.

...or, you could go back to running V10 :)


answered 18 Jun '20, 09:37 by Breck Carter (accept rate: 20%)
