
Having started with Watcom SQL 5.0, I have worked with SQL Anywhere for a number of years, building bigger and better servers with faster arrays of spinning rust, all slowly giving increased performance.

Then came SSDs, whose massively parallel IO should have had a major impact on the throughput of the database, especially for VALIDATE or other random-IO workloads. And yet this has not materialised.

In 2014, Pedram Ghodsnia, Anisoara Nica and Ivan Bowman published the interesting paper “Parallel I/O Aware Query Optimization”, in which this subject was analysed and a proposal for QDTT was put forward. Where is the ability in 16 or 17 to implement something of this nature?

I have a customer who is trialling an SSD array on 16Gb FC; SQLA is treating it as a single disk and throughput is therefore terrible!

Am I the only one who thinks that SQLA's disk throughput/IO has fallen behind what SSD/hybrid disks can deliver, and that performance is suffering as a result?


asked 20 Sep '17, 05:31 TimScotland


edited 20 Sep '17, 06:03 Volker Barth

Can't answer this very interesting question - just want to ask whether you have calibrated the server (and if so, what was the impact)?

(20 Sep '17, 06:02) Volker Barth

The trouble with CALIBRATE (in 16) is that it doesn't seem to have been updated to work with an SSD. Try it on a workstation with a single SSD and you get set at 1. That is nonsense! While things like elevator seeking are no longer required on solid state, the fact that a modern SSD will treat random IO and sequential IO as nearly the same implies that, from a grouping point of view, the value should be in the 1,000s.

Oh, and naturally, as it starts at 1 it just stays there; that is why the QDTT is so interesting.

(20 Sep '17, 06:15) TimScotland
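For anyone wanting to reproduce this, a minimal sketch of the check described above, run from Interactive SQL (statement syntax per the v16 docs; run calibration while the server is otherwise quiet):

```sql
-- Recalibrate the server's model of the disk subsystem.
ALTER DATABASE CALIBRATE;

-- See what the server now believes: on a workstation with a
-- single SSD this comes back as 1, as described above.
SELECT DB_PROPERTY( 'IOParallelism' );
```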

Note that VALIDATE in v16 shouldn't be doing much random IO. We scan each table and each index once in as sequential a manner as possible.

Where SQLA does use random IO, the SSD will go much faster. If you are not seeing improvements in your performance tests, perhaps you are not doing (much) random IO. Are you running your tests after much of the database is cached? Have you monitored disk statistics while your performance tests are running?

(20 Sep '17, 10:12) John Smirnios
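For reference, a sketch of the statements under discussion, run from Interactive SQL (the table name below is a placeholder):

```sql
-- Full validation: each table and each index is scanned once,
-- as sequentially as possible (per the comment above).
VALIDATE DATABASE;

-- Per-table validation (hypothetical table name).
VALIDATE TABLE "dba"."my_table";
```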

Looking at DISK IO with Perfmon:

Baseline I can copy a file from directory 1 to directory 3 at about 550MB/s

Backup of database to a directory on the same volume (disk partition) ~225MB/s

If I run a validate on a newly loaded DB (nothing cached), the peak read of the database is 55MB/s.

Looking at the %idle at the same time shows that the disk is 95% idle, so we can deduce that it is not stressed. Nor are the CPUs (8 of them) maxed out.

If you can cast your mind back to when we were young :) In NetWare 2.x we had the elevator seeking queue, designed to ensure that requests were reordered so that a single sweep of the disk's arm collected data in a sequential manner. Originally set at 8, it eventually ended up in NW6.5 at 4000.

This was later discovered to have another benefit: grouping requests in bundles (in the same style as the DTT) allowed RAID devices to return the data more effectively. Testing Linux in a similar manner (I use SUSE) shows again that you can massively increase the concurrent requests to an SSD.

My point is this: we know that QDTT was being looked at and tested in 2014 when the paper was written and presented, and we know the advantages that were demonstrated in the paper... So here we are, 3 years later, and where is the ability to properly use an SSD? Only in 17? Only in a future version? Or a switch that says....

(20 Sep '17, 10:41) TimScotland
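An alternative to Perfmon here is to sample the server's own IO counters around the run; a rough sketch (the DiskRead and PageSize property names are from the SQLA docs; multiplying the read delta by the page size only approximates bytes read from disk):

```sql
-- Snapshot the read counter before the run.
SELECT DB_PROPERTY( 'DiskRead' ) AS reads_before,
       DB_PROPERTY( 'PageSize' ) AS page_size;

-- The operation being measured.
VALIDATE DATABASE;

-- Snapshot again: (reads_after - reads_before) * page_size
-- approximates the bytes read from disk during the validate.
SELECT DB_PROPERTY( 'DiskRead' ) AS reads_after;
```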

Hm, as stated, I can't tell - but version 17 claims to offer several performance improvements, so have you tried that out?

Of course I would also like to know whether such a research result has made its way into the product...

(21 Sep '17, 03:29) Volker Barth

That's that blasted GoFaster=YES switch I can never find in the docs.

(21 Sep '17, 06:38) Justin Willey

@John, I hate to be picky here but... :)

You read the tables and indexes in a sequential manner, but the blocks used in the storage will be anything but sequential. Any server that has run for a week in a production environment will naturally have extents and additions, and as we know those will be scattered (especially by NTFS) all over the disk. This is where an SSD "should" be so efficient, and yet the performance difference is not seen in the real world.

There must have been some work on this by now; if not, there has to be some very shortly, or it will impact deployment and usage all the more.

(21 Sep '17, 06:47) TimScotland

Yes, things will get scattered but table pages will be read in order of increasing page number. Table pages are kept in blocks so we are able to read multiple per IO and we always have async read-ahead for such scans. For indexes, we primarily scan just along the leaf pages. Validating long blobs will still encounter random IO.

Fundamentally, the behaviour you are encountering doesn't match my expectations or experience although it has been a long time since I have tried it.

You might try a VALIDATE DATABASE statement for comparison. It is a true sequential scan of the file with lots of async read-ahead.

(21 Sep '17, 08:04) John Smirnios
> You might try a VALIDATE DATABASE statement for comparison.

I think that is what TimScotland says he is doing in his comment above, where he reports throughputs of 550 / 225 / 55 MB/s respectively for the OS disk copy, SQLA backup and validation operations. Or am I misunderstanding what you mean?

(21 Sep '17, 08:32) Justin Willey

He just wrote "VALIDATE" which could have many meanings if the rest of the clause is not specified. I just tried a "VALIDATE DATABASE" on a 1GB db on an SSD and got 248MB/s.

(21 Sep '17, 10:08) John Smirnios

I was using VALIDATE DATABASE via SQL Central with the Express option. Using the same thing each time to ensure consistency.

If you got 248MB/s on the VALIDATE, please would you do a database backup and the file copy just to see? Also what version/OS?

My current plan is to try openSUSE Leap (latest) with a version of 16 and 17 to see just how things change.

(21 Sep '17, 10:22) TimScotland

Also, when looking at your DB Extended Information, what do you have listed for:

Drive Bus:

Drive Model:

Drive Type:

My test system that I am using shows:
(21 Sep '17, 11:30) TimScotland
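Those fields should also be queryable directly, for comparing systems without SQL Central; a sketch (the 'DriveBus' and 'DriveModel' property names are assumptions based on what SQL Central displays; 'system' refers to the system dbspace):

```sql
-- Drive characteristics as detected by the server for the
-- system dbspace (property names assumed from the SQL Central UI).
SELECT DB_EXTENDED_PROPERTY( 'DriveBus',   'system' ) AS drive_bus,
       DB_EXTENDED_PROPERTY( 'DriveModel', 'system' ) AS drive_model,
       DB_EXTENDED_PROPERTY( 'DriveType',  'system' ) AS drive_type;
```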

Backup was 469MB/s

SUSE 12.1, version is mainline (pre 17.0.9)

The Drive property functions don't return anything useful (perhaps because it is in a volume group). The model, acquired from other Linux commands, is MTFDDAK1T0MBF.

(21 Sep '17, 12:07) John Smirnios

So my understanding is that CALIBRATE DATABASE will reference these 3 items as part of the calculation, and therefore they affect what it might provide in terms of IO. Just for my own curiosity: is your IOParallelism equal to 1, as mine is, or is this where you get some of the uplift?

If you are using SLES are you coming to SUSEcon next week?

(21 Sep '17, 12:28) TimScotland

I don't know how you have tested the disk performance, but there is an assumption here: if you work with files, they can be cached by the OS. Maybe that is why some disk operations are blazing fast - because your data was in the cache?

(21 Sep '17, 18:20) Vlad

My IOParallelism property returns '1'. I just dbinit'd a database & grew the system dbspace. I had not done calibration. So I did calibration, the property still returned '1'.

I do not think the drive properties as returned by db_properties are used in calibration. Also, there are other disk characteristics determined by the server beyond ioparallelism. See dbo.sa_get_dtt() and dbo.sa_get_dtt_groupreads(). Also, an SSD is "detected" via a disk passing certain IO performance tests.

I'm not sure that the IO parallelism and calibration metrics affect VALIDATE DATABASE, but IOParallelism does seem to be referenced in the backup code. I'm sure they affect query optimization and the execution plans that are chosen.

Sorry, I won't be at SUSEcon. SLES is just what was running on my development box (SLES is the approved distro for running Hana).

(21 Sep '17, 22:15) John Smirnios
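The procedures mentioned above can be queried directly to inspect the disk-transfer-time model the server is actually using; a minimal sketch (procedure names as given in the comment; the shape of the results may vary by version):

```sql
-- The server's disk transfer time (DTT) tables.
SELECT * FROM dbo.sa_get_dtt();
SELECT * FROM dbo.sa_get_dtt_groupreads();

-- The parallelism figure discussed in this thread.
SELECT DB_PROPERTY( 'IOParallelism' ) AS io_parallelism;
```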

question was seen: 1,888 times

last updated: 21 Sep '17, 22:15