
Proven recommendations currently coexist with the reality of new hardware.

For example: "Performance can suffer if there is excessive fragmentation of database files". Or: "Calibrate the specified dbspace ...; Calibrate the parallel I/O capabilities of devices ..."

On the other hand, suppose the following tiering policy is defined: x% of the data on this disk resides on flash drives; the remaining (100 - x)% is split across a pool of disks.

Can defragmentation be done at the Windows OS level? Will such a defragmentation improve performance?

Is calibrating the database ( http://sqlanywhere-forum.sap.com/questions/10243/calibration-best-practices ) dangerous? Is it possible to calibrate if x is not a fixed value, as in the case of "auto-tier"?

Is a "separate physical drive" really no longer important on RAID, SAN and/or NAS ( http://sqlanywhere-forum.sap.com/questions/1132/is-separate-physical-drive-no-longer-important-on-raid-san-andor-nas )?

Is there an [alpha, beta ...] document that answers these questions?

P.S. I cannot answer the question of whether it is dangerous to use SQL Anywhere 12 (or 16) in a virtual environment ( http://sqlanywhere-forum.sap.com/questions/4663/virtualization-sqlanywhere ). In my opinion, SQL Anywhere 12 is "tuned" for a physical PC and may suffer serious performance degradation in a virtual environment. Is that so?

asked 15 Jan '13, 03:51


Ilia63

edited 16 Jan '13, 03:54


Volker Barth


Can defragmentation be done at the Windows OS level? Will such a defragmentation improve performance?

To start, file fragmentation on the file system is always the responsibility of the OS. SQL Anywhere will only report it to you for your convenience - we can't actually control that in any manner ourselves (as we merely ask the operating system for "more space" on our file when growing the database - the OS decides how/where that free space comes from, or denies the request because it can't find any free space).

x% of the data on this disk resides on flash drives; (100 - x)% is split across a pool of disks.

The impact of flash drives on database technology paradigms is a very interesting and arguably "still in research" topic - I won't pretend to know or comment upon the latest details in the field. However, it is certain that since access times are significantly shorter for these media types, random access speeds are not as big a performance hit as they once were on traditional spinning drives, and thus the accumulated effect of fragmented files on an OS isn't as great a concern, in a general sense. Another concern, though, is wear-levelling, as Wikipedia correctly points out, so 'defragging often' is not recommended. What would be recommended is pre-allocating blank database space with 'ALTER DBSPACE ADD' to avoid OS-level fragmentation in the first place, allowing contiguous access to the database file (and saving on later requests to the OS to extend the database file during operation).
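As a sketch of that pre-allocation advice (the dbspace name and sizes below are illustrative; SYSTEM is the default main dbspace):

```sql
-- Pre-grow the main dbspace by 2 GB in one request, so the OS can
-- allocate the space contiguously now rather than piecemeal later:
ALTER DBSPACE system ADD 2 GB;

-- The same works for a named secondary dbspace, if one exists:
ALTER DBSPACE my_data ADD 500 MB;
```

Doing this during a maintenance window, before the file becomes fragmented, is the point - the statement has no effect on existing fragmentation.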

Notably, SQL Anywhere always writes to the OS file system in pages (so 4K, 8K, etc - depending on your database initialization settings - the default is 4K in recent versions of SQL Anywhere), so you will generally want to optimize your drive(s)/file system for that read/write size (possibly also examining your file system page size). Running a database performance insert test (see 'instest' in the SQL Anywhere samples) to test different write speeds of different drive configurations may be of benefit.
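The page size is fixed when the database is initialized, so this is a decision to make up front. A minimal sketch, assuming you want 8K pages instead of the 4K default (the file path is illustrative):

```sql
-- Create a database with an 8K page size; this cannot be changed later
-- without rebuilding the database (unload/reload):
CREATE DATABASE 'c:\\data\\mydb.db'
    PAGE SIZE 8192;
```

You would then align the file system allocation unit (and, where applicable, the RAID stripe size) with that page size.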

As for the configuration options on the database: yes, if you're running a RAID/mixed-media setup it would still be a good idea to run ALTER DATABASE CALIBRATE to configure the number of writers for the database. You would also want to leave 'AUTO TUNE WRITERS ON' (the default setting) when performing a BACKUP.
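For reference, those statements look like this (the backup directory is illustrative; the AUTO TUNE WRITERS clause is shown explicitly even though ON is the default):

```sql
-- Re-measure the I/O characteristics of the dbspaces on their current devices:
ALTER DATABASE CALIBRATE;

-- If the new calibration turns out to hurt performance, revert it:
ALTER DATABASE RESTORE DEFAULT CALIBRATION;

-- Let the server adjust the number of backup writers on the fly:
BACKUP DATABASE DIRECTORY 'd:\\backup'
    AUTO TUNE WRITERS ON;
```

Having RESTORE DEFAULT CALIBRATION available is what makes calibrating a low-risk experiment rather than a dangerous one.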

Whatever is implemented at the file system level, the file system as a whole must still adhere to the write-ordering and write-flushing guarantees described generally in the NAS whitepaper, and specifically in our Windows/Linux I/O requirements whitepaper.


I cannot answer the question of whether it is dangerous to use SQL Anywhere 12 (or 16) in a virtual environment

Virtualization is another discussion entirely. The current Sybase policy is not to guarantee performance on virtualization systems, with good reason - there are many variables that "can/may/probably will" impact performance on a virtualized system on shared hardware, and "predicting" these external variables from a performance-guarantee standpoint is difficult at best. Finally, if presented with a performance issue on a virtualized system in technical support, moving the system to its own dedicated resources may be one of the troubleshooting steps we suggest in order to isolate the issue.


answered 15 Jan '13, 14:47


Jeff Albion

edited 15 Jan '13, 14:47

As to the SSD discussion - here are some blog articles from Glenn's (old) blog:

(16 Jan '13, 03:52) Volker Barth

We have some clients running SQL Anywhere in production in a virtual environment, as we do for our development, so I can state that it's feasible. But performance depends on a bunch of settings and resources for the VM and the underlying hardware and software architecture, which have to be carefully considered.

IMHO it's reasonable that Sybase won't guarantee SQL Anywhere working in VM environments, even if in many cases it will run smoothly.

(16 Jan '13, 04:18) Reimer Pods

We support about 40 sites, one of which runs Windows Hyper-V, several run VMware, and one runs another Citrix virtualization server environment (not VMware, but I can't remember all the names associated with it). We have found reliability to be excellent over several years now. Performance varies as you might expect, based mostly on "how cheap" they try to run their environments. The Windows Hyper-V site runs slower than when they used a standard desktop PC for a database server, but they killed the performance of EVERY server they had by combining them all on one under-resourced box. The other servers all perform well, mostly faster than the older servers they replaced, and certainly at a lower overall cost for their server environments. I am certainly no performance guru, but it appears to me that SQL Anywhere's automatic cache adjustments and other automatic tuning features are helping us deliver excellent performance for our clients at a lower cost than the other database servers I am exposed to at client sites.

(16 Jan '13, 10:39) Bill Aumen


last updated: 16 Jan '13, 10:42