Currently, proven recommendations coexist with the new reality of hardware.
For example: "Performance can suffer if there is excessive fragmentation of database files". Or: "Calibrate the specified dbspace ...; Calibrate the parallel I/O capabilities of devices ..."
On the other hand, suppose the following tiering policy is defined: - x% of the data on a given disk resides on flash drives; - the remaining (100 - x)% is split across a pool of spinning disks.
Can defragmentation be done at the Windows OS level? Will such defragmentation improve performance?
Is calibrating the database ( http://sqlanywhere-forum.sap.com/questions/10243/calibration-best-practices ) dangerous? Is it possible to calibrate at all if x is not a fixed value, as in the case of "auto-tiering"?
Is a "separate physical drive" really no longer important on RAID, SAN and/or NAS ( http://sqlanywhere-forum.sap.com/questions/1132/is-separate-physical-drive-no-longer-important-on-raid-san-andor-nas )?
Is there a document (even an alpha or beta version) that answers these questions?
P.S. I cannot answer the question of whether it is dangerous to use SQL Anywhere 12 (or 16) in a virtual environment ( http://sqlanywhere-forum.sap.com/questions/4663/virtualization-sqlanywhere ). In my opinion, SQL Anywhere 12 is "tuned" for a physical PC, and in a virtual environment it could suffer serious performance degradation. Is that so?
To start, file fragmentation on the file system is always the responsibility of the OS. SQL Anywhere will only report it to you for your convenience - we can't actually control that in any manner ourselves (as we merely ask the operating system for "more space" on our file when growing the database - the OS decides how/where that free space comes from, or denies the request because it can't find any free space).
The impact of flash drives on database technology paradigms is a very interesting and arguably "still in research" topic - I won't pretend to know or comment upon the latest details in the field. However, it is certain that since access times are significantly shorter for these media types, random access speeds are not as big a performance hit as they once were on traditional spinning drives, and thus the accumulated effect of fragmented files on an OS isn't as great a concern, in a general sense. Another concern, though, is wear-levelling, as Wikipedia correctly points out, so 'defragging often' is not recommended. What would be recommended is pre-allocating blank database space with 'ALTER DBSPACE ADD' to avoid OS-level fragmentation in the first place, and allowing contiguous access for the database file (to save on requests later to the OS to extend the database file during operation).
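As a sketch, the pre-allocation mentioned above might look like the following (the dbspace name and size are illustrative; adjust to your own database layout):

```sql
-- Pre-grow the system dbspace by 2 GB now, so the OS can allocate
-- the space in one (ideally contiguous) request, instead of the
-- database file growing in many small increments during operation.
ALTER DBSPACE system ADD 2 GB;
```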
Notably, SQL Anywhere always writes to the OS file system in pages (so 4K, 8K, etc - depending on your database initialization settings - the default is 4K in recent versions of SQL Anywhere), so you will generally want to optimize your drive(s)/file system for that read/write size (possibly also examining your file system page size). Running a database performance insert test (see 'instest' in the SQL Anywhere samples) to test different write speeds of different drive configurations may be of benefit.
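To check what page size a given database was initialized with (the page size is fixed at initialization time, e.g. via the dbinit utility's -p option), you can query the database property:

```sql
-- Returns the database page size in bytes, e.g. 4096:
SELECT DB_PROPERTY( 'PageSize' );
```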
As for the configuration options on the database, yes, if you're running a RAID/mixed-media setup it would still be a good idea to run ALTER DATABASE CALIBRATE to configure the number of writers for the database. You would also want to leave 'AUTO TUNE WRITERS ON' (the default setting) if performing a BACKUP.
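A minimal sketch of both statements (the backup directory is hypothetical; run the calibration during a quiet period, as it performs test I/O against the dbspaces):

```sql
-- Measure the I/O characteristics of the dbspaces and adjust the
-- database's cost model / writer configuration accordingly:
ALTER DATABASE CALIBRATE;

-- If the measured values turn out worse, the defaults can be restored:
ALTER DATABASE RESTORE DEFAULT CALIBRATION;

-- During a backup, let the server tune the number of writers itself
-- (AUTO TUNE WRITERS ON is the default):
BACKUP DATABASE DIRECTORY 'c:\backup' AUTO TUNE WRITERS ON;
```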
Whatever is implemented at the file system level, the file system as a whole must still adhere to the write-ordering and write-flushing requirements described in general terms in the NAS whitepaper, and in specific terms in our Windows/Linux I/O requirements whitepaper.
Virtualization is another discussion entirely. The current Sybase policy is to not guarantee performance on virtualization systems, with good reason - there are many variables that "can/may/probably will" impact performance on a virtualized system on shared hardware. "Predicting" these external variables from a performance-guarantee standpoint is difficult at best. Finally, if presented with a performance problem on a virtualized system in technical support, moving the system to its own dedicated resources may be one of the troubleshooting steps we suggest in order to isolate the issue.