We are running an internal management system application on SQL Anywhere, with about 160 connections on average. However, the client application creates two connections per user, one of which is rarely used; it simply polls the database at an interval for "unread messages" using a simple, inexpensive query. So there are only about 80 actual users, and the majority of them are idle. But how idle?
I have monitored "Active Requests" in the performance monitor in Sybase Central, and throughout a typical day the average number of active requests is 4.5. Spikes are rare and reach around 20 or so. Because of this, I set the -gn option to 20, which may even be too high; the vendor initially recommended 80, which is far too high and adversely affected performance. When we switched it to 20, people were asking why the system was so fast!
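The effect of lowering -gn can be sketched with some back-of-the-envelope arithmetic, assuming the version 9 behavior described in the answer below (addressable query memory divided evenly by the multiprogramming level). The 1600MB figure is an approximation of the ~1.6GB of usable 32-bit address space, not an exact server internal:

```python
# Hedged sketch: per-request query memory under different -gn settings,
# assuming SQL Anywhere 9 divides the ~1.6GB addressable space evenly
# by the multiprogramming level. The 1600MB figure is approximate.

ADDRESSABLE_MB = 1600  # roughly 1.6GB of usable 32-bit address space

def quota_mb(gn: int) -> float:
    """Approximate query memory quota per active request."""
    return ADDRESSABLE_MB / gn

print(quota_mb(80))  # vendor's recommendation: 20.0 MB per request
print(quota_mb(20))  # our setting: 80.0 MB per request
```

Under this model, the vendor's -gn 80 would leave each request only a quarter of the query memory that -gn 20 does, which is consistent with the slowdown we saw.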
That gives you an idea of the user load we have. As for the type of load, it is fairly heavy. The database is highly normalized (600 tables/views), and even retrieving a small amount of information often requires many joins. There is a lot of grouping and ordering as well. On top of that, there are always users requesting binary data (mostly large PDFs) from the database, and many users creating PDFs on the client side and inserting them into the database. A lot of the PDF generation involves an expensive query, client-side generation of the PDF from the result, and then insertion of the PDF back into the database, sometimes repeated hundreds of times in one sitting.
I have tried many monitoring tools, including Sybase Central and manually obtaining database properties from iSQL, but the best by far, in my opinion, is Foxhound from RisingRoad. It lets you see everything at a glance and keeps a nice history.
One concerning issue is the size of the temporary file. Obviously (or maybe not so obviously), when someone runs an expensive query on the database, such as a search, and forgets to enter any search criteria, the temporary file spikes to 500MB or more. That is an application issue, in my opinion, so I am not too worried about it. What I am worried about is normal use of the system: expensive but reasonable queries. The temporary file starts at almost nothing at the beginning of the day and slowly grows to ~250MB by the end of the day.
But that's not the most concerning thing. The most concerning thing to me is watching the disk I/O in the Windows Performance Monitor. The drive holding the temporary file often gets more I/O than the drives holding the database! Essentially, the I/O on that drive is high all day long, as if every query, or close to every query, requires the temporary file to complete. This doesn't seem right to me; it seems like about the best way to kill database performance if every request incurs some kind of disk I/O.
I have turned "RememberLastStatement" on in the database, so I can see the queries everyone is running. I can run them myself and see that, yes, they are indeed pretty complex. But should they require disk I/O when the cache is set to 60GB?
In reading Glenn Paulley's blog post ( http://iablog.sybase.com/paulley/2008/04/dynamic-memory-management/ ) about how SQL Anywhere manages memory, I am inclined to think the issue is a lack of addressable memory for intensive queries. The solution would be to upgrade to a 64-bit OS so SQL Anywhere has more addressable "working" memory.
Is this reasonable? Does anyone have similar experience? What other things could be done to investigate the temp file I/O issue?
Database: SQL Anywhere 9.0.2
OS: Windows Server 2003 R2 32-bit
CPU: 4x Intel Xeon Dual Core 3.00GHz
RAM: 64GB (60GB Allocated to SQL Anywhere cache)
HDD: DB on RAID5 SCSI-320 (15k RPM - 4 Disks), TempFile/TranLog on RAID5 SCSI-320 (15k RPM - 4 Disks)
I think many developers misunderstand the mechanics of AWE and how it applies to 32-bit SQL Anywhere servers.
First, for the discussion below, assume the default settings.
AWE permits the server to utilize machine memory above and beyond the ~1.6GB address space of the process (and a SQL Anywhere server is a user process like any other). However, there are two significant characteristics of memory pages in bank-switched memory with SQL Anywhere:
On a machine with 60GB of RAM, it is entirely possible that the entire database will fit within the AWE cache. Consequently, once the cache is populated with the database's working set, read requests will be satisfied entirely from the cache, and read I/O to the database file will become negligible.
However, for complex queries the server must use query memory to compute the answer to each SQL request, and this query memory must be addressable within the 1.6GB address space. Moreover, with version 9, SQL Anywhere treats query memory conservatively and divides the amount of memory available (1.6GB) by the multiprogramming level (20) to come up with a quota of query memory that each active request can use. If much of the workload is executing complex joins, grouping, and sorting it is entirely possible that the memory requirements for a request will not fit in the 1.6GB/20 = 80MB of query memory quota. This will cause sorts to flush pages to disk, and will cause hash joins to partition the input to disk so that the join can be computed piecemeal. The disk that is used for this query processing is the SQL Anywhere temporary file.
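The quota/spill decision just described can be sketched as follows. This is an illustrative model, not actual server internals; the function name and the sample memory estimates are hypothetical:

```python
# Illustrative sketch of the quota/spill logic: a memory-intensive
# operator (sort, hash join) whose estimated memory need exceeds its
# query memory quota must spool or partition its input to the
# temporary file. Names and sample numbers here are hypothetical.

QUOTA_MB = 1600 / 20  # ~1.6GB addressable / multiprogramming level of 20

def spills_to_temp_file(estimated_mb: float, quota_mb: float = QUOTA_MB) -> bool:
    """True if the operator is expected to overflow its quota and
    fall back to the temporary file."""
    return estimated_mb > quota_mb

# A hash join whose build side needs 300MB cannot fit in an 80MB quota,
# so it partitions to the temporary file; a 50MB sort stays in memory.
print(spills_to_temp_file(300))  # True
print(spills_to_temp_file(50))   # False
```

This is why the temporary file drive can stay busy all day even with a 60GB cache: the AWE cache absorbs page reads, but query working memory is confined to the 32-bit address space, and any overflow lands on the temporary file.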
Graphical plans contain counters that illustrate whether memory-intensive operators had to spool or partition their input to disk. If you are seeing this situation frequently, I would suggest that you consider migrating to a 64-bit server (on a 64-bit OS), which effectively eliminates the 2GB address space problem.
answered 20 Nov '09, 15:50
I don't know about other folks, but I am waiting for iAnywhere Engineering to respond.
Judging by the up-votes and favorites, other people share my opinion that this is a wonderfully well written and thoughtful question (and not just because of the Foxhound shout-out :)
Or maybe they just think, like I do, that it's a really important topic. Huge.
So, in an effort to meet the requirement "to allow time for the community itself to provide the answer, rather than jumping right in with the first answer", I am posting the first response.
Here goes: Yes, other people do have similar experiences.
OK, the minimum requirements have now been met, let's carry on...
answered 19 Nov '09, 14:06
A day has passed, let's try a real answer, or reasonable facsimile thereof (don't mark this as the "answer", it isn't)...
Have you tried the "Log Expensive Queries" facility? You are using a 9.0.2 build of SQL Anywhere, which means you have it.
(In Versions 10 and 11, Log Expensive Queries was merged into "Advanced application profiling using diagnostic tracing". I've never warmed up to the new implementation, but maybe I should try the 11.0.1 incarnation to see if it's more likable.)
Anyway, after you have picked the low-hanging fruit of obvious bottlenecks, it sometimes happens that the next step involves a large number of queries. I don't know of any other way than to start with one and study it, then move on to the next. Along the way you may find some change that will affect them all, or at least many of them.
Version 9 also contains an early version of the Index Consultant (the current version is described here)... it is well worth the effort.
If you are doomed to have an active temporary file forever, defragmentation may help; e.g., run Diskeeper, or maybe set up contig.exe as a Scheduled Task. See Defragmenting The Temporary File.
And, of course, The Ultimate Suggestion is always "upgrade to a later version"... Version 10 was a huge leap forward, and Version 11 continued it.
OK, there... now will someone who actually knows how the temporary file works, please chime in?
answered 20 Nov '09, 13:44