We are running an internal management system application on SQL Anywhere, with about 160 connections on average. However, the client application creates 2 connections, one of which is not used often; it's basically used to poll the database at an interval for "unread messages" using a simple, inexpensive query. So there are really only about 80 actual users, and a majority of them are idle.

But how idle? I have monitored "Active Requests" in the performance monitor in Sybase Central, and throughout a typical day the average Active Requests is 4.5. Spikes are rare and reach around 20 or so. Because of this, I set the -gn option to 20, which may even be too high; the vendor initially recommended 80, which is far too high and adversely affected performance. When we switched it to 20, people were asking why the system was so fast! That gives you an idea of the type of user load we have.

As far as the type of load goes, it is fairly heavy. The database is highly normalized (600 tables/views), and retrieving even a small amount of information often requires many joins. There is a lot of grouping and ordering as well. On top of that, there are always users requesting binary data (mostly large PDFs) from the database, and many users creating PDFs on the client side and inserting them into the database. A lot of the PDF generation requires an expensive query, the client generating the PDF result, and then inserting the PDF back into the database, repeated sometimes hundreds of times in one sitting.

I have tried many monitoring tools, including Sybase Central and manually obtaining database properties from iSQL, but the best by far, in my opinion, is Foxhound from RisingRoad. It lets you see everything at a glance and keeps a nice history. One concerning issue is the size of the temporary file.
Obviously (or maybe not so obviously), when someone does an expensive query on the database, such as a search, and forgets to enter any search criteria, the temporary file spikes to 500MB or more. This is an issue with the application, in my opinion, so I am not too worried about that. What I am worried about is the normal use of the system: expensive but reasonable queries. The temporary file starts at almost nothing at the beginning of the day and slowly grows to ~250MB by the end of the day.

But that's not the most concerning thing. The most concerning thing to me is the disk I/O shown in the Windows Performance Monitor: the drive with the temporary file is often getting more I/O than the drive with the database! Essentially, the I/O on that drive is high all day long, as if every (or nearly every) query requires the temporary file to complete. This doesn't seem right to me; it seems like about the best way to kill the performance of the database, if every request requires some kind of disk I/O.

I have turned RememberLastStatement on in the database, so I can see the queries everyone is running. I can run them myself and see that, yes, they are indeed pretty complex. But should they require disk I/O when the cache is set to 60GB? Reading Glenn Paulley's blog post ( http://iablog.sybase.com/paulley/2008/04/dynamic-memory-management/ ) about how SQL Anywhere manages memory, I am inclined to think the issue is a lack of addressable memory for intensive queries. The solution would be to upgrade to a 64-bit OS so SQL Anywhere has more addressable "working" memory.

Is this reasonable? Does anyone have similar experience? What other things could be done to investigate the temp file I/O issue?
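For what it's worth, this is roughly how I have been pulling properties manually from the command line. It's a sketch: the server name, database name, and credentials are placeholders, and TempFilePages is the per-connection property name as I understand the Version 9 docs, so verify it against your build.

```shell
rem List each connection's temporary-file usage, heaviest first.
rem sa_conn_properties() returns one row per property per connection;
rem filtering on TempFilePages shows who is holding temp-file pages.
rem Connection parameters below are placeholders -- substitute your own.
dbisql -c "ENG=myserver;DBN=mydb;UID=dba;PWD=sql" ^
  "SELECT Number AS conn_id, Value AS temp_file_pages FROM sa_conn_properties() WHERE PropName = 'TempFilePages' ORDER BY CAST(Value AS INT) DESC"
```

Run periodically during the day, this makes it easy to tell whether the growth comes from a few heavy connections or is spread across everyone.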
Database: SQL Anywhere 9.0.2.3249
OS: Windows Server 2003 R2 32-bit
CPU: 4x Intel Xeon dual-core 3.00GHz
RAM: 64GB (60GB allocated to SQL Anywhere cache)
HDD: DB on RAID5 SCSI-320 (15k RPM, 4 disks); temp file/transaction log on RAID5 SCSI-320 (15k RPM, 4 disks)
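For reference, the "Active Requests" figure above can also be pulled outside Sybase Central by polling server properties. Again just a sketch with placeholder connection parameters; ActiveReq and UnschReq are the property names as I understand the server property docs.

```shell
rem ActiveReq = worker threads currently handling requests;
rem UnschReq  = requests queued waiting for a worker thread.
rem If UnschReq is frequently nonzero, the -gn worker count may be
rem set too low. Connection parameters are placeholders.
dbisql -c "ENG=myserver;DBN=mydb;UID=dba;PWD=sql" ^
  "SELECT PROPERTY('ActiveReq') AS active_requests, PROPERTY('UnschReq') AS queued_requests"
```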
I don't know about other folks, but I am waiting for iAnywhere Engineering to respond. Judging by the up-votes and favorites, other people share my opinion that this is a wonderfully well-written and thoughtful question (and not just because of the Foxhound shout-out :) Or maybe they just think, like I do, that it's a really important topic. Huge.

So, in an effort to meet the requirement "to allow time for the community itself to provide the answer, rather than jumping right in with the first answer", I am posting the first response. Here goes: Yes, other people do have similar experiences.

OK, the minimum requirements have now been met, let's carry on...
A day has passed, so let's try a real answer, or a reasonable facsimile thereof (don't mark this as the "answer", it isn't)...

Have you tried the "Log Expensive Queries" facility? You are using build 9.0.2.3249 of SQL Anywhere, which means you have it. (In Versions 10 and 11, Log Expensive Queries was merged into "Advanced application profiling using diagnostic tracing"... I've never warmed up to the new implementation, but maybe I should try the 11.0.1 incarnation to see if it's more likable.)

Anyway, after you have picked the low-hanging fruit of obvious bottlenecks, it sometimes happens that the next step involves a large number of queries. I don't know of any way other than to start with one and study it, then move on to the next. Along the way you may find some change that will affect them all, or many. Version 9 also contains an early version of the Index Consultant (the current version is described here)... well worth the effort.

If you are doomed to have an active temporary file forever, defragmentation may help; e.g., run Diskeeper, or maybe set up contig.exe as a Scheduled Task. See Defragmenting The Temporary File. And, of course, The Ultimate Suggestion is always "upgrade to a later version"... Version 10 was a huge leap forward, continued in 11.

OK, there... now will someone who actually knows how the temporary file works please chime in?

I did go through all the queries and indexes initially, and where I could, I made optimizations (this is all vendor software, so although I have DBA rights, I am limited). In the end, my fixes seemed marginal and unnoticeable for the average user, which led me to believe there were larger architectural issues to address. Glenn made good points which seem to imply a major lack of query memory available to the DB.
I saw your contig.exe tip once I learned of the existence of the temp file (I think it's one of the first results for "SQL Anywhere temp file" on Google! :) and purposefully grew the file to 10GB using a CROSS JOIN, then defragmented it (since I couldn't restart the database at the time). It lowered Split I/O in Perfmon, but it did not noticeably help performance.
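For anyone wanting to automate the defragmentation instead of doing it by hand, the Scheduled Task idea might look something like this. The paths, task name, and temp-file pattern below are assumptions to adapt; contig.exe is the Sysinternals single-file defragmenter, and the SQL Anywhere temp file lives in the directory named by the SATMP (or TMP) environment variable, with a generated name, so check what it is actually called on your system.

```shell
rem Create a daily 2:00 AM task that defragments the SQL Anywhere
rem temporary file in place while the server keeps running.
rem C:\tools\contig.exe and E:\satmp\asat*.tmp are placeholder paths.
schtasks /create /tn "DefragSATempFile" /sc daily /st 02:00 ^
  /tr "C:\tools\contig.exe -v E:\satmp\asat*.tmp"
```

Defragmenting a file that is open and busy is less effective than doing it after growing the file during a quiet period, which is presumably why the manual grow-then-defrag approach above only lowered Split I/O modestly.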
60 GB cache - you are using an AWE cache on Windows 2003 R2, right? (In case I have overlooked this information.)
Correct, the database is using AWE to address more than the 32-bit ~1.8GB limitation.
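For completeness, the pieces of the AWE setup look roughly like this. These are this system's values rather than defaults, and the exact option syntax should be checked against the Version 9 docs:

```shell
rem 1. boot.ini needs the /PAE switch so 32-bit Windows can address >4GB.
rem 2. The account running the server needs the "Lock pages in memory"
rem    Windows privilege, or the AWE cache cannot be allocated.
rem 3. Start the server with -cw to enable the AWE cache and -c to set
rem    its size (database path and -gn value are this system's values):
dbsrv9 -cw -c 60g -gn 20 "C:\data\mydb.db"
```

Note that AWE memory only enlarges the page cache; as Glenn's post discusses, the per-query working memory for sorts and hash operations is still constrained by the 32-bit address space, which is why the 64-bit OS upgrade is on the table.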