We are running some processes overnight, and we've seen some odd behavior.
We have one process that runs a very large SELECT statement and then exports the result to a comma-delimited file. The export does not use SQL Anywhere's UNLOAD functionality, but a custom-written piece of software that fetches the data row by row (we plan to change that, though I'm not sure that detail is even relevant here).
The odd thing is that on a freshly started database this process takes 4 minutes, while on a database that has been running in production for a couple of days it can take 40 minutes or longer. The process isn't stuck; it does eventually finish correctly, but it takes very, very long. We suspect we might be doing something that leaks resources on the database. It might not even be this process, but something else that happens during normal business whose effects we don't see until we try to run this process.
What resources does a SQL Anywhere 11 database use? What statistics can/should we look at, and where?
Thanks for helping!
asked 19 Jan '10, 16:32
On my blog you'll find a whitepaper that discusses diagnosing performance issues, entitled "Diagnosing Application Performance Issues with SQL Anywhere":
The whitepaper documents how one can use SQL Anywhere's Application Profiling feature to profile an application and diagnose where the performance issues may lie. With your example specifically, I would be looking for execution plan differences between the query running "cold" and the instance running "hot". But there may be other factors (blocking, for example) that can account for the difference in elapsed time.
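To compare plans between a "cold" and a "hot" server, you can capture the optimizer's plan for the export query at both points in time and diff the two. A minimal sketch (the SELECT shown is a placeholder for your actual export statement; GRAPHICAL_PLAN is a SQL Anywhere built-in that returns the plan as XML viewable in dbisql):

```sql
-- Run once right after the server starts, and again after a few days
-- of production workload, then compare the two outputs.
-- Replace the inner SELECT with the real export query.
SELECT GRAPHICAL_PLAN(
    'SELECT * FROM my_big_table'  -- hypothetical table name
);
```

If the join order, index choice, or scan type differs between the two captures, that difference is the first thing to investigate.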
answered 19 Jan '10, 16:41
We have had a similar problem in the past. Have a look at the query plans directly after a restart of the database and again after some workload has run; I expect you will see a difference. Directly after startup, with a clean cache, the query optimizer prefers plans that don't rely on cached data, since the cache is still empty. In our case, that meant that directly after startup it used an index to find the affected rows and was fast. With a warm cache the optimizer chose a sequential scan, counting on a full cache, but in reality it was not as fast as estimated ;-)
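A few properties can help confirm (or rule out) the warm-cache theory, by showing how the cache has grown since startup and how much of the workload it is absorbing. A hedged sketch; the property names below are my recollection of the SQL Anywhere 11 names, so check the docs if any is not recognized on your build:

```sql
-- Snapshot of cache size and activity; run cold and again hot.
SELECT PROPERTY('CurrentCacheSize') AS current_cache_kb,
       PROPERTY('PeakCacheSize')    AS peak_cache_kb,
       DB_PROPERTY('CacheHits')     AS cache_hits,
       DB_PROPERTY('CacheRead')     AS cache_reads;
```

A large cache with a high hit ratio makes it more likely the optimizer is picking cache-friendly plans, as described above.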
answered 20 Jan '10, 08:26
One thing that restarting the database does is flush and recreate the "database temporary file".
One thing that can happen to the temporary file over time, as more and more connections are started and complex queries are run, is that it grows in size... and becomes physically fragmented.
I'm not saying this is your problem, but... you can test it as follows: Wait until your Query From Hell slows down, and then defragment just the temporary file to see if that helps.
Step by step (leave the database running while you do this)...
(1) Download Microsoft contig from http://technet.microsoft.com/en-us/sysinternals/bb897428.aspx
(2) Run this dbisql query to find where the temporary file is:
SELECT DB_PROPERTY ( 'TempFileName' );

DB_PROPERTY('TempFileName')
'C:\\DOCUME~1\\bcarter\\LOCALS~1\\Temp\\sqla0001.tmp'
(3) Copy and paste the dbisql cell value into a contig command and run it:
Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.

C:\Documents and Settings\bcarter>C:\download\sysinternals\contig_1_55\contig.exe C:\DOCUME~1\bcarter\LOCALS~1\Temp\sqla0001.tmp

Contig v1.55 - Makes files contiguous
Copyright (C) 1998-2007 Mark Russinovich
Sysinternals - www.sysinternals.com

Summary:
     Number of files processed : 1
     Number of files defragmented: 1
     Average fragmentation before: 13 frags/file
     Average fragmentation after : 1 frags/file

C:\Documents and Settings\bcarter>
(4) Determine if the Query From Hell runs any faster. If it does, then we can look for the reason(s) the temporary file is getting messed up... post another question.
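Whether or not the defragmentation helps, it's worth tracking how big the temporary file actually gets over the days between restarts. A hedged sketch; I believe DB_EXTENDED_PROPERTY accepts 'temporary' as the dbspace name in SQL Anywhere 11, but verify against your docs:

```sql
-- Temporary-file size in pages; multiply by the page size for bytes.
SELECT DB_EXTENDED_PROPERTY('FileSize', 'temporary') AS temp_pages,
       DB_PROPERTY('PageSize')                       AS page_bytes;
```

If the temporary file keeps growing day over day, that supports the fragmentation theory even before you run contig.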
If not, then just down-vote THIS answer :)