We are using version 11.0.1 and our app was working fine until this morning, when CPU usage went to 100% on one of the processes, and we cannot figure out what is causing it. Are there any suggestions as to how we should track down the problem? Is there something in Sybase Central we can use to identify which process is causing it? Please let me know if I can provide more information. asked 13 Apr '12, 12:31 javedrehman Calvin Allen |
You should take a look at the whitepaper entitled "Diagnosing Performance Issues with SQL Anywhere". You can use the dbconsole utility to monitor connections, and with it watch the "ApproximateCPUTime" counter to pinpoint the connection(s) that are consuming the most CPU. In addition, you can track the last prepared statement issued by each connection, which can help pinpoint a query or queries that are consuming the CPU. Alternatively, you can use Application Profiling in Sybase Central to log expensive statements and the access plans that were executing at that time. answered 16 Apr '12, 10:37 Glenn Paulley |
For diagnosing which connection may be problematic, I would start with Interactive SQL and system procedures like sa_conn_info() in conjunction with sa_conn_properties() to look for suspiciously high performance counters, such as the connection property "ApproximateCPUTime":

select * from sa_conn_info() sci,
    lateral (sa_conn_properties(sci.Number)) scp
where scp.PropName like '%CpuTime%' -- or whatever property filter you favour
order by scp.Value desc, scp.Number

Other properties of interest may be ConnectedTime, LoginTime, the various ReqCountXyz counters (which require starting the engine with the -zt option), and surely more than I'm aware of. As said, that won't solve the issue but may help diagnosis. AFAIK, these properties are also visible in the Sybase Central GUI, but I'm not familiar with that. answered 16 Apr '12, 04:02 Volker Barth |
I had a situation a couple of years ago with an earlier release (version 8?). The database optimizer was causing a problem: selecting one record using a primary key would be almost instantaneous, but selecting the very next record (relative to the primary key) would take up to 30 minutes of intense processing. Might try: DROP OPTIMIZER STATISTICS answered 14 Apr '12, 08:24 Bryan
DROP OPTIMIZER STATISTICS is a no-op - it is retained only to keep scripts from breaking with syntax errors. It has been disabled since version 8.0.0.
(16 Apr '12, 10:32) Glenn Paulley |
I have had the same problem recently with version 11.0.1. Rebuilding the index on the most intensively used table solved the issue: ALTER INDEX xxx ON xxx REBUILD; answered 16 Apr '12, 02:03 MichaelMange... |
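If you don't know the index names offhand, a catalog query can generate the ALTER INDEX statements for you. A hedged sketch, assuming the SQL Anywhere 10+ catalog views (SYSIDX/SYSTAB/SYSUSER); the table name and the index_category filter are assumptions to verify against your own catalog before running the output:

```sql
-- Generate ALTER INDEX ... REBUILD statements for one table.
-- 'my_big_table' is a placeholder; index_category = 3 (secondary indexes,
-- excluding PK/FK indexes) is an assumption -- check SYSIDX in your build.
SELECT 'ALTER INDEX "' || i.index_name || '" ON "' || u.user_name ||
       '"."' || t.table_name || '" REBUILD;' AS rebuild_stmt
FROM SYS.SYSIDX i
JOIN SYS.SYSTAB t ON t.table_id = i.table_id
JOIN SYS.SYSUSER u ON u.user_id = t.creator
WHERE t.table_name = 'my_big_table'
  AND i.index_category = 3;
```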
As to your question w.r.t. logging the last statement per connection - you can use the RememberLastStatement feature (enabled via the -zl server option, or at runtime via sa_server_option) for that. You can then check for the according statement with

select connection_property('LastStatement', <theConnIdOfInterest>);

Please note the several caution warnings in the docs on which statements might/might not be remembered. Checking the last plan might be worthwhile, too, cf. the -zp server option and the "RememberLastPlan" sa_server_option setting... answered 18 Apr '12, 10:58 Volker Barth |
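Putting the two pieces together, a minimal sketch (the connection id 42 is a placeholder; substitute a Number returned by sa_conn_info()):

```sql
-- Turn on statement logging for all connections (equivalent to starting
-- the server with the -zl option).
CALL sa_server_option('RememberLastStatement', 'Yes');

-- Then, for a connection number found via sa_conn_info():
SELECT CONNECTION_PROPERTY('LastStatement', 42);  -- 42 is a placeholder id
```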
Can you tell us what build number you are using?
build 2584
Will rebuilding statistics help? How do we rebuild statistics in SQL Anywhere?
See the CREATE STATISTICS statement; it will also rebuild existing statistics.
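For example (table and column names are placeholders):

```sql
-- Recreate the histograms for a whole table...
CREATE STATISTICS my_table;

-- ...or for specific columns only:
CREATE STATISTICS my_table (customer_id, order_date);
```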
Thank you to all who contributed. Recreating the statistics and also rebuilding the indexes on three of our large tables did help, and since we have done that we have not had any problems. We have over 600 users connected at any given time and the system is working fine. Does this mean that we should rebuild these statistics, and maybe rebuild indexes, on a periodic basis? Is this something DBAs do on a regular basis, and if so, how often should one do it? Just some thoughts to share. Someone suggested tracking the last prepared statement issued by each connection - how do we do that?
On our largest databases, we rebuild the indexes weekly. I've noticed that doing it regularly means that rebuilding the indexes does not take long. The longer I wait to rebuild them, the longer it takes to rebuild them.
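A recurring rebuild like that can be scheduled inside the database itself with a SQL Anywhere event. A hedged sketch - the event name, table/index names, and the schedule are all assumptions, not a recommendation for any particular workload:

```sql
-- Weekly maintenance on one large table; my_big_index / my_big_table
-- are placeholders for your own object names.
CREATE EVENT weekly_maintenance
SCHEDULE sched_weekly_maintenance
    START TIME '02:00' ON ('Sunday')
HANDLER
BEGIN
    ALTER INDEX my_big_index ON my_big_table REBUILD;
    CREATE STATISTICS my_big_table;
END;
```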
Rebuilding indexes regularly isn't a bad idea if you can tolerate the maintenance period for doing so. Rebuilding statistics periodically is typically unnecessary, but for tables in your database that undergo considerable numbers of updates, periodic rebuilding might be an advantage, particularly with 11.0.1. In version 12, statistics management is improved, and statistics will be rebuilt automatically if the server discovers that the existing histograms have sufficient error.
However, rebuilding indexes once a week seems excessive, at least to me. My intuition about your issue is that you have one, or possibly several, long-running complex queries and some problem determination on why these queries are using all of the CPU needs to occur.
...me, too, though this seems quite common with other database systems like MS SQL Server:)
I agree that we should look at the queries that are consuming the most CPU time, but I am having a difficult time finding out which connection is executing what. I just started at this place and everybody logs in as sa. Someone suggested looking at the last prepared statement issued by each connection - how do I do that? I will see if I can run the dbconsole utility to find out more.
You can use the dbconsole GUI utility to track all connections. DBCONSOLE can display selected counters for each connection - you'll want to display "ApproximateCPUTime" and "LastStatement". Alternatively, you can use Application Profiling from Sybase Central. Take a look at the documentation and the whitepaper I mentioned above for instructions on how to do that.
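If you'd rather not use the GUI, the same two counters can be pulled with plain SQL. A sketch; note that CONNECTION_PROPERTY returns strings, hence the CAST, and LastStatement is only populated when RememberLastStatement is enabled:

```sql
-- CPU time and last statement per connection, busiest first.
SELECT sci.Number,
       sci.Userid,
       CAST(CONNECTION_PROPERTY('ApproximateCPUTime', sci.Number) AS DOUBLE)
           AS cpu_seconds,
       CONNECTION_PROPERTY('LastStatement', sci.Number) AS last_statement
FROM sa_conn_info() sci
ORDER BY cpu_seconds DESC;
```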