Sometimes our customer has performance problems when users run requests at normal priority while there are other running requests (long-running SELECTs) at background priority. All normal-priority requests perform very slowly when there are ~10 active long-running background-priority queries, yet these same requests are short-running when no long-running queries are active. According to Mark's answers in two related questions, I would expect normal-priority requests to be given more processing "time slices" than background-priority queries, but in practice this has no visible effect, or perhaps I am misunderstanding something. Are there any other options to configure so that such slow background-priority queries (even 10-20 of them) would not visibly impact the performance of normal-priority requests?
Does SA12 or SA16 behave differently in such situations?
The -gn server option is set to 200. The server uses 2 physical (4 logical) CPUs.
SA version: 220.127.116.1113. Platform: Windows.
I have been discussing this issue with coworkers and have added a backlog item to reevaluate and change the algorithm used so that the number of lower-priority tasks does not affect the percentage of time that higher-priority tasks get.
We have also discussed implementing priority IO queues in the past and this task is already on the backlog (of a very long list of feature requests).
As always, I cannot say when either of the above changes will show up in the product. Thank you for your input - we always appreciate everyone's suggestions!
answered 21 Feb '14, 12:54
As I describe in my answer to that question, the background priority option simply changes the number of time slices that the worker gets. As the number of background workers increases, the proportional amount of time that a single non-background worker gets decreases. In your example with 10 background workers and a single normal worker, the normal-priority worker gets 8 time slices and then the 10 background workers each get a time slice (10 slices in total)... so the normal-priority worker gets 8 of every 18 slices, less than half of its 'normal' time. (This explanation is not exactly correct due to the 4 logical CPUs involved.)
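The slice arithmetic above can be sketched as follows. This is only my reading of the answer (a per-round model where the normal worker gets 8 slices and each background worker gets 1), not SQL Anywhere's actual scheduler; the function name and slice counts are illustrative:

```python
# Toy model of the time-slice allocation described above: per scheduling
# round, the single normal-priority worker receives `normal_slices` slices
# and each background worker receives one slice.
def normal_priority_share(background_workers: int, normal_slices: int = 8) -> float:
    """Fraction of all slices in a round that the normal-priority worker gets."""
    total = normal_slices + background_workers  # one slice per background worker
    return normal_slices / total

print(normal_priority_share(0))   # 1.0  (no competition)
print(normal_priority_share(10))  # 8/18, just under half
print(normal_priority_share(20))  # 8/28, under a third
```

This makes the reported behavior plausible: the normal worker's share degrades steadily as background workers are added, which is exactly what the backlog item above aims to change.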
Also note that all workers, regardless of their priority, wait exactly the same amount of time when they need pages read from disk into the cache. I.e. all reads from (and writes to) disk are queued sequentially on a first-come-first-queued basis. This means that if all of the requests need I/O in order to complete, then all of the workers are effectively working (or waiting) on the I/O at the same rate.
Finally, I'll mention that a high -gn value will not necessarily give you better response time or throughput. In fact, in our experiments we often find that lowering the -gn value actually gives better request throughput. This is one of the reasons that the automatic multiprogramming level was implemented in v12.
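As a concrete experiment along those lines (the database file name is a placeholder, and dbsrv12 assumes the v12 network server executable), you could try starting the server with a much lower multiprogramming level than the 200 currently in use and compare throughput:

```shell
# Start the v12 network server with a lower multiprogramming level
# (-gn) than the current setting of 200, then measure request throughput.
dbsrv12 -gn 20 mydb.db
```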