I recently added code to my ADO.NET application to set the connection PRIORITY option to Above Normal for one of the processes in my app. The code also sets the MAX_PRIORITY option to the same value so that the statement setting PRIORITY will succeed.
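For reference, a sketch of the statements involved, assuming the documented PRIORITY and MAX_PRIORITY options; the helper function is illustrative only, not the actual ADO.NET code:

```python
def priority_statements(level: str) -> list[str]:
    """Return the SQL to run the current connection at `level` priority.

    MAX_PRIORITY caps the value PRIORITY may take, so it is raised first
    when moving above the default ceiling.
    """
    allowed = {"Critical", "High", "Above Normal", "Normal",
               "Below Normal", "Low", "Background"}
    if level not in allowed:
        raise ValueError(f"unknown priority level: {level}")
    return [
        f"SET TEMPORARY OPTION max_priority = '{level}';",
        f"SET TEMPORARY OPTION priority = '{level}';",
    ]

# The statements would then be executed on the open connection, e.g.
# (DB-API style, illustrative):
#   for stmt in priority_statements("Above Normal"):
#       cursor.execute(stmt)
```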

My question is what is the full impact of this setting on the server's processing of query requests? From an article on Glenn Pauley's blog, I know that:

The PRIORITY connection option establishes the processing priority of any SQL request for this connection. The default is “Normal”; other potential values are Critical, High, Above Normal, Below Normal, Low, and Background. When SQL requests are queued for service, the server will process the queue in priority order. Setting the priority option to different values for different connections (or users) permits the categorization of service levels across the entire server’s workload.

So this seems to say that if there are n requests queued at priority Normal, and I queue one request at priority Above Normal with nothing higher waiting, then the Above Normal query will execute first. But what happens if a long-running query at priority Normal is already running? Will that query be suspended so the Above Normal query can execute, or does the Above Normal query wait, like everything else, for that query to complete?

Just how many requests can the server process at a time? My app is running on the dbsrv12 engine as an embedded database. The hardware is not optimized for use as a DB server. Where's the documentation on how the server handles these situations? I can't find any.

Thanks

asked 04 Mar '13, 16:09

TonyV

edited 04 Mar '13, 16:10

Just to add: Mark has also explained this topic in this similar FAQ...

(04 Mar '13, 17:05) Volker Barth

The number of requests that the server can process at any one time is controlled by the current multiprogramming level, which is set via the -gn, -gna, and -gnh switches as well as the corresponding sa_server_option options. The default value depends on which version you are using and whether you are running the personal server (dbengX) or the network server (dbsrvX). Version 12 of SQL Anywhere introduced the ability for the server to adjust the multiprogramming level dynamically in order to improve the overall throughput (in terms of number of completed requests) of the system.
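As an illustration, the multiprogramming level can be influenced on the server command line using the switches mentioned above; the numeric values below are placeholders, not recommendations:

```
dbsrv12 -gn 20 mydb.db       # start with a multiprogramming level of 20 workers
dbsrv12 -gnh 40 mydb.db      # cap the dynamically adjusted level at 40 workers
```

The corresponding sa_server_option settings let you inspect or adjust the level on a running server without a restart.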

The priority of a request controls how many cycles it gets once it is being handled by a worker in the database server. Note that AFAIK (and someone can correct me if I am wrong) the priority of a request does not control how the request is queued; i.e., all requests are handled on a first-in-first-processed basis, and higher-priority requests do not get to 'jump the queue' ahead of normal-priority requests.

The database server works internally on a cooperative scheduling scheme: each request gets to run for a certain amount of time and then relinquishes control to another request. Each ACTIVE request gets to run when its turn comes up. When there are requests at different priority levels, the higher-priority active requests get more cycles before the lower-priority levels run. That is, once all requests at a particular priority have had a given number of turns (e.g. 8 turns), the requests at the next lower priority get some turns. Doing this ensures that low-priority requests are not starved.
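The batching behaviour described above can be sketched in a short simulation. This is a toy model, not the server's actual algorithm; the batch size of 8 (taken from the "e.g. 8 turns" example) and the data structures are assumptions for illustration only:

```python
# Toy model of priority-weighted cooperative scheduling: active requests
# at a higher priority receive a batch of turns before the next lower
# level runs, so lower-priority work is slowed but never starved.
from collections import deque

TURNS_PER_LEVEL = 8  # assumed batch size, matching the "8 turns" example

def schedule(requests, total_turns):
    """Grant `total_turns` turns across priority levels.

    requests: dict mapping priority (0 = highest) to a deque of request
    names. Returns the order in which turns were granted.
    """
    order = []
    levels = sorted(requests)
    while len(order) < total_turns:
        progressed = False
        for level in levels:
            queue = requests[level]
            if not queue:
                continue
            # This level gets a batch of turns, round-robin within the level.
            for _ in range(min(TURNS_PER_LEVEL, total_turns - len(order))):
                req = queue.popleft()
                order.append(req)
                queue.append(req)  # back of its own level's queue
                progressed = True
            if len(order) >= total_turns:
                break
        if not progressed:  # all queues empty
            break
    return order

# One Above Normal request competing with two Normal requests: the first
# eight turns go to 'above-normal', then the Normal pair alternates.
print(schedule({0: deque(["above-normal"]),
                1: deque(["normal-1", "normal-2"])}, 12))
```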

So, for example, if all ACTIVE requests - i.e. requests that are currently being processed by a worker - are running at normal priority, then each request is processed in roughly round-robin fashion. "Roughly" because any time a request needs to wait for a resource (disk I/O, a row lock, a page latch, etc.) it is taken off the ACTIVE list and hence does not get its full number of CPU cycles to run (but when this happens it does not consume a full "turn", so it gets to continue once it has acquired the resource it needed).

The above description assumes the server has only one thread executing work, but this is generally not the case these days. The algorithm gets a bit more complicated when multiple cores/threads are involved (and hence multiple workers run at the same time), but the general idea remains the same.

answered 04 Mar '13, 16:59

Mark Culp

edited 04 Mar '13, 17:08

Interesting. So if a long-running request is running and an Above Normal request comes in, the Above Normal request gets a worker thread when there's one available to process it, which may not be until some other request completes. Once it's started, the Above Normal request gets more CPU attention than the long-running request until it (the Above Normal request) is finished. Is that right?

(04 Mar '13, 17:20) TonyV

Yes, you are correct.

(04 Mar '13, 17:23) Mark Culp

last updated: 04 Mar '13, 17:23