I recently added code to my ADO.NET application to set the PRIORITY connection option.
My question is: what is the full impact of this setting on the server's processing of query requests? From an article on Glenn Paulley's blog, I know that:
The PRIORITY connection option establishes the processing priority of any SQL request for this connection. The default is “Normal”; other potential values are Critical, High, Above Normal, Below Normal, Low, and Background. When SQL requests are queued for service, the server will process the queue in priority order. Setting the priority option to different values for different connections (or users) permits the categorization of service levels across the entire server’s workload.
So this seems to say that if there are n requests queued up at Normal priority and a new request arrives at a higher priority, the higher-priority request will be serviced first. Is that what actually happens?
Just how many requests can the server process at a time? My app is running on the dbsrv12 engine as an embedded database. The hardware is not optimized for use as a DB server. Where's the documentation on how the server handles these situations? I can't find any.
The number of requests that the server can process at any one time is controlled by the current multiprogramming level, which is set by the -gn, -gna, and -gnh switches as well as the corresponding sa_server_option options. The default value depends on which version you are using and on whether you are running the personal server (dbengX) or the network server (dbsrvX). Version 12 of SQL Anywhere introduced the ability of the server to adjust the multiprogramming level dynamically to improve the overall throughput (in terms of the number of completed requests) of the system.
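To make the effect of the multiprogramming level concrete, here is a toy Python simulation (the request names, work units, and one-unit-of-work-per-step model are all invented for illustration - the real server is far more elaborate): at most `mpl` requests are active at once, and the rest wait in a FIFO queue until a worker frees up.

```python
from collections import deque

def process_requests(requests, mpl):
    """Simulate a server capped at `mpl` concurrently active requests.

    Requests beyond the cap wait in a FIFO queue until a worker is
    free; each active request completes one unit of work per step.
    """
    queue = deque(requests)   # waiting requests (FIFO)
    active = []               # requests currently held by a worker
    completed = []            # (name, completion time) pairs
    time = 0
    while queue or active:
        # Hand queued requests to idle workers, up to the MPL cap.
        while queue and len(active) < mpl:
            active.append(queue.popleft())
        # One unit of work on every active request.
        time += 1
        for req in list(active):
            req["remaining"] -= 1
            if req["remaining"] == 0:
                active.remove(req)
                completed.append((req["name"], time))
    return completed

reqs = [{"name": f"r{i}", "remaining": 2} for i in range(4)]
print(process_requests(reqs, mpl=2))
# → [('r0', 2), ('r1', 2), ('r2', 4), ('r3', 4)]
```

With an MPL of 2, the third and fourth requests cannot start until the first two finish - raising the MPL to 4 would let all four complete at time 2, which is the trade-off the dynamic tuning in version 12 is adjusting.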
The priority of a request controls how many cycles the request gets once it is being handled by a worker in the database server. Note that AFAIK (and someone can correct me if I am wrong) the priority of a request does not control how it is queued; i.e. all requests are dequeued on a first-in-first-served basis, and higher-priority requests do not get to 'jump the queue' ahead of normal-priority requests.
The database server works internally on a cooperative scheduling scheme: each request gets to run for a certain amount of time and then relinquishes control to another request. Each ACTIVE request gets to run when its turn comes up. When there are requests at different priority levels, the higher-priority active requests get more turns than the lower-priority ones. I.e. once all requests at a particular priority have had a given number of turns (e.g. 8 turns), the requests at the next lower priority get some turns. Doing this ensures that low-priority requests are not starved.
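A rough Python sketch of that turn-budget idea - note that the 8-turn budget comes from the example above, and the halving of the budget at each lower priority level is purely an assumed ratio for illustration, not documented server behavior:

```python
from collections import defaultdict

def run_rounds(requests, rounds, base_turns=8):
    """Cooperative-scheduling sketch with per-priority turn budgets.

    In each scheduling round, requests at the highest priority level
    each get `base_turns` turns; every successively lower level gets
    half as many (an assumed 2:1 ratio - the real ratios are not
    documented here). Every level runs in every round, so low-priority
    requests are never starved outright.
    """
    turns = defaultdict(int)
    levels = sorted({prio for _, prio in requests}, reverse=True)
    for _ in range(rounds):
        budget = base_turns
        for level in levels:                 # highest priority first
            for name, prio in requests:      # round-robin within the level
                if prio == level:
                    turns[name] += budget
            budget = max(1, budget // 2)     # lower level, smaller budget
    return dict(turns)

reqs = [("high", 2), ("norm-a", 1), ("norm-b", 1), ("low", 0)]
print(run_rounds(reqs, rounds=3))
# → {'high': 24, 'norm-a': 12, 'norm-b': 12, 'low': 6}
```

The high-priority request accumulates turns fastest, but the low-priority one still makes steady progress every round - that is the anti-starvation property described above.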
So for example, if all ACTIVE requests - i.e. requests that are currently being processed by a worker - are running at normal priority, then each request gets processed in roughly round-robin fashion. "Roughly" because any time a request needs to wait for a resource (disk I/O, a row lock, a page latch, etc.) it is taken off the ACTIVE list and hence will not get its full number of CPU cycles to run - but when this happens it does not consume a full "turn", so it gets to continue once it has acquired the resource it needed.
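Here is a toy round-robin simulation of that behavior for equal-priority requests (the step model is invented for illustration): a 'cpu' step consumes a turn, while a 'wait' step takes the request off the active list without charging it a turn, and the request rejoins once its wait completes (after one tick in this sketch).

```python
from collections import deque

def simulate(requests):
    """Round-robin equal-priority requests, modeling resource waits.

    `requests` maps a name to a list of steps, each 'cpu' or 'wait'.
    A 'cpu' step consumes a turn and is logged; a 'wait' step (disk
    I/O, row lock, page latch, ...) moves the request to the blocked
    list without consuming a turn, and it rejoins the active list on
    the next tick. Returns the order in which turns were consumed.
    """
    active = deque(requests.items())   # (name, remaining steps)
    blocked = []
    log = []
    while active or blocked:
        active.extend(blocked)         # waits finish after one tick here
        blocked.clear()
        for _ in range(len(active)):   # one round-robin pass
            name, steps = active.popleft()
            step, rest = steps[0], steps[1:]
            if step == "cpu":
                log.append(name)       # turn consumed
                if rest:
                    active.append((name, rest))
            elif rest:                 # 'wait': off ACTIVE, no turn charged
                blocked.append((name, rest))
    return log

print(simulate({"a": ["cpu", "wait", "cpu"], "b": ["cpu", "cpu"]}))
# → ['a', 'b', 'b', 'a']
```

While "a" waits for its resource, "b" keeps getting turns; "a" then resumes where it left off without having been charged a turn for the wait.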
The above description assumes the server has only one thread executing work, but that is generally not the case these days. The algorithm gets a bit more complicated when multiple cores/threads are involved (and hence multiple workers are running at the same time), but the general idea remains the same.