Our users run applications that each create their own connection to the database. There is some concern in my company that many connections (even unused ones) may be a drain on the server's resources, and the thought is to limit the number of connections one user may have.
I pushed back on this, from the POV that I don't want to write a whole bunch of code to solve a possible performance issue before we have a real performance issue, but I got pushed back on in return.
The question that was asked is: what is the real cost of a connection (one that has no locks, no open cursors, no nothing - just a connection)?
I don't know the answer. My guess is that a connection is just a pointer to some (very tiny amount of) memory, and that beyond that there are no clock cycles involved or anything. Is that right? Or is there a real cost, and should we plan to minimize connections?
asked 15 Jan '13, 13:49
You are correct: the cost of a connection that is holding no locks and has no open statements or cursors is very small. The last time we measured it - which was many, many years ago now - it was under 4K of memory per connection. I suspect the memory cost is slightly higher now, but not by much.
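To put that ~4K figure in perspective, here is a quick back-of-the-envelope calculation (a sketch only; the 4 KiB number comes from the old measurement above, and today's per-connection cost may be somewhat higher):

```python
# Rough memory footprint of idle connections, using the ~4 KiB/connection
# figure from the answer above (an old measurement; treat as an estimate).
BYTES_PER_IDLE_CONNECTION = 4 * 1024  # ~4 KiB

def idle_connection_memory_mib(connections: int) -> float:
    """Approximate server memory consumed by idle connections, in MiB."""
    return connections * BYTES_PER_IDLE_CONNECTION / (1024 * 1024)

# Even a thousand completely idle connections cost only a few megabytes.
print(f"{idle_connection_memory_mib(1000):.2f} MiB")  # 3.91 MiB
```

In other words, at this rate even an unusually large number of idle connections is unlikely to be the server's memory bottleneck.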
There is also a small CPU cost per connection: in order to keep connections alive and to detect dead ones, a "liveness packet" is sent across each connection every 40 seconds or so. The exact interval depends on your liveness timeout value (the default is 120 seconds); a packet is sent roughly every third of the timeout.
The other thing to consider is the licensing of your server. If you are using a per-seat license, then each connection to the server may consume one of the seat licenses. If you are licensed per CPU, this is not an issue.