A customer wants to use SQL Anywhere with 1000 connections. Currently he is using XP Server with 60 connections. How does the server need to be configured, or is a larger machine required? How do you size a machine for SQL Anywhere with 1000 connections? Where can I get this kind of information and advice?
See Glenn's blog for the general response to this question. To determine your hardware requirements you need to create a performance test that mimics your production workload. This test should control as many performance factors in the database server as possible, while varying an individual factor across a set of runs to establish its effect on your overall throughput. Once you have gathered this information (after varying things like the server cache size, the number of worker threads, the level of intra-query parallelism, etc.) you can use statistical analysis to determine which performance factors matter most for your workload.
Once you have this information (e.g. how much server cache is required to reach the desired throughput for your workload? If you restrict the CPU to fewer cores, how much does that affect your results? etc.), you can translate it into hardware requirements. For example, suppose increasing the server's cache size past 1GB doesn't have a large effect on your workload test results (perhaps because your database is small and fits easily into 1GB of memory). In that case, spending more money on RAM beyond 1GB in your production system would not make sense since it yields no performance benefit, and you can instead invest that money in other hardware (a faster CPU, perhaps). Unfortunately, you will only be able to determine this accurately after completing the capacity-planning tests described above - there is no "rule of thumb" that can accurately size all workload types from a rough description.
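The vary-one-factor-at-a-time sweep described above can be sketched as a small test harness. Everything here is hypothetical: `run_workload` stands in for your real benchmark (start the server with the given settings, replay your captured workload, measure transactions per second), and the toy formula inside it exists only so the example runs.

```python
import statistics

def run_workload(cache_mb, workers):
    """Hypothetical stand-in for a real benchmark run: start the server
    with these settings, replay the production-like workload, and return
    measured throughput in transactions/second."""
    # Toy model for illustration only: throughput saturates once the
    # cache is large enough, with a small contribution from worker threads.
    return min(cache_mb, 1024) * 0.5 + workers * 2

def sweep(factor_values, fixed, vary):
    """Vary one server factor at a time while holding the others fixed."""
    results = {}
    for value in factor_values:
        settings = dict(fixed)
        settings[vary] = value
        runs = [run_workload(**settings) for _ in range(3)]  # repeated runs
        results[value] = statistics.mean(runs)
    return results

cache_results = sweep([256, 512, 1024, 2048],
                      fixed={"workers": 20}, vary="cache_mb")
for cache, tps in cache_results.items():
    print(f"cache={cache}MB -> {tps:.0f} tps")
```

In this made-up model the 1024MB and 2048MB runs produce identical throughput - exactly the kind of plateau that tells you further RAM spending is wasted for this workload.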
One quibble: just because a database file is only 1GB in size does NOT necessarily mean that 1GB of cache is enough. For example, suppose 100 busy connections each required 100MB of temp space... try running THAT on a SQL Anywhere engine with 1GB of RAM cache :) The old rules of thumb about cache size (10% of file size, no more than total file size, etcetera) no longer work with today's workloads and today's expectations of performance. Today's rule of thumb is: you can never have too much RAM!
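Breck's temp-space scenario is easy to check with quick arithmetic (the numbers below are his illustrative ones, not measurements):

```python
# Scenario: the database file fits in 1GB, but temp space blows past it.
database_file_mb = 1024          # 1GB database file
busy_connections = 100
temp_space_per_conn_mb = 100     # each busy connection needs 100MB of temp space

total_temp_mb = busy_connections * temp_space_per_conn_mb
print(f"Temp space alone: {total_temp_mb} MB")   # 10000 MB, i.e. ~10GB

# Sizing cache from the file alone under-estimates by roughly an order of magnitude:
print(f"Shortfall factor vs. the file size: {total_temp_mb / database_file_mb:.1f}x")
```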
(04 Jul '12, 12:13)
Breck Carter
Thank you Breck for the quibble - I certainly agree! When originally writing the reply, I assumed that the database was much smaller and that the database plus its workload requirements could fit into 1GB of RAM - I should have clarified that in my response, so thank you for mentioning it. I have my own quibble with "today's rule of thumb", though: while everyone may desire the largest possible amount of RAM, your organization's purchasing department cannot afford an unlimited amount of RAM for a system. When purchasing a new server, vendors usually offer different upgrade options at different price points: clock speed and number of CPU cores, size and speed of RAM, size and speed of hard disks, etc. The real question to answer is then: which upgrades (if any) will yield the best throughput for your workload, within your purchasing budget? Glenn's blog post contains the quote for the "correct" rule of thumb here.
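The "best throughput within your budget" question can be illustrated with a toy calculation. All the costs and throughput gains below are invented; in practice each gain would come from the capacity-planning tests discussed earlier.

```python
# Hypothetical upgrade options - every number here is made up for illustration.
upgrades = [
    {"name": "RAM 8GB -> 16GB",  "cost": 400, "tps_gain": 50},
    {"name": "CPU 4 -> 8 cores", "cost": 900, "tps_gain": 120},
    {"name": "HDD -> SSD",       "cost": 600, "tps_gain": 200},
]
budget = 1200

# Greedy pick by throughput gained per dollar, while budget remains.
chosen, spent = [], 0
for u in sorted(upgrades, key=lambda u: u["tps_gain"] / u["cost"], reverse=True):
    if spent + u["cost"] <= budget:
        chosen.append(u["name"])
        spent += u["cost"]

print("Buy:", chosen, "for", spent)
```

A greedy pick by gain-per-dollar is only a sketch; with a handful of options you could simply enumerate every affordable combination and take the one with the best total measured gain.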
(09 Jul '12, 13:46)
Jeff Albion
1000 connections doing nothing will be easily served by the XP machine, so please describe in more detail the type of workload you expect for the average connection.
And what SQL Anywhere version and data access APIs are you using?
Additionally:
We are using SQL Anywhere version 12.0.
I want a practical hardware sizing guide for SQL Anywhere with more than 300 concurrent users, for a voting system.
The transactions will likely be short and simple.