Hello,

I have an abstract question about SQL Anywhere speed limitations when many insert/update statements are performed concurrently by different connections. There is a procedure P, which performs many updates of existing records in various tables and also inserts new data. The procedure processes one piece (interval) of data. We launch N instances of that procedure concurrently (in separate dbisql instances), each on a different, non-overlapping data interval, to increase the overall amount of data processed per minute.

While N is up to 7-10, we see an almost proportional increase in the amount of data processed, i.e. it is almost N times larger than when running only one instance of procedure P. Each instance uses almost a full logical CPU core (max_query_tasks is set to 1, because setting it to 0 or any other value decreases speed drastically in this case).
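For context, this is roughly how the parallelism cap described above is applied (a minimal sketch; the procedure name P and its interval parameters are placeholders from the question, not real identifiers):

```sql
-- Pin this connection to single-threaded query execution, so each
-- procedure instance stays on one worker (0 would let the server
-- parallelize freely, which slowed this workload down drastically).
SET TEMPORARY OPTION max_query_tasks = 1;

-- Process one non-overlapping interval per dbisql instance.
CALL P( @interval_start, @interval_end );  -- hypothetical parameters
```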

Once N reaches or exceeds 7-10 instances, the amount of data processed per minute no longer increases: it stays the same even when we raise N to 17 instances. Each instance then processes less data per minute, and the usage of each logical CPU drops noticeably. There is enough free RAM for the cache, and the customer reports that disk usage is not very high either.

What could be the reasons for such a limitation? Are checkpoints the only possible cause, or is there something else we can configure or try? Thanks.

Version: 11.0.1.3158.
Platform: Windows Server 2008 R2 Standard.

asked 22 Dec '14, 10:32

Arthoor

edited 22 Dec '14, 10:37

How many processors are being used by the SQL Anywhere server?

Are all the dbisql processes running on the same computer? Is that computer the same as the SQL Anywhere server, or a different one?

Have you tried setting dbsrv11 -zt and running Foxhound to see if anything is blocking or waiting for other reasons?

(22 Dec '14, 11:29) Breck Carter

In addition to Breck's questions/suggestions, it sounds like you could be hitting a limit on the number of worker threads in the server - have you tried increasing the -gn value? Generally, increasing the number of worker threads is not a good idea due to the increased probability of contention, but if your workload is indeed non-overlapping (not touching the same data), then adding workers may improve throughput.
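A quick way to see whether requests are queuing behind the worker limit is to compare the multiprogramming level with the request counters (a sketch using documented server properties; run it while the N instances are active):

```sql
-- Worker-thread ceiling (-gn) versus requests currently running
-- and requests waiting for a free worker.
SELECT PROPERTY( 'MultiProgrammingLevel' ) AS worker_limit,
       PROPERTY( 'ActiveReq' )             AS active_requests,
       PROPERTY( 'UnschReq' )              AS unscheduled_requests;
```

If unscheduled_requests stays above zero, the server has more runnable requests than workers.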

(22 Dec '14, 12:55) Mark Culp

@Breck: SQL Anywhere server uses 2 physical and 24 logical processors.
All dbisql processes are running on the same computer. The result does not differ whether that computer is the same as the DB server or a different one.
The -zt option is turned on. The results from sa_performance_diagnostics(), when running 17 concurrent instances:

  • ReqTimeBlockLock: all zeros;
  • ReqTimeBlockContention: 12 to 18 percent of ReqTimeActive (15 percent average);
  • ReqTimeBlockIO: only about 0.4-0.75 percent of ReqTimeActive (0.5 percent average);
  • ReqTimeUnscheduled: near zero for all connections (e.g. ReqTimeUnscheduled = 0.001... when ReqTimeActive = 1520);
  • ReqStatus: Executing, BlockedContention, or BlockedIO.
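For reference, a query along these lines can reproduce the breakdown above (the column names follow the sa_performance_diagnostics() result set; the percentage expressions are illustrative):

```sql
-- Per-connection request-time breakdown, highlighting time spent
-- blocked on contention and I/O relative to total active time.
SELECT Number,
       ReqStatus,
       ReqTimeActive,
       ReqTimeBlockLock,
       100.0 * ReqTimeBlockContention / ReqTimeActive AS pct_contention,
       100.0 * ReqTimeBlockIO / ReqTimeActive         AS pct_block_io,
       ReqTimeUnscheduled
  FROM sa_performance_diagnostics()
 WHERE ReqTimeActive > 0
 ORDER BY pct_contention DESC;
```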

(23 Dec '14, 02:13) Arthoor

@Mark: The -gn option is set to 200. It must be that high because there are often WAITFOR statements running in many connections - though not at the time when these procedure P instances are running.

(23 Dec '14, 02:14) Arthoor

last updated: 23 Dec '14, 02:35