
Hi all, just wanting to know: are there any upper limits with respect to the size/complexity of a SQL statement executed against SQL Anywhere 12 through sacapi.dll?

The kinds of things I'm looking to understand are:

  • Maximum character length (I am guessing this would be bounded by the maximum string length in SA, i.e. 2GB), but are there further restrictions placed on this by either the engine or the dll?
  • Maximum number of prepared parameters?
  • Any other hard limits we might run into?

Thanks

asked 21 Jul '14, 21:21 by Scott Baldwin

edited 25 Jul '14, 03:21 by Volker Barth

I can't tell for the SA C API, but the maximum number of concurrent prepared statements per connection can be specified via the max_statement_count option...
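
For reference, a minimal sketch of checking and changing that option (the SET OPTION form assumes DBA authority):

    -- Show the current per-connection limit on prepared statements
    SELECT CONNECTION_PROPERTY( 'max_statement_count' );

    -- Raise the limit for all connections (requires DBA authority)
    SET OPTION PUBLIC.max_statement_count = 100;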

(22 Jul '14, 02:58) Volker Barth

Thanks Volker; multiple simultaneous statements on an individual connection are not my greatest concern at this stage.

The thing I'm worried about is that we might run into limits when trying to prepare large INSERT statements, say when we want to use UNION ALL-style statements to insert multiple rows at once, as shown here: http://stackoverflow.com/questions/2624713/how-do-i-insert-multiple-rows-without-repeating-the-insert-into-dbo-blah-part

I am worried about this because I have hit these limits in the past with SQL Server. It has a (default) limit of 256 MB, which is the maximum batch size of 65,536 times the network packet size (the latter defaults to 4 KB, and 65,536 × 4 KB = 256 MB).

(22 Jul '14, 03:22) Scott Baldwin

FWIW, when using SQL Anywhere 12 and above, you can simply add several rows within one INSERT statement, cf. this sample from the docs:

Support for more than one list of values
An INSERT statement can now contain more than one list of values, allowing several rows to be inserted at once. For example:

    INSERT INTO T (c1,c2,c3)
    VALUES (1,10,100), (2,20,200), (3,30,300);

That seems to work the same way as in MS SQL 2008 and should make those batches much shorter.


That being said, I can't tell whether there is a limit as in SQL Server, so others will have to answer that... On the other hand, if you just attempt to generate such statements (say, by constructing a statement based on a SELECT from a larger table), you might find out yourself. AFAIK, SQL Anywhere usually does not have "fixed" limits for queries but might run into memory constraints at runtime.
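
To make that self-test concrete, here is a minimal sketch (assuming a throwaway table T (c1,c2,c3) as in the sample above) that builds a single multi-row INSERT with 100,000 value lists and executes it:

    BEGIN
        DECLARE stmt LONG VARCHAR;
        -- Build one statement of the form:
        -- INSERT INTO T (c1,c2,c3) VALUES (1,10,100),(2,20,200),...
        SELECT 'INSERT INTO T (c1,c2,c3) VALUES ' ||
               LIST( STRING( '(', row_num, ',', row_num * 10, ',', row_num * 100, ')' ), ',' )
          INTO stmt
          FROM sa_rowgenerator( 1, 100000 );
        -- If a statement-size or memory limit exists, it should surface here
        EXECUTE IMMEDIATE stmt;
    END;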

(22 Jul '14, 03:36) Volker Barth

You might want to explore the various features of LOAD TABLE, in particular LOAD TABLE ( column names ) USING VALUE '... rows of data ...'.

Even on a bad day, LOAD TABLE will run rings around INSERT, BULK or otherwise :)

(22 Jul '14, 10:47) Breck Carter

Thanks, both of you; both of these answers are helpful, but the fundamental issue still remains. What I am trying to solve is a "Copy" operation where objects in our database are copied and pasted. To achieve this we are using a statement like this:

    INSERT INTO <table> ( <columns that are updated from code>, <columns that are copied from existing table> )
    SELECT <fields derived in code>, <fields from existing table> FROM <existing table> WHERE Id = <pk>
    UNION ALL
    SELECT <fields derived in code>, <fields from existing table> FROM <existing table> WHERE Id = <pk>
    UNION ALL ...

Each SELECT statement returns a single row from the existing table, as we need to derive some column values in code.
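
For illustration, a concrete (entirely hypothetical) instance of that pattern, where the derived value is a name computed in code and two rows are copied:

    -- Widget, its columns, and the derived names are made up for this example;
    -- in the real system the derived values are bound in from code.
    INSERT INTO Widget ( Name, Colour, Size )
    SELECT 'Copy of Widget 17', Colour, Size FROM Widget WHERE Id = 17
    UNION ALL
    SELECT 'Copy of Widget 42', Colour, Size FROM Widget WHERE Id = 42;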

I am trying to determine just how far this will scale. The majority of the time users will only be copying a few objects (rows) at a time, maybe occasionally up to a few thousand, but I know from past (bitter) experience that there is that 0.5% of users who are able to find the limits of the system (even if they are doing things we wouldn't really recommend). I guess I'd just like to know those limits before our users find them, especially if those limits are fairly low (i.e. if we can't copy more than a few thousand rows like this, then we will actually need to put a workaround in place).

LOAD TABLE may actually help, but we'd still have to create the file first by combining data from the existing table with our data derived in code, which would still mean performing all of the SELECT queries, which in Abbott and Costello parlance is "First Base".

(22 Jul '14, 21:33) Scott Baldwin

> Each SELECT statement returns a single row from the existing table, as we need to derive some column values in code.

So you say it's not possible to calculate these values as part of the select list of a big INSERT ... SELECT statement? Or possibly a MERGE statement? Possibly by calculating part of these "derived" values with the help of a procedure that takes data from the existing tables as input? IMHO, these are really mighty ETL tools...
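
For example, here is a minimal sketch of the single-statement variant; the Widget table and the derive_name function are hypothetical:

    -- derive_name would be a stored function encapsulating the logic
    -- that is currently performed in client code.
    INSERT INTO Widget ( Name, Colour, Size )
    SELECT derive_name( src.Id ), src.Colour, src.Size
      FROM Widget src
     WHERE src.Id IN ( 17, 42 );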

> I am trying to determine just how far this will scale.

I do understand that you are interested in hard limits (and they may exist, I simply do not know; the SQL Anywhere engineers certainly will know...). However, as suggested, I'd recommend testing that yourself by constructing a giant INSERT over a UNION... Simply try to be one of the 0.1% top users :) Even if hard limits do exist, you might run into "soft limits" beforehand. E.g. the number of tables a database can hold is limited (4,293,918,719). While it would not violate "hard limits" to join all these tables (to cite the docs: "Number of tables referenced per transaction: No limit"), I guess such a query would typically fail because it would demand much more memory than available. That's a guess, I have not tried it myself :)

(23 Jul '14, 03:28) Volker Barth

> create the file first

The USING VALUE form of LOAD TABLE uses an expression, not a file. Here is an example from the Help...

The following syntax specifies the value directly, inserting two rows with values of 4 and 5, respectively:

... USING VALUE '4\n5'...
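
Spelled out as a complete statement, a minimal sketch assuming a throwaway single-column table t:

    CREATE TABLE t ( x INT );

    -- USING VALUE reads the rows from an expression rather than a file;
    -- \n is the default row delimiter, so this loads two rows: 4 and 5.
    LOAD TABLE t ( x ) USING VALUE '4\n5';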

(23 Jul '14, 13:09) Breck Carter

I do not believe there are any specific hard limits you should run into when going through sacapi.dll, apart from the normal hard limits imposed by the server. To your specific points:

  • Maximum character length (I am guessing this would be bounded by the maximum string length in SA, i.e. 2GB), but are there further restrictions placed on this by either the engine or the dll?

Inserting or selecting very large values over sacapi.dll can be very memory intensive. In particular, dealing with values larger than 500 MB as a single block could cause low-memory conditions. However, sacapi.dll has APIs for inserting and selecting data in chunks (sqlany_send_param_data for sending and sqlany_get_data for fetching). As long as the chunked APIs are used for large values, you should not have a problem.

  • Maximum number of prepared parameters?

The server's resource governor has some default values (such as a maximum of 50 active prepared statements per connection, i.e. the max_statement_count option mentioned above), but none of these are specific to sacapi.dll.

  • Any other hard limits we might run into?

Nope, not that I am aware of.


answered 24 Jul '14, 16:18 by Eric Farrar
