Hi all, just wanting to know: are there any upper limits with respect to the size/complexity of a SQL statement executed against SQL Anywhere 12 through sacapi.dll? The kinds of things I'm looking to understand are:
Thanks
I do not believe there are any specific hard limits you should run into when going through sacapi.dll, apart from the normal hard limits imposed by the server. To your specific points:
INSERTing or SELECTing very large values over sacapi.dll can be very memory intensive. In particular, dealing with values > 500 MB as a single block could cause low-memory conditions. However, sacapi.dll has APIs for INSERTing and SELECTing chunked data. As long as the chunked APIs are used for large values, you should not have a problem.
The server's governor has some default limits (such as a maximum of 50 active prepared statements per connection), but none of these are specific to sacapi.dll.
Nope, not that I am aware of.
Can't tell for the SA CAPI, but the maximum number of concurrent prepared statements per connection can be specified via the max_statement_count option...
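For instance, a DBA could raise it with an ordinary SET OPTION statement (a minimal sketch; the value 100 is arbitrary):

    -- raise the per-connection prepared statement limit for all users
    SET OPTION PUBLIC.max_statement_count = 100;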
Thanks, Volker - multiple simultaneous statements on an individual connection are not my greatest concern at this stage.
The thing I'm worried about is that we might run into limits when trying to prepare large INSERT statements, say when we want to use UNION ALL-style statements to insert multiple rows at once, as shown here: http://stackoverflow.com/questions/2624713/how-do-i-insert-multiple-rows-without-repeating-the-insert-into-dbo-blah-part
I am worried about this because I have hit these limits in the past with SQL Server. It has a (default) limit of 256 MB, i.e. the maximum batch size of 65,536 network packets times the default network packet size of 4 KB (65,536 × 4 KB = 256 MB).
FWIW, when using SQL Anywhere 12 and above, you can simply add several rows within one INSERT statement, cf. the multi-row VALUES syntax in the docs.
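A minimal sketch of that syntax (the table t and its columns here are hypothetical, not the docs sample itself):

    CREATE TABLE t ( id INT, name VARCHAR(20) );
    INSERT INTO t ( id, name ) VALUES
       ( 1, 'one' ),
       ( 2, 'two' ),
       ( 3, 'three' );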
That seems to work like the equivalent syntax in MS SQL 2008 and should make those batches much shorter.
That being said, I can't tell whether there is a limit as in SQL Server, so others will have to answer that... On the other hand, if you just attempt to generate such statements (say, by constructing a statement based on a SELECT from a larger table), you might find out yourself. - AFAIK, SQL Anywhere usually does not have "fixed" limits for queries but might run into memory constraints at runtime.
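A hedged sketch of that "generate it and see" approach, assuming a hypothetical source table big_t(id, val) and target table copy_t, and using SQL Anywhere's LIST() string aggregate:

    -- build one giant INSERT ... UNION ALL ... statement as a single string
    SELECT 'INSERT INTO copy_t ( id, val ) '
           || LIST( 'SELECT id, val FROM big_t WHERE id = ' || id,
                    ' UNION ALL ' )
      FROM big_t;

Feeding the resulting string to EXECUTE IMMEDIATE in a batch would then show at what size, if any, the server balks.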
You might want to explore the various features of LOAD TABLE, in particular LOAD TABLE table-name ( column-names ) USING VALUE '... rows of data ...'.
Even on a bad day, LOAD TABLE will run rings around INSERT, BULK or otherwise :)
Thanks, both of you; both these answers are helpful, but the fundamental issue still remains. What I am trying to solve is a "Copy" operation where objects in our database are copied and pasted. To achieve this we are using a statement like this:
INSERT INTO <table> ( <columns that are updated from code>, <columns that are copied from existing table> )
SELECT <fields derived in code>, <fields from existing table> FROM <existing table> WHERE Id = <pk>
UNION ALL
SELECT <fields derived in code>, <fields from existing table> FROM <existing table> WHERE Id = <pk>
UNION ALL ...
Each SELECT statement is a single row from the existing table, as we need to derive some column values in code.
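A concrete (entirely hypothetical) instance of the pattern, copying two rows of a table widget while overriding parent_id with a value computed in code:

    INSERT INTO widget ( parent_id, name, colour )
    SELECT 42, name, colour FROM widget WHERE Id = 1001
    UNION ALL
    SELECT 42, name, colour FROM widget WHERE Id = 1002;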
I am trying to determine just how far this will scale. The majority of the time users will only be copying a few objects (rows) at a time, maybe occasionally up to a few thousand, but I know from past (bitter) experience that there is that 0.5% of users who are able to find the limits of the system (even if they are doing things we wouldn't really recommend). I guess I'd just like to know those limits before our users find them, especially if those limits are fairly low (i.e. if we can't copy more than a few thousand rows like this, then we will actually need to put a workaround in place).
LOAD TABLE may actually help, but we'd still have to create the file first by combining data from the existing table with our data derived in code, which would still mean performing all of the SELECT queries; in Abbott and Costello parlance, that's "First Base".
So you say it's not possible to calculate these values as part of the SELECT list of a big INSERT ... SELECT statement? Or possibly a MERGE statement? Possibly by calculating part of these "derived" values with the help of a procedure that takes data from the existing tables as input? - IMHO, these are really mighty ETL tools.
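A hedged sketch of that idea, reusing the hypothetical widget table from above and assuming the "derived" values can be expressed in SQL (here: a fixed new parent and a renamed copy):

    INSERT INTO widget ( parent_id, name, colour )
    SELECT 42, name || ' (copy)', colour
      FROM widget
     WHERE Id IN ( 1001, 1002, 1003 );

One statement and a single SELECT, with no UNION ALL per row; if the derivation is too complex for an expression, a user-defined function or procedure could supply it per row.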
I do understand that you are interested in hard limits (and they may exist, I simply do not know; the SQL Anywhere engineers certainly will know...). However, as suggested, I'd recommend testing that yourself by constructing a giant INSERT over a UNION... Simply try to be one of the 0.1% top users :) - Even if hard limits do exist, you might run into "soft limits" beforehand. E.g. the number of tables a database can hold is limited (4,293,918,719). While it would not violate "hard limits" to join all these tables (to cite: "Number of tables referenced per transaction: No limit"), I guess such a query would typically fail because it would demand much more memory than available. That's a guess, I have not tried that myself :)
The USING VALUE form of LOAD TABLE uses an expression, not a file. Here is an example from the Help...
The following syntax specifies the value directly, inserting two rows with values of 4 and 5, respectively:
... USING VALUE '4\n5'...
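Fleshed out into a runnable sketch (the table name and column are hypothetical; '\n' inside the string acts as the row delimiter):

    CREATE TABLE demo ( x INT );
    -- loads two rows, 4 and 5, directly from the string expression
    LOAD TABLE demo ( x ) USING VALUE '4\n5';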