
We have a custom process for loading data into a table of our database that is currently written using an INSERT statement that loads one row of data into the table at a time. I want to speed that up by using the INSERT INTO SELECT FROM OPENSTRING construct, where I'll pass a BLOB containing the data for a block of rows as a parameter.

Before anyone tells me to use SABulkCopy, we've tested it and we get better performance loading data the way we're currently doing it, one row at a time. I think that SABulkCopy must not be using the OPENSTRING construct.

In any case, my assumption is that the data in the BLOB must be laid out as an array of rows, with the layout of each row matching the layout of the columns in the table. What I need is documentation on how to format the data. I'm guessing that C# ints are just 4-byte integers in little-endian order, longs are 8-byte integers in little-endian order, etc. I think varchars are a 2-byte length followed by as many characters as there are in the string. But how are uniqueidentifiers, DateTimes, and DateTimeOffsets stored in the BLOB?

asked 09 May '13, 08:27 by TonyV


So your data is laid out in binary form (as it is sent that way across the wire, as you wrote in the other FAQ)?

As Breck has explained, OPENSTRING() usually expects data in string format, just like LOAD TABLE. Therefore you would usually have to convert the data into an appropriate textual representation yourself before running OPENSTRING() on it.

Note: OPENSTRING also supports OPTION (FORMAT BCP); however, I do not know whether that binary format, used by ASE (and MS SQL, methinks), fits your data.

(09 May '13, 16:55) Volker Barth

Just to add: While I would think it's easier for you to format your data into fitting SQL string literals yourself than to figure out how to make OPENSTRING() parse your binary data, casting from binary values with the help of HEXTOINT and similar functions may work as well - cf. this doc page.

(09 May '13, 17:00) Volker Barth
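As a tiny illustration of the conversion functions Volker mentions (a sketch using only their documented string forms; how they would apply to Tony's packed binary layout is left open):

```sql
-- HEXTOINT converts a hexadecimal string to an integer;
-- INTTOHEX is its inverse.
SELECT HEXTOINT ( '0x0000007B' ) AS as_int,   -- 123
       INTTOHEX ( 123 )          AS as_hex;
```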

Actually, the data is transmitted over the wire in XML. When it is received, it is deserialized into .NET objects. By the time I am writing it to the database, it's in binary (this code never sees the XML).

I am going to create a memory stream and write the data in binary to that memory stream. Or I could write it to the same memory stream as string data and let OPENSTRING parse it. The problem is what field & record delimiters to use in that case. You don't want the data to contain the same characters that are being used as delimiters.

(10 May '13, 08:54) TonyV
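On the delimiter-collision worry: one possible way out (a sketch against a hypothetical two-column table t2, assuming OPENSTRING honors LOAD TABLE-style quoting, as the quoted ''aaa'' values in Breck's answer suggest) is to quote string values, so an embedded comma no longer clashes with the field delimiter:

```sql
CREATE TABLE t2 (
   id  INTEGER        NOT NULL PRIMARY KEY,
   txt VARCHAR ( 50 ) NOT NULL );

BEGIN
DECLARE @s LONG VARCHAR;
-- The comma inside the quoted value is data, not a delimiter.
SET @s = STRING (
   '1,''contains, a comma''\x0d\x0a',
   '2,''plain''\x0d\x0a' );
INSERT t2
SELECT * FROM OPENSTRING ( VALUE @s )
             WITH ( TABLE t2 ( id, txt ) )
           OPTION ( DELIMITED BY ',' ) AS s;
END;
```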

My blog is your friend; try this search string in Google...


Then try this one...


(10 May '13, 09:09) Breck Carter

If the data is in XML, you can use the OPENXML() function to parse and insert it.

This is a fragment from a procedure, where _XML is the parameter holding the XML stream:

FROM openxml(_XML,'/CATALOGUE' )

You can see that it is a standard INSERT ... SELECT statement; in the WITH clause you define the relationship between the XML structure and the table structure.

(10 May '13, 09:46) Costa
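Costa's fragment could be fleshed out along these lines (a sketch only: the /CATALOGUE/ITEM structure, the table "item", and the WITH clause mappings are all hypothetical, since the real procedure was not shown):

```sql
-- Hypothetical: one <ITEM> element per row under <CATALOGUE>,
-- loaded into a hypothetical table "item".
CREATE PROCEDURE load_catalogue ( IN _XML LONG VARCHAR )
BEGIN
    INSERT INTO item ( id, name, price )
    SELECT id, name, price
      FROM OPENXML ( _XML, '/CATALOGUE/ITEM' )
           WITH ( id    INTEGER           '@id',
                  name  VARCHAR ( 100 )   'NAME',
                  price DECIMAL ( 11, 2 ) 'PRICE' );
END;
```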

Generally, a good hint, but Tony has already stated that

this code never sees the XML.

(10 May '13, 12:59) Volker Barth

Yes, I have read that, but as I understand it, the client:

1) Receives the data in XML format.

2) Transforms the data into .NET objects.

3) Builds a big string from these objects so as to make only one call to the server.

My idea is :

1) Receive the data in XML format.

2) Send the data in XML format to the server in one call.

3) Do the parsing on the server side, as shown in my sample.

This construct eliminates the .NET-objects step. In my experience this alone is between 2 and 3 times faster...

(13 May '13, 02:44) Costa

It's not that hard... it's called openSTRING for a reason ( hint: it deals with strings :)

CREATE TABLE t ( 
   a   BIGINT            NOT NULL PRIMARY KEY,
   b   INTEGER           NOT NULL,
   c   VARCHAR ( 10 )    NOT NULL,
   d   DECIMAL ( 11, 2 ) NOT NULL,
   e   DATE              NOT NULL,
   f   TIMESTAMP         NOT NULL );

BEGIN
DECLARE @b LONG VARCHAR;

SET @b = STRING ( 
   '1,101,''aaa'',123.45,2013-05-09,2013-05-09 16:11:16.057\x0d\x0a',
   '2,202,''bbb'',234.56,2013-05-10,2013-05-11 16:11:16.063\x0d\x0a',
   '3,303,''ccc'',456.78,2013-05-11,2013-05-11 16:11:16.069\x0d\x0a' );

INSERT t
SELECT * FROM OPENSTRING ( VALUE @b )
             WITH ( TABLE t ( a, b, c, d, e, f ) )
           OPTION ( DELIMITED BY ',' ) AS openstring_t;

SELECT * FROM t ORDER BY a;
END;

a                    b           c          d             e          f                       
-------------------- ----------- ---------- ------------- ---------- ----------------------- 
                   1         101 aaa               123.45 2013-05-09 2013-05-09 16:11:16.057 
                   2         202 bbb               234.56 2013-05-10 2013-05-11 16:11:16.063 
                   3         303 ccc               456.78 2013-05-11 2013-05-11 16:11:16.069 

See also: Pushing OPENSTRING and CROSS APPLY

and this: JOIN to a text file without a proxy table


answered 09 May '13, 16:13 by Breck Carter

edited 09 May '13, 16:22

Thank you, Breck. It'll be easier to build a string on the fly than a binary block.

(10 May '13, 08:46) TonyV

Keep in mind that it may be easier, but perhaps neither memory-friendly nor faster in the end. Using up more memory in your .NET program to parse, regenerate, and send strings to the database server takes memory away from the database server that it could otherwise use to insert the rows into the database pages.

This is also why when MobiLink sends rows down to a remote, both the database server and the synchronization client dynamically manage memory to try and balance these two conditions. MobiLink also uses a packed binary format to efficiently transfer the rows.

(14 May '13, 10:24) Jeff Albion

Jeff: This has been running in production now for a few months. Performance is great and definitely faster than inserting the data one row at a time. We've had to limit the amount of memory that the database server uses to begin with, or other applications do not perform well.

Plus, by sending data for many rows in one I/O operation, we end up reducing the total time doing I/O. That is, the time it takes to upload N rows in one I/O is less than the time it takes to upload N rows one at a time. It's been a big win for us.

(07 Aug '13, 09:11) TonyV