We are in the process of updating a fairly large synchronizing schema to handle multiple row-partitioning schemes on a per-remote-database basis (SQL Anywhere 12.0.1). The current working scheme supports only a single partitioning scheme at a time, is SQL-script based, and can synchronize against a version of the consolidated db hosted on three separate platforms (SQL Anywhere, MS SQL Server, and Oracle).

Our first thought was that using the MobiLink Server .NET API might reduce the many scripts required to support multiple consolidated db platforms and give us a lot of flexibility in dynamically altering our row selection based on arbitrary per-remote-database configuration. We still believe this is a path well worth pursuing, but we are perhaps running into a few of the outer edges of the current state of the API, especially as regards large data columns (BLOBs, CLOBs, etc.).

Due to our internal release schedule, we must fall back for the moment to a tried-and-true SQL script implementation, with different script versions for each partitioning scheme. However, I've been advised to get an introductory question out on the forum before I'm able to fully concentrate on the .NET solution again.

We are currently (slowly) putting together a database smaller than the large schema to set up a reproducible for the things we are running into, but due to deadlines that needs to remain a background task while we concentrate on the fallback implementation. Pending that, I'll just describe a couple of the blocking issues we are hitting.

  • We originally ran into problems in the ODBC wrapper where the server would crash when reading large data types. That was mostly fixed in build 3554 (thank you), and it allows our SQL Anywhere consolidated implementation to run to completion. There may still be problems when running against Oracle (the symptoms are similar), but we haven't had the opportunity to isolate that completely yet.
  • Under SQL Server, and using the connection received from ML Server, long data types (varchar(max) or varbinary(max)) are all returned as null.
  • We understand that the capabilities of some ODBC drivers differ, but in this case connecting through a separate .NET ODBC wrapper using the same DSN and the same query retrieves all the data, so we don't believe it is a limitation of the ODBC driver itself (unless, of course, the .NET Framework is doing some strange magic under the covers to make large data retrieval work in spite of lower-level limitations). A standalone sketch of that check follows this list.
  • As a workaround, we also tried creating a second reader to retrieve single-row, single-column results containing the large data types, keyed on the pk of the current row being processed for download, but we haven't been able to get the second reader to execute without error. This may be the result of some ignorance on our part; I confess I usually have at least one more layer of abstraction between my code and the actual ODBC driver, and am somewhat unfamiliar with any tricks that may be required.
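
For reference, the standalone check mentioned in the third bullet was essentially the following sort of thing (a minimal sketch; the DSN, table, and column names are placeholders, not our real schema):

// Standalone check, outside the MobiLink server, that the same DSN and
// query return the long values through the stock System.Data.Odbc wrapper.
using System;
using System.Data.Odbc;

class OdbcLongDataCheck
{
   static void Main()
   {
      using (var conn = new OdbcConnection("DSN=ConsolidatedDSN"))
      {
         conn.Open();
         using (var cmd = new OdbcCommand(
            "SELECT pk, big_text, big_binary FROM SyncTable", conn))
         using (OdbcDataReader reader = cmd.ExecuteReader())
         {
            while (reader.Read())
            {
               // Both long columns come back populated here; the same
               // query through the ML Server connection yields null.
               string bigText = reader.IsDBNull(1) ? null : reader.GetString(1);
               byte[] bigBinary = reader.IsDBNull(2) ? null : (byte[])reader.GetValue(2);
               Console.WriteLine("{0}: text={1} binary={2}",
                  reader.GetValue(0),
                  bigText == null ? -1 : bigText.Length,
                  bigBinary == null ? -1 : bigBinary.Length);
            }
         }
      }
   }
}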

Snippet of the .NET reader code:

// Called from our download handling: reads rows from the consolidated db
// via the connection handed to us by the ML Server and copies column
// values into the download stream command's parameters.
private void executeDownload(string sqlStatement, DBConnection connection, IDbCommand downloadCommand)
{
   var selectCommand = connection.CreateCommand();
   selectCommand.CommandText = sqlStatement;
   selectCommand.Prepare();

   var tableUpsertParameters = downloadCommand.Parameters;
   var reader = selectCommand.ExecuteReader();

   object[] row;
   while ((row = reader.NextRow()) != null)
   {
      for (int column = 0; column < row.Length && column < tableUpsertParameters.Count; column++)
...
      // snipped a bunch of stuff (including the setup of trgColumn), but when
      // the following condition is true, the value of row[column] is null
      // whether or not it is null in the db row
      if (reader.ColumnTypes[column] == SQLType.SQL_WLONGVARCHAR && trgColumn.DbType == DbType.String)

The content of sqlStatement is a fairly common download_cursor-style script: a SELECT matching the columns and column order of the remote table, with a WHERE clause comparing the last download timestamp to the consolidated timestamp value. A hypothetical example follows.
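
To make that concrete, sqlStatement typically holds something of this shape (the table and column names are illustrative, not our real schema):

// Hypothetical shape of sqlStatement: the select list matches the remote
// table's columns and order; the WHERE clause is the usual timestamp filter.
string sqlStatement =
   "SELECT pk, description, big_text, big_binary " +
   "  FROM SyncTable " +
   " WHERE last_modified >= ?";   // bound to the last download timestamp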

There are a couple of non-blocking things that we've worked around, but that you may want to know about. We had the luxury of changing the data types of these particular columns to something that worked better.

  • When the column on the remote db is of type XML and the corresponding type on the consolidated is, say, a character type, writing the value to the download stream causes an exception. Interestingly, the exception occurs on the ExecuteNonQuery on the download stream, not on the assignment to the download stream parameter. The XML in question was valid and well-formed, but given the three different consolidated platforms, that type differed enough that we were OK with just converting them all to CLOB/long varchar types.
  • When the column type is tinyint, we get casting exceptions no matter how we cast our value first when assigning it to the download stream parameter (a sketch of what we tried follows this list). I hadn't even remembered we still had a tinyint, and we just changed the column to a larger integer type, since we don't expect a large number of rows in that table anyway.
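
For reference, the tinyint attempts looked roughly like this (a sketch; downloadParam and value are illustrative stand-ins for the download stream parameter and the value read from the consolidated row):

// Sketch of the variants we tried for a tinyint column; each assignment
// produced a casting exception in our runs. Types are from System.Data.
private void assignTinyIntValue(IDataParameter downloadParam, object value)
{
   downloadParam.Value = value;                          // unconverted
   downloadParam.Value = Convert.ToByte(value);          // explicit conversion
   downloadParam.Value = (byte)Convert.ToInt32(value);   // convert, then narrow
}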

My intent is to create a small reproducible that demonstrates all this, but that will need to be a background activity for the moment. In the meantime, any suggestions that will clear up points of ignorance on my part will be greatly appreciated.

asked 17 Feb '12, 19:14 by Ron Emmert (edited 24 Feb '12, 19:00)

Ron, can you help clarify some of the problems you're having?

Under SQL Server, and using the connection received from ML Server, long data types - varchar(max) or varbinary(max) - are all null.

Can you provide a sample code snippet on how you retrieve the row from the consolidated?

We also tried creating a second reader as a workaround to retrieve single row single column results containing the large data types based on the pk of the current row being processed for download, but haven't been able to get the second reader to execute without error.

Do you have a sample code snippet for this as well? What error are you getting?

Thanks, Reg

(23 Feb '12, 09:11) Reg Domaratzki

I'll convert this comment to an answer when I have answers to all your questions.

There is a problem in our .NET-to-ODBC bridge interpreting the varchar(max), varbinary(max), and nvarchar(max) data types for MS SQL Server. They are described as "zero length" by the MS SQL Server ODBC driver, which is a special condition indicating that they have no maximum length (well, 2GB). Our bridge was interpreting this as length zero, hence the NULL values. A fix is forthcoming, but for now you can work around the problem by using the TEXT or IMAGE data type.

(24 Feb '12, 14:23) Reg Domaratzki

I repro'd the conversion error you're seeing with tinyint. I'll try to get a fix in for it on Monday.

(24 Feb '12, 18:31) Bill Somers

I added a snippet from the basic reader for the SQL Server consolidated above. Unfortunately, my project and the db artifacts I have readily available are not in a good enough state to give you better code examples.

However, a fix to the problem described on the ODBC bridge would pretty much handle the blocking problems that we know of for SQL Server.

I'm embarrassed to say I'm unable to locate any of the versions of code we tried on the second snippet you asked for. Apparently none of the many variations we tried made it into source control. However, they were all attempts to work around the original issue and may not matter.

I hope to actually reproduce it in a smaller project next week, and I'll get the repro with a stack trace to you when I can. In the meantime, I believe the simplest thing we tried was to just retrieve a single row and column (with the ids embedded in the SQL statement). We created a second command using connection.CreateCommand(), open concurrently with the selectCommand, and got an exception when executing secondCommand.ExecuteReader(). There may have been something subtle we missed in setting it up; if so, hopefully that will come out when I get the repro done. The shape of the attempt follows.
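
Reconstructed from memory (since the actual variants didn't make it into source control), it was roughly the following; the table, column, and pk value are illustrative:

// A second command on the same ML Server connection, opened while the
// first reader is still active, fetching one large column by pk. The pk
// value was embedded directly in the SQL to avoid parameter setup.
var secondCommand = connection.CreateCommand();
secondCommand.CommandText =
   "SELECT big_text FROM SyncTable WHERE pk = 42";   // illustrative
secondCommand.Prepare();

var secondReader = secondCommand.ExecuteReader();    // <- the exception occurred here
object[] singleRow = secondReader.NextRow();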

We still experienced a crash running against an Oracle consolidated that seemed very similar to what was fixed in CR:695370. However, we were in the process of switching short-term strategies, and I never got to follow that up very well; I'll get back to it with the repro.

(24 Feb '12, 18:57) Ron Emmert

When we talked about this one internally, we wondered why we still had a tinyint in that column, and we changed it, so it's not a blocking problem for us at least. By all means, you probably want to take care of it, but there isn't a rush on it on our account.

(24 Feb '12, 18:59) Ron Emmert

I've fixed three of the issues you've raised:
1) CR# 701342 - NCHAR, XML, and VARBIT columns were not supported by the ML direct row APIs
2) CR# 701343 - unnecessary cast exceptions thrown for ML .NET APIs
3) CR# 701347 - varchar(max) and varbinary(max) columns in MSS did not work with the ML dnetodbc bridge


Edited to add the CR links, but they don't seem "filled with contents" so far... Volker


answered 12 Mar '12, 14:14 by Bill Somers (edited 13 Mar '12, 05:11 by Volker Barth)

Version 12.0.1 build 3602 is the first build that will include these fixes.

(12 Mar '12, 14:16) Reg Domaratzki

From the short descriptions, it looks like those will take care of everything we know about. I'll mark this as the answer when I can test with an EBF >= 3602. Thanks.

(12 Mar '12, 18:27) Ron Emmert

EBF 3605 fixed the long data type exceptions and the cast exceptions for the data types we still have in our synchronizing scheme.

This question can be considered answered to give it a bit of closure, but we are finding some related problems we'll follow up through either another question or other channels.

Thanks for your responsiveness.

We discovered one more data-type-related problem under MS SQL, which we'll post as a separate question. The short description is that varbinary(255) columns (the exact size probably doesn't matter) return an array of 255 bytes from the reader regardless of the actual size of the value in the database.

We are also still experiencing the AccessViolationException under the Oracle consolidated db, but it takes conditions I haven't been able to reproduce outside the context of our synchronizing db and app. I've initiated a support case for that one.

(12 Apr '12, 18:06) Ron Emmert