
Using v17.0.11.7173, a SELECT INTO statement produces this assertion, which stops the database.

  • It does happen when I SELECT INTO a permanent table.
    (But the table itself is created before the assertion appears, i.e. it does exist when the database is recovered - but of course empty.)
  • It also happens when I first create the "result" table and then use INSERT ... SELECT.
  • It does not happen when I SELECT INTO a local temporary table.

The original query uses a proxy table, but the error also arises when I (successfully) select from the proxy table into a local temporary table and then SELECT INTO the permanent table from that local temporary table.
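To make the three cases concrete (table names are placeholders, not the real ones):

-- fails with the assertion: SELECT INTO a permanent table
select * into table DBA.PermanentCopy from DBA.SomeProxyTable;
-- fails as well: pre-created permanent table plus INSERT ... SELECT
insert into DBA.PermanentCopy select * from DBA.SomeProxyTable;
-- works: SELECT INTO a local temporary table
select * into local temporary table LtCopy from DBA.SomeProxyTable;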


I'm still checking whether v17's CREATE TABLE LIKE / AS enhancement helps as a workaround and will try to build a small reproducible sample - but for now I would appreciate any hint as to what this assertion means and why it might appear here.
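(For reference, what I have in mind is roughly the following - the exact LIKE syntax is from memory and still to be checked against the v17 docs:)

-- copy the schema only, then fill the table in a separate step
create table DBA.PermanentCopy ( like DBA.SomeProxyTable );
insert into DBA.PermanentCopy select * from DBA.SomeProxyTable;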

The assertion never showed up with v16.


UPDATE: This seems to be related to NCHAR/CHAR issues: The proxy source data is NCHAR, and I copied it as such to the local tables (they were created with NCHAR columns, too). However, if I convert those columns via TO_CHAR(), the SELECT INTO works as expected. The database uses the default windows-1252 as CHAR charset and UTF-8 as NCHAR charset. So I have a workaround.
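Roughly, with placeholder names, the working variant looks like this:

-- converting the NCHAR column to the CHAR charset (windows-1252 here) avoids the assertion
select ID, to_char(SomeNcharColumn) as SomeNcharColumn
   into table DBA.PermanentCopy
   from DBA.SomeProxyTable;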

asked 09 Nov '22, 11:46

Volker Barth

edited 09 Nov '22, 13:00


This assertion, along with the related 100914, is reported when checking that the string is not larger than the declared column size. 100913 is reported for byte-length-semantic character columns and 100914 for character-length-semantic columns, i.e. char(10 char). The character-semantic length is calculated based on the collation. This assertion should only occur when applying operations from a transaction log. If you can reproduce this, we would be interested in investigating the cause.
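For illustration, the two declaration styles referred to (hypothetical table):

create table DBA.LengthSemanticsDemo (
   ByteCol char(10),       -- byte length semantics: at most 10 bytes (assertion 100913)
   CharCol char(10 char)   -- character length semantics: at most 10 characters (assertion 100914);
                           -- the byte limit then depends on the collation
);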

(09 Nov '22, 14:16) Chris Keating

The assertion failure means some internal checking and truncation is missing and we are logging an operation to the redo log with more data than the column can actually hold. For example, suppose you inserted 'ab' into a char(1) column but the server didn't truncate the value: the assertion stops us from logging an operation which represents an insert of a 2-byte string into a 1-byte column. I will forward this issue internally to someone who might be able to look at it. It might also be related to recent changes for NCHAR in remote sources.
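To illustrate with that example (hypothetical table) - normally the server truncates the value or raises an error before anything reaches the log:

create table DBA.TruncDemo ( c char(1) );
set temporary option string_rtruncation = 'On';
insert into DBA.TruncDemo values ('ab');
-- expected: a "Right truncation of string data" error (or silent truncation to 'a'
-- with string_rtruncation = 'Off'), never an assertion failure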


answered 09 Nov '22, 14:04

John Smirnios

What still puzzles me: the table definitions of both the local temporary table and the permanent table created via SELECT INTO are almost identical when I use

call sa_describe_query('select * from [AccordingTable]');

to get the table schema. (Aside: Is there a different method to get this for local temporary tables? sa_get_table_definition() seems to query the system catalog and so does not help here.)

Furthermore: the description of the created permanent table is identical to that of the proxy table, whereas the local temporary table has a smaller width for timestamp columns - they are described with width "23", whereas the proxy and permanent tables have width "26". For NCHAR columns, the descriptions are identical in all table instances.
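For reference, the comparison boils down to three calls (placeholder names; the "width" column of the result set is the interesting part):

call sa_describe_query('select * from DBA.SomeProxyTable');  -- proxy table
call sa_describe_query('select * from DBA.PermanentCopy');   -- permanent table created via SELECT INTO
call sa_describe_query('select * from LtCopy');              -- local temporary table created via SELECT INTO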

I'll try to narrow it down to a sample.


A general question: As this seems related to the rollback log, I do not understand the difference between a permanent and a local temporary table in that respect - I would assume the local temporary table is not created with a NOT TRANSACTIONAL clause, so its inserts should also be part of the rollback log?
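(For clarity, this is the clause I mean - as far as I understand, without it the temporary table's changes still take part in rollback:)

-- default: changes to the local temporary table can be rolled back
declare local temporary table LtCopy ( ID int, Value nvarchar(30) );
-- explicitly non-transactional: changes are not recorded for rollback
declare local temporary table LtCopyNt ( ID int, Value nvarchar(30) ) not transactional;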

(10 Nov '22, 04:38) Volker Barth

Update: In spite of the slightly varying schema description for timestamp columns, they do not seem to cause issues.

However, NCHAR columns do, and it does not seem to matter whether byte_length() and char_length() are different (as I would have assumed - which would usually apply in my case to German words with umlauts) or not: even for columns where both values are identical, the SELECT INTO the permanent table fails if the corresponding byte_length() is more than half of the original column width.
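(To make that concrete - assuming byte_length() counts the NCHAR value's UTF-8 bytes, which is how I read its results here:)

select char_length(N'Käse') as NofChars,  -- 4 characters
       byte_length(N'Käse') as NofBytes;  -- 5 bytes: the umlaut takes 2 bytes in UTF-8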

(10 Nov '22, 06:19) Volker Barth

Inserts to temporary tables are not logged (and we are referring to the redo log here, not the undo log). That's why you will not encounter the assertion failure while inserting into a temporary table.

(10 Nov '22, 10:35) John Smirnios

I guess the issue is solved:

There was a difference between the proxy table's column definition (particularly the width of NVARCHAR columns) and the remote table's definition: say the proxy has NVARCHAR(30) whereas the remote table has NVARCHAR(50) and contains values with char_length() > 30. I can't tell for sure but strongly assume that the length in the remote table was increased after the proxy table was created. (It's a third-party database, so this is difficult to find out.)

The too-short column definition in the proxy table makes sa_describe_query() report NVARCHAR(30), so both the local temporary table and the permanent table are reasonably created with NVARCHAR(30), and the INSERT of values with char_length() > 30 apparently triggers that assertion. (I still do not understand why inserting into the local temporary table did not trigger it, and I generally would have expected an insert error since option string_rtruncation is set to On here - which would be way better than an assertion...)

After dropping and recreating the proxy table, the problem is solved.


FWIW: As stated in the question, with v16 this assertion was never raised, and values > 30 chars were happily accepted by the freshly created permanent table and successfully merged into other permanent tables, so I do assume the changes in v17's ODBC/NCHAR treatment make the difference here.

I have not provided a sample but assume that it is quite easy to reproduce this with a remote table and an "inappropriate" proxy table definition with too-short column widths.


UPDATE: Here's a reproducible sample.

  • Step 1: Use a local SQL Anywhere 16 demo database to create a small table with an NCHAR column:
create table DBA.T_Remote (
   ID int default autoincrement primary key,
   Value nvarchar(30) not null
);
-- add 2 rows with maximum length
insert DBA.T_Remote (Value) values (N'This is a string with 30 chars');
insert DBA.T_Remote (Value) values (N'This is a string with ä umlaut');
select *, char_length(value) from DBA.T_Remote order by ID;
  • Step 2: Use a SQL Anywhere 17 database (17.0.11.7173) to establish a proxy table to the SA 16 demo database:
create server SVR_SA16 class 'SAODBC'
using 'SERVER=demo16;DSN=SQL Anywhere 16 Demo;UID=DBA;PWD=sql';
create existing table DBA.Proxy_Remote at 'SVR_SA16..DBA.T_Remote';
select * from DBA.Proxy_Remote order by ID;
  • Step 3: Now increase the NVARCHAR width of the original table and insert a correspondingly longer value:
alter table DBA.T_Remote alter value nvarchar(50);
insert DBA.T_Remote (Value) values (N'This is a string with 50 chars, after ALTER TABLE!');
  • Step 4: Now leave the proxy definition unchanged and use SELECT INTO on the SQL Anywhere 17 database:
select * from DBA.Proxy_Remote order by ID; -- list all rows with full contents
select * into table DBA.CopyTable from DBA.Proxy_Remote order by ID;
-- raises assertion and crashes the database
  • Step 5: In contrast, a local temporary table can be used with SELECT INTO, but surprisingly it is described with NVARCHAR(30) although it happily contains values with 50 chars:
select * into local temporary table LtCopyTable from DBA.Proxy_Remote order by ID;
select *, char_length(value) from LtCopyTable order by ID;
call sa_describe_query('Select * from LtCopyTable'); -- lists nvarchar(30) for 2nd column. 
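For completeness, the fix described above applied to this repro (untested in exactly this form): dropping the proxy table only removes the local definition, the remote table is untouched.

drop table if exists DBA.CopyTable;  -- clean up the (empty) table left behind after recovery
drop table DBA.Proxy_Remote;         -- drops only the local proxy definition
create existing table DBA.Proxy_Remote at 'SVR_SA16..DBA.T_Remote';
select * into table DBA.CopyTable from DBA.Proxy_Remote order by ID;  -- should now succeed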

answered 10 Nov '22, 06:58

Volker Barth

edited 10 Nov '22, 12:58

The person to whom I sent this issue was unable to reproduce it; however, there was a recent fix to NCHAR handling in remote tables. I wouldn't call the issue "solved" since no misconfiguration of a remote table should result in a server crash. If you have a concise repro, it might be useful to post it here.

(10 Nov '22, 10:40) John Smirnios

I used "solved" from my own point of view, and I surely agree that a server crash should be prevented. As stated, I would have expected a string truncation error. And I'll try to build a repro.

(10 Nov '22, 11:00) Volker Barth

FWIW, could my repro be reproduced? :)

(17 Nov '22, 10:51) Volker Barth

My apologies: I didn't get (or I missed) an email notification for your update. I will forward the information. Thank you for the repro!

(17 Nov '22, 11:42) John Smirnios

I'm told a fix is now in progress.

(17 Nov '22, 14:21) John Smirnios