Folks, I am struggling with a nasty problem: we had a database in ASA 10 with the wrong codepage. We created a new DB with the correct codepage 1250, accent sensitive, and reloaded the data. Most of the data looks good through ISQL; all characters can be represented, that's fine. The CHAR columns are not sensitive, that is, some local characters are treated as equal; the NCHAR columns are sensitive and distinguish the necessary strings. All of this works.
However, when looking at the data through the Java app, those columns that depended on a domain came back from the DB as a long hex string. We dropped the domain and changed to a direct definition of the character width, which removed the hex string. We can now write all local characters to the DB from Java, and they look fine through ISQL. But we can read back only the CHAR columns correctly: the local characters coming from an NCHAR column look distorted (a box, or a different character). It does not matter whether we read/write with get/setString() or get/setNString().
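To illustrate the symptom: the "box, different character" effect is exactly what a codepage mismatch between writer and reader produces. A minimal standalone sketch (the charsets and sample text here are chosen purely for illustration, not taken from our actual data):

```java
public class CodepageMismatchDemo {
    public static void main(String[] args) throws Exception {
        String original = "łąść";  // arbitrary Central European sample characters

        // Encode with the database codepage...
        byte[] cp1250Bytes = original.getBytes("windows-1250");

        // ...but decode with a *different* single-byte charset, as a driver
        // with a wrong character-set setting would effectively do:
        String distorted = new String(cp1250Bytes, "ISO-8859-1");

        System.out.println("original:  " + original);
        System.out.println("distorted: " + distorted);  // other characters come back
    }
}
```

Because the single-byte decoding is lossless at the byte level, re-encoding the distorted string with the wrong charset and decoding with the right one recovers the original, which is why the bytes in the DB can still be fine even when the display is garbage.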
We have ASA 10 patchlevel 4295, JDBC 7, Java 6 update 30; the platform is Windows 2003.
How/where can I influence the characters read from the DB? Any setting in the environment? Connection parameters? Locale? Whatever?
I installed a Cyrillic DB a while ago without any observable problems and am quite surprised by these obstacles.
asked 24 May '12, 08:11
The first step would be to determine whether the data in the database is correct, so that you can rule out character-set conversion problems. One way is to unload the table containing the problematic NCHAR data using
UNLOAD TABLE table_name TO 'foo.dat' ENCODING 'utf8'
Look at the data file in an editor that understands UTF-8 text files (such as Windows Notepad). Does the data look correct?
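If you would rather not rely on an editor's charset detection, a small programmatic check works too. A sketch, assuming the unload file is the 'foo.dat' from the statement above (Java 6 compatible, since that is your runtime):

```java
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.InputStreamReader;

public class CheckUnload {
    public static void main(String[] args) throws Exception {
        // Decode explicitly as UTF-8 so the platform default charset
        // (e.g. Cp1252 on Windows 2003) cannot distort the check.
        BufferedReader in = new BufferedReader(
                new InputStreamReader(new FileInputStream("foo.dat"), "UTF-8"));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line);
        }
        in.close();
    }
}
```

If the NCHAR values print correctly here, the stored data is fine and the problem lies on the read path of the driver/app, not in the database.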
answered 24 May '12, 10:27
In a Sybase forum I found a hint that somebody else had a similar problem with incoming 'hex strings' when the column is defined with a user-defined type (domain). Dropping the domain fixed the problem.
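Before dropping the domain, it can also be useful to see what such a hex string actually contains. A sketch of decoding it by hand (the hexToUtf8 helper is mine, and the assumption that the bytes are UTF-8 is a guess you would need to verify):

```java
import java.nio.charset.Charset;

public class HexInspect {
    // Decode a hex dump (e.g. "C582C485") into bytes,
    // then interpret those bytes as UTF-8.
    static String hexToUtf8(String hex) {
        byte[] bytes = new byte[hex.length() / 2];
        for (int i = 0; i < bytes.length; i++) {
            bytes[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
        }
        return new String(bytes, Charset.forName("UTF-8"));
    }

    public static void main(String[] args) {
        // "C582" is UTF-8 for 'ł', "C485" is UTF-8 for 'ą':
        System.out.println(hexToUtf8("C582C485"));
    }
}
```

If the decoded text is your original data, the driver delivered the right bytes but failed to convert them, which points to a character-set setting rather than corrupted storage.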
answered 26 May '12, 00:04