Is there any disadvantage if a LONG VARCHAR is used instead of a VARCHAR with an explicit length?
asked 11 Oct '10, 16:39
The short answer is no, a string is a string.
It is the actual length of each value, not whether it is declared as binary, character and/or long, that determines how the data is stored in the database.
The long answer is that there are many differences in the way strings are treated, both in SQL code and in many applications, depending on binary versus character versus long versus NVARCHAR, the collation / character set, and so on.
Also, the declared maximum length of a column does have some performance implications for indexing, function and subquery caching, and the column statistics used by the query optimizer. And LONG VARCHAR does have a declared maximum length; it just happens to be implicitly declared as 2147483647.
My book goes on for two pages on the picky details, and they've gotten more interesting since Version 9, but that's not what you asked for.
Here's how I choose the data type for an ordinary (text) string column:

- If there is an absolutely well known maximum length, I specify it; e.g., CHAR ( 200 ) for employee_name.
- If the column's going to be important for performance, such as a primary key column in a 20 million row table, I think long and hard about the maximum length.
- Otherwise, if there is ANY DOUBT about how long the string will grow, I use LONG VARCHAR; e.g., picture_description, message_text, geographic_location.
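That rule of thumb might look like this in DDL (a sketch only; the table and column names here are invented for illustration):

```sql
-- Hypothetical table illustrating the three cases above.
CREATE TABLE employee (
   employee_id   INTEGER NOT NULL PRIMARY KEY,  -- performance-critical: think hard, keep it small
   employee_name CHAR ( 200 ) NOT NULL,         -- absolutely well known maximum length
   message_text  LONG VARCHAR NULL              -- any doubt about how long it will grow
);
```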
So... I have a lot of LONG VARCHAR columns in my databases AND variables in my SQL code, and they have caused zero problems.
Caveat: I write very little application code, where LONG VARCHAR can wreak havoc with the client-server interface.
And I do not have to deal with fascist database designers who insist that phone_number cannot be VARCHAR ( 200 ) but must be broken down into the constituent subfields like area_code VARCHAR ( 3 ), or that zip_code must be VARCHAR ( 5 ), or that first_name and last_name are used to validate a credit card... crap like that is what makes me abandon so many shopping carts (how do I enter 'BRECK CARTER RISINGROAD INC'?)
answered 12 Oct '10, 09:08
I want to add a point to Breck's answer. Breck is largely correct in that, in the main, the SQL Anywhere server doesn't differentiate between VARCHAR and LONG VARCHAR types in a semantic way. However, there are circumstances where having a LONG type does matter. For example, the server memoizes prior executions of sub-selects so that the sub-select does not have to be executed on each invocation. However, to avoid expensive copying of BLOB (or CLOB) values at execution time, memoization will not occur if the expression's declared type is LONG.
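A correlated scalar subquery like the following sketch (table and column names invented) illustrates the point: the server may memoize the subquery result when the expression's declared type is a bounded VARCHAR, but not when it is a LONG type.

```sql
-- If t2.description is declared VARCHAR ( 200 ), repeated invocations of
-- this subquery with the same t1.pkey value are candidates for memoization.
SELECT t1.pkey,
       ( SELECT MAX ( t2.description )
           FROM t2
          WHERE t2.fkey = t1.pkey ) AS description
  FROM t1;
-- If t2.description is instead declared LONG VARCHAR, the server skips
-- memoization (to avoid copying potentially huge CLOB values), and the
-- subquery may be re-executed for each row of t1.
```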
The best answer, in my opinion, is to use the most appropriate type for every domain. In particular, in my view it is a mistake to use LONG types unnecessarily.
answered 12 Oct '10, 11:20
(As per Martin's request, I am turning my comment into an answer.)
I want to add that using long varchar/nvarchar/binary when it is not necessary will have a SERIOUS impact on the client side if your client is a JDBC application using the iAnywhere/SQL Anywhere JDBC driver. The iAnywhere/SA JDBC driver will do its very best to perform wide fetches (i.e. fetch multiple rows at a time) when retrieving result sets from the server. BUT, if any of the columns in the result set are declared long varchar/nvarchar/binary, then the JDBC driver has no choice but to drop down into single-fetch mode, because the long columns must be retrieved with GetData calls.
By the same token, the Remote Data Access layer within SA will also attempt to perform wide fetches from the remote server over ODBC; but once again, if any of the columns are declared long, the Remote Data Access layer will also drop into single-fetch mode in order to perform GetData calls on the long columns.
Hence, from a JDBC client or Remote Data Access standpoint, declaring a column long when it really does not need to be will have fairly significant performance implications.
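One possible mitigation, sketched below with invented table and column names, and assuming the stored values really are known to be short: CAST the long column to a bounded type in the select list, so the result set itself contains no LONG columns and the driver can use wide fetches.

```sql
-- Hypothetical sketch: give the result-set column a non-LONG declared type.
SELECT order_id,
       CAST ( note AS VARCHAR ( 255 ) ) AS note  -- caution: values longer than 255 may be truncated
  FROM orders;
```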
answered 13 Oct '10, 13:51