Hello, we use SQL Anywhere 12. We have a table called reservation, defined as below. I added a column promotion_rn using the ALTER TABLE statement below; this took an hour on my local development machine, but when I ran the same statement in production it took 5+ hours. Could anyone offer advice on why this may have happened, and on what I can do to improve this time when adding columns to this table in future? I ran the SQL statement in ISQL. The reservation table contains approximately 750,000 rows. The result of "call sa_table_fragmentation( 'reservation' )" is the following:
asked 27 Oct '14, 23:39 JohnKC
The Smart Money (i.e., Volker) says the ALTER TABLE is blocked by a schema lock. The next time you do this, start a second ISQL session and run SELECT * FROM sa_conn_info() to see if the BlockedOn column is non-zero for the first ISQL session. The following example shows that connection 2, 'database-maintenance', is blocked by connection 3:

SELECT * FROM sa_conn_info();

Number,Name,Userid,DBNumber,LastReqTime,ReqType,CommLink,NodeAddr,ClientPort,ServerPort,BlockedOn,LockRowID,LockIndexID,LockTable,UncommitOps,ParentConnection
4,'Foxhound-p001','DBA',0,'2014-10-28 07:38:44.294','COMMIT','local','',0,0,0,0,,'',0,
3,'adhoc-queries','k.delacruz',0,'2014-10-28 07:38:50.160','FETCH','local','',0,0,0,0,,'',0,
2,'database-maintenance','DBA',0,'2014-10-28 07:36:54.572','EXEC','local','',0,0,3,0,,'DBA.inventory',0,

To see what locks are being held by connection 3, run SELECT * FROM sa_locks( 3 ), which in the case of an ALTER TABLE being blocked by a SELECT with no COMMIT will show a 'Schema' lock:

SELECT * FROM sa_locks ( 3 );

conn_name,conn_id,user_id,table_type,creator,table_name,index_id,lock_class,lock_duration,lock_type,row_identifier
'adhoc-queries',3,'k.delacruz','BASE','DBA','inventory',,'Schema','Transaction','Shared',

In cases like this, you can wait until everyone else is off the database, you can ask them politely to get off the database (or to stop working after doing a COMMIT), or you can kick them off by calling p_drop_other_connections(). Unlike row locks, AFAIK there is no mechanism that can exactly pinpoint "this block is caused by that schema lock". In this case sa_conn_info() and sa_locks() work because connection 3 is holding only one lock, so that must be the one. In the real world, however, evil-doer connections hold millions of locks, so it can be a challenge to diagnose blocks caused by non-row locks.

answered 28 Oct '14, 08:05 Breck Carter
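Note that p_drop_other_connections() is a helper procedure published by Breck, not a built-in. A minimal hedged sketch of the same idea, using only the built-in sa_conn_info() and DROP CONNECTION (the procedure name and body here are an approximation, not his actual code):

```sql
-- Hedged sketch: drop every connection except the current one.
-- This approximates what a p_drop_other_connections() helper might do;
-- it is NOT Breck Carter's actual code. Requires DBA authority.
CREATE PROCEDURE p_drop_other_connections_sketch()
BEGIN
    FOR c AS curs CURSOR FOR
        SELECT Number AS conn_number
          FROM sa_conn_info()
         WHERE Number <> CONNECTION_PROPERTY ( 'Number' )
    DO
        -- DROP CONNECTION takes a connection id, so build the
        -- statement dynamically for each row.
        EXECUTE IMMEDIATE 'DROP CONNECTION ' || conn_number;
    END FOR;
END;
```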
All rows need to be copied when a new column is added, so for a huge table with many rows the copy will take some time; in my experience this happens even if the new column is nullable.

answered 28 Oct '14, 08:50 Martin

AFAIK, that's not generally true, and it can be influenced with the help of the PCTFREE setting:
Furthermore, SQL Anywhere does not reserve space for NULL columns, so if you add a column without a value, no more space should be claimed. The shown result of sa_table_fragmentation() (assuming it was taken after the ALTER) does not hint at a significant number of row splits, cf. that older FAQ. Kudos to Glenn, once more :)
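For reference, the PCTFREE setting mentioned above can be adjusted on an existing table; a hedged sketch (the value 20 is an arbitrary example, not a recommendation for this table):

```sql
-- Hedged sketch: reserve 20% free space in each table page so that rows
-- can grow in place (e.g. when an added column later receives values)
-- with fewer row splits. 20 is an arbitrary example value; the setting
-- affects pages written after it is changed.
ALTER TABLE reservation PCTFREE 20;
```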
(28 Oct '14, 09:15)
Volker Barth
> Furthermore, SQL Anywhere does not reserve space for NULL columns, so if you add a column without a value, no more space should be claimed. Not exactly: "Column space utilization for NULL values is 1 bit per column and space is allocated in multiples of 8 bits. The NULL bit usage is fixed based on the number of columns in the table that allow NULL values." ...so, adding the ninth nullable column will require an extra byte per row. This is surely not the reason for the symptom described here, but it's worth mentioning... well, maybe it isn't :)
(31 Oct '14, 09:13)
Breck Carter
How many users/connections accessed that table during the ALTER TABLE? Altering a "normal" table (neither the number of rows nor the schema looks extraordinary) should certainly not take much time, particularly when simply adding a nullable column with no default value. I would suspect the actual operation had to wait for other connections to give it a chance to alter the table, as that requires exclusive access, as the docs state here, for example...
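If blocking is the suspect, one hedged approach is to take the exclusive lock explicitly first with blocking turned off, so the ALTER either runs immediately or fails fast instead of waiting for hours. A sketch, assuming a plain nullable column (the INTEGER type is a guess, since the original statement isn't shown):

```sql
-- Hedged sketch: fail fast instead of waiting indefinitely for the
-- exclusive access that ALTER TABLE needs.
SET TEMPORARY OPTION blocking = 'Off';              -- report an error instead of waiting on locks
LOCK TABLE reservation WITH HOLD IN EXCLUSIVE MODE; -- fails at once if another connection holds a lock
ALTER TABLE reservation ADD promotion_rn INTEGER NULL;  -- INTEGER is a guess at the column type
COMMIT;  -- releases the lock taken WITH HOLD
```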
Thank you for your help. There were no customers using our web application at the time. The backup utility started about half an hour after I ran the SQL statements. The backup command was:
dbbackup.exe -c "dbn=x;eng=x;uid=x;pwd=x" -x -y f:\backups\
So there were several statements? In the question only one statement was mentioned.
Besides that, in a database accessed only by the DBA connection doing the schema update, such a small ALTER TABLE should take only a very short time on modest hardware: a few minutes at most, and possibly much less.