I get the impression that every time I change the structure of a table I must run "REORGANIZE TABLE ...". I just added new fields; some of them are foreign key columns, and some are rarely used fields. After my scripts run, clients immediately complain of "slowness" in the system. Our technical support usually ends up resolving this slowness with an UNLOAD/RELOAD, but that is a very time-consuming job. I am using ASA 18.104.22.16851.
In what situations is it really necessary to run "REORGANIZE TABLE ..."?
If anyone can help me, thank you!
asked 09 Oct '15, 09:29
When creating the table for the first time, try using the PCTFREE clause to reserve free space in each table page for future row growth.
If you create a table like this:
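The original code block was lost from this answer; a plausible minimal reconstruction giving a 16-byte row is two NOT NULL BIGINT columns (8 bytes each). The table and column names here are assumptions:

```
-- Hypothetical reconstruction: two NOT NULL BIGINT columns = 16-byte row.
CREATE TABLE t (
   pkey BIGINT NOT NULL PRIMARY KEY,
   c1   BIGINT NOT NULL
);
```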
the row width is 16 bytes. There is a small amount of overhead for each row, as well as a small amount of overhead for each page.
We'll fill the page with rows:
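The insert loop is also missing from this answer; a sketch in Watcom SQL might look like the following. The row count of 250 is an assumption — the number of 16-byte rows that fit in one page depends on the page size and per-row/per-page overhead:

```
-- Insert enough small rows to fill (roughly) one table page.
BEGIN
   DECLARE i BIGINT;
   SET i = 1;
   WHILE i <= 250 LOOP
      INSERT INTO t VALUES ( i, i );
      SET i = i + 1;
   END LOOP;
   COMMIT;
END;

-- One way to inspect row segmentation (system procedure; result
-- columns vary by version):
SELECT * FROM sa_table_fragmentation( 't' );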
Notice exactly one page is used. Now add a column:
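The ALTER statement itself was stripped; adding a NOT NULL column with a default forces a value into every existing row, which is what triggers the extended pages. Column name and type are assumptions:

```
-- Every existing row must now store a value for c2.
ALTER TABLE t ADD c2 BIGINT NOT NULL DEFAULT 0;
```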
Because the page holding the row was already full, the new column value had to go in a separate extended page. Now a read of the entire row has to touch two pages instead of one, which is where the slowdown comes from.
If we add a PCTFREE clause when creating the table, free space is reserved in each page, the new column value fits in place, and no extended page is needed.
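A sketch of the same table created with reserved free space (the 20% figure matches what the original poster later used; table and column names remain assumptions):

```
-- Reserve 20% of each table page so rows can grow in place
-- when columns are added later.
CREATE TABLE t (
   pkey BIGINT NOT NULL PRIMARY KEY,
   c1   BIGINT NOT NULL
) PCTFREE 20;
```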
answered 09 Oct '15, 11:33
If unload/reload improves the performance, it sounds like you are experiencing some degree of table fragmentation due to page splits, and probably a higher page count. That could indicate you have a marginally optimizable query, and investigating that could be a worthwhile exercise. So too would be a check for weak statistics (and a subsequent investigation of how much benefit a CREATE STATISTICS operation alone provides).
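The statistics check mentioned above can be tried on its own before resorting to a full reorganization; a minimal form, assuming a table named t:

```
-- Rebuild the optimizer's column statistics for the table.
CREATE STATISTICS t;
```

If query plans improve after this alone, weak statistics (rather than fragmentation) were at least part of the problem.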
Adding columns to rows in already-full table pages has significant potential to overflow those pages and require many (or most) of them to be split across more than one page. This also impacts the content of index leaf pages and can easily reduce the degree of clustering there. All of this can benefit from a reorganization: UNLOAD+TRUNCATE+RELOAD would accomplish it, and so would a REORGANIZE TABLE, which can run during production hours, live, on an active database.
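For reference, the statement in question is a one-liner (table name assumed; optional clauses let you target a specific index or key):

```
-- Defragment the table's rows in place; the database stays online.
REORGANIZE TABLE t;
```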
There could easily be more complexity to the issues surrounding the performance of multiple queries, and those would need to be identified and analyzed individually, but at the level of information provided, the above should answer your main question.
answered 09 Oct '15, 11:19
Nick Elson S...
I ran multiple tests: I did an unload/reload, changed the database page size to 8K, and assigned PCTFREE 20 to the tables with the most records (about 35 tables). The result was surprising (performance improved greatly :)) and the system is running smoothly. Thanks for the tip. Success to everyone!
answered 10 Dec '15, 08:35