The following statement is blindingly fast: SELECT count FROM SYSTAB WHERE table_name = 'whatever' but the result can be wildly incorrect if many rows have been inserted or deleted since the last CHECKPOINT. Forcing a CHECKPOINT is a simple solution in some contexts (e.g., a single-user embedded database) but it is an [cough] unattractive solution for a heavily-used network database. Is there any other way to obtain an approximately-correct table row count? (surely the query optimizer knows the answer :)
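For context, the full catalog lookup might look like this in SQL Anywhere syntax; the join to SYS.SYSUSER and the owner filter are my additions (an assumption, to disambiguate like-named tables owned by different users), not part of the original one-liner:

```sql
-- SYSTAB.count is maintained lazily; it is only guaranteed
-- accurate as of the most recent CHECKPOINT.
SELECT t.count
  FROM SYS.SYSTAB t
       JOIN SYS.SYSUSER u ON u.user_id = t.creator  -- owner filter: assumed, not in the original
 WHERE u.user_name  = 'dba'
   AND t.table_name = 'whatever';
```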
...and you implicitly state that a simple query like "SELECT COUNT_BIG(*) FROM <whatever> WITH (READUNCOMMITTED)" is too expensive, right?
Yeah... that's good for demonstrating a runaway connection :)
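For completeness, WITH (READUNCOMMITTED) is the SQL Server spelling; a sketch of the SQL Anywhere equivalent is a dirty read at isolation level 0. It avoids blocking on uncommitted rows, but it is still a full table scan, which is exactly the runaway cost being joked about:

```sql
-- Dirty-read count, SQL Anywhere style: isolation level 0
-- (read uncommitted). Still scans every row in the table.
SET TEMPORARY OPTION isolation_level = 0;
SELECT COUNT ( * ) FROM whatever;
```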
I don't know the answer, but is it possible to get the table size in bytes and divide it by the row size? Maybe that would not be so expensive...
Good idea but... many tables have widely varying row sizes. Many data types can vary in size depending on content; e.g., DECIMAL and all the string types. Also, NULL values are represented by single bits.
In fact, "How do I precisely and accurately calculate individual row sizes?" is a question that doesn't have a simple answer IMO.
Foxhound calculates an overall average "row size" by dividing the sum of all the space allocated to table and index data by the row count. That kind of row size is valuable for growth estimates and the like, but it is far from "precise and accurate".
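As a sketch of that arithmetic (not Foxhound's actual query): assuming sa_table_page_usage() reports TablePages and IndexPages per table (column names from memory, so verify them against your version's documentation) and SYSTAB.count supplies the checkpoint-stale row count, the overall average works out to:

```sql
-- Overall average "row size": all space allocated to table
-- plus index data, divided by the row count. Column names
-- TableName, TablePages, IndexPages are assumptions.
SELECT u.TableName,
       ( ( u.TablePages + u.IndexPages )
         * CAST ( DB_PROPERTY ( 'PageSize' ) AS BIGINT ) )
       / GREATER ( t.count, 1 )       -- avoid divide-by-zero on empty tables
          AS avg_row_size_bytes
  FROM sa_table_page_usage() u
       JOIN SYS.SYSTAB t ON t.table_name = u.TableName
 WHERE u.TableName = 'whatever';
```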
...alas, we've come full circle: "how do I quickly calculate the row count without doing a CHECKPOINT?" :)
FWIW: What is the use case - a Foxhound feature?
Yes, exactly. The Foxhound "Display Schema" feature shows row counts and bytes per row for all the tables, plus a "Checkpoint and redisplay" button to bring the counts up to date. However, more and more clients are sensitive about security and don't want to grant Foxhound more privileges than necessary... not to mention the [cough] problems with poorly-timed CHECKPOINT statements.