The following statement is running on a geological timescale (i.e., very slowly) in a version 11 database:

DELETE TOP 1000 FROM yyy ORDER BY pk ASC;

The table is quite small, but it is subject to "high throughput" processing (many inserts, many deletes). The schema looks OK in Foxhound except for one thing: a huge and wildly bogus "1.1G index" space value, which is based on SYSPHYSIDX.leaf_page_count:

CREATE TABLE xxx.yyy ( -- 6,751 rows, 1.1G total = 18.7M table + 8k ext + 1.1G index, 175,190 bytes per row
   pk                  /* PK        */ INTEGER NOT NULL DEFAULT autoincrement,
   WorkstationName                     CHAR ( 30 ) NULL,
   SiteID                              CHAR ( 16 ) NULL,
   PatientID                           CHAR ( 16 ) NULL,
   PatLastName                         CHAR ( 20 ) NULL,
   PatFirstName                        CHAR ( 20 ) NULL,
   PatMiddleName                       CHAR ( 20 ) NULL,
   Sex                                 CHAR ( 1 ) NULL,
   BirthDate                           TIMESTAMP NULL,
   ExamID                              CHAR ( 8 ) NULL,
   Modality                            CHAR ( 5 ) NULL,
   ExamCode                            CHAR ( 32 ) NULL,
   AcquSiteID                          CHAR ( 16 ) NULL,
   ExamDate                            TIMESTAMP NULL,
   PlacerOrderNum                      CHAR ( 64 ) NULL,
   IPAddress                           CHAR ( 130 ) NULL,
   StationID                           CHAR ( 16 ) NULL,
   UserID                              CHAR ( 64 ) NULL,
   UserLastName                        CHAR ( 20 ) NULL,
   UserFirstName                       CHAR ( 64 ) NULL,
   AccessTime                          TIMESTAMP NULL,
   ActivityCode                        INTEGER NULL,
   AlertFlag                           INTEGER NULL,
   Comments                            LONG VARCHAR NULL,
   PatientIDIssuer                     CHAR ( 64 ) NULL,
   FillerOrderNum                      CHAR ( 64 ) NULL,
   UserAcctLoginID                     CHAR ( 32 ) NULL,
   RoleName                            CHAR ( 64 ) NULL,
   Client                              INTEGER NULL,
   ApplicationName                     CHAR ( 32 ) NULL,
   DatabaseUserName                    CHAR ( 32 ) NULL,
   OSUserName                          CHAR ( 32 ) NULL,
   DBAIndicator                        INTEGER NULL,
   ResourceIndicator                   INTEGER NULL,
   AuditIndicator                      INTEGER NULL,
   CONSTRAINT ASA356 PRIMARY KEY ( -- 1.1G
      pk )
);

Here's where the numbers come from in Foxhound...
rows   SYSTABLE.count
table  SYSTABLE.table_page_count
ext    SYSTABLE.ext_page_count 
index  SYSPHYSIDX.leaf_page_count for version 10 to 16, SYSATTRIBUTE.attribute_value for version 9
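
For anyone following along, the underlying catalog query looks roughly like this (a sketch, not Foxhound's actual code): SYSTABLE and SYSPHYSIDX join on table_id, with one result row per physical index (this table has only the PK), and the page counts are multiplied by PROPERTY ( 'PageSize' ) to get the byte amounts shown above.

SELECT t.count             AS row_count,        -- "rows"
       t.table_page_count  AS table_pages,      -- "table"
       t.ext_page_count    AS ext_pages,        -- "ext"
       p.leaf_page_count   AS index_leaf_pages  -- "index"
  FROM SYSTABLE t
       INNER JOIN SYSPHYSIDX p ON p.table_id = t.table_id
 WHERE t.table_name = 'yyy';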

Clearly, it is logically impossible for a primary key index containing 6,751 INTEGER values to take up 1.1G (assuming the default 4K page size, that is roughly 280,000 leaf pages, or about one key per 40 pages)... but if it is physically true, that might explain why a DELETE TOP ORDER BY might run rather [cough] slowly.

Is this a symptom of corruption, or simply a table in dire need of reorganization?

(FWIW the "175,190 bytes per row" is a result of the "1.1G index" space, not the "8k ext" space used by the presumably-almost-always-empty LONG VARCHAR column.)
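
When I can finally get at the database, the obvious next steps would presumably look something like this (a sketch only):

VALIDATE TABLE xxx.yyy;                -- rule corruption in or out first
REORGANIZE TABLE xxx.yyy PRIMARY KEY;  -- then rebuild the primary key index if it's mere bloat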

asked 18 Aug '14, 14:07

Breck Carter

edited 19 Aug '14, 06:33

That does seem odd. If corruption is at the heart of this, then the only good next step would be the obvious intervention.

Out of curiosity, does this relation hold true?

"SYSPHYSIDX"."key_value_count" == "SYSTABLE"."count"

If it does, have you investigated the details of this index using sa_index_levels( ) and sa_index_density( ) at all?
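
For example (table and owner names as in the CREATE TABLE above):

CALL sa_index_levels( 'yyy', 'xxx' );   -- number of B-tree levels in each index
CALL sa_index_density( 'yyy', 'xxx' );  -- leaf page count and average fullness of each index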

Or the sequence and (possibly more important to this question) the gaps in the current key sequence? That should be fast to investigate, since a covering index scan should be fast; and if it isn't fast, that is partial information too.
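
A cheap first look at the churn, since MIN/MAX/COUNT over the key should come straight out of the index (a sketch):

SELECT MIN( pk ), MAX( pk ), COUNT( * )
  FROM xxx.yyy;
-- a wide MIN-to-MAX spread against only 6,751 rows means the keys have churned heavily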

{It also may prove informative to look at the graphical plan (say, using version 16 or 12) of the (hopefully an) index scan query, to see if additional details can be ascertained that way.}
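
For example, something like this (the statistics-level argument is from memory; the result can be saved as a .saplan file and opened in dbisql):

SELECT GRAPHICAL_PLAN(
   'SELECT pk FROM xxx.yyy ORDER BY pk',
   2 );  -- 2 requests detailed statistics, if memory serves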

Depending upon how the keys have churned in this table, it may be possible to get a sparse index with only a few keys per leaf, but I would agree that a discrepancy this large would usually indicate a bigger concern than just that.

There had been some key compression issues with 11.0.x, and maybe a page leak issue too ... and the ones I am familiar with might have affected autoincrement keys ... so maybe that was a contributing factor.
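
It may be worth comparing the exact build you are on against the EBF fix lists (trivial to check):

SELECT PROPERTY( 'ProductVersion' );  -- the full version/build string of the running server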

Hopefully something here helps you investigate this more deeply.

(19 Aug '14, 10:37) Nick Elson S...

And to add more to the list of possible bug contributions . . .

There was a cleaner bug (CR#635322) that would cause the cleaner to not clean some pages (including index pages) under some conditions.

If this is a hot table in the application, with long-lived "schema locks" on it, then you may have the conditions for this to be a contributing factor.
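
One way to see whether the cleaner is keeping up (a sketch; I believe both properties exist in 11.x):

SELECT DB_PROPERTY( 'CleanablePagesAdded' )   AS added,
       DB_PROPERTY( 'CleanablePagesCleaned' ) AS cleaned;
-- a persistently growing gap between the two would point at uncleaned pages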

(19 Aug '14, 10:46) Nick Elson S...

Thanks very much... the database in question is three time zones away, behind a HIPAA wall, so my curiosity may not be satisfied immediately :)

(19 Aug '14, 10:58) Breck Carter