
We are using SQL Anywhere 12.0.1 as an embedded database in our application, which tracks information about vehicles observed by our company's proprietary license plate readers. As such, we install the minimum-sized SA build we can on client machines. The only query tool that is installed is dbisqlc.exe.

When the application is installed for the first time, the database is initially 2 GB in size. For many installations, it never gets much bigger than this. But there are locations where it does grow.

There is one install of our software that has anywhere from 15,000 to 18,000 vehicles driving by it in both directions. We have a process that runs at midnight every day to purge data older than a certain age that has replicated to the server. So the database has up to 11 days' worth of data in it, and at midnight, the 11th day is purged. That means it can have almost 200,000 rows in that one table at any time, not to mention another 400,000 in a related table & an average of 400,000 in a third.
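A rough sketch of what such a nightly purge might look like; the table and column names here are invented for illustration, since the actual schema isn't shown:

```sql
-- Hypothetical purge: delete observations older than 11 days that have
-- already replicated to the server (names are placeholders, not the real schema)
DELETE FROM VehicleObservation
WHERE  ObservedAt < DATEADD( DAY, -11, CURRENT TIMESTAMP )
  AND  ReplicatedToServer = 1;
```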

This install replicates using a cell modem, and it turns out that the connectivity there is spotty. It was also recently upgraded, and the database server took longer to stop than the shutdown period the installer allows, so the upgrade failed. Upon remotely connecting to the location, we found that the database is currently 25 GB in size.

Given the history of spotty cell coverage at the site, my current hypothesis is that there was a period in the past when the cell modem was unable to connect to our server for an extended time. If data doesn't replicate, it doesn't get purged, so this could cause the tables in question to grow beyond normal bounds. Plus, the server isn't stopped unless there's an upgrade underway, so there's no opportunity for it to shrink.

To be honest, I don't even know if 12.0.1 will ever free unused database pages and shrink the file.

So I need to gather some statistics about each table: how many pages it uses in the table itself and in all its indexes. Then I need to add up all those numbers, multiply by the page size, and output how much space is being used in total and how much is free, all using dbisqlc. Computing other meaningful statistics would be cool, too, but as I'm a developer and not a DBA, I have no idea what those might be.

If someone could point me in the right direction, or just give me the queries, I'd appreciate it.

asked 22 Aug '14, 14:00

TonyV

accept rate: 75%

Hi Tony,

You are correct that the unused free space will not be reclaimed. If you would like this space back, you can unload and reload the database using the dbunload utility.
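As a sketch, an unload-and-reload into a fresh database file could look like the following (the connection string and file paths are placeholders; `-an` unloads into a newly created database in one step):

```
dbunload -c "UID=DBA;PWD=sql;DBF=C:\data\vehicles.db" -an C:\data\vehicles_new.db
```

You would then stop the application, swap the new file in place of the old one, and restart.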

The dbinfo utility might provide the functionality you require. If you would like to do this in SQL, you can check the number of set bits in the allocation bitmaps (in SYSTAB and SYSPHYSIDX). The queries below are untested, but should do the job.

SELECT T.table_name, I.index_name,
       ( DB_PROPERTY( 'PageSize' ) * count_set_bits( P.allocation_bitmap ) ) / 1024 AS "Usage (KB)"
FROM   sys.sysidx AS I
       JOIN sys.systab AS T
         ON T.table_id = I.table_id
       JOIN sys.sysphysidx AS P
         ON P.table_id = I.table_id
        AND P.phys_index_id = I.phys_index_id;

SELECT T.table_name,
       ISNULL( ( DB_PROPERTY( 'PageSize' ) * count_set_bits( T.tab_page_list ) ) / 1024, 0 ) AS "Table Page Usage (KB)",
       ISNULL( ( DB_PROPERTY( 'PageSize' ) * count_set_bits( T.ext_page_list ) ) / 1024, 0 ) AS "Ext Page Usage (KB)"
FROM   sys.systab AS T;

If you are interested in monitoring these items, you can create scheduled events that check these values. Such an event could email a notification so the issue can be remedied.
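An untested sketch of such an event; the event name, schedule, and 50% threshold are placeholders, and the MESSAGE statement stands in for a real notification (which might use xp_sendmail after SMTP is configured):

```sql
-- Untested sketch: daily check of the free-space percentage (single dbspace assumed)
CREATE EVENT check_free_space
SCHEDULE sched_check_free_space
    START TIME '01:00' EVERY 24 HOURS
HANDLER
BEGIN
    DECLARE pct_free DOUBLE;
    SET pct_free = CAST( DB_PROPERTY( 'FreePages' ) AS DOUBLE ) * 100
                   / DB_PROPERTY( 'FileSize' );
    IF pct_free > 50 THEN
        -- Placeholder notification; replace with email or whatever fits your setup
        MESSAGE 'Database file is ' || pct_free || '% free' TO CONSOLE;
    END IF;
END;
```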

Thanks, Mikel

See related question


answered 22 Aug '14, 14:49


Mikel Rychliski
accept rate: 30%

It turns out that there's a query in a module in my code that computes space usage, used when purging data that occupies more than x% of the table. I was able to use the following query:

SELECT TablePages, IndexPages, TableName 
FROM sa_table_page_usage() 
WHERE TableName IN ( 'Table1', 'Table2', . . . 'Free pages', 'Total pages' )
ORDER BY TableName

From this I was able to show that the database in question is 96% free.
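For reference, the free-space percentage can be derived from the same procedure's output with something like the following (untested; it relies on the 'Free pages' and 'Total pages' rows that sa_table_page_usage() reports):

```sql
-- Untested: percentage of the database file that is free
SELECT CAST( f.TablePages AS DOUBLE ) * 100 / t.TablePages AS pct_free
FROM   sa_table_page_usage() AS f,
       sa_table_page_usage() AS t
WHERE  f.TableName = 'Free pages'
  AND  t.TableName = 'Total pages';
```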


answered 22 Aug '14, 15:12

TonyV

accept rate: 75%

If you are only looking for free space percentage, and have only one dbspace, you can use the FileSize and FreePages database properties.
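For instance (untested; assumes the database has only the main dbspace):

```sql
-- Untested: overall free-space figures from database properties
SELECT DB_PROPERTY( 'FileSize' )  AS total_pages,
       DB_PROPERTY( 'FreePages' ) AS free_pages,
       CAST( DB_PROPERTY( 'FreePages' ) AS DOUBLE ) * 100
         / DB_PROPERTY( 'FileSize' ) AS pct_free;
```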


(22 Aug '14, 15:31) Tyson Lewis

Do you have the ability to recreate a remote database from scratch by downloading all the data into an empty database? (for example, after a catastrophic loss) If so, that might be an effective brute-force technique to shrink the database.

If not, then maybe it's a mechanism you should consider (see "catastrophic loss" :)

(22 Aug '14, 15:32) Breck Carter

Our product consists of a single server install and many client installs. The architecture uses SA as an embedded DB on the client side & SQL Server on the server side. The data on each client replicates to the server, so once the data has made it to the server, there's little need for it to stay on the client. In fact, we have a module that implements a data retention policy on the client, which deletes all data that has replicated to the server & is older than a user-configurable age.

The system architect is resistant to performing a DB rebuild on clients. It would probably take longer to go through that process than just uninstalling & reinstalling the client software. There seem to be no ill effects on performance from having a 25 GB database on that machine that is 96% empty.

If it were the server side, he'd have a different opinion, but once it's made it to the server, the data on the client is just redundant. It's more important for the client to be collecting new data & sending it to the server than it is to take that database offline to shrink it.

(25 Aug '14, 09:11) TonyV

> There seem to be no ill effects on performance from having a 25 GB database on that machine that is 96% empty.

Are the remote databases doing backups? If so, I would think a huge empty database does have some impact on performance, simply because the backup of the database file will take longer... The same ought to be true for validation. However, I can't tell whether that would be a problem or not.

(25 Aug '14, 10:06) Volker Barth

> uninstalling & reinstalling the client software

...another way of saying "recreate a remote database from scratch" :) which in many environments is a lot easier than implementing a whole unload reload mechanism, sometimes even easier than implementing a local backup and restore mechanism.

(25 Aug '14, 10:16) Breck Carter

question was seen: 3,318 times

last updated: 25 Aug '14, 10:59