
I'm doing some testing of some relatively large databases (40 - 100 GB) in v12.0.1 (moving from v10.0.1).

I have been comparing times for full checks (using the Sybase Central wizard) and I'm getting broadly similar results on the same database (but reloaded into v12) on the same hardware.

There is mention of improved validation performance in the v12 New Features, so does this suggest that disk speed is my limiting factor?

On my simple single-disk test I'm seeing read throughput of around 250 MB/min (measured by the Win2008 Resource Monitor) during validation. When I was loading data I was seeing read throughput of about 890 MB/min, but I don't really know enough to compare the two activities. The server isn't doing anything else.

Any thoughts, suggestions, experience appreciated.

asked 03 Nov '11, 12:05


Justin Willey

What are the sizes of the tables: mostly one large table or lots of small ones? What cache size are you using? Are you starting from a cold cache or has the database been running for a while? Are your throughputs averages over the runs or the highest instantaneous ones you saw? 890MB/min is only 15MB/s and seems a bit low perhaps but might be reasonable. I would expect an average, off-the-shelf SATA drive to do at least 40MB/s on a pure sequential read test. Database loading will require some writes and some randomness but, again, the amount will depend on your cache size. Are you on a multicore server? Some of the v12 performance enhancements are due to parallelism.

(03 Nov '11, 13:33) John Smirnios

John, thanks for getting back on this.

The tests were run immediately after loading the databases. I was using 6 GB of cache (out of 8 GB on the machine). The table sizes are very mixed: one huge one (25 GB), six or so largish ones of up to 1.5 GB, then a lot of smaller ones, including probably a hundred or so very small tables.

The throughputs are observed averages - it was pretty steady through the whole process. The server is a twin-processor machine, each with 4 cores (Xeons).

It's hard to get an accurate picture, but most of the activity seems to be on one or two cores.

(04 Nov '11, 09:38) Justin Willey
That seems problematic. Can you do an express check (dbvalid -fx) on v12 and monitor the processor usage? I would expect that to be using almost all cores all the time.
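For reference, a minimal sketch of running the express check from the command line; the connection parameters here (user DBA, password sql, server name mydb) are placeholders you would replace with your own:

```
dbvalid -c "UID=DBA;PWD=sql;ENG=mydb" -fx
```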

Another interesting experiment would be to try dbinfo -u on both v10 and v12. If v12 isn't blindingly faster then something very strange is going on.
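The dbinfo experiment could be sketched the same way (again with placeholder connection parameters), running the identical command against the v10 and v12 servers in turn:

```
dbinfo -c "UID=DBA;PWD=sql;ENG=mydb" -u
```

The -u option reports page usage, which forces a scan of the database file, so it gives a rough read-throughput comparison between the two versions.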

For the record, what is the exact build number of 12.0.1 that you are using?

(04 Nov '11, 09:43) John Smirnios

It's v12.0.1.3484

I tried the express check (through the wizard): during the page-checking part, one "core" (of the 16 displayed) was busy at around 35%, with the rest absolutely flatlining; during the table check, again one was busy, with very slight activity (1-2%, just noticeable on the graphs) on maybe six others.

I tried dbinfo -u on v12 - it took about 9 minutes (warmed cache, if that's relevant). I'll have to do some rearranging to get a comparable v10 time.

(04 Nov '11, 15:30) Justin Willey

For dbinfo, assuming the database is 40GB in size, that's 75MB/s. For 100GB, that's 189MB/s. That seems to be in the right range for a cold cache. It's not clear how warm the cache is but it is still relatively small (6GB) compared to the database size. If it's awkward to try the v10 dbinfo, skip it for now. I thought you could easily test either configuration. We can tell, at least, that the system's IO capacity is normal.

The validation still concerns me. By any chance, have you changed the max_query_tasks option? Are you using dbeng or dbsrv? I have vague recollections that they may have different parallelism defaults but maybe not.
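One way to check the current setting, assuming the option hasn't been renamed in your build, is to query it from a connection:

```sql
-- Show the effective max_query_tasks value for this connection
SELECT CONNECTION_PROPERTY( 'max_query_tasks' );
```

A value of 1 would disable intra-query parallelism and could explain single-core behaviour.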

If you run VALIDATE TABLE ... WITH EXPRESS CHECK on one of your "largish (1.5GB)" tables, do you see CPU parallelism? If you run it again immediately afterwards (once everything is in cache), do you see parallelism?
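That test might look like the following, where the table name is a placeholder for one of your ~1.5 GB tables:

```sql
-- Run twice in a row: first with a cold(ish) cache, then fully cached
VALIDATE TABLE "dba"."my_large_table" WITH EXPRESS CHECK;
```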

I would run a few tests of my own to see if parallelism is somehow broken for validation in recent builds; however, everything is shut down at the office for some electrical work being done this weekend.

(04 Nov '11, 23:08) John Smirnios
A quick test shows that parallelism isn't broken in 12.0.1.

(07 Nov '11, 11:26) John Smirnios

I've just done a partial test on a machine connected to a fast SAN, (I can't do the full thing at the moment as this is a live server). In this case the page check section used a single core, but the table checking clearly used ALL cores. I'll run a full test tonight to get the comparative timing.

(07 Nov '11, 11:41) Justin Willey

I don't expect the page check portion to be parallel. So it seems that we are down to a very parallel v12 validation that runs in a "broadly similar" time as v10? The benefits of the new validation shine when cache size is limited relative to table size and your 25GB table is relatively large compared to your 6GB cache. At the moment, I'm at a loss to explain the "broadly similar" time.

(07 Nov '11, 18:01) John Smirnios