We have been investigating performance issues, particularly relating to validation, with a client who has a 75 GB database running on version 10.0.1 (planned to be upgraded to 16, but not immediately).
The server is virtualized and connected to an EMC SAN via Fibre Channel over Ethernet (FCoE) running on a 10 Gb fibre network using Brocade switches. 11 GB of the server's 18 GB of RAM is allocated to the database cache.
Validation performance is roughly 20% of that of a desktop PC (5 hours versus 1 hour). We have other customers with a similar configuration, but with actual Fibre Channel, who get far better performance.
The database is built on 4 KB pages (like all our customers' databases). The ratio of table pages to extension pages is approximately 80:20; I don't know if it's relevant, but the same measure for the most important / heavily used tables is approximately 90:10.
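(For anyone wanting to look at the same numbers on their own database: if I remember correctly, the sa_table_page_usage system procedure reports per-table page usage, along these lines — check the docs for the exact result columns on your version:)

```sql
-- Sketch: report page usage for every table in the database.
-- The exact columns returned vary by version; see the documentation.
CALL sa_table_page_usage();
```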
The infrastructure provider is recommending changing the page size to 32 KB or 64 KB to improve disk I/O performance, but given what people have said on the forum before, I am concerned about what this would do to the efficiency of the cache, among other things.
Can anyone give any guidance? Many thanks.
This is mostly a WAG, but it's easier to type an answer than a comment :)...
A database that size might have indexes that could benefit from an 8 KB page size, but it is doubtful that changing the page size will improve physical disk I/O by the amount you want. Methinks the server already does I/O in much larger chunks than the page size, as mentioned here:
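For the record, you can confirm the current page size and cache figures from SQL via DB_PROPERTY and PROPERTY:

```sql
-- Current database page size, in bytes.
SELECT DB_PROPERTY ( 'PageSize' );

-- Current and maximum server cache size, in KB.
SELECT PROPERTY ( 'CurrentCacheSize' ),
       PROPERTY ( 'MaxCacheSize' );
```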
There aren't a lot of (any?) public success stories extolling the virtues of giant page sizes, so you are unlikely to find anyone telling you to "go ahead, listen to the vendor, ignore this exhortation on the same page as above":
Have you tried ALTER DATABASE CALIBRATE?
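(That statement recalibrates the I/O cost model the optimizer uses for the database's dbspaces; run it on an otherwise-quiet server, since it issues test I/O to take its measurements:)

```sql
-- Recalibrate the optimizer's I/O cost model for this database.
-- Best run while no other workload is hitting the server.
ALTER DATABASE CALIBRATE;
```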
answered 21 Jan '15, 08:50