I'm trying to get to grips with the very varied performance we are seeing when rebuilding large databases (i.e. 50-200 GB). We are seeing order-of-magnitude differences, with apparently much less capable hardware outperforming the more capable, even when virtualisation is taken out of the equation.
Logically I would have thought that the speed of the process would be limited by one of processor, disk, or memory. However, on the face of it we have systems where the processors are barely ticking over, the disk read/write speed is far below what can be seen in straightforward disk operations (i.e. copying files between the same disks, or a SQLA backup), and plenty of RAM (i.e. more than the size of the database) is available. The problems seem more acute where SANs are involved - and these are enterprise SANs (EMC etc.) connected by Fibre Channel.
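To put a number on the "disk is far below what straightforward operations achieve" comparison, a quick sequential-throughput probe run on the same volume gives a baseline to compare against the MB/s the rebuild actually sustains. This is only a sketch I put together for illustration (file size, block size, and the helper name are my own choices, not anything from the product); note the read figure is optimistic because the OS page cache will serve much of it.

```python
# Rough sequential write/read throughput probe for the volume holding the
# database files. Compare its MB/s with what the rebuild achieves.
import os
import tempfile
import time

def sequential_throughput_mb_s(size_mb=64, block_kb=64):
    """Write then read size_mb of data in block_kb chunks; return (write, read) MB/s."""
    block = os.urandom(block_kb * 1024)
    blocks = (size_mb * 1024) // block_kb
    fd, path = tempfile.mkstemp()          # put tempdir on the volume under test
    try:
        t0 = time.perf_counter()
        with os.fdopen(fd, "wb") as f:
            for _ in range(blocks):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())           # force the data to the device
        write_s = time.perf_counter() - t0

        t0 = time.perf_counter()
        with open(path, "rb") as f:
            while f.read(block_kb * 1024): # sequential read back (cache-assisted)
                pass
        read_s = time.perf_counter() - t0
        return size_mb / write_s, size_mb / read_s
    finally:
        os.remove(path)

w, r = sequential_throughput_mb_s()
print(f"write {w:.0f} MB/s, read {r:.0f} MB/s")
```

Running it once on local disk and once on the SAN-backed volume would at least show whether the raw sequential path differs, before blaming the rebuild process itself.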
Can anyone suggest what metrics would be worth looking at, or give some insight into the internals of the process that might help? I can get detailed stats, but so far nothing obvious is identifying the bottleneck. I'm beginning to wonder about things like latency and block size - I notice that dbbackup now has a block-size tuning facility.
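On the latency/block-size hunch: per-request latency is exactly what a SAN path tends to add, and it shows up as small-request throughput collapsing even though large sequential copies look fine. A rough way to see this (my own illustrative sketch, not a product tool; the file size and block sizes are arbitrary, and on a real SAN you would want a file larger than cache, or direct I/O, to defeat caching):

```python
# Probe per-request latency at different read sizes against a scratch file.
# If small requests cost nearly as much as large ones, request latency
# (not bandwidth) is the likely bottleneck.
import os
import random
import tempfile
import time

def random_read_latency_ms(path, block_bytes, requests=200):
    """Average milliseconds per random read of block_bytes from path."""
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        t0 = time.perf_counter()
        for _ in range(requests):
            f.seek(random.randrange(0, size - block_bytes))
            f.read(block_bytes)
    return (time.perf_counter() - t0) / requests * 1000

# Build a scratch file on the volume under test, then sweep block sizes.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(os.urandom(16 * 1024 * 1024))
for kb in (4, 64, 512):
    print(f"{kb:>4} KB: {random_read_latency_ms(path, kb * 1024):.3f} ms/request")
os.remove(path)
```

If the 4 KB figure on the SAN is an order of magnitude worse than on local disk while the 512 KB figure is comparable, that would point at round-trip latency, and would explain why a larger block size in dbbackup helps.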
v22.214.171.1243 with v16 and v11 format databases.
asked 18 Jan '16, 14:40