
I found this link http://www.principledtechnologies.com/clients/reports/Dell/PERC_H700_CacheCade.pdf on the web, which shows a high performance gain for database usage with the Dell RAID controller PERC H700. The report claims it

> increased database performance by a staggering 76 percent.

The key is to use a relatively small SSD as a read cache for the underlying RAID system. While this idea seems good for read-intensive systems, it will not perform as well if you push gigs of data into your system.
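As a rough back-of-envelope (a hedged sketch; the latencies below are my own assumptions, not numbers from the report), the benefit of such a read cache is driven almost entirely by the hit ratio:

```python
# Illustrative sketch: assumed latencies, not measured values from the report.
HDD_READ_MS = 10.0   # typical random read on a 7200 rpm disk
SSD_READ_MS = 0.2    # typical random read on an SSD

def avg_read_ms(hit_ratio):
    """Average read latency with an SSD read cache in front of the HDD array."""
    return hit_ratio * SSD_READ_MS + (1 - hit_ratio) * HDD_READ_MS

for h in (0.0, 0.5, 0.8, 0.95):
    print(f"hit ratio {h:.0%}: {avg_read_ms(h):.2f} ms per read")
```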

I think the SSD will be a single point of failure in the tested scenario.

Any thoughts on this?

asked 23 May '11, 10:14 by Thomas Duemesnil
edited 23 May '11, 10:16

Based on my reading of their testing, pushing gigs of data into the system would at worst make it perform as well (or as badly) as it did before configuring the CacheCade drive. What really drives the performance is not merely the use of SSD as a technology, but the intelligent placement of frequently read data on the SSD, as well as the use of the SSD as an extension of the card's onboard cache.
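To make that concrete, here is a minimal toy sketch of a frequency-based placement policy (my own assumption of how such a cache might behave; the actual CacheCade algorithm is proprietary and not described in the report):

```python
from collections import Counter

class ReadCache:
    """Toy model: promote the most frequently read blocks onto a small SSD cache."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.counts = Counter()   # read frequency per block
        self.cached = set()       # blocks currently held on the SSD

    def read(self, block):
        self.counts[block] += 1
        if block in self.cached:
            return "ssd hit"
        # Promote hot blocks, evicting the least frequently read one.
        if len(self.cached) < self.capacity:
            self.cached.add(block)
        else:
            coldest = min(self.cached, key=lambda b: self.counts[b])
            if self.counts[block] > self.counts[coldest]:
                self.cached.remove(coldest)
                self.cached.add(block)
        return "hdd read"

cache = ReadCache(capacity_blocks=2)
for b in [1, 1, 1, 2, 3, 1]:
    print(b, cache.read(b))   # repeated reads of block 1 become SSD hits
```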

I agree that the SSD would be a single point of hardware failure, even in the scenario with two SSDs: there was no discussion of mirroring or of allocating the drives to their own array, and it seemed that two SSDs simply presented the card with a larger CacheCade volume. By the same token, the card itself is a single point of failure, so I do not know whether the SSD as a single point makes or breaks a decision based on the performance gains. At that point it really depends on your environment and requirements which other steps need to be taken to avoid those single points (e.g. clustering two servers with the same card/drive configuration, having a dedicated spare available, etc.).

Thanks for pointing it out.

answered 23 May '11, 16:26 by Siger Matt

Good point. The card is always a single point of failure.

(24 May '11, 09:05) Thomas Dueme...

Instead of using the SSD as a cache for a larger RAID system, for a database and its storage needs I would just go for the SSD alone. If you feel you need the (R)edundancy, then just use two SSDs in a RAID 1 configuration.

Especially if you think of the SSD just as a cache, remember that hard disks usually have a RAM cache of at least 8 MB (today often 32 MB). The RAID controller often comes with a RAM cache of its own (256 or 512 MB), and if a battery pack is installed it usually uses that RAM cache for caching writes as well. So with all this RAM cache, what can a "3rd level" cache based on SSD improve?
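Whether a third level helps depends entirely on the hit rate the SSD achieves on the reads that miss the RAM caches. A hedged sketch with made-up numbers (all hit rates and latencies below are assumptions, not measurements):

```python
# Illustrative only: assumed hit rates and latencies.
RAM_HIT, RAM_MS = 0.30, 0.05   # assumed hit rate/latency of drive + controller RAM caches
SSD_HIT, SSD_MS = 0.50, 0.20   # assumed SSD cache hit rate on the RAM-cache misses
HDD_MS = 10.0                  # assumed HDD random read latency

without_ssd = RAM_HIT * RAM_MS + (1 - RAM_HIT) * HDD_MS
with_ssd = (RAM_HIT * RAM_MS
            + (1 - RAM_HIT) * (SSD_HIT * SSD_MS + (1 - SSD_HIT) * HDD_MS))
print(f"avg read without SSD cache: {without_ssd:.2f} ms")
print(f"avg read with SSD cache:    {with_ssd:.2f} ms")
```

The RAM caches are tiny compared to a database working set, so the question is simply how much of what misses them a few tens of GB of SSD can catch.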

answered 24 May '11, 03:15 by Martin

So you say you would use an SSD for real database storage? I've heard SSDs still suffer from poor reliability when very frequent writes take place. But my information may really be outdated...

Cf. Jason's (quite old) blog article.

(24 May '11, 04:37) Volker Barth

@Volker if you use RAID 1 with two SSD drives and configure a hot standby drive, you should be safe. But you don't know how often you will need to change the drives.
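A rough, hedged estimate of what the mirror plus hot standby buys you (the failure rate and rebuild time below are pure assumptions, not vendor figures):

```python
# Crude model: the mirror is lost only if the second drive fails while the
# hot standby is still rebuilding. Constant failure rate assumed.
AFR = 0.02            # assumed annualized failure rate per SSD
REBUILD_HOURS = 2.0   # assumed time to rebuild onto the hot standby

p_partner_fails_during_rebuild = AFR * (REBUILD_HOURS / (365 * 24))
# Either of the two drives can be the first to fail.
p_array_loss_per_year = 2 * AFR * p_partner_fails_during_rebuild
print(f"approx. chance of losing the mirror in a year: {p_array_loss_per_year:.2e}")
```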

(24 May '11, 09:10) Thomas Dueme...

@Volker I was given similar information a few years ago. I was told that the physical characteristics of SSDs vs. platter-based HDs meant that a single block on an SSD would fail after far fewer total writes than a block on a platter-based HD could tolerate. I do not have any data to back that up, and this was a few years ago, before SSD production was mainstream. I would imagine the technology has improved a great deal.

Jason's last point seems to address this:

> Finally, depending on the form factor of the device, you might also consider moving to an SSD drive. These are solid state disks like flash cards, but have lifetimes that match traditional hard disk drives because they have lots of smarts to do things like wear-levelling for the disk. However, the price and form factor does reflect this.

I am now wondering if the additional drive smarts, and perhaps smarts on the RAID card, would handle these concerns.
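For illustration, a toy sketch of the wear-levelling idea Jason mentions (my own simplification; real controllers are far more sophisticated):

```python
class WearLeveler:
    """Toy wear-levelling: map each logical write to the least-worn physical
    block, so no single flash block absorbs all rewrites of a hot logical block."""

    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks   # wear per physical block
        self.mapping = {}                      # logical -> physical block

    def write(self, logical_block):
        physical = min(range(len(self.erase_counts)),
                       key=self.erase_counts.__getitem__)
        self.erase_counts[physical] += 1
        self.mapping[logical_block] = physical

wl = WearLeveler(8)
for _ in range(80):
    wl.write(0)                 # hammer one "hot" logical block
print(wl.erase_counts)          # wear is spread evenly: [10, 10, ..., 10]
```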

(24 May '11, 09:37) Siger Matt

@Martin - I believe the improvements come mainly from the card placing the most frequently accessed data on the SSD. The performance benefit is measured against a similar setup with no SSDs involved. I agree that an all-SSD setup may also show excellent performance, but this demonstration targeted this specific feature of that specific card, which itself seems to be aimed at a specific niche of customers who want better database performance without purchasing an all-SSD setup.

(24 May '11, 09:42) Siger Matt

Current SSDs are quite intelligent about reducing the problem of writing to hot spots. Have a look at this (also already "old") information, which should give you more confidence: http://www.storagesearch.com/ssdmyths-endurance.html and a very good paper from 2009 about the internals of SSDs: http://www.wdc.com/WDProducts/SSD/whitepapers/en/NAND_Evolution_0812.pdf

If you are concerned about reliability, the simple answer is: use a bigger SSD and you gain reliability (lifetime).
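The "bigger SSD, longer lifetime" arithmetic as a hedged back-of-envelope (the cycle count and daily write volume below are assumptions, and write amplification is ignored):

```python
# With wear-levelling, total write endurance scales with capacity, so doubling
# the drive roughly doubles its lifetime under a fixed write load.
PE_CYCLES = 10_000        # assumed program/erase cycles per cell (MLC-era figure)
DAILY_WRITES_GB = 50      # assumed database write volume per day

def lifetime_years(capacity_gb):
    total_write_gb = capacity_gb * PE_CYCLES
    return total_write_gb / DAILY_WRITES_GB / 365

for cap in (64, 128, 256):
    print(f"{cap} GB SSD: ~{lifetime_years(cap):.0f} years of writes")
```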

(24 May '11, 10:12) Martin

@Martin, @Thomas and @Siger: Thanks for updating my SSD knowledge, particularly via the articles Martin has pointed to.

(24 May '11, 10:41) Volker Barth