zephyr707
June 24th, 2011, 12:57 AM
Hello all,
I've been banging my head against this problem for the last couple of hours, and I keep turning up conflicting information wherever I look. Are there any RAID experts out there who can help clear things up for me? I'm trying to figure out the difference between various pieces of RAID terminology: stripe size, chunk size, cluster size, and block size.
From my current understanding of RAID, a stripe is made up of chunks. The chunk size affects various performance characteristics depending on what type of data you're working with and your typical I/O size. The chunk size and the number of drives (the stripe width) determine the stripe size (basically: chunk size * stripe width = stripe size).
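For example, plugging in some made-up numbers (a 64 KiB chunk across 4 data drives), a quick Python sanity check of that formula would look like this:

    # Sanity check of my understanding: stripe size = chunk size * stripe width.
    # All numbers here are made up for illustration.
    chunk_size_kib = 64        # per-disk chunk, set on the RAID controller
    stripe_width = 4           # number of data drives in the array

    stripe_size_kib = chunk_size_kib * stripe_width
    print(stripe_size_kib)     # 256 -> one full stripe is 256 KiB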
A cluster is a collection of blocks, where cluster size is determined at the file system level and block size is determined at the physical hard disk level. To make things confusing, cluster size and block size are sometimes used to mean the same thing...
So I think chunk size is a setting for the RAID controller, and cluster/block size is a setting for the operating system (assuming hardware RAID). But the optimal sizes for these two parameters seem somewhat counterintuitive. For video, it seems like you want a smaller chunk size but a larger cluster/block size; is that correct?
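If I'm understanding the argument for small chunks correctly, the idea is that a single large sequential request (like a video read) gets split across more drives when the chunks are smaller. A rough sketch of that reasoning, again with made-up numbers:

    import math

    # My rough mental model: how many data drives does one large read touch?
    # Numbers are illustrative only.
    io_size_kib = 1024         # e.g. a 1 MiB read from a video file
    stripe_width = 4           # data drives in the array

    for chunk_kib in (64, 256, 512, 1024):
        drives = min(stripe_width, math.ceil(io_size_kib / chunk_kib))
        print(f"{chunk_kib:>5} KiB chunk -> read spans {drives} drive(s)")

So a smaller chunk would mean more drives working in parallel on one request, while a larger cluster/block size would cut filesystem overhead for big files. At least, that's the picture I've pieced together.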
This article (http://www.tomshardware.com/reviews/RAID-SCALING-CHARTS,1735-4.html) from tomshardware.com seems to conflict with this other article (http://www.zdnet.com/blog/storage/chunks-the-hidden-key-to-raid-performance/130) from ZDNet. I think the ZDNet article is correct (smaller chunk size = better performance for large data such as video), but I'm thoroughly confused now after all my research. This PDF from Xyratex (http://www.xyratex.com/pdfs/whitepapers/Xyratex_White_Paper_RAID_Chunk_Size_1-0.pdf) helped illuminate some things, but the terminology is quite confusing, and from a bit of googling around I don't appear to be the only one confused...
Any help would be greatly appreciated,
thanks!
zephyr