kayot2
January 15th, 2015, 11:32 PM
Hello, I'm a long-time reader of this forum. I need to talk about btrfs, since my friends have no idea what I'm talking about or why I'm so excited.
So I've recently built a real server for my files, streaming, and so on.
Specs, not related, but in case anyone wants to know:
IBM ServeRaid M1015 PCI-E 8-port SAS-SATA Controller Card SAS9220-8i 46C8933 -> Cross Flashed to LSI9211-IT
2 of Crucial 8GB Single DDR3 1600 MT/s (PC3-12800) CL11 Unbuffered ECC UDIMM 240-Pin Server Memory CT102472BA160B
Intel Xeon E3-1220V3 Haswell 3.1GHz LGA 1150 80W Server Processor BX80646E31220V3
SUPERMICRO MBD-X10SLM-F-O uATX Server Motherboard LGA 1150 Intel C224 DDR3 1600
So I was reading about different file systems and whatnot, since I have nine pre-existing 2TB drives, about 80% full, from a Windows server, all formatted NTFS. I wanted to convert them to native Linux file systems. I custom-compiled the kernel to enable aufs with hnotify, which keeps whiteout files from being a problem. It works great, by the way; instructions are here -> http://blog.asiantuntijakaveri.fi/2014/07/adding-aufs-support-to-3.html
So I was going to go with ext4, since it's well supported and has been tested extensively. My eventual plan was that once I'd purchased ten 3TB Green drives (maybe 4TB if the price drops a notch) and run WDIDLE3 on them to disable the head-parking timeout, I would put together a FreeNAS ZFS machine and call it done. My goal was to keep the data available until that day. Besides, the instructions for compiling the kernel were very specific, so I was going to be stuck on 14.04.1 LTS, which means I wouldn't have access to newer implementations. My final plan was to use the 2TB drives, plus whatever else I needed, to keep a backup, with the data living on ZFS in a RAIDZ2. I had to have all the drives before I could start, since a ZFS RAIDZ vdev is fixed at the number of drives it was created with.
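For context, that inflexibility is the sticking point: a RAIDZ2 vdev has to be created with every member disk present, something like the sketch below (pool and device names are made up for illustration):

    # Create a RAIDZ2 pool -- every member disk must exist up front.
    # You can't widen this vdev later; you can only add a whole new
    # vdev (e.g. another complete RAIDZ2 set) to the pool.
    zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde \
        /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk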
I accepted this. Aufs isn't going to make it into the mainline kernel, and recompiling with hnotify was a lengthy procedure. Mhddfs is FUSE-based, which has its own pitfalls. NTFS is old and lacks newer features such as checksums and, in ZFS's case, self-healing in a RAID.
So I kept reading and reading. I remembered hearing about btrfs back in 2012 and decided to check it out.
It was like night and day: dynamic volumes, adding and removing drives without data loss, and now experimental RAID 5 and 6. This is huge, and no one's talking about it.
I can take a drive, convert it to btrfs, put data on it, add another drive, rebalance, add more data, add another drive, rebalance, add more data; if enough space is free I can convert it to RAID 5, add another drive, convert it to RAID 6, add another drive, rebalance, lose a drive (and not replace it), repair, and rebalance.
All of that, and it will still work. In November 2014, scrub and replace support for RAID 5/6 was added. It's a wet dream come to life.
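For anyone who wants to follow along, here's roughly what that whole sequence looks like at the command line. This is a sketch based on my reading and VM experiments, not something I've run on real data; device names and the mount point are placeholders, and you'd want a full backup before trying any of it:

    # Start with a single-drive btrfs file system and mount it.
    mkfs.btrfs /dev/sdb
    mount /dev/sdb /mnt/pool

    # Grow one drive at a time; a balance spreads existing data
    # across the new layout.
    btrfs device add /dev/sdc /mnt/pool
    btrfs balance start /mnt/pool

    # With enough drives and free space, convert data and metadata
    # to RAID 5, then RAID 6, using balance filters.
    btrfs balance start -dconvert=raid5 -mconvert=raid5 /mnt/pool
    btrfs device add /dev/sdd /mnt/pool
    btrfs balance start -dconvert=raid6 -mconvert=raid6 /mnt/pool

    # If a drive dies: mount degraded, drop the missing device,
    # and rebalance onto the survivors.
    mount -o degraded /dev/sdb /mnt/pool
    btrfs device delete missing /mnt/pool
    btrfs balance start /mnt/pool

    # Scrub verifies checksums; replace swaps a failing drive in place.
    btrfs scrub start /mnt/pool
    btrfs replace start /dev/sdd /dev/sde /mnt/pool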
For those who don't know, half of what I just listed is impossible on anything else. Adding RAID 5 and 6 on top of that is just crazy. In my case, instead of waiting until I have all ten disks, I can start a RAID 6 with four, then add disks as they arrive. I don't know if the array stays online during all this, though; that would really be too much. I'm guessing rebuild times are going to be outrageous, and there's a chance I could lose the array at every step. With a full backup, it might be worth trying.
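My version of that plan would start something like this (again a sketch with placeholder device names; whether it's wise is another question):

    # Create a four-drive RAID 6 from the start, for data and metadata.
    mkfs.btrfs -d raid6 -m raid6 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    mount /dev/sdb /mnt/pool

    # As each new disk arrives, add it and rebalance so the stripes
    # widen across all members.
    btrfs device add /dev/sdf /mnt/pool
    btrfs balance start /mnt/pool

    # Keep an eye on devices, space, and profiles while it churns.
    btrfs filesystem show /mnt/pool
    btrfs filesystem df /mnt/pool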
I'm not sure how plausible all of the above is; most of the RAID work has been done recently, so if you want to correct any of it, keep in mind that this is a recent development. I'm currently experimenting in VMware to see how these concepts hold up in real life. I'm definitely excited. The file system also supports out-of-band (asynchronous) deduplication, which is a really nice feature to have with no real downsides to deployment. To finish up, btrfs compression does an early abort: if it can't compress the first 4096 bytes of a file, it skips the file and moves on, saving time in the process. A mount flag can force compression always, but that isn't advisable.
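Compression, for what it's worth, is just a mount option. A quick sketch (compress-force is the "always compress" flag I mentioned, and duperemove is a third-party tool I've seen recommended for the out-of-band dedup):

    # Transparent compression; the early abort skips files whose
    # first bytes don't compress.
    mount -o compress=zlib /dev/sdb /mnt/pool

    # compress-force disables the early abort and compresses everything.
    mount -o compress-force=zlib /dev/sdb /mnt/pool

    # Out-of-band dedup: find duplicate extents and share them via
    # the btrfs dedup ioctl.
    duperemove -dr /mnt/pool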
All in all, a nice file system worth noting.