
Thread: Home NAS, configuration planning / guidance requested

  1. #1
    Join Date
    Apr 2012
    Beans
    58

    Home NAS, configuration planning / guidance requested

I put Ubuntu as the prefix, but this could easily apply to either Ubuntu Server or Ubuntu Desktop.

It has been some time since I built an Ubuntu NAS system. My last build is about 9 years old (I think it was originally built on 16.04) and has had constant hardware upgrades, but never a fresh build, though it is now running Ubuntu 22.04.x. Because so much time has elapsed, I want to do a brand-new system and modernize the architecture and infrastructure, to make it easier to manage with some of the newer software that has come out.

    Currently, the plan is to have:

• Ubuntu Desktop / Ubuntu Server 22.04 or 24.04 (whichever is available at the time of finishing the build)
• Anything that can be dockerized, will be.
  • Any service that isn't fundamental to the operating system can, I hope, be dockerized - for example SMB/Samba, Plex, hardware monitoring tools, etc. (a compose sketch follows this list)
  • Right now, I have about 20 dockerized services

• Have a 100 TB ZFS pool; debating putting the ZIL on a SLOG device, or an L2ARC, on an Optane drive
• Have an NVMe "cache" drive for the services running on Docker (I have found that with my older NAS, the ZFS pool is not sufficient for their data IO)
• Run the main OS on USB (or I might re-purpose an old 2.5" SSD).
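As an example of how I am splitting the NVMe "cache" drive from the pool, one service from my compose file looks roughly like this. The linuxserver.io image and the mount paths are illustrative, not final:

Code:
    services:
      plex:
        image: lscr.io/linuxserver/plex:latest
        environment:
          - PUID=1000
          - PGID=1000
          - VERSION=docker
        volumes:
          - /mnt/nvme-cache/plex:/config   # metadata/DB on the fast NVMe "cache" drive
          - /tank/media:/media:ro          # bulk media stays on the ZFS pool
        network_mode: host
        restart: unless-stopped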


    Current Hardware is:
    • MSI Z790I Motherboard
    • 13600K
    • 64 GB of RAM (non-ECC)
    • 1x 2TB NVME drive
    • 1x Intel Optane Drive (128GB)
    • 5x WD Red Pro 22TB drives
    • Silverstone SATA Expander NVME Card


With that said, some of the features I am looking to implement that I need advice on:
• Use Guacamole to have a desktop interface if I ever need it / for ease of use when hooking up physical devices to the NAS
  • Ideally one that doesn't attach to the main display, but auto-creates a new session when authorized and authenticated, and rebinds to existing display sessions where possible

• Run a 100TB ZFS pool
  • What is the optimal ZFS configuration nowadays?
  • Being used for photos, large file storage (e.g. PSDs, video editing), PDF documents

• Hardware monitoring
  • Best practices for SMART monitoring, power-efficiency tweaking, fan-speed tweaking, etc.


And these are the things I need help with planning - not necessarily the whole configuration. I have already written the docker compose configuration for several of the services I plan on running/using, but for the services that are going to run on the bare-metal OS, I want to understand how to achieve this and the best way to configure it, whether through the configuration files or new packages that may have been released, or perhaps to learn about configuration options that were not available 7 years ago.


    For ZFS:

    As an example, I have been reading several more recent posts:
    https://forums.servethehome.com/inde...in-2023.40773/
    https://www.reddit.com/r/zfs/comment..._what_are_the/
    https://www.servethehome.com/what-is...es-a-good-one/

And it seems like the idea of having a SLOG for the ZIL is overall better, though I am not entirely sure if there are any performance tweaks or configuration changes I need to make for the intended use case.
The last time I built the ZFS pool above, I got heavily invested in tweaking very specific ZFS settings, but I am not sure if that is still needed nowadays for home NAS use. If I were running a production business-critical system, I would imagine so.
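For reference, from what I have read, attaching these to an existing pool is a one-liner each; the pool name "tank" and the by-id paths below are placeholders. One caveat I picked up: a SLOG only accelerates synchronous writes (NFS, databases), so for a mostly-async SMB/media workload the L2ARC, or simply more RAM, may matter more.

Code:
    # Dedicated log device (SLOG) for the ZIL, e.g. the Optane:
    sudo zpool add tank log /dev/disk/by-id/nvme-INTEL_OPTANE-part1
    # L2ARC read cache on a spare NVMe partition:
    sudo zpool add tank cache /dev/disk/by-id/nvme-Samsung_990_PRO-part2
    # Verify both vdevs show up:
    zpool status tank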

    For Guacamole:

Honestly, I am not sure where to begin. I am aware that Ubuntu can run a desktop environment over a VNC port or equivalent, but I am not sure if there are newer protocols that can be used to also provide sound and better UX capabilities (such as file transfer, copy/paste, etc).

    I have been reading:
    https://guacamole.apache.org/doc/0.9...guacamole.html
    https://www.linuxbabe.com/ubuntu/apa...p-ubuntu-20-04
    https://www.linode.com/docs/guides/i...tu-and-debian/

And honestly a few others; I just don't want to keep posting links, as I am worried it might trip a spam filter or something. What I see are a lot of guides focusing solely on getting the web interface and a VNC or RDP session up and running, and not focusing on the extra or additional features (such as file transfer, copy/paste, or audio). Are there any guides for anything I need to set up on the host itself? (I had planned on running Guacamole in a container.)
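The stack I had sketched out in compose looks roughly like this; the official guacamole/guacamole and guacamole/guacd images are real, but the credentials and port mapping are placeholders, the environment variable names follow the recent image docs (check your release), and the database schema has to be initialized first with the initdb script shipped in the image:

Code:
    services:
      guacd:
        image: guacamole/guacd
        restart: unless-stopped
      db:
        image: postgres:15
        environment:
          POSTGRES_DB: guacamole_db
          POSTGRES_USER: guacamole
          POSTGRES_PASSWORD: change_me
        volumes:
          - ./pg-data:/var/lib/postgresql/data
      guacamole:
        image: guacamole/guacamole
        environment:
          GUACD_HOSTNAME: guacd
          POSTGRESQL_HOSTNAME: db
          POSTGRESQL_DATABASE: guacamole_db
          POSTGRESQL_USER: guacamole
          POSTGRESQL_PASSWORD: change_me
        ports:
          - "8080:8080"
        depends_on:
          - guacd
          - db

As I understand it, file transfer, clipboard, and audio are enabled per-connection in Guacamole's connection settings rather than in compose, and RDP is generally the protocol that supports all three.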

I think the most difficult part here is that I want Guacamole to spin up new desktop environments rather than connect to the main display session, and from what I have read, it seems that most people are connecting to the main session.


    For Hardware Monitoring / Tweaking:
I am honestly trying to keep this very simple. I don't expect to run TIG, ELK, or any other stack initially, but I do want decent hardware monitoring and alerting for basic things like:
    • SMART status / failures
    • Temperature / Overheating Issues
    • Power Consumption tweaking


For SMART tools, I have looked at services like Scrutiny (https://github.com/AnalogJ/scrutiny), and it looks like it would work well, but as I have not been able to test it out yet, I am not sure if it can also send email alerts, hook into push notification services, use webhooks, etc. So I am not entirely sure if I need to actually do anything.
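From skimming the README, it looks like alerting is configured through notification URLs (it wraps the shoutrrr library, so SMTP, webhooks, and common push services should be covered), though I have not verified that yet. The all-in-one "omnibus" image sketch I have penciled in; the host port, paths, and device list are placeholders:

Code:
    services:
      scrutiny:
        image: ghcr.io/analogj/scrutiny:master-omnibus
        cap_add:
          - SYS_RAWIO
        ports:
          - "8081:8080"   # web UI
        volumes:
          - /run/udev:/run/udev:ro
          - ./scrutiny/config:/opt/scrutiny/config
        devices:          # pass through each disk to monitor
          - /dev/sda
          - /dev/sdb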

For temperature monitoring, I was thinking of using Netdata initially, and then switching over to TIG/ELK when I have the time/priority to run it effectively. But is there anything specific that anyone could recommend?

For power consumption tweaking, I used powertop quite some time ago (even before the older NAS build), but I am not sure if it is still maintained or useful. The Arch Linux wiki still seems to document it, but is there anything that needs to be done with it, other than running it with the --auto-tune flag?
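If powertop still works like it used to, --auto-tune does not persist across reboots, so I assume I will need a small oneshot unit again; a sketch (the unit name and path are my own):

Code:
    sudo apt install powertop
    sudo powertop --auto-tune        # applies all "good" tunables once

    # /etc/systemd/system/powertop.service -- re-apply at every boot
    # [Unit]
    # Description=Apply powertop auto-tune
    # [Service]
    # Type=oneshot
    # ExecStart=/usr/sbin/powertop --auto-tune
    # [Install]
    # WantedBy=multi-user.target

    sudo systemctl daemon-reload && sudo systemctl enable powertop.service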


I am sure I am overthinking or over-planning a lot of this, but I am just trying to be thorough in what I am doing and researching, so I can have a fairly stable and long-lasting system that I won't have to break-fix all the time.

Really appreciate any advice offered here; I might have some more questions as I come closer to finalizing the build.

  2. #2
    Join Date
    Mar 2010
    Location
    Been there, meh.
    Beans
    Hidden!
    Distro
    Ubuntu

    Re: Home NAS, configuration planning / guidance requested

    You've made some choices that I'd never make for different reasons.

I'd never use Docker, whose security defaults undermine the best aspects of Linux containers. I'd use LXD/LXC instead. By default, LXC is more secure. https://cheatsheetseries.owasp.org/c...eat_Sheet.html is a start.
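For comparison, standing up an unprivileged LXC container for a service is about this much work (the container and package names are just examples):

Code:
    sudo snap install lxd
    sudo lxd init --minimal                  # accept sane defaults
    lxc launch ubuntu:22.04 files1           # unprivileged by default
    lxc exec files1 -- apt install -y samba  # then configure as usual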

I left Plex to gain more privacy. Jellyfin is the replacement and has much better support for F/LOSS audio files, if that matters to you. The layout for media is basically the same. Jellyfin isn't as polished, but it does easily hook into OTA recording if you have HDHR hardware. I always disliked the Plex playback clients. They were slow and bloated on my systems and didn't play 90% of my audio files. Jellyfin has clients and DLNA support, so my playback devices are Raspberry Pi devices; one supports 4K playback and the other just 1080p. Considering I don't have any 4K screens in the house, it isn't THAT important to support 4K, but the new video hardware I've bought in the last year all supports 4K @ 60 Hz for the day that happens. I've been watching for a 36-42 inch 4K monitor for a few years. We don't have TVs, and our projectors are 720p, 720p, 1080p and 1080p devices. 75 inch screens just seem too small.

    I see no mention of backup storage. Backups are more important than any RAIDx level. Sometimes the only fix for a corrupted RAID system is to wipe it and restore from backups.

Remote desktop - on the same LAN, just use ssh -X. Over the internet, use x2go or another NX-based tool. Avoid RDP and VNC for their poor security. Similarly, I'd never, ever expose a remote desktop through a webserver. That's just asking to be hacked.
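The LAN case really is this simple (the hostname and app are placeholders):

Code:
    ssh -X user@nas          # X11 forwarding; add -C to compress on slower links
    nautilus                 # any GUI app now displays on the local desktop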

    For monitoring, I use munin and SMART. 2 sides of the same need. Because I'm on Ryzen CPUs, temperature monitoring is completely different than for Intel CPUs.

    If you don't need a desktop, don't install one. They just make more software less secure and reduce overall support periods.

    I don't know what a "13600K" is. Is that a Core i3? What are the passmarks? I moved from a 3500 passmark system to a 19500 passmark system to better support transcoding for different clients. I don't have any RAID at all on my VM+NAS.

We don't use Samba, just NFS. NFS is built into the kernel, so it is native-fast to my other clients, especially media clients. All our playback devices are wired. I used to deploy wifi for companies and learned never to use wifi unless there is no other choice. Connecting floors of the house, I use powerline, though I'd happily switch to 2.5Gbps MoCA if I needed more bandwidth; most rooms here have coax already. The drop in powerline performance between rooms and floors is so bad as to not be funny, but it is stable bandwidth, which matters for streaming use.
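A minimal NFS export looks like this; the path and subnet are placeholders, and ZFS users can also just set the sharenfs property per dataset:

Code:
    sudo apt install nfs-kernel-server
    echo '/tank/media 192.168.1.0/24(ro,all_squash)' | sudo tee -a /etc/exports
    sudo exportfs -ra
    # on a client:
    sudo mount -t nfs nas:/tank/media /mnt/media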

    I've had a few WD-Red drives fail over the years. About 4 yrs ago, I stopped buying RED and switched to Black for primary storage. I only use USB storage for backup use, never primary storage. Infiniband, eSATA and SATA storage for primary.

    I don't have as much storage as you, but a respectable 40TB+. About 50% of that is for backups.
I like the idea of ZFS, but still use LVM+ext4. The file systems are limited to 4TB chunks, since that was what I could easily back up using K.I.S.S. methods.

    My 2 physical systems are nearly identical and use less than 50% of the CPU on each, with the intent that when 1 fails, the entire load will run on a single system as replacement/upgrade parts are shipped. Same RAM, same CPU, same on-board iGPU, same libvirt virtual devices setup (bridges), NICs, etc.

For my photos, I have a customized website that I modified for my specific needs. It runs under a virtual machine, not a container. That VM is also my reverse proxy for nearly all the websites I host for internal/external views. For example, I have a nextcloud LXC using the nextcloud snap package. Nextcloud isn't available outside the LAN without using my self-hosted VPN (wireguard). With a remote client on wireguard from anywhere in the world, I have full access, just like being on the LAN. This works for Jellyfin, nextcloud, and about 20 other websites and other servers, like email. I do have an email gateway system that handles all inbound/outbound SMTP, but it doesn't actually hold any email more than a few seconds, or while the main email server is offline for maintenance.

For epub, pdf, and other reference documents, I run Calibre. This hooks into my tablet. The tablet also connects to a read-it-later clone called wallabag for offline reading of old web articles and recipes I've found. Taking the tablet into the kitchen for more complex cooking instructions has been good. My eyesight doesn't work like it did 20 yrs ago, so a phone just isn't large enough.

    I'm not about "new". I'm all about "stable" and "secure".

    I suspect others will have more ideas for your specific needs. Just thought I'd share so you know going in a completely different architecture is fine too.

  3. #3
    Join Date
    Mar 2010
    Location
    USA
    Beans
    Hidden!
    Distro
    Ubuntu Development Release

    Re: Home NAS, configuration planning / guidance requested

My recommendations... I chose MSI for my own...

Look around at MSI's other Z790 boards... I passed on that one. It only supports 96GB of RAM, and has just 4 SATA ports, 3 M.2 Gen 3/4 slots, and only 1 PCIe x16 slot. You say you have 5 x 22TB drives, but that board only has 4 SATA ports... Are you using an HBA in the PCIe x16 slot?

I have an MSI MPG Z790 Edge WiFi. It supports 192GB RAM, 5 M.2 NVMe slots (Gen 4 & 5), 7 SATA ports (really only 6; the 7th is shared w/ NVMe_3), and 2 PCIe x16 slots (Gen 4 & 5).

I have 6 SATA and 9 NVMe drives in it. I have 4 of the NVMes on a quad PCIe M.2 NVMe bifurcation card. ZFS-on-root. I do my SLOG & L2ARC on NVMe.

    Those boards no longer support Intel Optane, as Intel abandoned that technology.

I do a lot of ZFS and have for many years. For a 100TB pool, the rule of thumb is 8GB of base RAM plus 1GB per TB = at least 108GB of RAM... You say that board only supports 96GB...

With 100TB of vdevs for that pool, I would do at least RAIDZ2, possibly RAIDZ3. That's about 90TB of max storage capacity for that pool, and about 72TB before it affects performance. With my tuning test benchmarks, I do ashift = 13 and recordsize = 128K. That gives me the best performance for what I do.
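In zpool terms, that layout is roughly the following; the pool name and by-id paths are placeholders, and the compression/atime settings are common extras I'd add, not part of the tuning above:

Code:
    sudo zpool create -o ashift=13 \
        -O recordsize=128K -O compression=lz4 -O atime=off \
        tank raidz2 \
        /dev/disk/by-id/ata-WDC_RED_PRO_1 \
        /dev/disk/by-id/ata-WDC_RED_PRO_2 \
        /dev/disk/by-id/ata-WDC_RED_PRO_3 \
        /dev/disk/by-id/ata-WDC_RED_PRO_4 \
        /dev/disk/by-id/ata-WDC_RED_PRO_5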

I was trying out Sanoid/Syncoid for backups, but lately have gone back to my own scripts that do the same with ZFS snapshots and ZFS send/receive to other pools, based on what I want to check and keep track of. My own scripts do better for checking and managing storage spaces. Lessons learned.
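The core of those scripts is just snapshot plus incremental send/receive; stripped down to a sketch (the dataset names are placeholders, and real scripts need pruning and error handling on top):

Code:
    #!/bin/bash
    set -euo pipefail
    SRC="tank/photos"                 # dataset to protect
    DST="backup/photos"               # target on another pool
    SNAP="${SRC}@auto-$(date +%Y%m%d-%H%M%S)"

    zfs snapshot "$SNAP"
    # previous snapshot, if any, for an incremental send
    PREV=$(zfs list -H -t snapshot -o name -s creation -d 1 "$SRC" | tail -n 2 | head -n 1)
    if [ -n "$PREV" ] && [ "$PREV" != "$SNAP" ]; then
        zfs send -i "$PREV" "$SNAP" | zfs receive -F "$DST"
    else
        zfs send "$SNAP" | zfs receive -F "$DST"
    fi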
    Last edited by MAFoElffen; December 16th, 2023 at 08:54 AM.

    "Concurrent coexistence of Windows, Linux and UNIX..." || Ubuntu user # 33563, Linux user # 533637
    Sticky: Graphics Resolution | UbuntuForums 'system-info' Script | Posting Guidelines | Code Tags

  4. #4
    Join Date
    Mar 2010
    Location
    Been there, meh.
    Beans
    Hidden!
    Distro
    Ubuntu

    Re: Home NAS, configuration planning / guidance requested

    MAFoElffen does point out something important that I completely missed. Use a quality HBA. That means going with a name brand that is respected. LSI has some SAS HBAs that I've been watching. If you go with consumer SATA, then each SAS connector supports 4 SATA drives.

    When looking for an HBA, I look at what works well and is recommended by the NAS people running BSD-based NAS systems. If it works well with BSD, it almost certainly will work well with Linux.
    "Supermicro AOC-SAS2LP-MV8 Add-on Card, 8-Channel SAS/SATA Adapter with 600MB/s per Channel" and similar have been on my shopping list for some time. Add in some cables and for JBOD connected storage like Linux RAID or ZFS, it will be fantastic.

I have add-on HBAs in my storage systems. Besides the 6 SATA slots built into the motherboard, my case has room for 10 HDDs thanks to using a few 4-HDD-in-3-slot cages. The cages are hot-swap for convenience. They make steel and plastic cages like this. The steel cages aren't hot-swap, but they are cheaper (under $30). I like the plastic ones because they have a caddy system that makes accessing the HDDs simple ... and they have blue lights for each HDD.

  5. #5
    Join Date
    Apr 2012
    Beans
    58

    Re: Home NAS, configuration planning / guidance requested

Apologies if I was unclear: I already have the hardware, and don't have much opportunity to return or change it. The return window is closing soon for some items - the RAM, NVMe, and Intel Optane drive - while the WD Red Pro drives are still within the return period. The CPU, case, and motherboard cannot be returned now. I could try to resell them in the country I am living in (I am sure it would be a few dollars in profit), but let's assume I don't, for the sake of planning - the last thing I need is to upend the motherboard and CPU choices.

The case, which I didn't originally mention, is a Jonsbo N2 - dimensions-wise, it is the only thing that will fit in the space I have to work with without upsetting anyone. So I am limited to 5 drive bays. I know the Jonsbo N3 exists, but when measuring the space, I don't think the N3 would have fit, nor do I have the money to buy 8 drives. This also means I am limited to ITX boards and can't expand to ATX or similar.

Going to try to tackle some of the key things in the posts above, but I am not going to quote everything.

    I left Plex to gain more privacy. Jellyfin is the replacement and has much better support for F/LOSS audio files, if that matters to you.
I have thought about it and looked at it - and could run it in tandem - but most of my stuff is built around Plex. I would consider it though; it is actually commented out in my compose file currently.

    I don't know what a "13600K" is.
    It is an Intel i5 CPU. https://ark.intel.com/content/www/us...-5-10-ghz.html

    I see no mention of backup storage. Backups are more important than any RAIDx level. Sometimes the only fix for a corrupted RAID system is to wipe it and restore from backups.
Agreed. Some of the stuff in the 100TB does not need backups, but the stuff that does goes to an offsite backup system in another country, and also (though limited in backup space) to Google Drive, to stay within the 5TB limits they are beginning to implement on accounts. This is primarily for very specific files and content, like images, PDFs, documents, etc.

    I can always re-rip my music collection or Blu-Ray collection, so I am not worried about the data loss from that. However, photos, and other things will get backed up to offsite/external storage mediums.

    If you don't need a desktop, don't install one. They just make more software less secure and reduce overall support periods.
I am thinking the use case is more for when I want to easily hook up cameras, use software that does require a GUI, or more easily manage the file/folder structure (which the CLI can do, but is clunky for). Honestly, I have thought about installing base Ubuntu Server, and then when/if I feel comfortable or have the need, installing the DE on top.

    Using an HBA in the PCIe X 16 slot
Yes, the plan is to use a Silverstone 5-port HBA adapter in one of the 3 NVMe slots (https://www.tomshardware.com/news/si...-ecs07-adapter). Alternatively, I could use a low-profile PCIe card, but I figured that the NVMe was already in use.

    Any suggestions on that line of thinking (also because of the next quoted piece around the Intel Optane)?

The plan was: Samsung 990 Pro in slot 1, Intel Optane in slot 2, and the Silverstone in slot 3 (top-most slot), plugging the hard drives from the Jonsbo N2 backplane into the HBA ports.


    Those boards no longer support Intel Optane, as Intel abandoned that technology.
When you say this, do you mean that the Intel Optane just won't work, or that the BIOS/UEFI no longer has additional configuration for Optane SSDs? Would I be better off getting another NVMe drive for the ZIL/SLOG? Is there a performance hit with Optane vs. NVMe? I wouldn't be opposed to getting an NVMe-based drive, but it seemed like everything I read on ZIL/SLOG was based on Optane.

    for 100TB of pool,that would be 8GB based ram and 1GB per TB = for at least 108GB of RAM
The board in question only has 2 RAM slots, so at this point I have maxed it out with what I can find, which is 2 x 32GB sticks, totalling 64GB.

    I do ashift = 13 and recordsize = 128K
This is similar to what I recall doing with my old NAS setup, so if that is still the generally approved recommendation, I will keep at it. Thanks very much for this bit!

    MAFoElffen does point out something important that I completely missed. Use a quality HBA.
I have an enterprise-grade LSI HBA in my old server that I could repurpose, but wasn't planning on using it. I can't recall the exact model at the moment, but it was one of the ones commonly found on eBay - a 9620 or something like it.


    EDIT:

The reason for wanting to avoid the hardware in the current NAS is that, after such a long period of time, it has started to have hardware failures - and I don't have the time, energy, or proximity to invest in troubleshooting the issue (it is running ESXi as the host OS with Ubuntu Server as the guests). While I could take some hardware from there, if it ends up causing issues in this new build I won't be able to do much other than take it out, so I don't want to count on any of it as usable hardware.
    Last edited by Scrumps; December 16th, 2023 at 02:22 PM.

  6. #6
    Join Date
    Mar 2010
    Location
    Been there, meh.
    Beans
    Hidden!
    Distro
    Ubuntu

    Re: Home NAS, configuration planning / guidance requested

    You'll want to keep up with VMware ESXi announcements. They completely changed their support model earlier this week. They tried something massive in 2010 too. We stopped using VMware stuff the last time they did something like this and actively moved our clients off VMware to save them hassle and money.

OTOH, if a client can't pay $12K for VMware, how will they ever pay us? The way clients think, F/LOSS is free, and the knowledge to set it up is somehow less valuable, so it should pay less. VMware architects were earning $150K+ back then; F/LOSS architects more like $90K at the time. Sad, but true.

    I haven't been in the job market in over a decade, does VMware skills return a premium salary still?

    When we moved to kvm/qemu + libvirt, all sorts of things just worked. One of the best decisions I've ever made. I started in virtualization in the late 1990s. Tried all the options until around 2012 when KVM/QEMU became enterprise ready and just worked. Never looked back. I certainly don't miss the VMware bills nor the $1000 addon software to make common things workable in VMware.
    Last edited by TheFu; December 16th, 2023 at 03:19 PM.

  7. #7
    Join Date
    Apr 2014
    Location
    Tucson AZ, USA
    Beans
    1,107
    Distro
    Ubuntu

    Re: Home NAS, configuration planning / guidance requested

    I am thinking the use case is more for when I want to easily hook up cameras, or use software that does require a GUI, or more easily manage the file/folder structure (that CLI could do - but, is clunky with). Honestly, have thought about installing the base Ubuntu Server, and then when/if I feel comfortable or have the need - then installing the DE on top.
I don't know what software you intend to use for this. I put together a Zoneminder docker. I don't know if it's any good of course, but it works for me. The entire GUI is a web interface. I don't use it anymore and haven't for a while, so I'm not 100% sure it will still build properly, but it's a place to start. This way you don't need the DE, at least for cameras and such.

    https://gitlab.com/jmgibson1981/home...ref_type=heads

  8. #8
    Join Date
    Mar 2010
    Location
    Been there, meh.
    Beans
    Hidden!
    Distro
    Ubuntu

    Re: Home NAS, configuration planning / guidance requested

Nobody ever needs a DE. A window manager (WM) is all that is needed to have a GUI on Linux. Often, using just a WM will save 500MB of RAM.
Also, if you plan to use remote access to manage the system, you can use the Cockpit webapp (actively worked on by Canonical and Red Hat) on the LAN, or even x2go as a remote desktop over a secure NX connection from anywhere in the world.
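Cockpit is in the Ubuntu repos and takes about a minute to stand up:

Code:
    sudo apt install cockpit
    sudo systemctl enable --now cockpit.socket
    # then browse to https://<nas-ip>:9090 from the LAN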

  9. #9
    Join Date
    Mar 2010
    Location
    USA
    Beans
    Hidden!
    Distro
    Ubuntu Development Release

    Re: Home NAS, configuration planning / guidance requested

    For ZFS performance tuning, search this Forum on my user name. For backups, search on TheFu's username.

Yes, you'll want to keep that large storage pool on the same HBA. Distributing it across multiple buses sometimes adds a bottleneck if you try to stripe them in a RAID array. I really like and believe in ZFS RAIDZ. It's done me well, integrity- and performance-wise.

I mean that new boards don't support Intel Optane any more. Intel abandoned Intel Optane as a project in January 2021. Micron sold off the 3D XPoint fab immediately after that in 2021. Any chance of buying those Optane modules now is quickly drying up. Boards made after 2021 stopped supporting that offering as a new technology.

You really need to read the MSI owner's manual in detail. I have with mine. MSI and ASUS tend to hide details about how their PCIe and SATA buses are laid out, and whether any of them are shared across the same bus, where using an M.2 or SATA port in a certain slot turns off other things. That also goes for some M.2 slots only recognizing SATA instead of PCIe M.2 NVMe drives. The tech on that is still new, so on some models they ended up doing some weird things.

96GB is going to be tight for that 100TB pool, especially if you are storing 4K media. I would tune your ARC to a 48GB max limit. The NAS forums think that it automatically stops at half of memory. That is not true. Then do a 1TB L2ARC. I do a 1TB SLOG. I do both of those on one single 2TB NVMe, with two partitions. The people in the NAS forums will try to tell you that a SLOG will not use more than 16GB. I don't know where they got that number. I ignored that and tested it on my own. (Show me the data!) It not only does use it; the performance kept climbing until the 1TB mark, where it crested and started to fall on sizes larger than that. Disk caches will help greatly with your memory.
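Capping the ARC is one module parameter, with the value in bytes (48 * 1024^3 = 51539607552):

Code:
    # persistent: takes effect after update-initramfs and a reboot
    echo "options zfs zfs_arc_max=51539607552" | sudo tee /etc/modprobe.d/zfs.conf
    sudo update-initramfs -u
    # or immediately, at runtime:
    echo 51539607552 | sudo tee /sys/module/zfs/parameters/zfs_arc_max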

There are a few tricks I have for flushing the caches if you run into a memory problem...

You can still do a lot with what you have. Pay attention to what you have on a Gen 3 bus. That board has both Gen 3 and Gen 4. Your HDDs will do fine on Gen 3. Keep your NVMes on Gen 4.

    I don't claim to know everything. I have just been exposed to a lot of things. Lots of those things have been self-inflicted. I've only worked with ZFS since 2005. I tend to want to see things for myself, sometimes to see what really happens when I do something. That has been fun!
    Last edited by MAFoElffen; December 16th, 2023 at 06:15 PM.

    "Concurrent coexistence of Windows, Linux and UNIX..." || Ubuntu user # 33563, Linux user # 533637
    Sticky: Graphics Resolution | UbuntuForums 'system-info' Script | Posting Guidelines | Code Tags

  10. #10
    Join Date
    Apr 2012
    Beans
    58

    Re: Home NAS, configuration planning / guidance requested

Quote Originally Posted by MAFoElffen View Post
For ZFS performance tuning, search this Forum on my user name. For backups, search on TheFu's username.

Yes, you'll want to keep that large storage pool on the same HBA. Distributing it across multiple buses sometimes adds a bottleneck if you try to stripe them in a RAID array. I really like and believe in ZFS RAIDZ. It's done me well, integrity- and performance-wise.

    Quote Originally Posted by MAFoElffen View Post
I mean that new boards don't support Intel Optane any more. Intel abandoned Intel Optane as a project in January 2021. Micron sold off the 3D XPoint fab immediately after that in 2021. Any chance of buying those Optane modules now is quickly drying up. Boards made after 2021 stopped supporting that offering as a new technology.
Hm, yeah, I realized that some special features may be disabled, but I figured the basic NVMe functionality of the drive, without the special software, would still help there.

Quote Originally Posted by MAFoElffen View Post
You really need to read the MSI owner's manual in detail. I have with mine. MSI and ASUS tend to hide details about how their PCIe and SATA buses are laid out, and whether any of them are shared across the same bus, where using an M.2 or SATA port in a certain slot turns off other things. That also goes for some M.2 slots only recognizing SATA instead of PCIe M.2 NVMe drives. The tech on that is still new, so on some models they ended up doing some weird things.

Unfortunately, I have. The only things I can't figure out with this board on the PCIe side are how it is laid out and whether it supports splitting or bifurcation at all.
    https://www.msi.com/Motherboard/MPG-.../Specification

    Doesn't list anything, nor does: https://download.msi.com/archive/mnu...0IEDGEWIFI.pdf
    The only bit is this:


    SLOT 1x PCI-E x16 slot
    PCI_E1 Gen PCIe 5.0 supports up to x16 (From CPU)

But that doesn't go into how the lanes work with the rest of the system, or whether there are limitations when everything is enabled/used.

    Quote Originally Posted by MAFoElffen View Post
96GB is going to be tight for that 100TB pool, especially if you are storing 4K media. I would tune your ARC to a 48GB max limit. The NAS forums think that it automatically stops at half of memory. That is not true. Then do a 1TB L2ARC. I do a 1TB SLOG. I do both of those on one single 2TB NVMe, with two partitions. The people in the NAS forums will try to tell you that a SLOG will not use more than 16GB. I don't know where they got that number. I ignored that and tested it on my own. (Show me the data!) It not only does use it; the performance kept climbing until the 1TB mark, where it crested and started to fall on sizes larger than that. Disk caches will help greatly with your memory.

Any suggestions on the NVMe drives? Such as partitioning under the total capacity (over-provisioning) so that you don't hit any limitations with NVMe read/write - or does that matter?

What are your drive writes/endurance looking like after a year, 2 years, 3 years, etc.? That was one of the initial reasons I went with the Optane, since it is rated for 1200 TBW, compared to the TBW ratings of the 990 Pros.



    Quote Originally Posted by MAFoElffen View Post
There are a few tricks I have for flushing the caches if you run into a memory problem...

You can still do a lot with what you have. Pay attention to what you have on a Gen 3 bus. That board has both Gen 3 and Gen 4. Your HDDs will do fine on Gen 3. Keep your NVMes on Gen 4.

    I don't claim to know everything. I have just been exposed to a lot of things. Lots of those things have been self-inflicted. I've only worked with ZFS since 2005. I tend to want to see things for myself, sometimes to see what really happens when I do something. That has been fun!
    I might ask you for those, when I am closer to assembling the system.

Yeah, that is the plan with the current board: the HDDs on the M2_2 slot, with the rest on M2_1 and M2_3.

    Fully understand, I mean, everything is a path of learning.

