The ASUS X99 is ATX and has all the ports you need for expansion down the road...
"Concurrent coexistence of Windows, Linux and UNIX..." || Ubuntu user # 33563, Linux user # 533637
Sticky: Graphics Resolution | UbuntuForums 'system-info' Script | Posting Guidelines | Code Tags
In my haste I confused it with a different Asus board and completely forgot that the Asus X99-A and i7-5900K I have obtained / ordered don't come with onboard graphics... oops, lol. No problem.
I'm sure I can either find a graphics card, or borrow one that is plugged into a system I already have (the Nvidia GeForce GTX 10XX Founders Edition comes to mind, since that system can fall back to its onboard graphics if need be), to validate that the motherboard / processor / PSU are in good working order when the items arrive, before installation into the case. I will also need to pick up some RAM (DDR4; most of my existing systems have DDR3) and a CPU cooler before testing the components.
I did go to the PCPartPicker web site to see what the estimated wattage draw might be with the maximum number of drives, etc., just to ballpark it. The estimate came out to roughly 751 watts, so the 1000-watt PSU I picked up should be good to go. I just wish the site did not have SAS drives greyed out, which forced me to use SATA drives for the estimate.
Once the items test good to go, I plan on running the operating system (Ubuntu Server 22.04.4 LTS) on the M.2 NVMe slot the board provides, separate from the ZFS pools, which will reside on SAS/SATA drives.
I was looking for an NVMe drive that is as small as possible in capacity and cost.
Now here is where I have an actual question: is there any brand of NVMe I should avoid?
I do understand that the OS will only need roughly < 2 GB of space, so I was thinking something in the 32-64 GB range would be more than enough.
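For what it's worth, here is a quick way to sanity-check how much space an Ubuntu Server install actually uses; run it on any existing install, and expect the numbers to vary by system:
Code:
# Show overall usage of the root filesystem
df -h /
# Break down which top-level directories use the space (sudo so everything is readable)
sudo du -xh --max-depth=1 / | sort -h | tail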
Per the manual, page 1-22 states the following about that M.2 socket:
"This socket allows you to install an M.2 (NGFF) SSD module. .....This socket supports M Key and type 2242/2260/2280/22110 storage devices......" Just in case anybody needs specifics in order to answer the question.
In closing, like I said, there is really only one question: is there a brand of NVMe to avoid?
And by that I mean a brand that just doesn't work well with Linux, or has a higher-than-normal failure rate in your individual experience.
Last edited by sgt-mike; April 9th, 2024 at 12:25 PM.
I did find this in the correct manual. I originally downloaded the manual for the Asus X99-A II; the correct one is for the Asus X99-A.
"M.2 Support*
This motherboard features the M.2 slot, which shares bandwidth with PCI Express 3.0 x4 slot
to speed up data transfer up to 32 Gb/s. This helps enhance the performance of your SSD
(Solid State Drive) that is dedicated only to the operating system.
* Supports PCIe Mode only "
So, if I am reading and comprehending correctly, I can only use one at a time: the M.2 slot or the PCI Express slot it shares bandwidth with (which, if I understand the manual, is also referred to there as U.2).
The great thing about typing and posting all of this is that it lets me go back, read it over, and realize what should be obvious.
I had posted two NVMe drives and then deleted them; they were PCIe 4.0 x4 drives, and if I had not gone back over this I probably wouldn't have caught it until after I had ordered one.
My board has a PCIe 3.0 x4 slot.
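Once a drive is installed, the negotiated PCIe generation and lane count can be confirmed from Linux (a Gen4 drive would still work in this slot, it just links at Gen3 speed). A sketch; the PCI address shown is only an example, so use whatever address the first command reports:
Code:
# Find the PCI address of the NVMe controller
lspci | grep -i 'non-volatile memory'
# Show maximum vs. negotiated link speed/width (8 GT/s = PCIe 3.0, 16 GT/s = PCIe 4.0)
sudo lspci -vv -s 02:00.0 | grep -E 'LnkCap|LnkSta'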
OK back to searching......
For a PCIe 3.0 x4 NVMe...
1. Corsair Force MP500 120 GB M.2-2280 PCIe 3.0 X4 NVME Solid State Drive
Didn't find an Amazon listing.
TechPowerUp review: https://www.techpowerup.com/228667/c...nvme-pcie-ssds
2. Intel Optane P1600X 58 GB M.2-2280 PCIe 3.0 X4 NVME Solid State Drive
https://www.amazon.com/Intel-Optane-...05&sr=8-1&th=1
Tom's Hardware review: https://www.tomshardware.com/news/in...ane-ssd-p1600x
3. HP EX900 120 GB M.2-2280 PCIe 3.0 X4 NVME Solid State Drive
https://www.amazon.com/HP-EX900-Inte...s%2C375&sr=8-1
TechPowerUp: https://www.techpowerup.com/ssd-spec...00-120-gb.d417
I like what I read about the Intel Optane: it is not the fastest, but the latency and caching are attention-grabbing, and I like the cost (about $15.00). I'm just not sure if it's a good choice. After writing this part I found this white paper from Intel addressing that exact drive:
https://www.google.com/url?sa=t&sour...uSgXf0hCsbYw5r
I think I will go this route UNLESS sage advice says something like "ohh, it won't work", because reading that paper tipped me over the edge. I am also keeping in mind that marketing departments write those papers, not the IT guys doing the building and developing.
Well, it looks like my small campfire got peed on (only because the chipset predates the Optane; not a big deal):
https://community.intel.com/t5/Intel...p/285396#M1867
Last edited by sgt-mike; April 9th, 2024 at 12:27 PM.
Well, you are receiving this (keep this manual for your reference): https://dlcdnets.asus.com/pub/ASUS/m...?model=x99a_ii
This Silicon Image Orion Expansion card is what Dell uses for their Servers and Desktops to add a DVI-D port to their computers (only $10):
https://www.ebay.com/itm/23426631091...EaAjU_EALw_wcB
Dang... Use that one as an example. It comes with a low-profile slot plate; you would need a high-profile DVI-D plate... Wait. Here is a high-profile one for $49. (You could still shop around, now that you know what you are looking for.)
https://www.ebay.com/itm/32577960055...caApYsEALw_wcB
Let's see if I still have one of the Dell PN's...
EDIT: Here you go. Dell branded, high-profile, $10:
https://www.ebay.com/itm/33459618276...YaAhxWEALw_wcB
Your PCIe x16-3 slot is only x4 lane... Put it in that slot. You wouldn't be using that x4 slot for a storage controller anyways. LOL
It will see any Intel i915-based GPU and output that video through its port. That way you are using the iGPU video from your CPU... (It's usually used to add a video port.)
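As a quick check once a card like that is installed, something along these lines will show which display devices the system sees and which kernel driver (i915, nouveau, nvidia, etc.) has bound to them:
Code:
# List VGA/display adapters with their kernel drivers
lspci -nnk | grep -iA3 'vga\|display'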
Last edited by MAFoElffen; April 9th, 2024 at 02:10 AM.
"Concurrent coexistence of Windows, Linux and UNIX..." || Ubuntu user # 33563, Linux user # 533637
Sticky: Graphics Resolution | UbuntuForums 'system-info' Script | Posting Guidelines | Code Tags
------------------Update Apr 13th 2024---------
Conducted testing of the motherboard/CPU components before the case's arrival and installation. Everything tested good to go.
I used the GeForce GTX 10XX GPU for testing only, just to get into the BIOS screen. Now to get the video card / HBA that you pointed out and dedicate them to that motherboard.
As well as populating the board with the maximum RAM after a BIOS update.
-------------------------Added Apr 15th--------------
Did a little more research into the motherboard and found an interesting detail about the M.2 slot, which is sort of mentioned in another post on here: https://ubuntuforums.org/showthread.php?t=2496339
Going through the PCIe x16 slots in the manual, it seems slot 4 (x16_4) shares bandwidth with the M.2 slot and is disabled when the M.2 slot is utilized.
So if I use the M.2 slot as a boot device, as I had originally planned, I lose one x16 slot; or, if that slot is needed, I could shift the boot device to a SATA SSD via the 10 onboard SATA connections.
This isn't really a problem, as I could configure it to boot from either; I just need to plan.
So: I install an LSI SAS controller (LSI 9300-16i, 16 port) in the x16_1 slot and another LSI 9300-16i in the x16_2 slot, which leaves slot x16_3 for the video card/GPU. If an NVMe drive is installed, then x16_4 (which is actually x4) is disabled.
The two HBAs will give me 32 SAS ports, plus I have 10 SATA ports unused.
Alternatively I could pair a 4-port HBA with the 16-port HBA, which would give 20 SAS ports plus the 10 SATA ports. Either way that is more than the case can house, so there isn't a downside. These configurations only leave me with two PCIe x1 slots available.
Or I could boot from a SATA port with a drive similar to this: https://www.ebay.com/itm/20425432266...Bk9SR8q3iJ7bYw
That would mean I could put the video card in the x16_4 slot, freeing up one x16 slot in the HBA + video/GPU configuration listed above.
Either way of booting Linux, both options are still faster than a spinning SATA drive.
Thinking this through as I type, I'm wondering if the safest bet wouldn't be the enterprise SATA SSD configuration versus the NVMe M.2, even though the NVMe would be blistering in speed.
Remember, I'm talking about boot devices here, not the ZFS vdev drives.
Thoughts? Which way would you lean?
P.S. I'm not whining over the loss of the x16 slot, as I honestly don't think it will affect me down the road.
Last edited by sgt-mike; April 16th, 2024 at 05:23 AM.
Just an update...
Life has put a temporary hold on the completion of this NAS, BUT the project NAS is now trudging forward.
I had to do a bit of repair on the case, then installed the PSU (1000 watt), the motherboard, a missing I/O shield I located, and the NVMe drive (256 GB, for the base operating system).
As of this writing I'm awaiting the CPU cooler's arrival, which should be here in the next 48 hours as it has been shipped.
Then I can install the operating system; after that, add another 200mm fan to the case, add the missing 3.5" drive caddy, increase the existing 4 GB of RAM to 64 GB, and then populate the first array with drives. I'm still torn between SATA and SAS for the first array (zpool). I'm leaning toward 5 drives, but I have not settled on 4 TB each or going with 8 TB each; I really like the idea of 8 TB each.
Reality (time/cost, mostly time) will probably dictate 4 TB SATA drives for the first zpool, but a SAS ZFS array will be installed into a different zpool.
UNLESS....
As a thought, in the meantime I could rob the Universal Media Server's drives and enclosure and set them up to practice with ZFS. That system is not in use right now; I replaced it with a separate headless Plex server for my media, which is performing better with the differing TVs in the household. (Even that system will be updated to something faster than a 2nd-gen i7, and I will move the old one to serving the RV instead of the house.)
That idea will allow me to play with the setup and then tear it down when the actual drives I want to install are purchased and arrive. If I go that route, it will change the drives in the true first array (zpool) to 8 TB SAS drives, as the enclosure from the UMS server only supports six thin 2.5" SATA drives.
Last edited by sgt-mike; 4 Weeks Ago at 10:00 PM.
Finally got the parts needed to crank up and install the base operating system on the NAS/NFS box.
What I will order this upcoming week is the LSI SAS HBA, cabling, and a lot of five 4 TB SAS drives (located them at a great price).
Edit (9-16-2024): aggh, they sold before I could buy them, so I located a lot of three at the same price as the lot of five and ordered two lots in order to get six drives.
I went with the idea of using the SATA ports and a 6-bay 2.5" enclosure, utilizing some old scrub drives (500 GB laptop drives), to set up a zpool as a temporary test.
Just to see what ZFS is like and work out ideas beforehand. I will destroy this pool soon, as it was just a test before getting the actual drives I intend to use.
But thus far it does pose a question that I will post in another thread in a day or so, as it deals with pool creation, not hardware.
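For anyone curious, this is roughly how a throwaway raidz1 test pool like this can be built. The device names are just examples from my system (check lsblk first), and the pool gets destroyed afterwards:
Code:
# Create a 5-disk raidz1 pool named "teststore" with one hot spare
sudo zpool create teststore raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde spare /dev/sdf
# Verify layout and health
zpool status teststore
# Tear it down once testing is done (irreversible!)
sudo zpool destroy teststore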
During the initial operating system setup I did have a few small challenges, which were easily overcome. Luckily I updated the BIOS prior to spending any time trying to install the operating system on the 256 GB NVMe drive. As I originally only had 4 GB of RAM, I quickly upgraded to 32 GB. Right after that, when installing the SATA drives, I noticed that when the X99-A has an NVMe drive installed it disables SATA ports P1 and P2, losing two ports, just like MAFoElffen mentioned some boards do. That is not an issue or a complaint, merely an observation. Right now I'm extremely happy with the outcome thus far.
Code:
mike@beastie:~$ zpool iostat -vl teststore
              capacity     operations     bandwidth    total_wait     disk_wait    syncq_wait    asyncq_wait  scrub   trim
pool        alloc   free   read  write   read  write   read  write   read  write   read  write   read  write   wait   wait
----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
teststore   1.34T   947G      0    239  2.84K  39.9M   30ms    7ms   30ms    3ms    1us    1us    4us    3ms   52ms      -
  raidz1-0  1.34T   947G      0    239  2.84K  39.9M   30ms    7ms   30ms    3ms    1us    1us    4us    3ms   52ms      -
    sda1        -      -      0     49    564  7.98M   30ms    6ms   29ms    3ms    1us    2us    1us    2ms   28ms      -
    sdb1        -      -      0     49    623  7.98M   30ms    7ms   30ms    3ms    2us    1us    1us    3ms   62ms      -
    sdc1        -      -      0     49    575  7.98M   30ms    6ms   29ms    3ms    1us    1us   16us    2ms   50ms      -
    sdd1        -      -      0     45    573  7.98M   32ms    8ms   31ms    4ms    1us    1us    1us    4ms   60ms      -
    sde1        -      -      0     46    572  7.98M   30ms    7ms   29ms    4ms    1us    1us    1us    3ms   63ms      -
----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----

mike@beastie:~$ zpool list
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
teststore  2.27T  1.34T   947G        -         -     0%    59%  1.00x    ONLINE  -

mike@beastie:~$ zpool status
  pool: teststore
 state: ONLINE
  scan: scrub repaired 0B in 00:00:26 with 0 errors on Sun Sep 15 07:19:36 2024
config:

        NAME        STATE     READ WRITE CKSUM
        teststore   ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sda1    ONLINE       0     0     0
            sdb1    ONLINE       0     0     0
            sdc1    ONLINE       0     0     0
            sdd1    ONLINE       0     0     0
            sde1    ONLINE       0     0     0
        spares
          sdf1      AVAIL

errors: No known data errors

mike@beastie:~$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
loop0         7:0    0  63.9M  1 loop /snap/core20/2105
loop1         7:1    0  63.9M  1 loop /snap/core20/2318
loop2         7:2    0    87M  1 loop /snap/lxd/27037
loop3         7:3    0    87M  1 loop /snap/lxd/29351
loop4         7:4    0  40.4M  1 loop /snap/snapd/20671
loop5         7:5    0  38.8M  1 loop /snap/snapd/21759
sda           8:0    0 465.8G  0 disk
└─sda1        8:1    0 465.8G  0 part
sdb           8:16   0 465.8G  0 disk
└─sdb1        8:17   0 465.8G  0 part
sdc           8:32   0 465.8G  0 disk
└─sdc1        8:33   0 465.8G  0 part
sdd           8:48   0 465.8G  0 disk
└─sdd1        8:49   0 465.8G  0 part
sde           8:64   0 465.8G  0 disk
└─sde1        8:65   0 465.8G  0 part
sdf           8:80   0 465.8G  0 disk
└─sdf1        8:81   0 465.8G  0 part
nvme0n1     259:0    0 238.5G  0 disk
├─nvme0n1p1 259:1    0     1G  0 part /boot/efi
└─nvme0n1p2 259:2    0 237.4G  0 part /
All in all this has been absolutely great, and I cannot thank @MAFoElffen enough for the guidance on the main board, processor, case, and the detailed listing of components.
I may have strayed from the exact LSI HBA listed, I think, but the one I have in my cart is an LSI 9300-16i 16-port, which should be close.
Last edited by sgt-mike; 3 Weeks Ago at 08:20 PM.
OK, an update on the progress.
The HBA I ordered arrived last week DOA... no BIOS screen after POST... nothing. Even when I attempted to find the HBA using multiple methods via the CLI: nothing, kaput.
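For anyone following along, these are the kinds of checks I mean; a card that is alive should show up in all three (assuming the mpt3sas driver, which the SAS3008-based 9300 series uses):
Code:
# Is the card visible on the PCIe bus at all?
lspci -nn | grep -i -e sas -e lsi -e broadcom
# Did the mpt3sas driver find and initialize it?
sudo dmesg | grep -i mpt3sas
# Any disks visible behind it? (lsscsi comes from the lsscsi package)
lsscsi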
I then went through the return process and am now waiting on a refund. In the meantime I'm still stuck with six 4 TB SAS drives that I can't do anything with right now.
So this past weekend I war-gamed out how to quickly fix this. This past Monday, feeling generous, I decided to give that same vendor a chance at redemption and made an offer, admittedly extremely low, for two of the same LSI 9300-16i HBAs in the hope of getting a good one. I waited 20 hours, which brings us to today... nothing... crickets. Frustrated, I cancelled the offer and then placed an order.
So I ordered via eBay:
1 - LSI 9300-8i
1 - LSI 9300-16i
and a 15-pin, 5-drive power extension cable (right angle), because with either the SATA power connection or even a straight extension I can't close the case (my connection between the HBA and the drives in the internal 3.5" bays is currently an SFF-8643 to SFF-8482 breakout cable, which requires the SATA power connection).
Cost was around $58.00; it would have been a little cheaper if I lived in a state that didn't collect taxes on online sales.
My thought was that if the 16i I ordered today didn't work and the 8i did, I could at least check out the drives and reformat them to 512-byte sectors if need be until I can get a working 16i.
If both work, great, I have a spare HBA on hand.
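On the 512-byte reformat: SAS drives pulled from arrays sometimes arrive formatted with 520- or 528-byte sectors, and sg_format (from the sg3-utils package) can rewrite them to 512. A sketch, with /dev/sdX as a placeholder; the low-level format wipes the drive and can take hours per disk:
Code:
# Check the current logical sector size of each disk
lsblk -o NAME,SIZE,LOG-SEC
# Low-level format one drive to 512-byte sectors (destructive, long-running)
sudo sg_format --format --size=512 /dev/sdX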
--------- Now, in my searching I found a Cooler Master HAF 932 case on auction; there currently doesn't seem to be any activity on it, but there are still 6 days left ------------
It is bigger than my current case (HAF 922) by one external 5.25" bay. I need to win that thing, and I'll switch the NFS server (Beastie) into it.
That brings me to a plan I hatched. Currently my media server is an old HP 8200 Elite Ultra-Slim with a 2nd-gen i7; it works fine as a portable unit and as an in-house media server.
But I could redo this build's setup as a replacement media server for the house (here is where the HAF 922 comes in: use the same processor and motherboard, except add an Nvidia GPU instead of a simple video card) and move the HP to the RV.
In my mind there are several upsides to this route: matching components for both servers except for the video cards, and of course at least a 1200-watt PSU.
That also means I could use the bays of the HAF 922 case (five external 5.25" bays and five internal 3.5" bays, in addition to the HAF 932's six 5.25" and five 3.5" drive bays) to host drives for Beastie (the NFS server), effectively almost doubling the drive capacity of the HAF 932 case.
(Now, before anyone jumps in with "OHHHH, the drives all spinning up at the same time": I hope everyone realizes that most if not all SAS expander cards support staggered spin-up, and I know the ones I'm planning on using do.)
That will change, to a degree, the way I was intending to use the NAS/NFS server. Beastie would be used as a true NFS server hosting the media for the media server (which would then only need the 128 GB NVMe drive) and run 24/7, versus my intended just-when-needed use.
Not that I couldn't do the same with the HP, and yes, I could add an MXM Nvidia card to the HP to use as a GPU. But I could never get the drive bays out of the HP that using another tower case gives me. And while the 2nd-gen i7 is good, it will be overloaded at some point.
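When Beastie does become the full-time NFS host, the export itself is small. A minimal sketch, assuming the media sits in a dataset mounted at /tank/media (hypothetical path) and the media server lives on the 192.168.1.0/24 LAN:
Code:
# On Beastie: install the server and publish the share read-only to the LAN
sudo apt install nfs-kernel-server
echo '/tank/media 192.168.1.0/24(ro,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra
# On the media server: mount it
sudo mkdir -p /mnt/media
sudo mount -t nfs beastie:/tank/media /mnt/media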
Last edited by sgt-mike; 2 Weeks Ago at 08:28 PM.
If you're looking for assistance and advice with setting up an NFS (Network File System) Server, I recommend considering the Machinist X99 RS9 motherboard, which provides robust support for server tasks. This motherboard features the LGA 2011-3 socket, offering high compatibility and performance for your server needs.
For optimal performance, ensure that the necessary ports are correctly configured to facilitate smooth data transfer. You can find more details about the Machinist X99 RS9 and its specifications here.
If you have specific questions regarding the NFS setup, feel free to ask!
Last edited by asirimahanama; 2 Weeks Ago at 05:55 AM.
The Asus X99-A that was originally suggested is working well. By that I mean it does have some quirks that have been noted on this forum, but they are easily worked around, with the exception of RAM. Currently I have 96 GB; I went today to purchase and install another 32 GB to get up to 128 GB. The BIOS never showed the presence of the additional RAM. Stumped, I re-installed the sticks several times, taking great care to ensure they were seated properly. The BIOS still never showed the additional RAM, which left me unsure whether the limitation is (a.) the CPU, (b.) a broken/defective DIMM slot, or (c.) the latest BIOS from Asus just not addressing more than 96 GB. It is not actually a problem; regardless, 96 GB should be more than plenty for the task, and it is 32 GB higher than the 64 GB listed in the manual before the latest BIOS upgrade was made available. Personally I suspect (b.), a bad DDR4 DIMM slot.
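For the curious, this is roughly how I would narrow down a suspect DIMM slot from the OS side, swapping a known-good stick between slots and comparing (output varies by board and BIOS):
Code:
# What the OS actually sees
free -h
# What the firmware reports per slot: size, speed and locator for each DIMM socket
sudo dmidecode -t memory | grep -E 'Locator|Size|Speed'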
Now, the latest recommendation of the Machinist X99 board... it was considered and thrown out early on. Why, you ask? Simple: micro-ATX is simply too small, and there aren't enough PCIe slots. As listed it has 1 x16 and 2 x1, totaling 3 PCIe slots, with only 4 SATA ports. And I'm pretty sure when I reviewed it there wasn't any onboard graphics, so the only PCIe x16 slot would have to take a graphics card. So where would I stick an LSI 9300-16i HBA? In a PCIe x1? Huh, no; I'm not sure anyone makes a SAS-capable HBA in PCIe x1.
Thus an ATX (or larger) format was needed, with as many PCIe x16 slots as possible.
With that said, I'm in no way attempting to be negative or ungrateful for the suggestion made by asirimahanama. That board might work for others able to accept its limitations, or willing to use a SATA HBA in addition to the onboard 4 SATA ports. Would it work if I wanted to use it to replace my media server? Absolutely. But early on the choice of a SAS HBA was made for the NFS server, so that I could use SAS or SATA drives from the same HBA, and with enough expanders the LSI 9300-16i can address somewhere around 1024 drives. I can't recall the exact number, but it's higher than 1,020 drives.
Also today I ordered the hot-swap drive bay modules for the Cooler Master HAF 922 case's five 5.25" bays. I chose the 3.5" format because I can fit either 3.5" drives or 2.5" drives with an adapter. That configuration will allow me 7 drives in the 5.25" bays in addition to the five internal 3.5" bays (12 drives total). When I eventually upgrade the media server I'll steal those bays in that case to externally house and power drives for the NFS server via an expander.
The NFS server is now pretty much built and in service, providing files to my current media server.
I checked the network throughput, and at 900+ Mb/s on a 1 Gig network I was happy.
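For a repeatable check of raw network throughput, independent of the disks and NFS, iperf3 is the usual tool. A sketch, assuming the hostname beastie resolves on the LAN:
Code:
# On the server (Beastie)
iperf3 -s
# On the media server or any other client, run a 30-second test
iperf3 -c beastie -t 30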
I'm using ZFS for the soft RAID, with enterprise-class SAS drives. I will need to adjust one pool's vdev, currently a raidz1 of 3 x 4 TB drives; I have another three 4 TB drives inbound. My goal for that pool (the storage side) is a vdev 5 to 10 drives wide in raidz2 or better, and then to dedicate one or two 6 or 8 TB drives as spares to the z2/z3 vdev. That way, when I use larger-than-4 TB drives to expand the pool by replacing drives and it is fully expanded, the spare(s) will be big enough to step in in the event of a drive failure. That will also leave me either spares or additions for the striped pool, or drives to create another pool for other data.
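A minimal sketch of the layout I'm describing, using six drives in raidz2 plus a hot spare. The by-id paths are placeholders, and note that on the ZFS version shipped with Ubuntu 22.04 an existing raidz1 vdev can't simply be widened, so this would be a new pool built after moving the data off:
Code:
# New pool: 6-wide raidz2 (placeholder device paths)
sudo zpool create storage raidz2 \
    /dev/disk/by-id/scsi-DRIVE1 /dev/disk/by-id/scsi-DRIVE2 /dev/disk/by-id/scsi-DRIVE3 \
    /dev/disk/by-id/scsi-DRIVE4 /dev/disk/by-id/scsi-DRIVE5 /dev/disk/by-id/scsi-DRIVE6
# Attach a larger drive as a hot spare that can step in for any failed member
sudo zpool add storage spare /dev/disk/by-id/scsi-SPARE1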
Then I have another vdev configured in ZFS as just a simple stripe (RAID 0) to feed the media files to the media server; this was purely for read speed, as it is a good bit faster than a raidz1 or raidz2 pool. Yes, I'm fully aware that configuration offers NO drive-loss protection. That is why the same files are also in a raidz1/2/3 pool. I'm also working on understanding the ZFS snapshot commands.
That backup (snapshot) data will be stored on an external USB drive (or drives), or if need be sent to another server for backup, is my thought. I'm not sure how much space snapshots take. <<< This is the part where @TheFu's advice is sorely sought: which is best, a WoL backup server and/or an external USB drive?
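On the snapshot question, a minimal sketch of the workflow I have been reading about, assuming a dataset named storage/media (hypothetical) and a second pool called backup living on the external USB drive; snapshots only consume space as the data diverges, which zfs list can show:
Code:
# Take a snapshot and see how much space existing snapshots are using
sudo zfs snapshot storage/media@2024-10-01
zfs list -t snapshot -o name,used,referenced
# Full copy of that snapshot to the pool on the external USB drive
sudo zfs send storage/media@2024-10-01 | sudo zfs receive backup/media
# Later, send only the changes between two snapshots
sudo zfs send -i storage/media@2024-10-01 storage/media@2024-11-01 | sudo zfs receive backup/media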
Last edited by sgt-mike; 1 Week Ago at 08:14 AM.