Originally Posted by 1fallen
You chose wisely.
Sorry, couldn't resist
Originally Posted by 1fallen
Please keep us updated; it helps others like myself, for one, when considering new ZFS hardware
Thanks, will do.
Originally Posted by MAFoElffen
Strike through bbcodes are like this [s] Text[/s]... Will show up like this: Text
Ah, ok!
Originally Posted by MAFoElffen
Yes. I remember when TheFu got his motherboard (new), he shopped around and did his homework. He finally ordered a matched set of two RAM sticks (at first), then months later made sure that he ordered "the same memory" from "the same vendor"...
He was so P'ed off when his system RAM slowed down about 20-30% just by adding those additional two chips in. Sometimes that just doesn't make sense.
Well... I'm no expert, but based on what I've recently learned, that usually boils down to one of the following (in no particular order):
1. The mobo is a poor design
2. The position of the memory was incorrect (not optimal)
3. The memory was different (different PN)
4. The memory was a different type
A friend of mine used to work for a small business that refurbed tape drives. The dodgy guy who owned it would print his own PN labels... yup, you read that right... he said it used to frustrate the boss that a unit was 'identical', but because the part number was one digit different (like a slightly older hardware rev.) they couldn't shift certain stock. So he'd remove the PN label and put their own on. Dodgy, dodgy ******* business... I never saw any of these labels, but they were doing it for years before the business folded (for an entirely different, but equally dodgy, reason), so they must have been at least half convincing. I think they were quite well known as a business too; they did a lot of trade.
I'm not saying this was the case with TheFu, but it can and does happen. For example, I'm sure the IJ memory will work just fine in my board if it's all the IJ stuff, as will all the sticks listed on Crucial if they're all the same PN. But the ones on Crucial, for example, will definitely slow my board down if I mix them, even though they work. It could even be that the label only showed the main part of the PN: dmidecode only showed 36ASF2G72PZ-2G1A2, but there seem to be two variants of this (IG and IJ), as well as a 36ASF2G72PZ-2G1B2.
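For anyone wanting to check their own sticks, dmidecode will print the full part number for each populated slot (needs root). This is just the rough filter I'd use; field names vary a little between BIOSes, so the pattern may need adjusting:
Code:
# show slot, part number and speed for each DIMM
sudo dmidecode -t memory | grep -E 'Locator|Part Number|Speed'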
So at the moment:
Code:
free -m
               total        used        free      shared  buff/cache   available
Mem:           32010        9999       13893          61        8117       21480
Swap:          12286           0       12286
and I get:
Code:
rm /mnt/Tank/testfile && dd if=/dev/zero of=/mnt/Tank/testfile bs=1G count=6 oflag=dsync
6+0 records in
6+0 records out
6442450944 bytes (6.4 GB, 6.0 GiB) copied, 10.3578 s, 622 MB/s
and:
Code:
fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
fio-3.28
Starting 1 process
Jobs: 1 (f=1): [W(1)][53.8%][w=484MiB/s][w=484 IOPS][eta 00m:06s]
Jobs: 1 (f=1): [W(1)][86.7%][w=525MiB/s][w=525 IOPS][eta 00m:02s]
Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]
Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]
TEST: (groupid=0, jobs=1): err= 0: pid=446642: Sat Dec 2 00:04:32 2023
write: IOPS=473, BW=473MiB/s (496MB/s)(10.0GiB/21639msec); 0 zone resets
slat (usec): min=233, max=3623, avg=1521.72, stdev=671.83
clat (usec): min=3, max=6053.1k, avg=65226.74, stdev=329496.49
lat (usec): min=329, max=6054.9k, avg=66749.10, stdev=329553.81
clat percentiles (msec):
| 1.00th=[ 11], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 16],
| 30.00th=[ 50], 40.00th=[ 51], 50.00th=[ 55], 60.00th=[ 57],
| 70.00th=[ 59], 80.00th=[ 63], 90.00th=[ 67], 95.00th=[ 77],
| 99.00th=[ 88], 99.50th=[ 94], 99.90th=[ 6007], 99.95th=[ 6074],
| 99.99th=[ 6074]
bw ( KiB/s): min=51200, max=2265088, per=100.00%, avg=638016.00, stdev=400354.97, samples=32
iops : min= 50, max= 2212, avg=623.06, stdev=390.97, samples=32
lat (usec) : 4=0.01%, 10=0.04%, 500=0.02%, 750=0.02%, 1000=0.01%
lat (msec) : 2=0.08%, 4=0.15%, 10=0.65%, 20=21.60%, 50=16.66%
lat (msec) : 100=60.46%, >=2000=0.30%
fsync/fdatasync/sync_file_range:
sync (nsec): min=1353, max=1353, avg=1353.00, stdev= 0.00
sync percentiles (nsec):
| 1.00th=[ 1352], 5.00th=[ 1352], 10.00th=[ 1352], 20.00th=[ 1352],
| 30.00th=[ 1352], 40.00th=[ 1352], 50.00th=[ 1352], 60.00th=[ 1352],
| 70.00th=[ 1352], 80.00th=[ 1352], 90.00th=[ 1352], 95.00th=[ 1352],
| 99.00th=[ 1352], 99.50th=[ 1352], 99.90th=[ 1352], 99.95th=[ 1352],
| 99.99th=[ 1352]
cpu : usr=3.15%, sys=19.69%, ctx=75103, majf=0, minf=15
IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued rwts: total=0,10240,0,1 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
WRITE: bw=473MiB/s (496MB/s), 473MiB/s-473MiB/s (496MB/s-496MB/s), io=10.0GiB (10.7GB), run=21639-21639msec
and:
Code:
fio --name TEST --eta-newline=5s --filename=temp.file --rw=read --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
TEST: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
fio-3.28
Starting 1 process
Jobs: 1 (f=1): [R(1)][-.-%][r=3020MiB/s][r=3020 IOPS][eta 00m:00s]
TEST: (groupid=0, jobs=1): err= 0: pid=515253: Sat Dec 2 00:05:26 2023
read: IOPS=2920, BW=2921MiB/s (3063MB/s)(10.0GiB/3506msec)
slat (usec): min=273, max=1151, avg=339.36, stdev=47.28
clat (usec): min=3, max=31257, avg=10501.61, stdev=1305.05
lat (usec): min=324, max=32409, avg=10841.35, stdev=1340.08
clat percentiles (usec):
| 1.00th=[ 6718], 5.00th=[10159], 10.00th=[10159], 20.00th=[10159],
| 30.00th=[10159], 40.00th=[10290], 50.00th=[10290], 60.00th=[10290],
| 70.00th=[10421], 80.00th=[10552], 90.00th=[12125], 95.00th=[12125],
| 99.00th=[13698], 99.50th=[15270], 99.90th=[26346], 99.95th=[28705],
| 99.99th=[30802]
bw ( MiB/s): min= 2344, max= 3040, per=99.76%, avg=2913.71, stdev=252.00, samples=7
iops : min= 2344, max= 3040, avg=2913.71, stdev=252.00, samples=7
lat (usec) : 4=0.05%, 500=0.05%, 750=0.05%, 1000=0.02%
lat (msec) : 2=0.13%, 4=0.31%, 10=0.96%, 20=98.21%, 50=0.22%
cpu : usr=0.54%, sys=99.43%, ctx=12, majf=0, minf=8205
IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued rwts: total=10240,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
READ: bw=2921MiB/s (3063MB/s), 2921MiB/s-2921MiB/s (3063MB/s-3063MB/s), io=10.0GiB (10.7GB), run=3506-3506msec
During each job, memory usage climbs by a few gigs then drops back down again. Looks like I'm in a reasonable patch at the moment.
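My guess, and it is only a guess, is that most of that climb is the ZFS ARC rather than the test processes themselves. On OpenZFS on Linux the current ARC size and its targets can be read straight out of /proc (values are in bytes), so it's easy to watch while a job runs:
Code:
# one-shot: current ARC size and its min/max targets
grep -E '^(size|c_max|c_min)' /proc/spl/kstat/zfs/arcstats
# or refresh every second while a test is running:
watch -n1 "grep -E '^(size|c_max|c_min)' /proc/spl/kstat/zfs/arcstats"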
But again 5 mins later:
Code:
rm /mnt/Tank/testfile && dd if=/dev/zero of=/mnt/Tank/testfile bs=1G count=6 oflag=dsync
6+0 records in
6+0 records out
6442450944 bytes (6.4 GB, 6.0 GiB) copied, 72.1945 s, 89.2 MB/s
and:
Code:
fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
TEST: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
fio-3.28
Starting 1 process
Jobs: 1 (f=1): [W(1)][50.0%][w=593MiB/s][w=593 IOPS][eta 00m:07s]
Jobs: 1 (f=1): [W(1)][72.2%][w=123MiB/s][w=123 IOPS][eta 00m:05s]
Jobs: 1 (f=1): [W(1)][76.0%][w=31.0MiB/s][w=31 IOPS][eta 00m:06s]
Jobs: 1 (f=1): [W(1)][75.8%][w=10.0MiB/s][w=10 IOPS][eta 00m:08s]
Jobs: 1 (f=1): [W(1)][75.6%][w=2050KiB/s][w=2 IOPS][eta 00m:10s]
Jobs: 1 (f=1): [W(1)][80.4%][w=133MiB/s][w=133 IOPS][eta 00m:09s]
Jobs: 1 (f=1): [W(1)][97.7%][w=148MiB/s][w=148 IOPS][eta 00m:01s]
Jobs: 1 (f=1): [W(1)][98.0%][eta 00m:01s]
Jobs: 1 (f=1): [W(1)][98.1%][eta 00m:01s]
TEST: (groupid=0, jobs=1): err= 0: pid=520493: Sat Dec 2 00:11:57 2023
write: IOPS=199, BW=199MiB/s (209MB/s)(10.0GiB/51363msec); 0 zone resets
slat (usec): min=295, max=1187.2k, avg=4112.57, stdev=17844.91
clat (usec): min=3, max=9229.9k, avg=154951.33, stdev=640495.91
lat (usec): min=348, max=9231.6k, avg=159064.75, stdev=647743.56
clat percentiles (msec):
| 1.00th=[ 11], 5.00th=[ 23], 10.00th=[ 27], 20.00th=[ 29],
| 30.00th=[ 49], 40.00th=[ 53], 50.00th=[ 54], 60.00th=[ 56],
| 70.00th=[ 56], 80.00th=[ 78], 90.00th=[ 201], 95.00th=[ 426],
| 99.00th=[ 2265], 99.50th=[ 5000], 99.90th=[ 9194], 99.95th=[ 9194],
| 99.99th=[ 9194]
bw ( KiB/s): min= 2048, max=1263616, per=100.00%, avg=245982.07, stdev=305800.34, samples=83
iops : min= 2, max= 1234, avg=240.22, stdev=298.63, samples=83
lat (usec) : 4=0.02%, 10=0.02%, 20=0.01%, 500=0.01%, 750=0.01%
lat (usec) : 1000=0.01%
lat (msec) : 2=0.06%, 4=0.09%, 10=0.63%, 20=2.89%, 50=34.20%
lat (msec) : 100=43.91%, 250=10.29%, 500=3.43%, 750=1.62%, 1000=0.98%
lat (msec) : 2000=0.74%, >=2000=1.08%
fsync/fdatasync/sync_file_range:
sync (nsec): min=1345, max=1345, avg=1345.00, stdev= 0.00
sync percentiles (nsec):
| 1.00th=[ 1352], 5.00th=[ 1352], 10.00th=[ 1352], 20.00th=[ 1352],
| 30.00th=[ 1352], 40.00th=[ 1352], 50.00th=[ 1352], 60.00th=[ 1352],
| 70.00th=[ 1352], 80.00th=[ 1352], 90.00th=[ 1352], 95.00th=[ 1352],
| 99.00th=[ 1352], 99.50th=[ 1352], 99.90th=[ 1352], 99.95th=[ 1352],
| 99.99th=[ 1352]
cpu : usr=1.32%, sys=10.72%, ctx=70195, majf=0, minf=14
IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued rwts: total=0,10240,0,1 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
WRITE: bw=199MiB/s (209MB/s), 199MiB/s-199MiB/s (209MB/s-209MB/s), io=10.0GiB (10.7GB), run=51363-51363msec
and:
Code:
fio --name TEST --eta-newline=5s --filename=temp.file --rw=read --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
TEST: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
fio-3.28
Starting 1 process
Jobs: 1 (f=1): [R(1)][-.-%][r=2833MiB/s][r=2833 IOPS][eta 00m:00s]
TEST: (groupid=0, jobs=1): err= 0: pid=559190: Sat Dec 2 00:13:26 2023
read: IOPS=2754, BW=2755MiB/s (2889MB/s)(10.0GiB/3717msec)
slat (usec): min=290, max=1156, avg=359.97, stdev=47.84
clat (usec): min=2, max=32686, avg=11134.73, stdev=1355.69
lat (usec): min=337, max=33843, avg=11495.11, stdev=1391.41
clat percentiles (usec):
| 1.00th=[ 7111], 5.00th=[10814], 10.00th=[10814], 20.00th=[10814],
| 30.00th=[10945], 40.00th=[10945], 50.00th=[10945], 60.00th=[10945],
| 70.00th=[10945], 80.00th=[11076], 90.00th=[12911], 95.00th=[12911],
| 99.00th=[14091], 99.50th=[16057], 99.90th=[27919], 99.95th=[30278],
| 99.99th=[32375]
bw ( MiB/s): min= 2192, max= 2846, per=99.54%, avg=2742.29, stdev=242.91, samples=7
iops : min= 2192, max= 2846, avg=2742.29, stdev=242.91, samples=7
lat (usec) : 4=0.05%, 500=0.05%, 750=0.05%
lat (msec) : 2=0.15%, 4=0.29%, 10=0.83%, 20=98.33%, 50=0.25%
cpu : usr=1.26%, sys=98.68%, ctx=5, majf=0, minf=8203
IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=98.5%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued rwts: total=10240,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
READ: bw=2755MiB/s (2889MB/s), 2755MiB/s-2755MiB/s (2889MB/s-2889MB/s), io=10.0GiB (10.7GB), run=3717-3717msec
Memory usage seemed heavier this time. It used more like 5-6 GB and only dropped back down to:
Code:
free -m
               total        used        free      shared  buff/cache   available
Mem:           32010       10180       13712          60        8117       21301
Swap:          12286           0       12286
But this does seem to solve part of the riddle... it's nothing to do with running out of memory.
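Next time it drops into one of those slow patches, it might be worth keeping a live view of the pool open in a second terminal to see whether one disk is lagging behind the others. I'm assuming the pool is actually named Tank, going by the /mnt/Tank mountpoint:
Code:
# per-vdev read/write bandwidth and ops, refreshed every 5 seconds
zpool iostat -v Tank 5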