ZFS pool extremely slow scrub
- Chris
- Advanced user
- Posts: 5252
- Registered: Fri Jan 13, 2006, 02:00
- Location: Bratislava
Re: ZFS pool extremely slow scrub
Where do you see 13 GB of free RAM here?
You have 34.2 GB allocated, the system took 13 GB, and since it has also eaten through 4.62 GB of swap, there's little actually free. And once more: swap is death.
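If you want to see where the RAM actually goes, keep in mind that free's MemAvailable doesn't count most of the ZFS ARC; a quick sketch (assuming OpenZFS on Linux) to compare the numbers:
Code: Select all
free -h                 # MemAvailable excludes most of the ZFS ARC
awk '/^size/ {printf "ARC: %.1f GiB\n", $3/2^30}' /proc/spl/kstat/zfs/arcstats
swapon --show           # how much swap is actually in use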
Master of PaloAlto NGFWs, Cisco ASAs
- molnart
- Advanced user
- Posts: 7023
- Registered: Tue Jun 19, 2012, 23:03
- Location: Bratislava/Samorin
Re: ZFS pool extremely slow scrub
Maybe I just don't understand it, but 34.2G / 47.1G suggests to me that 13 gigs are free.
Sometimes even more, see yesterday during the ZFS datastore synchronization.
- Chris
- Advanced user
- Posts: 5252
- Registered: Fri Jan 13, 2006, 02:00
- Location: Bratislava
Re: ZFS pool extremely slow scrub
1. run the tests:
fio --name=seqwritetest --ioengine=libaio --rw=write --bs=1M --direct=1 --size=10G --numjobs=1 --runtime=60 --time_based --group_reporting
fio --name=writetest --ioengine=libaio --rw=randwrite --bs=4k --direct=1 --size=1G --numjobs=4 --runtime=60 --time_based --group_reporting
2. then change what I told you about earlier:
3. run the tests again
Master of PaloAlto NGFWs, Cisco ASAs
- molnart
- Advanced user
- Posts: 7023
- Registered: Tue Jun 19, 2012, 23:03
- Location: Bratislava/Samorin
Re: ZFS pool extremely slow scrub
Test on the original dataset with 128k recordsize and lz4 compression (seq write and rand write outputs below).
ARC cache reduced to 4 GB, swappiness set to 1 (some of the swap drained, but for the most part it still holds what it held before). Created a new dataset with 64k recordsize and zstd compression. I didn't have dedup enabled before either, the write cache on the disks was off, and the RAID card was in IT mode. The new dataset's seq write and rand write outputs follow the original ones.
TL;DR: seq write speeds are about 30% lower; random write is up to 20x slower.
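For reference, the changes described above map to roughly these commands; a hedged sketch, assuming OpenZFS on Linux (the dataset name StoragePool/Test is taken from the outputs below):
Code: Select all
sudo zfs create -o recordsize=64k -o compression=zstd StoragePool/Test
echo $((4 * 1024**3)) | sudo tee /sys/module/zfs/parameters/zfs_arc_max   # cap ARC at 4 GiB
sudo sysctl vm.swappiness=1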
seq write (original dataset):
Code: Select all
/StoragePool/Storj$ fio --name=seqwritetest --ioengine=libaio --rw=write --bs=1M --direct=1 --size=10G --numjobs=1 --runtime=60 --time_based --group_reporting
seqwritetest: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=1
fio-3.33
Starting 1 process
seqwritetest: Laying out IO file (1 file / 10240MiB)
Jobs: 1 (f=1): [W(1)][100.0%][w=188MiB/s][w=188 IOPS][eta 00m:00s]
seqwritetest: (groupid=0, jobs=1): err= 0: pid=173316: Wed Dec 18 09:36:26 2024
write: IOPS=471, BW=471MiB/s (494MB/s)(27.6GiB/60003msec); 0 zone resets
slat (usec): min=86, max=88376, avg=2111.86, stdev=2282.68
clat (nsec): min=1051, max=1959.1k, avg=5991.77, stdev=20432.60
lat (usec): min=87, max=88384, avg=2117.85, stdev=2283.78
clat percentiles (nsec):
| 1.00th=[ 1448], 5.00th=[ 1944], 10.00th=[ 2512], 20.00th=[ 2832],
| 30.00th=[ 3344], 40.00th=[ 3920], 50.00th=[ 4640], 60.00th=[ 5344],
| 70.00th=[ 6048], 80.00th=[ 6752], 90.00th=[ 7776], 95.00th=[ 9536],
| 99.00th=[ 25984], 99.50th=[ 41216], 99.90th=[197632], 99.95th=[313344],
| 99.99th=[888832]
bw ( KiB/s): min=53248, max=3891200, per=100.00%, avg=483955.69, stdev=423295.49, samples=119
iops : min= 52, max= 3800, avg=472.53, stdev=413.39, samples=119
lat (usec) : 2=5.54%, 4=35.49%, 10=54.45%, 20=2.94%, 50=1.17%
lat (usec) : 100=0.18%, 250=0.14%, 500=0.07%, 750=0.01%, 1000=0.01%
lat (msec) : 2=0.01%
cpu : usr=1.73%, sys=9.25%, ctx=37019, majf=0, minf=20
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,28281,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: bw=471MiB/s (494MB/s), 471MiB/s-471MiB/s (494MB/s-494MB/s), io=27.6GiB (29.7GB), run=60003-60003msec
rand write (original dataset):
Code: Select all
/StoragePool/Storj$ fio --name=writetest --ioengine=libaio --rw=randwrite --bs=4k --direct=1 --size=1G --numjobs=4 --runtime=60 --time_based --group_reporting
writetest: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
...
fio-3.33
Starting 4 processes
writetest: Laying out IO file (1 file / 1024MiB)
writetest: Laying out IO file (1 file / 1024MiB)
writetest: Laying out IO file (1 file / 1024MiB)
writetest: Laying out IO file (1 file / 1024MiB)
Jobs: 4 (f=4): [w(4)][100.0%][w=38.5MiB/s][w=9866 IOPS][eta 00m:00s]
writetest: (groupid=0, jobs=4): err= 0: pid=176271: Wed Dec 18 09:39:05 2024
write: IOPS=22.7k, BW=88.7MiB/s (93.0MB/s)(5322MiB/60031msec); 0 zone resets
slat (usec): min=4, max=434147, avg=169.74, stdev=1486.49
clat (nsec): min=801, max=150947k, avg=4219.68, stdev=252918.26
lat (usec): min=5, max=434150, avg=173.96, stdev=1511.10
clat percentiles (nsec):
| 1.00th=[ 844], 5.00th=[ 860], 10.00th=[ 876],
| 20.00th=[ 900], 30.00th=[ 996], 40.00th=[ 1096],
| 50.00th=[ 1224], 60.00th=[ 1432], 70.00th=[ 1720],
| 80.00th=[ 2008], 90.00th=[ 2416], 95.00th=[ 2832],
| 99.00th=[ 8768], 99.50th=[ 17024], 99.90th=[ 252928],
| 99.95th=[ 905216], 99.99th=[4947968]
bw ( KiB/s): min= 8558, max=518872, per=100.00%, avg=91536.43, stdev=21392.42, samples=472
iops : min= 2139, max=129718, avg=22883.42, stdev=5348.17, samples=472
lat (nsec) : 1000=30.85%
lat (usec) : 2=48.69%, 4=18.70%, 10=0.84%, 20=0.49%, 50=0.20%
lat (usec) : 100=0.08%, 250=0.06%, 500=0.03%, 750=0.01%, 1000=0.01%
lat (msec) : 2=0.02%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
lat (msec) : 100=0.01%, 250=0.01%
cpu : usr=1.69%, sys=16.62%, ctx=481370, majf=0, minf=52
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,1362474,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: bw=88.7MiB/s (93.0MB/s), 88.7MiB/s-88.7MiB/s (93.0MB/s-93.0MB/s), io=5322MiB (5581MB), run=60031-60031msec
seq write (new dataset):
Code: Select all
/StoragePool/Test$ sudo fio --name=seqwritetest --ioengine=libaio --rw=write --bs=1M --direct=1 --size=10G --numjobs=1 --runtime=60 --time_based --group_reporting
[sudo] password for molnart:
seqwritetest: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=1
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]
seqwritetest: (groupid=0, jobs=1): err= 0: pid=1354790: Wed Dec 18 23:10:58 2024
write: IOPS=362, BW=363MiB/s (380MB/s)(21.3GiB/60001msec); 0 zone resets
slat (usec): min=89, max=4635.4k, avg=2751.00, stdev=88232.09
clat (nsec): min=1093, max=465161, avg=3339.53, stdev=10258.30
lat (usec): min=91, max=4635.4k, avg=2754.34, stdev=88232.25
clat percentiles (nsec):
| 1.00th=[ 1304], 5.00th=[ 1496], 10.00th=[ 1640], 20.00th=[ 1864],
| 30.00th=[ 2024], 40.00th=[ 2192], 50.00th=[ 2352], 60.00th=[ 2544],
| 70.00th=[ 2800], 80.00th=[ 3184], 90.00th=[ 3952], 95.00th=[ 5216],
| 99.00th=[ 16768], 99.50th=[ 38144], 99.90th=[146432], 99.95th=[238592],
| 99.99th=[387072]
bw ( MiB/s): min= 6, max= 2787, per=100.00%, avg=1343.41, stdev=691.78, samples=32
iops : min= 6, max= 2787, avg=1343.12, stdev=691.73, samples=32
lat (usec) : 2=28.53%, 4=61.96%, 10=7.76%, 20=0.94%, 50=0.39%
lat (usec) : 100=0.18%, 250=0.20%, 500=0.04%
cpu : usr=0.71%, sys=5.25%, ctx=161294, majf=0, minf=10
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,21769,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: bw=363MiB/s (380MB/s), 363MiB/s-363MiB/s (380MB/s-380MB/s), io=21.3GiB (22.8GB), run=60001-60001msec
rand write (new dataset):
Code: Select all
/StoragePool/Test$ sudo fio --name=writetest --ioengine=libaio --rw=randwrite --bs=4k --direct=1 --size=1G --numjobs=4 --runtime=60 --time_based --group_reporting
writetest: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
...
fio-3.33
Starting 4 processes
writetest: Laying out IO file (1 file / 1024MiB)
writetest: Laying out IO file (1 file / 1024MiB)
writetest: Laying out IO file (1 file / 1024MiB)
writetest: Laying out IO file (1 file / 1024MiB)
Jobs: 4 (f=4): [w(4)][100.0%][w=1693KiB/s][w=423 IOPS][eta 00m:00s]
writetest: (groupid=0, jobs=4): err= 0: pid=1358297: Wed Dec 18 23:13:45 2024
write: IOPS=1290, BW=5163KiB/s (5287kB/s)(303MiB/60118msec); 0 zone resets
slat (usec): min=5, max=211920, avg=3090.53, stdev=6897.92
clat (nsec): min=806, max=11556k, avg=2873.17, stdev=45488.70
lat (usec): min=5, max=211924, avg=3093.41, stdev=6899.00
clat percentiles (nsec):
| 1.00th=[ 860], 5.00th=[ 916], 10.00th=[ 964], 20.00th=[ 1012],
| 30.00th=[ 1080], 40.00th=[ 1288], 50.00th=[ 1896], 60.00th=[ 2544],
| 70.00th=[ 2832], 80.00th=[ 3440], 90.00th=[ 4448], 95.00th=[ 5856],
| 99.00th=[ 15552], 99.50th=[ 18816], 99.90th=[ 47360], 99.95th=[ 80384],
| 99.99th=[378880]
bw ( KiB/s): min= 768, max=111272, per=100.00%, avg=5196.18, stdev=2861.22, samples=477
iops : min= 192, max=27818, avg=1299.03, stdev=715.31, samples=477
lat (nsec) : 1000=17.94%
lat (usec) : 2=33.11%, 4=35.76%, 10=11.61%, 20=1.16%, 50=0.33%
lat (usec) : 100=0.06%, 250=0.02%, 500=0.01%, 750=0.01%, 1000=0.01%
lat (msec) : 4=0.01%, 10=0.01%, 20=0.01%
cpu : usr=0.19%, sys=1.47%, ctx=41770, majf=0, minf=39
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,77594,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: bw=5163KiB/s (5287kB/s), 5163KiB/s-5163KiB/s (5287kB/s-5287kB/s), io=303MiB (318MB), run=60118-60118msec
- zoom
- User
- Posts: 2354
- Registered: Thu Jun 16, 2005, 20:00
- Location: Bratislava (40)
Re: ZFS pool extremely slow scrub
Well, now all that's left is to say thanks. I tried those commands myself. Random write is, as expected, worse than yours before the settings change (RAID-Z2, 6x 16TB IronWolf Pro, 128K recordsize, 2% fragmentation). On latency I'm doing a tiny bit better, but nothing dramatic: I complete 94% of requests within 2 microseconds, you within 4 microseconds.
Despite the lower speed, the scrub finished a few days ago as expected:
Code: Select all
root@nas[~]# zpool status
pool: StoragePool
state: ONLINE
scan: scrub repaired 0B in 03:57:48 with 0 errors on Sun Dec 15 03:57:49 2024
A few assorted things I found and dug up:
- The ioztat utility: it's like iostat, but it shows I/O per dataset. During a scrub you can watch whether the slowdowns happen when it's working on one specific dataset (e.g. the one with millions of files).
- I found out that the arcstat and arc_summary commands exist. You may be able to read from them whether an overstuffed ARC is what's driving the growth of swap on disk.
- One post I found explains why to disable prefetch during a scrub and use the older scan algorithm. You can try it, though on Linux it's activated a bit differently; the modinfo zfs command lists all the module parameters present, along with their descriptions. I've seen exactly those two settings mentioned in several places around the net as having helped people. See the sketch below this list.
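For what it's worth, the two tunables in question are presumably zfs_no_scrub_prefetch and zfs_scan_legacy (my guess from the description above, not confirmed from the linked post); on Linux they can be flipped at runtime, a sketch:
Code: Select all
grep . /sys/module/zfs/parameters/zfs_no_scrub_prefetch /sys/module/zfs/parameters/zfs_scan_legacy   # current values
echo 1 | sudo tee /sys/module/zfs/parameters/zfs_no_scrub_prefetch   # don't prefetch during scrub
echo 1 | sudo tee /sys/module/zfs/parameters/zfs_scan_legacy         # old block-order scan algorithm
echo 'options zfs zfs_no_scrub_prefetch=1 zfs_scan_legacy=1' | sudo tee /etc/modprobe.d/zfs-scrub.conf   # persist across reboots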
- Chris
- Advanced user
- Posts: 5252
- Registered: Fri Jan 13, 2006, 02:00
- Location: Bratislava
Re: ZFS pool extremely slow scrub
Did you reboot the system after changing swappiness?
Master of PaloAlto NGFWs, Cisco ASAs
- molnart
- Advanced user
- Posts: 7023
- Registered: Tue Jun 19, 2012, 23:03
- Location: Bratislava/Samorin
Re: ZFS pool extremely slow scrub
I didn't reboot; I applied the swap settings via sudo sysctl -p. It does have some effect, because swap usage is gradually dropping; right now it's at 3.04/4.65.
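Small note: if vm.swappiness=1 went into /etc/sysctl.conf (which is what sysctl -p re-reads), it will also survive a reboot. To push already swapped-out pages back into RAM right away instead of waiting for them to trickle back, a sketch, assuming there's enough free RAM to hold them:
Code: Select all
sudo sysctl vm.swappiness            # confirm the running value
sudo swapoff -a && sudo swapon -a    # forces swapped-out pages back into RAM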
Sounds good, I'll give it a try. But I'm 95% sure the slowdown will be on the pbs and storagenode datasets.
That's not a problem to do, since this runs in a VM for me: all it takes is switching the RAID card passthrough to the other VM, importing the configuration there from TrueNAS, where I originally created the pool, and off I go. The problem is precisely the storagenode, where I'm penalized for every moment it's offline, and getting it running under vanilla TrueNAS Docker is a real pain. Not to mention that the course of the scrub can be diametrically different when the storagenode isn't writing its 150-200 GB a day to the pool during it.
- molnart
- Advanced user
- Posts: 7023
- Registered: Tue Jun 19, 2012, 23:03
- Location: Bratislava/Samorin
Re: ZFS pool extremely slow scrub
How does fio work (or not work) with the cache? When I run it back to back, is it normal to get results this wildly different?
WRITE: bw=16.8MiB/s (17.6MB/s), 16.8MiB/s-16.8MiB/s (17.6MB/s-17.6MB/s), io=1024MiB (1074MB), run=61118-61118msec
WRITE: bw=376MiB/s (394MB/s), 376MiB/s-376MiB/s (394MB/s-394MB/s), io=1024MiB (1074MB), run=2727-2727msec
WRITE: bw=399MiB/s (418MB/s), 399MiB/s-399MiB/s (418MB/s-418MB/s), io=1024MiB (1074MB), run=2567-2567msec
Here I turned off all the torrents, storagenodes and other things writing to the pool, and ran the tests immediately one after another:
WRITE: bw=21.1MiB/s (22.2MB/s), 21.1MiB/s-21.1MiB/s (22.2MB/s-22.2MB/s), io=1024MiB (1074MB), run=48467-48467msec
WRITE: bw=12.8MiB/s (13.5MB/s), 12.8MiB/s-12.8MiB/s (13.5MB/s-13.5MB/s), io=1024MiB (1074MB), run=79735-79735msec
The progression is also interesting; at times I see an ETA of over 30 minutes and speeds under 1 MB/s. Here I started recording when it was already well under way; by that point it had been running for about a minute:
https://www.youtube.com/watch?v=cjOifHCThww
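As far as I know, --direct=1 asks for O_DIRECT, but ZFS has traditionally serviced that through the ARC anyway, so a 1 GiB randwrite can land almost entirely in RAM and only get flushed out by later transaction groups; a fast run right after a slow one is then most likely measuring the ARC while the previous run's data is still syncing (that reading is my assumption, not something from the fio output). A sketch of variants that force the data to disk before fio reports a number:
Code: Select all
sudo fio --name=writetest --ioengine=libaio --rw=randwrite --bs=4k --size=1G --end_fsync=1   # fsync at the end, so flushing counts into the result
sudo fio --name=writetest --ioengine=libaio --rw=randwrite --bs=4k --size=1G --sync=1        # every write synchronous; on ZFS this exercises the ZIL
The full back-to-back outputs: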
Code: Select all
molnart@omv6:/StoragePool/Storj$ sudo fio --name=writetest --ioengine=libaio --rw=randwrite --bs=4k --direct=1 --size=1G
writetest: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [w(1)][95.4%][w=348MiB/s][w=89.2k IOPS][eta 00m:03s]
writetest: (groupid=0, jobs=1): err= 0: pid=841725: Fri Dec 20 17:48:58 2024
write: IOPS=4289, BW=16.8MiB/s (17.6MB/s)(1024MiB/61118msec); 0 zone resets
slat (usec): min=4, max=414375, avg=231.06, stdev=2054.89
clat (nsec): min=766, max=19628k, avg=1382.33, stdev=51522.92
lat (usec): min=5, max=414381, avg=232.44, stdev=2056.12
clat percentiles (nsec):
| 1.00th=[ 796], 5.00th=[ 812], 10.00th=[ 812], 20.00th=[ 820],
| 30.00th=[ 836], 40.00th=[ 844], 50.00th=[ 852], 60.00th=[ 884],
| 70.00th=[ 972], 80.00th=[ 1240], 90.00th=[ 1432], 95.00th=[ 1784],
| 99.00th=[ 6880], 99.50th=[ 10816], 99.90th=[ 22912], 99.95th=[ 31616],
| 99.99th=[197632]
bw ( KiB/s): min= 136, max=397815, per=96.73%, avg=16596.49, stdev=65989.98, samples=122
iops : min= 34, max=99453, avg=4149.10, stdev=16497.42, samples=122
lat (nsec) : 1000=72.70%
lat (usec) : 2=22.92%, 4=1.92%, 10=1.94%, 20=0.40%, 50=0.10%
lat (usec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
lat (msec) : 2=0.01%, 20=0.01%
cpu : usr=0.90%, sys=4.95%, ctx=10643, majf=0, minf=12
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: bw=16.8MiB/s (17.6MB/s), 16.8MiB/s-16.8MiB/s (17.6MB/s-17.6MB/s), io=1024MiB (1074MB), run=61118-61118msec
--------------------
molnart@omv6:/StoragePool/Storj$ sudo fio --name=writetest --ioengine=libaio --rw=randwrite --bs=4k --direct=1 --size=1G
writetest: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][w=359MiB/s][w=91.9k IOPS][eta 00m:00s]
writetest: (groupid=0, jobs=1): err= 0: pid=843084: Fri Dec 20 17:49:21 2024
write: IOPS=82.9k, BW=324MiB/s (340MB/s)(1024MiB/3161msec); 0 zone resets
slat (usec): min=4, max=44464, avg=10.28, stdev=94.92
clat (nsec): min=769, max=1295.2k, avg=1181.26, stdev=8552.87
lat (usec): min=5, max=44472, avg=11.46, stdev=95.39
clat percentiles (nsec):
| 1.00th=[ 796], 5.00th=[ 804], 10.00th=[ 812], 20.00th=[ 820],
| 30.00th=[ 828], 40.00th=[ 844], 50.00th=[ 860], 60.00th=[ 900],
| 70.00th=[ 1064], 80.00th=[ 1160], 90.00th=[ 1320], 95.00th=[ 1480],
| 99.00th=[ 2160], 99.50th=[ 6496], 99.90th=[ 20864], 99.95th=[ 48384],
| 99.99th=[432128]
bw ( KiB/s): min=191328, max=522392, per=96.29%, avg=319423.17, stdev=120797.88, samples=6
iops : min=47832, max=130598, avg=79856.00, stdev=30199.55, samples=6
lat (nsec) : 1000=66.37%
lat (usec) : 2=32.41%, 4=0.66%, 10=0.17%, 20=0.29%, 50=0.05%
lat (usec) : 100=0.02%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
lat (msec) : 2=0.01%
cpu : usr=14.46%, sys=72.66%, ctx=1978, majf=0, minf=9
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: bw=324MiB/s (340MB/s), 324MiB/s-324MiB/s (340MB/s-340MB/s), io=1024MiB (1074MB), run=3161-3161msec
--------------------
molnart@omv6:/StoragePool/Storj$ sudo fio --name=writetest --ioengine=libaio --rw=randwrite --bs=4k --direct=1 --size=1G
writetest: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [w(1)][75.0%][w=358MiB/s][w=91.7k IOPS][eta 00m:01s]
writetest: (groupid=0, jobs=1): err= 0: pid=843179: Fri Dec 20 17:49:31 2024
write: IOPS=96.1k, BW=376MiB/s (394MB/s)(1024MiB/2727msec); 0 zone resets
slat (usec): min=4, max=2128, avg= 8.75, stdev=20.87
clat (nsec): min=784, max=1284.1k, avg=1107.85, stdev=6259.16
lat (usec): min=5, max=2136, avg= 9.86, stdev=22.01
clat percentiles (nsec):
| 1.00th=[ 812], 5.00th=[ 828], 10.00th=[ 828], 20.00th=[ 836],
| 30.00th=[ 844], 40.00th=[ 852], 50.00th=[ 860], 60.00th=[ 876],
| 70.00th=[ 908], 80.00th=[ 988], 90.00th=[ 1208], 95.00th=[ 1432],
| 99.00th=[ 2320], 99.50th=[ 5920], 99.90th=[ 23680], 99.95th=[ 51968],
| 99.99th=[264192]
bw ( KiB/s): min=184744, max=512712, per=99.71%, avg=383410.80, stdev=130015.57, samples=5
iops : min=46186, max=128178, avg=95852.60, stdev=32503.94, samples=5
lat (nsec) : 1000=81.90%
lat (usec) : 2=16.73%, 4=0.76%, 10=0.23%, 20=0.26%, 50=0.07%
lat (usec) : 100=0.03%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
lat (msec) : 2=0.01%
cpu : usr=15.88%, sys=76.05%, ctx=1633, majf=0, minf=10
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: bw=376MiB/s (394MB/s), 376MiB/s-376MiB/s (394MB/s-394MB/s), io=1024MiB (1074MB), run=2727-2727msec
--------------------
molnart@omv6:/StoragePool/Storj$ sync; sudo fio --name=writetest --ioengine=libaio --rw=randwrite --bs=4k --direct=1 --size=1G
writetest: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [w(1)][-.-%][w=357MiB/s][w=91.3k IOPS][eta 00m:00s]
writetest: (groupid=0, jobs=1): err= 0: pid=843543: Fri Dec 20 17:49:58 2024
write: IOPS=102k, BW=399MiB/s (418MB/s)(1024MiB/2567msec); 0 zone resets
slat (usec): min=4, max=3662, avg= 8.23, stdev=21.32
clat (nsec): min=771, max=1190.0k, avg=1053.94, stdev=5319.98
lat (usec): min=5, max=3669, avg= 9.29, stdev=22.14
clat percentiles (nsec):
| 1.00th=[ 796], 5.00th=[ 804], 10.00th=[ 812], 20.00th=[ 820],
| 30.00th=[ 820], 40.00th=[ 828], 50.00th=[ 836], 60.00th=[ 844],
| 70.00th=[ 860], 80.00th=[ 948], 90.00th=[ 1208], 95.00th=[ 1432],
| 99.00th=[ 2096], 99.50th=[ 6688], 99.90th=[ 22656], 99.95th=[ 41216],
| 99.99th=[181248]
bw ( KiB/s): min=165372, max=538784, per=99.59%, avg=406807.20, stdev=155799.14, samples=5
iops : min=41343, max=134696, avg=101701.80, stdev=38949.78, samples=5
lat (nsec) : 1000=84.70%
lat (usec) : 2=14.17%, 4=0.58%, 10=0.14%, 20=0.28%, 50=0.08%
lat (usec) : 100=0.02%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
lat (msec) : 2=0.01%
cpu : usr=15.08%, sys=78.64%, ctx=1370, majf=0, minf=10
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: bw=399MiB/s (418MB/s), 399MiB/s-399MiB/s (418MB/s-418MB/s), io=1024MiB (1074MB), run=2567-2567msec
-----------
molnart@omv6:/StoragePool/Test$ sudo fio --name=writetest --ioengine=libaio --rw=randwrite --bs=4k --direct=1 --size=1G
writetest: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.33
Starting 1 process
writetest: Laying out IO file (1 file / 1024MiB)
Jobs: 1 (f=1): [w(1)][96.1%][w=28.8MiB/s][w=7360 IOPS][eta 00m:02s]
writetest: (groupid=0, jobs=1): err= 0: pid=850664: Fri Dec 20 17:57:59 2024
write: IOPS=5408, BW=21.1MiB/s (22.2MB/s)(1024MiB/48467msec); 0 zone resets
slat (usec): min=4, max=280564, avg=182.60, stdev=1655.04
clat (nsec): min=809, max=826375, avg=1405.34, stdev=3415.91
lat (usec): min=5, max=280571, avg=184.01, stdev=1655.42
clat percentiles (nsec):
| 1.00th=[ 844], 5.00th=[ 860], 10.00th=[ 868], 20.00th=[ 876],
| 30.00th=[ 892], 40.00th=[ 908], 50.00th=[ 972], 60.00th=[ 1020],
| 70.00th=[ 1144], 80.00th=[ 1464], 90.00th=[ 2480], 95.00th=[ 2704],
| 99.00th=[ 6752], 99.50th=[13504], 99.90th=[22144], 99.95th=[26752],
| 99.99th=[63744]
bw ( KiB/s): min= 3032, max=108976, per=96.73%, avg=20928.11, stdev=15783.22, samples=96
iops : min= 758, max=27244, avg=5232.01, stdev=3945.79, samples=96
lat (nsec) : 1000=56.06%
lat (usec) : 2=29.19%, 4=12.62%, 10=1.54%, 20=0.47%, 50=0.12%
lat (usec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
cpu : usr=1.52%, sys=9.62%, ctx=42021, majf=0, minf=10
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: bw=21.1MiB/s (22.2MB/s), 21.1MiB/s-21.1MiB/s (22.2MB/s-22.2MB/s), io=1024MiB (1074MB), run=48467-48467msec
-----------
molnart@omv6:/StoragePool/Test$ sudo fio --name=writetest --ioengine=libaio --rw=randwrite --bs=4k --direct=1 --size=1G
writetest: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [w(1)][90.9%][w=32.8MiB/s][w=8406 IOPS][eta 00m:08s]
writetest: (groupid=0, jobs=1): err= 0: pid=851444: Fri Dec 20 17:59:23 2024
write: IOPS=3287, BW=12.8MiB/s (13.5MB/s)(1024MiB/79735msec); 0 zone resets
slat (usec): min=4, max=321111, avg=301.77, stdev=2383.22
clat (nsec): min=789, max=1595.3k, avg=1487.11, stdev=5063.03
lat (usec): min=5, max=321117, avg=303.26, stdev=2383.70
clat percentiles (nsec):
| 1.00th=[ 828], 5.00th=[ 844], 10.00th=[ 852], 20.00th=[ 868],
| 30.00th=[ 884], 40.00th=[ 900], 50.00th=[ 956], 60.00th=[ 996],
| 70.00th=[ 1128], 80.00th=[ 1496], 90.00th=[ 2576], 95.00th=[ 3056],
| 99.00th=[ 7584], 99.50th=[14016], 99.90th=[24192], 99.95th=[32640],
| 99.99th=[67072]
bw ( KiB/s): min= 1016, max=114624, per=97.07%, avg=12765.28, stdev=11326.78, samples=159
iops : min= 254, max=28656, avg=3191.30, stdev=2831.68, samples=159
lat (nsec) : 1000=60.12%
lat (usec) : 2=24.12%, 4=12.17%, 10=2.89%, 20=0.52%, 50=0.15%
lat (usec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
lat (msec) : 2=0.01%
cpu : usr=1.00%, sys=6.33%, ctx=46951, majf=0, minf=10
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: bw=12.8MiB/s (13.5MB/s), 12.8MiB/s-12.8MiB/s (13.5MB/s-13.5MB/s), io=1024MiB (1074MB), run=79735-79735msec