ZFS pool extremely slow scrub

Everything about hard drives, solid-state drives, optical drives, USB drives...
Chris
Advanced user
Posts: 5252
Registered: Fri 13 Jan 2006, 02:00
Location: Bratislava

Re: ZFS pool extremely slow scrub

Post by Chris »

[image]

Where do you see 13 GB of free RAM there?
You have 34.2 GB allocated, the system took 13 GB, and since it has also eaten the 4.62 GB of swap, there isn't much free left :-) And once more: swap is death.
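
For reference: how much of that allocated memory is actually the ZFS ARC (and how much is genuinely available) can be read straight from the kernel - a quick sketch, nothing specific to this box:

Code: Select all

free -h                      # "available" is the realistic number, not "free"
arc_summary | head -n 30     # ARC size, target and limits
awk '$1 == "size" {print $3/1024/1024/1024 " GiB"}' /proc/spl/kstat/zfs/arcstats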
Master of PaloAlto NGFWs, Cisco ASAs
molnart
Advanced user
Posts: 7023
Registered: Tue 19 Jun 2012, 23:03
Location: Bratislava/Samorin

Re: ZFS pool extremely slow scrub

Post by molnart »

Well, maybe I just don't understand it, but 34.2G / 47.1G suggests to me that there are 13 gigs free.

Sometimes even more - see yesterday, during synchronization of the ZFS datastores:
htop2.png
Spoiler: show
PC: CPU: Intel Core i5 12600K with Silentium Fortis 5 ARGB MB: MSI Tomahawk Z690 DDR4 RAM: 2x 16GB G.Skill Ripjaws V 4400-19 DDR4 GPU: GigaByte Eagle GeForce RTX 3060 Ti OC HDD: Samsung 970 1TB PSU: Corsair RMx (2018) 650W Case: Fractal Meshify 2 Compact Monitor: Philips 272B7QPJEB OS: Win 11 64-bit
Notebook: HP EliteBook 840 G6 Core i5 8265U, 16 GB RAM, 512 GB SSD
Server: HP Microserver Gen8 Xeon E3-1265Lv2, 16GB ECC DDR3 OS: PVE + OMV + OPNsense
Phone: Samsung Galaxy A52s
Tablet: iPad Pro 11 (2018)
Chris
Advanced user
Posts: 5252
Registered: Fri 13 Jan 2006, 02:00
Location: Bratislava

Re: ZFS pool extremely slow scrub

Post by Chris »

1. Run these tests:

fio --name=seqwritetest --ioengine=libaio --rw=write --bs=1M --direct=1 --size=10G --numjobs=1 --runtime=60 --time_based --group_reporting
fio --name=writetest --ioengine=libaio --rw=randwrite --bs=4k --direct=1 --size=1G --numjobs=4 --runtime=60 --time_based --group_reporting

2. Then apply the changes I wrote about earlier.

3. Run the tests again.
molnart
Advanced user
Posts: 7023
Registered: Tue 19 Jun 2012, 23:03
Location: Bratislava/Samorin

Re: ZFS pool extremely slow scrub

Post by molnart »

Test on the original dataset with 128k recordsize and lz4 compression:

seq write:

Code: Select all

/StoragePool/Storj$ fio --name=seqwritetest --ioengine=libaio --rw=write --bs=1M --direct=1 --size=10G --numjobs=1 --runtime=60 --time_based --group_reporting
seqwritetest: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=1
fio-3.33
Starting 1 process
seqwritetest: Laying out IO file (1 file / 10240MiB)
Jobs: 1 (f=1): [W(1)][100.0%][w=188MiB/s][w=188 IOPS][eta 00m:00s]
seqwritetest: (groupid=0, jobs=1): err= 0: pid=173316: Wed Dec 18 09:36:26 2024
  write: IOPS=471, BW=471MiB/s (494MB/s)(27.6GiB/60003msec); 0 zone resets
    slat (usec): min=86, max=88376, avg=2111.86, stdev=2282.68
    clat (nsec): min=1051, max=1959.1k, avg=5991.77, stdev=20432.60
     lat (usec): min=87, max=88384, avg=2117.85, stdev=2283.78
    clat percentiles (nsec):
     |  1.00th=[  1448],  5.00th=[  1944], 10.00th=[  2512], 20.00th=[  2832],
     | 30.00th=[  3344], 40.00th=[  3920], 50.00th=[  4640], 60.00th=[  5344],
     | 70.00th=[  6048], 80.00th=[  6752], 90.00th=[  7776], 95.00th=[  9536],
     | 99.00th=[ 25984], 99.50th=[ 41216], 99.90th=[197632], 99.95th=[313344],
     | 99.99th=[888832]
   bw (  KiB/s): min=53248, max=3891200, per=100.00%, avg=483955.69, stdev=423295.49, samples=119
   iops        : min=   52, max= 3800, avg=472.53, stdev=413.39, samples=119
  lat (usec)   : 2=5.54%, 4=35.49%, 10=54.45%, 20=2.94%, 50=1.17%
  lat (usec)   : 100=0.18%, 250=0.14%, 500=0.07%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%
  cpu          : usr=1.73%, sys=9.25%, ctx=37019, majf=0, minf=20
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,28281,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=471MiB/s (494MB/s), 471MiB/s-471MiB/s (494MB/s-494MB/s), io=27.6GiB (29.7GB), run=60003-60003msec
rand write:

Code: Select all

/StoragePool/Storj$ fio --name=writetest --ioengine=libaio --rw=randwrite --bs=4k --direct=1 --size=1G --numjobs=4 --runtime=60 --time_based --group_reporting
writetest: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
...
fio-3.33
Starting 4 processes
writetest: Laying out IO file (1 file / 1024MiB)
writetest: Laying out IO file (1 file / 1024MiB)
writetest: Laying out IO file (1 file / 1024MiB)
writetest: Laying out IO file (1 file / 1024MiB)
Jobs: 4 (f=4): [w(4)][100.0%][w=38.5MiB/s][w=9866 IOPS][eta 00m:00s]
writetest: (groupid=0, jobs=4): err= 0: pid=176271: Wed Dec 18 09:39:05 2024
  write: IOPS=22.7k, BW=88.7MiB/s (93.0MB/s)(5322MiB/60031msec); 0 zone resets
    slat (usec): min=4, max=434147, avg=169.74, stdev=1486.49
    clat (nsec): min=801, max=150947k, avg=4219.68, stdev=252918.26
     lat (usec): min=5, max=434150, avg=173.96, stdev=1511.10
    clat percentiles (nsec):
     |  1.00th=[    844],  5.00th=[    860], 10.00th=[    876],
     | 20.00th=[    900], 30.00th=[    996], 40.00th=[   1096],
     | 50.00th=[   1224], 60.00th=[   1432], 70.00th=[   1720],
     | 80.00th=[   2008], 90.00th=[   2416], 95.00th=[   2832],
     | 99.00th=[   8768], 99.50th=[  17024], 99.90th=[ 252928],
     | 99.95th=[ 905216], 99.99th=[4947968]
   bw (  KiB/s): min= 8558, max=518872, per=100.00%, avg=91536.43, stdev=21392.42, samples=472
   iops        : min= 2139, max=129718, avg=22883.42, stdev=5348.17, samples=472
  lat (nsec)   : 1000=30.85%
  lat (usec)   : 2=48.69%, 4=18.70%, 10=0.84%, 20=0.49%, 50=0.20%
  lat (usec)   : 100=0.08%, 250=0.06%, 500=0.03%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.02%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
  lat (msec)   : 100=0.01%, 250=0.01%
  cpu          : usr=1.69%, sys=16.62%, ctx=481370, majf=0, minf=52
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,1362474,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=88.7MiB/s (93.0MB/s), 88.7MiB/s-88.7MiB/s (93.0MB/s-93.0MB/s), io=5322MiB (5581MB), run=60031-60031msec

ARC cache reduced to 4 GB, swappiness set to 1 (some of the swap has been freed, but mostly what was there before is still sitting in it). Created a new dataset with 64k recordsize and zstd compression. I had no dedup even before, the write cache on the disks was already disabled and the RAID card was already in IT mode.
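
For completeness, those changes correspond roughly to the following commands - a sketch assuming the usual OpenZFS-on-Linux paths (4294967296 bytes = 4 GiB):

Code: Select all

# limit the ARC to 4 GiB (runtime value, in bytes)
echo 4294967296 | sudo tee /sys/module/zfs/parameters/zfs_arc_max
echo "options zfs zfs_arc_max=4294967296" | sudo tee /etc/modprobe.d/zfs.conf   # persist across reboots

# lower swappiness and apply it without a reboot
echo "vm.swappiness=1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

# new test dataset with smaller records and zstd compression
sudo zfs create -o recordsize=64K -o compression=zstd StoragePool/Test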

seq write

Code: Select all

/StoragePool/Test$ sudo fio --name=seqwritetest --ioengine=libaio --rw=write --bs=1M --direct=1 --size=10G --numjobs=1 --runtime=60 --time_based --group_reporting
[sudo] password for molnart:
seqwritetest: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=1
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]
seqwritetest: (groupid=0, jobs=1): err= 0: pid=1354790: Wed Dec 18 23:10:58 2024
  write: IOPS=362, BW=363MiB/s (380MB/s)(21.3GiB/60001msec); 0 zone resets
    slat (usec): min=89, max=4635.4k, avg=2751.00, stdev=88232.09
    clat (nsec): min=1093, max=465161, avg=3339.53, stdev=10258.30
     lat (usec): min=91, max=4635.4k, avg=2754.34, stdev=88232.25
    clat percentiles (nsec):
     |  1.00th=[  1304],  5.00th=[  1496], 10.00th=[  1640], 20.00th=[  1864],
     | 30.00th=[  2024], 40.00th=[  2192], 50.00th=[  2352], 60.00th=[  2544],
     | 70.00th=[  2800], 80.00th=[  3184], 90.00th=[  3952], 95.00th=[  5216],
     | 99.00th=[ 16768], 99.50th=[ 38144], 99.90th=[146432], 99.95th=[238592],
     | 99.99th=[387072]
   bw (  MiB/s): min=    6, max= 2787, per=100.00%, avg=1343.41, stdev=691.78, samples=32
   iops        : min=    6, max= 2787, avg=1343.12, stdev=691.73, samples=32
  lat (usec)   : 2=28.53%, 4=61.96%, 10=7.76%, 20=0.94%, 50=0.39%
  lat (usec)   : 100=0.18%, 250=0.20%, 500=0.04%
  cpu          : usr=0.71%, sys=5.25%, ctx=161294, majf=0, minf=10
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,21769,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=363MiB/s (380MB/s), 363MiB/s-363MiB/s (380MB/s-380MB/s), io=21.3GiB (22.8GB), run=60001-60001msec
rand write:

Code: Select all

/StoragePool/Test$ sudo fio --name=writetest --ioengine=libaio --rw=randwrite --bs=4k --direct=1 --size=1G --numjobs=4 --runtime=60 --time_based --group_reporting
writetest: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
...
fio-3.33
Starting 4 processes
writetest: Laying out IO file (1 file / 1024MiB)
writetest: Laying out IO file (1 file / 1024MiB)
writetest: Laying out IO file (1 file / 1024MiB)
writetest: Laying out IO file (1 file / 1024MiB)
Jobs: 4 (f=4): [w(4)][100.0%][w=1693KiB/s][w=423 IOPS][eta 00m:00s]
writetest: (groupid=0, jobs=4): err= 0: pid=1358297: Wed Dec 18 23:13:45 2024
  write: IOPS=1290, BW=5163KiB/s (5287kB/s)(303MiB/60118msec); 0 zone resets
    slat (usec): min=5, max=211920, avg=3090.53, stdev=6897.92
    clat (nsec): min=806, max=11556k, avg=2873.17, stdev=45488.70
     lat (usec): min=5, max=211924, avg=3093.41, stdev=6899.00
    clat percentiles (nsec):
     |  1.00th=[   860],  5.00th=[   916], 10.00th=[   964], 20.00th=[  1012],
     | 30.00th=[  1080], 40.00th=[  1288], 50.00th=[  1896], 60.00th=[  2544],
     | 70.00th=[  2832], 80.00th=[  3440], 90.00th=[  4448], 95.00th=[  5856],
     | 99.00th=[ 15552], 99.50th=[ 18816], 99.90th=[ 47360], 99.95th=[ 80384],
     | 99.99th=[378880]
   bw (  KiB/s): min=  768, max=111272, per=100.00%, avg=5196.18, stdev=2861.22, samples=477
   iops        : min=  192, max=27818, avg=1299.03, stdev=715.31, samples=477
  lat (nsec)   : 1000=17.94%
  lat (usec)   : 2=33.11%, 4=35.76%, 10=11.61%, 20=1.16%, 50=0.33%
  lat (usec)   : 100=0.06%, 250=0.02%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 4=0.01%, 10=0.01%, 20=0.01%
  cpu          : usr=0.19%, sys=1.47%, ctx=41770, majf=0, minf=39
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,77594,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=5163KiB/s (5287kB/s), 5163KiB/s-5163KiB/s (5287kB/s-5287kB/s), io=303MiB (318MB), run=60118-60118msec
TL;DR: sequential write is about 30% slower, random write is up to 20x slower.
zoom
User
Posts: 2354
Registered: Thu 16 Jun 2005, 20:00
Location: Bratislava (40)

Re: ZFS pool extremely slow scrub

Post by zoom »

Well, now all that's left is to say thanks :good:. I tried those commands myself too. Random write is, as expected, worse than yours before the settings change (RAID-Z2, 6x 16TB IronWolf Pro, 128K recordsize, 2% fragmentation). On latencies I'm doing a tiny bit better, but nothing dramatic - I serve 94% of requests within 2 microseconds, you within 4 microseconds.
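
For comparison, the fill level and fragmentation can be read straight from zpool list - a minimal sketch (pool name taken from this thread):

Code: Select all

zpool list -o name,size,allocated,free,fragmentation,capacity,health StoragePool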
Spoiler: show

Code: Select all

root@nas[~]# fio --name=seqwritetest --ioengine=libaio --rw=write --bs=1M --direct=1 --size=10G --numjobs=1 --runtime=60 --time_based --group_reporting --directory=/mnt/StoragePool/StorageMain/Temp
seqwritetest: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=1
fio-3.33
Starting 1 process
seqwritetest: Laying out IO file (1 file / 10240MiB)
Jobs: 1 (f=1): [W(1)][100.0%][w=832MiB/s][w=832 IOPS][eta 00m:00s]
seqwritetest: (groupid=0, jobs=1): err= 0: pid=7206: Thu Dec 19 03:20:31 2024
  write: IOPS=642, BW=642MiB/s (673MB/s)(37.6GiB/60001msec); 0 zone resets
    slat (usec): min=114, max=50436, avg=1554.83, stdev=876.48
    clat (nsec): min=661, max=1848.3k, avg=1435.65, stdev=9549.37
     lat (usec): min=114, max=50456, avg=1556.27, stdev=877.35
    clat percentiles (nsec):
     |  1.00th=[  740],  5.00th=[  764], 10.00th=[  788], 20.00th=[  860],
     | 30.00th=[ 1004], 40.00th=[ 1128], 50.00th=[ 1176], 60.00th=[ 1240],
     | 70.00th=[ 1320], 80.00th=[ 1448], 90.00th=[ 1768], 95.00th=[ 2160],
     | 99.00th=[ 8096], 99.50th=[13504], 99.90th=[21376], 99.95th=[23680],
     | 99.99th=[45312]
   bw (  KiB/s): min=192512, max=5720064, per=99.77%, avg=656065.61, stdev=516023.43, samples=119
   iops        : min=  188, max= 5586, avg=640.69, stdev=503.93, samples=119
  lat (nsec)   : 750=1.84%, 1000=27.74%
  lat (usec)   : 2=64.05%, 4=4.67%, 10=0.96%, 20=0.56%, 50=0.17%
  lat (usec)   : 100=0.01%
  lat (msec)   : 2=0.01%
  cpu          : usr=0.54%, sys=10.94%, ctx=41960, majf=0, minf=13
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,38531,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=642MiB/s (673MB/s), 642MiB/s-642MiB/s (673MB/s-673MB/s), io=37.6GiB (40.4GB), run=60001-60001msec

Code: Select all

root@nas[~]# fio --name=writetest --ioengine=libaio --rw=randwrite --bs=4k --direct=1 --size=1G --numjobs=4 --runtime=60 --time_based --group_reporting --directory=/mnt/StoragePool/StorageMain/Temp
writetest: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
...
fio-3.33
Starting 4 processes
writetest: Laying out IO file (1 file / 1024MiB)
writetest: Laying out IO file (1 file / 1024MiB)
writetest: Laying out IO file (1 file / 1024MiB)
writetest: Laying out IO file (1 file / 1024MiB)
Jobs: 4 (f=4): [w(4)][100.0%][w=27.4MiB/s][w=7007 IOPS][eta 00m:00s]
writetest: (groupid=0, jobs=4): err= 0: pid=7284: Thu Dec 19 03:23:53 2024
  write: IOPS=8839, BW=34.5MiB/s (36.2MB/s)(2072MiB/60001msec); 0 zone resets
    slat (usec): min=7, max=24140, avg=450.30, stdev=294.43
    clat (nsec): min=481, max=2259.7k, avg=1183.16, stdev=7399.29
     lat (usec): min=8, max=24255, avg=451.48, stdev=295.26
    clat percentiles (nsec):
     |  1.00th=[   532],  5.00th=[   620], 10.00th=[   652], 20.00th=[   700],
     | 30.00th=[   764], 40.00th=[   844], 50.00th=[   900], 60.00th=[   940],
     | 70.00th=[  1012], 80.00th=[  1080], 90.00th=[  1400], 95.00th=[  2192],
     | 99.00th=[  7072], 99.50th=[  9536], 99.90th=[ 17280], 99.95th=[ 21888],
     | 99.99th=[113152]
   bw (  KiB/s): min=16445, max=296120, per=100.00%, avg=35394.56, stdev=6364.84, samples=476
   iops        : min= 4109, max=74030, avg=8848.61, stdev=1591.22, samples=476
  lat (nsec)   : 500=0.04%, 750=28.09%, 1000=40.18%
  lat (usec)   : 2=26.17%, 4=3.01%, 10=2.07%, 20=0.38%, 50=0.05%
  lat (usec)   : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%
  cpu          : usr=0.53%, sys=7.22%, ctx=495735, majf=0, minf=37
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,530400,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=34.5MiB/s (36.2MB/s), 34.5MiB/s-34.5MiB/s (36.2MB/s-36.2MB/s), io=2072MiB (2173MB), run=60001-60001msec

Despite the lower speed, the scrub finished a few days ago as expected:

Code: Select all

root@nas[~]# zpool status
  pool: StoragePool
 state: ONLINE
  scan: scrub repaired 0B in 03:57:48 with 0 errors on Sun Dec 15 03:57:49 2024
A few assorted things I found and looked into:
  • The ioztat utility - it is like iostat, but shows I/O per individual dataset. During a scrub you can watch whether the slowdowns happen exactly when it gets to one specific dataset (e.g. the one with millions of files).
  • I found out there are the arcstat and arc_summary commands. Maybe they can show whether an overly crammed ARC is what causes the swap usage on disk to grow.
  • Here a guy explains why to disable prefetch during scrubbing and use the older algorithm. You can try it, only on Linux it is probably activated a bit differently - see the sketch after this list. The modinfo zfs command lists all the parameters present, their descriptions are here. I have seen exactly those two settings mentioned in several places around the net as having helped people.
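
A sketch of how those two tunables are typically set on Linux, assuming the parameters meant are zfs_no_scrub_prefetch and zfs_scan_legacy (modinfo zfs lists them with descriptions):

Code: Select all

# set at runtime
echo 1 | sudo tee /sys/module/zfs/parameters/zfs_no_scrub_prefetch
echo 1 | sudo tee /sys/module/zfs/parameters/zfs_scan_legacy

# persist across reboots
echo "options zfs zfs_no_scrub_prefetch=1 zfs_scan_legacy=1" | sudo tee /etc/modprobe.d/zfs-scrub.conf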
A weekend project for the brave: TrueNAS Scale and OMV are both based on Linux. You shut down the array, pull out the boot disk with OMV, plug in even just a USB stick with TrueNAS Scale, boot with a minimal configuration, import the existing disks and let it scrub. That way you can compare whether, at the current fill level of the disks, the situation is fundamentally different on another system.
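
The pool move itself is only a handful of commands - a sketch using the pool name from this thread:

Code: Select all

zpool export StoragePool    # on the old system before shutting down (optional, but clean)
zpool import                # on the new system: list pools found on the attached disks
zpool import StoragePool    # import it (add -f if it was not exported cleanly)
zpool scrub StoragePool     # start the scrub for the comparison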
Chris
Advanced user
Posts: 5252
Registered: Fri 13 Jan 2006, 02:00
Location: Bratislava

Re: ZFS pool extremely slow scrub

Post by Chris »

Did you reboot the system after changing swappiness?
molnart
Advanced user
Posts: 7023
Registered: Tue 19 Jun 2012, 23:03
Location: Bratislava/Samorin

Re: ZFS pool extremely slow scrub

Post by molnart »

I did not restart; I applied the swap settings with sudo sysctl -p. It does have some effect, because swap usage is gradually dropping - right now it is at 3.04/4.65.
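
If you want to empty the swap faster than it drains on its own, the already swapped-out pages can be pushed back into RAM - a sketch, and only sensible once there is enough free memory to hold them:

Code: Select all

cat /proc/sys/vm/swappiness          # verify the new value is active
sudo swapoff -a && sudo swapon -a    # move everything out of swap, then re-enable it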
zoom wrote: Thu 19 Dec 2024, 07:03 The ioztat utility - it is like iostat, but shows I/O per individual dataset.
Sounds good, I will try it. But I'm 95% sure the slowdown will be on the pbs and storagenode datasets.
zoom wrote: Thu 19 Dec 2024, 07:03 You shut down the array, pull out the boot disk with OMV, plug in even just a USB stick with TrueNAS Scale, boot with a minimal configuration, import the existing disks and let it scrub.
That is not a problem to do, since this all runs in a VM, so it is enough to switch the RAID card passthrough to the other VM, import the configuration from TrueNAS there (where I originally created the pool) and off I go. The problem is precisely that storagenode, where I get penalized for any time it is not online, and getting vanilla Docker running under TrueNAS is a pain. Not to mention that the course of the scrub can be diametrically different if the storagenode is not writing its 150-200 GB a day to the pool during it.
molnart
Advanced user
Posts: 7023
Registered: Tue 19 Jun 2012, 23:03
Location: Bratislava/Samorin

Re: ZFS pool extremely slow scrub

Post by molnart »

How does fio work (or not work) with the cache? When I run it back to back, is it normal that I get such wildly different results?

WRITE: bw=16.8MiB/s (17.6MB/s), 16.8MiB/s-16.8MiB/s (17.6MB/s-17.6MB/s), io=1024MiB (1074MB), run=61118-61118msec
WRITE: bw=376MiB/s (394MB/s), 376MiB/s-376MiB/s (394MB/s-394MB/s), io=1024MiB (1074MB), run=2727-2727msec
WRITE: bw=399MiB/s (418MB/s), 399MiB/s-399MiB/s (418MB/s-418MB/s), io=1024MiB (1074MB), run=2567-2567msec

Here I switched off all torrents, storagenodes and similar things writing to the pool, and ran the test runs immediately one after another:
WRITE: bw=21.1MiB/s (22.2MB/s), 21.1MiB/s-21.1MiB/s (22.2MB/s-22.2MB/s), io=1024MiB (1074MB), run=48467-48467msec
WRITE: bw=12.8MiB/s (13.5MB/s), 12.8MiB/s-12.8MiB/s (13.5MB/s-13.5MB/s), io=1024MiB (1074MB), run=79735-79735msec

The progress is also interesting; at times I see an ETA of over 30 minutes and speeds below 1 MB/s. Here I started recording when it was already going fairly well, and by then it had been running for about a minute:
https://www.youtube.com/watch?v=cjOifHCThww
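
fio itself does not manage the ZFS cache at all: writes are acknowledged as soon as ZFS has them in RAM, and on the ZFS versions shipped with OMV 6 the --direct=1 flag is, as far as I know, only emulated, so a short 1 GiB job that fits into the ARC/dirty data can finish almost instantly - that is most likely where the ~3-second runs come from. The 60-second --time_based runs from Chris's commands suffer from this much less. A sketch of a variant that also counts the time needed to actually get the data onto the disks (end_fsync and fsync are standard fio options):

Code: Select all

sudo fio --name=writetest --ioengine=libaio --rw=randwrite --bs=4k --size=1G --numjobs=1 --end_fsync=1 --group_reporting
# or force a flush every 256 writes so the cache cannot absorb the whole test:
sudo fio --name=writetest --ioengine=libaio --rw=randwrite --bs=4k --size=1G --numjobs=1 --fsync=256 --group_reporting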

Code: Select all

molnart@omv6:/StoragePool/Storj$ sudo fio --name=writetest --ioengine=libaio --rw=randwrite --bs=4k --direct=1 --size=1G
writetest: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [w(1)][95.4%][w=348MiB/s][w=89.2k IOPS][eta 00m:03s] 
writetest: (groupid=0, jobs=1): err= 0: pid=841725: Fri Dec 20 17:48:58 2024
  write: IOPS=4289, BW=16.8MiB/s (17.6MB/s)(1024MiB/61118msec); 0 zone resets
    slat (usec): min=4, max=414375, avg=231.06, stdev=2054.89
    clat (nsec): min=766, max=19628k, avg=1382.33, stdev=51522.92
     lat (usec): min=5, max=414381, avg=232.44, stdev=2056.12
    clat percentiles (nsec):
     |  1.00th=[   796],  5.00th=[   812], 10.00th=[   812], 20.00th=[   820],
     | 30.00th=[   836], 40.00th=[   844], 50.00th=[   852], 60.00th=[   884],
     | 70.00th=[   972], 80.00th=[  1240], 90.00th=[  1432], 95.00th=[  1784],
     | 99.00th=[  6880], 99.50th=[ 10816], 99.90th=[ 22912], 99.95th=[ 31616],
     | 99.99th=[197632]
   bw (  KiB/s): min=  136, max=397815, per=96.73%, avg=16596.49, stdev=65989.98, samples=122
   iops        : min=   34, max=99453, avg=4149.10, stdev=16497.42, samples=122
  lat (nsec)   : 1000=72.70%
  lat (usec)   : 2=22.92%, 4=1.92%, 10=1.94%, 20=0.40%, 50=0.10%
  lat (usec)   : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 20=0.01%
  cpu          : usr=0.90%, sys=4.95%, ctx=10643, majf=0, minf=12
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=16.8MiB/s (17.6MB/s), 16.8MiB/s-16.8MiB/s (17.6MB/s-17.6MB/s), io=1024MiB (1074MB), run=61118-61118msec

--------------------
molnart@omv6:/StoragePool/Storj$ sudo fio --name=writetest --ioengine=libaio --rw=randwrite --bs=4k --direct=1 --size=1G
writetest: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][w=359MiB/s][w=91.9k IOPS][eta 00m:00s]
writetest: (groupid=0, jobs=1): err= 0: pid=843084: Fri Dec 20 17:49:21 2024
  write: IOPS=82.9k, BW=324MiB/s (340MB/s)(1024MiB/3161msec); 0 zone resets
    slat (usec): min=4, max=44464, avg=10.28, stdev=94.92
    clat (nsec): min=769, max=1295.2k, avg=1181.26, stdev=8552.87
     lat (usec): min=5, max=44472, avg=11.46, stdev=95.39
    clat percentiles (nsec):
     |  1.00th=[   796],  5.00th=[   804], 10.00th=[   812], 20.00th=[   820],
     | 30.00th=[   828], 40.00th=[   844], 50.00th=[   860], 60.00th=[   900],
     | 70.00th=[  1064], 80.00th=[  1160], 90.00th=[  1320], 95.00th=[  1480],
     | 99.00th=[  2160], 99.50th=[  6496], 99.90th=[ 20864], 99.95th=[ 48384],
     | 99.99th=[432128]
   bw (  KiB/s): min=191328, max=522392, per=96.29%, avg=319423.17, stdev=120797.88, samples=6
   iops        : min=47832, max=130598, avg=79856.00, stdev=30199.55, samples=6
  lat (nsec)   : 1000=66.37%
  lat (usec)   : 2=32.41%, 4=0.66%, 10=0.17%, 20=0.29%, 50=0.05%
  lat (usec)   : 100=0.02%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%
  cpu          : usr=14.46%, sys=72.66%, ctx=1978, majf=0, minf=9
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=324MiB/s (340MB/s), 324MiB/s-324MiB/s (340MB/s-340MB/s), io=1024MiB (1074MB), run=3161-3161msec

--------------------
molnart@omv6:/StoragePool/Storj$ sudo fio --name=writetest --ioengine=libaio --rw=randwrite --bs=4k --direct=1 --size=1G
writetest: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [w(1)][75.0%][w=358MiB/s][w=91.7k IOPS][eta 00m:01s]
writetest: (groupid=0, jobs=1): err= 0: pid=843179: Fri Dec 20 17:49:31 2024
  write: IOPS=96.1k, BW=376MiB/s (394MB/s)(1024MiB/2727msec); 0 zone resets
    slat (usec): min=4, max=2128, avg= 8.75, stdev=20.87
    clat (nsec): min=784, max=1284.1k, avg=1107.85, stdev=6259.16
     lat (usec): min=5, max=2136, avg= 9.86, stdev=22.01
    clat percentiles (nsec):
     |  1.00th=[   812],  5.00th=[   828], 10.00th=[   828], 20.00th=[   836],
     | 30.00th=[   844], 40.00th=[   852], 50.00th=[   860], 60.00th=[   876],
     | 70.00th=[   908], 80.00th=[   988], 90.00th=[  1208], 95.00th=[  1432],
     | 99.00th=[  2320], 99.50th=[  5920], 99.90th=[ 23680], 99.95th=[ 51968],
     | 99.99th=[264192]
   bw (  KiB/s): min=184744, max=512712, per=99.71%, avg=383410.80, stdev=130015.57, samples=5
   iops        : min=46186, max=128178, avg=95852.60, stdev=32503.94, samples=5
  lat (nsec)   : 1000=81.90%
  lat (usec)   : 2=16.73%, 4=0.76%, 10=0.23%, 20=0.26%, 50=0.07%
  lat (usec)   : 100=0.03%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%
  cpu          : usr=15.88%, sys=76.05%, ctx=1633, majf=0, minf=10
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=376MiB/s (394MB/s), 376MiB/s-376MiB/s (394MB/s-394MB/s), io=1024MiB (1074MB), run=2727-2727msec

--------------------
molnart@omv6:/StoragePool/Storj$ sync; sudo fio --name=writetest --ioengine=libaio --rw=randwrite --bs=4k --direct=1 --size=1G
writetest: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [w(1)][-.-%][w=357MiB/s][w=91.3k IOPS][eta 00m:00s]
writetest: (groupid=0, jobs=1): err= 0: pid=843543: Fri Dec 20 17:49:58 2024
  write: IOPS=102k, BW=399MiB/s (418MB/s)(1024MiB/2567msec); 0 zone resets
    slat (usec): min=4, max=3662, avg= 8.23, stdev=21.32
    clat (nsec): min=771, max=1190.0k, avg=1053.94, stdev=5319.98
     lat (usec): min=5, max=3669, avg= 9.29, stdev=22.14
    clat percentiles (nsec):
     |  1.00th=[   796],  5.00th=[   804], 10.00th=[   812], 20.00th=[   820],
     | 30.00th=[   820], 40.00th=[   828], 50.00th=[   836], 60.00th=[   844],
     | 70.00th=[   860], 80.00th=[   948], 90.00th=[  1208], 95.00th=[  1432],
     | 99.00th=[  2096], 99.50th=[  6688], 99.90th=[ 22656], 99.95th=[ 41216],
     | 99.99th=[181248]
   bw (  KiB/s): min=165372, max=538784, per=99.59%, avg=406807.20, stdev=155799.14, samples=5
   iops        : min=41343, max=134696, avg=101701.80, stdev=38949.78, samples=5
  lat (nsec)   : 1000=84.70%
  lat (usec)   : 2=14.17%, 4=0.58%, 10=0.14%, 20=0.28%, 50=0.08%
  lat (usec)   : 100=0.02%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%
  cpu          : usr=15.08%, sys=78.64%, ctx=1370, majf=0, minf=10
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=399MiB/s (418MB/s), 399MiB/s-399MiB/s (418MB/s-418MB/s), io=1024MiB (1074MB), run=2567-2567msec

-----------
molnart@omv6:/StoragePool/Test$ sudo fio --name=writetest --ioengine=libaio --rw=randwrite --bs=4k --direct=1 --size=1G
writetest: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.33
Starting 1 process
writetest: Laying out IO file (1 file / 1024MiB)
Jobs: 1 (f=1): [w(1)][96.1%][w=28.8MiB/s][w=7360 IOPS][eta 00m:02s] 
writetest: (groupid=0, jobs=1): err= 0: pid=850664: Fri Dec 20 17:57:59 2024
  write: IOPS=5408, BW=21.1MiB/s (22.2MB/s)(1024MiB/48467msec); 0 zone resets
    slat (usec): min=4, max=280564, avg=182.60, stdev=1655.04
    clat (nsec): min=809, max=826375, avg=1405.34, stdev=3415.91
     lat (usec): min=5, max=280571, avg=184.01, stdev=1655.42
    clat percentiles (nsec):
     |  1.00th=[  844],  5.00th=[  860], 10.00th=[  868], 20.00th=[  876],
     | 30.00th=[  892], 40.00th=[  908], 50.00th=[  972], 60.00th=[ 1020],
     | 70.00th=[ 1144], 80.00th=[ 1464], 90.00th=[ 2480], 95.00th=[ 2704],
     | 99.00th=[ 6752], 99.50th=[13504], 99.90th=[22144], 99.95th=[26752],
     | 99.99th=[63744]
   bw (  KiB/s): min= 3032, max=108976, per=96.73%, avg=20928.11, stdev=15783.22, samples=96
   iops        : min=  758, max=27244, avg=5232.01, stdev=3945.79, samples=96
  lat (nsec)   : 1000=56.06%
  lat (usec)   : 2=29.19%, 4=12.62%, 10=1.54%, 20=0.47%, 50=0.12%
  lat (usec)   : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
  cpu          : usr=1.52%, sys=9.62%, ctx=42021, majf=0, minf=10
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=21.1MiB/s (22.2MB/s), 21.1MiB/s-21.1MiB/s (22.2MB/s-22.2MB/s), io=1024MiB (1074MB), run=48467-48467msec
  
-----------
molnart@omv6:/StoragePool/Test$ sudo fio --name=writetest --ioengine=libaio --rw=randwrite --bs=4k --direct=1 --size=1G
writetest: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [w(1)][90.9%][w=32.8MiB/s][w=8406 IOPS][eta 00m:08s]
writetest: (groupid=0, jobs=1): err= 0: pid=851444: Fri Dec 20 17:59:23 2024
  write: IOPS=3287, BW=12.8MiB/s (13.5MB/s)(1024MiB/79735msec); 0 zone resets
    slat (usec): min=4, max=321111, avg=301.77, stdev=2383.22
    clat (nsec): min=789, max=1595.3k, avg=1487.11, stdev=5063.03
     lat (usec): min=5, max=321117, avg=303.26, stdev=2383.70
    clat percentiles (nsec):
     |  1.00th=[  828],  5.00th=[  844], 10.00th=[  852], 20.00th=[  868],
     | 30.00th=[  884], 40.00th=[  900], 50.00th=[  956], 60.00th=[  996],
     | 70.00th=[ 1128], 80.00th=[ 1496], 90.00th=[ 2576], 95.00th=[ 3056],
     | 99.00th=[ 7584], 99.50th=[14016], 99.90th=[24192], 99.95th=[32640],
     | 99.99th=[67072]
   bw (  KiB/s): min= 1016, max=114624, per=97.07%, avg=12765.28, stdev=11326.78, samples=159
   iops        : min=  254, max=28656, avg=3191.30, stdev=2831.68, samples=159
  lat (nsec)   : 1000=60.12%
  lat (usec)   : 2=24.12%, 4=12.17%, 10=2.89%, 20=0.52%, 50=0.15%
  lat (usec)   : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%
  cpu          : usr=1.00%, sys=6.33%, ctx=46951, majf=0, minf=10
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=12.8MiB/s (13.5MB/s), 12.8MiB/s-12.8MiB/s (13.5MB/s-13.5MB/s), io=1024MiB (1074MB), run=79735-79735msec


