
Thread: Seemingly sporadic slow ZFS IO since 22.04

  1. #121
    Join Date
    Nov 2023
    Beans
    75

    Re: Seemingly sporadic slow ZFS IO since 22.04

    I'm on it. It's taking a LONG time... long in the kind of way that makes me nervous. If nothing else, this is a good time to be backing up!

    Edit: Quick question:

    Code:
    logs                                                                    -      -      -        -         -      -      -      -  -
      nvme-Samsung_SSD_980_PRO_with_Heatsink_2TB_S6WRNS0W5XXXXXX-part2   928G   132K   928G        -         -     0%  0.00%      -    ONLINE
    Does this mean that only 132K is being used on this partition? If so, that's a hideous waste of nearly 1TB. Maybe I should swap it out for something smaller?
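    (For context on the question above: a separate log device only holds a few seconds of in-flight synchronous writes before they are committed to the main pool, so very low usage on it is normal rather than a sign of waste. Watching it under load shows this. A sketch, assuming the pool name from this thread:)

    Code:
    ```shell
    # Watch per-vdev activity, including the log vdev, refreshed once per second
    zpool iostat -v Tank 1
    ```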

    Edit2: Ok, rsync is doing my head in. Truth be told, I have a lot more files than the last time I backed up - a lot of music files and tiny metadata files. It's been assessing the files for about 5 hours now. The array doesn't contain mission-critical stuff, so I'll hold my hands up and say I haven't backed it up for some time, but the last time I did, it definitely did not take this long.

    I've been looking at ZFS send/recv which I haven't used before. From what I can tell, I snap the pool, send it, rename mount point, then snap it on the other end to send back?

    Code:
    zfs send Tank/Docker@before_erase | pv | zfs receive Tank-Backup/Docker
    I'll start with something small!

    Edit3: Sorry, I missed what you said about -r and -R... we are copying now.....

    Code:
    zfs send -R Tank@before_erase_20240206 | pv | zfs receive -F Tank-Backup
    Last edited by tkae-lp; February 6th, 2024 at 04:34 AM. Reason: ZFS Send/Recv

  2. #122
    Join Date
    Mar 2010
    Location
    USA
    Beans
    Hidden!
    Distro
    Ubuntu Development Release

    Re: Seemingly sporadic slow ZFS IO since 22.04

    You would use either a full snapshot with Send/Receive, or an rsync backup to somewhere. You don't need both, right? That would result in two separate backups of the same thing. I mean, that would be safer, but you'd be duplicating your efforts.

    "Concurrent coexistence of Windows, Linux and UNIX..." || Ubuntu user # 33563, Linux user # 533637
    Sticky: Graphics Resolution | UbuntuForums 'system-info' Script | Posting Guidelines | Code Tags

  3. #123
    Join Date
    Nov 2023
    Beans
    75

    Re: Seemingly sporadic slow ZFS IO since 22.04

    Sorry, my previous post probably made it look like I am doing both. I've ditched rsync because, for whatever reason, it was too slow.

    I'm just using send/recv now. By "full snapshot" you mean using the -r flag, right? I've never used this before, so I hope this is right. I've only ever snapped the tank and sub-vols individually.

    This is what I've done:

    Code:
    zfs snapshot -r Tank@before_erase_20240206
    Code:
    zfs send -R Tank@before_erase_20240206 | pv | zfs receive -F Tank-Backup
    It's about halfway...

    Code:
    5.92TiB 12:52:20 [ 158MiB/s] [       <=>                   ]
    It must be on some of the larger media files right now because the speed has picked up quite a bit.
    Last edited by tkae-lp; February 6th, 2024 at 05:31 PM.

  4. #124
    Join Date
    Mar 2010
    Location
    USA
    Beans
    Hidden!
    Distro
    Ubuntu Development Release

    Re: Seemingly sporadic slow ZFS IO since 22.04

    The -r flag on zfs snapshot means recursive. That will create snapshots recursively of the current and all descendent datasets underneath what you asked for.

    The -R flag on zfs send will replicate the specified file system and all of its descendant file systems, up to when the named snapshot was taken. That is what you need to restore it from blank. If you do not use it, only the named dataset is sent, none of the descendants, and you would not have everything.

    That is why I asked you to recheck the sizes of what is "sent", so you can double-check that you have a good backup.

    These are the options I use for my full backups. Then in between, I use snapshots of the changes to do incrementals.
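    As a sketch of that full-then-incremental workflow (snapshot names here are hypothetical; -R and -i behave as described in zfs-send(8)):

    Code:
    ```shell
    # Full baseline: recursive snapshot, then a replication stream of everything
    zfs snapshot -r Tank@base
    zfs send -R Tank@base | pv | zfs receive -F Tank-Backup

    # Later: new recursive snapshot, then send only the changes since the baseline
    zfs snapshot -r Tank@incr1
    zfs send -R -i Tank@base Tank@incr1 | pv | zfs receive -F Tank-Backup
    ```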
    Last edited by MAFoElffen; February 6th, 2024 at 07:22 PM.


  5. #125
    Join Date
    Nov 2023
    Beans
    75

    Re: Seemingly sporadic slow ZFS IO since 22.04

    Ok, so what I am doing is correct?

    1. I snapped the entire tank using -r
    2. Then sent using -R

    I will mount the vols after completion and check. It certainly appears to be sending the lot. It's copied 7.23TiB so far.
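    One way to double-check what arrived, once the recursive snapshot exists on both pools, is to compare snapshot sizes side by side (a sketch; pool names taken from this thread):

    Code:
    ```shell
    # List every snapshot on source and backup with its used and referenced sizes
    zfs list -r -t snapshot -o name,used,refer Tank
    zfs list -r -t snapshot -o name,used,refer Tank-Backup
    ```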

  6. #126
    Join Date
    Mar 2010
    Location
    USA
    Beans
    Hidden!
    Distro
    Ubuntu Development Release

    Re: Seemingly sporadic slow ZFS IO since 22.04

    LOL. Yes.


  7. #127
    Join Date
    Nov 2023
    Beans
    75

    Re: Seemingly sporadic slow ZFS IO since 22.04

    Ok fab. You'll have to excuse the obviously daft questions - I have sleep issues and right now they are bad. A daft question is less daft if it saves a ton of data and 24 hours ;D

    Getting there!

    Code:
    7.85TiB 16:39:37 [ 130MiB/s]

  8. #128
    Join Date
    Nov 2023
    Beans
    75

    Re: Seemingly sporadic slow ZFS IO since 22.04

    Ok, so this is after the send:

    Code:
    ~ » zfs list
    NAME                       USED  AVAIL     REFER  MOUNTPOINT
    Tank                      11.3T  9.03T     11.0T  /mnt/Tank
    Tank-Backup               11.3T  3.08T     11.0T  /mnt/Tank
    Tank-Backup/Docker        16.3G  3.08T     16.3G  /mnt/Docker
    Tank-Backup/Proxmox       14.9G  3.08T     14.9G  /mnt/Proxmox
    Tank/Docker               17.0G  9.03T     16.9G  /mnt/Docker
    Tank/Proxmox              15.5G  9.03T     15.5G  /mnt/Proxmox
    After mount point reassignment:

    Code:
    zfs list
    NAME                       USED  AVAIL     REFER  MOUNTPOINT
    Tank                      11.3T  9.03T     11.0T  /mnt/Tank
    Tank-Backup               11.3T  3.08T     11.0T  /mnt/Tank-Backup
    Tank-Backup/Docker        16.3G  3.08T     16.3G  /mnt/Tank-Backup-Docker
    Tank-Backup/Proxmox       14.9G  3.08T     14.9G  /mnt/Tank-Backup-Proxmox
    Tank/Docker               17.0G  9.03T     16.9G  /mnt/Docker
    Tank/Proxmox              15.5G  9.03T     15.5G  /mnt/Proxmox
    The main tank's size matches, but the sub-vols' don't... what do I do? Could this be trash and temp files?
    Last edited by tkae-lp; February 7th, 2024 at 06:33 AM.
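    For what it's worth, ZFS itself can break down where USED goes, split between snapshots, the dataset's own data, and children, which might explain small differences like the ones above (a sketch; dataset names from the listings in this thread):

    Code:
    ```shell
    # Break down USED into snapshot, dataset, reservation, and child usage
    zfs list -o space -r Tank/Docker Tank-Backup/Docker
    ```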

  9. #129
    Join Date
    Mar 2010
    Location
    USA
    Beans
    Hidden!
    Distro
    Ubuntu Development Release

    Re: Seemingly sporadic slow ZFS IO since 22.04

    You could check with du, right? Using the '--exclude' flag.
    Last edited by MAFoElffen; February 7th, 2024 at 03:55 PM.


  10. #130
    Join Date
    Nov 2023
    Beans
    75

    Re: Seemingly sporadic slow ZFS IO since 22.04

    *facepalm* I didn't even think of du. I had tunnel vision and was thinking there must be some ZFS command. I've had like an hour's sleep in the last 36 hours... probably not the right time to be doing this, but I'll just double-check everything I type!

    It's looking good:

    Code:
    du -sh --exclude "./.*" --block-size=1G /mnt/Tank /mnt/Tank-Backup /mnt/Docker /mnt/Tank-Backup-Docker
    
    11220    /mnt/Tank
    11221    /mnt/Tank-Backup
    17    /mnt/Docker
    17    /mnt/Tank-Backup-Docker
    Looks like I'm ready to destroy & recreate the pool then send back.

    Edit:

    Code:
    zfs send -R Tank-Backup@before_erase_20240206 | pv | zfs receive -F Tank
    7.24GiB 0:00:37 [ 228MiB/s] [ <=>                                                           ]
    And we're off!

    Edit2:

    Having not used ZFS send/recv before, I can say that I am a convert based on this experience! It's made life so much easier than rsync, and it's way quicker for the data I have here. Another new thing taken away from this whole experience. Great stuff.
    Last edited by tkae-lp; February 7th, 2024 at 11:43 PM.

