
Thread: ZFS Zpool Issue

  1. #1
    Join Date
    Nov 2007
    Beans
    7

    ZFS Zpool Issue

    When replacing a failed drive in a raidz configuration I typed
    Code:
    sudo zpool add PoolName /dev/NewDrive
    To my dismay, this added the disk as a new top-level device instead of replacing the failed one. How do I remove "sdb" from the pool without destroying the raid?

    Additional information about system:
    Raidz1 with 6 drives total (basically 5 data + 1 parity), pool named Disk2

    Output of zpool status after disconnecting sdb as a test, which broke the pool:
    Code:
    sudo zpool status Disk2
    
          NAME                                                       STATE     READ WRITE CKSUM
            Disk2                                                      FAULTED      0     0     0  corrupted data
              raidz1-0                                                 DEGRADED     0     0     0
                disk/by-id/ata-Hitachi_HDS723020BLA642_MN1220F320V0UD  ONLINE       0     0     0
                disk/by-id/ata-ST3000DM001-9YN166_S1F0HKQJ             ONLINE       0     0     0
                disk/by-id/ata-Hitachi_HDS723020BLA642_MN1220F31VJ58D  ONLINE       0     0     0
                disk/by-id/ata-Hitachi_HDS723020BLA642_MN1220F32268AD  ONLINE       0     0     0
                disk/by-id/ata-Hitachi_HDS723020BLA642_MN1220F3214VSD  ONLINE       0     0     0
                disk/by-id/ata-Hitachi_HDS723020BLA642_MN1220F32253WD  UNAVAIL      0     0     0  cannot open
              sdb                                                      FAULTED      0     0     0  corrupted data
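    The unhealthy lines can be pulled out of that output like this (a small awk sketch; normally you'd pipe zpool status Disk2 straight into it, the heredoc below just replays a few lines of the status shown above):
    Code:

```shell
# List devices whose STATE column is anything other than ONLINE.
# Normally: zpool status Disk2 | awk '...'
awk '$2 ~ /^(DEGRADED|FAULTED|UNAVAIL|OFFLINE|REMOVED)$/ { print $1, $2 }' <<'EOF'
NAME                                                       STATE     READ WRITE CKSUM
Disk2                                                      FAULTED      0     0     0  corrupted data
raidz1-0                                                   DEGRADED     0     0     0
disk/by-id/ata-Hitachi_HDS723020BLA642_MN1220F320V0UD      ONLINE       0     0     0
disk/by-id/ata-Hitachi_HDS723020BLA642_MN1220F32253WD      UNAVAIL      0     0     0  cannot open
sdb                                                        FAULTED      0     0     0  corrupted data
EOF
```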

  2. #2
    Join Date
    Feb 2010
    Location
    Obscurial Springs
    Beans
    15,210
    Distro
    Ubuntu Budgie Development Release

    Re: ZFS Zpool Issue

    Moved Per Request

  3. #3
    Join Date
    Feb 2009
    Location
    Dallas
    Beans
    1,494

    Re: ZFS Zpool Issue

    ZFS doesn't allow removing a top-level vdev from a storage pool; pools can only grow, not shrink. I believe you'll have to destroy and re-create the pool if you want to get "sdb" out of it. For future reference, the proper command to replace a failed disk is below.

    Code:
    sudo zpool replace *POOL* *BAD DISK* *GOOD DISK*
    In your case it would look something like this, assuming the replacement disk is a similar model (the last five characters of the ID will be something else).

    Code:
    sudo zpool replace Disk2 ata-Hitachi_HDS723020BLA642_MN1220F32253WD ata-Hitachi_HDS723020BLA642_MN1220F32XXXXX
    Always use persistent, unique device names when adding disks, whether that's a /dev/disk/by-id path or an alias defined in the /etc/vdev_id.conf file. And always read the manual before you run anything with "sudo".
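    For example, an /etc/vdev_id.conf using aliases might look like this (the alias names here are made up for illustration; after editing the file, run "sudo udevadm trigger" so the links show up under /dev/disk/by-vdev/):
    Code:

```
# /etc/vdev_id.conf -- alias names below are hypothetical examples
alias disk2-bay0  /dev/disk/by-id/ata-Hitachi_HDS723020BLA642_MN1220F320V0UD
alias disk2-bay1  /dev/disk/by-id/ata-ST3000DM001-9YN166_S1F0HKQJ
```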

    Code:
    man zpool
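    One more tip: zpool add accepts a -n flag that does a dry run, which would have shown the resulting layout before committing it (command sketch only; not run against a real pool here):
    Code:

```
# -n prints what the pool would look like after the add, without changing anything
sudo zpool add -n Disk2 /dev/NewDrive
```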
