How to scrub/zero out data on a decommissioned VNX or Clariion


Our audit team needed to ensure that we were properly scrubbing the old disks before sending our old Clariion back to EMC on a trade-in.  EMC of course offers a scrubbing service that runs upwards of $4,000 per array.  They also have a built-in command that will do the same job:

navicli -h <SP IP> zerodisk -messner B_E_D
B = Bus
E = Enclosure
D = Disk

usage: zerodisk disk-names [start|stop|status|getzeromark]

sample: navicli -h 10.10.10.10 zerodisk -messner 1_1_12

This command writes zeros across the entire disk, making any data recovery from it impossible.  Add this command to a Windows batch file for every disk in your array and you’ve got a quick and easy way to zero out all the disks (a sample batch file is sketched below).
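
As a minimal sketch of that batch file (not an official EMC script), assuming navicli is in the PATH, 10.10.10.10 is one of your SP addresses, and a file called disks.txt lists one Bus_Enclosure_Disk entry per line, something like this would kick off zeroing on every listed disk:

@echo off
rem Minimal sketch: starts zerodisk on every disk listed in disks.txt (one B_E_D per line).
rem Assumes navicli is in the PATH and 10.10.10.10 is your SP IP; adjust both for your array.
rem Leave the vault drives (0_0_0 through 0_0_4) out of the list if you still need to manage the array.
set SP=10.10.10.10
for /f %%d in (disks.txt) do (
  echo Starting zero on disk %%d
  navicli -h %SP% zerodisk -messner %%d start
)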

So, once the disks are zeroed out, how do you prove to the audit department that the work was done? I searched everywhere and could not find any documentation from EMC on this command, which is no big surprise since you need the engineering mode switch (-messner) to run it.  Here are my observations after running it:

This is the zero mark status on 1_0_4 before running navicli -h 10.10.10.10 zerodisk -messner 1_0_4 start:

Bus 1 Enclosure 0  Disk 4

Zero Mark: 9223372036854775807

This is the zero mark status on 1_0_4 after the zerodisk process is complete:

(I ran navicli -h 10.10.10.10 zerodisk -messner 1_0_4 getzeromark to get this status)

Bus 1 Enclosure 0  Disk 4

Zero Mark: 69704

The 69704 number indicates that the disk has been successfully scrubbed.  Prior to running the command, all disks will show an extremely long zero mark (18+ digits); after the zerodisk command completes, the disks will return either 69704 or 69760 depending on the type of disk (FC or SATA).  That’s the best I could come up with to prove that the zeroing was successful.  Running the getzeromark option on all the disks before and after the zerodisk command should be sufficient to prove that the disks were scrubbed.
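
To capture that before-and-after evidence without copying output by hand, a small wrapper along the same lines as the batch file above can dump every disk’s zero mark to a report file (again just a sketch, using the same assumed SP address and disks.txt list):

@echo off
rem Minimal sketch: appends the zero mark of every disk in disks.txt to a report file.
rem Run it once before starting zerodisk and once after it finishes, and keep both files for the auditors.
set SP=10.10.10.10
set REPORT=zeromark_report.txt
for /f %%d in (disks.txt) do (
  navicli -h %SP% zerodisk -messner %%d getzeromark >> %REPORT%
)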


23 thoughts on “How to scrub/zero out data on a decommissioned VNX or Clariion”

    1. Hi Shawn,

      Unfortunately I haven’t come across that error so I can’t give you a specific cause for it. I would suggest trying to zero out the disk using SP B instead of SP A; there could be a hardware issue with SP A on your Clariion. I would also suggest destroying all of your LUNs and RAID groups prior to zeroing. I’ve always done that before zeroing, so I’m not sure if it’s a requirement, but it’s worth a shot.

      Thanks,

      Steve

    2. You get this error message when you run the zerodisk command on a B_E_D that is already in the process of disk zeroing.
      Bus 0 Enclosure 6 Disk 0
      Error: zerodisk command failed
      Error returned from Agent
      SP A: FRU not available for disk zeroing

  1. Thanks for the excellent post!

    Will this command clobber the vault, PSM LUN, FLARE, and OS if run against the 0-0-0 through 0-0-4 drives?

    1. Yes, it will delete FLARE if you run it on 0-0-0 to 0-0-4 and will make the array inaccessible. You should avoid zeroing out those 5 drives if you want to be able to continue to manage the array.

      1. Thank you. That’s what I thought. I ended up creating a RAID group containing only the vault drives, made a single large LUN on it, and am in the process of writing data to it using:

        badblocks -ws /dev/emcpowerdf

      2. zerodisk will not delete the FLARE OS, but it will erase the (user) space on the Vault drives. Also, it will not run on a disk with a bound LUN.

        1. It was my assumption that zerodisk deletes the FLARE OS, since I made the mistake (once) of running it on the first five disks of an array, which became inaccessible immediately afterward. I was then unable to scrub the remaining disks. Either way, I’d recommend running it on the Vault drives last to be safe. You’re correct about it not running on a disk with a bound LUN. I’ve always deleted all LUNs & RAID groups prior to scrubbing the array.
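
          If you want to script that cleanup, a rough sketch (the LUN numbers and RAID group IDs below are placeholders for whatever exists on your array) is to unbind each LUN and then remove each RAID group with naviseccli before starting the zeroing:

          naviseccli -h 10.10.10.10 unbind 0 -o
          naviseccli -h 10.10.10.10 unbind 1 -o
          naviseccli -h 10.10.10.10 removerg 0
          naviseccli -h 10.10.10.10 removerg 1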

  2. How do you find out what your B_E_D values are? Is there a way to run a CLI command to list that information for all the hard drives on the SAN unit? I have searched for a command to show me that, so I know how many drives there are and what their respective locations are before running the zerodisk command.

    1. You can run naviseccli -h <SP IP> getdisk to get a listing of every single disk in your array, but that’s going to give you much more information than you need. The easy way to get a list of all your disks is to simply cut and paste directly out of Unisphere from the ‘Disks’ page right into Excel. It’s already listed in the format of Bus X Enclosure Y Disk Z, and you can then use some basic text functions in Excel to filter out just the numbers to use in a script.
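
      If you’d rather stay on the command line, one rough way to pull just those location lines out (assuming each disk’s section of the getdisk output begins with a "Bus X Enclosure Y Disk Z" header line) is to filter it with findstr:

      naviseccli -h 10.10.10.10 getdisk | findstr /B /C:"Bus"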

      1. What is Unisphere? We have a Celerra NS20. I used /nas/sbin/setup_clariion -init to get information about our array. I believe that lists the enclosures and disks.
        0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
        0_2: 300 300 300 300 300 300 300 300 300 300 300
        FC 8 8 8 8 8 9 9 9 9 9 UB EMP EMP EMP EMP R5
        0_1: 500 500 500 500 500 500 500 1000 1000 1000 1000 1000 1000 500
        ATA 1 1 1 1 1 1 1 5 5 5 5 5 HS EMP HS R5
        0_0: 146 146 146 146 146 146 146 146 146 146 146 146 146 146 146
        FC 0 0 0 0 0 2 2 3 3 4 4 4 4 4 HS MIX

        Would this be correct?

  3. Personally, for most folks I think this method is fine. However, although this does zero out the disks, it is not a certified data erasure done to the 7-pass DoD specification (DoD 5220.22-M). That’s what the paid service does.

    Another “cheap” alternative I’ve thought of recommending before is to get one of the software packages (some are free/shareware) that do DoD wiping. Install it on a SAN-attached host and carve the entire array up into very large LUNs. Then assign all the LUNs to the server you’ve installed the software on and let it run from there.
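
    To sketch the "assign all the LUNs" step (the storage group name, host name, and HLU/ALU numbers here are placeholders for your environment), the naviseccli storagegroup commands can present those big LUNs to the wipe host:

    naviseccli -h 10.10.10.10 storagegroup -create -gname WipeHost
    naviseccli -h 10.10.10.10 storagegroup -connecthost -host wipe-server -gname WipeHost -o
    naviseccli -h 10.10.10.10 storagegroup -addhlu -gname WipeHost -hlu 0 -alu 0
    naviseccli -h 10.10.10.10 storagegroup -addhlu -gname WipeHost -hlu 1 -alu 1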

    1. Very good point, Dan; using this method does not meet DoD specifications. Good idea using software to wipe the array as well. If DoD certification isn’t necessary, using the zerodisk option is probably the only way to wipe the disks that’s completely free, which is why most people would want to do it that way. 🙂

  4. Hey emcsan…we have 2 old NS40s and want to take all of the DAEs from one of the old ones and add them to the other. How do I destroy the Vault drives so I can connect that DAE to the other one, which will of course already have Vault drives? TIA

    1. Based on your plan I’d recommend opening an SR or contacting your local technical account rep to verify the correct steps for relocating the DAE. You should be able to run the same commands on the vault drives as on the other drives; you’d just need to do them last. I would also assume you’d be fine simply zeroing them out after they are moved to the new NS40, as the new NS40 won’t know about any of the original RAID groups, but I haven’t attempted what you’re planning on doing. Sorry I can’t be more helpful than that.

      1. Thanks…we cannot open an SR as we decided not to renew support on those NS40s. We have a new VNX and are using the old NS40 for dev/test only.

  5. Thanks for these steps; it seems to be working like a charm and is very easy. I just had to get the naviseccli tools from the EMC downloads via support.emc.com. I’m doing it on a CX4 with 15 x 750GB SATA drives. Looks like it’s going to take all night at the rate it’s going 🙂 Also interesting that in Unisphere the drives went from “unbound” to “binding”.

  6. Thanks for the helpful info! Does anyone know if there’s a trick to getting this to work on a VNX 7600 (VNX2)? After running the command, the Zero Mark just shows N/A.

  7. Hello,

    We normally do not use naviseccli and the zerodisk subcommand. It is only compatible with CLARiiON (and maybe VNX) arrays, and it requires some Excel work if you have tens or hundreds of drives to be wiped.

    A more general method of erasing is the following:

    1. Create new RAID groups (or pools) of up to 8 disks.
    2. Create a RAID 1+0 LUN on each RAID group (one LUN per RAID group). RAID 1 or 1+0 is best for this purpose since it is optimized for 100% sequential write operations (a naviseccli sketch of steps 1 and 2 follows the remark below).
    3. Present this LUN to a host or a VM (e.g., as an RDM).
    4. Use the DBAN (Darik’s Boot and Nuke) ISO to erase the data -> http://www.dban.org/. There is a free (no certificate) version and a paid (~15 USD) version which produces a certificate/report for your boss.

    Remark: having all of your disks divided into RAID groups will help you run the DBAN erase processes in parallel (one per LUN). This will speed up the whole thing because every LUN will be loaded with exactly one write process, so there will be no disk-arm competition within a RAID group and everything will be sequential writes with minimal head movement.
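
    As a rough naviseccli sketch of steps 1 and 2 (the RAID group ID, disk list, and LUN number are placeholders to adapt to your own disk layout):

    naviseccli -h 10.10.10.10 createrg 10 1_0_0 1_0_1 1_0_2 1_0_3 1_0_4 1_0_5 1_0_6 1_0_7
    naviseccli -h 10.10.10.10 bind r1_0 10 -rg 10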

    This is a general method, which can be used on any disk array.

    If you have some SSD drives (and CX4 arrays were the first EMC CLARiiONs that were able to use them), this is another story and the above method should not be used.

    Regards,
    S.O.
