Tag Archives: navicli

Adding a Celerra to a Clariion storage domain from the CLI

If you’re having trouble joining your Celerra to the storage domain from Unisphere, there is an EMC service workaround for joining it from the Navisphere CLI. When I attempted it from Unisphere, the join appeared to succeed, but the Celerra never actually showed up in the domain list. Below is the workaround that worked for me. Run these commands from the Control Station.

Run this first:

curl -kv "https://<Celerra_CS_IP>/cgi-bin/set_incomingmaster?master=<Clariion_SPA_DomainMaster_IP>,"

Next, run the following navicli command to the domain master in order to add the Celerra Control Station to the storage domain:

/nas/sbin/naviseccli -h <Clariion_SPA_DomainMaster_IP> -user <userid> -password <password> -scope 0 domain -add <Celerra_CS_IP>

After a successful join, the /nas/http/domain directory should be populated with the domain_list, domain_master, and domain_users files.

Run this command to verify:

ls -l /nas/http/domain

You should see this output:

-rw-r--r-- 1 apache apache 552 Aug  8  2011 domain_list
-rw-r--r-- 1 apache apache  78 Feb 15  2011 domain_master
-rw-r--r-- 1 apache apache 249 Oct  5  2011 domain_users
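If you'd rather script the check, here is a minimal sketch that tests for each of the three files under the /nas/http/domain path from above (no other names are assumed):

```shell
#!/bin/sh
# Hedged sketch: confirm the join populated all three files under
# /nas/http/domain (path taken from the post). Prints PRESENT or MISSING
# for each expected file.
DOMAIN_DIR="/nas/http/domain"

for F in domain_list domain_master domain_users; do
    if [ -f "$DOMAIN_DIR/$F" ]; then
        echo "$F: PRESENT"
    else
        echo "$F: MISSING"
    fi
done
```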
 

You can also check the domain list to make sure that an entry has been made.

Run this command to verify:

/nas/sbin/naviseccli -h <Clariion_SPA_DomainMaster_IP> domain -list

You should then see a list of all domain members.  The output will look like this:

Node:        <DNS Name of Celerra>
IP Address:  <Celerra_CS_IP>
Name:        <DNS Name of Celerra>
Port:        80
Secure Port: 443
IP Address:  <Celerra_CS_IP>
Name:        <DNS Name of Celerra>
Port:        80
Secure Port: 443

Reporting on the state of VNX auto-tiering

 

To go along with my previous post (reporting on LUN tier distribution), I also include information on the same intranet page about the current state of the auto-tiering job.  We run auto-tiering from 10 PM to 6 AM to avoid moving data during business hours or during our normal evening backup window.

Sometimes the auto-tiering job gets so backed up that it would never finish in the time slot we allot for data movement.  I like to keep tabs on the amount of data that needs to move up or down and on the time the array estimates until completion.  If needed, I will sometimes modify the schedule to run 24 hours a day over the weekend and change it back early Monday morning.  Unfortunately, EMC did not design the auto-tiering scheduler to allow different time windows on different days, so it's a manual process.

This is a relatively simple one-line CLI command, but it provides very useful information, and it's convenient to add to a daily report so you can see the state at a glance.

I run this script at 6 AM every day, immediately after the data-movement window closes:

naviseccli -h clariion1_hostname autotiering -info -state -rate -schedule -opStatus > c:\inetpub\wwwroot\clariion1_hostname.autotier.txt

naviseccli -h clariion2_hostname autotiering -info -state -rate -schedule -opStatus > c:\inetpub\wwwroot\clariion2_hostname.autotier.txt

naviseccli -h clariion3_hostname autotiering -info -state -rate -schedule -opStatus > c:\inetpub\wwwroot\clariion3_hostname.autotier.txt

naviseccli -h clariion4_hostname autotiering -info -state -rate -schedule -opStatus > c:\inetpub\wwwroot\clariion4_hostname.autotier.txt

 ....
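If the reports are run from a Unix host or under Cygwin, the repeated commands above can be collapsed into a loop. This is a hedged sketch: the hostnames and web root are the placeholders from above, and the loop only prints the commands so you can review them before piping them to sh (or dropping the echo):

```shell
#!/bin/sh
# Sketch: generate the per-array autotiering report command in a loop rather
# than repeating it. ARRAYS and WEBROOT are placeholders from the post.
ARRAYS="clariion1_hostname clariion2_hostname clariion3_hostname clariion4_hostname"
WEBROOT='c:\inetpub\wwwroot'

CMDS=$(for ARRAY in $ARRAYS; do
    # Print the command that would be run for this array
    echo "naviseccli -h $ARRAY autotiering -info -state -rate -schedule -opStatus > $WEBROOT\\$ARRAY.autotier.txt"
done)
echo "$CMDS"
```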
The output for each individual Clariion looks like this:
Auto-Tiering State: Enabled
Relocation Rate: Medium

Schedule Name: Default Schedule
Schedule State: Enabled
Default Schedule: Yes
Schedule Days: Sun Mon Tue Wed Thu Fri Sat
Schedule Start Time: 22:00
Schedule Stop Time: 6:00
Schedule Duration: 8 hours
Storage Pools: Clariion1_SPB, Clariion2_SPA

Storage Pool Name: Clariion2_SPA
Storage Pool ID: 0
Relocation Start Time: 12/05/11 22:00
Relocation Stop Time: 12/06/11 6:00
Relocation Status: Inactive
Relocation Type: Scheduled
Relocation Rate: Medium
Data to Move Up (GBs): 1854.11
Data to Move Down (GBs): 909.06
Data Movement Completed (GBs): 2316.00
Estimated Time to Complete: 9 hours, 12 minutes
Schedule Duration Remaining: None

Storage Pool Name: Clariion1_SPB
Storage Pool ID: 1
Relocation Start Time: 12/05/11 22:00
Relocation Stop Time: 12/06/11 6:00
Relocation Status: Inactive
Relocation Type: Scheduled
Relocation Rate: Medium
Data to Move Up (GBs): 1757.11
Data to Move Down (GBs): 878.05
Data Movement Completed (GBs): 1726.00
Estimated Time to Complete: 11 hours, 42 minutes
Schedule Duration Remaining: None
 
 

Reporting on Trespassed LUNs

 

All of our production Clariions are configured with two large tiered storage pools: one for LUNs on SPA and one for LUNs on SPB.  When storage is created on a server, two identical LUNs are created (one in each pool) and striped at the host level.  I do it that way to more evenly balance the load on the storage processors.

I’ve noticed that LUNs will occasionally trespass to the other SP.  To keep the SPs balanced the way I want them, I routinely check for trespassed LUNs and trespass them back to their default owner.  Our naming convention for LUNs includes the SP the LUN was initially configured to use, as well as the pool ID, server name, filesystem/drive letter, last four digits of the array serial number, and size.  Having all of this information in the LUN name makes for very easy reporting, and having the default SP in the LUN name is required for this script to work as written.
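As a sketch of the routine trespass-back step: running "trespass mine" against each SP tells that SP to reclaim the LUNs it owns by default. The SP IPs below are placeholders, and you should verify the trespass verb against your naviseccli release; the loop only prints the commands, so drop the echo to execute them.

```shell
#!/bin/sh
# Hedged sketch: send trespassed LUNs back to their default owners by running
# "trespass mine" against each SP. SPA_IP/SPB_IP are placeholders.
SPA_IP="<SPA_IP>"
SPB_IP="<SPB_IP>"

CMDS=$(for SP in "$SPA_IP" "$SPB_IP"; do
    # Print the command; remove the echo to actually run it
    echo "naviseccli -h $SP trespass mine"
done)
echo "$CMDS"
```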

Here’s what our LUN names look like:     P1_LUN100_SPA_0000_servername_filesystem_150G

To quickly check the status of any mismatched LUNs every morning, I created a script that generates a daily report.  The script first creates output files that list all of the LUNs on each SP, then uses simple grep commands to output only the LUNs whose SP designation in the name does not match the current owner.   The csv output files are then parsed by the csv2html perl script, which converts the csv into easy-to-read HTML files that are automatically posted on our intranet site.  The csv2html perl script is from http://www.jpsdomain.org/source/perl.html and is under the GNU General Public License.  Note that this script is designed to run on Unix.  It can be run with Cygwin installed on a Windows server if you don’t have access to a Unix-based server.

Here’s the shell script (I have one for each clariion/VNX):

naviseccli -h clariion_hostname getlun -name -owner |grep -i name > /reports/sp/lunname.out

sleep 5

naviseccli -h clariion_hostname getlun -name -owner |grep -i current >  /reports/sp/currentsp.out

sleep 5

paste -d , /reports/sp/lunname.out /reports/sp/currentsp.out >  /reports/sp/clariion_hostname.spowner.csv

./csv2htm.pl -e -T -i /reports/sp/clariion_hostname.spowner.csv -o /reports/sp/clariion_hostname.spowner.html

#Determine SP mismatches between LUNs and SPs, output to separate files

cat /reports/sp/clariion_hostname.spowner.csv | grep 'SP B' > /reports/sp/clariion_hostname_spb.csv

grep SPA /reports/sp/clariion_hostname_spb.csv > /reports/sp/clariion_hostname_spb_mismatch.csv

cat /reports/sp/clariion_hostname.spowner.csv | grep 'SP A' > /reports/sp/clariion_hostname_spa.csv

grep SPB /reports/sp/clariion_hostname_spa.csv > /reports/sp/clariion_hostname_spa_mismatch.csv

#Convert csv output files to HTML for intranet site

./csv2htm.pl -e -d -T -i /reports/sp/clariion_hostname_spa_mismatch.csv -o /reports/sp/clariion_hostname_spa_mismatch.html

./csv2htm.pl -e -d -T -i /reports/sp/clariion_hostname_spb_mismatch.csv -o /reports/sp/clariion_hostname_spb_mismatch.html
The output files look like this (clariion_hostname_spa_mismatch.html from the script):

Name: P1_LUN100_SPA_0000_servername_filesystem1_150G      Current Owner: SPB

Name: P1_LUN101_SPA_0000_servername_filesystem2_250G      Current Owner: SPB

Name: P1_LUN102_SPA_0000_servername_filesystem3_350G      Current Owner: SPB

Name: P1_LUN103_SPA_0000_servername_filesystem4_450G      Current Owner: SPB

Name: P1_LUN104_SPA_0000_servername_filesystem5_550G      Current Owner: SPB
 The 0000 represents the last four digits of the serial number of the Clariion.

That’s it, a quick and easy way to report on trespassed LUNs in our environment.

How to scrub/zero out data on a decommissioned VNX or Clariion


Our audit team needed to ensure that we properly scrubbed the old disks before sending our old Clariion back to EMC on a trade-in.  EMC of course offers scrubbing services that run upwards of $4,000 for an array.  There is also a built-in command that will do the same job:

navicli -h <SP IP> zerodisk -messner <B_E_D>

B = Bus
E = Enclosure
D = Disk

Usage:  zerodisk disk-names [start|stop|status|getzeromark]

Sample: navicli -h 10.10.10.10 zerodisk -messner 1_1_12

This command writes zeros across the entire disk, making any data recovery from it impossible.  Add the command to a Windows batch file for every disk in your array and you’ve got a quick and easy way to zero out all of the disks.
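As a sketch of that batch-file idea, the per-disk commands can be generated with a loop. The SP IP, bus and enclosure lists, and the 15-slot DAE size below are assumptions for illustration; review the generated list carefully before running any of it, since zerodisk destroys all data on the disk.

```shell
#!/bin/sh
# Hedged sketch: emit a zerodisk start command for every disk in the array.
# SP_IP, BUSES, ENCLOSURES, and LAST_SLOT are assumptions; adjust to your
# array layout. The script only prints commands; it never runs them.
SP_IP="10.10.10.10"
BUSES="0 1"
ENCLOSURES="0 1 2"
LAST_SLOT=14        # 15-slot DAE: disks 0-14

CMDS=$(for B in $BUSES; do
    for E in $ENCLOSURES; do
        D=0
        while [ "$D" -le "$LAST_SLOT" ]; do
            echo "navicli -h $SP_IP zerodisk -messner ${B}_${E}_${D} start"
            D=$((D + 1))
        done
    done
done)
echo "$CMDS"
```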

So, once the disks are zeroed out, how do you prove to the audit department that the work was done? I searched everywhere and could not find any documentation from EMC on this command, which is no big surprise since you need the engineering-mode switch (-messner) to run it.  Here were my observations after running it:

This is the zeromark status on 1_0_4 before running navicli -h 10.10.10.10 zerodisk -messner 1_0_4 start:

Bus 1 Enclosure 0 Disk 4
Zero Mark: 9223372036854775807

This is the zeromark status on 1_0_4 after the zerodisk process is complete
(I ran navicli -h 10.10.10.10 zerodisk -messner 1_0_4 getzeromark to get this status):

Bus 1 Enclosure 0 Disk 4
Zero Mark: 69704

The 69704 number indicates that the disk has been successfully scrubbed.  Prior to running the command, every disk has an extremely long zero mark (18+ digits); after the zerodisk command completes, the disks return either 69704 or 69760 depending on the type of disk (FC or SATA).  That’s the best evidence I could come up with that the zeroing was successful.  Running the getzeromark option on all of the disks before and after the zerodisk command should be sufficient to prove that the disks were scrubbed.
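That before-and-after check can be scripted as well. This sketch only prints a getzeromark command per disk; the disk list is a placeholder to fill in with every B_E_D in the array, and you would run it once before and once after zeroing, saving both outputs as the audit trail.

```shell
#!/bin/sh
# Hedged sketch: collect the zeromark for every disk so the before/after
# values can be saved as audit evidence. SP_IP and DISKS are placeholders.
SP_IP="10.10.10.10"
DISKS="1_0_4 1_0_5 1_1_12"

CMDS=$(for DISK in $DISKS; do
    # Print the command; pipe the list to sh (or drop the echo) to run it
    echo "navicli -h $SP_IP zerodisk -messner $DISK getzeromark"
done)
echo "$CMDS"
```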