The SAN Guy

tips and tricks from an IT veteran…

Pure Storage data reduction report

I had a request to publish a daily report that outlines our data reduction numbers for each LUN on our production Pure Storage arrays.  I wrote a script that logs in to the Pure CLI and issues the appropriate command, using ‘expect’ to do a screen grab and output the data to a csv file.  The csv file is then converted to an HTML table and published on our internal web site.

The ‘expect’ commands (saved as purevol.exp in the same directory as the bash script):

#!/usr/bin/expect -f
spawn ssh pureuser@10.10.10.10
expect "pureuser@10.10.10.10's password: "
send "password\r"
expect "pureuser@pure01> "
send "purevol list --space\r"
expect "pureuser@pure01> "
send "exit\r"

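One caveat on the expect script: the password is stored in plain text. If your Purity release supports SSH public-key authentication for the pureuser account, key-based login avoids that; here's a minimal sketch using standard OpenSSH tooling (the key path, and whether your array accepts the key, are assumptions):

ssh-keygen -t rsa -f ~/.ssh/pure_report
ssh-copy-id -i ~/.ssh/pure_report.pub pureuser@10.10.10.10

With a key in place, the password expect/send lines can be dropped from purevol.exp.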

The bash script (saved as purevol.sh):

#!/bin/bash

# Pure Data Reduction Report Script
# 11/28/16

#Define a timestamp function
#The output looks like this: 06-29-2016/08:45:12
timestamp() {
date +"%m-%d-%Y/%H:%M:%S"
}

#Remove existing output file
rm -f /home/data/pure/purevol_532D.txt

#Run the expect script to create the output file
/usr/bin/expect -f /home/data/pure/purevol.exp > /home/data/pure/purevol_532D.txt

#Remove the first twelve lines of the output file
#They contain login and command execution info not needed in the report
sed -i '1,12d' /home/data/pure/purevol_532D.txt

#Remove the last line of the output file
#This is because the expect script leaves a CLI prompt as the last line of the output
sed -i '$ d' /home/data/pure/purevol_532D.txt

#Add date to output file, remove previous temp files
rm -f /home/data/pure/purevol_532D-1.csv
rm -f /home/data/pure/purevol_532D-2.csv
echo -n "Run time: " > /home/data/pure/purevol_532D-1.csv
echo $(timestamp) >> /home/data/pure/purevol_532D-1.csv

#Add titles to new csv file
echo "Volume","Size","Thin Provisioning","Data Reduction"," "," ","Total Reduction"," "," ","Volume","Snapshots","Shared Space","System","Total" >> /home/data/pure/purevol_532D-1.csv

#Convert the space delimited file into a comma delimited file
sed -r 's/^\s+//;s/\s+/,/g' /home/data/pure/purevol_532D.txt > /home/data/pure/purevol_532D-2.csv

#Combine the csv files into one
cat /home/data/pure/purevol_532D-1.csv /home/data/pure/purevol_532D-2.csv > /home/data/pure/purevol_532D.csv

#Use the csv2htm perl script to convert the csv to an html table
#csv2html script available here:  http://web.simmons.edu/~boyd3/imap/csv2html/
./csv2htm.pl -e -T -i /home/data/pure/purevol_532D.csv -o /home/data/pure/purevol_532D.html

#Copy the html file to the www folder to publish it
cp /home/data/pure/purevol_532D.html /cygdrive/C/inetpub/wwwroot
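Since the request was for a daily report, the script needs a scheduler. A minimal sketch of a crontab entry (assuming cron is available; the /cygdrive path above suggests Cygwin, where the cron package provides it):

#Run the Pure data reduction report daily at 6:00 AM
0 6 * * * /home/data/pure/purevol.sh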

Below is an example of what the output looks like after the script runs and the output is converted to an HTML table.  Note that some columns are cut off on the right to fit the formatting of this post; the full report also includes the numbers for total reduction and snapshots.

Name                     Size  Thin Provisioning  Data Reduction
LUN001_PURE_0025_ESX_5T  5T    78%                16.4 to 1
LUN002_PURE_0025_ESX_5T  5T    75%                7.8 to 1
LUN003_PURE_0025_ESX_5T  5T    71%                9.3 to 1
LUN004_PURE_0025_ESX_5T  5T    87%                10.5 to 1


NetApp FAS Zero disk procedure

We recently had a need to zero out and reinstall a NetApp FAS 8080 in order to move it from test to production.  Below are the steps to zero out the disks in the array.

Steps:

  1. SSH to each node’s service processor.
  2. Halt each node:
    1. system node halt -node Node1 -inhibit-takeover true
    2. system node halt -node Node2
  3. At the LOADER prompt on each node, boot ONTAP. (You may want to do these one at a time so you don’t miss the Ctrl-C prompt for the Boot Menu.)
    1. LOADER A> boot_ontap
    2. LOADER B> boot_ontap
  4. Press Ctrl-C when you see the message below to enter the Boot Menu, then select option 4 to wipe the configuration and zero the disks. (Do this on each node.)
******************************* 
*                             * 
* Press Ctrl-C for Boot Menu. * 
*                             * 
******************************* 

(1) Normal Boot.
(2) Boot without /etc/rc. 
(3) Change password. 
(4) Clean configuration and initialize all disks. 
(5) Maintenance mode boot. 
(6) Update flash from backup config. 
(7) Install new software first. 
(8) Reboot node.

Selection (1-8)? 4

  5. Enter y to the questions that follow:

Zero disks, reset config and install a new file system?: y

This will erase all the data on the disks, are you sure?: y

The node will reboot and start initializing the disks.  Once the disks are zeroed, the system should reboot into cluster setup.
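Note that catching the Ctrl-C prompt requires console access, which is what the service processor connection in step 1 provides. A quick sketch of attaching to a node’s console from its SP (assuming the standard SP CLI, with the SP IP address as a placeholder):

ssh admin@<Node1-SP-IP>
SP Node1> system console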

Web interface disabled on Brocade switch

I ran into an issue where one of our Brocade switches was inaccessible via the web browser. The error below was displayed when connecting to the IP:

Interface disabled
This Interface (10.2.2.23) has been blocked by the administrator.

In order to resolve this, you’ll need to allow port 80 traffic on the switch.  It was disabled on mine.

First, log in to the switch and review the existing IP filters (look for port 80 set to deny):

switch01:admin> ipfilter --show

Name: default_ipv4, Type: ipv4, State: active
Rule  Source IP  Protocol  Dest Port   Action
1     any        tcp       22          permit
2     any        tcp       23          deny
3     any        tcp       897         permit
4     any        tcp       898         permit
5     any        tcp       111         permit
6     any        tcp       80          deny
7     any        tcp       443         permit
8     any        udp       161         permit
9     any        udp       111         permit
10    any        udp       123         permit
11    any        tcp       600 - 1023  permit
12    any        udp       600 - 1023  permit

Next, clone the default policy, since the default policy itself cannot be modified.  You can name the new policy anything you like; I chose "Allow80".

ipfilter --clone Allow80 -from default_ipv4

Delete the rule that denies port 80 (rule 6 in the example above):

ipfilter --delrule Allow80 -rule 6

Add a rule back in to permit it:

ipfilter --addrule Allow80 -rule 12 -sip any -dp 80 -proto tcp -act permit

Save it:

ipfilter --save Allow80

Activate it (this will change default policy to a “defined” state):

ipfilter --activate Allow80
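Finally, verify the change by re-running the show command and confirming that the port 80 rule is now set to permit:

switch01:admin> ipfilter --show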


That’s it… you should now be able to access your switch via the web browser.

VPLEX Unisphere Login hung at “Retrieving Meta-Volume Information”

I recently had an issue where I was unable to log in to the Unisphere GUI on the VPLEX; it would hang with the message “Retrieving Meta-Volume Information” after progressing about 30% on the progress bar.

This was caused by a hung Java process.  In order to resolve it, you must restart the management server. This will not cause any disruption to hosts connected to the VPLEX.

To do this, run the following command:

ManagementServer:/> sudo /etc/init.d/VPlexManagementConsole restart

If this hangs or does not complete, you will need to run the top command to identify the PID for the java service:

admin@service:~>top
Mem:   3920396k total,  2168748k used,  1751648k free,    29412k buffers
Swap:  8388604k total,    54972k used,  8333632k free,   527732k cached

  PID USER      PR  NI  VIRT  RES  SHR S   %CPU %MEM    TIME+  COMMAND
26993 service   20   0 2824m 1.4g  23m S     14 36.3  18:58.31 java
 4948 rabbitmq  20   0  122m  42m 1460 S      1  1.1  13118:32 beam.smp
    1 root      20   0 10540   48   36 S      0  0.0  12:34.13 init

Once you’ve identified the PID for the java service, you can kill the process with the kill command, and then run the command to restart the management console again.

ManagementServer:/> sudo kill -9 26993
ManagementServer:/> sudo /etc/init.d/VPlexManagementConsole start
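As an alternative to reading the PID out of top, a non-interactive lookup also works; a quick sketch using standard Linux tools (the PID shown is from the example above and will differ on your system):

ManagementServer:/> pgrep -l java
26993 java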

Once the management server restarts, you should be able to log in to the Unisphere for VPLEX GUI again.

Default Passwords

Here is a collection of default passwords for EMC, HP, Cisco, VMware, TrendMicro and IBM hardware & software.

EMC Secure Remote Support (ESRS) Axeda Policy Manager Server:

  • Username: admin
  • Password: EMCPMAdm7n

EMC VNXe Unisphere (EMC VNXe Series Quick Start Guide, step 4):

  • Username: admin
  • Password: Password123#

EMC vVNX Unisphere:

  • Username: admin
  • Password: Password123#
    NB You must change the administrator password during this first login.

EMC CloudArray Appliance:

  • Username: admin
  • Password: password
    NB Upon first login you are prompted to change the password.

EMC CloudBoost Virtual Appliance:
https://<FQDN>:4444

  • Username: local\admin
  • Password: password
    NB You must immediately change the admin password.
    $ password <current_password> <new_password>

EMC Ionix Unified Infrastructure Manager/Provisioning (UIM/P):

  • Username: sysadmin
  • Password: sysadmin

EMC VNX Monitoring and Reporting:

  • Username: admin
  • Password: changeme

EMC RecoverPoint:

  • Username: admin
    Password: admin
  • Username: boxmgmt
    Password: boxmgmt
  • Username: security-admin
    Password: security-admin

EMC XtremIO:

XtremIO Management Server (XMS)

  • Username: xmsadmin
    password: 123456 (prior to v2.4)
    password: Xtrem10 (v2.4+)

XtremIO Management Secure Upload

  • Username: xmsupload
    Password: xmsupload

XtremIO Management Command Line Interface (XMCLI)

  • Username: tech
    password: 123456 (prior to v2.4)
    password: X10Tech! (v2.4+)

XtremIO Management Command Line Interface (XMCLI)

  • Username: admin
    password: 123456 (prior to v2.4)
    password: Xtrem10 (v2.4+)

XtremIO Graphical User Interface (XtremIO GUI)

  • Username: tech
    password: 123456 (prior to v2.4)
    password: X10Tech! (v2.4+)

XtremIO Graphical User Interface (XtremIO GUI)

  • Username: admin
    password: 123456 (prior to v2.4)
    password: Xtrem10 (v2.4+)

XtremIO Easy Installation Wizard (on storage controllers / nodes)

  • Username: xinstall
    Password: xiofast1

XtremIO Easy Installation Wizard (on XMS)

  • Username: xinstall
    Password: xiofast1

Basic Input/Output System (BIOS) for storage controllers / nodes

  • Password: emcbios

Basic Input/Output System (BIOS) for XMS

  • Password: emcbios

EMC ViPR Controller:
http://ViPR_virtual_ip (the ViPR public virtual IP address, also known as the network.vip)

  • Username: root
    Password: ChangeMe

EMC ViPR Controller Reporting vApp:
http://<hostname>:58080/APG/

  • Username: admin
    Password: changeme

EMC Solutions Integration Service:
https://<Solutions Integration Service IP Address>:5480

  • Username: root
    Password: emc

EMC VSI for VMware vSphere Web Client:
https://<Solutions Integration Service IP Address>:8443/vsi_usm/

  • Username: admin
  • Password: ChangeMe

Note:
After the Solutions Integration Service password is changed, it cannot be modified.
If the password is lost, you must redeploy the Solutions Integration Service and use the default login ID and password to log in.

Cisco Integrated Management Controller (IMC) / CIMC / BMC:

  • Username: admin
  • Password: password

Cisco UCS Director:

  • Username: admin
  • Password: admin
  • Username: shelladmin
  • Password: changeme

Hewlett Packard P2000 StorageWorks MSA Array Systems:

  • Username: admin
  • Password: !admin (exclamation mark ! before admin)
  • Username: manage
  • Password: !manage (exclamation mark ! before manage)

IBM Security Access Manager Virtual Appliance:

  • Username: admin
  • Password: admin

VCE Vision:

  • Username: admin
  • Password: 7j@m4Qd+1L
  • Username: root
  • Password: V1rtu@1c3!

VMware vSphere Management Assistant (vMA):

  • Username: vi-admin
  • Password: vmware

VMware Data Recovery (VDR):

  • Username: root
  • Password: vmw@re (make sure you enter @ as Shift-2 as in US keyboard layout)

VMware vCenter Hyperic Server:
https://Server_Name_or_IP:5480/

  • Username: root
  • Password: hqadmin

https://Server_Name_or_IP:7080/

  • Username: hqadmin
  • Password: hqadmin

VMware vCenter Chargeback:
https://Server_Name_or_IP:8080/cbmui

  • Username: root
  • Password: vmware

VMware vCenter Server Appliance (VCSA) 5.5:
https://Server_Name_or_IP:5480

  • Username: root
  • Password: vmware

VMware vCenter Operations Manager (vCOPS):

Console access:

  • Username: root
  • Password: vmware

Manager:
https://Server_Name_or_IP

  • Username: admin
  • Password: admin

Administrator Panel:
https://Server_Name_or_IP/admin

  • Username: admin
  • Password: admin

Custom UI User Interface:
https://Server_Name_or_IP/vcops-custom

  • Username: admin
  • Password: admin

VMware vCenter Support Assistant:
http://Server_Name_or_IP

  • Username: root
  • Password: vmware

VMware vCenter / vRealize Infrastructure Navigator:
https://Server_Name_or_IP:5480

  • Username: root
  • Password: specified during OVA deployment

VMware ThinApp Factory:

  • Username: admin
  • Password: blank (no password)

VMware vSphere vCloud Director Appliance:

  • Username: root
  • Password: vmware

VMware vCenter Orchestrator:
https://Server_Name_or_IP:8281/vco – VMware vCenter Orchestrator
https://Server_Name_or_IP:8283 – VMware vCenter Orchestrator Configuration

  • Username: vmware
  • Password: vmware

VMware vCloud Connector Server (VCC) / Node (VCN):
https://Server_Name_or_IP:5480

  • Username: admin
  • Password: vmware
  • Username: root
  • Password: vmware

VMware vSphere Data Protection Appliance:

  • Username: root
  • Password: changeme

VMware HealthAnalyzer:

  • Username: root
  • Password: vmware

VMware vShield Manager:
https://Server_Name_or_IP

  • Username: admin
  • Password: default
    type enable to enter Privileged Mode, password is 'default' as well

Teradici PCoIP Management Console:

  • The default password is blank

Trend Micro Deep Security Virtual Appliance (DS VA):

  • Login: dsva
  • password: dsva

Citrix Merchandising Server Administrator Console:

  • User name: root
  • password: C1trix321

VMTurbo Operations Manager:

  • User name: administrator
  • password: administrator
    If DHCP is not enabled, configure a static address by logging in with these credentials:
  • User name: ipsetup
  • password: ipsetup
    Console access:
  • User name: root
  • password: vmturbo

Scripting an alert for checking the availability of individual CIFS server shares

I was recently asked to come up with a method to alert on the availability of specific CIFS file shares in our environment.  This was prompted by a recent issue on our VNX where the data mover crashed and corrupted a single file system when it came back up.  We were unaware for several hours that the one file system was unavailable on our CIFS server.

This particular script requires maintenance whenever a new file system share is added to a CIFS server.  A unique line must be added for every file system share that you have configured.  If a file system is not mounted and the share is inaccessible, an email alert is sent.  If the share is accessible, the script does nothing when run from the scheduler.  If it’s run manually from the CLI, it echoes back to the screen that the path is active.

This is a bash shell script; I run it on a Windows server with Cygwin installed, using the ‘email’ package for SMTP.  It should also run fine from a Linux server, and you could substitute sendmail (or whatever other mail application you use) for the ‘email’ syntax.  I have it scheduled to check the availability of the CIFS shares every hour.


DIR1=file_system_1; SRV1=cifs_servername; echo -ne $DIR1 && echo -ne ": " && [ -d //$SRV1/$DIR1 ] && echo "Network Path is Active" || email -b -s "Network Path \\\\$SRV1\\$DIR1 is offline" emailaddress@email.com

DIR2=file_system_2; SRV1=cifs_servername; echo -ne $DIR2 && echo -ne ": " && [ -d //$SRV1/$DIR2 ] && echo "Network Path is Active" || email -b -s "Network Path \\\\$SRV1\\$DIR2 is offline" emailaddress@email.com

DIR3=file_system_3; SRV1=cifs_servername; echo -ne $DIR3 && echo -ne ": " && [ -d //$SRV1/$DIR3 ] && echo "Network Path is Active" || email -b -s "Network Path \\\\$SRV1\\$DIR3 is offline" emailaddress@email.com

DIR4=file_system_4; SRV1=cifs_servername; echo -ne $DIR4 && echo -ne ": " && [ -d //$SRV1/$DIR4 ] && echo "Network Path is Active" || email -b -s "Network Path \\\\$SRV1\\$DIR4 is offline" emailaddress@email.com

DIR5=file_system_5; SRV1=cifs_servername; echo -ne $DIR5 && echo -ne ": " && [ -d //$SRV1/$DIR5 ] && echo "Network Path is Active" || email -b -s "Network Path \\\\$SRV1\\$DIR5 is offline" emailaddress@email.com
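If you’d rather not maintain one line per share, the same checks can be driven from a list. Below is a minimal sketch under the same assumptions (the Cygwin ‘email’ command and a single CIFS server); new shares then only require adding a name to the SHARES variable:

#!/bin/bash
# Check each share on the CIFS server and send an email alert for any that are offline.
SRV1=cifs_servername
SHARES="file_system_1 file_system_2 file_system_3 file_system_4 file_system_5"

for DIR in $SHARES; do
    if [ -d "//$SRV1/$DIR" ]; then
        echo "$DIR: Network Path is Active"
    else
        email -b -s "Network Path \\\\$SRV1\\$DIR is offline" emailaddress@email.com
    fi
done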

EMC World 2015

I’m at EMC World in Las Vegas this week and it’s been fantastic so far.  I’m excited about the new 40TB XtremIO X-Bricks and how we might leverage them for our largest and most important 80TB Oracle database, as well as possible use cases for the Virtual VNX in our small branch locations, and all the other exciting futures that I can’t publicly share because I’m under an NDA with EMC.  Truly exciting and innovative technology is coming from them.  VxBlock was also really impressive, although that’s not likely something my company will implement anytime soon.

I found out for the first time today that the excellent VNX Monitoring and Reporting application is now free for the VNX1 platform as well as VNX2.  If you would like to get a license for any of your VNX1 arrays, simply ask your local sales representative to submit a zero-dollar sales order for a license.  We’re currently evaluating ViPR SRM as a replacement for our soon-to-be end-of-life Control Center install, but until then VNX MR is a fantastic tool that provides nearly the same performance data at no cost at all.  SRM adds much more functionality beyond VNX monitoring and reporting (e.g., monitoring SAN switches) and I’d highly recommend doing a demo if you’re also still using Control Center.

We also implemented a VPLEX last year, and it’s truly been a lifesaver and an amazing platform.  We currently have a VPLEX Local implementation in our primary data center, and it’s allowed us to easily migrate workloads from one array to another with no disruption to applications.  I’m excited about the possibilities with RecoverPoint as well; I’m still learning about it.

If anyone else who’s at EMC World happens to read this, comment!  I’d love to hear your experiences and what you’re most excited about with EMC’s latest technology.

Rescan Storage System command on Celerra results in conflict:storageID-devID error

I was attempting to extend our main production NAS file pool on our NS-960 and ran into an issue.  I had recently freed up 8 SATA disks from a block pool and was attempting to re-use them to extend a Celerra file pool.  I created a new RAID Group and a LUN that used the maximum capacity of the RAID Group.  I then added the LUN to the Celerra storage group, making sure to set the HLU to a number greater than 15.  I then changed the setting on our main production file pool to auto-extend and clicked on the “Rescan Storage Systems” option.  Unfortunately, rescanning produced an error every time it was run, even though I have done this exact procedure in the past and it’s worked fine.  Here is the error:

conflict:storageID-devID: disk=17 old:symm=APM00100600999,dev=001F new:symm=APM00100600999,dev=001F addr=c16t1l11

I checked the disks on the Celerra using the nas_disk -l command, and the new disk shows up as “in use” even though the rescan command didn’t properly complete.

[nasadmin@Celerra tools]$ nas_disk -l
id   inuse  sizeMB    storageID-devID      type   name  servers
17    y     7513381   APM00100600999-001F  CLATA  d17   <BLANK>

Once the dvol is presented to the Celerra (assuming the rescan goes fine), it should not be marked “in use” until it is assigned to a storage pool and a file system uses it.  In this case that didn’t happen.  If you run /nas/tools/whereisfs (depending on your DART version, it may be “.whereisfs” with the dot), it shows a listing of every file system and the disk and LUN each resides on.  I verified with that command that the disk was not actually in use.

To be on the safe side, I opened an SR with EMC rather than simply deleting the disk.  They suggested that the NAS database is corrupted.  I’m going to have EMC’s Recovery Team check the usage of the diskvol, then delete it and re-add it.  In order to engage the Recovery Team you need to sign a “Data Deletion Form” absolving EMC of any liability for data loss, which is standard practice when they delete volumes on a customer array.  If there are any further caveats or important things to note after EMC has taken care of this, I’ll update this post.

VPLEX initiator paths dropped

We recently ran into an SP bug check on one of our VNX arrays, and after it came back up, several of the initiator paths to the VPLEX did not come back up.  We were also seeing IO timeouts.  This is a known bug that occurs after an SP reboot and is fixed by Patch 1 for GeoSynchrony 5.3.  EMC has released a script that provides a workaround until the patch can be applied: https://download.emc.com/downloads/DL56253_VPLEX_VNX_SCRIPT.zip.zip

The following pre-conditions need to happen during a VNX NDU to see this issue on VPLEX:
1] During a VNX NDU, SPA goes down.
2] At this point, IO timeouts start occurring on the IT nexuses pertaining to SPA.
3] The IO timeouts cause the VPLEX SCSI layer to send LU Reset TMFs, which get timed out as well.

You can review ETA 000193541 on EMC’s support site for more information.  It’s a critical bug and I’d suggest patching as soon as possible.


VPLEX Health Check

This is a brief post to share the CLI commands and sample output for a quick VPLEX health check.  Our VPLEX had a dial-home event, and below are the commands that EMC ran to verify that it was healthy.  Here is the dial-home event that was generated:

SymptomCode: 0x8a266032
SymptomCode: 0x8a34601a
Category: Status
Severity: Error
Status: Failed
Component: CLUSTER
ComponentID: director-1-1-A
SubComponent: stdf
CallHome: Yes
FirstTime: 2014-11-14T11:20:11.008Z
LastTime: 2014-11-14T11:20:11.008Z
CDATA: Compare and Write cache transaction submit failed, status 1 [Versions:MS{D30.60.0.3.0, D30.0.0.112, D30.60.0.3}, Director{6.1.202.1.0}, ClusterWitnessServer{unknown}] RCA: The attempt to start a cache transaction for a Scsi Compare and Write command failed. Remedy: Contact EMC Customer Support.

Description: The processing of a Scsi Compare and Write command could not complete.
ClusterID: cluster-1

Based on that error the commands below were run to make sure the cluster was healthy.

This is the general health check command:

VPlexcli:/> health-check
 Product Version: 5.3.0.00.00.10
 Product Type: Local
 Hardware Type: VS2
 Cluster Size: 2 engines
 Cluster TLA:
 cluster-1: FNM00141800023
 
 Clusters:
 ---------
 Cluster    Cluster  Oper   Health  Connected  Expelled  Local-com
 Name       ID       State  State
 ---------  -------  -----  ------  ---------  --------  ---------
 cluster-1  1        ok     ok      True       False     ok
 
 Meta Data:
 ----------
 Cluster    Volume                           Volume       Oper   Health  Active
 Name       Name                             Type         State  State
 ---------  -------------------------------  -----------  -----  ------  ------
 cluster-1  c1_meta_backup_2014Nov21_100107  meta-volume  ok     ok      False
 cluster-1  c1_meta_backup_2014Nov20_100107  meta-volume  ok     ok      False
 cluster-1  c1_meta                          meta-volume  ok     ok      True
 
 Director Firmware Uptime:
 -------------------------
 Director Firmware Uptime
 -------------- ------------------------------------------
 director-1-1-A 147 days, 16 hours, 15 minutes, 29 seconds
 director-1-1-B 147 days, 15 hours, 58 minutes, 3 seconds
 director-1-2-A 147 days, 15 hours, 52 minutes, 15 seconds
 director-1-2-B 147 days, 15 hours, 53 minutes, 37 seconds
 
 Director OS Uptime:
 -------------------
 Director OS Uptime
 -------------- ---------------------------
 director-1-1-A 12:49pm up 147 days 16:09
 director-1-1-B 12:49pm up 147 days 16:09
 director-1-2-A 12:49pm up 147 days 16:09
 director-1-2-B 12:49pm up 147 days 16:09
 
 Inter-director Management Connectivity:
 ---------------------------------------
 Director        Checking  Connectivity
                 Enabled
 --------------  --------  ------------
 director-1-1-A  Yes       Healthy
 director-1-1-B  Yes       Healthy
 director-1-2-A  Yes       Healthy
 director-1-2-B  Yes       Healthy
 
 Front End:
 ----------
 Cluster    Total    Unhealthy  Total       Total  Total     Total
 Name       Storage  Storage    Registered  Ports  Exported  ITLs
            Views    Views      Initiators         Volumes
 ---------  -------  ---------  ----------  -----  --------  -----
 cluster-1  56       0          299         16     353       9802
 
 Storage:
 --------
 Cluster    Total    Unhealthy  Total    Unhealthy  Total  Unhealthy  No     Not visible  With
 Name       Storage  Storage    Virtual  Virtual    Dist   Dist       Dual   from         Unsupported
            Volumes  Volumes    Volumes  Volumes    Devs   Devs       Paths  All Dirs     # of Paths
 ---------  -------  ---------  -------  ---------  -----  ---------  -----  -----------  -----------
 cluster-1  203      0          199      0          0      0          0      0            0
 
 Consistency Groups:
 -------------------
 Cluster    Total        Unhealthy    Total         Unhealthy
 Name       Synchronous  Synchronous  Asynchronous  Asynchronous
            Groups       Groups       Groups        Groups
 ---------  -----------  -----------  ------------  ------------
 cluster-1  0            0            0             0
 
 Cluster Witness:
 ----------------
 Cluster Witness is not configured

This command checks the status of the cluster:

VPlexcli:/> cluster status
Cluster cluster-1
operational-status: ok
transitioning-indications:
transitioning-progress:
health-state: ok
health-indications:
local-com: ok

This command checks the state of the storage volumes:

VPlexcli:/> storage-volume summary
Storage-Volume Summary (no tier)
----------------------  --------------------

Health    out-of-date      0
          storage-volumes  203
          unhealthy        0

Vendor    DGC              203

Use       meta-data        4
          used             199

Capacity  total            310T
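Beyond the commands EMC ran here, a back-end connectivity check is a useful companion for a health review; on GeoSynchrony 5.x the VPlexcli command below validates paths from the directors to the back-end arrays (verify its availability on your release):

VPlexcli:/> connectivity validate-be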