
First day at EMC World 2013

It’s been a great first day at EMC World 2013 so far.  I’ve been to three breakout sessions, two of which were very informative and useful. I focused my day on learning more about best practices for the EMC technologies that I already use.  I’m not going to go into great detail now as I need to get to the next session. 🙂 The notes I made are specific to a Brocade switch implementation, as that’s what I use. Here are some useful bullet points from a few of my sessions today.

Storage Area Networking Best Practices:

1.  Single Initiator Zoning.  Put only one initiator and the targets it will access in each zone (see the example after this list).  I’ve already practiced this for many years, but it was interesting to hear the reasons for doing it that way: it drastically reduces the number of server queries to the switch.

2.  Dynamic Interface Management.  Use Brocade port fencing.  It can prevent a SAN outage by giving you the ability to shut down a single host or port.

3.  Monitor for Congestion.  Congestion from one host can spread and cause problems in other operating environments.  Definitely enable Brocade bottleneck detection.

4.  Periodic SAN health checks.  Use the EMC SANity or Brocade SAN Health tools regularly.  If you’re reading this, you’re very likely already doing that. 🙂

5.  Monitor for bit errors.  These can lead to BB_Credit loss.  The Brocade portcfglongdistance parameter should be set with the BB_Credit recovery option, which will prevent the problem.  Bit errors could still be a problem on F_Ports, however.

6.  Cable Hygiene.  Always clean your cables!  They are a major contributor to physical connectivity problems.

7.  Target Releases.  Always run a target release of Firmware.
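To make the single-initiator zoning item above concrete, here’s a rough sketch of what it looks like in Brocade FOS CLI.  This is an illustration only: the alias names (host1_hba0, vnx_spa0, vnx_spb0), zone name, and config name are all made up, and it assumes the aliases already exist on the switch.

zonecreate "z_host1_hba0_vnx", "host1_hba0; vnx_spa0; vnx_spb0"
cfgadd "prod_cfg", "z_host1_hba0_vnx"
cfgsave
cfgenable "prod_cfg"

One initiator plus only the targets it needs, and nothing else in the zone.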

VNX NAS Configuration Best Practices:

1. For transactional NAS, always use the correct transfer size for your workload.  The latest VNX OE release supports up to 1MB.

2. Use Jumbo frames end to end.

3. 10G is available now. Don’t use 1G anymore!

4. Use link aggregation for redundancy and load balancing, failsafe for high availability.

5. Use AVM for typical NAS configurations, MVM for transactional NAS where you have high concurrency from a single NFS stream to a single NFS export. 

6. Use Direct Writes for apps that use concurrent IO or are asynchronous (Oracle, VMware, etc.).

7. For NAS Pool LUNs, use thick LUNs that are all the same size, balance the SP designation for each one, use the same tiering policy for each one, use approximately 1 LUN for every 4 drives, and use thin enabled file systems to maximize the consumption of the highest tier (there’s a quick sketch of this layout after this list).

8. Always enable FASTCache for NAS LUNs.  They benefit from it even more than block LUNs.
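As a quick illustration of the “1 LUN for every 4 drives, same size, balanced SP ownership” guidance in item 7, here’s a minimal sketch that just prints a layout plan.  The drive count and pool capacity are hypothetical, and it doesn’t run any naviseccli commands.

#!/bin/bash
# Hypothetical pool: 32 drives, 16TB usable. Print a LUN plan following the
# "1 LUN per 4 drives, equal size, alternate SPA/SPB ownership" rule of thumb.
drives=32
pool_capacity_gb=16000

luns=$(( drives / 4 ))                  # approximately 1 LUN for every 4 drives
lun_size=$(( pool_capacity_gb / luns ))

for i in $(seq 1 $luns); do
  if [ $(( i % 2 )) -eq 1 ]; then sp="SPA"; else sp="SPB"; fi   # balance SP designation
  echo "LUN $i: ${lun_size}GB thick LUN, default owner $sp, same tiering policy"
done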

 


Storage Performance Metrics

I often get requests from application owners to review storage performance stats.  I thought I’d give a quick overview of some of the things I look at, what the myriad performance metrics in commonly used storage performance tools actually mean, and how you might use some of them to investigate a performance problem.  Performance analysis is very much an art (not a science), and it’s sometimes difficult to pinpoint exact causes given the mix of applications and workload on the array. To be successful you need to take all of the metrics into account with a holistic view. I also recommend collecting data on application workloads over time, as their characteristics will likely change. If you have a major problem, I would always recommend opening a service ticket with your hardware vendor.

This post is just an overview of storage performance metrics and isn’t meant to dive in to every possible scenario from every angle. Dell EMC has some excellent guides for performance best practices that you can read here:

I’ve used a variety of software tools in my tenure as a storage administrator: EMC’s Performance Manager, Windows PerfMon, NetApp OnCommand Insight, SolarWinds SRM, ViPR SRM, and of course the ubiquitous Navisphere Analyzer.  All of them use basically the same metrics, so the following information will be useful regardless of which tool you use.

The first thing I do when reviewing a potential storage array performance problem is a quick look at the Storage Processors.  This will give you a good indication of the overall health of the array before you dive into the specific LUN (or LUNs) used by the application.

  • SP Cache Dirty Pages (%). These are pages in write cache that have received new data from hosts but have not yet been flushed to disk.  You want a high percentage of dirty pages, as it increases the chance of a read coming from cache or of additional writes to the same block of data being absorbed by the cache. If an IO is served from cache the performance is better than if the data had to be retrieved from disk.  That’s why the default watermarks are usually around 60/80% or 70/90%.  You don’t want dirty pages to reach 100%; they should fluctuate between the high and low watermarks (which means the cache is healthy).  Periodic spikes or drops outside the watermarks are ok, but consistently hitting 100% indicates that the write cache is overstressed.
  • SP Utilization (%). Check whether either SP is running higher than about 75%.  If either is running that high, application response time will increase.  Both also need to be under 50% for non-disruptive upgrades; at one point we had to do a large scale migration of data from one SAN to another just to get an NDU accomplished.  You’ll also want to check for proper balance.  If one SP is much busier than the other, consider migrating LUNs from one SP owner to the other.  I check SP balance on all of our arrays daily.
  • SP Response time (ms). Make sure again that both SPs are even and that response time is acceptable; I like to see response times under 10ms.  If one SP has high utilization and response time but the other doesn’t, look for LUNs owned by the busier SP that are using more array resources. Looking at total IO on a per-LUN basis can help confirm this. If both SPs have relatively similar throughput but one SP has much higher bandwidth, that could mean that there is some large block IO occurring.
  • SP Port Queue Full Count. This represents the number of times that a front end port issued a QFULL response back to the hosts. If you are seeing QFULLs it could mean that the queue depth on the HBA is too large for the LUNs being accessed.  A Clariion/VNX front end port has a queue depth of 1600, which is the maximum number of simultaneous I/Os that port can process.  Each LUN on the array has a maximum queue depth that is calculated using a formula based on the number of data disks in the RAID Group. For example, a port with 512 queues and a typical LUN queue depth of 32 can support up to 512 / 32 = 16 LUNs on 1 initiator (HBA), or 16 initiators (HBAs) with 1 LUN each, or any combination not exceeding that number (see the sketch just below this list). Configurations that exceed it are in danger of returning QFULL conditions. A QFULL condition signals that the target/storage port is unable to process more IO requests, so the initiator will need to throttle IO to the storage port. As a result, application response times will increase and IO activity will decrease.
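The queue depth math in the last bullet is easy to sanity-check with a few lines of shell.  This is just a worked version of the 512 / 32 = 16 example from the text, not values pulled from a live array.

#!/bin/bash
# Worked version of the 512 / 32 = 16 example above (example numbers only).
port_queue_depth=512   # queues available on the front end port in the example
lun_queue_depth=32     # typical LUN queue depth used in the example

max_combinations=$(( port_queue_depth / lun_queue_depth ))
echo "Max initiator/LUN combinations per port before risking QFULL: $max_combinations"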

The next thing I do is look at the specific LUNs that the application owner is asking about. The list below includes the basic performance metrics that I most often look at when investigating a performance problem.

  • Utilization (%) represents the fraction of an observation period during which a LUN has any outstanding requests. When the LUN becomes the bottleneck, the utilization will be at or close to 100%. However, since I/Os can get serviced by multiple disks, an increase in workload might still result in a higher throughput.  Utilization by itself is not a very good indicator of the overall performance of the LUN; it needs to be factored in with several other things. For example, if you are writing to a LUN (100% writes) and the location of the data is in a small physical space on the LUN, it may be possible to get to 100% with write cache re-hits. This means that all writes are being serviced by the write cache, and since you are writing data to the same locations over and over you never flush any of the data to the disks. This can cause your LUN utilization to be 100% even though there is actually no IO to the disks. Utilization is heavily affected by caching, both read and write; the LUN can be very busy but may not have a problem. Use utilization to help identify busy LUNs, then look at queuing and response times to see if there really is an issue.
  • Queue Length is the average number of requests within a polling interval that are outstanding to this LUN. A queue length of zero indicates an idle LUN. If three requests arrive at an idle LUN at the same time, only one of them can be served immediately; the other two must wait in the queue. That scenario would result in a queue length of 3.  My general guideline for “bad performance” on a LUN is a queue length greater than 2 for a single disk drive.
  • Average Busy Queue Length is the average number of outstanding requests when the LUN was busy. This does not include any idle time. This value should not exceed 2 times the number of spindles on a LUN. For example, if a LUN has 25 spindles, a value of 50 is acceptable. Since this queue length is counted only when the LUN is not idle, the value indicates the frequency variation (burstiness) of incoming requests; the higher the value, the bigger the burst and the longer the average response time at this component. In contrast to this metric, the average queue length also includes idle periods when no requests are pending. If for 50% of the time there is just one outstanding request and for the other 50% the LUN is idle, the average busy queue length will be 1, while the average queue length will be ½.
  • Response Time (ms) is the average time, in milliseconds, that a request to this LUN is outstanding, including its waiting time. The higher the queue length for a LUN, the more requests are waiting in its queue, thus increasing the average response time of a single request. For a given workload, queue length and response time are directly proportional.  Keep in mind that cache re-hits bring down the average response time (and service times), whether they are reads or writes. LUN response time is a good starting point for troubleshooting. It gives a good indication of what the host system is experiencing. Usually if your LUN response time (response time = queue length * service time) is good then the host performance is good. High response times don’t always mean that the CLARiiON is busy; they can also indicate that you’re having issues with your host or fabric.  We use the Brocade Health report on a regular basis to identify hosts that have an excessive amount of traffic, as well as running the EMC HEAT report on hosts that have reported issues (which can identify incorrect HBA drivers, a bad HBA, etc.). These are my general guidelines for response time:
    Less than 10 ms: very good
    Between 10 – 20 ms: okay
    Between 20 – 50 ms: slow, needs attention
    Greater than 50 ms:  I/O bottleneck
  • Service Time (ms) represents the time, in milliseconds, a request spent being serviced by a component. It does not include time waiting in a queue. Service time is mainly a characteristic of the system component; however, larger I/Os take longer and therefore usually result in lower throughput (IO/s) but better bandwidth (Mbytes/s). Put simply, service time is the time it takes to actually send the I/O request to the storage and get an answer back. I like to see service times below 20ms.
  • Total Throughput (IO/sec) is the average number of host requests that are passed through the LUN per second. This includes both read and write requests. Smaller requests usually result in a higher total throughput than larger requests.  Examining total throughput (along with %Utilization) is a good way to identify the busiest LUNs on the array. In general, here are the IOPS limits by drive type (there’s a quick sizing sketch after this list):
RPM        Drive Type      IOPs
7,200      SATA,NL-SAS     ~80
10,000     SATA,NL-SAS     ~130
10,000     FC,SAS          ~140
15,000     FC,SAS          ~180
N/A        EFD             ~1500 (Read/Write, 60/40)
N/A        EFD             ~6000 (Read)
N/A        EFD             ~3000 (Write)
  • Write Throughput (I/O/sec) The average number of host write requests that is passed through the LUN per second. Smaller requests usually result in a higher write throughput than larger requests.  When troubleshooting specific LUNs, check the write IO size and see if the size is what you would expect for the application you are investigating. Extremely large IO sizes coupled with high IOPS may cause write cache contention.
  • Read Throughput (I/O/sec) The average number of host read requests that is passed through the LUN per second. Smaller requests usually result in a higher read throughput than larger requests.
  • Total Bandwidth (MB/s) The average amount of host data in Mbytes that is passed through the LUN per second. This includes both read and write requests. Larger requests usually result in a higher total bandwidth than smaller requests.
  • Read Bandwidth (MB/s) The average amount of host read data in Mbytes that is passed through the LUN per second. Larger requests usually result in a higher bandwidth than smaller requests.
  • Write Bandwidth (MB/s) The average amount of host write data in Mbytes that is passed through the LUN per second. Larger requests usually result in a higher bandwidth than smaller requests. Keep in mind that writes consume many more array resources than reads.
  • Read Size (KB) The average read request size in Kbytes seen by the LUN. This number indicates whether the overall read workload is oriented more toward throughput (I/Os per second) or bandwidth (Mbytes/second). For a finer distinction of I/O sizes, use an IO Size Distribution chart for this LUN.
  • Write Size (KB) The average write request size in Kbytes seen by the LUN. This number indicates whether the overall write workload is oriented more toward throughput (I/Os per second) or bandwidth (Mbytes/second). For a finer distinction of I/O sizes, use an IO Size Distribution chart for the LUNs.
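When I’m sizing or sanity-checking a drive mix against the per-drive IOPS figures in the table above, I find it handy to do the math in a quick script.  This is a rough sketch with made-up drive counts, and it deliberately ignores RAID write penalty and cache effects, so treat it as a ballpark only.

#!/bin/bash
# Rough raw IOPS estimate for a hypothetical drive mix, using the per-drive
# figures from the table above (EFD ~1500 mixed, 15K SAS ~180, NL-SAS ~80).
# Ignores RAID write penalty and cache, so this is a ballpark only.
efd_drives=5
sas15k_drives=20
nlsas_drives=15

total=$(( efd_drives*1500 + sas15k_drives*180 + nlsas_drives*80 ))
echo "Approximate raw IOPS capability: $total"   # 7500 + 3600 + 1200 = 12300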

Below is an explanation of additional performance metrics that I don’t use as frequently, but I’m including them for completeness.

  • Forced Flushes/s Number of times per second the cache had to flush pages to disk to free up space for incoming write requests. Forced flushes are a measure of how often write requests will have to wait for disk I/O rather than be satisfied by an empty slot in the write cache. In most well performing systems this should be zero most of the time. 
  • Full Stripe Writes/s Average number of write requests per second that spanned a whole stripe (all disks in a LUN). This metric is applicable only to LUNs that are part of a RAID5 or RAID3 group.
  • Used Prefetches (%) The percentage of prefetched data in the read cache that was read during the last polling interval.
  • Disk Crossing (%) Percentage of host requests that require I/O to at least two disks compared to the total number of host requests. A single disk crossing can involve more than two disk drives.
  • Disk Crossings/s Number of times per second that a request requires access to at least two disk drives. A single disk crossing can involve more than two disks.
  • Read Cache Hits/s Average number of read requests per second that were satisfied by either read or write cache without requiring any disk access. A read cache hit occurs when recently accessed data is re-referenced while it is still in the cache.
  • Read Cache Misses/s Average number of read requests per second that did require one or more disk accesses.
  • Reads From Write Cache/s Average number of read requests per second that were satisfied by write cache only. Reads from write cache occur when recently written data is read again while it is still in the write cache. This is a subset of read cache hits which includes requests satisfied by either the write or the read cache.
  • Reads From Read Cache/s Average number of read requests per second that were satisfied by the read cache only. Reads from read cache occur when data that has been recently read or prefetched is re-read while it is still in the read cache. This is a subset of read cache hits which includes requests satisfied by either the write or the read cache.
  • Read Cache Hit Ratio The fraction of read requests served from both read and write caches vs. the total number of read requests. A higher ratio indicates better read performance.
  • Write Cache Hits/s Average number of write requests per second that were satisfied by the write cache without  requiring any disk access. Write requests that are not write cache hits are referred to as write cache misses.
  • Write Cache Misses/s Average number of write requests per second that did require one or multiple disk accesses. Write requests that cause forced flushes or that bypass the write cache due to their size are examples of write cache misses.
  • Write Cache Rehits/s Average number of write requests per second that were satisfied by the write cache since they had been referenced before and not yet flushed to the disks. Write cache rehits occur when recently accessed data is referenced again while it is still in the write cache. This is a subset of Write Cache Hits.
  • Write Cache Hit Ratio The ratio of write requests that the write cache satisfied without requiring any disk access vs. the total number of write requests to this LUN. A higher ratio indicates better write performance.
  • Write Cache Rehit Ratio The ratio of write requests that the write cache satisfied since they have been referenced before and not yet flushed to the disks vs. the total number of write requests to this LUN. This is a measure of how often the write cache succeeded in eliminating a write operation to disk. While improving the rehit ratio is useful it is more beneficial to reduce the number of forced flushes.
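Most of the ratio metrics above are simple divisions of the per-second counters, which is worth remembering when a tool only exposes one or the other.  Here’s a quick sketch with hypothetical numbers:

#!/bin/bash
# Hypothetical counters pulled from an Analyzer/SRM export.
read_cache_hits_per_sec=450
read_cache_misses_per_sec=50

# Read Cache Hit Ratio = hits / (hits + misses)
ratio=$(awk -v h=$read_cache_hits_per_sec -v m=$read_cache_misses_per_sec \
        'BEGIN { printf "%.2f", h / (h + m) }')
echo "Read cache hit ratio: $ratio"   # 0.90 for these sample numbers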

What’s new in FLARE 32?

We recently installed a new VNX5700 at the end of July and EMC was on-site to install it the day after the general release of FLARE 32. We had planned our pool configuration around this release, so the timing couldn’t have been more perfect. After running the new release for about a month now it’s proven to be rock solid, and to date no critical security patches have been released that we needed to apply.

The most notable new feature for us was the addition of mixed RAID types within the same pool.  We can finally use RAID6 for the large NL-SAS drives in the pool and not have to make the entire pool RAID6.  There also are several new performance enhancements that should make a big impact, including load balancing within a tier and pool rebalancing.

Below is an overview of the new features in the initial release (05.32.000.5.006).

· VNX Block to Unified Services Plug-n-play: This feature allows customers to perform Block to Unified upgrades.

· Support for In-Family upgrades: This feature allows for the In-Family Data In Place (DIP) conversion kits that will be available with the VNX OE for File 7.1 and VNX OE for Block 05.32 release. In-family DIP conversions across the VNX family will re-use the installed SAS DAEs with their associated drives and SLICs.

· Windows 2008 R2: Branch Cache Support: This feature provides support for the branch cache feature in Windows 2008 R2. This feature allows a Windows client to cache a file locally in one of the branch office servers and then use the cached copy whenever the same file is being requested. The VNX for File CIFS Server operates as the Central Office Content Server in support of the Branch Cache feature.

· VAAI for NFS: Snaps of snaps: Supports snaps of snaps of VMDK files on NFS file systems to at least 1 level of depth (source to snap to snap). Though the functionality will initially only be available through the VAAI for NFS interface, it may be exposed through the VSI plug-in in the future as well. This feature allows VMware View 5.0 to use VAAI for NFS on a VNX system. It also requires that file systems be enabled during creation for “VMware VAAI nested clone support”, and the installation of the EMCNasPlugin-1-0.10.zip on the ESX servers.

· Load balance within a tier: This feature allows for redistribution of slices across all drives in a given tier of a pool to improve performance. This also includes proactive load balancing, such that slices are relocated from one RAID group in a tier to another, based on activity level, to balance the load within the tier. These relocations will occur within the user-defined tiering relocation window.

· Improve block compression performance: This feature provides for improved block compression performance. This includes increasing speed of compression operations, decreasing storage system impact, and decreasing host response time impact.

· Deeper File Compression Algorithm: This feature provides an option to use a deeper compression algorithm for greater capacity savings when using file-level compression. This option can be leveraged by 3rd party application servers such as the FileMover Appliance Server, based on data types (i.e. per metadata definitions) that are best suited for the deeper compression algorithm.

· Rebalance when adding drives to Pools: This feature provides for the redistribution of slices across all drives in a newly expanded pool to improve performance.

· Conversion of DLU to TLU and back, when enabling Block Compression: This feature provides an internal mechanism (not user-invoked), when enabling Compression on a thick pool LUN, that would result in an in-place conversion from Thick (“direct”) pool LUNs to Thin Pool LUNs, rather than performing a migration. Additionally, for LUNs that were originally Thick and then converted to Thin, it provides an internal mechanism, upon disabling compression, to convert the LUNs back to Thick, without requiring a user-invoked migration.

· Mixed RAID Types in Pools: This feature allows a user to define RAID types per storage tier in pool, rather than requiring a single RAID type across all drives in a single pool.

· Improved TLU performance, no worse than 115% of a FLU: This feature provides improved TLU performance. This includes decreasing host response time impact and potentially decreasing storage system impact.

· Distinguished Compression Capacity Savings: This feature provides a display of the capacity savings due to compression. This display will inform the user of the savings due to compression, distinct from the savings due to thin provisioning. The benefit for the user is that he can determine the incremental benefit of using compression. This is necessary because there is currently a performance impact when using Compression, so users need to be able to make a cost/benefit analysis.

· Additional Tiering Policies: This feature provides additional tiering options in pools: namely, “Start High, then Auto-tier”. When the user selects this policy for a given LUN, the initial allocation is on highest tier, and subsequent tiering is based on activity.

· Additional RAID Options in Pools: This feature provides 2 additional RAID options in pools, for better efficiency: 8+1 for RAID 5, and 14+2 for RAID 6. These options will be available for new pools. These options will be available in both the GUI and the CLI.

· E-Trace Enhancements: Top files per fs and other stats: This feature allows the customer to identify the top files in a file system or quota tree. The files can be identified by pathnames instead of ids.

· Support VNX Snapshots in Unisphere Quality of Service Manager: This feature provides Unisphere Quality of Service Manager (UQM) support for both the source LUN and the snapshot LUN introduced by the VNX Snapshots feature.

· Support new VNX Features in Unisphere Analyzer: This feature provides support for all new VNX features in Unisphere Analyzer, including but not limited to VNX Snapshots and 64-bit counters.

· Unified Network Services: This feature provides several enhancements that will improve user experience. The various enhancements delivered by UNS in VNX OE for File 7.1 and VNX OE for Block 05.32 release are as follows:

· Continuous Monitoring: This feature provides the ability to monitor the vital statistics of a system continuously (out-of-box) and then take appropriate action when specific conditions are detected. The user can specify multiple statistical counters to be monitored – the default counters that will be monitored are CPU utilization, memory utilization and NFS IO latency on all file systems. The conditions when an event would be raised can also be specified by the user in terms of a threshold value and time interval during which the threshold will need to be exceeded for each statistical counter being monitored. When an event is raised, the system can perform any number of actions – possible choices are log the event, start detailed correlated statistics collection for a specified time period, send email or send a SNMP trap.

· Unisphere customization for VNX: This feature provides the addition of custom references within Unisphere Software (VNX) via editable source files for product documentation and packaging, custom badges and nameplates.

· VNX Snapshots: This feature provides VNX Snapshots (a.k.a write-in-place pointer-based snapshots) that in their initial release will support Block LUNs only and require pool-based LUNs. VNX Snapshots will support File Systems in a later release. The LUNs referred to below are pool-based LUNs (thick LUNs and Thin LUNs.)

· NDMP V4 IPv6 extension: This feature provides support for the Symantec, NetApp, and EMC authored and approved NDMP v4 IPv6 extension in order to back up using NDMP 2-way and 3-way in IPv6 networked environments.

· NDMP Access Time: This feature provides the last access time (atime). In prior releases, this was not retained during an NDMPCopy and was thus set to the time of migration. So, after a migration, the customer lost the ability to archive “cold” or “inactive” data. This feature adds an optional NDMP variable (RETAIN_ATIME=y/n, the default being ‘n’) which if set includes the atime in the NDMP data stream so that it can be restored properly on the destination.

· SRDF Interoperability for Control Station: This feature provides SRDF (Symmetrix Remote Data Facility) with the ability to manage the failover process between local and remote VNX Gateways. VNX Gateway needs a way to give up control of the failover and failback to an external entity and to suppress the initiation of these processes from within the Gateway.

 

Making a case for file archiving

We’ve been investigating options for archiving unstructured (file based) data that resides on our Celerra for a while now. There are many options available, but before looking into a specific solution I was asked to generate a report that showed exactly how much of the data has been accessed by users for the last 60 days and for the last 12 months.  As I don’t have permissions to the shared folders from my workstation I started looking into ways to run the report directly from the Celerra control station.  The method I used will also work on VNX File.

After a little bit of digging I discovered that you can access all of the file systems from the control station by navigating to /nas/quota/slot_<x>.  The slot_2 folder is for the server_2 data mover, slot_3 is for server_3, etc.  With full access to all the file systems, I simply needed to write a script that scanned each folder and counted the number of files that had been modified within a certain time window.

I always use Excel for scripts I know are going to be long.  I copy the file system list from Unisphere, put the necessary commands in different columns, and end with a concatenate formula that pulls it all together.  If you put echo -n in A1, “Users_A,” in B1, and >/home/nasadmin/scripts/Users_A.dat in C1, you’d just need to type the formula “=CONCATENATE(A1,B1,C1)” into cell D1.  D1 would then contain echo -n “Users_A,” > /home/nasadmin/scripts/Users_A.dat. It’s a simple and efficient way to make long scripts very quickly.

In this case, the script needed four different sections.  All four of the sections I’m about to go over were copied into a single shell script and saved in my /home/nasadmin/scripts directory (a consolidated sketch of the whole thing follows the four sections below).  After creating the .sh file, I always do a chmod +x and chmod 777 on the file.  Be prepared for this to take a very long time to run.  It of course depends on the number of file systems on your array, but for me this script took about 23 hours to complete.

First, I create a text file for each file system that contains the name of the filesystem (and a comma) which is used later to populate the first column of the final csv output.  It’s of course repeated for each file system.

echo -n "Users_A," > home/nasadmin/scripts/Users_A.dat
echo -n "Users_B," > home/nasadmin/scripts/Users_B.dat

... <continued for each filesystem>
Second, I use the ‘find’ command to walk each directory tree and count the number of files that haven’t been touched recently.  The example below uses -mtime +365 (not modified in the last 12 months); for the 60-day version of the report you’d swap in -mtime +60 (or -atime +60 if you want last access time rather than modification time).  The output is written to another text file that will be used in the csv output file later.
find /nas/quota/slot_2/Users_A -mtime +365 | wc -l > /home/nasadmin/scripts/Users_A_wc.dat

find /nas/quota/slot_2/Users_B -mtime +365 | wc -l > /home/nasadmin/scripts/Users_B_wc.dat

... <continued for each filesystem>
 Third, I want to count the total number of files in each file system.  A third text file is written with that number, again for the final combined report that’s generated at the end.
find /nas/quota/slot_2/Users_A | wc -l > /home/nasadmin/scripts/Users_A_total.dat

find /nas/quota/slot_2/Users_B | wc -l > /home/nasadmin/scripts/Users_B_total.dat

... <continued for each filesystem>
Finally, each file is combined into the final report.  The output will show each filesystem with two columns, Total Files and Files Accessed 60+ Days Ago.  You can then easily update the report in Excel and add columns that show files accessed in the last 60 days, the percentage of files accessed in the last 60 days, etc., with some simple math.
cat /home/nasadmin/scripts/Users_A.dat /home/nasadmin/scripts/Users_A_wc.dat /home/nasadmin/scripts/comma.dat /home/nasadmin/scripts/Users_A_total.dat | tr -d "\n" > /home/nasadmin/scripts/fsoutput.csv; echo "" >> /home/nasadmin/scripts/fsoutput.csv

cat /home/nasadmin/scripts/Users_B.dat /home/nasadmin/scripts/Users_B_wc.dat /home/nasadmin/scripts/comma.dat /home/nasadmin/scripts/Users_B_total.dat | tr -d "\n" >> /home/nasadmin/scripts/fsoutput.csv; echo "" >> /home/nasadmin/scripts/fsoutput.csv

... <continued for each filesystem>

My final output looks like this:

Filesystem   Total Files   Accessed 60+ Days Ago   Accessed in Last 60 Days   % Accessed in Last 60 Days
Users_A      827,057       734,848                 92,209                     11.15
Users_B      61,975        54,727                  7,248                      11.70
Users_C      150,166       132,457                 17,709                     11.79

The three example filesystems above show that only about 11% of the files have been accessed in the last 60 days.  Most user data has a very short lifecycle: it’s ‘hot’ for a month or less, then dramatically tapers off as the business value of the data drops.  These file systems would be prime candidates for archiving.

My final report definitely supported the need for archiving, but we’ve yet to start a project to complete it.  I like the possibility of using EMC’s Cloud Tiering Appliance, which can archive data directly to the cloud service of your choice.  I’ll make another post in the future about archiving solutions once I’ve had more time to research them.

Can’t join CIFS Server to domain – sasl protocol violation

I was running a live disaster recovery test of our Celerra CIFS Server environment last week and I was not able to get the CIFS servers to join the replica of the domain controller on the DR network.  I would get the error ‘Sasl protocol violation’ on every attempt to join the domain.

We have two interfaces configured on the data mover, one that connects to production and one that connects to the DR private network.  The default route on the Celerra points to the DR network, and we have static routes configured for each of our remote sites in production to allow replication traffic to pass through.  Everything on the network side checked out: I could ping DCs and DNS servers, and NTP was configured to a DR network time server and was working.

I was able to ping the DNS Server and the domain controller:

[nasadmin@datamover1 ~]$ server_ping server_2 10.12.0.5
server_2 : 10.12.0.5 is alive, time= 0 ms
 
[nasadmin@datamover1 ~]$ server_ping server_2 10.12.18.5
server_2 : 10.12.18.5 is alive, time= 3 ms
 

When I tried to join the CIFS Server to the domain I would get this error:

[nasadmin@datamover1 ~]$ server_cifs prod_vdm_01 -Join compname=fileserver01,domain=company.net,admin=myadminaccount -option reuse
prod_vdm_01 : Enter Password:*********
Error 13157007706: prod_vdm_01 : DomainJoin::connect:: Unable to connect to the LDAP service on Domain Controller ‘domaincontroller.company.net’ (@10.12.0.5) for compname ‘fileserver01’. Result code is ‘Sasl protocol violation’. Error message is Sasl protocol violation.
 

I also saw this error message during earlier tests:

Error 13157007708: prod_vdm_01 : DomainJoin::setAccountPassword:: Unable to set account password on Domain Controller ‘domaincontroller.company.net’ for compname ‘fileserver01’. Kerberos gssError is ‘Miscellaneous failure. Cannot contact any KDC for requested realm. ‘. Error message is d0000,-1765328228.
 

I noticed these error messages in the server log:

2012-06-21 07:03:00: KERBEROS: 3: acquire_accept_cred: Failed to get keytab entry for principal host/fileserver01.company.net@COMPANY.NET – error No principal in keytab matches desired name (39756033)
2012-06-21 07:03:00: SMB: 3: SSXAK=LOGON_FAILURE Client=x.x.x.x origin=510 stat=0x0,39756033
2012-06-21 07:03:42: KERBEROS: 5: Warning: send_as_request: Realm COMPANY.NET – KDC X.X.X.X returned error: Clients credentials have been revoked (18)
 

The final resolution to the problem was to reboot the data mover. EMC determined that the issue occurred because the Kerberos keytab entry for the CIFS server was no longer valid. It could be caused by corruption or by the machine account password expiring. A reboot of the data mover causes the Kerberos keytab and SPN credentials to be resubmitted, thus resolving the problem.

Errors when creating new replication jobs

I was attempting to create a new replication job on one of our VNX5500s and was receiving several errors when selecting our DR NS-960 as the ‘destination celerra network server’.

It was displaying the following errors at the top of the window:

– “Query VDMs All.  Cannot access any Data Mover on the remote system, <celerra_name>”. The error details directed me to check that all the Data Movers are accessible, that the time difference between the source and destination doesn’t exceed 10 minutes, and that the passphrase matches.  I confirmed that all of those were fine.

– “Query Storage Pools All.  Remote command failed:\nremote celerra – <celerra_name>\nremote exit status =0\nremote error = 0\nremote message = HTTP Error 500: Internal Server Error”.  The error details on this message say to search powerlink, not a very useful description.

– “There are no destination pools available”.  The details on this error say to check available space on the destination storage pool.  There is 3.5TB available in the pool I want to use on the destination side, so that wasn’t the issue either.

All existing replication jobs were still running fine so I knew there was not a network connectivity problem.  I reviewed the following items as well:

– I was able to validate all of the interconnects successfully, that wasn’t the issue.

– I ran nas_cel -update on the interconnects on both sides and received no errors, but it made no difference.

– I checked the server logs and didn’t see any errors relating to replication.

Not knowing where to look next, I opened an SR with EMC.  As it turns out, it was a security issue.

About a month ago an EMC CE accidentally deleted our global security accounts during a service call.  I had recreated all of the deleted accounts and didn’t think there would be any further issues.  Logging in with the re-created nasadmin account after the accidental deletion was the root cause of the problem.  Here’s why:

The Clariion global user account is tied to a local user account on the control station in /etc/passwd. When nasadmin was recreated on the domain, it attempted to create the nasadmin account on the control station as well.  Because the account already existed as a local account on the control station, it created a local account named ‘nasadmin1‘ instead, which is what caused the problem.  The two nasadmin accounts were no longer synchronized between the Celerra and the Clariion domain, so when logging in with the global nasadmin account you were no longer tied to the local nasadmin account on the control station.  Deleting all nasadmin accounts from the global domain and from the local /etc/passwd on the Celerra, and then recreating nasadmin in the domain, solves the problem.  Because the issue was related only to the nasadmin account in this case, I could have also solved the problem by simply creating a new global account (with administrator privileges) and using that to create the replication job.  I tested that as well and it worked fine.

Auto transferring reports from VNX to an IIS web server via FTP

I previously posted on how to create a script that monitors your Celerra replication jobs.  I have an intranet web page that is updated daily with many other reports (most of which I’ve posted about here), so I thought I’d add this one to the web page as well rather than having to search through my inbox for it every day.

Developing an easy and automated method of getting files from the Celerra to a Windows-based web server was my challenge.  I figured out an easy way to do this with FTP.  As my internal Windows web server is also my internal FTP server, I can place the file directly in the public folder for easy web publishing.  Now that I’ve got the report working and updating on the intranet page every day, my next task will be to come up with a more secure method using SSH or SCP, but this works well for now.

The big challenge in creating a bash script that uses FTP is figuring out how to pass the user id and password.  I tried various methods unsuccessfully and finally settled on using the .netrc file.  Create a file named .netrc in your home directory (in my case I put it in /home/nasadmin) with the following syntax:

machine <ftp_server_name> login <ftp_login_id> password <ftp_password>

Once that is created, you need to do a chmod 600 on the .netrc file in order for it to work.  If the permissions are not set to 600 on that file the auto-login to the FTP server will fail.

My next step was to create the script that sends the replication status report to the IIS web server:

#!/bin/bash
cd /home/nasadmin/scripts
ftp <ftp_server_name> <<SCRIPT
put <filename>.csv
quit
SCRIPT
I always chmod the script with 755 and +x after creating it in vi.  The script always ran fine manually, but I struggled for a while getting it to work properly when run from crontab.  I figured out that you must cd to the correct directory in the script before you call the ftp command; if not, you will get “file not found” errors when you run it.  I was always running it manually from within that directory, so I didn’t immediately catch that problem. 🙂

I then added the above script to crontab on the Celerra.  I run it at 6AM every morning with the following entry:

0 6 * * * /scripts/repl_status.sh

For those not familiar with cron, you can add an entry using “crontab -e”, and list your current entries with “crontab -l”.  The first two entries in the line “0 6” represent minutes and the hour of each day, in this case it will run at 6:00AM every day.

I have a download link to the csv file on my web page, and I also have a script on my web server that converts the csv file to HTML output with a perl script called csv2html.pl, so the data can be easily viewed without having to download the csv and open it in Excel.  You can find csv2html.pl easily with a Google search; I’ve blogged about it in previous posts as well.

That’s it!  An easy way to automatically push your reports to another server from the Celerra.  Now that I have the transfer method down, I’ll be adding more daily reports in the near future.  If anyone has experience doing this type of transfer from a Celerra (or Linux server) to a Windows server via SSH or SCP, please comment! 🙂
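Since I know someone will ask about the SCP route, here’s the minimal sketch I have in mind, assuming the Windows web server runs an SSH/SCP service and key-based authentication is already set up (otherwise cron will hang at a password prompt).  The server name, user, and destination path are placeholders, not anything I’ve tested yet.

#!/bin/bash
# Hypothetical scp version of the FTP transfer script above.
# Requires SSH keys to be configured so no password is needed from cron.
cd /home/nasadmin/scripts
scp <filename>.csv <user>@<web_server_name>:/path/to/web/folder/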

Problem with soft media errors on SSD drives and FastCache

4/25/2012 Update:  EMC has released a fix for this issue.  Call your account service representative and say you need to upgrade your NS-960 DART to 6.0.55.300 and FLARE to 4.30.000.5.524, plus a drive firmware upgrade on all SSD drives to TC3Q.

Do you have FAST Cache enabled on your array?  Keep a close eye on your SP event logs for soft media errors on your SSD drives.  I just noticed over 2000 soft media errors on one of my FAST Cache enabled arrays, and found a technical advisory from EMC (emc282741) that describes this as a potentially critical problem.  I just opened a case with EMC for my array to be reviewed for a possible disk replacement.  In the event a second disk drive in the same FAST Cache RAID group encounters soft media errors before the system automatically retires the first drive, a dual-faulted RAID group could occur.  This can result in storage pools going offline and becoming completely inaccessible to the attached hosts.  That’s basically a total SAN outage, not good.

Look for errors like the following in your SP event logs:

“Date Stamp”  “Time Stamp” Bus1 Enc1 Dsk0  820 Soft Media Error [Bad block]

EMC states in emc282741 that enhancements are targeted for Q1 2012 to address SSD media errors and dual hardware faults, but in the meantime, make sure you review the SP logs if you have CLARiiON or VNX arrays that are configured with SSD disk drives or are using FAST Cache.  If any instance of the “Soft Media Error” listed above is associated with any one of the solid state disk drives in your arrays, the array should be upgraded to at least FLARE Release 04.30.000.5.522 (for CX4 Series arrays) or Release 05.31.000.5.509 (for VNX Series arrays) and then start a Proactive Copy (PACO) to a hot spare and replace the drive as soon as possible.

In order to quickly review this on each of my arrays, I wrote the following script to update my intranet site with a report every morning:

naviseccli -h clariion1a getlog > clariion1a.txt
naviseccli -h clariion1b getlog > clariion1b.txt
cat clariion1a.txt | grep -i 'soft media' > clariion1_softmedia_errors.csv
cat clariion1b.txt | grep -i 'soft media' >> clariion1_softmedia_errors.csv
./csv2htm.pl -e -T -i /home/scripts/clariion1_softmedia_errors.csv -o /<intranet_web_server>/clariion1_softmedia_errors.html
 

The script dumps the entire SP log from each SP into a text file, greps for only soft media errors in each file, then converts the output to HTML and writes it to my intranet web server.
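To keep the report refreshed every morning I schedule it with cron, the same way as the FTP report script earlier on this page.  The entry below is just an example; the script path is hypothetical.

# Example crontab entry: run the soft media error check daily at 6AM
0 6 * * * /home/scripts/softmedia_check.sh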

 

Reporting on LUN auto-tier distribution

We have auto-tiering turned on in all of our storage pools, which all use EFD, FC, and SATA disks.  I created a script that will generate a list of all of our LUNs and the current tier distribution for each LUN.  Note that this script is designed to run in unix.  It can be run using cygwin installed on a Windows server if you don’t have access to a unix based server.

You will first need to create a text file with a list of the hostnames for your arrays (or the IP of one of the storage processors for each array).  Separate lists must be made for VNX vs. older Clariion arrays, as the naviseccli output changed with VNX.  For example, “Flash” in the text output on a CX was changed to “Extreme Performance” in the output from a VNX when you run the same command.  I have one file named san.list for the older arrays, and another named san2.list for the VNX arrays.

As I mentioned in my previous post, our naming convention for LUNs includes the pool ID, LUN number, server name, filesystem/drive letter, last four digits of the array’s serial number, and size (in GB). Having all of this information in the LUN name makes for very easy reporting.  This information is what truly makes this report useful, as simply having a list of LUNs gives me all the information I need for reporting.  If I need to look at tier distribution for a certain server from this report, I simply filter the list in the spreadsheet for the server name (which is included in the LUN name).

Here’s what our LUN names looks like: P1_LUN100_SPA_0000_servername_filesystem_150G

As I said earlier, because of output differences from the naviseccli command on VNX arrays vs. older CX’s, I have two separate scripts.  I’ll include the complete scripts first, then explain in more detail what each section does.

Here is the script for CX series arrays:

for san in `/bin/cat /reports/tiers/san.list`
do
naviseccli -h $san lun -list -tiers |grep LUN |awk '{print $2}' > $san.out 
     for lun in `cat $san.out`
        do
        sleep 2
        echo $san
        naviseccli -h $san -np lun -list -name $lun -tiers > $lun.$san.dat &
     done 

mv $san.report.csv $san.report.`date +%j`.csv 
echo "LUN Name","FLASH","FC","SATA" > $san.report.csv 
     for lun in `cat  $san.out`
        do
        echo $lun
        echo `grep Name $lun.$san.dat |awk '{print $2}'`","`grep -i flash $lun.$san.dat |awk '{print $2}'`","`grep -i fc $lun.$san.dat |awk '{print $2}'`","`grep -i sata $lun.$san.dat |awk '{print $2}'` >> $san.report.csv
     done
 done

./csv2htm.pl -e -T -i /reports/clariion1_hostname.report.csv -o /reports/clariion1_hostname.report.html

./csv2htm.pl -e -T -i /reports/clariion2_hostname.report.csv -o /reports/clariion2_hostname.report.html

./csv2htm.pl -e -T -i /reports/clariion3_hostname.report.csv -o /reports/clariion3_hostname.report.html

Here is the script for VNX series arrays:

for san in `/bin/cat /reports/tiers2/san2.list`
do
naviseccli -h $san lun -list -tiers |grep LUN |awk '{print $2}' > $san.out
   for lun in `cat $san.out`
     do
     sleep 2
     echo $san.Generating-LUN-List
     naviseccli -NoPoll -h $san lun -list -name $lun -tiers > $lun.$san.dat &
  done

mv $san.report.csv $san.report.`date +%j`.csv
echo "LUN Name","FLASH","FC","SATA" > $san.report.csv
   for lun in `cat  $san.out`
      do
      echo $lun
      echo `grep Name $lun.$san.dat |awk '{print $2}'`","`grep -i extreme $lun.$san.dat |awk '{print $3}'`","`grep -i Performance $lun.$san.dat |grep -v Extreme|awk '{print $2}'`","`grep -i Capacity $lun.$san.dat |awk '{print $2}'` >> $san.report.csv
   done
 done

./csv2htm.pl -e -T -i /reports/VNX1_hostname.report.csv -o /reports/VNX1_hostname.report.html

./csv2htm.pl -e -T -i /reports/VNX2_hostname.report.csv -o /reports/VNX2_hostname.report.html

./csv2htm.pl -e -T -i /reports/VNX3_hostname.report.csv -o /reports/VNX3_hostname.report.html
 Here is a more detailed explanation of the script.

Section 1:

The entire script runs in a loop based on the SAN hostname entries.   We’ll use this list in the next section to get the LUN information from each SAN that needs to be monitored.

for san in `/bin/cat /reports/tiers/san.list`

do

naviseccli -h $san lun -list -tiers |grep LUN |awk '{print $2}' > $san.out
 Section 2:

This section will run the naviseccli command for every lun in each of the <san_hostname>.out files, and output a single text file with the tier distribution for every LUN.  If you have 500 LUNs, then 500 text files will be created in the same directory that your run the script in.

     for lun in `cat $san.out`
        do
        sleep 2
        echo $san
        naviseccli -h $san -np lun -list -name $lun -tiers > $lun.$san.dat &
     done
Each file will be named <lun_name>.<san_name>.dat, and the contents of the file look like this:
LOGICAL UNIT NUMBER 962
Name:  P1_LUN962_0000_SPB_servername_filesystem_350G
Tier Distribution: 
Flash:  4.74%
FC:  95.26%
 Section 3:

This line simply makes a copy of the previous day’s output file for archiving purposes.  The %j adds the Julian date to the file (which is 1-365, the day of the year), so the files will automatically be overwritten after one year.  It’s a self cleaning archive directory.  🙂

mv $san.report.csv $san.report.`date +%j`.csv

Section 4:

This section then processes each individual LUN file pulling out only the tier information that we need, and then combines the list into one large output file in csv format.

The first line creates a blank CSV file with the appropriate column headers.

echo "LUN Name","FLASH","FC","SATA" > $san.report.csv

This block of code parses each individual LUN file, doing a grep for each column item that we need added to the report, and using awk to grab only the specific text that we want from that line.  For example, if the LUN output file has “Flash:  4.74%” in one line and we only want the “4.74%” with the word “Flash:” stripped off, we would do an awk ‘{print $2}’ to grab only the second field on the line.

     for lun in `cat  $san.out`
        do
        echo $lun
        echo `grep Name $lun.$san.dat |awk '{print $2}'`","`grep -i flash $lun.$san.dat |awk '{print $2}'`","`grep -i fc $lun.$san.dat |awk '{print $2}'`","`grep -i sata $lun.$san.dat |awk '{print $2}'` >> $san.report.csv
     done
done
 Once every LUN file has been processed and added to the report, I run the csv2html.pl perl script (from http://www.jpsdomain.org/source/perl.html) to add to our intranet website.  The csv files are also added as download links on the site.
./csv2htm.pl -e -T -i /reports/clariion1_hostname.report.csv -o /reports/clariion1_hostname.report.html

./csv2htm.pl -e -T -i /reports/clariion2_hostname.report.csv -o /reports/clariion2_hostname.report.html

./csv2htm.pl -e -T -i /reports/clariion3_hostname.report.csv -o /reports/clariion3_hostname.report.html
And finally, the output looks like this:

LUN Name                                        FLASH     FC        SATA
P0_LUN101_0000_SPA_servername_filesystem_100G   24.32%    67.57%    8.11%
P0_LUN102_0000_SPA_servername_filesystem_100G   5.92%     58.77%    35.31%
P1_LUN103_0000_SPA_servername_filesystem_100G   7.00%     81.79%    11.20%
P1_LUN104_0000_SPA_servername_filesystem_100G   1.40%     77.20%    21.40%
P0_LUN200_0000_SPA_servername_filesystem_100G   5.77%     75.06%    19.17%
P0_LUN201_0000_SPA_servername_filesystem_100G   6.44%     71.21%    22.35%
P0_LUN202_0000_SPA_servername_filesystem_100G   4.55%     90.91%    4.55%
P0_LUN203_0000_SPA_servername_filesystem_100G   10.73%    80.76%    8.52%
P0_LUN204_0000_SPA_servername_filesystem_100G   8.62%     88.31%    3.08%
P0_LUN205_0000_SPA_servername_filesystem_100G   10.88%    82.65%    6.46%
P0_LUN206_0000_SPA_servername_filesystem_100G   7.00%     81.79%    11.20%
P0_LUN207_0000_SPA_servername_filesystem_100G   1.40%     77.20%    21.40%
P0_LUN208_0000_SPA_servername_filesystem_100G   5.77%     75.06%    19.17%

Testing Disaster Recovery for VNX VDM’s and CIFS servers

After spending a few days working on a DR test recovery, I thought I’d describe the process along with a few roadblocks that I hit along the way.  We had some specific requirements that had to be met, so I thought I’d share my experiences.  Our host site has a VNX5500 and our DR site has an NS-960, and we have Celerra Replicator configured to replicate the VDM and all of the production filesystems from one site to the other.

Here were my business requirements for this test:

  1. Replicate the VDM, production CIFS server and production filesystems from the host site to DR site.
  2. Fail over (or bring up a copy of) the VDM from the host site to the DR site, mounting the replicated VDM at the DR site.
  3. Fail over (or bring up a copy of) the production CIFS server at the DR site.
  4. Create R/W checkpoints of all replicated filesystems at DR site to allow for appropriate user and application testing.
  5. Share the R/W checkpoints of the replicated filesystems on the CIFS server at the DR site rather than the original replicated filesystems, so original replicated data is not touched and does not need to be replicated again after the test.

I started off by setting up replication jobs for our VDM and all filesystems.  Once those were complete (after several weeks of data transfers) I was ready to test.

Step 1: Replicate VDM and production filesystems

This post isn’t meant to detail the process of actually setting up the initial replications, just how to get the replicated data working and accessible at your DR site.  Setting up replication is a well documented procedure which can be reviewed in EMC’s guide “Using Celerra Replicator (V2)”, P/N 300-009-989.  Once the VDMs and filesystems are replicated, you’re ready for the next step.

Step 2: Bring up the VDM at the DR site

The first step in my testing requirements is to bring up the VDM at the DR site.

Failed attempt 1:

I initially created a new replication session for the VDM, as I didn’t want to use the actual production VDM; this is a DR test, not an actual disaster.

After replicating a new copy of the VDM, I attempted to load it in the CLI with the command below.  This must be done from the CLI as there is no option to do this step in Unisphere.

nas_server -vdm <VDMNAME> -setstate loaded

It failed with this error:

Error 12066: root_fs <VDMNAME> is the source or destination object of a file system and cannot be unmounted or is the source or destination object of a VDM replication session and cannot be unloaded.

It was pretty obvious here that you need to stop the replication first before you can load the VDM.  So, as a next step, I stopped the replication with a simple right click/stop on the source side and tried again.

It failed with this error:

Error 4038: <interface_name_1> <interface_name_2> : interfaces not available on server_2

So, it looks like the interface names need to be the same.  I didn’t really want to change the interface names if I didn’t have to, so I tried a different approach next.

Failed attempt 2:

I thought this time I’d create a blank VDM on the destination side first and replicate the host VDM to it, thinking it wouldn’t keep the interface name requirement, and I still wouldn’t have to stop replication on the actual prod VDM, as I didn’t really want to use that one in a test.

I did just that. I created a blank VDM on the DR side, then started a new replication session from the host side and chose it as the destination, making sure to choose the overwrite option when I replicated to it.  The replication was successful.  I stopped the replication on the source side after it was complete, and then attempted to load the new replicated VDM on the DR side.

Voila! It worked:

nas_server -vdm <VDMNAME> -setstate loaded
            id       = 10
            name     = vdm_replica
            acl      = 0
            type     = vdm
            server   = server_2
            rootfs   = root_fs_vdm_replica
            I18N     = UNICODE
            Status   :
            Defined  = enabled
            Actual   = loaded, ready

Now that it was loaded up, it was time to move on to the next step and create the R/W checkpoints of the filesystems. This is where the process failed again.

After clicking on the drop down box for “Choose Data Mover”, I got this error:

 No file systems exist

 Query file systems vdm_replica: All. File system not found. 

I’m not sure why this failed, but since the VDM couldn’t find the filesystems it was time to try another approach again.

Successful attempt:

After my first two failures, it looked pretty obvious that I’d need to change the interface names and use the original replicated VDM.  Replicating the VDM to a blank VDM didn’t work because it couldn’t see the filesystems, and using the original requires the interface names to be the same.  The lesson learned here is to make sure you have matching ports on your host and DR Celerras, and use the same interface names.  If I had done that, my first attempt would have been successful.

If the original VDM has four CIFS servers (each with its own interface) and the DR Celerra only has one port configured on the network, you’d be out of luck.  You wouldn’t have enough interfaces to rename them all to match, and you’d never be able to load your VDM.  The VDMs only look for the interface name to be the same, NOT the IPs.  The IPs can be different to match your DR network, and the IPs that are already assigned to the DR site interfaces will NOT change when you load the VDM.

In my case, the host Celerra has two CIFS servers, each with its own interface.  One is for production, one is for backups.

Here are the steps that worked for me:

  1. Stop the replication of the VDM (you will see it change to a ‘stopped’ state in Unisphere).
  2. Change the interface names on the DR side (changing IPs is not necessary) to match the host side (see the sketch after this list).
  3. Load the VDM with the command  nas_server -vdm <VDMNAME> -setstate loaded
  4. You will see the VDM status change from ‘unloaded’ to ‘OK’.
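
For step 2, I’m not aware of a rename option for interfaces, so the practical approach is to delete the DR-side interface and recreate it using the host-side name.  A minimal sketch with placeholder device, name, and address values (verify the server_ifconfig syntax against your DART release before using it):

server_ifconfig server_2 -delete <dr_interface_name>
server_ifconfig server_2 -create -Device cge0 -name <host_side_interface_name> -protocol IP <dr_ip> <netmask> <broadcast>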

Step 3:  Bring up the CIFS server at the DR site

After you’ve completed the previous step, the VDM will be loaded using the exact same interfaces as production, and the CIFS servers will be automatically created as well.  If a CIFS server uses cge1-0 on server_2 on the host side, it will now be set up with the same name using cge1-0 on server_2 on the destination (DR) side.

This would be very useful in a real disaster, but for this test I wanted to create an alternate CIFS server with a different IP, as the domain controller, DNS servers, and IP range used at our DR site are different.  You could choose to use the same CIFS server that was replicated with the VDM, but for our test I decided to bring up an entirely new CIFS server.  We use DFS to access all of our shares in production, so the name of the CIFS server won’t matter for our testing purposes; we would just need to update DFS with the new name on the DR network.

Here are the steps I took to bring up the CIFS server for DR:

  1. Gather IP information from the DR team.  You will need a valid IP and subnet mask for the new CIFS server.
  2. Verify IP config on new DR network.
    1. Check that the default route matches the DR network
    2. Check that the DNS server entries match the DNS servers on the DR network
  3. Verify that the Domain controller in the DR network is up and available
  4. Modify the interface of your choice with the correct IP information for the CIFS server.
  5. Create the CIFS server and join it to the DR active directory domain (see the sketch after this list).
    1. If you need to test an AD account, use this command:
    2. server_cifssupport <vdm_name> -cred -name <username> -domain <domain_name>
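
For step 5, here’s a rough sketch of the CLI equivalent using placeholder names.  The -Join syntax matches the command reference at the bottom of this page; the -add line is how I typically create a CIFS server on a VDM, so treat the exact options as an assumption to verify for your environment:

server_cifs <vdm_name> -add compname=<cifs_server_name>,domain=<dr_domain.com>,interface=<interface_name>
server_cifs <vdm_name> -Join compname=<cifs_server_name>,domain=<dr_domain.com>,admin=<admin_account>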

That’s it for this step.  The CIFS server was successfully joined to the domain and I was able to ping it from one of our previously recovered windows servers on the DR network.

Step 4: Create Read/Write checkpoints of all replicated filesystems

One of my business requirements for this test was to allow read/write access to the replicated filesystems without having to actually change the production data.  The easy way to accomplish this is to create a single read/write checkpoint (snapshot) of each filesystem.  To do this, go to the checkpoint area in Unisphere, click create, and select the “Writeable Checkpoint” checkbox when you create the checkpoint.  You can also script the process and run it from the CLI on the control station.

First, create each checkpoint with this command:

nas_ckpt_schedule -create <ckpt_fs_name> -filesystem <fs_name> -recurrence once

Second, create a read/write copy of each checkpoint with this command:

fs_ckpt <ckpt_fs_name> -name <r/w_ckpt_fs_name> -Create -readonly n

I would recommend running these no more than two at a time and letting them finish.  I’ve had issues in the past running dozens of checkpoint jobs at once that hang and never complete, requiring a reboot of the data mover to correct.
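
If you have a lot of filesystems, the two commands above are easy to wrap in a small loop on the control station.  Below is a minimal sketch that assumes a hypothetical fs.list file with one replicated filesystem name per line; because the loop runs the commands one filesystem at a time, it also keeps the jobs serialized:

for fs in `cat fs.list`
        do
        nas_ckpt_schedule -create ${fs}_ckpt1 -filesystem $fs -recurrence once
        fs_ckpt ${fs}_ckpt1 -name ${fs}_ckpt1_writeable1 -Create -readonly n
        done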

Step 5: Share the replicated filesystems on the DR CIFS server

Once all of the R/W checkpoints are created, they can be shared on the DR CIFS server with the same share names as the original production share names. This allows all of our recovered application and file servers to connect to the same names, simplifying the configuration of the test environment.

You can use a CLI command to export each r/w copy to share them on your CIFS Server:

server_export [vdm] -P cifs -name [filesystem]_ckpt1 -option netbios=[cifserver] [filesystem]_ckpt1_writeable1
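
As a concrete (hypothetical) example, if the VDM is named vdm_replica, the CIFS server’s NetBIOS name is DRCIFS01, and the filesystem is named Public, the command would look like this:

server_export vdm_replica -P cifs -name Public_ckpt1 -option netbios=DRCIFS01 Public_ckpt1_writeable1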

Step 6: Cleanup

That’s it!  We had a successful DR test.  Once the test was complete, I performed the following cleanup steps:

  1. Remove CIFS server shares
  2. Remove CIFS server
  3. Change interfaces on the DR Celerra back to their original names and IPs.
  4. Unload the replicated VDM with this command:
    1. nas_server -vdm <VDMNAME> -setstate mounted
  5. Restart the VDM replication from the source (see the note below).
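
For step 5, since the original session was only stopped (not deleted), it should be possible to restart it from the source side with a right click/start on the session in Unisphere.  The CLI sketch below is an assumption based on my environment, so verify the exact session name with nas_replicate -list first:

nas_replicate -list
nas_replicate -start <session_name>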

VNX replication monitoring script

This script allows me to quickly monitor and verify the status of my replication jobs every morning.  It will generate a csv file with six columns for file system name, interconnect, estimated completion time, current transfer size, current transfer remaining, and current write speed.

I recently added two more remote offices to our replication topology, and I like to keep a daily tab on how much longer they have to complete the initial seeding; the report also alerts me to any other jobs that are running too long and might need my attention.

Step 1:

Log in to your Celerra and create a directory for the script.  I created a subdirectory called “scripts” under /home/nasadmin.

Create a text file named ‘replfs.list’ that contains a list of your replicated file systems.  You can cut and paste the list out of Unisphere.

The contents of the file should look something like this:

Filesystem01
Filesystem02
Filesystem03
Filesystem04
Filesystem05
 Step 2:

Copy and paste all of the code into a text editor and modify it for your needs (the complete code is at the bottom of this post).  I’ll go through each section here with an explanation.

1: The first section will create a text file ($fs.dat) for each filesystem in the replfs.list file you made earlier.

for fs in `cat replfs.list`
         do
         nas_replicate -info $fs | egrep 'Celerra|Name|Current|Estimated' > $fs.dat
         done
 The output will look like this:
Name                                        = Filesystem_01
Source Current Data Port            = 57471
Current Transfer Size (KB)          = 232173216
Current Transfer Remain (KB)     = 230877216
Estimated Completion Time        = Thu Nov 24 06:06:07 EST 2011
Current Transfer is Full Copy      = Yes
Current Transfer Rate (KB/s)       = 160
Current Read Rate (KB/s)           = 774
Current Write Rate (KB/s)           = 3120
 2: The second section will create a blank csv file with the appropriate column headers:
echo 'Name,System,Estimated Completion Time,Current Transfer Size (KB),Current Transfer Remain (KB),Write Speed (KB)' > replreport.csv

3: The third section will parse all of the output files created by the first section, pulling out only the data that we’re interested in.  It places it in columns in the csv file.

         for fs in `cat replfs.list`

         do

         echo $fs","`grep Celerra $fs.dat | awk '{print $5}'`","`grep -i Estimated $fs.dat |awk '{print $5,$6,$7,$8,$9,$10}'`","`grep -i Size $fs.dat |awk '{print $6}'`","`grep -i Remain $fs.dat |awk '{print $6}'`","`grep -i Write $fs.dat |awk '{print $6}'` >> replreport.csv

        done
 If you’re not familiar with awk, I’ll give a brief explanation here.  When you grep for a certain line in the output, awk allows you to print only a single field (word) from that line.

For example, if you want the output of “Yes” put into a column in the csv file, but the output code line looks like “Current Transfer is Full Copy      = Yes”, then you could pull out only the “Yes” by typing in the following:

 nas_replicate -info Filesystem01 | grep  Full | awk '{print $7}'

Because the word ‘Yes’ is the 7th item in the line, the output would only contain the word Yes.

4: The final section will send an email with the csv output file attached.

uuencode replreport.csv replreport.csv | mail -s "Replication Status Report" user@domain.com

Step 3:

Copy and paste the modified code into a script file and save it.  I have mine saved in the /home/nasadmin/scripts folder. Once the file is created, make it executable by typing chmod +x scriptfile.sh, or set the permissions explicitly with chmod 755 scriptfile.sh.

Step 4:

You can now add the file to crontab to run automatically.  Add it to cron by typing crontab -e; to view your current crontab entries, type crontab -l.  For details on all of the scheduling options, do a Google search, as there is a wealth of info available.
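
For example, a crontab entry like the one below (the schedule is just an example) would run the script every morning at 6:00 AM:

0 6 * * * /home/nasadmin/scripts/scriptfile.sh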

Script Code:

for fs in `cat replfs.list`

         do

         nas_replicate -info $fs | egrep 'Celerra|Name|Current|Estimated' > $fs.dat

        done

 echo 'Name,System,Estimated Completion Time,Current Transfer Size (KB),Current Transfer Remain (KB),Write Speed (KB)' > replreport.csv

         for fs in `cat replfs.list`

         do

         echo $fs","`grep Celerra $fs.dat | awk '{print $5}'`","`grep -i Estimated $fs.dat |awk '{print $5,$6,$7,$8,$9,$10}'`","`grep -i Size $fs.dat |awk '{print $6}'`","`grep -i Remain $fs.dat |awk '{print $6}'`","`grep -i Write $fs.dat |awk '{print $6}'` >> replreport.csv

         done

 uuencode replreport.csv replreport.csv | mail -s "Replication Status Report" user@domain.com
 The final output of the script generates a report that looks like the sample below.  Filesystems that have all zeros and no estimated completion time are caught up and not currently performing a data synchronization.
Name System Estimated Completion Time Current Transfer Size (KB) Current Transfer Remain (KB) Write Speed (KB)
SA2Users_03 SA2VNX5500 0 0 0
SA2Users_02 SA2VNX5500 Wed Dec 16 01:16:04 EST 2011 211708152 41788152 2982
SA2Users_01 SA2VNX5500 Wed Dec 16 18:53:32 EST 2011 229431488 59655488 3425
SA2CommonFiles_04 SA2VNX5500 0 0 0
SA2CommonFiles_03 SA2VNX5500 Wed Dec 16 10:35:06 EST 2011 232173216 53853216 3105
SA2CommonFiles_02 SA2VNX5500 Mon Dec 14 15:46:33 EST 2011 56343592 12807592 2365
SA2commonFiles_01 SA2VNX5500 0 0 0

How to scrub/zero out data on a decommissioned VNX or Clariion


Our audit team needed to ensure that we were properly scrubbing the old disks before sending our old Clariion back to EMC on a trade-in.  EMC of course offers scrubbing services that run upwards of $4,000 for an array.  They also have a built-in command that will do the same job:

navicli -h <SP IP> zerodisk -messner B_E_D
B Bus
E Enclosure
D Disk

usage: zerodisk disk-names [start|stop|status|getzeromark]

sample: navicli -h 10.10.10.10 zerodisk -messner 1_1_12

This command will write all zeros to the disk, making any data recovery from the disk impossible.  Add this command to a Windows batch file for every disk in your array and you’ve got a quick and easy way to zero out all the disks.
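
Here’s a minimal sketch of what that batch file might look like for the first few disks on bus 1, enclosure 0 (the SP address and disk IDs are placeholders; add a line for every disk in the array):

navicli -h 10.10.10.10 zerodisk -messner 1_0_0 start
navicli -h 10.10.10.10 zerodisk -messner 1_0_1 start
navicli -h 10.10.10.10 zerodisk -messner 1_0_2 start
navicli -h 10.10.10.10 zerodisk -messner 1_0_3 start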

So, once the disks are zeroed out, how do you prove to the audit department that the work was done? I searched everywhere and could not find any documentation from EMC on this command, which is no big surprise since you need the engineering mode switch (-messner) to run it.  Here were my observations after running it:

This is the zeromark status on 1_0_4 before running navicli -h 10.10.10.10 zerodisk -messner 1_0_4 start:

 Bus 1 Enclosure 0  Disk 4

 Zero Mark: 9223372036854775807

 This is the zeromark status on 1_0_4 after the zerodisk process is complete:

(I ran navicli -h 10.10.10.10 zerodisk -messner 1_0_4 getzeromark to get this status)

 Bus 1 Enclosure 0  Disk 4

Zero Mark: 69704

 The 69704 number indicates that the disk has been successfully scrubbed.  Prior to running the command, all disks will have an extremely long zero mark (18+ digits); after the zerodisk command completes, the disks will return either 69704 or 69760 depending on the type of disk (FC/SATA).  That’s the best I could come up with to prove that the zeroing was successful.  Running the getzeromark option on all the disks before and after the zerodisk command should be sufficient to prove that the disks were scrubbed.

VMWare/ESX can’t write to a Celerra Read/Write NFS mounted datastore

I had just created several new Celerra NFS mounted datastores for our ESX administrator.  When he tried to create new VM hosts using the new datastores, he would get this error:   Call “FileManager.MakeDirectory” for object “FileManager” on vCenter Server “servername.company.com” failed.

Searching for that error message on Powerlink, the VMWare forums, and general Google searches didn’t bring back any easy answers or solutions.  It looked like ESX was unable to write to the NFS mount for some reason, even though it was mounted as Read/Write.  I also had the ESX hosts added to the R/W access permissions for the NFS export.

After much digging and experimentation, I did resolve the problem.  Here’s what you have to check:

1. The VMKernel IP must be in the root hosts permissions on the NFS export.   I put in the IP of the ESX server along with the VMKernel IP.

2. The NFS export must be mounted with the no_root_squash option.  By default, the root user with UID 0 is not given access to an NFS volume; mounting the export with no_root_squash allows the root user access.  The VMkernel must be able to access the NFS volume with UID 0.

I first set up the exports and permissions settings in the GUI, then went to the CLI to add the mount options.
command:  server_mount server_2 -option rw,uncached,sync,no_root_squash <sharename> /<sharename>
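
For reference, the root and access permissions from item 1 can also be set from the CLI when you export the filesystem.  This is a rough sketch with placeholder addresses (I originally set these in the GUI, so verify the option syntax for your DART release):

server_export server_2 -Protocol nfs -option root=<VMkernel_IP>:<ESX_IP>,rw=<VMkernel_IP>:<ESX_IP>,access=<VMkernel_IP>:<ESX_IP> /<sharename>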

3. From within the ESX Console/Virtual Center, the Firewall settings should be updated to add the NFS Client.   Go to ‘Configuration’ | ‘Security Profile’ | ‘Properties’ | Click the NFS Client checkbox.

4. One other important item to note when adding NFS mounted datastores is the default limit of 8 in ESX.  You can increase the limit by going to ‘Configuration’ | ‘Advanced Settings’ | ‘NFS’ in the left column | Scroll to ‘NFS.MaxVolumes’ on the left, increase the number up to 64.  If you try to add a new datastore above the NFS.MaxVolumes limit, you will get the same error in red at the top of this post.

That’s it.  Adding the VMKernel IP to the root permissions, mounting with no_root_squash, and adding the NFS Client to ESX resolved the problem.

Strategies for implementing Multi-tiered FAST VP Storage Pools

After speaking to our local rep and attending many different classes at the most recent EMC World in Vegas, I came away with some good information and a very logical best practice for implementing multi-tiered FAST VP storage pools.

First and foremost, you have to use Flash.  High RPM Fiber Channel drives are neither capacity efficient nor performance efficient; the highest IO data needs to be hosted on Flash drives.  The most effective split of drives in a storage pool is 5% Flash, 20% Fiber Channel, and 75% SATA.

Using this example, if you have an existing SAN with 167 15,000 RPM 600GB Fiber Channel Drives, you would replace them with 97 drives in the 5/20/75 blend to get the same capacity with much improved performance:

  • 25 200GB Flash Drives
  • 34 15K 600GB Fiber Channel Drives
  • 38 2TB SATA Drives

The ideal scenario is to implement FAST Cache along with FAST VP.  FAST Cache continuously ensures that the hottest data is served from Flash drives.  With FAST Cache, up to 80% of your data IO will come from cache (legacy DRAM cache served up only about 20%).

It can be a hard pill to swallow when you see how much the Flash drives cost, but their cost is offset by increased disk utilization and a reduction in the total number of drives and DAEs that you need to buy.  With all FC drives, disk utilization is sacrificed to get the needed performance: very little of the capacity is used, and you end up buying tons of disks just to get more spindles in the RAID groups.  Flash drives can achieve much higher utilization, reducing the effective cost.

After implementing this at my company I’ve seen dramatic performance improvements.  It’s an effective strategy that really works in the real world.

In addition to this, I’ve also been implementing storage pools in pairs of two, each sized identically.  The first pool is designated only for SP A, the second is for SPB.  When I get a request for data storage, in this case let’s say for 1 TB, I will create a 500GB LUN in the first pool on SP A, and a 500GB LUN in the second pool on SP B.  When the disks are presented to the host server, the server administrator will then stripe the data across the two LUNs.  Using this method, I can better balance the load across the storage processors on the back end.
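
As a hypothetical example of that 1 TB request, the two 500GB pool LUNs could be created from the CLI like this (the SP address, pool names, and LUN names are placeholders, and I normally do this in Unisphere, so double-check the lun -create options against your FLARE/VNX OE release):

naviseccli -h <SP_IP> lun -create -capacity 500 -sq gb -poolName Pool_SPA -sp a -name <servername>_lun1
naviseccli -h <SP_IP> lun -create -capacity 500 -sq gb -poolName Pool_SPB -sp b -name <servername>_lun2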

Critical Celerra FCO for Mac OS X 10.7

Here’s the description of this issue from EMC:

EMC has become aware of a compatibility issue with the new version of MacOS 10.7 (Lion). If you install this new MacOS 10.7 (Lion) release, it will cause the data movers in your Celerra to reboot which may result in data unavailability. Please note that your Celerra may encounter this issue when internal or external users operating the MacOS 10.7 (Lion) release access your Celerra system.  In order to protect against this occurrence, EMC has issued the earlier ETAs and we are proactively upgrading the NAS code operating on your affected Celerra system. EMC strongly recommends that you allow for this important upgrade as soon as possible.

Wow.  A single Mac user running 10.7 can cause a kernel panic and a reboot of your data mover.  According to our rep, Apple changed the implementation of SMB in 10.7 and added a few commands that the Celerra doesn’t recognize.  This is a big deal folks, call your Account rep for a DART upgrade if you haven’t already.

Adding/Removing modules from a datamover

I recently had an issue where a brand new datamover installed by EMC would not allow me to make it a standby for the existing datamovers.  It turns out that the hardware (specifically the number of FC and ethernet interfaces) must match precisely: the number of ports and the slots the modules are installed in have to match across all datamovers.

The new datamover that was installed had an extra 4-port ethernet module in it.  Below is the procedure I used to remove the module, including all the commands to take the datamover down, reconfigure it, and bring it back up successfully.  Removing the extra module solved the problem: it matched the config of the others and allowed me to configure it as a standby.

First, log in to the CLI on the control station with root privileges.  Next, just run the commands below in order.

Turn off connecthome and emails to avoid false alarms.
 /nas/sbin/nas_connecthome -service stop
 /nas/bin/nas_emailuser -modify -enabled no
 /nas/bin/nas_emailuser -info

Copy and paste this to save, it will list the current datamover config.
 nas_server -i -a

Run this to shut the datamover down.  Run getreason to verify when it’s down.
 server_cpu server_<x> -halt now
 /nasmcd/sbin/getreason

Remove/replace the module now.

Power the datamover back on.
 /nasmcd/sbin/t2reset pwron -s <slot number>

Watch getreason for status
 /nasmcd/sbin/getreason
(Wait for it to reboot and say ‘Hardware Misconfigured’)

Once it is in a ‘misconfigured’ state, run setup_slot to configure it:
 /nasmcd/sbin/setup_slot -i 4

Run this command to view the current hardware config, verify that your change was made:
 server_sysconfig server_4 -p

Restart connecthome and email services.
 /nas/sbin/nas_connecthome -service start -clear
 /nas/sbin/nas_connecthome -i
 /nas/bin/nas_emailuser -modify -enabled yes
 /nas/bin/nas_emailuser -info

That’s it!  Your datamover has been updated and reconfigured.

VNX Root Replication Checkpoints

Where did all my savvol space go?  I noticed last week that some of my Celerra replication jobs had stalled and were not sending any new data to the replication partner.  I then noticed that the storage pool designated for checkpoints was at 100%.  Not good. Based on the number of file system checkpoints that we perform, it didn’t seem possible that the pool could be filled up already.  I opened a case with EMC to help out.

I learned something new after opening this call – every time you create a replication job, a new checkpoint is created for that job and stored in the savvol.  You can view these in Unisphere by changing the “select a type” filter to “all checkpoints including replication”.  You’ll notice checkpoints named something like root_rep_ckpt_483_72715_1 in the list; they all begin with root_rep.  After working on the case for a little while, the EMC engineer helped me determine that one of my replication jobs had a root_rep_ckpt that was 1.5TB in size.

Removing that checkpoint would immediately solve the problem, but there was one major drawback.  Deleting the root_rep checkpoint first requires that you delete the replication job entirely, requiring a complete re-do from scratch.  The entire filesystem would have to be copied over our WAN link and resynchronized with the replication partner Celerra.  That didn’t make me happy, but there was no choice.  At least the problem was solved.

Here are a couple of tips for you if you’re experiencing a similar issue.

You can verify the storage pool the root_rep checkpoints are using by running an info command against the checkpoint from the command line and looking for the ‘pool=’ field.

nas_fs -list | grep root_rep  (the first column in the output is the ID# for the next command)

nas_fs -info id=<id from above>
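
To check all of the replication checkpoints at once, the two commands above can be wrapped in a quick loop from the control station.  A minimal sketch, assuming the ID really is the first column of the nas_fs -list output on your code level:

for id in `nas_fs -list | grep root_rep | awk '{print $1}'`
        do
        nas_fs -info id=$id | egrep 'name|pool'
        done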

 You can also see the replication checkpoints and IDs for a particular filesystem with this command:

fs_ckpt <production file system> -list -all

You can check the size of a root_rep checkpoint from the command line directly with this command:

/nas/sbin/rootnas_fs -size root_rep_ckpt_883_72715_1

 

Optimizing Java Memory for Navisphere / Unisphere

If you have a CLARiiON system with a large configuration in terms of disks, LUNs, initiator records, etc, you may experience a slowdown when managing the system with Navisphere or Unisphere.  If you increase the amount of memory that Java can use, you can significantly improve the response time when using the management console.

Here are the steps:

  1. Log in to the CLARiiON setup page (http://<clariion IP>/setup).  Go to Set Update Parameters > Update Interval.  Change it to 300.
  2. On the Management Server (or your local PC/laptop) go to Control Panel and launch the Java icon.
  3. Go to the Java tab and click view.
  4. Enter -Xmx128m under Java Runtime Parameter, which allocates 128MB for Java.  This number can be increased as you see fit, you may see better results with 512 or 1024.

Celerra Health Check with CLI Commands

Here are the first commands I’ll type when I suspect there is a problem with the Celerra, or if I want to do a simple health check.

1. <watch> /nas/sbin/getreason.  This will quickly give you the current status of each data mover. 5=up, 0=down/rebooting.  Typing watch before the command will run the command with continuous updates so you can monitor a datamover if you are purposely rebooting it.

10 – slot_0 primary control station
5 – slot_2 contacted
5 – slot_3 contacted

2. nas_server -list.  This lists all of the datamovers and their current state.  It’s a good way to quickly tell which datamovers are active and which are standby.

1=nas, 2=unused, 3=unused, 4=standby, 5=unused, 6=rdf

id      type  acl  slot groupID  state  name
1        1    0     2                         0    server_2
2        4    0     3                        0    server_3

3. server_sysstat.  This will give you a quick overview of memory and CPU utilization.

server_2 :
threads runnable = 6
threads blocked  = 4001
threads I/J/Z    = 1
memory  free(kB) = 2382807
cpu     idle_%   = 70

4. nas_checkup.   This runs a system health check.

Check Version:  5.6.51.3
Check Command:  /nas/bin/nas_checkup
Check Log    :  /nas/log/checkup-run.110608-143203.log

————————————-Checks————————————-
Control Station: Checking if file system usage is under limit………….. Pass
Control Station: Checking if NAS Storage API is installed correctly…….. Pass

5. server_log server_2.  This shows the current alert log.  Alert logs are also stored in /nas/log/webui.

6. vi /nas/jserver/logs/system_log.   This is the java system log.

7. vi /var/log/messages.  This displays system messages.

Easy File Extension filtering with EMC Celerra

Are your users filling up your CIFS fileserver with MP3 files?  Sick of sending out emails outlining IT policies, asking for their removal?  However you manage it now, the best way to avoid the problem in the first place is to set up filtering on your CIFS server file shares.

So, to use the same example, let’s say you don’t want your users to store MP3 files on your \\PRODFILES\Public share.

1. Navigate to the \\PRODFILES\C$ administrative share.

2. Open the folder in the root directory called .filefilter

3. Create an empty text file called mp3@public in the .filefilter folder.

4. Change the windows security on the file to restrict access to certain active directory groups or individuals.

That’s it!  Once the file is created and security is set, users who are restricted by the file security will no longer be able to copy MP3 files to the public share.  Note that this will not remove any existing MP3 files from the share, it will only prevent new ones from being copied.

Auto generating daily performance graphs with EMC Control Center / Performance Manager

This document describes the process I used to pull performance data using the ECC pmcli command line tool, parse the data to make it more usable with a graphing tool, and then use perl scripts to automatically generate graphs.

You must install Perl.  I use ActiveState Perl (Free Community Edition) (http://www.activestate.com/activeperl/downloads).

You must install Cygwin.  Link: http://www.cygwin.com/install.html. I generally choose all packages.

I use the following CPAN Perl modules in the graphing script: Text::ParseWords, GD::Graph (GD::Graph::lines), and Data::Dumper.

Step 1:

Once you have the software set up, the first step is to use the ECC command line utility to extract the interval performance data that you’re interested in graphing.  Below is a sample PMCLI command line script that could be used for this purpose.

:Get the current date

For /f "tokens=2-4 delims=/" %%a in ('date /t') do (set date=%%c%%a%%b)

:Export the interval file for today’s date.

D:\ECC\Client.610\PerformanceManager\pmcli.exe -export -out D:\archive\interval.csv -type interval -class clariion -date %date% -id APM00324532111

:Copy all the export data to my cygwin home directory for processing later.

copy /y e:\san712_interval.csv C:\cygwin\home\<userid>

You can schedule the command script above to run using windows task scheduler.  I run it at 11:46PM every night, as data is collected on our SAN in 15 minute intervals, and that gives me a file that reports all the way up to the end of one calendar day.

Note that there are 95 data collection points from 00:15 to 23:45 every day if you collect data at 15 minute intervals.  The storage processor data resides in the last two lines of the output file.

Here is what the output file looks like:

EMC ControlCenter Performance manager generated file from: <path>

Data Collected for DiskStats

Data Collected for DiskStats – 0_0_0

                                                             3/28/11 00:15       3/28/11 00:30      3/28/11  00:45      3/28/11 01:00 

Number of Arrivals with Non Zero Queue     12                         20                        23                      23 

% Utilization                                                30.2                     33.3                     40.4                  60.3

Response Time                                              1.8                        3.3                        5.4                     7.8

Read Throughput IO per sec                        80.6                    13.33                   90.4                    10.3

Great information in there, but the format of the data makes it very hard to do anything meaningful with it in an excel chart.  If I want to chart only % utilization, that data is difficult to isolate because there are so many other counters around it that also have data collected on them.  My next goal was to write a script to reformat the data into a much more usable format and automatically create a graph for one specific counter that I’m interested in (like daily utilization numbers), which could then be emailed daily or auto-uploaded to an internal website.

Step 2:

Once the PMCLI data is exported, the next step is to use cygwin bash scripts to parse the csv file and pull out only the performance data that is needed.  Each SAN will need a separate script for each type of performance data.  I have four scripts configured to run based on the data that I want to monitor.  The scripts are located in my cygwin home directory.

The scripts I use:

  • Iostats.sh (for total IO throughput)
  • Queuestats.sh (for disk queue length)
  • Resptime.sh (for disk response time in ms)
  • Utilstats.sh (for % utilization)

Here is a sample shell script for parsing the CSV export file (iostats.sh):

#!/usr/bin/bash

#This will pull only the timestamp line from the top of the CSV output file. I’ll paste it back in later.

grep -m 1 "/" interval.csv > timestamp.csv

#This will pull out only lines that begin with "Total Throughput IO per sec".

grep -i "^Total Throughput IO per sec" interval.csv >> stats.csv

#This will pull out the disk/LUN title info for the first column.  I’ll add this back in later.

grep -i "Data Collected for DiskStats -" interval.csv > diskstats.csv

grep -i "Data Collected for LUNStats -" interval.csv > lunstats.csv

#This will create a column with the disk/LUN number.  I’ll paste it into the first column later.

cat diskstats.csv lunstats.csv > data.csv

#This adds the first column (disk/LUN) and combines it with the actual performance data columns.

paste data.csv stats.csv > combined.csv

#This combines the timestamp header at the top with the combined file from the previous step to create the final file we’ll use for the graph.  There is also a step to append the current date and copy the csv file to an archive directory.

cat timestamp.csv combined.csv > iostats.csv

cp iostats.csv /cygdrive/e/SAN/csv_archive/iostats_archive_$(date +%y%m%d).csv

#  This removes all the temporary files created earlier in the script.  They’re no longer needed.

rm timestamp.csv

rm stats.csv

rm diskstats.csv

rm lunstats.csv

rm data.csv

rm combined.csv

#This strips the last two lines of the CSV (Storage Processor data).  The resulting file is used for the "all disks" spreadsheet.  We don't want the SP data to skew the graph.  This CSV file is also copied to the archive directory.

sed '$d' < iostats.csv > iostats2.csv

sed '$d' < iostats2.csv > iostats_disk.csv

rm iostats2.csv

cp iostats_disk.csv /cygdrive/e/SAN/csv_archive/iostats_disk_archive_$(date +%y%m%d).csv

Note: The shell script above can be run in the windows task scheduler as long as you have cygwin installed.  Here’s the syntax:

c:\cygwin\bin\bash.exe -l -c "/home/<username>/iostats.sh"

After running the shell script above, the resulting CSV file contains only Total Throughput (IO per sec) data for each disk and lun.  It will contain data from 00:15 to 23:45 in 15 minute increments.  After the cygwin scripts have run we will have csv datasets that are ready to be exported to a graph.

The Disk and LUN stats are combined into the same CSV file.  It is entirely possible to rewrite the script to only include one or the other.  I put them both in there to make it easier to manually create a graph in excel for either disk or lun stats at a later time (if necessary).  The “all disks” graph does not look any different with both disk and lun stats in there; I tried it both ways, and they overlap in a way that makes the extra data indistinguishable in the image.

The resulting data output after running the iostats.sh script is shown below.  I now have a nice, neat excel spreadsheet that lists the total throughput for each disk in the array for the entire day in 15 minute increments.   Having the data formatted in this way makes it super easy to create charts.  But I don’t want to have to do that manually every day, I want the charts to be created automatically.

                                                             3/28/11 00:15       3/28/11 00:30      3/28/11  00:45      3/28/11 01:00

Total Throughput IO per sec   – 0_0_0          12                             20                             23                           23 

Total Throughput IO per sec    – 0_0_1        30.12                        33.23                        40.4                         60.23

Total Throughput IO per sec    – 0_0_2         1.82                          3.3                           5.4                              7.8

Total Throughput IO per sec    -0_0_3         80.62                        13.33                        90.4                         10.3 

Step 3:

Now I want to automatically create the graphs every day using a Perl script.  After the CSV files are exported to a more usable format in the previous step, I use the GD::Graph library from CPAN (http://search.cpan.org/~mverb/GDGraph-1.43/Graph.pm) to auto-generate the graphs.

Below is a sample Perl script that will autogenerate a great looking graph based on the CSV output file from the previous step.

#!/usr/bin/perl

#Declare the libraries that will be used.

use strict;

use Text::ParseWords;

use GD::Graph::lines;

use Data::Dumper;

#Specify the csv file that will be used to create the graph

my $file = 'C:\cygwin\home\<username>\iostats_disk.csv';

#my $file  = $ARGV[0];

my ($output_file) = ($file =~/(.*)\./);

#Create the arrays for the data and the legends

my @data;

my @legends;

#parse csv, generate an error if it fails

open(my $fh, '<', $file) or die "Can't read csv file '$file' [$!]\n";

my $countlines = 0;

while (my $line = <$fh>) {

chomp $line;

my @fields = Text::ParseWords::parse_line(',', 0, $line);

#There are 95 fields generated to correspond to the 95 data collection points in each of the output files.

my @field = @fields[1..95];
push @data, \@field;

if($countlines >= 1){

push @legends, $fields[0];

}

$countlines++;

}

#The data and legend arrays will read 820 lines of the CSV file.  This number will change based on the number of disks in the SAN, and will be different depending on the SAN being reported on.  The legend info will read the first column of the spreadsheet and create a color box that corresponds to the graph line.  For the purpose of this graph, I won’t be using it because 820+ legend entries look like a mess on the screen.

splice @data, 1, -820;

splice @legends, 0, -820;

#Set Graphing Options

my $mygraph = GD::Graph::lines->new(1024, 768);

# There are many graph options that can be changed using the GD::Graph library.  Check the website (and google) for lots of examples.

$mygraph->set(

title => 'SP IO Utilization (00:15 - 23:45)',

y_label => 'IOs Per Second',

y_tick_number => 4,

values_vertical => 6,

show_values => 0,

x_label_skip => 3,

) or warn $mygraph->error;

#As I said earlier, because of the large number of legend entries for this type of graph, I change the legend to simply read “All disks”.  If you want the legend to actually put the correct entries and colors, use this line instead:  $mygraph->set_legend(@legends);

$mygraph->set_legend('All Disks');

#Plot the data

my $myimage = $mygraph->plot(\@data) or die $mygraph->error;

# Export the graph as a gif image.  The images are currently moved to the IIS folder (c:\inetpub\wwwroot) with one of the scripts.  They could also be emailed using a sendmail utility.

my $format = $mygraph->export_format;

open(IMG,">$output_file.$format") or die $!;

binmode IMG;

print IMG $myimage->gif;

close IMG;

After this script runs, the resulting image file will be saved in the cygwin home directory (it saves it in the same directory that the CSV file is located in).  One of the nightly scripts I run will copy the image to our internal IIS server’s image directory, and sendmail will email the graph to the SAN Admin team.

That’s it!  You now have lots of pretty graphs with which you can impress your management team. 🙂

Here is a sample graph that was generated with the Perl script:

Reporting on Soft media errors

 

Ah, soft media errors.  The silent killer.  We had an issue with one of our Clariion LUNs that had many uncorrectable sector errors.  Prior to the LUN failure, there were hundreds of soft media errors reported in the Navisphere logs.  Why weren’t we alerted about them?  Beats me.  I created my own script to pull and parse the alert logs so I can manually check for these types of errors.

What exactly is a soft media error?  Soft media errors indicate that the SAN has identified a bad sector on the disk and is reconstructing the data from RAID parity in order to fulfill the read request.  They can indicate a failing disk.

To run a report that pulls only soft media errors from the SP log, put the following in a windows batch file:

naviseccli -h <SP IP Address> getlog >textfile.txt

for /f "tokens=1,2,3,4,5,6,7,8,9,10,11,12,13,14" %%i in ('findstr Soft textfile.txt') do (echo %%i %%j %%k %%l %%m %%n %%o %%p %%q %%r %%s %%t %%u %%v)  >>textfile_mediaerrors.txt

The text file output looks like this:

10/25/2010 19:40:17 Enclosure 6 Disk 7 (820) Soft Media Error [0x00] 0 5
 10/25/2010 19:40:22 Enclosure 6 Disk 7 (820) Soft Media Error [0x00] 0 5
 10/25/2010 19:40:22 Enclosure 6 Disk 7 (820) Soft Media Error [0x00] 0 5
 10/25/2010 19:40:27 Enclosure 6 Disk 7 (820) Soft Media Error [0x00] 0 5
 10/25/2010 19:40:27 Enclosure 6 Disk 7 (820) Soft Media Error [0x00] 0 5
 10/25/2010 19:40:33 Enclosure 6 Disk 7 (820) Soft Media Error [0x00] 0 5
 10/25/2010 19:40:33 Enclosure 6 Disk 7 (820) Soft Media Error [0x00] 0 5
 10/25/2010 19:40:38 Enclosure 6 Disk 7 (820) Soft Media Error [0x00] 0 5
 10/25/2010 19:40:38 Enclosure 6 Disk 7 (820) Soft Media Error [0x00] 0 5
 10/25/2010 19:40:44 Enclosure 6 Disk 7 (820) Soft Media Error [0x00] 0 5
 10/25/2010 19:40:44 Enclosure 6 Disk 7 (820) Soft Media Error [0x00] 0 5
 10/25/2010 19:40:49 Enclosure 6 Disk 7 (820) Soft Media Error [0x00] 0 5
 10/25/2010 19:40:49 Enclosure 6 Disk 7 (820) Soft Media Error [0x00] 0 5

If you see lots of soft media errors, do yourself a favor and open a case with EMC.  Too many can lead to the failure of one of your LUNs.

The script can be automated to run and send an email with daily alerts, if you so choose.  I just run it manually about once a week for review.

Tiering reports for EMC’s FAST VP

Note: On a separate blog post, I shared a script to generate a report of the tiering status of all LUNs.

One of the items that EMC did not implement along with FAST VP is the ability to run a canned report on how your LUNs are being allocated among the different tiers of storage.  While there is no canned report, alas, it is possible to get this information from the CLI.

The naviseccli -h {SP IP or hostname} lun -list -tiers command fits the bill. It shows how a specific LUN is distributed across the different drive types.  I still need to come up with a script to pull out only the information that I want, but the info is definitely in the command’s output.

Here’s the sample output:

LOGICAL UNIT NUMBER 6
 Name:  LUN 6
 Tier Distribution:
 Flash:  13.83%
 FC:  86.17%

The storagepool report gives some good info as well.  Here’s an excerpt of what you see with the naviseccli -h {SP IP or hostname} storagepool -list -tiers command:

SPA

Tier Name:  Flash
 Raid Type:  r_5
 User Capacity (GBs):  1096.07
 Consumed Capacity (GBs):  987.06
 Available Capacity (GBs):  109.01
 Percent Subscribed:  90.05%
 Data Targeted for Higher Tier (GBs):  0.00
 Data Targeted for Lower Tier (GBs):  11.00

Tier Name:  FC
 Raid Type:  r_5
 User Capacity (GBs):  28981.77
 Consumed Capacity (GBs):  10592.65
 Available Capacity (GBs):  18389.12
 Percent Subscribed:  36.55%

Tier Name:  SATA
 Raid Type:  r_5
 User Capacity (GBs):  11004.67
 Consumed Capacity (GBs):  260.02
 Available Capacity (GBs):  10744.66
 Percent Subscribed:  2.36%
 Data Targeted for Higher Tier (GBs):  3.00
 Data Targeted for Lower Tier (GBs):  0.00
 Disks (Type):

SPB

Tier Name:  Flash
 Raid Type:  r_5
 User Capacity (GBs):  1096.07
 Consumed Capacity (GBs):  987.06
 Available Capacity (GBs):  109.01
 Percent Subscribed:  90.05%
 Data Targeted for Higher Tier (GBs):  0.00
 Data Targeted for Lower Tier (GBs):  25.00

Tier Name:  FC
 Raid Type:  r_5
 User Capacity (GBs):  28981.77
 Consumed Capacity (GBs):  10013.61
 Available Capacity (GBs):  18968.16
 Percent Subscribed:  34.55%
 Data Targeted for Higher Tier (GBs):  25.00
 Data Targeted for Lower Tier (GBs):  0.00

Tier Name:  SATA
 Raid Type:  r_5
 User Capacity (GBs):  11004.67
 Consumed Capacity (GBs):  341.02
 Available Capacity (GBs):  10663.65
 Percent Subscribed:  3.10%
 Data Targeted for Higher Tier (GBs):  20.00
 Data Targeted for Lower Tier (GBs):  0.00

Good stuff in there.   It’s on my to-do list to run these commands periodically, and then parse the output to filter out only what I want to see.  Once I get that done I’ll post the script here too.
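
As a starting point, something as simple as the sketch below pulls just the LUN numbers and tier percentages out of that output.  The SP address is a placeholder, and I’m assuming lun -list -tiers without a -l argument reports every LUN; this is not the finished script mentioned in the note below, just an illustration of the idea:

naviseccli -h <SP_IP> lun -list -tiers > tierreport.txt
grep -E 'LOGICAL UNIT NUMBER|Flash:|FC:|SATA:' tierreport.txt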

Note: I did create and post a script to generate a report of the tiering status of all LUNs.

VNX NAS CLI Command Reference Guide


Other CLI Reference Guides:
Isilon CLI  |  EMC ECS CLI  |  VNX NAS CLI  |  ViPR Controller CLI  |  NetApp Clustered ONTAP CLI  |  Data Domain CLI  |  Brocade FOS CLI

This VNX NAS CLI reference guide includes command syntax samples for more commonly used commands at the top, and a list of available commands at the bottom with a brief description of their function.  Here are some other posts on my blog that provide more specific examples of using some CLI commands, with additional detail and some scripting examples:

Undocumented VNX CLI Commands
Using the Database Query option with the VNX NAS CLI
Celerra Health Check CLI Commands
Testing Disaster Recovery with VDM’s and CIFS Servers
Checking Replication Job Throughput with the CLI
Collecting info on Actives Shares, Clients, Protocols, & Authentication with the CLI
Listing and Counting Multiprotocol File Systems from the CLI

VNX NAS CLI Command Reference (Updated January 2018):

NAS Commands:
nas_disk   -list Lists the disk table
nas_checkup Runs a system health check.
nas_pool   -size -all Lists available space on each defined storage pool
nas_replicate  -info -all | grep <fs> Info about each filesystem’s replication status, grep to view just one.
nas_replicate  -list A list of all current replications
nas_server  -list Lists all datamovers. 1=primary,4=standby,6=rdf (remote data facility)
<watch> /nas/sbin/getreason Shows current status of each datamover. 5=up, 0=down or rebooting
nas_fs Creates, deletes, extends, modifies, and lists filesystems.
nas_config Control station configuration (requires root login)
nas_version View current nas revision
nas_ckpt_schedule Manage  checkpoint schedule
nas_storage -list List the attached backend storage systems (with ID’s)
nas_storage -failback id=<x> Fail back failed over SP’s or disks
nas_server  -vdm <vdm_name> -setstate loaded Loads a VDM
nas_server  -vdm <vdm_name> -setstate mounted Unloads a VDM
/nas/sbin/t2reset pwron -s Powers on a data mover that has been shut down.
Server commands:
server_cpu server_<x> -r now Reboots a datamover
server_ping <IP> Ping any IP from the control station
server_ifconfig server_2 –all View all configured interfaces
server_route server_2 {-list,flush,add,delete} Routing table commands
server_mount Mount a filesystem
server_export Export a filesystem
server_stats Provides realtime stats for a datamover, many different options.
server_sysconfig Modifies hardware config of the data movers.
server_devconfig Configures devices on the data movers.
server_sysstat Shows current Memory, CPU, and thread utilization
server_log server_2 Shows current log
vi /nas/jserver/logs/system_log Java System log
vi /var/log/messages System Messages
server_ifconfig server_2 <interface_name> up Bring up a specific interface
server_ifconfig server_2 <interface_name> down Take a specific interface down
server_date Sets system time and NTP server settings
server_date <server_X> timesvc start ntp Starts NTP on a data mover
server_date <server_X> timesvc stats ntp To view the status of NTP.
server_date <server_X> timesvc update ntp Forces an update of NTP
server_file FTP equivalent command
server_dns Configure DNS
server_cifssupport Support services for CIFS users
nas_ckpt_schedule -create <ckpt_fs_name> -filesystem <fs_name> -recurrence once To create a single Checkpoint
fs_ckpt <ckpt_fs_name> -name <rw_ckpt_fs_name> -Create -readonly n To create a Read/Write copy of a single Checkpoint
server_export [vdm] -P cifs -name [filesystem]_ckpt1 -option netbios=[cifserver] [filesystem]_ckpt1_writeable1 To export a Read/Write checkpoint copy to a CIFS Share
server_cifs server_2 -Join compname=SERVERNAME,domain=DOMAIN.COM,admin=ADMINID Join a CIFS Server to the domain
server_cifs server_2 -Unjoin compname=SERVERNAME,domain=DOMAIN.COM,admin=ADMINID Unjoin a CIFS Server from the domain
.server_config server_2 -v "pdc dump" To view the current domain controllers visible on the data mover
.server_config server_2 -v "pdc enable=<ip_address>" Enable a domain controller
.server_config server_2 -v "pdc disable=<ip_address>" Disable a domain controller
server_setup server_2 -P cifs -o stop Stop CIFS Service
server_setup server_2 -P cifs -o start Start CIFS Service
server_iscsi server_2 -service -start Start iSCSI service
server_iscsi server_2 -service -stop Stop iSCSI service
server_iscsi server_2 -service -status Check the status of the iSCSI service
.server_config server_x "logsys set severity NDMP=LOG_DBG2" Enable NDMP Logging [run both of these commands]
.server_config server_x "logsys set severity PAX=LOG_DBG2" Enable NDMP Logging [run both of these commands]
.server_config server_x "logsys set severity NDMP=LOG_ERR" Disable NDMP Logging [run both of these commands]
.server_config server_x "logsys set severity PAX=LOG_ERR" Disable NDMP Logging [run both of these commands]
server_netstat server_x -i Gather interface performance statistics
server_sysconfig server_x -v List virtual devices
server_sysconfig server_x -v -i vdevice_name Informational stats on the virtual device
server_netstat server_x -s -a tcp View TCP retransmissions
server_nfsstat server_x View NFS SRTs
server_nfsstat server_x -zero Reset NFS stats
To view HBA Statistics:
.server_config server_2 -v "printstats fcp reset" View HBA Stats:  Toggles the service on/off
.server_config server_2 -v "printstats fcp full" View HBA Stats:  View stats table (must wait for some stats to collect before viewing)
Filesystem specific commands:
fs_ckpt Manage Checkpoints
fs_dhsm Manage File Mover
fs_group Manage filesystem groups
Complete List of  “nas_”  Commands:
nas_acl Creates, lists, and displays information for access control level entries within the table
nas_ckpt_schedule Manages SnapSure checkpoint scheduling for the VNX
nas_dbtable Displays the table records of the Control Station.
nas_emailuser Manages email notifications for serious system events
nas_inventory Provides detailed information about hardware components
nas_pool Manages the user-defined and system-defined storage pools
nas_slice Manage Slices
nas_task Manages in-progress or completed tasks
nas_automountmap Creates and displays an automount map containing all permanently exported file systems
nas_cmd nas_cmd
nas_devicegroup Manages an established MirrorView/Synchronous consistency group
nas_event Provides a user interface to system-wide events
nas_license Enables software packages.
nas_quotas Manages quotas for mounted file systems.
nas_stats Manages Statistics Groups.
nas_version Displays the software version running on the Control Station.
nas_cel Performs management of remotely linked VNX or a linked pair of Data Movers.
nas_copy Creates a replication session for a one-time copy of a file system.
nas_disk Manages the disk table.
nas_fs Manages local file systems for the VNX.
nas_logviewer Displays the content of nas_eventlog generated log files.
nas_replicate Manages loopback, local, and remote VNX Replicator sessions.
nas_storage Controls storage system access and performs some management tasks
nas_volume Manages the volume table.
nas_checkup Provides a system health checkup for the VNX.
nas_cs Manages the configuration properties of the Control Station.
nas_diskmark Queries the system, manages and lists the SCSI devices configuration.
nas_fsck Manages fsck and aclchk utilities on specified file systems.
nas_message Displays message description.
nas_server Manages the Data Mover (server) table.
nas_symm nas_symm
nas_xml nas_xml
Complete list of  “server_”  Commands:
server_archive Reads and writes file archives, and copies directory hierarchies.
server_cifssupport Provides support services for CIFS users.
server_file Copies files between the Control Station and the specified Data Movers.
server_log Displays the log generated by the specified Data Mover.
server_name Manages the name for the specified Data Movers.
server_ping6 Checks the IPv6 network connectivity for the specified Data Movers.
server_sysconfig Manages the hardware configuration for the specified Data Mover(s).
server_vtlu Configures a virtual tape library unit (VTLU) on the specified Data Movers
server_arp Manages the Address Resolution Protocol (ARP) table for the Data Movers.
server_cpu Performs an orderly, timed, or immediate halt or reboot of a Data Mover.
server_ftp Configures the FTP server configuration for the specified Data Movers.
server_mgr server_mgr (deprecated?)
server_netstat Displays the network statistics for the specified Data Mover.
server_rip Manages the Routing Information Protocol (RIP) configuration
server_sysstat Displays the operating system statistics for the specified Data Movers.
server_cdms Provides File Migration Service for VNX functionality
server_date Displays or sets the date and time for a Data Mover, and synchronizes time
server_http Configures the HTTP configuration file for independent services
server_mount Mounts file systems and manages mount options
server_nfs Manages the NFS service, including secure NFS and NVSv4
server_route Manages the routing table for the specified Data Movers.
server_tftp Manages the Trivial File Transfer Protocol (TFTP)
server_cepp Manages the Common Event Publishing Agent (CEPA) service
server_dbms Enables backup and restore of databases, displays database environment statistics.
server_ifconfig Manages the network interface configuration
server_mountpoint Manages mount points for the specified Data Movers.
server_nfsstat server_nfsstat (deprecated?)
server_security Manages GPO Policy settings for CIFS Servers
server_umount Unmounts file systems
server_certificate Manages VNX for file system’s Public Key Infrastructure (PKI)
server_devconfig Queries, saves, and displays the SCSI over Fibre Channel device configuration
server_ip Manages the IPv6 neighbor cache and route table for VNX.
server_mpfs Sets up and configures MPFS protocol.
server_nis Manages the Network Information Service (NIS) configuration
server_setup Manages the type and protocol component for the specified Data Movers.
server_uptime Displays the length of time that a specified Data Mover has been running since the last reboot
server_checkup Checks the configuration parameters, and state of a Data Mover and its dependencies
server_df Reports free and used disk space and inodes for mounted file systems
server_iscsi server_iscsi (deprecated?)
server_mpfsstat server_mpfsstat (deprecated?)
server_param Manages parameter information for the specified Data Movers.
server_snmpd Manages the Simple Network Management Protocol (SNMP) config values
server_usermapper Provides an interface to manage the Internal Usermapper service.
server_cifs Manages the CIFS configuration for the specified Data Movers or VDMs
server_dns Manages the Domain Name System (DNS) lookup server config
server_kerberos Manages the Kerberos configuration within the specified Data Movers.
server_mt Manages the magnetic tape drive for the specified Data Mover.
server_pax Displays and resets backup and restore statistics and file system information for a backup session already in progress.
server_standby Manages the standby and RDF relationships for the specified Data Movers.
server_version Displays the software version running on the specified Data Movers.
server_cifsstat server_cifsstat (deprecated?)
server_export Exports file systems and manages access on the specified Data Movers for NFS/CIFS clients
server_ldap Manages the LDAP-based directory client configuration and LDAP over SSL
server_muxconfig server_muxconfig (deprecated?)
server_ping Checks the network connectivity for the specified Data Movers.
server_stats Displays sets of statistics that are running on the specified Data Mover.
server_viruschk Manages the virus checker configuration for the specified Data Movers.
Complete list of  “fs_” Commands:
fs_ckpt Manages checkpoints using the EMCSnapSure functionality.
fs_dedupe Manages filesystem deduplication state.
fs_dhsm Manages the VNX FileMover file system connections.
fs_group Creates a file system group from the specified file systems or a single file system
fs_rdf Manages the Remote Data Facility (RDF) functionality for a file system residing on RDF drives.
fs_timefinder Manages the TimeFinderTM/FS functionality for the specified filesystem

A Roundup of Storage Startups

Blockchain and Enterprise Storage