My name is Steve Engelhardt.  I’ve been in IT for over 20 years, and throughout my career I’ve had a wide array of responsibilities and touched a wide variety of technologies, including Windows and UNIX server administration, server virtualization, network administration, business continuity, SAP software & infrastructure administration, and SAN and storage administration.  After all that time I’ve picked up a few tricks, and part of the purpose of this blog is to share that experience.  I hold professional certifications from Microsoft, Cisco, Citrix, and Dell EMC.  My most recent transition was to full-time SAN and storage administration, which has been my exclusive responsibility since 2010.

I currently work for a global Fortune 500 company in the enterprise storage group and have on-the-job experience with EMC, IBM, NetApp, Brocade, Cisco, and Pure hardware deployed in data centers around the globe.  I’ve been responsible for the management and administration of all of this hardware in my various roles.  Storage vendors don’t always provide all of the tools (or documentation) you need, so administrators like us sometimes have to get a bit creative to get the job done.  That creativity often means finding alternative solutions to issues as they arise, which frequently results in custom tools and scripts, many of which are shared on this blog.  Along with those scripts, I’ll be posting tips and tricks I’ve picked up about SAN and storage administration, simple “work smarter, not harder” tips for everyday tasks, industry news and trends, and some general info as well.  All comments on this blog are screened and moderated before they appear, but they are welcome and encouraged.  I will try to answer questions that are posted, although I can’t always answer them all.

This blog is my own and the posts found here reflect my personal opinions. The published information is not approved by any hardware vendor.


37 thoughts on “About”

  1. hi

    Great blog… clean, accurate information, no confusion, and quicker to read than any 200-page PDF 🙂

    Could you post your experience related to the performance of the data movers? I never quite understood why there are four ports on each data mover when usually only one is configured, and two are probably enough for link aggregation. Are the four ports for multiple CIFS servers?

    When I perform backups (with NetBackup) I am amazed by how slow my NAS can be…


    1. Hi Jeff,

      Thanks for your comments and for the kind words. I’d be happy to put something together regarding my experience with data mover performance and how I’ve got ours configured, I’ll work on that today.

    2. There are four ports so you have the option to trunk them together. The best way (IMHO) is to use the EMC “Fail Safe Network” option (https://mattcraggs.wordpress.com/tag/celerra-setup/). However, instead of just two ports in an active/passive configuration, first create a three-port LACP trunk and define that as the “primary” interface, with the fourth port as the “secondary”. With the fourth port connected to a different switch, this gives you switch-level redundancy while (under normal operations) having three times as much bandwidth to your NAS.
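      On the Celerra/VNX CLI, that layout can be sketched roughly as below. This is a hedged example only: the device names (cge0–cge3), the virtual device names, server_2, and the sample IP addressing are all assumptions, so verify the exact server_sysconfig option syntax against your own DART release before running anything.

```shell
# Sketch only -- device names (cge0..cge3), lacp0/fsn0, server_2 and
# the IP details are placeholders; verify syntax for your DART release.

# 1. Create a three-port LACP trunk from cge0, cge1 and cge2
server_sysconfig server_2 -virtual -name lacp0 \
  -create trk -option "device=cge0,cge1,cge2 protocol=lacp"

# 2. Create a Fail Safe Network device with the trunk as primary and
#    cge3 (cabled to a second switch) as the standby
server_sysconfig server_2 -virtual -name fsn0 \
  -create fsn -option "primary=lacp0 device=lacp0,cge3"

# 3. Put the data mover's IP interface on the FSN device
server_ifconfig server_2 -create -Device fsn0 -name nas_int0 \
  -protocol IP 192.168.1.10 255.255.255.0 192.168.1.255
```

      Under normal operation traffic rides the three-port trunk; if the primary switch fails, the FSN fails over to the single port on the second switch at reduced bandwidth.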

  2. Hi Steve,

    Just wanted to say thanks for posting all this useful information. The blog looks really helpful and neat.


    1. Thanks Jeff. I think VFCache looks very promising, but it’s still “bleeding edge” and I’m going to wait a while on it. It needs to be fully integrated with VMware before I could even use it (it’s not yet), and the price will come down when they switch from SLC to MLC flash later this year.

  3. I have six years of experience with EMC products, and as you mention, “Because EMC doesn’t always provide all of the tools (or documentation) you need, sometimes administrators like us have to get a bit creative to get the job done.” I found your blog very useful and informative 😉 THANX, and keep up the good work!

  4. I’d be interested in hearing more about your thoughts on Riverbed Steelhead products, as we are considering them first for WAN acceleration and then for server consolidation. We’re also looking at Silver Peak. Can I speak with you? It’s for a civil engineering company with branch offices.

    1. Hi Brian. Unfortunately I don’t have much info to share regarding Riverbed products, as they are managed by our network infrastructure team. I worked with the network team once to resolve an issue with our CIFS servers not closing open files properly; they had found a Riverbed knowledge base article that said we had to be at a certain Celerra DART revision when using a Steelhead appliance. I only know that the network team researched the market extensively and then picked Riverbed as the best option; I was not involved at all in that process.

  5. Very useful blog…
    One question or request I’d like to submit – do you have anything that would show the IP replication delta transfer details on an ongoing basis? Something equivalent to the NetApp snap delta command?

    1. I do have a script that runs every morning and shows me the current status of my replication jobs, including the amount of data remaining to transfer and the estimated completion time. You can see my post about it here: http://emcsan.wordpress.com/2011/11/08/celerra-replication-monitoring-script/. I’m not familiar with the snap delta command or what it can do. On the Celerra, nas_replicate is the CLI command you’d use to gather that info.
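      For ad-hoc checks, the same information can be pulled straight from the CLI. A rough sketch (the session name and the grep pattern are assumptions; the field labels in nas_replicate output vary by release, so check them on your own system before scripting against them):

```shell
# Sketch only -- "MyRepSession" is a placeholder session name, and the
# grep pattern assumes the transfer/time field labels on your release.

# List all replication sessions
nas_replicate -list

# Show detail for one session, filtering for transfer progress fields
nas_replicate -info MyRepSession | grep -i -E "transfer|time"
```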

  6. Thank you for all of this valuable information. I’m struggling with a scenario EMC has been unable to shed light on to date, so I’m wondering if you’ve run across it (I looked on your blog but didn’t see anything about it.)

    Our environment is as follows:
    * VNX 7.1 for File (VNX7500 frame)
    * NDMP local backups (via Netbackup) into a Quantum i6000 library (FC IBM LTO5 drives)

    The symptom I’m struggling with understanding is why backups of server_8 *never* exceed 20MB/sec per filesystem, but other datamovers’ backups using the same drives regularly exceed 120-130 MB/sec.

    Ever seen/heard of something like this?

    1. Thanks for posting. This is one of those times that I don’t think I can help based on my previous experience. We don’t use NDMP and I haven’t seen such a dramatic performance difference on backups between data movers in our environment. I’ll do a bit of research myself and post again if I can find any further info that could assist. Sorry I can’t be of more help right now.

  7. Hey Man,

    I have learned a lot from your blog – even the ABCs of storage are covered here at “the SAN guy”. I need your help with some script creation for VNX, Celerra, and Isilon. Can you please provide some info on writing scripts for checking pool and file system utilization, health checks, etc.?

    Thanks in advance!

    1. Do a quick search and you’ll see that I’ve posted lots of example scripts over the years, including scripts that do exactly what you’re asking for.

      Reporting on Block Pool utilization: http://emcsan.wordpress.com/2013/08/09/reporting-on-celerravnx-block-storage-pool-capacity-with-a-bash-script/
      Reporting on File Pool utilization: http://emcsan.wordpress.com/2013/08/05/reporting-on-celerravnx-nas-pool-sizes-with-a-bash-script/
      Health Check commands: http://emcsan.wordpress.com/2011/06/08/celerra-health-check-with-cli-commands/
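      The scripts in those posts all follow the same basic pattern: run the CLI command, filter the output with awk or grep, and mail or archive the result. Here’s a minimal hedged sketch of that pattern; the `used_mb`/`avail_mb` field names and the sample output layout are assumptions, so verify them against the actual `nas_pool -size -all` output on your own system before relying on it.

```shell
#!/bin/sh
# Hypothetical sketch of the run-CLI-then-filter pattern: turn
# `nas_pool -size -all` style output into CSV.  The field names
# (name, used_mb, avail_mb) are assumptions -- check them against
# the real output on your control station first.
parse_pool_sizes() {
  awk '
    /^name/     { name  = $3 }
    /^used_mb/  { used  = $3 }
    /^avail_mb/ { avail = $3; printf "%s,%s,%s\n", name, used, avail }
  '
}

# Typical use on the control station would then be:
#   nas_pool -size -all | parse_pool_sizes > pool_report.csv
```

      From there it’s a one-liner in cron to mail yourself the CSV every morning, which is essentially what the linked scripts do.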

      1. Thanks so much – VERY Close!
        We need to tie out all the physical servers attached to each array and report the LUN allocation and usage on those specific arrays. Our end goal is the resource sum of all applications – so the tie-out is app to server, and then server to storage. We also have benchmarking data feeds that compare cost per TB by Tier 1–3 and backup tier. Any idea how to capture disk profiles and align them with tiers in each array too?

        1. Sorry Tom, I don’t have any specific experience to apply to your question, and I don’t have any current scripts that would do that. If a future business requirement at work has me working on a process like that, I will share the info on this blog.

  8. Our IT Consulting Firm needs to hire an EMC Expert to help pull Server To LUN Mapping and Utilization on VMAX, VNX, and Isilon environments. These would need to be in CSV format so our IT Financial Analytics platform can consume them on a monthly basis. Could you recommend a peer or colleague in the RTP Region we could work with in our offices for 20 hours quarterly? Any help is appreciated – http://www.apptio.com

  9. Hi,
    Have you ever seen a customer share the same pool for NAS and SAN workloads?


    1. Yes, that is how our main production NAS is configured. We have two storage pools defined for NAS, one for file systems and one for file system checkpoints.

  10. Hi, I am looking for a way to dig out storage pool response times and IOPS on a VNX2 block storage pool. Have you come across that?

    Many thanks in advance


    1. Contact your local EMC sales rep and get a copy of the VNX Monitoring and Reporting software. It will do exactly what you’re asking, and it’s free.

  11. Hello,
    I need your help.
    How can I match a virtual volume on VPLEX with its storage volume on the VNX?
    Thanks a lot for your help


  12. Hi, thanks for helping the community. Had a quick question: have you ever seen a CEPP server IP address that doesn’t match the one in the cepp.conf on a VNX? Thanks

    1. I honestly have no experience with CEPP servers. I’d suggest manually editing the cepp.conf file or contacting EMC for assistance. Sorry I can’t give you any specific help. I found the info below regarding editing the file.

      1. Use a text editor to create a new, blank file in the home directory.

      2. Add the CEPA information that is necessary for your system. This information can be on one line, or on separate lines by using a space and “\” at the end of each line except for the last line and the lines that contain global options (cifsserver, surveytime, ft, and msrpcuser).

      3. Save the file with the name cepp.conf, and then close the text editor.

      4. Move the cepp.conf file to the Data Mover’s root file system: $ server_file -put cepp.conf cepp.conf

      Sample cepp.conf file:

      ft level=1 location=/fs1 size=5
      pool name=sepapool
      preevents=* postevents=*
      posterrevents=* option=ignore reqtimeout=500 retrytimeout=50

  13. Hi,

    My name is Pravin, Product Manager at ManageEngine. I have been reading your articles – you are doing a great job. I handle product management for OpStor, a storage monitoring product from ManageEngine. I am looking to place an ad on your blog for our product. Let me know if you would be interested.


  14. Hi SanGuy,

    Perhaps you can help me with this disk performance issue on our VNX5200.
    We have two hosts running Win 2012 with native MPIO and using Unisphere host agent installed. The initiators on each host are registered with our Brocade FC switches.

    I get these “HAVT issues” in the host lists

    name: AB1 Windows Server 2012 family Fibre HAVT Issues Normal 7.33.0 (0.15) 11800.0

    In the LUN section shows “Unknown” for logical and physical disk in host AB1
    LUN 0 0 Mixed On VNX1 Unknown Unknown 0
    LUN 1 1 Mixed On VNX1 Unknown Unknown 1

    name: AB2 Windows Server 2012 family Fibre HAVT Issues Normal 7.33.0 (0.15) 5900.0

    The LUN section shows “Unknown” for logical and physical disk on host AB2
    LUN 4 4 Mixed On VNX1 Unknown Unknown 0

    Also, in the system event viewer on the hosts (HP ProLiant DL380p G8 servers), I get these Event ID 153 warnings, which are affecting disk performance to the LUNs

    Event ID 153 – The IO Operation at logical block address xxxx for MPIO\Disk 4 was retried.

    I noticed too in device manager under other devices that it shows yellow exclamation for several “base system devices” and two “system interrupt controller” on the Win2012 hosts.

    The native MPIO properties for AB1 in MPIO devices tab

    Vendor 8Product 16

    In the Discover Multi-Paths tab for AB1, “add support for SAS devices” is checked and greyed out; in the Others section there is no info and Add is greyed out

    The native MPIO properties for AB2 in MPIO devices tab

    Vendor 8Product 16

    In the Discover Multi-Paths tab for AB2, “add support for SAS devices” is not checked and greyed out; in the Others section there is no info and Add is greyed out


    1. Hey Steve. It’d honestly be pretty difficult to properly diagnose what’s going on without being able to get my hands on it, but it sounds like the hosts lost access to the LUNs. The internal array HAVT process will detect and report that issue if the host is zoned and registered to the array but has no access to any LUNs. The connection status will change to active once you assign the host to a storage group with LUNs. I’d start with verifying your storage group config, and I’d also make sure your failover mode settings are set correctly on all the initiators.
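      As a hedged starting point for that storage group check from the CLI (the SP address is a placeholder, and this assumes the Navisphere/Unisphere CLI is installed on your management host):

```shell
# Sketch only -- spa_address is a placeholder for your SP's IP.
# List all storage groups with their LUNs and connected hosts;
# confirm AB1 and AB2 appear in a group that actually contains LUNs.
naviseccli -h spa_address storagegroup -list
```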

  15. Really Great Blog with very insightful information.
    How do you compare the VNX vs. Isilon vs. NetApp FAS?
    The VNX and FAS are similar in many ways, but Isilon is a different animal. Our workload is all NAS with departmental shares and home directories.
    Which one do you feel is best for this use case?

    1. I can give you a brief answer, however I’d recommend doing your own research on the specific differences between the platforms you mentioned. All of them are solid enterprise-class offerings and all meet the needs of a general workload. For simple home directories in a smaller environment with lower performance requirements, VNX/Unity or NetApp FAS would be a more cost-effective choice. NetApp and VNX match up very well feature for feature, and choosing one over the other would depend on the specific requirements of the install; that said, I think NetApp FAS offers more flexibility in an all-NAS environment than a traditional VNX Unified install.

      1. Thank you.
        We are also looking at a NAS gateway with an object storage backend. We looked at a couple of options such as Avere systems and Panzura.
        The issue I see is that most of these systems can only have a single CIFS server, unlike VNX and NetApp, where you can have many.

        Have you looked into this?

Comments are closed.

Enterprise Storage Engineer
