
Adding, modifying and viewing an ACL in the Isilon OneFS CLI

This is an overview and reference for the commands and syntax needed for adding and modifying an ACL on Isilon OneFS files and directories from the CLI.

Access Control Entries for the ACL

Note that a complete ACE list can be viewed by running “man chmod” from the CLI.  This is a list of the entries I used.

dir_gen_all: Read, write, and execute access (dir_gen_read, dir_gen_write, dir_gen_execute, delete_child, and std_write_owner)
object_inherit: Only files in this directory and its descendants inherit the ACE
container_inherit: Only directories in this directory and its descendants inherit the ACE
delete_child: The right to delete children, including read-only files
file_gen_all: Read, write, and execute access (file_gen_read, file_gen_write, file_gen_execute, delete, std_write_dac, and std_write_owner)
add_file: The right to create a file in the directory
add_subdir: The right to create a subdirectory

Sample commands for adding ACLs to a folder

The chmod +a command is used to specify an AD group or user and set ACLs explicitly on directories and files. With the -R option the change is applied recursively, so setting permissions on a directory also sets all of the files within that directory to the same set.

chmod -R +a group "<Domain Name>\<Group Name>" allow dir_gen_all,delete_child,object_inherit,container_inherit,file_gen_all,add_file,add_subdir <absolute_path>
chmod -R +a user "<Domain Name>\<User Name>" allow dir_gen_all,delete_child,object_inherit,container_inherit,file_gen_all,add_file,add_subdir <absolute_path>

For the equivalent of Full Control I added the following rights:

dir_gen_all,delete_child,object_inherit,container_inherit,file_gen_all,add_file,add_subdir
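
Applied with the syntax above, a full example command would look like the following (the group name and path here are placeholders, not from an actual environment):

chmod -R +a group "EXAMPLE\Full-Control-Users" allow dir_gen_all,delete_child,object_inherit,container_inherit,file_gen_all,add_file,add_subdir /ifs/data/share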

For the equivalent of Read/Execute I added the following rights:

dir_gen_read,dir_gen_execute,file_gen_read,file_gen_execute,object_inherit,container_inherit
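
Again with placeholder names, the equivalent Read/Execute command would be:

chmod -R +a group "EXAMPLE\Read-Only-Users" allow dir_gen_read,dir_gen_execute,file_gen_read,file_gen_execute,object_inherit,container_inherit /ifs/data/share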

Making Changes

If you need to alter an ACL, the most commonly used chmod command-line switches are -a, -b, and -D, described below.

-a The -a mode is used to delete ACL entries.
-b Removes the ACL and replaces it with the specified mode.
-D Removes all ACEs in the security descriptor's DACL for all named files. This results in implicitly denying everything.
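
For example, to remove a previously added entry you can match it explicitly with -a, or delete it by the index number shown in ls -le output using -a#. These are sketches with placeholder names; verify the exact syntax against man chmod on your cluster:

chmod -a group "EXAMPLE\Read-Only-Users" allow dir_gen_read,dir_gen_execute /ifs/data/share
chmod -a# 2 /ifs/data/share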

If you attempt to add a well-known Windows group such as "Authenticated Users" or "Power Users" to a file or directory Access Control List (ACL) from the command line interface, you'll get the error "illegal group name: Invalid argument". You'll need to use the well-known SID rather than the name in order to add the ACL entry. You can view a complete list of the well-known SIDs on Microsoft's website here: https://support.microsoft.com/en-us/kb/243330.

Syntax examples using well-known SIDs

(S-1-5-11 is “Authenticated Users”)

chmod +a sid S-1-5-11 allow traverse,list

(S-1-1-0 is “Everyone”)

chmod +a sid S-1-1-0 allow traverse,list
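
These SID-based entries can be combined with the recursive and inheritance options shown earlier. For example, a hypothetical command granting Authenticated Users read and execute access to an entire tree (placeholder path):

chmod -R +a sid S-1-5-11 allow dir_gen_read,dir_gen_execute,file_gen_read,file_gen_execute,object_inherit,container_inherit /ifs/data/share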

Viewing AD permissions on a OneFS folder

Once you've added your ACLs, you can list and confirm permissions on the folder with the "ls -led" command. Below is some sample output.

ISILON-NODE-1# ls -led
drwxrwx--- + 34 AD_DOMAIN_NAME\account_name AD_DOMAIN_NAME\domain users 1630 May 11 13:26 .
OWNER: user:AD_DOMAIN_NAME\account_name
GROUP: group:AD_DOMAIN_NAME\domain users
CONTROL:dacl_auto_inherited,sacl_auto_inherited
0: group:AD_DOMAIN_NAME\folder-admin allow dir_gen_read,dir_gen_write,dir_gen_execute,std_delete,object_inherit,container_inherit
1: user:AD_DOMAIN_NAME\service_file_transfer allow dir_gen_all,object_inherit,container_inherit
2: SID:S-1-5-21-2127695773-1422393826-955202855-634017 allow dir_gen_read,dir_gen_write,dir_gen_execute,std_delete,object_inherit,container_inherit
3: SID:S-1-5-21-2127695773-1422393826-955202855-634018 allow dir_gen_read,dir_gen_write,dir_gen_execute,std_delete,object_inherit,container_inherit
4: group:AD_DOMAIN_NAME\desktop_corpaccess allow dir_gen_read,dir_gen_execute,object_inherit,container_inherit
5: group:AD_DOMAIN_NAME\desktop_corpaccess_dev allow dir_gen_read,dir_gen_execute,object_inherit,container_inherit
6: group:AD_DOMAIN_NAME\desktop_svaccess allow dir_gen_read,dir_gen_execute,object_inherit,container_inherit
7: group:AD_DOMAIN_NAME\desktop_svaccess_dev allow dir_gen_read,dir_gen_execute,object_inherit,container_inherit
8: group:AD_DOMAIN_NAME\folder-admin-auth allow dir_gen_read,dir_gen_write,dir_gen_execute,std_delete,object_inherit,container_inherit
9: group:AD_DOMAIN_NAME\fileadmin allow inherited dir_gen_all,object_inherit,container_inherit,inherited_ace
10: group:AD_DOMAIN_NAME\ADadmins allow inherited dir_gen_all,object_inherit,container_inherit,inherited_ace
11: group:AD_DOMAIN_NAME\domain admins allow inherited dir_gen_all,object_inherit,container_inherit,inherited_ace
12: group:AD_DOMAIN_NAME\domain users allow inherited dir_gen_read,dir_gen_execute,container_inherit,inherited_ace

To verify the permissions on the files within that folder, run “ls -la”.

ISILON-NODE-1# ls -la
total 6611740
drwxrwx--- + 34 AD_DOMAIN_NAME\account1 AD_DOMAIN_NAME\domain users 1630 May 16 13:26 .
drwxrwx--- + 4 root wheel 51 Feb 19 20:31 ..
drwxrwx--- + 2 AD_DOMAIN_NAME\filetrans AD_DOMAIN_NAME\domain users 245 Apr 26 07:06 File1
drwxrwx--- + 2 AD_DOMAIN_NAME\filetrans AD_DOMAIN_NAME\domain users 253 Apr 25 17:15 File2
drwxrwx--- + 2 AD_DOMAIN_NAME\filetrans AD_DOMAIN_NAME\domain users 249 Apr 25 13:59 File3
drwxrwx--- + 2 AD_DOMAIN_NAME\filetransr AD_DOMAIN_NAME\domain users 253 Apr 25 17:16 File4
...

Isilon Port Usage

Below is a table of Isilon port usage and the OneFS services that use them.   Additional detail is available in the Isilon Security Configuration guide on Dell EMC’s support site.

Affected Services | Port | Service | Protocol | Connection Type
FTP | 20 | ftp-data | TCP, IPv4, IPv6 | External, Outbound
FTP | 21 | ftp | TCP, IPv4, IPv6 | External, Inbound
SSH | 22 | ssh | TCP, IPv4, IPv6 | External, Inbound
Telnet | 23 | telnet | TCP | External, Inbound
SMTP | 25 | smtp | TCP, IPv4 | External, Outbound
SmartConnect | 53 | domain | TCP, UDP, IPv4 | External, Outbound
SmartConnect | 53 | domain | UDP, IPv4 | External, Inbound
HTTP | 80 | http | TCP, IPv4, IPv6 | External, Inbound
Kerberos | 88 | kerberos | TCP, UDP, IPv4, IPv6 | External, Outbound
Portmapper | 111 | sunrpc | TCP, UDP, IPv4, IPv6 | External, Inbound
Time Service | 123 | ntp | UDP, IPv4, IPv6 | External, Inbound
NetBIOS | 137 | netbios-ns | IPv4 | External, Inbound
NetBIOS | 138 | netbios-dgm | IPv4 | External, Inbound
NetBIOS | 139 | netbios-ssn | TCP, IPv4 | External, Inbound
SNMP | 161 | snmp | UDP, IPv4 | External, Inbound
SNMP Traps | 162 | snmptrap | UDP, IPv4 | External, Inbound
NFS | 2049 | nfsd | TCP, UDP, IPv4, IPv6 | External, Inbound
NFSv3 Mount | 300 | nfsmountd | TCP, UDP, IPv4, IPv6 | External, Inbound
NFSv3 Notifications | 302 | nfsstatd | TCP, UDP, IPv4, IPv6 | External, Inbound
NFSv3 Locking | 304 | nfslockd | TCP, UDP, IPv4, IPv6 | External, Inbound
DNS Caching | 307 | isi_cbind_d | UDP, IPv4 | External, Inbound
LDAP | 389 | ldap | TCP, IPv4, IPv6 | External, Outbound
LDAP | 636 | ldap | TCP, IPv4, IPv6 | External, Outbound
HTTPS | 443 | https | TCP, IPv4, IPv6 | External, Inbound
SMB1/2 Services | 445 | microsoft-ds | TCP, IPv4 | External, Outbound
Syslog | 514 | syslog | TCP, IPv4 | Internal, Inbound
MSDP | 639 | msdp | UDP, IPv4 | Internal
Entrust SPS | 640 | entrust-sps | UDP, IPv4 | Internal
Secure FTPS | 989 | ftps-data | TCP, IPv4, IPv6 | External, Outbound
Secure FTPS | 990 | ftps | TCP, IPv4, IPv6 | External, Inbound
SyncIQ | 2098 | isi_repl_pworker | TCP, IPv4, IPv6 | External, Inbound
SyncIQ | 3148 | isi_repl_bandwidth | TCP, IPv4, IPv6 | External, Inbound
SyncIQ | 3149 | isi_repl_bandwidth | TCP, IPv4, IPv6 | External, Inbound
SyncIQ | 5667 | isi_migr_sworker | TCP, IPv4, IPv6 | External, Inbound
iSCSI | 3260 | iscsi-target | TCP, IPv4, IPv6 | External, Inbound
MS AD Global Catalog | 3268 | n/a | TCP, IPv4 | External, Outbound
ISI Stats | 6116 | isi_stats_d | | External, Inbound
ISI Stats | 7117 | isi_stats_d | | External, Inbound
HDFS (Hadoop) | 8020 | hdfs | TCP | External, Inbound
HDFS (Hadoop) | 8021 | hdfs | TCP, IPv4, IPv6 | External, Inbound
Isilon WebGUI (https) | 8080 | n/a | TCP, IPv4, IPv6 | External, Inbound
REST API (https) | 8080 | n/a | TCP, IPv4, IPv6 | External, Inbound
VASA (vCenter) | 8081 | vasa | TCP | External, Inbound
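
OneFS runs on FreeBSD, so if you want to spot-check which of these ports a node is actually listening on, the standard netstat utility is available from the shell. A quick sanity check (not a substitute for the Security Configuration guide) looks like this:

ISILON-NODE-1# netstat -an | grep LISTEN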

What is VPLEX?

We are looking at implementing a storage virtualization device and I started doing a bit of research on EMC's product offering.  Below is a summary of some of the information I've gathered, including a description of what VPLEX does as well as some pros and cons of implementing it.  This is all info I've gathered by reading various blogs, looking at EMC documentation, and talking to our local EMC reps.  I don't have any first-hand experience with VPLEX yet.

What is VPLEX?

VPLEX at its core is a storage virtualization appliance. It sits between your arrays and hosts and virtualizes the presentation of storage arrays, including non-EMC arrays.  Instead of presenting storage to the host directly you present it to the VPLEX. You then configure that storage from within the VPLEX and then zone the VPLEX to the host.  Basically, you attach any storage to it, and like in-band virtualization devices, it virtualizes and abstracts them.

There are three VPLEX product offerings, Local, Metro, and Geo:

Local.  VPLEX Local manages multiple heterogeneous arrays from a single interface within a single data center location. VPLEX Local allows increased availability, simplified management, and improved utilization across multiple arrays.

Metro.  VPLEX Metro with AccessAnywhere enables active-active, block-level access to data between two sites within synchronous distances.  Host application stability needs to be considered: depending on the application, it is recommended that latency for Metro be <= 5ms.  The combination of virtual storage with VPLEX Metro and virtual servers allows for the transparent movement of VMs and storage across longer distances and improves utilization across heterogeneous arrays and multiple sites.

Geo.  VPLEX Geo with AccessAnywhere enables active-active, block-level access to data between two sites within asynchronous distances.  Geo improves the cost efficiency of resources and power.  It provides the same distributed device flexibility as Metro but extends the distance up to 50ms of network latency.


What are some advantages of using VPLEX? 

1. Extra Cache and Increased IO.  VPLEX has a large cache (64GB per node) that sits in-between the host and the array. It offers additional read cache that can greatly improve read performance on databases because the additional cache is offloaded from the individual arrays.

2. Enhanced options for DR with RecoverPoint. The DR benefits increase when integrating RecoverPoint with VPLEX Metro or Geo to replicate the data using real-time replication. It includes a capacity-based journal for very granular rollback capabilities (think of it as a DVR for the data center).  You can also use the native bandwidth reduction features (compression & deduplication) or disable them if you have WAN optimization devices installed, like those from Riverbed.  If you want active/active read/write access to data across a large distance, VPLEX is your only option; NetApp's V-Series and HDS USP-V can't do it unless they are in the same data center. Here are a few more advantages:

  • DVR-like recovery to any point in time
  • Dynamic synchronous and asynchronous replication
  • Customized recovery point objectives that support any-to-any storage arrays
  • WAN bandwidth reduction of up to 90% of changed data
  • Non-disruptive DR testing

3. Non-disruptive data mobility & reduced maintenance costs. One of the biggest benefits of virtualizing storage is that you'll never have to take downtime for a migration again. It can take months to migrate production systems, and without virtualization downtime is almost always required. Migration is also expensive: it takes a great deal of resources from multiple groups, and the older array must stay on the floor (and on maintenance) during the process.  By shortening the migration timeframe, hardware maintenance costs will drop, saving money.  Maintenance can be a significant part of the storage TCO, especially if the arrays are older or are going to be used for a longer period of time.  Virtualization can be a great way to reduce those costs and improve the return on assets over time.

4. Flexibility based on application IO.  The ability to move and balance LUN I/O among multiple smaller arrays non-disruptively allows you to balance workloads and respond to performance demands quickly.  Note that underlying LUNs can be aggregated or simply passed through the VPLEX.

5. Simplified management and vendor neutrality.  Implementing VPLEX for all storage-related provisioning tasks reduces complexity when working with multiple vendors' arrays.  It allows you to manage multiple heterogeneous arrays from a single interface.  It also makes zoning easier: hosts only need to be zoned to the VPLEX rather than to every array on the floor, which makes it faster and easier to provision new storage to a new host.

6. Increased leverage among vendors.  This advantage would be true with any virtualization device.  When controller-based storage virtualization is employed, there is more flexibility to pit vendors against each other to get the best hardware, software, and maintenance costs.  Older arrays could be commoditized, which could allow increased leverage to negotiate the best rates.

7. Use older arrays for archiving. Data could be seamlessly demoted or promoted to different arrays based on an array's age, its performance levels, and its related maintenance costs.  Older arrays could be retained for capacity and demoted to a lower tier of service, and even with the increased maintenance costs this could still save money.

8. Scale.  You can scale it out and add more nodes for more performance when needed.  With a VPLEX Metro configuration, you could configure VPLEX with up to 16 nodes in the cluster between the two sites.

What are some possible disadvantages of VPLEX?

1. Licensing costs. VPLEX is not cheap.  Also, it can be licensed per frame on the VNX but must be licensed per TB on the CX series, so large, older CX arrays will cost a lot more to license.

2. It’s one more device to manage.   The VPLEX is an appliance, and it’s one more thing (or things) that has to be managed and paid for.

3. Added complexity to infrastructure.  Depending on the configuration, there could be multiple VPLEX appliances at every site, adding considerable complexity to the environment.

4. Managing mixed workloads in virtual environments.  When heavy workloads are all mixed together on the same array there is no way to isolate them, and the ability to migrate a workload non-disruptively to another array is one of the reasons to implement a VPLEX.  In practice, however, those VMs may end up being moved to another array with the same storage limitations as the one they came from.  The VPLEX could simply be solving a problem temporarily by moving it to a different location.

5. Lack of advanced features. The VPLEX has no advanced storage features such as snapshots, deduplication, replication, or thin provisioning; it relies on the underlying storage array for those types of features.  As an example, you may want block-based deduplication on an HDS array by placing a NetApp V-Series in front of it and using NetApp's dedupe to enable it.  That is only possible with a NetApp V-Series or HDS USP-V type device; the VPLEX can't do it.

6. Write cache performance is not improved.  The VPLEX uses write-through caching, while its competitors' storage virtualization devices use write-back caching. When there is a write I/O in a VPLEX environment, the I/O is cached on the VPLEX, but it is passed all the way back to the virtualized storage array before an acknowledgement is sent to the host.  The NetApp V-Series and HDS USP-V store the I/O in their own cache and immediately return an acknowledgement to the host; the I/Os are then flushed to the back-end storage array using their respective write coalescing and cache flushing algorithms.  Because of that write-back behavior, a performance gain above and beyond the performance of the underlying storage arrays is possible due to the caching on those controllers.  With VPLEX's write-through design, there is no write I/O performance gain beyond that of the existing storage.

What is EMC’s CAVA / Common Event Enabler?

I was recently asked to do a bit of research on EMC's CAVA product, as we are looking for antivirus solutions for our CIFS-based shares.  I found very little info with general Google searches about exactly what CAVA is and what it does, so I thought I'd share some of the information that I did find after a bit of research and talking to my local EMC rep.

Basically, CAVA is a service that runs on the Celerra (or VNX) data mover in conjunction with a Windows server running a third-party antivirus engine (along with EMC's CAVA API agent) to handle the conversation.  It only facilitates the communication to an existing AV server; EMC doesn't provide the actual AV software.  It supports Symantec, McAfee, eTrust, Sophos, Kaspersky, and Trend Micro.  In a nutshell, CAVA employs three key components:  software on the data mover (the VC client), software on a Windows AV server (CAVA), and your third-party AV engine on a Windows server.

CAVA used to stand for “Celerra Anti Virus Agent”, but was changed to “Common AntiVirus Agent”.  Quite convenient that they could re-use the “C” without changing the acronym, right? The product is now officially known as “Common Event Enabler for Windows” by EMC and the package includes CEPA, or the EMC Common Event Publishing Agent, and CAVA, the aforementioned Common Antivirus Agent.  For this post I’m focusing on the Antivirus agent.

CAVA is a fairly straightforward install; however, if implemented incorrectly it can adversely affect your performance. It's important to know how it scans your files, and essential to know how to troubleshoot it and do performance monitoring.  There is definitely a performance hit when using CAVA.

When are files scanned for a virus? 

Each time the Celerra receives a file, it is locked for read access first, and a request is sent to the AV server (or servers) to scan the file.  The Celerra sends the UNC path name to the Windows server and waits for verification that the file is not infected.  Once that verification is complete, the file is made available for user access.

CAVA will scan a file in the following instances: 

  • The first time a file is read, subsequent to the initial implementation of CAVA and after any updates to virus definitions
  • When a file is created, modified, or moved
  • When restoring a file (or files) from backup
  • When renaming a file with a different file extension
  • Whenever an administrator performs a full file system scan (with the server_viruschk command, shown below)
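
As a sketch of that last item, checking the virus checker configuration and scan status on a data mover looks something like this (server_2 is a placeholder data mover name; verify the exact options against the man page on your system):

server_viruschk server_2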

What are the features of CAVA? 

  • Automatic virus definition updates. Files opened after the update will be re-scanned.
  • CAVA Calculator (a free sizing tool to assist in implementation)
  • User notifications on virus detection, configurable by administrators to be sent as notifications to the client, event log entries, or both
  • Scan-on-read can be enabled
  • Event reporting and configuration

What are some implementation considerations? 

  • EMC recommends that an MPFS client system not be configured as the AV server system.
  • CAVA doesn't support a data mover CIFS server using share-level access.
  • Always update the viruschecker.conf file to avoid scanning temp files (see the illustrative excerpt below). It can be modified with the Celerra AV Management snap-in.
  • It's CIFS only. There is no support for NFS or FTP; if those protocols are used to open, modify, or move files, the files will not be scanned.
  • You must check for compatibility with your installed third-party AV software.
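
For illustration, the temp-file exclusions in viruschecker.conf are controlled by mask and exclusion parameters along the lines of the sketch below. I'm recalling the masks= and excl= syntax from documentation rather than from a live system, so treat this as a hypothetical example and confirm the exact format in the CAVA documentation for your version:

masks=*.*
excl=*.tmp:*.temp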

How is it licensed, and how much does it cost?

CAVA is licensed per array; on the VNX series it is included in the Security and Compliance Suite.  Pricing will vary of course, but it's not very expensive relative to the cost of the array; it should be in the range of thousands rather than tens of thousands of dollars.