Matching LUNs and UIDs when presenting VPLEX LUNs to Unix hosts

Our naming convention for LUNs includes the pool ID, LUN number, owning SP, last four digits of the array's serial number, server name, filesystem/drive letter, and size (in GB). Having all of this information in the LUN name makes it very easy to report on and identify LUNs on a server. This is what our LUN names look like: P1_LUN100_SPA_0000_servername_filesystem_150G

Typically, when we present new LUNs to our AIX administration team for a new server build, they assign each LUN to a specific volume group based on its name. The command 'powermt display dev=hdiskpower#' always includes the LUN name and intended volume group, which makes it easy for our admins to identify a LUN's purpose. Now that we are presenting LUNs through our VPLEX, a powermt display on the server shows the LUN's UID rather than its name. Below is a sample of the output.

root@VIOserver1:/ # powermt display dev=all
Pseudo name=hdiskpower0
VPLEX ID=FNM00141800023
Logical device ID=6000144000000010704759ADDF2487A6 (this would usually be displayed as a LUN name)
state=alive; policy=ADaptive; queued-IOs=0
==============================================================================
--------------- Host --------------- - Stor - -- I/O Path -- -- Stats --
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
1 fscsi1 hdisk8 CL1-0B active alive 0 0
1 fscsi1 hdisk6 CL1-0F active alive 0 0
0 fscsi0 hdisk4 CL1-0D active alive 0 0
0 fscsi0 hdisk2 CL1-07 active alive 0 0

Pseudo name=hdiskpower1
VPLEX ID=FNM00141800023
Logical device ID=6000144000000010704759ADDF2487A1 (this would usually be displayed as a LUN name)
state=alive; policy=ADaptive; queued-IOs=0
==============================================================================
--------------- Host --------------- - Stor - -- I/O Path -- -- Stats --
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
1 fscsi1 hdisk9 CL1-0B active alive 0 0
1 fscsi1 hdisk7 CL1-0F active alive 0 0
0 fscsi0 hdisk5 CL1-0D active alive 0 0
0 fscsi0 hdisk3 CL1-07 active alive 0 0
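
If you just want the hdiskpower-to-UID pairs without scrolling through all of that, you can pull them out of the powermt output with a quick one-liner. This is only a sketch, and it assumes the 'Pseudo name=' and 'Logical device ID=' lines look exactly like the output above:

# Sketch only -- assumes the "Pseudo name=" and "Logical device ID=" line formats shown above
powermt display dev=all | awk -F= '
    /^Pseudo name/       { dev = $2 }                   # remember the hdiskpower device
    /^Logical device ID/ { split($2, a, " ")            # the UID is the first word after the =
                           print dev, tolower(a[1]) }   # print device and lowercase UID
'

That gives you something like: hdiskpower0 6000144000000010704759addf2487a6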

To easily match the UIDs to the LUN names on the server, an extra step needs to be taken in the VPLEX CLI. Log in to the VPLEX management server with a terminal emulator, then run the 'vplexcli' command. That drops you into a shell where the commands below can be entered.

login as: admin
Using keyboard-interactive authentication.
Password:
Last login: Fri Sep 19 13:35:28 2014 from 10.16.4.128
admin@service:~> vplexcli
Trying ::1...
Connected to localhost.
Escape character is '^]'.

Enter User Name: admin

Password:

VPlexcli:/>

Once you're in, run the ls -t command against the storage view, as shown below. Substitute STORAGE_VIEW_NAME with the actual name of the storage view you want a list of LUNs from.

VPlexcli:/> ls -t /clusters/cluster-1/exports/storage-views/STORAGE_VIEW_NAME::virtual-volumes

The output looks like this:

/clusters/cluster-1/exports/storage-views/st1pvio12a-b:
Name Value
---------------  --------------------------------------------------------------
virtual-volumes [(0,P1_LUN411_7872_SPB_VIOServer1_VIO_10G,VPD83T3:6000144000000010704759addf2487a6,10G),
(1,P0_LUN111_7872_SPA_VIOServer1_VIO_10G,VPD83T3:6000144000000010704759addf2487a1,10G)]

Now you can easily see which disk UID is tied to which LUN name.
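
If you'd rather have that as a clean two-column list of LUN name and UID instead of reading it out of the brackets, something like the following works. It's only a sketch: it assumes you've saved the ls -t output to a file (vplex_view.txt is just a made-up name), that you're running it somewhere with GNU grep, and that every entry follows the (number,name,VPD83T3:uid,size) pattern shown above.

# Sketch only -- vplex_view.txt is a saved copy of the ls -t output above
grep -o '([^)]*)' vplex_view.txt | awk -F, '{
    sub(/^\(/, "", $1)            # drop the leading parenthesis from the LUN number
    sub(/^VPD83T3:/, "", $3)      # drop the VPD83T3: prefix so the UID matches powermt
    printf "%-45s %s\n", $2, $3   # LUN name, then UID
}'

The result is a list like: P1_LUN411_7872_SPB_VIOServer1_VIO_10G   6000144000000010704759addf2487a6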

If you would like a list of every storage view and every LUN-to-UID mapping, you can replace the storage view name with an asterisk (*).

VPlexcli:/> ls -t /clusters/cluster-1/exports/storage-views/*::virtual-volumes

The resulting report will show a complete list of LUNs, grouped by storage view:

/clusters/cluster-1/exports/storage-views/VIOServer1:
Name Value
---------------  --------------------------------------------------------------
virtual-volumes [(0,P1_LUN421_9322_SPB_

/clusters/cluster-1/exports/storage-views/VIOServer2:
Name Value
---------------  --------------------------------------------------------------
virtual-volumes [(0,P1_LUN421_9322_SPB_VIOServer2_root_75G,VPD83T3:6000144000000010704759addf248ad9,75G),
(1,R2_LUN125_9322_SPB_VIOServer2_redo2_12G,VPD83T3:6000144000000010704759addf248b09,12G),
(2,R2_LUN124_9322_SPA_VIOServer2_redo1_12G,VPD83T3:6000144000000010704759addf248b04,12G),
(3,P3_LUN906_9322_SPB_VIOServer2_oraarc_250G,VPD83T3:6000144000000010704759addf248aff,250G),
(4,P2_LUN706_9322_SPA_VIOServer2_oraarc_250G,VPD83T3:6000144000000010704759addf248afa,250G)]

/clusters/cluster-1/exports/storage-views/VIOServer2:
Name Value
---------------  --------------------------------------------------------------
virtual-volumes [(1,R2_LUN1025_9322_SPB_VIOServer2_redo2_12G,VPD83T3:6000144000000010704759addf248b09,12G),
(2,R2_LUN1024_9322_SPA_VIOServer2_redo1_12G,VPD83T3:6000144000000010704759addf248b04,12G),
(3,P3_LUN906_9322_SPB_VIOServer2_ora1_250G,VPD83T3:6000144000000010704759addf248aff,250G),
(4,P2_LUN706_9322_SPA_VIOServer2_ora2_250G,VPD83T3:6000144000000010704759addf248afa,250G)]

/clusters/cluster-1/exports/storage-views/VIOServer3:
Name Value
---------------  --------------------------------------------------------------
virtual-volumes [(0,P0_LUN101_3432_SPA_VIOServer3_root_75G,VPD83T3:6000144000000010704759addf248a0a,75G),
(1,P0_LUN130_3432_SPA_VIOServer3_redo1_25G,VPD83T3:6000144000000010704759addf248a0f,25G),
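
To take it one step further, the two lists can be joined on the UID so each hdiskpower device lines up with its LUN name. Again, this is just a sketch with made-up file names: host_uids.txt holds the "hdiskpowerN uid" pairs from the powermt one-liner earlier, and vplex_luns.txt holds the "lun_name uid" pairs from the ls -t parsing.

# Sketch only -- joins the two lists on the lowercase UID
awk 'NR == FNR  { name[$2] = $1; next }                              # first file: map UID -> LUN name
     $2 in name { printf "%-15s %-45s %s\n", $1, name[$2], $2 }' \
     vplex_luns.txt host_uids.txt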

Our VPLEX has only been installed for a few months and our team is still learning. There may be a better way to do this, but it's all I've been able to figure out so far.


The steps for NFS exporting a file system on a VDM

I made a blog post back in January 2014 about creating an NFS export on a virtual Data Mover, but I didn't give much detail on the commands you need to actually do it. As I pointed out back then, you can't NFS export a VDM file system from within Unisphere; however, when a file system is mounted on a VDM, its path from the root of the physical Data Mover can be exported from the CLI.

The first thing that needs to be done is to determine which physical Data Mover the VDM resides on.

Below is the command you'd use to make that determination:

[nasadmin@Celerra_hostname]$ nas_server -i -v name_of_your_vdm | grep server
server = server_4

That shows just the physical Data Mover the VDM is mounted on. Without the grep, you'd get the full output below; if you have hundreds of file systems, the information you're looking for scrolls off the top of the screen, so grep is more efficient.

[nasadmin@Celerra_hostname]$ nas_server -i -v name_of_your_vdm
id = 1
name = name_of_your_vdm
acl = 0
type = vdm
server = server_4
rootfs = root_fs_vdm_name_of_your_vdm
I18N mode = UNICODE
mountedfs = fs1,fs2,fs3,fs4,fs5,fs6,fs7,fs8,…
member_of =
status :
defined = enabled
actual = loaded, active
Interfaces to services mapping:
interface=10-3-20-167 :cifs
interface=10-3-20-130 :cifs
interface=10-3-20-131 :cifs

Next you need to determine the file system path from the root of the Data Mover. This can be done with the server_mount command. As in the prior step, it’s more efficient if you grep for the name of the file system. You can run it without the grep command, but it could generate multiple screens of output depending on the number of file systems you have.

[nasadmin@Celerra_hostname]$ server_mount server_4 | grep Filesystem_03
Filesystem_03 on /root_vdm_3/Filesystem_03 uxfs,perm,rw
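
The mount path can be captured the same way. Again, this is only a sketch that assumes the "Filesystem_03 on /root_vdm_3/Filesystem_03 ..." layout shown above and that $DM was set in the previous step:

# Sketch only -- capture the path of the file system from the root of the Data Mover
FS=Filesystem_03
FSPATH=$(server_mount "$DM" | awk -v fs="$FS" '$1 == fs {print $3}')   # e.g. /root_vdm_3/Filesystem_03
echo "$FSPATH"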

The final step is to actually export the file system using the path from the prior step. The file system must be exported from the root of the physical Data Mover rather than the VDM. Note that once you have exported the VDM file system from the CLI, you can manage it from within Unisphere if you'd like to set server permissions. The '-option anon=0,access=server_name,root=server_name' portion of the CLI command below can be left off if you'd prefer to use the GUI for that.

[nasadmin@Celerra_hostname]$ server_export server_4 -Protocol nfs -option anon=0,access=server_name,root=server_name /root_vdm_3/Filesystem_03
server_4 : done

At this point the client can mount the path with NFS.
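
On a Linux client, for example, the mount would look something like the lines below. The IP address and mount point are made up for illustration; you'd use an IP interface on the physical Data Mover and whatever NFS options your environment calls for.

# Sketch only -- 10.3.20.200 and /mnt/Filesystem_03 are example values
mkdir -p /mnt/Filesystem_03
mount -t nfs -o vers=3 10.3.20.200:/root_vdm_3/Filesystem_03 /mnt/Filesystem_03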