After spending a few days working on a DR test recovery, I thought I’d describe the process along with a few roadblocks I hit along the way, since we had some specific business requirements that had to be met. Our host site has a VNX5500 and our DR site has an NS-960, and we have Celerra Replicator configured to replicate the VDM and all of the production filesystems from one site to the other.
Here were my business requirements for this test:
- Replicate the VDM, production CIFS server and production filesystems from the host site to DR site.
- Fail over (or bring up a copy of) the VDM from the host site to the DR site, mounting the replicated VDM at the DR site.
- Fail over (or bring up a copy of) the production CIFS server at the DR site.
- Create R/W checkpoints of all replicated filesystems at DR site to allow for appropriate user and application testing.
- Share the R/W checkpoints of the replicated filesystems on the CIFS server at the DR site rather than the original replicated filesystems, so the original replicated data is not touched and does not need to be replicated again after the test.
I started off by setting up replication jobs for our VDM and all filesystems. Once those were complete (after several weeks of data transfers) I was ready to test.
Step 1: Replicate VDM and production filesystems
This post isn’t meant to detail the process of actually setting up the initial replications, just how to get the replicated data working and accessible at your DR site. Setting up replication is a well-documented procedure that is covered in EMC’s guide “Using Celerra Replicator (V2)”, P/N 300-009-989. Once the VDMs and filesystems are replicated, you’re ready for the next step.
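If you want to confirm the state of your sessions from the control station CLI rather than Unisphere, nas_replicate can list them. A quick sketch (the session name is just an example; check the Replicator guide for the options available on your DART version):

# List all replication sessions and their current states
nas_replicate -list

# Show the details of a single session
nas_replicate -info <session_name>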
Step 2: Bring up the VDM at the DR site
The first step in my testing requirements is to bring up the VDM at the DR site.
Failed attempt 1:
I initially created a new, separate replication session for the VDM; I didn’t want to touch the actual production VDM, since this was a DR test and not an actual disaster.
After replicating a new copy of the VDM, I attempted to load it in the CLI with the command below. This must be done from the CLI as there is no option to do this step in Unisphere.
nas_server -vdm <VDMNAME> -setstate loaded
It failed with this error:
Error 12066: root_fs <VDMNAME> is the source or destination object of a file system and cannot be unmounted or is the source or destination object of a VDM replication session and cannot be unloaded.
It was pretty obvious here that you need to stop the replication before you can load the VDM. So, as a next step, I stopped the replication with a simple right-click and Stop on the source side and tried again.
It failed with this error:
Error 4038: <interface_name_1> <interface_name_2> : interfaces not available on server_2
So, it looks like the interface names need to be the same. I didn’t really want to change the interface names if I didn’t have to, so I tried a different approach next.
Failed attempt 2:
This time I thought I’d create a blank VDM on the destination side first and replicate the host VDM to it. My thinking was that it wouldn’t carry over the interface name requirement, and I still wouldn’t have to stop replication on the actual prod VDM, which I didn’t want to use in a test.
I did just that. I created a blank VDM on the DR side, then started a new replication session from the host side and chose it as the destination, making sure to choose the overwrite option when I replicated to it. The replication was successful. I stopped the replication on the source side after it was complete, and then attempted to load the new replicated VDM on the DR side.
Voila! It worked:
nas_server -vdm <VDMNAME> -setstate loaded
id = 10
name = vdm_replica
acl = 0
type = vdm
server = server_2
rootfs = root_fs_vdm_replica
I18N = UNICODE
Actual = loaded,ready
Now that it was loaded up, it was time to move on to the next step and create the R/W checkpoints of the filesystems. This is where the process failed again.
After clicking on the drop down box for “Choose Data Mover”, I got this error:
No file systems exist
Query file systems vdm_replica: All. File system not found.
I’m not sure why this failed, but since the VDM couldn’t find the filesystems it was time to try another approach again.
After my first two failures, it looked pretty obvious that I’d need to change the interface names and use the original replicated VDM. Making a copy of the VDM to a blank VDM didn’t work because it couldn’t see the filesystems, and using the original requires the interface names to be the same. The lesson learned here is to make sure you have matching ports on your host and DR Celerras, and use the same interface names. If I had done that, my first attempt would have been successful.
If the original VDM has four CIFS servers (each with its own interface) and the DR Celerra only has one port configured on the network, you’d be out of luck: you wouldn’t have enough interfaces to rename them all to match, and you’d never be able to load your VDM. The VDM only checks that the interface names are the same, NOT the IPs. The IPs can be different to match your DR network, and the IPs that are already assigned to the DR site interfaces will NOT change when you load the VDM.
In my case, the host Celerra has two CIFS servers, each with its own interface. One is for production, one is for backups.
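If you want to compare the interface names on the two sides before you start, server_ifconfig will list them (run this on both Celerras; server_2 is just the example Data Mover name):

# List all interfaces configured on a Data Mover, with names and IPs
server_ifconfig server_2 -all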
Here are the steps that worked for me (a CLI sketch of the same sequence follows the list):
- Stop the replication of the VDM (You will see it change to a ‘stopped’ state in Unisphere).
- Change the interface names on the DR side (changing IP’s is not necessary) to match the host side.
- Load the VDM with the command nas_server -vdm <VDMNAME> -setstate loaded
- You will see the VDM status change from ‘unloaded’ to ‘OK’.
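For reference, here’s a rough CLI version of that sequence. The session and interface names are hypothetical, and note that “renaming” an interface on the Celerra is really a delete and re-create, so double-check the device, IP, mask, and broadcast values for your environment before running it:

# Stop the VDM replication session (same as right-click/Stop in Unisphere)
nas_replicate -stop <session_name> -mode both

# "Rename" the DR-side interface: delete it, then re-create it on the
# same device with the production interface name and the DR-side IP info
server_ifconfig server_2 -delete dr_int_name
server_ifconfig server_2 -create -Device cge0 -name prod_int_name -protocol IP 10.1.1.50 255.255.255.0 10.1.1.255

# Load the replicated VDM
nas_server -vdm <VDMNAME> -setstate loaded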
Step 3: Bring up the CIFS server at the DR site
After you’ve completed the previous step, the VDM will be loaded using the exact same interfaces as production, and the CIFS servers will be automatically created as well. If a CIFS server uses cge1-0 on server_2 on the host side, it will now be set up with the same name using cge1-0 on server_2 on the destination (DR) side.
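You can confirm which CIFS servers came over with the VDM by listing its CIFS configuration (vdm_replica is the example VDM name from the output above):

# Show the CIFS servers defined on the loaded VDM
server_cifs vdm_replica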
This would be very useful in a real disaster, but for this test I wanted to create an alternate CIFS server with a different IP, as the domain controller, DNS servers, and IP range used at our DR site are different. You could choose to use the same CIFS server that was replicated with the VDM, but for our test I decided to bring up an entirely new CIFS server. We use DFS to access all of our shares in production, so the name of the CIFS server won’t matter for our testing purposes; we would just need to update DFS with the new name on the DR network.
Here are the steps I took to bring up the CIFS server for DR:
- Gather IP information from the DR team. You will need a valid IP and subnet mask for the new CIFS server.
- Verify IP config on new DR network.
- Check that the default route matches the DR network.
- Check that the DNS server entries match the DNS servers on the DR network.
- Verify that the domain controller in the DR network is up and available.
- Modify the interface of your choice with the correct IP information for the CIFS server.
- Create the CIFS server and join it to the DR Active Directory domain (see the sketch after this list).
- If you need to test an AD account, use this command:
- server_cifssupport <vdm_name> -cred -name <username> -domain <domain_name>
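Here’s a hedged sketch of the CIFS server commands for the steps above. The compname, domain, and interface values are all hypothetical; adjust them (and the admin account) for your DR domain:

# Create the new CIFS server on the VDM, bound to the re-IP'd interface
server_cifs <vdm_name> -add compname=drcifs01,domain=drtest.local,interface=dr_int_name

# Join it to the DR domain (prompts for the admin account's password)
server_cifs <vdm_name> -Join compname=drcifs01,domain=drtest.local,admin=administrator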
That’s it for this step. The CIFS server was successfully joined to the domain and I was able to ping it from one of our previously recovered Windows servers on the DR network.
Step 4: Create Read/Write checkpoints of all replicated filesystems
One of my business requirements for this test was to allow read/write access to the replicated filesystems without having to actually change the production data. The easy way to accomplish this is to create a single read/write checkpoint (snapshot) of each filesystem. To do this, go to the checkpoint area in Unisphere, click create, and select the “Writeable Checkpoint” checkbox when you create the checkpoint. You can also script the process and run it from the CLI on the control station.
First, create each checkpoint with this command:
nas_ckpt_schedule -create <ckpt_fs_name> -filesystem <fs_name> -recurrence once
Second, create a read/write copy of each checkpoint with this command:
fs_ckpt <ckpt_fs_name> -name <rw_ckpt_fs_name> -Create -readonly n
I would recommend running no more than two of these at a time and letting them finish. I’ve had issues in the past where running dozens of checkpoint jobs at once caused them to hang and never complete, requiring a reboot of the data mover to correct.
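If you have a long list of filesystems, a small loop on the control station keeps the jobs serial, one filesystem at a time. A sketch using the two commands above (the filesystem and checkpoint names are hypothetical; verify each checkpoint completes before its writeable copy is created):

# Create a checkpoint, then a writeable copy of it, for each filesystem
for fs in fs_prod01 fs_prod02 fs_prod03; do
    nas_ckpt_schedule -create ${fs}_ckpt1 -filesystem ${fs} -recurrence once
    fs_ckpt ${fs}_ckpt1 -name ${fs}_ckpt1_writeable1 -Create -readonly n
done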
Step 5: Share the replicated filesystems on the DR CIFS server
Once all of the R/W checkpoints are created, they can be shared on the DR CIFS server with the same share names as the original production share names. This allows all of our recovered application and file servers to connect to the same names, simplifying the configuration of the test environment.
You can use a CLI command to export each r/w copy to share them on your CIFS Server:
server_export [vdm] -P cifs -name [filesystem]_ckpt1 -option netbios=[cifserver] [filesystem]_ckpt1_writeable1
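This is also easy to loop if you have many filesystems. A sketch, assuming the checkpoint names follow the _ckpt1/_ckpt1_writeable1 pattern from the loop above, and that vdm_replica and DRCIFS01 are your example VDM and NetBIOS names:

# Share each writeable checkpoint copy on the DR CIFS server
for fs in fs_prod01 fs_prod02 fs_prod03; do
    server_export vdm_replica -P cifs -name ${fs}_ckpt1 -option netbios=DRCIFS01 ${fs}_ckpt1_writeable1
done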
Step 6: Cleanup
That’s it! We had a successful DR test. Once the test was complete, I performed the following cleanup steps (a CLI sketch follows the list):
- Remove CIFS server shares
- Remove CIFS server
- Change interfaces on the DR Celerra back to their original names and IPs.
- Unload the replicated VDM with this command:
- nas_server -vdm <VDMNAME> -setstate mounted
- Restart the VDM replication from the source
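For reference, here are hedged CLI equivalents for the cleanup. The share, server, and session names are the hypothetical ones from earlier, and note that restarting a stopped session may require a full or differential copy depending on what changed while the VDM was loaded:

# Remove a checkpoint share from the DR CIFS server (repeat per share)
server_export vdm_replica -unexport -perm -name fs_prod01_ckpt1

# Unjoin the DR CIFS server from the domain, then delete it
server_cifs <vdm_name> -Unjoin compname=drcifs01,domain=drtest.local,admin=administrator
server_cifs <vdm_name> -delete compname=drcifs01

# Unload the replicated VDM back to the mounted state
nas_server -vdm <VDMNAME> -setstate mounted

# Restart the VDM replication from the source side (you may need
# -overwrite_destination if the destination changed while loaded)
nas_replicate -start <session_name>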