NetApp cDOT – Restore SVM root volume

In my previous entry I briefly described how you can protect the SVM root volume by using load-sharing mirrors. (The post can be found here: NetApp cDOT – SVM root volume protection). If you haven’t read it, and you are not sure what the consequences of an inaccessible root volume are, I would encourage you to give that article a try. Long story short, each SVM (Storage Virtual Machine, a.k.a. vserver) has its own namespace. The root volume is the root (/) path of the SVM. If the SVM root volume becomes unavailable, all NAS (CIFS/NFS) clients lose access to all shares from that particular SVM. If you want to read a little bit more about the namespace concept, check out my other entry: NetApp cDOT – Namespace, junction path.

Remember: when an SVM root volume becomes unavailable, it is disruptive for all NAS clients! Never “experiment” on a production environment.

Promoting Load-Sharing mirror copy

I will promote a load-sharing mirror copy to restore the root volume of the SVM (Storage Virtual Machine). Let me bring back the configuration from my previous entry to show, using an example, how that can be done.

cluster1::> snapmirror show -vserver svm1
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
cluster1://svm1/svm1_rootvol LS cluster1://svm1/svm1_rootvol_m1 Snapmirrored Idle - true -
                                cluster1://svm1/svm1_rootvol_m2 Snapmirrored Idle - true -
2 entries were displayed.

cluster1::>

In my case cluster1 has two nodes, and svm1 has a root volume called svm1_rootvol. This root volume has an ls-set (load-sharing mirror set) with two mirrors, svm1_rootvol_m1 and svm1_rootvol_m2. Let’s assume that:

  • svm1_rootvol became unavailable due to an unknown error (for example, the volume was deleted by accident, the hosting aggregate went offline, etc.),
  • we would like to re-enable access to the SVM for NAS (CIFS/NFS) clients by promoting the load-sharing mirror copy svm1_rootvol_m2.

To perform this action I will execute snapmirror promote. I will also change my privilege level to advanced to verify the vsroot value (you can read more about shell privilege levels here).

cluster1::> snapmirror show -vserver svm1
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
cluster1://svm1/svm1_rootvol LS cluster1://svm1/svm1_rootvol_m1 Snapmirrored Idle - true -
                                cluster1://svm1/svm1_rootvol_m2 Snapmirrored Idle - true -
2 entries were displayed.

cluster1::> snapmirror promote -destination-path svm1:svm1_rootvol_m2

Warning: Promote will delete the read-write volume cluster1://svm1/svm1_rootvol and replace it with cluster1://svm1/svm1_rootvol_m2.
Do you want to continue? {y|n}: y
[Job 130] Job succeeded: SnapMirror: done

cluster1::> snapmirror show -vserver svm1
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
cluster1://svm1/svm1_rootvol_m2 LS cluster1://svm1/svm1_rootvol_m1 Snapmirrored Idle - true -

cluster1::> set -privilege advanced

Warning: These advanced commands are potentially dangerous; use them only when directed to do so by NetApp personnel.
Do you want to continue? {y|n}: y

cluster1::*> volume show -volume svm1_rootvol_m2 -fields junction-path,vsroot
vserver volume          junction-path vsroot
------- --------------- ------------- ------
svm1    svm1_rootvol_m2 /             true

cluster1::*> set -privilege admin

cluster1::>

I have verified that the promotion was successful with snapmirror show, and confirmed that the root junction-path is set on my volume with the volume show command.

If this promotion is permanent, it would be wise to consider renaming the volume back to the original root volume name, as sketched below.
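A minimal sketch, assuming the lab names used above (svm1 and the promoted copy svm1_rootvol_m2) – the rename itself is a single volume rename command:

cluster1::> volume rename -vserver svm1 -volume svm1_rootvol_m2 -newname svm1_rootvol

After the rename you may also want to bring the ls-set back to its original shape for the new root volume, following the procedure from my previous entry (NetApp cDOT – SVM root volume protection).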

Promoting a new FlexVol volume

What if an SVM root volume hasn’t been protected, and it got removed? Is this an unrecoverable, tragic condition? Thankfully, it is not. In the example below I will create and use a new volume to restore the root volume of the SVM.

Starting from ONTAP 8.2 the SVM root volume is created with a size of 1GB, therefore to execute this task I start by creating a new 1GB (or bigger) volume. The next step is to make my new volume the vsroot (SVM root volume) by executing the volume make-vsroot command.

cluster1::> volume create -vserver svm1 -volume svm1_new_rootvol -aggregate aggr1_cluster1_01 -size 1G -type RW
[Job 132] Job succeeded: Successful

cluster1::> set -privilege advanced

Warning: These advanced commands are potentially dangerous; use them only when directed to do so by NetApp personnel.
Do you want to continue? {y|n}: y

cluster1::*> volume show -volume svm1_new_rootvol -fields junction-path,vsroot
vserver volume           junction-path vsroot
------- ---------------- ------------- ------
svm1    svm1_new_rootvol -             false

cluster1::*> volume make-vsroot -vserver svm1 -volume svm1_new_rootvol
[Job 133] Job succeeded: DONE

cluster1::*>
cluster1::*> volume show -volume svm1_new_rootvol -fields junction-path,vsroot
vserver volume           junction-path vsroot
------- ---------------- ------------- ------
svm1    svm1_new_rootvol /             true

cluster1::*>

Once this is completed, the new SVM root volume is fully functional. Remember to assign the correct export policy to the new volume; it should be the same policy (or a copy with the same rules) as the old SVM root volume had. This step is crucial to make sure all NAS clients have the same access as they had before the switch. You can test CIFS/NFS access by executing export-policy check-access for a couple of your NAS clients (all you have to provide in the command is the client IP, the protocol used, the authentication method – in most cases it’s sys – and the volume/path to which your client should have access). A sketch of both steps follows.
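A minimal sketch, assuming a hypothetical export policy called rootvol_policy (matching whatever the old root volume used) and a hypothetical NFS client at 10.0.0.50 – adjust the names, IP and protocol to your environment:

cluster1::> volume modify -vserver svm1 -volume svm1_new_rootvol -policy rootvol_policy

cluster1::> vserver export-policy check-access -vserver svm1 -volume svm1_new_rootvol -client-ip 10.0.0.50 -authentication-method sys -protocol nfs3 -access-type read

The check-access command (available since ONTAP 8.3) walks the junction path down to the given volume and reports which export policy and rule index grants (or denies) access at each step, so it is a quick way to confirm that your clients can traverse the new root volume.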
