
NetApp cDOT – SVM root volume protection

In this entry I would like to briefly describe how to protect the SVM (vserver) root volume by creating load-sharing mirrors on every node of a cluster. If you are unfamiliar with SVMs, check out my article NetApp cDOT – what is SVM or vserver?. Every vserver has its root volume, which is the entry point for the namespace provided by that SVM (you can read more about namespaces in the NetApp cDOT – Namespace, junction path entry).
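As a quick sketch of what the procedure boils down to (the volume and aggregate names below are hypothetical, and I assume a two-node cluster): create a DP-type volume on each node, create a load-sharing (LS) SnapMirror relationship from the root volume to each of them, and initialize the whole LS set:

cDOT_cluster::> volume create -vserver svm1 -volume svm1_root_m1 -aggregate aggr1_node1 -size 1g -type DP
cDOT_cluster::> volume create -vserver svm1 -volume svm1_root_m2 -aggregate aggr1_node2 -size 1g -type DP
cDOT_cluster::> snapmirror create -source-path svm1:svm1_root -destination-path svm1:svm1_root_m1 -type LS
cDOT_cluster::> snapmirror create -source-path svm1:svm1_root -destination-path svm1:svm1_root_m2 -type LS
cDOT_cluster::> snapmirror initialize-ls-set -source-path svm1:svm1_root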

NetApp cDOT – Create and Extend an Aggregate

As you probably already know, ONTAP manages disks in groups called aggregates. An aggregate is built from one or more RAID groups. Any given disk can be a member of only a single aggregate. Of course, if you read my previous entry, you already know this is not always the case. The exception is when you have partitioned disks (if you haven't checked that out yet, I wrote a short article on that subject here: NetApp cDOT – Advanced Drive Partitioning).

If terms like aggregate or RAID group are quite new to you, please refer to my older entries: NetApp – Data ONTAP storage architecture – part 1 and NetApp – Data ONTAP storage architecture – part 2.

In this entry I would like to show you how to build a new aggregate, how to extend an existing one, how to manage your spare disks, and how to utilize your brand-new disks.
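To give you an idea of where we are heading, here is a minimal sketch of the two core operations (the aggregate name, node name, and disk counts are just examples; the details follow below):

cDOT_cluster::> storage aggregate create -aggregate aggr1_node1 -node cDOT_node1 -diskcount 5 -raidtype raid_dp
cDOT_cluster::> storage aggregate add-disks -aggregate aggr1_node1 -diskcount 2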

Manage new disks

Once a new shelf is connected to your cluster, the new disks should be assigned to a node as spare disks. Whether the disks are assigned automatically or not depends on the auto-assign option and the auto-assign policy. You can verify both with the disk option show command:

cDOT_cluster::> disk option show                   
Node           BKg. FW. Upd.  Auto Copy     Auto Assign    Auto Assign Policy
-------------  -------------  ------------  -------------  ------------------
cDOT_node1     on             on            on             shelf
cDOT_node2     on             on            on             shelf
2 entries were displayed.

Based on my output you can notice that auto-assign is on and the policy is shelf (a sketch for changing these settings follows the list). Possible policies are:

  • stack – automatic ownership at the stack or loop level
  • shelf – automatic ownership at the shelf level
  • bay – automatic ownership at the bay level
  • default – policy depends on the system model
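Both the option and the policy can be changed per node. A minimal sketch, assuming you want bay-level auto-assignment on node 1:

cDOT_cluster::> storage disk option modify -node cDOT_node1 -autoassign on -autoassign-policy bay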

If there are multiple stacks or shelves with different ownership, one disk must be manually assigned on each shelf or stack before automatic ownership assignment will work.
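A sketch of that manual step (the disk name below is hypothetical): list the unowned disks first, then assign one of them to the node that should own the shelf, and auto-assignment takes care of the rest:

cDOT_cluster::> storage disk show -container-type unassigned
cDOT_cluster::> storage disk assign -disk 1.10.0 -owner cDOT_node1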


NetApp cDOT – How to create a snapshot schedule?

Clustered Data ONTAP is in many ways quite different from its predecessor, ONTAP 7-mode. Even a simple task, like creating a snapshot schedule, might be confusing at the beginning for ex-7-mode administrators. In clustered ONTAP a lot of things are now policy-based. One of those things is snapshot schedules. What does that mean? It means that if you want to check the snapshot schedule, you first have to check which policy is assigned to a volume, for example:

cDOT01::> volume show -vserver svm1 -volume may -fields snapshot-policy
vserver volume snapshot-policy
------- ------ ---------------
svm1    may    40dayssnap

40dayssnap is the policy assigned to volume may, which is located on SVM (vserver) svm1. OK, let's see what this policy holds:
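A sketch of how to inspect it, assuming the standard cDOT command set (the output lists the schedules attached to the policy and how many snapshots each schedule retains):

cDOT01::> volume snapshot policy show -policy 40dayssnap

The schedules themselves are regular cron-style job schedules, which you can list with job schedule cron show.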