In this entry I would like to briefly describe how to protect the SVM (vserver) root volume by creating load-sharing mirrors on every node of a cluster. If you are unfamiliar with SVMs, check out my article NetApp cDOT – what is SVM or vserver?. Every vserver has its root volume, which is the entry point for the namespace provided by that SVM (you can read more about namespaces in the NetApp cDOT – Namespace, junction path entry).
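To give you a quick preview, a minimal sketch of the procedure could look like the lines below. The SVM, volume and aggregate names (svm1, svm1_root, svm1_root_ls1, aggr1_node1) are placeholders I picked for illustration; the create-and-mirror steps would be repeated with a destination volume on every node of the cluster before initializing the load-sharing set.

cDOT_cluster::> volume create -vserver svm1 -volume svm1_root_ls1 -aggregate aggr1_node1 -type DP -size 1GB
cDOT_cluster::> snapmirror create -source-path svm1:svm1_root -destination-path svm1:svm1_root_ls1 -type LS
cDOT_cluster::> snapmirror initialize-ls-set -source-path svm1:svm1_root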
NetApp cDOT – Volume Move
In the next few entries I would like to describe some of the non-disruptive operations that are possible within Clustered Data ONTAP. In this article I will focus on the DataMotion for Volumes functionality. It is a built-in feature that you get with your clustered ONTAP system. As I mentioned in my previous post about SVMs (What is SVM?) and while describing the benefits of a cluster, a data SVM acts as a dedicated virtual storage controller, which is not “linked” to any single node, not even to a single HA pair within the cluster. Volume move is an excellent example of this advantage. You can easily and non-disruptively (continue reading to understand it fully) move a volume between two different nodes; the operation is executed in the background and is practically invisible to your users! There are some things you have to consider, of course, especially if your volumes contain LUNs. I will describe that a bit later.
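To give you an idea how simple it is from the command line, here is a minimal sketch. The SVM, volume and aggregate names (svm1, vol_data1, aggr1_node2) are example placeholders, not anything from a real system:

cDOT_cluster::> volume move start -vserver svm1 -volume vol_data1 -destination-aggregate aggr1_node2
cDOT_cluster::> volume move show -vserver svm1 -volume vol_data1

The move runs in the background, and volume move show lets you track its progress.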
Why do we need volume move anyway?
Volume move (DataMotion for Volumes) is a very useful tool, often used in capacity and performance planning. It might happen (and almost always does) that data aggregate utilization differs across your cluster. Having the possibility to non-disruptively move volumes across aggregates is a great benefit in such cases. Also, a particular node in the cluster might have higher utilization (for example in terms of IOs) than the others. You can choose a destination aggregate that belongs to a different node (even in another HA pair within the same cluster!) to even out the performance impact on each node.
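For example, before picking a destination you could quickly compare aggregate fill levels and owning nodes with something like the command below (exact field names can vary slightly between ONTAP releases):

cDOT_cluster::> storage aggregate show -fields node,availsize,percent-used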
NetApp cDOT – Create and Extend an Aggregate
As you probably already know, ONTAP manages disks in groups called aggregates. An aggregate is built from one or more RAID groups. Any given disk can be a member of only a single aggregate. Of course, if you read my previous entry, you already know this is not always true. The exception is when you have partitioned disks (if you haven’t checked that out yet, I wrote a short article on that subject here: NetApp cDOT – Advanced Drive Partitioning).
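If you would like to see that layout on a live system, one way is to look at the aggregate status; the aggregate name aggr1_node1 below is just a placeholder:

cDOT_cluster::> storage aggregate show-status -aggregate aggr1_node1

The output lists each RAID group in the aggregate together with its member disks (or partitions).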
If terms like aggregate or RAID group are quite new to you, please refer to my older entries: NetApp – Data ONTAP storage architecture – part 1 and NetApp – Data ONTAP storage architecture – part 2.
In this entry I would like to show you how to build a new aggregate, how to extend one, how to manage your spare disks, and how to utilize your brand new disks.
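As a rough preview of the first two points, the commands involved look more or less like this. The aggregate name aggr1_data, the node name and the disk counts are made-up examples; always review the proposed layout before confirming on a real system:

cDOT_cluster::> storage aggregate create -aggregate aggr1_data -node cDOT_node1 -diskcount 10
cDOT_cluster::> storage aggregate add-disks -aggregate aggr1_data -diskcount 4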
Manage new disks
Once a new shelf is connected to your cluster, the new disks should be assigned to a node as spare disks. Whether the disks will be assigned automatically or not depends on your auto-assign option and auto-assign policy. You can verify both with the disk option show command:
cDOT_cluster::> disk option show
Node          BKg. FW. Upd.  Auto Copy    Auto Assign   Auto Assign Policy
------------- -------------  ------------ ------------- ------------------
cDOT_node1    on             on           on            shelf
cDOT_node2    on             on           on            shelf
2 entries were displayed.
Based on my output you can see that auto assign is on and the policy is shelf. The possible policies are listed below (a sketch of changing the policy follows the list):
- stack – automatic ownership at the stack or loop level
- shelf – automatic ownership at the shelf level
- bay – automatic ownership at the bay level
- default – policy depends on the system model
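If the default policy does not match your shelf layout, it can be changed per node. A sketch, assuming we want node cDOT_node1 from the output above to assign ownership at the stack level:

cDOT_cluster::> storage disk option modify -node cDOT_node1 -autoassign on -autoassign-policy stack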
If there are multiple stacks or shelves that have different ownership, one disk must be manually assigned on each shelf or stack before automatic ownership assignment will work.
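A minimal sketch of that manual step is shown below; the disk name 1.1.0 is a placeholder, so first list the unowned disks and pick a real one:

cDOT_cluster::> storage disk show -container-type unassigned
cDOT_cluster::> storage disk assign -disk 1.1.0 -owner cDOT_node1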