NetApp cDOT – Volume Move

In the next few entries I would like to describe some non-disruptive operations that are possible within Clustered Data ONTAP. In this article I will focus on the DataMotion for Volumes functionality. It is a built-in feature that you get with your clustered ONTAP system. As I mentioned in my previous post about SVMs (What is SVM?) and while describing the benefits of a cluster, a data SVM acts as a dedicated virtual storage controller that is not “linked” to any single node, not even to a single HA pair within the cluster. Volume move is an excellent example of this advantage. You can easily and non-disruptively (continue reading to understand it fully) move a volume between two different nodes; the operation is executed in the background and is practically invisible to your users! There are some things you have to consider, of course, especially if your volumes contain LUNs. I will describe that a bit later.
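
To illustrate, here is a minimal sketch of what such a move looks like from the clustershell (the SVM, volume, and aggregate names below are made up for this example):

cDOT_cluster::> storage aggregate show -fields node,availsize
cDOT_cluster::> volume move start -vserver svm1 -volume vol_data01 -destination-aggregate aggr_node2_sas01
cDOT_cluster::> volume move show -vserver svm1 -volume vol_data01

The first command helps you pick a destination aggregate (possibly owned by another node), the second starts the move in the background, and the third lets you follow its progress and cutover state.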

Why do we need volume move anyway?

Volume move (DataMotion for Volumes) is a very useful tool, often used in capacity and performance planning. It might happen (and almost always does) that data aggregate utilization differs across your cluster. Having the possibility to non-disruptively move volumes between aggregates is a great benefit in such cases. Also, a particular node in the cluster might have higher utilization (for example, in terms of IOs) than the others. You can choose a destination aggregate that belongs to a different node (even in another HA pair within the same cluster!) to even out the performance impact across nodes.  Continue reading

NetApp cDOT – Create and Extend an Aggregate

As you probably already know, ONTAP manages disks in groups called aggregates. An aggregate is built from one or more RAID groups. Any given disk can be a member of only a single aggregate. Of course, if you read my previous entry, you already know this is not always the case. The exception is when you have partitioned disks (if you haven’t checked that out yet, I wrote a short article on that subject here: NetApp cDOT – Advanced Drive Partitioning).

If terms like aggregate or RAID group are quite new to you, please refer to my older entries: NetApp – Data ONTAP storage architecture – part 1 and NetApp – Data ONTAP storage architecture – part 2.
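
If you would like to see this aggregate/RAID group layout on a live system, you can display it from the clustershell (the aggregate name below is just an example, a sketch rather than output from a real system):

cDOT_cluster::> storage aggregate show-status -aggregate aggr0_node1

The output lists every RAID group in the aggregate together with its data, parity, and dparity disks.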

In this entry I would like to show you how to build a new aggregate, how to extend an existing one, how to manage your spare disks, and how to utilize your brand new disks.

Manage new disks

Once a new shelf is connected to your cluster, the new disks should be assigned to a node as spare disks. Whether the disks are assigned automatically or not depends on your auto-assign option and auto-assign policy. You can verify that with the disk option show command:

cDOT_cluster::> disk option show                   
Node           BKg. FW. Upd.  Auto Copy     Auto Assign    Auto Assign Policy
-------------  -------------  ------------  -------------  ------------------
cDOT_node1     on             on            on             shelf
cDOT_node2     on             on            on             shelf
2 entries were displayed.

Based on my output, you can see that auto assign is on and the policy is shelf. The possible policies are:

  • stack – automatic ownership at the stack or loop level
  • shelf – automatic ownership at the shelf level
  • bay – automatic ownership at the bay level
  • default – policy depends on the system model

If there are multiple stacks or shelves that have different ownership, one disk must be manually assigned on each shelf/stack before automatic ownership assignment will work.
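
As a quick sketch (the disk and node names here are hypothetical), this is how you could manually assign a disk, adjust the auto-assign policy if needed, and verify the result:

cDOT_cluster::> storage disk assign -disk 1.10.0 -owner cDOT_node1
cDOT_cluster::> storage disk option modify -node cDOT_node1 -autoassign-policy stack
cDOT_cluster::> storage disk show -container-type spare -owner cDOT_node1

The last command confirms that the newly assigned disks show up as spares owned by that node, ready to be used to create or extend an aggregate.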

Continue reading

NetApp cDOT – Advanced Drive Partitioning

If you are completely new to the ONTAP world, I encourage you to first have a quick read of the NetApp – Data ONTAP storage architecture – part 2 article. To better understand this entry, it would be best if you know what RAID groups are, what an aggregate is, and how data is stored in ONTAP.

Clustered ONTAP Aggregates

By default, each cluster node has one aggregate known as the root aggregate. This is an important difference between cDOT and 7-Mode, so I want to stress it: each node in the cluster has its own root aggregate, which does not host any customer data. In 7-Mode it was only a “best practice” to have a separate aggregate for the root volume; with cDOT it is a requirement. A node’s root aggregate is created during ONTAP installation in a RAID-DP configuration (possibly even RAID-TEC).
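
If you want to verify this on your own cluster, you can list the root aggregates like this (assuming your ONTAP version exposes the root field; aggregate names will of course differ):

cDOT_cluster::> storage aggregate show -root true -fields node,raidtype

You should see exactly one root aggregate per node, each with raidtype raid_dp (or raid_tec).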

Think for a second about the consequences. Let’s assume you’ve got an entry-level FAS HA pair (2 nodes) with a single 12-disk shelf. The root aggregate for node1 consumes 3 disks (1 data disk + 2 parity disks in RAID-DP), node2 consumes another 3 disks (1 data + 2 parity), and you are left with 6 disks. Since you have to leave at least one as a spare, you’ve got 5 disks left; configuring a data aggregate with a single RAID-DP group, you have only 3 disks for actual data (2 disks are used for parity)! And that’s not the only consequence, if you think about it a bit more. Since you will have only 1 data aggregate, and an aggregate can be owned by only 1 head (node) at a time, one of your nodes would do all the work, while the second would just be a “passive” node, waiting to take over in case of some unlikely failure.
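
To make the math explicit, here is the breakdown for that 12-disk shelf (without Advanced Drive Partitioning):

  12 disks total
  - 3  node1 root aggregate (RAID-DP: 1 data + 2 parity)
  - 3  node2 root aggregate (RAID-DP: 1 data + 2 parity)
  - 1  spare
  = 5  disks for a single data aggregate (RAID-DP: 3 data + 2 parity)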

Of course, this is a very extreme example, but even if you have more shelves available, “losing” 6 disks for each HA pair seems like a waste, doesn’t it?

Continue reading