NetApp cDOT – Advanced Drive Partitioning

If you are completely new to the ONTAP world, I encourage you to first have a quick read of the NetApp – Data ONTAP storage architecture – part 2 article. To better understand this entry, it would be best if you know what RAID groups are, what an aggregate is, and how data is stored in ONTAP.

Clustered ONTAP Aggregates

By default, each cluster node has one aggregate known as the root aggregate. This is an important difference between cDOT and 7-mode, so I want to stress it: each node in the cluster has its own root aggregate, which does not host any customer data. In 7-mode it was only a “best practice” to keep the root volume on a separate aggregate; with cDOT it is a requirement. A node’s root aggregate is created during ONTAP installation in a RAID-DP configuration (possibly even RAID-TEC).

Think for a second about the consequences. Let’s assume you’ve got an entry-level FAS HA pair (2 nodes) with a single 12-disk shelf. The root aggregate for node1 consumes 3 disks (1 data disk + 2 parity disks – RAID-DP), node2 consumes another 3 disks (1 data + 2 parity), and you are left with 6 disks. Since you have to leave at least one as a spare, you’ve got 5 disks left; configuring a data aggregate with a single RAID-DP group, you have only 3 disks for actual data! (2 disks are consumed by parity.) And that’s not the only consequence, if you think about it a bit more. Since you will have only 1 data aggregate, and an aggregate can be owned by only 1 head (node) at a time, one of your nodes would do all the work, while the second would just be a “passive” node, waiting to take over in case of some unlikely failure.
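The disk math above can be sketched out in a few lines. This is just back-of-the-envelope arithmetic, not anything ONTAP-specific – the constants simply restate the scenario from the paragraph (12 disks, RAID-DP with 2 parity disks per RAID group, 2-node HA pair, 1 spare):

```python
# Back-of-the-envelope disk accounting for the 12-disk HA-pair scenario.
# All names here are illustrative; this is not an ONTAP API.

TOTAL_DISKS = 12
RAID_DP_PARITY = 2            # RAID-DP uses 2 parity disks per RAID group
NODES = 2                     # an HA pair

# Each node's root aggregate: 1 data disk + 2 parity disks (RAID-DP)
root_aggr_disks = NODES * (1 + RAID_DP_PARITY)     # 6 disks gone already

spares = 1                    # at least one spare must remain
remaining = TOTAL_DISKS - root_aggr_disks - spares # 5 disks left

# A single RAID-DP data aggregate built from the remaining disks:
data_disks = remaining - RAID_DP_PARITY            # only 3 disks hold data

print(f"Disks consumed by root aggregates: {root_aggr_disks}")  # 6
print(f"Disks usable for actual data:      {data_disks}")       # 3
```

So out of 12 disks, only 3 end up storing customer data – which is exactly why Advanced Drive Partitioning was introduced.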

Of course, this is a very extreme example, but even if you have more shelves available, “losing” 6 disks per HA pair seems like a waste, doesn’t it?
