As you probably already know, ONTAP manages disks in groups called aggregates. An aggregate is built from one or more RAID groups. Any given disk can be a member of only a single aggregate. Of course, if you read my previous entry, you already know this is not always the case. The exception is partitioned disks (if you haven't checked that out yet, I wrote a short article on that subject here: NetApp cDOT – Advanced Drive Partitioning).
If terms like aggregate or RAID group are quite new to you, please refer to my older entries: NetApp – Data ONTAP storage architecture – part 1 and NetApp – Data ONTAP storage architecture – part 2.
In this entry I would like to show you how to build a new aggregate, how to extend one, how to manage your spare disks, and how to utilize your brand new disks.
Manage new disks
Once a new shelf is connected to your cluster, the new disks should be assigned to one node as spare disks. Whether the disks are assigned automatically or not depends on your Auto Assign option and Auto Assign Policy. You can verify both with the disk option show command:
cDOT_cluster::> disk option show
Node           BKg. FW. Upd.  Auto Copy     Auto Assign    Auto Assign Policy
-------------  -------------  ------------  -------------  ------------------
cDOT_node1     on             on            on             shelf
cDOT_node2     on             on            on             shelf
2 entries were displayed.
Based on my output you can see that Auto Assign is on, and the policy is shelf. Possible policies are:
- stack – automatic ownership at the stack or loop level
- shelf – automatic ownership at the shelf level
- bay – automatic ownership at the bay level
- default – policy depends on the system model
If there are multiple stacks or shelves with different ownership, one disk must be manually assigned on each shelf/stack before automatic ownership assignment will work.
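By the way, the auto-assign behaviour itself can be changed per node. A hedged sketch (I am quoting the parameter names from memory, so verify them with storage disk option modify ? on your ONTAP version):
cDOT_cluster::> storage disk option modify -node cDOT_node1 -autoassign-policy stack
cDOT_cluster::> storage disk option modify -node cDOT_node1 -autoassign off
The first command changes the assignment policy to stack level, the second turns automatic assignment off completely for that node.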
To display unassigned disks:
cDOT_cluster::> storage disk show -container-type unassigned
Analyzing 24 matching entries... Ctrl-C to quit
                     Usable           Disk    Container   Container
Disk                   Size Shelf Bay Type    Type        Name      Owner
---------------- ---------- ----- --- ------- ----------- --------- --------
7.21.0                    -    21   0 BSAS    unassigned  -         -
7.21.1                    -    21   1 BSAS    unassigned  -         -
7.21.2                    -    21   2 BSAS    unassigned  -         -
7.21.3                    -    21   3 BSAS    unassigned  -         -
7.21.4                    -    21   4 BSAS    unassigned  -         -
7.21.5                    -    21   5 BSAS    unassigned  -         -
7.21.6                    -    21   6 BSAS    unassigned  -         -
7.21.7                    -    21   7 BSAS    unassigned  -         -
7.21.8                    -    21   8 BSAS    unassigned  -         -
7.21.9                    -    21   9 BSAS    unassigned  -         -
7.21.10                   -    21  10 BSAS    unassigned  -         -
7.21.11                   -    21  11 BSAS    unassigned  -         -
7.21.12                   -    21  12 BSAS    unassigned  -         -
7.21.13                   -    21  13 BSAS    unassigned  -         -
7.21.14                   -    21  14 BSAS    unassigned  -         -
7.21.15                   -    21  15 BSAS    unassigned  -         -
7.21.16                   -    21  16 BSAS    unassigned  -         -
7.21.17                   -    21  17 BSAS    unassigned  -         -
7.21.18                   -    21  18 BSAS    unassigned  -         -
7.21.19                   -    21  19 BSAS    unassigned  -         -
7.21.20                   -    21  20 BSAS    unassigned  -         -
7.21.21                   -    21  21 BSAS    unassigned  -         -
7.21.22                   -    21  22 BSAS    unassigned  -         -
7.21.23                   -    21  23 BSAS    unassigned  -         -
24 entries were displayed.
In this example a new shelf with shelf ID 21 has been installed. My Auto Assign Policy is shelf, so I assign a single disk:
cDOT_cluster::> storage disk assign -disk 7.21.0 -owner cDOT_node1
cDOT_cluster::>
After a few seconds, the auto-assign policy should kick in:
cDOT_cluster::> disk show -spare -owner cDOT_node1

Original Owner: cDOT_node1
  Checksum Compatibility: block
                                                             Usable Physical
    Disk            HA Shelf Bay Chan   Pool  Type     RPM     Size     Size Owner
    --------------- ------------ ---- ------ ----- ------ -------- -------- --------
    2.4.18          0b     4  18 A     Pool0 BSAS     7200   2.42TB   2.43TB cDOT_node1
    2.4.19          0b     4  19 A     Pool0 BSAS     7200   2.42TB   2.43TB cDOT_node1
    2.4.20          0b     4  20 A     Pool0 BSAS     7200   2.42TB   2.43TB cDOT_node1
    7.21.0          3a    21   0 B     Pool0 BSAS     7200   1.62TB   1.62TB cDOT_node1
    7.21.1          3a    21   1 B     Pool0 BSAS     7200   1.62TB   1.62TB cDOT_node1
    7.21.2          3a    21   2 B     Pool0 BSAS     7200   1.62TB   1.62TB cDOT_node1
    7.21.3          3a    21   3 B     Pool0 BSAS     7200   1.62TB   1.62TB cDOT_node1
    7.21.4          3a    21   4 B     Pool0 BSAS     7200   1.62TB   1.62TB cDOT_node1
    7.21.5          3a    21   5 B     Pool0 BSAS     7200   1.62TB   1.62TB cDOT_node1
    7.21.6          3a    21   6 B     Pool0 BSAS     7200   1.62TB   1.62TB cDOT_node1
    7.21.7          3a    21   7 B     Pool0 BSAS     7200   1.62TB   1.62TB cDOT_node1
    7.21.8          3a    21   8 B     Pool0 BSAS     7200   1.62TB   1.62TB cDOT_node1
    7.21.9          3a    21   9 B     Pool0 BSAS     7200   1.62TB   1.62TB cDOT_node1
    7.21.10         3a    21  10 B     Pool0 BSAS     7200   1.62TB   1.62TB cDOT_node1
    7.21.11         3a    21  11 B     Pool0 BSAS     7200   1.62TB   1.62TB cDOT_node1
    7.21.12         3a    21  12 B     Pool0 BSAS     7200   1.62TB   1.62TB cDOT_node1
    7.21.13         3a    21  13 B     Pool0 BSAS     7200   1.62TB   1.62TB cDOT_node1
    7.21.14         3a    21  14 B     Pool0 BSAS     7200   1.62TB   1.62TB cDOT_node1
    7.21.15         3a    21  15 B     Pool0 BSAS     7200   1.62TB   1.62TB cDOT_node1
    7.21.16         3a    21  16 B     Pool0 BSAS     7200   1.62TB   1.62TB cDOT_node1
    7.21.17         3a    21  17 B     Pool0 BSAS     7200   1.62TB   1.62TB cDOT_node1
    7.21.18         3a    21  18 B     Pool0 BSAS     7200   1.62TB   1.62TB cDOT_node1
    7.21.19         3a    21  19 B     Pool0 BSAS     7200   1.62TB   1.62TB cDOT_node1
    7.21.20         3a    21  20 B     Pool0 BSAS     7200   1.62TB   1.62TB cDOT_node1
    7.21.21         3a    21  21 B     Pool0 BSAS     7200   1.62TB   1.62TB cDOT_node1
    7.21.22         3a    21  22 B     Pool0 BSAS     7200   1.62TB   1.62TB cDOT_node1
    7.21.23         3a    21  23 B     Pool0 BSAS     7200   1.62TB   1.62TB cDOT_node1
27 entries were displayed.
As you can notice, all 24 disks have been assigned as spares to node cDOT_node1 (plus the 3 spare disks from another shelf that were already assigned to the system).
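If you prefer to assign disks manually (for example with Auto Assign turned off), the storage disk assign command can also work in bulk. A hedged sketch, with parameters as I remember them (double-check with storage disk assign ? on your system):
cDOT_cluster::> storage disk assign -all -node cDOT_node1
cDOT_cluster::> storage disk assign -count 12 -node cDOT_node2
The first variant assigns all currently unowned disks to cDOT_node1, the second assigns 12 unowned disks to cDOT_node2.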
Manage spare disks
To create or extend an aggregate, we need spare disks available. A spare disk is an installed disk which is not being utilized; it can be zeroed (ready to be used) or not zeroed, which means it has to be zeroed before it can be used. There are actually many ways to check the availability of your spare disks; to give you a few commands:
- storage disk show -container-type spare – this command lists all the available spares in one output
- storage disk show -spare – this command lists all the available spare disks, with a separate section for each node
- storage aggregate show-spare-disks – this command lists all spares with additional information on whether each disk is zeroed or not
- node run -node <node_name> -command vol status -s – this command lists all spares for node <node_name>; the output is very similar to 7-Mode ONTAP
To verify whether the spares are zeroed you can, for instance, run the command storage aggregate show-spare-disks:
cDOT_cluster::> storage aggregate show-spare-disks -owner-name cDOT_node1

Original Owner: cDOT_node1
 Pool0
  Spare Pool
                                                              Usable Physical
  Disk                        Type   RPM Checksum               Size     Size Status
  --------------------------- ----- ------ -------------- -------- -------- --------
  2.4.18                      BSAS   7200 block              2.42TB   2.43TB zeroed
  2.4.19                      BSAS   7200 block              2.42TB   2.43TB zeroed
  2.4.20                      BSAS   7200 block              2.42TB   2.43TB not zeroed
  7.21.0                      BSAS   7200 block              1.62TB   1.62TB zeroed
  7.21.1                      BSAS   7200 block              1.62TB   1.62TB zeroed
  7.21.2                      BSAS   7200 block              1.62TB   1.62TB zeroed
  [....]
(I have shortened the output, but all 24 disks from shelf 21 (disks 7.21.X) are zeroed.) If you notice that your spares are not zeroed (like disk 2.4.20 in my example), you can run the command:
cDOT_cluster::> disk zerospares
cDOT_cluster::>
Zeroing takes some time, so you can take a (longer) break before those disks are ready. It is safe to run; it will not disrupt any data access for your customers. You can verify the status with the same command (storage aggregate show-spare-disks); the progress is shown as a percentage, like:
cDOT_cluster::> storage aggregate show-spare-disks -owner-name cDOT_node1

Original Owner: cDOT_node1
 Pool0
  Spare Pool
                                                              Usable Physical
  Disk                        Type   RPM Checksum               Size     Size Status
  --------------------------- ----- ------ -------------- -------- -------- --------
  2.4.18                      BSAS   7200 block              2.42TB   2.43TB zeroed
  2.4.19                      BSAS   7200 block              2.42TB   2.43TB zeroed
  2.4.20                      BSAS   7200 block              2.42TB   2.43TB zeroing, 16% done
  7.21.0                      BSAS   7200 block              1.62TB   1.62TB zeroed
  7.21.1                      BSAS   7200 block              1.62TB   1.62TB zeroed
  7.21.2                      BSAS   7200 block              1.62TB   1.62TB zeroed
  [....]
Once your disks are zeroed you are good to build your brand new data aggregate, or extend an existing one. Just remember to leave some spares available in case of future disk failures! “How many disks should I leave?” – this is a tricky question and depends on the number of disks you have, your support policy for failed drives, and your RAID settings, but one spare per disk shelf is usually the right choice.
Create a new Aggregate
We have our disks assigned as spares and zeroed. The next step is to build a new data aggregate; to do this from the command line, you can run the storage aggregate create command. With cDOT ONTAP there is an option to ‘simulate’ an aggregate creation:
cDOT_cluster::> storage aggregate create -aggregate cDOT_node1_aggr2_data -node cDOT_node1 -maxraidsize 20 -disktype BSAS -disksize 1658 -diskcount 10 -simulate
[Job 25352] Job is queued: Create cDOT_node1_aggr2_data.
[Job 25352] creating aggregate cDOT_node1_aggr2_data ...
[Job 25352] Job succeeded: Aggregate creation would succeed for aggregate "cDOT_node1_aggr2_data" on node "cDOT_node1". The following disks would be used to create the new aggregate: 7.21.1, 7.21.2, 7.21.3, 7.21.4, 7.21.5, 7.21.6, 7.21.7, 7.21.8, 7.21.9, 7.21.10.
As you can notice, I have specified -disksize as 1658 because this command accepts the value in GB (1.62TB is around 1658GB). If you are happy with the results, you can re-run the same command without -simulate. The aggregate creation should not take longer than a couple of seconds, and the aggregate is ready to use as soon as it is created.
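I am not pasting the output here, but you can verify the freshly created aggregate afterwards, for example with:
cDOT_cluster::> storage aggregate show -aggregate cDOT_node1_aggr2_data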
Extend an Aggregate
I have created an aggregate called cDOT_node1_aggr2_data built from 10 disks, with a maximum RAID size of 20 disks. Since the disk type is BSAS, 20 is indeed the biggest possible RAID group size. Let’s assume that we want to extend our aggregate using spare disks. Since we only utilized 10 disks, we still have 14 disks of that size available on that particular shelf. To extend an aggregate we have two options:
- Create a new RAID group. It means that we will utilize an additional 2 disks for parity (with the default RAID-DP configuration), but a smaller RAID group size has some benefits as well (for example, it is less likely to hit a double disk failure within the same RAID group, and a RAID reconstruction will finish sooner). To add 10 disks as a new RAID group, you can run the command:
cDOT_cluster::> storage aggregate add-disks -aggregate cDOT_node1_aggr2_data -disktype BSAS -disksize 1658 -diskcount 10 -raidgroup new (-simulate)
- Add disks to an existing RAID group. You will not “lose” additional disks for parity. To do so, you can run the following command, adding the disks to the existing RAID group rg0 (RAID group numbering starts at rg0; you can list the RAID group names with the command shown after this list):
cDOT_cluster::> storage aggregate add-disks -aggregate cDOT_node1_aggr2_data -disktype BSAS -disksize 1658 -diskcount 10 -raidgroup rg0 (-simulate)
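To check how many RAID groups the aggregate currently has and what they are called (rg0, rg1, and so on), you can display its layout, for example:
cDOT_cluster::> storage aggregate show-status -aggregate cDOT_node1_aggr2_data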
As a best practice, try to build an aggregate with as many disks as you plan to use from the start, and avoid extending an aggregate little by little, adding just 1-2 disks at a time. To get the full performance benefit of a RAID group, it is better to extend an aggregate by adding a full new RAID group at a time; so, for instance, if your RAID group size is 20, try extending the aggregate (if possible) with 20 disks, building a new RAID group. Once the aggregate is extended, it is good practice to run a reallocate job (to get the best read/write performance for the data placed on that particular aggregate); I will create a separate entry on that topic in the near future.
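Just to give you an idea before that entry arrives: a reallocate job is started per volume living on the extended aggregate. A minimal, hedged sketch (the SVM and volume names are made up, and you should verify the exact syntax for your ONTAP version):
cDOT_cluster::> volume reallocation start -vserver svm1 -path /vol/my_volume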
Shrink an Aggregate
This is very important to remember: you cannot shrink an aggregate. You can only increase its size by adding physical disks; there is no option to release single disks from an existing aggregate. The only option is to evacuate all the data, then take the aggregate offline and destroy it. After that you have to zero the spare disks (the disk zerospares command) and build a brand new aggregate according to the new layout. Always keep that in mind before you decide to build a new aggregate or extend an existing one.
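For reference, once the aggregate is completely empty, the offline-and-destroy part looks roughly like this (the aggregate name is just an example):
cDOT_cluster::> storage aggregate offline -aggregate cDOT_node1_aggr2_data
cDOT_cluster::> storage aggregate delete -aggregate cDOT_node1_aggr2_data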