
NetApp cDOT – junction paths in practice

This entry is a follow-up to an example from my previous post about junction paths and namespaces (you can find the entire post here: NetApp cDOT – Namespace, junction path). Today I would like to show you, from a ‘technical’ point of view, how easy it is to modify junction paths for your volumes.

Let me first recall the example:

Namespace example – step 1

In my example the path /current_month is used to store documents and reports from the running month. When the month is over, we still want to have access to those reports in the /archive/<month_name> location. Step one can be seen from the clustershell as:

cDOT01::> vol show -vserver svm1 -fields junction-path
  (volume show)
vserver volume    junction-path
------- --------- -----------------
svm1    april     /current_month
svm1    february  /archive/february
svm1    march     /archive/march
svm1    svm1_root /
4 entries were displayed.

Let’s assume april is about to finish and we have to get ready for the next month. Thanks to junction paths you do not have to physically move any data, and the whole operation can be done with just a couple of commands.

Step 1: First you have to unmount april from /current_month:

cDOT01::> volume unmount -vserver svm1 april

cDOT01::> vol show -vserver svm1 -fields junction-path
  (volume show)
vserver volume    junction-path
------- --------- -----------------
svm1    april     -
svm1    february  /archive/february
svm1    march     /archive/march
svm1    svm1_root /
4 entries were displayed.

Caution: while volume april is unmounted, it cannot be accessed by your customers via NAS protocols.

Step 2: Mount april to the correct junction-path:

cDOT01::> volume mount -vserver svm1 -volume april -junction-path /archive/april

cDOT01::> vol show -vserver svm1 -fields junction-path
  (volume show)
vserver volume    junction-path
------- --------- -----------------
svm1    april     /archive/april
svm1    february  /archive/february
svm1    march     /archive/march
svm1    svm1_root /
4 entries were displayed.

Step 3: Create a new volume for your current reports:

cDOT01::> volume create -vserver svm1 -volume may -size 100m -aggregate aggr1 -junction-path /current_month
[Job 34] Job succeeded: Successful

As you may have noticed, you can mount a volume to the correct junction path with the volume mount command, or you can simply specify the junction path during volume create with the -junction-path parameter. Now, let’s check whether our namespace is correct:

cDOT01::> vol show -vserver svm1 -fields junction-path
  (volume show)
vserver volume    junction-path
------- --------- -----------------
svm1    april     /archive/april
svm1    february  /archive/february
svm1    march     /archive/march
svm1    may       /current_month
svm1    svm1_root /
5 entries were displayed.

And it is – exactly as in the example below:

Namespace example – result

NetApp cDOT – Namespace, junction path

One of the biggest differences between the “7-mode vs cluster-mode” approaches I noticed at the beginning was the term namespace. In 7-mode, all volumes were automatically mounted during volume creation under the /vol/<vol_name> path. It didn’t matter whether the volume was added to a vfiler; all volumes on a single Data ONTAP 7-mode instance had a unique path /vol/<vol_name>. With clustered Data ONTAP the situation is different. Flexible volumes that contain NAS data (basically data served via CIFS or NFS) are junctioned into the owning SVM in a hierarchy.

Junction path

When a flexvol is created, the administrator specifies the junction path for that flexible volume. If you have experience with 7-mode, it is safe to say that in 7-mode the junction path was effectively always /vol/<vol_name>. The junction path is a directory location under the root of the SVM where the flexible volume can be accessed.

Namespace and junction paths

Above you can see a namespace that has a couple of junction paths. / is the root path of the SVM (also called the SVM root volume). vol1 and vol2 are mounted directly under the root path, which means they can be accessed via SVM1:/vol1 and SVM1:/vol2.
vol3’s junction path is /vol1/vol3, which means it can be accessed via SVM1:/vol1/vol3; moreover, customers who have access to /vol1 can reach vol3 by simply entering the vol3 folder (Windows) or directory (UNIX).
dir1 is a plain directory that doesn’t contain any data, but is used to mount vol4 and vol5 at the junction paths /dir1/vol4 and /dir1/vol5 (if you would like to have the same junction paths as in a 7-mode environment, you would simply call this directory vol instead of dir1). Finally, there is a qtree created on vol5; since vol5’s junction path is /dir1/vol5, the path to the qtree is /dir1/vol5/qtree1. A sketch of the commands that build such a namespace follows below.
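
To make this concrete, here is a minimal clustershell sketch of how such a namespace could be built. The aggregate name (aggr1) and the volume sizes are my assumptions for illustration; note also that dir1 is an ordinary directory, so it has to be created from a NAS client (for example through a share or export of the SVM root volume) before anything can be mounted under it:

cDOT01::> volume create -vserver SVM1 -volume vol1 -aggregate aggr1 -size 1g -junction-path /vol1
cDOT01::> volume create -vserver SVM1 -volume vol2 -aggregate aggr1 -size 1g -junction-path /vol2
cDOT01::> volume create -vserver SVM1 -volume vol3 -aggregate aggr1 -size 1g -junction-path /vol1/vol3
cDOT01::> volume create -vserver SVM1 -volume vol4 -aggregate aggr1 -size 1g -junction-path /dir1/vol4
cDOT01::> volume create -vserver SVM1 -volume vol5 -aggregate aggr1 -size 1g -junction-path /dir1/vol5
cDOT01::> volume qtree create -vserver SVM1 -volume vol5 -qtree qtree1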

This feature has several advantages. For example, NFS clients can access multiple flexible volumes using a single mount point. The same goes for Windows clients: they can access multiple flexvols using a single CIFS share. For example, if your project team needs additional capacity for their current activity, you can just create a new volume for it and mount it under a volume that the group already has access to. In fact, a junction path is independent of the volume name. In other words, volume1 can be mounted as /volume1 just as well as /current_month.

Namespace example – step 1

Example: let’s assume that your customers are storing daily reports in the SVM1:/current_month location. At the beginning of March you can create a volume called “march” and junction it to /current_month. At the end of March you can change this junction to /archive/march, and later on create an “april” volume with the junction /current_month.

Namespace example – step 2

Such an operation doesn’t require any action from your customers and doesn’t involve any data movement or data copy on the storage array. It’s a simple modification within your SVM’s namespace.

Namespace

A namespace consists of a group of volumes that are connected using junction paths. It’s the hierarchy of all flexible volumes and their junction paths within the same SVM (vserver).

Export Policies

I will create a separate entry about this term. For now I would like to briefly introduce it, to show you another use of junction paths. An export policy is used to control client access to a specified flexvol. Each flexvol has an export policy associated with it. Multiple volumes can share the same export policy, or each of them can have its own unique one. Qtrees can also have their own export policies. Example: you can create a volume “finance” with the junction path /finance that can be accessed only by selected hosts/protocols. In the future, when the finance department needs a new volume, you can create new_volume with the junction path /finance/new_volume. This volume can be accessed only by hosts/protocols that comply with the “finance” export policy at least at read level (in addition to new_volume’s own policy). A sketch of such a setup follows.
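
As a minimal sketch of that setup (the client subnet, aggregate, and volume sizes below are my assumptions, not part of the original example):

cDOT01::> vserver export-policy create -vserver svm1 -policyname finance
cDOT01::> vserver export-policy rule create -vserver svm1 -policyname finance -clientmatch 10.10.10.0/24 -protocol nfs -rorule sys -rwrule sys
cDOT01::> volume create -vserver svm1 -volume finance -aggregate aggr1 -size 1g -junction-path /finance -policy finance
cDOT01::> volume create -vserver svm1 -volume new_volume -aggregate aggr1 -size 1g -junction-path /finance/new_volume -policy finance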

NetApp cDOT – Drive name formats

In my last two posts I briefly described the Data ONTAP storage architecture (you can find them here: part 1, part 2). In this fairly short entry I want to show you how to identify your disks within Data ONTAP. Why would you want to do so? Well, for example, disk numbering enables you to quickly locate a disk that is associated with a displayed error message.

Data ONTAP 8.2.x and earlier

Unfortunately, it depends on your Data ONTAP version. With Data ONTAP 8.2.x (and earlier), drive names have different formats depending on the connection type (FC-AL / SAS). Each drive also has a universally unique identifier (UUID) that distinguishes it from every other drive in the cluster.

Each disk name begins with the owning node’s name. For example, in node1:0b.1.20: node1 – node name, 0 – slot, b – port, 1 – shelf ID, 20 – bay.

In other words, for SAS drives the naming convention is <node>:<slot><port>.<shelfID>.<bay>.

For SAS drives in a multi-disk carrier shelf the naming convention is <node>:<slot><port>.<shelfID>.<bay>L<position>, where <position> is either 1 or 2 – in this shelf type two disks sit inside a single bay.

For FC-AL drives the naming convention is <node>:<slot><port>.<loopID>.

Node name for unowned disks

As you probably noticed, this naming convention is kind of tricky. In a normal HA pair each disk shelf is connected to both nodes, so how can you tell which disk is named after which node? It’s quite simple, actually: if a disk is assigned to (owned by) a node, it takes that node’s name. If a disk is unowned (either broken or unassigned), it is displayed with the alphabetically lowest node name in the HA pair (for example, if you have two nodes, cluster1-01 and cluster1-02, all your unowned disks will be displayed as cluster1-01:<slot>…).
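
A quick way to find such disks is to filter on the container type, and then assign them to a node; a minimal sketch (the disk name below is hypothetical):

cDOT01::> storage disk show -container-type unassigned
cDOT01::> storage disk assign -disk cluster1-01:0b.1.20 -owner cluster1-01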

Data ONTAP 8.3.x

Starting with Data ONTAP 8.3, drive names are independent of which nodes the drive is physically connected to and of the node through which it is accessed (just as a reminder: in a healthy cluster a drive is physically connected to two nodes – an HA pair – but it is accessed by, and owned by, only one node).

The drive naming convention is <stack_id>.<shelf_id>.<bay>.<position>. Let me briefly explain those (a worked example follows the list):

  • stack_id – this value is assigned by Data ONTAP; it is unique across the cluster and starts with 1.
  • shelf_id – the shelf ID is set on the storage shelf when the shelf is added to the stack or loop. Unfortunately, there is a possibility of a shelf ID conflict (two shelves with the same shelf_id); in such a case the shelf_id is replaced with the unique shelf serial number.
  • bay – the position of the disk within the shelf.
  • position – used only in a multi-disk carrier shelf, where two disks can be installed in a single bay. This value can be either 1 or 2.
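
For example, the name 1.5.12 would denote stack 1, shelf 5, bay 12 (a hypothetical drive with no <position> part, since it does not sit in a multi-disk carrier). You can query a drive by this name directly:

cDOT01::> storage disk show -disk 1.5.12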

Pre-cluster drive name format

Before a node is joined to a cluster, its disk drive name format is the same as in Data ONTAP 8.2.x.


Shelf ID and bay number

You may wonder how to read the shelf ID and bay number. It depends on the shelf model; as an example, please take a look at this picture:

DS4243

It’s the DS4243 shelf, which can hold 24 SAS disks (bay numbers from 0 to 23). The shelf ID is shown on a digital display and can be adjusted during shelf installation.
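
If you would rather read shelf IDs from the clustershell than from the front panel, newer releases also provide a shelf listing command; a minimal sketch (output omitted, and availability depends on your ONTAP version):

cDOT01::> storage shelf show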