NetApp

NetApp cDOT – Benefits of a cluster

Recently the words cloud and virtualization have become more and more common. And when you think about it, it actually makes sense: to achieve the best scaling possibilities and to extract the full benefit of your hardware investment, virtualization makes a lot of sense! NetApp went a little way down that road with vFilers in ONTAP 7-mode. However, vFilers still had a lot of limitations; for instance, all resources were limited to the single Filer that was hosting the vFiler. With the introduction of Clustered ONTAP, NetApp brings many more virtualization benefits to storage systems. To give you a few:

  • Introduction of the Storage Virtual Machine (SVM), which I have already briefly discussed in the NetApp cDOT – what is SVM or vserver? post. In a nutshell, an SVM acts as a dedicated virtual storage controller, with its own data volumes, CIFS shares, NFS exports, and LUNs. It is not “linked” to a single node – it can utilize a physical storage resource pool spanning all nodes within the cluster
  • Different physical storage controller models can be connected to build a single cluster
  • NetApp cDOT brings Quality of Service (QoS) to help manage resource utilization between storage workloads (see the sketch after this list)
  • Almost all maintenance operations can be executed without downtime, including software and firmware updates.
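
For illustration only – a rough sketch of how a QoS limit could be applied to a volume. The SVM name svm1, the volume name vol1, the policy-group name qos_limit_1k and the 1000 IOPS cap are made-up values, not taken from a real configuration:

cDOT01::> qos policy-group create -policy-group qos_limit_1k -vserver svm1 -max-throughput 1000iops
cDOT01::> volume modify -vserver svm1 -volume vol1 -qos-policy-group qos_limit_1k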


NetApp cDOT – How to create a snapshot schedule?

Clustered Data ONTAP is in many ways quite different from its predecessor, ONTAP 7-mode. Even a simple task, like creating a snapshot schedule, might be confusing at first for ex-7-mode administrators. In Clustered ONTAP a lot of things are now policy-based, and snapshot schedules are one of them. What does that mean? It means that if you want to check the snapshot schedule, you first have to check which policy is assigned to a volume, for example:

cDOT01::> volume show -vserver svm1 -volume may -fields snapshot-policy
vserver volume snapshot-policy
------- ------ ---------------
svm1    may    40dayssnap

40dayssnap is the policy assigned to volume may, which is located on SVM (vserver) svm1. OK, let’s see what this policy holds:
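
You can display the policy details with the “volume snapshot policy show” command. And since this post is about creating a snapshot schedule, here is a minimal sketch of the whole flow as well – the cron schedule name daily_2am, the policy name my_snap_policy, and the retention count of 40 copies are made-up values used only for illustration:

cDOT01::> volume snapshot policy show -policy 40dayssnap
cDOT01::> job schedule cron create -name daily_2am -hour 2 -minute 0
cDOT01::> volume snapshot policy create -vserver svm1 -policy my_snap_policy -enabled true -schedule1 daily_2am -count1 40
cDOT01::> volume modify -vserver svm1 -volume may -snapshot-policy my_snap_policy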

NetApp cDOT – NFS access and export policies

Today I would like to briefly explain two terms: export policy and export rule. In NetApp 7-mode, if you wanted to create an NFS export you could add an entry to the /etc/exports file and export it with the exportfs command. In NetApp cDOT the procedure is different. To export a share via NFS you have to create an export policy and assign it to either a volume or a qtree that you wish to export.
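
As a rough sketch of that procedure (assuming the SVM svm1 and volume my_volume used in the examples below; the policy name my_volume_policy and the client subnet 10.132.0.0/24 are just illustrative values), it could look like this:

cdot-cluster::> vserver export-policy create -vserver svm1 -policyname my_volume_policy
cdot-cluster::> vserver export-policy rule create -vserver svm1 -policyname my_volume_policy -clientmatch 10.132.0.0/24 -protocol nfs3 -rorule sys -rwrule sys -superuser sys -ruleindex 1
cdot-cluster::> volume modify -vserver svm1 -volume my_volume -policy my_volume_policy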

Another difference is the structure of NFS permissions. In 7-mode, if you wanted to access /vol/my_volume/my_qtree via NFS, you could just create an exportfs entry for that particular location. In Clustered Data ONTAP, NFS clients that have access to the qtree also require at least read-only access to the parent volume and to the SVM root volume.

You can easily verify that with the “export-policy check-access” CLI command, for example:

Example 1)

cdot-cluster::> export-policy check-access -vserver svm1 -volume my_volume -qtree my_qtree -access-type read-write -protocol nfs3 -authentication-method sys -client-ip 10.132.0.2
                                               Policy    Policy       Rule
Path                          Policy           Owner     Owner Type   Index Access
----------------------------- ---------------- --------- ---------- ------ ----------
/                             root_policy      svm1_root volume          1 read
/vol                          root_policy      svm1_root volume          1 read
/vol/my_volume                my_volume_policy my_volume volume          2 read
/vol/my_volume/my_qtree       my_qtree_policy  my_qtree  qtree           1 read-write
4 entries were displayed.

In the above example, host 10.132.0.2 has read access defined in the root_policy and my_volume_policy export policies. This host also has read-write access defined in the rules of the my_qtree_policy export policy.

Example 2)

cdot-cluster::> export-policy check-access -vserver svm1 -volume my_volume -qtree my_qtree -access-type read-write -protocol nfs3 -authentication-method sys -client-ip 10.132.0.3
                                               Policy    Policy       Rule
Path                          Policy           Owner     Owner Type   Index Access
----------------------------- ---------------- --------- ---------- ------ ----------
/                             root_policy      svm1_root volume          1 read
/vol                          root_policy      svm1_root volume          1 read
/vol/my_volume                my_volume_policy my_volume volume          0 denied
3 entries were displayed.

In the second example, host 10.132.0.3 has read access defined in root_policy, however it does not have read access defined in the volume’s policy my_volume_policy. Because of that, this host cannot access /vol/my_volume/my_qtree even if it has read-write access in the my_qtree_policy export policy.

Export policy

An export policy contains one or more export rules that process each client access request. Each volume and qtree can have only one export policy assigned, however one export policy might be assigned to many volumes and qtrees. What is important – you cannot assign an export policy to a directory, only to objects like volumes and qtrees. As a consequence, you cannot export a directory via NFS – in contrast to NetApp 7-mode, where it was possible (this article was written when the newest ONTAP version was 9.1).
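
To give an idea of how a policy is attached to each object type, here is a sketch reusing the volume and qtree names from the examples above (the policy names are the same illustrative ones used earlier):

cdot-cluster::> volume modify -vserver svm1 -volume my_volume -policy my_volume_policy
cdot-cluster::> volume qtree modify -vserver svm1 -volume my_volume -qtree my_qtree -export-policy my_qtree_policy
cdot-cluster::> volume qtree show -vserver svm1 -volume my_volume -fields export-policy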

Export rule

Each rule has a position (rule index), and that is the order in which client access is checked. It means that if you have export rule (1) saying that 0.0.0.0/0 (all clients) have read-only access, and rule (2) saying that the LinuxRW host has read-write access, LinuxRW will in fact not get read-write permission, because during the client access check this host was already caught by rule 1, which only grants read-only access. Of course the order of rules can easily be modified, but it is important to pay attention to it.
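
A quick sketch of how that reordering could look (assuming a policy named linux_policy that contains the two rules described above): listing the rules shows their indexes, and “export-policy rule setindex” moves the more specific LinuxRW rule in front of the catch-all rule, so it is evaluated first:

cdot-cluster::> vserver export-policy rule show -vserver svm1 -policyname linux_policy
cdot-cluster::> vserver export-policy rule setindex -vserver svm1 -policyname linux_policy -ruleindex 2 -newruleindex 1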