This is the second part of an introduction to Data ONTAP storage architecture (the first part can be found here). In the first part I briefly described the two physical layers of the storage architecture: disks and RAID groups. There is one more physical layer to introduce – aggregates. Let me start with that one.
An aggregate can be described as a pool of 4-KB blocks that gives you a place to store data. To store data, however, you need physical disks underneath. To gain performance (in terms of I/O) and increase protection, NetApp uses RAID groups (RAID-4 or RAID-DP). In other words, an aggregate is nothing more than a bunch of disks, combined into one or more RAID groups. Your NetApp filer can obviously have more than one aggregate, but a single disk can be part of only a single RAID group and therefore a single aggregate* (* the exception to that rule is NetApp Advanced Drive Partitioning, which I will explain in some future entries). For now, it is safe to remember that a single disk can be part of only one aggregate.
Why would you want more than one aggregate in your configuration? To support the differing performance, backup, security, or data-sharing needs of your customers. For example, one aggregate can be built from ultra-performance SSDs, whereas a second aggregate can be built from capacity-oriented SATA drives.
Root / aggr0
This aggregate is automatically created during system initialization and should contain only the node root volume – no user data. In fact, with Data ONTAP 8.3.x user data cannot be placed on aggr0 (with older versions it is possible, although it is against best practices).
Data aggregates
Separate aggregates are created specifically to serve data to your customers. They can be single-tiered (built from the same type of disks – for example all HDD or all SSD) or multi-tiered (Flash Pool) – built with a tier of HDDs and a tier of SSDs. Currently NetApp enforces a 5-disk minimum to build a data aggregate (three disks used as data disks, one as the parity disk, and one as the dparity disk in a RAID-DP configuration).
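To make that disk accounting concrete, here is a minimal Python sketch (my own illustration, not a NetApp tool) of how many disks in a RAID-DP aggregate actually hold data. The function name and the default RAID group size of 16 are my assumptions for the example; the real maximum group size varies by disk type.

```python
def raid_dp_data_disks(total_disks: int, max_raid_group_size: int = 16) -> int:
    """Number of data disks in an aggregate built from RAID-DP groups.

    Each RAID-DP group sacrifices 2 disks (parity + dparity).
    Disks are split into groups of at most max_raid_group_size
    (a simplification of how ONTAP lays out RAID groups).
    """
    if total_disks < 5:
        raise ValueError("NetApp enforces a 5-disk minimum for a data aggregate")
    # number of RAID groups needed (ceiling division)
    groups = -(-total_disks // max_raid_group_size)
    return total_disks - 2 * groups

# The 5-disk minimum from the text: 3 data + 1 parity + 1 dparity
print(raid_dp_data_disks(5))   # 3
```

This also shows why larger RAID groups are more space-efficient: the two-disk parity overhead is paid once per group, not once per disk.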
FlexVols – Volumes
This is the first logical layer. A FlexVol volume is a volume created within a containing aggregate, and a single aggregate can host many independent FlexVols. Volumes are managed separately from the aggregate: you can increase and decrease the size of a FlexVol without any problems (whereas to increase an aggregate's size you have to allocate more physical disks to it, and you cannot decrease an aggregate's size at all). The maximum size of a single volume is 16 TB.
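A toy model of those resize rules might look like this (the class and method names are purely illustrative, not ONTAP APIs):

```python
class Aggregate:
    """Toy model: aggregates can only grow, by adding disks."""
    def __init__(self, size_gb: int):
        self.size_gb = size_gb

    def add_disks(self, extra_gb: int) -> None:
        self.size_gb += extra_gb  # growing is always possible

    def shrink(self, new_size_gb: int) -> None:
        # Data ONTAP provides no way to remove disks from an aggregate
        raise RuntimeError("an aggregate cannot be decreased in size")


class FlexVol:
    """Toy model: FlexVols resize freely within their aggregate."""
    def __init__(self, size_gb: int):
        self.size_gb = size_gb

    def resize(self, new_size_gb: int) -> None:
        self.size_gb = new_size_gb  # grow or shrink, no problem


aggr = Aggregate(1000)
aggr.add_disks(500)   # growing an aggregate works
vol = FlexVol(100)
vol.resize(50)        # shrinking a volume works
```

The asymmetry is the point: capacity planning mistakes at the volume level are cheap to fix, while mistakes at the aggregate level are permanent.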
A volume is a logical structure that has its own file system.
Thin and thick provisioning
Within a FlexVol you have the option to fully guarantee space reservation. That means that when you create a 100-GB FlexVol, Data ONTAP will reserve 100 GB for that FlexVol, and that space will be available only to this volume. This is called thick provisioning. Blocks are not allocated until they are needed, but you guarantee the space at the aggregate level the moment you create the volume.
As I mentioned in the previous paragraph, the full space guarantee is optional. A volume without that guarantee is thin provisioned: it takes only as much space as is actually needed. Back to our previous example – if you create a 100-GB thin volume, it consumes almost nothing on the aggregate at the moment it is created. If you then place 20 GB of data on that volume, only around 20 GB is used at the aggregate level as well. Why is that cool? Because it gives you the possibility to provision more storage than you currently have, and to add capacity based on actual utilization rather than on up-front requirements. It is a much more cost-efficient approach.
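The difference is easy to express as a sketch of the aggregate-side accounting. The function below is my own simplification; the guarantee values "volume" (thick) and "none" (thin) mirror the ONTAP space-guarantee settings, but the rest is illustrative:

```python
def aggregate_space_used(volume_size_gb: int, data_written_gb: int,
                         space_guarantee: str) -> int:
    """Aggregate space a volume ties up (simplified sketch).

    'volume' = thick provisioning: the full volume size is reserved
               at creation time, even before any blocks are written.
    'none'   = thin provisioning: only blocks actually written
               consume aggregate space.
    """
    if space_guarantee == "volume":      # thick
        return volume_size_gb
    elif space_guarantee == "none":      # thin
        return data_written_gb
    raise ValueError(f"unknown space guarantee: {space_guarantee}")

# The 100-GB example from the text, with 20 GB of data written:
print(aggregate_space_used(100, 20, "volume"))  # 100 (thick)
print(aggregate_space_used(100, 20, "none"))    # 20  (thin)
```

With thin provisioning the sum of all volume sizes can exceed the aggregate size, which is exactly the over-provisioning benefit described above – at the cost of having to monitor actual aggregate utilization.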
Qtrees
A qtree is another logical layer. Qtrees enable you to partition a volume into smaller segments which can be managed individually (within limits). You can, for example, set a qtree's size (by applying a quota limit on the qtree), or change its security style. You can export a qtree, a directory, a file, or the whole volume to your customers. If you export (via NFS or CIFS) a volume that contains qtrees, the customer will see those qtrees as normal directories (UNIX) or folders (Windows).
LUNs
A LUN represents a logical disk that is addressed by the SCSI protocol – it enables block-level access. From the Data ONTAP point of view, a single LUN is a single special file that can be accessed via either the FC or the iSCSI protocol.
Now, please bear in mind – this is just a brief introduction to Data ONTAP storage architecture. You can find some additional information in my other posts.