Month: July 2014

EMC VNX – Support for VMware

Unisphere, together with vCenter Server integration, makes storage management in a virtualized environment quite easy. EMC has delivered the VMware vStorage API for Array Integration (VAAI), allowing the VNX series to be fully optimized for virtualized environments. This technology offloads VMware storage-related functions from the server to the storage system, enabling more efficient use of server and network resources for increased performance and consolidation.

The VNX series also supports the VMware vStorage API for Storage Awareness (VASA), allowing the VNX storage capabilities to be mapped to vCenter Storage Profiles for creating virtual machines.

VMware ESXi servers connect to VNX block storage through FC, FCoE, or iSCSI. The block storage is provisioned to the ESXi host either for creating VMFS datastores for virtual machines or as raw device mapping (RDM) volumes presented directly to virtual machines. Note: ESXi servers can also access VNX file storage through NFS.

EMC VSI

EMC Virtual Storage Integrator (VSI) for VMware vSphere Unified Storage Management is a vSphere Client plug-in designed to simplify storage administration of the EMC VNX. The feature enables VMware administrators to provision new NFS and VMFS datastores, as well as RDM volumes, directly from the vSphere Client.

VAAI

The vStorage API for Array Integration (VAAI) is a VMware-defined, vendor-neutral storage API. It is designed to offload specific storage operations to compliant storage arrays, so the array provides hardware acceleration of ESXi storage operations. Offloading storage operations to the array reduces ESXi CPU, memory, and storage fabric bandwidth consumption, leaving more of the host’s compute power available for running virtual machines.

VAAI operations

  • Array Accelerated Bulk Zero – A virtual disk file contains both used space (data) and empty space yet to be utilized. When a VM is cloned, this “empty space” is copied too, which generates many additional SCSI operations for, essentially, empty space. Block Zeroing reduces the SCSI command set generated by making the storage array responsible for zeroing large numbers of blocks without impacting the requesting ESXi server. This is achieved by acknowledging the completion of the zero-block write before the process has completed and then finishing it in the background.
  • Full Copy – The creation of a Virtual Machine (VM) from a template is performed within the storage array by cloning the Virtual Machine from one volume to another, utilizing the array functionality and not the ESXi server’s resources.
  • Hardware Locking – VMFS volume locking mechanisms are offloaded to the array and implemented at the sub-volume level. This allows more efficient use of shared VMFS volumes by VMware cluster servers, permitting multiple locks on a VMFS volume without locking the entire volume. The ATS operation atomically (without interruption) compares an on-disk sector to a given buffer and, if the two are identical, writes new data into the on-disk sector. The ATS primitive reduces the number of commands required to successfully acquire an on-disk lock (see the sketch after this list).
  • Thin Provisioning – When a VM is deleted or migrated from a Thin LUN datastore, the space that was consumed is reclaimed to available space in the storage pool.
  • Stun and resume – When a thin LUN runs out of space due to VM space consumption, the affected VM is paused to prevent it from being corrupted. The storage administrator is alerted to the condition so more storage can be allocated.
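
To make the ATS idea concrete, here is a minimal Python sketch of the compare-and-write behavior behind hardware-assisted locking. It is illustrative only (the real primitive is the SCSI COMPARE AND WRITE command executed atomically by the array); all names below are invented.

```python
# Illustrative model of the ATS (compare-and-write) primitive.
import threading

class DiskSector:
    def __init__(self, data: bytes):
        self.data = data
        self._lock = threading.Lock()  # stands in for the array's atomicity

    def atomic_test_and_set(self, expected: bytes, new: bytes) -> bool:
        """Compare the sector to `expected`; write `new` only on a match.

        One such command replaces the reserve/read/modify/write/release
        sequence that whole-LUN SCSI reservations required.
        """
        with self._lock:
            if self.data == expected:
                self.data = new
                return True
            return False

# Two ESXi hosts racing for the same on-disk VMFS lock record:
FREE = b"lock:free"
sector = DiskSector(FREE)
print(sector.atomic_test_and_set(FREE, b"lock:host-A"))  # True  - host A wins
print(sector.atomic_test_and_set(FREE, b"lock:host-B"))  # False - host B retries
```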

VASA

The vStorage API for Storage Awareness (VASA) is a VMware-defined, vendor-neutral storage API. VASA is a VMware-specific feature and protocol that uses an out-of-band, HTTP-level protocol to communicate with the storage environment.
It is designed to let the storage array report its storage capabilities to VMware vCenter. The VNX provides capabilities for its Storage Processors, LUNs, I/O ports, and file systems. Array health status and space capacity alerts are also provided to vCenter.
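
As a rough mental model of what gets reported, the sketch below shows the kind of information a VASA provider might surface to vCenter. The structure and field names are invented for illustration; they are not the actual VASA schema.

```python
# Hypothetical shape of a VASA report from a VNX; field names are invented.
vnx_vasa_report = {
    "storage_processors": [{"id": "SP A", "health": "OK"},
                           {"id": "SP B", "health": "OK"}],
    "luns": [{"id": 17, "capability": "Performance", "capacity_gb": 500}],
    "io_ports": [{"id": "A-4", "type": "FC", "health": "OK"}],
    "file_systems": [{"name": "pfs01", "capability": "Capacity"}],
    "alerts": [{"type": "space_capacity", "object": "Pool 0",
                "message": "Pool 0 is 85% full"}],
}
print(sorted(vnx_vasa_report))  # the categories the VNX reports on
```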

A key aspect of the API integration is the ability to create and use profiles for system resource configuration. A Storage Profile can be defined for a specific VM need, and then, when performing operations such as vMotion or cloning, a profile can be substituted for an actual target device.

VASA and Storage Profiles

The system will then choose a target with the same profile and will highlight the most suitable target based on its free space.
Storage profiles can also be associated with a datastore cluster when SDRS is enabled. When the profile is part of a datastore cluster, SDRS controls datastore placement.
Storage capabilities can also be used for other tasks such as new VM creation and VM migration. They provide the ability to match virtual machine disks with the appropriate class of storage to support application I/O needs for VM tasks such as initial placement, migration, and cloning.
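
The placement rule described above can be sketched in a few lines of Python. This is a simplified illustration with invented names, not vCenter's actual selection algorithm: filter the candidate datastores to those advertising the required capability, then rank the matches by free space.

```python
# Hypothetical sketch of profile-based placement, not vCenter's algorithm.
from dataclasses import dataclass

@dataclass
class Datastore:
    name: str
    profile: str      # storage capability surfaced via VASA
    free_gb: float

def pick_target(datastores: list[Datastore], required_profile: str) -> Datastore:
    """Return the profile-compatible datastore with the most free space."""
    compatible = [ds for ds in datastores if ds.profile == required_profile]
    if not compatible:
        raise LookupError(f"no datastore offers profile {required_profile!r}")
    return max(compatible, key=lambda ds: ds.free_gb)

stores = [
    Datastore("ds-sas-01", "Capacity", 800.0),
    Datastore("ds-efd-01", "Performance", 120.0),
    Datastore("ds-efd-02", "Performance", 450.0),
]
print(pick_target(stores, "Performance").name)  # ds-efd-02
```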

EMC VNX – Accessing Snapshot Data via CVFS

CVFS is the Checkpoint Virtual File System. Remember? Within VNX SnapSure, checkpoint = snapshot. CVFS is a navigation feature that provides NFS and CIFS clients with read-only access to online, mounted snapshots from within the PFS (production file system) namespace. This eliminates the need for administrator involvement in recovering point-in-time files: the snapshots are automatically mounted and readable by end users.

CVFS Naming Convention

There is a hidden directory called .ckpt. In addition to the .ckpt_mountpoint entry at the root of the PFS, SnapSure also creates virtual links within each directory of the PFS. All of these hidden links are named .ckpt by default and can be accessed from within every directory of a PFS, as well as from its root. You can change the virtual snapshot name from .ckpt to a name of your choosing by using a parameter in the slot_(x)/param file.

The default name of a snapshot is yyyy_mm_dd_hh.mm.ss_<Data_Mover_timezone>, but that can also be changed in the param file.
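
The naming pattern is easy to reproduce in a few lines of Python (a sketch of the pattern only; the Data Mover builds the name itself, and the timestamp and file path below are invented for illustration):

```python
# Reproduce the default SnapSure snapshot name:
# yyyy_mm_dd_hh.mm.ss_<Data_Mover_timezone>
from datetime import datetime

def default_snapshot_name(ts: datetime, dm_timezone: str) -> str:
    return ts.strftime("%Y_%m_%d_%H.%M.%S") + "_" + dm_timezone

name = default_snapshot_name(datetime(2014, 7, 15, 9, 30, 5), "GMT")
print(name)  # 2014_07_15_09.30.05_GMT

# A client would then find the point-in-time copy of a file at, e.g.:
print(f"/pfs/home/user1/.ckpt/{name}/report.doc")
```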

Accessing Snapshots Using the Shadow Copy Client

CIFS clients can also access snapshot data via the Shadow Copy Client. This feature allows Windows users to access previous versions of a file via the Microsoft Volume Shadow Copy Service.

Previous Versions

Previous Versions

There is one risk, though. If you choose a snapshot in the window above and click ‘Restore’, you will restore that snapshot onto the production file system! You can disable that button, and it is actually a really good idea to do so. A new snapshot is taken right before the restore, so you can ‘undo’ such a mistake, but it can still get messy if someone restores a snapshot, edits a file or two, and only then discovers that the restore left some files missing.

EMC VNX – Writable Snapshots with SnapSure

SnapSure also gives you the ability to create writeable snapshots. Writeable snapshots can be mounted and exported as read-write file systems, and they share the same SavVol with read-only snapshots. The SavVol cannot be shrunk: it grows to accommodate a busy writeable snapshot file system, and the space cannot be returned to the cabinet until all snapshots of a file system are deleted.

You can create, delete, and restore writeable snapshots. Writeable snapshots are branched from a baseline read-only snapshot, and that baseline exists for the lifetime of the writeable snapshot. Writeable snapshots and their baselines cannot be refreshed or be part of a snapshot schedule.

In case of a restore from a writeable snapshot to a PFS, the writeable snapshot must be remounted as a read-only file system before the restore starts.

Architecture of Writeable Snapshots

Writeable snapshots have their own bitmap and blockmap, but they share the same SavVol with all SnapSure snapshots of that particular PFS (production file system).

Writeable Snapshot

WCKPT = writeable checkpoint = writeable snapshot.

  1. We have the baseline bitmap and blockmap, so we can track all the changes that happen between the baseline snapshot and the present.
  2. We have the WCKPT bitmap and blockmap, so we can track all the changes between the baseline snapshot and the writes to the writeable snapshot.
  3. Once a block is changed (on the WCKPT), the new value is saved in the SavVol. When the same block is changed again, the data in the SavVol is overwritten: if the bitmap shows 1 for the block within the write request, the blockmap locates the block in the SavVol and that block is overwritten with the new value (see the sketch below).
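
The write path in point 3 can be modeled in a short Python sketch. This is a simplified illustration with invented names and structures, not the SnapSure on-disk format: the WCKPT bitmap records whether a block has already been redirected to the SavVol, and the blockmap records where.

```python
# Simplified model of a write to a writeable checkpoint (WCKPT).
class WriteableCheckpoint:
    def __init__(self, savvol: list):
        self.savvol = savvol   # SavVol shared with the other snapshots
        self.bitmap = {}       # block_no -> True once the block is in SavVol
        self.blockmap = {}     # block_no -> position of the block in SavVol

    def write_block(self, block_no: int, data: bytes) -> None:
        if not self.bitmap.get(block_no):
            # First write to this block: allocate SavVol space, record mapping.
            self.savvol.append(data)
            self.blockmap[block_no] = len(self.savvol) - 1
            self.bitmap[block_no] = True
        else:
            # Bitmap says 1: blockmap locates the block, overwrite in place.
            self.savvol[self.blockmap[block_no]] = data

savvol: list = []
wckpt = WriteableCheckpoint(savvol)
wckpt.write_block(7, b"v1")  # first write -> new SavVol block
wckpt.write_block(7, b"v2")  # rewrite     -> same SavVol block overwritten
print(len(savvol), savvol[wckpt.blockmap[7]])  # 1 b'v2'
```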

Writeable Snapshots – Limits

  • You can have only one writeable snapshot per baseline read-only snapshot.
  • There is a maximum of 16 writeable snapshots per PFS. Writeable snapshots do not count against the 96 user snapshot limit, so altogether there can be a total of 112 user snapshots per PFS. However, remember that VNX Replicator uses snapshots that fall within this 96 limit, so if there are 95 read-only snapshots on the PFS, VNX Replicator will fail.
  • You cannot create a snapshot from a writeable snapshot.
  • You can create a writeable snapshot from a scheduled read-only snapshot. However, if the writeable snapshot still exists when the schedule executes a refresh, the refresh will fail.
  • Writeable snapshots cannot be used with VNX Replicator.
  • Writeable checkpoints cannot be created from VNX Replicator internal snapshots.
  • There is limited support for writeable snapshots with VNX FileMover. The support is limited to creation of stub files and recall of files for write operations.
  • If the amount of data written to the snapshot is large, one of two outcomes can occur (see the sketch below):
  1. The SavVol automatically extends to accommodate the writes to the snapshot. Once a SavVol is extended it cannot be shrunk; it can only be deleted and recreated, and for that you will have to delete ALL the snapshots of that PFS.
  2. If the SavVol does not extend (either auto-extension is disabled or there is no space available), SnapSure will begin invalidating older snapshots to accommodate the writes to the checkpoint. That can cause data loss.
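
The two outcomes can be summarized in a short sketch (hypothetical decision logic with invented names, not EMC's implementation):

```python
# Hypothetical sketch of SnapSure behavior when the SavVol fills up.
def handle_savvol_full(savvol_size_gb: float,
                       auto_extend: bool,
                       pool_free_gb: float,
                       snapshots: list[str],
                       extend_step_gb: float = 20.0):
    if auto_extend and pool_free_gb >= extend_step_gb:
        # Outcome 1: grow the SavVol (it can never be shrunk afterwards).
        return savvol_size_gb + extend_step_gb, snapshots
    # Outcome 2: invalidate the oldest snapshot(s) to make room,
    # losing their point-in-time data.
    return savvol_size_gb, snapshots[1:]

size, snaps = handle_savvol_full(100.0, auto_extend=False, pool_free_gb=0.0,
                                 snapshots=["mon", "tue", "wed"])
print(size, snaps)  # 100.0 ['tue', 'wed'] - oldest snapshot invalidated
```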