Month: July 2014

EMC ESRS – basic overview

ESRS stands for EMC Secure Remote Support. The main benefit of ESRS is that it enables EMC to deliver proactive customer service by identifying and addressing potential problems before they impact the customer’s business.

So what is ESRS in a nutshell?

  • Two-way remote connection between EMC and customer EMC products that enables:
    • Remote monitoring
    • Remote diagnosis and repair
  • Secure, high-speed, and operates 24×7
  • Included at no charge for supported products with the Enhanced or Premium Support options

OK, that’s a nutshell. There are three different approaches to installing/configuring ESRS.

ESRS Configurations

1. ESRS Gateway Client Configuration

First, the ESRS Gateway Client configuration. To my knowledge this is the most popular solution, mostly because it is the most universal one: it is appropriate for customer environments with a heterogeneous mix of EMC products.

For that approach you will need ESRS Gateway and possibly Policy Manager Server. Let me explain what those are.

The ESRS Gateway server provides a single instance of the ESRS application (and also a single point of failure), coordinating remote connectivity for multiple systems. In this scenario, the customer should be prepared to provide one or two Gateway servers. These can be physical servers or VMware instances running Red Hat Enterprise Linux or Windows, and must be dedicated: no additional applications should run on the Gateway. A second Gateway server is recommended for high availability.

The optional Policy Manager requires a customer-provided server, which can be physical or virtual. It can be any server with network connectivity to ESRS; however, it should not run on the same physical or virtual server as the ESRS Gateway. This server does not need to be dedicated. You can use the application to:

  • View and change the policy settings for managed EMC systems
  • View and approve pending requests from EMC to access a system
  • View and terminate remote access sessions
  • Check the audit log for recent remote activity

2. ESRS IP Client Configuration

This configuration is appropriate for customer environments that only include CLARiiON or VNX products. Keep in mind that these products also support the ESRS Gateway configuration.
Here, there is no need for a dedicated Gateway. ESRS is installed directly on the CLARiiON or VNX management station, which should be a dedicated customer-supplied physical or virtual server. This server hosts the ESRS software as well as other tools and utilities for CLARiiON and VNX products.

3. ESRS Device Client Configuration

This configuration is appropriate for customer environments that only include VNXe, VNX File and Unified, or Symmetrix products. In this scenario, there is no need for a dedicated server: ESRS is installed directly on the EMC system. Keep in mind that these products also support the ESRS Gateway configuration. Also note that VNX Block products were expected to get an ESRS Device Client option in the second half of 2013.
Just like the other ESRS configurations, best practice includes the optional Policy Manager application on a non-dedicated server to control remote support permissions and record audit logs for all activity.

Policy Manager settings

The customer has the option to pick from four possible scenarios for managing remote access: three Policy Manager settings, plus the option of running without a Policy Manager at all:

  • always allow
  • never allow
  • ask for approval
  • no policy manager

Ask for approval is the most popular option: the customer receives a notification each time EMC requests access for any remote activity, and can evaluate the situation before either agreeing to or denying the remote support session.
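
To make the decision flow concrete, here is a minimal Python sketch of how such a policy setting could gate an incoming remote-access request. The constant and function names are hypothetical, not part of any EMC API.

```python
# Hypothetical sketch of how a Policy Manager setting gates an incoming
# EMC remote-access request. Names are illustrative only.

ALWAYS_ALLOW = "always allow"
NEVER_ALLOW = "never allow"
ASK_FOR_APPROVAL = "ask for approval"

def evaluate_request(policy, customer_approves=None):
    """Return True if the remote support session may start."""
    if policy == ALWAYS_ALLOW:
        return True
    if policy == NEVER_ALLOW:
        return False
    if policy == ASK_FOR_APPROVAL:
        # The customer is notified and must explicitly approve or deny.
        if customer_approves is None:
            raise RuntimeError("pending: waiting for customer decision")
        return customer_approves
    raise ValueError("unknown policy: " + policy)
```

With “ask for approval”, nothing happens until the customer responds, which is exactly why it is the most popular setting.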


EMC VNX – SnapView Overview

The SnapView enabler gives the user access to both Snapshot and Clone technology. These two methods of making point-in-time copies are independent: they use vastly different replication mechanisms and have limits which are independent of each other.


Snapshots use pointers to indicate where data is currently located. It may be on the Source LUN (Traditional, Thick, or Thin) or may have been copied to the Reserved LUN Pool. As a result of the Copy on First Write (COFW) technology used, Snapshots may use appreciably less additional space than a full copy, such as a Clone, would use. As a rough guide, a Snapshot will use around 20% of the space occupied by its Source LUN.
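
The 20% rule of thumb above translates into a trivial sizing calculation; this is only a starting estimate, since the real number depends on the write rate and how long snapshot sessions are kept.

```python
# Rough Reserved LUN Pool sizing using the ~20% rule of thumb from the
# text. Real sizing depends on change rate and session lifetime.

def reserved_pool_estimate_gb(source_lun_gb, change_fraction=0.20):
    """Estimate space a snapshot may consume in the Reserved LUN Pool."""
    return source_lun_gb * change_fraction
```

For example, a 500 GB Source LUN would need roughly 100 GB of reserved capacity, versus the full 500 GB a Clone would require.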


Clones, on the other hand, make a full copy of the Source LUN data, and therefore use additional disk space equal to 100% of the space occupied by the Source LUN. Because Clone data can be copied back to the Source LUN, there is a requirement that Source LUN and Clone be exactly the same size. When a Clone is fractured, changes to the Clone or its Source LUN are tracked in the Fracture Log.
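
The Fracture Log can be pictured as a bitmap over fixed-size extents: any extent written on either side after the fracture is marked dirty, and only dirty extents need copying during a later synchronization. The sketch below is purely illustrative; the extent size is made up.

```python
# Illustrative model of a Fracture Log: a bitmap marking which extents
# changed after the fracture, so that a later resync only has to copy
# the dirty extents instead of the whole LUN.

class FractureLog:
    def __init__(self, lun_size_mb, extent_mb=1):
        self.extent_mb = extent_mb
        self.dirty = [False] * (lun_size_mb // extent_mb)

    def mark_write(self, offset_mb):
        """Record a write (to source or fractured clone) at this offset."""
        self.dirty[offset_mb // self.extent_mb] = True

    def extents_to_resync(self):
        """Indices of extents that must be copied on synchronization."""
        return [i for i, d in enumerate(self.dirty) if d]
```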

SnapView terminology

  • Production host – server where customer applications are executed, and source LUNs are accessed from the production host
  • Backup (secondary) host – host where backup processing occurs, offloads backup processing from production host, snapshots and clones are accessed from backup host
  • Admsnap utility – an executable program that runs interactively or with a script to manage clones and snapshots.
  • Source LUN – Production LUN, from which replicas are made
  • Activate – when a snapshot is activated, it is mapped to an available snapshot session. The snapshot is a point-in-time view of the LUN and can be made accessible to a secondary host, but not to the primary host, once a snapshot session has been started on that LUN
  • Reserved LUN Pool (RLP) – private area used to contain Copy on First Write (CoFW) data
  • Snapshot session – defines a point-in-time designation by invoking COFW activity for updates to the Source LUN
  • Chunk – the granularity at which data is copied from the Source LUN to a reserved area – 64 kB
  • Copy on First Write (CoFW) – when a chunk is changed on the Source LUN for the first time, data is copied to a reserved area
  • Fracture – the process of breaking off a clone from its source. Once a clone is fractured, it can receive server I/O requests
  • Clone group – contains a Source LUN and all of its clones
  • Clone private LUNs – used for recording which regions of the Source LUN and of a fractured clone LUN have been modified. This is what makes it possible to incrementally synchronize a fractured clone.
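
Several of the terms above (chunk, CoFW, Reserved LUN Pool) fit together as in this minimal simulation: the first write to a chunk preserves the original data in a reserved area, and snapshot reads resolve to the reserved copy when one exists. This is a conceptual sketch, not EMC’s implementation.

```python
# Minimal simulation of Copy on First Write: the first write to a 64 kB
# chunk copies the original data to the reserved area; the snapshot
# reads the reserved copy if present, else the (unchanged) source.

CHUNK_KB = 64  # granularity used by SnapView snapshots

class CofwSnapshot:
    def __init__(self, source):
        self.source = source   # chunk index -> data (the Source LUN)
        self.reserved = {}     # chunks preserved at first write

    def write_source(self, chunk, data):
        if chunk not in self.reserved:        # first write to this chunk
            self.reserved[chunk] = self.source.get(chunk)
        self.source[chunk] = data

    def read_snapshot(self, chunk):
        if chunk in self.reserved:
            return self.reserved[chunk]
        return self.source.get(chunk)
```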

SnapView features

The SnapView enabler allows the user the choice of Snapshots and/or Clones. The choice of which method to use depends on the specifics of the environment.

All Snapshot sessions are automatically persistent and save metadata to their Reserved LUNs. One advantage of Snapshots over Clones concerns logical corruption of the data: corruption on the Source is distributed immediately to any Clone that is still unfractured, so the data on both the Source and the Clone would be damaged, whereas a Snapshot’s point-in-time data would not have changed at all. Point-in-time copies are also useful for backups and other operations. If there is a need to return the Source LUN to a previous data state, both Clones and Snapshots are capable of doing so.
Clones, however, achieve their persistence by tracking changed data extents using Clone Private LUNs. They can therefore survive planned and unplanned events, such as component failures, and the loss of power.

SnapView Clones vs SnapView Snapshots

While both Clones and Snapshots are each point-in-time views of a Source LUN, the essential difference between them is that clones are exact copies of their sources (with fully populated data in the LUNs), and are not based on pointers. It should be noted that creating clones takes more time than creating Snapshots, since the former requires actually copying data. A clone can be described as a copy of the data, whereas a Snapshot is a view of the data.

Another benefit to the clones having actual data, rather than pointers to the data, is that the performance penalty associated with the Copy-on-First-Write mechanism is avoided. Clones generate a much smaller performance load on the source LUN than do Snapshots. If the source LUN has a heavy write load, especially if those writes are randomly distributed on the LUN, then a clone has a lower performance impact than a Snapshot.

Because clones are exact replicas of their source LUNs, they generally take more space than SnapView Reserved LUNs, since the Reserved LUNs only store the Copy-on-First-Write data.

An additional Clone advantage is that a clone can be moved to the peer SP for load balancing. It is automatically trespassed back to the owning SP for synchronization operations.


EMC VNX – metaLUNs

The VNX metaLUN feature allows Traditional LUNs (Pool LUNs cannot be used for metaLUNs) to be aggregated in order to increase the size or performance of the base LUN. The base LUN, which can be a regular LUN or a metaLUN, is the LUN that will be expanded by the addition of other LUNs. The LUNs that make up a metaLUN are called component LUNs.

A RAID Group is limited to 16 disks, and that places an upper limit on the size of a Traditional LUN, and on the performance which may be achieved by a single Traditional LUN. metaLUNs allow an increase of available bandwidth, throughput, or LUN capacity by adding hard disks. metaLUNs are functionally similar to volumes created with host volume managers, but with some important distinctions.

Any Traditional LUN (or metaLUN) which is not a private LUN may be expanded.

Ways to create a metaLUN

Striped expansion

This method is time-consuming and can impact performance during metaLUN creation. When extra LUN(s) are added to the base LUN, the data is re-striped across the whole stripe, meaning that data from the base LUN is partially transferred to the rest of the component LUN(s). The additional capacity is not available immediately – restriping takes time.

metaLUN – striped expansion

Striped expansion requires homogeneous components – meaning that the base LUN and the component LUNs must all have the same LUN capacity! In addition, these LUNs must have the same underlying RAID group RAID level and drive type.
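
The homogeneity rules for striped expansion can be summarized in a few lines; the field names here are illustrative, not an EMC API.

```python
# Sketch of the striped-expansion rules described above: base and
# component LUNs must match in capacity, RAID level, and drive type.
# Dict keys are illustrative only.

def can_stripe_expand(base, components):
    """Return True if all components are homogeneous with the base LUN."""
    return all(
        c["capacity_gb"] == base["capacity_gb"]
        and c["raid"] == base["raid"]
        and c["drive_type"] == base["drive_type"]
        for c in components
    )
```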

Concatenated expansion

Concatenated expansion is the process of taking an existing base LUN and appending additional component LUNs to increase capacity. The capacity of a concatenated component LUN is appended as new addressable capacity to the metaLUN’s base component. This expansion offers more flexibility by allowing heterogeneous components, meaning that the component LUNs’ capacities may differ. The RAID level underlying the component LUNs does not have to be the same.

metaLUN – concatenated expansion

There are some restrictions, though: all the components must share the same drive type.
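
The address layout of a concatenated metaLUN can be sketched as follows: each component simply begins where the previous capacity ends, with no restriping involved.

```python
# Illustrative address map for a concatenated metaLUN: each component's
# capacity is appended as new addressable space after the base LUN.

def concat_offsets(base_gb, component_gbs):
    """Return (start_gb, size_gb) for the base and each appended component."""
    layout = [(0, base_gb)]
    offset = base_gb
    for size in component_gbs:
        layout.append((offset, size))
        offset += size
    return layout
```

This is also why the new capacity is available immediately with concatenation, unlike striped expansion.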