
NetApp – Basic Administration (7-mode) screen-cast

It’s been a while since I wrote anything about NetApp. Instead of going deep into any single subject, I decided to create a ~40 minute screencast with a very basic overview of everyday NetApp administration. This video is intended for less advanced admins (or even newbies to the NetApp world), and I’m focusing mostly on the GUI – OnCommand System Manager.

I start by showing how you can access a NetApp filer – via the GUI (Graphical User Interface) or the CLI (Command Line Interface). After a very short introduction to the command line, I move on to the GUI – OnCommand System Manager.

Within the GUI I start by creating an aggregate and explaining what it is. Then I create a flexible volume that resides on this aggregate, add a qtree and set up a quota. Once this is ready, I configure a CIFS server and create CIFS shares plus additional NFS exports, mounting them from both a Windows and a Linux host. For those who prefer the command line, a rough CLI equivalent of the same flow is sketched below.
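A minimal CLI sketch of the same workflow (aggr1, vol1, qtree1, share1 and linuxhost are made-up placeholders; the disk count and size are just examples, and quota on assumes you have already put a matching rule in /etc/quotas):

filer> aggr create aggr1 -t raid_dp 16
filer> vol create vol1 aggr1 100g
filer> qtree create /vol/vol1/qtree1
filer> quota on vol1
filer> cifs shares -add share1 /vol/vol1/qtree1
filer> exportfs -p rw=linuxhost /vol/vol1/qtree1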

At the very end I give a very short introduction to iSCSI LUNs as well, but for that topic I promise to deliver a separate video.

As always – all comments are most welcome!

NetApp – Fix the “Bad Label” issue

Recently I came across a “bad label” error while replacing failed disks. My company changed the support provider for one of our systems, and the new one sent disks to replace the failed ones. Normally the DC tech makes the swap and assigns the disks to the system, but this time he called me with an issue (from /etc/messages):

Thu May 22 13:02:54 CEST [NETAPP: raid.config.disk.bad.label:error]: Disk 9.10 Shelf 6 Bay 9 [NETAPP X291_S15KXXX0F15 NA01] S/N [3QQ312Y2XXXPBW] has bad label.
Thu May 22 13:02:54 CEST [NETAPP: raid.config.disk.bad.label:error]: Disk 6.70 Shelf 4 Bay 7 [NETAPP X291_S15KXXX0F15 NA01] S/N [3QQ3097KXXX5VU] has bad label.

To fix the issue I did:

NETAPP> priv set advanced
Warning: These advanced commands are potentially dangerous; use
them only when directed to do so by NetApp
personnel.
NETAPP*> vol status -f

Broken disks

RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
bad label 6.70 0d 4 7 FC:A 1 FCAL 15000 418000/856064000 420156/860480768
bad label 9.10 0d 6 9 FC:A 1 FCAL 15000 418000/856064000 420156/860480768
NETAPP*> disk unfail -s 6.70
disk unfail: unfailing disk 6.70...
NETAPP*> Fri May 23 08:42:47 CEST [NETAPP: raid.disk.unfail.done:info]: Disk 6.70 Shelf 4 Bay 7 [NETAPP X291_S15XXX0F15 NA01] S/N [3QQ3097KXXX5VU] unfailed, and is now a spare

NETAPP*> disk unfail -s 9.10
disk unfail: unfailing disk 9.10...
NETAPP*> Fri May 23 08:43:04 CEST [NETAPP: raid.disk.unfail.done:info]: Disk 9.10 Shelf 6 Bay 9 [NETAPP X291_S15XXX0F15 NA01] S/N [3QQ312Y2XXXPBW] unfailed, and is now a spare

NETAPP*> vol status -f

Broken disks (empty)
NETAPP*> vol status -s

Pool1 spare disks

RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
Spare disks for block or zoned checksum traditional volumes or aggregates
spare 6.70 0d 4 7 FC:A 1 FCAL 15000 418000/856064000 420156/860480768 (not zeroed)
spare 9.10 0d 6 9 FC:A 1 FCAL 15000 418000/856064000 420156/860480768 (not zeroed)
NETAPP*> priv set
NETAPP> disk zero spares
NETAPP>
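The spares still show up as “(not zeroed)” until disk zero spares finishes. If you want to keep an eye on the zeroing before reusing the disks, vol status -s marks each disk that is currently being zeroed – the output below is only an illustration (the percentage is made up):

NETAPP> vol status -s
...
spare 6.70 0d 4 7 FC:A 1 FCAL 15000 418000/856064000 420156/860480768 (zeroing, 27% done)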


SnapMirror – set it up!

What is SnapMirror?

SnapMirror is a feature that enables us to replicate data. You can replicate data from a specified source volume or qtree to a destination. The destination can be on the same filer, or it can be in a completely different location, as long as there is a network connection between source and destination.

There are three modes available:

  • SnapMirror Sync – replicates data to the destination ASAP: as soon as data is written to the source volume, it is replicated to the destination.
  • SnapMirror Semi-Sync – the lag between source and destination is at most 10 seconds. This mode gives us better performance than sync mode, while the RPO (Recovery Point Objective) is still close to zero (sample config lines for both sync modes follow this list).
  • SnapMirror Async – the one you will probably meet most often. The mirror is updated on a schedule – as often as every minute or as rarely as once a month. This is the mode I will focus on in this post.
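Just for reference – sync and semi-sync relationships are also defined in /etc/snapmirror.conf, with the keyword sync or semi-sync in place of a schedule. A sketch (filer and volume names are placeholders):

filerA:vol1 filerB:vol1 - sync
filerA:vol2 filerB:vol2 - semi-sync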

How does SnapMirror work?

SnapMirror’s task is to replicate the data from a source volume (or qtree!) to a partner destination volume (or qtree). Before using SnapMirror you have to establish a relationship between the source and the destination.
In the case of SnapMirror Async you also have to set up a schedule, which goes into the /etc/snapmirror.conf file on the destination filer/vfiler. Strictly speaking, you do not have to set up a schedule, but without one the relationship will never be updated unless a storage admin triggers an update manually.
So – this is how SnapMirror works when the relationship is initialized:

  1. Creates a Snapshot copy of the data on the source volume
  2. Copies it to the destination, which becomes a read-only volume or qtree
  3. Source and destination now share a common Snapshot copy.

As you can see, when the relationship is initialized for the first time, step 2 transfers all the data – in other words, it is a baseline copy.

And this is how SnapMirror works when the relationship is already initialized and an update is executed:

  1. Creates a Snapshot copy of the data on the source volume
  2. Compares the new Snapshot copy with the last common Snapshot copy shared by the source and the destination
  3. Transfers only the data that has changed since the last update to the destination (a sample update is shown right after this list)
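An update can be triggered manually from the destination at any time, and on the source you can spot the SnapMirror base Snapshot copy with snap list. A sketch – the Snapshot name below is illustrative only, but ONTAP does build it from the destination system name, its system ID and the destination volume name:

filerB> snapmirror update destvol
filerA> snap list sourcevol
...
filerB(0101234567)_destvol.2 (snapmirror)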

Let’s set up a Volume SnapMirror async relationship

Step 1. Add the proper license to both the source and the destination filer:

filerA> license add xxxyyy
filerB> license add xxxyyy

Step 2. Turn on SnapMirror

filerA> options snapmirror.enable on
filerB> options snapmirror.enable on

Step 3. Allow access on the source (snapmirror.access vs the legacy /etc/snapmirror.allow – see the note right after the command)

filerA> options snapmirror.access host=filerB
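The legacy alternative is the /etc/snapmirror.allow file: put the destination hostname in it (one host per line) and set snapmirror.access to legacy. A sketch:

filerA> wrfile -a /etc/snapmirror.allow filerB
filerA> options snapmirror.access legacy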

Step 4. Create a source and a destination volume

filerA> vol create sourcevol aggr1 50g
filerB> vol create destvol aggr1 50g

Step 5. Restrict the destination volume (the destination volume has to be restricted before the relationship can be initialized)

filerB> vol restrict destvol

Step 6. Initialize the SnapMirror relationship

filerB> snapmirror initialize -S filerA:sourcevol filerB:destvol
Transfer started.
Monitor progress with ‘snapmirror status’ or the snapmirror log.

Step 7. Check the status. If it’s an empty volume, the initialization should go really fast, so after a minute or two you should see:

filerB> snapmirror status
Snapmirror is on.
Source              Destination      State         Lag    Status
filerA:sourcevol    filerB:destvol   Snapmirrored   00:00:45   Idle
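If you need more detail – the base Snapshot copy, transfer size, progress and so on – there is also the long form of the status command:

filerB> snapmirror status -l destvol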

Step 8. Set up the SnapMirror schedule

The SnapMirror schedule has to be set up on the destination filer, in /etc/snapmirror.conf

The syntax of a schedule entry is:

src_system:/vol/src_vol[/src_qtree] dest_system:/vol/dest_vol[/dest_qtree] arguments schedule

The arguments field can be a dash (-) to use the defaults, and the schedule itself is cron-like: four space-separated fields for minute, hour, day of month and day of week (0 = Sunday through 6 = Saturday).

A simple example, which updates the SnapMirror relationship at 10 a.m. every Monday, Wednesday and Friday, would be:

filerB> rdfile /etc/snapmirror.conf

filerA:sourcevol  filerB:destvol - 0 10 * 1,3,5
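If you don’t want to mount /etc over CIFS/NFS to edit the file, you can append the line with wrfile -a (careful: wrfile without -a overwrites the whole file):

filerB> wrfile -a /etc/snapmirror.conf filerA:sourcevol filerB:destvol - 0 10 * 1,3,5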

Summary

SnapMirror is a complex technology. In this post I presented only the simplest setup of asynchronous Volume SnapMirror. If you would like to go a little deeper into how the transfer works, how to set up QSM (Qtree SnapMirror) or what arguments you can specify, try this book:
Data Protection Online Backup and Recovery Guide