Sunday, August 29, 2010

Netapp Snapmirror Setup Guide

SnapMirror is a licensed feature in NetApp used to transfer data between filers. SnapMirror works at the volume level or the qtree level and is mainly used for disaster recovery and replication.

SnapMirror needs a source and a destination filer. (When the source and destination are the same filer, the SnapMirror transfer happens within that local filer; this is what you use to replicate volumes inside a single filer. If you need DR capability for a volume inside a single filer, look at SyncMirror instead.)


Synchronous SnapMirror is a SnapMirror feature in which the data on one system is replicated on another system at, or near, the same time it is written to the first system. Synchronous SnapMirror synchronously replicates data between single or clustered storage systems situated at remote sites using either an IP or a Fibre Channel connection. Before Data ONTAP saves data to disk, it collects written data in NVRAM. Then, at a point in time called a consistency point, it sends the data to disk.

When the Synchronous SnapMirror feature is enabled, the source system forwards data to the destination system as it is written in NVRAM. Then, at the consistency point, the source system sends its data to disk and tells the destination system to also send its data to disk.

This guide walks you quickly through SnapMirror setup and the related commands.

1) Enable SnapMirror on the source and destination filers


source-filer> options snapmirror.enable
snapmirror.enable on
source-filer>
source-filer> options snapmirror.access
snapmirror.access legacy
source-filer>
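
If snapmirror.enable reports off, it can be turned on with the same options command on both filers; a minimal sketch using the hostnames from this guide:

source-filer> options snapmirror.enable on
destination-filer> options snapmirror.enable on

With snapmirror.access left at legacy, access control falls back to /etc/snapmirror.allow, which is what step 2 below configures; setting it to an explicit host list (for example, options snapmirror.access host=destination-filer) is an alternative to the allow file.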

2) Snapmirror Access

Make sure the destination filer has SnapMirror access to the source filer. The destination filer's name or IP address must be listed in the source filer's /etc/snapmirror.allow. Use wrfile to add entries to /etc/snapmirror.allow.

source-filer> rdfile /etc/snapmirror.allow
destination-filer
destination-filer2
source-filer>
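
If an entry is missing, it can be appended with wrfile -a; a quick sketch (note that wrfile without -a overwrites the whole file, so the -a flag matters here):

source-filer> wrfile -a /etc/snapmirror.allow destination-filer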

3) Initializing a Snapmirror relation

Volume SnapMirror: Create a destination volume on the destination NetApp filer, of the same size as the source volume or larger. For volume SnapMirror, the destination volume must be in restricted mode. For example, to snapmirror a 100G volume, create the destination volume and restrict it:

destination-filer> vol create demo_destination aggr01 100G
destination-filer> vol restrict demo_destination
Volume SnapMirror creates a Snapshot copy before performing the initial transfer; this copy is referred to as the baseline Snapshot copy. After the initial transfer of all data in the volume, VSM (Volume SnapMirror) sends to the destination only the blocks that have changed since the last successful replication. When SnapMirror performs an update transfer, it creates another new Snapshot copy, compares it against the previous one to find the changed blocks, and sends those changed blocks as part of the update transfer.

SnapMirror is always driven by the destination filer, so snapmirror initialize has to be run on the destination. The command below starts the baseline transfer.

destination-filer> snapmirror initialize -S source-filer:demo_source destination-filer:demo_destination
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log.
destination-filer>

Qtree SnapMirror: For qtree SnapMirror, do not create the destination qtree; the snapmirror command creates it automatically. Creating a destination volume of the required size is enough.
Qtree SnapMirror determines changed data by first looking through the inode file for inodes that have changed, and then examining the changed inodes of the qtree of interest for changed data blocks. The SnapMirror software then transfers only the new or changed data blocks from the Snapshot copy associated with the designated qtree. On the destination volume, a new Snapshot copy is then created that contains a complete point-in-time copy of the entire destination volume, but that is associated specifically with the particular qtree that has been replicated.

destination-filer> snapmirror initialize -S source-filer:/vol/demo1/qtree destination-filer:/vol/demo1/qtree
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log.

4) Monitoring the status: SnapMirror data transfer status can be monitored from either the source or the destination filer. Use "snapmirror status" to check the status.

destination-filer> snapmirror status
Snapmirror is on.
Source Destination State Lag Status
source-filer:demo_source destination-filer:demo_destination Uninitialized - Transferring (1690 MB done)
source-filer:/vol/demo1/qtree destination-filer:/vol/demo1/qtree Uninitialized - Transferring (32 MB done)
destination-filer>
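
For more detail on a single relationship (base Snapshot copy, transfer progress, lag), snapmirror status also takes -l and an optional destination volume or qtree path; a sketch against the volume from the earlier example:

destination-filer> snapmirror status -l demo_destination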

5) SnapMirror schedule: This is the schedule used by the destination filer for updating the mirror. It tells the SnapMirror scheduler when transfers will be initiated. The schedule field can either contain the word sync, to specify synchronous mirroring, or a cron-style specification of when to update the mirror. The cron-style schedule contains four space-separated fields: minute, hour, day of month, and day of week.

If you want to sync the data on a scheduled frequency, set the schedule in the destination filer's /etc/snapmirror.conf. The time settings are similar to Unix cron. You can set up a synchronous SnapMirror schedule in /etc/snapmirror.conf by putting "sync" in place of the cron-style frequency.


destination-filer> rdfile /etc/snapmirror.conf
source-filer:demo_source destination-filer:demo_destination - 0 * * * # This syncs every hour
source-filer:/vol/demo1/qtree destination-filer:/vol/demo1/qtree - 0 21 * * # This syncs every day at 9:00 pm
destination-filer>
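
As noted above, replacing the four cron-style fields with the word sync turns the entry into a synchronous mirror. A sketch reusing the demo volume (a relationship should have only one entry in the file, so this would replace the hourly line above):

source-filer:demo_source destination-filer:demo_destination - sync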

6) Other Snapmirror commands

  • To break a SnapMirror relation - run snapmirror quiesce followed by snapmirror break (see the sketch after this list).
  • To update the SnapMirror data manually - run snapmirror update.
  • To resync a broken relation - run snapmirror resync.
  • To abort a running transfer - run snapmirror abort.
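
A quick sketch of the corresponding commands, run on the destination filer against the demo relationship from earlier (these are independent operations, not a sequence to run as-is; note that break makes the destination writable and resync discards any changes made on the destination after the break):

destination-filer> snapmirror quiesce demo_destination
destination-filer> snapmirror break demo_destination
destination-filer> snapmirror resync -S source-filer:demo_source demo_destination
destination-filer> snapmirror update demo_destination
destination-filer> snapmirror abort demo_destination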

SnapMirror also provides multipath support. More than one physical path between a source and a destination system might be desired for a mirror relationship. Multipath support allows SnapMirror traffic to be load balanced between these paths and provides failover in the event of a network outage.
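
A multipath relationship is declared in the destination's /etc/snapmirror.conf with a named connection line that lists the interface pairs, and that name is then used in place of the source filer name. A sketch, assuming hypothetical interface hostnames source-e0a/source-e0b and destination-e0a/destination-e0b for the two paths:

demo_conn = multi(source-e0a,destination-e0a)(source-e0b,destination-e0b)
demo_conn:demo_source destination-filer:demo_destination - 0 * * *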

To read about tuning the performance and speed of NetApp SnapMirror or SnapVault replication transfers and adjusting the transfer bandwidth, see Tuning Snapmirror & Snapvault replication data transfer speed.
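
One commonly tuned knob is the per-relationship throttle: the arguments field of /etc/snapmirror.conf (the "-" in the earlier examples) accepts kbs=<n> to cap a transfer at n kilobytes per second. A sketch limiting the hourly demo mirror to about 5 MB/s:

source-filer:demo_source destination-filer:demo_destination kbs=5120 0 * * *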

source: http://unixfoo.blogspot.com/2009/01/netapp-snapmirror-setup-guide.html

Sunday, August 22, 2010

How to check Symmetrix environment status (e.g. power supplies)

# ./symcfg -sid 2705 list -env_data

Symmetrix ID : 000000002705
Timestamp of Status Data : 08/23/2010 01:22:32

System Bay

Bay Name : SystemBay
Number of Fans : 3
Number of Power Supplies : 6
Number of Standby Power Supplies : 6

Summary Status of Contained Modules
All Fans : Normal
All Power Supplies : Normal
All Standby Power Supplies : Normal

Drive Bays

Bay Name : Bay-1A
Number of Standby Power Supplies : 8
Number of Drive Enclosures : 16

Summary Status of Contained Modules
All Enclosures : Normal
All Link Control Cards : Normal
All Power Supplies : Normal
All Standby Power Supplies : Normal

Bay Name : Bay-2A
Number of Standby Power Supplies : 8
Number of Drive Enclosures : 16

Summary Status of Contained Modules
All Enclosures : Normal
All Link Control Cards : Normal
All Power Supplies : Normal
All Standby Power Supplies : Normal

Common SYMCLI commands

symcfg

  • symcfg list - A brief description of all connected Symmetrix boxes.
  • symcfg -sid 1234 remove - Remove array 1234 from the symcfg list.
  • symcfg -sid 1234 list -lockn all - List all the external locks held in Symmetrix array 1234.
  • symcfg -sid 1234 -lockn 15 release -force - Release lock 15 held on array 1234.
  • symcfg -sid 1234 list -v - Display detailed information about Symmetrix array 1234.
  • symcfg -sid 1234 -dir 4a -p 0 list -addr -avail - List the LUN information and available LUN addresses on port 4a:0 of array 1234.
  • symcfg -sid 1234 list -rdfg all - List details about all the RDF groups in the array.
  • symcfg -sid 1234 list -rdfg 3 - List details about RDF group 3.
  • symcfg -sid 1234 list -rdfg all -dynamic - List details about all the dynamic RDF groups in the array.
  • symcfg -sid 1234 list -rdfg all -static - List details about all the static RDF groups in the array.
  • symcfg -sid 1234 list -ra all - List all RA ports with details such as RDF group number, remote array SID, and online status.
  • symcfg discover - Scan all devices on the host looking for new Symmetrix devices and rebuild the Symmetrix configuration database.

symdev

  • symdev -sid 1234 list - List all devices in Symmetrix 1234.
  • symdev -sid 1234 list -noport - List the devices that are not mapped to any port.
  • symdev -sid 1234 list -noport -meta - List all unmapped meta devices.
  • symdev -sid 1234 list -dynamic - List all devices with the dyn_rdf attribute set.
  • symdev -sid 1234 list -hotspare - Check whether a hot spare is invoked in the array.
  • symdev -sid 1234 show ABC - Show detailed information about device ABC.
  • symdev -sid 1234 write_disable ABC -SA all - Write-disable device ABC through all directors.
  • symdev -sid 1234 not_ready ABC -SA all - Set device ABC to not-ready through all directors.

symmaskdb

  • symmaskdb -sid 1234 -dev ABC list assign - List the masking details of device ABC.
  • symmaskdb -sid 1234 -wwn xxxxxxx list devs - List the devices masked to the given WWN.
  • symmaskdb -sid 1234 -awwn hba_alias list devs - List the devices masked to the given HBA alias.

symmask

  • symmask list hba - List the HBA details of the host.
  • symmask -sid 1234 -dir 4a -p 0 list logins - List the WWNs logged in through port 4a:0.
  • symmask -sid 1234 list logins -wwn xxx - Check whether WWN xxx is logged in to any of the FAs on array 1234.
  • symmask -sid 1234 refresh - Refresh the VCM database after a masking or unmasking operation.
  • symmask -sid 1234 -wwn xxxx -dir 4a -p 0 add devs ABC,ABD - Mask devices ABC and ABD to the given WWN on array 1234 (see the sketch after this list).
  • symmask -sid 1234 -wwn xxxx -dir 4a -p 0 remove devs ABC,ABD - Unmask devices ABC and ABD from the given WWN on array 1234.
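
Putting the masking commands together, a hypothetical sequence that presents two devices to one HBA and verifies the result (the WWN 10000000c9123456 and devices ABC,ABD are made-up placeholders):

symmask -sid 1234 -wwn 10000000c9123456 -dir 4a -p 0 add devs ABC,ABD
symmask -sid 1234 refresh
symmaskdb -sid 1234 -wwn 10000000c9123456 list devs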

symdg

  • symdg -sid 1234 list - List the device groups that include devices from array 1234.
  • symdg create mydg -type rdf1 - Create device group mydg of type rdf1.
  • symdg show mydg - Show the members and details of mydg.
  • symdg rename mydg yourdg - Rename mydg to yourdg.
  • symdg delete mydg -force - Delete device group mydg.

symld

  • symld -g mydg -sid 1234 add dev ABC DEV006 - Add the RDF device ABC to device group mydg as DEV006 (see the sketch after this list).
  • symld -g mydg remove DEV006 - Remove DEV006 from device group mydg.
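
Combining symdg and symld, a short sketch that builds an RDF1 device group from scratch and verifies its contents (device ABC is a placeholder):

symdg create mydg -type rdf1
symld -g mydg -sid 1234 add dev ABC DEV006
symdg show mydg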

symrdf

  • symrdf -sid 1234 -rdfg 3 -type rdf1 -file rdf.txt -g mydg createpair -establish - Create the SRDF pairs between the devices listed in rdf.txt on array 1234 (R1) and the remote box for RDF group 3. This command starts the sync between R1 and R2 and also adds these devices to the device group mydg (see the sample device file after this list).
  • symrdf -sid 1234 -rdfg 3 -file rdf.txt query - Query the devices using the device pair file.
  • symrdf -g mydg set mode acp_disk - Set the replication mode to adaptive copy disk.
  • symrdf -g mydg query - Query the device group.
  • symrdf -g mydg split - Split the SRDF pairs for the devices in mydg.
  • symrdf -sid 1234 -rdfg 3 -file rdf.txt deletepair -force - Delete the SRDF pairing between R1 and R2 and return the devices to standard.
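
The device file used with -file (rdf.txt above) simply lists one pair per line, local (R1) Symmetrix device first and remote (R2) device second; a hypothetical example with made-up device numbers:

0ABC 0DEF
0ABD 0DF0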

symdisk

  • symdisk -sid 1234 list -hotspare - List the hot spares configured in the array.
  • symdisk -sid 1234 list -by_diskgroup - Display all the disks in the array grouped by disk group.
  • symdisk -sid 1234 list -disk_group 1 - Display all the disks in disk group 1.

Saturday, August 14, 2010

How to rename the WWN of a host to the hostname/port# alias

Command:

symmask -sid xxxx -wwn <wwn> rename hostname/hba#
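
A hypothetical worked example with the placeholders filled in (the array ID, WWN, and alias are all made-up), assigning the alias myhost/fcs0 to that HBA's WWN:

symmask -sid 1234 -wwn 10000000c9123456 rename myhost/fcs0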