SAN bits

Seeing is believing, knowing is everything



How to find the End of Life and End of Support dates for NetApp products

There would have been many occasions when one would be looking for the EOA and EOS dates for NetApp products. There is a simpler way to find them out.



How to download the required ONTAP version for an upgrade

Here I am going to explain how to download the required ONTAP version from the NetApp support site.


You would have registered yourself on the NetApp support site with your filer serial numbers, as below.

http://mysupport.netapp.com/

(screenshot: NetApp support site)

Choose the filer model

(screenshot: filer model selection)

Choose the required ONTAP version you wish to download.

(screenshot: ONTAP version download page)



The fs_size_fixed option in NetApp

Whenever a volume is in a SnapMirror relationship, the destination volume in that relationship will have the volume option fs_size_fixed set to on; the default setting for any volume is off.

fs_size_fixed = on

This option gets enabled when SnapMirror is set up for the volume at the destination, and it remains on even after the SnapMirror relationship is broken. It exists so that the destination volume's file system stays the same size as the source volume's and cannot be resized. Make sure it is set to off if you want to resize the destination volume.
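To check and clear the option after a SnapMirror break, here is a hedged 7-Mode CLI sketch (dst_vol is a placeholder destination volume name):

```
filer> vol status dst_vol -v                  # lists the volume options, including fs_size_fixed
filer> vol options dst_vol fs_size_fixed off  # allow the broken-off destination to be resized
```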



NetApp Cluster Mode LIF – Part 1

There are some very good posts on this topic on various internet sites, but I still wanted to write the same post here because I wanted to add some basic points that I found very difficult to understand on the first read. So here I am writing another blog on Cluster Mode LIFs.

So what is a LIF, and why use them? Those questions came to my mind first. Here is the explanation in my own words and sentences.

Logical interface – why "logical"? Because it is created on a physical port like e0a, e0b, etc.

Can we have it on top of the IFGRP/VIF that we used to have in 7-Mode? Yes, we can have it on top of an IFGRP/VIF. So we create a VIF/ifgrp directly at the node level and then move to cluster management and associate it with the LIF.

Can we live without a LIF in Cluster Mode? No. A LIF is mandatory, and clients using a specific protocol can access a FlexVol only through a LIF; they cannot connect directly to an IFGRP or a specific port to get data. Even an FC port needs to be associated with a LIF.

What roles do LIFs have other than serving data?

1) Cluster LIF
2) Data LIF
3) Node Management LIF
4) Cluster Management LIF
5) Intercluster LIF
Why do we need this many LIFs? Because each has a specific function: some LIFs serve data, others handle management. Here are some explanations below.

1) Cluster LIF – These are created for intracluster traffic, so the cluster can make sure it is still alive and all the nodes are intact. They are created only on 10 GbE ports, at least one per node, and they do not move out of the node in case of failure. By default the firewall policy for a cluster LIF is cluster; this way we make sure no other traffic goes over the cluster network. The cluster automatically assigns IP addresses to any nodes that join the cluster, but if we want to assign the IP addresses manually we need to make sure they are in the same subnet.

2) Data LIF – The important LIF from the user's point of view, as this is the interface that serves data to both NAS and SAN clients. It can migrate between nodes, and one point to note is that it stays with its Vserver.

3) Node Management LIF – This is used to troubleshoot a node if it goes out of the cluster. Other use cases: AutoSupport and NTP use this LIF for their traffic.

4) Cluster Management LIF – This can be created on a data port or any node port, and it is used to manage the cluster. It can also be used to delegate rights to a vsadmin (Vserver admin) to manage a specific SVM.

5) Intercluster LIF – This is basically used for mirroring, in NetApp terminology: SnapMirror between clusters. To do a SnapMirror in Cluster Mode we need to peer the clusters, and this LIF needs to be created before the peering can be accomplished.

I will continue in the next post with the commands to create LIFs and LIF routing groups. Stay tuned!
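As a small preview, here is a hedged sketch of creating a data LIF in the clustershell (the Vserver, node, port, and address values are placeholders):

```
cluster1::> network interface create -vserver svm1 -lif svm1_data1 -role data -data-protocol nfs -home-node cluster1-01 -home-port e0c -address 192.168.1.50 -netmask 255.255.255.0
cluster1::> network interface show -vserver svm1
```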

 



NetApp Cluster Mode common terminologies

NetApp Cluster Mode is an extension of NetApp 7-Mode with some changes to the physical layout and the command set. In this blog I am going to discuss the basic differences in the terminology used by 7-Mode and Cluster Mode.

Cluster mode Terminologies


Node:

A node in Cluster Mode is a single NetApp controller. A pair of clustered 7-Mode controllers is analogous to a 2-node switchless cluster.


Vserver/SVM:

A Vserver is defined as a logical container which holds volumes. A 7-Mode vFiler is called a Vserver in Cluster Mode; with the latest firmware it is named an SVM (Storage Virtual Machine). The main difference between a vFiler in 7-Mode and an SVM in Cluster Mode is that in 7-Mode volumes, qtrees, shares, and LUNs can live directly on the physical controller without a vFiler, but in Cluster Mode they cannot exist without an SVM.
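As a hedged sketch (all names are placeholders), creating an SVM in the clustershell looks something like this:

```
cluster1::> vserver create -vserver svm1 -rootvolume svm1_root -aggregate aggr1 -rootvolume-security-style unix
cluster1::> vserver show
```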


 

LIF (Logical Interface):

As the name suggests, it is a logical interface created from the physical interfaces of the NetApp controllers. That is, we can create an ifgrp/VIF group on the controller's physical Ethernet interfaces and create a logical interface on top of the ifgrp/VIF.

Different types of LIFs in a cluster are data LIFs, cluster-management LIFs, node-management LIFs, intercluster LIFs, and cluster LIFs. The ownership of a LIF depends on the Vserver where it resides: data LIFs are owned by a data Vserver, node-management and cluster LIFs are owned by a node Vserver, and cluster-management LIFs are owned by the admin Vserver.

 


SP / BMC / RLM:

This is a physical component in the NetApp controller used for out-of-band management. It works the same in both Cluster Mode and 7-Mode, except that the commands used in Cluster Mode are different.
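For example, checking the Service Processor status differs by mode; a hedged sketch (the prompts are placeholders):

```
filer> sp status                           # 7-Mode: Service Processor status
cluster1::> system service-processor show  # Cluster Mode: SP status for the nodes
```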


 

Junction Path:

This is a new term in Cluster Mode, and it is used for mounting. Volume junctions are a way to join individual volumes together into a single logical namespace to enable data access for NAS clients. When NAS clients access data by traversing a junction, the junction appears to be an ordinary directory. A junction is formed when a volume is mounted at a mount point below the root and is used to create a file-system tree. The top of a file-system tree is always the root volume, which is represented by a slash (/). A junction leads from a directory in one volume to the root directory of another volume. In Cluster Mode a volume cannot be exported (NFS) and a share cannot be created (CIFS) until the volume is mounted at a junction point in the namespace.
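A hedged clustershell sketch of mounting volumes into the namespace (names and sizes are placeholders):

```
cluster1::> volume create -vserver svm1 -volume vol1 -aggregate aggr1 -size 100g -junction-path /vol1
cluster1::> volume mount -vserver svm1 -volume vol2 -junction-path /vol1/vol2
cluster1::> volume show -vserver svm1 -fields volume,junction-path
```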


 

Namespace:

A namespace can be defined as a common name used to access shares stored on different NAS servers or controllers. For example, in 7-Mode, if we have a CIFS server running on 2 controllers we will have two different CIFS server names, and to access a share on a specific controller a user needs to use that controller's name. With a single namespace we have one common name under which volumes from different controllers are exported as CIFS shares in a single SVM/Vserver, so a user uses a single name to access the shares.


 

Infinite Volumes:

NetApp Infinite Volume is a software abstraction hosted on clustered Data ONTAP. It provides a single mount point that can scale to 20 PB and 2 billion files, and it integrates with NetApp's proven technologies and products, such as deduplication, compression, and NetApp SnapMirror®. Infinite Volume writes an individual file in its entirety to a single node but distributes the files across several controllers within a cluster.


 

In the next post I will try to discuss the individual services and the commands to create them.



Happy New Year 2015

Wishing you a very happy New Year 2015

– SANBits




How to calculate the usable space in NetApp storage

1) When a disk is inserted into a NetApp array, the system first checks whether the disk is listed in the disk qualification file located in the /etc folder; if the disk is not part of the file, it is not accepted into an aggregate. The workaround is to add the hard disk information to the qual file (/etc/qual_devices), or, per the NetApp recommendation, use a NetApp-supported hard disk with the latest qual file downloaded from NetApp's tools site.

2) Once the disk is qualified, it is right-sized to its usable raw capacity. For example, a 450 GB SAS hard drive is right-sized to 418 GB. This can be viewed using the command sysconfig -a (7-Mode).

3) The next calculation is based on the type of aggregate we use; for a 32-bit aggregate, the size of all the RAID groups together cannot exceed 16 TB. It is also recommended to leave some space free in the RAID group for future expansion, else we will end up creating a new, smaller RAID group, which might cause performance issues. The table below shows the recommended RAID group sizes.

(table: recommended RAID group sizes)

 

RAID groups can be checked using the commands aggr status -r or sysconfig -r. For example, if we have 2 RAID-DP groups, each with 15 disks (450 GB), we can calculate the usable space using the formula (total disks − 4) × right-sized capacity, where 4 refers to the parity disks in the two RAID-DP groups. So the calculation would be 26 × 418 GB = 10868 GB.

4) Once we have the raw usable space, ONTAP takes some reservations:

  •  WAFL reserve: 10% of usable space, i.e. 10868 × 0.10 ≈ 1087 GB
  •  Aggregate snap reserve: 5% by default (can be reduced): (10868 − 1087) × 0.05 ≈ 489 GB

5) The usable space for data would now be 10868 − 1087 − 489 = 9292 GB. This information can be viewed using the command aggr show_space -g
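The arithmetic in steps 3 to 5 can be sketched in a short script. The disk counts and the 418 GB right-sized figure are just the example values from above, not universal constants, and the reserves are rounded to whole GB the way the worked numbers are:

```python
# Rough sketch of the usable-space arithmetic from steps 3-5 above.
# Example values: 2 RAID-DP groups of 15 x 450 GB SAS disks,
# each disk right-sized to 418 GB.

def usable_space_gb(total_disks, parity_disks, right_sized_gb,
                    wafl_reserve=0.10, aggr_snap_reserve=0.05):
    """Return (raw, wafl, snap, usable) in whole GB."""
    raw = (total_disks - parity_disks) * right_sized_gb   # step 3: data disks only
    wafl = round(raw * wafl_reserve)                      # step 4: WAFL reserve, 10%
    snap = round((raw - wafl) * aggr_snap_reserve)        # step 4: aggr snap reserve, 5%
    return raw, wafl, snap, raw - wafl - snap             # step 5: what is left for data

raw, wafl, snap, usable = usable_space_gb(30, 4, 418)
print(raw, wafl, snap, usable)  # 10868 1087 489 9292
```

Plugging in different disk counts or right-sized capacities reproduces the same walkthrough for other shelf configurations.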

6) This usable space can still have a reservation for dedupe metadata, which is displayed as A-SIS in the aggr show_space output.

7) If we create a new volume in the aggregate, by default 20% is reserved for snapshot space, so this again comes off as a reservation. It can be changed to zero to save space, but we end up losing snapshots when there is a space crunch. The command to view and change the snap reserve is snap reserve -V (volume) or -A (aggregate) <vol/aggr name> <%reserve>

8) Inside a volume we can reserve space in 3 different ways:

  • none – thin-provisioned volume
  • file – space is reserved when a new LUN is created
  • volume – the volume's full space is reserved from the aggregate

It is a best practice to set the volume guarantee on vol0 so that the filer stays up even if the aggregate fills up completely.

9) Then comes fractional reserve, a unique NetApp feature which by default guarantees that writes to a LUN do not fail because of space issues. By default this is 100%, that is, we guarantee writes to space-reserved LUNs. It can be changed to any percentage depending on the requirement; the setting only takes effect on volumes that contain space-reserved files or LUNs.

NetApp's recommendation is to set the fractional reserve to zero for thin-provisioned LUNs and to use the volume autogrow option to handle any space-related issues (flexible volumes).
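A hedged 7-Mode sketch of that recommendation (myvol and the sizes are placeholders):

```
filer> vol options myvol fractional_reserve 0  # stop reserving 100% overwrite space
filer> vol autosize myvol -m 1200g -i 50g on   # grow in 50g steps up to 1200g instead
```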