ONTAP Discussions topics

Baffled by ONTAP 8 Clustered Mode NFS Export Policies


I am baffled by the use of export policies. I have an NFS volume exported as follows:

netapp-clr01::> vserver export-policy rule show -policyname templates -vserver netapp-nfs01
             Policy          Rule    Access   Client                RO
Vserver      Name            Index   Protocol Match                 Rule
------------ --------------- ------  -------- --------------------- ---------
netapp-nfs01 templates       1       nfs      10.0.0.0/8            any


netapp-clr01::> vserver export-policy rule show -policyname templates -vserver netapp-nfs01  -ruleindex 1

                                    Vserver: netapp-nfs01
                                Policy Name: templates
                                 Rule Index: 1
                            Access Protocol: nfs
Client Match Hostname, IP Address, Netgroup, or Domain: 10.0.0.0/8
                             RO Access Rule: sys
                             RW Access Rule: sys
User ID To Which Anonymous Users Are Mapped: 65534
                   Superuser Security Types: any
               Honor SetUID Bits in SETATTR: true
                  Allow Creation of Devices: true

netapp-clr01::> volume show -volume templates -fields policy
vserver      volume    policy
------------ --------- ---------
netapp-nfs01 templates templates

Yet all clients are denied: 

 

netapp-clr01::> vserver export-policy check-access -vserver netapp-nfs01 -volume templates -authentication-method sys -protocol nfs3 -access-type read -client-ip 10.2.48.1 -policy templates
There are no entries matching your query.

netapp-clr01::> vserver export-policy check-access -vserver netapp-nfs01 -volume templates -authentication-method sys -protocol nfs3 -access-type read -client-ip 10.2.48.1
                                         Policy    Policy      Rule
Path                          Policy     Owner     Owner Type Index  Access
----------------------------- ---------- --------- ---------- ------ ----------
/                             default    netapp_nfs01_root
                                                   volume          0 denied

 

showmount -e looks OK:

~$ showmount -e 10.2.48.102
Exports list on 10.2.48.102:
/                                   Everyone

What am I missing here? 
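For reference, the second check-access output above shows the path "/" being evaluated against the SVM root volume's "default" policy and denied at rule index 0. NFS clients need at least read-only access through the export policy of every volume they traverse, including the SVM root volume, before they can reach a volume junctioned below it. A minimal sketch of a traversal rule on the default policy, assuming the same 10.0.0.0/8 client match is appropriate there:

netapp-clr01::> vserver export-policy rule create -vserver netapp-nfs01 -policyname default -clientmatch 10.0.0.0/8 -protocol nfs -rorule sys -rwrule never -superuser none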


ADP disk layout


If I factory reset a small dual controller FAS system with Advanced Disk Partitioning enabled and 24 disks, 20 SATA and 4 SSD, what will the resulting layout be by default?

 

How will the disks be partitioned, what will be used for aggr0, what will be available for data aggregates and what is the best practice for spare disks?
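For reference, once the system has been initialized, the resulting partition layout can be inspected with commands along these lines (a sketch; the output varies by platform and ONTAP release):

::> storage aggregate show-status
::> storage disk show -partition-ownership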

 

Thanks!

Bricked controller - need to netboot - lab environment


I managed to brick one of the controllers in my FAS2220 HA pair running 8.1.2 (7 Mode). 

 

It originally had 4 disks assigned and sat there in an active/active config with only the root vol across 3 disks and one spare.  I had to give up the DS2246 shelf attached to it for rack-space (U) requirements and basically did a p*ss-poor job of decommissioning it.  Turns out 2 of the root vol disks were located there, and the shelf is long gone now.  I have two spare disks (now unowned) on the first controller to assign to the bricked controller, but booting into ONTAP just isn't going to happen without a netboot, I fear.

 

Is there anywhere I can get the boot image from?  I don't have a support contract any more as this unit was decommissioned a while back, gutted.

 

I'm getting:

 

PANIC: raid: there are no data or parity disks in SK process rc on release
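For reference, the general shape of a netboot from the LOADER prompt is sketched below; the interface name, addresses, and web-server path are placeholders, and the netboot image must match the 8.1.2 release (images are normally downloaded from the NetApp Support site, which needs an active login):

LOADER> ifconfig e0M -addr=10.0.0.50 -mask=255.255.255.0 -gw=10.0.0.1
LOADER> netboot http://10.0.0.10/netboot/kernel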

 

Total Used vs. Total Physical Used


I recently attached a previously used additional SSD shelf to an AFF HA pair. It has different disk sizes than the existing SSDs, so I created a new stack and aggregate. Everything appears normal. I then moved several volumes to this aggregate.

 

Here's where things get strange.

 

At least two of the volumes are using >1TB of space, but the new aggregate shows only 613GB of used space, even after several days.

 

To pick one as an example, the cithqvmccpp_01p volume shows 1.28TB used in System Manager and 58.21GB of snapshot space used, leaving 938.3GB available. I also show 745.91GB of deduplication savings and 5.33GB of compression savings. The volume is thin provisioned.

 

The command line shows the following:

cithqnacl01p::> volume show-space -volume cithqvmccpp_01p

Vserver : cithqnaccpp01p
Volume  : cithqvmccpp_01p

Feature                          Used       Used%
-------------------------------- ---------- ------
User Data                        1.27TB        57%
Filesystem Metadata              683.8MB        0%
Inodes                           16KB           0%
Deduplication                    1.57GB         0%
Snapshot Spill                   58.16GB        3%
Performance Metadata             256.0MB        0%

Total Used                       1.33TB        59%

Total Physical Used              269.8GB       12%

 

Why and how is "Total Physical Used" a significant amount less than "Total Used"? If "Total Physical Used" is supposed to reflect what is actually consumed in a thin-provisioned volume, it looks inaccurate: as already noted, the volume properties show 1.28TB used, not 269.8GB. I should note there is a LUN in the volume where the data resides, but the LUN itself is not thin provisioned.
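As a rough back-of-envelope reconciliation using only the numbers quoted above (not exact accounting, since snapshots and metadata also play a part):

    1.33 TB total used ≈ 1,361.9 GB
    1,361.9 GB − 745.91 GB (dedup savings) − 5.33 GB (compression savings) ≈ 610.7 GB

which lands close to the 613 GB of used space the new aggregate reports, suggesting "Total Used" counts logical data before efficiency savings while the physical figures count blocks after them.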

 

Any help in deciphering this would be greatly appreciated!

How to Export SVM Configuration to a Single File


Hello,

 

Each SVM contains several configurations, such as the snapshot schedule, deduplication schedule, etc.

 

Currently, if I want to check the SVMs' configuration, I need to click into each SVM and check them one by one.

 

For environments with many SVMs, this task can be tedious and time-consuming.

 

Is there any method to view or export all SVMs' configuration to a single file (e.g., Excel or CSV)?
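One possible starting point from the CLI (a sketch; the field list is illustrative and can be extended, and the separator option may sit at a different privilege level depending on release):

cluster::> set -showseparator ","
cluster::> vserver show -fields vserver,type,state,snapshot-policy,quota-policy

Capturing a session transcript of such commands into a file gives a CSV-like export that opens in Excel.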

 

Any ideas, workarounds, or suggestions are welcome.

 

 

Thanks,

Tony

NetApp zoning and DR


Hello, I am new to NetApp and would like to get a few clarifications.

 

We are doing a DR test for a customer; the vCenter and VMs will be built at the time of the test, and there will be 4 ESX hosts to be recovered during the test. The LUNs are SnapMirrored.

The process would be to break the replication.

Questions:

1) Can I create one igroup and map all the LUNs to that igroup, since the ESX hosts need to see all the LUNs?

 

 

2) What is the command to see, from the NetApp side, the WWPNs of the ESX hosts once they are zoned? (See the sketch after these questions.)

 

3) Also, how should I zone the SVM LIFs to the ESX hosts? I would appreciate some sample steps.
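Regarding question 2, a hedged sketch: once the zoning is in place and the hosts have logged in, the initiator WWPNs visible to the SVM can be listed as follows (the SVM name is a placeholder):

drsvm::> vserver fcp initiator show -vserver drsvm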

 

Kindly assist as I am still learning NetApp.

 

thanks 

NetApp PowerShell Toolkit to get storage reports....


I know a good bit about PowerShell but my NetApp knowledge is weak, so I apologize if this is easy.  I have been asked by management to write a script that outputs the following:

 

These would be broken down by cluster.

 

Total storage in TB

Storage used in TB and % used

# of vols and LUNs

 

Then an overall total TB across all clusters.
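A minimal sketch using the NetApp PowerShell Toolkit (the DataONTAP module). The cluster names are placeholders, and the TotalSize/Available property names on Get-NcAggr are assumptions worth verifying with Get-Member against your toolkit version:

Import-Module DataONTAP

$clusters = 'cluster1','cluster2'     # placeholder cluster management LIFs
$grandTotalTB = 0

$report = foreach ($c in $clusters) {
    # One login per cluster; prompts for credentials
    Connect-NcController -Name $c -Credential (Get-Credential -Message $c) | Out-Null

    $aggrs  = Get-NcAggr              # capacity rolled up from the aggregates
    $totalB = ($aggrs | Measure-Object -Sum -Property TotalSize).Sum
    $availB = ($aggrs | Measure-Object -Sum -Property Available).Sum
    $usedB  = $totalB - $availB
    $grandTotalTB += $totalB / 1TB

    [pscustomobject]@{
        Cluster = $c
        TotalTB = [math]::Round($totalB / 1TB, 2)
        UsedTB  = [math]::Round($usedB / 1TB, 2)
        UsedPct = [math]::Round(100 * $usedB / $totalB, 1)
        Volumes = @(Get-NcVol).Count
        LUNs    = @(Get-NcLun).Count
    }
}

$report | Format-Table -AutoSize
"Overall total: {0:N2} TB" -f $grandTotalTB

Swapping Format-Table for Export-Csv would give management the spreadsheet-friendly version.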

 

Thanks in advance.  Any input is very appreciated.

ONTAP Recipes: Easily create a Policy for ONTAP to ONTAP to CLOUD topology using SnapCenter


ONTAP Recipes: Did you know you can…?

 

Easily create a Policy for ONTAP to ONTAP to CLOUD topology using SnapCenter

 

The recipe shows the easy steps that you can follow:

 

1. Log in to the SnapCenter UI.

2. Go to the “Settings” page > “Policies” tab.

3. You will see the list of pre-canned policies that SnapCenter supports.

 

[Screenshot: Picture1.png]

 

4. Select the “Backup to Cloud” policy and click Copy.

[Screenshot: Picture2.png]

 

5. Enter a name for the newly copied policy.

 

[Screenshot: Picture3.png]

 

6. Once the policy has been copied successfully, select it and click Modify to select the cloud bucket associated with AVA.

 

[Screenshot: Picture4.png]

 

7. While modifying, you have the option of whether or not to capture ACLs.

 

[Screenshot: Picture5.png]

8. Enter the backup creation time and the retention for the respective nodes and their tiers.

 

[Screenshot: Picture6.png]
[Screenshot: Picture7.png]

9. Scheduled daily, weekly, and monthly backups can be set and triggered on the primary ONTAP site. You can also configure these backups to be transferred from ONTAP to the cloud at the Backup Transfer Start time, chosen for a low-traffic window on the cluster.

 

[Screenshot: Picture8.png]

10. Add the hours for the transfer window and the lag time, which SnapCenter uses to monitor the backup transfer.

 

11. If the Snapshot copy is transferred within the expected time (transfer window + lag time), the volume is in a compliant state and the scheduled jobs (daily, weekly, and monthly) will be marked as Completed.

 

[Screenshot: Picture9.png]

12. Otherwise, the volume is in a non-compliant state and the scheduled job will be in a warning state.

 

[Screenshot: Picture10.png]

  

 For more information, see the ONTAP 9 documentation center and the Data Fabric Solution for Cloud Backup Workflow Guide Using SnapCenter (http://docs.netapp.com/mgmt/topic/com.netapp.doc.df-cloud-bu-wg/home.html?cp=5_4)

 

 

 


Unmapping LUN from igroup - urgent help needed


Hello NetApp engineers,

 

I am doing a DR test. In the process I did:

snapmirror break -destination-path drfcpt1:CC_hqcvcp01_SP_311_Copy_568_hqfcpt1_hq_t0_ss01_Mirror

Next, I created an igroup and mapped the LUN to the igroup:

 

lun map -vserver drfcpt1 -lun /vol/CC_hqcvcp01_SP_311_Copy_568_hqfcpt1_hq_t0_ss01_Mirror/hq_t0_ss01 -igroup DR_IGROUP -lun-id 60

 

======= 

Now for the post-test cleanup:

 

1)

lun offline -vserver drfcpt1 -path /vol/CC_hqcvcp01_SP_311_Copy_568_hqfcpt1_hq_t0_ss01_Mirror/hq_t0_ss01

 

2)lun unmap -vserver drfcpt1 -path /vol/CC_hqcvcp01_SP_311_Copy_568_hqfcpt1_hq_t0_ss01_Mirror/hq_t0_ss01 -igroup DR_IGROUP

 

3)

snapmirror resync -destination-path drfcpt1:CC_hqcvcp01_SP_311_Copy_568_hqfcpt1_hq_t0_ss01_Mirror

 

Question - 

 

1) Do I necessarily have to take the LUN offline before unmapping? If yes, do I have to bring it online again before running step 3, i.e. the snapmirror resync? Or are the steps above good as they stand?

 

2) Also, I do not see a flag to unmap the LUN by LUN ID in step 2. For example, I used LUN ID 60 when mapping the LUN; do I have to specifically mention -lun-id when unmapping?
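For reference, a hedged way to verify the LUN's state and mapping before and after each step (command availability varies slightly by release; in some versions the mapping command is "lun mapped show"):

drfcpt1::> lun show -vserver drfcpt1 -path /vol/CC_hqcvcp01_SP_311_Copy_568_hqfcpt1_hq_t0_ss01_Mirror/hq_t0_ss01 -fields state
drfcpt1::> lun mapping show -vserver drfcpt1 -path /vol/CC_hqcvcp01_SP_311_Copy_568_hqfcpt1_hq_t0_ss01_Mirror/hq_t0_ss01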

 

Thanks,


ONTAP Recipes: Easily create a Data Lake using ONTAP Storage


ONTAP Recipes:  Did you know you can…?

 

Easily create a data lake using Apache Hadoop and ONTAP storage

 

The term “data lake” can be defined as a centralized store for enterprise data, including structured, semi-structured, and unstructured data, used by multiple enterprise applications.

This recipe highlights the steps to create a data lake with Apache Hadoop on ONTAP.

 

1. Determine hardware and network requirements using the Hortonworks cluster planning guide:

https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.0/bk_cluster-planning/content/ch_hardware-recommendations_chapter.html

 

For evaluation purposes, a single server may be sufficient.

 

A two-node cluster can be configured with two servers: one master node and one worker node.

 

Larger clusters for actual production will require the following:

 

  • 1 NameNode server
  • 1 Resource Manager server
  • Several worker node servers, each running both the DataNode and NodeManager services. The number of worker nodes will depend on the desired compute capacity.  The planning guide should help with making that determination.

HA is recommended for production clusters, so you may also need a secondary NameNode server and a secondary ResourceManager server.

 

2. Determine the data set size. Since data in a Hadoop cluster can grow quickly, that size should be increased by 20% or more, based on growth projections.

 

3. Apply the HDFS replication factor to the data set size to determine storage requirements.

For ONTAP storage, a minimum replication count of 2 is acceptable. To get the storage requirements, multiply the data set size by 2.

 

4. Calculate the storage for each datanode so that the total data set is spread evenly across them (see the worked example below).
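A hypothetical sizing walk-through, with numbers invented purely for illustration:

    raw data set:            100 TB
    + 20% growth headroom:   100 TB x 1.2 = 120 TB
    x replication count 2:   120 TB x 2   = 240 TB
    across 8 worker nodes:   240 TB / 8   = 30 TB per datanode (e.g. two 15 TB LUNs each)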

 

Per NetApp SAN best practices, configure storage as follows:

 

a. Configure an SVM with the FC protocol enabled

 

b. Configure LIFs, aggregates, volumes, and LUNs to meet the storage requirements. Two LUNs per datanode, one LUN per volume, should be sufficient, as sketched below.
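A hedged CLI sketch of step b for a single datanode; the SVM, aggregate, volume, and LUN names and the sizes are placeholders:

::> volume create -vserver hadoop_svm -volume dn1_vol1 -aggregate aggr1 -size 15TB
::> lun create -vserver hadoop_svm -path /vol/dn1_vol1/lun1 -size 14.5TB -ostype linux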

 

5. Reference the Hortonworks Ambari Automated install documentation and then complete the Hadoop install:

https://docs.hortonworks.com/HDPDocuments/Ambari/Ambari-2.2.2.0/index.html

 

a. Determine which server operating system will be used and then configure your servers per the minimum system requirements.

 

b. On the storage array, create FC igroups and map storage LUNs to the datanodes.

 

c. On the datanode servers, partition the LUNs and create the file systems.

 

d. Create mountpoints on the datanodes for the new file systems.

 

e. Mount the file systems on the datanodes.

 

f. Follow the procedure outlined in the Ambari documentation for preparing the environment, configuring the Ambari repository, installing the Ambari Server, and deploying the HDP cluster. Once the Ambari Server has been installed, the deployment will be a guided, automated procedure.

 

 

After the Ambari Hadoop deployment has finished, data can be loaded into HDFS using a number of utilities, including Flume and Sqoop. We’re now able to harness all the power of ONTAP for Hadoop.

 

Below is a diagram showing an example of an ONTAP-based data lake:

[Screenshot: Picture1.png]

 

 

For more information, see the ONTAP 9 documentation center

SnapMirror Destination


I have a requirement to move some SnapMirror destination volumes from one SVM to another SVM in the same cluster (8.3.2P9). The source volume is in production use; it's just the destination volume that needs to be in a new SVM. Is it possible to move the current SnapMirror destination volume to a new SVM without any impact on the source volume?

Cannot add Cluster to SRA Arrays Manager


Hello,


I am trying to add a NetApp cluster to the VMware SRM Array Manager, but I get the error below. I am sure the username/password and IP address are correct.

 

[Screenshot: error.jpg]

 

Could somebody help me solve this?

 

- ONTAP 9.2 Simulator

- VSC 7.0

- SRA 7.0

 

 

Thanks

 

champ

Problem auditing ~snapshot


Hello,

 

I want to use auditing to detect whether anyone has accessed /~snapshot. We are using cluster mode 8.x.

When testing this, my results are erratic. Sometimes accessing a previous version of a file is captured in the audit log, but usually it is not.
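For reference, a quick way to double-check the audit configuration in play (a sketch; the SVM name is a placeholder):

cluster::> vserver audit show -vserver svm1 -instance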

Any ideas?

 

-sbennett1298

Cluster mode upgrade planning.


Hi, we have 2 datacenters: one is primary and the second is for DR. We use SnapMirror to replicate data to the DR site, and both clusters are running 8.3.2P9. We are planning to upgrade the DR cluster first to 9.2 and then, after 2-3 weeks, upgrade the primary to 9.2.

 

Here is the question: during the 2-3 weeks when the DR cluster is at the higher release, if a DR scenario happens, will we be able to reverse resync back to the primary, which is at a lower release?

 

thanks.

 

 

Resizing LUN - Just looking for some input on my process.


General info :

 

NetApp OnCommand : 8.2.5 7-Mode

   VMWare : 6.5

   Windows Server : 2012 R2

 

 

I've got a LUN that is set up specifically for one machine.  This machine has 4 drives, with a total disk space allocation that would require 1.5 TB.  The LUN it resides on is currently 2.6 TB.  I want to resize the LUN so I can get back the 1.1 TB of space that's not needed.  I've shut off the snapshots that used to be there and removed any old ones that existed.  Regardless of what I do, the NetApp still says I'm using the 2.6 TB of disk space, which is likely because of the prior blocks that have been written as well as where the data was written.

 

My thoughts :

 

Vmotion the storage to another LUN.

 

I'm thinking I would have to create another LUN that could hold the 1.5 TB total disk space for the vMotion.  I'd make a LUN of 1.6 TB; yes, I would have 100 GB extra, but that's much better than wasting 1.1 TB.  Once vMotioned over, the unused space would be freed up and not written to the new LUN.

 

I'm thinking I'll then have to delete the old 2.6 TB LUN and volume to completely recover the disk space.
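Before deleting anything, a hedged 7-Mode sketch for confirming where the space is actually going (the volume and LUN names are placeholders):

filer> df -h /vol/lun_vol
filer> snap list lun_vol
filer> lun show -v /vol/lun_vol/lun0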

 

To me this sounds straightforward, but that's what worries me.  I'm sure there are many ways to accomplish this task, but is what I'm suggesting one of the more plausible solutions?  Am I missing something I'm not thinking about?

 

This is the first time I've posted here, so forgive me if I've misplaced this request. 

 

Above all, thanks in advance for your constructive input.

 

Let me know, thx.


modify cluster lif with Vifmgr offline


Is there a way to modify a cluster LIF IP address while VIFMgr is offline?

NetApp Release 8.2.4 Cluster-Mode

 

thanks in advance

Adding a TDP volume once cutover into an existing SVM-DR relationship


To All -

 

I'm just curious how others are solving this problem: prestaging a new SVM-DR relationship, then migrating a TDP volume into the primary online SVM and breaking it, so the volume is now r/w on the primary.

 

The SVM-DR relationship will not pick up the new volume, even with a break and resync.

 

What is the proper workflow to set this up?

 

 

How to Move from 7-Mode to Cluster ONTAP


I'm trying to figure out which firewall ports need to be opened to SnapMirror data from a 7-Mode filer to a clustered ONTAP filer.

 

I couldn't find any documentation about this configuration. I wouldn't be the first to run into this, so I'm surprised I couldn't find much information on it.

 

From C-Mode ONTAP documentation:

Clustered ONTAP uses port 11104 to manage intercluster communication sessions and port 11105 to transfer data.

 

From 7-Mode documentation:

SnapMirror source binds on port 10566. The destination storage system contacts the SnapMirror source storage system at port 10566 using any of the available ports assigned by the system. The firewall must allow requests to this port of the SnapMirror source storage system.

 

Over what ports does the intercluster LIF on the C-Mode cluster nodes communicate with a 7-Mode filer to SnapMirror the data?

 

Is there a definitive list of ports to be opened for snapmirroring from 7-Mode to cDOT filers?

 

TIA

hw_assist error


We are seeing a few occurrences of the error below.

 

Error: bind failed to port 4444 on IP address xx.xx.xx.xx. Error 49.  The IP was not in our 192.0.x.x range. I called the original installer of the NAS unit and he made some adjustments, but we are still getting the timeout error.

 

 

Do you guys have any ideas?

 

nas-01
    Partner           : nas-02
    Hwassist Enabled  : true
    Hwassist IP       : 134.xxx.xxx.97
    Hwassist Port     : 4444
    Monitor Status    : active
    Inactive Reason   : -
    Corrective Action : -
    Keep Alive Status : Error: did not receive hwassist keep alive alerts from partner.
nas-02
    Partner           : nas-01
    Hwassist Enabled  : true
    Hwassist IP       : 134.xxx.xxx.98
    Hwassist Port     : 4444
    Monitor Status    : active
    Inactive Reason   : -
    Corrective Action : -
    Keep Alive Status : Error: did not receive hwassist keep alive alerts from partner.
2 entries were displayed.

nas::*> storage failover hwassist test -node nas-01

Info: No response from partner(nas-02).Timed out.
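If the configured hw-assist addresses are wrong for your network, a hedged sketch of re-pointing them and re-testing (hw-assist normally uses the partner's node-management IP; the address below is a placeholder):

nas::*> storage failover modify -node nas-01 -hwassist-partner-ip 192.0.2.12
nas::*> storage failover hwassist test -node nas-01
nas::*> storage failover hwassist show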

Exchange 2013 File Share Witness on NetApp


We are running an Exchange 2013 cluster, 2 nodes, on Windows 2012 R2.  The MSCS quorum uses a File Share Witness on a third Windows server.  Does anybody have any documentation showing that MS Exchange 2013 supports using a CIFS File Share Witness for quorum on a NetApp CIFS server?  We are running a simple ONTAP 8.3.2 CIFS SVM and already have 12+ SQL clusters using NetApp FSWs, supported with no issue.
