Channel: ONTAP Discussions topics

Usable Space FAS 2552 C-mode


Hi,

 

 

We have a newly installed NetApp FAS2552 two-node cluster with twelve 1.2TB HDDs, running ONTAP 9.1 (clustered). What is the best practice to get the maximum usable space?

 

Currently 6 disks are assigned to each node, and each node gives me 2.56TB of usable space. How can I get the maximum space?
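For reference, a few ONTAP 9.1 commands that show how the disks and aggregates are currently laid out before deciding how to rebalance them (run from the clustershell; no names need to be filled in):

storage aggregate show
storage aggregate show-spare-disks
storage disk show -fields owner,container-type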

 

 

 

need help


Latency issue with Windows Failover Cluster role failover and FAS2240-2/Data ONTAP 8.1.4 7-Mode


Greetings,

 

I've found an issue involving our specific filer model and ONTAP version (FAS2240-2/ONTAP 8.1.4 7
-Mode) with a new implementation that we're testing, and I'm hoping that someone could provide
some thoughts. When using LUNs created on this filer in this implementation, manually failing over
a file server role in a two-node Server 2016 Windows Failover Cluster using in-guest iSCSI
consistently takes around 12 minutes for the failover between WFC nodes to complete. During the
failover between these WFC nodes, a LUN reset request is sent by the MS iSCSI initiator to our
filer, and the connection to the disk is reestablished within the Windows environment.

 

I have tested the same configuration on an old filer/ONTAP version (FAS2020/ONTAP 7.3.5.1) and we
do not experience the 12 minute failover time. The failover of the file server role happens within
seconds, as expected. The only part of the configuration that changes to reproduce the long
failover time is which filer the Windows source and destination disks are hosted on.

 

The implementation is a newer Microsoft block-level replication technology called Storage Replica.
Our configuration involves two Windows Server 2016 DCE nodes in a Windows Failover Cluster, with
each node using the in-guest MS iSCSI initiator and SnapDrive 7.1.4 x64. Each node is connected to
one separate LUN for data (2TB) and one separate LUN for logging (25GB), making four LUNs total,
each thin-provisioned with SnapDrive. The four disks are then added to the Windows Failover
Cluster and a File Server role is created using one of the 2TB disks as the source disk.
Replication is then successfully enabled between the identically-sized disks using the Storage
Replica wizard, to create a source and destination for replication. The role is supposed to
failover to the other node (destination) within seconds, but this operation takes around 12
minutes on our specific filer and ONTAP version. As stated previously, the long failover does not
happen on an older filer, with an older ONTAP version.

 

We have a total of four FAS2240-2 filers; each pair is in an HA configuration and resides at a different physical site. I have tested hosting the storage in this configuration across the physical sites and have also isolated the configuration to each individual site, and I consistently get the same long failover time of the file server role with the FAS2240-2/ONTAP 8.1.4 7-Mode
filers. The older filer is a FAS2020 pair in an HA configuration, running ONTAP 7.3.5.1. The long
failover time does not happen when hosting the storage in this configuration on the older filer.

 

Since we are currently on 8.1.4 7-Mode, we are unable to get support because the version falls
under EOVS. We intend to move to a newer version when possible so we can open a support case. In
the meantime, however, we've been scratching our heads on this one and are hoping that someone on
the NetApp forums has some ideas/thoughts. I would be happy to answer any additional questions.

 

Thanks!

REBOOT (panic) WARNING - lost all SnapVault schedules


Hello Community,

 

I have a filer that panicked and rebooted due to a power issue. After the reboot I noticed that all of the SnapVault schedules have disappeared. Is there a way to look at previous AutoSupports/configs that may contain these schedules, so I can use them to recreate the schedules?

 

Thanks,

7-mode, CIFS, local accounts and SnapMirror


Previous config: IBM N6210, ONTAP 8.1.3P3, 7-mode, no multistore license

 

Current setup: NetApp FAS8020 ONTAP 8.2.4P6 7-mode, no multistore license

 

Previous and current setup:

 

  • CIFS shares located on site A are accessed using a local FAS account, i.e. 'cifs_user'
  • Site A volumes are SnapMirror replicated to Site B
  • On site B, SnapMirror destination volumes are shared out using a local FAS account identically named to the site A account, i.e. 'cifs_user'

 

Scenario:

 

  1. SnapMirrors were broken and data written into the shares in site B under the site B local account 'cifs_user'
  2. The volumes were then replicated back to site A and the site A volumes made r/w again

Issue:

 

In site A, the data written to the shares whilst in site B is not accessible (permission denied) after mirroring back to site A.

 

From my perspective this should never have worked, so I'm not after any evidence to support this. However I am told that it has worked under the 'previous configuration' mentioned at the top of this post so I am struggling to find an answer as to how it could have possibly worked previously. For example, have there been any changes to ONTAP code that means the newer version is 'stricter' with ACL permissions?

 

Any ideas at all on how this could have worked?

Unable to get Netapp Cluster 9.1 cluster, node and aggregate related data


I am getting the following errors while retrieving cluster, node, aggregate, LUN, and user-role related data.

 

Device OS version: NetApp Release 9.1

 

com.netapp.nmsdk.ApiExecutionException: Unable to find API: cluster-identity-get
        at com.netapp.nmsdk.client.ApiRunner.run(ApiRunner.java:141)
        at com.netapp.nmsdk.client.ApiRunner.run(ApiRunner.java:105)
        at cluster.TestNetappCluster.go(Unknown Source)
        at cluster.TestNetappCluster.main(Unknown Source)
com.netapp.nmsdk.ApiExecutionException: Unable to find API: system-node-get-iter
        at com.netapp.nmsdk.client.ApiRunner.run(ApiRunner.java:141)
        at com.netapp.nmsdk.client.ApiRunner.run(ApiRunner.java:105)
        at cluster.TestNetappCluster.go(Unknown Source)
        at cluster.TestNetappCluster.main(Unknown Source)
com.netapp.nmsdk.ApiExecutionException: Unable to find API: cluster-peer-health-info-get-iter
        at com.netapp.nmsdk.client.ApiRunner.run(ApiRunner.java:141)
        at com.netapp.nmsdk.client.ApiRunner.run(ApiRunner.java:105)
        at cluster.TestNetappCluster.go(Unknown Source)
        at cluster.TestNetappCluster.main(Unknown Source)
com.netapp.nmsdk.ApiExecutionException: Unable to find API: aggr-get-iter
        at com.netapp.nmsdk.client.ApiRunner.run(ApiRunner.java:141)
        at com.netapp.nmsdk.client.ApiRunner.run(ApiRunner.java:105)
        at cluster.TestNetappCluster.go(Unknown Source)
        at cluster.TestNetappCluster.main(Unknown Source)
com.netapp.nmsdk.ApiExecutionException: Unable to find API: aggr-list-info
        at com.netapp.nmsdk.client.ApiRunner.run(ApiRunner.java:141)
        at com.netapp.nmsdk.client.ApiRunner.run(ApiRunner.java:105)
        at cluster.TestNetappCluster.go(Unknown Source)
        at cluster.TestNetappCluster.main(Unknown Source)
com.netapp.nmsdk.ApiExecutionException: Unable to find API: lun-list-info
        at com.netapp.nmsdk.client.ApiRunner.run(ApiRunner.java:141)
        at com.netapp.nmsdk.client.ApiRunner.run(ApiRunner.java:105)
        at cluster.TestNetappCluster.go(Unknown Source)
        at cluster.TestNetappCluster.main(Unknown Source)

com.netapp.nmsdk.ApiExecutionException: Unable to find API: security-login-get-iter
        at com.netapp.nmsdk.client.ApiRunner.run(ApiRunner.java:141)
        at com.netapp.nmsdk.client.ApiRunner.run(ApiRunner.java:105)
        at filer.Cluster.go(Unknown Source)
        at filer.Cluster.main(Unknown Source)

 

Is there any configuration I need to change?
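For what it's worth, some of the failing calls (aggr-list-info, lun-list-info) are 7-Mode ZAPIs that do not exist in clustered ONTAP 9.1, while the clustered equivalents (aggr-get-iter and the rest) are also reported as missing, which usually points at where and as whom the requests are being sent. A quick check from the clustershell that the API user has ONTAPI access and that the target address is the cluster-management LIF:

security login show -application ontapi
network interface show -role cluster-mgmt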

 

Kindly help me.

 

Thanks

 

 

Clustered Ontap LIF Status


A simple command-line tool to watch the network activity on your ONTAP Logical Interfaces.

 

https://github.com/robinpeters/cdot-lif-status

 

## Example:
```
$ ./lifstat.pl 
Option c required 
Option u required 
Option p required 
Usage :: 
  -c|--cluster : Cluster Name to get Logical Interface Status. 
  -n|--node       : Node Name [To limit the lif stats to node level]. 
  -u|--username   : Username to connect to cluster.       Example : -u admin 
  -p|--passwd     : Password for username.                Example : -p netapp123 
  -v|--vserver    : Vserver Name [To limit the lif stats to vserver level]. 
  -i|--interval   : The Interval in seconds between the stats. 
  -h|--help       : Print this Help and Exit! 
  -V|--version    : Print the Version of this Script and Exit!


$ ./lifstat.pl -c 10.10.10.10 -u username -p password 
Node             UUID        R-Data R-Err   R-Pkts       S-Data S-Err   S-Pkts  LIF-Name             
node-01          1065       1748060     0      187            0     0        0  nfs_lif3             
node-01          1024           976     0        5          564     0        5  node-01_clus1     
node-02          1014           564     0        5          976     0        5  node-02_clus2     
node-01          1049           492     0        4          392     0        4  node-01_clus3     
node-02          1051           392     0        4          492     0        4  node-02_clus3     
node-02          1066           220     0        1            0     0        0  nfs_lif4             
node-02          1013           128     0        2          128     0        2  node-02_clus1     
node-01          1023           128     0        2          128     0        2  node-01_clus2     
node-02          1064             0     0        0            0     0        0  nfs_lif4             
node-01          1045             0     0        0            0     0        0  iscsi_lif2           
node-01          1026             0     0        0            0     0        0  node-01_mgmt1     
node-02          1047             0     0        0            0     0        0  iscsi_lif4           
node-02          1033             0     0        0            0     0        0  DFNAS02DR_nfs_lif2   
node-01          1038             0     0        0            0     0        0  cifs_lif1        
node-01          1032             0     0        0            0     0        0  DFNAS02DR_nfs_lif1   
node-02          1058             0     0        0            0     0        0  smb3cifs_lif02       
node-01          1057             0     0        0            0     0        0  smb3cifs_lif01       
node-01          1063             0     0        0            0     0        0  nfs_lif3             
node-02          1056             0     0        0            0     0        0  node-02_nfs_lif_1 
node-01          1053             0     0        0            0     0        0  node-01_icl1      
node-01          1048             0     0        0            0     0        0  iscsi-mgmt       
node-02          1035             0     0        0            0     0        0  lif2         
node-02          1039             0     0        0            0     0        0  cifs_lif2        
node-01          1034             0     0        0            0     0        0  lif1         
node-02          1046             0     0        0            0     0        0  iscsi_lif3           
node-01          1025             0     0        0            0     0        0  cluster_mgmt         
node-02          1027             0     0        0            0     0        0  node-02_mgmt1     
node-02          1054             0     0        0            0     0        0  node-02_icl1      
node-01          1059             0     0        0            0     0        0  smb3cifs_mgmt        
node-01          1062             0     0        0            0     0        0  coecifs1             
node-01          1044             0     0        0            0     0        0  iscsi_lif1           
node-01          1052             0     0        0            0     0        0  svm2_lif1            
node-01          1050             0     0        0            0     0        0  svm1_lif1            
node-02          1055             0     0        0            0     0        0  nfs_lif_1 
^C
```

FlexClone

Before the clone is split off, it shares the same files with its parent volume, which means it owns the same inodes, unless new files were added after the creation of the clone.
My question: is there any way to tell how many inodes are shared and how many are owned by the clone itself?
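For what it's worth, a rough way to gauge how much is still shared (a sketch, assuming a 7-Mode style CLI and a clone named testvol_clone): the split estimate reports the space that would have to be copied to split the clone off, i.e. the blocks still shared with the parent, and df -i shows inode usage on the clone itself.

vol clone split estimate testvol_clone
df -i testvol_clone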

Thank you!

SnapVault secondary volume, inline compression and deduplication


Hi

 

FAS8020, 8.2.4P6, 7-mode

 

I've created my SnapVault destination volumes using the following method:

 

vol create sv_ca_testvol -s none aggr0 94g
snap sched sv_ca_testvol 0 0 0
snap reserve sv_ca_testvol 0
sis on /vol/sv_ca_testvol
sis config -C true -I true /vol/sv_ca_testvol
sis config -s manual /vol/sv_ca_testvol

 

tr-3958 says this:

 

The manual schedule is an option for SnapVault destinations only. By default if deduplication and postprocess compression are enabled on a SnapVault destination it will automatically be started after the transfer completes. By configuring the postprocess compression and deduplication schedule to manual it prevents deduplication metadata from being created and stops the postprocess compression and deduplication processes from running.

 

Why am I not seeing any inline compression or deduplication savings on the SnapVault destination volumes?
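For reference, two 7-Mode commands (same volume name as above) that report the efficiency configuration and any savings actually achieved on the destination volume: sis status -l confirms whether compression and inline compression are enabled, and df -S reports the savings.

sis status -l /vol/sv_ca_testvol
df -S sv_ca_testvol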

 

Thanks


ONTAP Recipes: Deploy a MongoDB test/dev environment on DP Secondary


ONTAP Recipes: Did you know you can…?

 

Deploy a MongoDB test or development environment on Data Protection Secondary storage

 

Before you begin, you need: 

  • AFF/FAS system as primary storage
  • FAS system as secondary storage (unified replication for DR and vault)
  • Snapmirror license on both systems
  • Intercluster LIF on both systems
  • MongoDB Replica Set production system
  • Server to mount the clone database (cloning server)

 

  1. Set up the cluster peering between primary and secondary systems.
  2. Establish the SVM peering between the SVM that holds your MongoDB production database on the primary system and the SVM on the secondary system that will act as the vault system.
  3. Initialize the relationship to get your baseline snapshot in place.
  4. On the unified replication system, identify the volume(s) which contain the MongoDB replica set LUNs, identify the snapshot that reflects the version you want to clone, and create a FlexClone based on that snapshot.
  5. Map the cloned LUNs to the cloning server. You don’t have to map LUNs from primary and all secondaries, just pick one (for example, primary member of the replica set) and map its LUNs to the cloning server.
  6. Mount the cloned LUNs filesystem.
  7. Create a MongoDB config file on the cloning server, similar to the one that already exists in the production environment, except that the dbpath option should point to the cloned LUNs' filesystem and the replication section of the config file should be excluded (see the command sketch after this list).
  8. Connect to your cloned MongoDB database.
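A minimal command sketch for steps 1 through 5 (cluster, SVM, volume, snapshot, and igroup names below are hypothetical; cluster peer create is run on both clusters, each pointing at the other's intercluster LIF addresses):

cluster peer create -peer-addrs <secondary_intercluster_LIF_IP>
vserver peer create -vserver svm_mongo_prod -peer-vserver svm_vault -peer-cluster <secondary_cluster> -applications snapmirror
snapmirror create -source-path svm_mongo_prod:mongo_vol -destination-path svm_vault:mongo_vol_dst -type XDP -policy MirrorAndVault
snapmirror initialize -destination-path svm_vault:mongo_vol_dst
volume clone create -vserver svm_vault -flexclone mongo_vol_clone -parent-volume mongo_vol_dst -parent-snapshot <snapshot_name>
lun map -vserver svm_vault -path /vol/mongo_vol_clone/mongo_lun -igroup cloning_server_ig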

 

For more information, please see the ONTAP 9 Documentation Center.

 

How can I change the IP address of "ONTAP select Deploy(9.2RC)" DNS server?


I want to change the IP address of the DNS server that was configured when ONTAP Select Deploy was set up.

I checked the user guide and the command list on the appliance itself, but I could not find a command to change it.

 

 # DNS server information is written in /etc/network/interface

 # I tried to modify it, but this file is READ ONLY:

    # edit /etc/network/interface

 

Could you tell me how to solve it?

 

cluster peer show - availability pending


I am adding nodes to a cluster running ONTAP 9.1P2. I added the nodes and then added intercluster interfaces on each of the new nodes; a firewall rule update is required, so I removed the interfaces for now.

 

 

I am not sure of the status before, but now all cluster peer relationships show an availability status of pending, even after verifying that the cluster is still replicating and that the cluster peer address list is good.

 

 

There are no offers, so I can't do anything to change this status. How can I get the cluster peer availability back to Available?
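For reference, the commands that show the peering state in more detail once the intercluster LIFs are back in place (run from the clustershell):

network interface show -role intercluster
cluster peer show -instance
cluster peer health show
cluster peer ping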

Architecture Solution - Question


One of my friends reached out to me for a solution to the assignment below. Can someone answer the questions?

 

 

You have been tasked with architecting a NetApp storage solution for a new application environment. The environment consists of an Oracle database and CIFS shares for holding multimedia image files.

 

  • The long term plan for this storage environment is to host multiple customer environments with the cluster growing across multiple FAS nodes in the future. Keep this in mind when planning this implementation, to take advantage of Netapp storage features and efficiencies.

 

  • You have 2 * FAS8080 heads
  • It has been decided that each server will only run a single protocol SAN or NAS.

 

Firstly, the oracle database will serve a heavily transactional application.

 

The database will be mounted on a Linux cluster (linux-1 & linux-2) with the following mount points.

 

/DB/data (Oracle datafiles) – 1024 GB

/DB/redo (Oracle online redo logs) – 100 GB

/DB/arch (Oracle archived redo logs) – 300 GB

 

As this is a heavily transactional database, it is critical that writes to the redo area have very low latency. Writes to the archive area are less critical in terms of latency, but the DBAs often request that /DB/arch grow several times in size when they have to keep many more archive logs online than usual. Therefore /DB/arch needs to be expandable to 1.5 TB when asked. After a day or so, they'll delete the logs so you can reclaim the space. The data area must handle quite a large IOPS rate.

 

To keep things simple, assume:

 

  • The storage will be mounted by 2 (Linux) hosts.
  • Standard Active/Passive Veritas clustering

 

Secondly the CIFS environment will require a 10 TB CIFS share along with a 40 TB share.

 

The 10 TB CIFS share will be used for initial storage of the image files while they are manipulated and analysed, so it has a high-performance, low-latency requirement. The 40 TB share will be used for long-term storage, with storage efficiency and capacity more important than performance.

 

1) How many shelves would you buy of what type and why? 

2) How would you configure your physical environment and why?

ONTAP Recipes: Send ONTAP EMS messages to your syslog server


ONTAP Recipes: Did you know you can…?

 

Send ONTAP EMS messages to your syslog server


1. Create a syslog server destination for important events:


event notification destination create -name syslog-ems -syslog syslog-server-address


2. Configure the important events to forward notifications to the syslog server:


event notification create -filter-name important-events -destinations syslog-ems


The default “important-events” filter works well for most customers.
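To verify the configuration afterwards, the matching show commands can be used:

event notification destination show
event notification show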

 

 

For more information, see the ONTAP 9 EMS Configuration Express Guide in the ONTAP 9 Documentation Center.

 

 

Failed Disk Not Rebuilding / Prefail State Several Weeks


Greetings,

 

I have a clustered system that I recently took over for one of our customers; it is running 8.2.2P1.

These are older 6240 systems.

 

The problem is with one of the aggregates not rebuilding a failed disk or performing the copy for a prefailed disk.

(I am not positive about the prefail; I am still trying to determine how to validate that data is being copied.)

 

1. The system appears to have adequate spares of the same type.

2. I noticed that the aggregate is failed-over to the partner node. 

3. Several aggregate scrubs are in the suspended state (one is the RAID group with the failed disk), but the two with prefailed disks are not currently scrubbing.

 

I have not taken any action at this point other than investigating.

 

Questions:

1. Could the scrub process be causing this?

2. Would the aggregate not being on the home node be an issue? (though I've seen rebuilds in this scenario)
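For reference, commands that show spare availability and reconstruction/copy progress (node and aggregate names are placeholders):

storage aggregate show-spare-disks -original-owner <node_name>
system node run -node <node_name> -command aggr status -r <aggr_name>
system node run -node <node_name> -command sysconfig -r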

 

I appreciate any help, as I have never seen this before. Disk rebuilds are one of those NetApp things that just happen!

 

 

 

Thanks in advance!

 

Ken

 

 


LUN full


Hello,

 

A thick LUN has filled up, and this caused it to go offline. It's a VM VHD file LUN. We managed to bring it back online and delete some data from the LUN from within Windows, but it keeps going offline due to lack of space.

 

How can I make the filer aware that we've just deleted 80GB of data, so the LUN doesn't keep going offline? We've just upgraded our hosts to Server 2012 R2, we haven't yet installed SnapDrive, and apparently we're not authorised to download it from the NetApp site now.

 

Can we do the space reclamation thing without SnapDrive?

 

We have a FAS2020 running Data ONTAP 7.3.2.
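For reference, a few 7.3.2 commands that show where the space in the containing volume is actually going, which is usually why a space-reserved LUN gets taken offline (volume name is a placeholder):

df -r <volname>
snap list <volname>
lun show -v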

 

Thanks!

VSM (TDP) fails consistently at 342.5GB


Has anyone ever experienced a Volume SnapMirror fail at the exact same spot consistently?

 

This is the very last mirror as part of a migration project. All nine other VSM mirrors transferred from the same source to the same destination just fine.

 

During the initialization phase the transfer gets to 342.5GB and then the source reports that the SnapMirror failed, with just a generic message. The destination continues to say transferring for another 30 minutes before it finally stops with a failed message (also generic). The source volume is only using 5% of its inodes and 80% of its storage space. It is not deduped or compressed.

 

I have tried multiple things to troubleshoot. I have deleted the SnapMirror and the volume on the destination and created them again. I have created the destination volume at twice the size and started the mirror. Everything I have tried stops at the same 342.5GB.

 

On the source I created three QSMs to a bogus volume and those QSMs finished just fine.

 

The Source    =   NetApp Release 8.1.4P9D18

Destination  =   NetApp Release 9.1P2

ONTAP 8.1.X VSES AntiVirus Scanning 7-MODE


Hello,

 

I am very new to NetApp, so I will do my best to explain our current setup.

 

Release Version - 8.1.4 7-Mode

Model - 3210 (x2 HA Pair)

VSES - 1.2.0.163 (Mcafee Enterprise Storage 2xScanning Servers)

 

We have created a private interface group for the AV traffic - I added the private network IPs to the vFilers.

 

We have also set the following vscan variables on the filer (do they need to be set at the vFiler level too?):

 

VSCAN OPTIONS
TIMEOUT = 10 SECONDS
ABORT_TIMEOUT = 50 SECONDS
MANDATORY_SCAN = OFF
CLIENT_MSGBOX = OFF

 

SCANNER Policy

TIMEOUT = 40
THREADS = 150

 

Nothing is enabled as of yet; we had issues previously with file locks, and I believe this was down to mandatory_scan = on (now off!).

 

The question is, how do I limit what is scanned? Do I have to enable vscan on both the filer and the vFiler, and add the private network IPs to the McAfee ePO policy?
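For reference, the 7-Mode commands involved in listing and limiting which extensions get scanned, run in the vFiler context (the vFiler name and extension below are examples):

vfiler run vfiler1 vscan extensions include
vfiler run vfiler1 vscan extensions include add docx
vfiler run vfiler1 vscan on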

 

Also, if anyone could recommend any changes to our vscan details above it would be appreciated.

 

Thank you,

Phil

SnapLock Enterprise - is it possible to destroy aggregate?


Hello,

We have a small 9.1 filer with SnapLock Enterprise enabled. During configuration we overlooked the requirements for privileged deletes - that they require a SnapLock Compliance volume and aggregate. We are in UAT now and I need to delete all the files written to the volume so far. I have only one spare disk. Can I destroy the SnapLock Enterprise aggregate? (Then I could create it again with fewer disks, giving me enough disks for an extra aggregate.)

Error 500 Servlets not enabled In Fas 2020


Hello Team,

 

I am new to configuration, and after I finished setting everything up, the following error was shown in the GUI session:

 

Error 500

Servlets not enabled

 

NetApp Release 7.2.4L1. So please help me out, thanks in advance.

 

Best Regards,

P Uday Prasad
