Channel: ONTAP Discussions topics

ONTAP 9: Active Directory Authentication Failed


I'm trying to set up AD authentication so that AD administrators can access the CLI and System Manager using their AD accounts.

 

1. I've run CIFS setup and added a data SVM to AD; the SVM is called 'svm-hostname', and the computer account (CIFS server) is called 'hostname-cifs'

2. I've run the command >security login domain-tunnel create -vserver svm-hostname

3. I've then run the command '>security login create -vserver hostname -user-or-group-name "AD SEC GRP" -application ontapi -authentication-method domain -role admin'

4. I've repeated the above for ssh and http (full sequence recapped below)
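
For reference, here is a minimal sketch of the full sequence as I understand it should look. The 'DOMAIN' prefix is a placeholder for your NetBIOS domain name; in my reading of the docs, the group name usually needs that prefix:

security login domain-tunnel create -vserver svm-hostname
security login create -vserver hostname -user-or-group-name "DOMAIN\AD SEC GRP" -application ssh -authentication-method domain -role admin
security login create -vserver hostname -user-or-group-name "DOMAIN\AD SEC GRP" -application ontapi -authentication-method domain -role admin
security login create -vserver hostname -user-or-group-name "DOMAIN\AD SEC GRP" -application http -authentication-method domain -role admin

Logins would then be attempted as DOMAIN\username, e.g. ssh "DOMAIN\jdoe"@<cluster-mgmt-IP>.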

 

hostname::> security login show

 

Vserver: hostname

                                                                 Second

User/Group                 Authentication                 Acct   Authentication

Name           Application Method        Role Name        Locked Method

-------------- ----------- ------------- ---------------- ------ --------------

AD SEC GRP     http        domain        admin            -      none

AD SEC GRP     ontapi      domain        admin            -      none

AD SEC GRP     ssh         domain        admin            -      none

admin          console     password      admin            no     none

admin          http        password      admin            no     none

admin          ontapi      password      admin            no     none

admin          service-processor

                           password      admin            no     none

admin          ssh         password      admin            no     none

autosupport    console     password      autosupport      no     none

 

 

I've tried various ways of logging in with my AD account, but I still keep getting access denied - any ideas?

 

Is it because the AD computer name ('hostname-cifs') is different from the data SVM name ('svm-hostname')?

 

Thanks


LIF failover policy

Hi everybody,
I was wondering whether the system-defined failover policy will fail LIFs over to the HA partner when the cluster consists of only two nodes that form an HA pair.
My understanding is that this policy only migrates LIFs to ports on the home node or on non-SFO (non-HA) partner nodes. Since it is the default policy, that sounds problematic in a single-HA-pair cluster...
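
For anyone comparing options: the failover targets and policy can be inspected and changed per LIF. A minimal sketch (SVM and LIF names are placeholders):

network interface show -vserver svm1 -lif nas_lif1 -failover
network interface modify -vserver svm1 -lif nas_lif1 -failover-policy broadcast-domain-wide

The broadcast-domain-wide policy allows failover to any port in the LIF's broadcast domain, including ports on the HA partner, which seems like the safer choice in a two-node cluster.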
Thanks in advance!

New TR Released: TR-4668-0318-Name_Services_Best_Practices_Guide_ONTAP_9.3


1 Overview

The NetApp ONTAP operating system provides the ability to unify clients under a single namespace by way of storage virtual machines (SVMs). These SVMs can live on clusters that are up to 24 nodes in size. Each SVM provides the ability to offer individualized LDAP, NIS, DNS, and local file configurations for authentication purposes. These features are also known as “name services.”

External servers can provide replicated copies of databases containing user information, such as UID, GID, group membership, home directory, and other information, as well as netgroup and name resolution capabilities. These external servers make it possible to manage large environments that span global locations without extra administrative overhead and with the ability to reduce WAN latency by providing localized copies of databases to clients and servers.
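
As a quick illustration of the per-SVM configuration the TR describes, here is a minimal sketch of pointing one SVM at its own DNS and LDAP sources (all names and addresses are placeholders):

vserver services name-service dns create -vserver svm1 -domains example.com -name-servers 192.0.2.10,192.0.2.11
vserver services name-service ldap client create -vserver svm1 -client-config ldap1 -ad-domain example.com -schema AD-IDMU
vserver services name-service ldap create -vserver svm1 -client-config ldap1
vserver services name-service ns-switch modify -vserver svm1 -database passwd -sources files,ldap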

 

For more info, please check here

New TR Released: TR-4669-HCI File Services Powered by ONTAP Select


NetApp® ONTAP® Select extends the NetApp HCI product, adding a rich set of file and data services to the platform. This technical report details how to successfully execute post-installation tasks to configure an ONTAP Select instance for NetApp HCI. Detailed information about the advanced configuration of the ONTAP Select appliance can be found in the ONTAP Select 9 Installation and Cluster Deployment Guide and the ONTAP Select Product Architecture and Best Practices documents.

 

For more info, please check here

Snapmirror logs


Hi - I am running ONTAP (cDOT) 9.1P7.

I need to gather information for every SnapMirror transfer which has completed and is still in the logs. I want to generate a single text file with the following fields:

RelationshipID:StartTimestamp:FinishTimestamp:TransferSize:TransferStatus

 

There are many logs, and I can't seem to find any clear 'Start' and 'End' entries in them

 

Can anyone help me identify these fields so I can write some code to extract the data?
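
As a starting point (not the full history, but the fields line up), the most recent transfer per relationship is exposed through the CLI; field names below are from the 9.x man pages:

snapmirror show -fields source-path,destination-path,last-transfer-size,last-transfer-duration,last-transfer-end-timestamp,last-transfer-type,status

For the complete history, I believe you would still need the node-scoped SnapMirror audit logs (under /etc/log/ on each node, reachable via the SPI web interface), since the command above only keeps the last transfer.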

 

Thanks!

 

About compatibility


Hi,

 

We are going to upgrade ONTAP from 8.3.2 to 9.1. We are checking compatibility with the host HBA but could not find information about this HBA on the NetApp site.

 

Operating system: Windows 2008 R2

HBA: AK344A

HBA driver version: Stor Miniport 9.1.8.17

Driver firmware version: 5.01.02

 

Thanks,

Tuncay

 

Snap Mirror Procedure


Dear Experts,

 

I am new to this community, and I am looking for the steps to create a SnapMirror (site-to-site) replication. Both of our sites are identical: AFF700 arrays running 9.3P1, which we use as unified arrays (block & file).

 

I'd appreciate any help, or any technical KB article that shows the procedure (a rough sketch follows below).
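
In the meantime, a minimal sketch of the usual volume-level sequence, run from the destination cluster (all cluster, SVM, and volume names are placeholders; it assumes cluster and SVM peering are already in place via cluster peer create / vserver peer create):

destination::> volume create -vserver svm_dr -volume vol1_dst -aggregate aggr1 -size 1TB -type DP
destination::> snapmirror create -source-path svm_prod:vol1 -destination-path svm_dr:vol1_dst -type XDP -policy MirrorAllSnapshots -schedule hourly
destination::> snapmirror initialize -destination-path svm_dr:vol1_dst

The same pattern applies to both NAS and SAN volumes on a unified array.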

 

Thank You 

Storage Professional 

Snapshots get stuck and require manual intervention


Hi, I am an end user; I do not manage storage. I know we are on ONTAP 9.1P5.

 

 

We keep daily snapshots for 3 days, so each volume has a SnapMirror snapshot plus the three 'daily' snapshots maintained. The dailies roll off after 3 days automatically.

 

 

We have situations where a volume will reach 100%. 

 

Sometimes when this happens, the volume will get stuck. I was told internally that "the volume will not snapmirror sync anymore due to lack of space to create the update snapshot".

 

When this happens, snapshots stop working, SnapMirror stops, and snapshots do not roll off automatically.

 

After a few days pass, the .snapshot subdirectory still shows the SnapMirror directory and 'daily' snapshot directories that are far more than 3 days old.

 

The fix requires human intervention: space must be added to the volume to allow the snapshot process to operate.

 

My question is: why was it designed like this? In other words, why must disk space be added? Why can't the existing snapshots, which are (by this time) more than 3 days old, just roll off automatically? That rolloff would (very likely) release free space to the volume.

 

It seems odd to me that the system cannot self-manage, even when the disk reaches 100% usage.
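
(From what I have since read, ONTAP does have self-management options for this, but they have to be enabled per volume by the storage team. A minimal sketch, with placeholder names:

volume autosize -vserver svm1 -volume vol1 -mode grow -maximum-size 2TB
volume snapshot autodelete modify -vserver svm1 -volume vol1 -enabled true -trigger volume -delete-order oldest_first

With these set, the volume can grow itself and/or delete its oldest snapshots before it fills, instead of waiting for a human.)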

 

please help me understand this, thank you!!

 

 



Does extending a LUN 1.5TB that is not Thin provisioned take an extended time?


Basically my question is this: I have a LUN that I need to extend. The volume it is in has ample space, and the LUN is not thin provisioned.

 

I have a FAS2220. If I go into the OnCommand GUI, click Edit on my LUN, increase the size there, and then click Save, what exactly does this do at that point?

 

I am trying to do this live, and I just want to make sure I am not going to bog anything down or take anything offline while it is "creating" the additional space for the LUN. Does it zero anything out, or does it simply define the space reserved for the LUN on disk, making it basically instant?

 

Also, I understand that I will have to go to my host cluster and extend the volume in Disk Management there, but I just want to make sure that expanding the LUN in OnCommand won't negatively impact anything running.
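
For reference, the CLI equivalent of the GUI edit is a single command; a minimal sketch (the path is a placeholder, and on 7-Mode the form is 'lun resize /vol/vol1/lun1 2t'):

lun resize -vserver svm1 -path /vol/vol1/lun1 -size 2TB

To the best of my knowledge, growing a space-reserved LUN only extends the reservation metadata inside the volume (nothing is zeroed), so it completes almost instantly and does not take the LUN offline. The host-side extend in Disk Management is then a separate, equally online step.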

ONTAP 9.2P1 - Not Enough Spare Disks Event after Drive Failure


I lost a drive this morning and the newest event states:  "There are not enough spare disks".  Is there an event that shows the status of a spare being called in to take over for the failed drive?

 

Event

 

 

4/1/2018 08:48:00   node2      ERROR         monitor.globalStatus.nonCritical: Disk on adapter 0a, shelf 10, bay 15, failed.  There are not enough spare disks.

 

 

Actual Spare Disks

 

aus-cluster1::> aggr show-spare-disks

Original Owner: node1
 Pool0
  Root-Data1-Data2 Partitioned Spares
                                                              Local    Local
                                                               Data     Root Physical
 Disk             Type   Class          RPM Checksum        Usable   Usable     Size Status
 ---------------- ------ ----------- ------ -------------- -------- -------- -------- --------
 1.10.10          SSD    solid-state      - block                0B  53.88GB   3.49TB zeroed
 1.10.11          SSD    solid-state      - block                0B  53.88GB   3.49TB zeroed

Original Owner: node2
 Pool0
  Root-Data1-Data2 Partitioned Spares
                                                              Local    Local
                                                               Data     Root Physical
 Disk             Type   Class          RPM Checksum        Usable   Usable     Size Status
 ---------------- ------ ----------- ------ -------------- -------- -------- -------- --------
 1.10.23          SSD    solid-state      - block                0B  53.88GB   3.49TB zeroed
3 entries were displayed.
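
For checking what the failed disk and spares are actually doing, these two commands may help (the aggregate name is a placeholder):

aus-cluster1::> storage disk show -broken
aus-cluster1::> storage aggregate show-status -aggregate aggr1

show-status reports per-RAID-group positions, so a spare that has been pulled in should appear as a reconstructing position in the affected RAID group.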

NetApp Zoning with 3 Hosts and 2 Nodes


Hey all,

 

I have a short question about the zoning chapter.

Sometimes I read that 1:1 (single-initiator) zoning is recommended, and sometimes I hear that each host port should be zoned with all FC LIFs of the SVM.

Can you tell me what is right for our situation?

 

I have a cluster with 2 nodes and an SVM that spans both nodes. Each node has 4 FCP ports: 0c, 0d, 0e, 0f, so I have a total of 8 Fibre Channel ports. My SVM has 8 Fibre Channel LIFs homed across those nodes. Those 8 ports are split between 2 fabrics, so I have 4 ports on each fabric. Each of my three hosts has a dual-port FC card attached to both fabrics.
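
From what I have read so far, the usual recommendation is single-initiator zoning: one zone per host port, containing that initiator plus all of the SVM's target LIF WWPNs on the same fabric (zones reference LIF WWPNs, not LUNs; LUN access is controlled separately by igroups). With our layout that would be three zones per fabric, each holding one host port and the four SVM LIFs on that fabric. The WWPNs to zone can be listed with (SVM name is a placeholder):

network interface show -vserver svm1 -data-protocol fcp -fields wwpn

Is that the right reading for our situation?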

 

Thank you

 

 

 

Ndmp Error


Hi,

 

We are using Networker to back up to tape with NDMP. The backup starts with no problem, but after some time it fails.

 

NetApp log: "Error (restore path construction for source inode number has been interrupted due to an abort"

 

Networker log:

"42573:nsrndmp_recover:ssid'4105862079': Error reading. System Error: Connection reset by peer 
42572:nsrndmp_recover:ssid'4105862079': Timeout reading 
42572:nsrndmp_recover:ssid'4105862079': Timeout reading 
42589:nsrndmp_recover:ssid'4105862079': reply message for sequence 47 is not received. 
42590:nsrndmp_recover:ssid'4105862079': Timeout to receive any message from server. 
42856:nsrndmp_recover:ssid'4105862079': NDMP data server has an internal error. 
42871:nsrndmp_recover:ssid'4105862079': Error during File NDMP Extraction. 
42866:nsrndmp_recover:ssid'4105862079': Failed to close the tape device: communication failure 
42596:nsrndmp_recover:ssid'4105862079': data stop: communication failure. 
42840:nsrndmp_recover:ssid'4105862079': NDMP recover failed. 
42880:nsrndmp_recover:ssid'4105862079': Error during NDMP recover 
16279:recover: NDMP retrieval: child failed with status of 1"

 

Thanks,

LIF and Aggregate best practice


 

Hello Guys,

 

Can anyone help me with this please?

 

 

LIF 

 

I am configuring a 2-node FAS2650 cluster; each node has 4x 10GbE ports - e0c, e0d, e0e, e0f.

SMB: e0c and e0e teamed as ifgrp a0a on both nodes

iSCSI: e0d and e0f teamed as ifgrp a0b on both nodes

Management: 1GbE e0M on both nodes for node management

Can I use e0M for both node management and cluster management?

 

I am looking to configure 2 SVMs (1 for iSCSI and 1 for SMB). I have VLANs 14 and 15 for SMB and VLAN 18 for iSCSI. What is the best practice for creating the broadcast domains, failover groups, and LIFs, and how do I assign them to the SVMs (see the sketch below)?
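
To make the question concrete, here is a minimal sketch of what I believe the sequence looks like for one SMB VLAN (all names and addresses are placeholders; as I understand it, creating the broadcast domain automatically generates a matching failover group):

network port vlan create -node node1 -vlan-name a0a-14
network port vlan create -node node2 -vlan-name a0a-14
network port broadcast-domain create -broadcast-domain smb_vlan14 -mtu 1500 -ports node1:a0a-14,node2:a0a-14
network interface create -vserver svm_smb -lif smb_lif1 -role data -data-protocol cifs -home-node node1 -home-port a0a-14 -address 192.0.2.20 -netmask 255.255.255.0

For iSCSI the pattern repeats on a0b-18, except that iSCSI LIFs do not fail over; you create one LIF per node and let host MPIO handle path loss.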

 

Aggregate

 

4x SSD and 44x SAS in total: the FAS2650 has 4x SSD and 20x SAS internally, plus one disk shelf with 24x SAS.

 

I am planning to use the 4 SSDs as a RAID4 Flash Pool (to create a hybrid aggregate).

 

What would be the best practice for assigning the disks to each controller and for creating the aggregates?

 

Regards

 

 

 

 

 

MirrorAndVault usage


Hi

 

For too many years we've been using DP SnapMirror relationships to maintain identical copies of volumes between source and destination. Given we have a policy of keeping multiple weeks of online backups, this makes for some overhead on the source side.

 

I have been looking into switching to XDP policies so as to reduce the source snapshot retention whilst maintaining the higher number on the destination side. I believe this would have the added benefit of version independence, but it isn't the greatest for DR, on the basis that it's not straightforward to make such a destination volume read/writable.

 

I came across MirrorAndVault yesterday, which on the face of it appears to be the best of both worlds: a mirrored snapshot of the active file system plus the flexibility of higher numbers of snapshots on the destination for online backups.

 

What I'd like help clarifying is:

 

  • Would a MirrorAndVault destination just need a snapmirror quiesce/break in order to make it read/writable in a DR scenario?
  • Are MirrorAndVault relationships version independent? I.e., can the source ONTAP be upgraded to a higher family version than the destination?

 

Are there any other pros/cons to weigh up?
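
For the DR part specifically, here is a minimal sketch of what I have in mind (paths are placeholders; it assumes the predefined MirrorAndVault policy is available on the release in use):

snapmirror create -source-path svm1:vol1 -destination-path svm2:vol1_dst -type XDP -policy MirrorAndVault -schedule daily
snapmirror initialize -destination-path svm2:vol1_dst

And in a DR event:

snapmirror quiesce -destination-path svm2:vol1_dst
snapmirror break -destination-path svm2:vol1_dst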

 

Currently we are running 9.1p8

 

Thanks for your time.

 

Cheers,


Bob


Is there any way to check data / File names inside a Lun thru the Netapp 7 mode CLI ?


Hi,

 

I want to know whether there is any way to check the data / file names inside a LUN through the NetApp 7-Mode CLI.

 

Thanks :)

 

 

NetAppDocs v3.1P1 errors


WARNING: The Hardware-ONTAP file is out of date. An update should be downloaded from the NetAppDocs Community site and copied to the folder: C:\Program Files (x86)\NetApp\NetAppDocs PowerShell Module\NetAppDocs\Resources

 

Where can I get the latest download of NetAppDocs? I use ICT to collect data on 7-Mode filers, and I used to use NetAppDocs 3.1P1 for cDOT. If NetAppDocs has been retired, what is the replacement?

Flexgroup mount slow to advertise


Has anyone noticed that, from the time you mount a FlexGroup and create a CIFS share, it takes a long while before you can access it?

 

I created a test FlexGroup, mounted it to a junction path, and created a CIFS share. It took over 30 minutes before Windows would allow me to connect to the share.

 

I unmounted and remounted it with a different junction path name, and the same thing happened again.

 

I have a 100GB FlexGroup across two aggregates with the default member-volume multiplier of 4 (8 constituents x 12.5GB each).

"network port ifgrp rename" Command missing?


I created a new interface group to change from single-mode to multimode_lacp, moved all logical interfaces over, and deleted the old interface group.

 

Now I want to rename the new "a0b" to "a0a" again, but there is no command "network port ifgrp rename".

 

For automation reasons, I need the interface group to be named "a0a" again; the only workaround I can think of is sketched below.
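
That workaround would be to repeat the whole dance: migrate the LIFs off again, delete the group, and recreate it under the old name. A minimal sketch (node, port, and LIF names are placeholders, and VLANs/broadcast-domain membership would need re-creating on top):

network interface migrate -vserver svm1 -lif lif1 -destination-node node1 -destination-port e0c
network port ifgrp remove-port -node node1 -ifgrp a0b -port e0a
network port ifgrp remove-port -node node1 -ifgrp a0b -port e0b
network port ifgrp delete -node node1 -ifgrp a0b
network port ifgrp create -node node1 -ifgrp a0a -distr-func ip -mode multimode_lacp
network port ifgrp add-port -node node1 -ifgrp a0a -port e0a
network port ifgrp add-port -node node1 -ifgrp a0a -port e0b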

 

Any ideas?

 

New TR Released: TR-4670 FPolicy Solution Guide for ONTAP: IntraFind

$
0
0

1 Introduction

NetApp® FPolicy™ is a file access notification framework that allows users to monitor file access over the NFS and CIFS protocols. This feature was introduced in NetApp clustered Data ONTAP® 8.2, a scale-out architecture that enables a rich set of use cases with partners. The FPolicy framework requires that all nodes in the cluster run Data ONTAP 8.2 or later. FPolicy supports all SMB versions, including SMB 1.0 (also known as CIFS), SMB 2.0, SMB 2.1, and SMB 3.0, as well as the major NFS versions, NFSv3 and NFSv4.0.

The FPolicy framework natively supports a simple file-blocking use case, which enables administrators to restrict end users from storing unwanted files. For example, an administrator can block audio and video files from being stored in data centers, which saves precious storage resources. The native feature blocks files based only on the file extension; for more advanced capabilities, partner solutions must be considered.
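
As an illustration of that native, extension-based blocking, here is a minimal sketch of blocking .mp3 files on one SVM (all names are placeholders):

vserver fpolicy policy event create -vserver svm1 -event-name ev_block -protocol cifs -file-operations create,rename
vserver fpolicy policy create -vserver svm1 -policy-name block_mp3 -events ev_block -engine native
vserver fpolicy policy scope create -vserver svm1 -policy-name block_mp3 -volumes-to-include "*" -file-extensions-to-include mp3
vserver fpolicy enable -vserver svm1 -policy-name block_mp3 -sequence-number 1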

The FPolicy framework enables partners to develop applications catering to a diverse set of use cases. The use cases include, but are not limited to, the following:

  • File screening

  • File access reporting

  • User and directory quotas

  • HSM and archiving solutions

  • File replication

  • Data governance

For more info, please check here
