
Usage of new ONTAP software without ONTAP OS support


Hi!
Does anyone have experience with the legal use of new ONTAP P-releases, i.e., running new ONTAP versions after the ONTAP software support has expired?

As far as I know, using a major release is allowed if that release was published while the system was still under ONTAP software support. Does the same apply to patch (P) releases?
For example: a system is running 9.8.x and we want to update to the newest 9.12.x or 9.13.x patch release.


Thanks.


Multi-Admin Verify Query


Hello,

I am testing the MAV (Multi-Admin Verify) feature. I just stumbled across the limitation that it doesn't work with SnapCenter yet, so that leaves our CIFS servers. I set up a rule with the following parameters:

 

Operation "volume snapshot delete"

Query "-vserver <vserver>" (where "<vserver>" is the CIFS Server)

 

However, my test SnapCenter job, which runs against volumes in a different SVM, still kicks off multiple MAV requests when it tries to auto-delete older snapshots outside of the retention period. Did I miss something in the syntax? Any advice?
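For reference, roughly the commands I used, a minimal sketch with a placeholder SVM name:

security multi-admin-verify rule create -operation "volume snapshot delete" -query "-vserver svm_cifs"
security multi-admin-verify rule show -operation "volume snapshot delete"

My expectation was that the -query would restrict MAV approval to snapshot deletions in svm_cifs only.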

SnapDiff roadmap


We are using SnapDiff for our NetApp NAS backups. SnapDiff was disabled after we upgraded ONTAP to 9.10; after talking to NetApp, we had to re-enable it manually. NetApp also confirmed that SnapDiff is going to be discontinued in version 9.18.

Not sure why NetApp is doing this. Is there any replacement for SnapDiff after 9.18?

 

We are using Rubrik CDM to back up our NAS data. Some CIFS shares are really big, with many millions of small files. Without SnapDiff, the metadata scan needed to compute the difference between two backups takes a few hours, which slows down our NetApp and generates user complaints about access.

 

We are worried about these backups after version 9.18, when SnapDiff is no longer available.

 

I'd appreciate hearing from anyone who shares these concerns and has a plan to address them.

Thanks,

Warranty for MetroCluster


Hello Guys,

I have a question regarding warranty extensions for NetApp.

We have a NetApp MetroCluster and recently wanted to purchase a warranty extension for it. As you know, a MetroCluster contains multiple components: filers, ATTO bridges, Brocade switches, etc. The user wants to know whether they can purchase warranty coverage for different components from different vendors. I know NetApp has restrictions on its hardware and doesn't allow other vendors to touch it; if you do, you lose NetApp support. But the boundary is not quite clear to me: does NetApp treat the ATTO and Brocade parts as its own hardware? If so, does this mean I also need to purchase warranty coverage for them from NetApp?

Much appreciated if anyone can provide some advice!

MFA for NetApp ONTAP SSH with Azure


We need to set up MFA for NetApp ONTAP SSH and have found it can be done with YubiKeys and the like. Given tight security and client requirements, is there a way to drive SSH authentication from Azure directory services (similar to SAML for the web interfaces)?
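For comparison, the only on-box SSH MFA I've found documented chains two local methods. A minimal sketch with placeholder names, assuming the -second-authentication-method parameter available in recent ONTAP releases:

security login create -user-or-group-name mfa_admin -application ssh -authentication-method publickey -second-authentication-method password -role admin
security login publickey create -username mfa_admin -index 0 -publickey "ssh-ed25519 AAAA... admin@jumphost"

What I'm after is replacing one of those factors with Azure-backed authentication, similar to the SAML option for the web UIs.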

Can we create or delete LIFs in FSx for ONTAP deployed in AWS?


Can we create or delete LIFs in an Amazon FSx for NetApp ONTAP file system deployed in AWS?

Unable to configure S3 protocol - "An eligible broadcast domain was not found" error


Greetings to All,

 

Trying to configure the S3 protocol on a NetApp AFF C250 running ONTAP 9.12.1P1, I got the error message below.

 

An eligible broadcast domain was not found, and the network interface could not be added. I validated the list of IPs on both cluster nodes (e0c and e0d) with the commands network port broadcast-domain show and network port show.

 

Node 1 is up and healthy on e0c and e0d.

Node 2 is up and healthy on e0c and e0d.

 

I used the network port IPs of node 1 e0c and node 2 e0c while creating the SVM and got the above-mentioned error; I tried the e0d pair as well. I have been unable to fix the error and would appreciate anyone sharing experience or knowledge that could resolve it. Thank you.
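For anyone reproducing this, the checks I ran, plus a sketch of creating the interface manually from the CLI instead of the wizard (addresses and names are placeholders; my understanding, which should be verified, is that the LIF's service policy must also include the data-s3-server service):

network port show -fields ipspace,broadcast-domain
network port broadcast-domain show
network interface create -vserver svm_s3 -lif lif_s3_1 -service-policy default-data-files -home-node node-01 -home-port e0c -address 192.0.2.10 -netmask 255.255.255.0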

 

Issue connecting to network paths


We have two CIFS shares on a vserver.

 

Both were working until now, but since yesterday only one of the paths connects at a time with its respective credentials.

 

We get the error below if we try to connect to the other share:

Error: The network folder specified is currently mapped using a different user name and password.
To connect using a different user name and password, first disconnect any existing mapping to this network drive.

 

The issue may be due to trying to connect to the same server with different credentials; both share1 and share2 paths were working until the issue occurred yesterday. Kindly help us resolve this, as it will impact our scheduler.

 

 

I suspect that earlier one CIFS share was connecting through IP1 and the other through IP2 (or vice versa), which allowed both shares to be connected from the same source server. Currently both connections go to the same IP, either IP1 or IP2.

 

I have tested the scenario, and it worked when I connected one share via IP1 and the other via IP2 with the respective service accounts.

 

Please advise on a fix, since per our norms we should not use IP addresses directly; the naming convention should be used.
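One workaround I am evaluating, sketched below: publish a second DNS name (e.g., a CNAME) for the SVM and map each share through a different name, so Windows treats them as two distinct servers (hostnames and accounts are placeholders):

net use X: \\svmcifs.example.com\share1 /user:DOMAIN\svc_account1 *
net use Y: \\svmcifs-alt.example.com\share2 /user:DOMAIN\svc_account2 *

For Kerberos to keep working, the alternate name would likely also need to be registered as an SPN for the CIFS server's machine account.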


REST API: renaming the cluster management LIF fails via na_ontap_rest_cli


Hello everyone,

 

I am trying to rename the cluster management LIF with Ansible via na_ontap_rest_cli and get an error message back.

 


 

Does anyone know the solution to this problem?

 

- name: run ontap rest cli command
  netapp.ontap.na_ontap_rest_cli:
    hostname: "{{ dhcp_node_a }}"
    command: 'network/interface/rename'
    verb: 'PATCH'
    params: { 'vserver': 'nas-p01' }
    body: { 'lif': 'cluster_mgmt', 'newname': 'nas-p01_mgmt' }
FAILED! => {"changed": false, "msg": "Error: {'message': 'Field \"lif\" is not supported in the body of a PATCH.', 'code': '262203', 'target': 'lif'}"}

 

I also tried the na_ontap_interface module, but it also returned an error message that the cluster_mgnt interface could not be found.

 

- name: rename the management LIF
  netapp.ontap.na_ontap_interface:
    hostname: "{{ dhcp_node_a }}"
    vserver: nas-p01
    state: present
    from_name: cluster_mgmt
    interface_name: nas-p01_mgmt
    use_rest: always
FAILED! => {"changed": false, "msg": "Error renaming interface nas-p01_mgmt: no interface with from_name cluster_mgmt."}

 

This interface is displayed on the console. This is a newly set up cluster, where access is via the previously assigned DHCP IP address.

 

nas-p01::> net int show
  (network interface show)
            Logical             Status     Network          Current   Current Is
Vserver     Interface           Admin/Oper Address/Mask     Node      Port    Home
----------- ------------------- ---------- ---------------- --------- ------- ----
nas-p01
            cluster_mgmt        up/up      x.x.x.x/27       nas-p01b  e0M     true
            nas-p01a_mgmt       up/up      x.x.x.x/27       nas-p01a  e0M     true
            nas-p01b_mgmt       up/up      x.x.x.x/27       nas-p01b  e0M     true
            nas-p01b_mgmt_auto  up/up      x.x.x.x/24       nas-p01b  e0M     true
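For what it's worth, the next variant I plan to try, a sketch assuming the fields that identify the LIF belong in params (the query part) rather than in the PATCH body, which is what the error message seems to imply:

- name: rename cluster management LIF via REST CLI passthrough
  netapp.ontap.na_ontap_rest_cli:
    hostname: "{{ dhcp_node_a }}"
    command: 'network/interface'
    verb: 'PATCH'
    params: { 'vserver': 'nas-p01', 'lif': 'cluster_mgmt' }
    body: { 'newname': 'nas-p01_mgmt' }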

 

Need to get a SAN ready for production deployment


Device: NetApp A220

 

In our staging environment, I ran through the initial management network configuration via the USB-connected console, then configured everything else via the web portal (data NICs, SVM, first LUN). I did this in order to document the process for production.

 

I realize there is a way to reset the configuration and wipe the disks securely. There is no sensitive data on the SAN. Is there any way to get the configuration back to the state in which I received the system, without requiring a secure wipe of the disks (hours of unnecessary zeroizing)? I.e., I would like to use exactly the same steps I followed when I first configured the device to deploy it in production, as it will be an audited process.

 

I found this link, which seems OK, but I have heard that in some situations resetting to the factory configuration leaves the SAN in a state that is not the way the vendor shipped it, e.g., licenses may be preloaded by the vendor:

https://dailysysadmin.com/KB/Article/8724/how-to-wipe-or-decommission-a-netapp-san-to-clear-config-and-wipe-or-zero-disks/

thanks in advance!

Marcus

Issue connecting to shares with named CIFS path instead of IP


We have two shares on the same vserver.

We were able to connect to both shares at the same time with different service account credentials, but now only one at a time connects.

 

Connecting via IPs works fine, but not via names.

I have found the KB below:

Multiple connections to a server or shared resource by the same user is not allowed error while accessing CIFS share - NetApp Knowledge Base [kb.netapp.com]

 

But I still have queries:

 

  1. How were both shares on the same vserver accessible from the same source server via different credentials from the migration date until now? Please note the team was using the named CIFS paths for the i2q and Tips shares.
  2. Is there any way to set a primary and a secondary name at the share level?

Per our norms, we should not use IP addresses directly; the naming convention should be used. Please advise.
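One idea I want to validate, sketched below: add a NetBIOS alias for the CIFS server and point a matching DNS record at it, so each share can be mapped under its own server name (the alias is a placeholder, and I am quoting the commands from memory, so please verify them):

vserver cifs add-netbios-aliases -vserver <vserver> -netbios-aliases svmcifs-alt
vserver cifs show -vserver <vserver> -display-netbios-aliases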

 

 

Risk of Disabling SNMP


We have enabled SNMP on all of our NetApp clusters over the years. Now I would like to disable it, as I'm aware it is considered insecure. Is there any loss of functionality in the following NetApp tools/features if I disable it? And is there an easy way to determine whether it's being used in some way?

 

  • Active IQ
  • Active IQ Unified Manager (in particular the EMS Events functionality)
  • ONTAP Tools
  • SnapCenter
  • SnapCenter Plug-In for VMware
  • PowerShell Toolkit

Latest PowerShell cmdlet for FlexCache volume creation


The New-NcFlexcache cmdlet works only on ONTAP 9.8 and older. What is the latest PowerShell cmdlet for FlexCache volume creation on ONTAP 9.12.x?
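In case it helps, a fallback sketch that bypasses the toolkit and calls the ONTAP REST FlexCache endpoint directly from PowerShell 7; the endpoint and body fields are my reading of the REST API docs, and all names are placeholders:

# Create a FlexCache volume via POST /api/storage/flexcache/flexcaches
# (lab sketch: basic auth, skips certificate validation)
$cluster = "cluster1.example.com"
$cred = Get-Credential
$body = @{
    name       = "vol_cache"
    svm        = @{ name = "svm_cache" }
    aggregates = @(@{ name = "aggr1" })
    origins    = @(@{ volume = @{ name = "vol_origin" }; svm = @{ name = "svm_origin" } })
} | ConvertTo-Json -Depth 5
Invoke-RestMethod -Method Post -Uri "https://$cluster/api/storage/flexcache/flexcaches" `
    -Credential $cred -Authentication Basic -SkipCertificateCheck `
    -Body $body -ContentType "application/json"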

[FAS2750] how to migrate volumes


Hello.

I have a question.

 

We are going to renew a FAS2750 as part of a project.

1. We currently have a 15 TB volume.
2. The license will be ordered as the Core bundle, so SnapMirror will not be available.
3. How can I safely migrate the data? One idea I'm considering is sketched below.
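Without SnapMirror, one host-side option I'm aware of is a file-level copy with NetApp XCP (this sketch is for NFS exports; SMB data could use XCP's SMB variant or robocopy instead). Addresses and export paths are placeholders, and this is a sketch rather than a validated plan:

xcp scan -stats 10.0.0.10:/vol_src                               # inventory the source export first
xcp copy -newid migrate1 10.0.0.10:/vol_src 10.0.0.20:/vol_dst   # baseline copy, indexed as "migrate1"
xcp sync -id migrate1                                            # incremental re-sync just before cutover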

ADP design - FAS2750


I have a FAS2750 with one tray externally attached to the controllers.

These are 7 TB drives.

 

I installed using ADP but don't quite seem to be able to control which drives belong to which node:
cluster-01::> aggr show -aggregate aggr0_netapp_stk_cluster_01_01 -fields disklist
aggregate disklist
------------------------------ -----------------------------------------------------------------------------------------------------------------------------------------------
aggr0_netapp_stk_cluster_01_01 1.0.1,1.0.3,1.0.5,1.0.7,1.0.9,1.0.11,1.0.13,1.0.15,1.0.17,1.0.19,1.0.21,1.0.23,1.3.1,1.3.3,1.3.5,1.3.7,1.3.9,1.3.11,1.3.13,1.3.15,1.3.17,1.3.19

 

cluster-01::> aggr show -aggregate aggr0_netapp_stk_cluster_01_02 -fields disklist
aggregate disklist
------------------------------ -----------------------------------------------------------------------------------------------------------------------------------------------
aggr0_netapp_stk_cluster_01_02 1.0.0,1.0.2,1.0.4,1.0.6,1.0.8,1.0.10,1.0.12,1.0.14,1.0.16,1.0.18,1.0.20,1.0.22,1.3.0,1.3.2,1.3.4,1.3.6,1.3.8,1.3.10,1.3.12,1.3.14,1.3.16,1.3.18

netapp-stk-cluster-01::>

As you can see, I have odd-numbered disks on node A and even-numbered disks on node B:
Tray 0 - 1.0.x
Tray 3 - 1.3.x
I would rather have 1.0.0-23 on node A and 1.3.0-23 on node B.
I might get fairly good performance either way, but should node B go down, what happens to the aggregate and its data?

The same questions apply should I introduce a FlexGroup in the same manner; I would have the same issue.

7 TB x 24 is 168 TB per tray, so about 336 TB of raw capacity in total. What happens to that capacity, especially when performing upgrades?

I'd appreciate any feedback.
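For what it's worth, my current understanding, to be verified, is that tray-per-node ownership is governed by the disk auto-assignment policy, e.g.:

storage disk option show -fields autoassign,autoassign-policy
storage disk option modify -node * -autoassign-policy shelf     # assign ownership per shelf/tray rather than per bay
storage disk assign -disk 1.3.0 -owner <node-A>                 # manual assignment of an unowned disk

With root-data partitioning already laid out, moving existing partitions between nodes is more involved than reassigning whole unowned disks, so this would mainly apply before or during a re-initialization.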

FAS2520 node down


Hello,

 

We have a FAS2520 with two nodes; unfortunately one node is down. Below is the result of the cluster show command for that node:

Node: XXXX-NODE01
Eligibility: true
Health: false

 

All shares and datastores are still available for now.

How can we bring this node back up, and how can we identify the root cause of this issue?

We received this alert when the issue happened: System Alert from SP of XXXX-NODE01 (REBOOT (watchdog reset)) CRITICAL
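For reference, the checks I intend to run, a sketch based on the standard failover/giveback flow (the node name is as shown in our cluster):

storage failover show                                   # confirm whether the partner has taken over
storage failover giveback -ofnode XXXX-NODE01           # if the node is back up and waiting for giveback
event log show -node XXXX-NODE01                        # look for events around the watchdog reset
system node autosupport invoke -node XXXX-NODE01 -type all -message "watchdog reset RCA"   # collect logs for support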

 

Thanks for your answers and help.

 

NetApp Volumes & Shares sizes don't match sizes shown by TreeSize.


NetApp Volumes & Shares sizes don't match sizes shown by TreeSize.

For example, NetApp shows a volume at 2 TB, while TreeSize shows 15 TB and counting.

Why the difference?

Thanks for your help!

Buggy NetApp Powershell Toolkit 9.14.1.2401


The newest NetApp PowerShell Toolkit (9.14.1.2401) is very buggy. Has anyone else seen these problems?

For example, Invoke-NcSnapmirrorInitialize performs a resync (instead of an initialize) on a SnapMirror relationship. Nor does the command return the job id stated in the documentation:

DESCRIPTION
Performs the initial update of a SnapMirror relationship. This API is usually used after New-NcSnapmirror, but it can be used alone, that is, without New-NcSnapmirror, to create and initially update a SnapMirror relationship. A job will be spawned to operate on the SnapMirror relationship, and the job id will be returned. The progress of the job can be tracked using the job cmdlets.

Output from the command:

NcController : cluster01
ErrorCode    :
ErrorMessage :
JobId        :
JobVserver   : svm09
Status       : running
Uuid         : 10a75543-021d-11ef-b77f-00a098daf8c1
Message      :

When working with the -ONTAPI switch, at least the initialize works as expected.

There are some other problems as well.
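For anyone hitting this, the cross-check I use on the cluster shell to confirm what the cmdlet actually did (the destination path is a placeholder):

snapmirror show -destination-path svm09:vol_dst -fields state,status,last-transfer-type

The last-transfer-type field shows whether the last operation really was an initialize or a resync.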

AI-assisted prediction of SnapMirror completion based on bandwidth throttle and lag, with root cause analysis


Dear All

 

Can anyone offer smart solutions (tools, scripts, DevOps, or hybrid approaches) for effortlessly monitoring SnapMirror replication lag and the most commonly faced, recurring issues (network bandwidth related, ONTAP related, etc.), perhaps via generative AI? 😉

 

That is, the use of generative AI to more accurately predict emerging SnapMirror replication issues and to suggest root cause analysis (e.g., integrated or hybrid solutions using NetApp and Azure cloud services).

 

One of the most common issues is insufficient network bandwidth to complete replications under growing demand and data growth, which results in replication lag. Since network bandwidth fluctuates, the percentage to completion changes along the way.

 

Predicting SnapMirror transfer size and required bandwidth: the tool should suggest, or auto-calculate, how much additional bandwidth is required to complete replication in time.

 

The solution should auto-calculate, based on past data (e.g., replication throttle history), when a transfer will not finish in time and how much extra bandwidth would be required to complete it on schedule.

It should also provide some insight into stuck replications: snapshot deltas, data inconsistencies, or a transfer that is hung and not moving despite showing a "transferring" status, etc.

 

It needs to be more efficient at quickly identifying and flagging replications that failed due to a full volume, cluster or vserver peering issues, duplicate IP addresses, or other causes.

 

Even better would be auto-healing, e.g., automatically growing a full volume as long as the aggregate threshold limit is not hit.

 

Past performance data would serve as the benchmark/baseline for detecting abnormalities in bandwidth throttling, e.g., a sudden data surge that the current bandwidth can no longer absorb within the stipulated SLA window.

 

 

I find that NetApp Active IQ might not yet provide the depth of information required to quickly point to the root cause or underlying issue, so a lot of manual work (coding, etc.) is required.

 

In the past such estimates (e.g., remaining transfer time) were available via tools, but with newer ONTAP those tools are no longer available or supported, so one may have to write one's own script to extract the information needed for such predictions.
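As a starting point for such a script, a rough Python sketch that samples transfer throughput via the ONTAP REST API and flags relationships that look stuck; the endpoints and field names are my reading of the REST docs, and the cluster address and credentials are placeholders:

# snapmirror_watch.py - sample SnapMirror transfer throughput, flag stuck transfers
import time
import requests

requests.packages.urllib3.disable_warnings()  # lab sketch: self-signed cluster cert

CLUSTER = "https://cluster1.example.com"
AUTH = ("monitor", "password")  # use a read-only role in practice

def relationships():
    r = requests.get(f"{CLUSTER}/api/snapmirror/relationships",
                     params={"fields": "uuid,destination.path,lag_time"},
                     auth=AUTH, verify=False)
    r.raise_for_status()
    return r.json()["records"]

def transferring_bytes(uuid):
    # Bytes moved by the currently running transfer, or None if idle.
    r = requests.get(f"{CLUSTER}/api/snapmirror/relationships/{uuid}/transfers",
                     params={"fields": "state,bytes_transferred"},
                     auth=AUTH, verify=False)
    r.raise_for_status()
    active = [t for t in r.json()["records"] if t.get("state") == "transferring"]
    return active[0].get("bytes_transferred") if active else None

for rel in relationships():
    b0 = transferring_bytes(rel["uuid"])
    if b0 is None:
        continue                      # nothing running for this relationship
    time.sleep(60)                    # sample interval
    b1 = transferring_bytes(rel["uuid"])
    rate = ((b1 - b0) / 60) if b1 is not None else 0
    flag = "STUCK?" if rate <= 0 else f"{rate / 1e6:.1f} MB/s"
    print(rel["destination"]["path"], rel.get("lag_time"), flag)

Feeding rates like these into even a simple moving-average forecast against the remaining bytes is the kind of "extra bandwidth needed to meet the SLA" estimate I have in mind.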

 

https://community.netapp.com/t5/ONTAP-Discussions/Predicting-SnapMirror-transfer-size-required-bandwidth/td-p/75915?attachment-id=2096

 

If anyone has better ideas, effective solutions, or scripts to offer, it would be greatly appreciated.

 

Looking forward to anyone who can shed some light on this matter.

 

Thank you, and have wonderful days ahead. 😉

Data and Management on the same subnet


I remember reading years ago that it was best practice not to have your management ports (i.e., e0M) on the same subnet as your data LIFs, to prevent traffic from inadvertently being routed over e0M and impacting performance.

 

After searching this board and the KB database, I can't seem to find that recommendation anymore. Is this no longer an issue with modern versions of ONTAP?
