Quantcast
Channel: ONTAP Discussions topics
Viewing all 4966 articles
Browse latest View live

Unable to remove foreign aggr OnTap

$
0
0

Hello all,

   I have a issue I am unable to get around and wanted to see if anyone could provide insight.

 

We brought over a couple of SSD shelves from a different OnTap installation and ofcourse when we hooked it up the system found foreign aggergates that we were able to remove using:

 

storage aggregate remove-stale-record -nodename

However we have an aggregate that is online and I can't take the aggregate offline because the aggreage has volumes.

 

I go to take the volumes offline but recieve the message:

 

storage node run -node "nodename" -command vol offline "volname"

 

cannot run command on clustered volume.

 

Unfortunantly the current cluster doesn't see the volume or aggregate so I have to go through the node itself.

 

Any suggestions on how I can take both the aggregate and volumes offline so that I can remove and reclaim the disk?


ndmpd not found after reset Netapp 7.3

$
0
0

hi

 

I met problem here. Due to hd failure, i replace new ones, the re-initlize the disk and reset netapp.

 

But after system start, i found that, ndmpd command is missing, i can't figure it out, could anyone give me some suggtions on it?  Thank you!

 

 

 

 

nas> options ndmpd
ndmpd.access                 all
ndmpd.authtype               challenge
ndmpd.connectlog.enabled     off
ndmpd.ignore_ctime.enabled   off
ndmpd.offset_map.enable      on
ndmpd.password_length        16
ndmpd.preferred_interface    e0a
ndmpd.tcpnodelay.enable      off
nas>
nas> ndmpd
ndmpd not found.  Type '?' for a list of commands

In-band ACP functionality

$
0
0

On IOM6s that have been upgraded to supported firmware and configured to use in-band, as well as IOM12s. How does the SAS channel actually issue the reset to the SAS chip / IOM that has stopped responding?

 

I see in this doc https://kb.netapp.com/app/answers/answer_view/a_id/1029778 it states: "The Alternate Control Path (ACP) interface currently uses Ethernet-based network connectivity to perform various recovery tasks, such as expander resets and shelf power cycles. This feature was referred to as Out-of-Band (OOB) ACP. With ONTAP 9.0 and later, ACP uses available SAS data links to perform the same tasks. This feature is referred to as In-Band ACP. The ACP mode that is configured applies to the entire ONTAP cluster."  

 

But with the SAS cable plugged in to the SAS chip, and the SAS chip is offline.. how does it reboot it should it lock?

How do I identify what physical ports being used for intercluster?

$
0
0

::*> network interface show -role intercluster
Logical Status Network Current Current Is
Vserver Interface Admin/Oper Address/Mask Node Port Home
----------- ---------- ---------- ------------------ ------------- ------- ----
vserver-name
node-01-ic up/up 10.192.18.21/24 node-01 a0a-201 true
node-02-ic up/up 10.192.18.22/24 node-02 a0a-201 true

 

 

My question is how do I find out what physical ports the lif (node-01-ic) is using for intercluster (snapmirror)? 

 

Thanks!

Cluster Configuration

$
0
0

Hi All,

 

I am relatively new managing Netapp storage systems. Any help suggestions I can help will be much appreciated.

 

We currently have 10 controllers and a network configuration that has all the lifs on a particular vlan in the same broadcast domain and failover group. However we are about to replace 4 fas8080 filers with 4 fas9000 filers. The issue is that our network team is also moving into a new network configuration whereby the old vlans will no more be used in the new configuration and a network folk said that the new vlans on the new core will be layer 3 vlans while the old ones are layer 2, but data going to the old vlans can be routed to the new core switch. We plan to connect these 4 new filers to a new core switch in the new configuration and this core will not have any of the old vlans configured on it. There seem to be two options (as we will prefer not to have the new filers connected to the old network since we will soon migrate everything over to the new network configuration):

 

1. Follow this route above and join the new nodes to the same cluster as the old ones but have broadcast domains and failover groups of the new nodes that are different from the those of the others in the old vlans. So failovers will be between 4 nodes instead of 10.

2. Avoid the use of vlans all together in configuring the lifs and configure lifs on top of ifgrps. Is this possible if the switch ports are configured as access ports instead of trunked ports? What about traffic to/from the intended clients for the different vlans getting/sending their data?

 

Please does anybody envisage any problems down the line, especially with vservers, data traffic, snapmirror, snapvaulting, etc.?

 

Examples

 

broadcast domains
Default 136.171.74.0/24 1500
                            node1:a0a-1000             complete
                            node2:a0a-1000             complete
                            node3:a0a-1000             complete
                            node4:a0a-1000             complete
                            node5:a0a-1000             complete
                            node6:a0a-1000             complete
                            node7:a0a-1000             complete
                            node8:a0a-1000             complete
                            node9:a0a-1000             complete
                            node10:a0a-1000             complete
        146.36.200.0/21 1500
                            node7:a0b-2000              complete
                            node3:a0b-2000              complete
                            node2:a0b-2000              complete
                            node6:a0b-2000              complete
                            node4:a0b-2000              complete
                            node2:a0b-2000              complete
                            node1:a0b-2000              complete
                            node5:a0b-2000              complete
       
Failover Groups      
node00
                 136.171.74.0/24
                                  node1:a0a-1000, node6:a0a-1000,
                                  node2:a0a-1000, node7:a0a-1000,
                                  node3:a0a-1000, node8:a0a-1000,
                                  node4:a0a-1000, node9:a0a-1000,
                                  node5:a0a-1000, node10:a0a-1000
                 146.36.200.0/21
                                  node1:a0b-200, node5:a0b-200,
                                  node2:a0b-200, node6:a0b-200,
                                  node3:a0b-200, node7:a0b-200,
                                  node4:a0b-200, node8:a0b-200

 

Thanks.

Missing Space in Qtree

$
0
0

Greetings all,

 

This one is a head scratcher.  I have a volume which is at 77% capacity, and a qtree within the volume that is at 8% usage, but, the user is unable to move data, only 10GB, to the qtree, which comes back out of space. 

 

Just to test, I have increased both the volume and qtree, and the same error shows.  Does anyone know why the available space in the qtree is not allowing the user to access it and utilize it?

 

Thanks in advance for your assistance.

 

James

volume-get-iter (SDK 5.7) doesnt shows all volumes under cluster(9.1)

$
0
0

Hi Guys,

 

I am using Netapp SDK 5.7 and below is the sample python code. If I try to list all of the volumes under the cluster it shows only 50+ or sometimes 200+ volumes. I could confirm that the cluster(version 9.1) has 1000+ volumes.How could I list all the volumes?

 

 

 

s = NaServer("x.x.x.x", 1 , 31)
s.set_server_type("FILER")
s.set_transport_type("HTTPS")
s.set_port(443)
s.set_style("LOGIN")
s.set_admin_user("someuser", "somepassword")



result = s.invoke('volume-get-iter' , 'max-records', 1000)
print(result.sprintf())  



for volume in result.child_get('attributes-list').children_get():
       volumename = volume.child_get('volume-id-attributes').child_get_string('name') 
       print(volumename)


#Below are the last few output lines of sprintf() 

		</volume-attributes></attributes-list><next-tag>&lt;volume-get-iter-key-td&gt;&lt;key-0&gt;vserver1&lt;/key-0&gt;&lt;key-1&gt;vol101&lt;/key-1&gt;&lt;/volume-get-iter-key-td&gt;</next-tag><num-records>250</num-records>  </result>

 

Looks to me "next-tag" says there is a volume vol101 being followed up and this time it prints 250 volumes(this number changes everytime running the script). How can I print all of the volumes from the cluster? Why max-records parameter doesnt helps here?

 

I am not able to get "next-tag" working if this is the only way. Please help me to modify or add any code that would help. Thanks in advance.

 

 

Regards,

 

Joy

Reporting Nodes Added Automatically When Moving iSCSI-Based Volumes?

$
0
0

A colleague of mine indicated he's never reconfigured reporting nodes when moving volumes containing LUNs from one HA Pair in a cluster to another HA Pair. My understanding from NetApp documentation has been that you have to add the destination nodes as reporting nodes in advance of a volume move if you want to retain optimized paths.

 

I created a volume as a test and moved it to another HA Pair without adjusting reporting nodes. Sure enough, the nodes I moved the volume to were added automatically to the list of reporting nodes for the LUN! This behavior contradicts NetApp documentation which indicates it must be done manually. Has anyone else encountered this? Does anyone know if there's a scenario where it wouldn't happen automatically?


New TR Released: TR-4670 FPolicy Solution Guide for ONTAP: IntraFind

$
0
0

1 Introduction

NetApp® FPolicyTM is a file access notification framework that allows users to monitor file access over NFS and CIFS protocols. This feature was introduced in NetApp clustered Data ONTAP® 8.2, a scale-out architecture that enables a rich set of use cases working with partners. The FPolicy framework requires that all the nodes in the cluster are running Data ONTAP 8.2 or later. FPolicy supports all SMB versions such as SMB 1.0 (also known as CIFS), SMB 2.0, SMB 2.1, and SMB 3.0. It also supports major NFS versions such as NFS v3 and NFS v4.0.

The FPolicy framework natively supports a simple file-blocking use case, which enables administrators to restrict end users from storing unwanted files. For example, an administrator can block audio and video files from being stored in data centers, which saves precious storage resources. This feature blocks files based on only extension. For more advanced features, partner solutions must be considered.

The FPolicy framework enables partners to develop applications catering to a diverse set of use cases. The use cases include, but are not limited to, the following:

  • File screening

  • File access reporting

  • User and directory quotas

  • HSM and archiving solutions

  • File replication

  • Data governance

Fore more info, please check here

What if vol0 is full?

$
0
0

Hi,

 

I am new to this environment and I am trying to dig out some questions related to root volume of storage system.

 

Q. What impact can be there if the root volume i.e. vol0 is full?

Q. What are be the best practises to reduce the filing?

 

Thanks

 

~Ravi

Volume did not autosized although the volume autosize is set to 20TB

$
0
0

I have a volume that is autosized to 20TB and the threshold is set to 90%. However, the volume got full at 15TB (100%utilized)

 

"Volume autosize is currently ON for volume "XXXXX".
The volume is set to grow to a maximum of 20.40t when the volume-used space is above 90%.
Volume autosize for volume 'XXXXX' is currently in mode grow."

 

vserver volume max-autosize
------- --------------------------- ------------
XX XXXXX 20.40TB

 

vserver volume autosize-mode
------- --------------------------- -------------
XX XXXXX grow

 

version is 9.2P3

NFS: v4 server .. turned a bad sequence-id error!

$
0
0

Redhat 6.9 on our client Linux box, ONTAP 9.1 P5 Storage layer

  

serverx1 kernel: [1550294.152606] NFS: v4 server netappprd.cof.ds.myorg.com  returned a bad sequence-id error!

serverx1 kernel: [1550284.154875] NFS: v4 server netappprd.cof.ds.myorg.com  returned a bad sequence-id error!

serverx1 kernel: [1550284.159944] NFS: v4 server netappprd.cof.ds.myorg.com  returned a bad sequence-id error!

 

On Apr 26, we had over 9,000 of these appear in /var/log/messages, and NFS mounted Netapp storage showed that over 1,024,000 files were 'open', so Storage layer hung and we were forced to reboot our linux server.

 

Any ideas?

 

Thank you

Vol autogrow: Under which log?

$
0
0

Hi,

 

I want to know which of the logs capture the vol autogrow in ONTAP 9.2?

 

Which of the log files capture it?

 

It used to be messages in 7 mode. 

 

Thanks in advance.

 

KS

ONTAP 9.x and volume autogrow - no more manual setting of incremental

$
0
0

Hello all,

 

I need to answer a question put forth by customer regarding volume autgrow. The way it now works, is autogrow and shrink happen in incrementals that are not configurable. ONTAP automatically grows or shrinks in incrementals automatically. The customer would like to get more technical explanation of this, as in the past they could of course control the incremental value.

 

If anyone can point to a more technical explanation of how this actually, works that would be great.

 

Rich

Deleting a qtree without it entering snapshots

$
0
0

I recently had a qtree that was quite large and deserved to be its own volume (3 TB, 25 million files), so I migrated it. That all worked fine but when I did a 'qtree delete' to clean up the original volume I realized this would all be going straight into snapshots (and then into vault snapshots, in my case) so I'd be carrying that extra 3 TB for a long time.

 

Is there any way to avoid this? I realize it goes against the philosophy of snapshots but also realize there could be something I'm not thinking of. Thanks!


Ontap Upgrade from 9.1P8 to 9.1P12 for both cluster mode and 7 mode

$
0
0

Hello Everyone,

Can someone please let me know where can I find (or point me in that direction) a ONTAP upgrade plan from 9.1p8 to 9.1p12 for both cluster mode and 7 mode systems.

If somone has the upgrade steps outline and can/would like to share, I would really apreciate it.

Thanks.

Ritesh

 

LACP & NetApp 8020

$
0
0

Hi!

 

We test perfomance storage NetApp 8020.

Configuration: 

- 2 x controller FAS8020 with 10Gb nic card

- 1 x disk shelf SAS

- 2 x disk shelves SATA (SATA + SSDdisk for FlashPool)  

 

Each controller is connected by 2 cables to Cisco Nexus.

This connection is configurable by EtherChannel or LACP 

 

Our results to CIFS share: ~ 7.5 Gb/s

 

This test showed that only one connection is used from the channel( EtherChannel/LACP)

 

 

Question - WHY? 

 

Please help

 

Aggregate level dedupe with NVE

$
0
0

I understand aggregate level deduplication is not supported for volumes encrypted by NVE.  Is anyone able to confirm whether or not this is being roadmapped in future releases of OnTap?

 

Thanks!

-Ben

Can't perform volume rehost on volume in an SVR-DR relationship

$
0
0

Hi all,

 

I want to rehost a volume from SVM1 to SVM2. This would normally be very straightforward, except both SVMs are in SVM-DR relationships. Hence when trying to do a volume rehost, it returns:

 

Error: command failed: Cannot rehost volume "somevol" on Vserver "SVM1" to destination "SVM2" because the Vserver "SVM1" is in a DR relationship.

 

For reference, SVM2 is also in an SVM-DR relationship.

 

I can't find a documented procedure for this scenario. Can someone confirm the correct steps for this?

 

Is it a case of deleting one or both SVM-DR snapmirror relationships, performaing the volume rehost (assuming it then lets me), then recreating the SVM-DR relationships?

 

TIA

SnapMirror Transfer Pauses Randomly

$
0
0

I have four relationships between the same two ONTAP 8.3.2 clusters.  Three relationships update every 4 hours in an acceptible time frame.

One of the four starts transferring then pauses....  sometimes for 20-30 minutes... then continues... then pauses some more....

 

Over the course of the day the average time to complete the SLOW relationship is over 1 hour.

Average for the other transfers are all under 23 minutes.

 

Latest transfer stats:

 

Source:volA --> Dest:volA_mirror average bytes transferred is 743MB and takes an average of 26 seconds

Source:volB --> Dest:volB_mirror average bytes transferred is 22.6GB and takes an average of 22 minutes 6 seconds

Source:volC --> Dest:volC_mirror average bytes transferred is 5GB and takes an average of 1 hour and 9 minutes

Source:volD --> Dest:volD_mirror average bytes transferred is 12.6GB and takes an average of 4 minutes 30 seconds

 

All fours source volumes are on the same node/aggregate, all four destination volumes are on the same node/aggregate.

 

What could be causing 1 of the 4 to drag on for so long... some WAFL process, etc.??

Viewing all 4966 articles
Browse latest View live


<script src="https://jsc.adskeeper.com/r/s/rssing.com.1596347.js" async> </script>