Channel: ONTAP Discussions topics

Unusable Spare - ONTAP 9.6P2


Has anyone seen something like this before? It is a shared (partitioned) disk, which is fine, but there is no usable space on it.

 

storage aggregate show-spare-disks -owner-name mynode

                                                             Local    Local
                                                              Data     Root Physical
Disk             Type   Class          RPM Checksum         Usable   Usable     Size Status
---------------- ------ ----------- ------ -------------- -------- -------- -------- --------
3.0.11           FSAS   capacity      7200 block                0B       0B   8.91TB zeroed

(other correct spares removed from output)
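If it helps, this is what I was going to check next to see how the partitions on that disk are laid out (I am assuming -partition-ownership is the right view here; output omitted):

storage disk show -disk 3.0.11 -partition-ownership
storage disk show -disk 3.0.11 -fields container-type,owner,pool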


How to move an NTFS volume to another aggregate on NetApp Release 8.2.3P6 7-Mode


Hi, I have a system running NetApp Release 8.2.3P6 7-Mode and I want to move an NTFS volume to another aggregate, but I get the following error when running the command: vol move start vol_tecon aggr_SATA

vol move: Specified source volume is exported via NFS. Unexport the volume and retry or resume vol move.

How can I solve this problem?
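From the error it sounds like the volume has to be unexported before the move; is the sequence below roughly right? (The re-export step at the end is my assumption, based on the entries already in /etc/exports.)

exportfs -u /vol/vol_tecon          # temporarily unexport the source volume
vol move start vol_tecon aggr_SATA  # retry the move
vol move status                     # watch progress
exportfs -a                         # re-export everything in /etc/exports once the move completes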

Historical CPU Data


Question.

 

In order to enable CPU monitoring on OCUM 9.5, do we need to access that via the CLI, or is the setting enabled on the actual cluster nodes themselves?

 

The command I have is the following: "dfm option set cpumoninterval=30m" (you can set the interval to whatever you want).

 

Basically, I need to get historical CPU data, preferably from somewhere within OCUM; not real-time data, which I already know how to collect. It must be historical. My nodes are running ONTAP 9.3.
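For reference, this is the real-time approach I already use directly on the cluster; what I am after is the historical equivalent of this in OCUM (object and counter names as I remember them, not re-checked):

statistics show-periodic -object system -counter cpu_busy -interval 5 -iterations 12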

 

 

Changing IP addresses for the Deploy VM and the mgmt LIFs of a single node Select cluster


I'm going to need to pick up my Deploy virtual machines and an associated Select single node cluster and move them to a different VMware environment and a different IP space.

 

- I've figured out how to change the node management interface and get the deploy VM to find the changes (via a cluster resync). 

- I'm not as clear on how to change the cluster management interface's address without leaving the deploy node unable to talk to the cluster any longer (rough command sketch below).

- I have not found a way to change the IP address of the deploy virtual machine.
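For the cluster management LIF, I assume it is just a normal LIF modify from the cluster shell, followed by letting Deploy refresh the cluster; the addresses below are placeholders and I have not verified that Deploy picks the change up on its own:

network interface modify -vserver <cluster_name> -lif cluster_mgmt -address 192.0.2.50 -netmask 255.255.255.0
network route create -vserver <cluster_name> -destination 0.0.0.0/0 -gateway 192.0.2.1   # only if the gateway changes too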

 

Any suggestions?

 

Thanks!

9.5P6: portmapper is allowed globally on one node but blocked on another node


Two-node cluster, recently installed; it came with 9.5P1 and was later updated to 9.5P6. Starting with 9.4, portmapper (port 111) is normally blocked by the mgmt firewall policy. To my surprise, I found that on one node port 111 is globally allowed, while on the other node it is only allowed on LIFs with the data firewall policy:

ff-cdot01% sudo ipfw list | grep 111
00001 allow log ip from any to any dst-port 111 in
00001 allow log ip from any 111 to any out
00105 allow log ip4 from any to 10.197.2.2 dst-port 111 in
00105 allow log ip4 from any 111 10.197.2.2 to any out
ff-cdot01%

ff-cdot02% sudo ipfw list | grep 111
00102 allow log ip4 from any to 10.197.2.5 dst-port 111 in
00102 allow log ip4 from any 111 10.197.2.5 to any out
ff-cdot02%

Could somebody explain how this could happen? How can I "fix" it to match the normal default 9.5 behavior?

 

And more importantly, at this point I am unsure what else might differ between the two nodes. Is there any way to verify configuration consistency?
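For what it is worth, this is how I have been comparing the firewall configuration itself; since the policies are cluster-wide I assume the portmap entries should look identical from either node:

system services firewall policy show -service portmap
network interface show -fields firewall-policy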

FAS8200 Scalability


Hi Guys,

Is there any workaround that would let a FAS8200 scale to 24 nodes in SAN mode?

Thank you

ONTAP 9.5: unable to upload configuration via tftp to Linux tftpd


Does this work for anyone? The Linux tftpd sends a reply that is rejected by ONTAP with "port unreachable", which causes tftpd to error out.

 

17:53:04.562521 IP cdot01.16653 > linux.tftp:  72 WRQ "cdot.8hour.2019-10-10.18_15_05.7z" octet tsize 48466041 rollover 0
17:53:04.565341 IP linux.49124 > cdot01.16653: UDP, length 28
17:53:04.565514 IP cdot01 > linux: ICMP ff-cdot01-co udp port 16653 unreachable, length 36

cdot01 is the node management interface.

 

vserver lif             role      firewall-policy
------- --------------- --------- ---------------
cdot    cdot01_mgmt1    node-mgmt mgmt
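For reference, this is roughly the command that triggers it (the backup name and host are the ones from the trace above); as a workaround I am going to try an FTP or HTTP destination next, which I assume the same command accepts:

system configuration backup upload -node cdot01 -backup cdot.8hour.2019-10-10.18_15_05.7z -destination tftp://linux/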

Ontap Select 9.5 deployment failed


Hello,

 

I am trying to deploy a single-node ONTAP Select 9.5 cluster on ESXi 6.0 with an external array (over FC). During node deployment I got the following error in the job deployment log:

[Request 445 (running)]: Creating cluster nodes. This operation may take up to two hours, depending on the response time of the virtualization envi
[Request 445 (running)]: Node "OTS-PPE01-01" create failed. Reason: Virtual appliance create failed (waitForLease).. Manual deletion of this node f
[Request 445 (running)]: Creating data disks. This operation may take as long as two hours depending on the amount of storage to be provisioned on
[Request 445 (running)]: NodeVMNotFound: Node "OTS-PPE01-01" in cluster "OTS-PPE01" not found on any host. If the cluster has been reconfigured out
[Request 445 (failure)]: Node "OTS-PPE01-01" delete complete ...

 

In the VMware task events I got the error "Cannot place virtual machine in a folder that is not a virtual machine type folder".

Does a specific folder need to be created for the ONTAP Select VM, or is a specific VMware profile required? I have not seen any such configuration option in the Deploy VM. I would appreciate any help or feedback on this issue.

 

Thanks,

Andrei.


How to upload a file to NetApp

AutoSupport Message: SHELF_FAULT

ONTAP 9.6 cannot create mirror aggregate with SyncMirror


Hi all,

To evaluate local SyncMirror, I tried to create a mirrored aggregate using an AFF8040A and four DS224Cs.

First of all, I tried to create it with the -diskcount option, but ONTAP did not choose disks from 1.1.* and 2.11.*; it chose disks outside that range. So I manually selected five disks from each shelf, like this:

# storage aggregate create -aggregate aggr4 -disklist 1.1.0,1.1.1,1.1.2,1.1.3,1.1.4 \
-mirror-disklist 2.11.0,2.11.1,2.11.2,2.11.3,2.11.4

But ONTAP said:
Error: command failed: Aggregate creation would fail for aggregate "aggr4" on
node "netapp-n11-02". Reason: Current disk pool assignments do not
guarantee fault isolation for aggregates mirrored with SyncMirror. Other
disks in the loop are in a different pool than disks: "2.11.6",
"2.11.6". Use "disk show -v" to view and "disk assign" to change the
disk pool assignments.

Disk "2.11.6" was in Pool1 and was not included on the command line.
I have no idea why "2.11.6" is relevant to creating the mirrored aggregate.
Could you advise why this happens and how I can create a mirrored aggregate?

---
# storage disk show
--snip--
1.1.0 3.49TB 1 0 SSD spare Pool0 netapp-n11-02
1.1.1 3.49TB 1 1 SSD spare Pool0 netapp-n11-02
1.1.2 3.49TB 1 2 SSD spare Pool0 netapp-n11-02
1.1.3 3.49TB 1 3 SSD spare Pool0 netapp-n11-02
1.1.4 3.49TB 1 4 SSD spare Pool0 netapp-n11-02
1.1.5 3.49TB 1 5 SSD spare Pool0 netapp-n11-02
1.1.6 3.49TB 1 6 SSD spare Pool0 netapp-n11-02
1.1.7 3.49TB 1 7 SSD spare Pool0 netapp-n11-02
1.1.8 3.49TB 1 8 SSD spare Pool0 netapp-n11-02
1.1.9 3.49TB 1 9 SSD spare Pool0 netapp-n11-02
1.1.10 3.49TB 1 10 SSD spare Pool0 netapp-n11-02
1.1.11 3.49TB 1 11 SSD spare Pool0 netapp-n11-02

2.11.0 3.49TB 11 0 SSD spare Pool1 netapp-n11-02
2.11.1 3.49TB 11 1 SSD spare Pool1 netapp-n11-02
2.11.2 3.49TB 11 2 SSD spare Pool1 netapp-n11-02
2.11.3 3.49TB 11 3 SSD spare Pool1 netapp-n11-02
2.11.4 3.49TB 11 4 SSD spare Pool1 netapp-n11-02
2.11.5 3.49TB 11 5 SSD spare Pool1 netapp-n11-02
2.11.6 3.49TB 11 6 SSD spare Pool1 netapp-n11-02
2.11.7 3.49TB 11 7 SSD spare Pool1 netapp-n11-02
2.11.8 3.49TB 11 8 SSD spare Pool1 netapp-n11-02
2.11.9 3.49TB 11 9 SSD spare Pool1 netapp-n11-02
2.11.10 3.49TB 11 10 SSD spare Pool1 netapp-n11-02
2.11.11 3.49TB 11 11 SSD spare Pool1 netapp-n11-02
---
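Based on the error text, one thing I am going to double-check is whether every disk on the stack behind shelf 11, including any owned by the partner node, is in the same pool; this is roughly what I have in mind (the reassignment line is an assumption, syntax not verified):

storage disk show -fields shelf,bay,pool,owner,container-type
storage disk assign -disk <disk_name> -pool 1 -force   # only if a disk on that stack turns out to be in the wrong pool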

regards,
mau

Default Volume & Inode thresholds on NetApp cluster


I am trying to configure SNMP alerts directly from a NetApp cluster running ONTAP 9.3P4 to an SNMP traphost (Netcool), so I have these questions related to that.

 

1) I see from the MIB files that the default thresholds for Volume Nearly Full and Volume Full are 95% and 98%. Is there a way to change these thresholds? (See the command sketch after the note below.)

 

2) Do we have any default thresholds for inode utilization, like the volume-full ones? If yes, what are they?

 

3) From the MIB files I see the codes for Volume Nearly Full and Volume Full are 82 and 85. Which codes should we enable for inodes in the MIB?

Note: We are trying to set up alerts directly from the cluster, not from OCUM. I know that OCUM lets us customize thresholds to our requirements; however, we are not allowed to install OCUM.
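Regarding question 1, I did find per-volume space threshold options that look relevant; I am assuming that changing them also changes when the corresponding traps fire, but I have not verified that (SVM and volume names are placeholders):

volume show -vserver <svm> -volume <vol> -fields space-nearly-full-threshold-percent,space-full-threshold-percent
volume modify -vserver <svm> -volume <vol> -space-nearly-full-threshold-percent 90 -space-full-threshold-percent 95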

 

Thanks in advance

ontap 9.6 simulator can't ping beyond default gateway


Hi there,

I have installed the ONTAP 9.6 virtual appliance in VMware Workstation to include it in a vCenter lab that I created earlier. The problem is that I am not able to ping any other node on the storage network from the appliance, and the other nodes cannot ping the LIFs. This is my first time touching a NetApp system, so I am not sure whether I did something wrong. This is the actual configuration:

 

Switch eth1: 10.1.1.123

Windows VM: 10.1.1.111 - 255.255.255.0 - 10.1.1.123

vmk1-iscsi(esxi1): 10.1.1.101 - 255.255.255.0 - 192.168.0.1

vmk1-iscsi(esxi2): 10.1.1.102 - 255.255.255.0 - 192.168.0.1

 

LIF1:

10.1.1.115 - 255.255.255.0 - 10.1.1.123

Data protocol access: iscsi

Role: data

Eth port: e0d - Up

 

LIF2:

10.1.1.116 - 255.255.255.0 - 10.1.1.123

Data protocol access: iscsi

Role: data

Eth port: e0d - Up

 

SVM status: Running(iscsi)

 

Ping from Win VM to both ESXI's: works fine

Ping from both ESXI's to Win VM: works fine

 

Ping from Win VM to LIFs: Not working

Ping from LIFs to Win VM: not working

Ping from LIFs to both ESXIs: not working

Ping from ESXIs to LIFs: not working

 

Ping from LIF1 to LIF2: works fine

Ping from LIF2 to LIF1: works fine

Ping from LIFs to default gateway: works fine

 

What could be causing this or what should I check?

Everything seems to be up, and I believe the network is working fine, since the Windows VM and the ESXi hosts communicate without problems, so I think the problem is on the ONTAP appliance; correct me if I'm wrong.
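In case it helps, these are the checks I was going to run next on the appliance (the LIF and SVM names are placeholders for whatever I configured, and I am not sure of the exact parameter names on the ping command):

network port show                                                    # confirm e0d is up and healthy
network interface show -fields address,netmask,curr-port,status-oper
network ping -lif <lif_name> -vserver <svm> -destination 10.1.1.111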

Recommended vs. Latest Release


Wanted to get everyone's take on whether to deploy NetApp's "recommended" release of an ONTAP version or the "latest" release. Here's my dilemma: I have a patch window planned for Wednesday night to go from 9.3P10 to 9.3P15. I just noticed that 9.3P16 was released on Friday and read the notes on various bug fixes. A few of them could apply to us at some point, but I'm nervous about deploying a patch that is this new. Any suggestions?

 

Recommended Release Article

https://kb.netapp.com/app/answers/answer_view/a_id/1000185

No more migrations from 7 mode after 9.5


I am wondering what the plan is to help people holding on to 7-Mode migrate to a platform that only supports 9.6, like the A320. I can't believe you are cutting people off before the end of support for 7-Mode, which I think is 8/2020. Am I missing something?


replacement disk assigns and zeros but after a short period shows as un-zeroed


I have a customer who replaced a disk, assigned it, and zeroed it, but after a short period of time the disk goes back to showing as un-zeroed.

 

Customer is running ONTAP 9.3P10
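These are the commands we have been using to re-check and re-zero it, included here just for completeness (the node name is a placeholder):

storage aggregate show-spare-disks -owner-name <node_name>   # the status column shows whether the spare is zeroed
storage disk zerospares                                      # re-zero any non-zeroed spares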

 

 

not able to create aggregate in ontap 9.3


I am not able to create the aggregate I want in ONTAP 9.3, because I have 22 disks, of which 10 are used in aggr0 and 3 are used in aggr1.

When I try to create a new aggregate it only takes 6 disks. Kindly suggest how to use the maximum capacity.
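For reference, this is roughly what I am running (the aggregate name, node name, and disk count are placeholders); I assume I first need to confirm how many usable spares are actually left and keep at least one of them as a spare:

storage aggregate show-spare-disks
storage aggregate create -aggregate aggr2 -node <node_name> -diskcount 8 -raidtype raid_dp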

 

 

 

 

 

Ontap Upgrade from 8.3.2x to 9.1x and BUG 1250500 - how to solve

Volume clone failed. Token expired


Hi All, 

 

I would like to know if someone has run into cloning issues like the ones below. The error happens intermittently. I was just wondering whether this has to do with the heavy workload the controller is under.

 

[?] Fri Oct 25 06:42:19 UTC [node02: api_dpool_18: wafl.sis.clone.create.failed:info]: File clone create of file /vol/nfs_delta/Library/KJK_OJ_1_LC/vc1_7/vc1_10-sesparse.vmdk in volume nfs_delta failed with error: Token expired or not found.

 

[?] Fri Oct 25 06:42:23 UTC [node02: api_dpool_30: wafl.sis.clone.create.failed:info]: File clone create of file /vol/nfs_delta/Library/KJK_OJ_1_LC/vc2_2/vc2_9.vmdk in volume nfs_delta failed with error: Token expired or not found.

 

 

Failed to start transfer for Snapshot copy.... (CSM: Operation referred to a non-existent session.)


All,

 

We had some sort of network issue recently that rendered a handful of SnapMirror relationships "unhealthy" and unable to replicate successfully. Everything from a cluster peer standpoint looks OK, pings are successful between nodes, and there is no issue with authentication. As a result I am seeing some destination volumes with busy snapshots. I was wondering whether anyone has seen SnapMirror errors such as the ones below:

 

Failed to start transfer for Snapshot copy "snapmirror.e36...". (CSM: Operation referred to a non-existent session.)

 

cpeer.xcm.update.warn: Periodic update of peer network information failed. The following operations are incomplete: discovery failure.

 

cpeer.xcm.addr.disc.warn: Address discovery failed for peer cluster 0df39b.... Reason: Failed to discover remote addresses: RPC: Timed out [from mgwd on node "NODE" (VSID: -3) to Unknown Program:0 at Not available].

 

smc.snapmir.schd.trans.overrun: Scheduled transfer from source volume '_volume' to destination volume 'volume_dst' is taking longer than the schedule window. Relationship UUID '6b80exxxx'.
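These are the checks I have been running so far (the destination path is a placeholder); nothing obvious stands out, but I am listing them in case I am missing one:

cluster peer show -instance
cluster peer health show
snapmirror show -destination-path <dst_svm>:<dst_vol> -fields state,status,healthy,unhealthy-reason,lag-time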

 

 

 

 
