Channel: ONTAP Discussions topics

Node not booting, probably a disk ownership issue


Hello all

I need some assistance. I got handed an old 4-node cluster; it is a 6020 system. Node-1 is down and cannot boot back online due to the errors below:

Jun 13 13:22:56 [b125c1fm1-01:diskown.ownerReservationMismatch:warning]: disk 1a.00.23 (S/N XXVER7AA) is supposed to be owned by th
Jun 13 13:22:56 [b125c1fm1-01:diskown.ownerReservationMismatch:warning]: disk 1c.11.19 (S/N S410EAVG) is supposed to be owned by th
Jun 13 13:22:56 [b125c1fm1-01:diskown.ownerReservationMismatch:warning]: disk 1c.12.0 (S/N KPVJGY4F) is supposed to be owned by thi
Jun 13 13:22:56 [b125c1fm1-01:diskown.ownerReservationMismatch:warning]: disk 1c.11.0 (S/N KPH2869F) is supposed to be owned by thi
Jun 13 13:22:56 [b125c1fm1-01:diskown.ownerReservationMismatch:warning]: disk 1c.10.0 (S/N KPVKRXHF) is supposed to be owned by thi
Jun 13 13:22:59 [b125c1fm1-01:wafl.memory.status:info]: 34689MB of memory is currently available for the WAFL file system.
WARNING: 0 disks found!

 

I tried the following:

1. Halted node-1, and also halted node-2 (this is a test cluster).

2. Rebooted node-1 in the hope that it would take back its own disks. This did not work.

3. Brought both nodes back online.

4. Tried assigning disk ownership from node-2, but got the errors below:

b125c1fm1-02(takeover)> disk assign 1c.12.0 -s 1874201084
Assign request failed for disk 1b.12.0. Reason: Disk is a file system disk and part of an online aggregate. Changing its owner may cause aggregate or filer outage. Disk assign request failed.
b125c1fm1-02(takeover)> disk assign 1c.11.0 -s 1874201084
Assign request failed for disk 1c.11.0. Reason: Disk is a file system disk and part of an online aggregate. Changing its owner may cause aggregate or filer outage. Disk assign request failed.
b125c1fm1-02(takeover)> disk assign 1c.10.0 -s 1874201084
Assign request failed for disk 1c.10.0. Reason: Disk is a file system disk and part of an online aggregate. Changing its owner may cause aggregate or filer outage. Disk assign request failed.
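
For reference, my understanding of the general disk assign syntax is below; this is only a sketch, with placeholders for the disk name and sysid (1874201084 is node-1's sysid in my case), and I have not tried the force flag since the disks sit in an online aggregate that node-2 is serving:

disk show -v
disk assign <disk_name> -s <sysid> -f

As I understand it, -s takes the sysid of the node that should own the disk, and -f forces the change on a disk that already has an owner, which is why I have been hesitant to use it here.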

 

I'm out of ideas; any other suggestions we can try? After I get node-1 back online, I have to reconfigure the whole cluster per best practice. Like I said, I just inherited this mess.

 

Thanks everyone, I appreciate the help.


disk ownership


Hello all

I recently inherited a 4-node cluster.

While doing a health check on the cluster, I found that node-1 is down at the boot prompt. I attempted to boot it and got the errors below:

Jun 13 13:22:56 [b125c1fm1-01:diskown.ownerReservationMismatch:warning]: disk 1a.00.23 (S/N XXVER7AA) is supposed to be owned by th
Jun 13 13:22:56 [b125c1fm1-01:diskown.ownerReservationMismatch:warning]: disk 1c.11.19 (S/N S410EAVG) is supposed to be owned by th
Jun 13 13:22:56 [b125c1fm1-01:diskown.ownerReservationMismatch:warning]: disk 1c.12.0 (S/N KPVJGY4F) is supposed to be owned by thi
Jun 13 13:22:56 [b125c1fm1-01:diskown.ownerReservationMismatch:warning]: disk 1c.11.0 (S/N KPH2869F) is supposed to be owned by thi
Jun 13 13:22:56 [b125c1fm1-01:diskown.ownerReservationMismatch:warning]: disk 1c.10.0 (S/N KPVKRXHF) is supposed to be owned by thi
Jun 13 13:22:59 [b125c1fm1-01:wafl.memory.status:info]: 34689MB of memory is currently available for the WAFL file system.
WARNING: 0 disks found!

 

 

 

I tried the following:

1. Shut down node-2 to see whether rebooting node-1 would let it take back its own disks. This did not work.

2. Restarted node-2 and attempted to reassign disks to node-1. This did not work; node-2 is in takeover mode. (Should I inhibit takeover prior to reassigning disks?)

3. While reassigning disks to node-1, received the errors below:

b125c1fm1-02(takeover)> disk assign 1c.11.19 -s 1874201084
b125c1fm1-02(takeover)> disk assign 1c.12.0 -s 1874201084
Assign request failed for disk 1b.12.0. Reason: Disk is a file system disk and part of an online aggregate. Changing its owner may cause aggregate or filer outage. Disk assign request failed.
b125c1fm1-02(takeover)> disk assign 1c.11.0 -s 1874201084
Assign request failed for disk 1c.11.0. Reason: Disk is a file system disk and part of an online aggregate. Changing its owner may cause aggregate or filer outage. Disk assign request failed.
b125c1fm1-02(takeover)> disk assign 1c.10.0 -s 1874201084
Assign request failed for disk 1c.10.0. Reason: Disk is a file system disk and part of an online aggregate. Changing its owner may cause aggregate or filer outage. Disk assign request failed.

 

Any thoughts on how node-1 can take back its disks would be welcome. I have not tried any commands at an elevated privilege level (priv set); not sure whether this would make a difference.
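
For reference, the privileged mode I mean is the standard one on the 7-Mode CLI; this is only a sketch of what I would try, and I have not run it yet:

priv set advanced
disk show -v
priv set admin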

 

Thanks All

ManageONTAP JAR Keep-Alive header timeout


In my project I am making frequent calls to the invokeEle() method of the ManageONTAP JAR.

This was creating too much traffic on the AD server, since NetApp performs one AD authentication for each invokeEle() call.

 

After some research I found that NaServer has a setKeepAliveEnabled() method. If we set it to true, ManageONTAP will maintain the session and reuse it for subsequent calls.

But in the response from the NetApp side we see the following headers:

 

Read Line === Keep-Alive: timeout=5, max=100
Read Line === Connection: Keep-Alive

 

This means that even though we set the Keep-Alive flag on the client side, NetApp by default keeps the connection alive for only 5 seconds. If there is a gap of more than 5 seconds between two NetApp calls, the server rejects the request and it requires AD authentication again.

 

Can anyone tell me how we can increase this timeout from 5 to 100 seconds?

That would be really helpful.

 

There is also a setTimeout() method on NaServer, but that timeout only controls how long the client waits for a response after sending a request.
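
For context, the client-side setup looks roughly like the sketch below. The host name, credentials, ONTAPI version numbers, and class name are placeholders rather than our real values; it is just a minimal example around the SDK calls mentioned above:

import netapp.manage.NaElement;
import netapp.manage.NaServer;

public class OntapKeepAliveSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder host, credentials and ONTAPI version -- not our real values
        NaServer server = new NaServer("filer.example.com", 1, 20);
        server.setStyle(NaServer.STYLE_LOGIN_PASSWORD);
        server.setAdminUser("admin", "secret");
        server.setTransportType(NaServer.TRANSPORT_TYPE_HTTPS);

        server.setKeepAliveEnabled(true); // reuse the HTTP connection between calls
        server.setTimeout(60);            // client-side wait for a response, in seconds

        // One ONTAPI call as an example of the invoke pattern described above
        NaElement request = new NaElement("system-get-version");
        NaElement response = server.invokeElem(request);
        System.out.println(response.getChildContent("version"));
    }
}

Even with setKeepAliveEnabled(true) on the client, the server-side "Keep-Alive: timeout=5" header is what cuts the session after 5 idle seconds.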

 

My second controller shows the disks of the first, but my first controller shows only a few disks


Hello guys,

I'm new to NetApp and I need some help.

I'm running Data ONTAP 8.1 and I can't understand why my two controllers show different things.

 

INFCTR01*> disk show
DISK OWNER POOL SERIAL NUMBER HOME
------------ ------------- ----- ------------- -------------
0a.00.4 INFCTR01 (1788982542) Pool0 PVHR5MAB INFCTR01 (1788982542)
0a.00.8 INFCTR01 (1788982542) Pool0 PVHR39HB INFCTR01 (1788982542)
0a.00.10 INFCTR01 (1788982542) Pool0 PVHNXXTB INFCTR01 (1788982542)
0a.00.6 INFCTR01 (1788982542) Pool0 PVHRE7HB INFCTR01 (1788982542)
0a.00.11 INFCTR01 (1788982542) Pool0 PVHR33JB INFCTR01 (1788982542)
0a.00.9 INFCTR02 (1788985759) Pool0 PVHR079B INFCTR02 (1788985759)
0a.00.5 INFCTR02 (1788985759) Pool0 PVHRG9RB INFCTR02 (1788985759)
0a.00.3 INFCTR02 (1788985759) Pool0 PVHRNYRB INFCTR02 (1788985759)
0a.00.1 INFCTR02 (1788985759) Pool0 PVHRG23B INFCTR02 (1788985759)
0a.00.2 INFCTR01 (1788982542) Pool0 PVHRH14B INFCTR01 (1788982542)
0a.00.0 INFCTR01 (1788982542) Pool0 PVHRH62B INFCTR01 (1788982542)
0a.00.7 INFCTR02 (1788985759) Pool0 PVHR2KZB INFCTR02 (1788985759)

 

INFCTR02*> disk show
DISK OWNER POOL SERIAL NUMBER HOME
------------ ------------- ----- ------------- -------------
0a.00.10 INFCTR01 (1788982542) Pool0 PVHNXXTB INFCTR01 (1788982542)
0a.00.0 INFCTR01 (1788982542) Pool0 PVHRH62B INFCTR01 (1788982542)
0a.00.8 INFCTR01 (1788982542) Pool0 PVHR39HB INFCTR01 (1788982542)
0a.00.6 INFCTR01 (1788982542) Pool0 PVHRE7HB INFCTR01 (1788982542)
0a.00.2 INFCTR01 (1788982542) Pool0 PVHRH14B INFCTR01 (1788982542)
0b.02.15 INFCTR01 (1788982542) Pool0 LXVNKP4M INFCTR01 (1788982542)
0b.02.19 INFCTR01 (1788982542) Pool0 JWWWYMMJ INFCTR01 (1788982542)
0b.02.1 INFCTR01 (1788982542) Pool0 LXVNZJZM INFCTR01 (1788982542)
0b.02.9 INFCTR01 (1788982542) Pool0 LXXESDUN INFCTR01 (1788982542)
0a.01.23 INFCTR01 (1788982542) Pool0 LXXHP0WN INFCTR01 (1788982542)
0a.01.7 INFCTR02 (1788985759) Pool0 CZWRVWTN INFCTR02 (1788985759)
0b.02.21 INFCTR01 (1788982542) Pool0 JWWW76AJ INFCTR01 (1788982542)
0b.02.11 INFCTR01 (1788982542) Pool0 LXXET65N INFCTR01 (1788982542)
0b.02.13 INFCTR01 (1788982542) Pool0 JWWWB29J INFCTR01 (1788982542)
0a.01.21 INFCTR01 (1788982542) Pool0 6SL3MP8X0000N236H5HB INFCTR01 (1788982542)
0b.02.18 INFCTR01 (1788982542) Pool0 6SL3R76W0000N2291D2E INFCTR01 (1788982542)
0a.01.15 INFCTR01 (1788982542) Pool0 6SL3MV0Q0000N236HTG1 INFCTR01 (1788982542)
0a.01.19 INFCTR01 (1788982542) Pool0 6SL3LTHG0000N236DGFJ INFCTR01 (1788982542)
0a.01.4 INFCTR01 (1788982542) Pool0 6SL3LQLP0000N2374CEV INFCTR01 (1788982542)
0b.02.14 INFCTR01 (1788982542) Pool0 6SL8XKBG0000N4520UDP INFCTR01 (1788982542)
0a.01.2 INFCTR01 (1788982542) Pool0 6SL3M8H40000N236E4NA INFCTR01 (1788982542)
0a.01.11 INFCTR01 (1788982542) Pool0 6SL3MKQQ0000N236L30C INFCTR01 (1788982542)
0a.01.8 INFCTR01 (1788982542) Pool0 6SL3LHGE0000N236H5HP INFCTR01 (1788982542)
0a.01.3 INFCTR01 (1788982542) Pool0 6SL3KD230000N2370G8P INFCTR01 (1788982542)
0a.01.12 INFCTR01 (1788982542) Pool0 6SL3MSLH0000N23766GV INFCTR01 (1788982542)
0b.02.16 INFCTR01 (1788982542) Pool0 6SLAVXEG0000N6310EBY INFCTR01 (1788982542)
0a.01.0 INFCTR01 (1788982542) Pool0 6SL3MG6L0000N236H0V1 INFCTR01 (1788982542)
0b.02.20 INFCTR01 (1788982542) Pool0 6SL3RJNW0000N2395B74 INFCTR01 (1788982542)
0a.01.16 INFCTR01 (1788982542) Pool0 6SL3MA290000N236HQ4Q INFCTR01 (1788982542)
0a.01.9 INFCTR01 (1788982542) Pool0 6SL3M6N60000N2376381 INFCTR01 (1788982542)
0a.01.10 INFCTR01 (1788982542) Pool0 6SL3MC9B0000N236DC91 INFCTR01 (1788982542)
0b.02.0 INFCTR01 (1788982542) Pool0 6SL3F40T0000N23951C8 INFCTR01 (1788982542)
0b.02.12 INFCTR01 (1788982542) Pool0 6SL3RBRX0000N2392NW6 INFCTR01 (1788982542)
0b.01.13 INFCTR01 (1788982542) Pool0 6SL3M71T0000N236E8GU INFCTR01 (1788982542)
0a.01.22 INFCTR01 (1788982542) Pool0 6SL3M8750000N236E99P INFCTR01 (1788982542)
0a.01.17 INFCTR01 (1788982542) Pool0 6SL3LV6Z0000N235AGV7 INFCTR01 (1788982542)
0b.02.10 INFCTR01 (1788982542) Pool0 6SL3RJN60000N2397C3U INFCTR01 (1788982542)
0b.02.22 INFCTR01 (1788982542) Pool0 6SL3RJFS0000N2394YAU INFCTR01 (1788982542)
0a.01.18 INFCTR01 (1788982542) Pool0 6SL3MQXK0000N2370F8U INFCTR01 (1788982542)
0a.01.6 INFCTR01 (1788982542) Pool0 6SL3MAR70000N236DCL7 INFCTR01 (1788982542)
0b.02.23 INFCTR01 (1788982542) Pool0 6SL3N3Z10000N2378W5N INFCTR01 (1788982542)
0a.01.1 INFCTR01 (1788982542) Pool0 6SL3MFSP0000N234AHGC INFCTR01 (1788982542)
0b.02.4 INFCTR01 (1788982542) Pool0 6SL3R9C30000N239B1NZ INFCTR01 (1788982542)
0b.02.3 INFCTR01 (1788982542) Pool0 6SL3N0SE0000N236HQ9A INFCTR01 (1788982542)
0b.02.6 INFCTR01 (1788982542) Pool0 6SL3R4EL0000N238J1VF INFCTR01 (1788982542)
0b.02.7 INFCTR01 (1788982542) Pool0 6SL3MDWY0000N236H44M INFCTR01 (1788982542)
0b.02.17 INFCTR02 (1788985759) Pool0 6SL3MGTK0000N23766FQ INFCTR02 (1788985759)
0a.01.14 INFCTR01 (1788982542) Pool0 6SL3MTSE0000N2370HDE INFCTR01 (1788982542)
0a.00.3 INFCTR02 (1788985759) Pool0 PVHRNYRB INFCTR02 (1788985759)
0a.00.7 INFCTR02 (1788985759) Pool0 PVHR2KZB INFCTR02 (1788985759)
0a.00.9 INFCTR02 (1788985759) Pool0 PVHR079B INFCTR02 (1788985759)
0a.00.1 INFCTR02 (1788985759) Pool0 PVHRG23B INFCTR02 (1788985759)
0a.00.11 INFCTR01 (1788982542) Pool0 PVHR33JB INFCTR01 (1788982542)
0a.00.4 INFCTR01 (1788982542) Pool0 PVHR5MAB INFCTR01 (1788982542)
0a.00.5 INFCTR02 (1788985759) Pool0 PVHRG9RB INFCTR02 (1788985759)
0b.02.5 INFCTR01 (1788982542) Pool0 0XHUBBLP INFCTR01 (1788982542)
0b.02.2 INFCTR01 (1788982542) Pool0 0XJKTGMP INFCTR01 (1788982542)
0b.02.8 INFCTR01 (1788982542) Pool0 0XJT7PHP INFCTR01 (1788982542)
0a.01.5 INFCTR01 (1788982542) Pool0 0XJJJ2HP INFCTR01 (1788982542)

 

 

The problem is that every time I try to change ownership (or anything else), the first controller says the disk doesn't exist and the second says the disk is owned by the partner.
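
For example, this is the kind of thing I have been trying; the disk name is just one example and I have not used any force option:

INFCTR01*> disk show -v
INFCTR02*> disk assign 0b.02.15 -o INFCTR01

The disk show on INFCTR01 simply does not list most of the disks, and the assign on INFCTR02 is refused because the disk is owned by the partner.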

ONTAP Cluster & OPENSSH

Adding controllers in DFM throwing error


Hello Team,

 

I am trying to add some NetApp 7-Mode controllers to my DFM version 3.2, but I am getting a network latency error.

 

All the controllers are using NATted IPs. Any thoughts on this?

NFS audit urgent help needed on 7-mode


Hi All,

 

Has anyone configured native NFS auditing?

According to the documentation, the steps seem straightforward except for points 5 and 6.

https://library.netapp.com/ecmdocs/ECMP1196993/html/GUID-0E810555-2144-4894-80F0-9E3FB33E8755.html

 

I have the following settings on my filer for NFS auditing, but no audit file is being generated.

 

 

cifs.audit.autosave.ontime.enable on
cifs.audit.autosave.ontime.interval 2m
cifs.audit.enable off
cifs.audit.file_access_events.enable on
cifs.audit.liveview.allowed_users
cifs.audit.liveview.enable on
cifs.audit.logon_events.enable on
cifs.audit.logsize 5368709120
cifs.audit.nfs.enable on
cifs.audit.nfs.filter.filename /vol/vol0/etc/log/nfs-audit.txt
cifs.audit.saveas /etc/log/adtlog.evt
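
(For reference, each of these was set with the standard options command, for example:)

options cifs.audit.nfs.enable on
options cifs.audit.nfs.filter.filename /vol/vol0/etc/log/nfs-audit.txt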

 

My question: can someone help me get this working?

 

Thanks,

-Ashwin

change name of snaplock volume


I have a volume that is SnapLocked and then mirrored to our recovery site. I need to switch the source and destination systems; however, the folder on the destination was named incorrectly, which is going to cause problems for the system that connects to it.

 

Can I change the name of the volume without affecting the SnapLock data contained within the folder?
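
For reference, the rename I have in mind is just the standard volume rename, roughly as below depending on the mode (names are placeholders; I don't know how SnapLock or the mirror relationship reacts to it, which is why I'm asking):

vol rename dst_vol_wrongname dst_vol_correct                                        (7-Mode)
volume rename -vserver svm1 -volume dst_vol_wrongname -newname dst_vol_correct      (clustered ONTAP)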


New TR Released: TR-4698-DEPLOY NetApp ONTAP Deploy on Intel NUC


This document shows how to install Select Deploy on an Intel NUC to create a small form-factor Select Deploy appliance.

ONTAP Select Deploy is an essential tool used to create and monitor ONTAP Select clusters. Additionally, it provides a mediator service that enables high availability for two-node Select clusters. Deploy is provided as a virtual machine (VM) that runs on either the VMware ESXi or KVM hypervisors. In certain small-form-factor deployments, the hosts used for Select do not have enough resources to run both a Select node and a Deploy VM. In those instances, an Intel NUC can be used to run Deploy as a low-cost, small-form-factor standalone appliance.

For more info, please click here

 

 

Thanks

SVM DR on ONTAP Select


Hello all,

 

Our customer runs ONTAP Select on vNAS.

 

The SVM on this single-node Select cluster has SVM DR configured for its CIFS volumes.

After a VMware host failure, the VM is restarted on another host by VMware HA.

The Select cluster recovers as planned thanks to the Deploy utility witness, but the source SVM on this cluster is declared "Locked" because the cluster restarted.

 

In the physical world this is indeed normal behaviour, but it does not fit well with ONTAP Select, which supports VMware HA.

 

Is this a known issue and are there ways to configure the system to resolve it for our customer?

NetApp harvest testing


Greetings,

 

I am currently installing NetApp Harvest on RHEL to collect metric data from the storage devices, feed it into an InfluxDB database, and display it in Grafana. I have the installation document and I'm at the part where setup needs to be done on the NetApp CLI. Unfortunately I do not have any free devices to test on, as they are all running in production. I was advised that the ONTAP operating system can be installed on third-party hardware or as a virtual OS and then be used with NetApp Harvest.

 

Can ONTAP installed in a virtual machine work with NetApp Harvest? How does it push metric data when it is connected virtually?

 

I would be glad if someone from the community could guide me.

 

Thanks,

Akku

LIFs for two different vservers have the same broadcast domain and failover group


There are two different vservers, and the LIFs in them share the same two broadcast domains and the same failover group.

 

Will that cause any issues?

 

Thanks in advance for your input!

Setting up our NetApp


Hello,

 

Our NetApp is no longer reachable. The NetApp was located in a different network. The storage and shares with data are still present. After moving the NetApp server to another network, the connection no longer works.

 

I am asking for your help.

 

 

What happens if an SM update does not finish on time


Hi,

 

VSM is scheduled for an hourly update (7-Mode ONTAP 8.2.x). Suppose an SM update does not finish on time; say, for example, it takes 1 hour 30 minutes to finish. What would happen to the SM update that gets triggered after 1 hour? How long will it stay in the queue? Is there a timeout? Please advise.

Unable to boot the NetApp storage FAS3240. PANIC: sanown: not enough filer table entries in SK process


SP MUNC1Netapp1-a> system console
User naroot has an active console session.
Would you like to disconnect that session, and start yours [y/n]? y
Type Ctrl-D to exit.

+++++++++++++++++++


Boot Loader version 3.6
Copyright (C) 2000-2003 Broadcom Corporation.
Portions Copyright (C) 2002-2014 NetApp, Inc. All Rights Reserved.

CPU Type: Intel(R) Xeon(R) CPU L5410 @ 2.33GHz


Starting AUTOBOOT press Ctrl-C to abort...
Loading X86_64/freebsd/image2/kernel:0x100000/8455008 0x910360/1278312 Entry at 0x80158990
Loading X86_64/freebsd/image2/platform.ko:0xa49000/655152 0xb97c40/694752 0xae8f40/39656 0xc41620/43152 0xaf2a28/86316 0xb07b54/63858 0xb174e0/140640 0xc4beb0/159120 0xb39a40/2024 0xc72c40/6072 0xb3a228/304 0xc743f8/912 0xb3a358/1680 0xc74788/5040 0xb3a9e8/960 0xc75b38/2880 0xb3ada8/184 0xc76678/552 0xb3ae60/448 0xb6f000/12918 0xb97b53/237 0xb72278/84120 0xb86b10/69699
Starting program at 0x80158990
NetApp Data ONTAP 8.1.4P4 7-Mode
Copyright (C) 1992-2014 NetApp.
All rights reserved.
md1.uzip: 25536 x 16384 blocks
md2.uzip: 5760 x 16384 blocks
*******************************
* *
* Press Ctrl-C for Boot Menu. *
* *
*******************************
Chelsio T3 RDMA Driver - version 0.1
Ipspace "iwarp-ipspace" created
Jun 26 12:58:43 [localhost:fci.adapter.link.online:info]: Fibre Channel adapter 0c link online.
Jun 26 12:58:43 [localhost:fci.nserr.noDevices:error]: The Fibre Channel fabric attached to adapter 0c reports the presence of no Fibre Channel devices.
Jun 26 12:58:43 [localhost:fci.adapter.link.online:info]: Fibre Channel adapter 0d link online.
Jun 26 12:58:43 [localhost:fci.nserr.noDevices:error]: The Fibre Channel fabric attached to adapter 0d reports the presence of no Fibre Channel devices.
Cluster monitor: warning: firmware partner-sysid variable is not set
Jun 26 12:58:52 [localhost:diskown.isEnabled:info]: software ownership has been enabled for this system
Jun 26 12:58:52 [loPANIC : sanown: not enough filer table entries
version: 8.1.4P4: Sat Jun 28 01:52:51 PDT 2014
conf : x86_64
cpuid = 0
Uptime: 46s

PANIC: sanown: not enough filer table entries in SK process sanown_io on release 8.1.4P4 on Tue Jun 26 12:58:52 GMT 2018

version: 8.1.4P4: Sat Jun 28 01:52:51 PDT 2014
compile flags: x86_64
Writing panic info to HA mailbox disks.
HA: current time (in sk_msecs) 22601 (in sk_cycles) 251767824098
DUMPCORE: START
Raid assimilation incomplete

DUMPCORE: END -- coredump *NOT* written.
Dumpcore failed
coredump: primary dumper failed -1.
coredump: trying secondary dumper...
no tftp server ip specified
netdump: no targets available to dump
coredump: secondary dumper failed -1.
Logging shutdown event to the SP...
System halting...
cpu_reset called on cpu#0

Phoenix TrustedCore(tm) Server
Copyright 1985-2006 Phoenix Technologies Ltd.
All Rights Reserved
BIOS version: 5.3.0
Portions Copyright (c) 2007-2014 NetApp, Inc. All Rights Reserved

CPU = 1 Processors Detected, Cores per Processor = 4
Intel(R) Xeon(R) CPU L5410 @ 2.33GHz
Testing RAM
512MB RAM tested
8192MB RAM installed
6144 KB L2 Cache
System BIOS shadowed
USB 2.0: MICRON eUSB DISK
BIOS is scanning PCI Option ROMs, this may take a few seconds...
+++++++++++++++++++


Boot Loader version 3.6
Copyright (C) 2000-2003 Broadcom Corporation.
Portions Copyright (C) 2002-2014 NetApp, Inc. All Rights Reserved.

CPU Type: Intel(R) Xeon(R) CPU L5410 @ 2.33GHz
LOADER-A>

 

============================================================================

 

LOADER-A> set-defaults
LOADER-A> boot_ontap
Loading X86_64/freebsd/image2/kernel:0x100000/8455008 0x910360/1278312 Entry at 0x80158990
Loading X86_64/freebsd/image2/platform.ko:0xa49000/655152 0xb97c40/694752 0xae8f40/39656 0xc41620/43152 0xaf2a28/86316 0xb07b54/63858 0xb174e0/140640 0xc4beb0/159120 0xb39a40/2024 0xc72c40/6072 0xb3a228/304 0xc743f8/912 0xb3a358/1680 0xc74788/5040 0xb3a9e8/960 0xc75b38/2880 0xb3ada8/184 0xc76678/552 0xb3ae60/448 0xb6f000/12918 0xb97b53/237 0xb72278/84120 0xb86b10/69699
Starting program at 0x80158990
NetApp Data ONTAP 8.1.4P4 7-Mode
Copyright (C) 1992-2014 NetApp.
All rights reserved.
md1.uzip: 25536 x 16384 bl^Cocks
md2.uzip: 5760 x 16384 blocks
*******************************
* *
* Press Ctrl-C for Boot Menu. *
* *
*******************************
^CBoot Menu will be available.
^C^C^C^C^C^C^C^C^C^C^C^C
Please choose one of the following:

(1) Normal Boot.
(2) Boot without /etc/rc.
(3) Change password.
(4) Clean configuration and initialize all disks.
(5) Maintenance mode boot.
(6) Update flash from backup config.
(7) Install new software first.
(8) Reboot node.
Selection (1-8)? 5
Chelsio T3 RDMA Driver - version 0.1
Ipspace "iwarp-ipspace" created
Jun 26 13:01:19 [localhost:fci.adapter.link.online:info]: Fibre Channel adapter 0c link online.
Jun 26 13:01:19 [localhost:fci.nserr.noDevices:error]: The Fibre Channel fabric attached to adapter 0c reports the presence of no Fibre Channel devices.
Jun 26 13:01:19 [localhost:fci.adapter.link.online:info]: Fibre Channel adapter 0d link online.
Jun 26 13:01:19 [localhost:fci.nserr.noDevices:error]: The Fibre Channel fabric attached to adapter 0d reports the presence of no Fibre Channel devices.
Cluster monitor: warning: firmware partner-sysid variable is not set
Jun 26 13:01:28 [localhost:diskown.isEnabled:info]: software ownership has been enabled for this system
Jun 2PANIC : sanown: not enough filer table entries
version: 8.1.4P4: Sat Jun 28 01:52:51 PDT 2014
conf : x86_64
cpuid = 0
Uptime: 49s

PANIC: sanown: not enough filer table entries in SK process sanown_io on release 8.1.4P4 on Tue Jun 26 13:01:28 GMT 2018

version: 8.1.4P4: Sat Jun 28 01:52:51 PDT 2014
compile flags: x86_64
Writing panic info to HA mailbox disks.
HA: current time (in sk_msecs) 21427 (in sk_cycles) 348330384325
DUMPCORE: START
Raid assimilation incomplete

DUMPCORE: END -- coredump *NOT* written.
Dumpcore failed
coredump: primary dumper failed -1.
coredump: trying secondary dumper...
no tftp server ip specified
netdump: no targets available to dump
coredump: secondary dumper failed -1.
Logging shutdown event to the SP...
System halting...
cpu_reset called on cpu#0

Phoenix TrustedCore(tm) Server
Copyright 1985-2006 Phoenix Technologies Ltd.
All Rights Reserved
BIOS version: 5.3.0
Portions Copyright (c) 2007-2014 NetApp, Inc. All Rights Reserved

CPU = 1 Processors Detected, Cores per Processor = 4
Intel(R) Xeon(R) CPU L5410 @ 2.33GHz
Testing RAM
512MB RAM tested
8192MB RAM installed
6144 KB L2 Cache
System BIOS shadowed
USB 2.0: MICRON eUSB DISK
BIOS is scanning PCI Option ROMs, this may take a few seconds...
+++++++++++++++++++


Boot Loader version 3.6
Copyright (C) 2000-2003 Broadcom Corporation.
Portions Copyright (C) 2002-2014 NetApp, Inc. All Rights Reserved.

CPU Type: Intel(R) Xeon(R) CPU L5410 @ 2.33GHz
LOADER-A>

 

 


NetApp Select Host Configure Fails with Generic Error


Hello,

 

I'm getting a very non-specific error when configuring a host in the Deploy Utility.

 

HostInfoErr: Error in retrieving information for host "10.xx.xx.xx". Reason: No details.

 

If I add the host with the ESX host credentials (e.g. ESX root and password), then I can configure the host no problem. The problem with that, however, is that when I attempt to deploy the single-node cluster I get an error about the host not having been added with vCenter credentials.

 

Here is the CLI input I'm using:

 

host configure --host-id 10.xx.xx.xx --location datacenter --storage-pool poolname --capacity-limit 1500 --eval --management-network mgmtVMNetwork --data-network mgmtVMNetwork --instance-type small

 

Anyone have any idea what's holding me up here? I've used the local administrator@vsphere.local credentials for vCenter, and yet there is still this generic error when configuring the host.

 

Thanks!

 

 

 

Copying volumes from one SVM to a new one


Hello,

 

I have a question, and maybe you can help me.

 

I have an existing primary/source SVM (vfiler) with 20-30 volumes, with shares and exports. I would like to copy them to a new destination/secondary SVM and implement SnapMirror from the primary to the secondary SVM, where the new secondary SVM provides the data in read-only mode only. It is not, and will not be, a disaster recovery solution. I could create the new secondary SVM manually, create all the volumes, and then create the SnapMirror relationships. But is there a shorter way to do it, using only SnapMirror, aggr copy, or other methods, to avoid creating all the volumes manually and perhaps also without recreating all the shares, export options, etc.? Maybe SnapMirror SVM DR and then deleting the relationship and the volume SnapMirror?

 

The source SVM is running ONTAP 9.1 and the destination 9.2P2. They are in different locations, and each is its own single-head MetroCluster: the source side has two single-head systems as a MetroCluster, and the destination has two single-head systems as a MetroCluster. The connection between both locations works; cluster peering and the relationship are working.
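
The SVM DR route I had in mind would look roughly like the sketch below, run on the destination cluster. The SVM names and the dst::> prompt are placeholders, the exact type/policy options depend on the ONTAP version, and I have not tried this against a MetroCluster source:

dst::> vserver create -vserver svm_secondary -subtype dp-destination
dst::> snapmirror create -source-path svm_primary: -destination-path svm_secondary: -identity-preserve false
dst::> snapmirror initialize -destination-path svm_secondary:

The trailing colon is what makes it an SVM-level relationship rather than a volume-level one.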

 

Many thanks!

 

 

Netapp replication Alarms


Hi,

 

My NetApp runs clustered Data ONTAP 8.3P1.

 

The servers are installed in two geo-redundant sites.

 

I am looking to find out whether there are any alarms related to NetApp replication.

 

Thanks for the help.

ns-switch vs source order


I have the output of two commands, as listed below.

I thought ns-switch should define the sequence used to look up an object, so why is ns-switch empty here?

The Source Order shown in the second output seems similar to what ns-switch is supposed to do. What are the differences between Source Order and ns-switch?

 

 

 

clus::vserver services name-service*> vserver show -vserver vserver1 -fields ns-switch
vserver ns-switch
--------- ---------
vserver1 -


clus::vserver services name-service*> vserver services name-service ns-switch show -vserver vserver1
                             Source
Vserver         Database     Order
--------------- ------------ ---------
vserver1        hosts        files, dns
vserver1        group        files, ldap
vserver1        passwd       files, ldap
vserver1        netgroup     files
vserver1        namemap      files
5 entries were displayed.
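
If I understand correctly, the per-database list above is the one that actually applies, and it is the one I can view and change with commands like these (using my vserver as the example):

clus::> vserver services name-service ns-switch show -vserver vserver1
clus::> vserver services name-service ns-switch modify -vserver vserver1 -database hosts -sources files,dns

What I still don't get is why the -fields ns-switch column on vserver show stays empty.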

About storage capacity and retention days of EMS event log


Please tell me about the EMS event log retention policy.

 

· What is the storage capacity of the EMS event log?

 

· How many days is the EMS event log kept?

 

<Storage information>
Model: FAS 2650A
Ontap: NetApp Release 9.1 P5

 

This is my first post.

 

Thank you.
