Channel: ONTAP Discussions topics

CDOT 8.3.2 Install Script


Hello

 

I'm wondering if anyone has a script / method to install CDOT automatically?

I want to start by setting up a 2-node cluster (basically one system with two filers) and then configure the SVM with all the services...

 

I know there are CLI and PowerShell modules, and I'm guessing someone has automated this?
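For reference, a rough sketch of the kind of cDOT 8.3 commands such a script would drive over SSH once the cluster setup wizard has run (all vserver, aggregate, node and address values below are placeholders, not a tested procedure):

vserver create -vserver svm1 -rootvolume svm1_root -aggregate aggr1 -rootvolume-security-style unix
network interface create -vserver svm1 -lif svm1_data1 -role data -home-node node-01 -home-port e0c -address 192.0.2.50 -netmask 255.255.255.0
vserver nfs create -vserver svm1
vserver cifs create -vserver svm1 -cifs-server SVM1 -domain example.local
vserver iscsi create -vserver svm1

The NetApp PowerShell Toolkit exposes the same operations as cmdlets if scripting from Windows turns out to be easier.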

 

Any help is greatly appreciated.

 

Omer


CDOT 8.3.2 Backup & Restore


Hi

 

Any easy way to back up / restore a NetApp configuration?

Basically, I have two identical clusters, each with two filers. I want to install everything on the first cluster, configure it (SVM, CIFS, NFS, iSCSI, AD, etc.) and then "dump" the configuration to a file that I can load onto the other cluster so everything just works :-)

 

I'm sure I'm oversimplifying it, but if there's anything I can do, that would be awesome.
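For what it's worth, cDOT does have a configuration backup facility at the advanced privilege level; a minimal sketch (node name, backup name and destination URL are placeholders, and note this is designed for recovering the same cluster, not for cloning a configuration onto a second cluster):

set -privilege advanced
system configuration backup create -node cluster1-01 -backup-name pre_clone.7z -backup-type cluster
system configuration backup show
system configuration backup upload -node cluster1-01 -backup pre_clone.7z -destination ftp://host/path/

So for a true "dump and replay" onto the second cluster, the SVM/CIFS/NFS/iSCSI setup would probably still have to be scripted and re-run there.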

 

Thanks

 

Omer

LDAP cache TTL


Hello all! This is a question for those of you who have configured your C-Mode filer with LDAP authentication. The default lifetime for the LDAP cache is 86400 seconds, which means 24 hours:

 

::*> diag secd cache show-config -node NodeA -cache-name ldap-username-to-creds

 

Current Entries: 0
Max Entries: 512
Entry Lifetime: 86400

 

I was wondering if anybody has tweaked this setting to be less than 24 hours, preferably something like 15 minutes. If so, was there any unexpected behaviour from the filer, or was everything fine? FYI, the filer I manage is an AFF8080 running cDOT 8.3.2P5. Thanks in advance for any response.
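In case it helps the discussion, the knob I have in mind is the companion set-config command in the same diag directory; the parameter name below is an assumption inferred from the show-config output, so please verify with "?" before running it:

::*> diag secd cache set-config -node NodeA -cache-name ldap-username-to-creds -life-time 900

900 seconds would give the 15-minute TTL, and since the command takes a -node parameter it would presumably have to be repeated on every node.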

How to delete a snapshot without confirmation from Command Line?


Just not getting this.....

 

How do I delete groups of snapshots without confirmation in ONTAP 8.3?

 

partial script is:

ssh@filer snapshot delete -vserver xxx -volume yyy -snapshot zzz

 

where zzz is fed in from a list of snapshot names, and each delete keeps asking me for confirmation.
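A minimal sketch of one way around it: turn confirmations off for the session before the delete (the admin user and the exact quoting are assumptions, adjust to your script):

ssh admin@filer "set -confirmations off; volume snapshot delete -vserver xxx -volume yyy -snapshot zzz"

The -snapshot parameter should also accept wildcard queries (e.g. -snapshot nightly*) if fewer calls per volume are wanted.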

 

 

Thanks!

Changing VLAN # on NetApp CDOT 8.3 FAS2552


Hi,

 

I have a Servers VLAN configured on my NetApp, which is VLAN 10, and I need to change it from VLAN 10 to VLAN 710 (to free up VLAN 10).

I've made the changes on our switches; now I just need to move the IP over from VLAN 10 to VLAN 710 to make it active.

 

I am wondering what are the steps to change the VLAN on the NetApp, this is the current configuration:

 

[Screenshots attached: NetApp_vlan.png, netapp_interfaces.png, netapp_broadcast_domain.png]

 

How should I go about changing from VLAN 10 to VLAN 710? There is no IP/subnet change, only the VLAN number; I would like all configuration to stay as it is, just on the new VLAN 710.
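For discussion, a rough outline of the 8.3 commands I'd expect to need (node, broadcast-domain, vserver and LIF names below are placeholders, not taken from the screenshots):

network port vlan create -node node-01 -vlan-name a0a-710
network port broadcast-domain add-ports -ipspace Default -broadcast-domain Servers -ports node-01:a0a-710
network interface modify -vserver svm1 -lif servers_lif1 -home-port a0a-710
network interface revert -vserver svm1 -lif servers_lif1
network port broadcast-domain remove-ports -ipspace Default -broadcast-domain Servers -ports node-01:a0a-10
network port vlan delete -node node-01 -vlan-name a0a-10

I assume there will be a short outage for that LIF while it moves, since the same IP can't be up on both VLANs at once.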

 

 

Thank you !

Data ONTAP API Failed : Snapmirror error: The aggregate that contains the destination volume does not support compression


While creating the SnapMirror relationship, I received the following error message:

 

"Data ONTAP API Failed : Snapmirror error: The aggregate that contains the destination volume does not support compression (Error:13102)"

 

Kindly suggest

VIF creation on static interface


Hi,

 

I am new to NetApp.

 

I want to create a VLAN on my storage. I have only two interfaces available.

 

FAS2020

NetApp Release 7.3.7P3

 

e0a: flags=0x2d48867<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
        inet 150.236.127.122 netmask-or-prefix 0xffffffe0 broadcast 150.236.127.127
        partner inet 150.236.127.117 (not in use)
        ether 00:a0:98:11:db:1c (auto-100tx-fd-up) flowcontrol full
e0b: flags=0x2d48867<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
        inet 150.236.127.203 netmask-or-prefix 0xffffffe0 broadcast 150.236.127.223
        partner inet 150.236.127.202 (not in use)
        ether 00:a0:98:11:db:1d (auto-1000t-fd-up) flowcontrol full
lo: flags=0x1948049<UP,LOOPBACK,RUNNING,MULTICAST,TCPCKSUM> mtu 8160
        inet 127.0.0.1 netmask-or-prefix 0xff000000 broadcast 127.0.0.1
        ether 00:00:00:00:00:00 (VIA Provider)

 

Is it possible to create a vif on just one interface (e0b), or do I need to use both e0a and e0b? And how do I create a vif on an interface that is already running?

 

e0a is a 100 Mbps interface and e0b is 1000 Mbps.
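A minimal 7-Mode sketch of what I have in mind, assuming the work is done from the console (or over e0a) so that e0b can be taken down, and that the existing e0b address just moves onto the VLAN interface (netmask 0xffffffe0 = 255.255.255.224):

ifconfig e0b down
ifconfig e0b 0.0.0.0
vif create single vif0 e0b
vlan create vif0 10
ifconfig vif0-10 150.236.127.203 netmask 255.255.255.224 up

A single-link vif can be created like this and a second link added later with vif add; strictly speaking, the VLAN could also be created directly on the physical port (vlan create e0b 10) if the vif is only there to carry the VLAN. The same lines would also need to go into /etc/rc to survive a reboot, and the switch port for e0b has to be trunked for VLAN 10.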

 

E0m broadcast domain

I have a 4-node cluster. e0M is used for the node and cluster management LIFs, with all the e0M ports in broadcast domain BD-M. I also have four 10G ports on two PCI cards (e3a, e3b, e4a, e4b), each with two VLANs trunked to it. e3a and e4a are joined into ifgrp a0a with LACP, and e3b and e4b are joined into ifgrp a0b with LACP. On top of those I have vlan1-a0a and vlan1-a0b, plus vlan2-a0a and vlan2-a0b. The vlan1-a0a and vlan1-a0b ports are in broadcast domain BD-1, and the vlan2-a0a and vlan2-a0b ports are in broadcast domain BD-2. I don't have any 1G onboard ports hooked up.

How can I add more ports to BD-M to allow for node management LIF failover? The e0M ports are all on the same VLAN/subnet as BD-1.

I tried combining the e0M ports with the vlan1-a0a and vlan1-a0b ports, and it complained about best practice. Should I combine them with just a0a and a0b instead?
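For reference, the commands involved would presumably be along these lines (in the CLI the VLAN ports show up as a0a-<vlan-id> / a0b-<vlan-id>; names below are placeholders):

network interface show -role node-mgmt -fields home-port,failover-group,failover-policy
network port broadcast-domain add-ports -ipspace Default -broadcast-domain BD-M -ports node1:a0a-<vlan-id>,node1:a0b-<vlan-id>

The second command is presumably the step that triggers the best-practice warning about mixing management and data ports in one broadcast domain.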

shelves migration from FAS3210 to 2554


We have ordered a new NetApp FAS2554, which is about to be installed.

We already have an old NetApp FAS3210.

We have also borrowed 2 x DS2246 shelves for the migration.

So what should we do for the shelf migration?

Just disconnect the shelves from the old system and connect them to the new one following the cabling guide?

 

Error creating a Protection Relationship from NetApp 1 to NetApp 2


I have a problem.

 

The peering from netappc2 to netappc3 has been created. When I then try to create a Protection Relationship from netappc2 to netappc3, authentication with the admin user and password fails.

 

Both NetApp clusters are running cDOT 8.3.2.

 

The error is in the screenshot.
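A few checks that might help pin down where the authentication fails (cluster and SVM names are the ones above; commands assumed from the 8.3 CLI):

cluster peer show
cluster peer health show
vserver peer show

If this is being done from System Manager, it usually prompts for the remote cluster's admin credentials as well, so it may be worth verifying that the admin login works directly on netappc3.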

 

Thank you for your help.

 

 

Vserver status showing offline


Hello,

 

I'm running the dashboard command in the clustershell and found that the vserver status shows offline.

 

an0990-na-cl1::> dashboard health vserver show
                                       EMS Issues
Vserver           Status   Health    Crit  Warn  Info
----------------- -------- -------  ----- ----- -----
an0990-na-cl1-fcp offline  ok           0     0     0
Issues: The filesystem protocols are not configured.

 

My vserver is running fine and is hosting data LUNs.

 

I need to know why the status is offline, and what exactly the status means here.
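For comparison, the vserver's own view of its state (field names assumed from the 8.3 CLI):

vserver show -vserver an0990-na-cl1-fcp -fields state,allowed-protocols

Given the "filesystem protocols are not configured" line in the dashboard output above, I'm wondering whether the dashboard simply flags SAN-only (FCP/iSCSI) vservers this way even though they are healthy.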

 

 

Regards..

JP

Ontap Select Performance Guidelines


Hi,

 

I'm wondering if anyone has some specific or general impressions of Ontap Select performance? 

 

I've read through TR-4517 which does have some rough benchmark data but doesn't include any latency or IOPS stats.

 

I'm also surprised that TR-4517 recommends one giant RAID LUN. It seems to me that placing the NVRAM partition on its own flash might be very good for write performance.

 

Compared to VSAN, is it possible to get a very high performance ONTAP Select setup?

 

Obviously a lot of it has to do with the underlying disks, but VSAN can be very fast. Just wondering if ONTAP Select could meet the same performance levels given the same underlying disk structure.

Aggregate-Volume used space mismatch


Hi,

 

We have a problem where the used space on our aggregate does not equal the total of the used space across all volumes.

 

Filer1> df -Ag
Aggregate                total       used      avail  capacity
aggr0                   9699GB     6042GB     3657GB       62%
aggr0/.snapshot            0GB        0GB        0GB      ---%

Filer1> df -sg
Filesystem                used      saved     %saved
/vol/vol0/                 5GB        0GB         0%
/vol/vol_vmware_nfs/    8456GB     2058GB        20%

As you can see there is approx 8.4TB of space used by all (thin provisioned) volumes, but only 6TB used on the aggregate.

 

Even taking away the 2TB of de-dupe savings, there is still nearly 500GB that appears to have gone "missing". Can anyone explain what might be going on here?

 

I have checked and there are no snapshots currently on the volume.
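Two commands that may show where the space is actually being accounted, assuming 7-Mode or the nodeshell given the df syntax above (output layout varies slightly by release):

aggr show_space -h aggr0
snap list -A aggr0

show_space breaks the aggregate usage down per volume (allocated vs. used), and snap list -A rules out aggregate-level snapshots, which df -Ag reports on the separate .snapshot line.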

 

Thanks,

Cluster Hyper-V VMs slow


Hi,

 

I have a Hyper-V cluster with a CSV connected over iSCSI. Some of my VMs are very slow.

 

For example, I have some VMs (2008R2/2012R2) with I/O around 90 Mb/s. Everything is OK.

 

I have other VMs (2008R2/2012R2) with I/O around 15 Mb/s. Not OK.

 

Deduplication runs every day at midnight (24:00).

 

Regards.


"Disks are currently unowned" imposible to change its state or assign.


Hi,

A few days ago a field engineer replaced several drives, and one of them ended up in a strange state.

Right now the state is "Not Owned". Assigning ownership doesn't work (from either controller); whenever I try, I get this, no matter what:

 

Thu Sep 29 10:07:51 CEST [PAG874: disk.senseError:error]: Disk 1d.19: op 0x28:0000a3e8:0018 sector 0 SCSI:hardware error - (4 44 0 20)
Thu Sep 29 10:07:51 CEST [PAG874: diskown.errorReadingOwnership:warning]: error 46 (disk condition triggered maintenance testing) while reading ownership on disk 1d.19 (S/N )
Thu Sep 29 10:07:51 CEST [PAG874: disk.senseError:error]: Disk 1d.19: op 0x28:0000a3f0:0008 sector 0 SCSI:hardware error - (4 44 0 20)
Thu Sep 29 10:07:51 CEST [PAG874: disk.senseError:error]: Disk 1d.19: op 0x28:0000a3e8:0018 sector 0 SCSI:hardware error - (4 44 0 20)
Thu Sep 29 10:07:51 CEST [PAG874: diskown.errorReadingOwnership:warning]: error 46 (disk condition triggered maintenance testing) while reading ownership on disk 1d.19 (S/N )
Thu Sep 29 10:07:51 CEST [PAG874: disk.senseError:error]: Disk 1d.19: op 0x28:0000a3f0:0008 sector 0 SCSI:hardware error - (4 44 0 20)
Thu Sep 29 10:07:51 CEST [PAG874: diskown.changingOwner:info]: changing ownership for disk 1d.19 (S/N ) from unowned (ID 4294967295) to PAG874 (ID 151736299)
Thu Sep 29 10:07:51 CEST [PAG874: disk.senseError:error]: Disk 1d.19: op 0x2a:0000a3e8:0008 sector 0 SCSI:hardware error - (4 44 0 20)
disk assign: Assign failed for one or more disks in the disk list.
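Given that every attempt logs SCSI hardware-error sense data (4 44 0 20) and the ownership write itself fails, my working assumption is that the replacement disk itself is faulty. A few 7-Mode checks I'm comparing against a known-good disk (commands assumed for this release):

disk show -n
sysconfig -r
storage show disk -p

disk show -n lists only the unowned disks, sysconfig -r shows failed/broken disks per aggregate, and storage show disk -p shows whether both paths to 1d.19 are reporting errors.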

 

If I run "disk show -v", the state of the disk is this:

 

DISK OWNER POOL SERIAL NUMBER HOME
------------ ------------- ----- ------------- -------------
1c.49 PAG873 (151736722) Pool0 PAJRXV7E PAG873 (151736722)
1d.51 PAG873 (151736722) Pool0 J80G2SML PAG873 (151736722)
4b.54 PAG873 (151736722) Pool0 PAHZ21WF PAG873 (151736722)
4a.38 PAG873 (151736722) Pool0 J80Z1A4L PAG873 (151736722)
4c.52 PAG873 (151736722) Pool0 HZ30EJ5L PAG873 (151736722)
4a.20 PAG873 (151736722) Pool0 MQ0AS3XF PAG873 (151736722)
1c.43 PAG873 (151736722) Pool0 J81S0W4L PAG873 (151736722)
4c.34 PAG873 (151736722) Pool0 J80ZVXWL PAG873 (151736722)
1b.33 PAG873 (151736722) Pool0 PAKKWX9F PAG873 (151736722)
4d.50 PAG873 (151736722) Pool0 J80ASWLL PAG873 (151736722)
4c.40 PAG873 (151736722) Pool0 PBHNVH9E PAG873 (151736722)
4b.51 PAG873 (151736722) Pool0 WD-WMATV8677177 PAG873 (151736722)
1b.16 PAG873 (151736722) Pool0 PAH2016F PAG873 (151736722)
1d.35 PAG873 (151736722) Pool0 9QJ2Q64N PAG873 (151736722)
4a.75 PAG873 (151736722) Pool0 PBHSKDNF PAG873 (151736722)
4a.54 PAG873 (151736722) Pool0 9QJ3G11B PAG873 (151736722)
1b.38 PAG873 (151736722) Pool0 9QJ688H8 PAG873 (151736722)
1d.64 PAG873 (151736722) Pool0 PBHE7VPE PAG873 (151736722)
1a.51 PAG873 (151736722) Pool0 9QJ688GV PAG873 (151736722)
1c.59 PAG873 (151736722) Pool0 PAJEJWTE PAG873 (151736722)
1d.54 PAG873 (151736722) Pool0 PAJ82JAF PAG873 (151736722)
4a.17 PAG873 (151736722) Pool0 PAH25J4F PAG873 (151736722)
1a.53 PAG873 (151736722) Pool0 WD-WMATV5302064 PAG873 (151736722)
1b.21 PAG873 (151736722) Pool0 J81PRRWL PAG873 (151736722)
1d.32 PAG873 (151736722) Pool0 PBHUWPGF PAG873 (151736722)
1a.33 PAG873 (151736722) Pool0 MQ0AR6YF PAG873 (151736722)
1b.41 PAG873 (151736722) Pool0 J80GGTBL PAG873 (151736722)
1d.36 PAG873 (151736722) Pool0 WD-WMATV8633422 PAG873 (151736722)
1d.67 PAG873 (151736722) Pool0 9QJ63LVW PAG873 (151736722)
4d.22 PAG873 (151736722) Pool0 J80Z995L PAG873 (151736722)
1a.24 PAG874 (151736299) Pool0 PAJESEUE PAG874 (151736299)
1d.20 PAG873 (151736722) Pool0 PAGWW7VD PAG873 (151736722)
4a.35 PAG873 (151736722) Pool0 PBHTK4ZE PAG873 (151736722)
1a.19 PAG873 (151736722) Pool0 9QJ662YL PAG873 (151736722)
1c.19 PAG873 (151736722) Pool0 9QJ662Z4 PAG873 (151736722)
4c.37 PAG874 (151736299) Pool0 PAJETY6E PAG874 (151736299)
1d.76 PAG873 (151736722) Pool0 MS3KG3ZF PAG873 (151736722)
1d.68 PAG873 (151736722) Pool0 9QJ3K9F4 PAG873 (151736722)
1d.19 Not Owned NONE
1b.76 PAG874 (151736299) Pool0 9QJ68FZP PAG874 (151736299)
4a.64 PAG873 (151736722) Pool0 MS0P25TK PAG873 (151736722)
4b.28 PAG873 (151736722) Pool0 PAGWJZLD PAG873 (151736722)
1d.59 PAG874 (151736299) Pool0 9QJ3JXDZ PAG874 (151736299)
4b.67 PAG873 (151736722) FAILED 9QJ66GH5 PAG873 (151736722)
4b.44 PAG874 (151736299) Pool0 MQ0A7B4F PAG874 (151736299)
4b.66 PAG873 (151736722) Pool0 MQ0BJY4F PAG873 (151736722)
1c.32 PAG873 (151736722) Pool0 HZ1LS1SL PAG873 (151736722)
4d.21 PAG873 (151736722) Pool0 PAHXRHLF PAG873 (151736722)
1d.48 PAG873 (151736722) Pool0 PAHTME6F PAG873 (151736722)
1c.56 PAG874 (151736299) Pool0 9QJ65YS7 PAG874 (151736299)
1a.27 PAG874 (151736299) Pool0 9QJ7HZRW PAG874 (151736299)
4b.40 PAG873 (151736722) Pool0 MS0P1U3K PAG873 (151736722)
1a.29 PAG874 (151736299) Pool0 J80ZN2LL PAG874 (151736299)
1d.70 PAG873 (151736722) Pool0 9QJ68GT5 PAG873 (151736722)
4d.71 PAG873 (151736722) Pool0 PAJL48PF PAG873 (151736722)
1b.60 PAG874 (151736299) Pool0 9QJ683VR PAG874 (151736299)
4c.53 PAG873 (151736722) Pool0 9QJ68GVZ PAG873 (151736722)
1a.70 PAG874 (151736299) Pool0 WD-WMATV6010458 PAG874 (151736299)
1c.22 PAG873 (151736722) Pool0 HZ1TSMGL PAG873 (151736722)
1c.24 PAG873 (151736722) Pool0 PBH2DR2E PAG873 (151736722)
4c.60 PAG874 (151736299) Pool0 9QJ5BRA1 PAG874 (151736299)
4b.57 PAG874 (151736299) Pool0 9QJ68KN9 PAG874 (151736299)
4a.34 PAG873 (151736722) Pool0 9QJ688ZY PAG873 (151736722)
4b.25 PAG873 (151736722) Pool0 WD-WMATV5302619 PAG873 (151736722)
1a.25 PAG874 (151736299) Pool0 9QJ60FGW PAG874 (151736299)
1b.56 PAG874 (151736299) Pool0 9QJ7ABEF PAG874 (151736299)
4b.53 PAG873 (151736722) Pool0 9QJ684W5 PAG873 (151736722)
1a.18 PAG873 (151736722) Pool0 PBG57WSF PAG873 (151736722)
1a.50 PAG874 (151736299) Pool0 9QJ47Y06 PAG874 (151736299)
1d.34 PAG873 (151736722) Pool0 PAJG21LE PAG873 (151736722)
1d.38 PAG873 (151736722) Pool0 9QJ6841Y PAG873 (151736722)
4c.61 PAG874 (151736299) Pool0 PAKJV3ZF PAG874 (151736299)
1a.36 PAG873 (151736722) Pool0 PAJEU7GE PAG873 (151736722)
1a.52 PAG873 (151736722) Pool0 HD2W2MKL PAG873 (151736722)
1d.18 PAG873 (151736722) Pool0 J81044AL PAG873 (151736722)
1b.71 PAG873 (151736722) Pool0 PAKK4R2F PAG873 (151736722)
1b.52 PAG874 (151736299) Pool0 9QJ7CF3M PAG874 (151736299)
4b.50 PAG873 (151736722) Pool0 MQ0AKMTF PAG873 (151736722)
1a.55 PAG874 (151736299) Pool0 WD-WMATV9103742 PAG874 (151736299)
1c.23 PAG874 (151736299) Pool0 PAKKME1F PAG874 (151736299)
1b.36 PAG873 (151736722) Pool0 J80ZMY0L PAG873 (151736722)
4a.68 PAG873 (151736722) Pool0 PAKJY7WF PAG873 (151736722)
4b.55 PAG874 (151736299) Pool0 9QJ5BQGJ PAG874 (151736299)
1d.56 PAG874 (151736299) Pool0 PAJE4JKF PAG874 (151736299)
4b.68 PAG873 (151736722) Pool0 J80E3YVL PAG873 (151736722)
1b.27 PAG873 (151736722) Pool0 PAKK5P6F PAG873 (151736722)
4c.58 PAG874 (151736299) Pool0 9QJ68G2G PAG874 (151736299)
1d.53 PAG873 (151736722) Pool0 N0292M5L PAG873 (151736722)
1d.55 PAG874 (151736299) Pool0 PAJEW8YE PAG874 (151736299)
1d.58 PAG874 (151736299) Pool0 9QJ4D8L6 PAG874 (151736299)
4a.44 PAG873 (151736722) Pool0 9QJ4GXE0 PAG873 (151736722)
1d.73 PAG874 (151736299) Pool0 9QJ68460 PAG874 (151736299)
4a.71 PAG874 (151736299) Pool0 9QJ68ECB PAG874 (151736299)
1b.45 PAG874 (151736299) Pool0 9QJ482WC PAG874 (151736299)
4b.19 PAG873 (151736722) Pool0 WD-WMATV4481829 PAG873 (151736722)
1a.73 PAG874 (151736299) Pool0 9QJ67W2G PAG874 (151736299)
4a.42 PAG874 (151736299) Pool0 9QJ68494 PAG874 (151736299)
1a.67 PAG874 (151736299) Pool0 9QJ7DWHZ PAG874 (151736299)
1d.27 PAG874 (151736299) Pool0 MS0NM9LK PAG874 (151736299)
1b.43 PAG874 (151736299) Pool0 9QJ687JV PAG874 (151736299)
1d.72 PAG874 (151736299) Pool0 9QJ67WFL PAG874 (151736299)
4d.43 PAG874 (151736299) Pool0 9QJ3JSP9 PAG874 (151736299)
4d.61 PAG874 (151736299) Pool0 9QJ63EP6 PAG874 (151736299)
1d.37 PAG873 (151736722) Pool0 MS0NY3PK PAG873 (151736722)
1c.42 PAG873 (151736722) Pool0 PAJYZLLF PAG873 (151736722)
4a.28 PAG874 (151736299) Pool0 9QJ3MF8R PAG874 (151736299)
1d.77 PAG874 (151736299) Pool0 MQ06BJMF PAG874 (151736299)
1d.28 PAG874 (151736299) Pool0 9QJ65DTB PAG874 (151736299)
4b.58 PAG874 (151736299) Pool0 9QJ68G7K PAG874 (151736299)
1d.23 PAG874 (151736299) Pool0 9QJ68KRY PAG874 (151736299)
1d.40 PAG874 (151736299) Pool0 9QJ6883E PAG874 (151736299)
1d.16 PAG874 (151736299) Pool0 9QJ4R1D0 PAG874 (151736299)
1a.45 PAG874 (151736299) Pool0 9QJ7CFQW PAG874 (151736299)
4d.74 PAG874 (151736299) Pool0 9QJ7D5RZ PAG874 (151736299)
1c.18 PAG873 (151736722) Pool0 PAHZGNWF PAG873 (151736722)
1d.66 PAG873 (151736722) Pool0 WD-WMATV4561782 PAG873 (151736722)
1b.73 PAG873 (151736722) Pool0 9QJ3HFPH PAG873 (151736722)
1b.77 PAG874 (151736299) Pool0 9QJ3Y20V PAG874 (151736299)
4a.39 PAG874 (151736299) Pool0 MQ0BPBZF PAG874 (151736299)
4d.57 PAG874 (151736299) Pool0 WD-WMATV5947506 PAG874 (151736299)
4a.43 PAG874 (151736299) FAILED 9QJ68G81 PAG874 (151736299)
4d.26 PAG874 (151736299) Pool0 N021VD5L PAG874 (151736299)
1d.41 PAG874 (151736299) Pool0 9QJ68GV0 PAG874 (151736299)
4d.25 PAG874 (151736299) Pool0 J80E6MKL PAG874 (151736299)
1a.48 CDSNA001 (118057071) Pool0 9QJ3M5A4 CDSNA001 (118057071)
4a.40 PAG874 (151736299) Pool0 WD-WMATV4647888 PAG874 (151736299)
1d.39 PAG874 (151736299) Pool0 9QJ3G4JE PAG874 (151736299)
1b.48 PAG873 (151736722) Pool0 PAKJGH4F PAG873 (151736722)
1b.23 PAG874 (151736299) Pool0 N0300SEL PAG874 (151736299)
4d.65 PAG873 (151736722) Pool0 9QJ68490 PAG873 (151736722)
1b.18 PAG873 (151736722) Pool0 PAJEMXME PAG873 (151736722)
1b.61 PAG874 (151736299) Pool0 PAKJUM9F PAG874 (151736299)
4b.69 PAG873 (151736722) Pool0 9QJ3DT43 PAG873 (151736722)
1d.42 PAG874 (151736299) Pool0 J80E37JL PAG874 (151736299)
4c.45 PAG874 (151736299) Pool0 J80ZVX6L PAG874 (151736299)
4a.65 PAG874 (151736299) Pool0 PBH2DLXE PAG874 (151736299)
4d.49 PAG873 (151736722) Pool0 PAJKB8PE PAG873 (151736722)
4a.32 PAG873 (151736722) Pool0 9QJ61VFJ PAG873 (151736722)
1b.35 PAG873 (151736722) Pool0 PAHE506F PAG873 (151736722)
1c.57 PAG874 (151736299) Pool0 PAJ5JH2E PAG874 (151736299)
1a.23 PAG874 (151736299) Pool0 PAGX1BED PAG874 (151736299)
4a.26 PAG874 (151736299) Pool0 J81K2V2L PAG874 (151736299)
4b.17 PAG873 (151736722) Pool0 9QJ4QDQY PAG873 (151736722)
1a.58 PAG874 (151736299) Pool0 PAKK9HZF PAG874 (151736299)
1a.16 PAG874 (151736299) Pool0 PBHSHX3F PAG874 (151736299)
1c.21 PAG873 (151736722) Pool0 PAGWW77D PAG873 (151736722)
1c.16 PAG873 (151736722) Pool0 PAHZENHF PAG873 (151736722)
4a.60 PAG874 (151736299) Pool0 PAJ5ND9F PAG874 (151736299)
4c.41 PAG874 (151736299) Pool0 PAJ5NKPF PAG874 (151736299)
4b.74 PAG874 (151736299) Pool0 PBH010TF PAG874 (151736299)
1c.44 PAG874 (151736299) Pool0 PAGV750A PAG874 (151736299)
1a.69 PAG873 (151736722) Pool0 PBJ0WDUE PAG873 (151736722)
4b.72 PAG874 (151736299) Pool0 PBHSJZKF PAG874 (151736299)
1c.17 PAG874 (151736299) Pool0 PAGWW5DD PAG874 (151736299)
1c.25 PAG874 (151736299) Pool0 PAJ5M3ZF PAG874 (151736299)
1c.48 PAG874 (151736299) Pool0 PBHTK02E PAG874 (151736299)
4b.75 PAG873 (151736722) Pool0 9QJ2ZSX3 PAG873 (151736722)
1a.41 PAG874 (151736299) Pool0 PAJDB3SF PAG874 (151736299)
1b.39 PAG874 (151736299) Pool0 PBHW6DUE PAG874 (151736299)
4b.20 PAG874 (151736299) Pool0 PAGX2D3D PAG874 (151736299)
4d.60 PAG874 (151736299) Pool0 PAHZ5MME PAG874 (151736299)
1c.28 PAG874 (151736299) Pool0 PAJ6252F PAG874 (151736299)
1c.20 PAG874 (151736299) Pool0 PBJ203TE PAG874 (151736299)
1c.27 PAG874 (151736299) Pool0 PBGZZK4F PAG874 (151736299)
4c.38 PAG873 (151736722) Pool0 WD-WMATV3766349 PAG873 (151736722)
4b.65 PAG874 (151736299) Pool0 PAHW5W3F PAG874 (151736299)
4a.56 PAG874 (151736299) Pool0 J810549L PAG874 (151736299)
4b.22 PAG873 (151736722) Pool0 PAGVNTPD PAG873 (151736722)
4b.64 PAG873 (151736722) Pool0 WD-WMATV8954608 PAG873 (151736722)
4a.21 PAG873 (151736722) Pool0 MS0VUH7K PAG873 (151736722)
4c.39 PAG874 (151736299) Pool0 MQ0EBV6F PAG874 (151736299)
4c.54 PAG873 (151736722) Pool0 9QJ61F4F PAG873 (151736722)
4d.29 PAG873 (151736722) Pool0 HZ1H01GL PAG873 (151736722)
1d.24 PAG874 (151736299) Pool0 PAJE4GUF PAG874 (151736299)
1d.44 PAG873 (151736722) Pool0 9QJ3J707 PAG873 (151736722)
1a.59 PAG874 (151736299) Pool0 J80G34PL PAG874 (151736299)
1a.57 PAG873 (151736722) Pool0 PAJ4AHUE PAG873 (151736722)
4c.33 PAG873 (151736722) Pool0 PAJJRW6E PAG873 (151736722)
1a.37 PAG873 (151736722) Pool0 PBGK1MNF PAG873 (151736722)
4b.26 PAG874 (151736299) Pool0 MS30443F PAG874 (151736299)
4d.52 PAG873 (151736722) Pool0 HZ30E46L PAG873 (151736722)
4a.49 PAG874 (151736299) Pool0 PBG54UHF PAG874 (151736299)
4a.61 PAG874 (151736299) Pool0 N02ZLBPL PAG874 (151736299)
4d.45 PAG874 (151736299) Pool0 J81S0ZTL PAG874 (151736299)
1d.69 PAG874 (151736299) Pool0 MS3KG5DF PAG874 (151736299)
1d.17 PAG874 (151736299) Pool0 MQ0ES43F PAG874 (151736299)
1c.26 PAG874 (151736299) Pool0 MQ0ES55F PAG874 (151736299)
4a.72 PAG874 (151736299) Pool0 MQ0AR40F PAG874 (151736299)
4b.32 PAG874 (151736299) Pool0 PAKKNBMF PAG874 (151736299)
1d.33 PAG874 (151736299) Pool0 MQ0ESYJF PAG874 (151736299)
1c.29 PAG874 (151736299) Pool0 MQ0B017F PAG874 (151736299)
4b.42 PAG874 (151736299) Pool0 PAJYW0GF PAG874 (151736299)
4a.74 PAG874 (151736299) Pool0 WD-WMATV9103926 PAG874 (151736299)
1b.29 PAG874 (151736299) Pool0 MQ0A3Z2F PAG874 (151736299)
1a.76 PAG874 (151736299) Pool0 PAJBEE9F PAG874 (151736299)
1b.24 PAG874 (151736299) Pool0 MQ0AS79F PAG874 (151736299)
1d.75 PAG874 (151736299) Pool0 WD-WMATV9103607 PAG874 (151736299)
4c.36 PAG873 (151736722) Pool0 9QJ68E7B PAG873 (151736722)
1a.66 PAG873 (151736722) Pool0 9QJ44AWG PAG873 (151736722)
1c.50 PAG873 (151736722) Pool0 9QJ7D42H PAG873 (151736722)
1b.70 PAG873 (151736722) Pool0 MQ0BMX2F PAG873 (151736722)
4b.49 PAG873 (151736722) Pool0 N009U50L PAG873 (151736722)
1b.37 PAG873 (151736722) Pool0 9QJ617WA PAG873 (151736722)
4c.55 PAG873 (151736722) Pool0 PAJ5H6BF PAG873 (151736722)
4b.34 PAG873 (151736722) Pool0 MS0XJ8LK PAG873 (151736722)
4c.51 PAG873 (151736722) Pool0 PAJM7ZWE PAG873 (151736722)
4b.59 PAG874 (151736299) Pool0 J8103R1L PAG874 (151736299)
4c.35 PAG873 (151736722) Pool0 PBHUXRKF PAG873 (151736722)
1a.22 PAG873 (151736722) Pool0 MS2ZHR6F PAG873 (151736722)
1a.77 PAG874 (151736299) Pool0 9QJ3WLXW PAG874 (151736299)

 

Please, what should I do to fix the problem?

 

Thank you.

 

LUN resize isn't always reported to iSCSI clients


I'm trying to debug something related to iSCSI LUN resizing.

In our setup we use Debian Linux 8 (jessie) with Open-iSCSI and multipath-tools connected to a FAS2552 (ONTAP 8.3.2RC2) device.
When we resized our LUN from 6 TiB to 14 TiB yesterday, some of the paths got into a weird state.

 

We noticed that the iSCSI clients (the Debian machines) detect the resize:

 

kernel: [5606835.845712] sd 1:0:0:0: Capacity data has changed
kernel: [5606842.329394] sd 2:0:0:1: Capacity data has changed

This is detected because the FAS2552 device is reporting this via iSCSI ("2A 09 CAPACITY DATA HAS CHANGED").

For some reason we got in this situation:

 

$ multipath -l
...
size=14T features='4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handle' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| `- 3:0:0:0 sdc 8:32 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
  `- #:#:#:# -   #:#  active undef running

I'm trying to reproduce this issue by creating a new 50G LUN and resizing it, but the "Capacity data has changed" log line doesn't appear in our logs when we execute the resize.

 

In which cases does a FAS2552 report a LUN resize exactly? Or, in other words, why doesn't it report the resize on my second LUN as it did on my first yesterday?
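For completeness, the manual rescan we fall back to on the Debian side when the unit attention doesn't arrive (the multipath map name is a placeholder for our setup):

iscsiadm -m session -R
multipathd -k"resize map mpatha"

iscsiadm -m session -R rescans all logged-in sessions so the sd devices pick up the new capacity, and the multipathd command then resizes the multipath map on top of them.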

Best practice? 2 HA-Pair Load Balancing

$
0
0

G'day all

 

Questions relating to;

 

cdot-8.3.1p2

4 node cluster

2 x V6240 (high utilisation) 153%

2 x V3250

 

 

Currently having some issues with load balancing and node over-utilisation; looking at OCUM and monitoring statistics at the CLI, it looks like it is CPU/IOPS related.

 

Question: in your experience, what is the preferred/best option or configuration for load balancing both your individual HA pairs and the cluster as a whole?

 

I have done a lot of reading on DNS load balancing and LIF balancing; however, I believe I'm looking for a different method.

 

So currently I have high utilisation on my first pair (V6240) of nodes; I'm thinking at this stage I should just migrate the high-IOPS volumes to the under-utilised second pair (V3250) and balance the IOPS out across the 4 nodes more evenly.
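If it helps, the move itself would just be non-disruptive volume moves, something like (volume and aggregate names are placeholders):

volume move start -vserver svm1 -volume busy_vol01 -destination-aggregate aggr_v3250_01
volume move show

plus something like statistics show-periodic or qos statistics volume performance show to pick out which volumes are actually driving the IOPS.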

 

So, from your experience, does this sound like it should sort my issue, or am I way off?

 

Any suggestions would be appreciated thanks

 

 

 

Statement of Volatility

Changing LIF current port


I need to modify the home port of a LIF. The current port is e0M but it should be a0a-25. Is it best to recreate the LIF, or can it be changed in place?
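A minimal sketch of the in-place change, assuming a0a-25 is already in the right broadcast domain / failover group (vserver and LIF names are placeholders):

network interface modify -vserver svm1 -lif lif1 -home-node node-01 -home-port a0a-25
network interface revert -vserver svm1 -lif lif1

The revert moves the LIF onto its new home port, so there should be no need to delete and recreate it.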
