
New Cluster Nodes Not Showing in NetApp DSM for MPIO

I've just added two new nodes to our production cluster. I've created two new iSCSI LIFs on those nodes and added the LIFs to the port group. I've used SnapDrive to add connections to the two new iSCSI LIFs on the first of many Windows hosts. After doing this, I checked the Data ONTAP DSM for MPIO software to be sure it was showing four paths (one per node) per LUN. It is not; it is still showing just two paths. Any ideas as to what I am missing?
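
If this cluster is running clustered Data ONTAP 8.3 or later, Selective LUN Map may be restricting the reporting nodes to each LUN's original HA pair, so the host never sees paths through the new nodes. A hedged sketch of what to check and change (the SVM, LUN path, igroup, and node names below are placeholders):

    lun mapping show -vserver <svm> -path /vol/<volume>/<lun> -fields reporting-nodes
    lun portset show -vserver <svm>
    lun mapping add-reporting-nodes -vserver <svm> -path /vol/<volume>/<lun> -igroup <igroup> -nodes <new-node-1>,<new-node-2>

Once the new nodes are added as reporting nodes, a rescan from SnapDrive/MPIO on the host should expose the additional paths.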


LIF warning

Hello all 

 

Let me explain the configuration. This is a stretch MetroCluster (MCC), all working with iSCSI connections, and when I run "metrocluster check run" I get one warning:

 

Component           Result
------------------- ---------
nodes               ok
lifs                warning
config-replication  ok
aggregates          ok
clusters            ok
5 entries were displayed.

 

LIF placement Failure Type: There are no ports on node <nodename> in the destination cluster that have the adapter type "NIC" in IPspace "Default" required to host the secondary Vserver <vservername> iSCSI LIF <lif_name>.

 

There are no ports on node <nodename> in the destination cluster that have the adapter type "CNA" in IPspace "Default" required to host the secondary Vserver <vservername> iSCSI LIF <lif_name>.

 

And that is because in site A we make the iSCSI connections through nic and cna ports and in site B the opposite:

 

SAN A nic -------- cna SAN A

SAN B cna ------- nic. SAN B

 

I was searching and found this KB: https://kb.netapp.com/support/s/article/metrocluster-lif-is-not-using-the-expected-interface-on-the-partner-cluster?language=en_US

 

That is very similar but not quite the same case... So my question: is this a supported configuration, and is the warning only "cosmetic"?
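
For what it's worth, a hedged way to dig into the detail behind that warning, and to ask ONTAP to retry the placement, might be (vserver and LIF names are placeholders):

    metrocluster check lif show
    metrocluster check lif repair-placement -vserver <vservername> -lif <lif_name>

If the warning persists because the adapter types genuinely differ between the sites, that is the point at which to confirm with NetApp support whether it can be treated as cosmetic.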

 

Many thanks in advance.

 

Kind regards, 

 

J. Salas.

 

 

VTL700 Nearstore Disk sanitization procedure

Does anyone here have experience zeroing disks in a VTL700? I realize it is an older product (EOL/EOS), but I couldn't find any info anywhere. The licensed sanitization feature available on Data ONTAP hardware is not available here.

 

 

Thanks.

Can't shrink a volume that is 7 TB bigger than the LUN that's on it

We have come up against the 16 TB size limit for a LUN in ONTAP 8.3.1. The LUN is on a volume that has been grown to 23.04 TB, yet the LUN is only 16 TB. There are no other LUNs on the volume, yet the volume is 100% full. What is using this extra space on the volume? What is "Reserved Space for Overwrites", and why is it 6.45 TB?

 

There are no snapshots on the volume, and the snapshot reserve is 5%.

 

How can we reclaim this 7+TB of space on the volume?

 

lun show:

/vol/VMDK_01/VMDK_01            online  mapped   vmware    15.97TB

vol show:

VMDK_01 netapp_clr301_01_aggr1
    online RW 23.04TB 0B 100%

vol show detail:

netapp-clr301::> vol show -vserver netapp-iscsi301 -volume VMDK_01

                                   Vserver Name: netapp-iscsi301
                                    Volume Name: VMDK_01
                                 Aggregate Name: netapp_clr301_01_aggr1
                                    Volume Size: 23.04TB
                             Volume Data Set ID: 1100
                      Volume Master Data Set ID: 2147484748
                                   Volume State: online
                                    Volume Type: RW
                                   Volume Style: flex
                         Is Cluster-Mode Volume: true
                          Is Constituent Volume: false
                                  Export Policy: default
                                        User ID: 0
                                       Group ID: 0
                                 Security Style: unix
                               UNIX Permissions: ---rwxr-xr-x
                                  Junction Path: -
                           Junction Path Source: -
                                Junction Active: -
                         Junction Parent Volume: -
                                        Comment:
                                 Available Size: 0B
                                Filesystem Size: 23.04TB
                        Total User-Visible Size: 21.89TB
                                      Used Size: 21.89TB
                                Used Percentage: 100%
           Volume Nearly Full Threshold Percent: 95%
                  Volume Full Threshold Percent: 98%
           Maximum Autosize (for flexvols only): 30TB
(DEPRECATED)-Autosize Increment (for flexvols only): 1GB
                               Minimum Autosize: 23.04TB
             Autosize Grow Threshold Percentage: 98%
           Autosize Shrink Threshold Percentage: 50%
                                  Autosize Mode: off
           Autosize Enabled (for flexvols only): false
            Total Files (for user-visible data): 31876689
             Files Used (for user-visible data): 101
                          Space Guarantee Style: volume
                      Space Guarantee in Effect: true
              Snapshot Directory Access Enabled: true
             Space Reserved for Snapshot Copies: 5%
                          Snapshot Reserve Used: 0%
                                Snapshot Policy: none
                                  Creation Time: Fri May 20 14:22:19 2016
                                       Language: C.UTF-8
                                   Clone Volume: false
                                      Node name: netapp-clr301-01
                                  NVFAIL Option: on
                          Volume's NVFAIL State: false
        Force NVFAIL on MetroCluster Switchover: off
                      Is File System Size Fixed: false
                                  Extent Option: off
                  Reserved Space for Overwrites: 6.45TB
                             Fractional Reserve: 100%
              Primary Space Management Strategy: volume_grow
                       Read Reallocation Option: off
               Inconsistency in the File System: false
                   Is Volume Quiesced (On-Disk): false
                 Is Volume Quiesced (In-Memory): false
      Volume Contains Shared or Compressed Data: true
              Space Saved by Storage Efficiency: 660.4GB
         Percentage Saved by Storage Efficiency: 3%
                   Space Saved by Deduplication: 660.4GB
              Percentage Saved by Deduplication: 3%
                  Space Shared by Deduplication: 72.02GB
                     Space Saved by Compression: 0B
          Percentage Space Saved by Compression: 0%
            Volume Size Used by Snapshot Copies: 0B
                                     Block Type: 64-bit
                               Is Volume Moving: false
                 Flash Pool Caching Eligibility: read-write
  Flash Pool Write Caching Ineligibility Reason: -
                     Managed By Storage Service: -
Create Namespace Mirror Constituents For SnapDiff Use: -
                        Constituent Volume Role: -
                          QoS Policy Group Name: -
                            Caching Policy Name: -
                Is Volume Move in Cutover Phase: false
        Number of Snapshot Copies in the Volume: 0
VBN_BAD may be present in the active filesystem: false
                Is Volume on a hybrid aggregate: false
                       Total Physical Used Size: 5.87TB
                       Physical Used Percentage: 25%
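
For context, the 6.45 TB "Reserved Space for Overwrites" comes from Fractional Reserve being 100% on a space-reserved (thick) LUN, and the 5% snapshot reserve accounts for the remaining difference of about 1.15 TB between the 23.04 TB volume and the 21.89 TB of user-visible space. A hedged sketch of how that space might be released, assuming you are comfortable monitoring free space yourself rather than relying on the overwrite reserve (test and verify in your own environment first):

    volume modify -vserver netapp-iscsi301 -volume VMDK_01 -fractional-reserve 0
    volume modify -vserver netapp-iscsi301 -volume VMDK_01 -percent-snapshot-space 0
    volume size -vserver netapp-iscsi301 -volume VMDK_01 -new-size 17t

The shrink in the last step is only an illustrative target; the volume still needs to stay comfortably larger than the 16 TB LUN.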

  

SRM Path and its license information

Hi All,

 

We have NetApp OnCommand Manager version 6.4P1 and would like to have SRM configured. Is the option available by default, or do we need to purchase a license?

 

We would also like to know how the license is charged: based upon capacity, or on the number of shares accessed in the path?

 

Also, what type of information does this SRM path give us? CIFS and NFS details including the number of folders, subfolders, and files, the size of each, the owner of each file/subfolder/folder, and what else?

 

Please help me with this information.

 

Thanks,

Bhuvan

How is SMBR for Microsoft Exchange Server licensed on a two-controller array with the Premium Bundle?

Hello,

 

Could you tell me how SMBR for Microsoft Exchange Server is licensed on a two-controller array with the Premium Bundle?

 

Look at the screenshot:

 

[Screenshot: 2016-10-12 10_54_43-13916542_16-Sep-16_10-41-28_2016-09-16-FAS8020HA 24x1200GB Premium Bundle 5y sup.jpg]

 

We have SMBR here (2,500 mailboxes) and two controllers. Does that mean I can use 2 × 2,500 = 5,000 mailboxes on the array?

 

Or is it 2,500 mailboxes for the whole two-controller array?

 

Thanks in advance

FAS2552 with DS4243 Shelf - ADP

Hello Everyone 

 

I just converted my FAS2552 to cDOT and am in the process of upgrading to 8.3. I would also like to use ADP to make efficient use of the SAS drives on the internal shelf. I will be following this KB for the ADP: https://library.netapp.com/ecmdocs/ECMP1636022/html/GUID-CFB9643B-36DB-4F31-95D4-29EDE648807D.html

 

My question is: what will happen to the disks in my external disk shelf? Will they be partitioned too, or is ADP only for the internal shelf? Of course I don't want my external shelf disks to be partitioned, because that would be a waste of space.
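
For what it's worth, on entry systems like the FAS2552, root-data partitioning normally applies only to the internal drives, and drives in external shelves are left whole. A hedged way to verify this after the conversion is to look at the container type of each disk (partitioned disks show up as "shared"):

    storage disk show -fields container-type,owner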

 

Anthony

What could possibly happen if I lost the /etc/resolv.conf file from the system in 7-Mode?

What could possibly happen if I lost the /etc/resolv.conf file from the system in 7-Mode?

And what if I disable DNS on a cDOT cluster?

 

I am using NIS, and all the external servers (for example, NTP) are configured by their IPs, not by hostname.
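
On the cDOT side, a hedged way to review what would actually be affected before disabling DNS is to check the DNS configuration and the name-service switch order per SVM (commands as in ONTAP 8.3 and later; the vserver name is a placeholder):

    vserver services name-service dns show
    vserver services name-service ns-switch show -vserver <vserver>

If NIS and IP addresses cover everything, the main risk is usually any feature that insists on resolving hostnames (for example CIFS/Active Directory, if used).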

 

Regards,

 

Arsalan 


why Aggregate and Volume IOPS from CLI do not match?

This is the CLI output from our device. I would assume that the total ops on an aggregate should be the sum of its related volumes' ops.

The same goes for read ops and write ops.

 

Thanks!

                                                               *Total Read Write 

                      Aggregate               Node    Ops  Ops   Ops 

------------------------------- ------------------ ------ ---- ----- 

aggr0_dataontap_vsim_cluster_02 dataontap-vsim-cm2     23   20     1 

aggr0_dataontap_vsim_cluster_01 dataontap-vsim-cm1     20   20     0 

              molsi_test_aggr_c dataontap-vsim-cm1      0    0     0 

                          aggr2 dataontap-vsim-cm2      0    0     0 

                          aggr1 dataontap-vsim-cm1      0    0     0 

 

 

dataontap-vsim-cluster::statistics> volume show   

 

dataontap-vsim-cluster : 10/11/2016 20:21:54

                                                           

                             *Total  Read  Write  Other   Read  Write  Latency
Volume            Vserver       Ops   Ops    Ops    Ops  (Bps)  (Bps)     (us)
----------------- --------- ------- ----- ------ ------ ------ ------ --------
vol0              -             415    28     78    309  23424  37677      626
molsi_iscsi_volc  vserver1       30     2      0     27   1280    128       48
ESXDC7vol1        vserver1       27     2      0     25   1024      0       23
NetApp_HyperV     vserver1       16     2      0     14   1024      0       17
molsi_nfs_volc    vserver2       15     2      0     13   1024     42       89
vserver2_root     vserver2        9     2      0      7   1024      0       16
vserver1_root     vserver1        9     2      0      7   1024      0       16
molsi_vol1        vserver1        9     2      0      7   1024      0       16
vol0              -                7     1      0      5    719    468      174

 

 

Moving Volumes with LUNs used by SQL through iSCSI

All, thanks in advance for any assistance on this. Hopefully some others have had success doing this.

 

I recently purchased a new AFF and am going to be moving my SQL load over to it. We have volumes containing LUNs that are presented to the Windows Clustering Service through iSCSI, and SQL accesses them that way. On the Windows servers we have MPIO and SnapDrive set up so ALUA can identify the best path to access the LUNs. Also, by the time we do this upgrade we will be on clustered Data ONTAP 9.0.

 

I want to move the volumes to the new AFF, but I can't have any downtime, so I'm seeking some assistance on the best course of action. I'll add the new AFF to the existing cluster and create the iSCSI LIFs as needed for the nodes. I'll also ensure that the new nodes are added to the SVM that contains the volumes. From there I'm a bit lost on what can/should happen. Do I just perform a vol move from the original nodes to the new AFF? If so, how can I guarantee that there won't be any downtime? If it works automatically, does anyone know how quickly ALUA will kick in and start using the new path? I have 20-ish volumes to move, so should I move them one at a time, giving plenty of time in between for ALUA or MPIO to figure out the new path?
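
For reference, a hedged sketch of the sequence under those assumptions, per volume (the SVM, volume, LUN, igroup, aggregate, and node names are placeholders): add the new AFF nodes as reporting nodes for each mapped LUN first, move the volume, let the host rescan, and only then trim the old reporting nodes.

    lun mapping add-reporting-nodes -vserver <svm> -path /vol/<vol>/<lun> -igroup <igroup> -nodes <aff-node-1>,<aff-node-2>
    volume move start -vserver <svm> -volume <vol> -destination-aggregate <aff_aggr>
    volume move show -vserver <svm> -volume <vol>
    lun mapping remove-reporting-nodes -vserver <svm> -path /vol/<vol>/<lun> -igroup <igroup> -remote-nodes true

Moving one volume at a time and rescanning in SnapDrive/MPIO between moves, as you suggest, seems a reasonable way to let ALUA settle on the optimized paths before the next move.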

 

Thanks in advance, any user stories or experiences would be great!

change root path of vfiler

Hi,

 

I have this problem with vFilers. All the vFilers in our environment have their root path in qtrees. Is there a way to migrate the root path of a vFiler from a qtree to a volume?

/vol/org/qt  ->  /vol/new
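
As far as I know there is no command that re-roots a vFiler in place; the usual hedged approach is to destroy the vFiler (which leaves the data intact, with the resources returning to vfiler0) and recreate it with the new root path listed first, then re-apply or copy the /etc configuration into the new root. A sketch only, with placeholder name and IP, to be rehearsed on a test vFiler first:

    vfiler stop <vfilername>
    vfiler destroy -f <vfilername>
    vfiler create <vfilername> -i <ip-address> /vol/new /vol/org/qt

Expect a short outage for the vFiler's clients while this is done.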

 

Regards,

DJeff

Powershell Toolkit Permissions issue

I'm having an issue with a script I've written to provision volumes from an Import-Csv command. Logging in to the filers is no problem, but at the point I run the New-NaVol cmdlet I get the following:

 

New-NaVol : Incorrect credentials for x.x.x.x.
At line:1 char:1
+ New-NaVol -Controller x.x.x.x -Name testVol -Aggregate aggr2_sata -SpaceR ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (172.31.100.232:NaController) [New-NaVol], NaAuthException
+ FullyQualifiedErrorId : ApiException,DataONTAP.PowerShell.SDK.Cmdlets.Volume.NewNaVol

 

I have tried this as root and as my admin user, and also tried creating a new role with "volume-*" capabilities, but I still get the above error. It looks like a permissions error, but other than that I'm stuck. Can anyone advise?

 

Running PowerShell 4.0, Toolkit 4.1, FAS8020 with Data ONTAP 8.2.1 7-Mode
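
For what it's worth, New-NaVol talks to the filer over ONTAPI (ZAPI) via HTTP/HTTPS, so the account needs API capabilities in addition to console ones, and "Incorrect credentials" is often how a missing capability surfaces. A hedged 7-Mode sketch (role, group, and user names are placeholders):

    useradmin role add psrole -a login-http-admin,api-volume-*,api-system-get-*
    useradmin group add psgroup -r psrole
    useradmin user add psuser -g psgroup

It may also be worth confirming that options httpd.admin.enable is on (or tls.enable if you connect over HTTPS).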

 

Andy

Is there a way to get the currently connected controller?

Hi,

 

Clustered OnTAP 8.3GA

NFS v3

 

Is there a way to list all NFS clients currently connected to an NFS export, assuming TCP is used?
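
There is no dedicated connected-clients view in 8.3 that I know of, but since NFSv3 over TCP keeps a connection open, a hedged approximation is to list the active connections against the NFS-related services (advanced privilege may be required):

    set -privilege advanced
    network connections active show -node * -service nfs*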

 

Thanks,

 

 

Remove old restricted aggregate

Hello,

 

It seems that I have an "old aggregate" that is restricted, and I can't find a way to delete it. This is inconvenient because I can't use the disks. Moreover, the aggregate is visible only from one node. Any idea how I can delete this old aggregate?

 

netapp11::> system node run -node * sysconfig -r
2 entries were acted on.

Node: netapp11-01
Aggregate aggr0 (restricted, raid_dp) (block checksums)
Plex /aggr0/plex0 (online, normal, active, pool0)
RAID group /aggr0/plex0/rg0 (normal, block checksums)

RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity 0b.01.6 0b 1 6 SA:B 0 FSAS 7200 5614621/11498743808 5625872/11521787400
parity 0a.00.6 0a 0 6 SA:A 0 FSAS 7200 5614621/11498743808 5625872/11521787400
data 0b.01.4 0b 1 4 SA:B 0 FSAS 7200 5614621/11498743808 5625872/11521787400

Aggregate aggr0_netapp11_01_0 (online, raid_dp) (block checksums)
Plex /aggr0_netapp11_01_0/plex0 (online, normal, active, pool0)
RAID group /aggr0_netapp11_01_0/plex0/rg0 (normal, block checksums)

RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity 0a.00.0 0a 0 0 SA:A 0 FSAS 7200 5614621/11498743808 5625872/11521787400
parity 0a.00.1 0a 0 1 SA:A 0 FSAS 7200 5614621/11498743808 5625872/11521787400
data 0a.00.2 0a 0 2 SA:A 0 FSAS 7200 5614621/11498743808 5625872/11521787400


Pool1 spare disks (empty)

Pool0 spare disks

RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
Spare disks for block checksum
spare 0a.00.3 0a 0 3 SA:A 0 FSAS 7200 5614621/11498743808 5625872/11521787400
spare 0a.00.4 0a 0 4 SA:A 0 FSAS 7200 5614621/11498743808 5625872/11521787400
spare 0a.00.5 0a 0 5 SA:A 0 FSAS 7200 5614621/11498743808 5625872/11521787400
spare 0a.00.7 0a 0 7 SA:A 0 FSAS 7200 5614621/11498743808 5625872/11521787400
spare 0a.00.8 0a 0 8 SA:A 0 FSAS 7200 5614621/11498743808 5625872/11521787400
spare 0a.00.9 0a 0 9 SA:A 0 FSAS 7200 5614621/11498743808 5625872/11521787400
spare 0a.00.10 0a 0 10 SA:A 0 FSAS 7200 5614621/11498743808 5625872/11521787400
spare 0a.00.11 0a 0 11 SA:A 0 FSAS 7200 5614621/11498743808 5625872/11521787400
spare 0a.00.12 0a 0 12 SA:A 0 FSAS 7200 5614621/11498743808 5625872/11521787400
spare 0a.00.13 0a 0 13 SA:A 0 FSAS 7200 5614621/11498743808 5625872/11521787400
spare 0a.00.14 0a 0 14 SA:A 0 FSAS 7200 5614621/11498743808 5625872/11521787400
spare 0a.00.15 0a 0 15 SA:A 0 FSAS 7200 5614621/11498743808 5625872/11521787400
spare 0a.00.16 0a 0 16 SA:A 0 FSAS 7200 5614621/11498743808 5625872/11521787400
spare 0a.00.17 0a 0 17 SA:A 0 FSAS 7200 5614621/11498743808 5625872/11521787400
spare 0a.00.18 0a 0 18 SA:A 0 FSAS 7200 5614621/11498743808 5625872/11521787400
spare 0a.00.19 0a 0 19 SA:A 0 FSAS 7200 5614621/11498743808 5625872/11521787400
spare 0a.00.20 0a 0 20 SA:A 0 FSAS 7200 5614621/11498743808 5625872/11521787400
spare 0a.00.21 0a 0 21 SA:A 0 FSAS 7200 5614621/11498743808 5625872/11521787400
spare 0a.00.22 0a 0 22 SA:A 0 FSAS 7200 5614621/11498743808 5625872/11521787400
spare 0a.00.23 0a 0 23 SA:A 0 FSAS 7200 5614621/11498743808 5625872/11521787400
spare 0b.01.3 0b 1 3 SA:B 0 FSAS 7200 5614621/11498743808 5625872/11521787400
spare 0b.01.5 0b 1 5 SA:B 0 FSAS 7200 5614621/11498743808 5625872/11521787400
spare 0b.01.7 0b 1 7 SA:B 0 FSAS 7200 5614621/11498743808 5625872/11521787400
spare 0b.01.8 0b 1 8 SA:B 0 FSAS 7200 5614621/11498743808 5625872/11521787400
spare 0b.01.9 0b 1 9 SA:B 0 FSAS 7200 5614621/11498743808 5625872/11521787400
spare 0b.01.10 0b 1 10 SA:B 0 FSAS 7200 5614621/11498743808 5625872/11521787400
spare 0b.01.11 0b 1 11 SA:B 0 FSAS 7200 5614621/11498743808 5625872/11521787400
spare 0b.01.12 0b 1 12 SA:B 0 FSAS 7200 5614621/11498743808 5625872/11521787400
spare 0b.01.13 0b 1 13 SA:B 0 FSAS 7200 5614621/11498743808 5625872/11521787400
spare 0b.01.14 0b 1 14 SA:B 0 FSAS 7200 5614621/11498743808 5625872/11521787400
spare 0b.01.15 0b 1 15 SA:B 0 FSAS 7200 5614621/11498743808 5625872/11521787400
spare 0b.01.16 0b 1 16 SA:B 0 FSAS 7200 5614621/11498743808 5625872/11521787400
spare 0b.01.17 0b 1 17 SA:B 0 FSAS 7200 5614621/11498743808 5625872/11521787400
spare 0b.01.18 0b 1 18 SA:B 0 FSAS 7200 5614621/11498743808 5625872/11521787400
spare 0b.01.19 0b 1 19 SA:B 0 FSAS 7200 5614621/11498743808 5625872/11521787400
spare 0b.01.20 0b 1 20 SA:B 0 FSAS 7200 5614621/11498743808 5625872/11521787400
spare 0b.01.21 0b 1 21 SA:B 0 FSAS 7200 5614621/11498743808 5625872/11521787400
spare 0b.01.22 0b 1 22 SA:B 0 FSAS 7200 5614621/11498743808 5625872/11521787400
spare 0b.01.23 0b 1 23 SA:B 0 FSAS 7200 5614621/11498743808 5625872/11521787400

Partner disks

RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
partner 0b.01.2 0b 1 2 SA:B 0 FSAS 7200 0/0 5625872/11521787400
partner 0b.01.0 0b 1 0 SA:B 0 FSAS 7200 0/0 5625872/11521787400
partner 0b.01.1 0b 1 1 SA:B 0 FSAS 7200 0/0 5625872/11521787400

Node: netapp11-02
Aggregate aggr0_netapp11_02_0 (online, raid_dp) (block checksums)
Plex /aggr0_netapp11_02_0/plex0 (online, normal, active, pool0)
RAID group /aggr0_netapp11_02_0/plex0/rg0 (normal, block checksums)

RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity 0b.01.0 0b 1 0 SA:A 0 FSAS 7200 5614621/11498743808 5625872/11521787400
parity 0b.01.1 0b 1 1 SA:A 0 FSAS 7200 5614621/11498743808 5625872/11521787400
data 0b.01.2 0b 1 2 SA:A 0 FSAS 7200 5614621/11498743808 5625872/11521787400


Pool1 spare disks (empty)

Pool0 spare disks (empty)

Partner disks

RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
partner 0a.00.10 0a 0 10 SA:B 0 FSAS 7200 0/0 5625872/11521787400
partner 0a.00.11 0a 0 11 SA:B 0 FSAS 7200 0/0 5625872/11521787400
partner 0a.00.23 0a 0 23 SA:B 0 FSAS 7200 0/0 5625872/11521787400
partner 0a.00.21 0a 0 21 SA:B 0 FSAS 7200 0/0 5625872/11521787400
partner 0a.00.7 0a 0 7 SA:B 0 FSAS 7200 0/0 5625872/11521787400
partner 0a.00.18 0a 0 18 SA:B 0 FSAS 7200 0/0 5625872/11521787400
partner 0a.00.16 0a 0 16 SA:B 0 FSAS 7200 0/0 5625872/11521787400
partner 0a.00.20 0a 0 20 SA:B 0 FSAS 7200 0/0 5625872/11521787400
partner 0a.00.12 0a 0 12 SA:B 0 FSAS 7200 0/0 5625872/11521787400
partner 0a.00.8 0a 0 8 SA:B 0 FSAS 7200 0/0 5625872/11521787400
partner 0a.00.5 0a 0 5 SA:B 0 FSAS 7200 0/0 5625872/11521787400
partner 0a.00.9 0a 0 9 SA:B 0 FSAS 7200 0/0 5625872/11521787400
partner 0a.00.14 0a 0 14 SA:B 0 FSAS 7200 0/0 5625872/11521787400
partner 0a.00.17 0a 0 17 SA:B 0 FSAS 7200 0/0 5625872/11521787400
partner 0a.00.19 0a 0 19 SA:B 0 FSAS 7200 0/0 5625872/11521787400
partner 0a.00.22 0a 0 22 SA:B 0 FSAS 7200 0/0 5625872/11521787400
partner 0a.00.13 0a 0 13 SA:B 0 FSAS 7200 0/0 5625872/11521787400
partner 0a.00.15 0a 0 15 SA:B 0 FSAS 7200 0/0 5625872/11521787400
partner 0b.01.20 0b 1 20 SA:A 0 FSAS 7200 0/0 5625872/11521787400
partner 0b.01.22 0b 1 22 SA:A 0 FSAS 7200 0/0 5625872/11521787400
partner 0b.01.10 0b 1 10 SA:A 0 FSAS 7200 0/0 5625872/11521787400
partner 0b.01.4 0b 1 4 SA:A 0 FSAS 7200 0/0 5625872/11521787400
partner 0b.01.12 0b 1 12 SA:A 0 FSAS 7200 0/0 5625872/11521787400
partner 0b.01.8 0b 1 8 SA:A 0 FSAS 7200 0/0 5625872/11521787400
partner 0b.01.14 0b 1 14 SA:A 0 FSAS 7200 0/0 5625872/11521787400
partner 0b.01.16 0b 1 16 SA:A 0 FSAS 7200 0/0 5625872/11521787400
partner 0b.01.5 0b 1 5 SA:A 0 FSAS 7200 0/0 5625872/11521787400
partner 0b.01.18 0b 1 18 SA:A 0 FSAS 7200 0/0 5625872/11521787400
partner 0b.01.13 0b 1 13 SA:A 0 FSAS 7200 0/0 5625872/11521787400
partner 0b.01.6 0b 1 6 SA:A 0 FSAS 7200 0/0 5625872/11521787400
partner 0b.01.17 0b 1 17 SA:A 0 FSAS 7200 0/0 5625872/11521787400
partner 0b.01.15 0b 1 15 SA:A 0 FSAS 7200 0/0 5625872/11521787400
partner 0b.01.21 0b 1 21 SA:A 0 FSAS 7200 0/0 5625872/11521787400
partner 0b.01.11 0b 1 11 SA:A 0 FSAS 7200 0/0 5625872/11521787400
partner 0b.01.7 0b 1 7 SA:A 0 FSAS 7200 0/0 5625872/11521787400
partner 0b.01.9 0b 1 9 SA:A 0 FSAS 7200 0/0 5625872/11521787400
partner 0b.01.19 0b 1 19 SA:A 0 FSAS 7200 0/0 5625872/11521787400
partner 0b.01.3 0b 1 3 SA:A 0 FSAS 7200 0/0 5625872/11521787400
partner 0b.01.23 0b 1 23 SA:A 0 FSAS 7200 0/0 5625872/11521787400
partner 0a.00.4 0a 0 4 SA:B 0 FSAS 7200 0/0 5625872/11521787400
partner 0a.00.6 0a 0 6 SA:B 0 FSAS 7200 0/0 5625872/11521787400
partner 0a.00.3 0a 0 3 SA:B 0 FSAS 7200 0/0 5625872/11521787400
partner 0a.00.1 0a 0 1 SA:B 0 FSAS 7200 0/0 5625872/11521787400
partner 0a.00.2 0a 0 2 SA:B 0 FSAS 7200 0/0 5625872/11521787400
partner 0a.00.0 0a 0 0 SA:B 0 FSAS 7200 0/0 5625872/11521787400

 

 

 

netapp11::> disk show
Usable Disk Container Container
Disk Size Shelf Bay Type Type Name Owner
---------------- ---------- ----- --- ------- ----------- --------- --------
1.0.0 5.35TB 0 0 FSAS aggregate aggr0_netapp11_01_0
netapp11-01
1.0.1 5.35TB 0 1 FSAS aggregate aggr0_netapp11_01_0
netapp11-01
1.0.2 5.35TB 0 2 FSAS aggregate aggr0_netapp11_01_0
netapp11-01
1.0.3 5.35TB 0 3 FSAS spare Pool0 netapp11-01
1.0.4 5.35TB 0 4 FSAS spare Pool0 netapp11-01
1.0.5 5.35TB 0 5 FSAS spare Pool0 netapp11-01
1.0.6 5.35TB 0 6 FSAS aggregate aggr0 netapp11-01
1.0.7 5.35TB 0 7 FSAS spare Pool0 netapp11-01
1.0.8 5.35TB 0 8 FSAS spare Pool0 netapp11-01
1.0.9 5.35TB 0 9 FSAS spare Pool0 netapp11-01
1.0.10 5.35TB 0 10 FSAS spare Pool0 netapp11-01
1.0.11 5.35TB 0 11 FSAS spare Pool0 netapp11-01
1.0.12 5.35TB 0 12 FSAS spare Pool0 netapp11-01
1.0.13 5.35TB 0 13 FSAS spare Pool0 netapp11-01
1.0.14 5.35TB 0 14 FSAS spare Pool0 netapp11-01
1.0.15 5.35TB 0 15 FSAS spare Pool0 netapp11-01
1.0.16 5.35TB 0 16 FSAS spare Pool0 netapp11-01
1.0.17 5.35TB 0 17 FSAS spare Pool0 netapp11-01
1.0.18 5.35TB 0 18 FSAS spare Pool0 netapp11-01
1.0.19 5.35TB 0 19 FSAS spare Pool0 netapp11-01
1.0.20 5.35TB 0 20 FSAS spare Pool0 netapp11-01
1.0.21 5.35TB 0 21 FSAS spare Pool0 netapp11-01
1.0.22 5.35TB 0 22 FSAS spare Pool0 netapp11-01
1.0.23 5.35TB 0 23 FSAS spare Pool0 netapp11-01
1.1.0 5.35TB 1 0 FSAS aggregate aggr0_netapp11_02_0
netapp11-02
1.1.1 5.35TB 1 1 FSAS aggregate aggr0_netapp11_02_0
netapp11-02
1.1.2 5.35TB 1 2 FSAS aggregate aggr0_netapp11_02_0
netapp11-02
1.1.3 5.35TB 1 3 FSAS spare Pool0 netapp11-01
1.1.4 5.35TB 1 4 FSAS aggregate aggr0 netapp11-01
1.1.5 5.35TB 1 5 FSAS spare Pool0 netapp11-01
1.1.6 5.35TB 1 6 FSAS aggregate aggr0 netapp11-01
1.1.7 5.35TB 1 7 FSAS spare Pool0 netapp11-01
1.1.8 5.35TB 1 8 FSAS spare Pool0 netapp11-01
1.1.9 5.35TB 1 9 FSAS spare Pool0 netapp11-01
1.1.10 5.35TB 1 10 FSAS spare Pool0 netapp11-01
1.1.11 5.35TB 1 11 FSAS spare Pool0 netapp11-01
1.1.12 5.35TB 1 12 FSAS spare Pool0 netapp11-01
1.1.13 5.35TB 1 13 FSAS spare Pool0 netapp11-01
1.1.14 5.35TB 1 14 FSAS spare Pool0 netapp11-01
1.1.15 5.35TB 1 15 FSAS spare Pool0 netapp11-01
1.1.16 5.35TB 1 16 FSAS spare Pool0 netapp11-01
1.1.17 5.35TB 1 17 FSAS spare Pool0 netapp11-01
1.1.18 5.35TB 1 18 FSAS spare Pool0 netapp11-01
1.1.19 5.35TB 1 19 FSAS spare Pool0 netapp11-01
1.1.20 5.35TB 1 20 FSAS spare Pool0 netapp11-01
1.1.21 5.35TB 1 21 FSAS spare Pool0 netapp11-01
1.1.22 5.35TB 1 22 FSAS spare Pool0 netapp11-01
1.1.23 5.35TB 1 23 FSAS spare Pool0 netapp11-01
48 entries were displayed.
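
Since the leftover aggr0 is restricted and known only to netapp11-01, one hedged approach is to offline and destroy it from that node's nodeshell and then zero the freed disks; double-check first that you are working on aggr0 and not on aggr0_netapp11_01_0, which holds that node's current root volume:

    system node run -node netapp11-01
    > aggr offline aggr0
    > aggr destroy aggr0
    > exit
    storage disk zerospares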

 

 

BIG THANKS in advance!!!

 

 

ONTAP Select on VSAN

Hi,

 

Assume that a customer has an existing architecture based on a VMware vSAN datastore (that is, a common VMFS datastore).

Is it possible to deploy the OVA of Select on that?

 

Or does the NetApp software expect to find a VMFS datastore built from a RAID on local disks?

My opinion is that it should work; after all (see the picture), the VMDKs of Select have to sit on a VMFS datastore. If it works, is it also supported?


Regards,

 

[Screenshot: select2.jpg]


Role permission for halting only does not work - cDOT

Hi,

 

I'm trying to create a user with the following role permissions:

 

netappcdot823::> security login role show -vserver netappcdot823 -role operators

 

VServer        Role Name  Command/Directory  Query  Access Level
-------------  ---------  -----------------  -----  ------------
netappcdot823  operators  DEFAULT                   none
netappcdot823  operators  system node halt          all

 

The objective is to create a user with the halt capability only, and no additional permissions if possible.

 

When I log in with that user and issue a "system node halt" command, it seems there is a lack of other permissions.

 

netappcdot823::> system node halt

Warning: Are you sure you want to halt node "netappcdot823-01"? {y|n}: y

Error: not authorized for that command

 

Note: I'm doing this on the ONTAP Simulator 8.2.3 cDOT.

 

Changing the "DEFAULT" access level to "all" works, but this is not desired because all other commands are also allowed (acts like an admin user).

 

Any idea?
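
One hedged thing to try: "system node halt" appears to depend on other read-type commands under the hood, so giving the role readonly (rather than none) at DEFAULT, while keeping all only on "system node halt", may be enough without turning the user into an admin:

    security login role modify -vserver netappcdot823 -role operators -cmddirname DEFAULT -access readonly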

 

Thanks!

 

 

Data ONTAP 9.0 NDMP backup reports an error on LISTEN after upgrade


Hi all,

This is a 2-node FAS2240-2 running Data ONTAP 9.0,
just upgraded from ONTAP 8.3RC2.

The volumes under the vserver have CIFS/NFS exports.
The CIFS/NFS volumes can be mounted, and data reads/writes work properly on them.

When I added the ONTAP 9.0 system to an NDMP backup application, browsing worked and I could see all the volumes under the vserver.
But when I performed a backup,
the Data ONTAP 9.0 system reported an error.
Here is the log from the NetApp side:

 

MGMT_RPC::rpc_msg_input: 0x81041b400 here buf 0x0x81048e000 length 140

00000006.0000c7ee 0093d17a Sat Oct 08 2016 09:09:41 +00:00 [kern_ndmpd:info:4052] [39093]  DEBUG: MGMT_RPC::msg: incoming SERVER message 0x81041b400

00000006.0000c7ef 0093d17a Sat Oct 08 2016 09:09:41 +00:00 [kern_ndmpd:info:4052] [39093]  DEBUG: MGMT_RPC::using procedure 6 (procnum 10)

00000006.0000c7f0 0093d17a Sat Oct 08 2016 09:09:41 +00:00 [kern_ndmpd:info:4052] [39093]  DEBUG: MGMT_RPC::buffer_size_check: 0x0 0 vs 100

00000006.0000c7f1 0093d17a Sat Oct 08 2016 09:09:41 +00:00 [kern_ndmpd:info:4052] [39093]  DEBUG: MGMT_RPC::buffer_size_check: growing 0x81041b4b0 from 0 to 1024

00000006.0000c7f2 0093d17a Sat Oct 08 2016 09:09:41 +00:00 [kern_ndmpd:info:4052] [39093]  DEBUG: S<BKUP (ndmp_server) BR2NDMPD_DEBUG_LOG

00000006.0000c7f3 0093d17a Sat Oct 08 2016 09:09:41 +00:00 [kern_ndmpd:info:4052] [39093]  DEBUG:      sess_handle 0xd00000000 seq 0 (0x0)

00000006.0000c7f4 0093d17a Sat Oct 08 2016 09:09:41 +00:00 [kern_ndmpd:info:4052] [39093]  DEBUG:      LOG sev (3) msg (NDMP Vserver Listen: couldn't find any appropriate IP address to listen on)

00000006.0000c7f5 0093d17a Sat Oct 08 2016 09:09:41 +00:00 [kern_ndmpd:info:4052] [39093]  DEBUG: MGMT_RPC::server_reply: 0x81041b400 start

….

 

 

DATA_ABORT pending callback

00000006.0000c844 0093d17a Sat Oct 08 2016 09:09:41 +00:00 [kern_ndmpd:info:4052] [39093]  DEBUG: DATA cb_abort_on_listen_connect_failure: errnum 0: Success or No Error

00000006.0000c845 0093d17a Sat Oct 08 2016 09:09:41 +00:00 [kern_ndmpd:info:4052] [39093]  DEBUG: DATA set_ndmp_state: data_op 1 (DATA_OP_IDLE) => 1 (DATA_OP_IDLE)

00000006.0000c846 0093d17a Sat Oct 08 2016 09:09:41 +00:00 [kern_ndmpd:info:4052] [39093]  DEBUG: cb_ndmp4_data_listen: here

00000006.0000c847 0093d17a Sat Oct 08 2016 09:09:41 +00:00 [kern_ndmpd:info:4052] [39093]  DEBUG: generic_xmit_ndmp_message: start

00000006.0000c848 0093d17a Sat Oct 08 2016 09:09:41 +00:00 [kern_ndmpd:info:4052] [39093]  DEBUG: DMA<<S V4 sequence=11 (0xb)

00000006.0000c849 0093d17a Sat Oct 08 2016 09:09:41 +00:00 [kern_ndmpd:info:4052] [39093]  DEBUG:      Time_stamp=0x57f8b7d5 (Oct  8 09:09:41 2016)

00000006.0000c84a 0093d17a Sat Oct 08 2016 09:09:41 +00:00 [kern_ndmpd:info:4052] [39093]  DEBUG:      message type=1 (NDMP4_MESSAGE_REPLY)

00000006.0000c84b 0093d17a Sat Oct 08 2016 09:09:41 +00:00 [kern_ndmpd:info:4052] [39093]  DEBUG:      message_code=0x409 (NDMP4_DATA_LISTEN)

00000006.0000c84c 0093d17a Sat Oct 08 2016 09:09:41 +00:00 [kern_ndmpd:info:4052] [39093]  DEBUG:      reply_sequence=9 (0x9)

00000006.0000c84d 0093d17a Sat Oct 08 2016 09:09:41 +00:00 [kern_ndmpd:info:4052] [39093]  DEBUG:      error_code=0 (NDMP4_NO_ERR)

00000006.0000c84e 0093d17a Sat Oct 08 2016 09:09:41 +00:00 [kern_ndmpd:info:4052] [39093]  DEBUG:      error=7 (NDMP4_IO_ERR)

00000006.0000c84f 0093d17a Sat Oct 08 2016 09:09:41 +00:00 [kern_ndmpd:info:4052] [39093]  DEBUG:      addr_type=0 (NDMP4_ADDR_LOCAL)

 

I checked all the network interfaces in the CLI:

            Logical                             Status     Network             Current              Current Is
Vserver     Interface                           Admin/Oper Address/Mask        Node                 Port    Home
----------- ----------------------------------- ---------- ------------------- -------------------- ------- ----
Cluster
            clidev-fas2246-cx-01_clus1          up/up      169.254.141.39/16   clidev-fas2246-cx-01 e1a     true
            clidev-fas2246-cx-01_clus2          up/up      169.254.195.115/16  clidev-fas2246-cx-01 e1b     true
            clidev-fas2246-cx-02_clus1          up/up      169.254.247.155/16  clidev-fas2246-cx-02 e1a     true
            clidev-fas2246-cx-02_clus2          up/up      169.254.36.24/16    clidev-fas2246-cx-02 e1b     true
clidev-fas2246-cliqa-vserver1
            clidev-fas2246-cliqa-vserver1-lif1  up/up      10.6.254.201/24     clidev-fas2246-cx-01 e0a     true

 

 

Does anyone know the possible reason for this failure?
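
The line "NDMP Vserver Listen: couldn't find any appropriate IP address to listen on" suggests that, for SVM-scoped NDMP, the data vserver has no LIF that NDMP considers usable for the data connection on the node involved. A hedged set of checks, using the vserver from the output above:

    vserver services ndmp show -vserver clidev-fas2246-cliqa-vserver1
    network interface show -vserver clidev-fas2246-cliqa-vserver1 -fields role,home-node,curr-node,address

If the vserver really has only the single data LIF on node 01 while the backup touches volumes on node 02, adding a data LIF per node (or enabling Cluster Aware Backup in the backup application) may resolve the listen failure.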

Thanks.

 

 

 

Clustered Data ONTAP Management Pack for System Center OpsMgr 2012

Dear Community,

 

we have installed the OnCommand Plug-in for System Center Operations Manager 2012 (v4.2) with the following Management Packs:

- OnCommand Clustered Data ONTAP (v4.1.1.7474)

- OnCommand Clustered Data ONTAP MetroCluster (v4.1.1.7474)

- OnCommand Clustered Data ONTAP MetroCluster Reports (v4.1.1.7474)

- OnCommand Clustered Data ONTAP Reports (v4.1.1.7474)

- OnCommand Data ONTAP (v4.1.1.7300)

- OnCommand Data ONTAP Reports (v4.1.1.7300)

- OnCommand Data ONTAP Shared Library (v4.1.1.7474)

- OnCommand Data ONTAP Virtualization (v4.1.1.7300)

- OnCommand Data ONTAP Virtualization Reports (v4.1.1.7300)

 

The Management Pack for NetApp controllers in 7-Mode works just fine, but we cannot monitor cDOT NetApp clusters.

It seems that the discovery is working, but the Monitoring.ps1 and GetPerformanceSample.ps1 always get dropped after a certain time (Warning Alert in SCOM).

Also no Volume Free Space Monitors submit any alerts!

 

The run as config looks like this:

 

Domain user with scom admin rights, also has a Profile on all Management Servers.

Distribution of RunAsAccount - Clustered Data ONTAP: Management Server Resource Pool

Target of RunAsProfile - Clustered Data ONTAP Group

 

I'm a little helpless here...

 

Hopefully someone can help me figure out the main problem.

 

Thank you in advance!

 

greetings Matthias

 

File Clone Parent Size

Hi - still very much a n00b to NetApp. I have used FlexClone on individual VMDK files for cloning dev VMs, and I would like to know if I can see how the changes on the children are affecting the size of the parent. I have looked for a command to list all of the file clones on a volume but have come up empty.

Block some files related to CryptoLocker from ONTAP shares

Hi

IHAC (I have a customer) who wants to block some files related to CryptoLocker from ONTAP shares. It seems he can do that with native FPolicy, but they would like to be notified as well. Does anyone know if it's possible to find the block info in any log files, and whether it can be forwarded in any way?
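
For reference, a hedged cDOT sketch of native-engine FPolicy blocking by extension (the vserver name, event/policy names, and extension list are placeholders to adapt):

    vserver fpolicy policy event create -vserver <svm> -event-name crypto_evt -protocol cifs -file-operations create,rename
    vserver fpolicy policy create -vserver <svm> -policy-name block_crypto -events crypto_evt -engine native -is-mandatory true
    vserver fpolicy policy scope create -vserver <svm> -policy-name block_crypto -shares-to-include "*" -file-extensions-to-include locky,zepto,micro
    vserver fpolicy enable -vserver <svm> -policy-name block_crypto -sequence-number 1

As for notification, the native engine blocks without calling out to anything, so per-block alerting would most likely need an external FPolicy server; whether anything usable lands in EMS is exactly the open question here.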

Rgs Erik
