ONTAP Discussions topics

Snapmirror in CDOT


Hello All

 

I know that SnapMirror is different in cDOT, and since our NetApp expert who usually handles these has moved on to a different company, I now have to do them. But alas, here is my issue.

 

SVM-QA-02 has two volumes on different aggregates: HMG_DB and HMG_DG.

 

I need to SnapMirror from DB to DG. Both volumes are online and in use, but the Oracle DBA will stop the listeners on the Data Guard side to allow me to mirror to the DG volume. Based on my 7-Mode knowledge, I think the steps are as follows (a consolidated command sketch follows the list):

 

1. unmount DG volume from server

2. unmount from namespace

3. put DG volume in restricted mode

4. run on cluster snapmirror create -source-path svm-qa-02:hmg_db -destination-path svm-qa-02:hmg_dg -schedule 15_minute

5. run on cluster snapmirror initialize svm-qa-02:hmg_dg

6. mount in namespace

7. let snapmirror run every hour at :15 until DBA is ready for final sync/break

8. final sync/break and then mount on server again
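
Put together, the cluster-shell sequence I have in mind looks roughly like this. It is only a sketch from my reading of the cDOT docs: recreating the DG volume as a data-protection (DP) volume instead of the 7-Mode "restricted" step, the aggregate/size placeholders, and the junction path are all my assumptions, so please correct anything that is off.

# sketch only - SVM/volume names as above; <dg_aggr> and <size> are placeholders
# (assumes the 15_minute cron schedule already exists on the cluster)
volume create -vserver svm-qa-02 -volume hmg_dg -aggregate <dg_aggr> -size <size> -type DP
snapmirror create -source-path svm-qa-02:hmg_db -destination-path svm-qa-02:hmg_dg -type DP -schedule 15_minute
snapmirror initialize -destination-path svm-qa-02:hmg_dg
# later, when the DBA is ready for the final sync/break:
snapmirror update -destination-path svm-qa-02:hmg_dg
snapmirror quiesce -destination-path svm-qa-02:hmg_dg
snapmirror break -destination-path svm-qa-02:hmg_dg
volume mount -vserver svm-qa-02 -volume hmg_dg -junction-path /hmg_dg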

 

Is that correct? Can it be done via the web UI?

 

thanks

ed

 


Best way to migrate LUNs from one NetApp array to another with minimal or no downtime


 

Hello,

 

I would like to move LUNs (presented to a Windows 2008 server) from one NetApp FAS to another. We have SQL databases running on them, so I would prefer an online migration if possible. Can you suggest my options?

 

I've used FLI (Foreign LUN Import) to migrate from third-party arrays to NetApp before.
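
To make the question concrete, one option I was considering is a volume-level SnapMirror of the LUN volume to the new FAS and a remap of the LUNs at cutover. This is only a sketch with placeholder names (src_svm, dst_svm, sql_lun_vol, win2008_igroup), and it assumes cDOT on both arrays with cluster and SVM peering already in place:

# sketch - placeholder names; destination volume pre-created as type DP
snapmirror create -source-path src_svm:sql_lun_vol -destination-path dst_svm:sql_lun_vol -type DP
snapmirror initialize -destination-path dst_svm:sql_lun_vol
# at cutover (brief outage while SQL and host I/O are quiesced):
snapmirror update -destination-path dst_svm:sql_lun_vol
snapmirror break -destination-path dst_svm:sql_lun_vol
lun map -vserver dst_svm -path /vol/sql_lun_vol/lun1 -igroup win2008_igroup
# then rescan disks on the Windows host and bring the databases back online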

 

Thanks and appreciate your response.

 

 

Can someone explain what this guy is talking about regarding cluster-mode only doing CIFS & NFS?


This was written in 2011. I wanted to understand more about the history of Data ONTAP, and my google-fu led me to that page. However, the author says this near the end:

 

"In 2003, NetApp acquired Spinnaker Networks, who made appliances running SpinFS, a distributed file system.   Subsequently, NetApp made separate clustered versions of Data ONTAP that were based on SpinFS and had little to do with “real” Data ONTAP.  These include Data ONTAP 7GX (“X” = cluster) and Data ONTAP 8 cluster-mode.  Cluster-mode only supports NAS protocols (NFS, CIFS), it doesn’t do block level protocols (iSCSI, FCP).  People running Cluster-Mode are typically doing lots of file shares or home directories across geographically dispersed filers., and is commonly used in high performance computing clusters."

 

The comments below it run well into 2014, but none of them clear it up for me. I thought cDOT most certainly supports iSCSI and FCP (I admin a 7-Mode NetApp SAN myself and have only read about cDOT), but correct me if I'm wrong.

 

Was there a time in the past when cDOT didn't do iSCSI/FCP? What am I missing here?

 

Here is the page I am talking about.

FlexPod Express implementation guide NVA-0018


Hi,

I am following a great guide for setting up a FlexPod (http://www.netapp.com/us/media/nva-0018-flexpod-express.pdf).

But I get an error when setting up the export policy on the NetApp (FAS2650, ONTAP 9.1RC).

On page 34 it says the following:

 

Run all commands to configure SMB on the Vserver.

1. Secure the default rule for the default export policy and create the FlexPod export policy.

vserver export-policy rule modify -vserver Infra-SVM -policyname default -ruleindex 1 -rorule never -rwrule never -superuser none

 

My filer responds with:

Error: command failed: entry doesn't exist

 

What does that command do? Later in the guide I will set up export-policy rules for all my hosts, but at this point there are no rules to modify.
Do I really need to enter that command?
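
If it helps, this is what I was thinking of trying instead. Since there is no rule to modify yet, creating the restrictive rule might be the equivalent of the guide's modify, but the -clientmatch 0.0.0.0/0 and -protocol any values are my own guesses, not from the guide:

# check whether the default policy has any rules at all
vserver export-policy rule show -vserver Infra-SVM -policyname default
# if it is empty, perhaps create the restrictive rule instead of modifying it
vserver export-policy rule create -vserver Infra-SVM -policyname default -clientmatch 0.0.0.0/0 -ruleindex 1 -rorule never -rwrule never -superuser none -protocol any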

Thanks in advance.

/Pelle

Migrate snapvault primary to new aggregate - without breaking protection manager


Hello,

 

We need to migrate a volume to a new aggregate (as its current aggregate is nearing capacity).

 

How can we do this without losing the Protection Manager SnapVault backup (or, more accurately, restore) capability, and without rebaselining?

 

Regards,

 

Darrin.

/mroot/etc/log no access

Clustered ONTAP 8.3P6


Is anyone facing issues with saving files on the above ONTAP version? Saving Word or Excel files gives a "Document not saved" message and creates a .tmp file in the same location as the original file. I am trying to determine whether it is a NetApp issue or a SAN AV issue.

Snapvault Restore


Hi All

 

Thanks in Advance,

 

This is my first question to the NetApp ONTAP community.

 

In our new environment we have a SnapVault license, and we are going to take daily and weekly snapshots per the schedule for each NFS volume. The volumes are assigned to ESXi hosts.

 

What are the options for restoring individual files, and for a volume-level restore?

 

Note: the customer is not ready to buy any additional licenses, and they are also not willing to install third-party software.
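
To make the question concrete, these are the kinds of operations I am asking about; the SVM/volume paths and snapshot name are only placeholders, and I have not tried any of this yet:

# volume-level restore from the vault back to the primary (placeholder names)
snapmirror restore -source-path vault_svm:nfs_vol01_vault -destination-path prod_svm:nfs_vol01 -source-snapshot daily.2017-01-06_0010
# file-level: since these are NFS volumes, can we simply mount the vault volume
# and copy a single file back out of its .snapshot directory, for example:
cp /mnt/vault_vol/.snapshot/daily.2017-01-06_0010/vm01/vm01.vmx /mnt/nfs_vol01/vm01/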

 

 

thanks and regards

Changalrai.krushi@gmail.com

 


Questions about disk ZCS/BCS


I have several questions about disks.

 

1.

What is the advantage of using 520-byte sectors on a disk rather than 512-byte sectors?
Can I use 512-byte-sector disks and 520-byte-sector disks in the same RAID group?

 

2.
My understanding is that ZCS uses the ninth block for the checksum, and BCS uses the sixty-fourth block for the checksum.
After the RAID group is created, can I say that the RAID group has double checksums: BCS/ZCS plus the checksum disk?

 

3.
How do BCS and ZCS work?

ONTAP Select Deploy utility "Error 500"


I am unable to access the Deploy utility after installing the OVA into vCenter. I can't ping it or access the web UI after the OVA deployment is complete. After logging in via the console, any command gives an Error 500: "Either the server is overloaded or there is an error in the application."

 

Any ideas?

 

Ben

Permanently delete files and ensure they are unrecoverable


Hi,

Due to special requirements to delete files on the filer via NFS mounts and ensure they are not recoverable even at the block level, we are evaluating Linux tools such as srm and shred. My questions are:

1. Is there a better way to delete specific files to meet the requirements and ensure they are unrecoverable?
2. After deleting the files, they can still be recovered via snapshots. Is there a difference between

    a. remove all snapshots first and then delete the files vs.

    b. delete files first and then remove all snapshots?

 

    My concern with deleting the files first is that, since snapshots still reference the data blocks, those blocks may remain intact even after the snapshots are removed (assuming that removing snapshots just frees up the pointers and does not touch the data blocks).
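
For context, this is the kind of per-file overwrite we are testing from an NFS client; the mount point and file name are placeholders:

# GNU shred: 3 random passes, a final zero pass, then unlink the file
shred -v -n 3 -z -u /mnt/netapp_export/projects/confidential_report.xlsx

Part of my question is whether such an overwrite from the client actually rewrites the original blocks on WAFL, or whether new blocks are allocated and the old ones are merely freed.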

Thanks,

FAS8020 Converted to cDOT and Upgraded to ONTAP 9 - Missing e0c and e0d Ports


Hi

 

We have zeroed our FAS8020, converted it to clustered ONTAP, upgraded from 8.2.4 to ONTAP 8.3, and then upgraded to ONTAP 9.1.

 

Previously we didn't have a cluster interconnect, as the system was in 7-Mode. I used the e0a and e0b fibre ports for the interconnect. I want to use e0c and e0d on each node for my data network, but they do not show up in OnCommand System Manager and they do not show when I do an ifconfig -a.
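
For reference, I would have expected the ports to at least be listed by something like the following (the node name is a placeholder for our actual node name):

network port show -node netapp9-01
system node run -node netapp9-01 -command "sysconfig -a"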

 

I can't find anything on the internet about it.

 

Please help.

 

 

Data backup from Backup Exec to the NetApp CIFS network share is hanging at 772 GB


Hi Guys,

 

We are facing issues with our CIFS network share. We are taking Exchange DAG backups from Backup Exec to the CIFS share. The backup from Backup Exec to the NetApp CIFS share is hanging at 772 GB with the error "loading media error". Please help me resolve this issue.

 

Backup Exec version: 2015

 

NetApp FAS 2554

Data ONTAP version: 8.3.1

 

CIFS share details:

 


                                      Vserver: svm-hrm-cifs
                                        Share: dagbackup1
                     CIFS Server NetBIOS Name: SVM-HRM-CIFS
                                         Path: /dagbackup1
                             Share Properties: oplocks
                                               browsable
                                               changenotify
                                               continuously-available
                           Symlink Properties: symlinks
                      File Mode Creation Mask: -
                 Directory Mode Creation Mask: -
                                Share Comment: dag backup volume
                                    Share ACL: Everyone / Full Control
                                               PEANUTS\Rajiv.Gali / Full Control
                                               PEANUTS\tlx.svc.buservice / Full Control
                File Attribute Cache Lifetime: -
                                  Volume Name: dagbackup1
                                Offline Files: manual
                Vscan File-Operations Profile: standard
            Maximum Tree Connections on Share: 4294967295
                   UNIX Group for File Create: -

 

Volume details:

 


                                   Vserver Name: svm-hrm-cifs
                                    Volume Name: dagbackup1
                                 Aggregate Name: aggr2_SATA_USCS2554FAS01
                                    Volume Size: 15TB
                             Volume Data Set ID: 1067
                      Volume Master Data Set ID: 2147484715
                                   Volume State: online
                                    Volume Type: RW
                                   Volume Style: flex
                         Is Cluster-Mode Volume: true
                          Is Constituent Volume: false
                                  Export Policy: default
                                        User ID: -
                                       Group ID: -
                                 Security Style: ntfs
                               UNIX Permissions: ------------
                                  Junction Path: /dagbackup1
                           Junction Path Source: RW_volume
                                Junction Active: true
                         Junction Parent Volume: svm_hrm_cifs_root
                                        Comment:
                                 Available Size: 11.55TB
                                Filesystem Size: 15TB
                        Total User-Visible Size: 15TB
                                      Used Size: 3.45TB
                                Used Percentage: 23%
           Volume Nearly Full Threshold Percent: 95%
                  Volume Full Threshold Percent: 98%
           Maximum Autosize (for flexvols only): 18TB
(DEPRECATED)-Autosize Increment (for flexvols only): 768GB
                               Minimum Autosize: 15TB
             Autosize Grow Threshold Percentage: 98%
           Autosize Shrink Threshold Percentage: 50%
                                  Autosize Mode: off
           Autosize Enabled (for flexvols only): false
            Total Files (for user-visible data): 31876689
             Files Used (for user-visible data): 858
                          Space Guarantee Style: volume
                      Space Guarantee in Effect: true
              Snapshot Directory Access Enabled: true
             Space Reserved for Snapshot Copies: 0%
                          Snapshot Reserve Used: 0%
                                Snapshot Policy: none
                                  Creation Time: Fri Jan 06 09:36:43 2017
                                       Language: C.UTF-8
                                   Clone Volume: false
                                      Node name: USCS2554FAS01
                                  NVFAIL Option: off
                          Volume's NVFAIL State: false
        Force NVFAIL on MetroCluster Switchover: off
                      Is File System Size Fixed: false
                                  Extent Option: off
                  Reserved Space for Overwrites: 0B
                             Fractional Reserve: 0%
              Primary Space Management Strategy: volume_grow
                       Read Reallocation Option: off
               Inconsistency in the File System: false
                   Is Volume Quiesced (On-Disk): false
                 Is Volume Quiesced (In-Memory): false
      Volume Contains Shared or Compressed Data: false
              Space Saved by Storage Efficiency: 0B
         Percentage Saved by Storage Efficiency: 0%
                   Space Saved by Deduplication: 0B
              Percentage Saved by Deduplication: 0%
                  Space Shared by Deduplication: 0B
                     Space Saved by Compression: 0B
          Percentage Space Saved by Compression: 0%
            Volume Size Used by Snapshot Copies: 0B
                                     Block Type: 64-bit
                               Is Volume Moving: false
                 Flash Pool Caching Eligibility: read-write
  Flash Pool Write Caching Ineligibility Reason: -
                     Managed By Storage Service: -
Create Namespace Mirror Constituents For SnapDiff Use: -
                        Constituent Volume Role: -
                          QoS Policy Group Name: -
                            Caching Policy Name: -
                Is Volume Move in Cutover Phase: false
        Number of Snapshot Copies in the Volume: 0
VBN_BAD may be present in the active filesystem: false
                Is Volume on a hybrid aggregate: false
                       Total Physical Used Size: 3.45TB
                       Physical Used Percentage: 23%

 

Thanks in advance.

ONTAP 9.1 OnCommand Port Monitoring


All,

We just installed a new FAS8200 with ONTAP 9.1. When you log into OnCommand System Manager and go to the Dashboard, there is a section named "Alerts and Notifications" showing, in red, that 4 ports are down. These 4 ports are not in use and are not plugged into any switches.

 

My question is: is there a way to stop the alerts for these ports, given that they are unused? I have tried disabling them and also tried setting the ports to -ignore-health-status true; however, this did not stop the alerts from showing a port down.

 

Thanks for any help,

 

-joe

Need help configuring Flash Pool on a FAS8040

$
0
0

Hi Guys,

 

We have a FAS8040, which is our main production system, with 12 x 400 GB SSD drives. The people who previously configured this system created an aggregate with these SSDs, and that SSD aggregate hosts only one volume. I am planning to move this volume from the SSD aggregate to a SAS aggregate, delete the SSD aggregate, and convert two of the SAS aggregates on this system to Flash Pools to improve performance. Please find the configuration below.

 

EDCNETAPPCL01::> disk show -type SSD
                     Usable           Disk    Container   Container
Disk                   Size Shelf Bay Type    Type        Name      Owner
---------------- ---------- ----- --- ------- ----------- --------- --------
1.10.0              372.4GB    10   0 SSD     aggregate   F2_SSD1   EDCFILER02
1.10.1              372.4GB    10   1 SSD     aggregate   F2_SSD1   EDCFILER02
1.10.2              372.4GB    10   2 SSD     aggregate   F2_SSD1   EDCFILER02
1.10.3              372.4GB    10   3 SSD     aggregate   F2_SSD1   EDCFILER02
1.10.4              372.4GB    10   4 SSD     aggregate   F2_SSD1   EDCFILER02
1.10.5              372.4GB    10   5 SSD     aggregate   F2_SSD1   EDCFILER02
1.10.6              372.4GB    10   6 SSD     aggregate   F2_SSD1   EDCFILER02
1.10.7              372.4GB    10   7 SSD     aggregate   F2_SSD1   EDCFILER02
1.10.8              372.4GB    10   8 SSD     aggregate   F2_SSD1   EDCFILER02
1.10.9              372.4GB    10   9 SSD     aggregate   F2_SSD1   EDCFILER02
1.10.10             372.4GB    10  10 SSD     aggregate   F2_SSD1   EDCFILER02
1.10.11             372.4GB    10  11 SSD     spare       Pool0     EDCFILER02

 

EDCNETAPPCL01::> aggr show -r -aggregate F1_SAS

Owner Node: EDCFILER01
 Aggregate: F1_SAS (online, raid_dp) (block checksums)
  Plex: /F1_SAS/plex0 (online, normal, active, pool0)
   RAID Group /F1_SAS/plex0/rg0 (normal, block checksums)
                                                              Usable Physical
     Position Disk                        Pool Type     RPM     Size     Size Status
     -------- --------------------------- ---- ----- ------ -------- -------- ----------
     dparity  2.20.0                       0   SAS    10000   1.63TB   1.64TB (normal)
     parity   2.21.0                       0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.20.1                       0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.21.1                       0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.20.2                       0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.21.2                       0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.20.3                       0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.21.3                       0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.20.4                       0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.21.4                       0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.20.5                       0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.21.5                       0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.20.6                       0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.21.6                       0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.20.7                       0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.20.22                      0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.20.8                       0   SAS    10000   1.63TB   1.64TB (normal)

   RAID Group /F1_SAS/plex0/rg1 (normal, block checksums)
                                                              Usable Physical
     Position Disk                        Pool Type     RPM     Size     Size Status
     -------- --------------------------- ---- ----- ------ -------- -------- ----------
     dparity  2.21.8                       0   SAS    10000   1.63TB   1.64TB (normal)
     parity   2.20.9                       0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.21.9                       0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.20.10                      0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.21.10                      0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.20.11                      0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.21.11                      0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.20.12                      0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.20.13                      0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.20.14                      0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.20.15                      0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.20.16                      0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.20.17                      0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.20.18                      0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.20.19                      0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.20.20                      0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.20.21                      0   SAS    10000   1.63TB   1.64TB (normal)

 

EDCNETAPPCL01::> aggr show -r -aggregate F2_SAS

Owner Node: EDCFILER02
 Aggregate: F2_SAS (online, raid_dp) (block checksums)
  Plex: /F2_SAS/plex0 (online, normal, active, pool0)
   RAID Group /F2_SAS/plex0/rg0 (normal, block checksums)
                                                              Usable Physical
     Position Disk                        Pool Type     RPM     Size     Size Status
     -------- --------------------------- ---- ----- ------ -------- -------- ----------
     dparity  2.21.12                      0   SAS    10000   1.63TB   1.64TB (normal)
     parity   2.22.0                       0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.21.13                      0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.22.1                       0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.21.14                      0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.22.2                       0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.21.15                      0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.22.3                       0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.21.16                      0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.22.4                       0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.21.17                      0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.22.5                       0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.21.18                      0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.22.6                       0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.21.19                      0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.22.7                       0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.21.20                      0   SAS    10000   1.63TB   1.64TB (normal)

   RAID Group /F2_SAS/plex0/rg1 (normal, block checksums)
                                                              Usable Physical
     Position Disk                        Pool Type     RPM     Size     Size Status
     -------- --------------------------- ---- ----- ------ -------- -------- ----------
     dparity  2.22.8                       0   SAS    10000   1.63TB   1.64TB (normal)
     parity   2.21.21                      0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.22.9                       0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.21.22                      0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.22.10                      0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.21.23                      0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.22.11                      0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.22.12                      0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.22.13                      0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.22.14                      0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.22.22                      0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.22.16                      0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.22.17                      0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.22.18                      0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.22.19                      0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.22.20                      0   SAS    10000   1.63TB   1.64TB (normal)
     data     2.22.21                      0   SAS    10000   1.63TB   1.64TB (normal)

 

Please help me convert the above aggregates to Flash Pools. How many SSDs do I need to add to each aggregate, and what are the best practices for my configuration? The rough steps I have in mind are sketched below. Thanks in advance.
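
This is only my own sketch; the SSD count of 5 per aggregate, leaving the rest as spares, and the placeholder SVM/volume names are assumptions I would like checked:

# move the volume off the SSD aggregate, then delete that aggregate (placeholder names)
volume move start -vserver <svm> -volume <ssd_vol> -destination-aggregate F1_SAS
storage aggregate delete -aggregate F2_SSD1
# enable Flash Pool on each SAS aggregate and add an SSD cache RAID group
storage aggregate modify -aggregate F1_SAS -hybrid-enabled true
storage aggregate add-disks -aggregate F1_SAS -disktype SSD -diskcount 5
storage aggregate modify -aggregate F2_SAS -hybrid-enabled true
storage aggregate add-disks -aggregate F2_SAS -disktype SSD -diskcount 5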


Verify /etc/rc file


Please may you help?

 

I have created an LACP vif and successfully added the changes to the /etc/rc file.

 

#Auto-generated by setup Fri Aug  2 18:46:55 GMT 2013
hostname ED-STRG-DR
ifgrp create multi VIF_1DR -b ip e0d e0c
ifgrp create lacp vif0 -b ip e0a e0b
ifconfig VIF_1DR `hostname`-VIF_1DR netmask 255.255.255.0 mtusize 1500 trusted wins up
ifconfig vif0 `hostname`-vif0 -b ip e0a e0b netmask 255.255.255.0 mtusize 9000
ifconfig vif0 10.10.20.1 mediatype auto netmask 255.255.255.0 mtusize 9000
ifconfig e0a flowcontrol full
ifconfig e0b flowcontrol full
ifconfig e0M flowcontrol full
#route add default NONE 1
routed on
options dns.enable off
options nis.enable off
savecore

 

I need confirmation on my /etc/rc file; the vif0 lines are what I have recently added (a cleaned-up sketch follows below). Is my rc file in order? If I need to reboot, will this work?
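
For comparison, this is what I think the minimal vif0 portion should look like if only the 10.10.20.1 address is meant to be assigned; the ifconfig vif0 line above that repeats "-b ip e0a e0b" is what I am least sure about, so please correct me:

#LACP vif carrying a single address (netmask/MTU copied from the file above)
ifgrp create lacp vif0 -b ip e0a e0b
ifconfig vif0 10.10.20.1 mediatype auto netmask 255.255.255.0 mtusize 9000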

 

 

 

Thank You

Can't Stop Disk from Blinking


Hi

 

I have a FAS8020 running ONTAP 9.1.

 

We have had a few disk failures, and I turned the LED blink on for disks 2.2.5 and 2.2.9.

 

I can't seem to stop the disk blinking by using either the blinkoff or off command for disk set-led.

 

I'm pretty sure I have the right disk numbers. Is there any way of listing the disks that have blink turned on?

 

Thanks

Present external storage to a 7-Mode filer: both EMC VNX and IBM A9000


Hi, I am trying to present storage from both an EMC VNX and an IBM A9000 to a NetApp 7-Mode filer (and maybe do the same with cDOT later).
I am trying to expand my 7-Mode filer's storage with A9000 storage first and then EMC VNX storage.
I cannot find any documentation - does anyone have any links/advice?

How to uniquely identify hardware components using the Manage ONTAP 5.1 API


Hi guys,


I'm trying to understand how I can uniquely identify the hardware components (fans, power supplies, temperatures, voltages) attached to a shelf, using the Manage ONTAP API (v5.1).


Let's focus on the fans: I know the storage system I'm testing my queries on (which is a cluster) has 4 fans and 2 power supplies attached to its only shelf; it also has 2 nodes.

When I run the following query in ZExplorer:


<storage-shelf-environment-list-info><node-name>node1</node-name></storage-shelf-environment-list-info>


here is what I get:


<results status='passed'><shelf-environ-channel-list><shelf-environ-channel-info><channel-name>0a</channel-name><is-channel-monitor-enabled>true</is-channel-monitor-enabled><is-shelf-channel-failure>false</is-shelf-channel-failure><node-name>node1</node-name>...<shelf-environ-shelf-list><shelf-environ-shelf-info>
                        ...<cooling-element-list><cooling-element-info><cooling-element-is-error>false</cooling-element-is-error><cooling-element-number>1</cooling-element-number><rpm>3000</rpm></cooling-element-info><cooling-element-info><cooling-element-is-error>false</cooling-element-is-error><cooling-element-number>2</cooling-element-number><rpm>3000</rpm></cooling-element-info><cooling-element-info><cooling-element-is-error>false</cooling-element-is-error><cooling-element-number>3</cooling-element-number><rpm>3000</rpm></cooling-element-info><cooling-element-info><cooling-element-is-error>false</cooling-element-is-error><cooling-element-number>4</cooling-element-number><rpm>3000</rpm></cooling-element-info></cooling-element-list>...<is-shelf-monitor-enabled>true</is-shelf-monitor-enabled><power-supply-list><power-supply-info><is-auto-power-reset-enabled>false</is-auto-power-reset-enabled><power-supply-element-number>1</power-supply-element-number><power-supply-firmware-revision>020F</power-supply-firmware-revision><power-supply-is-error>false</power-supply-is-error><power-supply-part-no>***</power-supply-part-no><power-supply-serial-no>***</power-supply-serial-no><power-supply-swap-count>0</power-supply-swap-count><power-supply-type>9C</power-supply-type></power-supply-info><power-supply-info><is-auto-power-reset-enabled>false</is-auto-power-reset-enabled><power-supply-element-number>2</power-supply-element-number><power-supply-firmware-revision>020F</power-supply-firmware-revision><power-supply-is-error>false</power-supply-is-error><power-supply-part-no>***</power-supply-part-no><power-supply-serial-no>***</power-supply-serial-no><power-supply-swap-count>0</power-supply-swap-count><power-supply-type>9C</power-supply-type></power-supply-info></power-supply-list>...<shelf-id>0</shelf-id><shelf-status>normal</shelf-status><shelf-type>iom6e</shelf-type><status-reads-attempted>1154193</status-reads-attempted><status-reads-failed>0</status-reads-failed>...</shelf-environ-shelf-info></shelf-environ-shelf-list><shelves-present>1</shelves-present></shelf-environ-channel-info></shelf-environ-channel-list></results>


At this point, I have 2 questions:


1) What is a channel (0a in the response)? Does this term refer to a physical component or a logical component?


2) The 4 cooling elements are obviously the 4 fans.

     However, my understanding is that the <cooling-element-number> attributes are not identifiers such as the serial numbers we have for the power supplies.

     If I'm correct, how could I reliably discriminate each fan?




Now, if I run this query:


<storage-shelf-environment-list-info><channel-name>0b</channel-name><node-name>node1</node-name></storage-shelf-environment-list-info>



I get the same <cooling-element-list> and <power-supply-list>.


3) Why did channel 0b not show up in the first response, when I did not specify any channel in the query?


4) Is channel 0b a redundant path for channel 0a?




Lastly, if I run the first query, but switch the nodes:


<storage-shelf-environment-list-info><node-name>node2</node-name></storage-shelf-environment-list-info>




I still get the same <cooling-element-list> and <power-supply-list>.


5) Do all the nodes have access to the same shelves?


6) If I have to discover all the hardware components attached to a shelf in a NetApp cluster,

     does it mean I just have to query the first available node, find the first available channel, and get its <shelf-environ-shelf-list>?


7) Same question for 7-Mode:

     if I have to uniquely identify all the hardware components attached to a shelf, do I just need to get the first available channel and then its <shelf-environ-shelf-list>?



Sorry for the long post, and if my questions are newbie-ish... I hope I made myself clear enough though.


Hopefully someone can help me understand what I'm missing.


Thanks guys!


Elvis

