Channel: ONTAP Discussions topics

Promote new volume for SVM root. Non-disruptive?


I'm in the process of replacing an existing 4-node cluster with 4 new nodes and disks. The SVM root volume load-sharing configuration, for some reason, shows the actual mounted root volume as one of the destination volumes, not the source. So I have root, root_m2, root_m3, and root_m4; root is the source/primary, but looking at the namespace, root_m4 is the volume actually mounted. Rather than mess with all that, I figured I could just create a brand-new root_vol and promote it, then re-create the load-sharing mirrors after the old hardware is pulled out of the cluster. So my question is: is the promotion of a new SVM root volume disruptive at all? Can I do this any time, or should I do it after hours or schedule true downtime?

 

Running 9.1P11 with a FAS6250, FAS3220, and FAS9000, with an AFF A300 waiting to join once I remove the 6250 and 3220. Thanks in advance!
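For reference, a minimal sketch of the two commands involved, with svm1 and new_root as placeholder names: snapmirror promote swaps an existing load-sharing copy into the root role, while volume make-vsroot (advanced privilege) makes a brand-new volume the SVM root.

snapmirror promote -destination-path svm1:root_m4

set -privilege advanced
volume make-vsroot -vserver svm1 -volume new_root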


There are not enough spare disks.... Need pointing in the right direction


Hi there. My name is Chris. Long story short, I came onboard a company whose 2 network engineers left at about the same time (not on good terms, I believe). The company has a NetApp FAS3020 running Data ONTAP 7.3.5.1P1. Of course there is no service contract/agreement on it, and I'm pretty sure it's already reached EOL. The IT Manager knows basically nothing about it other than the login credentials and the fact that it is what all our VMware VMs are running on.

 

I've never used a NetApp SAN. All I have are the credentials to log in, and I can already tell there are a few things wrong with it. See the attached image below.

 

In addition to this, it looks like a complete RAID group has failed. And beyond that, there is a spare 4th shelf, complete with disks, that is sitting there untouched. I don't know why we are not using it, but I would like to install it in the rack and connect it to expand our storage space and provide more spare disks for the RAID arrays. From what I can tell, there seems to be only 1 spare disk left.

 

Can anyone point me in the right direction to fix the error message about not enough spare disks, and to a guide on how to install/integrate the 4th spare shelf?

 

Any help would be appreciated. 

 

Thanks. 

 

netapp1.JPG

 

Here is the output of sysconfig -r

 

Aggregate aggr0 (failed, raid_dp, foreign, partial) (block checksums)
Plex /aggr0/plex0 (offline, failed, inactive)
RAID group /aggr0/plex0/rg1 (partial)

RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity FAILED N/A 272000/557056000
parity FAILED N/A 272000/557056000
data FAILED N/A 272000/557056000
data FAILED N/A 272000/557056000
data FAILED N/A 272000/557056000
data FAILED N/A 272000/557056000
data 0d.41 0d 2 9 FC:B - FCAL 10000 272000/557056000 280104/573653840 (prefail)
data FAILED N/A 272000/557056000
data FAILED N/A 272000/557056000
data FAILED N/A 272000/557056000
data FAILED N/A 272000/557056000
data FAILED N/A 272000/557056000
Raid group is missing 11 disks.
Plex is missing 2 RAID groups.

Aggregate aggr2 (online, raid_dp) (block checksums)
Plex /aggr2/plex0 (online, normal, active)
RAID group /aggr2/plex0/rg0 (normal)

RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity 0b.16 0b 1 0 FC:B - ATA 7200 847555/1735794176 847827/1736350304
parity 0b.17 0b 1 1 FC:B - ATA 7200 847555/1735794176 847827/1736350304
data 0b.18 0b 1 2 FC:B - ATA 7200 847555/1735794176 847827/1736350304
data 0b.19 0b 1 3 FC:B - ATA 7200 847555/1735794176 847827/1736350304
data 0c.20 0c 1 4 FC:A - ATA 7200 847555/1735794176 847827/1736350304
data 0c.21 0c 1 5 FC:A - ATA 7200 847555/1735794176 847827/1736350304
data 0c.22 0c 1 6 FC:A - ATA 7200 847555/1735794176 847827/1736350304
data 0c.23 0c 1 7 FC:A - ATA 7200 847555/1735794176 847827/1736350304
data 0b.24 0b 1 8 FC:B - ATA 7200 847555/1735794176 847827/1736350304
data 0b.25 0b 1 9 FC:B - ATA 7200 847555/1735794176 847827/1736350304
data 0c.26 0c 1 10 FC:A - ATA 7200 847555/1735794176 847827/1736350304
data 0b.27 0b 1 11 FC:B - ATA 7200 847555/1735794176 847827/1736350304
data 0c.28 0c 1 12 FC:A - ATA 7200 847555/1735794176 847827/1736350304

Aggregate aggr1 (online, raid_dp) (block checksums)
Plex /aggr1/plex0 (online, normal, active)
RAID group /aggr1/plex0/rg0 (normal)

RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity 0d.16 0d 1 0 FC:B - FCAL 10000 272000/557056000 280104/573653840
parity 0d.17 0d 1 1 FC:B - FCAL 10000 272000/557056000 280104/573653840
data 0a.18 0a 1 2 FC:A - FCAL 10000 272000/557056000 280104/573653840
data 0a.22 0a 1 6 FC:A - FCAL 10000 272000/557056000 280104/573653840
data 0d.39 0d 2 7 FC:B - FCAL 10000 272000/557056000 280104/573653840
data 0a.19 0a 1 3 FC:A - FCAL 10000 272000/557056000 280104/573653840
data 0a.20 0a 1 4 FC:A - FCAL 10000 272000/557056000 280104/573653840
data 0a.23 0a 1 7 FC:A - FCAL 10000 272000/557056000 280104/573653840
data 0d.25 0d 1 9 FC:B - FCAL 10000 272000/557056000 280104/573653840
data 0d.21 0d 1 5 FC:B - FCAL 10000 272000/557056000 280104/573653840
data 0a.26 0a 1 10 FC:A - FCAL 10000 272000/557056000 280104/573653840
data 0a.32 0a 2 0 FC:A - FCAL 10000 272000/557056000 280104/573653840
data 0a.33 0a 2 1 FC:A - FCAL 10000 272000/557056000 280104/573653840
data 0d.34 0d 2 2 FC:B - FCAL 10000 272000/557056000 280104/573653840
data 0d.35 0d 2 3 FC:B - FCAL 10000 272000/557056000 280104/573653840
data 0d.36 0d 2 4 FC:B - FCAL 10000 272000/557056000 280104/573653840

RAID group /aggr1/plex0/rg1 (normal)

RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity 0d.37 0d 2 5 FC:B - FCAL 10000 272000/557056000 280104/573653840
parity 0d.27 0d 1 11 FC:B - FCAL 10000 272000/557056000 280104/573653840
data 0d.38 0d 2 6 FC:B - FCAL 10000 272000/557056000 280104/573653840
data 0d.28 0d 1 12 FC:B - FCAL 10000 272000/557056000 280104/573653840
data 0d.29 0d 1 13 FC:B - FCAL 10000 272000/557056000 280104/573653840
data 0a.44 0a 2 12 FC:A - FCAL 10000 272000/557056000 274845/562884296
data 0d.42 0d 2 10 FC:B - FCAL 10000 272000/557056000 274845/562884296
data 0d.45 0d 2 13 FC:B - FCAL 10000 272000/557056000 274845/562884296
data 0d.43 0d 2 11 FC:B - FCAL 10000 272000/557056000 274845/562884296
data 0a.40 0a 2 8 FC:A - FCAL 10000 272000/557056000 280104/573653840


Spare disks

RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
Spare disks for block or zoned checksum traditional volumes or aggregates
spare 0b.29 0b 1 13 FC:B - ATA 7200 847555/1735794176 847827/1736350304
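As a starting point once the 4th shelf is cabled and powered on, here is a sketch of the 7-mode commands that would bring its disks in as spares, assuming they show up as unowned (shelf installation itself should follow the shelf's hardware guide):

disk show -n
disk assign all
aggr status -s
vol status -f

disk show -n lists unowned disks, disk assign all claims them for this controller, aggr status -s confirms they now appear as spares, and vol status -f lists the failed disks that still need replacing.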

 


Manageontap JAR Keep-Alive header's timeout


In my project I am making frequent calls to the invokeElem() method of the manageontap JAR.

This was creating too much traffic on the AD server, as NetApp performs one AD authentication for each invokeElem() call.

 

After some research I found that NaServer has a setKeepAliveEnabled() method. If we set that to true, manageontap will maintain the session and reuse it for subsequent calls.

But in the response from the NetApp side we can see the following headers:

 

Read Line === Keep-Alive: timeout=5, max=100
Read Line === Connection: Keep-Alive

 

This means that even though we set the Keep-Alive flag on the client side, NetApp by default keeps the connection alive for only 5 seconds. If there is a gap of more than 5 seconds between two NetApp calls, the server rejects the request and requires AD authentication again.

 

Can anyone please tell me how we can increase this timeout from 5 to 100 seconds?

That will be really helpful.

 

There is also a setTimeout() method in NaServer, but that timeout only controls how long the client will wait for a response after sending a request.

 

clientmatch in NFS export policy with wildcards?


Dear Sir or Madam,

 

I need to set up export policies for a large group of clients. These clients have similar names but are not grouped in the same network, so the subnet mask option described in https://library.netapp.com/ecmdocs/ECMP1196891/html/frameset.html is not valid in my case. The netgroup option is an alternative, but it introduces another separate point of maintenance.

 

Other systems have the option to define an export with a wildcard, like "pcdevice*", but I can't find it in ONTAP. Is the only option to set up each client name or IP in the export rule?

 

Yours Faithfully,

Jose
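If it helps, a sketch with svm1 and devpolicy as placeholder names: as far as I know -clientmatch does not take host-name wildcards like pcdevice*, but it does accept a DNS domain preceded by a dot, or a netgroup prefixed with @, which may get close to what you want.

vserver export-policy rule create -vserver svm1 -policyname devpolicy -clientmatch .devices.example.com -rorule sys -rwrule sys
vserver export-policy rule create -vserver svm1 -policyname devpolicy -clientmatch @pcdevices -rorule sys -rwrule sys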

What is 7-mode cluster architecture


Can someone explain to me how the 7-mode architecture got its name? I can see and understand what a 7-mode architecture is and what it looks like, but I don't understand how it got the name 7-mode. I'm a newbie.

Name mapping for vol security style mixed


Hello, I have created a volume with mixed security style for CIFS and NFS.

 

The volume mounts to the Linux system successfully, and I am able to read, write, and modify files.

 

The problem is with the CIFS share. The share has been mapped to the Windows system successfully, but I am not able to create any file or folder; I get the error below.

 

"You need permission to perform this action"

 

The Windows user is a domain user. I did the name mapping below, but no luck.

 

vserver name-mapping create -direction win-unix -position -pattern domain\\windows-user -replacement root (local unix user)

 

I tried configuring a default UNIX user as well, but no luck.

 

regards

Gsingh
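For reference, a sketch of the full syntax with placeholder SVM and domain names: -position needs a numeric value, and the backslash between domain and user is doubled because the pattern is a regular expression.

vserver name-mapping create -vserver svm1 -direction win-unix -position 1 -pattern DOMAIN\\windows-user -replacement root
vserver name-mapping show -vserver svm1 -direction win-unix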

 

How to prevent directory from accidental deletion


We are using FAS 2552

 

We want to protect a directory from accidental deletion. Please let us know how we can achieve this, like chattr +a on CentOS.

configuration backup with tftp results in 0 size files


I'm working on uploading configuration backups to an external server. In my case, I'm trying to simply use TFTP.

 

I have a number of Cisco devices in this setup; they are able to successfully upload their configuration backup files to a TFTP server, OpenTFTPServerMT, that I have installed on my Windows laptop.

This should indicate the server is working and no firewalls are blocking.

All devices are on the same subnet.

 

configuration backup upload -node cluster01-01 -backup cluster01.8hour.2018-06-04.10_15_02.7z -destination tftp://xxx.xx.xxx.xxx/

(system configuration backup upload)
Uploading the configuration backup file.
tftp upload in progress...........tftp upload in progress...........
Configuration backup file uploaded successfully.


cluster01::system configuration backup*>

 

The file cluster01.8hour.2018-06-04.10_15_02.7z appears on my TFTP server's incoming files area, but the size is 0 kB.

 

In the TFTP server log, I get repeated messages of timeout. I've increased the timeout to the max value, but the result is the same.

 

Ideally, I'd like to use SFTP, but NetApp doesn't appear to have that option.

 

 

FAS8200, ONTAP 9.3P2
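For what it's worth, a sketch of the same upload pointed at FTP or HTTP instead of TFTP (server address, credentials, and paths below are placeholders); to my knowledge these are the other destination protocols the command accepts, so they may be worth trying:

system configuration backup upload -node cluster01-01 -backup cluster01.8hour.2018-06-04.10_15_02.7z -destination ftp://user:password@xxx.xx.xxx.xxx/backups/
system configuration backup upload -node cluster01-01 -backup cluster01.8hour.2018-06-04.10_15_02.7z -destination http://xxx.xx.xxx.xxx/upload/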


CLI or Powershell command to find controller shelf PSU and Fan details


I've been searching for a while, but I've not been able to find the commands to use to report the controller shelf serial and part numbers for the power supplies and fan modules.

I can use Get-NcStorageShelf to give me all details for the disk shelves, but not the actual controller shelf.

 

I can get node serials, CPU serials and some other info from Get-NcNode and Get-NcNodeInfo, but if someone could point me in the right direction for the fans and PSUs I'd appreciate it.

 

FAS8200, 9.3P4.
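Not a Toolkit cmdlet, but a sketch of the ONTAP CLI commands I would check first; whether they surface the exact PSU and fan serial/part numbers on a FAS8200 is an assumption on my part, and they could be wrapped with Invoke-NcSsh from the Toolkit if no native cmdlet covers them.

system chassis fru show
system node environment sensors show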

Creating SVM from snaplock root aggregate


Hi,

 

I'm trying to create an SVM from a SnapLock aggregate, but I couldn't do it because the root aggregate box is empty. Can anyone help me? Thanks :)
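A sketch under the assumption that the box is empty because an SVM root volume cannot be placed on a SnapLock aggregate: create the SVM with its root volume on a regular aggregate, then create the data volumes on the SnapLock aggregate afterwards (all names below are placeholders).

vserver create -vserver svm_slc -rootvolume svm_slc_root -aggregate aggr_regular -rootvolume-security-style unix
volume create -vserver svm_slc -volume slc_vol1 -aggregate aggr_snaplock -size 1t -junction-path /slc_vol1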

Clone cluster


Hi

 

Is there anybody out there who has tried to clone a single cluster (HA pair) from a "configuration backup file"?

Did it work?

 

It would be great if it worked, since I'm about to install several systems that are identical and standalone, so "cloning" by taking the configuration file from one (working) system and installing it on the other systems would be a nice solution :-)

 

//Bjorn

Support for dates past 2038


Hello,

 

We are a software company that allows our customers to store data on NetApp. We have code that writes files out to NetApp with a retention period. We recently hit the issue of not being able to set retention past 2038. We were testing against NetApp 8.2. Interestingly enough, when a retention date past 2038 was passed to NetApp and the file was made read-only, it automatically wrapped the date around. That behavior has changed in 9.3, which does not allow us to use dates beyond 2038. So my questions are the following:

 

1. Should my code explicitly map dates greater than 2038 to dates between 1970 and 2003?

2. Is the difference I mentioned between 8.2 and 9.3 correct?

3. If we have a customer that would like to retain files past 2071, how do we do that?

 

 

Thank you for taking the time to read this and I await your response

 

Kaleb

Schedule vscan disable on a particular CIFS share in NetApp 7-mode systemshell crontab


Hello team,

 


Is it okay to schedule anything in the systemshell crontab in NetApp (7-Mode, 8.2.4P6)?

 

I want to disable vscan on a particular share at a fixed time daily:

 

 

cifs shares change xyz -novscan (to be done at 16:00)

cifs shares change xyz -vscan (to be done at 16:30)

 

 

https://kb.netapp.com/app/answers/answer_view/a_id/1033705

 


Thanks in advance!
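I can't say whether editing the systemshell crontab is supported, but as a sketch of an alternative, cron on an external admin host can drive the filer over SSH; this assumes passwordless SSH from that host to a filer named filer1 (placeholder name):

0 16 * * * ssh admin@filer1 "cifs shares change xyz -novscan"
30 16 * * * ssh admin@filer1 "cifs shares change xyz -vscan"

The first five crontab fields are minute, hour, day-of-month, month, and day-of-week, so these run at 16:00 and 16:30 every day.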

Copy data to Snowball


Hello

I am trying to copy data from my NetApp to my Windows machine. Would anyone know the correct commands to perform this action? My company is moving to the cloud, so I'm trying to migrate everything to the Snowball. Any suggestions on this would be greatly appreciated. If there is any documentation, please send it to amagee@bluecanopy.com. Thanks!!
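If the data is exposed as a CIFS share, one option is robocopy from the Windows side; this is a sketch with placeholder share, staging path, and log path, not anything specific to your environment:

robocopy \\filer01\share1 D:\snowball_staging /E /COPY:DAT /R:1 /W:1 /LOG:C:\temp\snowball_copy.log

/E copies subdirectories including empty ones, /COPY:DAT copies data, attributes, and timestamps, and /R:1 /W:1 keep retries short so one bad file doesn't stall the whole job.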

FAS 2020 Config Questions


Gents

I have done a factory reset on my old FAS2020 system, and after that I have lost my FC and iSCSI licenses.

 

Please advise: is there any way I can use this system?
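If you can recover the original license keys (for example from an old AutoSupport or from NetApp), here is a sketch of re-adding them on a 7-mode system; the key strings below are placeholders:

license
license add XXXXXXX
license add YYYYYYY

license on its own lists what is currently installed.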

 

 


filesys-size-fixed changed


I have several volumes on a cluster that I changed to filesys-size-fixed false. One week later the AutoSupport shows that they are set back to true. I found this because a volume became full when it was unable to autogrow.

 

None of these volumes were ever the destination of a SnapMirror. The filer in question is running ONTAP 9.2P1 and is now part of a 4-node cluster. What are the possible causes of this setting being changed to true?
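No answer on the root cause, but a sketch of the commands to spot affected volumes and flip the setting back (SVM and volume names are placeholders):

volume show -fields filesys-size-fixed -filesys-size-fixed true
volume modify -vserver svm1 -volume vol1 -filesys-size-fixed false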

Getting rid of old root disks...


I want to repurpose disks that were earlier used in a cluster. The disks have been added to the cluster and appear as unassigned disks. During the assignment of the disks to a node of the cluster, 6 disks are identified as being the old root disks. Is there a way to get rid of the data on those disks?

 

netapp01::*> storage disk show -disk 3.*.*
Usable Disk Container Container
Disk Size Shelf Bay Type Type Name Owner
---------------- ---------- ----- --- ------- ----------- --------- --------
....
3.41.0 - 41 0 FSAS unassigned - -
3.41.1 - 41 1 FSAS unassigned - -

....

 

netapp01::*> storage disk assign -disk 3.41.0 -owner netapp01-07

 

After changing the ownership of the disk, the disk is identified as not being a spare.

 

netapp01::*> storage disk show -disk 3.*.*
Usable Disk Container Container
Disk Size Shelf Bay Type Type Name Owner
---------------- ---------- ----- --- ------- ----------- --------- --------
...
3.41.0 3.63TB 41 0 FSAS aggregate n05root netapp01-07
3.41.1 - 41 1 FSAS unassigned - -

...

 

Is there a way to revert the disk to being a spare? I know it could be done using maintenance mode, but then I would have to reboot the node, and as this is production I would like to find out if there are other ways.

 

 

DF Output - LUNs


I have:

 

a 7168GB volume - space resv = volume

in that volume, a 5325GB LUN with space resv enabled and frac resv set to 0%

 

LUN is presented to ESXi and the full capacity is formatted for the datastore

 

Datastore shows Capacity = 5.20TB

 

Yet when I run DF I get:

 

Filesystem                   total   used  avail capacity
/vol/Volume01_vol/          7168GB 3414GB 3753GB      48%
/vol/Volume01_vol/.snapshot     0B  300GB     0B       0%

 

Why is my used space not 5TB? I have two other similar vols/LUNs in prod as well, yet they are showing 5TB used in df. What am I missing here?

 

Filesystem                    total   used  avail capacity
/vol/Volume01_vol/           7168GB 3414GB 3753GB      48%
/vol/Volume01_vol/.snapshot      0B  300GB     0B       0%
/vol/Volume02_vol/           7168GB 5474GB 1693GB      76%
/vol/Volume02_vol/.snapshot      0B  347GB     0B       0%
/vol/Volume03_vol/           7168GB 5140GB 2027GB      72%
/vol/Volume03_vol/.snapshot      0B     0B     0B       0%

 

Thank you!

GN
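One quick check, assuming clustered ONTAP and a placeholder LUN path: confirm that space reservation is actually in effect on the LUN, since df only counts the blocks actually written when the reservation is off.

lun show -vserver svm1 -path /vol/Volume01_vol/lun01 -fields space-reserve

(On 7-mode, lun show -v /vol/Volume01_vol/lun01 shows the same Space Reservation setting.)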
