Channel: ONTAP Discussions topics

Autogrow in cDOT 9.0P3


Hello all 

 

I'm using Data ONTAP version 9.0P3.

 

As far as I can see, there is no -increment-size among the options for volumes; there are only these options:

 

 { [[-maximum-size] {<integer>[KB|MB|GB|TB|PB]}]  Maximum Autosize

    [ -minimum-size {<integer>[KB|MB|GB|TB|PB]} ]  Minimum Autosize

    [ -grow-threshold-percent <percent> ]          Grow Threshold Used Space Percentage

    [ -shrink-threshold-percent <percent> ]        Shrink Threshold Used Space Percentage

    [ -mode {off|grow|grow_shrink} ]               Autosize Mode

  | [ -reset [true] ] }                            Autosize Reset

 

How can I configure the increment size for a volume? In my case it is not enough just to configure the maximum size; I need to control how much the volume grows at each step and then set the maximum it can grow to.
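
For reference, a minimal sketch of how the remaining options are combined, assuming the volume autosize command and hypothetical vserver/volume names (the per-step increment appears to be calculated by ONTAP itself in 9.x rather than set explicitly):

volume autosize -vserver vs1 -volume vol1 -mode grow -maximum-size 2TB -grow-threshold-percent 85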

 

Could you please help?


Cascading 2 Cluster Interconnect Switches


Greetings everyone,

 

This question may sound strange, but I wonder whether we can cascade two CN1610s to form the cluster interconnect. By that I mean a node is connected to a CN1610, this CN1610 is connected to another CN1610, and that one to another node. To provide path redundancy another two CN1610s are needed, so this involves four CN1610s in total.

 

If I recall correctly, NetApp uses Automatic Private IP Addressing (APIPA) for the cluster LIF IP addresses, i.e. addresses like 169.254.x.x. For each interconnection the two cascaded CN1610s would form a single Layer 2 domain, so the two controllers at either end could still communicate with each other using their 169.254.x.x addresses as if only one CN1610 were between them.

 

Any reply would be appreciated. Thank you in advance.

Rules regarding SnapLock Enterprise and Encryption


Hi:

 

I'm trying to understand the exact support statements around SnapLock Enterprise and encryption.

 

1.) Is SnapLock Enterprise supported on a FAS system with NSE drives that are configured using either external or internal key management?

 

2.) Is SnapLock Enterprise supported on a FAS system with NetApp Volume Encryption?

 

I am pretty sure the answer to #2 is "no", but I'm trying to figure out whether we can use SnapLock if we encrypt at the hardware level.

LIF Failover conditions


Hello

 

I'm trying to confirm the conditions that will initiate a LIF failover. Aside from a whole node going down, will a LIF fail over only if there is a link loss?

 

If that's the case, are there any other options I can employ to protect against the upstream switch failing without taking a port offline? I can't use LACP across two switches in this environment.
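
For context, the mechanism I have been looking at so far is a custom failover group spanning ports on both switches plus a broadcast-domain-wide failover policy; a sketch, assuming ONTAP 8.3 or later syntax and hypothetical node/port/LIF names (as far as I can tell this still only triggers on link loss or node failover, which is exactly the gap I am asking about):

network interface failover-groups create -vserver vs1 -failover-group fg_data -targets node1:e0c,node2:e0c
network interface modify -vserver vs1 -lif lif_data1 -failover-group fg_data -failover-policy broadcast-domain-wide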

 

Thanks

 

Steven

"CIFS Top" in cDOT 9.1

The speed of c-mode vol move...


I know the speed of a c-mode vol move depends on the load on the source and destination aggregates, and possibly on the cluster interconnect switch as well. I have been searching around but did not find any documentation describing the speed of a vol move operation. Assuming very low load on the controllers and aggregates, what is the typical throughput of a vol move operation? If the vol move goes across the cluster interconnect, will its throughput be very different from a move within the same controller?
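
Since I could not find documented numbers, one way to gauge real-world speed is to watch an actual move in flight; a sketch, assuming hypothetical names (the instance view should show progress details from which throughput can be estimated):

volume move show -vserver vs1 -volume vol1 -instance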

 

Any reply would be appreciated, thank you.

Broadcast Domains missing after Upgrade from 8.2.x to 8.3.x


Hello Community, 

 

During an upgrade from 8.2.x to 9.0 we observed that in the 8.2.x-to-8.3.x step the broadcast domains were not created, and the upgrade-revert check shows the upgrade as completed, but with errors.

 

After manually fixing the broadcast domains for the Default IPspace, we found that we are not able to create the Cluster broadcast domain.

Getting error "The IPSpace Cluster can not have more than one Broadcast Domain".

However, with broadcast-domain show we cannot see a broadcast domain, so something seems to be hidden.

 

NetApp Support says we have to go back to 8.3 and fix the problem there, but this requires downtime.

 

My question is:

  • Has someone else faced this problem before, and how did you fix it?
  • Is there any other way to create the broadcast domain for the cluster ports (see the command sketch below)?
  • Would it help to run cluster setup again?
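
For reference, the creation being attempted looks roughly like this, assuming ONTAP 9 syntax and hypothetical cluster port names; it is this kind of command that returns the error above:

network port broadcast-domain create -ipspace Cluster -broadcast-domain Cluster -mtu 9000 -ports node1:e0a,node1:e0b,node2:e0a,node2:e0b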

cheers Frank

Clear disks and installing in a new filer


Hi!

 

I'm about to expand my cluster with two new nodes.

Due to lack of space, we are scrapping two old filers that take 6U in the rack, installing a FAS8040 instead, and connecting the disk shelves to the new FAS.

 

I am considering how to wipe the data from the disks.

The first approach I thought of was to destroy the aggregates and RAID groups, disconnect the old filers, connect the FAS8040, and initialize the disks from the boot menu.

However, my workspace is not at the site of the NetApp rack, so I need to prepare the disks before I go on site to install the new FAS.

My current plan is to boot the old filers into maintenance mode, destroy the aggregates and RAID groups, leave all the disks assigned as spares, and zero them. The next day I will go to the site and install the new filer. The only problem I can think of is whether the fact that the disks will be spares owned by another filer will affect the install.
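
For what it's worth, the sequence I have in mind corresponds roughly to the following, assuming the old filers run 7-Mode and using a hypothetical aggregate name:

aggr offline aggr1
aggr destroy aggr1
disk zero spares

Ownership could presumably also be released first (priv set advanced; disk remove_ownership <disk_name>), but I am not sure whether that is necessary if the disks can simply be reassigned from the new controller.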

I would be happy to hear your opinions

 

Thanks,

Dan


Flash Cache in SPM


Greetings everyone.

 

I am not sure if I can ask about SPM here. I ran into a situation where specifying Flash Cache makes the throughput prediction go down.

 

The FAS2650 comes with 1TB of built-in Flash Cache. My configuration is one FAS2650 with two DS224C shelves, each fully populated with 24 x 960GB SSDs; each FAS2650 controller owns one shelf and thus one aggregate. I then generate two workload profiles of 5TB capacity, 90% random read and 10% random write, and assign the two workloads to the two aggregates, one per controller.

 

If I choose no Flash Cache, the predicted overall throughput is 120K IOPS; if I choose 512GB of Flash Cache per controller, it comes out at 113K IOPS. Is this reasonable?

ONTAP 9 downgrade to 8.2.X 7-mode in fresh installation


Hi All,

 

Does anyone know how to downgrade from ONTAP 9 to 8.2.x 7-Mode on a fresh installation?

 

Thanks!!

 

Regards,

Paul

cDOT command to find the last snapshot


Looking for a command to find the last snapshot created for a volume in cDOT.
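
A minimal sketch of one indirect way to get at this, assuming hypothetical vserver/volume names: list the snapshots with their creation times and take the newest one.

volume snapshot show -vserver vs1 -volume vol1 -fields create-time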

netapp clustered mode cifs


NetApp Release 8.3P1. Model FAS8060

 

I came across these options in the NetApp documentation with regard to optimizing CIFS change notify:

 

  • cifs.neg_buf_size
  • cifs.changenotify.buffersize
  • smb_boxcar_expire_ms

and wanted to find out their values in our customer's environment. The customer said these options are for 7-Mode, but they are running clustered Data ONTAP.

 

What is the equivalent of these options in NetApp clustered mode?
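
As a starting point, the clustered SMB settings are exposed per SVM; a sketch with a hypothetical SVM name, though I have not found direct counterparts to the three 7-Mode options above:

vserver cifs options show -vserver svm1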

Setting tree quota at volume level on Clustered DataOntap


Hello.

 

I have volumes without qtrees, and because of dedupe/compression I need to set a quota so that users see the correct used and remaining space.

 

It was possible to set a quota at the volume level in 7-Mode (something like /vol/volname/-), but when I set a quota on the volume here it is treated as a default for qtrees and is not applied to the volume itself.

 

Do you know if this is possible on cDOT?
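
For clarity, the behaviour described above is what a default tree quota rule produces; a sketch of the kind of rule I mean, with hypothetical names and limits:

volume quota policy rule create -vserver vs1 -policy-name default -volume vol1 -type tree -target "" -disk-limit 500GB
volume quota on -vserver vs1 -volume vol1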

 

Thank you for your help

 

Régis

 

CDOT 8.3.2 percent-used, percent-physical-used, and volume show-space


This feels like it should be a very basic Storage Guy question that I should already know the answer to, but I'm not having a lot of luck so far.

 

A certain volume hit 93% usage, which threw an alert. By the time I looked at it, it was down to 90%, which is fine. It prompted me to look further into that volume though, and some numbers aren't adding up. 

 

The volume hosts a handful of iSCSI LUNs. The percent-used value is still 90%. The percent-physical-used value is 20%. The volume show-space command shows 30% used (because of the +10% snapshot reserve).  There's no snapshot spillover - I'm currently using less than half the reserve. What distinguishes one of the used percent values from another? How can I account for the difference?

 

Below is the output of vol show -instance and vol show-space. Thanks!

 

Vserver Name: [redacted]
Volume Name: [redacted]
Aggregate Name: [redacted]
Volume Size: 50TB
Volume Data Set ID: 1589
Volume Master Data Set ID: 2147485245
Volume State: online
Volume Type: RW
Volume Style: flex
Is Cluster-Mode Volume: true
Is Constituent Volume: false
Export Policy: default
User ID: 0
Group ID: 0
Security Style: unix
UNIX Permissions: ---rwxr-xr-x
Junction Path: -
Junction Path Source: -
Junction Active: -
Junction Parent Volume: -
Comment: [redacted]
Available Size: 4.71TB
Filesystem Size: 50TB
Total User-Visible Size: 45TB
Used Size: 9.87TB
Used Percentage: 90%
Volume Nearly Full Threshold Percent: 95%
Volume Full Threshold Percent: 98%
Maximum Autosize (for flexvols only): 50TB
(DEPRECATED)-Autosize Increment (for flexvols only): 100GB
Minimum Autosize: 38.76TB
Autosize Grow Threshold Percentage: 92%
Autosize Shrink Threshold Percentage: 50%
Autosize Mode: grow
Autosize Enabled (for flexvols only): true
Total Files (for user-visible data): 31876689
Files Used (for user-visible data): 166
Space Guarantee Style: none
Space Guarantee in Effect: true
Snapshot Directory Access Enabled: true
Space Reserved for Snapshot Copies: 10%
Snapshot Reserve Used: 43%
Snapshot Policy: 3dailys
Creation Time: Tue Apr 12 15:45:58 2016
Language: C.UTF-8
Clone Volume: false
Node name: [redacted]
NVFAIL Option: on
Volume's NVFAIL State: false
Force NVFAIL on MetroCluster Switchover: off
Is File System Size Fixed: false
Extent Option: off
Reserved Space for Overwrites: 0B
Fractional Reserve: 0%
Primary Space Management Strategy: volume_grow
Read Reallocation Option: off
Inconsistency in the File System: false
Is Volume Quiesced (On-Disk): false
Is Volume Quiesced (In-Memory): false
Volume Contains Shared or Compressed Data: true
Space Saved by Storage Efficiency: 11.38TB
Percentage Saved by Storage Efficiency: 54%
Space Saved by Deduplication: 1.72TB
Percentage Saved by Deduplication: 8%
Space Shared by Deduplication: 1.57TB
Space Saved by Compression: 9.66TB
Percentage Space Saved by Compression: 45%
Volume Size Used by Snapshot Copies: 2.15TB
Block Type: 64-bit
Is Volume Moving: false
Flash Pool Caching Eligibility: read-write
Flash Pool Write Caching Ineligibility Reason: -
Managed By Storage Service: -
Create Namespace Mirror Constituents For SnapDiff Use: -
Constituent Volume Role: -
QoS Policy Group Name: -
Caching Policy Name: auto
Is Volume Move in Cutover Phase: false
Number of Snapshot Copies in the Volume: 3
VBN_BAD may be present in the active filesystem: false
Is Volume on a hybrid aggregate: true
Total Physical Used Size: 9.83TB
Physical Used Percentage: 20%


[cluster]::> vol show-space [redacted]

Vserver : [redacted]
Volume : [redacted]

Feature                          Used       Used%
-------------------------------- ---------- ------
User Data                        9.70TB     19%
Filesystem Metadata              27.93GB    0%
Inodes                           128KB      0%
Snapshot Reserve                 5TB        10%
Deduplication                    72.02GB    0%
Performance Metadata             72.03GB    0%

Total Used                       14.87TB    30%

Total Physical Used              9.83TB     20%
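
One reading that makes all three figures line up (an inference from the output above, not an official formula): percent-used is derived from the available size against the user-visible size, and because the volume has no space guarantee the available size is capped by free space in the aggregate; percent-physical-used compares physical blocks to the full volume size; and show-space counts the snapshot reserve as used.

percent-used           ≈ (45TB user-visible - 4.71TB available) / 45TB           ≈ 90%
percent-physical-used  ≈ 9.83TB physical used / 50TB volume size                 ≈ 20%
show-space Total Used  ≈ 9.70TB user data + 5TB snapshot reserve + ~0.17TB other ≈ 14.87TB, i.e. ≈ 30% of 50TB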


Huge discrepancy in du output


ONTAP 9.1 RC1

CentOS 6.3

NFS v3

 

Hi,

 

As an example, I have filer:/vol/vol0/home/[username] hosting users' home directories. vol0 is a few TB in size and also contains directories other than home, but each individual user home has a quota of 10GB. On Linux I have automount set up so that each user's home is mounted like:

 

/home/[username]  ->  filer:/vol/vol0/home/[username]

 

If I am in a user's home directory (i.e. /home/[username]) and run 'du .', it takes some time and the returned size is huge; by huge I mean over 1TB. If I cd to the same user's home via /net/filer/vol/vol0/home/[username]/ and run du, it is fast and returns a few GB. If I go back to /home/[username]/ and run 'du . --exclude *snapshot*', it returns a few GB, which matches the result of running du via /net/...
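
In concrete terms, the two invocations being compared are roughly these (hypothetical username):

cd /home/username
du -sh .                          # descends into the .snapshot directories here and reports over 1TB
du -sh --exclude='*snapshot*' .   # skips them and matches the few-GB figure seen via /net/...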

 

I understand du is not an accurate way to see actual disk usage but it can't be this off. And I don't think this only applies specifically to user home directories.

 

What's going on here? Why does du in /home/[username] seem to take the snapshots into account? Is this expected behavior? I don't think it was like this in 7-Mode, at least, and possibly not in earlier versions of cDOT.

 

The goal here is to give users a way to see their own space consumption without involving an admin.

 

 

Thanks,

 


Ansible modules for cDOT playbooks


The Ansible documentation says there are modules for cDOT; they all start with na_cdot_. I installed netapp-lib on my Ansible Tower server, but the modules didn't get installed. Where do I get them? All I see are the ordinary NetApp modules.
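
A few checks that might narrow down where things stand, assuming netapp-lib was installed with pip and that the installed Ansible release is recent enough to ship the na_cdot_* modules:

ansible --version               # confirm which Ansible release (and module path) Tower is using
ansible-doc -l | grep na_cdot   # list any na_cdot_* modules visible to that installation
pip list | grep -i netapp       # confirm netapp-lib landed in the same Python environment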

Fabric MetroCluster and DWDM clarification

$
0
0

Hi,

 

I have a customer that is planning to implement Fabric MetroCluster using a DWDM topology.

 

DWDM is already in place and it is used by EMC VPLEX. 

 

I have read TR-3548 for the Fabric MetroCluster ISL considerations and talked to the customer to understand their DWDM infrastructure:

- they have unused ports on both sides for attaching the NetApp FC ISLs

- they can use a different lambda for NetApp

 

[Inline image 1]

 

As far as I know this looks like a valid setup, but I would like to confirm it.

 

Any comments? 

 

Regards, Rafael.

API for dashboard


Hello

 

Just as there is an API ("dashboard-alarm-get-iter") for dashboard alarms, I am looking for APIs for dashboard performance and dashboard health. The performance-related API (perf-object-get-instances) does not give the data that the dashboard performance commands can give.

 

Also, is there any pointer to where I can get consolidated information about the NetApp APIs? I know I can find information about an API in ZExplore by hovering the mouse over the API name, but I am looking for a consolidated document.

 

Thanks

GV

FAILED aggregate, not reconstructing / stuck reconstructing


Hi,

 

We have an aggregate that is not reconstructing; the reconstruction appears stuck. aggr status -r output is below.

 

luneta> aggr status -r aggr10
Aggregate aggr10 (failed, raid_dp, partial) (block checksums)
Plex /aggr10/plex0 (offline, failed, inactive)
RAID group /aggr10/plex0/rg0 (partial, block checksums)

RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity 0b.03.8 0b 3 8 SA:A 0 SAS 15000 560000/1146880000 560879/1148681096
parity 0b.02.7 0b 2 7 SA:A 0 SAS 15000 560000/1146880000 560879/1148681096
data 0b.02.8 0b 2 8 SA:A 0 SAS 15000 560000/1146880000 560879/1148681096
data 0b.03.9 0b 3 9 SA:A 0 SAS 15000 560000/1146880000 560879/1148681096
data FAILED N/A 560000/ -
data 0b.02.11 0b 2 11 SA:A 0 SAS 15000 560000/1146880000 560879/1148681096
data 0b.02.12 0b 2 12 SA:A 0 SAS 15000 560000/1146880000 560879/1148681096
data 0b.02.13 0b 2 13 SA:A 0 SAS 15000 560000/1146880000 560879/1148681096
data 0b.02.14 0b 2 14 SA:A 0 SAS 15000 560000/1146880000 560879/1148681096
data 0b.02.15 0b 2 15 SA:A 0 SAS 15000 560000/1146880000 560879/1148681096
data FAILED N/A 560000/ -
data 0b.02.17 0b 2 17 SA:A 0 SAS 15000 560000/1146880000 560879/1148681096
data 0b.02.18 0b 2 18 SA:A 0 SAS 15000 560000/1146880000 560879/1148681096
data 0a.02.12 0a 2 12 SA:B 0 SAS 15000 560000/1146880000 560208/1147307688
data 0b.02.20 0b 2 20 SA:A 0 SAS 15000 560000/1146880000 560879/1148681096
data 0b.02.21 0b 2 21 SA:A 0 SAS 15000 560000/1146880000 560879/1148681096
data 0a.02.17 0a 2 17 SA:B 0 SAS 15000 560000/1146880000 560208/1147307688 (reconstruction 99% completed)
data 0b.03.0 0b 3 0 SA:A 0 SAS 15000 560000/1146880000 560208/1147307688
data 0b.02.6 0b 2 6 SA:A 0 SAS 15000 560000/1146880000 560879/1148681096
data 0b.02.23 0b 2 23 SA:A 0 SAS 15000 560000/1146880000 560879/1148681096
data 0b.03.2 0b 3 2 SA:A 0 SAS 15000 560000/1146880000 560879/1148681096
data 0b.03.3 0b 3 3 SA:A 0 SAS 15000 560000/1146880000 560879/1148681096
data 0b.03.4 0b 3 4 SA:A 0 SAS 15000 560000/1146880000 560879/1148681096
data 0b.03.5 0b 3 5 SA:A 0 SAS 15000 560000/1146880000 560879/1148681096
data 0b.03.6 0b 3 6 SA:A 0 SAS 15000 560000/1146880000 560879/1148681096
data 0b.03.7 0b 3 7 SA:A 0 SAS 15000 560000/1146880000 560879/1148681096
Raid group is missing 2 disks.

 

It says 2 disks are missing and a reconstruction is stuck at 99%. I tried to manually fail or replace the disk, but no luck.
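
For completeness, a couple of related views (7-Mode syntax): aggr status -f lists the failed disks and aggr status -s lists the available spares.

aggr status -f
aggr status -s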

 

Has anyone seen this issue?

 

Thanks.

 

John

QTree Snapmirror: Cannot connect to source filer


I am attempting a qtree SnapMirror to a distant location. We have gotten through the networking portion of the connection (long-haul path, firewall, ACLs); however, we are still unable to get a successful SnapMirror. We receive the following error:

 

Transfer aborted: Cannot connect to source filer.

 

When I run snapmirror status afterwards, it shows the relationship; however, the state is "Uninitialized" and the status is "Idle".

 

We are not creating a qtree on the destination, since the SnapMirror is supposed to create it during initialization. Although the SnapMirror errors out, when I look at our qtree inventory I can see that the qtree is being created.

 

Here is the syntax for the initialize command performed from our vfiler.

 

snapmirror initialize -S <Source IP>:/vol/source_vol/source_qtree <Dest vfiler name>:/vol/dest_vol/dest_qtree

 

After running this command I can see in the GUI that the qtree was created, and it shows its status as "snapmirrored".

 

However, during the initialize it errors out, and snapmirror status shows the state as "Uninitialized" and the status as "Idle". We have confirmed the snapmirror options settings, and the source has a snapmirror.allow file with our information. We have also confirmed that the /etc/hosts files contain the source/destination information.
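
For reference, the access checks mentioned above correspond roughly to these commands on the source (7-Mode syntax; as far as I know the snapmirror.allow file is only consulted when snapmirror.access is set to legacy):

options snapmirror.access
rdfile /etc/snapmirror.allow
rdfile /etc/hosts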

 

My biggest question is: if we cannot connect to the source filer, how and why is the qtree being created at the destination?

 

Any help with why this is not completing would be appreciated. Thanks.
