
On-Access vscan policy behavior when creating files


Hi,

 

I have a customer who wants to use on-access vscan policies with mandatory scan (ONTAP 9.1P5), and I would like to be sure about the expected behavior when the AV servers are down.

 

 

The documentation is not very verbose on the expected behavior and just says that:

- the standard profile triggers a scan on open, close, and rename

- the strict profile triggers a scan on open, close, read, and rename

- the writes-only profile triggers a scan on close after modification.
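
For reference, these settings live in two places; a minimal sketch of where each one is configured (vserver, share, and policy names are hypothetical):

vserver cifs share modify -vserver vs1 -share-name share1 -vscan-fileop-profile strict

vserver vscan on-access-policy create -vserver vs1 -policy-name pol1 -protocol CIFS -filters scan-mandatory

The profile (which file operations trigger a scan) is a per-share setting, while the scan-mandatory filter (deny access when no scanner is available) sits on the on-access policy itself.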

 

When testing with all AV servers down, the customer sees that files cannot be opened or read in all cases, while file creation seems to be allowed.

 

 

I just want to be sure the following behavior is normal (when scan is mandatory and the AV servers are down) with the standard or strict profiles:

- files cannot be opened, read, or renamed;

- files cannot be modified (saved under the same name);

- new files can be created (by copying them from elsewhere);

- files can be modified when saved under a new name, which amounts to creating a new file.

 

With the writes-only profile, files can be opened and modified (a simple test using Notepad): I managed to modify a file without any problem, which does not seem normal to me.

 

 

Is this correct, or is there a problem?

 

Best regards

 

Régis

 

 


Netapp Tools for Monitoring and Capacity/Performance Reporting


System Model: FAS2552
Release 8.2.2RC2 Cluster-Mode

 

Hello,

 

I have a single 2-node NetApp system in clustered mode.

 

Is there a NetApp tool used for monitoring, capacity and performance reporting?

 

With this being a small single system, what would be the best solution? 

cDOT NFS export failed, reason given by server: No such file or directory


I have a single FAS2650 running with clustered data ontap.

I created a vserver svmtest, a volume NFStest, and a policy pol_test:

vserver export-policy create -vserver svmtest -policyname pol_test

Then I added two rules to pol_test:

vserver export-policy rule create -vserver svmtest -policyname pol_test -ruleindex 1 -clientmatch @testhosts,192.168.1.0/24 -protocol nfs -rorule sys -rwrule sys   -superuser sys  -anon 65534 -allow-suid true  -allow-dev true

vserver export-policy rule create -vserver svmtest -policyname pol_test -ruleindex 2 -clientmatch hostA,hostB,hostC -protocol nfs -rorule sys -rwrule sys   -superuser none  -anon 65534 -allow-suid false  -allow-dev false

and assigned the policy to my NFStest volume:

volume modify -vserver svmtest -volume NFStest -policy pol_test

The web GUI shows everything as above, but when I try to mount, either on hostA or on a host from @testhosts or from the range 192.168.1.0/24, I get the following error:

mkdir -p /tmp/XXX
mount -vv -t nfs svmtest:/NFStest /tmp/XXX
mount.nfs: timeout set for Wed Aug  2 07:34:35 2017
mount.nfs: trying text-based options 'vers=4,addr=192.168.1.178,clientaddr=192.168.1.79'
mount.nfs: mount(2): Protocol not supported
mount.nfs: trying text-based options 'addr=192.168.1.178'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 192.168.1.178 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: trying 192.168.1.178 prog 100005 vers 3 prot UDP port 635
mount.nfs: mount(2): No such file or directory
mount.nfs: mounting svmtest:/NFStest failed, reason given by server: No such file or directory

How can I troubleshoot the issue?
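
One likely first check, sketched below: a classic cause of "No such file or directory" on a cDOT mount is a volume that was created without a junction path, so it is not visible in the SVM's namespace.

volume show -vserver svmtest -volume NFStest -fields junction-path
volume mount -vserver svmtest -volume NFStest -junction-path /NFStest

If the junction path is already set, also check that the export policy applied to the SVM's root volume grants at least read-only access, since clients traverse the root volume to reach /NFStest.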

Increase Inodes on Data Protection Volume


Hello,

 

For increasing inodes I use the following command: volume modify <volume_name> -files <number_of_files>

On a Data Protection Volume (Snapvault Destination) I get the following error when using this command:

Error: command failed: Modification of the following fields: files not allowed
for volumes of the type "Flexible Volume - DP read-only volume".

 

Is there another way to increase inodes on these volumes?
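
A hedged suggestion, based on the destination being read-only: the -files setting is usually raised on the read-write source volume instead and carried over by the relationship; worth verifying against your SnapVault release before relying on it:

volume modify -vserver <source_vserver> -volume <source_volume> -files <number_of_files>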

 

Regards,

 

Christian

Unable to online LUN after NDMPcopy


Hello,

 

I'm migrating some LUNs provided to Windows hosts from one filer to another.

ONTAP is 8.2.4P6 7-Mode

 

I successfully migrated the LUN with "ndmpcopy":

ndmpcopy -da root:<password> /vol/vol_test_01/S137B301_test_lun_001.lun Filer2:/vol/vol_test_001/S137B301_lun_test_002.lun

 

I see the LUN as a file on the destination filer. Now I want to unmap the LUN from the source and map it from the destination filer.

No trouble with the unmap. But how can I online and map the LUN on the destination filer?

 

When I tried just "lun online", it gave an error that no such LUN exists:

lun online /vol/vol_test_001/S137B301_lun_test_002.lun
lun online: /vol/vol_test_001/S137B301_lun_test_002.lun : No such LUN exists
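
For what it's worth, ndmpcopy transfers the LUN as a plain file, so the destination filer has no LUN metadata for it. In 7-Mode, a LUN can be created from an existing file with "lun create -f"; a sketch, with the target LUN path hypothetical:

lun create -f /vol/vol_test_001/S137B301_lun_test_002.lun -t windows /vol/vol_test_001/S137B301_lun_test_002

After that, "lun online" and "lun map" should work against the new LUN path.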

 

7-Mode SnapMirror versions for source and destination


I'm aware of the rule that a 7-Mode SnapMirror destination's ONTAP version must be the same as or later than the source's, but I am wondering whether this applies to the P (patch) number too.

 

So, I have a large CIFS/NFS filer with thousands of users, presently running 8.2.4P4, with a slew of source SnapMirror filers on 8.1.2P4.

 

I'm in the process of upgrading all of this estate, and more, to 8.2.4P6.

 

The question is: am I able to update my SnapMirror sources to 8.2.4P6 and leave the single destination at 8.2.4P4?

 

Would a difference in patch number break SnapMirror?

 

Thanks 

 

jc

Vserver connection to FPolicy server fails, NetApp Release 8.3.2RC2 cDOT


Hi, I just re-enabled Varonis to collect some stats; it had been disabled for some months as it was deemed to be causing latency.

However, it has stopped working. Does anyone have any ideas? Neither the NetApp filer FPolicy config nor the Varonis config has been altered.

The external engines can see the filer and it can see them.

 

 

Vserver       Policy Name               Number  Status   Engine
------------- ----------------------- --------  -------- ---------

PG7-Cluster3  Varonis                        1  on       fp_ex_eng

 

8/2/2017 11:19:43   PG7NETAPPP04-03  WARNING       fpolicy.server.disconnect: Connection to the Fpolicy server '10.13.110.220' is broken ( reason: 'FPolicy server is removed from external engine.' ).
8/2/2017 11:19:42   PG7NETAPPP04-01  WARNING       fpolicy.server.disconnect: Connection to the Fpolicy server '10.13.110.220' is broken ( reason: 'FPolicy server is removed from external engine.' ).
8/2/2017 11:19:42   PG7NETAPPP04-02  WARNING       fpolicy.server.disconnect: Connection to the Fpolicy server '10.13.110.220' is broken ( reason: 'FPolicy server is removed from external engine.' ).
8/2/2017 11:19:42   PG7NETAPPP04-04  WARNING       fpolicy.server.disconnect: Connection to the Fpolicy server '10.13.110.220' is broken ( reason: 'FPolicy server is removed from external engine.' ).

 

So I ran:

vserver fpolicy engine-connect -node PG7NETAPPP04-03 -vserver PG7-Cluster3 -policy-name Varonis -server 10.13.110.220

 

Result:

 

vserver fpolicy show-engine -vserver PG7-Cluster3 -node PG7NETAPPP04-02 -fields disconnect-reason,server-status,disconnected-since
node            vserver      policy-name server        server-status disconnected-since disconnect-reason
--------------- ------------ ----------- ------------- ------------- ------------------ ----------------------------------------
PG7NETAPPP04-02 PG7-Cluster3 Varonis     10.13.110.220 disconnected  8/2/2017 14:19:40  TCP Connection to FPolicy server failed.

 

 

vserver fpolicy show-engine -vserver PG7-Cluster3 -node PG7NETAPPP04-02 -fields disconnect-reason,server-status,disconnected-since,disconnect-reason-id
node            vserver      policy-name server        server-status disconnected-since disconnect-reason                        disconnect-reason-id
--------------- ------------ ----------- ------------- ------------- ------------------ ---------------------------------------- --------------------
PG7NETAPPP04-02 PG7-Cluster3 Varonis     10.13.110.220 disconnected  8/2/2017 16:13:26  TCP Connection to FPolicy server failed. 9307

 

 

 ping -destination 10.13.110.220
10.13.110.220 is alive
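
Since ICMP succeeds but the TCP connection fails, one more hedged check is whether any TCP session toward 10.13.110.220 shows up at all from the affected node (the FPolicy port depends on the Varonis configuration):

network connections active show -node PG7NETAPPP04-02

If nothing appears for the server's IP, a firewall or routing change between the cluster and the FPolicy server is the likely suspect.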

 

secd.conn.auth.failure: Vserver (vs1) could not make a connection over the network


Hi There,

 

 

Question???

 

Is there a way to stop ONTAP 9.1P1 from polling specific unavailable MS DCs?

 

I have an issue with my SVMs, where they are filling the event log every 5 minutes with the error message:

 

secd.conn.auth.failure: Vserver (vs1) could not make a connection over the network to server (ip 10.0.0.1) Error: Operation timed out.

 

Message Name:
secd.conn.auth.failure
Sequence Number:
1028123
Description:
This message occurs when the Vserver cannot establish a TCP/UDP connection to or be authenticated by an outside server such as NIS, LSA, LDAP and KDC. Subsequently, some features of the storage system relying on this connection might not function correctly.
Action:
Ensure that the server being accessed is up and responding to requests. Ensure that there are no networking issues stopping the Vserver from communicating with this server. If the error reported is related to an authentication attempt, ensure that any related configurable user credentials are set correctly.

This is happening as the MS-DC server in question is in a DMZ.

 

I can't get my filers access across the firewall to this DC server, so it shows as unavailable in the output of the command:

 

>vserver cifs domain discovered-servers show -vserver vs1 -domain mydomain

mydomain    MS-DC    adequate   adserver     10.0.0.1     unavailable

 

Local MS DCs return 'OK'.
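
A hedged workaround sketch: pin the SVM to the reachable DCs with a preferred-DC list and rediscover, so that discovery stops probing the DMZ server (IPs hypothetical; verify the syntax against your release):

vserver cifs domain preferred-dc add -vserver vs1 -domain mydomain -preferred-dc 10.0.0.2,10.0.0.3

vserver cifs domain discovered-servers reset-servers -vserver vs1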

 

Any help appreciated.

 

 

Thanks,

 

John


LUN path for mapping


Hello, 

 

I am new to NetApp and have a very naive question. The organization is planning to run a DR test. I am following an old doc on mapping LUNs to the igroups created.

 

I would like to know how the LUN path is defined. It's cluster mode, and this will be run on the DR cluster.

 

 

> lun map -vserver svmt1 -lun /vol/hq_t0_dd01_mirror/hq_t0_dd01 -igroup DR_IGROUP -lun-id 4

 

Is it something like /vol/<volume name>/<lun name>? Can I be sure there is no qtree for the LUN? Also, is there a command to identify the LUN ID for the volume?

Kindly clarify
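
For what it's worth, a LUN path is /vol/<volume>/<lun>, or /vol/<volume>/<qtree>/<lun> when the LUN lives in a qtree. Both the exact paths and the existing mappings with their LUN IDs can be listed; a sketch reusing the vserver name from the command above:

lun show -vserver svmt1 -fields path

lun mapped show -vserver svmt1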

Different VLANs in Broadcast Domains failing over to the right Port?

$
0
0

Hello,

 

I am having trouble with one of our NetApp clusters, which was not configured by us. I have to update it, and I found a very confusing configuration of the broadcast domains.

 

Bobby::> network port broadcast-domain show 
IPspace Broadcast                                         Update
Name    Domain Name    MTU  Port List                     Status Details
------- ----------- ------  ----------------------------- --------------
...
ubimet  ubimet        9000
                            Pesto:a0a-9                   complete
                            Pesto:a0a-10                  complete
                            Pesto:a0a-8                   complete             
                            Pesto:a0a-11                  complete             
                            Pesto:a0a-12                  complete             
                            Pesto:a0a-7                   complete             
                            Pesto:a0a-6                   complete             
                            Pesto:a0a-13                  complete             
                            Pesto:a0a-14                  complete             
                            Pesto:a0a-24                  complete             
                            Pesto:a0a-22                  complete             
                            Pesto:a0a-23                  complete             
                            Squit:a0a-6                   complete             
                            Squit:a0a-7                   complete             
                            Squit:a0a-9                   complete             
                            Squit:a0a-8                   complete             
                            Squit:a0a-11                  complete             
                            Squit:a0a-10                  complete             
                            Squit:a0a-12                  complete             
                            Squit:a0a-13                  complete             
                            Squit:a0a-14                  complete             
                            Squit:a0a-22                  complete             
                            Squit:a0a-23                  complete             
                            Squit:a0a-24                  complete
...

As you can see, there are completely different VLANs in the same broadcast domain. Does this really work in the case of a failover, so that the partner node uses the right port?

 

Normally, I configure broadcast domains like this:

Bobby::> network port broadcast-domain show 
IPspace Broadcast                                         Update
Name    Domain Name    MTU  Port List                     Status Details
------- ----------- ------  ----------------------------- --------------
...
ubimet  172.18.20.0/23 
                      1500
                            Squit:a0a-80                  complete
                            Pesto:a0a-80                  complete
...

Here, each broadcast domain has only one VLAN, configured with one port on each partner. This works when a node is failed over.
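
One hedged way to verify what would actually happen is to list each LIF's failover targets; every port a LIF can move to (derived from its broadcast domain and failover group) shows up here:

network interface show -failover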

 

It would be very nice if someone could tell me whether the above config is failover compatible.

 

Best regards

Florian

SVM migrate to new node


Hi all,

 

Is there any facility in ONTAP that allows the movement of an SVM from one node to another within the same cluster?

 

I have considered SVM-DR but this requires the destination SVM to have a different name, which I would like to avoid.
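
For context, an SVM is a cluster-wide logical entity, so within a single cluster it is normally "moved" piecemeal: relocate its volumes with vol move and rehome its LIFs, rather than moving the SVM itself. A hedged sketch (names hypothetical):

volume move start -vserver vs1 -volume vol1 -destination-aggregate aggr_on_new_node

network interface migrate -vserver vs1 -lif lif1 -destination-node new_node -destination-port e0c

network interface modify -vserver vs1 -lif lif1 -home-node new_node -home-port e0c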

 

Thanks,

 

John

Remove Reporting Nodes


A year ago, I engaged in a very helpful discussion with bobshouseofcards regarding adding reporting nodes:

 

http://community.netapp.com/t5/Data-ONTAP-Discussions/New-Cluster-Nodes-Not-Showing-in-NetApp-DSM-for-MPIO/td-p/123988

 

Now I'm in the opposite situation. We are temporarily adding a fifth and sixth node to the cluster, migrating content to those nodes, and then decommissioning two of the original nodes. I will add nodes 5 and 6 as reporting nodes before moving the volumes containing LUNs to those nodes, but afterwards I want to cleanly remove nodes 1 and 2 as reporting nodes while keeping nodes 3, 4, 5, and 6.

 

In my reading of the lun mapping remove-reporting-nodes command, it appears to be very limited compared to add-reporting-nodes. There doesn't appear to be any way to identify which nodes you want to remove.

 

The "-remove-nodes" option description reads: "If specified, remove all nodes other than the LUN's owner and HA partner from the LUN mapping's reporting nodes." That's not what I want, but I see no other option to determine which nodes to remove. Also, what happens if this option is not specified? Which nodes are removed? Any help would be appreciated!

ONTAP Recipes: Easily create a SnapLock volume with Volume Append mode enabled


Did you know you can…?

 

Easily create a SnapLock volume with Volume Append mode enabled

 

 

1. Install the SnapLock license

license add -license-code <key>

 

2. Initialize the compliance clock on all the nodes of the cluster

snaplock compliance-clock initialize -node <nodename>

 

3. Create a SnapLock aggregate of the appropriate SnapLock type

storage aggregate create -aggregate <aggrname> -diskcount <count> -snaplock-type <enterprise|compliance>

 

4. Create a volume on the newly created aggregate.

volume create -vserver <vservername> -volume <volname> -aggregate <aggrname> -size <size>

 

5. Once the volume is created, enable the volume append mode

volume snaplock modify -vserver <vservername> -volume <volname> -is-volume-append-mode-enabled true
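
A worked run of steps 3 to 5 with hypothetical names, for concreteness:

storage aggregate create -aggregate aggr_slc -diskcount 6 -snaplock-type enterprise

volume create -vserver vs1 -volume slc_vol -aggregate aggr_slc -size 100g

volume snaplock modify -vserver vs1 -volume slc_vol -is-volume-append-mode-enabled true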

 

 

For more information, see the ONTAP 9 documentation center

7MTT transition multiple 7-Mode source to single SVM - Request For Enhancement


I'm testing 7MTT v3.2 in preparation for a 7-Mode filer migration/consolidation. I got this error when trying to create a 7MTT project to consolidate two 7-Mode filers into one SVM at the same time.

 

"The tool does not allow the simultaneous consolidation of volumes from different 7-Mode sources to an SVM."

 

This means I would need to finish the cutover for the first 7-Mode filer before I can start on the second 7-Mode filer to the same SVM.

 

 

The workaround is to run another instance of 7MTT on a different server. Each 7MTT instance can only migrate from one source 7-Mode filer to one SVM destination, so if you need to migrate multiple source 7-Mode filers to the same SVM destination, you need to run another 7MTT instance on a different Windows/Linux server.

 

This will be a headache when consolidating multiple volumes from 7-Mode sources to a single SVM, especially with application data residing on multiple 7-Mode filers.

 

 

 

Request For Enhancement:

Allow 7MTT to migrate multiple 7-Mode source volumes to the same destination SVM simultaneously.

Removing Nodes From a Cluster


Scenario: we have a 4-node cluster. We are temporarily adding 2 additional nodes, migrating content, and decommissioning 2 nodes.

 

The System Admin Guide says the following about removing a node:

 

------------------------------

If the node you want to remove is the current master node, reboot the node by using the system node reboot command to enable another node in the cluster to be elected as the master node. The master node is the node that holds processes such as mgmt, vldb, vifmgr, bcomd, and crs.

------------------------------

 

Node 2 is the master node and is one of the nodes I'm removing. Of course I don't want to reboot and have node 1 take over, since it's also being removed. Should I remove node 1, reboot node 2, and then remove node 2? Why does the guide recommend rebooting rather than failover/giveback? Any insights will be appreciated!
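
One hedged check worth running before and after each step is which node currently holds the master role for the replication rings the guide mentions (mgmt, vldb, vifmgr, bcomd, crs); at advanced privilege:

set -privilege advanced

cluster ring show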


ONTAP Recipes: Easily identify remaining performance capacity


ONTAP Recipes: Did you know you can…?

 

Easily identify remaining performance capacity

 

Performance capacity, or headroom, measures how much work you can place on a node or an aggregate before performance of workloads on the resource begins to be affected by latency. Knowing the available performance capacity on the cluster helps you provision and balance workloads.

 

1. Change to advanced privilege level:

set -privilege advanced

 

2. Start real-time headroom statistics collection:

statistics start -object [resource_headroom_cpu | resource_headroom_aggr]

 

3. Display real-time headroom statistics information:

statistics show -object [resource_headroom_cpu | resource_headroom_aggr]

 

For complete command syntax, see the man page.

 

 

Example:

 

Cluster1::*> statistics show -object resource_headroom_cpu -raw -counter ewma_hourly

 

Object: resource_headroom_cpu
Instance: CPU_node1
Start-time: 7/9/2017 16:06:27
End-time: 7/9/2017 16:06:27
Scope: node1

Counter                                           Value
--------------------------------                  ---------
ewma_hourly                                            -
current_ops                                         4376
current_latency                                    37719
current_utilization                                   86
optimal_point_ops                                   2573
optimal_point_latency                               3589
optimal_point_utilization                             72
optimal_point_confidence_factor                        1

 

 

You can compute the available performance capacity for a node by subtracting the optimal_point_utilization counter from the current_utilization counter. In this example, the available performance capacity for CPU_node1 is -14% (72% - 86%), which suggests that the CPU has been overutilized, on average, for the past hour.

 

 

For more information, see the ONTAP 9 documentation center

 

 

Workflow for automatic weekly lun clone refresh/mapping


Hi guys, I got a problem.

 

There's an Oracle DB for which I need to provide a test environment, and to save space I'm planning to use LUN clones.

On an IBM storage system, I have clone relationships that can be "refreshed" with one plain command, effectively resetting the RW clones to the current state of the source LUNs (in IBM's case, volumes). This command is issued against a "consistency group" (a group of such relationships across multiple volumes/LUNs), keeping the clones and their mountings unharmed and saving me the hassle of mapping them again.

 

Is there anything close to it in NetApp? (I'm using 9.1)

If not... here's my scenario: 10 LUNs for which I need to create clones, map them to an igroup, mount them in an ESX VM, and refresh them every week.

I know it can be done through scripting (see the sketch below); I would just like to know if there is a smarter way, or if not, whether you can point me to some threads that already discuss this.
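
A hedged sketch of one scriptable weekly cycle, using a FlexClone volume instead of per-LUN clones (names hypothetical; the cloned LUNs keep stable paths inside the clone volume, so the mappings can be recreated deterministically each week):

lun unmap -vserver vs1 -path /vol/testdb_clone/lun1 -igroup test_ig

volume offline -vserver vs1 -volume testdb_clone

volume delete -vserver vs1 -volume testdb_clone

volume clone create -vserver vs1 -flexclone testdb_clone -parent-volume proddb

lun map -vserver vs1 -path /vol/testdb_clone/lun1 -igroup test_ig

LUNs inside a freshly created clone come up unmapped (and possibly offline), so the map step (and, if needed, lun online) is repeated for each of the 10 LUNs.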

PS: Sorry for the bad English.

ONTAP Recipes: Easily identify throughput and latency between nodes


ONTAP Recipes: Did you know you can...?

 

Easily identify throughput and latency between nodes

 

You can check throughput and latency to identify network bottlenecks, or to prequalify network paths between nodes. You can check pairs of nodes in the same cluster or in two peered clusters.

 

1. Change to advanced privilege level:

set -privilege advanced

 

2. Measure throughput and latency between nodes:

network test-path -source-node name -destination-cluster cluster_name -destination-node name -session-type Default

 

 

Example:

 

cluster1::> network test-path -source-node node1 -destination-cluster cluster2 -destination-node node3 -session-type Default

Test Duration:          10.88 secs

Send Throughput:        48.23 MB/sec

Receive Throughput:     48.23 MB/sec

MB sent:                524.74

MB received:            524.74

Avg latency in ms:      301.47

Min latency in ms:      61.14

Max latency in ms:      856.86

 

If performance does not meet expectations for the path configuration, you should check node performance statistics, use available tools to isolate the problem in the network, check switch settings, and so forth.

 

 

For more information, see the ONTAP 9 documentation center

 

FAS3140 CIFS share copy to Synology


I have a FAS3140 CIFS share of around 30 TB and would like to copy it to an external system (a Synology on the same network). Initially I tried to copy it via the Windows share from a desktop, but the speed is 10 MB/s, which may take 6 months to complete.

I have shell access to both the Synology and the NetApp filer. What command can I use to copy it fast? SFTP? Or should I mount the NetApp CIFS share on the Synology? Or is there any other way? I just need the data; there is no permission issue.

Please suggest


Note: My netapp02, which has all these CIFS shares, is down (because one of the disk shelf modules failed), and netapp01 has taken over netapp02 and the respective CIFS shares.
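
A hedged sketch of the mount-on-Synology approach, which removes the desktop hop and lets the copy run unattended (IP, share name, credentials, and destination path are hypothetical; rsync can be re-run after interruptions, which matters for a 30 TB copy):

mount -t cifs //<filer_ip>/<share> /mnt/nas -o username=<user>,password=<pass>

rsync -av --progress /mnt/nas/ /volume1/destination/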

High CPU usage after Upgrade from 8.3.2P to 9.1P2


Hello,

 

I don't know if anyone has experienced this one. I have a two-node switchless FAS8040 HA setup with three shelves attached: one DS2246 populated with 10k SAS disks owned by node 1, and two DS4246 shelves populated with SATA disks owned by node 2.

 

I recently upgraded from ONTAP 8.3.2P to 9.1P2 successfully, after generating the AutoSupport Upgrade Advisor plan and following the guide.

 

However, up to a week after the upgrade I'm still seeing higher than usual CPU utilisation, only on node 1, which prior to the upgrade averaged well below 40% and now averages 80-90% with no additional load being placed on the filer.

 

Before the upgrade

Node1::> node run -node Node1 -command sysstat -c 10 -x 3

 

CPU %: 12 14 13 14 26 21 16 14 19 28

 

After the upgrade to 9.1, it is constant, with no change after a week, no jobs running out of the norm, and only on node 1:

 

node run -node Node1 sysstat -M 1

 

ANY1+ ANY2+ ANY3+ ANY4+ ANY5+ ANY6+ ANY7+ ANY8+  AVG CPU0 CPU1 CPU2 CPU3 CPU4 CPU5 CPU6 CPU7 Nwk_Excl Nwk_Lg Nwk_Exmpt Protocol Storage Raid
 100%  100%  100%   98%   92%   80%   66%   51%  86%  76%  87%  87%  87%  87%  88%  88%  87%       1%     2%       90%       0%      0%   0%

Raid_Ex Xor_Ex Target Kahuna WAFL_Ex(Kahu) WAFL_MPClean SM_Exempt
     3%     0%     9%     2%      21%(14%)           0%        0%

Exempt SSAN_Ex Intr Host Ops/s  CP
   10%     19%   7% 523%  6673  0%

 

The output above is truncated for ease of viewing. I monitored for a period of time, with the values changing only marginally, but all a lot higher than where they ran before the upgrade.
