Channel: ONTAP Discussions topics
Viewing all 4959 articles

Unified System manager for C-Mode

We are using many C-Mode clusters, and I have to log in to the embedded System Manager on a per-cluster basis. Is there a unified tool to manage them all, i.e. is the old Java-based System Manager applicable to C-Mode 9.x?


NetApp Data ONTAP Edge to ONTAP Select 9 Conversion Program


 

Following the release of NetApp® ONTAP® Select 9 and the Data ONTAP® Edge end-of-availability (EOA) announcement, NetApp is offering a conversion program that enables current Edge customers with an active SSP to convert their licenses to ONTAP Select Standard free of charge. This limited-time offer is available regardless of which type of Edge license the customer has (7-Mode Value, 7-Mode Premium, or clustered Data ONTAP). The program ends July 31st, 2017.

https://netapp--c.na28.content.force.com/servlet/fileField?id=0BE1A000000PLed

 

Data ONTAP Edge to ONTAP Select Transition Notes

https://netapp--c.na28.content.force.com/servlet/fileField?id=0BE1A000000PM6X

 

Performing maintenance on nodes of a cluster


Good afternoon experts,

 

We have a six-node (3 HA pair) cluster. Our cluster is relatively simple in terms of how we use it: all SAN via FCoE. We want to perform maintenance on one of the HA pairs because we want to relocate it to a different rack. We already accept the outage for any data hosted on the HA pair.

 

Is there anything I should watch out for when I shut down this HA pair? Anything special I should take into consideration to make sure that the cluster and the remaining HA pairs continue to function?
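For reference, a rough pre-shutdown sequence on the ONTAP CLI might look like the following. This is a sketch, not a definitive procedure: node names are hypothetical, epsilon reassignment requires advanced privilege, and each step should be verified against NetApp's documentation for your ONTAP version.

```
::> cluster show                                        # confirm node health and locate epsilon
::> set -privilege advanced
::*> cluster modify -node nodeA -epsilon false          # if epsilon sits on the pair being shut down...
::*> cluster modify -node nodeC -epsilon true           # ...move it to a node that stays up
::*> set -privilege admin
::> storage failover modify -node nodeA -enabled false  # disable takeover for the HA pair
::> system node halt -node nodeA
::> system node halt -node nodeB
```

With epsilon on a surviving node, the remaining four nodes keep cluster quorum while the pair is powered off.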

 

Thanks,

DB

How to move root volume to aggr with less disks?


Hi there.

 

I've set up our first AFF A200 system with ONTAP 9.1. I powered it up and went through the Guided Cluster Setup.

 

Reviewing the nodes afterwards, I see the root aggregate is using 10 of the 3.8TB SSDs, each presenting 53.88GB of usable space out of 3.49TB of physical size. The root volume is set to 368GB.

 

So my first question is: what's going on here? Why 10 disks? Why has the system set the drives to 53.88GB of usable space? Is this the standard default setup now? Is it still best practice to have a dedicated root aggregate consuming this many drives?

 

How can I move this to the standard dedicated 3 drives?
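If a smaller dedicated root aggregate is really what is wanted, ONTAP provides an advanced-privilege command that rebuilds the root aggregate on a specified disk list. A sketch with hypothetical node and disk names; check whether it applies to ADP-partitioned AFF drives on your system before using it:

```
::> set -privilege advanced
::*> system node migrate-root -node cluster1-01 -disklist 1.0.10,1.0.11,1.0.12 -raid-type raid_dp
```

Note that the node reboots during the migration, so plan a maintenance window per node.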

 

Thanks for the help.

Message: ses.multipath.ReqError: SAS disk shelf detected without a multipath configuration.


Hi All,

 

I have been getting the alert below. May I know whether this is a kind of bug?

 

ONTAP version: 8.2.3P5

 

Message: ses.multipath.ReqError: SAS disk shelf detected without a multipath
configuration.

Description: This message occurs when the system detects that a SAS disk shelf
is in a single-path cabling configuration.

Action: Check for SAS disk shelf enclosures with only a single path using the
'sasadmin expander_map' command. Physically inspect all SAS cables on the
attached storage for secure and correct connection.

 

Thanks, 

Nayab

Netapp snapmirror question


Hello,

 

We're going to perform a DR test for the customer. Production will be up during the test, and the customer will bring up the servers at the DR site to test.

 

I am new to NetApp, so I want to clarify the steps at the DR site:

 

Break the replication mirrors.

Create the igroups and map the LUNs.

Once the LUNs are mapped, rescan the ESX servers for the presented LUNs and proceed with VM recovery.

 

I am referring to an old document that was prepared a year back, and it doesn't mention quiescing. My question: is it really not necessary to quiesce the replication prior to breaking it in this scenario?
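As a sketch of the DR-side sequence with a quiesce added before the break (SVM, volume, igroup, and LUN names below are hypothetical):

```
dr::> snapmirror quiesce -destination-path svm_dr:vol_vmfs01   # let any in-flight transfer finish, block new ones
dr::> snapmirror break -destination-path svm_dr:vol_vmfs01     # make the DR volume read-write
dr::> lun igroup create -vserver svm_dr -igroup esx_dr -protocol fcp -ostype vmware
dr::> lun map -vserver svm_dr -path /vol/vol_vmfs01/lun0 -igroup esx_dr
```

The quiesce ensures the break happens at a consistent snapshot boundary rather than mid-transfer, which is why many runbooks include it even though break alone will succeed.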

 

Thanks !

 

 

 

 

Adjusting Load Sharing Mirrors For New Equipment


Scenario: we have a 4-node 8.3.2P10 cluster consisting of an 8060 HA pair and an 8080 AFF HA pair. We are adding a new 8080 AFF HA pair to the cluster and migrating everything from the 8060s to the new 8080s. We will then decommission the 8060s.

 

My question relates to load-sharing mirrors for SVM root volumes. The root volume for all 18 SVMs is on node 1, one of the 8060s that will be decommissioned. What is the most straightforward way to move the root volumes to the new 8080 AFF system and keep the replication jobs intact? Would vol move work without disrupting the replication jobs? Any suggestions?
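One commonly suggested approach, sketched below with hypothetical SVM, volume, and aggregate names (verify against the 8.3.2 documentation): vol move the SVM root itself, create fresh LS mirror destinations on the new nodes, bring them into the LS set, then retire the mirrors that live on the 8060s.

```
::> volume move start -vserver svm1 -volume svm1_root -destination-aggregate aggr_8080_01
::> volume create -vserver svm1 -volume svm1_root_m3 -aggregate aggr_8080_01 -size 1g -type DP
::> snapmirror create -source-path svm1:svm1_root -destination-path svm1:svm1_root_m3 -type LS
::> snapmirror initialize-ls-set -source-path svm1:svm1_root
::> snapmirror delete -destination-path svm1:svm1_root_m1    # old LS mirror on the 8060
::> volume delete -vserver svm1 -volume svm1_root_m1
```

Recreating the LS destinations (rather than vol moving them) keeps the LS set consistent with where the nodes will actually be after the 8060s leave the cluster.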


Cluster peering and Snapmirror question


Hi,

 

The diagram below shows the intercluster LIF setup: IC1 & IC2 LIFs are created on node A, IC3 & IC4 LIFs on node B, and IC5 & IC6 LIFs in the other cluster (nodes C & D).

 

IC1 & IC3 are at 10.0.20.98 and 10.0.20.99

IC2 & IC4 are at 10.0.30.98 and 10.0.30.99

IC5 & IC6 are at 10.0.50.98 and 10.0.50.99

 

We will setup cluster peering between 2 sites.

 

1. Can anyone confirm whether this configuration is correct?

2. How does the replication traffic route?
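For reference, the peering itself is created from both sides by pointing at the remote intercluster LIF addresses from the diagram. A sketch; the cluster prompts are hypothetical:

```
clusterA::> cluster peer create -peer-addrs 10.0.50.98,10.0.50.99
clusterB::> cluster peer create -peer-addrs 10.0.20.98,10.0.20.99,10.0.30.98,10.0.30.99
```

Replication traffic flows directly between intercluster LIFs, so every intercluster LIF on one side needs an IP route to the peer addresses on the other side.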

 

Thanks

Paul

 

Snapmirror intercluster LIF.png

How to Convert 7-Mode (8.2.3P3) to Cluster mode (9.2)


Hello All,

 

Could someone please share the procedure to convert a 7-Mode system (FAS8020, 8.2.3P3) to Cluster-Mode 9.2? We have the licenses.

 

Thanks

Chaitan

SVM DR preparation guide for CIFS doesn't work

Hi,

I followed the ONTAP 9 SVM DR preparation guide to create CIFS at the destination, but it prompts an error:

::> vserver cifs domain preferred-dc add -vserver svmDR -domain my.local -preferred-dc x.x.x.x

Error: command failed: This operation is not permitted on a Vserver that is configured as the destination for Vserver DR.

Different MTU size on one multimode_lacp ifgrp.


Need input: if we have two ports in an LACP ifgrp set to MTU 9000 on the switch side, and we configure two VLAN-tagged interfaces on it, a1a-247 with MTU 1500 and a1a-320 with MTU 9000, will it work fine?
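In general, a VLAN port's MTU can be set at or below the MTU of its underlying ifgrp, so this mix is configurable. A sketch, assuming a node named node1 (verify the behavior for your ONTAP version):

```
::> network port modify -node node1 -port a1a -mtu 9000       # the ifgrp itself carries jumbo frames
::> network port modify -node node1 -port a1a-320 -mtu 9000
::> network port modify -node node1 -port a1a-247 -mtu 1500   # a lower MTU on this VLAN is allowed
```

The switch side should allow jumbo frames on the trunk; hosts on the 1500-MTU VLAN simply never send frames larger than 1500.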

ONTAP Recipes: Easily create a SnapLock volume


 

Did you know you can…

 

Easily create a SnapLock Volume?

 

1. Install the SnapLock license.

license add -license-code <key>

 

2. Initialize the compliance clock on all the nodes of the cluster.

snaplock compliance-clock initialize -node <nodename>

 

3. Create a SnapLock aggregate of the appropriate SnapLock type.

storage aggregate create -aggregate <aggrname> -diskcount <count> -snaplock-type <enterprise|compliance>

 

4. Create a volume on the aggregate.

volume create -vserver <vservername> -volume <volname> -aggregate <aggrname> -size <size>
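Putting the four steps together, a hypothetical session (the license key placeholder, node, aggregate, SVM, and volume names are all illustrative):

```
::> license add -license-code <snaplock-key>
::> snaplock compliance-clock initialize -node cluster1-01
::> storage aggregate create -aggregate aggr_slc -diskcount 6 -snaplock-type compliance
::> volume create -vserver svm1 -volume vol_worm -aggregate aggr_slc -size 100g
```

Note that the compliance clock can only be initialized once per node, so step 2 is deliberate and irreversible.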

 

 

For more information, please see the ONTAP 9 documentation center.

ONTAP Recipes: Easily make a file into a WORM file on a SnapLock volume via NFS


ONTAP Recipes: Did you know you can…?

 

Easily make a file into a WORM file on a SnapLock volume via NFS

 

1. On the NFS host on which the volume is mounted, change the permissions on the file to read only. This makes the file a WORM file.

 

  chmod -w <file>

 

2. To override the default retention period setting and set a retention period greater than the default, you can either:

 

  • Explicitly set the time of the file to the retention period required.

      touch -a -t <[[CC]YY]MMDDhhmm[.ss]> <file>

 

  • Set the autocommit scan period on the volume. This ensures that any file that has not been modified for the set period will automatically be made a WORM file.

      volume snaplock modify -vserver <vservername> -volume <volumename> -autocommit-period <value>
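As a concrete example of the first option on the NFS mount (date and filename are illustrative; the WORM commit itself only takes effect on a SnapLock volume):

```shell
touch worm_report.dat                      # create the file (or start from an existing one)
touch -a -t 203012311200 worm_report.dat   # set atime to the desired retention date (noon, 31 Dec 2030)
chmod -w worm_report.dat                   # removing write permission commits it to WORM
```

On an ordinary filesystem these commands just set the access time and clear the write bits, which makes the sequence easy to rehearse before running it against a SnapLock volume.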

 

 

For more information, please see the ONTAP 9 documentation center.

ONTAP Recipes: Easily make a file a WORM_APPEND file on a SnapLock volume via NFS


ONTAP Recipes: Did you know you can…?

 

Easily make a file a WORM_APPEND file on a SnapLock volume via NFS

 

 

On the NFS host on which the volume is mounted, create an empty file and change its permissions to read-only; this makes the file a WORM file. Then change the permissions back to read-write; this makes the file a WORM_APPEND file.

 

touch <file>

chmod -w <file>

chmod +w <file>  (setting the write permission to ‘group’ or ‘other’ will also work)

 

 

 

For more information, please see the ONTAP 9 documentation center.


Difference between VSC/SMVI snapshots and OnTap native snapshots?


If we use VSC snapshots (without quiescing the datastore), we can mount them in vSphere and do the recovery. Likewise, we can do the same by mounting snapshots taken natively by ONTAP.

 

Are these two methods the same?

What can the former do that the latter cannot?

If they are the same, can I then use ONTAP snapshots to replace VSC/SMVI?

 

Thank you for your inputs in advance!

Cannot Access Ontap Software After performing factory reset

Dear all, I'm new to NetApp storage. We have a FAS2020 with 12x 2TB disks and a single controller. I have an issue accessing Data ONTAP after performing a factory reset (during boot-up I chose option 4). Here are the errors from the CLI:

missing /etc/java/rt131.jar
Java Virtual Machine is inaccessible. FilerView cannot start until you resolve this problem.
sysconfig: table of valid configurations (/etc/sysconfigtab) is missing

Thanks in advance.

SP Firmware update


Hello all,

 

I have a question regarding firmware on a FAS2650. Can I update the Service Processor firmware from version 5.1 to 5.1P1 while keeping ONTAP on 9.1GA? There is no information about the SP 5.1P1 version in the Service Processor Support Matrix. What are the rules for compatibility between "P" versions of SP firmware and ONTAP?

 

Thank you in advance.

 

Regards.

Data ONTAP 8.2.3 7-mode simulator ESXi


Data ONTAP 8.2.3 7-mode simulator

 

・Question

 

 The Data ONTAP 8.2.3 7-Mode simulator is registered in ESXi 6.0.0.

 

 After executing the commands below, the simulator restarted.
 I chose (4) in the special boot menu.
 However, it keeps restarting in a loop.
 Is there a solution?

 

・Step

 

 0)Console login with ssh connection.

 

 1)systemshell login

 

 on823> date
 Tue Aug 1 22:13:04 JST 2017
 on823> version
 NetApp Release 8.2.3 7-Mode: Thu Jan 15 21:30:45 PST 2015
 on823> priv set advanced
 Warning: These advanced commands are potentially dangerous; use
 them only when directed to do so by NetApp
 personnel.
 on823*> useradmin diaguser unlock

 on823*> useradmin diaguser password

 Enter a new password:
 Enter it again:

 on823*> systemshell

 Data ONTAP/amd64 (on823) (pts/0)

 login: diag
 Password:

 only when directed to do so by support personnel.

 

 2)Remove Disk v0* & v1*

 

 on823% cd /sim/dev/,disks
 on823% ls
 on823% vsim_makedisks -h

 on823% sudo rm v0*
 on823% sudo rm v1*
 on823% sudo rm ,reservations

 on823% sudo vsim_makedisks -n 14 -t 36 -a 0
 on823% sudo vsim_makedisks -n 14 -t 36 -a 1
 on823% sudo vsim_makedisks -n 14 -t 36 -a 2
 on823% sudo vsim_makedisks -n 14 -t 36 -a 3
 on823% exit

Cluster Peer Unavailable - Data and ICMP Reachable


Attempting to peer two clusters (Cluster #1 and Cluster #2) but cannot get past Availability = Unavailable

 

Cluster #1 is a six (6)-node cluster

 

Cluster #2 is a two (2)-node cluster

 

Tried peering with and without authentication - current peering has Authentication - OK

 

Both Data and ICMP Ping status are session_reachable and interface_reachable respectively on both sides of the peering

 

All ports involved/recognized have the same MTU (9000)

 

Cluster #1 is currently successfully peered and the source to a 3rd-party provider for DR (SnapMirror)

 

Cluster #1 displays both nodes for Cluster #2 for Remote Cluster Nodes - output of cluster peer show -instance

 

Cluster #2 only shows Cluster #1's node1 (of six) for Remote Cluster Nodes - output of cluster peer show -instance

 

Cluster #2 DOES, however, show all six IP addresses for each node of Cluster #1 for Active IP Addresses - output of cluster peer show -instance
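A few commands that may help narrow down the asymmetry, run on both clusters (a sketch; verify availability in your ONTAP version):

```
::> cluster peer ping                           # per-node connectivity over the intercluster network
::> cluster peer show -instance                 # compare Remote Cluster Nodes vs. Active IP Addresses
::> network interface show -role intercluster   # confirm every node has an intercluster LIF that is home
```

Since Cluster #2 only learns node1 of Cluster #1, checking whether each of Cluster #1's intercluster LIFs can actually be reached from every Cluster #2 node is a reasonable first step.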

 


