Channel: ONTAP Discussions topics
Viewing all 4962 articles

ONTAP recipes: Easily create storage pools


Did you know you can...?

 

Easily create a storage pool

 

Use System Manager to combine SSDs into a storage pool, a collection of SSDs (cache disks) that an HA pair can share, allocating the SSDs and SSD spares to two or more Flash Pool aggregates at the same time.

 

  • Both nodes of the HA pair must be up and running in order to allocate SSDs and SSD spares through a storage pool.
  • Storage pools must have a minimum of 3 SSDs.
  • All SSDs in a storage pool must be owned by the same HA pair.
  • You cannot use partitioned SSDs when creating a storage pool by using System Manager.
  1. Click Storage > Aggregates & Disks > Storage Pools.
  2. In the Storage Pools window, click Create.
  3. In the Create Storage Pool dialog box, specify the name for the storage pool, disk size, and the number of disks.
  4. Click Create.
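The same pool can also be created from the ONTAP CLI; a minimal sketch, where the pool name and disk IDs are hypothetical examples rather than values from this recipe:

 storage pool create -storage-pool sp1 -disk-list 1.0.22,1.0.23,1.0.24

 storage pool show -storage-pool sp1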

 

For more information, see the ONTAP 9 documentation center.


ONTAP Recipes: Deploy MongoDB replica sets that retain storage efficiency


ONTAP Recipes: Did you know you can…?

 

Deploy MongoDB replica sets that retain storage efficiency

 

  1. Create a flexvol to host the MongoDB primary. Mount the flexvol on the primary host.
  2. Take a snapshot on the flexvol created in step 1.
  3. Create volume clones based on the snapshot taken in step 2. Create as many volume clones as there are secondaries in the replica set.
  4. Mount one clone volume on each secondary host.
  5. Initialize the replica set. Since the data in the primary and clone volumes is the same, the primary and secondary members of the replica set are in sync immediately.
  6. As the source and clone volumes diverge due to new writes coming in, customers can destroy the older snapshot (from step 2) and recreate a newer set of clones.

The workflow can be easily automated using the NetApp WFA (Workflow Automation) tool or shell scripts, and it works for both SAN and NAS.
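In ONTAP CLI terms, steps 2 and 3 above might look like this for a two-secondary replica set (the vserver, volume, and snapshot names are hypothetical):

 volume snapshot create -vserver vs1 -volume mongo_primary -snapshot rs_seed

 volume clone create -vserver vs1 -flexclone mongo_sec1 -type RW -parent-volume mongo_primary -parent-snapshot rs_seed

 volume clone create -vserver vs1 -flexclone mongo_sec2 -type RW -parent-volume mongo_primary -parent-snapshot rs_seed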

 

For more information, see the ONTAP 9 documentation center.

Connect-NcController using current context


I am using the Data ONTAP 4.4 PowerShell Toolkit to connect to a cDOT cluster/vserver, and I have it working, but I have one question.

 

Is there no way to tell the command to use the currently logged-on user's token? Do I have to prompt for the user's credentials even though they are right there in the Windows address space to use?

 

When I use Connect-NaController, I only need to supply the -Name parameter and it uses the currently logged-on user.

 

SnapDrive process failure in Linux -- more

The "SnapDrive process failure in Linux" thread was closed, so I'm adding more comments here. I had the same issue with SDU v5.3.1 and tried "export LVM_SUPPRESS_FD_WARNINGS=1" in /etc/profile; it did not help. Later I found the issue was in /etc/multipath.conf; after fixing it, SDU works fine. Just wanted to share this.

FAS8060 ONTAP upgrade from 8.3.1 to 8.3.2P11 queries



Hi,

 

I am performing the upgrade for the cDOT system mentioned above. Yesterday, NetApp Support, as part of an ongoing investigation into a quota event issue, re-initialized the quotas. The support engineer didn't realize this would scan the entire volume, and with many thousands of files it is taking ages to progress. After 2 full days it is still at 6%; after consulting with a senior engineer, he said he had been looking at the quota re-initialize timings of a qtree in his lab, which is why he made that assumption.

Anyway, as part of another investigation into ZAPI calls being dropped, we have been advised to perform an ONTAP upgrade.


Can someone advise whether it's OK to perform an ONTAP upgrade while quotas are initializing (scanning)? And after the takeover/giveback, will the scan resume from where it left off?


Thanks,
-Ashwin

Basic use case; utterly confused


Seems like a basic use case: take periodic backups from primary storage, and "vault" them to secondary, without continuing to consume space on primary.  Essentially, replace a tape backup system.

The problem with a "normal" mirror/vault is that the snapshots keep growing on the primary. In our case they soon fill the primary, causing a production problem. For a specific volume, I just want to take a monthly archival backup and have it available for possible restore; I don't care about all the deltas during the month, which cause the snapshot to grow uncontrolled.

Support says there's no way, other than breaking the relationship after taking the copy. But then we lose the ability to do anything with the backups if required. Is there no way, e.g. via PowerShell, that others have found?

Getting rid of tape backup was a big selling point in getting us to buy a secondary system. Yes, for MOST filesystems we're doing frequent SnapMirror updates, and we accept that those snapshots must remain. But if all I want is an occasional 100% copy of a volume...

Is it safe to perform giveback?


On a FAS2240-4 cluster running ONTAP version 8.1.2 in 7-Mode, the status of the nodes is:

 

Node 1: node1 has taken over Node2

Node 2: waiting for giveback

 

When doing a 'cf giveback', I get the following message:

 

Partner not waiting for giveback, giveback cancelled.

To do a giveback without checking for partner readiness, please either set option "cf.giveback.check.partner" to off before doing "cf giveback" again or do "cf giveback -f".

 

I have the impression this is the case because of the following:

 

  slot 0: Interconnect HBA: Mellanox IB MT25204
GUID: 0x100000a0983871ae LID: 0x4 Remote LID: 0x0
Firmware rev: 1.0.800
Hardware rev: 160
Command rev: 1
Cluster Interconnect Port: port not active

 

I guess a "cf giveback -f" will not succeed... Right?

 

Any suggestions?

 

 

 


 

FAS8040 SP losing IP address


I have got a new FAS8040 to configure.

I am accessing the system console from the SP and booting ONTAP.

While doing the initial node management configuration, my session freezes. It happens at different steps: once while assigning the management port, once while assigning the management IP address.

 

After that I can't reconnect to the SP via SSH anymore. The SP is losing its IP configuration. A technician in the DC needs to reassign the IP address manually to let me connect back via SSH.

 

It is a strange situation; it has never occurred before. Have any of you had a situation like this?

 

 


Error deploying from Ontap Edge 'vsa.ipaddr'


Hi world,

 

We are trying to deploy a NetApp server from Data ONTAP Edge, and I always get the error: Invalid/missing configuration property 'vsa.ipaddr'.

I have read the documentation carefully and have done everything (I think) correctly, but I get this error every time.

 

My configuration is an ESX 5.5.0 update host with 1 TB of SAN disk (48 cores and 256 GB RAM). The Edge version is 824_123_v.

My ESX host is not in vCenter, and I have full communication with the world.

I have tried all different configurations (adding more IPs, system and NVRAM on the same disk and separated, filling in all requested fields such as DNS, etc., and leaving them empty by default, installing from the VM wizard or VM setup, etc.).

 

I have seen other people with the same problem because they started the server from VMware; I start it from vsadmin at the end of the wizard (answering yes, and I have tried starting it manually too).

 

 

The log:

Hit [Enter] to boot immediately, or any other key for command prompt.
Booting...
x86_64/freebsd/image1/kernel data=0x922918+0x3db680 syms=[0x8+0x45318+0x8+0x2e84b]
x86_64/freebsd/image1/platform.ko size 0x2b0cd0 at 0xe72000
NetApp Data ONTAP 8.2.4 7-Mode
Copyright (C) 1992-2015 NetApp.
All rights reserved.
md1.uzip: 39168 x 16384 blocks
md2.uzip: 7040 x 16384 blocks
WARNING: Data ONTAP mode is not specified, defaulting to 7-mode.
ERROR: Invalid/missing configuration property 'vsa.ipaddr'.
ERROR: Data ONTAP startup failed - initiating shutdown.
Here are the permissions on /mroot directory:
drwxr-xr-x 3 root wheel 512 Oct 27 2015 /mroot
/mroot directory doesn't exist or is it not writeable... aborting coverage dump
Waiting for PIDS: 709.
The VMware service must be run from within a virtual machine.
Log: The VMware service must be run from within a virtual machine.
Log: Backtrace:
The VMware service must be run from within a virtual machine.
Log: The VMware service must be run from within a virtual machine.
Log: Backtrace:
Terminated

 

Any ideas to help?

Downtime required to enable "options nfs.vstorage.enable on"?


To enable "options nfs.vstorage.enable" for VAAI, do we need any downtime, or are there any prerequisites to follow?

 

Can we enable it on the fly? Will it impact anything?

 

Thanks in advance!

NETAPP and Recent SMB1 Issues


Can anyone point me to any official NetApp responses to the SMB1 vulnerability discussed here:

 

https://whyistheinternetbroken.wordpress.com/2017/02/22/smb1-vuln-ontap/

 

I am not in a position today to move to ONTAP 9.2 and disable SMB1. However, our security scanners are now reporting this daily.

 

If I can quote an official response from NetApp stating that its SMB1 is not vulnerable, then I can get a risk acceptance to cover the scan failures until I upgrade to 9.2.

ONTAP SELECT Server moved to new IP Range, How to re-import into deploy server


A customer needed to move an ESX server running ONTAP Select 9 to another site, which required an IP address change for both the host interfaces and the Select management and data interfaces. Is there a way to re-import this into a Deploy server? Or, in another scenario, if you lost your current Deploy server, is there a way to redeploy one and import the Select system?

ONTAP Recipes: Work around the lack of FTP support in ONTAP with CIFS/SMB or NFS


ONTAP Recipes: Did you know you can...?

 

Work around the lack of FTP support in ONTAP with CIFS/SMB or NFS

 

Clustered ONTAP does not natively support FTP access, which means you can't set up an FTP server in clustered ONTAP as you could with 7-Mode. That doesn't mean you can't use ONTAP to host data accessed via FTP.

 

ONTAP steps

 1. Create a volume on the storage system to host your FTP-accessed data. If desired, create qtrees in the volume to act as directories for users, and create folders inside the qtrees for FTP access.

2. Create a way to access the data (either CIFS share or NFS export).

3. Lock down permissions to the volume at a file/folder level.

 

FTP server steps (Windows): 

Whether you're using a Windows server or client, you can turn FTP on or off as a Windows feature in most versions of Windows:

1. Enable FTP (method will vary based on Windows OS - for example: https://technet.microsoft.com/en-us/library/hh831655(v=ws.11).aspx#Step1)

2. Add an FTP site; set the UNC path to the CIFS share as the physical path.

3. Configure FTP as desired (SSL, IP addresses, ports, authentication, security, etc).

4. Click Sites and select the FTP site. Click "Basic Settings" and then click the "Test Settings" button.

 



5. If successful, test FTP access, puts, gets, etc. 

 

FTP server steps (CentOS 7):

The following link shows how to configure FTP in CentOS 7.
https://www.unixmen.com/install-configure-ftp-server-centos-7/


For Ubuntu:
https://www.digitalocean.com/community/tutorials/how-to-set-up-vsftpd-for-a-user-s-directory-on-ubuntu-16-04

 

Additionally, you can find many examples of FTP servers leveraging Docker containers at the Docker hub.

https://hub.docker.com/search/?isAutomated=0&isOfficial=0&page=1&pullCount=0&q=vsftp&starCount=0

1. Install FTP on the Linux VM

2. Install NFS client tools (if not already installed)

3. Mount the NFS export to the client and add it to /etc/fstab to remount on reboots, or use autofs configuration to use homedirs for FTP. Recommended mount options:

   rsize=65536,wsize=65536,bg,hard

4. Configure FTP to use the NFS export as the FTP datastore. For vsftp, add the following lines to the /etc/vsftpd/vsftpd.conf file:

   user_sub_token=$USER

   local_root=/NFSpath/$USER/ftp

5. Configure/secure FTP as desired.

6. Test FTP access, puts, gets, etc.
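As an illustration of step 3 above, an /etc/fstab entry using the recommended mount options might look like this (the SVM data LIF name, export path, and mount point are hypothetical):

   svm1-data:/ftpvol  /srv/ftpdata  nfs  rsize=65536,wsize=65536,bg,hard  0 0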

 

 

ONTAP Recipes: Manage storage efficiency policies in ONTAP


ONTAP Recipes: Did you know you can...?

 

Manage storage efficiency policies in ONTAP  

 

Background (i.e., post-process) data deduplication and data compression can be run on a FlexVol volume or an Infinite Volume.

  • You can schedule a deduplication operation by creating job schedules as part of the efficiency policies, or you can specify a threshold percentage, which triggers the deduplication or compression operation only after the amount of new data exceeds the specified percentage.
  • You can disassociate an efficiency policy from a volume, stopping any further schedule-based deduplication and data compression operations on it, by using the volume efficiency modify command.

A volume efficiency policy exists in the context of a storage virtual machine (SVM). Volume efficiency policies support only job schedules of type cron. "Volume efficiency" is aliased to "sis" in the cluster shell. Apart from enabling and disabling the individual storage efficiency features, there are various policies that govern their behavior.

 

ONTAP provides the following storage efficiency policies:

  • None: none of the storage efficiency features are enabled.
  • Default: dedupe is ON and the deduplication operation start time is set to midnight.
  • "-": dedupe is ON, but the user has set a schedule different from the default (midnight), so the policy is displayed as "-".
  • Inline: inline data compression, inline data deduplication, inline data compaction and inline data zero detection are enabled.

 You can typically create policies based on:

  • Schedule: when the post-process operation (deduplication/compression) should start.
  • Data change rate: the percentage of data changed since the last operation.
  • QoS: the QoS setting.
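The data-change-rate idea behind threshold-type policies can be illustrated with a small arithmetic sketch (this shows only the concept, not ONTAP's actual internal accounting; the function and block counts are hypothetical):

```python
def threshold_reached(new_data_kb: int, used_kb: int, threshold_percent: float) -> bool:
    """Conceptual sketch: a threshold-type efficiency policy triggers once
    newly written data exceeds threshold_percent of the volume's used space.
    Hypothetical illustration only, not ONTAP's real internal check."""
    if used_kb == 0:
        return new_data_kb > 0  # any write on an empty volume counts as a change
    return (new_data_kb / used_kb) * 100 >= threshold_percent

# With a 40% threshold, 400 KB of new writes on 1000 KB of used space triggers.
print(threshold_reached(400, 1000, 40.0))  # prints True
```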

The following commands are useful for configuring deduplication policies on an ONTAP system.

 

Creating an Efficiency Policy

 

  Create a volume efficiency policy named pcy1 that triggers an efficiency operation daily. 

 volume efficiency policy create -vserver vs1 -policy pcy1 -schedule daily

 

  Create a volume efficiency policy named pcy1 that triggers an efficiency operation when the threshold percentage reaches 40%.

 volume efficiency policy create -vserver vs1 -policy pcy1 -type threshold -start-threshold-percent 40%

 

Setting a Volume Efficiency Policy 

 

  Assign the volume efficiency policy named pcy1 with volume vol1.

 volume efficiency modify -vserver vs1 -volume vol1 -policy pcy1

 

Modifying a Volume Efficiency Policy

 

  Modify the volume efficiency policy named pcy1 to run every hour.

 volume efficiency policy modify -vserver vs1 -policy pcy1 -schedule hourly

 

  Modify a volume efficiency policy named pcy1 to threshold 20%.

 volume efficiency policy modify -vserver vs1 -policy pcy1 -type threshold -start-threshold-percent 20%

 

Viewing Volume Efficiency Policies

 

  Display information about the policies created for the SVM vs1.

 volume efficiency policy show -vserver vs1

 

  Display the policies for which the duration is set to 8 hours.

 volume efficiency policy show -duration 8

 

Disassociate a Volume Efficiency Policy from a Volume

 

  Disassociate the volume efficiency policy from volume vol1.

 volume efficiency modify -vserver vs1 -volume vol1 -policy -

 

Delete a Volume Efficiency Policy Not Associated with any Volume

 

   Delete a volume efficiency policy named pcy1.

  volume efficiency policy delete -vserver vs1 -policy pcy1

ONTAP Recipes: Use ONTAP Storage Efficiency features to get maximum storage efficiency for AFF


 

ONTAP Recipes: Did you know you can...?

Use ONTAP Storage Efficiency features to get maximum storage efficiency for AFF 

  

Storage efficiency is the ability to store and manage data while consuming the least amount of space on disk with little or no impact on performance, resulting in a lower overall cost. Efficiency is not just about making the most of the physical blocks on the disk, it's also about being able to easily provision flexible storage. 

 

All ONTAP Storage Efficiency features are supported on All Flash FAS, including:

  • Thin provisioning: storage space is dynamically allocated to each volume or LUN as data is written.
  • Compression: reduces the physical capacity that is required to store data on the storage system by compressing data chunks.
  • Deduplication: eliminates extra copies of same data by saving just one copy of the data and replacing the other copies with pointers that lead back to the original copy.
  • Compaction: packs multiple small files and I/Os into a single physical block, reclaiming space that would otherwise go unused.

Features enabled on the volume can be viewed and modified using the volume efficiency config command. 

 volume efficiency config –volume $VOL_NAME –vserver $SVM_NAME

 

Thin provisioning is enabled by default (space-guarantee is none):

  volume show -vserver $SVM_NAME -volume $VOL_NAME -fields space-guarantee

 

About Storage Efficiency

Two modes of operation support flexible volume deduplication, compression and data compaction:

  • Inline Storage Efficiency
  • Background (i.e. Post-process) Storage Efficiency

Inline Storage Efficiency 

The storage efficiency savings are achieved before the data is written to disk; this also includes cross-volume deduplication. All inline efficiency features are enabled by default on newly created and existing flexible volumes of an AFF system. They can also be explicitly assigned later by attaching an "inline-only" policy.

 

To explicitly assign an inline policy:

  vol efficiency config -volume $VOL_NAME -vserver $SVM_NAME -policy inline-only

    Specific inline features can be enabled by using the corresponding fields in the CLI above.

 

Background (i.e. Post-process) Storage Efficiency

The storage efficiency savings are achieved after the data is written to disk. The process can be run by issuing manual commands or by setting volume efficiency policies based on a schedule or threshold. This is disabled by default on AFF and can be enabled on a volume by attaching a policy/schedule.

 

 To get deduplication, compression, and compaction savings on data written since the most recent efficiency operation:

  volume efficiency start -volume $VOL_NAME -vserver $SVM_NAME -dedupe true -compression true -compaction true

 

To achieve efficiency savings on old data in the volume:

  volume efficiency start -volume $VOL_NAME -vserver $SVM_NAME -scan-old-data true -dedupe true -compression true -compaction true

 

For background deduplication, a system-created policy called "default" runs every day at midnight (12:00 AM). To assign a custom policy with a different schedule:

  volume efficiency modify -vserver $SVM_NAME -volume $VOL_NAME -policy $POLICY_NAME

 

To enable both inline and background storage efficiencies on a volume which has an "inline-only" policy:

volume efficiency modify -vserver $SVM_NAME -volume $VOL_NAME -policy default

 

volume efficiency modify -volume $VOL_NAME -inline-dedupe true -cross-volume-inline-dedupe true -compression true -inline-compression true -data-compaction true
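Once efficiency operations have run, the savings can be verified per volume. A sketch using the volume efficiency and volume show commands (exact field names may vary between ONTAP releases):

  volume efficiency show -vserver $SVM_NAME -volume $VOL_NAME

  volume show -vserver $SVM_NAME -volume $VOL_NAME -fields sis-space-saved,sis-space-saved-percent,compression-space-saved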

 


How to check for snapmirror version-flexible mirrors in ONTAP?


I am trying to check for this setting in ONTAP from the CLI and/or SysMgr.

 

Does anyone know how to see this in the system?

Flexgroup replication


We are in the process of implementing FlexGroups on one of our large-volume storage clusters. We have two main data centers, and we use SVM DR extensively on our other c-mode clusters, but SVM DR is not supported with FlexGroups. Obviously we can SnapMirror the FlexGroup, but how do we retain all the export policies, etc., for when we ever have to bring up the SVM in our other data center?

HWU - Nexus 5596 Cluster Switch Support for 40g/4x10g Breakout Cables


HWU only shows that the 3 metre breakout cable (X66120-3) is supported by the Cisco Nexus 5596UP cluster switches. The 5 metre version (X66120-5) is not listed, yet the NetApp CN1610 and Nexus 3132Q switches support both lengths.

 

Is this correct or an omission?

SnapLock privileged delete


Hi All,

 

     I am trying the SnapLock feature on a NetApp simulator.

 

     I understand the process for creating an Enterprise WORM folder, audit log, and privileged delete account.

 

     But I have a question about the system administrator and vsadmin-snaplock accounts.

     1. In order to prevent the system administrator from having too much power to delete WORM files in Enterprise mode, we have to create another account that has privileged delete rights. Is that the major purpose of separating the system administrator and vsadmin-snaplock accounts?

     2. If yes, there is no method to prevent the system administrator from creating a vsadmin-snaplock account or modifying the password of the vsadmin-snaplock account. That means the administrator can perform a privileged delete whenever he wants. Is that right?

 

      I know the audit log will record the whole process. But the log is just a record; it cannot prevent the wrong from happening.

      Do you know of any way to prevent the administrator from creating or modifying the vsadmin-snaplock account at any time?

 

Thanks,

Billy

Routing problems with Intercluster LIF

$
0
0

Hello,

 

In a small environment I have a 2-node system and a single-node system in a remote office connected via VPN. I make daily SnapMirror updates through that VPN.

 

I now have a problem: how do I make an intercluster configuration that works every time?

 

When the intercluster LIF is in the same IP subnet as the management interfaces, I have the problem that sometimes services (like AutoSupport) can use the intercluster IP as the source rather than the node management LIF. (Documented in the network management guide.)

 

So I configured a separate management network. With ONTAP 9.1 all was fine, but after an upgrade to 9.2 I can't reach the SSH and HTTP services on the management interfaces from clients in the same subnet as the intercluster LIF.

 

Subnet A - NetApp Management interfaces

Subnet B - Intercluster and also some Clients

 

With the intercluster LIF enabled in subnet B, I can ping the management interfaces in subnet A from a client in subnet B, but I can't use SSH or HTTP. When I disable the intercluster LIF, I can connect via SSH or HTTP.

 

As a workaround I have now made 3 subnets, with a dedicated subnet for intercluster traffic.

 

But can it be correct that management isn't usable when an intercluster LIF is present in the source network?

 

Kind regards

Stefan

 


