Channel: ONTAP Discussions topics

SnapManager for SQL does not delete old filestream snapshots?


I have a SQL database with FILESTREAM. The mdf is in a dedicated vol/LUN, the ldf is in a dedicated vol/LUN, and the FILESTREAM data is in a dedicated vol/LUN. The SMSQL backup job is set to keep 10 snapshots on the primary filer, and the SnapVault backup is kept for 6 months on the secondary filer. On the primary filer, old snapshots on the mdf volume were deleted, but old snapshots on the ldf and FILESTREAM volumes are not being deleted. Is this expected, or did I miss some configuration on the backup job? Thanks.


Quota report shows more space used than available in the qtree's volume


A student reported that a quota is showing 18TB used but the size of the volume is only 9TB. Maybe I'm having a brain freeze but what is going on there?

 

 

(screenshots attached: 02.jpeg, 04.jpeg)

 

Thanks!

Neil

SolidFire and FAS


Trying to find out whether SolidFire talks to FAS or not. Can someone please help?

Compliance report for a secure multi-tenancy SVM setup for several customers

Re: NFS exported and mounted but access denied


Both your vol4 and vol5 are UNIX-style volumes (meaning permissions are governed by the UNIX file permissions) with the same world-rw exports and share permissions. However, vol5 has 700 permissions, so only the owner account root can access it, and unless it is being accessed from 10.22.85.8, that root is not the same as the client's local root account. If you plan to access this volume from multiple clients and/or as any account other than root on 10.22.85.8, then you need to adjust at least the top-level directory's ownership and permissions (or extend your root export option, I suppose).
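For example, a minimal sketch of that adjustment, run as root from the 10.22.85.8 client where root already has access (the filer name, mount point, target owner and mode bits below are only placeholders and depend on who actually needs access):

# mount the export as root from the allowed client
mount -t nfs filer01:/vol/vol5 /mnt/vol5

# hand the tree to the account that should own it, then loosen the top-level mode
chown appuser:appgroup /mnt/vol5      # appuser/appgroup are placeholders
chmod 755 /mnt/vol5                   # owner rwx, everyone else r-x; use 775/770 etc. if group writes are needed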

wafliron with optional commit


I am looking for information on the wafliron with optional commit procedure.

Do the system or nodes reboot after completion?

FAS2240-2 Inaccessible Filer - HA Down


Hi All

Really hoping someone can help here as I am very new to FAS

 

One of our arrays had a disk fail, so we replaced it last week, assigned ownership, and assumed the rebuild started, but since that point:

 

1. We cannot get a CLI session to the filer (1A) hosting this LUN - it asks for credentials and then closes once we enter them.

2. We now cannot get into the clustered pair via the GUI. Previously this was partially working (we could not access disks or aggregates for the filer with the dead disk); now it sits at "authenticating to filer 1A" and never completes.

3. SnapMirror between this filer (1A) and a partner unit (2A) has stopped.

 

The disk that eventually died was in rebuild back in October; the filer started a repair of the disk and ground the systems to a halt. We lost CLI access then as well, and clients very much noticed the rebuild process as their data was running slowly, but it recovered eventually and since then we haven't had any reports.

 

HA mode has apparently been offline since that point, though, which we were not alerted about.

 

1A
Message:
HA mode, but takeover of partner is disabled due to reason : status of backup mailbox is uncertain.

 

CLI from partner filer:
1B> cf monitor
current time: 27Dec2017 09:54:56
UP 68+07:35:13, partner '1A', CF monitor enabled
VIA Interconnect is up (link up), takeover capability on-line
partner may be down, last partner update TAKEOVER_ENABLED (20Oct2017 22:59:23)
takeover scheduled 00:00:15

1B> cf status
1A may be down, takeover will be initiated in 15 seconds.
VIA Interconnect is up (link up).

1B> cf hw_assist status
Local Node(1B) Status:
Active: 1B monitoring alerts from partner(1A)
port 4444 IP address 192.168.1.15
Partner Node(1A) Status:
Active: 1A monitoring alerts from partner(1B)
port 4444 IP address 192.168.1.14

 

 

I am not sure where to go from this point, as this is our first time managing a FAS unit, but I am almost at the point of moving all the data off this LUN to protect our clients' setup.

Any help would be extremely appreciated!

Quick Suggestion on an aggregate layout


Recently I got a setup to configure (a 2-node cDOT cluster) with 48 disks in total.

 

Disk layout ->

 

Shelf 0

 

1.0.0,1.0.2,1.0.4,1.0.6,1.0.8,1.0.10,1.0.12,1.0.14,1.0.16,1.0.18,1.0.20,1.0.22 (12 disks) - shared - node 1 (owner)

 

1.0.1,1.0.3,1.0.5,1.0.7,1.0.9,1.0.11,1.0.13,1.0.15,1.0.17,1.0.19,1.0.21,1.0.23 (12 disks) - shared - node 2 (owner) -

 

Shelf 1

 

1.1.0,1.1.2,1.1.4,1.1.6,1.1.8,1.1.10,1.1.12,1.1.14,1.1.16,1.1.18,1.1.20,1.1.22 (12 disks) - spare (non shared) - node 1 (owner)

 

1.1.1,1.1.3,1.1.5,1.1.7,1.1.9,1.1.11,1.1.13,1.1.15,1.1.17,1.1.19,1.1.21,1.1.23 (12 disks) - spare (non shared) - node 2 (owner)

 

So both nodes' aggr0 aggregates are on shared drives.

 

Now, normally in a smaller system like this we create an active-passive ADP data aggregate by assigning all the disks to node 1.

In this case, not all the disks were shared, so I tried creating one big aggregate of two RAID groups (20+2).

Since not all disks were shared (in ADP), I really had a hard time planning the aggregates. I manually allocated disks to the aggregates so that both aggregates could have shared disks, and then tried to add the spare (non-ADP) disks so that those disks would also convert to shared, but this doesn't seem to work.

 

I didn't want to perform the initial setup again just to make all the disks shared.

 

Finally, I tried the configuration below:

 

 

Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
netappds010_n1_aggr0
           368.4GB   17.86GB   95% online       1 netappds010-n1    raid_dp,
                                                                   normal
netappds010_n1_aggr1
           28.47TB   28.47TB    0% online       0 netappds010-n1    raid_dp,
                                                                   normal
netappds010_n2_aggr0
           368.4GB   17.86GB   95% online       1 netappds010-n2    raid_dp,
                                                                   normal
netappds010_n2_aggr1
           29.41TB   29.41TB    0% online       0 netappds010-n2    raid_dp,
                                                                   normal

 

That is:

1) Both nodes' aggr0 are on the ADP config.

2) Assigned the data partitions of all the node 2 disks to node 1 and created an ADP data aggregate (one RAID group, 20+2) - see the sketch after this list.

3) Assigned all the non-ADP disks to node 2 and created a normal data aggregate (one RAID group, 20+2).
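The disk reassignment in step 2 was done with commands along these lines (a rough sketch assuming ONTAP 9.x advanced-privilege syntax; the disk name is only an example, and the -force flag may or may not be needed depending on whether the data partition already had an owner):

::> set -privilege advanced
# reassign only the data partition of a node-2 disk to node 1 (the root partition stays with its owner)
::*> storage disk assign -disk 1.0.1 -owner netappds010-n1 -data true -force true
# then build the data aggregate on node 1 from the data partitions
::*> storage aggregate create -aggregate netappds010_n1_aggr1 -node netappds010-n1 -diskcount 22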

 

The spare disks look like this:

 

netappds010::> storage aggregate show-spare-disks

Original Owner: netappds010-n1
 Pool0
  Root-Data Partitioned Spares
                                                              Local    Local
                                                               Data     Root Physical
 Disk             Type   Class          RPM Checksum         Usable   Usable     Size Status
 ---------------- ------ ----------- ------ -------------- -------- -------- -------- --------
 1.0.20           SAS    performance  10000 block            1.58TB  53.88GB   1.64TB not zeroed
 1.0.22           SAS    performance  10000 block            1.58TB  53.88GB   1.64TB not zeroed

Original Owner: netappds010-n2
 Pool0
  Spare Pool

                                                             Usable Physical
 Disk             Type   Class          RPM Checksum           Size     Size Status
 ---------------- ------ ----------- ------ -------------- -------- -------- --------
 1.1.22           SAS    performance  10000 block            1.63TB   1.64TB not zeroed
 1.1.23           SAS    performance  10000 block            1.63TB   1.64TB not zeroed

Original Owner: netappds010-n2
 Pool0
  Root-Data Partitioned Spares
                                                              Local    Local
                                                               Data     Root Physical
 Disk             Type   Class          RPM Checksum         Usable   Usable     Size Status
 ---------------- ------ ----------- ------ -------------- -------- -------- -------- --------
 1.0.21           SAS    performance  10000 block                0B  53.88GB   1.64TB zeroed
 1.0.23           SAS    performance  10000 block                0B  53.88GB   1.64TB zeroed
6 entries were displayed.

 

My questions are:

 

1) Is the above kind of configuration (with both nodes' aggr0 on ADP aggregates, node 1's data aggregate on ADP, and node 2's data aggregate as non-ADP) fine, or is there anything problematic here?

2) Do the spare disks covering the aggregates look fine?

 

Please share your comments

 


Changing the SnapMirror source to a new volume


We are about to create a new volume and migrate our existing VMs to it. The current volume has a SnapMirror relationship that will need to be recreated before we delete the old volume. I found the following guide on the topic, but it is written assuming you are moving the volume to a different device. This should still be relevant even if you are keeping the new volume on the same device, correct?

 

https://library.netapp.com/ecmdocs/ECMP1368826/html/GUID-6C850BA4-E522-4F68-841F-D7E273ADE782.html
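For what it's worth, the relationship-creation commands themselves look the same whether the new source volume lives on the same controller or a different one. A rough sketch (assuming clustered ONTAP syntax; SVM and volume names are placeholders):

# create and baseline a relationship from the new source volume
::> snapmirror create -source-path svm_prod:vm_vol_new -destination-path svm_dr:vm_vol_new_dp -type DP
::> snapmirror initialize -destination-path svm_dr:vm_vol_new_dp

# once the new relationship is healthy, tear down the old one
::> snapmirror delete -destination-path svm_dr:vm_vol_old_dp     (on the destination cluster)
::> snapmirror release -destination-path svm_dr:vm_vol_old_dp    (on the source cluster)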

Any way to find and list volumes deleted on particular days/times?


'event log show' only keeps a couple of days, and I could not find such information in Unified Manager.
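For reference, the sort of EMS query involved looks roughly like this (a sketch assuming ONTAP 9.x syntax; the message-name pattern is only a guess at how a volume-deletion event is named, and the time window is limited by EMS retention):

# list EMS events in a time window, filtered by message name
::> event log show -time "12/25/2017 00:00:00".."12/28/2017 00:00:00" -message-name *vol*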

 

What would you think?

Licensing for FAS 3220


Hi All,

 

Hoping somebody can point me in the right direction here. We have acquired 2 x FAS3220 filers and 2 x DS4243 shelves. I have cabled these together ready for HA and completed all the initial setup.

 

The filers came with Data ONTAP 8.2.3P5 in 7-mode. I have accessed the filers via OnCommand System Manager and I have no licenses for anything. I mainly plan on using this as a SAN for our vCenter servers, but I will also likely use it for file/folder storage, so for this I need NFS/CIFS and iSCSI licenses. From what I have read, in version 8 of Data ONTAP the license is linked to the serial number of the filer? I know this device/software version is a few years old now, so what would be the best way of getting the licenses I need? I have looked at downgrading the controllers to version 7.3.6, but from what I read I don't believe the controllers will run this older version.

 

If anyone could give me a hand on my next steps it would be greatly appreciated.

 

Thanks 

 

 

DNS Question


Greetings all,

 

Hope everyone had a nice Christmas.

 

Have a question. While doing some troubleshooting, I found that only 1 of my 3 DNS entries is showing up. I have found documentation on how to change the DNS on a filer, but nothing on how to change the status from DOWN to UP. Any assistance is appreciated.
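For reference, the sort of commands involved on a clustered ONTAP SVM are along these lines (a sketch assuming ONTAP 9.x; 'svm1' and the addresses are placeholders, and the up/down status normally just reflects whether the name server answered recent queries, so it should recover on its own once the server is reachable):

# show the configured name servers for the SVM
::> vserver services name-service dns show -vserver svm1
# query each configured name server and report its current status
::> vserver services name-service dns check -vserver svm1
# re-enter the full name-server list if an entry is missing or wrong
::> vserver services name-service dns modify -vserver svm1 -name-servers 10.0.0.11,10.0.0.12,10.0.0.13
# (on 7-mode, the equivalent status view is 'dns info')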

 

Have a safe and joyous New Year.

 

James

Migrate Aggregate to another cluster


Hello,

 

On ONTAP 9.1, I have a data aggregate located on a single shelf.

This aggregate contains DP and XDP destinations.

I want to move this aggregate to another cluster (same ONTAP version).

How can I reassign my aggregate to the new cluster and reapply the SnapMirror / SnapVault relationships?

 

Thanks for your help

Problem configuring network interface


Hi All,

 

Please can someone help with this? In one command e0d and e0f are shown as down and in another they are up. Evidently they are still down even after issuing the command net port modify -node xxxxx -port e0d -up-admin. I need to get the ports up.

 

xxxxx::*> net port ifgrp show -node xxxx -ifgrp a0a

(network port ifgrp show)

 

                 Node: xxxx

Interface Group Name: a0a

Distribution Function: ip

       Create Policy: multimode_lacp

         MAC Address: yyyyyyyyyyyyyyyyyyy

   Port Participation: none

       Network Ports: e0d, e0f

             Up Ports: -

           Down Ports: e0d, e0f

 

xxxxx::*> net port show -node xxxx

(network port show)

 

Node: xxxx

                                                                       Ignore

                                                 Speed(Mbps) Health   Health

Port     IPspace     Broadcast Domain Link MTU Admin/Oper Status   Status

--------- ------------ ---------------- ---- ---- ----------- -------- ------

a0a       Default     -               down 1500 auto/-     -       false

a0a-211   Default     136.171.211.0/24 down 1500 auto/-     -       false

e0M       Default     136.171.211.0/24 up   1500 auto/100   healthy false

e0a       Default     -               down 1500 auto/10   -       false

e0b       Default     -               down 1500 auto/10   -       false

e0c       Cluster     Cluster         up   9000 auto/10000 healthy false

e0d       Default     -               up   1500 auto/10000 healthy false

e0e       Cluster     Cluster         up   9000 auto/10000 healthy false

e0f       Default     -               up   1500 auto/10000 healthy false

e7a       Default     -               down 1500 auto/10   -       false

e7b       Default     -               down 1500 auto/10   -       false
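For what it's worth, the "Port Participation: none" line in the ifgrp output usually means the LACP channel has not negotiated with the switch, even though the member links themselves are physically up. A minimal sketch of re-seating a member port (assuming ONTAP 9.x syntax; the switch-side port channel also has to be configured and active):

# remove and re-add a member port to force a fresh LACP negotiation
::> network port ifgrp remove-port -node xxxx -ifgrp a0a -port e0d
::> network port ifgrp add-port -node xxxx -ifgrp a0a -port e0d
# then re-check participation
::> network port ifgrp show -node xxxx -ifgrp a0a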

 

Removal of nodes from a cluster - Hardware Upgrade


Hi,

 

I have a question about the permanent removal of nodes from a cluster, such as during a hardware refresh. I'm trying to establish the correct procedures/checklists to go through to ensure that nothing is overlooked during a hardware upgrade and that user disruption is kept to a minimum.

 

Scenario (all nodes are running ONTAP 9.1P7 and are configured as HA pairs):

2 x FAS 8020 units (cluster nodes 1 to 4)

1 x AFF A200 unit (cluster nodes 5 and 6)

 

Assuming that all user data volumes have been migrated to the aggregates on AFF nodes 5 and 6, this should, I believe, just leave the following steps; however, I would welcome any comments to correct or enhance the process.

 

• Migrate the data SVM root volumes from the FAS8020 aggregates onto the AFF nodes (we already have LS-mirror copies, but the 'live' version is still on the 8020s).
• Remove the data LIFs for CIFS and iSCSI hosted on the 8020s, or migrate them to the new AFF nodes.
• Make the 8020 nodes ineligible for cluster RDB operations (i.e. remove eligibility, forcing it to use one of the AFF nodes) using: node modify -node nodenametoberemoved -eligibility false
• Perform a 'cluster leave' operation on the 8020s (see the sketch after this list) - does this need to be performed on each node in an HA pair?
• Physically disconnect the nodes from the cluster network.
• Zero the data disks.
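For the 'cluster leave' step, a minimal sketch of the unjoin sequence (assuming ONTAP 9.x advanced-privilege syntax; node names are placeholders, it is run from a node that is staying in the cluster, one outgoing node at a time, and only after each outgoing node's LIFs, volumes and epsilon have been dealt with):

::> set -privilege advanced
# make sure epsilon is not held by an outgoing node
::*> cluster modify -node fas8020-n1 -epsilon false
::*> cluster modify -node aff-a200-n1 -epsilon true
# unjoin each outgoing node in turn
::*> cluster unjoin -node fas8020-n1
::*> cluster unjoin -node fas8020-n2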

 

Thanks in advance,

Garth


Bringing up FAS2552 after holidays fails with volume offline.


We gracefully powered the NetApp off before the holidays, came back in today, and powered it on. It came up with an error about the battery being drained and needing to be charged. I pressed "c enter" to override the wait period and force it to continue to boot. It panicked, dumped core, and then rebooted. After that it appeared to get further in the boot process, but I'm still getting errors and still cannot access the storage. See below:

 

Army_Sustainment_NetApp::> reboot
(system node reboot)

Warning: Internal error. Failed to get cluster HA information when validating
reboot / halt command.
Do you want to continue? {y|n}: y


SP-login: Terminated
.
Uptime: 34m59s
Top Shutdown Times (ms): {shutdown_raid=3747, if_reset=500, shutdown_wafl=150(multivol=0, sfsr=0, abort_scan=0, snapshot=0, hit_update=0, start=58, sync1=4, sync2=1, mark_fs=87), wafl_sync_tagged=27}
Shutdown duration (ms): {CIFS=5435, NFS=5435, ISCSI=5434, FCP=5434}
System rebooting...

Phoenix SecureCore(tm) Server
Copyright 1985-2008 Phoenix Technologies Ltd.
All Rights Reserved
BIOS version: 8.3.0
Portions Copyright (c) 2008-2014 NetApp, Inc. All Rights Reserved

CPU = 1 Processors Detected, Cores per Processor = 2
Intel(R) Xeon(R) CPU C3528 @ 1.73GHz
Testing RAM
512MB RAM tested
18432MB RAM installed
256 KB L2 Cache per Processor Core
4096K L3 Cache Detected
System BIOS shadowed
USB 2.0: MICRON eUSB DISK
BIOS is scanning PCI Option ROMs, this may take a few seconds...
...................


Boot Loader version 4.3
Copyright (C) 2000-2003 Broadcom Corporation.
Portions Copyright (C) 2002-2014 NetApp, Inc. All Rights Reserved.

CPU Type: Intel(R) Xeon(R) CPU C3528 @ 1.73GHz


Starting AUTOBOOT press Ctrl-C to abort...
Loading X86_64/freebsd/image1/kernel:0x100000/7950592 0x895100/4206472 Entry at 0x80171230
Loading X86_64/freebsd/image1/platform.ko:0xc99000/1987543 0xe7f000/288800 0xec5820/272560
Starting program at 0x80171230
NetApp Data ONTAP 8.3P1
Copyright (C) 1992-2015 NetApp.
All rights reserved.
*******************************
* *
* Press Ctrl-C for Boot Menu. *
* *
*******************************
original max threads=40, original heap size=41943040
bip_nitro Virtual Size Limit=167074201 Bytes
bip_nitro: user memory=2029756416, actual max threads=115, actual heap size=121215385
qla_init_hw: CRBinit running ok: 8c633f
NIC FW version in flash: 5.4.9
qla_init_hw: CRBinit running ok: 8c633f
NIC FW version bundled: 5.4.51
qla_init_hw: CRBinit running ok: 8c633f
NIC FW version in flash: 5.4.9
qla_init_hw: CRBinit running ok: 8c633f
NIC FW version bundled: 5.4.51
WAFL CPLEDGER is enabled. Checklist = 0x7ff841ff
Module Type 10GE Passive Copper(Compliant)[3 m]
Module Type 10GE Passive Copper(Compliant)[3 m]
Module Type 10GE Passive Copper(Compliant)[3 m]
add host 127.0.10.1: gateway 127.0.20.1
Jan 02 10:55:46 [Army_Sustainment_NetApp-02:cf.fm.notkoverClusterDisable:warning]: Failover monitor: takeover disabled (restart)
Jan 02 10:55:46 [Army_Sustainment_NetApp-02:LUN.nvfail.vol.proc.started:warning]: LUNs in volume lun_21092016_154842_vol (DSID 1030) have been brought offline because an inconsistency was detected in the nvlog during boot or takeover.
Army_Sustainment_NetApp-02
Jan 02 10:55:46 [Army_Sustainment_NetApp-02:LUN.nvfail.vol.proc.started:warning]: LUNs in volume lun_21092016_144306_vol (DSID 1028) have been brought offline because an inconsistency was detected in the nvlog during boot or takeover.
Jan 02 10:55:46 [Army_Sustainment_NetApp-02:LUN.nvfail.vol.proc.complete:warning]: LUNs in volume lun_21092016_154842_vol (DSID 1030) have been brought offline because an inconsistency was detected in the nvlog during boot or takeover.
Jan 02 10:55:46 [Army_Sustainment_NetApp-02:LUN.nvfail.vol.proc.complete:warning]: LUNs in volume lun_21092016_144306_vol (DSID 1028) have been brought offline because an inconsistency was detected in the nvlog during boot or takeover.
Jan 02 10:55:46 [Army_Sustainment_NetApp-02:kern.syslog.msg:notice]: The system was down for 145 seconds
Jan 02 10:55:47 [Army_Sustainment_NetApp-02:cf.fsm.takeoverOfPartnerDisabled:error]: Failover monitor: takeover of Army_Sustainment_NetApp-01 disabled (Controller Failover takeover disabled).
Jan 02 10:55:47 [Army_Sustainment_NetApp-02:snmp.agent.msg.access.denied:warning]: Permission denied for SNMPv3 requests from root. Reason: Password is too short (SNMPv3 requires at least 8 characters).
Jan 02 10:55:47 [Army_Sustainment_NetApp-02:clam.invalid.config:warning]: Local node (name=unknown, id=0) is in an invalid configuration for providing CLAM functionality. CLAM cannot determine the identity of the HA partner.
Ipspace "acp-ipspace" created
Jan 02 10:55:52 [Army_Sustainment_NetApp-02:cf.fsm.partnerNotResponding:notice]: Failover monitor: partner not responding
Jan 02 10:56:00 [Army_Sustainment_NetApp-02:monitor.globalStatus.critical:CRITICAL]: Controller failover of Army_Sustainment_NetApp-01 is not possible: Controller Failover takeover disabled.
Jan 02 10:56:01 [Army_Sustainment_NetApp-02:ha.takeoverImpNotDef:error]: Takeover of the partner node is impossible due to reason Controller Failover takeover disabled.
Jan 02 10:57:25 [Army_Sustainment_NetApp-02:mgmtgwd.rootvol.recovery.changed:EMERGENCY]: The contents of the root volume might have changed and the local management databases might be out of sync with the replicated databases. This node is not fully operational. Contact technical support to obtain the root volume recovery procedures.
Jan 02 10:57:25 [Army_Sustainment_NetApp-02:callhome.root.vol.recovery.reqd:EMERGENCY]: Call home for ROOT VOLUME NOT WORKING PROPERLY: RECOVERY REQUIRED.

Tue Jan 2 10:57:26 MST 2018
login: SP-login: admin
Password:

 

I'm concerned about the statement: "LUNs in volume lun_21092016_154842_vol (DSID 1030) have been brought offline because an inconsistency was detected in the nvlog during boot or takeover."  I suspect that's why I still can't access storage.

 

Also, I get the following message when I login:  "The contents of the root volume may have changed and the local management configuration may be inconsistent and/or the local management databases may be out of sync with the replicated databases. This node is not fully operational. Contact support personnel for the root volume recovery procedures."

 

I don't see anything on the interwebs about root volume recovery procedures.

 

Any help would be appreciated.

7-mode fpolicy


Hello,

 

Does ONTAP 7-mode fpolicy support reporting to an external server? If so, how would I go about configuring 7-mode to report to an external server?

 

I used the following commands on ONTAP c-mode to configure an FPolicy server:

 

 

Create FPolicy engine
vserver fpolicy policy external-engine create -vserver vserver_name -engine-name engine_name -primary-servers ip -port 8080 -ssl-option no-auth
 
Create FPolicy event

vserver fpolicy policy event create -vserver vserver_name -event-name events_name -protocol cifs -file-operations read,write,create,delete -filters first_read,first_write

 

Create FPolicy policy
vserver fpolicy policy create -vserver vserver_name -policy-name policy_name -events events_name -engine engine_name
 
Create FPolicy scope
vserver fpolicy policy scope create -vserver vserver_name -policy-name policy_name -volumes-to-include *
 
Enable FPolicy
vserver fpolicy enable -vserver vserver_name -policy-name policy_name -sequence-number 1

 

 

Will it work the same, and how would I translate the above to the 7-mode equivalent commands?

What formats/protocols does the 7-mode fpolicy support?
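For comparison, a rough sketch of what the 7-mode side looks like (hedged: option names vary by release, and in 7-mode the external FPolicy server registers itself with the controller over RPC rather than being pointed at an IP/port from the filer side):

# create a screening policy and choose which CIFS operations to monitor
filer> fpolicy create extpolicy screen
filer> fpolicy monitor set extpolicy -p cifs read,write,create,delete
# decide whether client I/O should be blocked when no FPolicy server is connected
filer> fpolicy options extpolicy required off
# enable the policy; the external server then connects and registers itself
filer> fpolicy enable extpolicy -f
filer> fpolicy show extpolicy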

 

Thanks for your replies and time.

Decommission of FAS2020


Dear techs,

 

We recently moved to a FAS2520 from a FAS2020. I would like to decommission the FAS2020 and shut it down. I have already deleted the iSCSI and NFS connections from the ESXi host. Please guide me through the decommission process for this device. Can this be achieved with a simple 'halt' command, or is there a detailed process as there is for servers?
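For the shutdown itself, a rough sketch of the usual 7-mode sequence (the prompt name is a placeholder; disk zeroing or sanitization is a separate, optional step driven by your data-disposal requirements):

fas2020> cifs terminate      # stop CIFS cleanly, if it was in use
fas2020> iscsi stop          # stop the iSCSI target service
fas2020> nfs off             # stop NFS
fas2020> halt                # clean shutdown; repeat on the partner controller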

 

Thanks in advance,

Sabin

How to change a vfiler's network interface?


Hello guys


We have some vfilers spread across 4 network interfaces (e1, e2, e3, e4). After their physical server was migrated by IT, two of the interfaces, say e3 and e4, are permanently down (ridiculous, right?). How can I bring the vfilers that originally resided on e3 and e4 up again? By just assigning the same IP addresses to a live network interface, say e2?

I found the commands below for now, but it seems vfiler add cannot specify a specific network interface?
vfiler remove vfilername [-f] [-i ipaddr [-i ipaddr]...]
vfiler add vfilername [-f] [-i ipaddr [-i ipaddr]...] [path [path ...]]
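For what it's worth, in 7-mode the binding of an address to a physical interface is done with ifconfig rather than by vfiler add itself. A rough sketch of moving an address from a dead port to a live one (addresses, netmask, vfiler and interface names are placeholders; the order of the vfiler add and the ifconfig can vary by release, and /etc/rc should be updated to match so the change survives a reboot):

# release the address from the vfiler while it is being moved
filer> vfiler remove vf3 -i 10.10.10.33
# drop the address from the dead interface and bring it up on a live one
filer> ifconfig e3 -alias 10.10.10.33
filer> ifconfig e2 alias 10.10.10.33 netmask 255.255.255.0
# give the address back to the vfiler; it now answers on e2
filer> vfiler add vf3 -i 10.10.10.33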

Test system NFS access to a snapshot of a production volume


We have a volume that contains data served via NFSv3 to a production system. From time to time, we'd like to use SnapDrive for Unix (SDU) to create a snapshot of that volume and mount the data on a test system. The test system should not have access to the production volume, for obvious reasons.

 

I have been able to create the snapshot and mount it in a different place on the production server, no problem.

 

# snapdrive snap create -fs /data/prod -snapname refresh_20171230_0040     
Starting snap create /data/prod
  WARNING:  DO NOT CONTROL-C!
            If snap create is interrupted, incomplete snapdrive
         generated data may remain on the filer volume(s)
         which may interfere with other snap operations.
Successfully created snapshot refresh_20171230_0040 on corp-dc-8040-nfs:/vol/vol_data

        snapshot refresh_20171230_0040 contains:
        file system: /data/prod
        filer directory: corp-dc-8040-nfs:/vol/vol_data/data_prod


# snapdrive snap connect \
 -fs /data/prod /data/stgtst \
 -noreserve -clone unrestricted -verbose \
 -snapname corp-dc-8040-nfs:/vol/vol_data:refresh_20171230_0040

 connecting /data/stgtst
          to filer directory: corp-dc-8040-nfs:/vol/vol_data_0/data_prod
        Volume copy corp-dc-8040-nfs:/vol/vol_data_0 ... created
                 (original: vol_data)
Successfully connected to snapshot corp-dc-8040-nfs:/vol/vol_data:step8img_stg_20171230_0040
        file system: /data/stgtst
        filer directory: corp-dc-8040-nfs:/vol/vol_data_0/data_prod
0001-860 Info: Host interface 172.16.194.56 can see storage system corp-dc-8040-nfs,
but has read-only NFS permission to directory /vol/vol_stepimages/stepimages_prod.
If this is intentional (examples: your routing setup will only use allowed
interfaces; the directories are mounted with the read-only option; etc.), you
may safely ignore this warning.
Otherwise, we suggest verifying NFS permissions on the storage system to avoid any
potential I/O errors.


 

This last bit appears to be a warning, because when the command completes, the snapshot is mounted on /data/stgtst and I can successfully create, modify, and delete files & directories in that share. So far, so good.

 

However, when I try to run the same command on the test system, which is not included in the export policy for the production volume, it fails as follows:

 

# snapdrive snap connect \
 -fs /data/prod /data/stgtst \
 -noreserve -clone unrestricted -verbose \
 -snapname corp-dc-8040-nfs:/vol/vol_data:refresh_20171230_0040

 connecting /data/stgtst
          to filer directory: corp-dc-8040-nfs:/vol/vol_data_0/data_prod
        Volume copy corp-dc-8040-nfs:/vol/vol_data_0 ... created
                 (original: vol_data)

        Cleaning up ...
 destroying empty snapdrive-generated flexclone corp-dc-8040-nfs:/vol/vol_data_0 ... done
0001-860 Info: Host interface 172.20.36.30 can see storage system corp-dc-8040-nfs,
but has read-only NFS permission to directory /vol/vol_data/data_prod.
If this is intentional (examples: your routing setup will only use allowed
interfaces; the directories are mounted with the read-only option; etc.), you
may safely ignore this warning.
Otherwise, we suggest verifying NFS permissions on the storage system to avoid any
potential I/O errors.
0001-034 Command error: mount failed: mount.nfs: access denied by server while mounting corp-dc-8040-nfs:/vol_data_0/data_prod

So the mount fails due to the test server not being included in the export policy of the production volume.

 

Is there a way using SDU to complete the mount on the test server by using a different export policy? What is the best practice for this type of operation?
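One approach that is sometimes used (a sketch assuming clustered ONTAP export policies; the policy, SVM and volume names are placeholders, and whether SDU will pick the clone up cleanly this way is exactly the question) is to put the test host into an export policy that gets applied to the FlexClone:

# allow the test host in a policy intended for clones
::> vserver export-policy rule create -vserver svm_nfs -policyname test_clones -clientmatch 172.20.36.30 -protocol nfs3 -rorule sys -rwrule sys -superuser sys
# apply that policy to the FlexClone volume once it exists
::> volume modify -vserver svm_nfs -volume vol_data_0 -policy test_clones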

 

Thanks,

Bill
