I am trying to install the ONTAP 9.4 simulator on my Mac, which is running Fusion 11. When I go to import the OVF I get the error "Failed to open OVF descriptor".
Has anyone run into this? Is there a workaround?
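In case it helps narrow things down, here is what I checked first: whether the download itself is intact. If your simulator package includes a .mf manifest (OVF packages normally do), a short script can recompute the listed checksums and compare them. This is only a diagnostic sketch; the directory and file names are placeholders for whatever is in your copy of the download.

import hashlib
from pathlib import Path

# Recompute the checksums listed in the OVF manifest (.mf) and compare them.
# Manifest lines look like:  SHA1(something.ovf)= <hex digest>
# The directory below is a placeholder for wherever the extracted package lives.
package_dir = Path("~/Downloads/ontap-simulator").expanduser()

for mf in package_dir.glob("*.mf"):
    for line in mf.read_text().splitlines():
        if "(" not in line or ")=" not in line:
            continue
        algo, rest = line.split("(", 1)
        filename, expected = rest.split(")=", 1)
        digest = hashlib.new(algo.strip().lower())
        with open(package_dir / filename.strip(), "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        status = "OK" if digest.hexdigest() == expected.strip().lower() else "MISMATCH"
        print(f"{filename.strip()}: {status}")

A mismatch would point to a corrupt download rather than anything Fusion-specific.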
Thanks,
-rich
After spending some time digging around, I haven't been able to find a way to set the NFS tcp-max-xfer-size option via the ONTAP SDK.
Can anyone point me in the right direction? In this case CLI access is not possible; it must be done via the API/SDK.
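For reference, this is the pattern I have been trying with the NetApp Manageability SDK Python bindings. Please note that the ZAPI name and the element name in this sketch (nfs-service-modify, tcp-max-xfer-size) are my assumptions and need to be verified against the SDK's ZAPI documentation; the hostname, credentials and version numbers are placeholders.

# Sketch using the NetApp Manageability SDK (NMSDK) Python bindings.
# NOTE: "nfs-service-modify" and "tcp-max-xfer-size" are assumptions - verify the
# actual ZAPI and element names in the SDK documentation before relying on this.
from NaServer import NaServer, NaElement

server = NaServer("cluster-mgmt.example.com", 1, 130)  # host and ZAPI version are placeholders
server.set_transport_type("HTTPS")
server.set_style("LOGIN")
server.set_admin_user("admin", "password")
server.set_vserver("svm1")                             # tunnel the call to the data SVM

api = NaElement("nfs-service-modify")                  # assumed ZAPI name
api.child_add_string("tcp-max-xfer-size", "1048576")   # assumed element name and value
result = server.invoke_elem(api)

if result.results_status() != "passed":
    print("ZAPI failed:", result.results_reason())
else:
    print("tcp-max-xfer-size updated")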
Thanks
We’re converting thin-provisioned LUNs to thick-provisioned, and it seems that to do so we need extra storage space in the volumes containing the LUNs.
Example: a 450 GB thin-provisioned LUN requires a volume of around 620 GB before it can be converted from thin to thick.
Each volume holds one LUN, and when we converted the LUNs from thin to thick provisioning, roughly 250 GB had to be added to each volume for the conversion to complete successfully.
Right now the extra space is smaller because I had the user run space reclamation and then resized the volume, but it is still more than they need: the LUN is only 450 GB, yet the volume needs about 620 GB to host it.
Could someone please explain this to me?
LUN    Filer   Size (GB)   Thick provisioned
UN19   nas5    460         no
UN20   nas5    470         no
UN24   nas5    450         no
UN28   nas5    450         no
UN29   nas5    450         no
UN30   nas5    450         no
UN31   nas5    450         no
Thank you in advance.
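For what it's worth, here is the back-of-envelope arithmetic I have been using to think about the 450 GB versus roughly 620 GB gap. The percentages and the snapshot figure are made-up placeholders, not values from this system; the point is only to show which pieces can add up on top of the LUN itself once it becomes space-reserved.

# Rough volume sizing for a space-reserved (thick) LUN.
# All numbers below are assumptions/placeholders - plug in the real values from
# "volume show -fields percent-snapshot-space,fractional-reserve" and snapshot usage.
lun_gb             = 450    # size of the LUN itself
snap_reserve_pct   = 5      # volume snapshot reserve (assumed default)
snap_used_gb       = 120    # space already held by existing Snapshot copies (made up)
fractional_res_pct = 0      # fractional reserve for overwrites of the reserved LUN (0 or 100)

overwrite_reserve_gb = lun_gb * fractional_res_pct / 100
active_fs_need_gb    = lun_gb + overwrite_reserve_gb + snap_used_gb

# The snapshot reserve is carved off the top of the volume, so gross it up:
volume_needed_gb = active_fs_need_gb / (1 - snap_reserve_pct / 100)
print(f"Rough volume size needed: {volume_needed_gb:.0f} GB")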
We are planning to remove a couple of HA pairs from a cluster, and of course the corresponding LIFs on those nodes as well.
The problem is that these LIFs are hard-coded on clients for mounting. For instance, clients use SVM-lif1 for node1 and SVM-lif2 for node2 to reference volumes. When we remove these nodes, the LIFs will be removed as well.
One solution I can think of is to create DNS aliases for these legacy LIFs; the drawback is that the legacy LIF names would then stay around "forever".
Is there a better solution?
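To illustrate the alias idea, this is the kind of decoupling I mean: clients mount a stable DNS name instead of the LIF's own name, so the record can later be repointed at a surviving LIF without touching the clients. The hostnames below are placeholders.

# Quick client-side check that a stable alias resolves to whichever LIF it currently
# points at. Hostnames are placeholders for illustration only.
import socket

alias = "nfs-svm1.example.com"   # stable name the clients would mount
addrs = sorted({ai[4][0] for ai in socket.getaddrinfo(alias, 2049, proto=socket.IPPROTO_TCP)})
print(f"{alias} -> {addrs}")

# If the CNAME/A record is later repointed from SVM-lif1 to a surviving LIF,
# fstab entries that reference the alias keep working unchanged.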
Hello,
I've got a FAS2650 with two configured aggregates. Is it a good idea to create a software RAID 0 that combines two LUNs residing on different aggregates, for load-balancing purposes?
Hi all
Error:
Failed to apply the source Vserver configuration. Reason: Apply failed for Object: profileconfig_all_byname Method: baseline. Reason: duplicate entry
I'm experiencing SnapMirror issues on two of my SVMs. Both report the same error above, even though one was stuck in a transferring state but is now broken-off, and the other is a new SnapMirror that is stuck in the initialize state. All my other SnapMirrors were perfectly fine and enabled me to fail over to our DR site. Both the primary and DR sites are running ONTAP 9.3P4.
Breaking the SnapMirror and forcing a resync makes no difference.
Has anyone experienced this sort of behaviour before, or can someone point me in the right direction for a solution, please?
Thanks in advance.
If I assign 100 TB of VMDKs to ONTAP Select, what is the actual usable space after aggregate, volume and disk overhead?
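To make the question concrete, this is the kind of estimate I am after. Every percentage below is a guess on my part (a placeholder to be replaced with the official figures), not a number I have confirmed for ONTAP Select.

# Rough usable-capacity estimate for ONTAP Select from raw VMDK capacity.
# All percentages are assumptions/placeholders - I would like the real figures.
raw_tb                = 100.0
wafl_reserve_pct      = 10     # WAFL reserve taken from the aggregate (assumed)
aggr_snap_reserve_pct = 0      # aggregate snapshot reserve (assumed default)
vol_snap_reserve_pct  = 5      # per-volume snapshot reserve (assumed default)

after_wafl = raw_tb * (1 - wafl_reserve_pct / 100)
after_aggr = after_wafl * (1 - aggr_snap_reserve_pct / 100)
usable_tb  = after_aggr * (1 - vol_snap_reserve_pct / 100)
print(f"Estimated usable: {usable_tb:.1f} TB out of {raw_tb:.0f} TB raw")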
We recently performed a failover/failback activity (workload swap) for user CIFS shares.
On failover, we broke the mirror from DR and resynced from the source, so the source became DR and DR became the source.
On failback, we tried to resync from DR so everything was back the way it was before.
The resync is not happening for the volumes; it fails with the following error messages:
1. The destination volume must be of type DP (it became RW once the SnapMirror was broken during failover).
2. No common Snapshot copies exist between the source and DR volumes.
Is deleting the old DP volume and recreating it the only solution, or is there an alternative?
Hi Folks!
We are having an issue with our NFS exports on Data ONTAP. They appear to mount perfectly fine on various UNIX hosts and are accessible as the root user, but any other user (local accounts) gets "permission denied" when trying to access the mount.
This worked fine on our current 7-Mode system, so I'm not sure what I am missing here.
I have a generic export policy assigned to this test mount that allows any host read/write access, which appears to be working, but how do I allow anonymous user access?
When I added one of the local users to "SVM Settings > Host Users and Groups" (to the root group, GID 0, I may add), they could then access the share! I'm not sure how this works. Help!
If you need any outputs just ask.
Cheers!
Hi All,
I'm having an issue with SnapVault on one of my file servers. I reinstalled OSSV (version 3.0.1) on my Windows 2008 R2 server, and now nothing comes up when I run 'snapvault status -l' or 'snapvault destinations' from the command prompt.
My question is: how do I re-establish the relationship between the source (Windows file server) and the destination (NetApp filer) without re-baselining?
Our NetApp filer is managed by a third-party company and they are not very forthcoming with a solution.
Hope someone can help - many thanks
I currently have a SnapVault relationship between Cluster "A" and "B". I'd like to free up storage on "B". I have a Cluster "C" available to be the secondary for that relationship. Can someone point me to a TR that addresses the best way to do that?
Thank you!
Hi team
I've been getting some errors regarding disk 0b.01.10; can someone help me identify what they mean?
Is this disk going to fail? If so, it shows I have no spare disks on this node. Will it fail once I add an additional disk to this node?
rtfowommr01b> Disk show
0b.01.10 rtfowommr01b(1886747403) Pool0 WD-RSRASFEAS01 rtfowommr01b(1886747403)
************
rtfowommr01b> vol status -s
Pool1 spare disks (empty)
Pool0 spare disks (empty)
************
Error message
Mon Oct 8 09:47:08 CDT [rtfowommr01b:shm.pullLogWarning:warning]: shm: Disk 0b.01.10 has returned 9 warnings and the log will be saved.
Mon Oct 8 09:47:08 CDT [rtfowommr01b:shm.pullLogWarning:warning]: shm: Disk 0b.01.10 has returned 10 warnings and the log will be saved.
NETAPP X306_WKOJN02TSSM NA00] S/N [WD-WMC1P0E3RSU0], block #385675982
Mon Oct 8 10:13:31 CDT [rtfowommr01b:raid.rg.readerr.repair.data:debug]: Fixing bad data on Disk /aggr_sata/plex0/rg1/0b.01.10 Shelf 1 Bay 10 [NETAPP X306_WKOJN02TSSM NA00] S/N [WD-rtfowommr01b], block #385675983
Mon Oct 8 10:13:31 CDT [rtfowommr01b:raid.rg.readerr.repair.data:debug]: Fixing bad data on Disk /aggr_sata/plex0/rg1/0b.01.10 Shelf 1 Bay 10 [NETAPP X306_WKOJN02TSSM NA00] S/N [WD-rtfowommr01b], block #385675984
Mon Oct 8 10:13:31 CDT [rtfowommr01b:raid.rg.readerr.repair.data:debug]: Fixing bad data on Disk /aggr_sata/plex0/rg1/0b.01.10 Shelf 1 Bay 10 [NETAPP X306_WKOJN02TSSM NA00] S/N [WD-rtfowommr01b], block #385675985
************
Tomorrow we are temporarily shutting down a switchless cluster on a NetApp 2620 running ONTAP 9.3P7. The purpose is to physically move the equipment out of the path of Hurricane Michael, bring it back in a few days, and restart.
I believe the process is straightforward, but I would love to hear any recommendations. Here is what I plan to do:
1. Turn off all backup jobs.
2. Invoke an AutoSupport.
3. Enter the storage failover modify -node * -auto-giveback false command.
4. Enter the system node halt <node> -inhibit-takeover true -skip-lif-migration-before-shutdown true command.
5. Once the LOADER prompt appears on both nodes, power off the system.
In particular, should I do anything special with regard to cluster HA?
Hello everyone
I'm curious to know whether there's a way to view a series of changes and events on this NetApp in a log. If so, how can I do that?
This TR describes configuring and testing third-party SSH clients, in conjunction with ActivClient software, to authenticate an ONTAP storage administrator via the public key stored on a common access card (CAC) when it is configured in ONTAP.
For more info, please see the full TR.
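For readers who only want to see what the client side of public key authentication looks like, here is a minimal sketch using paramiko with an ordinary key file; with a CAC the private key stays on the card and ActivClient/the SSH client handles the signing instead. The hostname, username and key path are placeholders, and the matching public key is assumed to already be configured for that account in ONTAP.

# Minimal sketch: SSH public key authentication to an ONTAP management LIF using paramiko.
# Hostname, username and key path are placeholders.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())   # lab use only
client.connect(
    "cluster-mgmt.example.com",
    username="cacadmin",
    key_filename="/home/user/.ssh/id_rsa",
)
stdin, stdout, stderr = client.exec_command("version")
print(stdout.read().decode())
client.close()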
Hi
We are migrating around 30 TB of data from a Windows-based file server to a NetApp CIFS share using the robocopy command with the /MIR switch.
After completing the baseline copy, we are experiencing issues with the incremental copy to NetApp: an incremental robocopy to the NetApp detects all existing files as changed or modified.
When I try the same command with Windows as the source and another Windows share as the destination, it copies only the files that have changed or been modified. It looks like some issue with the NetApp.
The robocopy commands I am using:
robocopy <Windows Source> <Netapp Destination> /mir /copyall /ZB /w:1 /r:1 /log:\Robocopy-Logs\test.txt /TEE
robocopy <Windows Source> <Windows Destination> /mir /copyall /ZB /w:1 /r:1 /log:\Robocopy-Logs\test.txt /TEE
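To narrow down why robocopy flags everything, I have been spot-checking what its default change detection compares (file size and last-write timestamp) for the same file on both sides. A quick sketch; the UNC paths and the sample file name are placeholders.

# Compare size and modification time of the same file on the Windows source and the
# NetApp CIFS destination. Robocopy treats a file as changed when the size or the
# last-write timestamp differs, so consistent timestamp deltas here would explain a full recopy.
import os
from datetime import datetime, timezone

src = r"\\winserver\share\somefolder\sample.docx"     # placeholder path
dst = r"\\netapp-svm\share\somefolder\sample.docx"    # placeholder path

for label, path in (("source", src), ("destination", dst)):
    st = os.stat(path)
    mtime = datetime.fromtimestamp(st.st_mtime, tz=timezone.utc)
    print(f"{label}: size={st.st_size} bytes, mtime={mtime.isoformat()}")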
Did anyone experience the same issue?
Thanks
Bibin
When trying to stop this SnapVault relationship I'm getting the error message below. I am removing it from its destination.
asdfaefasd01b> snapvault stop -f asdfaefasd01bb:/vol/sv_asdfaefasd01b/qtree_11569595_092530
Snapvault configuration for the qtree has been deleted.
Could not delete qtree: replication destination could not remove temporary directory
The error log says it cannot connect to the filer; is this the reason why?
<replication_dst_err_1
rtype="SnapVault"
srcfiler="ASWEDWDWR01"
srcpath="/vol/cifs_fs01/-"
dstpath="/vol/sv_asdfaefasd01/qtree_11569595_092530"
error="cannot connect to source filer"/>
Hello!
We have a problem with NFS access: we need read/write access over NFS v3 to volumes used for Oracle +ASM.
Oracle has told us that we need to publish these NFS v3 exports as
- no_root_squash
- insecure
We have no issue with no_root_squash.
Our problem is with "insecure" mode, which allows every RPC request, rather than only those coming from a privileged source port (<1024) as allowed by default.
How do we activate insecure mode?
Thanks
I'm trying to set up a cascaded SnapMirror relationship between B-C where A-B is already in place. When I try to initialize the B-C mirror, the A-B SnapMirror becomes busy and won't complete an update, apparently until the initialization completes. This will take the protection outside the desired RPO.
Is there any way around this, or do I simply need to ask the source filer admin to create an additional Snapshot copy?