After I deployed to a Dell R710 with vCenter 6 or vCenter 6.5, the VM cannot boot. I get GRUB error 2.
I have created a new vfiler and provisioned the first NFS volume in it (other than the vfiler root volume). Now the problem is that DFM is unable to detect this NFS volume and shows only the root volume in its list of volumes.
filer011>
filer011> vfiler run vfiler_01_011 df -g nfs_share_01_011
===== vfiler_01_011
Filesystem total used avail capacity Mounted on
/vol/nfs_share_01_011/ 1420GB 0GB 3839GB 0% /vol/nfs_share_01_011/
/vol/nfs_share_01_011/.snapshot 0GB 0GB 0GB 0% /vol/nfs_share_01_011/.snapshot
filer011>
filer011>
filer011>
filer011> qtree status -v nfs_share_01_011
Volume Tree Style Oplocks Status Owning vfiler
-------- -------- ----- -------- --------- -------------
nfs_share_01_011 ntfs enabled normal vfiler_01_011
nfs_share_01_011 qtree_nfs_share_01_011 unix enabled normal vfiler_01_011
filer011>
More interestingly, when I check the base filer 'filer011' in DFM, it shows the qtree of this volume, 'qtree_nfs_share_01_011', listed there (not in the vfiler).
Steps I have tried so far:
1. Deleted and re-added the vfiler
2. Refreshed monitoring many times
3. Checked the 'Diagno
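A hedged sketch of forcing DFM to rediscover the controller from the DFM server's CLI (the hostname is taken from the output above; exact commands depend on the OnCommand/DFM release):
dfm host discover filer011
dfm host diag filer011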
Hi,
I have to find how many volumes are present in aggr1 and how many volume names contain finace_vol in aggr1. Please help. I am using clustered mode,
ONTAP 9.1RC1
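A hedged sketch for clustered ONTAP (assuming the aggregate is literally named aggr1; the pattern matches the name fragment mentioned above):
cluster::> volume show -aggregate aggr1
cluster::> volume show -aggregate aggr1 -volume *finace_vol*
Each listing ends with a count of entries displayed, which gives the number of matching volumes.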
Hi,
Is there a way to easily restore a directory in a volume from snapshots without involving clients (e.g., cp)? The volume is exported via NFS only, and the directory contains many subdirectories and files. It looks like snapshot restore-file only applies to a single file. Its man page says:
The command fails if you try to restore directories (and their contents).
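For reference, a hedged sketch of the single-file form referred to above (the vserver, volume, snapshot, and file path are placeholders; the path is given relative to the volume root):
cluster::> volume snapshot restore-file -vserver svm1 -volume vol1 -snapshot snap1 -path /dir/file1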
Thanks,
Hi to all,
I have an 18 TB volume and I am thinking of taking volume snapshots of this volume. My problem is that I don't have a lot of free space.
Hi
I would like to mount a simple qtree via NFSv4 on a Windows 2012 server, without OpenLDAP or Active Directory.
The cDOT system is in the same domain as the NFS client, toto.france.intra.
NFSv4 is enabled.
Numeric IDs are enabled.
The share is not visible, and when we try to mount the NFSv4 share I get this error:
NET HELPMSG 53
Network error - 53
The network path was not found.
It is the first time I have tried to mount NFSv4 on Windows 2012.
1/ Is it possible to know the correct process to create and mount an NFSv4 share without LDAP or AD?
2/ I don't understand why the share is not visible.
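A hedged sketch of the ONTAP-side pieces usually involved (SVM and policy names are placeholders; the Windows client-side steps are not covered here):
cluster::> vserver nfs modify -vserver svm1 -v4.0 enabled -v4-numeric-ids enabled
cluster::> vserver nfs show -vserver svm1 -fields v4.0,v4-numeric-ids,v4-id-domain
cluster::> vserver export-policy rule show -vserver svm1 -policyname default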
Thanks
Hi,
We have ONTAP 8.3.2P3.
NFSv4.0 is disabled.
cluster::> nfs show -v4.0 enabled
There are no entries matching your query.
but -v4.0-write-delegation, -v4.0-read-delegation, and -v4.0-acl are enabled.
cluster::> nfs show -vserver vserver -instance
Vserver: SVM
General NFS Access: true
NFS v3: enabled
NFS v4.0: disabled
..
..
..
NFSv4.0 ACL Support: enabled
NFSv4.0 Read Delegation Support: enabled
NFSv4.0 Write Delegation Support: enabled
So as long as NFSv4.0 is disabled, are the other v4.0 options also effectively disabled even though they show as enabled?
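A hedged check (the vserver name is a placeholder): the individual v4.0 fields can be listed side by side to see how they are reported while the protocol itself is disabled:
cluster::> vserver nfs show -vserver SVM -fields v4.0,v4.0-acl,v4.0-read-delegation,v4.0-write-delegation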
Thanks,
Chi
I deployed the ONTAP Select 9.2 standalone version. I added a virtual SSD by modifying the configuration with VM hardware version 13. ONTAP Select could not show the SSD.
Hi,
I upgraded a 2240 from 7-Mode to cDOT. With 8.3.2P11 I did a complete initialization, and after a minimal configuration I upgraded to 9.1P3. Now I have a strange issue with CIFS.
In System Manager I created a new CIFS SVM: on the first screen I entered all the information, I skipped the second screen with the AD join, and on the third screen I entered a password for vsadmin and completed the wizard.
In the shell I entered:
vserver cifs create -vserver cifs-test -cifs-server cifs-test -workgroup test
So I created a minimal CIFS configuration. Then I entered this command to see the rights of the local Administrator user:
diag secd authentication show-creds -node san-cl01-02 -vserver cifs-test -win-name administrator
UNIX UID: pcuser <> Windows User: CIFS-TEST\Administrator (Windows Local User)
GID: pcuser
Supplementary GIDs: pcuser
Windows Membership:
User is also a member of Everyone, Authenticated Users, and Network Users
Privileges (0x2000): SeChangeNotifyPrivilege
My problems:
- Why is the mapping to "pcuser", not "root"?
- Why isn't the "BUILTIN\Administrators" group listed under Windows membership?
On another 2240 with 9.1P3, with a new SVM in a workgroup, I got this result:
diag secd authentication show-creds -node na-cl01-01 -vserver test-cifs -win-name administrator
UNIX UID: root <> Windows User: TEST-CIFS\Administrator (Windows Local User)
GID: daemon
Supplementary GIDs: daemon
Windows Membership:
BUILTIN\Administrators (Windows Alias)
User is also a member of Everyone, Authenticated Users, and Network Users
Privileges (0x2237): SeBackupPrivilege SeRestorePrivilege SeTakeOwnershipPrivilege SeSecurityPrivilege SeChangeNotifyPrivilege
This happens with every CIFS SVM I create. Even when I add a different SVM to the AD, the local groups don't work.
This is the local Administrators group:
My user, the Administrator of the CIFS SVM, and the Domain Administrators are members.
Entering the command again, I got this result:
diag secd authentication show-creds -node san-cl01-02 -vserver svm-cifs1 -win-name xx\basys_raudonis
UNIX UID: pcuser <> Windows User: XX\basys_raudonis (Windows Domain User)
GID: pcuser
Supplementary GIDs: pcuser
Windows Membership:
XX\User-Standard (Windows Domain group)
XX\Domänen-Benutzer (Windows Domain group)
XX\Domänen-Admins (Windows Domain group)
XX\User-WorkerOffice (Windows Domain group)
XX\Abgelehnte RODC-Kennwortreplikationsgruppe (Windows Alias)
Vom Dienst bestätigte ID (Windows Well known group)
User is also a member of Everyone, Authenticated Users, and Network Users
Privileges (0x2000): SeChangeNotifyPrivilege
So I get all the AD groups, but no local groups. But "BUILTIN\Users" and "BUILTIN\Administrators" should be there.
The main problem with this is that I can't access directories that only grant access to the local Administrators group.
What is going wrong here? Have I missed something?
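A hedged pair of checks from the cluster shell (the SVM name cifs-test is taken from the commands above; field and group names are assumptions): list who is actually in the local BUILTIN\Administrators group and what the default UNIX user for the SVM is:
cluster::> vserver cifs users-and-groups local-group show-members -vserver cifs-test -group-name "BUILTIN\Administrators"
cluster::> vserver cifs options show -vserver cifs-test -fields default-unix-user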
Kind regards
Stefan
Hi Tech Guys,
Is it file-system consistent when I take a volume snapshot of a CIFS or NFS volume? Thanks for any reply.
Hi Folk,
We're getting a regular invalid login attempt (at 6am every day) trying to log into one of our SVMs as root via ONTAPI. There isn't any root user on that SVM, and it doesn't seem to be malicious, but I would like to know where it's coming from (e.g., the IP address).
Is the source IP address of the attempt recorded in any of the logs, or can it be turned on somewhere?
We're running 9.1P2
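A hedged sketch (whether failed logins and the source IP are recorded there depends on the release): check the current audit settings and then review the management audit log around the 6am window:
cluster::> security audit show
cluster::> security audit log show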
Thanks in advance,
Stuart
Hi,
Existing configuration:
Interface group name: a0a; VLAN-tagged interfaces: a0a-101, a0a-203, a0a-500, a0a-701.
LIF configuration:
svm1_lif - IP 10.147.4.90, subnet mask 255.255.255.192
SVM default route (svm1):
0.0.0.0/0  10.147.21.254  20
Currently the existing server IP 10.147.4.92 is able to ping the LIF 10.147.4.90.
Many LIFs have been configured in different VLANs with IPs in the 10.147 range.
----------------------------------------------------------------------------------------------
Now the ESX team has built new servers and added them to a new VLAN, 1501, with assigned IP 10.156.134.2.
From the storage side I did the following:
- Added VLAN 1501 on the existing interface group a0a (a0a-1501)
- Configured a new LIF with IP 10.156.134.3, subnet mask 255.255.255.224
--------------------------------------------------------------------------------------------------------------
Now the LIF is not able to ping the host.
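A hedged set of checks from the cluster shell (the SVM name svm1 is taken from above; the new subnet typically also needs its own route, and VLAN 1501 must be tagged on the switch ports facing a0a):
cluster::> network port vlan show
cluster::> network interface show -vserver svm1 -fields curr-port,address,netmask
cluster::> network route show -vserver svm1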
Thank You
Gsingh.
Hi,
I'm installing a Hyper-V cluster with iSCSI.
I installed the Host Utilities kit, the MPIO feature, the Data ONTAP DSM, and SnapDrive.
On my Hyper-V host, I have two NICs for MPIO.
My filer has two SVMs. The volume I want to connect is mapped on SVM1 and sits on an aggregate on node A (KSC-02-A). Here is my network configuration on the filer:
When I want to connect my iSCSI LUN in SnapDrive, I see 2 targets and 4 initiators. I don't know what I am supposed to do to initiate MPIO:
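A hedged check from the ONTAP side (the SVM name SVM1 is taken from above): list the iSCSI data LIFs and, once connected, the sessions and connections, so that each host NIC ends up with a session to a LIF on each node:
cluster::> network interface show -vserver SVM1 -data-protocol iscsi
cluster::> vserver iscsi session show -vserver SVM1
cluster::> vserver iscsi connection show -vserver SVM1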
Kind regards.
I'm unable to find the following option in NetApp: cifs.home_dir.generic_share_access_level.
I have a FAS3240 running 8.2.4P6 7-Mode.
Does this option apply only to C-Mode?
Also, is there anywhere I can check that we are not using SMB v1 for CIFS (on NetApp)?
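A hedged sketch for 7-Mode (whether the option exists at all depends on the release): passing a prefix to options lists every matching option name, which shows whether it is available on this system:
filer> options cifs.home_dir
filer> options cifs.smb2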
I have used a CN1610 switch.
Is it suitable for 10 Gbps Ethernet switching purposes?
Thanks
Hi,
I have a 6-node cluster but with multiple subnets used in the cluster.
cluster & node management & some SVMs are on subnet A
other SVMs are on subnet B
I have created intercluster LIFs on each node with IPs in subnet B, and they are all on their own physical ports, so there is no sharing of ports with other LIFs. However, I can only ping IPs on the same subnet B; pinging any IP outside subnet B times out. There are no static routes specified. It makes me wonder: do intercluster LIFs always use the same subnet/gateway as cluster management? If so, that would make sense, since cluster management is on subnet A. And if so, does it mean I have to assign subnet A IPs to all intercluster LIFs to make them reachable from outside the subnet?
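A hedged sketch (intercluster LIFs follow the routes of their owning SVM, which is normally the cluster-level system SVM of the IPspace; names and the gateway are placeholders): verify the owner and routes, then add a route via subnet B's gateway if one is missing:
cluster::> network interface show -role intercluster -fields vserver,address,curr-port
cluster::> network route show
cluster::> network route create -vserver <cluster_name> -destination 0.0.0.0/0 -gateway <subnetB_gateway>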
Thanks,
Hi All,
I want to mount a LUN from a particular snapshot on a Windows/Linux server, without any SnapRestore or FlexClone license. I also tried the command volume file clone create, but it's not working. Can anyone help me with this?
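For reference, a hedged sketch of the syntax of the command mentioned above (volume, LUN, and snapshot names are placeholders; note that file and LUN cloning generally depends on the FlexClone license, which may be why it fails here):
cluster::> volume file clone create -vserver svm1 -volume vol1 -source-path lun1 -snapshot-name snap1 -destination-path lun1_from_snap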
Is there a support matrix for NetApp ONTAP LDAP?
For example, does NetApp ONTAP 8.x support LDAP v2 and LDAP v3?
I have checked many NetApp documents but cannot find which LDAP versions NetApp ONTAP supports.
Hello,
We have several clusters with spinning disks (with and without Flash Pool). On most of them we observe events for "node disk fragmentation" (high node utilization caused by a high percentage of delayed disk writes), despite free space realloc being enabled everywhere. I have even tried scheduling, from the node shell, the old-style full aggregate reallocation, which should help with free space fragmentation, but that is also not helping much (or not for long).
How can I verify that free space realloc is working and doing its job?
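A hedged sketch of the basic checks (node and aggregate names are placeholders): confirm the setting per aggregate from the cluster shell, and look at reallocation status from the node shell:
cluster::> storage aggregate show -fields free-space-realloc
cluster::> node run -node <node> reallocate status -v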
Here is the scenario:
Site A (local) = Cluster A (source) and Cluster D
Site B (remote) = Cluster B (destination) and Cluster C (Destination of cascade from Cluster B)
The objective here is to migrate the data (file shares) from Cluster A to Cluster D and decommission Cluster A and Cluster B. Cluster D will then SnapMirror to Cluster C.
Cluster A is replicating using snapmirror to Cluster B
We added Cluster C in Site B (remote)
We set up a SnapMirror cascade from B to C (A --> B --> C).
We now added Cluster D in Site A (local)
We now need to migrate the data from Cluster A to Cluster D.
Cluster C is already caught up. So we can remove Cluster B from the cascade at this point and reseed Cluster A with Cluster C.
Here is the tricky part:
If I set up a new relationship between Cluster A and Cluster D and migrate the data, will SnapMirror be smart enough to know that the data residing in Cluster C is the same?
Do I keep the snapmirror job from Cluster A to Cluster C running while migrating the data from Cluster A to Cluster D?
What is the best workflow to accomplish this data migration?
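A hedged sketch of the mechanism in question (cluster, SVM, and volume names are placeholders): as long as a common SnapMirror Snapshot copy still exists on both volumes, a new relationship can usually be established without a full re-baseline by creating it and running a resync rather than an initialize:
clusterC::> snapmirror create -source-path svm_d:vol1 -destination-path svm_c:vol1 -type DP
clusterC::> snapmirror resync -destination-path svm_c:vol1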
Thank you for your time!
Argie