Channel: ONTAP Discussions topics

Questions regarding Advanced Disk Partitioning (ADP) and normal disks


Hi Team,

 

 

We have a FAS2552 NetApp storage system that was deployed recently. I was not with this company at the time of installation, and the engineer who installed the system may not have been aware of the ADP concept. He created root aggregates with 3 ADP drives for the 2 nodes on the cluster, and created the data aggregates with normal (non-ADP) disks.

 

We have a total of 13 ADP drives, and they are used only in the root aggregates of the two nodes. They are 2.5 TB each, so we are wasting roughly 26 TB by leaving the data partitions of those ADP drives unused.

 

I have a few questions regarding this setup:

 

1. Can we add the data partitions of those ADP drives to the existing non-ADP aggregate to increase its space?

 

2. If we are not able to add them to the non-ADP aggregate, I am planning to create a new aggregate from those unused partitions. Is this best practice?

 

3. If one of the drives in the ADP aggregate fails, we will replace it with a new spare disk, which by default is a non-ADP drive. How do we convert a non-ADP disk into an ADP (partitioned) disk?

 

 

 

Please help me with this issue.
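
For reference, a minimal clustershell sketch for inspecting the partition layout before making changes; the aggregate and disk names below are placeholders, and the exact fields vary by ONTAP release:

::> storage aggregate show-spare-disks
::> storage disk show -fields container-type,owner,aggregate
::> storage aggregate add-disks -aggregate data_aggr01 -disklist 1.0.12

The first two commands list spare partitions and show which disks are "shared" (partitioned); add-disks is the usual way to grow an aggregate with spare capacity, but whether spare data partitions can join a non-partitioned aggregate is worth confirming with support first.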


How to suggest the right storage model to a customer


Hello All,

 

I am looking for guidance on the steps a storage engineer should take to suggest the right storage model to a customer.

 

1) What questions should we ask the customer in order to suggest the right storage model?

 

 

2) What type of storage is suggested for Exchange or Oracle workloads? How is the right storage decided?

 

 

3) Are there any good architecture books with solid information on this? Please suggest some.

 

 

 

report ksh


Hello,

I would like to gather, in one command line, some information like this:

volume name / IP or export policy / used / total

Does anyone know how to do it in one command line (or several) with sh or ksh?
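
A minimal ksh sketch, assuming SSH key access to the cluster management LIF as admin (the host name and field list are placeholders to adjust):

#!/bin/ksh
# One SSH call pulls volume name, export policy, used space and total size.
ssh admin@cluster-mgmt "volume show -fields volume,policy,used,size"

IP addresses live on LIFs rather than volumes, so a second call such as ssh admin@cluster-mgmt "network interface show -fields address" would be needed to join them in.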

 

Regards

 

 

 

NetApp CIFS share not accessible by domain users whereas accessible by domain admins


Hi guys,

 

I have a NetApp cluster-mode system in the environment and an issue with CIFS shares: users who are domain admins are able to access the share folders, but ordinary domain users are not. Can anyone help me with the settings I need to check to fix this?
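
Two hedged checks from the clustershell; the SVM and share names are placeholders. Admins often succeed where plain users fail because the share-level ACL or the NTFS permissions on the folder only grant the Administrators group:

::> vserver cifs share access-control show -vserver svm1 -share share1
::> vserver security file-directory show -vserver svm1 -path /share1

The first shows the share-level ACL, the second the file-level (NTFS) ACL; both must allow the user.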

 

regards

VK

 

NetApp FAS2552 Aggregate and Disk


I have a FAS2552 with 2 shelves:

1) Shelf 0: 4 x SSD + 20 x 1 TB 10k RPM SAS disks

2) Shelf 1: 12 x 1 TB 10k RPM SAS disks

 

Our vendor configured the aggregates for us; however, I notice the following:

1) What is the difference between a disk Container Type of "Aggregate" and "Shared"?

2) I find that some disks belong to 2 aggregates. Is this OK, and what is the reason behind it?

3) Is it OK for a disk to belong to 3 aggregates?

4) I notice that one disk is not a spare and is not assigned to any aggregate, but its Container Type is "Shared". Is that normal? (See the sketch below.)
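
These symptoms usually point to root-data partitioning (ADP): a "shared" disk is split into a root partition and a data partition, and each partition can belong to a different aggregate, so the whole disk appears under more than one. A hedged way to see the mapping (aggregate name is a placeholder; fields per 8.3+):

::> storage disk show -fields container-type,aggregate
::> storage aggregate show-status -aggregate aggr1

A shared disk whose partitions are all still spare would show exactly as described in point 4: Container Type "Shared" yet assigned to no aggregate.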

 

Unable to create home directory shares with PowerShell - "Standard shares must define an absolute share path"


NetApp support directed me here to pose my question. If this is not the correct spot to ask, please let me know where would be. Thanks. It seems that the NetApp PowerShell module does not support creating home-directory shares. Can someone shed some light on what might be going on? Is there a way to add them via PowerShell that I'm not understanding? I keep getting "Standard shares must define an absolute share path in the Vserver's namespace.", and the -DisablePathValidation flag doesn't make it work either. Any help or suggestions would be appreciated. Thank you.

 

<user> O:\>     Get-NcVserver $DestinationSVM | Add-NcCifsShare -Name "CIFS.HOMEDIR" -Path "%w" -ShareProperties $ShareProps
Add-NcCifsShare : Failed to create CIFS share CIFS.HOMEDIR. Reason: Standard shares must define an absolute share path in the Vserver's namespace.
At line:1 char:37
+ ... nationSVM | Add-NcCifsShare -Name "CIFS.HOMEDIR" -Path "%w" -SharePro ...
+                 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (naeau1:NcController) [Add-NcCifsShare], EAPIERROR
    + FullyQualifiedErrorId : ApiException,DataONTAP.C.PowerShell.SDK.Cmdlets.Cifs.AddNcCifsShare

 

<user> O:\> Get-NcVserver $DestinationSVM | Add-NcCifsShare -Name "%w" -Path "%w" -ShareProperties $ShareProps -DisablePathValidation
Add-NcCifsShare : Failed to create CIFS share %w. Reason: Standard shares must define an absolute share path in the Vserver's namespace.
At line:1 char:37
+ ... nationSVM | Add-NcCifsShare -Name "%w" -Path "%w" -ShareProperties $S ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (naeau1:NcController) [Add-NcCifsShare], EAPIERROR
+ FullyQualifiedErrorId : ApiException,DataONTAP.C.PowerShell.SDK.Cmdlets.Cifs.AddNcCifsShare
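
For comparison, the native clustershell accepts %w as a path only when the share carries the homedirectory property; if the $ShareProps variable above does not include homedirectory, ONTAP treats the share as a standard share, which would produce exactly this error. A hedged sketch of the equivalent CLI commands (SVM name and search path are placeholders):

::> vserver cifs share create -vserver svm1 -share-name %w -path %w -share-properties oplocks,browsable,homedirectory
::> vserver cifs home-directory search-path add -vserver svm1 -path /home

In the PowerShell toolkit the same property should be passable via -ShareProperties "homedirectory" on Add-NcCifsShare, though I have not verified that against this toolkit version.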

 

Syslog Traffic not sending through 1 node of cluster


I have a 6-node cluster running 8.3.1P2, and one node is not able to send syslog traffic (UDP 514) through the firewall. The other 5 nodes send through with no problem. I have configured the event destination with the syslog destination server and also added routes with event route add-destinations -messagename * -destinations allevents.

 

I ran a pktt trace as follows:

 

1) Start the packet trace:

sxvdicl01::> node run -node SXVDINO01 pktt start all -d /etc/crash

2) Run the following cluster commands:

::> date

::> network ping -node SXVDINO01 -destination <syslog-destination-IP>

::> network traceroute -node SXVDINO01 -destination <syslog-destination-IP> -port 514

::> set d; event generate -messagename asup.general.create -values "Packet Trace Test", 2

::> network ping -node SXVDINO01 -destination <syslog-destination-IP>

::> date

3) End the packet trace:

sxvdicl01::> node run -node SXVDINO01 pktt stop all

 

From the firewall side, they can see my ICMP traffic going through the firewall, and the ping is successful. They can also see the traceroute attempts failing, since that is blocked on the firewall side. They are just not able to see any UDP 514 traffic passing through or coming out of the node. I logged onto the node directly and entered the username and password several times to generate syslog traffic while the pktt trace was running, and still no syslog traffic was received on the firewall side.

 

Any other ideas on what I can troubleshoot as to why only this one node is not getting through the firewall? I have also verified that the node management IP is part of the firewall rule.

 

I can't upload the pktt trace since it contains IP addresses.
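
Two hedged checks worth running on the odd node out, since syslog leaves through the node-management LIF; field names are per 8.3:

::> network interface show -role node-mgmt -fields address,curr-node,firewall-policy
::> network route show

Comparing the failing node's LIF address, firewall policy and routes against the five working nodes should show whether its syslog packets are even taking the same path.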

source code of bug fix


Good afternoon. In June 2015 NetApp opened BURT 934737 to develop a fix for stuck ownblock scanners on local backups with holes, and the code has now been fixed in cDOT 9. We've downloaded the ONTAP open-source code from ftp://ftp.netapp.com/frm-ntap/opensource/ but cannot find the fixed code. Where can the source code of the fix for bug 934737 be found? Many thanks, Frank


PowerShell Add-NcCifsServer keeps failing - Cannot find an appropriate domain controller


I'm struggling to understand why I can't create a new CIFS server from PowerShell when I can do it from the CLI with all the same settings, no problem. Since I can do it just fine from the CLI, I would assume all the DNS and networking settings are correct. Also, I've verified I'm not making any typos when typing the userid and password for the Active Directory domain. I even tried creating the computer object in AD before running Add-NcCifsServer, but it didn't help. Any help or suggestions would be appreciated! Thanks!

 

Get-NcVserver $DestinationSVM | Add-NcCifsServer -Name $NewCIFSServerName.ToUpper() -Domain <my FQDN> -AdminUsername $DomainUser -AdminPassword $DomainPass

 

Add-NcCifsServer : Failed to create the Active Directory machine account "DRPTEST_VFILER". Reason: SecD Error: Cannot find an appropriate domain controller Details:
Error: Machine account creation procedure failed [ 0 ms] Trying to create machine account 'DRPTEST_VFILER' in domain '<my FQDN>' for
Vserver 'eau-test_vfilerrs' [ 4] Successfully connected to xxx.xxx.xxx.xxx:389 using TCP [ 107] Successfully connected to xxx.xxx.xxx.xxx:389 using
TCP [ 216] Successfully connected to xxx.xxx.xxx.xxx:389 using TCP [ 320] Successfully connected to xxx.xxx.xxx.xxx:389 using TCP [ 436] No servers found in
DNS lookup for _ldap._tcp.EauClaire._sites.<my FQDN>. [ 448] No servers found in DNS lookup for
_ldap._tcp.<my FQDN>. [ 448] No servers available for MS_LDAP_AD, vserver: 21, domain: <my FQDN>. [ 448] Cannot find any
domain controllers; verify the domain name and the node's DNS configuration **[ 448] FAILURE: Failed to find a domain controller [ 448] Uncaptured
failure while creating server account .
At line:1 char:37
+ ... nationSVM | Add-NcCifsServer -Name $NewCIFSServerName.ToUpper() -Doma ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (naeau1:NcController) [Add-NcCifsServer], EAPIERROR
+ FullyQualifiedErrorId : ApiException,DataONTAP.C.PowerShell.SDK.Cmdlets.Cifs.AddNcCifsServer
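
Since the SecD trace above fails on the DNS SRV lookups for _ldap._tcp, a hedged starting point is to compare the DNS configuration of the SVM the PowerShell session lands on with the one used at the CLI, and to run the SRV query by hand (vserver and domain are placeholders):

::> vserver services dns show -vserver <vserver>

$ nslookup -type=SRV _ldap._tcp.example.com

If the CLI session and the PowerShell connection operate against different vserver contexts, they may resolve through different DNS settings, which would explain the asymmetry.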

 

Name mapping Windows -- Unix


 

Our cDOT cluster runs NetApp Release 8.3.2P6. What we would like to achieve is to map a Windows user to an arbitrary, defined Unix user. As you can see below, the mapping is configured so that the Windows user DOM1\\hula should be mapped to the Unix user 1412 (1412 is a different Unix user than hula; the Unix user hula also exists but has a different UID).

 

Unfortunately it doesn't work, and the DOM1\\hula user gets mapped to pcuser, which is basically the default user. It seems that the mapping rule at position 2 never takes effect (there is no rule at position 1 in the win-unix direction, so position 2 is the first rule).

 

Name mapping basically seems to work, because if we remove position 2, the user DOM1\\hula is mapped to the Unix user hula (which is present in the LDAP server where all our Unix users are; Windows users are in AD, and AD doesn't hold any Unix user information).

 

Has anybody been able to map a Windows user to a different Unix user?

 

 

 

test::vserver name-mapping> show -vserver test01 -direction win-unix
Vserver        Direction Position
-------------- --------- --------
test01         win-unix  2        Pattern: DOM1\\hula
                              Replacement: 1412
test01         win-unix  4        Pattern: DOM1\\(.+)
                              Replacement: \1
test01         win-unix  5        Pattern: DOM2\\(.+)
                              Replacement: \1
3 entries were displayed.

test::vserver name-mapping>
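
A hedged way to test which rule actually fires, using the secd diagnostics (node name is a placeholder, and diag-level commands may change between releases):

::> set diag
::*> diag secd name-mapping show -node node01 -vserver test01 -direction win-unix -name DOM1\hula

It may also be worth checking that "1412" resolves as a Unix user name in your name services: the replacement string is looked up as a name, not as a raw numeric UID, so a user literally named 1412 has to exist.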

 

 

2552 8.2.2 7-Mode > ONTAP 9


I've been using a 2-node FAS2552 for quite some time for basic NFS/CIFS exports. It's running 7-Mode 8.2.2 at the moment.

 

SnapVault custom schedules are being used for retention (creating snapshots), with some OSSV jobs running as an integrated part of that.

Nothing fancy, but I do have some SSD cache disks and 2 x fully loaded 2240s behind it.

 

I need advice on whether it would be possible to upgrade to ONTAP 9. Will I lose any features? If I have a service contract, am I entitled to do so without further expenses?

It is my understanding that the entire system must be rebuilt.

 

My primary goal is to take advantage of the redesigned deduplication and probably a handful of other improved features.

 

Its primary use is as part of a disaster recovery site.

This non-critical role makes it possible for me to scratch the whole thing and resync the data afterwards. Meanwhile I can use other resources to retain the RTO in my DR.

 

I've been reading a lot about the differences between 7-Mode and clustered ONTAP, and still I am unsure whether 7-Mode actually holds anything that clustered ONTAP 9 can't provide.

 

Also, what would be the quickest way to reinstall the system? It should be rather simple, as no copy-based or copy-free transition is needed.

 

 

Netapp Web GUI Admin and SNMP Monitoring for Displaying Critical Warning Status


We tested the NetApp storage by shutting down one of its controller power supplies and/or a disk shelf power supply, and we find that the Web GUI and SNMP sometimes fail to reflect the critical warning status properly; only the console commands reliably display the critical warning status.

 

Our storage runs NetApp 8.3.1P1.

 

Could there be latency in the Web GUI and SNMP reflecting the latest critical warning status, or is the console command simply a better way to do the monitoring?

 

Please advise, thanks.

 

Also, we find the following recent CRITICAL errors in the log, but they don't seem to show up via the Web GUI / SNMP / console status commands:

 

YCKPLFR02::> event log show -severity CRITICAL
Time Node Severity Event
------------------- ---------------- ------------- ---------------------------
11/15/2016 14:00:00 YCKPLFR02-C01 CRITICAL monitor.shelf.fault: Fault reported on disk storage shelf attached to channel 0a. Check fans, power supplies, disks, and temperature sensors.
11/15/2016 14:00:00 YCKPLFR02-C02 CRITICAL monitor.shelf.fault: Fault reported on disk storage shelf attached to channel 0a. Check fans, power supplies, disks, and temperature sensors.
11/15/2016 13:00:00 YCKPLFR02-C01 CRITICAL monitor.shelf.fault: Fault reported on disk storage shelf attached to channel 0a. Check fans, power supplies, disks, and temperature sensors.
11/15/2016 13:00:00 YCKPLFR02-C02 CRITICAL monitor.shelf.fault: Fault reported on disk storage shelf attached to channel 0a. Check fans, power supplies, disks, and temperature sensors.
11/15/2016 12:00:00 YCKPLFR02-C01 CRITICAL monitor.shelf.fault: Fault reported on disk storage shelf attached to channel 0a. Check fans, power supplies, disks, and temperature sensors.
11/15/2016 12:00:00 YCKPLFR02-C02 CRITICAL monitor.shelf.fault: Fault reported on disk storage shelf attached to channel 0a. Check fans, power supplies, disks, and temperature sensors.
11/15/2016 11:40:00 YCKPLFR02-C01 CRITICAL monitor.globalStatus.critical: There are not enough spare disks. Power Supply Status Critical: PSU1. Disk shelf fault.
11/15/2016 11:39:00 YCKPLFR02-C01 CRITICAL monitor.globalStatus.critical: There are not enough spare disks. Disk shelf fault.
11/15/2016 11:34:01 YCKPLFR02-C01 CRITICAL hm.alert.raised: detailed_info="Alert Id = CriticalPSUFruFaultAlert , Alerting Resource = PSD041154303986", monitor="chassis", alert_id="CriticalPSUFruFaultAlert", alerting_resource="PSD041154303986"
11/15/2016 11:23:37 YCKPLFR02-C01 CRITICAL callhome.hm.alert.major: Call home for Health Monitor process nchm: DualPathToDiskShelf_Alert[50:0a:09:80:03:83:62:17].
11/15/2016 11:23:00 YCKPLFR02-C01 CRITICAL monitor.globalStatus.critical: There are not enough spare disks. Power Supply Status Critical: PSU1. Disk shelf fault.
11/15/2016 11:23:00 YCKPLFR02-C02 CRITICAL monitor.globalStatus.critical: Power Supply Status Critical: PSU1. Disk shelf fault.
11/15/2016 11:22:28 YCKPLFR02-C01 CRITICAL hm.alert.raised: detailed_info="Alert Id = DualPathToDiskShelf_Alert , Alerting Resource = 50:0a:09:80:03:83:62:17", monitor="node-connect", alert_id="DualPathToDiskShelf_Alert", alerting_resource="50:0a:09:80:03:83:62:17"
11/15/2016 11:22:00 YCKPLFR02-C01 CRITICAL monitor.globalStatus.critical: There are not enough spare disks. Disk shelf fault.
11/15/2016 11:22:00 YCKPLFR02-C02 CRITICAL monitor.globalStatus.critical: Disk shelf fault.
11/15/2016 11:21:49 YCKPLFR02-C01 CRITICAL monitor.shelf.fault: Fault reported on disk storage shelf attached to channel 0a. Check fans, power supplies, disks, and temperature sensors.
11/15/2016 11:21:49 YCKPLFR02-C02 CRITICAL monitor.shelf.fault: Fault reported on disk storage shelf attached to channel 0a. Check fans, power supplies, disks, and temperature sensors.
11/15/2016 11:21:40 YCKPLFR02-C01 CRITICAL ses.status.psError: DS2246 (S/N SHFFG1551000132) shelf 0 on channel 0a power error for Power supply 1: critical status; DC undervoltage. This module is on the rear of the shelf at the bottom left.
11/15/2016 11:21:40 YCKPLFR02-C02 CRITICAL ses.status.psError: DS2246 (S/N SHFFG1551000132) shelf 0 on channel 0b power error for Power supply 1: critical status; DC undervoltage. This module is on the rear of the shelf at the bottom left.
11/14/2016 17:00:00 YCKPLFR02-C01 CRITICAL monitor.shelf.fault: Fault reported on disk storage shelf attached to channel 0a. Check fans, power supplies, disks, and temperature sensors.
11/14/2016 17:00:00 YCKPLFR02-C02 CRITICAL monitor.shelf.fault: Fault reported on disk storage shelf attached to channel 0a. Check fans, power supplies, disks, and temperature sensors.
11/14/2016 16:00:00 YCKPLFR02-C01 CRITICAL monitor.shelf.fault: Fault reported on disk storage shelf attached to channel 0a. Check fans, power supplies, disks, and temperature sensors.
11/14/2016 16:00:00 YCKPLFR02-C02 CRITICAL monitor.shelf.fault: Fault reported on disk storage shelf attached to channel 0a. Check fans, power supplies, disks, and temperature sensors.
11/14/2016 15:40:00 YCKPLFR02-C01 CRITICAL monitor.globalStatus.critical: Power Supply Status Critical: PSU1. Disk shelf fault.
11/14/2016 15:39:00 YCKPLFR02-C01 CRITICAL monitor.globalStatus.critical: Disk shelf fault.
11/14/2016 15:34:11 YCKPLFR02-C01 CRITICAL hm.alert.raised: detailed_info="Alert Id = CriticalPSUFruFaultAlert , Alerting Resource = PSD041154303986", monitor="chassis", alert_id="CriticalPSUFruFaultAlert", alerting_resource="PSD041154303986"
Press <space> to page down, <return> for next line, or 'q' to quit...
26 entries were displayed.

YCKPLFR02::>
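
Two hedged console checks that surface these hardware conditions outside the event log (output per 8.3):

::> system health alert show
::> system node environment sensors show

The first lists active health-monitor alerts such as the CriticalPSUFruFaultAlert above; if it shows the fault while the GUI and SNMP stay quiet, that narrows the gap to the presentation layers rather than detection.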

SnapDrive for ONTAP 9.0P1


Hi,

 

I had my FAS2552 upgraded to ONTAP 9.0P1 last week, and since then I have had a lot of problems that I think originate from SnapDrive. I have updated SnapDrive to version 7.1.3P3.

 

I have Hyper-V servers running over SMB 3.0 to the NetApp, and the virtual machines still work without problems.

 

However, SnapManager for Hyper-V does not work. It is not able to connect to the volumes, so now I don't have any backups...

 

Failed to take backup of VM VMNAME[Hyper-V] as SnapDrive for Windows is unable to enumerate the following components(s) : (Snapshot location is not valid. This could mean the VM may have an unsupported storage configuration. Please confirm that VM's snapshot file location reside on NetApp storage.) for the VM on Hyper-V host HOSTNAME.

 

I think it is because, when I open SnapDrive and try to set the transport protocol settings, I am not able to add the NetApp's IP addresses.

 

Unable to modify or add transport protocol. Failed to get Data ONTAP version running on the storage system <ip-address>. Error description: Can't connect to host (err=10061).

 

Any tips on what I can try?

I just tested uninstalling and re-installing SnapDrive and SnapManager, with no improvement.
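
Error 10061 is a TCP "connection refused", so a hedged first check is whether the management LIF that SnapDrive points at still answers HTTP/HTTPS after the upgrade (SVM name is a placeholder):

::> network interface show -vserver svm1 -fields address,firewall-policy
::> system services web show

If the web services or the LIF's firewall policy changed during the ONTAP 9 upgrade, SnapDrive's transport protocol setting would fail exactly this way.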

 

 

RIP-2 Poisoning Routing Table Modification


Hello colleagues,

 

we are performing a vulnerability scan against our NetApp systems, and this finding was identified:

 

https://w3.secintel.ibm.com/vscan/refs/refs.php?vuln_id=120499

 

I started to investigate its background, and one key question arose that I am not able to find an answer to.

 

What version of the RIP protocol does Data ONTAP 7-Mode use? Is there any way to determine it on the system?

 

Or do I understand it completely wrong, and the RIP version used is not defined by the NAS at all?

 

 

My current version is 8.2.4P1.
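
A hedged note: 7-Mode runs a routing daemon (routed) that can be checked and, if dynamic routing is not needed, simply switched off, which sidesteps the RIP finding entirely; verify the exact option name on your release:

SAN> options routed.enable
SAN> options routed.enable off

With static routes in place, most filers do not need routed at all.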

 

Regards

Petr

 

 

fsecurity show UNIX Security


We have an issue with volumes created on our NetApp and the ability to access them from Linux workstations.

Background information
- NetApp Release 8.2.1  7 Mode

- NFS v3

 

Having created a new volume with an associated share and export, we mount it on Ubuntu 14.04 using the mount command. We have noticed that some of the more recently created volumes mount correctly, but it is not possible to access them from the mount point; it returns a message that you do not have sufficient permissions.

 

We have checked the following in the NetApp OnCommand System Manager:

- The share access controls are Everyone - Full Control

- The client permissions for the export are set so that the UNIX security gives All Hosts the permission Allow Read Write

 

However, we are still not able to resolve the issue. We have looked into the fsecurity on the SAN and noticed the following:

 

The volume James is accessible with read/write permissions from both a Windows client and a Unix workstation (when mounted), so as far as we are concerned it is set up as we need it to be.

 

SAN0D> fsecurity show /vol/James

[/vol/James - Directory (inum 64)]

  Security style: Mixed

  Effective style: Unix

 

  DOS attributes: 0x0030 (---AD---)

 

  Unix security:

    uid: 1000 (projname)

    gid: 1000

    mode: 0777 (rwxrwxrwx)

 

  No security descriptor available.

 

The James2 volume can be mounted, but is not accessible from the Linux workstation.

 

SAN0D> fsecurity show /vol/James2

[/vol/James2 - Directory (inum 64)]

  Security style: Mixed

  Effective style: Unix

 

  DOS attributes: 0x0010 (----D---)

 

  Unix security:

    uid: 0 (root)

    gid: 0

    mode: 0755 (rwxr-xr-x)

 

  No security descriptor available.

 

 

The difference we have noticed is in the Unix security section: the uid, the gid and the mode.

 

Assuming that this is the correct diagnosis of the issue, how do we go about making changes to these settings?
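
Since the effective style is Unix, one hedged approach is to change the owner and mode from an NFS client that the export grants root access to; the mount point is a placeholder, and the values simply copy the working James volume:

# On the Linux workstation, as root (the export must allow root from this host)
mount SAN0D:/vol/James2 /mnt/james2
chown 1000:1000 /mnt/james2
chmod 0777 /mnt/james2

Alternatively, 7-Mode's fsecurity apply can push security settings from the filer side, but that requires building a security definition file first.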

 

 

 

 

 


Domain Controller authentication and NetApp issue


I am trying to authenticate users accessing a FAS8040 using a domain controller.

I am getting this error:

 

Error: Machine account creation procedure failed ...

Vserver 'ADSVM' [ 1] Failed to connect to X.X.X.X for DNS: No route to host [ 1]

Error: command failed: Failed to create the Active Directory machine account "XXXX". Reason: SecD Error: no server available.

 

Can anyone help?
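
"No route to host" toward the DNS server points at a network-level problem from the SVM's LIFs rather than at AD itself. A few hedged checks, using the SVM name from the error:

::> vserver services dns show -vserver ADSVM
::> network interface show -vserver ADSVM -fields address,netmask,curr-node
::> network route show -vserver ADSVM

If the configured DNS server is not reachable from any of the SVM's LIFs (wrong subnet, missing route), SecD can never find a domain controller.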

CDOT LIMITATIONS


Hi

 

First, I would like to know whether the limits come from the FAS hardware or from cDOT, for:

max SVMs, snapshots, export policies, qtrees, LIFs

 

Secondly, I would like to know where I can find the limit specs for max SVMs, max snapshots per volume, max export policies, qtrees and LIFs, for cDOT 8.2.x on a FAS8080.

 

Thanks for the help.

regards

 

How to create a destination for audit logging in cluster-mode NetApp Release 8.3.2


Hi guys,

 

I have the below command to create a policy for audit logging.

 

vserver audit create -vserver <vserver name> -destination <Unix Path> -rotate-schedule-minute <minute of the hour> -rotate-limit <no.of log files>

 

What is the destination here?

It says <Unix Path>, but what exactly is a Unix path?

 

On our system we have CIFS protocol licensing only, therefore I cannot create an NFS export to provide a Unix path.

 

Can you please guide me?

Also, do you have something like a general-case sample command for the above? (A sketch follows.)
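
For what it's worth, the destination is simply a path in the SVM's namespace, i.e. the junction path of a volume; no NFS export (or NFS license) is required, so a CIFS-only system is fine. A hedged sample with placeholder names:

::> volume create -vserver svm1 -volume audit_vol -aggregate aggr1 -size 10g -junction-path /audit_vol
::> vserver audit create -vserver svm1 -destination /audit_vol -rotate-limit 10
::> vserver audit enable -vserver svm1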

Error "DUMP: Message from Write Dirnet: Interrupted system call"


When we perform a NetApp 8.3 NDMP backup with backup software, we receive the error:

"DUMP: Message from Write Dirnet: Interrupted system call"

We are not sure about the reason; any ideas?

Thank you very much.

 

Is a qtree a "file system" boundary from a hard link's perspective?


Hi,

 

ONTAP 9.1rc1

 

Scenario:

volume A contains 2 qtrees: tree1 and tree2

 

If I create a file in tree1 and then try to create a hard link to it in tree2, it fails with "Invalid cross-device link." So it seems to me that a qtree is like a volume with its own file system. In other words, if a file in tree1 has inode 11111, is it possible that a file in tree2 also has inode 11111? Is that correct? (A quick demonstration is sketched below.)

 

I know hard links should be avoided in general, but unfortunately the data set I am dealing with already has a lot of hard links, and I need to migrate them from Isilon to NetApp.
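
For reference, a quick shell demonstration of the boundary described above; the mount path is a placeholder, assuming volume A is NFS-mounted at /mnt/volA:

$ touch /mnt/volA/tree1/f
$ ln /mnt/volA/tree1/f /mnt/volA/tree2/f.link   # fails: Invalid cross-device link (EXDEV)
$ ln /mnt/volA/tree1/f /mnt/volA/tree1/f.link   # succeeds within the same qtree

So for the migration, hard-linked files need to land within a single qtree (or together in the volume root) to preserve their links.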

 

Thanks,

 


