ONTAP Discussions topics

A300 Preconfigured CNA port Operational down


Hi NetApp team,

 

I have a new A300 pre-configured with 8 FC LIFs across 4 CNA ports and 4 native FC ports, connected to 2 Brocade FC switches. The native FC ports are admin up and operationally up, while the CNA ports are admin up but operationally down. I changed the CNA ports to FC target ports and rebooted the A300; now those ports are admin down and operationally down as well. How can I fix this? Is there a command to bring them up? Thanks for any advice.

 

 

            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
AFF_SAN_DEFAULT_SVM
            SAN_SVM_FCLIF_aff-01_0e
                         up/down  20:03:00:a0:98:aa:5c:9f
                                                     aff-01        0e      true
            SAN_SVM_FCLIF_aff-01_0f
                         up/down  20:05:00:a0:98:aa:5c:9f
                                                     aff-01        0f      true
            SAN_SVM_FCLIF_aff-01_0g
                         up/up    20:07:00:a0:98:aa:5c:9f
                                                     aff-01        0g      true
            SAN_SVM_FCLIF_aff-01_0h
                         up/up    20:08:00:a0:98:aa:5c:9f
                                                     aff-01        0h      true
            SAN_SVM_FCLIF_aff-02_0e
                         up/down  20:00:00:a0:98:aa:5c:9f
                                                     aff-02        0e      true
            SAN_SVM_FCLIF_aff-02_0f
                         up/down  20:01:00:a0:98:aa:5c:9f
                                                     aff-02        0f      true
            SAN_SVM_FCLIF_aff-02_0g                  
                         up/up    20:04:00:a0:98:aa:5c:9f
                                                     aff-02        0g      true
            SAN_SVM_FCLIF_aff-02_0h
                         up/up    20:06:00:a0:98:aa:5c:9f
                                                     aff-02        0h      true

Original port config:

aff::> ucadmin show
                       Current  Current    Pending  Pending    Admin
Node          Adapter  Mode     Type       Mode     Type       Status
------------  -------  -------  ---------  -------  ---------  -----------
aff-01        0e       cna      target     -        -          online
aff-01        0f       cna      target     -        -          online
aff-01        0g       fc       target     -        -          online
aff-01        0h       fc       target     -        -          online
aff-02        0e       cna      target     -        -          online
aff-02        0f       cna      target     -        -          online
aff-02        0g       fc       target     -        -          online
aff-02        0h       fc       target     -        -          online

After changing the CNA ports to FC target:

aff::> ucadmin show
                       Current  Current    Pending  Pending    Admin
Node          Adapter  Mode     Type       Mode     Type       Status
------------  -------  -------  ---------  -------  ---------  -----------
aff-01        0e       fc       target     -        -          offline
aff-01        0f       fc       target     -        -          offline
aff-01        0g       fc       target     -        -          online
aff-01        0h       fc       target     -        -          online
aff-02        0e       fc       target     -        -          offline
aff-02        0f       fc       target     -        -          offline
aff-02        0g       fc       target     -        -          online
aff-02        0h       fc       target     -        -          online

 Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
AFF_SAN_DEFAULT_SVM
            SAN_SVM_FCLIF_aff-01_0e
                       down/down  20:03:00:a0:98:aa:5c:9f
                                                     aff-01        0e      true
            SAN_SVM_FCLIF_aff-01_0f
                       down/down  20:05:00:a0:98:aa:5c:9f
                                                     aff-01        0f      true
            SAN_SVM_FCLIF_aff-01_0g
                         up/up    20:07:00:a0:98:aa:5c:9f
                                                     aff-01        0g      true
            SAN_SVM_FCLIF_aff-01_0h
                         up/up    20:08:00:a0:98:aa:5c:9f
                                                     aff-01        0h      true
            SAN_SVM_FCLIF_aff-02_0e
                       down/down  20:00:00:a0:98:aa:5c:9f
                                                     aff-02        0e      true
            SAN_SVM_FCLIF_aff-02_0f
                       down/down  20:01:00:a0:98:aa:5c:9f
                                                     aff-02        0f      true
            SAN_SVM_FCLIF_aff-02_0g                            
                         up/up    20:04:00:a0:98:aa:5c:9f
                                                     aff-02        0g      true
            SAN_SVM_FCLIF_aff-02_0h
                         up/up    20:06:00:a0:98:aa:5c:9f
                                                     aff-02        0h      true
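
A likely recovery path, assuming the cabling, zoning, and switch ports are good: after a ucadmin mode change, the converted adapters typically come back administratively offline and have to be brought up explicitly, followed by the LIFs. The node, adapter, and LIF names below are taken from the output above; verify them before running anything.

aff::> network fcp adapter modify -node aff-01 -adapter 0e -status-admin up
aff::> network fcp adapter modify -node aff-01 -adapter 0f -status-admin up
aff::> network interface modify -vserver AFF_SAN_DEFAULT_SVM -lif SAN_SVM_FCLIF_aff-01_0e -status-admin up

(repeat for the aff-02 adapters and the remaining LIFs)

One more thing worth checking: a CNA port fitted with a 10GbE SFP+ module will stay operationally down in FC mode; it needs an FC SFP to link up against the Brocade switches.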

 


BUILTIN\Administrators not working properly for CIFS share


We have quite a few clusters and 7-Mode boxes in our environment. In an effort to reduce work during future changes, we decided to add our NAS admin domain group (we'll call it Domain\NASAdmins) and our offshore support group (call it Domain\OffShore) to the BUILTIN\Administrators group of our clusters. Then, when adding security to the top-level volumes, we just add \\SVMName\Administrators to the volume with Full Control. That way, everyone in Domain\NASAdmins and Domain\OffShore should have full control over the volumes and qtrees, and if we ever add or change a support group, we don't have to re-push the new group to all 100K+ shares.

But it doesn't work. If we add the domain groups directly to the volume or share, they work as expected. I have tested this on multiple clusters and 7-Mode pairs, all with the same results. I have checked the domain groups and have tried group scopes of both Universal and Global.

 

Edit: I have also tried adding admin domain accounts directly to BUILTIN\Administrators, with the same outcome.

 

All CIFS access and authentication is working properly for all shares, and has been for a long time. This is just a new change we are trying to make.
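
A check worth running first (a sketch; the SVM name is a placeholder) is to confirm the domain groups actually appear as members of the local group:

cluster::> vserver cifs users-and-groups local-group show-members -vserver SVM1 -group-name BUILTIN\Administrators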

 

(attached image: builtin_admins.jpg)

High cpreads compared to writes


Hello,

 

We're seeing some latency issues on some LUNs used by SQL. Looking at the statit disk statistics for the holding aggregate, the disks aren't heavily utilized (around 50%), but I noticed the cpreads figure is more than double the writes and about half the ureads.

 

Do you think this is a case for reallocation on the LUNs?
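
Measuring first is probably the right move. A sketch of the 7-Mode/nodeshell reallocate commands; the LUN path is a placeholder:

> reallocate measure -o /vol/sqlvol/lun0
> reallocate start -f /vol/sqlvol/lun0      (only if the measured layout is poor)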

Not enough destination disk space for the SVM DR


Hi,

 

 

I am setting up SVM DR vservers, but the destination cluster has limited disk space. During the SVM DR procedure, can I:

1) Preselect which volume(s) get SnapMirrored to the destination cluster?

2) Manually select which aggregate(s) are used as the destination aggregates?

I have ONTAP 8.3.2P2.
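
As far as I recall, 8.3.2 allows both; treat the following as a sketch to verify, with placeholder vserver and aggregate names:

1) On the source, mark any volume you want excluded before creating the SVM DR relationship:

source::> volume modify -vserver svm1 -volume vol_skip -vserver-dr-protection unprotected

2) On the destination, restrict the DR vserver's aggr-list so its volumes land only on aggregates with free space:

dest::> vserver modify -vserver svm1_dr -aggr-list aggr_small1,aggr_small2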

 

 

Thanks,

 

Chi

 

Are fs_size_fixed and fractional reserve changes nondisruptive?


I have a vFiler root volume that is 5GB, and I want to resize it to under 500MB because it's using very little space. I got the 'fs_size_fixed' error when attempting the resize. I've never seen this on anything other than a SnapMirror destination volume, so I'm not sure how that setting got applied to this vFiler root volume, but it did.

 

I also noticed fractional reserve was set to 100. On all other vFiler root volumes I have it set to 0. Again, not sure how this happened, but it did.

 

I would like to change both settings. Would either be disruptive to a CIFS vFiler?

 

We are on Data ONTAP 8.2.4P4, 7-Mode.
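
For what it's worth, both settings are ordinary vol options in 7-Mode and changing them is an online operation, so it should be nondisruptive (volume name is a placeholder; test on a non-production vFiler first if you can):

> vol options vf_root fs_size_fixed off
> vol options vf_root fractional_reserve 0
> vol size vf_root 500m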

Automate Shutdown


I need a way to automate a shutdown of my FAS2240, DS2246 shelf, and FAS2220 DR system. I want this to happen when we lose power. I have a Tripp Lite UPS solution, and I recently set it up to shut down my vCenter environment when on battery. However, I understand NetApp officially supports only APC. Is there any way to accomplish this?
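
Most UPS management packages, Tripp Lite PowerAlert included, can run an arbitrary script on an on-battery event, so one option is an SSH one-liner per controller. A rough sketch; the hostnames and credentials are placeholders, and the halt syntax depends on whether the systems run 7-Mode or clustered ONTAP:

# 7-Mode pair: disable takeover first, then halt each head
ssh root@fas2240a "cf disable"
ssh root@fas2240a "halt"
ssh root@fas2240b "halt"

# clustered ONTAP equivalent
ssh admin@cluster1 "system node halt -node * -inhibit-takeover true"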

 

Thanks in advance.

snapmirror replication transfer failed to complete


SnapMirror failed to update, giving the error "replication transfer failed to complete."

I have checked everything on both source and destination. It is a VSM relationship, and space, host entries, ONTAP versions, the snapmirror.conf entry, and host access are not the issues.

 

I am not able to telnet filer-to-filer on port 10566 to check the port status.

 

Any ideas?

 

Destination: Data ONTAP Release 8.1.2P4 7-Mode
Source:      Data ONTAP Release 7.3.7P3
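
A few things worth checking (general suggestions; filer names are placeholders). On the source, confirm the destination is allowed to connect and that SnapMirror is enabled:

src> options snapmirror.access
src> options snapmirror.enable

(If access is set to legacy, check /etc/snapmirror.allow instead.) On the destination, the long status output usually carries the detailed transfer error:

dst> snapmirror status -l

Port 10566 only answers on the source while snapmirror.enable is on, which may explain the failed telnet test.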

 

 

 

Need help choosing between Flash Pool and storage pool


Hi Guys,

 

We are seeing a lot of latency issues in my environment. I have 12 x 400GB SSD drives and want to configure Flash Pool with them, but I am confused: I have also gone through the storage pool concept, and both technologies can be used to increase aggregate read/write performance. Can anyone suggest whether I should go with Flash Pool or a storage pool?

We have two SAS aggregates on my clustered Data ONTAP HA pair, and I want to convert these two aggregates to Flash Pools. Please help me with this.
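
These two aren't really competing options: a storage pool is just the mechanism that splits a set of SSDs into allocation units so that more than one Flash Pool aggregate (for example, one per node) can share them. A sketch, assuming clustered ONTAP 8.3 or later; all names are placeholders:

cluster::> storage pool create -storage-pool sp1 -disk-count 12
cluster::> storage aggregate modify -aggregate sas_aggr_n1 -hybrid-enabled true
cluster::> storage aggregate add-disks -aggregate sas_aggr_n1 -storage-pool sp1 -allocation-units 2

Repeat the modify/add-disks pair for the second node's SAS aggregate with the remaining allocation units.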

 

Thanks in advance.


Script needed - Which CIFS shares are also exported to Linux?


We are in the process of migrating a few hundred SVMs and vFilers to LDAP, and as part of the project I am being asked to gather a list for our Linux support group. They want a list of all CIFS shares that are ALSO exported. I have been digging through things for a while, and here is what I believe needs to be done.

Cluster Mode

1) Gather all namespaces that do not have the default export policy (this works for us, as we never use default)

2) Gather a list of all export policies

3) Gather a list of all CIFS shares

This shouldn't be too bad, but it will be time-consuming because it covers about 100 cluster/7-Mode HA pairs.

Then it is really just a matter of working the data in Excel to determine the overlaps.

 

BUT: we do have OCUM and DFM running, along with WFA, so I can also connect to the MySQL database and cross-reference there, but this is where I am getting lost.

 

Just so that I am not reinventing the wheel here, does anyone have a script or MySQL SELECT statement already made?
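
Not a finished script, but a rough PowerShell Toolkit sketch of the cluster-mode side, assuming the "non-default export policy means exported" convention above; the property paths and matching logic should be verified against your Toolkit version:

Import-Module DataONTAP
Connect-NcController cluster1 -Credential (Get-Credential)

# Volumes whose export policy is not "default" (the marker for exported data)
$exported = Get-NcVol | Where-Object { $_.VolumeExportAttributes.Policy -ne 'default' }

# Report CIFS shares whose path falls under one of those volumes' junction paths
foreach ($share in Get-NcCifsShare) {
    foreach ($vol in $exported | Where-Object { $_.Vserver -eq $share.Vserver }) {
        $jp = $vol.VolumeIdAttributes.JunctionPath
        if ($jp -and $share.Path -like "$jp*") {
            [pscustomobject]@{
                Vserver = $share.Vserver
                Share   = $share.ShareName
                Path    = $share.Path
                Volume  = $vol.Name
                Policy  = $vol.VolumeExportAttributes.Policy
            }
        }
    }
}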

old snapshots after snap sched is set 0


If a volume already has a snap schedule set and contains snapshots taken per that schedule, and I then disable the snap schedule (set it to 0), will the existing snapshots be deleted automatically?

And if so, why?
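
As far as I know, no: setting the schedule to 0 only stops new scheduled snapshots from being created, which also stops the normal rotation, so the existing copies stay until you delete them (or snap autodelete reclaims them under space pressure). For example, on 7-Mode:

filer> snap sched vol1 0 0 0
filer> snap list vol1              (the old hourly./nightly. copies are still there)
filer> snap delete vol1 nightly.0  (delete one)
filer> snap delete -a vol1         (or delete them all)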

 

Thanks in advance

SOLVED - How to Match iSCSI LUN ID (uuid) on NetApp to Solaris 10 device file


Couldn't find this anywhere. If you create a bunch of LUNs for a ZFS zpool and need to know which is which to build your vdevs in the right order/arrangement, identifying each LUN is very important.

 

iscsiadm list target -vS

...

LUN: 1
Vendor: NETAPP
Product: LUN C-Mode
OS Device Name: /dev/rdsk/c1tXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX66d0s2
LUN: 0
Vendor: NETAPP
Product: LUN C-Mode
OS Device Name: /dev/rdsk/c1tXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX67d0s2

...

 

Back on the NetApp, you can just look at the LUN ID in the LUN mapping:

> lun mapping show

...

Vserver  Path                     Igroup   LUN ID  Protocol

vserver1 /vol/datalun_a           data     0       iscsi
vserver1 /vol/datalun_b           data     1       iscsi
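
A related cross-check, to the best of my understanding (worth verifying): the long target portion of the Solaris device name is the LUN's NAA identifier, which for clustered ONTAP is 600a0980 followed by the 12-character LUN serial number encoded in hex, so listing the serials gives a second way to match devices to LUNs:

> lun show -vserver vserver1 -fields serial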

Clear Alert on "Alerts and Notification" Panel on System Manager Portal


Hi NetApp team,

 

I don't intend to use the four 10G Ethernet ports or the e0d ports yet, but there is a port-down alert on the "Alerts and Notifications" panel of the System Manager portal. I would like to know how to acknowledge or clear it, since it is not an actual failure and I have to explain the alert to management.
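
If the alert comes from the system health monitor, the CLI can acknowledge or delete it; a sketch, with the node name and alert ID to be taken from the show output:

cluster::> system health alert show
cluster::> system health alert modify -node node1 -alert-id <id> -acknowledge true
cluster::> system health alert delete -node node1 -alert-id <id> -alerting-resource *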

Can we map the same LUN to an FC igroup and an iSCSI igroup?


Hi guys,

 

In my existing environment we use FCP to present LUNs to ESXi servers. We want to remove FCP and are planning to implement iSCSI. Is there any way to run FCP and iSCSI on a single vserver and map the existing LUNs to both the FCP and iSCSI igroups?
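
In general, yes: a single SVM can serve both protocols, and a LUN can be mapped to more than one igroup. A sketch with placeholder names:

cluster::> vserver iscsi create -vserver vs1
cluster::> lun igroup create -vserver vs1 -igroup esx_iscsi -protocol iscsi -ostype vmware -initiator iqn.1998-01.com.vmware:esx01
cluster::> lun map -vserver vs1 -path /vol/esx_vol/lun1 -igroup esx_iscsi

One caution: having the same host access the same LUN over FC and iSCSI at the same time is generally not supported, so plan this as a cutover rather than a mixed steady state.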

.temp files on CIFS shares


Hi, I am the administrator of a cDOT filer for one of my company's customers, and the customer is having an issue with CIFS shares: .temp files are being left in every folder of the share, regardless of the application used. They feel these create a mess and are removing them manually. Is there anything that can be checked or changed on the storage end to change this behavior, or is it more likely an issue somewhere else?

Cannot deploy ONTAP Select


I am installing ONTAP Select version 2.1 (VM version 2.1) on VMware ESXi 6.0.0, build 3073146.

 

When the cluster is being created, I get the following error:

ClusterCreateFailed :  NodeStartErr: Node ontapcluster-1 failed to start: (Cannot 'start' VM ontapcluster-1: NoCompatibleHost).

Can anyone advise what the issue is?


Best practices for a new FAS8000 storage implementation


We are receiving a 4-node NetApp filer cluster. It supports 80TB, and we are separating Finance, HR, Engineering, and Public data, plus workloads for SAN boot of ESXi hosts. We are enabling iSCSI, FCoE, NFS, and CIFS shares.

 

I need best practices before implementing this filer in our organization.

 

  1. What is the best design for non-disruptive operations during upgrades and power-failure events, including a two-node failure (data should be accessible at all times)?
  2. All storage is managed by the storage admins, so is it recommended to create a single SVM, or separate SVMs for the different teams (HR, Finance, etc.)?
  3. Is it recommended to create a single aggregate or multiple aggregates for the different teams?
  4. Is it recommended to create aggregates per protocol (FC, NFS, iSCSI) separately?
  5. From a performance perspective, I need ESXi boot to be as fast as possible; I have around 10 x 2TB SSD drives out of the 80TB raw capacity (see the sketch after this list).
  6. Enabling snapshots.
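
For item 5, one common approach is a dedicated SSD aggregate for the ESXi boot LUNs. A sketch; the names and counts are placeholders, and Flash Pool is an alternative worth weighing:

cluster::> storage aggregate create -aggregate aggr_ssd01 -node node1 -diskcount 10 -disktype SSD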

Ontap 9.1 Upgrade


Hi,

 

We upgraded a two-node MetroCluster to ONTAP 9.1. One node upgraded successfully, but the other node's version still shows 8.3.2.

 

The system boots with 9.1, but after it comes up, the version command returns the following:

clusterA::*> version
NetApp Release 8.3.2: Wed Feb 24 03:29:11 UTC 2016

Info: The output from the version command above may not be correct because
upgrade is in progress or has failed in one or more nodes in the cluster.
Use the "upgrade-revert show" command in advanced mode to view the status
of upgrade.


 

We tried upgrade-revert, but it did not work.
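
The error message itself points at the next diagnostic step; in advanced privilege, the per-node status usually shows which phase is stuck:

clusterA::> set -privilege advanced
clusterA::*> system node upgrade-revert show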

 

I opened a case, but we have not been able to solve it yet.

 

Thanks,

Tuncay

Error creating vserver subtype dp-destination with PowerShell Toolkit


I'm trying to script DR vserver creation on a FAS at our DR site using the PowerShell Toolkit. When I try to create the vserver using New-NcVserver with subtype dp-destination, it prompts me for the root volume information, as shown in the first snippet below. But then it tells me I can't supply anything other than the vserver name, comment, and ipspace. If I supply the root volume information with the command, I get the same error. If I leave the root volume prompts blank, as shown in the second snippet, it comes back with a different error. The cmdlet appears schizophrenic. Any ideas what I'm doing wrong?

 

PS> New-NcVserver -Name dr_01234fs01 -Subtype dp-destination
cmdlet New-NcVserver at command pipeline position 1
Supply values for the following parameters:
(Type !? for Help.)
RootVolume: dr_01234fs01_root
RootVolumeAggregate: aggr1_dd_netapp02_02
RootVolumeSecurityStyle: NTFS
New-NcVserver : Cannot specify options other than Vserver name, comment and ipspace for a Vserver that is being configured as the destination for Vserver DR.
At line:1 char:1
+ New-NcVserver -Name dr_01234fs01 -Subtype dp-destination
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (netapp02.vcpi.com:NcController) [New-NcVserver], NaUnknownErrnoException
    + FullyQualifiedErrorId : ApiException,DataONTAP.C.PowerShell.SDK.Cmdlets.Vserver.NewNcVserver

 

 

 

PS C:\Users\jking> New-NcVserver -Name dr_01234fs01 -Subtype dp-destination
cmdlet New-NcVserver at command pipeline position 1
Supply values for the following parameters:
(Type !? for Help.)
RootVolume:
RootVolumeAggregate:
RootVolumeSecurityStyle:
New-NcVserver : Cannot bind argument to parameter 'RootVolume' because it is an empty string.
At line:1 char:1
+ New-NcVserver -Name dr_01234fs01 -Subtype dp-destination
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidData: (:) [New-NcVserver], ParameterBindingValidationException
    + FullyQualifiedErrorId : ParameterArgumentValidationErrorEmptyStringNotAllowed,DataONTAP.C.PowerShell.SDK.Cmdlets.Vserver.NewNcVserver
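
If the cmdlet keeps contradicting itself, one workaround worth trying is to issue the underlying ZAPI call directly, which lets you send only the allowed fields (note that the subtype uses an underscore at the ZAPI layer; the vserver name is taken from the example above):

Invoke-NcSystemApi '<vserver-create><vserver-name>dr_01234fs01</vserver-name><vserver-subtype>dp_destination</vserver-subtype></vserver-create>'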

snapshot policy question


Are these the commands I would run to create a snapshot policy on the volume "xxxx01_vol1" that takes snapshots every 8 hours and keeps them for 60 days?

What would happen to the snapshots created under the existing policy?

 

#volume snapshot policy create -vserver xxxx01 -policy snappolicy_8hrs -schedule1 8hrs -count1 240 -prefix1 every_8_hour

#volume modify -vserver xxxx01 -volume xxxx01_vol1 -snapshot-policy snappolicy_8hrs
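
One arithmetic note: every 8 hours is 3 snapshots per day, so keeping 60 days means 3 x 60 = 180 copies; -count1 240 would retain roughly 80 days. The "8hrs" schedule also has to exist before the policy references it (job schedule cron show lists what is available). As for the snapshots created under the existing policy, my understanding is that they are not removed when the policy changes; they simply stop rotating and must be deleted manually, for example:

cluster::> volume snapshot delete -vserver xxxx01 -volume xxxx01_vol1 -snapshot <old-snapshot-name>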

 

Ontap 8.3.5 MetroCluster/AIX Hosts

$
0
0

Has anyone had success configuring AIX host MPIO settings in a MetroCluster environment?

Using NetApp Host Utilities 6.0.

NPIV VIO clients used for testing; AIX version 7.1.3.

We are experiencing I/O delays/hangs of approximately 50 seconds during switchover and switchback.

After looking at nsanity logs from the host, I am assuming this is a disk rw_timeout issue.

I am not aware of any way to reduce the 30-second read/write timeout on the LUN.
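
The rw_timeout attribute on the AIX hdisks is tunable, for what it's worth; a sketch (the hdisk number and value are placeholders, and the Host Utilities set these attributes deliberately, so check with NetApp/IBM support before lowering them):

# lsattr -El hdisk2 -a rw_timeout
# chdev -l hdisk2 -a rw_timeout=<seconds> -P    (applied at the next reboot; omit -P only with the disk quiesced)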
