Channel: ONTAP Discussions topics

Questions on vfiler IP spaces and vfiler DR on 7-Mode


Currently I am working on a project for a customer to look at setting up DR for them.

We are running ONTAP 7-Mode 8.2.4P5 on a FAS8080HA. The majority of the NFS workload is on Controller 01 and CIFS on Controller 02, although there are small amounts of NFS/CIFS spread across both controllers. There is currently a vfiler on Controller 02 that serves CIFS shares in a different domain; this vfiler also has a dedicated (non-default) IP space.

At the moment, everything else sits on the root vfiler0 on both controllers. To avoid having to cook up some horrible scripted DR nightmare on the root vfilers, we are looking to leverage the functionality and (relative) simplicity of vfiler DR.

Of course, to be able to do this we now need to tackle the unenviable task of re-organising the existing (flat) storage into vfilers, so we can then replicate the data and config to the DR site and use “vfiler-DR” properly, as it is intended.
 
As this data is all NAS based, and given how tightly the environment is nestled together and intertwined, I am currently thinking of creating at most two vfilers per controller – a CIFS vfiler and an NFS vfiler – to keep things simple.


With the above in mind, I have a few queries that I hope some 7-Mode NetApp-savvy folk could advise on, so we can get on with planning the future layout:

1)    If we create new CIFS and NFS vfilers, instead of creating a dedicated IP space for each vfiler, can we just use the default-ipspace on each hosting NetApp controller? I have read a few bits and pieces online that seem to imply it’s quite possible, but isn’t necessarily the done thing.

2)    If we do use the default-ipspace, would the two vfilers be able to “talk” to one another? The reason I ask is that we have at least a few servers here and there which access both CIFS shares and Unix NFS shares, and I am wondering whether they will be able to use both protocols and reach each vfiler; the vfilers might need to talk to each other as well. There might also be servers wanting to access data on an interface or volume owned by the other vfiler, and I am not sure whether this will work.

 

3)    In the main NFS environment, we use Oracle LDOM technology, and this has a couple of “swap volumes” which contain the Solaris swap data/pagefiles (almost identical in principle, I think, to ESX swap areas). As these are mounted on the Solaris LDOM primaries via NFS (and therefore technically fall into the NFS vfiler category), I am just wondering whether we will need to replicate these volumes via vfiler DR as well, or whether they can be excluded (and perhaps brought up manually in DR?).


My current understanding is that if I were to create a dedicated IP space for each vfiler, they definitely could not talk to each other (as that is the whole idea), but I just wanted to check this approach and seek advice on the cross-talk/communication between the vfilers.
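
For reference, this is roughly how I picture the two options on the 7-Mode CLI (vfiler names, the VLAN interface, IP addresses and volume paths below are just placeholders, not our real config):

Option A - dedicated IP space per vfiler (isolated by design):
filer01> ipspace create ips_nfs
filer01> ipspace assign ips_nfs e0a-100
filer01> vfiler create vf_nfs -s ips_nfs -i 10.10.10.50 /vol/vf_nfs_root

Option B - both vfilers left in the default-ipspace:
filer01> vfiler create vf_nfs -i 10.10.10.50 /vol/vf_nfs_root
filer01> vfiler create vf_cifs -i 10.10.10.51 /vol/vf_cifs_root

Happy to be corrected if the -s ipspace flag or the ipspace assign step isn't quite right - I am working from memory of the docs here.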

Likewise for the Solaris “swap volumes”: my current understanding of vfiler DR is that any and all volumes served by the source vfiler have to be replicated to the other site for them to be brought up in a DR situation; however, this is going to waste a lot of replication bandwidth on swap-file data that we don’t actually need to replicate if we can avoid it.
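
As far as the vfiler DR mechanics go, my understanding is that the basic flow (run from the DR filer) is something like the following - again, the names are placeholders and I'd welcome corrections:

drfiler> vfiler dr configure vf_nfs@filer01
drfiler> vfiler dr status vf_nfs@filer01
drfiler> vfiler dr activate vf_nfs@filer01

It is the configure step that, I believe, pulls in every volume owned by the source vfiler, which is exactly why the swap volumes concern me.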





Simulator Ontap 8.3 to 9.0 upgrade


Hi,

 

Should we be able to upgrade ONTAP 8.3 to 9.0 in cluster mode non-disruptively on the simulator?
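
In other words, can I use the normal automated update workflow on the simulator cluster, something along these lines (the image URL is just a placeholder)?

cluster1::> cluster image package get -url http://webserver/image.tgz
cluster1::> cluster image validate -version 9.0
cluster1::> cluster image update -version 9.0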

 

Thanks,

Ontap 9 select crash after upgrade to esxi 6.5

OS disks , adding disks and Raid


Hi Team,

 

I am new to NetApp. Please help me with the points below.

 

1. How many OS (root) disks are needed, and which RAID type is used for them?

 

As per my knowledge it is 3 disks with RAID-DP, but to configure RAID-DP we need a minimum of 5 disks. Is this correct?

 

2. Can we add more disks to the root aggregate?

 

Yes, I believe we can add more disks to increase its size. Correct me if I am wrong.

 

3. Can we change RAID-DP to RAID4 for the root aggregate?
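
For points 2 and 3, these are the commands I think are relevant (7-Mode syntax, and aggr0 is just an example name) - please confirm whether this is the right approach:

filer> aggr status -r aggr0
filer> aggr add aggr0 2
filer> aggr options aggr0 raidtype raid4

My understanding is that aggr status -r shows the current layout, aggr add grows the aggregate by two spare disks, and the raidtype option switches between RAID-DP and RAID4 - but as I said, I am new to this.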

Data ONTAP DSM Management Service


I wanted to find out whether the Data ONTAP DSM 4.1 for Windows MPIO is compatible with Windows VMs running on VMware ESXi 6.0U2.

 

We have set up iSCSI on the VMs and installed the DSM, but although the volumes are working OK, no volumes appear in the DSM. I just wondered whether this is a supported configuration or if something else is going on.

scripted quota management in Ontap 9


I am migrating from an older NetApp running 7-Mode to ONTAP 9 in cluster mode. Currently I am using the /etc/quotas file in the root volume of the NetApp for end-user quotas, like this:

 

1061 user@/vol/vol_castor 10000000K -
1062 user@/vol/vol_castor 100000K -
1064 user@/vol/vol_castor 100000K -
1065 user@/vol/vol_castor 1000000K -
1066 user@/vol/vol_castor 1000000K -

 

I have about 5000 entries in it. The user quotas are kept in a database, and I have a script that writes out a new quotas file periodically. I see that I can manage quotas in the GUI, but is there any other way to do this in a scriptable or automated way in clustered ONTAP?
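
To give an idea of what I am hoping for: I understand clustered ONTAP has quota policy rules on the CLI, so in principle my script could generate commands like the ones below and push them over SSH. The vserver, policy and volume names here are just guesses at what my future setup will look like:

volume quota policy rule create -vserver svm1 -policy-name default -volume vol_castor -type user -target 1061 -qtree "" -disk-limit 10000000KB
volume quota policy rule modify -vserver svm1 -policy-name default -volume vol_castor -type user -target 1062 -qtree "" -disk-limit 100000KB
volume quota resize -vserver svm1 -volume vol_castor

Is that a sensible way to do it at this scale (~5000 rules), or is there a better interface (ZAPI or the PowerShell Toolkit) for bulk quota changes?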

 

I am not able to use the 7-mode migration tool to migrate my data.

 

Thanks,

Luke

ONTAP Select is running on an ESXi host which cannot connect to it over iSCSI


I created a three-host ESXi environment. One of the ESXi 6 hosts had a local SSD, and I deployed ONTAP Select on that local SSD. I then created an iSCSI LUN on this ONTAP Select VM. The other two ESXi servers can connect to this ONTAP VM over iSCSI, but the ESXi host running it cannot connect to it itself.

LOG_ALERT email for non-redundant node management LIF - why?


I'm getting LOG_ALERT emails from a dual-node FAS2552 system nagging me about a non-redundant port, but that port is the node management LIF for node 02, which (as I understand it) is not intended to be failed over. Curiously, I never get these about node 01.

 

Is there an actual problem that I need to fix? Or is there a way to just make the alert go away?

 

TOASTER::network interface> show -instance -lif TOASTER-02_mgmt1

                    Vserver Name: TOASTER
          Logical Interface Name: TOASTER-02_mgmt1
                            Role: node-mgmt
                   Data Protocol: -
                       Home Node: TOASTER-02
                       Home Port: e0M
                    Current Node: TOASTER-02
                    Current Port: e0M
              Operational Status: up
                 Extended Status: -
                         Is Home: true
                 Network Address: 10.2.1.111
                         Netmask: 255.255.255.0
             Bits in the Netmask: 24
                 IPv4 Link Local: -
                     Subnet Name: MGMTnet
           Administrative Status: up
                 Failover Policy: local-only
                 Firewall Policy: mgmt
                     Auto Revert: true
   Fully Qualified DNS Zone Name: none
         DNS Query Listen Enable: false
             Failover Group Name: Default Network
                        FCP WWPN: -
                  Address family: ipv4
                         Comment: -
                  IPspace of LIF: Default
  Is Dynamic DNS Update Enabled?: -

TOASTER::network interface> show -instance -lif TOASTER-01_mgmt1

                    Vserver Name: TOASTER
          Logical Interface Name: TOASTER-01_mgmt1
                            Role: node-mgmt
                   Data Protocol: -
                       Home Node: TOASTER-01
                       Home Port: e0M
                    Current Node: TOASTER-01
                    Current Port: e0M
              Operational Status: up
                 Extended Status: -
                         Is Home: true
                 Network Address: 10.2.1.121
                         Netmask: 255.255.255.0
             Bits in the Netmask: 24
                 IPv4 Link Local: -
                     Subnet Name: MGMTnet
           Administrative Status: up
                 Failover Policy: local-only
                 Firewall Policy: mgmt
                     Auto Revert: true
   Fully Qualified DNS Zone Name: none
         DNS Query Listen Enable: false
             Failover Group Name: Default Network
                        FCP WWPN: -
                  Address family: ipv4
                         Comment: -
                  IPspace of LIF: Default
  Is Dynamic DNS Update Enabled?: -

Missing disks DS14MK4


I have an old NetApp running ONTAP 7.3.

The system has not been used for some time now; before it was shut down, it was working fine.

 

Now when I try to start it, it boots with the error that there are not enough spare disks for the aggregate.

After some debugging, I found that all the disk LEDs are flashing green except for 2, which stay solid green.

 

When I run "show disk -v" the missing disks are not displayed, but when I do "sysconfig -a" the missing disks are displayed together with all other disks.

The weird thing is that they are shown as 0.0GB 0B/sect instead of 272.0GB 520B/sect.

 

Does this mean that the disks are broken and should be replaced?

I would expect an amber LED when a disk is malfunctioning, or is that a wrong assumption?

 

Thanks!

 

 

 

 

 

 

Ontap Select vSphere License Prerequisite


Hi all,

 

To set up a supported version of ONTAP Select, I've found that there is a set of very strict hardware prerequisites. That I can understand, to avoid poorly performing Select instances.


But this I don't understand at all:

 

ONTAP Select is supported on VMware Vsphere Enterprise or Enterprise+ license only. ONTAP Select is a virtual storage appliance and requires certain capabilities from the underlying hypervisor. You would need either Vsphere Enterprise or Enterprise+ license

 

Why Enterprise or Enterprise+? 
Imagine using Select in a branch office: in my experience, I have never met a customer using such a high-end license for a small vSphere cluster at a peripheral site.

 

Why not give me the choice to install a single-node (or also multi-node) version of Select on the simple free ESXi hypervisor, or on a Standard edition of vSphere?

 

Regards,

 

 

 

Moving a volume from a 32bit aggregate to a 64bit aggregate


Hi

 

We are currently running ONTAP 8.1.4P8 on our 7-Mode filer. In the filer we have one 32-bit aggregate, which is pretty much 100% full, and a 64-bit aggregate with plenty of space.

 

I am planning to move one of the volumes from the 32-bit aggregate over to the 64-bit aggregate using the following method:

 

  1. Create a volume on the 64bit aggr the same size as the source volume on the 32bit aggr.
  2. Create a snapmirror relationship with 32bit volume as the source and the 64bit volume as the destination.
  3. Start the snapmirror relationship so the data can be copied from source to destination.
  4. Terminate CIFS on the vfiler (outage arranged and users informed).
  5. Run a final update so both source and destination are in sync.
  6. Break the snapmirror relationship so the destination volume is accessible.
  7. Take a copy of the /etc/cifsconfig_share.cfg from the root volume of the vfiler.
  8. Rename source volumes to _ORIG
  9. Rename destination volumes to what the original volumes were called
  10. Take a copy of the /etc/cifsconfig_share.cfg and rename the file to /etc/cifsconfig_share_POST_VOLUME_change.cfg
  11. Copy the version of /etc/cifsconfig_share.cfg taken in step 7 and make it the current version.
  12. Restart CIFS.
  13. Test share access.

 

The volume has many CIFS shares configured at the filer end, so copying /etc/cifsconfig_share.cfg back seems the easiest way of doing it.
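
For reference, the commands I intend to use for steps 1 to 6 look roughly like this (volume and aggregate names are examples, not the real ones, and I am assuming the destination volume has to be restricted before the initial transfer):

filer> vol create vol_data_new aggr64 500g
filer> vol restrict vol_data_new
filer> snapmirror initialize -S filer:vol_data filer:vol_data_new
filer> snapmirror update -S filer:vol_data filer:vol_data_new
filer> snapmirror break vol_data_new
filer> vol rename vol_data vol_data_ORIG
filer> vol rename vol_data_new vol_data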

 

Please let me know if the above looks OK - I have tested it on a test volume and was able to browse the CIFS shares after the change.

 

The volume only holds CIFS shares - no NFS present.

interconnect error


Hi,

 

We have a MetroCluster system and are getting interconnect error messages.

 

[Node02:monitor.globalStatus.critical:CRITICAL]: Controller failover of Node01 is not possible: unsynchronized log.
[Node02:scsitarget.vtic.down:notice]: The VTIC is down.
[Node02:cf.fsm.takeoverOfPartnerDisabled:error]: Failover monitor: takeover of Node01disabled (interconnect error).
[Node02:scsitarget.vtic.up:notice]: The VTIC is up.

 

ONTAP keeps giving that message.

 

We checked the switch config and both FCVI ports show as healthy; we then checked the cables and SFPs, but none of them has a problem.

 

We also opened a case but still could not find out why we are getting these messages.

 

 

Any ideas ?

 

Thanks,

Tuncay

What is the maximum size of an aggregate in NetApp 7-Mode?


Hi All,

 

I am planning to extend the aggregate NAN01, which is 39 TB in size and 92% utilized; the details are below.

Kindly suggest whether this is feasible.

 

Aggregate                total       used      avail capacity
NAN01                     39TB       36TB     3063GB      92%

 

netappXX> version
NetApp Release 8.0.4 7-Mode: Wed Sep  5 10:55:50 PDT 2012

 

Aggregate NAN01 (online, raid_dp) (block checksums)
  Plex /NAN01/plex0 (online, normal, active)
    RAID group /NAN01/plex0/rg0 (normal)

      RAID Disk Device          HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
      --------- ------          ------------- ---- ---- ---- ----- --------------    --------------
      dparity   3b.01.0         3b    1   0   SA:B   -  BSAS  7200 1695466/3472315904 1695759/3472914816
      parity    0a.00.3         0a    0   3   SA:A   -  BSAS  7200 1695466/3472315904 1695759/3472914816
      data      3b.01.1         3b    1   1   SA:B   -  BSAS  7200 1695466/3472315904 1695759/3472914816
      data      0a.00.4         0a    0   4   SA:A   -  BSAS  7200 1695466/3472315904 1695759/3472914816
      data      3b.01.2         3b    1   2   SA:B   -  BSAS  7200 1695466/3472315904 1695759/3472914816
      data      0a.00.5         0a    0   5   SA:A   -  BSAS  7200 1695466/3472315904 1695759/3472914816
      data      3b.01.3         3b    1   3   SA:B   -  BSAS  7200 1695466/3472315904 1695759/3472914816
      data      0a.00.6         0a    0   6   SA:A   -  BSAS  7200 1695466/3472315904 1695759/3472914816
      data      3b.01.4         3b    1   4   SA:B   -  BSAS  7200 1695466/3472315904 1695759/3472914816
      data      0a.00.7         0a    0   7   SA:A   -  BSAS  7200 1695466/3472315904 1695759/3472914816
      data      3b.01.5         3b    1   5   SA:B   -  BSAS  7200 1695466/3472315904 1695759/3472914816
      data      0a.00.8         0a    0   8   SA:A   -  BSAS  7200 1695466/3472315904 1695759/3472914816
      data      3b.01.6         3b    1   6   SA:B   -  BSAS  7200 1695466/3472315904 1695759/3472914816
      data      0a.00.9         0a    0   9   SA:A   -  BSAS  7200 1695466/3472315904 1695759/3472914816

    RAID group /NAN01/plex0/rg1 (normal)

      RAID Disk Device          HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
      --------- ------          ------------- ---- ---- ---- ----- --------------    --------------
      dparity   3b.01.7         3b    1   7   SA:B   -  BSAS  7200 1695466/3472315904 1695759/3472914816
      parity    0a.00.10        0a    0   10  SA:A   -  BSAS  7200 1695466/3472315904 1695759/3472914816
      data      3b.01.8         3b    1   8   SA:B   -  BSAS  7200 1695466/3472315904 1695759/3472914816
      data      0a.00.11        0a    0   11  SA:A   -  BSAS  7200 1695466/3472315904 1695759/3472914816
      data      3b.01.9         3b    1   9   SA:B   -  BSAS  7200 1695466/3472315904 1695759/3472914816
      data      0a.00.12        0a    0   12  SA:A   -  BSAS  7200 1695466/3472315904 1695759/3472914816
      data      3b.01.10        3b    1   10  SA:B   -  BSAS  7200 1695466/3472315904 1695759/3472914816
      data      0a.00.13        0a    0   13  SA:A   -  BSAS  7200 1695466/3472315904 1695759/3472914816
      data      3b.01.11        3b    1   11  SA:B   -  BSAS  7200 1695466/3472315904 1695759/3472914816
      data      0a.00.14        0a    0   14  SA:A   -  BSAS  7200 1695466/3472315904 1695759/3472914816
      data      3b.01.12        3b    1   12  SA:B   -  BSAS  7200 1695466/3472315904 1695759/3472914816
      data      0a.00.15        0a    0   15  SA:A   -  BSAS  7200 1695466/3472315904 1695759/3472914816
      data      3b.01.13        3b    1   13  SA:B   -  BSAS  7200 1695466/3472315904 1695759/3472914816
      data      0a.00.16        0a    0   16  SA:A   -  BSAS  7200 1695466/3472315904 1695759/3472914816
      data      3b.01.14        3b    1   14  SA:B   -  BSAS  7200 1695466/3472315904 1695759/3472914816
      data      0a.00.17        0a    0   17  SA:A   -  BSAS  7200 1695466/3472315904 1695759/3472914816
      data      3b.01.15        3b    1   15  SA:B   -  BSAS  7200 1695466/3472315904 1695759/3472914816

Periodic very low performance on cluster node


Hello everybody,

I'd like to ask whether any of you experienced the same problem as me, and if a solution/troubleshooting procedure is known for this problem.

 

We have two FAS2554 systems running ONTAP 8.3P2 in a cluster (4 nodes). Each node has one big aggregate (plus the ONTAP "service" aggregate), for a total of 4 aggregates for data storage.

One SVM is dedicated to providing iSCSI LUNs to Windows Server 2012 R2 virtual machines and is hosted on aggregate 3. iSCSI LUNs are distributed on the 4 aggregates using 1 volume for each LUN.

 

I noticed that virtual machines using iSCSI LUNs hosted on aggregate 3 experience periods of very low disk access performance.

I installed Harvest+Graphite+Grafana and noticed that periodically aggregate 3 shows very low IOPS, throughput and latency. At the same time it shows very low reads from HDDs and very high reads from RAM. The behavior seems very regular, with anomaly periods appearing approximately every 100 minutes and lasting about 30 minutes.

I attach a couple of graphs taken from Grafana.

 

Does anybody have an idea of what's happening?

 

Many thanks in advance!

Regards

 

"not a directory" when cd to a top level directory in the namespace


Hi,

 

Client: CentOS 5.11

NetApp: ONTAP 9.1RC1

 

I have a Linux client running CentOS 5.11, and I have been able to browse /net/svm/toplevel_dir/ via automount for some time. However, recently I have been getting 'not a directory' when I ls or cd into any of the directories right below the SVM. For example, one of the top-level directories under the SVM's root is projects:

 

# ls /net/svm/projects/
ls: /net/svm/projects/: Not a directory

 

However, it works fine if:

 

* I manually mount svm:/projects on this Linux client (exact commands below)

or

* I browse from another CentOS 5.11 client with the same configuration
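
The manual mount I tested in the first case was simply something like this (the mount point is just a scratch directory), and it lists the contents fine:

# mount -t nfs svm:/projects /mnt/test
# ls /mnt/test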

 

The export of "projects" allows r/w for this and other clients and no root squash.

 

Please let me know if you have any suggestions on how to fix this.

 

Thanks,

 

 


Reboot needed after change of an option


ONTAP 8.1.1 7-Mode: I want to change the option disk.target_port.cmd_queue_depth from 256 to 8 on a V3270 filer (serial: 200000694455). Do I need to reboot the filer or perform a takeover/giveback to activate this, or does it take effect as soon as I enter the command?

 

options disk.target_port.cmd_queue_depth 8
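
To verify afterwards, I assume I can simply read the option back, since running options with just the option name prints its current setting:

options disk.target_port.cmd_queue_depth

That only tells me the stored value though, not whether it is actually active - hence the question.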

 

 

Whitepaper or best practice for Microsoft Cluster service using a File share witness


A customer wants to install an MS Windows 2012 cluster in a VMware (ESXi 5.5) environment using a file share witness as the quorum. The storage is a V6240 (serial: 200000635057) running NetApp ONTAP 8.2.4P5 (7-Mode). I need to know whether this is supported and, if yes, under which conditions.


Do you have a whitepaper or best practice document for that?

 

Thanks Carsten

unable to SSH without specifying algorithm


After completing the recommended changes to our filer, we can't just SSH to either controller without specifying the algorithm to use.

 

https://kb.netapp.com/support/s/article/ka31A0000000yGnQAI/how-to-disable-sslv2-and-sslv3-in-data-ontap-for-cve-2016-0800-and-cve-2014-3566?language=en_US

 

FAS2220 8.1.1 7-mode

 

If you try to SSH to either controller on the shelf, you see the following:

Unable to negotiate with IP_ADDRESS port 22: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1

 

However, using this option works 100% of the time:

> ssh -oKexAlgorithms=+diffie-hellman-group1-sha1 user@filer

 

We're mostly a Mac shop so I usually SSH from Mac, currently 10.12.3
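
As a stop-gap I have been thinking of putting something like this in ~/.ssh/config on the Macs (the host names are examples), but I'd rather understand the proper fix on the filer side:

Host filer1 filer2
    KexAlgorithms +diffie-hellman-group1-sha1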

 

 

 

Options ssh

 

ssh.access *
ssh.enable on
ssh.idle.timeout 600
ssh.passwd_auth.enable on
ssh.port 22
ssh.pubkey_auth.enable on
ssh1.enable off
ssh2.enable on

Folder not showing on CIFS


Hi,

 

I have noticed some strange behaviour regarding folders inside shares hosted on a CIFS SVM that is running on a 2-node clustered FAS2552.

 

I have deployed the user home folders on this CIFS SVM. Recently a user requested some data from administration that should be stored in his home drive, so I created a new folder and put the data into the user's folder (\\filer\home$\someuser\datafolder). When the user opened his home drive (mapped via Active Directory at logon), he did not see the folder and was not able to access it directly (neither via \\filer\home$\someuser\datafolder nor via h:\datafolder). After a few minutes, the folder appeared for the user.

 

Is this expected behavior with cDOT due to the clustering of data? If not, what might cause this effect?

 

Kind regards,

Sascha

Can I create/assign multiple interfaces (in different VLANs) to a single SVM?


Hi,

 

I've been working with 7-Mode for years, but we just bought some cDot 9.x FAS2650 filers. I'm having problems configuring an SVM the way I'd like. cDot and SVMs are a big change for me.

 

We have the same data (volumes) that we want to export/serve via NFS/CIFS to clients on multiple VLANs. We don't want to route the NFS traffic (which would also go through a firewall in our case), so in 7-Mode I've typically created VIFs on a trunked port carrying multiple VLANs. I'm trying to replicate this on an SVM. I tried without subnets and IPspaces, but stopped, scratching my head. I then created multiple subnets, broadcast domains and IPspaces with the goal of exporting the SVM's data to multiple subnets, but I can't seem to assign an IP/subnet/IPspace to the SVM where it has already been assigned an IP/subnet/IPspace.

 

Essentially, the volume in an SVM would be available to interfaces on (let's say) VLAN10, VLAN12 and VLAN200. Again, the technical reason for doing this is to avoid routing and going through our network team's inter-VLAN firewalls.
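
For context, the sort of configuration I have in mind looks something like this - the node, port, VLAN and address values are made up, this is just how I imagine it should work, so please correct me:

network port vlan create -node node-01 -vlan-name a0a-10
network port broadcast-domain create -ipspace Default -broadcast-domain bd_vlan10 -mtu 1500 -ports node-01:a0a-10
network interface create -vserver svm1 -lif lif_vlan10 -role data -data-protocol nfs,cifs -home-node node-01 -home-port a0a-10 -address 10.0.10.50 -netmask 255.255.255.0

...and then the same again for VLAN12 and VLAN200, with all the LIFs belonging to the one SVM in the same IPspace.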

 

 

Is this even doable in cDot 9.x? If so, can someone provide some pointers?

 

Thanks in advance.
