This command doesn't work from any node in one cluster, but works in the other cluster.
Can experts please advise which LIFs will be used by this command, or what else I should look into?
Thanks!
Hello all,
I need assistance with the NetApp PowerShell Toolkit. I need a command that would be equivalent to:
environment status chassis all
Please assist if you are able to.
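One possible approach, as a sketch only: I am not aware of a dedicated cmdlet for chassis environment status, so this assumes the DataONTAP module and passes the nodeshell command through SSH. The address and credentials are placeholders.

```
# Sketch: assumes the NetApp PowerShell Toolkit (DataONTAP module) is installed.
Import-Module DataONTAP

# Connect to the cluster management LIF (placeholder address).
Connect-NcController 192.0.2.10 -Credential (Get-Credential)

# Pass the nodeshell command through, since a dedicated cmdlet may not exist:
Invoke-NcSsh -Command 'system node run -node * -command "environment status chassis all"'
```

On a 7-Mode system the equivalent would use Connect-NaController and Invoke-NaSsh with the command run directly.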
Hi
I ran the command below to check the latency on a volume and noticed the network component is high. What does this mean? Network congestion?
'qos statistics volume latency show -vserver <vserver> -volume <volume>'
Hello,
We have been using a NetApp FAS2040 in HA mode with two controllers and 33 disks (500 GB/1 TB/2 TB), providing an effective size of 11 TB. The NetApp operates on a 1 Gb network with NFS connectivity.
We are looking for a new mid-range solution, as the FAS2040 is out of warranty and support. The recommendation we received is a FAS2720 with 12 x 2 TB disks.
1. Firstly, I would like to know whether this suggested upgrade is practical. Does anyone have experience with the FAS2720 or the FAS2700 series?
2. I feel 24 TB will not be sufficient, considering that NetApp's RAID-DP will have disks reserved for parity, plus some disks will be kept as spares as well. So let's say 3 disks for spares and 3 for RAID-DP, giving an approximate size of 6 or 7 x 2 TB ~ 14 TB. I feel we should instead go for 4 TB disks, giving us 12 x 4 TB ~ 48 TB raw, or effectively 6 x 4 TB = 24 TB.
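To sanity-check the capacity arithmetic in point 2, here is a quick sketch. The two parity disks per RAID-DP group, the spare count, and the 0.9 right-sizing/overhead factor are my assumptions; actual usable space depends on raid group layout, WAFL reserve, and disk right-sizing.

```python
def usable_tb(total_disks, disk_tb, parity=2, spares=2, rightsize=0.9):
    """Rough usable capacity: subtract RAID-DP parity and spares,
    then apply an assumed right-sizing/overhead factor."""
    data_disks = total_disks - parity - spares
    return data_disks * disk_tb * rightsize

print(usable_tb(12, 2))  # 12 x 2 TB configuration
print(usable_tb(12, 4))  # 12 x 4 TB configuration
```

With these assumptions, 12 x 2 TB lands around 14 TB usable and 12 x 4 TB around 29 TB, in the same ballpark as the estimates above.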
I would appreciate your comments/suggestions on this topic. I am currently reading the datasheet of FAS2720.
I will update this thread with more questions as needed. Thanks in advance.
Regards,
admin
We recently upgraded from 9.2P4 to 9.3P10 on our 4-node AFF8080 cluster. One of the features I wanted to implement was volume-level background deduplication. I removed scheduling from every volume and set everything to the auto policy. I thought it was great that I wouldn't have to manage scheduling of deduplication jobs anymore.
After several weeks, I'm noticing a sharp uptick in the number of volumes alerting that they are running out of space. In each case, my first instinct is to run a quick manual deduplication job just to make sure I really need to resize the volume. In every case so far, the alerting volumes were "deprioritized" by the auto policy so I couldn't even run dedupe manually without promoting the volume.
As I reviewed this situation, I noticed how the "auto" policy actually works. I thought it effectively eliminated the need for scheduled deduplication, i.e. that each volume would just do inline dedupe/compression and get the same benefits it previously had from inline plus scheduled dedupe. What I discovered was regular deduplication jobs running at very random times (in addition to the inline dedupe). Those random jobs might run hours after the nightly backup, so they miss some of the savings they would have achieved if run before snapshots were generated.
The last straw was this morning, when one of my VMware datastore volumes alerted that it was low on space. Even it was deprioritized, and these volumes have the highest rate of dedupe/compression savings on our cluster. Although I don't want to, I'm starting to think I need to revert this feature and go back to scheduling.
Does anyone have insight into this issue? In particular, are there any improvements to this feature in 9.4 or 9.5? Any suggestions or feedback is appreciated!
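If reverting ends up being the answer, a sketch of moving one volume back to a scheduled policy (vserver and volume names are placeholders; verify the syntax on your release):

```
::> volume efficiency modify -vserver svm1 -volume vol1 -policy default
::> volume efficiency start -vserver svm1 -volume vol1 -scan-old-data true
```

The initial `-scan-old-data` pass reprocesses existing blocks, which should recover the savings missed by the randomly timed auto jobs.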
Anyone have steps on how to prep a previously used (NetApp) disk with the v7.2.4 ONTAP ?
I am looking for the steps to relabel replacement "used" disks, under ONTAP 7.2.4. (I know, I know!)
Sure enough, I lost two disks in two separate arrays (all within a 4 day time period!)
The Controller shut itself down before I could fly out to replace the disks.
I swapped out the disks with "refurbished" ones from ServerSupply, and went into Maintenance Mode.
Performed "disk assign all" and restarted.
I ran "aggr status -f" because all of the new disks complain about not having any valid labels!
*> aggr status -f
Thu Apr 4 20:31:14 GMT [raid.assim.disk.nolabels:error]: Disk 0c.35 Shelf 2 Bay 3 [NETAPP X279_HVIPB288F15 NA01] S/N [J8WM0PEC] has no valid labels. It will be taken out of service to prevent possible data loss.
Thu Apr 4 20:31:14 GMT [raid.assim.disk.nolabels:error]: Disk 0b.59 Shelf 3 Bay 11 [NETAPP X279_HVIPB288F15 NA01] S/N [J8WEGPHC] has no valid labels. It will be taken out of service to prevent possible data loss.
Thu Apr 4 20:31:15 GMT [raid.assim.disk.nolabels:error]: Disk 0b.54 Shelf 3 Bay 6 [NETAPP X279_HVIPB288F15 NA01] S/N [J8WHSSPC] has no valid labels. It will be taken out of service to prevent possible data loss.
Thu Apr 4 20:31:15 GMT [raid.assim.disk.nolabels:error]: Disk 0b.42 Shelf 2 Bay 10 [NETAPP X279_HVIPB288F15 NA01] S/N [J8WKVY2C] has no valid labels. It will be taken out of service to prevent possible data loss.
Thu Apr 4 20:31:15 GMT [raid.config.disk.bad.label:error]: Disk 0c.35 Shelf 2 Bay 3 [NETAPP X279_HVIPB288F15 NA01] S/N [J8WM0PEC] has bad label.
Thu Apr 4 20:31:15 GMT [raid.config.disk.bad.label:error]: Disk 0b.59 Shelf 3 Bay 11 [NETAPP X279_HVIPB288F15 NA01] S/N [J8WEGPHC] has bad label.
Thu Apr 4 20:31:15 GMT [raid.config.disk.bad.label:error]: Disk 0b.54 Shelf 3 Bay 6 [NETAPP X279_HVIPB288F15 NA01] S/N [J8WHSSPC] has bad label.
Thu Apr 4 20:31:15 GMT [raid.config.disk.bad.label:error]: Disk 0b.42 Shelf 2 Bay 10 [NETAPP X279_HVIPB288F15 NA01] S/N [J8WKVY2C] has bad label.
Broken disks
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
bad label 0b.42 0b 2 10 FC:B - FCAL 15000 272000/557056000 274845/562884296
bad label 0b.54 0b 3 6 FC:B - FCAL 15000 272000/557056000 274845/562884296
bad label 0b.59 0b 3 11 FC:B - FCAL 15000 272000/557056000 274845/562884296
bad label 0c.35 0c 2 3 FC:A - FCAL 15000 272000/557056000 274845/562884296
No root aggregate or root traditional volume found.
You must specify a root aggregate or traditional volume with
"aggr options <name> root" before rebooting the system.
I don't know what the commands should be to force a new label; my pool is Pool0 and my owner is file01.
Not sure what else it is looking for.
IMPORTANT - I AM IN MAINTENANCE MODE, and unable to work, so a quick response would be very welcome!
Thanks everyone!
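Not an authoritative answer, but one approach that has been used for used disks flagged with bad labels (please verify with NetApp support first, since label operations can destroy data): in maintenance mode, `disk unfail -s` may clear the failed state and return each disk as a spare, after which it can be zeroed and assimilated.

```
*> disk unfail -s 0c.35
*> disk unfail -s 0b.59
*> disk unfail -s 0b.54
*> disk unfail -s 0b.42
*> aggr status -f
```

If the disks still show bad labels after this, the advanced-privilege label commands on that release are the next thing to ask support about.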
Dear All,
Hello,
I need help with NetApp Flash Pool, specifically Flash Pool sizing.
Currently, the system is configured as below.
FAS8060 (2 node Cluster)
1. Aggregate_n1 = 85 TB usable with 1 TB Flash Pool
2. Aggregate_n2 = 85 TB usable with 1 TB Flash Pool
3. FCP use only.
Question 1.
What is the appropriate ratio of usable size to Flash Pool size?
I looked for material, but no appropriate ratio guidance was found.
Question 2.
The hit ratio of the Flash Pool is shown below.
I wonder whether the values below are appropriate or insufficient.
Average Read Hit = 27% (service normal time) ~ 34.1% (Service peak time)
Average Write Hit = 39.8% (service normal time) ~ 42.2% (Service peak time)
It is server virtualization based on VMware, providing various services such as web, DB, and application workloads.
I have tried opening a NetApp SR but have not received the right answer.
I would like to get some advice on your various experiences.
Best Regards
yoon
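For what it's worth, the cache-to-usable ratio implied by the numbers above is easy to compute; this says nothing about whether it is sufficient for the workload.

```python
def cache_ratio_pct(cache_tb, usable_tb):
    """Flash Pool cache size as a percentage of usable aggregate capacity."""
    return 100.0 * cache_tb / usable_tb

# Each aggregate: 1 TB Flash Pool against 85 TB usable
print(round(cache_ratio_pct(1, 85), 2))
```

As far as I know, NetApp's Automated Workload Analyzer (AWA) is the usual way to size Flash Pool against a real workload, rather than applying a fixed ratio.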
Hi,
While trying to copy the image to the cluster for an ONTAP upgrade using FTP, I get the error message "Cannot find the url path specified".
Any tips to fix the issue?
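For context, a sketch of the command form involved (host, credentials, and path are placeholders; a wrong or inaccessible path on the FTP server produces exactly this kind of URL error):

```
::> system node image update -node * -package ftp://user:password@ftp.example.com/images/image.tgz -replace-package true
```

It may be worth confirming the path with a standalone FTP client from the same network first, and checking for anonymous-login or passive-mode restrictions on the server.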
Thanks
Naga
Hi,
Currently our DFM folder is located on the C: drive. We would like to migrate the application to another data drive on the same system.
The DFM version we are currently running is 4.0.2.
To accomplish this, do we need to uninstall and reinstall the DFM application after taking a backup of the database?
1. Uninstall and install the DFM 4.0.2 with new destination path ?
2. Does the OCUM 5.2 core package support IBM N series storage filers? If so, can I perform the upgrade, and will it provide an option to install the new version in another path?
Please let me know if anyone has done this before for IBM N series filers.
Thanks
Naga
Hi,
I did some tests (using SLAG to set a SACL) and I'm not able to get this event logged (event 4670, "DACL changes").
In the "NFS auditing and security guide" there is a chapter named "SMB events that can be audited" with a table.
And this is not clear: we can see that event 4664 is in the category "file access", but for events 4670 and 4907 there is no category.
So can someone tell me whether or not we can audit events 4670 and 4907? And if yes, what is the trick?
Here is the configuration :
Vserver: fr0-svmval10
Auditing State: true
Log Destination Path: /vol/fr0_svmval10_NBK01/log
Categories of Events to Audit: file-ops, cifs-logon-logoff,
audit-policy-change
Log Format: xml
Log File Size Limit: 100MB
Log Rotation Schedule: Month: -
Log Rotation Schedule: Day of Week: -
Log Rotation Schedule: Day: -
Log Rotation Schedule: Hour: -
Log Rotation Schedule: Minute: -
Rotation Schedules: -
Log Files Rotation Limit: 20
Strict Guarantee of Auditing: true
with either this SACL:
Storage-Level Access Guard security
SACL (Applies to Directories):
AUDIT-Everyone-0x140000-SA
0... .... .... .... .... .... .... .... = Generic Read
.0.. .... .... .... .... .... .... .... = Generic Write
..0. .... .... .... .... .... .... .... = Generic Execute
...0 .... .... .... .... .... .... .... = Generic All
.... ...0 .... .... .... .... .... .... = System Security
.... .... ...1 .... .... .... .... .... = Synchronize
.... .... .... 0... .... .... .... .... = Write Owner
.... .... .... .1.. .... .... .... .... = Write DAC
.... .... .... ..0. .... .... .... .... = Read Control
.... .... .... ...0 .... .... .... .... = Delete
.... .... .... .... .... ...0 .... .... = Write Attributes
.... .... .... .... .... .... 0... .... = Read Attributes
.... .... .... .... .... .... .0.. .... = Delete Child
.... .... .... .... .... .... ..0. .... = Execute
.... .... .... .... .... .... ...0 .... = Write EA
.... .... .... .... .... .... .... 0... = Read EA
.... .... .... .... .... .... .... .0.. = Append
.... .... .... .... .... .... .... ..0. = Write
.... .... .... .... .... .... .... ...0 = Read
or with this SACL set up:
Storage-Level Access Guard security
SACL (Applies to Directories):
AUDIT-Everyone-0x1f01ff-SA
0... .... .... .... .... .... .... .... = Generic Read
.0.. .... .... .... .... .... .... .... = Generic Write
..0. .... .... .... .... .... .... .... = Generic Execute
...0 .... .... .... .... .... .... .... = Generic All
.... ...0 .... .... .... .... .... .... = System Security
.... .... ...1 .... .... .... .... .... = Synchronize
.... .... .... 1... .... .... .... .... = Write Owner
.... .... .... .1.. .... .... .... .... = Write DAC
.... .... .... ..1. .... .... .... .... = Read Control
.... .... .... ...1 .... .... .... .... = Delete
.... .... .... .... .... ...1 .... .... = Write Attributes
.... .... .... .... .... .... 1... .... = Read Attributes
.... .... .... .... .... .... .1.. .... = Delete Child
.... .... .... .... .... .... ..1. .... = Execute
.... .... .... .... .... .... ...1 .... = Write EA
.... .... .... .... .... .... .... 1... = Read EA
.... .... .... .... .... .... .... .1.. = Append
.... .... .... .... .... .... .... ..1. = Write
.... .... .... .... .... .... .... ...1 = Read
The events I need are not logged.
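One thing that may be worth checking (an assumption on my part, not a confirmed fix): 4670 and 4907 are policy-change events, so they may require audit event categories beyond file-ops that some ONTAP releases expose. Something along these lines:

```
::> vserver audit modify -vserver fr0-svmval10 -events file-ops,cifs-logon-logoff,audit-policy-change,authorization-policy-change
```

If `authorization-policy-change` is not accepted on your release, tab completion on `-events` should show which categories are available.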
Thanks for your help
Florent
After an ONTAP upgrade from 8.3 to 9.1, the Harvest node graph drop-down for selecting node connections shows erroneous data.
The drop-down list should contain only node names.
I have a customer who is interested in locking down some users to be able to access specific volumes and perform a limited set of operations on those volumes.
Sounds like a perfect scenario to use a custom role. I've done some lab on demand testing to sound out the requirements.
The requirements for the role are to have the following commands available:
vol snapshot create
vol snapshot delete
vol snapshot show
vol snapshot restore
set -confirmations off
So far so good. The second requirement is that each user should only be able to perform the above operations on a specific set of volumes. To make it easy, let's call them:
produser - accessing volumes prod*
testuser - accessing volumes test*
devuser - accessing volumes dev*
The issue I've hit is with the snap restore command set.
I can create a role with the following
sec login role create -role prodrole -cmddirname volume -query "-volume prod*" -access all
But this doesn't include the volume snapshot restore commands, so we add the following:
sec login role create -role prodrole -cmddirname "volume snapshot" -query "-volume prod*" -access all
Again, this doesn't include the volume snapshot restore commands.
So when we attempt to add this final extension to the allowed commands:
sec login role create -role prodrole -cmddirname "volume snapshot restore" -query "-volume prod*" -access all
(This command directory also includes the snapshot promote command.)
The wildcard on the query is rejected, so we can only add a single volume here, while multiple volumes are required. Is there a way to list a set of volumes the user is allowed to perform restores on? Pipe and comma separation don't seem to apply. I can't see anything in the documentation that hints at adding multiple valid queries.
The prod, test, and dev volumes are on the same vserver, so to get the granularity we require, we'd need to lock down the command.
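For reference, the role entries registered so far and the volumes a query wildcard would match can be cross-checked with standard commands (role and volume names as above):

```
::> security login role show -role prodrole
::> volume show -volume prod* -fields volume,vserver
```

This at least confirms whether the rejected entry was partially created and exactly which volumes the `prod*` pattern covers.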
Each week, the Tech ONTAP Podcast dives into all-things NetApp, including storage, public & private cloud, and much more. The team will also be interviewing subject-matter experts from across the industry and detailing various best practices for getting the most out of your datacenter.
Hello everyone,
I have some problems deploying a 2-node cluster; it is stuck at the "POST DEPLOY SETUP" task.
All tasks were marked as successful so far, and I can also connect to the cluster and node shell(s).
Commands like "cluster show", "net int show", and "stor disk show" all tell me that everything is up and running. No errors at all.
According to NetApp, if the cluster deployment is not completely done, the default login will be "admin / changeme123".
As this exact login is working in my situation, I guess that the cluster is still not fully deployed.
The cluster "POST DEPLOY SETUP" has been stuck at "ClusterMgmtIpPingable" for over two hours now.
From the perspective of VMware, everything looks fine as well.
As I can find absolutely nothing about this kind of behaviour, I hope to get some help from the NetApp community.
Best regards and thank you in advance.
Hi,
I have three 2750 systems, each with two controllers, and two NetApp cluster switches.
e0a from each controller is connected to switch A (6 connections) and e0b from each controller to switch B. All connections are solid, and on every controller e0a and e0b are populated with internal IPs.
On node 1, controller 1, I created a cluster, and node 1, controller 2 joined without any issues.
On nodes 2 and 3, controllers 1 and 2 ran cluster setup and join, were given the e0a IP of node 1 controller 1, and they all failed; this is happening on all four controllers. I also tried the GUI guided setup and it failed.
The error stated that the nodes are not reachable, not able to ping.
From every controller on nodes 2 and 3, I am able to ping the default cluster interconnect LIFs (clus1 and clus2) on e0a and e0b.
Need some help: what am I missing?
Any help will be appreciated.
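A couple of standard interconnect checks that might help narrow this down (run from the nodes that fail to join):

```
::> network interface show -role cluster
::> cluster ping-cluster -node local
```

`cluster ping-cluster` also tests large-MTU reachability, which a plain ping does not; a jumbo-frame (MTU 9000) mismatch on the cluster switches can cause exactly this kind of join failure even when normal pings succeed.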
Hello,
We were adding some disks to an existing aggregate and got this error message:
Hi guys, my customer created an Enterprise-mode SnapLock aggregate but wants to change it to Compliance. Is it possible?
How can I make this change?
As I understand it, FabricPool tiering would not transfer deduplicated data to AWS S3 as-is; it would have to be rehydrated first. My questions:
1. Where would the rehydration process take place? On the storage cluster?
2. When retrieving data back to the storage cluster from S3, will the data be deduplicated again?
3. How much could performance be degraded?
Thanks!
I have an ONTAP Select system that is up and running. The ESX system is being moved to another location, and thus the naming of the virtual machines is being changed. Is it possible to rename the Deploy and Select systems? Since Select runs ONTAP, I believe I can rename the cluster, but will this break the Deploy system?
I've looked for documentation and haven't been successful in finding whether this is possible and, if so, how to move forward. If anyone has any information or knows of documentation on how to do this, I would appreciate the help.
Thanks,
Travis
Has anyone had success configuring BGP in ONTAP 9.5 yet? I've read through the documentation several times and successfully created peer groups, but am just missing some key to it.
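In case it helps to compare notes, my understanding of the rough 9.5 sequence is below (parameter names approximate, addresses and names are placeholders; the piece that comes after the peer groups is a VIP data LIF that gets announced via BGP):

```
::> network bgp config create -node node1 -asn 65001 -router-id 192.0.2.1
::> network bgp peer-group create -peer-group pg1 -ipspace Default -peer-address 192.0.2.254
::> network interface create -vserver svm1 -lif vip1 -data-protocol none -address 198.51.100.10 -is-vip true
```

If the peer groups are already up, the missing key may simply be that nothing is announced until a VIP LIF exists on an SVM in that IPspace.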