
FAS2552 Hybrid with additional disk shelf optimal aggregate structure

Hello, ONTAP gurus!

Please help me find the best layout for this equipment and workload:

- the original dual-controller FAS2552 (internal DS2246-style shelf) with 4x 400 GB SSDs and 20x 1 TB SAS drives - shelf 1.0

- a DS2246 disk shelf added later with 24x 1 TB SAS drives - shelf 1.1.

 

The main goal is maximum storage flexibility and feature usage (Flash Pool) for a small data center with the following workloads:

- user data access via CIFS/NFS

- a small virtualization cluster of 2-4 Hyper-V nodes with VM storage on SMB 3 shares.

 

This is the current ONTAP 9.8 configuration recommendation and system state when using System Manager/ONTAP auto-configuration:

SSDs 1.0.0-1.0.3 - untouched by the recommendation and not used at all, but I want to use them as Flash Pool cache for user data access;

HDDs 1.0.4-1.0.23 - root-data partitioned to hold the node 1/node 2 root aggregates, with the data partitions forming RAID group 0 of each node's data aggregate;

HDDs 1.1.0-1.1.19 - assigned equally to the two nodes, forming RAID group 1 of each node's data aggregate.

Two data aggregates were auto-created, one per node, each with two RAID groups - see the picture below.

[Image: vlsunetapp_0-1636240231714.png]

We have 4 unused SSDs, and 4 HDDs are reserved as spares (hm, why not 2?).
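
For reference, this is how I have been checking the spare and partition layout from the CLI (as far as I know these are the right commands - correct me if there is a better way):

::> storage aggregate show-spare-disks
::> storage disk show -container-type spare

And the RAID group layout per aggregate with:

::> storage aggregate show-status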

 

Each data aggregate created on node 1 and node 2 has two RAID groups: rg0 (10 partitions on shelf 1.0) and rg1 (10 drives on shelf 1.1).

 

The auto-provision function says it cannot recommend what to do with the SSDs, and it created a total of two new data aggregates (one per node).
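
For what it's worth, I believe the equivalent CLI preview (it only displays the proposed layout and asks for confirmation before applying anything) is:

::> storage aggregate auto-provision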

 

My questions are:
1. Can we manually recreate the aggregate configuration as follows, without conflicting with ONTAP's optimal recommendations (a CLI sketch of what I mean is after the questions):
- create an SSD storage pool from the 4 SSDs;
- create two separate aggregates per node, one for user data access and one for data center loads:
aggr_NodeA_01 - with SSD Flash Pool caching, for user data access via CIFS/NFS - 10 partitions on shelf 1.0;
aggr_NodeA_02 - Hyper-V VM storage with SMB 3 access - 10 drives on shelf 1.1;
aggr_NodeB_01 - with SSD Flash Pool caching, for user data access via CIFS/NFS - 10 partitions on shelf 1.0;
aggr_NodeB_02 - Hyper-V/SQL etc. storage for data center loads (NAS or SAN) - 10 drives on shelf 1.1.
2. Does the configuration in (1) conflict with NetApp's recommended RAID group sizes or create any performance issues?
3. With 4 aggregates we gain some flexibility to turn Flash Pool on or off for the shelf 1.0 aggregates (since an aggregate must be recreated to remove Flash Pool, the flexibility comes from being able to move volumes off it first) - is that right?
4. Coming from traditional RAID controllers, I want to understand how a data volume is stored on the default-created aggregate, which consists of two RAID-DP groups, rg0 (10 partitions) and rg1 (10 drives) - does a volume on such an aggregate use only one RAID group or both?
5. Is there any chance to use two of the four spares for data instead?
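
To make question 1 concrete, here is roughly the clustershell sequence I have in mind (syntax written from memory for ONTAP 9.8; the pool name sp1, node names nodeA/nodeB, and SVM/volume names are just placeholders, so please correct anything that is off):

Create an SSD storage pool from the 4 SSDs (as I understand it, the pool is carved into 4 allocation units, 2 assigned to each node of the HA pair by default):

::> storage pool create -storage-pool sp1 -disk-list 1.0.0,1.0.1,1.0.2,1.0.3

Create the shelf 1.1 data aggregates, 10 drives each:

::> storage aggregate create -aggregate aggr_NodeA_02 -node nodeA -diskcount 10 -disktype SAS
::> storage aggregate create -aggregate aggr_NodeB_02 -node nodeB -diskcount 10 -disktype SAS

Enable hybrid mode on the shelf 1.0 aggregates and attach each node's 2 allocation units as Flash Pool cache:

::> storage aggregate modify -aggregate aggr_NodeA_01 -hybrid-enabled true
::> storage aggregate add-disks -aggregate aggr_NodeA_01 -storage-pool sp1 -allocation-units 2
::> storage aggregate modify -aggregate aggr_NodeB_01 -hybrid-enabled true
::> storage aggregate add-disks -aggregate aggr_NodeB_01 -storage-pool sp1 -allocation-units 2

And for question 3, my understanding is that volumes would be evacuated first with something like:

::> volume move start -vserver svm1 -volume vol1 -destination-aggregate aggr_NodeA_02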

 

Thanks for any help!

 

 

