Channel: ONTAP Discussions topics

Cluster Configuration


Hi All,

 

I am relatively new to managing NetApp storage systems, so any help or suggestions will be much appreciated.

 

We currently have 10 controllers, with all the LIFs on a given VLAN placed in the same broadcast domain and failover group. We are about to replace 4 FAS8080 filers with 4 FAS9000 filers. The complication is that our network team is moving to a new network configuration in which the old VLANs will no longer be used. One of the network engineers said the new VLANs on the new core will be layer 3 while the old ones are layer 2, but traffic destined for the old VLANs can be routed to the new core switch. We plan to connect the 4 new filers to the new core switch, which will not have any of the old VLANs configured on it. Since we would prefer not to connect the new filers to the old network (we will soon migrate everything over to the new configuration), there seem to be two options:

 

1. Follow the route above and join the new nodes to the same cluster as the old ones, but put the new nodes' ports in broadcast domains and failover groups separate from those of the nodes on the old VLANs. Failover would then be between 4 nodes instead of 10.
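For option 1, the ONTAP 9 CLI steps might look something like the sketch below. All node names, domain names, and VLAN IDs here are hypothetical placeholders, not taken from our actual environment:

```shell
# Put the new nodes' VLAN ports in their own broadcast domain.
# ONTAP automatically creates a matching failover group, so LIF
# failover is confined to the four new FAS9000 nodes.
network port broadcast-domain create -ipspace Default \
    -broadcast-domain bd_new_core -mtu 1500 \
    -ports node11:a0a-3000,node12:a0a-3000,node13:a0a-3000,node14:a0a-3000

# Point the data LIFs homed on the new nodes at that failover group.
network interface modify -vserver svm1 -lif lif_data1 \
    -failover-group bd_new_core -failover-policy broadcast-domain-wide
```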

2. Avoid the use of VLANs altogether when configuring the LIFs, and configure the LIFs directly on top of ifgrps. Is this possible if the switch ports are configured as access ports instead of trunk ports? And how would traffic to/from the intended clients on the different VLANs get sent and received correctly?
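For option 2, a sketch of what the port and LIF configuration could look like with access ports and no VLAN ports (again, all names and addresses are hypothetical). One caveat: with access ports, an ifgrp carries only the single untagged VLAN its switch ports are assigned to, so serving clients on several VLANs would need either separate physical ports per VLAN or trunking:

```shell
# LACP ifgrp on a new node; the connected switch ports are access
# ports, so the ifgrp itself (not a VLAN port) goes into the
# broadcast domain and hosts the LIF.
network port ifgrp create -node node11 -ifgrp a0a \
    -distr-func ip -mode multimode_lacp
network port ifgrp add-port -node node11 -ifgrp a0a -port e0e
network port ifgrp add-port -node node11 -ifgrp a0a -port e0f

network port broadcast-domain create -ipspace Default \
    -broadcast-domain bd_access -mtu 1500 -ports node11:a0a

network interface create -vserver svm1 -lif lif_nfs -role data \
    -home-node node11 -home-port a0a \
    -address 146.36.200.50 -netmask 255.255.248.0
```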

 

Does anybody envisage any problems down the line, especially with vservers, data traffic, SnapMirror, SnapVault, etc.?

 

Examples

 

broadcast domains
Default 136.171.74.0/24 1500
                            node1:a0a-1000             complete
                            node2:a0a-1000             complete
                            node3:a0a-1000             complete
                            node4:a0a-1000             complete
                            node5:a0a-1000             complete
                            node6:a0a-1000             complete
                            node7:a0a-1000             complete
                            node8:a0a-1000             complete
                            node9:a0a-1000             complete
                            node10:a0a-1000             complete
        146.36.200.0/21 1500
                            node7:a0b-2000              complete
                            node3:a0b-2000              complete
                            node2:a0b-2000              complete
                            node6:a0b-2000              complete
                            node4:a0b-2000              complete
                            node8:a0b-2000              complete
                            node1:a0b-2000              complete
                            node5:a0b-2000              complete
       
Failover Groups      
node00
                 136.171.74.0/24
                                  node1:a0a-1000, node6:a0a-1000,
                                  node2:a0a-1000, node7:a0a-1000,
                                  node3:a0a-1000, node8:a0a-1000,
                                  node4:a0a-1000, node9:a0a-1000,
                                  node5:a0a-1000, node10:a0a-1000
                 146.36.200.0/21
                                  node1:a0b-2000, node5:a0b-2000,
                                  node2:a0b-2000, node6:a0b-2000,
                                  node3:a0b-2000, node7:a0b-2000,
                                  node4:a0b-2000, node8:a0b-2000

 

Thanks.

