Clustering in WiNG5.1 - undocumented changes

Arsen Bandurian 3 years 5 months ago

Hello, Team. I've been experimenting with clustering in WiNG5.0 and 5.1 and discovered some significant changes which, I believe, are not documented. Some things changed even between 5.1-050B and 5.1-074R. Hope this saves some of you a bit of banging your head against the wall and relieves our tech support of some extra calls.

1. Clustering in WiNG5.1 DOES exist. There was a comment on the last WiNG5.1 TAVT that there is no clustering in WiNG5.1. That's not exactly true. I've been able to set up a cluster of 2 switches on 5.1-074R firmware. So it IS there. But there are some gotchas.

2. Clustering in 5.1 is not limited to 2 devices. OK, I got a third RFS4000 and ran a cluster on all three. I don't know where the limit is; the 5.0 design doc said up to 24 switches in a cluster, but with 5.1 this could have changed. Let's wait for an official answer from the BU on this. Apparently, syncing configs between 24 switches when multiple RFSes leave and join the cluster was causing too many issues (I lost my configs a few times in 5.0 labs). As a result, the way the config is synced and the cluster is set up has changed:

In 5.0 the cluster config was merged from the individual configs of all RFSes in the cluster, and in case of conflicts the master's settings had priority, but there were glitches.

In 5.1 the master's config simply overwrites the slave's config entirely; all settings on the slave that are not in the master's config are lost. So you MUST have the slave's settings in the master's config, otherwise you'll lose the slave right after it joins. See below, and the sketch right after this list.
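To make the risk concrete, here is a minimal sketch (the MAC address, hostname and addressing are invented for the example, and the exact interface syntax may vary with your setup): the slave's own device record has to exist in the master's config as well, otherwise the overwrite during the join wipes it out.

! Device record as it exists on the slave before joining
rfs4000 00-23-68-AA-BB-02
 hostname slave-rfs
 interface vlan1
  ip address 192.168.10.2/24
!
! If this same block is not also present in the master's config,
! it is gone after the join and the slave falls off the network.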

This is essentially a return to the cluster-config mechanism of WiNG4.x. From what I know, the BU decided to run with such limited functionality for the time being and focus on hierarchical management, which will (hopefully) solve the sync problem and many others.

3. The cluster is now set up differently. Because of the way the config syncs now, the way the cluster is set up is also different. In 5.0, in order to set up the cluster, you had to configure the cluster name and cluster member IPs on each RFS, but each RFS's config had to contain info about that RFS only. In 5.1 you must:

Configure ALL switches' configs on the designated master controller. You don't have to replicate the entire config if you don't want to, but you must configure the cluster name and VLAN/IP as well as the necessary networking settings. Otherwise your slaves will be lost right after they join the cluster.
On the slave(s) you need to configure the essential networking settings AND the cluster configuration (cluster name and VLAN/IP) of ALL other members. The cluster will only form between members that have each other in their configs! This looks like a precaution. A minimal sketch follows this list.
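For example, a sketch of the explicit IP-based variant as it would sit on the master, assuming two RFS4000s at 192.168.10.1 (designated master) and 192.168.10.2 (slave); the MAC addresses and IPs are invented, and the exact command syntax may differ slightly on your firmware:

! Master's config carries device blocks for BOTH members
rfs4000 00-23-68-AA-BB-01
 hostname master-rfs
 interface vlan1
  ip address 192.168.10.1/24
 cluster name my-cluster
 cluster member ip 192.168.10.2
!
rfs4000 00-23-68-AA-BB-02
 hostname slave-rfs
 interface vlan1
  ip address 192.168.10.2/24
 cluster name my-cluster
 cluster member ip 192.168.10.1
!
! The slave needs the same cluster lines (name plus the other members'
! IPs) and its own networking configured before it joins.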

The good thing about 5.1 is that if you have switches on the same VLAN you don't need to specify IP addresses - use "Cluster VLAN" instead. So the clustering config can now be just two lines:

cluster name my-cluster
cluster member vlan 10

Because these lines do not contain information specific to either RFS1 or RFS2/3/4/5, you can put them into a Profile and save yourself some headache whenever you want to reconfigure something. So the resulting config will include something like:

profile rfs4000 cluster-profile
 cluster name my-cluster
 cluster member vlan 10
!
rfs4000
 use profile cluster-profile
 cluster priority 255
 ...some local stuff like hostname and ip...
!
rfs4000
 use profile cluster-profile
 ...some local stuff like hostname and ip...
!
rfs6000
 use profile cluster-profile
...and so on

Again, this has to be present on ALL switches to make the cluster possible now. So the easiest way would be to set everything up on the designated master (set the device override to specify the priority, see the bug below) and then copy the config to the other switches. Then, once the switches are in the cluster, it doesn't matter on which switch you edit the config. I haven't tested what happens if you edit the config on multiple switches simultaneously, though.

4. Priority bug. The WiNG5.x doc says that the device with the LOWER priority value gets to be the cluster master, whereas reality shows that the HIGHER priority takes the lead (see the sketch at the end of this post). Be careful, as the slave's config gets overwritten by the master's when the cluster is formed! Hope this will be fixed in the updated doc.

I hope this helps, and let's see how this will be documented in the 5.1 documentation set. Any comments from the BU will be appreciated.
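To make point 4 above concrete, here is a sketch of the priority override (reusing the "cluster priority" line from the example above; the exact keyword and value range are taken from this post rather than verified against the CLI reference): give the device you want as master the highest value, since in practice the higher value wins.

! Intended master: highest priority value (wins in practice, despite the doc)
rfs4000
 use profile cluster-profile
 cluster priority 255
!
! Intended slave: lower priority value
rfs4000
 use profile cluster-profile
 cluster priority 1
!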


1 Reply

Arsen Bandurian

I have got my hands on a third device and updated the post as a result.
