Clustering of Mobility Controllers – 8.x

Posted in Aruba cluster, Aruba Mobility Master 8.x

Clustering is a new feature introduced in AOS 8 that enables:

  • Seamless roaming of clients between APs
  • Seamless client failover in the event of a connectivity failure to the active controller
  • Load balancing of clients across controllers that are cluster members

Abbreviations used in this post:

  • MM – Mobility Master
  • MC – Mobility Controller
  • VMC – Virtual Mobility Controller (x86 based)
  • MD – Managed Device (Mobility Controller managed by a Mobility Master)
  • AAC – AP Anchor Controller
  • A-AAC – Active AAC
  • S-AAC – Standby AAC
  • UAC – User Anchor Controller
  • A-UAC – Active UAC
  • S-UAC – Standby UAC
  • CoA – Change of Authorization
  • AP – Access Point
  • AOS – Aruba Operating System

Seamless AP failover: When MCs are part of a cluster, APs that come up will connect to their LMS IP (i.e. one of the cluster members), called the Active AP Anchor Controller (A-AAC). The AP also builds a standby tunnel to a Standby AAC (S-AAC) that is selected by the cluster leader. When the A-AAC goes down, the AP seamlessly fails over to the S-AAC.

Seamless client failover: When MCs are part of a cluster, high-value client sessions (such as voice, video, FTP, SSH, etc.) are synchronized between active and standby members of the cluster, making client failover seamless. When a client joins the cluster, it always terminates on a dedicated MC in the cluster called the Active User Anchor Controller (A-UAC). The cluster leader then selects a Standby UAC (S-UAC) from the cluster. Since the client's L2 state and high-value sessions are maintained between the A-UAC and S-UAC, if connectivity to the A-UAC is lost, the client fails over to the S-UAC without the user noticing a connection drop for high-value sessions.

User load balancing: Clients are evenly load balanced among cluster members based on the platform capacity of cluster members and the configured load-balancing threshold. 
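
This behavior is tunable in the cluster group profile. As a minimal sketch (the profile name matches the one created later in this post; the threshold values are illustrative, not recommendations):

  (MM) [mynode] (config) # lc-cluster group-profile r7102_Virtual_MDs
  (MM) (lc-cluster group-profile "r7102_Virtual_MDs") # active-client-rebalance-threshold 50
  (MM) (lc-cluster group-profile "r7102_Virtual_MDs") # standby-client-rebalance-threshold 75
  (MM) (lc-cluster group-profile "r7102_Virtual_MDs") # unbalance-threshold 5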

Cluster VRRP: For an L2 cluster, the members can additionally be placed in a manually configured VRRP group; this VRRP group is configured on top of the cluster configuration itself, as sketched below.

  • The cluster VRRP VRID must be between 1 and 219
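
A minimal sketch of that manual VRRP configuration on one member (VRID 22, VLAN 17, and the priority are assumptions for this lab; the VIP 10.0.17.22 is the one used later in this post):

  (WLC0001) [mynode] (config) # vrrp 22
  (WLC0001) (config-submode) # vlan 17
  (WLC0001) (config-submode) # ip address 10.0.17.22
  (WLC0001) (config-submode) # priority 120
  (WLC0001) (config-submode) # no shutdown

Each member would get a different priority so that there is a deterministic VRRP master.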

Authorization server interaction (RADIUS CoA): To authenticate new users, the A-UAC may forward authentication requests to an external RADIUS server with the A-UAC’s IP as the NAS-IP. The external RADIUS server stores this NAS-IP (i.e. the A-UAC’s IP) in its client database; the NAS-IP is used later to change the state or attributes of the client. However, if the client moves to a different UAC, the authorization server is not updated and hence cannot send CoA updates for the client. To resolve this issue, a virtual IP (VIP) and VLAN are configured for each node in the cluster; since the VIP is owned via VRRP, CoA messages sent to it are still delivered if that member goes down.

VRIDs 220 and higher are used by cluster members for these VRRP IPs for purposes of authorization server interaction (RADIUS CoA).
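
In the cluster group profile this is expressed per member with the vrrp-ip and vrrp-vlan options of the controller command; a sketch, where the member IP, VIP, and VLAN are assumptions for this lab:

  (MM) [r7102] (config) # lc-cluster group-profile r7102_Virtual_MDs
  (MM) (lc-cluster group-profile "r7102_Virtual_MDs") # controller 10.0.17.11 vrrp-ip 10.0.17.31 vrrp-vlan 17

The cluster then brings up a VRRP instance (VRID 220 and up) for each member's CoA VIP.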

Reference: Aruba AOS 8.x documentation

Create a cluster profile:

The profile can be created from the CLI or the GUI. The configuration is done at the folder level of a specific site and is NOT a global config.

Once the cluster profile has been created, the “container” will be empty; the controllers MUST be added to the group, as sketched below.
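
A sketch of the CLI flow for creating the (still empty) profile, assuming the site folder is /md/r7102 (the folder path is an assumption for this lab):

  (MM) [mynode] (config) # change-config-node /md/r7102
  (MM) [r7102] (config) # lc-cluster group-profile r7102_Virtual_MDs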

The next step is to add each controller to the previously created group profile r7102_Virtual_MDs and exclude any required VLANs, as sketched below.
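
A sketch of those steps, assuming member IPs 10.0.17.11, .12, and .13 and a management VLAN of 1 (both assumptions for this lab):

  (MM) [r7102] (config) # lc-cluster group-profile r7102_Virtual_MDs
  (MM) (lc-cluster group-profile "r7102_Virtual_MDs") # controller 10.0.17.11 priority 128
  (MM) (lc-cluster group-profile "r7102_Virtual_MDs") # controller 10.0.17.12 priority 100
  (MM) (lc-cluster group-profile "r7102_Virtual_MDs") # controller 10.0.17.13 priority 100
  (MM) (lc-cluster group-profile "r7102_Virtual_MDs") # exclude-vlan 1

Each managed device is then pointed at the profile from its own device node with lc-cluster group-membership r7102_Virtual_MDs.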

For seamless client failover, ensure that all the members in a cluster are L2-connected, i.e. the same user VLANs exist on all controllers (certain VLANs such as management VLAN can be marked as excluded VLANs).
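
The cluster runs a VLAN probe between members to validate this L2 reachability; as a sketch, the result can be checked from any member:

  (WLC0001) # show lc-cluster vlan-probe status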

All three controllers are part of the same VRRP group. The VRRP IP address of 10.0.17.22 is the LMS IP (local management switch IP) that the access points will “talk” to in order to join the cluster.
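
The VIP is handed to the APs as their LMS IP through the AP system profile; a sketch, where the profile name is an assumption:

  (MM) [r7102] (config) # ap system-profile r7102_ap-system
  (MM) (AP system profile "r7102_ap-system") # lms-ip 10.0.17.22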

If the configuration is correct, the three controllers should form an L2 cluster.
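
As a sketch, cluster formation can be checked from any member with:

  (WLC0001) # show lc-cluster group-membership
  (WLC0001) # show lc-cluster heartbeat counters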

Verify the status of the APs on each controller with show ap database long. The “Flags” field indicates the state of the access point; for now I am interested in the 2S flags (2 = the AP is using IKEv2, S = the entry is a standby tunnel):

WLC0001

WLC0002

WLC0003

The access point built an active tunnel to one controller (its A-AAC) and a standby tunnel to another (its S-AAC).

Connect a client to the WLAN:

A closer look shows that the AP is on WLC0001, but the client is NOT in the user database of WLC0001.

The client is actually in the user database of WLC0003: the AP’s anchor (AAC) and the client’s anchor (UAC) are different cluster members.
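
To see which member anchors each client (UAC) and each AP (AAC), the cluster exposes per-entity distribution views; a sketch of the commands:

  (WLC0003) # show lc-cluster load distribution client
  (WLC0003) # show lc-cluster load distribution ap
  (WLC0003) # show user-table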

Testing failover by shutting down WLC0001.

The AP failed over; WLC0003 is now the secondary (S-AAC).

WLC0002 is now the primary (A-AAC).

The client did not drop a single packet.

AP status on WLC0003 before shutting down WLC0002

AP status on WLC0003 after WLC0002 is shut down. Notice that the AP no longer has a “standby” IP address and only has the 2 flag.

Again, the client did NOT drop a single packet.

Verification that both WLCs are down

Verify cluster on the Mobility Master
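
As a sketch, reachability of the managed devices can also be confirmed from the Mobility Master CLI with:

  (MM) [mynode] # show switches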

WLCs back online and functional for the next blog post.
