This page is intended to be read by network managers in institutions on the CUDN who host University Wireless access points on their institutional network. It gives details about how the network needs to be configured to allow the access points to work.
For most institutions, a single access point management VLAN (as described below) will be fed to their CUDN PoP switch. The institution will be advised of the VLAN ID and asked how they would like it presented on the PoP, to carry into their network; the access points must then be connected to ports on which this VLAN is presented untagged.
There is a separate page with information about features available on access points (such as local SSIDs).
- Network types
- Management networks
- Security of uplink traffic and access point modes
- Wired port configuration
There are three main types of network used by University Wireless access points:
| Type | Purpose | Presentation to AP |
| --- | --- | --- |
| Management network | Used to allow the access points to communicate with the central controller infrastructure in University Information Services (UIS) for management, authorisation and logging. | Untagged (although this can be changed — see below). |
| Central service networks | What the clients are connected to, when they successfully join a wireless network managed by UIS - the eduroam and UniOfCam SSIDs are examples of these. | None (tunnelled back to controller over management network). |
| Locally-bridged networks | Only present if an institution asks for an SSID to be added to an access point for local use. Clients connecting to these SSIDs will be placed on a locally-bridged network, fed directly out of the access point into the institutional network. | Tagged (although can be untagged, in a special configuration). |
When a University Wireless access point starts up, it uses DHCP on the management network to obtain its IP addressing information, then DNS to find the IP address of the central controller, before attempting to establish a connection with it. This connection with the controller carries not only management traffic but also tunnels client traffic using IPSec or GRE.
In most cases, institutions need only provide a management network connection to access points.
The management network is used to allow access points to communicate back to the central controller infrastructure hosted by UIS. It does not offer any services directly to clients: their traffic is either tunnelled over this network or bridged out locally at the access point (see above).
An institution has various options for providing a management network to their access points. Which is selected is a matter of local preference:
- Managed VLAN - in this situation, a new VLAN will be supplied to the institution's PoP switch from the upstream CUDN routers. The institution will be advised of the VLAN ID and will need to state how they want it presented on their PoP: port(s) and untagged/tagged. They must then feed it through their network and present it untagged on the edge switch ports where the access points are attached. The institution can also opt to connect the access points directly to the PoP (if this is physically possible) and avoid the need to reconfigure their network. The UIS will manage the IP addresses, DHCP and access control on this VLAN, much as with the voice client VLANs used by the VoIP system, as well as resizing the subnet, if it becomes too small. This choice is the most common and easiest to implement in most institutions.
- Managed routed subnet - if an institution does not wish to carry a VLAN from the PoP for network engineering reasons (e.g. they have their own router(s) and want to route the VLAN there), they can have a UIS-managed subnet routed to them. The UIS will provide the address space and DHCP servers, but the institution will need to set up and maintain their router and associated access lists.
- Local subnet - if an institution prefers to manage the address space and allocation itself (perhaps to put individual APs on specific addresses), it can run the management network itself. Typically this would be done by the institution handling the routing and addressing themselves, on their own routers, but it is also possible for the UIS to provide the subnet as an additional (special) VLAN from the PoP. The institution will need to provide the subnet for this itself (additional address space can be requested from IP Register, if required) and operate a DHCP server providing IP details, as described below.
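For the managed VLAN option, the only switch configuration the institution needs is to present the advised VLAN untagged on the edge ports feeding the access points. As an illustration (Cisco IOS syntax; the VLAN ID 300 and interface name here are hypothetical examples — use the ID advised by the UIS):

```
! Hypothetical example: VLAN 300 is the UIS-advised management VLAN
interface GigabitEthernet1/0/13
 description University Wireless access point
 switchport mode access
 switchport access vlan 300
```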
The differences between these options are summarised below:
| Institution must ... | Managed VLAN | Managed subnet | Local subnet |
| --- | --- | --- | --- |
| Carry VLAN from PoP to APs | Yes | No | Optional |
| Provide routers and maintain ACLs | No | Yes | Optional |
| Provide DHCP server(s) | No | No | Yes |
| Provide address space | No | No | Yes |
| Reconfigure if subnet moved/resized | No | Yes | Yes |
An institution can freely transition between the different types (or even mix them), without reconfiguration of the access points. However, the UIS should be notified so that the provision of a VLAN from the PoP is either set up or removed. When a new institution comes onto the University Wireless system, the institution should state whether it wants to make use of a VLAN from the PoP, or provide its own.
Regardless of type, it is recommended that DHCP Snooping, ARP Protection/Inspection and IP Source Guard are enabled on the management network, where available.
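On switches which support these features, the configuration might look like the following sketch (Cisco IOS syntax; the VLAN ID and interface names are hypothetical examples):

```
! Hypothetical example: VLAN 300 is the management VLAN
ip dhcp snooping
ip dhcp snooping vlan 300
ip arp inspection vlan 300
!
interface GigabitEthernet1/0/13
 description University Wireless access point
 ip verify source
!
! The uplink towards the PoP (and the DHCP servers) must be trusted
interface TenGigabitEthernet1/1/1
 ip dhcp snooping trust
 ip arp inspection trust
```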
The subnets used on a VLAN fed from the PoP will typically be created with capacity for around 25 access points (unless a larger number are ordered from the outset). If a subnet begins to fill up, the UIS will handle resizing it. If an institution has a clear plan for the growth of its University Wireless installation, it should make that clear so an appropriately-sized subnet can be allocated at the outset and avoid renumbering (although this is fairly straightforward and can be done with only a short interruption to service).
It should be noted that the management VLAN offers no useful service to end-user devices and they should not be allowed to connect to it. If they do, they are unlikely to obtain an IP address and, even if they did, the access restrictions on the network allow communication only with the central infrastructure required to make an access point work.
If an institution wishes to route the managed subnet itself, it is responsible for configuring its own routers:
- The UIS will advise the institution of the subnet range.
- The institution will need to state how the range is to be routed to them - via a static route or as part of a BGP peering.
- The institutional routers must be configured using the top usable address (i.e. the one below the broadcast address) to provide the gateway. The next two addresses below this are available for physical router addresses (i.e. the real addresses of the routers, if the gateway is provided using a first hop redundancy protocol). This is referred to as the "top" scheme.
- DHCP relaying must be set up to the UIS wireless DHCP servers at 172.28.208.86 and 172.28.208.87.
- Access via TCP and UDP to the University's central recursive DNS servers on 22.214.171.124 and 126.96.36.199 must be permitted.
- The network must allow unfiltered communication between the access points and the central controller infrastructure in 188.8.131.52/26 without any form of NAT. In particular, this communication will make use of GRE and/or IPSec, which may have difficulty passing through a NAT or firewall.
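Putting the addressing requirements together: for a hypothetical managed subnet of 192.0.2.0/27 (an illustrative documentation range, not a real allocation), the gateway under the "top" scheme would be 192.0.2.30 (the usable address below the broadcast address 192.0.2.31), with 192.0.2.29 and 192.0.2.28 available as physical router addresses. A sketch in Cisco IOS syntax, using HSRP as an example first hop redundancy protocol:

```
interface Vlan300
 description University Wireless managed subnet (hypothetical 192.0.2.0/27)
 ip address 192.0.2.29 255.255.255.224   ! physical address of this router
 standby 1 ip 192.0.2.30                 ! gateway: top usable address
 ip helper-address 172.28.208.86         ! relay DHCP to the UIS
 ip helper-address 172.28.208.87         ! wireless DHCP servers
```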
If an institution wishes to operate its own local management network on the CUDN, there are several requirements:
- The network must use CUDN-wide addressing - either public or CUDN-wide private (not institution private). It is strongly recommended CUDN-wide private addresses are used as the access points do not need to communicate directly with the internet, only with other hosts on the CUDN. Institutions may request additional address space from IP Register, if required.
- A DHCP server must be operated to provide the access points with the appropriate host, netmask, gateway and DNS server addresses. It is NOT necessary for individual access points to have fixed, known addresses - upon booting, an access point registers itself with the controller and listens for commands from the controller across this connection. However, an institution may wish to assign fixed addresses so it can use them to determine which access points are up or down.
- The access points must have access to a DNS server able to resolve names in cam.ac.uk (including the private.cam.ac.uk subdomain) and perform reverse lookups of CUDN addresses (both public and private).
- As well as connectivity to the DHCP and DNS servers, the network must allow unfiltered communication between the access points and the central controller infrastructure in 184.108.40.206/26 without any form of NAT. In particular, this communication will make use of GRE and/or IPSec, which may have difficulty passing through a NAT or firewall.
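As an illustration of the DHCP requirement, a minimal ISC dhcpd declaration for a hypothetical local management subnet might look like the following (all addresses here are illustrative placeholders — in practice, use CUDN-wide addressing and the real CUDN recursive DNS server addresses):

```
# Hypothetical local management subnet for access points
subnet 192.0.2.0 netmask 255.255.255.224 {
  range 192.0.2.2 192.0.2.28;             # pool for the access points
  option routers 192.0.2.30;              # gateway address
  option domain-name-servers 192.0.2.53;  # replace with the CUDN recursive DNS servers
}
```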
It is strongly recommended that access control or other security measures are employed to prevent general network access to and from the subnet (other than the required access described above), in case an unwanted device is connected to it. As access points (and their network connections) are often situated in public locations, this is especially important.
It is possible, but not recommended, to connect access points to the same VLAN as used by regular hosts within the institution (e.g. the one used for desktop computers). If this is done, the institution must still provide the remainder of the services above (in particular, DHCP with an address from the institution's own range).
The requirements for remote access points are slightly different from above and are described below.
Management traffic between the central controller infrastructure and access points is always encrypted and mutually authenticated so can be regarded as secure.
However, the security of client traffic differs, depending on the mode an access point is operating in:
- The majority of access points are situated on institutional networks, directly connected to the CUDN, where the security of user traffic between the access point and the central controller is not so important, as it passes over a network with reasonable security (certainly no worse than any regular CUDN connection over a wired cable). Such access points are typically configured as a Campus Access Point (CAP), where client traffic is not encrypted but just encapsulated using GRE.
- If an access point is to be deployed at a remote site (off the CUDN), it can be configured as a Remote Access Point (RAP) where all of the traffic is secured against interception and tampering using an IPSec VPN. These are described below.
Whilst CAPs could be deployed at a remote site across a separate VPN service back to the CUDN, this is not recommended due to double encapsulation, which can present issues with fragmentation.
In most cases, only a single wired port on the access point will be connected: the first one (usually labelled E0 or ENET0) will have the management VLAN presented untagged by the upstream switch and carry all the management and client traffic back to the central controllers.
However, this can be changed in various ways:
- Access points with multiple ports can be configured as VLAN-capable switches.
- The management VLAN presentation can be changed to tagged.
- On some access points, multiple uplink ports can be aggregated to increase available bandwidth.
Some access points also include a pass-through port.
For more information about the configuration of wired ports on access points, please contact Network Support.
The wireless access points support LLDP to advertise their model, firmware version, connected port name, capabilities and IP address. This can be helpful when verifying connectivity, for example:
```
sw-ucs-rnb-n1#show lldp neighbors g4/0/13 detail
Chassis id: 24de.c6c6.51a8
Port id: 24de.c6c6.51a8
Port Description: eth0
System Name: 24:de:c6:c6:51:a8
ArubaOS (MODEL: 134), Version 220.127.116.11 (44205)
Time remaining: 99 seconds
System Capabilities: B,W
Enabled Capabilities: W
```
LLDP-MED to advertise voice VLANs and other policies is currently not supported due to a known software issue. It is hoped this issue will be resolved in the near future.
The management VLAN used to uplink the access point to the network is normally presented untagged. However, if required, this presentation can be changed to tagged.
This option can be usefully combined with the local VLAN bridging functionality to configure a room wall port with the management VLAN tagged and a data VLAN untagged: the access point can use a tagged management VLAN and also bridge traffic on the untagged data VLAN through to the downstream ports. If the access point is removed, the data VLAN will still work directly on the wall port.
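Such a wall port might be configured as follows (Cisco IOS syntax; the data VLAN 200 and management VLAN 300 are hypothetical examples):

```
interface GigabitEthernet1/0/13
 description Wall port: data VLAN untagged, AP management VLAN tagged
 switchport mode trunk
 switchport trunk native vlan 200        ! data VLAN, untagged
 switchport trunk allowed vlan 200,300   ! 300 = management VLAN, tagged
```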
It should be noted that, because it is required during boot-up (before the connection to the controller has been established), the option to use a tagged uplink VLAN is stored in the internal memory of the access point rather than downloaded dynamically from the central controllers. Once a tagged uplink VLAN is configured, the access point requires that presentation in order to work: if the access point is moved back to an untagged management VLAN port (or one with a different management VLAN tag), it will no longer start up. This usually means that some coordination is required between the institution and UIS Networks to enable this feature and avoid an extended outage.
Institutional networks may use 802.1X on their wired ports to authenticate attached devices, perhaps using this information to assign an appropriate VLAN per device or user: the Aruba access points can themselves be configured as 802.1X supplicants. There are some points to note about this configuration:
- The EAP method used will be PEAP with MS-CHAP-V2 username/password.
- The access point does not validate the identity presented by the server (neither that the certificate is signed by a trusted authority, nor that the DN is anything specific).
- If the access point is unable to authenticate using 802.1X (either because it is not enabled on the switch port it is connected to, or because the credentials are invalid), it will attempt to come online anyway, in case a default (fallback) connection is available. In this situation, the access point will appear to be running normally but will report this error situation to the controllers; access points (or the switch) should not be left in this situation and the error should be rectified.
Institutions wishing to use this feature should contact Network Support with the required credentials (username and password) and which access points are to be reconfigured.
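On the switch side, enabling 802.1X authentication on an access point's port might look like the following sketch (Cisco IOS syntax; details vary by platform, and the RADIUS server configuration is not shown):

```
interface GigabitEthernet1/0/13
 description University Wireless access point (802.1X supplicant)
 switchport mode access
 authentication port-control auto   ! require 802.1X authentication
 dot1x pae authenticator
```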
Note: this section is only applicable to some access point models (currently the AP-225 and AP-224). All other access points should not be connected using aggregated ports.
Some access points with high-bandwidth radios (typically the 3x3 MIMO, 802.11ac access points such as the AP-224 and AP-225) can theoretically exceed the maximum bandwidth of a 1Gbit/s ethernet connection for client traffic (1.3Gbit/s for the 5GHz radio and 450Mbit/s for the 2.4GHz radio). To cope with this, these access points support the use of LACP (the Link Aggregation and Control Protocol) — a standard method of grouping ports together to increase available bandwidth.
Most network equipment uses a hashing algorithm to distribute traffic across multiple ports in an aggregated group (rather than evenly distributing it across available links, perhaps in a round robin fashion). This hashing algorithm is often configurable but is usually based on one or more of the MAC (Ethernet) address, IP address and TCP/UDP port number of the sender and/or receiver. As such, the traffic between the controller and the access point needs to be distinguishable in one of the above ways, to ensure traffic is distributed.
Distribution across the two links is achieved by the access point using a second IP address for the same controller for all traffic to and from the 2.4GHz (802.11g) radio. All management and client traffic from the 5GHz (802.11a) radio will use the first (normal) IP address. This second address is configured to be the same as the first, but with the last octet incremented by one. As such, the upstream switch(es) should be configured to include at least the source IP address (that of the controller) in the hashing algorithm when forwarding traffic down to the access point.
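The principle can be illustrated with a toy sketch (this is not the vendor's actual hashing algorithm, and the controller addresses shown are hypothetical documentation addresses): a hash over the source IP address maps the controller's two addresses, which differ only in the last octet, onto different links.

```python
# Illustrative sketch only: how a source-IP-based hash can split traffic
# from a controller's two addresses across two aggregated links.
import ipaddress


def link_for_source(src_ip: str, num_links: int = 2) -> int:
    """Toy hash: XOR the address octets, then take the result modulo the
    number of links. Real switches use vendor-specific hash functions,
    but the principle is the same."""
    h = 0
    for octet in ipaddress.ip_address(src_ip).packed:
        h ^= octet
    return h % num_links


# Hypothetical controller addresses: the second is the first with the
# last octet incremented by one, as used for 2.4GHz radio traffic.
primary = "192.0.2.10"    # 5GHz radio traffic
secondary = "192.0.2.11"  # 2.4GHz radio traffic

# The two source addresses hash to different links, so the traffic of
# the two radios is carried on different members of the aggregate.
print(link_for_source(primary), link_for_source(secondary))
```

Because the two addresses differ in the last octet, any hash which includes the source IP address separates them; a hash based only on MAC addresses or destination IP would put all controller-to-AP traffic on one link.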
Access points which support this feature will automatically attempt to aggregate the first two wired ethernet ports (E0 and E1), negotiating this via LACP — it does not need to be explicitly enabled. If this is successful, traffic will automatically be distributed across them, as described above.
If LACP is not configured on the port(s), the connection will fall back to using an unaggregated single link for all traffic to and from the controller. It is important that, if LACP is not configured on the switch, only a single uplink is connected to the access point and the other uplink port is left unconnected: if both links are connected, the access point will fall back to aggregating the ports in a static (unmanaged/forced) channel group, causing disruption with the upstream switch. However, this behaviour must not be relied upon and only LACP should be used for aggregated connections.
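A matching switch-side configuration for the aggregated pair might look like the following sketch (Cisco IOS syntax; the interface names, channel-group number and management VLAN ID are hypothetical examples):

```
port-channel load-balance src-ip          ! include the source IP in the hash
!
interface range GigabitEthernet1/0/13 - 14
 description AP uplinks E0/E1 (hypothetical)
 switchport mode access
 switchport access vlan 300               ! hypothetical management VLAN
 channel-group 10 mode active             ! negotiate aggregation via LACP
!
interface GigabitEthernet1/0/14
 power inline never                       ! PoE only on the port feeding E0
```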
Traffic on locally-bridged VLANs will be forwarded according to the same criteria except that, as this traffic is not tunnelled from the controller but comes directly from the source itself, the switch will distribute it according to the pattern of the traffic itself (e.g. if the hashing is based solely on the source IP address, the link used will depend on the source, regardless of which radio the client is ultimately connected to).
Note: Due to an issue with the way PoE is negotiated on an access point with two network interfaces, PoE MUST only be enabled on the switch port connected to port E0 on the access point. If PoE is enabled to E1, the access point will run in power-saving mode with some functionality disabled, resulting in lower performance.
Last updated: 30th January 2017