This is a briefing note for institutional (i.e. College or departmental) Computer Officers, to make them aware of the technological requirements that an institution's data network must meet in order to support Voice-over-IP (VoIP) in a manner compatible with that proposed for the University Telephone Network (UTN). The document does not deal with the support of VoIP by institutional Computer Officers, or with VoIP over Wi-Fi (IEEE 802.11).
The document has been produced for the JTMC Technical Working Group as the result of collaboration between the University Telecommunications Office and the University Computing Service.
The word "must" is used to indicate an absolute requirement, without which service will be impossible; the word "should" is used to indicate a recommendation without which service is likely to be degraded, even to the point of being practically unusable in some situations.
1. The minimum requirement
1.1 The cabling should conform to category 5 or better.
1.2 The network must be Ethernet and all links carrying VoIP traffic should run at 100 Mbps or more.
1.3 The active network equipment should consist only of switches and possibly routers - that is, the network should not contain any repeaters (hubs).
1.4 Use of private IP addresses must conform to Private (RFC1918) IP addresses within the University of Cambridge and the CUDN. Note that the addressing scheme on centrally managed VoIP VLANs will be prescribed centrally.
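By way of illustration (this is not the official allocation scheme, merely a check against the three RFC1918 ranges), Python's standard ipaddress module can verify whether an address is private:

```python
import ipaddress

# The three private ranges defined by RFC1918.
RFC1918_NETS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(addr: str) -> bool:
    """Return True if addr is an IPv4 address in an RFC1918 private range."""
    ip = ipaddress.ip_address(addr)
    return ip.version == 4 and any(ip in net for net in RFC1918_NETS)
```

For example, is_rfc1918("10.1.2.3") is True, while a globally routable address such as 198.51.100.7 is not.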
1.5 VLAN numbering within the institution must conform to that described in VLAN number allocation within the CUDN. Note that centrally managed VoIP-only VLANs will not normally be local to the institution.
1.6 Centrally managed VoIP-only VLANs should bypass any institutional firewall as firewalls can be a significant source of latency and jitter.
Such a network will be able to carry UTN VoIP traffic but, without provision for QoS, the speech quality may be noticeably inferior to that provided by the existing analogue UTN. In particular, short gaps may appear in speech at times of heavy use of the institution's network.
2. Additional requirements recommended by the JTMC's Technical Working Group
For the avoidance of doubt, all the requirements in 1. above also apply.
2.1 The network equipment should support IEEE 802.1p Quality of Service (QoS). The implication of supporting QoS is that the equipment will have adequate buffering and priority queuing mechanisms so that packet loss and jitter will not be an issue.
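The effect of priority queuing can be sketched with a toy model (the class and PCP values below are illustrative, not a description of any particular switch): frames in a higher 802.1p priority class always leave the egress queue before best-effort data, so voice frames are not delayed behind bulk transfers.

```python
import heapq
from itertools import count

class PriorityQueueSwitch:
    """Toy model of a strict-priority egress queue: frames with a higher
    802.1p Priority Code Point (PCP) always leave before lower-PCP frames."""
    def __init__(self):
        self._heap = []
        self._seq = count()  # preserves FIFO order within a priority class

    def enqueue(self, frame: str, pcp: int):
        # Higher PCP = higher priority; negate for Python's min-heap.
        heapq.heappush(self._heap, (-pcp, next(self._seq), frame))

    def dequeue(self) -> str:
        return heapq.heappop(self._heap)[2]

sw = PriorityQueueSwitch()
sw.enqueue("data-1", pcp=0)   # best-effort data
sw.enqueue("voice-1", pcp=6)  # voice class
sw.enqueue("data-2", pcp=0)
sw.enqueue("voice-2", pcp=6)
order = [sw.dequeue() for _ in range(4)]
# Voice frames leave first, each class in FIFO order.
```

A real switch would add per-class buffering on top of this, which is what keeps packet loss and jitter down under load.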
2.2 The network equipment should support IEEE 802.1q VLANs. Voice and data traffic should be transported on separate VLANs as, at present, QoS facilities are insufficiently well developed; note that this may not be feasible for softphones, but should be possible for dedicated VoIP instruments. Security concerns for the voice service are mitigated somewhat by using separate VLANs for voice and data traffic.
2.3 Where VoIP traffic is not carried on VoIP-only VLANs, there should be provision for marking packets to indicate that enhanced quality of service is required for those carrying voice traffic. There should also be provision for policing this marking, so that only packets entitled to enhanced quality of service (e.g. UTN VoIP packets) are so marked.
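As a sketch of what marking looks like from an application's point of view (the socket here is illustrative; in practice the handset and the network equipment would do the marking and policing), the DSCP code point conventionally used for voice is EF (Expedited Forwarding, 46), carried in the top six bits of the IP TOS byte:

```python
import socket

DSCP_EF = 46            # Expedited Forwarding: the DSCP conventionally used for voice
TOS_EF = DSCP_EF << 2   # DSCP occupies the top six bits of the TOS byte (46 << 2 = 184)

# Mark an (illustrative) UDP socket so that DiffServ-aware routers
# can give its packets enhanced treatment.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)

# Read the option back to confirm the marking took effect.
marked = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
sock.close()
```

Policing then consists of the network equipment re-marking (or dropping) any packet carrying EF that is not entitled to it.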
2.4 Barring pathological 'network storm' conditions, latency is unlikely to be a problem within institutional networks. However, the number of routers employed within an institutional network should be kept small unless those routers also provide wire-speed routeing.
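A rough one-way delay budget illustrates why a few wire-speed hops are not a concern. All per-hop figures below are assumptions for illustration, not measurements; the 150 ms ceiling follows ITU-T Recommendation G.114 for one-way mouth-to-ear delay:

```python
# Illustrative (assumed) per-hop one-way delay figures for a campus VoIP path.
# ITU-T G.114 recommends keeping one-way mouth-to-ear delay below ~150 ms.
BUDGET_MS = 150.0

hops_ms = {
    "handset packetisation + jitter buffer": 40.0,  # assumed codec/buffer figure
    "institutional switches (wire speed)":    0.5,
    "institutional router":                   1.0,
    "CUDN transit":                           2.0,
    "central telephony equipment":            5.0,
}

total_ms = sum(hops_ms.values())       # 48.5 ms on these assumed figures
headroom_ms = BUDGET_MS - total_ms     # comfortable headroom remains
```

On these figures the network hops contribute only a few milliseconds; it is slow (non-wire-speed) routers, not the number of switches, that would erode the budget.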
Subject to VLAN separation, such a network will generally be able to carry UTN VoIP traffic and the speech quality will be comparable to that provided by the existing analogue UTN.
3. Reliability/availability options for improved service
3.1 Electrical power
The existing analogue UTN provides a high degree of reliability and availability, in particular providing 2-hour capacity uninterruptible power supplies for all equipment, other than for those handsets and associated equipment (such as Ambassador systems) that are powered from the institution's mains supply. Institutions will have come to expect this degree of reliability and availability from the UTN.
The VoIP UTN service is provided using a range of infrastructure (CUDN, telephony equipment, institutional networks) that is more diverse than that used by the existing analogue UTN. In addition, that infrastructure is provided at present via several different management domains, which is not the case for the existing UTN.
Power for the central CUDN equipment is supplied by a UPS, backed up with a diesel generator with a 24-hour tank of fuel. The CUDN core switch/routers each have a 20-minute UPS (it is intended that these should be upgraded to match the current 2-hour capacity of the UTN). The CUDN Point of Presence (PoP) at each institution is not provided with a UPS as part of the standard installation, although a few institutions have provided uninterruptible power facilities for their PoP.
If an institution wishes its VoIP telephone instruments to be capable of operation during a local power failure, it will need to ensure that the instrument remains powered, e.g. by one of the following means:
a) battery back-up in the instrument;
b) UPS supplying the instrument;
c) power supplied to the instrument using Power-over-Ethernet (PoE) from the network equipment, or from separate PoE injectors (adaptors) where the network equipment lacks PoE capability (see, for example, background information on PoE; the JTMC Technical Working Group views support of PoE as highly desirable).
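As a back-of-envelope sketch (the per-handset draw and switch budget below are assumptions, not vendor data), a shared PoE budget can be checked against handset draw; IEEE 802.3af sources up to 15.4 W per port, of which roughly 12.95 W is available to the powered device after cable loss:

```python
# Assumed figures for illustration only.
PHONE_DRAW_W = 6.3         # assumed per-handset draw (a typical Class 2 device)
SWITCH_POE_BUDGET_W = 370  # assumed total PoE budget of a 48-port switch

def max_phones(budget_w: float, draw_w: float) -> int:
    """Number of handsets the shared PoE budget can power simultaneously."""
    return int(budget_w // draw_w)

phones = max_phones(SWITCH_POE_BUDGET_W, PHONE_DRAW_W)
```

On these assumed figures the budget comfortably covers all 48 ports; a switch whose budget is smaller than ports x per-port maximum simply cannot power every port at full 802.3af load at once, which matters when sizing the UPS behind it.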
The institution will also need to ensure that its network equipment and the CUDN PoP are protected by UPSs of capacity sufficient for the institution's requirements.
Clearly, if softphones (implementations of telephony handsets using software in personal computers) are to remain usable during a local power failure, the power supply to the personal computer must be maintained: it is envisaged that many institutions will decide that there is insufficient cost-benefit to provide UPSs for personal computers.
3.2 Environmental issues
An increasing amount of network and other IT infrastructure in institutions is installed in locations that are inadequate from the point of view of environmental conditions. This has usually arisen because the amount of equipment, and the electrical power it consumes, has grown over the years. Institutions should initiate a regular review of the environments in which their IT infrastructure is located, to ensure that the environment remains suitable for the equipment, and should take steps to rectify any inadequacies, in particular:
a) the ambient temperature should not exceed 25 deg C and must not exceed 30 deg C, even in summer conditions (much modern equipment is rated for operation at ambient temperatures of up to 40 deg C; some modern equipment will switch itself off when its temperature limit is exceeded; the recommendation of 30 deg C is to give a measure of headroom for hot-spots and temporary fault conditions);
b) the humidity should be between 20 and 90 % R.H.;
c) the location should be free from contaminants, including dirt.
If the equipment is protected by a UPS and housed in an environment that is cooled or ventilated artificially, the effects of a failure of the local electrical supply may be that the equipment is kept running but without the necessary cooling.
3.3 Physical security
The main point of presence should be in a physically secure location - this is a very strong recommendation - to prevent unauthorised access to (e.g. tapping into) the ends of network cables and any management console ports on the equipment. The same applies, with marginally less force, to other locations of network equipment within the institution, as rather less of the institution's installation would be vulnerable. Copper network cabling can be tapped into; fibre-optic cabling much less easily. The use of trunking (rather than open cable tray) is recommended in accessible areas.
It is conceivable that changes in the application of data protection legislation in the future might mandate a stricter approach to the physical security of networks. Institutions might wish to consider this when planning upgrades to their networks.
The institution must make arrangements for access by University data and voice network staff for fault diagnosis, repair and upgrade purposes. These arrangements need to take into account the need for emergency access, which might be required out of hours (consider the effect of a local failure of all voice network communications at the start of a bank holiday weekend) and might arise because failure of equipment in one institution may affect services in another. It is acceptable for such access to be provided indirectly via permanently staffed facilities (e.g. the University Security Office or a College Porters' Lodge). Such access must be taken into account when changes are made to access arrangements within an institution.
3.4 Reduction of single points of failure within the institution's network
Although modern network equipment is highly reliable, failures and accidents do occur. Consideration should be given to reducing single points of failure, for example by introducing redundancy in both equipment and network links. Network switches should support IEEE 802.1w Rapid Spanning Tree Protocol (RSTP - also known as Fast Spanning Tree Protocol, FSTP) to enable them to support multiple-path networks.
Although duplicate links from the CUDN point-of-presence switch within the institution to the relevant area switch-routers in the CUDN are planned, the CUDN PoP switch itself would remain a single point of failure. This is perhaps comparable with the Remote Peripheral Equipment (RPE) being a single point of failure in the existing analogue UTN. Institutions may wish to consider the cost and benefit of a second CUDN connection to eliminate this single point of failure.