
The UIS hosting service allows institutions to have their servers hosted in UIS-managed data centre facilities, either physically or virtually. This page describes the networking available with this service.

More general information about the physical hosting service is available on the hosting service page.


Options

The networking depends on the type of hosting:

  • Physical hosting (also known as "colocation") — an institution's own hardware is physically housed in one of the UIS-managed data centre facilities. The institution is responsible for the maintenance, replacement and upgrades of the hardware and physical connectivity.
     
  • Virtual hosting — UIS provides virtual machines that share the same physical hardware as other virtual machines. We provide and maintain the physical hardware and connectivity, and handle maintenance, replacement and upgrades. This option is currently only available by special arrangement.

For physical hosting, the networking can be done in two different ways, depending on whether or not a dedicated rack is taken:

  • Dedicated rack — here the institution has an entire rack in which to locate their equipment: the rack has only dark fibre connections and it is the institution's responsibility to organise networking to it, either using GBN circuits and internal data centre dark fibres, or by taking a UDN PoP switch.
     
  • Shared rack — the institution is allocated space in a rack that is shared between themselves and other institutions. UIS provides network equipment at the top of the rack, which has connectivity to the UIS Data Centre Network (DCN).

The following table summarises the responsibilities with the different options:

                        Physical with dedicated rack   Physical with shared rack   Virtual
Network equipment       Institution                    UIS                         UIS
Physical connections    Institution                    Institution                 UIS
IP/VLANs                Institution                    Institution                 UIS and institution

 

Physical connectivity in shared racks

This section applies only to physical hosting in a shared rack.

The network equipment provided is a Top-of-Rack (ToR) Cisco Nexus "leaf" switch.  All the ports on the DCN are 1/10/25GE-capable SFP+ ports, giving two options for cabling:

  • Direct Attach Cable (DAC) (recommended), capable of running at 10G or 25G.
  • Copper RJ45 transceiver, capable of running at 1G.

DAC cables and transceivers can be supplied by the UIS at cost, or customers can provide these themselves, as long as they are Cisco Nexus-compatible.  10GE RJ45 transceivers are available but are unsupported on the Cisco Nexus platform: customers wanting to use these can provide them themselves, but must take responsibility for any issues.

Hosts can then use either a single or dual connection:

Single connection

  • Physical connectivity: one connection to a single leaf switch in the same rack.
  • Redundancy: the single leaf switch is a single point of failure.  If it is unavailable, connectivity will not be restored until it is repaired or replaced.  Customers should ensure that the service does not require high uptime, or arrange for redundancy in the design of the system (failover, or redundant hosts).
  • Link configuration: a simple, standalone port.
  • Availability during software upgrades: the leaf device will go offline for 5–10 minutes during a software upgrade.  Customers will be notified of this work in advance but cannot ask for it to be rescheduled.

Dual connection

  • Physical connectivity: a pair of connections to two separate leaf switches in the same rack.
  • Redundancy: dual leaf switches provide redundancy in the event of a single unit failing.
  • Link configuration: the ports can be configured either as two standalone ports, where the host must manage distributing traffic across them (perhaps using host-based distribution on a vhost) or use them as active-standby, or as a single logical port in a Link Aggregation Group (LAG), sometimes called a 'port-channel' (Cisco) or 'trunk' (HPE/Aruba).  A LAG would typically be negotiated through LACP with Slow LACPDUs (see the sketch below), but it can also be configured statically.
  • Availability during software upgrades: one leaf device will go offline and return before the other is similarly upgraded, so connectivity should be maintained throughout.  Customers will be notified of this work in advance.  They cannot ask for it to be rescheduled, but should ensure that maintenance on their own hosts does not take place at the same time, as that would degrade redundancy.

Typically, the main in-band connection to a host would be made through a redundant, dual connection.  If a host has an out-of-band Lights Out Management (LOM) or similar connection, this would be made through a single link, as redundancy there is less important.
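
For the LAG option above, the following is a minimal sketch of how a Linux host might bring up its two connections as an LACP bond.  It is an illustration only, not a supported procedure: the interface names (eno1, eno2), the bond name and the use of iproute2 driven from Python are all assumptions, and the matching port-channel on the leaf switches has to be configured by UIS.

    import subprocess

    PHYS_NICS = ["eno1", "eno2"]   # hypothetical names of the two cabled ports
    BOND = "bond0"                 # hypothetical name for the aggregated link

    def run(*args: str) -> None:
        """Run an iproute2 command, stopping if it fails."""
        subprocess.run(["ip", *args], check=True)

    # Create the bond in 802.3ad (LACP) mode with slow LACPDUs, matching the
    # "LACP with Slow LACPDUs" option described above.
    run("link", "add", BOND, "type", "bond", "mode", "802.3ad", "lacp_rate", "slow")

    # Enslave the two physical ports (they must be down before joining the bond).
    for nic in PHYS_NICS:
        run("link", "set", nic, "down")
        run("link", "set", nic, "master", BOND)

    # Bring everything up; addressing is applied later (see LANs/IP addresses below).
    for nic in PHYS_NICS:
        run("link", "set", nic, "up")
    run("link", "set", BOND, "up")

In practice this would normally be made persistent through the operating system's own network configuration tooling (netplan, NetworkManager, systemd-networkd or similar) rather than one-off commands.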

LANs/IP addresses

This section applies to both physical hosting in a shared rack and virtual hosting.

Once physical connectivity has been established, one or more VLANs, with IP subnets, will need to be presented on the links to make them useful.

In all cases, the VLAN provided to a host will be one specific to the client institution (or group within an institution, if appropriate). Further hosts will typically be added to the same VLAN.

The VLAN and subnet must be separate from those provided to an institution elsewhere on the UDN — for example, it cannot be the same VLAN that is fed to an institution's PoP switch. As such, any hosted equipment must use IP addresses in a distinct subnet.
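
As an illustration of presenting a VLAN and subnet on the link, the sketch below continues the bond example above, assuming the VLAN is delivered tagged on the link.  The VLAN ID (100), the address (taken from a documentation range) and the gateway are purely illustrative; the real values are allocated by UIS.

    import subprocess

    BOND = "bond0"               # the aggregated link from the earlier sketch
    VLAN_ID = 100                # hypothetical VLAN allocated to the institution
    ADDRESS = "192.0.2.10/26"    # hypothetical host address within the allocated subnet
    GATEWAY = "192.0.2.1"        # hypothetical default gateway on the DCN

    def run(*args: str) -> None:
        subprocess.run(["ip", *args], check=True)

    vlan_if = f"{BOND}.{VLAN_ID}"

    # Create the tagged subinterface on top of the bond and give it an address.
    run("link", "add", "link", BOND, "name", vlan_if, "type", "vlan", "id", str(VLAN_ID))
    run("addr", "add", ADDRESS, "dev", vlan_if)
    run("link", "set", vlan_if, "up")

    # Default route towards the DCN firewall and, beyond it, the rest of the UDN.
    run("route", "add", "default", "via", GATEWAY, "dev", vlan_if)

If only a single, untagged VLAN is presented, the address and route would simply be applied to the bond (or single port) directly.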

The subnet will come from a range designated for use on the DCN, separate from the blocks used by an institution at their other sites, and will be sized appropriately for the hosting needs of the client institution. When a new subnet is set up, future requirements for growth will be discussed. In the event that a subnet is filled and a new, larger one is allocated, the institution will be expected to renumber their hosts into the new range over an appropriate period of time.
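
When sizing is being discussed, it can help to check how many hosts a proposed prefix actually accommodates, and which existing addresses would fall outside a replacement range during renumbering.  Below is a short sketch using Python's standard ipaddress module, with made-up documentation prefixes rather than real DCN allocations.

    import ipaddress

    # Hypothetical prefixes from a documentation range; the real allocations on
    # the DCN are made by UIS and will differ.
    current = ipaddress.ip_network("198.51.100.0/27")     # the filled-up subnet
    proposed = ipaddress.ip_network("198.51.100.128/25")  # the larger replacement

    # Usable host addresses (total minus the network and broadcast addresses).
    print(f"{current} offers {current.num_addresses - 2} host addresses")    # 30
    print(f"{proposed} offers {proposed.num_addresses - 2} host addresses")  # 126

    # During renumbering, identify addresses that fall outside the new range.
    host = ipaddress.ip_address("198.51.100.10")
    print(host in proposed)  # False: this host needs a new address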

This network will be routed on the DCN: it is reached through the DCN firewall, and from there traffic passes out onto the UDN.  Traffic between the hosted servers and other sites used by the customer (e.g. their institutional building) will therefore cross the main UDN network.
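
To check that traffic from a hosted server to an institutional site is taking this path, a simple trace can be run from the host.  The sketch below assumes the traceroute utility is installed and uses a placeholder hostname for a machine at the institution's own site.

    import subprocess

    # Placeholder for a host at the institution's own site.
    DESTINATION = "institutional-host.example.org"

    # Show each hop on the way out through the DCN firewall and across the UDN.
    result = subprocess.run(["traceroute", DESTINATION], capture_output=True, text=True)
    print(result.stdout)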

 

Contact us

Data Centre Operations' core hours

Data Centre Operations' core operating hours are Monday to Friday, 08:00 to 18:00 (under normal circumstances).

We will still be available to assist beyond 18:00 for pre-planned work, provided we receive a minimum of two working days' notice.
 

General enquiries

For general enquiries and visitor booking (which requires 24 hours' notice), contact us:

Phone: 01223 760105
Email: wcdc@uis.cam.ac.uk
 

Emergency and out-of-hours contact

Phone: 01223 760100
Email: wcdc-security@uis.cam.ac.uk

 

Last modified: 24th October 2024