Timing for 5G - Part 1
Delivering sub-microsecond time accuracy to cellular base stations was one of the major challenges cellular providers faced as they started to deploy their LTE networks. The challenge was further exacerbated by LTE-A’s stringent synchronization requirements and, now that 5G is at our doorstep, time accuracy requirements are being “stretched” again, challenging the boundaries of what physics can deliver.
A brief evolution of the Radio Access Network (RAN)
Generally speaking, a cellular communications system comprises the following parts:
- Air interface: from the base station to the user equipment (UE)
- Core network: broadband packet network
- Radio Access Network (RAN): everything else; in other words, the intermediate network that connects the core with the air interface
Over the years and mobile generations the air interface has radically changed to accommodate higher data-rates:
- 2G: ≤ 100 kbps per UE (GPRS) with delays exceeding 500 ms
- 3G: ~ 10 Mbps per UE (HSDPA) with delays around 100 ms
- 4G: ≤ 300 Mbps per UE (1 Gbps with LTE-A) with delays of tens of milliseconds
- 5G: ≤ 1 Gbps per UE with millisecond delays for relevant services
And, at the same time, the RAN has followed:
- GRAN: GSM radio access network (2G)
- GERAN: GRAN plus EDGE packet services (2.75G)
- UTRAN: UMTS radio access network (3G)
- E-UTRAN: LTE RAN (4G)
- 5G New Radio (NR)
Why are RAN Timing Requirements Becoming Stricter?
Frequency and Time requirements of mobile networks are defined to assure efficient and proper functioning of the air interface. Base stations require increasingly accurate timing to:
- Maximize data rates
- Minimize guard frequencies/times in order to maximize spectral efficiency
- Utilize bandwidth-boosting technologies such as Carrier Aggregation (CA) and MIMO/CoMP
- Optimize the user experience:
  - Smoother handovers
  - Reduced delay
  - Location-based services (LBS)
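To see why tighter timing buys spectral efficiency, consider the OFDM cyclic prefix: a per-symbol guard time that must absorb both multipath delay spread and any residual synchronization error. The sketch below follows 3GPP numerology (15 kHz × 2^µ subcarrier spacing, so the cyclic prefix shrinks as µ grows); the delay-spread and timing-error figures are purely illustrative assumptions.

```python
# Rough illustration: the cyclic prefix (CP) is a guard time that must
# cover multipath delay spread PLUS residual timing error. Higher NR
# numerologies (mu) halve the CP each step, leaving less room for
# sync error. Delay-spread and timing-error values are assumptions.

def normal_cp_us(mu):
    """Approximate normal CP length (us) for NR numerology mu.

    LTE's normal CP is ~4.69 us at 15 kHz spacing (mu=0); NR scales
    symbol and CP durations by 2**-mu as subcarrier spacing grows.
    """
    return 4.69 / (2 ** mu)

delay_spread_us = 1.0   # assumed multipath delay spread
timing_error_us = 0.5   # assumed residual synchronization error

for mu in range(4):     # 15, 30, 60, 120 kHz subcarrier spacing
    cp = normal_cp_us(mu)
    margin = cp - delay_spread_us - timing_error_us
    print(f"mu={mu}: CP={cp:.2f} us, margin={margin:.2f} us")
```

With these assumed numbers, the margin already goes negative at µ=2 (60 kHz spacing), which is one intuition for why higher-band 5G deployments cannot tolerate the timing slack that 2G/3G lived with.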
Generally speaking, base stations (NodeB, eNodeB, gNodeB) obtain timing from the RAN unless they have a local source of timing (e.g., GNSS). Hence, advances in air-interface capabilities place a heavier load on the RAN to provide the required timing.
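When the RAN delivers timing over packets, it typically does so with PTP (IEEE 1588). The core of the protocol is a four-timestamp exchange from which the slave estimates its offset from the master, assuming a symmetric path. The sketch below shows the standard offset/delay arithmetic; the timestamp values are hypothetical, and real deployments rely on hardware timestamping for nanosecond-level results.

```python
# Sketch of the IEEE 1588 (PTP) offset computation. Timestamps are
# hypothetical nanosecond values; the formula assumes the master->slave
# and slave->master path delays are equal (symmetric path).

def ptp_offset_and_delay(t1, t2, t3, t4):
    """t1: Sync sent by master      (master clock)
    t2: Sync received by slave      (slave clock)
    t3: Delay_Req sent by slave     (slave clock)
    t4: Delay_Req received by master (master clock)
    Returns (slave offset from master, mean one-way path delay)."""
    offset = ((t2 - t1) - (t4 - t3)) / 2
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, mean_path_delay

# Example: slave clock runs 500 ns ahead, one-way delay is 1000 ns
t1 = 0
t2 = t1 + 1000 + 500    # path delay + slave offset
t3 = t2 + 10_000        # slave replies 10 us later
t4 = t3 + 1000 - 500    # path delay - slave offset
print(ptp_offset_and_delay(t1, t2, t3, t4))  # (500.0, 1000.0)
```

Note that any uncompensated path asymmetry translates directly into offset error, which is why on-path support matters so much for fronthaul-grade accuracy.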
xHaul Evolution and 5G RAN
The transition to 5G is also fundamentally changing the architecture of RAN networks. 2G/3G RANs relied solely on backhauling, that is, transporting data packets from the base station back to the core network and vice versa.
LTE and LTE-A RANs employ a more sophisticated transport architecture:
- X2 interfaces: between neighboring cell sites for handoff and CoMP.
- CPRI fronthaul: between the remote radio head at the antenna and centralized baseband processing (also referred to as Cloud-RAN)
5G, on the other hand, takes the RAN architecture to a whole new level, where multiple functional splits between the mobile core and the air interface are possible. This introduces additional xHaul options, offering further implementation flexibility and, hopefully, cost reduction at the end of the day.
Absolute vs. Relative Time Error (Clustering)
Another change that 5G brings (already started with LTE-A) is the transition from absolute to relative timing. Today, ITU-T’s time accuracy requirements still address the absolute time error allowed between UTC (Coordinated Universal Time) and the air interface. However, MIMO (Multiple Input, Multiple Output) technologies and LBS only require time alignment within a cluster of neighboring base stations (a “synchronization cluster”). Hence, new requirements for relative time error need to be constructed for 5G. This may enable meeting stringent requirements without significantly changing existing time distribution practices.
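The distinction is easy to state numerically: absolute time error compares each base station against UTC, while relative time error compares the cluster members against each other. The sketch below contrasts the two checks. The ±1.5 µs absolute limit roughly reflects the well-known ITU-T G.8271 end-to-end figure for LTE TDD; the 260 ns relative limit and the measured values are illustrative assumptions.

```python
# Absolute vs. relative time error for a synchronization cluster.
# Limits and measurements are illustrative: ~1.5 us is the familiar
# end-to-end absolute budget for LTE TDD; the relative limit is an
# assumed cluster requirement for features like MIMO/CoMP.

ABS_LIMIT_NS = 1500   # max |time error| vs. UTC per base station
REL_LIMIT_NS = 260    # max spread within the cluster (assumed)

# Hypothetical measured time error vs. UTC, in nanoseconds
cluster_te = {"gNB-1": -900, "gNB-2": -760, "gNB-3": -680}

abs_ok = all(abs(te) <= ABS_LIMIT_NS for te in cluster_te.values())

# Relative TE: worst-case pairwise difference within the cluster
rel_te = max(cluster_te.values()) - min(cluster_te.values())
rel_ok = rel_te <= REL_LIMIT_NS

print(abs_ok, rel_te, rel_ok)
```

The point of the example: all three stations share a common ~–800 ns bias against UTC, yet the cluster is tightly aligned internally. A relative requirement passes where a tighter absolute one would fail, which is exactly why clustering may let operators keep existing distribution practices.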
Timing Distribution for Fronthauling
Existing LTE RAN CPRI interfaces have a very strict transport delay accuracy requirement of 16 nanoseconds. Even if this requirement is relaxed for 5G’s NR RAN, it is still likely to stay on the order of a few tens of nanoseconds. This means that NR fronthauling, regardless of the transport technology (e.g., microwave, Ethernet, OTN) or functional split used, will have to support extremely accurate time distribution. How this may be accomplished is still under study by the relevant SDOs (e.g., ITU-T SG15/Q13). In any case, it is fairly certain that on-path timing support (e.g., PTP Transparent Clocks or Boundary Clocks) will be mandatory.
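A back-of-the-envelope budget shows why on-path support becomes unavoidable: with a Boundary Clock at every hop, each node still contributes some constant time error, and those contributions accumulate along the chain. In the sketch below, the 10 ns per-node figure roughly matches an ITU-T G.8273.2 Class C boundary clock; the grandmaster error and per-link asymmetry numbers are assumptions, and the simple linear accumulation is a worst-case simplification.

```python
# Worst-case constant time-error budget for a PTP chain with full
# on-path support (a Boundary Clock at every hop). Per-node cTE of
# ~10 ns is in the ballpark of an ITU-T G.8273.2 Class C clock; the
# grandmaster and link-asymmetry figures are assumptions.

GM_TE_NS = 25        # grandmaster time error vs. UTC (assumed)
BC_CTE_NS = 10       # max constant TE added per boundary clock
LINK_ASYM_NS = 5     # uncompensated asymmetry per link (assumed)

def worst_case_te(num_hops):
    """Linear (worst-case) accumulation of constant errors over the chain."""
    return GM_TE_NS + num_hops * (BC_CTE_NS + LINK_ASYM_NS)

for hops in (2, 5, 10):
    print(f"{hops} hops -> {worst_case_te(hops)} ns worst-case TE")
```

Even under these optimistic per-node numbers, a ten-hop chain already exceeds 150 ns, well beyond a CPRI-class few-tens-of-nanoseconds target. Hence fronthaul timing chains must be kept short, built from the best clock classes, or both.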
Check out part 2 of this post, where we discuss how PTP grandmaster miniaturization is perfectly tuned to cater for the needs of 5G.
About RAD's Blog
We’ll be blogging on a wide range of hot topics affecting service providers and critical infrastructure network operators. Our resident experts will be discussing vCPE, Cyber Security, 5G, Industrial IoT and much, much more.