
Monday, January 11, 2021

The Myth of "Always Best Connected"

 (This was originally posted as a LinkedIn Newsletter article. See this link, read the comment thread, and please subscribe)

It Was the Best of Times, it Was the Worst of Times

One of the most ludicrous phrases in telecoms is "Always Best Connected", or ABC. It is typically used by an operator, network vendor or standards organisation attempting to glue together cellular and Wi-Fi connections. It's a term that pretends some sort of core network function can automatically and optimally switch a user between wireless networks, without them caring - or even knowing - that it's happening.

Often it's used together with the equally stupid term "seamless handover", and perhaps claims that applications are "network agnostic", or that it doesn't matter what technology or network is used as long as the user can "get connected". Articles and papers frequently go on to describe all Wi-Fi usage on devices as "offload" from cellular (it isn't - perhaps 5% of Wi-Fi traffic from phones is genuine offload).

There's been a long succession of proposed technologies and architectures, mostly from the 3GPP and cellular industry, keen to embrace but downplay Wi-Fi as some sort of secondary access mechanism. Acronyms abound - UMA, GAN, IWLAN, ANDSF, ATSSS, HetNets and so on. There have been attempts to allow a core network to switch a device's Wi-Fi radio on/off, and even hide the Wi-Fi logo so the user doesn't realise that's being used. It's all been a transparent and cynical attempt to sideline Wi-Fi - and users' independent choice of connection options - in the name of so-called "convergence". Pretty much all of these have been useless (or worse) except in very narrow circumstances.

To be fair, accurate and genuine descriptions - let's say "Rarely Worst-Connected" or "Usually Good-Enough Connected" or "You'll Take What Connection We Give You & Shut Up" - probably don't have the same marketing appeal.

Who's Better, Who's Best?

The problem is that there is no singular definition of "best". There are numerous possible criteria, many of which are heavily context-dependent.

Which "best" is being determined?

  • Highest connection speed (average, or instantaneous?)
  • Lowest latency & jitter
  • Lowest power consumption (including network, device and cloud)
  • Highest security
  • Highest visibility and control
  • Lowest cost (however defined)
  • Greatest privacy
  • Best coverage / lowest risk of drops while moving around
  • Highest redundancy (which might mean 2+ independent connections)
  • Connection to the public Internet vs. an edge server

In most cases involving smartphones, the basic definition of "best" is "enough speed and reliability so I can use my Internet / cloud application with OK performance, without costing me any extra money or inconvenience". Yet people and applications are becoming more discerning, and the network is unaware of important contextual information.
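To make that concrete, here's a minimal Python sketch of a device-side scoring function. The weights, thresholds and the "good enough" speed cap are purely illustrative assumptions, not anyone's real algorithm - but note how the same two links produce a different "best" depending on whether the user is quota-limited:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    mbps: float        # measured downlink throughput
    latency_ms: float  # round-trip latency
    metered: bool      # does usage count against a data quota?

def score(c: Candidate, quota_limited: bool = True) -> float:
    """Toy multi-criteria score: speed saturates once it's 'good enough'."""
    s = min(c.mbps / 50.0, 1.0)                # enough speed for cloud apps
    s += max(0.0, 1.0 - c.latency_ms / 100.0)  # reward low latency
    if quota_limited and c.metered:
        s -= 0.5                               # quota users penalise metered links
    return s

links = [Candidate("wifi", 40, 12, False), Candidate("5g", 150, 25, True)]
print(max(links, key=score).name)                                    # wifi
print(max(links, key=lambda c: score(c, quota_limited=False)).name)  # 5g
```

The point of the toy example is that "best" flips with a single change of context - exactly the contextual information the network doesn't have.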

For instance, someone with flat-rate data may view "best" very differently from someone with a limited data quota. Someone in a vehicle at traffic lights may have a different connection preference to someone sitting on the sofa at home. Someone playing a fast-paced game has a different "best" to someone downloading a software update. A user on a network with non-neutral policies, or one which collects and sells data on usage patterns, may want to use alternatives where possible.

In an era of private cellular, IoT, multiple concurrent applications, encryption, cloud/edge computing and rising security and privacy concerns, all this gets even more complex.

In addition to the lack of a single objective "best", there are many stakeholders, each of which may have its own view of what is "best", according to its particular priorities.

  • The user
  • The application developer
  • The network operator(s)
  • The user's employer or parents
  • The building / venue owner
  • The device or OS vendor
  • A third-party connection management provider (eg SD-WAN vendor)
  • The government

On some occasions, all these different "bests" will align. But on others, there will be stark divergence, especially where the stakeholders have access to different options for connectivity. A mobile phone network won't know that the user has access to an airport lounge's premium Wi-Fi, because of their frequent flyer status. A video-streaming app can't work out whether 5G or Wi-Fi will route to a closer, lower-power edge server.

So who or what oversees these conflicts and makes a final decision on which connection (or, increasingly, connections plural) is chosen? Who's the ultimate arbiter - and what do the other stakeholders do about it?

This problem isn't unique to network connectivity - it's true for transport as well. I live in London, and if I want to get from my home to somewhere else, I have lots of "best" options. Tube, bus, drive, taxi, walk, cycle and so on. Do I want to get there via the fastest route? Cheapest? Least polluting? Easiest for social-distancing? Have a chance to listen to a podcast on the way? If I want to put the best smile on the most people's faces, maybe I should go by camel or unicycle? And what's best for the city's air, Transport for London's finances, other travellers' convenience, or whoever I'm meeting (probably not the unicycle)?

There are multiple apps that give me all the options, and let me define preferences and constraints. The same is true for device operating systems, or connection-management software tools.

Hit Me With Your Best Shot

There are also all sorts of weird possible effects where "application-aware networks" end up in battle with "network-aware applications". Many applications are designed to work differently on different networks - perhaps "only auto-download video on Wi-Fi", or "ask the user before software updates download over metered connections". Some might try to work out the user's preferences intelligently, and compress / cache / adjust the flow when they appear to be on cellular, or uprate video when the user is at home - or perhaps casting content to a larger screen. The network has little grasp of true context, or of users' and developers' preferences.
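As a sketch of what that looks like from the application side - assuming a hypothetical is_metered() helper standing in for a real OS query (such as Android's ConnectivityManager.isActiveNetworkMetered()), and an arbitrary size threshold:

```python
# Sketch of a "network-aware application" download policy. is_metered() is a
# stand-in for a real OS query; the 25 MB threshold is an arbitrary assumption.
def is_metered() -> bool:
    return True  # stub: pretend we're on a quota-limited cellular plan

def user_confirms(prompt: str) -> bool:
    return input(f"{prompt} [y/N] ").strip().lower() == "y"

def should_download(size_mb: float, auto_limit_mb: float = 25.0) -> bool:
    """Auto-download on unmetered links; ask first for big metered transfers."""
    if not is_metered() or size_mb <= auto_limit_mb:
        return True
    return user_confirms(f"Download {size_mb:.0f} MB over a metered connection?")

if __name__ == "__main__":
    print(should_download(300.0))  # prompts the user, since the link is metered
```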

Networks might attempt to treat a given application, user or traffic flow differently - perhaps giving it priority, or slowing or blocking it, or assigning it to a particular "slice". The application on the other hand might try to second-guess or game the network - either by spoofing another application's signature, or just using heuristics to reverse-engineer any "policy" or "optimisation" that might get applied.

You're My Best Friend

So what's the answer? How can the connectivity for a device or application be optimised?

There's no simple answer here, given the number of parameters discussed. But some general outlines can be created.

  • Firstly, there need to be multiple connections available, and ways to choose, switch or arbitrage between them - or bond them together.
  • The operating system and radios / wired connections of the device should allow the user (or apps) to know what's available, with which characteristics - and any heuristics that can be deduced from current and previous behaviour.
  • The user or device-owner needs to know "who or what is in charge of connections" and be able to delegate and switch that decision function when desired. It might be outsourced to their MNO, or their device supplier, or a third party. Or it could be that each application gets to choose its own connection.
  • As a default, the user should always be aware of any automated changes - and be given the option to disable them. These should not be "seamless" but "frictionless" or low-friction. (Seams are important. They're there for a reason. Anyone disagreeing with this statement must post a picture of themselves wearing a seamless Lycra all-in-one along with their comment).
  • Connectivity providers (whether SPs or privately-owned) should provide rich status information about their services - expected/guaranteed speed & latency, ownership, pricing, congestion, the nature of any data-collection or traffic inspection practices, and so on. This will be useful as input to the decision engines. Over time, it will be good to standardise this information. (Governments and policymakers - take note as well)
  • We can expect connectivity decisions to be partly driven by external context - location, movement, awareness of indoor/outdoor situation, environment (eg home, work, travelling, roaming), use of accessories like headphones or displays, and so on.

Going forward, we can expect wireless devices to have some form of SD-WAN-type control function. Using technologies such as multipath TCP, it will become easier to use multiple simultaneous connections - perhaps dedicating some to specific applications, or bonding them together. For security and privacy, the software may send packets via diverse routes, stopping any individual network monitoring function from seeing the entire flow.
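As a rough illustration of the direction of travel, the sketch below pins different TCP flows to different network interfaces. It's Linux-specific, the interface names are hypothetical, SO_BINDTODEVICE requires elevated privileges, and production multipath would more likely use kernel MPTCP or a commercial SD-WAN client:

```python
import socket

def connect_via(interface: str, host: str, port: int = 443) -> socket.socket:
    """Open a TCP connection pinned to one network interface.

    Linux-only sketch: SO_BINDTODEVICE needs root/CAP_NET_RAW, and a real
    multipath setup would more likely use kernel MPTCP or an SD-WAN agent.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BINDTODEVICE, interface.encode())
    s.connect((host, port))
    return s

# Hypothetical interface names: pin a latency-sensitive control channel to
# Wi-Fi, and a bulk transfer to cellular, concurrently.
# ctrl = connect_via("wlan0", "example.com")
# bulk = connect_via("wwan0", "example.com")
```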

Growing numbers of devices will have eSIM capability, allowing new network identities / owners to be added. Some may have 2+ cellular radios, as well as Wi-Fi (again, perhaps 2+ independent connections), USB and maybe in future satellite or other options as well.

Add in the potential for Free 5G (link), beamforming, private 5G, local-licensed spectrum, Wi-Fi relaying & assorted other upcoming innovations, and there are even more layers here.

The bottom line is that "best connected" will become even more mythical in future than it already is. But there will be more options - and more tools - to try to optimise it, based on a dynamic and complex set of variables - especially when going beyond connectivity towards overall "quality of experience" metrics spanning eyeball-to-cloud. There are likely to be plenty of opportunities for AI, user-experience designers, standards bodies and numerous others.

But (with apologies to Tina Turner), users should always be wary of any software or service provider that claims to be "Simply the Best".

If you've enjoyed this article, please sign up for my LinkedIn Newsletter (link). Please also reach out to me for advisory workshops, consulting projects, speaking slots etc.

#5G #WiFi #cellular #mobile #telecoms #satellite #wireless #smartphones #connectionmanagement

Sunday, December 06, 2020

10 Principles for Telecoms Vendor Diversification in the UK & Beyond

This was originally published as one of my newsletter articles on LinkedIn. Click here for discussion and commentary & to subscribe. 

Introduction

The UK is currently a hive of activity for government and regulatory involvement in telecoms. I can’t remember a time when so much emphasis has been put on my domain – from election commitments on gigabit broadband, to concerns over “high risk vendors” (HRVs) – notably Huawei.

This week has seen further progress through Parliament of the Telecom Security Bill (link), which imposes legal obligations on telcos around cybersecurity and HRVs. There has also been the linked publication of the 5G Supply Chain Diversification Strategy (link), which ties the removal of Huawei gear to the government’s intention to expand operators’ choice of other vendors.

I’m going to be spending considerably more time on the policy aspects of telecoms in coming months – not just my normal areas like spectrum, but more broadly the intersection with geopolitics, technology evolution and industrial strategy, competition and trade.

This article focuses on the diversification aspects - my thoughts on the published strategy, plus what I’d like to see in recommendations from the Task Force and policies from government in 2021. It’s a follow-on from my recent post on interoperability. Note: I’m not revisiting the HRV or Huawei issue here.  

I should stress that this isn’t just parochial and UK-specific - it has wider ramifications on the global telecom market, and links up with activities in Brussels, Washington and elsewhere, such as the US Open RAN Policy Coalition, and the EU’s cybersecurity “toolbox” and upcoming European Cybersecurity Strategy review.

Disclosure – my advisory clients span a broad range of UK and international organisations, from startups to large vendors, service providers of numerous types, investors and branches of government. I work with companies and organisations that enable closed macro & small-cell networks, Open RAN, Wi-Fi, satellite connectivity and more. As people who know me will attest, my opinions are my own – and attempts to influence them will often backfire, even if made by paying clients. In fact, people pay me because I regularly say things they don’t want to hear. I like saying “no”.

Background

Even before the pandemic there was huge UK government engagement – and manifesto commitments – on “full fibre”, 5G mobile networks, sponsored testbeds & trials, and even satellite communications with the investment in OneWeb.

A lot of my own focus in recent years has been triggered by the Future Telecom Infrastructure Review in 2018, which kicked off the current regulatory enthusiasm for localised spectrum, enterprise/private cellular and neutral host networks – although other commentators had also advocated this for some time previously (*coughs modestly*).

In the last 6-12 months, there has been a specific focus on “supply chain diversification”, and a desire by policymakers to increase the number of equipment/software vendors in the market for network infrastructure. This isn’t new – the Government published its initial Telecom Supply Chain Review in mid-2019 – but it has lately taken on greater urgency.

The largest catalyst has been the recent action taken on Huawei, and what that means for the supply of equipment in the UK – particularly for national 5G RAN build-outs by the four main UK MNOs: BT, Vodafone, Telefonica O2 and 3UK.

The net result of this has been the establishment of the UK Telecoms Diversification Task Force as an advisory group (link), aligned with an internal project to develop a strategy and policy for broadening the vendor base, being run by DCMS (Department of Digital, Culture Media & Sport).

The new strategy document highlights what it sees as a duopoly of Nokia and Ericsson, especially for macro RAN gear, and suggests that, if that continues, it poses a risk to the future resilience of the supply chain. During the various Science & Technology committee hearings this year, there has been input from vendors, operators, security officials, task force members and others.

The discussion has largely been 5G-dominated, although the strategy document also mentions fixed-infrastructure diversification (subject to ongoing consultation and review). Many of the parliamentarians seem to think 5G is something special, and have bought into the “unicorn” visions of GDP uplift and “ubiquity”. (My regular readers know that 5G is “just another G” – an important upgrade, but not something which will change the world).

The strategy proposes three areas of action:

  • “Supporting incumbent suppliers” (Nokia and Ericsson) as major vendors, but suggests various approaches towards nudging them to greater levels of openness.
  • “Attracting new suppliers into the UK market” – this essentially means working out ways to get Samsung, NEC & Fujitsu more involved, as well as others. The parliamentary debate’s speakers also name-checked Mavenir, Parallel Wireless, Rakuten’s platform business and others.
  • “Accelerating open-interface solutions and deployment” – which refers more to the realm of industrial policy around Open RAN, and components such as semiconductors.

As you might imagine, I’ve got some fairly trenchant opinions on much of this.

Is the market that concentrated?

Clearly, the UK MNOs are today almost entirely dependent on Huawei, Nokia and Ericsson for their macro RAN deployments, although Samsung has previously been present in 3UK’s 4G network, and Vodafone has recently started deploying gear from Mavenir in its Open RAN deployment.

However, some countries such as the US and Japan have maintained a greater diversity in macro RAN supply, despite a lack of Huawei gear - although there are some differences compared to the UK. Continued support of older 2G/3G services currently relying on combined “single RAN” infrastructures is a valid concern – and the Diversification report suggests it might be possible to sunset or improve interoperability there. The Samsung presentation and letter to the committee also had some suggestions about this (link).

I think there’s perhaps also a link to the historical “3GPP monoculture” in UK/Europe. Other regions had a mix of GSM, CDMA and local alternatives, which fostered greater supply fragmentation originally, which endured over time as the "single RAN" approach wasn't as much of an obvious win (or lock-in).

It is worth noting that there is already good diversity for private cellular networks and specific mobile products such as 4G/5G cores, indoor wireless and other niches such as fixed-wireless access. Many alternative suppliers are gaining traction first in rural and other “secondary” areas, rather than dense urban macro locations.

One aspect the government hasn’t appeared to consider is how much of the anticipated 5G “upside” (whether you believe the $billions GDP numbers or not) is conveniently located in these very contexts which have greater levels of supply diversity. Many of the expected new 5G applications are indoors (in factories, hospitals etc), or in sectors such as agriculture.

Another set of “advanced connectivity” applications have alternative technology options, especially over the 3-5 years it will take 5G to mature. Wi-Fi 6/6E/7, LoRa, 60GHz FWA, new satellite constellations and proprietary platforms like Amazon Sidewalk all offer alternatives to 5G. Yet I still hear people talking about 5G for low-latency AR/VR in people’s homes, when it’s obvious that 90%+ of that will use Wi-Fi, for multiple reasons.



Reading the report and listening to the debates, there seems to be a certain amount of hindsight here, with regrets that previous governments hadn’t thought through possible consolidation from three big cellular vendors to two, irrespective of which was taken out of the equation or how. Some speakers went back further, to the days of Nortel and Marconi, mourning the loss of greater diversity and national sovereign capability.

There’s also an implied sense of worry that one of the existing incumbents might make a mis-step. It’s notable that the “supporting incumbents” line was absent in January discussions, but was perhaps catalysed by Nokia’s 5G woes earlier in this year. The US Attorney General floating the possibility of a US company acquiring either Nokia or Ericsson, probably raised the stakes even further, even if that suggestion was rapidly shot down at the time.

Other concurrent drivers have related to Brexit, trade deals with Japan (and presumably EU, US and S Korea in future) and the enthusiasm of the current administration for more “industrial policy”. There is interest in state-aid for many areas of technology, ranging from hydrogen-powered aircraft (“Jet Zero”) to biotech to quantum computing, with the aim of improving the UK’s export and trading prospects in new and emerging areas. Telecoms technology needs to be seen in the context of a very expansive vision from artificial meat to nuclear fusion. (Wearing my futurist hat, I heartily approve of this).

Open RAN & disaggregation

Perhaps the least-cohesive part of the strategy document (and some initial actions like the testing and interoperability lab announcements) is the focus on Open RAN as the main saviour of supply-chain diversification. It got a huge amount of airtime in the DCMS report, as well as in politicians’ speeches.

In my view, Open RAN is similar to 5G more generally – important, but getting rather over-hyped. It will matter a great deal in future, but it's not the only game in town. Perhaps it will form the centrepiece of 6G, but for 5G macro – which is being deployed now – it’s going to be secondary, even if some of the Huawei rip-and-replace by 2027 uses it.

There seems to be quite a lot of disagreement between the MNOs as well – Vodafone is clearly a fan, while BT and 3UK seem more sceptical, with O2 somewhere in the middle.

I’m far from convinced that some of the detailed aspects in the document and annex – going as far as discussing eCPRI interfaces and 7.2 O-RAN splits – are the pivot-points for the overall diversification or resilience story. We don’t have TIP specs for OpenRAN 5G Massive MIMO yet, and may not get there for quite a while.

We’ll see a growing amount of vendor orientation on cloud and open RAN approaches anyway – Samsung, NEC and even Nokia are pursuing it. Ericsson and Huawei are being more diffident, but also seem to recognise that virtualisation is important, even if they’re not breaking open all bits of the RAN. Ericsson's recent Cloud RAN announcement could reasonably be described as "tentative" (link).

While there’s a lot of action and excitement with Rakuten, Dish and other greenfield networks, that doesn’t mean that operators in the UK or elsewhere would necessarily follow suit, even if they could do it tomorrow. It would be nice for the option to be there – but I’m a little concerned that the document asserts that interoperability should always be a default rather than a viable option. (If you haven’t seen my post on interop, have a scan through it here). Different operators have different views - and different legacy infrastructure.

Think of an analogy: should the government also suggest that Airbus planes should interoperate with Boeing avionics? Or, for that matter, how many of the advocates would accept Linux as the “default” OS for their laptops, rather than being able to choose Windows or MacOS if they prefer?

I expect we'll see a growing amount of Open RAN in rural and then perhaps suburban areas - but it's going to be a long time before it's common in existing MNOs' urban cores and high-density macro domains. It's an interesting platform for neutral host networks too, as the NEC trial points out. It is part of the overall “choice architecture” for future networks, but arguably the most interesting domains for advanced connectivity will get more choice / vendor competition from non-5G technology options. The normal 5G macro RAN is more about capacity for smartphone broadband, rather than clever new applications. 



What we should aim to see from future UK Diversification recommendations & policy

What comes next is the Diversification Task Force recommendations, which are expected early in 2021. This will feed into the policies and actions taken by the rest of government – potentially DCMS, although some have suggested aspects should reside with Ofcom, the security agencies or other departments.

As some external input, I thought I’d lay out some of my own preferences and principles, and what I’d like to see. (I may also submit more formal comments into the consultation process).

  • Clarity of purpose(s): There is a tendency in the report and parliamentary debate to conflate security, supply resilience, competition, innovation, export opportunity and other drivers for telecoms (de)regulation. All are valid concerns, and thus represent areas for government to become involved – but any individual recommendations or rules should break out the underlying purpose(s) clearly. Obviously, few politicians or media commentators are experts in the arcana of telecoms networks – so communications across Westminster and beyond need to be crisp, and misconceptions and misrepresentations pointed out swiftly. Soundbites and spin always get attention – but they must be rooted in technical reality rather than convenience and media-friendliness.
  • Technology neutrality: While there are specific concerns about 5G RAN as it’s a major current focus of investment – and because the intelligence/core functions are increasingly distributed – it’s far from the only important telecom technology, or the only one with a concentrated supplier base. 4G mobile, fibre and fixed-line broadband infrastructure, satellite and assorted other wireless technologies should also be considered as part of diversification. There’s no major UK Wi-Fi player, for instance, which ideally would be rectified. At a component level, we should rightly be considering semiconductors, but also many areas of cloud and software elements involved in ever-more-virtualised telecom networks as well.
  • Business model neutrality: This links to my recent post on interoperability. Governments shouldn’t mandate either proprietary or interoperable interfaces, or vertically-integrated or disaggregated solutions – as long as there’s enough competition. Openness is good – but both highest-performance and lowest-cost options may involve “black boxes”. Open RAN (which in any case needs more careful definitions, and comes in multiple variants) has huge promise, but shouldn’t be a political football either. We should be encouraging market forces to operate effectively on the demand side of telecoms networks. Choice is imperative. (You could say the same about net neutrality: if customers have a choice of 10+ ISPs, it doesn't really matter if one of them sells "Ain'ternet", as long as it's accurately marketed & distinguished from the real thing).
  • Realistic time horizons & paths: Regular readers of my posts may have noticed increasing mentions of “path dependence”. Timelines matter. If there’s an awkward 4-year gap between promise and reality for a given technology, for instance because of lengthy testing and commercialisation, that needs to be recognised upfront. We can’t leap straight to 6G, terabit FTTx or massive LEO satellite constellations, even if the UK might have an edge in specific components. The new rules need to reflect realistic time horizons – including buffers for delays. That’s especially relevant for things like Massive-MIMO 5G radios.
  • Removing obstacles: The UK’s telcos will continue to need large and medium-sized international vendors for the foreseeable future. Ericsson and Nokia will obviously remain central, and we should be looking to encourage Samsung, NEC and Fujitsu in 5G – as well as the continued roles for Mavenir, AirSpan, Parallel Wireless, Commscope, Cisco, Juniper, Microsoft and so on. We need to address why, for instance, Samsung is largely absent from UK MNOs’ networks, despite its profile in Korea and the US. If it is about the need for continued support of 2G/3G and other legacy systems (for instance to support eCall), then we should be considering creative solutions. I could even imagine a government-sponsored shared 2G network to support M2M and emergency calls, leaving MNOs to focus on 4G/5G differentiation (and reclaiming spectrum).
  • Global vision: While I can understand why government likes the idea of home-grown UK telecom startups thriving, this vision needs to be tempered with reality. It isn’t realistic to expect UK firms to tackle all aspects of network infrastructure at the scale and expertise needed by major telcos. This doesn’t just mean “heavy iron” macro 5G networks, but also future elements such as fibre transport or hyperscale cloud for next-generation platforms. There won’t be a UK (or European) equivalent to AWS or Azure any time soon, nor a Qualcomm equivalent. If domestic self-sufficiency and ownership was a desire, there would have been obvious questions about recent sales of ip.access and Metaswitch. The diversification review should address areas where the UK should expect to collaborate internationally – as well as its contribution to new standards, for instance on 6G development.
  • Supporting cast: For all the various reasons mentioned above – security, supply resilience, export opportunity and so forth – the “leading actors” of MNOs, semiconductor designers and network hardware/software vendors will need other sets of market players to evolve in tandem. Government is right to be creating testing labs, but should also look at training centres for engineers and installers, university courses, systems integrators, infrastructure financiers, insurance providers and many others. It doesn’t have to (and probably shouldn’t) fund all of these, but it can perhaps advocate for their growth, and help remove barriers if they exist. How many indoor mmWave 5G URLLC vertical specialist engineers - or OpenRAN Massive MIMO maintenance teams - are there in the UK? How can we multiply that by 100x?
  • Flexibility to respond to emergent events: Linked to path-dependence is the concept of protecting “optionality”. I can come up with a range of scenarios under which the world might evolve in surprising directions, both technologically and geopolitically. China might reach a different set of compromises with Joe Biden on network vendors, components and trade. Brexit and new UK trade deals may impact supply chains and telecoms demand in unexpected ways – positive or negative. New cybersecurity vulnerabilities might come to light – or new safeguards developed. Any new policies on diversification should aim to enable new vendors and standards, rather than add constraints such as mandating specific interfaces.
  • Industry verticals & new applications: The UK authorities, like others around the world, seem focused on Industry 4.0, automation, IoT and the potential benefits of greater network-intensity in many sectors. This filters through to the idea of private networks, cloud/edge computing and other adjacent domains. It may also feature high on the telecoms diversification agenda. My view is that this should revolve around a general principle of “advanced connectivity”, rather than specifically relating to 5G and its supply chain. Wi-Fi, fibre, LoRa, Bluetooth and even proprietary network solutions have equally-important roles to play, and as before, neutrality of policy is desirable. The government should consider technology substitution between options, as well as vendor choice within one technology.
  • Awareness of energy & CO2 implications: One of the trade-offs of “abstraction layers” and simplicity/flexibility can sometimes be increased power consumption. “Software-defined X” or “Adaptive Y” can involve lower efficiency than something optimised or hardware-based. The UK should be thinking about a future of networks where everything has a CO2 budget – perhaps with cascading carbon taxes built in. Rather than least-cost routing, we might find networks built around lowest-energy optimisation. I didn't see anything about energy or CO2 in the strategy document.

Overall, as a UK-telecom industry analyst and advisor, I see this as both worthwhile and exciting – and I’m keen to participate in one way or another when possible. I’m certainly intending to check up on how the ongoing pronouncements fit with the principles I’ve outlined here. (I'll also be pondering the international ramifications and linkages).

I think the existing Diversification Strategy makes some good points and has clearly taken inputs from numerous well-placed and knowledgeable sources. However, it’s a bit too focused on 5G, Open RAN and macro networks, rather than the broader realm of “Advanced Connectivity”. I'd like to see more technology neutrality and optionality across the board.

It also blends together multiple issues – cybersecurity, resilience, UK industrial policy, competition, technical philosophy and so on – when they sometimes only have tenuous or debatable links. Interoperability is used as a “glue” to stick together the separate parts. I’d rather see broad top-level goals such as “security” and “optionality” and separate self-consistent analysis for each purpose.

As always, I'll aim to respond to the comments and discussion as much as possible. And please get in touch via email or LinkedIn, if you'd like a deeper dive on any of these areas.

#5G #policy #DCMS #wireless #telecoms #regulation #openran #interoperability #wifi #fibre #broadband #IoT #neutralhost #6G


Wednesday, November 25, 2020

Interoperability is often good – but should not be mandated

Note: this post was first published via my LinkedIn Newsletter. Please subscribe (here) & also join the comment & discussion thread on LI

Context: I'm going to be spending more time on telecom/tech policy & geopolitics over the next few months, spanning UK, US, Europe & Global issues. I'll be sharing opinions & analysis on the politics of 5G & Wi-Fi, spectrum, broadband plans, supply-chain diversity & competition.

Recently, I've seen more calls for governments to demand mandatory interoperability between technology systems (or between vendors) as a regulatory tool. I think this would be a mistake - although incentivising interop can sometimes be a good move for various reasons. This is a fairly long post to explain my thinking, with particular reference to Open RAN and messaging.

Background & history

The telecoms industry has thrived on interoperability. Phone calls work from anywhere to anywhere, while handsets and other devices are tested & certified for proper functioning on standardised networks. Famously, interoperability between different “islands” of SMS led to the creation of a huge market for mobile data services, although that didn't happen overnight in many countries.

Much the same is true in the IT world as well, with everything from email standards to USB connections and Wi-Fi certification proving the point. The web and open APIs make it easier for cloud applications to work together harmoniously.

Image source: https://pixabay.com/illustrations/rings-wooden-rings-intertwined-100181/

But not everything valuable is interoperable - interoperability isn't the only approach. Proprietary and vertically-integrated solutions remain important too.

Many social media and communications applications have very limited touch-points with each other. The largest 4G/5G equipment companies don’t allow operator customers to mix-and-match components in their radio systems. Many IT systems remain closed, without public APIs. Consumers can’t choose to subscribe to network connectivity from MNO A, but telephony & SMS from ISP B, and exclusive content belonging to cable company C.

This isn't just a telecom or IT thing. It’s difficult to get different industrial automation systems to work together. An airline can’t buy an airframe from Boeing, but insist that it has avionics from Airbus. The same is true for cars' sub-systems and software.

Tight coupling or vertical integration between different subsystems can enable better overall efficiency, or more fluid consumer experience - but at the cost of creating "islands". Sometimes that's a problem, but sometimes it's actually an advantage.

Well-known examples of interoperability in a narrow market subset can obscure broader use of proprietary systems in a wider domain. Most voice-related applications, beyond traditional "phone calls", do not interoperate by default. You could probably connect a podcast platform to a karaoke app, home voice assistant and a critical-communications push-to-talk system.... but why would you? (This is one reason why I always take care to never treat "voice" and "telephony" synonymously).

Hybrid, competitive markets are optimal

So there is value in interoperable systems, and also in proprietary alternatives and niches. Some sectors gravitate towards openness, such as federation between different email systems. Others may create de-facto proprietary approaches - which might risk harmful monopolies, or which may be transferred to become open standards (for instance, Adobe's PDF document format).

And even if something is based on theoretically interoperable underpinnings, it might still not interoperate in practice. Most enterprise Private 4G and 5G networks are not connected to public mobile networks, even though they use the same standards.


Interoperability can be both a positive and negative for security. Open and published interfaces can be scrutinised for vulnerabilities, and third-parties can test anything that can be attached to something else. Yet closed systems have fewer entry points – the “attack surface” may be smaller. Having a private technology for a specific purpose – from a military communications infrastructure to a multiplayer gaming network – may make commercial or strategic sense.

In many areas of technology, we see a natural pendulum swing between openness and proprietary approaches. From open flexibility to closed-system optimisation, and back again. Often there are multiple layers of technology, where the pendulum swings with a different cadence for each. Software-isation of many hardware products means a given system might employ multiple layers at the same time.

Consider this (incomplete and sometimes overlapping) set of scenarios for interoperability:

  • Between products: A device needs to be able to connect to a network, using the right radio frequencies and protocols. Or an electrical plug needs to fit into a standardised socket.
  • Within products or solutions (between components): A product or service can be considered to be just a collection of sub-systems. A computer might be able to support different suppliers’ memory chips or disks, using the same sockets. A browser could support multiple ad-blockers. A telco’s virtualised network could support different vendors for certain functions.
  • Application-to-application / service-to-service: An application can link to, integrate or federate with another - for instance a reader could share this article on their Twitter feed, or mobile user can roam onto another network, or a bank can share data access with an accounting tool.
  • Data portability: Data formats can be common from one system to another, so users can own and move their "state" data and history. This could range from porting a phone number, to moving uploaded photos from one social platform to another.

There’s also a large and diverse industry dedicated to gluing together things which are not directly interoperable – and acting as important boundaries to enforce security, charging or other functions. Session Border Controllers link different voice systems, with transcoders to translate between different codecs. Gateways link Wi-Fi or Bluetooth IoT devices to fixed or wireless broadband backhaul. Connectors enable different software platforms to work together. Mapping functions will eventually allow 5G network slicing to work across core, transport and radio domains, abstracting the complexities at the boundaries.

Added to this is the entire sphere of systems integration – the practice of connecting disparate systems and components together, to create solutions. While interoperability helps SIs in some ways, it also commoditises some of their business.

Coexistence vs. interoperation

Yet another option for non-interoperable systems is rules for how they can coexist, without damaging each other’s operation. This is seen in unlicensed or shared wireless spectrum bands, to avoid “tragedies of the commons” where interference would jam all the disparate systems. Even licensed bands can be "technology neutral".

Analogous approaches enable the safe coexistence of different types of road users on the same highway - or in the voice/video arena, technologies such as WebRTC which embed "codec negotiation" procedures into the standards.
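Conceptually, that negotiation step is just a preference-ordered intersection of capabilities. A toy sketch, with codec names merely illustrative of typical WebRTC audio options:

```python
# Toy offer/answer codec negotiation, loosely in the spirit of SDP as used by
# WebRTC. Codec names are illustrative.
def negotiate(offered: list[str], supported: list[str]) -> str | None:
    """Return the offerer's most-preferred codec that the answerer supports."""
    for codec in offered:  # offerer's preference order wins
        if codec in supported:
            return codec
    return None  # no overlap: coexistence fails, a transcoding gateway is needed

print(negotiate(["opus", "G722", "PCMU"], ["PCMU", "opus"]))  # -> opus
```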

Arguably, improving software techniques, automation, containerisation and AI will make such interworking and coexistence approaches even easier in future. Such kludginess might not please engineering purists who value “elegance”, but that’s not the way the world works – and certainly shouldn’t be how it’s regulated.

In a healthy and competitive market, customers should be able to choose between open and closed options, understanding the various trade-offs involved, yet be protected from abusive anti-competitive power.

A great example of consumer gains and "generativity" in innovation is that of the Internet itself, which works alongside walled-garden, telco or private-network alternatives to access content and applications.

Customers can have the best of both worlds - accelerated, because of the competitive tensions involved. The only risk is that of monopolies or oligopolies, which requires oversight.

Where does government & regulatory policy fit in this?

This highlights an important and central point: the role of government, and its attitude to technology standards, interoperability and openness. This topic is exemplified by various recent initiatives, ranging from enthusiasm around Open RAN for 5G in the US, UK and elsewhere, to the EU’s growing attempts to force Internet platform businesses to interoperate and enable portability of data or content, as part of its Digital Services Act.

My view is that governments should, in general, let technology markets, vendors and suppliers make their own choices.

It is reasonable that governments often want to frame regulation in ways to protect citizens from monopolists, or risks of harm such as cybersecurity. In general, competition rules are developed across industries, without specific rules about products, unless there is unfair vertical integration and cross-subsidy.

Governments can certainly choose to adopt or even incentivise interoperability for various reasons – but they should not enshrine it in laws as mandatory. If you're a believer in interventionist policies, then incentivising market changes that favour national champions, foster inward investment and increase opportunities can make sense - although others will clearly differ.

(Personally, I think major tranches of intervention and state-aid should only apply to game-changers with huge investment needs - so perhaps for carbon capture technology, or hydrogen-powered aviation).

Open RAN may be incentivised, not mandated

A particular area of focus by many in telecoms is around open radio networks. The O-RAN Alliance and the TIP OpenRAN project are at the forefront, with many genuinely impressive innovations and evolutions occurring. Rakuten's deployment is proving to be a beacon - at least for greenfield networks - while others such as Vodafone are using this architectural philosophy for rural coverage improvements.

Governments are increasingly involved as well - seeing a possible way to meet voters' desires for better/cheaper coverage, while also offsetting perceived risks from concentrations of power in a few large integrated vendors. This latter issue has been pushed further into the limelight by Huawei's fall from favour in a number of countries, which then see a challenge from a smaller number of alternative providers - Nokia, Ericsson and in some cases Samsung and NEC or niche providers.

This combination of factors then gets further conflated with industrial policy goals. For instance, if a country is good at creating software but not at manufacturing radios, then Open RAN is an opportunity that might merit some form of R&D stimulus, government-funded testbeds and so on.

So I can see some arguments for incentives - but I would be very wary of a step to enshrine any specific interop requirements into law (or rules for licenses), or for large-scale subsidies or plans for government-run national infrastructure. The world has largely moved to "tech neutral" approaches in areas such as spectrum awards. In the past, governments would mandate certain technologies for certain bands - but that is now generally frowned upon.

No, message apps should not interoperate

Another classic example of undesirable "forced interoperability" is in messaging applications. I've often heard many in the telecoms industry assert that it would be much better if WhatsApp, iMessage, Telegram, Snap - and of course the mobile industry's own useless RCS standard - could interconnect. Recently, some government and lobbying groups have suggested much the same, especially in Brussels.

Yet this would instantly hobble the best and most unique features of each - how would ephemeral (disappearing) messages work on systems that keep them stored perpetually? How would an encrypted platform interoperate with a non-encrypted platform? How could an invite/accept contact system interwork with a permissive any-to-any platform? How would a phone-number identity system work with a screen-name one?

... and that's before the real unintended consequences kick in, when people realise that their LinkedIn messages now interoperate with Tinder, corporate Slack and telemedicine messaging functions.
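A toy model makes the underlying problem obvious: a bridge can only carry the features both platforms share, so everything distinctive gets stripped out. The names and feature sets below are invented purely for illustration:

```python
# Toy model of the lowest-common-denominator problem: a bridge between two
# messaging platforms can only carry the intersection of their feature sets.
# Platform names and feature lists are invented for illustration.
PLATFORMS = {
    "ephemeral_app": {"text", "disappearing_messages", "e2e_encryption"},
    "archive_app":   {"text", "searchable_history", "read_receipts"},
}

def bridged_features(a: str, b: str) -> set[str]:
    return PLATFORMS[a] & PLATFORMS[b]

print(bridged_features("ephemeral_app", "archive_app"))
# {'text'} - everything distinctive about each platform is lost in transit
```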

That doesn't mean there's never a reason to interoperate between message systems. In particular, if there's an acquisition it can be useful and important - imagine if Zoom and Slack merged, for instance. Or a gaming platform's messaging might want users to send invitations on social media. I could see some circumstances (for business) where it might be helpful to link Twitter and LinkedIn - but also others where it would be a disaster (I'm looking at you, Sales Navigator spamming tools).

So again - interoperability should be an option. Not a default. And in this case, I see zero reasons for governments to incentivise.

Conclusion

Interoperability between technology solutions or sub-systems should be possible - but it should not be assumed as a default, nor legislated in areas with high levels of innovation. It risks creating lowest-common denominators which do not align with users' needs or behaviours. Vertical integration often brings benefits, and as long as the upsides and downsides are transparent, users can make informed trade-offs and choices.

Lock-in effects can occur in both interoperable and proprietary systems. I'll be writing more about the concept of path dependence in future.

Regulating or mandating interoperability risks various harms - not just a reduction in innovation and differentiation, but also unexpected and unintended consequences. Many cite the European standardisation of GSM 2G/3G mobile networks as a triumph - yet the US, Korea, Japan, China and others allowed a mix of GSM, CDMA and local oddities such as iDen, WiBro and PHS. No prizes for guessing which parts of the world now lead in 5G, although correlation doesn't necessarily imply causation here.

There's also a big risk from setting precedents that could lead to unintended consequences. Perhaps car manufacturers would be next in line to be forced to have open interfaces for all the electronic systems, impacting many automakers' potential revenues. Politicians need to think more broadly. As a general rule, if someone uses the obsolete term "digital" in the context of interop, they're not thinking much at all.

I've written before about the possible risks to telcos from the very "platform neutrality" concept that many have campaigned for. Do they imagine regulators wouldn't notice that many have their own ambitions to be platform providers too?

In my view, an ideal market is made up of a competitive mix of interoperable and proprietary options. As long as abuses are policed effectively, customers should be able to make their own trade-offs - and their own mistakes.



As always - please comment and discuss this. I'll participate in the discussions as far as possible. If you've found this thought-provoking, please like and share on LinkedIn, Twitter and beyond. And get in touch if I can help you with internal advisory work, or external communications or speaking / keynote needs.

Note: this post was first published via my LinkedIn Newsletter. Please subscribe (here) & also join the comment & discussion thread on LI

#5G #openran #regulation #telecom #mobile #interoperability #competition #messaging #voice #innovation


Thursday, October 08, 2020

Platform regulation? Are you *sure*?

There's currently a lot of focus on regulation of technology platforms, because of concerns over monopoly power or privacy/data violations.

It's a central focus of the Digital Services Act proposed by the European Commission

It's under scrutiny as part of the US Congress House Judiciary Committee report on antitrust

Other governments also focus on "platforms", especially Amazon, Facebook, Google, Apple and a few others.

Typically, traditional telcos cheer on these moves against companies they (still!) wrongly refer to as "OTTs".

Yet there's a paradox here. While there are indeed concerns about big-tech monopoly abuse that must be addressed by regulators... they're not the only platforms that could be captured by the law.

I've lost count of the times I've heard "the network as a platform" or "5G is a platform", with QoS, network slicing etc often hyped as the basis for the future economy.

Yet telcos can have as much lock-in as Apple or Amazon. I can't get an EE phone service on my Vodafone mobile connection. I can't port-out my call detail records & online behaviour to a new operator. There's no "smart home portability law" if I sign up to my broadband provider's service. Or slice portability laws for enterprises.
 
On my LinkedIn version of this post [link], a GSMA strategist commented that unbundling some telco services "does not solve a customer pain point". Yet unbundling *does* often enable greater competition, innovation & lower consumer prices. You only have to look at the total lack of innovation in MNO/3GPP telephony & messaging services in the last 20 years to see the negative effects of lock-in & too-tight integration here. (VoLTE is not innovative, RCS is regressionary). 
 
Even more awkwardly, most of the mobile industry is currently using the exact same arguments in its push to get vendors to disaggregate the RAN.
 
Want 5G to be a platform? You'll be subject to the rules too. Be careful what you wish for... 
 
(By the way, I first wrote about this issue 6 years ago. The arguments haven't changed much at all since then: https://disruptivewireless.blogspot.com/2014/07/so-called-platform-neutrality-nothing.html )
 

Wednesday, September 30, 2020

Rakuten 5G launch - quick takes

A quick post, copied from my LinkedIn (link) which is probably where comment / discussion will flow:

I just watched the Rakuten Mobile, Inc. #5G press conference.

Quick takeouts (+see Twitter thread link in comments):

- Rakuten is following Jio in undercutting incumbent MNOs with a greenfield / low-cost infrastructure & lightweight organisation
- Simple consumer-centric plan called Un-Limit V (ie V=5) with some of its own phones. It reckons it's 70% cheaper than rivals
- Big pitch for cloud + #OpenRAN
- Doing sub-GHz with NEC + Intel , plus Qualcomm for #mmWave radios
- Initial 870Mbps, upgraded to 2Gbps in a few months
- Unclear on NSA vs. SA support for new phones & network
- No mention of enterprise, verticals, Industry 4.0 etc. All about entertainment & "experience", with XR, gaming & streaming. Maybe enterprise is via APIs
- New "Big" 5G phone available from today
- I'll politely ignore the RCS-based communicator app

If I was a legacy MNO elsewhere in the world, I'd be nervously looking at my strategy team (& advisors) right now:
- Is enterprise really the key to #5G ?
- Will consolidation 4>3 or 3>2 MNOs just allow in a new greenfield entrant in our market?
- How fast can we reduce our legacy cost base?
- Is our government watching this as well?
- What happens when Rakuten pitches its platform internationally? Could *it* directly enter our market?


See also my Twitter thread with more screenshots & comment: https://twitter.com/disruptivedean/status/1311184039274074112?s=20

Monday, September 28, 2020

Verticals 5G: It's more than just MNOs vs. Private Networks, there's a whole new universe of other service providers too

For the last few years, I've written and spoken extensively about 4G or 5G cellular networks optimised for enterprises, whether that's for a factory, a port, an electricity grid - or even just a medium-sized office building. Recent trends confirm the acceleration of this model.

  • CBRS in the US is growing rapidly, including for local and industrial/utility uses
  • Localised 4G/5G spectrum is now available in UK, Germany, Netherlands, France, Japan and elsewhere, with many new countries examining the options
  • Many campus/dedicated network strategies by traditional mobile operators (MNOs)
  • Assorted testbeds and trials sponsored by governments, groups like 5G ACIA etc.
  • Growing intersections with Open RAN and neutral host models

An inflection point has now been reached.

Enterprise/local cellular is happening, finally

It's been a long time coming. In fact, I've been following the broad concept of enterprise cellular since about 2001, when I first met with a small-cell vendor called ip.access. Around 2005-2009 there was a lot of excitement about local 2G/3G networks, with the UK and Netherlands releasing thin slices of suitable spectrum. A number of organisations deployed networks, although it never hit the mass market, for various reasons.

Now, however, private 4G and 5G is becoming "real". There's a critical mass of enterprises that are seriously interested, as this intersects with ongoing trends around IoT deployment, workforce automation, smart factory / city / building / etc concepts, and the availability of localised spectrum and cloud-based elements like network cores. It's still not easy, but the ingredients are much more accessible and easier to "cook".

A binary choice of MNOs vs enterprise?

But throughout this whole story we've had an underlying narrative of a two-way choice:

  • Enterprises can obtain private / on-premise cellular networks from major MNOs as a service, perhaps with dedicated coverage plus a "slice" of the main macro network and core functions.
  • Enterprises can build their own cellular networks, in the same way they build Wi-Fi or wired ethernet LANs today, or operate their wider private mobile radio (PMR) system.

This is a "false binary" - a fallacy that there are only two options. Black & white. Night & day.

In reality, there's a whole host of shades of grey - or, to pick a better analogy, multi-coloured dawns and sunsets.

Not just MNOs

There is a lengthening cast-list of other types of service provider that can build, run and sell 4G and 5G networks to enterprises or "verticals" (the quaint & rather parochial term that classical telcos use to describe the other 97% of the economy).

An incomplete list of non-traditional MNOs targeting private mobile networks includes:

  • Fixed and cable operators, especially those which have traditionally had large enterprise customer bases for broadband, VPNs, PBXs / UC, managed Wi-Fi etc.
  • MVNOs wanting to deploy some of their own radio infrastructure to "offload" traffic from their usual host provider in select locations.
  • TowerCos moving up the value chain into private or neutral networks (for instance, Cellnex and Digital Colony / Freshwave)
  • IT services firms affiliated to specific enterprises (for example, HubOne, the IT subsidiary of the company running Paris's airports)
  • Industrial automation suppliers acting as "industrial mobile operators" on behalf of customers (maybe a robot or crane supplier running/owning a local 5G network for a manufacturer or port, as an integral part of their systems)
  • Utility companies running private 4G/5G and providing critical communications to other utilities and sectors (for instance Southern Linc in the US), or perhaps acting as a neutral host, such as a client in Asia that I've advised.
  • Dedicated MNOs for particular industries, such as oil & gas, often in specific regions
  • Municipalities and local authorities deploying networks for internal use, citizen services or as public neutral-host networks for MNOs. The Liverpool 5G testbed in the UK is a good example, while Sunderland's authority is looking at becoming an NHN.
  • Railway companies either for neutral-host along tracks, or acting as FWA service providers in their own right, to nearby homes and businesses.
  • Specialist IoT connectivity providers, perhaps focusing on LPWAN connectivity, such as Puloli in the US.
  • FWA / WISP networks shifting to 4G/5G and targeting enterprises (eg for agricultural IoT)
  • Overseas MNOs without national spectrum in a market, but which want to service multinational enterprise clients' sites and offices. Verizon is looking at private cellular in the UK, for instance - and it wouldn't surprise me if Rakuten expands its footprint outside Japan.
  • Property and construction companies, especially for major regeneration districts or whole new smart-city developments.
  • UC/UCaaS and related voice & communications-centric enterprise SPs, such as Tango Networks with CBRS
  • Universities creating campus networks for students, or other education/research organisations servicing students, staff and visitors
  • Major cloud providers creating 4G / 5G networks for a variety of use-cases and enterprise groups - Amazon and Google are both closely involved (if opaquely, beyond Google's SAS business), while Microsoft's acquisition of Metaswitch points to cloud-delivered private 5G, albeit perhaps without managing spectrum and the RAN itself.
  • Tourism and hospitality service providers offering connectivity solutions to hotels or resorts - although that's probably taking a backseat given economic & pandemic woes.
  • Broadcasters, event-management and content-production companies deploying private networks on behalf of sports and entertainment venues and festivals.
  • Dozens more options - I'm aware of numerous additional categories and more will inevitably emerge in coming years. Ask me for details.

Conclusion: beyond the MNO/Enterprise binary fallacy

You get the picture. The future of 4G / 5G isn't just going to split between traditional "public mobile operators" (typically the GSMA membership) vs. individual enterprises creating DIY networks. There will be an entire new universe of SPs of many different types.

You can call them "new telcos", "Specialist Wireless SPs", "Alternative Mobile Operators" or create assorted other categories. Many will be multi-site operators. Some may be regional or national.

We will see MNOs set up divisions that look like these new SP types, or perhaps acquire them. Some vendors will become quasi-SPs for enterprise, too. This is a hugely dynamic area, and trying to create fixed buckets and segments is a fool's errand.


Understanding this new and heterogeneous landscape is critical for enterprises, policymakers, vendors and investors - as well as traditional MNOs. I've been saying for years that "telecoms is too important to be left to the telcos", and it appears to be coming true at a rapid pace.

Many in the mobile industry assert that 5G will transform industries. In many cases it will... but the first industry to get transformed is the mobile industry itself.

This newsletter & my services

Thanks for reading this article. If you haven't subscribed to my LinkedIn Newsletter updates, please look for the "subscribe" button here. If it has resonated, please like this post and share it with others, either on LinkedIn or on other channels.

If you have a relevant interest in this and related topics around the future of telecoms and technology, please connect with me. (But no spammers and "lead generation" people, please).

I do advisory projects, strategy workshops and brainstorms, or real/virtual speaking engagements on the 5G, spectrum, private network and broader "telecom futurism" space. Drop me a message about how I can help you.

Tuesday, September 15, 2020

Low-latency and 5G URLLC - A naked emperor?

Originally published as a LinkedIn Newsletter Article - see here

I think the low-latency 5G Emperor is almost naked. Not completely starkers, but certainly wearing some unflattering Speedos.

Much of the promise around 5G – and especially the “ultra-reliable low-latency” URLLC versions of the technology – centres on minimising network round-trip times, for demanding applications and new classes of device.

Edge-computing architectures like MEC also often focus on latency as a key reason for adopting regional computing facilities - or even servers at the cell-tower. Similar justifications are being made for LEO satellite constellations.

The famous goal of 1 millisecond latency is often mentioned, usually in the context of applications like autonomous vehicles with snappy responses, AR/VR headsets without nausea, cloud-gaming, the “tactile Internet” and remote drone/robot control.

(In theory this is the end-to-end "user plane latency" between the user and server, so it includes both the "over the air" radio and the backhaul / core network parts of the system. It is also different from a "roundtrip", which is the there-and-back time.)
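
To make that distinction concrete, here's a minimal back-of-envelope sketch in Python. Every per-segment figure below is my own illustrative assumption, not a measurement from any real network - the point is simply that even modest delays in each segment quickly consume a 1ms budget.

```python
# Back-of-envelope one-way "user plane" latency budget.
# All figures are illustrative assumptions, not measurements.

budget_ms = {
    "radio (air interface + scheduling)": 4.0,  # assumed mid-band 5G
    "backhaul (fibre to aggregation)":    2.0,  # assumed
    "core / transport network":           3.0,  # assumed
    "server / application processing":    1.0,  # assumed
}

one_way_ms = sum(budget_ms.values())
round_trip_ms = 2 * one_way_ms  # the "there-and-back" figure

for part, ms in budget_ms.items():
    print(f"{part:38s} {ms:4.1f} ms")
print(f"{'one-way total':38s} {one_way_ms:4.1f} ms")
print(f"{'round trip':38s} {round_trip_ms:4.1f} ms")
```

Even with these fairly benign assumptions, the one-way total comes to 10ms - an order of magnitude above the headline 1ms goal.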

Usually, that 1ms objective is accompanied by some irrelevant and inaccurate mention of 20 or 50 billion connected devices by [date X], and perhaps some spurious calculation of trillions of dollars of (claimed) IoT-enabled value. Gaming usually gets a mention too.

I think there are two main problems here:

  • Supply: It’s not clear that most 5G networks and edge-compute will be able to deliver 1ms – or even 10ms – especially over wide areas, or for high-throughput data.
  • Demand: It’s also not clear there’s huge value & demand for 1ms latency, even where it can be delivered. In particular, it’s not obvious that URLLC applications and services can “move the needle” for public MNOs’ revenues.

Supply

Delivering URLLC requires more than just “network slicing” and a programmable core network with a “slicing function”, plus a nearby edge compute node for application-hosting and data processing, whether that sits in the 5G network (MEC or AWS Wavelength) or in some sort of local cloud node like AWS Outposts. That low-latency slice needs to span the core, the transport network and, critically, the radio.

Most people I speak to in the industry look through the lens of core network slicing or the edge – and perhaps the IT systems supporting the 5G infrastructure. There is also sometimes more focus on the UR part than the LL part – and the two actually have different enablers.

Unfortunately, it looks to me as though the core/edge is writing low-latency checks that the radio can’t necessarily cash.

Without going into the abstruse nature of radio channels and frame-structure, it’s enough to note that ultra-low latency means the radio can’t wait to bundle a lot of incoming data into a packet, and then get involved in to-and-fro negotiations with the scheduling system over when to send it.

Instead, it needs to have specific (and ideally short) timed slots in which to transmit/receive low-latency data. This means that it either needs to have lots of capacity reserved as overhead, or the scheduler has to de-prioritise “ordinary” traffic to give “pre-emption” rights to the URLLC loads. Look for terms like Transmission Time Interval (TTI) and grant-free UL transmission to drill into this in more detail.
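
As a rough illustration of that pre-emption trade-off - and emphatically not real 3GPP scheduler logic - here's a toy slot-based model in Python, where hypothetical URLLC bursts "puncture" part of each TTI and ordinary eMBB traffic gets whatever capacity is left:

```python
# Toy slot-based scheduler: hypothetical URLLC bursts "puncture" part
# of each TTI, and ordinary eMBB traffic gets whatever capacity is left.
# Purely illustrative - real 3GPP scheduling is far more sophisticated.

import random

TTI_MS = 1.0        # assumed slot duration
SLOTS = 20          # number of slots to simulate
URLLC_PROB = 0.3    # assumed chance of a URLLC burst in any slot
URLLC_SHARE = 0.4   # assumed fraction of the slot a burst pre-empts

random.seed(42)
embb_served = 0.0
for slot in range(SLOTS):
    urllc = random.random() < URLLC_PROB
    # Pre-emption: eMBB is de-prioritised whenever URLLC data arrives
    embb_share = 1.0 - (URLLC_SHARE if urllc else 0.0)
    embb_served += embb_share
    print(f"slot {slot:2d}: URLLC={'yes' if urllc else 'no '} "
          f"eMBB gets {embb_share:.0%} of the slot")

print(f"\neMBB throughput vs. a URLLC-free network: {embb_served / SLOTS:.0%}")
```

With these assumed numbers, ordinary users lose around 12% of their throughput on average (0.3 × 0.4) - which is exactly the coexistence and pricing question raised below.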

It’s far from clear that on busy networks, with lots of smartphone or “ordinary” 5G traffic, there can always be a comfortable coexistence of MBB data and more-demanding URLLC. If one user gets their 1ms latency, is it worth disrupting 10 – or 100 – users using their normal applications? That will depend on pricing, as well as other factors.

This gets even harder where the spectrum used is a TDD (time-division duplexing) band, where there’s also another timeslot allocation used for separating up- and down-stream data. It’s a bit easier in FDD (frequency-division) bands, where up- and down-link traffic each gets a dedicated chunk of spectrum, rather than sharing it.
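
To illustrate the TDD penalty, here's a trivial worst-case alignment calculation. The 0.5ms slot and the downlink-heavy 4:1 pattern are assumptions chosen for simplicity, not any specific band configuration:

```python
# Worst-case wait for an uplink opportunity in a TDD frame structure.
# Slot length and DL/UL pattern are illustrative assumptions.

SLOT_MS = 0.5                        # e.g. a 30 kHz subcarrier-spacing slot
PATTERN = ["D", "D", "D", "D", "U"]  # assumed downlink-heavy 4:1 pattern

# Uplink data arriving just after the "U" slot has passed must wait for
# the whole run of downlink slots before it can be transmitted.
worst_wait_ms = SLOT_MS * PATTERN.count("D")
print(f"Worst-case UL alignment delay: {worst_wait_ms} ms")
```

That's 2ms of waiting before transmission even begins. FDD avoids this particular penalty because the uplink has its own dedicated spectrum, always available.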

There’s another radio problem here as well – spectrum license terms, especially where bands are shared in some fashion with other technologies and users. For instance, the main “pioneer” band for 5G in much of the world is 3.4-3.8GHz (which is TDD). But current rules – in Europe, and perhaps elsewhere - essentially prohibit the types of frame-structure that would enable URLLC services in that band. We might get to 20ms, or maybe even 10-15ms if everything else stacks up. But 1ms is off the table, unless the regulations change. And of course, by that time the band will be full of smartphone users using lots of ordinary traffic. There maybe some Net Neutrality issues around slicing, too.

There's a lot of good discussion - some very technical - on this recent post and comment thread of mine: https://www.linkedin.com/posts/deanbubley_5g-urllc-activity-6711235588730703872-1BVn

Various mmWave bands, however, have enough capacity to cope with URLLC more readily. But as we already know, mmWave cells also have very short range – perhaps just 200 metres or so. We can forget about nationwide – or even full citywide – coverage. And outdoor-to-indoor coverage won’t work either. And if an indoor network is deployed by a third party, such as a neutral host or roaming partner, it's far from clear that URLLC can work across the boundary.

Sub-1GHz bands, such as 700MHz in Europe, or perhaps refarmed 3G/4G FDD bands such as 1.8GHz, might support URLLC and have decent range/indoor reach. But they’ll have limited capacity, so again coexistence with MBB could be a problem, as MNOs will also want their normal mobile service to work (at scale) indoors and in rural areas too.

What this means is that we will probably get (for the foreseeable future):

  • Moderately Low Latency on wide-area public 5G Networks (perhaps 10-20ms), although where network coverage forces a drop back to 4G, then 30-50ms.
  • Ultra* Low Latency on localised private/enterprise 5G Networks and certain public hotspots (perhaps 5-10ms in 2021-22, then eventually 1-3ms maybe around 2023-24, with Release 17, which also supports deterministic "Time Sensitive Networking" in devices)
  • A promised 2ms on Wi-Fi 6E, when it gets access to big chunks of 6GHz spectrum

This really isn't ideal for all the sci-fi low-latency scenarios I hear around drones, AR games, or the cliched surgeon performing a remote operation while lying on a beach. (There's that Speedo reference, again).

* see the demand section below on whether 1-10ms is really "ultra-low" or just "very low" latency

Demand

Almost 3 years ago, I wrote an earlier article on latency (link), some of which I'll repeat here. The bottom line is that it's not clear that there's a huge range of applications and IoT devices that URLLC will help, and where they do exist they're usually very localised and more likely to use private networks rather than public.

One paragraph I wrote stands out:

I have not seen any analysis that tries to divide the billions of devices, or trillions of dollars, into different cohorts of time-sensitivity. Given the assumptions underpinning a lot of 5G business cases, I’d suggest that this type of work is crucial. Some of these use-cases are slow enough that sending data by 2G is fine (or by mail, in some cases!). Others are so fast they’ll need fibre – or compute capability located locally on-device, or even on-chip, rather than in the cloud, even if it’s an “edge” node.

I still haven't seen any examples of that analysis. So I've tried to do a first pass myself, albeit using subjective judgement rather than hard data*. I've put together what I believe is the first attempted "heatmap" for latency value. It includes both general cloud-compute and IoT, both of which are targeted by 5G and various forms of edge compute. (*get in touch if you'd like to commission me to do a formal project on this)

A lot of the IoT examples I hear about are either long time-series collections of sensor data (for asset performance-management and predictive maintenance), or have fairly loose timing constraints. A farm’s moisture sensors and irrigation pumps don’t need millisecond response times. Conversely, a chemical plant may need to measure and alter pressures or flows in microseconds.

I've looked at time-ranges for latency from microseconds to days, spanning 12 orders of magnitude (see the later section for more examples). As I discuss below, not everything hinges on the most-mentioned 1-100 millisecond range, or the 3-30ms subset of it that 5G addresses.

I've then compared those latency "buckets" with distances from 1m to 1000km - 7 orders of magnitude. I could have gone out to geostationary satellites, and down to chip scales, but I'll leave that exercise to the reader.
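
One useful sanity check on that distance axis is the physics floor: light in fibre covers roughly 200km per millisecond (about two-thirds of c), so beyond a certain distance no amount of network cleverness can hit a given latency target. A quick illustrative sketch:

```python
# Minimum round-trip time from propagation alone, ignoring radio,
# queuing and processing delays. ~200 km/ms is light in fibre (~2/3 c).

FIBRE_KM_PER_MS = 200.0
DISTANCES_KM = [0.001, 0.01, 0.1, 1, 10, 100, 1000]  # 1 m to 1000 km

for d in DISTANCES_KM:
    rtt_ms = 2 * d / FIBRE_KM_PER_MS  # there-and-back propagation only
    print(f"{d:>8} km: minimum RTT {rtt_ms:.5f} ms")

# A server 1000 km away can never deliver a 1 ms round trip:
# propagation alone is already 10 ms.
```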

[Chart: latency-value heatmap - time-sensitivity buckets vs. distance]

The question for me is - are the three or four "battleground" blocks really that valuable? Is the two-dimensional Goldilocks zone of not-too-distant / not-too-close and not-too-short / not-too-long really that much of a big deal?

And that's without considering the third dimension of throughput rate. It's one thing having a low-latency "stop the robot now!" message, but quite another doing hyper-realistic AR video for a remote-controlled drone or a long session of "tactile Internet" haptics for a game, played indoors at the edge of a cell.

If you take all those $trillions that people seem to believe are 5G-addressable, what % lies in those areas of the chart? What are the sensitivities to coverage and pricing, and what substitution risks apply - especially private networks, rather than MNO-delivered "slices" that don't even exist yet?

Examples

Here are some more examples of timing needs for a selection of applications and devices. Yes, we can argue some of them, but that's not the point - it's that this supposed magic range of 1-100 milliseconds is not obviously the source of most "industry transformation" or consumer 5G value:

  • Sensors on an elevator's doors may send sporadic data, to predict slowly-worsening mechanical problems – so an engineer might be sent a month before the normal maintenance visit. Similarly, readings from sensors monitoring a building's structural condition, vegetation cover in the Amazon, or oceanic acidity aren't going to shift much month-by-month.
  • A car might download new engine-management software once a week, and upload traffic observations and engine-performance data once a day (maybe waiting to do it over Wi-Fi, in the owner's garage, as it's not time-critical).
  • A large oil storage tank, or a water well, might have a depth-gauge giving readings once an hour.
  • A temperature sensor and thermostat in an elderly person’s home, to manage health and welfare, might track readings and respond with control messages every 10 minutes. Room temperatures change only slowly.
  • A shared bicycle might report its position every minute – and unlock in under 10 seconds when the user buys access with their smartphone app
  • A payment or security-access tag should check identity and open a door, or confirm a transaction, in a second or two.
  • Voice communication seems laggy with anything longer than about 200 milliseconds of latency.
  • A networked video-surveillance system may need to send a facial image, and get a response in 100ms, before the person of interest moves out of camera-shot.
  • An online video-game ISP connection will be considered “low ping” at maybe 50ms latency.
  • A doctor’s endoscope or microsurgery tool might need to respond to controls (and send haptic feedback) 100 times a second – ie every 10ms
  • Teleprotection systems for high-voltage utility grids can demand 6-10ms latency times
  • A rapidly-moving drone may need to react in 2-3 milliseconds to a control signal, or a locally-recognised risk.
  • A sensitive industrial process-control system may need to be able to respond in 10s or 100s of microseconds to avoid damage to finely-calibrated machinery
  • Image sensors and various network sync mechanisms may require response times measured in nanoseconds
  • Photon sensors for various scientific uses may operate at picosecond durations
  • Ultra-fast laser pulses for machining glass or polymers can be measured in femtoseconds

Conclusion

Latency is important, for application developers, enterprises and many classes of IoT device and solution. But we have been spectacularly vague at defining what "low-latency" actually means, and where it's needed.

A lot of what gets discussed in 5G and edge-computing conferences, webinars and marketing documents is either hyped, or is likely to remain undeliverable. A lot of the use-cases can be adequately serviced with 4G mobile, Wi-Fi - or a person on a bicycle delivering a USB memory stick.

What is likely is that average latencies will fall with 5G. An app developer that currently expects a 30-70ms latency on 4G (or probably lower on Wi-Fi) will gradually adapt to 20-40ms on mostly-5G networks and eventually 10-30ms. If it's a smartphone app, they likely won't use URLLC anyway.

Specialised IoT developers in industrial settings will work with specialist providers (maybe MNOs, maybe fully-private networks and automation/integration firms) to hit more challenging targets, where ROI or safety constraints justify the cost. They may get to 1-3ms at some point in the medium term, but it's far from clear they will be contributing massively to MNOs or edge-providers' bottom lines.

As for wide-area URLLC? Haptic gaming from the sofa on 5G, at the edge of the cell? Remote-controlled drones with UHD cameras? Two cars approaching each other on a hill-crest on a country road? That's going to be a challenge for both demand and supply.