
Tuesday, September 15, 2020

Low-latency and 5G URLLC - A naked emperor?

Originally published as a LinkedIn Newsletter Article - see here

I think the low-latency 5G Emperor is almost naked. Not completely starkers, but certainly wearing some unflattering Speedos.

Much of the promise around 5G – and especially the “ultra-reliable low-latency” URLLC versions of the technology – centres on minimising network round-trip times, for demanding applications and new classes of device.


Edge-computing architectures like MEC also often focus on latency as a key reason for adopting regional computing facilities - or even servers at the cell-tower. Similar justifications are being made for LEO satellite constellations.

The famous goal of 1 millisecond latency is often mentioned, usually in the context of applications like autonomous vehicles with snappy responses, AR/VR headsets without nausea, cloud gaming, the “tactile Internet” and remote drone/robot control.

(In theory this is the end-to-end "user plane latency" between the user and server, so it includes both the "over the air" radio leg and the backhaul / core network parts of the system. This is also different from a "round-trip", which is the there-and-back time).
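As a rough illustration of how those components stack up, here is a minimal sketch. All the figures are hypothetical, chosen only to show how quickly the segments of a one-way user-plane path consume a 1ms budget; they are not measured values for any real network.

```python
# Illustrative one-way user-plane latency budget (all figures hypothetical).
# End-to-end latency = radio ("over the air") + backhaul + core + server turnaround.

def one_way_latency_ms(radio_ms, backhaul_ms, core_ms, server_ms):
    """Sum the segments of a one-way user-plane path, in milliseconds."""
    return radio_ms + backhaul_ms + core_ms + server_ms

def round_trip_ms(one_way_ms):
    """A round trip is there-and-back: roughly double the one-way figure."""
    return 2 * one_way_ms

# A 4G-ish budget vs a URLLC-style target (hypothetical numbers):
lte = one_way_latency_ms(radio_ms=15, backhaul_ms=5, core_ms=5, server_ms=10)
urllc = one_way_latency_ms(radio_ms=0.5, backhaul_ms=0.25, core_ms=0.125, server_ms=0.125)

print(f"4G-ish one-way: {lte} ms, round trip: {round_trip_ms(lte)} ms")
print(f"URLLC-ish one-way: {urllc} ms, round trip: {round_trip_ms(urllc)} ms")
```

The point of the arithmetic: hitting 1ms one-way leaves well under a millisecond for the radio alone, which is why the frame-structure and scheduling issues discussed below matter so much.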

Usually, that 1ms objective is accompanied by some irrelevant and inaccurate mention of 20 or 50 billion connected devices by [date X], and perhaps some spurious calculation of trillions of dollars of (claimed) IoT-enabled value. Gaming usually gets a mention too.

I think there are two main problems here:

  • Supply: It’s not clear that most 5G networks and edge-compute will be able to deliver 1ms – or even 10ms – especially over wide areas, or for high-throughput data.
  • Demand: It’s also not clear there’s huge value & demand for 1ms latency, even where it can be delivered. In particular, it’s not obvious that URLLC applications and services can “move the needle” for public MNOs’ revenues.


Delivering URLLC requires more than just “network slicing” and a programmable core network with a “slicing function”, plus a nearby edge-compute node for application hosting and data processing – whether that is in the 5G network (MEC or AWS Wavelength) or some sort of local cloud node like AWS Outposts. That low-latency slice needs to span the core, the transport network and, critically, the radio.

Most people I speak to in the industry look through the lens of core network slicing or the edge – and perhaps the IT systems supporting the 5G infrastructure. There is also often more focus on the UR part than the LL part, even though the two actually have different enablers.

Unfortunately, it looks to me as though the core/edge is writing low-latency checks that the radio can’t necessarily cash.

Without going into the abstruse nature of radio channels and frame-structure, it’s enough to note that ultra-low latency means the radio can’t wait to bundle a lot of incoming data into a packet, and then get involved in to-and-fro negotiations with the scheduling system over when to send it.

Instead, it needs to have specific (and ideally short) timed slots in which to transmit/receive low-latency data. This means that it either needs to have lots of capacity reserved as overhead, or the scheduler has to de-prioritise “ordinary” traffic to give “pre-emption” rights to the URLLC loads. Look for terms like Transmission Time Interval (TTI) and grant-free UL transmission to drill into this in more detail.

It’s far from clear that on busy networks, with lots of smartphone or “ordinary” 5G traffic, there can always be a comfortable coexistence of MBB data and more-demanding URLLC. If one user gets their 1ms latency, is it worth disrupting 10 – or 100 – users using their normal applications? That will depend on pricing, as well as other factors.

This gets even harder where the spectrum used is a TDD (time-division duplexing) band, where there’s also another timeslot allocation used for separating up- and down-stream data. It’s a bit easier in FDD (frequency-division) bands, where up- and down-link traffic each gets a dedicated chunk of spectrum, rather than sharing it.

There’s another radio problem here as well – spectrum license terms, especially where bands are shared in some fashion with other technologies and users. For instance, the main “pioneer” band for 5G in much of the world is 3.4-3.8GHz (which is TDD). But current rules – in Europe, and perhaps elsewhere – essentially prohibit the types of frame-structure that would enable URLLC services in that band. We might get to 20ms, or maybe even 10-15ms if everything else stacks up. But 1ms is off the table, unless the regulations change. And of course, by that time the band will be full of smartphone users generating lots of ordinary traffic. There may be some Net Neutrality issues around slicing, too.

There's a lot of good discussion - some very technical - on this recent post and comment thread of mine: https://www.linkedin.com/posts/deanbubley_5g-urllc-activity-6711235588730703872-1BVn

Various mmWave bands, however, have enough capacity to cope with URLLC more readily. But as we already know, mmWave cells also have very short range – perhaps just 200 metres or so. We can forget about nationwide – or even full citywide – coverage. And outdoor-to-indoor coverage won’t work either. And if an indoor network is deployed by a third party such as a neutral host or roaming partner, it's far from clear that URLLC can work across the boundary.

Sub-1GHz bands, such as 700MHz in Europe, or perhaps refarmed 3G/4G FDD bands such as 1.8GHz, might support URLLC and have decent range/indoor reach. But they’ll have limited capacity, so again coexistence with MBB could be a problem, as MNOs will also want their normal mobile service to work (at scale) indoors and in rural areas too.

What this means is that, for the foreseeable future, we will probably get:

  • Moderately Low Latency on wide-area public 5G Networks (perhaps 10-20ms), although where network coverage forces a drop back to 4G, then 30-50ms.
  • Ultra* Low Latency on localised private/enterprise 5G Networks and certain public hotspots (perhaps 5-10ms in 2021-22, then eventually 1-3ms maybe around 2023-24, with Release 17, which also supports deterministic "Time Sensitive Networking" in devices)
  • A promised 2ms on Wi-Fi6E, when it gets access to big chunks of 6GHz spectrum

This really isn't ideal for all the sci-fi low-latency scenarios I hear around drones, AR games, or the cliched surgeon performing a remote operation while lying on a beach. (There's that Speedo reference, again).

* see the demand section below on whether 1-10ms is really "ultra-low" or just "very low" latency


Almost 3 years ago, I wrote an earlier article on latency (link), some of which I'll repeat here. The bottom line is that it's not clear that there's a huge range of applications and IoT devices that URLLC will help, and where they do exist they're usually very localised and more likely to use private networks rather than public.

One paragraph I wrote stands out:

I have not seen any analysis that tries to divide the billions of devices, or trillions of dollars, into different cohorts of time-sensitivity. Given the assumptions underpinning a lot of 5G business cases, I’d suggest that this type of work is crucial. Some of these use-cases are slow enough that sending data by 2G is fine (or by mail, in some cases!). Others are so fast they’ll need fibre – or compute capability located locally on-device, or even on-chip, rather than in the cloud, even if it’s an “edge” node.

I still haven't seen any examples of that analysis. So I've tried to do a first pass myself, albeit using subjective judgement rather than hard data*. I've put together what I believe is the first attempted "heatmap" for latency value. It includes both general cloud-compute and IoT, both of which are targeted by 5G and various forms of edge compute. (*get in touch if you'd like to commission me to do a formal project on this)

A lot of the IoT examples I hear about are either long time-series collections of sensor data (for asset performance-management and predictive maintenance), or have fairly loose timing constraints. A farm’s moisture sensors and irrigation pumps don’t need millisecond response times. Conversely, a chemical plant may need to measure and alter pressures or flows in microseconds.

I've looked at time-ranges for latency from microseconds to days, spanning 12 orders of magnitude (see later section for more examples). As I discuss below, not everything hinges on the most-mentioned 1-100 millisecond range, or the 3-30ms subset of it that 5G addresses.

I've then compared those latency "buckets" with distances from 1m to 1000km - 7 orders of magnitude. I could have gone out to geostationary satellites, and down to chip scales, but I'll leave that exercise to the reader.
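Distance interacts with latency in a way that no amount of network engineering can fix: signal propagation sets a hard physical floor. A quick sketch, assuming light in fibre travels at roughly two-thirds of its vacuum speed (about 200,000 km/s, i.e. 200 km per millisecond):

```python
# Minimum round-trip propagation delay over fibre, ignoring ALL other delays
# (radio, queuing, processing). Assumes signal speed in fibre of ~200,000 km/s,
# roughly two-thirds of the speed of light in vacuum.

SPEED_IN_FIBRE_KM_PER_MS = 200.0  # ~200 km per millisecond

def min_round_trip_ms(distance_km):
    """Physical floor on round-trip time to a server distance_km away."""
    return 2 * distance_km / SPEED_IN_FIBRE_KM_PER_MS

for d in [1, 10, 100, 1000]:
    print(f"{d:>5} km -> >= {min_round_trip_ms(d):.2f} ms round trip")
```

So a 1ms round-trip budget caps the server at roughly 100km away even before a single microsecond is spent on the radio link or the application itself – which is the physical argument for edge computing, and a constraint on any "wide-area URLLC" story.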


The question for me is - are the three or four "battleground" blocks really that valuable? Is the two-dimensional Goldilocks zone of not-too-distant / not-too-close and not-too-short / not-too-long really that much of a big deal?

And that's without considering the third dimension of throughput rate. It's one thing having a low-latency "stop the robot now!" message, but quite another doing hyper-realistic AR video for a remote-controlled drone or a long session of "tactile Internet" haptics for a game, played indoors at the edge of a cell.

If you take all those $trillions that people seem to believe are 5G-addressable, what % lies in those areas of the chart? What are the sensitivities to coverage and pricing, and what substitution risks apply - especially from private networks rather than MNO-delivered "slices" that don't even exist yet?


Here are some more examples of timing needs for a selection of applications and devices. Yes, we can argue some of them, but that's not the point - it's that this supposed magic range of 1-100 milliseconds is not obviously the source of most "industry transformation" or consumer 5G value:

  • Sensors on an elevator’s doors may send sporadic data, to predict slowly-worsening mechanical problems – so an engineer might be sent a month before the normal maintenance visit. Similarly, readings from sensors monitoring a building’s structural condition, vegetation cover in the Amazon, or oceanic acidity aren’t going to shift much month-by-month.
  • A car might download new engine-management software once a week, and upload traffic observations and engine-performance data once a day (maybe waiting to do it over WiFi, in the owner’s garage, as it's not time-critical).
  • A large oil storage tank, or a water well, might have a depth-gauge giving readings once an hour.
  • A temperature sensor and thermostat in an elderly person’s home, to manage health and welfare, might track readings and respond with control messages every 10 minutes. Room temperatures change only slowly.
  • A shared bicycle might report its position every minute – and unlock in under 10 seconds when the user buys access with their smartphone app
  • A payment or security-access tag should check identity and open a door, or confirm a transaction, in a second or two.
  • Voice communication seems laggy with anything longer than 200 millisecond latency.
  • A networked video-surveillance system may need to send a facial image, and get a response in 100ms, before the person of interest moves out of camera-shot.
  • An online video-game ISP connection will be considered “low ping” at maybe 50ms latency.
  • A doctor’s endoscope or microsurgery tool might need to respond to controls (and send haptic feedback) 100 times a second – ie every 10ms
  • Teleprotection systems for high-voltage utility grids can demand 6-10ms latency times
  • A rapidly-moving drone may need to react within 2-3 milliseconds to a control signal, or a locally-recognised risk.
  • A sensitive industrial process-control system may need to be able to respond in 10s or 100s of microseconds to avoid damage to finely-calibrated machinery
  • Image sensors and various network sync mechanisms may require response times measured in nanoseconds
  • Photon sensors for various scientific uses may operate at picosecond durations
  • Ultra-fast laser pulses for machining glass or polymers can be measured in femtoseconds
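To make the 12-orders-of-magnitude point concrete, the examples above can be bucketed by order of magnitude. A small sketch (the timing figures are my approximations of the requirements listed above, not measured data):

```python
import math

# Approximate response-time needs from the examples above, in seconds.
requirements = {
    "predictive maintenance alert": 30 * 24 * 3600,  # ~a month ahead
    "car software download": 7 * 24 * 3600,          # weekly
    "tank depth gauge": 3600,                        # hourly readings
    "home thermostat": 600,                          # every 10 minutes
    "shared-bike unlock": 10,
    "payment / door access": 1,
    "voice conversation": 0.2,
    "video surveillance match": 0.1,
    "low-ping gaming": 0.05,
    "surgical haptics": 0.01,
    "grid teleprotection": 0.008,
    "drone control": 0.003,
    "industrial process control": 0.0001,
    "image sensor sync": 1e-9,
}

# Group applications by the power of ten of their latency need (in seconds).
buckets = {}
for name, seconds in requirements.items():
    buckets.setdefault(math.floor(math.log10(seconds)), []).append(name)

for power in sorted(buckets, reverse=True):
    print(f"10^{power:>3} s: {', '.join(buckets[power])}")
```

Run this and the skew is obvious: only a handful of the buckets fall inside the 1-100ms window, and fewer still in the 3-30ms subset that public 5G can plausibly serve.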


Latency is important, for application developers, enterprises and many classes of IoT device and solution. But we have been spectacularly vague at defining what "low-latency" actually means, and where it's needed.

A lot of what gets discussed in 5G and edge-computing conferences, webinars and marketing documents is either hyped, or is likely to remain undeliverable. A lot of the use-cases can be adequately serviced with 4G mobile, Wi-Fi - or a person on a bicycle delivering a USB memory stick.

What is likely is that average latencies will fall with 5G. An app developer that currently expects a 30-70ms latency on 4G (or probably lower on Wi-Fi) will gradually adapt to 20-40ms on mostly-5G networks and eventually 10-30ms. If it's a smartphone app, they likely won't use URLLC anyway.

Specialised IoT developers in industrial settings will work with specialist providers (maybe MNOs, maybe fully-private networks and automation/integration firms) to hit more challenging targets, where ROI or safety constraints justify the cost. They may get to 1-3ms at some point in the medium term, but it's far from clear they will be contributing massively to MNOs or edge-providers' bottom lines.

As for wide-area URLLC? Haptic gaming from the sofa on 5G, at the edge of the cell? Remote-controlled drones with UHD cameras? Two cars approaching each other on a hill-crest on a country road? That's going to be a challenge for both demand and supply.

Tuesday, August 25, 2020

Voice: So much more than Phone Calls

 [Originally published on LinkedIn. Please subscribe to my new LinkedIn Newsletter here]

Trivia Question: When was the first example of network-based music streaming launched?

I'll bet many of you guessed that it was Spotify in 2006, or Pandora in 2000. Maybe some of you guessed RealAudio, back in 1995.

But the actual answer is over a century earlier. It was the Théâtrophone, first demonstrated in 1881 in Paris, with commercial services around Europe from 1890. It allowed people to listen to concerts or operas with a telephone handset, from another location across town. It even supported stereo audio, using a headset. It finally went out of business in the 1930s, killed by radio. Although by then, another form of remote audio streaming - Muzak, delivering cabled background music for shops and elevators - was also popular.

Why is this important? Because these services used "remote sound" (from the Greek tele+phonos) over networks. They were voice/audio communications services.

Yet they were not "phone calls".

Over the last century, we've started to use the words "voice communications", "telephony" and "phone calls" interchangeably, especially in the telecoms industry. But they're actually different. We often talk about "voice" services being a core component of today's fixed and mobile operators' service portfolios.

But actually, most telcos just do phone calls, not voice in general. One specific service, out of a voice universe of hundreds or thousands of possibilities. And a clunky, awkward service at that - one designed 100+ years ago for fixed networks, or 30+ years ago for mobile networks.

*Phone rings, interrupting me*


"Oh, is that Dean Bubley?"

"Yes, that's me"

"Hi, I'm from Company X. How are you today?"

"I'm fine, thanks. How can I help you?"

... and so on.

It's unnatural, interruptive and often unwanted. A few years ago a 20-something told me some words of wisdom: "The only people who phone me are my parents, or people I don't want to talk to". He's pretty much right. Lots of people hate unsolicited calls, especially from withheld numbers. They'll leave their phones on silent. (They hate voicemails even more).

I used to go into meetings at operators and ask them "Why do people make phone calls? Give me the top 10 reasons". I'd usually get "to speak to someone" as an answer. Or maybe a split between B2B and B2C. But never a list of actual reasons - "calling a doctor", "chatting to a relative", "politely speaking to an acquaintance but wishing they'd get to the point".

Now don't get me wrong - ad-hoc, unscheduled phone calls can still be very useful. Person A calling Person B for X minutes is not entirely obsolete. It's been good to speak to friends and relatives during lockdown, or a doctor, or a bank or prospective client. There are a lot of interactions where we don't have an app to coordinate timings, or an email address to schedule a Zoom call.

But overall, the phone call is declining in utility and popularity. It's an undifferentiated, lowest-common denominator form of communications, with some serious downsides. Yet it's viewed as ubiquitous and somehow "official". Why do web forms always insist on a number, when you never want to receive a call from that organisation?

Partly this relates to history and regulation - governments impose universal service obligations, release numbering, collect stats & make regulations about minutes (volume or price), determine interconnect and wholesale rates and so on. In turn, that has driven revenues for quite a lot of the telecom industry - and defined pricing plans.

But it's a poor product. There are no fine-grained controls - perhaps turning up the background noise-cancellation for a call from a busy street, and turning it down on a beach so a friend can hear the waves crashing on the shore. There's no easy one-click "report as spam" button. I can't give cold-callers a score for relevance, or see their "interruption reputation" stats. I can't thread phone calls into a conversation. Yes, there's some wizardry that can be done with cPaaS (comms platforms-as-a-service) but that takes us beyond telephony and the realm of the operators.

Beyond that, there's a whole wider universe of non-call voice (and audio) applications that operators don't even consider, or perhaps only a few. For instance:

  • Easy audioconferencing
  • Push-to-talk
  • Voice-to-text transcription (for consumers)
  • Voice analytics (e.g. for behavioural cues)
  • Voice collaboration
  • Voice assistants (like Alexa)
  • Audio streaming
  • Podcasts
  • Karaoke
  • One-way voice / one-way video (eg for a doorbell)
  • Telecare and remote intercom functions for elderly people
  • Telemedicine with sensor integration (eg ultrasound)
  • IoT integrations (from elevator alarms to smartwatches)
  • "Whisper mode" or "Barge-in" for 3-person calls
  • Stereo
  • De-accenting
  • Voice biometric security
  • Data-over-sound
  • In-game voice with 3D-positioning
  • Veterinary applications - who says voices need to be human?

There are dozens, maybe hundreds of possibilities. Some could be blended with a "call" model, while others have completely different user-interaction models. Certain of these functions are implemented in contact-centre and enterprise UCaaS systems, but others don't really fit well with the call/session metaphor of voice.

I've talked about contextual communications in the past, especially with WebRTC as an enabling technology, which allows voice/video elements to be integrated into apps and browser pages. I've also written before about the IoT integration opportunities - something which is only now starting to pick up (Disclosure: I'm currently working with specialist platform provider iotcomms.io to describe "people to process" and event-triggered communications).

But what irritates me is that the mainstream telecoms industry has just totally abdicated its role as a provider and innovator of voice services and applications. You only have to look at the mobile industry currently talking about Vo5G ("5G Voice") as a supposed evolution from the VoLTE system used with 4G. It's basically the same thing - phone calls - that we've had for over 100 years on fixed networks, and 30 years on mobile. It's still focused on IMS as a platform, dedicated QoS metrics, roaming, interconnection and so on. But it's still exactly the same boring, clunky, obsolescent model of "calls".

There was a golden opportunity to rethink everything for 5G and say "Hey, what *is* this voice thing in the 2020s? What do people actually want to use voice communications *for*? What interaction models and use-cases? What would make it broader & more general-purpose?" In fact, I said exactly the same thing around 10 years ago, when VoLTE was being dreamed up.

Nothing's changed, except better codecs (although HD voice was around on 3G) and lame attempts to integrate it with the even-worse ViLTE video and perennially-useless RCS messaging functions. The focus is on interoperability, not utility. Interop & interconnection is a nice-to-have for communications. Users need to actually like the thing first.

Some of the vendors pay lip-service to device integration and IoT. But unless you can tune the underlying user interface, codecs, acoustic parameters, audio processing, numbering/identity and 100 other variables in some sort of cPaaS, it's useless.

I don't want a phone call on a smartwatch - I want an ad-hoc voice-chat with a friend to ask what beer he wants when I'm at the bar. I want tap-to-record-and-upload of conversations, from my sunglasses, when someone's trying to sell me something & I suspect they're scamming me. I want realtime audio-effects like an audio Instagram filter that make me sound like I'm a cartoon character, or 007. (I don't want karaoke, but I imagine millions do)

So remember: the telecoms industry doesn't do "voice". It just does one or two voice applications. VoLTE is actually ToLTE. It's not too late - but telcos and their suppliers need to take a much broader view of voice than just interoperable PSTN-type phone calls. Maybe start with Théâtrophone 2.0?

This post was first published via my LinkedIn Newsletter - see here + also the comment stream on LI

#voice #telecoms #volte #phone #telephony #IMS #VoLTE #telcos #cPaaS #conferencing

If you're interested in revisiting your voice strategy, get in touch via email or LinkedIn, to discuss projects, workshops and speaking engagements. We can even discuss it by phone, if you insist.

Saturday, August 08, 2020

A rant about 5G myths - chasing unicorns​

Exasperated rant & myth-busting time.

I actually got asked by a non-tech journalist recently "will 5G change our lives?"

Quick answer: No. Emphatically No.

#5G is Just Another G. It's not a unicorn

Yes, 5G is an important upgrade. But it's also *massively* overhyped by the mobile industry, by technology vendors, by some in government, and by many business and technology journalists.

- There is no "race to 5G". That's meaningless geopolitical waffle. Network operators are commercial organisations and will deploy networks when they see a viable market, or get cajoled into it by the terms & timing of spectrum licenses.

- Current 5G is like 4G, but faster & with extra capacity. Useful, but not world-changing.

- Future 5G will mean better industrial systems and certain other cool (but niche) use-cases.

- Most 5G networks will be very patchy, without ubiquitous coverage, except for very rudimentary performance. That means 5G-only applications will be rare - developers will have to assume 4G fallback (& WiFi) are common, and that dead-spots still exist.

- Lots of things get called 5G, but actually aren't 5G. It's become a sort of meaningless buzzword for "cool new wireless stuff", often by people who couldn't describe the difference between 5G, 4G or a pigeon carrying a message.

- Anyone who talks about 5G being essential for autonomous cars or remote surgery is clueless. 5G might get used in connected vehicles (self-driving or otherwise) if it's available and cheap, but it won't be essential - not least as it won't work everywhere (see above).

- Yes, there will be a bit more fixed wireless FWA broadband with 5G. But no, it's not replacing fibre or cable for normal users, especially in competitive urban markets. It'll help take FWA from 5% to 10-12% of global home broadband lines.

- The fact the 5G core is "a cloud-native service based architecture" doesn't make it world-changing. It's like raving about a software-defined heating element for your toaster. Fantastic for internal flexibility. But we expect that of anything new, really. It doesn't magically turn a mobile network into a "platform". Nor does it mean it's not Just Another G.

- No, enterprises are not going to "buy a network slice". The amount of #SliceWash I'm hearing is astonishing. It's a way to create some rudimentary virtualised sub-networks in 5G, but it's not a magic configurator for 100s or 1000s of fine-grained, dynamically-adjusted different permutations all coexisting in harmony. The delusional vision is very far removed from the mundane reality.

- The more interesting stuff in 5G happens in Phase 2/3, when 3GPP Release 16 & then Release 17 are complete, commercialised & common. R16 has just been finalised. From 2023-4 onward we should expect some more massmarket cool stuff, especially for industrial use. Assuming the economy recovers by then, that is.

- Ultra-reliable low-latency communications (URLLC) sounds great, but it's unclear there's a business case except at very localised levels, mostly for private networks. Actually, UR and LL are two separate things anyway. MNOs aren't going to be able to sell reliability unless they also take legal *liability* if things go wrong. If the robot's network goes down and it injures a worker, is the telco CEO going to take the rap in court?

- Getting high-performance 5G working indoors will be very hard, need dedicated systems, and will take lots of time, money and trained engineers. It'll be a decade or longer before it's very common in public buildings - especially if it has to support mmWave and URLLC. Most things like AR/VR will just use Wi-Fi. Enterprises may deploy 5G in factories or airport hangars or mines - but will engineer it very carefully, examine the ROI - and possibly work with a specialist provider rather than a telco.

- #mmWave 5G is even more overhyped than most aspects. Yes, there's tons of spectrum and in certain circumstances it'll have huge speed and capacity. But it's got short range and needs line-of-sight. Outdoor-to-indoor coverage will be near zero. Having your back to a cell-site won't help. It will struggle to go through double-glazed windows, the shell of a car or train, and maybe even your bag or pocket. Extenders & repeaters will help, but it's going to be exceptionally patchy (and need tons of fibre everywhere for backhaul).

- 5G + #edgecomputing is not going to be a big deal. If low-latency connections were that important, we'd have had localised *fixed* edge computing a decade ago, as most important enterprise sites connect with fibre. There's almost no FEC, so MEC seems implausible except for niches. And even there, not much will happen until there's edge federation & interconnect in place. Also, most smartphone-type devices will connect to someone else's WiFi between 50-80% of the time, and may have a VPN which means the network "egress" is a long way from the obvious geographically-proximal edge.

- Yes, enterprise is more important in 5G. But only for certain uses. A lot can be done with 4G. "Verticals" is a meaningless term; think about applications.

- No, it won't displace Wi-Fi. Obviously. I've been through this multiple times.

- No, all laptops won't have 5G. (As with 3G and 4G. Same arguments).

- No, 5G won't singlehandedly contribute $trillions to GDP. It's a less-important innovation area than many other things, such as AI, biotech, cloud, solar and probably quantum computing and nuclear fusion. So unless you think all of those will generate 10's or 100's of $trillions, you've got the zeros wrong.

- No, 5G won't fry your brain, or kill birds, or give you a virus. Conspiracy theorists are as bad as the hypesters. 5G is neither Devil nor Deity. It's just an important, but ultimately rather boring, upgrade.

There's probably a ton more 5G fallacies I've forgotten, and I might edit this with a few extra ones if they occur to me. Feel free to post comments here, although the majority of debate is on my LinkedIn version of this post (here). This is also the inaugural post for a new LinkedIn newsletter. Most of my stuff is not quite this snarky, but it depends on my mood. I'm @disruptivedean on Twitter, so follow me there too.

If you like my work, and either need a (more sober) business advisory session or workshop, let me know. I'm also a frequent speaker, panellist and moderator for real and virtual events.

Just remember: #5GJAG. Just Another G.

Wednesday, July 29, 2020

The fake battle: 5G vs Wi-Fi

[Reposted from my LinkedIn & slightly extended. See the post here for a full comment thread]

I'm bored of the fake battle being hyped up between #WiFi and #5G, especially for enterprise connectivity in-building.

Let's be absolutely clear. Essentially *every* building, whether residential, enterprise office, public venue or industrial, will need good WiFi coverage, increasingly based on #WiFi6.

Most laptops, TVs, screens, voice assistants, tablets, consumer appliances & other non-smartphone devices will be WiFi-only. Only a handful will have cellular radios too - the economics & manufacturing/distribution complexities don't work for including 5G as a default in most electronic products.

Almost every building will *also* need decent indoor public 4G/5G broadband coverage, especially for employees' and visitors' phones. In most cases this will need to cover all major MNOs' networks, as well as public safety systems such as critical-communications LTE. (Wi-Fi Calling doesn't work ubiquitously on all phones / mobile networks on enterprise Wi-Fi, so there will always need to be a cellular network for reliable basic telephony).

*Some* buildings will also need indoor private 5G for ultra low-latency machines or other connected devices. For industrial sites this will mostly be isolated local networks. For others it may be delivered by MNOs via local coverage or network-slicing, or by some form of neutral-host wholesale model.

The main competition for indoor 5G is actually indoor 4G, not WiFi, with which it has only a narrow overlap in use-cases. WiFi will almost always be needed as well as cellular, with only rare examples where it's absent - for instance outdoors on campus sites.

Also, future visitor access to WiFi may be made much easier with #OpenRoaming, which can use multiple affiliation-based credentials, not just SIM or passwords. That will change the usability barriers for Wi-Fi, for instance if you can connect via a loyalty app, rather than needing to visit a web-page and enter credentials.

Bottom line: it's not a battle. Wi-Fi6 and 5G will be needed for different purposes. They probably won't be integrated much either, as they'll have different financial models, different usage models (and locations) and deployment/upgrade timelines. Think divergence, not convergence - although some elements such as planning tools and fibre backhaul to the cells/APs will likely be combined.

If you’d like more details on this topic & my deeper analysis on the future of wireless, please contact me via information AT disruptive-analysis DOT com. I offer advisory services to governments, operators, vendors, enterprises & investors.

See also LinkedIn post with long comment thread via this link: here

Monday, June 22, 2020

Industrial 5G networks will mostly be discrete and isolated

A key argument cited for telcos having a central role in industrial / vertical #5G networks is "service continuity". Devices and users can connect both on-premise and in the wide area, because both are enabled by the same operator. An MNO can thus best provide on-premise connectivity as an extension, or slice, of its normal national cellular network.

MNOs and industry groups often assert this to dissuade governments and regulators from assigning local spectrum licences directly to businesses.

This argument doesn't stack up, for several reasons.

At a recent #5GRealised virtual event I moderated for Juliet Media, the speaker from Three Business, Nigel Yeates, pointed out that its customers' private 4G/5G networks were generally isolated, not part of 3's macro network. They even use different spectrum. They can do roaming, but it's not a priority.

A central point is that most connected IoT and automation systems don't move outside the facility. Industrial robots don't go for a walk to the shops. What does move are vehicles, personal devices and shipped electronic goods.

Yet here, having local & wide area coverage from the same MNO is of minimal use. Guests, contractors and employees have devices on *all* networks, not just that of the on-prem network operator.

So some sort of roaming or neutral-host arrangement would be needed. And those capabilities could also be offered by a new specialised provider, as well as by an incumbent MNO.

In fact, it might be easier (and quicker) for a genuinely neutral wholesale player to offer that capability, rather than one MNO trying to negotiate a site-specific roaming or interconnect deal with all its rivals.

Another reason is eSIM and dual-SIM. Devices can have separate profiles for on-premise and wide-area subscriptions, and just switch from one to the other when they're off-site. This is an increasingly common feature in smartphones and vehicles.
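As a purely hypothetical sketch of that dual-profile behaviour (the profile names and the public PLMN ID here are invented; the 999-prefixed MCC is the range 3GPP reserves for private networks), the device-side selection logic amounts to little more than:

```python
# Hypothetical device-side eSIM profile selection: prefer the on-premise
# private-network profile when its PLMN is visible in the modem's scan,
# otherwise fall back to the wide-area MNO subscription.
# All identifiers below are illustrative, not real network values.

PRIVATE_PROFILE = {"name": "factory-private-5g", "plmn": "999-99"}
PUBLIC_PROFILE = {"name": "national-mno", "plmn": "234-15"}

def select_profile(visible_plmns):
    """Return the eSIM profile to activate, given the set of PLMN IDs
    currently visible to the modem."""
    if PRIVATE_PROFILE["plmn"] in visible_plmns:
        return PRIVATE_PROFILE
    return PUBLIC_PROFILE

# On-site: the private PLMN is broadcast, so the device attaches locally.
on_site = select_profile({"999-99", "234-15"})
# Off-site: only the macro network is visible, so it switches profiles.
off_site = select_profile({"234-15"})
```

The point is that the "continuity" logic lives in the device, not in any commercial relationship between the on-prem and wide-area operators - which is why the same-MNO argument carries so little weight.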

In fact, private cellular networks don't even need SIMs - 5G allows the use of other identifiers such as enterprise security credentials, or even the new Wi-Fi OpenRoaming model.

At a radio level, there are distinct advantages to running private networks in isolated fashion, in separate spectrum. They can use different configurations to the macro environment, perhaps optimised for a different mix of up- and downlink in TDD spectrum.

And lastly, it is much easier to treat a private network as private, rather than some unusual public/private hybrid. The legal situations and liabilities are clearer. SLAs can be described and enforced in contracts. There doesn't have to be alignment in deployment speeds or priorities. Different vendors can be chosen.

This doesn't mean that MNOs don't have a role in such private enterprise networks - but it's likely to be done by a separate business unit that can engineer solutions specifically for verticals, thinking about the customer first. It won't be done by the main "mothership" network group, desperate to find "5G use cases" and crowbarring its main network (and also its #networkslicing and #edgecomputing platform) into unsuitable applications.

That MNO enterprise business unit might decide the macro RAN is suitable for a given client. Or it may choose to build its own network locally, with the enterprise owning the spectrum license. Or it might work with 3rd parties - or use WiFi instead. I'm expecting MNOs to acquire lots of vertical-specialist integrators and network installation firms in some industries like manufacturing, ports, mining and healthcare.

Maybe over time they'll add value and revenue to the central 5G network business, or act as channels for its #URLLC and MEC businesses. But that won't be their only offering - just one of a portfolio of options.

More generally, all of this points to private 4G/5G networks - especially in industrial sectors and areas such as ports and mining - being based on discrete, isolated deployments. There may be involvement by a national MNO in its deployment or operation (or spectrum licensing), but the network usually won't be part of an MNO's main infrastructure. There might be service continuity - but there are many ways to offer that, and it usually won't be in the top 10 priorities considered.

I definitely think that the roaming approach and neutral-host model offer many opportunities connected to private cellular too. There are some interesting angles relating to Open RAN here as well. Unfortunately, many of the verticals holding most appeal - hotels, airports, stadiums, office complexes - have obvious problems for the next year or so, given the pandemic and ensuing recession.

I'll be exploring these issues at a couple of different upcoming events.

Firstly, on July 7th, I'm running my next private workshop on Neutral Host Networks with Peter Curnow-Ford. It's now switched to a virtual event, over morning and afternoon sessions - plus a networking event (a virtual "pub" with special entertainment) in the evening. The broad outline is the same as first announced (link here), with a more detailed updated agenda and format coming in the next couple of days. It will remain as a private, off-the-record event under the Chatham House Rule.

Also on August 20th, I'm doing another #5GRealised session with Juliet Media, specifically on the role of telcos in private networks. Details are here

As always, this theme and broader area is one I also advise on privately. Please drop me a message if you have specific needs for consulting or insight.

#5G #NeutralHost #Verticals #PrivateLTE #Private5G

Thursday, June 11, 2020

Changes are coming to home broadband. Expect prosumer offers, WFH special features and consumer SDWAN

Expect a big upsurge in "prosumer" and WFH broadband over the next year, including consumer-oriented SDWAN approaches, FWA bundles and more. This offers opportunities for fixed and mobile telcos, Wi-Fi and gateway vendors, and enterprise systems-integrators and resellers.

Lockdowns have led to massive surges in home broadband demand. ISPs' networks have generally held up well, as home working and education have (mostly) just smoothed out evening streaming peaks across the whole day. Uplink data has risen much faster than downlink, because of cloud and video-calling use.

Mobile usage has been relatively flat during lockdown. Lower out-of-home usage (especially at entertainment venues and in-car) has been offset by non-WiFi / non-fixed broadband users consuming more data at home. FWA connections have been growing where available, and broadly mirroring fibre / cable usage trends.

But other issues are more complex.

Employers are struggling with their home-workers' poor bandwidth (especially upstream), unreliable connections and new network-security risks. Households with 2+ adults doing WFH, plus children home-schooling (or playing games) struggle with capacity and prioritisation. Some people are working from the garage, attic, garden shed or anywhere there is space - but perhaps not Wi-Fi coverage. (Ironically, I'm now working from my basement, which is actually where my Wi-Fi AP is, so my broadband experience is actually better than normal).

We will see many solutions:
  • Extension of company BYOD mobile policies, covering costs of upgrades to existing home broadband
  • Telcos offering high-QoS partitions on existing broadband
  • Small-biz broadband products sold for home use, including via employers
  • Businesses giving WFH staff a dedicated FWA modem or mobile hotspot, kept completely separate from normal home-broadband, for easier management by IT staff
  • Home Wi-Fi improvements where in-home is the bottleneck (Wi-Fi 6 & mesh). This is already filtering through via retail, but expect more ISPs/telcos to offer upgrades to old CPE soon.
  • Fixed+cellular converged broadband gateways, especially where 5G is available
  • 3rd-party gateways acting as SDWAN nodes, bonding fixed broadband with mobile/FWA from another telco
  • Second fixed connections to homes, in areas where there are 2+ fibre/cable infrastructure providers
There will be no obvious single "winner" here - it will depend on a given country's competitive landscape for broadband, telcos with new fibre build-outs looking for quick wins, urban density, single homes vs. apartments, availability and capacity of 4G/5G FWA networks, family size & make-up and much more. I'd expect dozens of innovative offers to emerge over the next 2-12 months.


Are you looking for a quick burst of market insight, product or service stress-testing, or idea-generation? As I'm WFH at the moment, I can now offer by-the-hour advisory sessions. See this link for details of availability & booking.

Thursday, June 04, 2020

Edge computing meets Private Networking: quick thoughts

This morning, I gave a short presentation & then joined a panel of other speakers from Athonet, Ericsson, Huawei & Hewlett Packard Enterprise on a webinar session organised by TechUK.

It covered the role of edge computing in the context of private networks.

There are many possible different touch-points I see evolving between these two domains:
  • Enterprises wanting both private networks & on-premise edge compute for inhouse IoT systems and analytics (eg in manufacturing). This is not necessarily 3GPP-style MEC, though - it could be a local hyperscale node, eg an AWS Outposts rack
  • MNOs offering enterprises their own on-prem EPC/5GC node
  • MNOs offering 3GPP Release 16/17 5G with network slicing & integrated MEC edge capabilities (personally, I'm a bit skeptical that this is a big opportunity)
  • Metro edge datacentres for SPs running multiple private/vertical networks in a city, for hosting their own multi-tenant virtual cores or Open RAN elements
  • Neutral-host wireless networks for buildings or metro areas also offering "neutral edge" facilities, eg TowerCos or campus-network specialists
  • An edge data centre operator deploying its own citywide CBRS-type network for "one hop to the cloud" 4G/5G. (This harks back to my belief that Amazon could start using Whole Foods stores as mini data-centres, with direct fibre or cellular connectivity to the surrounding area)
  • Localised interconnect facilities (between MNOs, or private cellular network operators reaching cloud & public Internet). There's a whole host of edge-interconnect models I think will be essential - for instance where users of different MNOs have to interact with low latency (eg AR gaming), or where companies need external inputs to private networks & applications (eg 3rd party AI microservices for analytics).
In essence, this is a hugely complex intersection, which I'm only scratching the surface of here.

Ping me if this is an area where I can help you brainstorm new ideas, or test existing ones.