Speaking Engagements & Private Workshops - Get Dean Bubley to present or chair your event

Need an experienced, provocative & influential telecoms keynote speaker, moderator/chair or workshop facilitator?
To see recent presentations, and discuss Dean Bubley's appearance at a specific event, click here

Showing posts with label network slicing. Show all posts

Tuesday, June 20, 2023

Private 5G: Two different approaches at the Coronation

This post originally appeared on June 9 on my LinkedIn feed, which is now my main platform for both short posts and longer-form articles. It can be found here, along with the comment stream. Please follow / connect to me on LinkedIn, to receive regular updates (about 1-3 / week)

 A month ago, the UK and much of the world watched King Charles' #Coronation in London.

They were able to watch it partly because of the immense efforts of the various #broadcasters involved. Since then, two separate stories have emerged about the role of dedicated #5G connectivity in the TV coverage:

1) A dedicated private 5G network supplied by Neutral Wireless and BBC R&D, used by several broadcasters
2) A slice of the Vodafone public 5G network, enabled for ITN, based on Ericsson gear

In the comments I've linked to various articles and a great interview on Ericsson's Voice of 5G podcast show. They have details of the other partners involved too. In the BBC blog post they also mention a 3rd network on a separate cell, working alongside Sony, for low-latency (I think) remote-controlled cameras.

The #private5G network used 8 radios along The Mall (the tree-lined road between Buckingham Palace and Trafalgar Square). It used 2x 40MHz channels in the UK's shared-licence band between 3.8-4.2GHz, with 1Gbps capacity (mostly for uplink). It was used by around 60 devices - I guess mostly cameras and test equipment via gateways, plus the BBC's onsite radio studio. They also used LiveU bonding systems to add capacity from public MNO networks. I'm not sure about the vendors of the radios or standalone core.
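As a quick sanity check on those figures (my arithmetic, not from the BBC or Neutral Wireless, and assuming the ~1Gbps is aggregate across both channels):

```python
# Sanity check on the quoted private 5G figures. Assumption (mine):
# "1Gbps capacity" means aggregate throughput across both channels.
channels = 2
channel_bw_hz = 40e6              # 2x 40MHz in the 3.8-4.2GHz shared band
capacity_bps = 1e9                # quoted ~1Gbps, mostly uplink

efficiency = capacity_bps / (channels * channel_bw_hz)
print(f"Implied spectral efficiency: {efficiency:.1f} bit/s/Hz")  # 12.5
```

Roughly 12.5 bit/s/Hz across the whole deployment - plausible for 5G NR with MIMO and eight radios sharing the load.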

The 5G SA #networkslicing solution was apparently used for a single sector at a 3.5GHz temporary base station aimed at the Palace balcony. It also worked with LiveU. On the podcast, Andrea Donà (VF's head of network in the UK) talks about "dedicating bandwidth to one sector for the slice" and carving out some of the uplink capacity.

One thing that is unclear to me is how many other users were sharing the VF standalone 5G network hosting the slice - SA hasn't been fully launched commercially in the UK, although in January VF said it had invited selected users to trial it. I also don't know whether the 5G NSA and SA networks were sharing the radio resource, or if they used separate channels.

The public 4G / 5G networks (and also Wi-Fi bands) in the area were pretty overloaded, despite additional mobile towers adding capacity. The Vodafone / Ericsson podcast notes that VF uses "all the bands" at major events (although there's no #mmWave 5G in the UK yet) - so including 4G at 2.1GHz and 2.6GHz, and some lower bands for 2G/3G.

My take from this is that #private5G is considerably more mature than #5Gslicing, but that both are interesting for broadcasters. Both need quite a lot of specialist engineering, but TV is a sector with lots of very clever specialists and great ability to set up temporary networks. Of course, both networks were *outdoors* which meant that the thick stonework of the palace and Westminster Abbey weren't relevant.

One last note - the huge bulk of broadcast audiovisual output at the coronation would have used dedicated #PMSE wireless for cameras and microphones. But the #UHF spectrum debate is for another post.


 

Thursday, January 12, 2023

Workarounds, hacks & alternatives to network QoS

Originally published Jan 12th 2023 on my LinkedIn Newsletter - see here for comments

Sometimes, upgrading the network isn't the answer to every problem.

For as long as I can remember, the telecom industry has talked about quality-of-service, both on fixed and mobile networks. There has always been discussion around "fast lanes", "bit-rate guarantees" and more recently "network slicing". Videoconferencing and VoIP were touted as needing priority QoS, for instance. 

There have also always been predictions about future needs of innovative applications, which would at a minimum need much higher downlink and uplink speeds (justifying the next generation of access technology), but also often tighter requirements on latency or predictability.

Cloud gaming would need millisecond-level latency, connected cars would send terabytes of data across the network and so on.

We see it again today, with predictions for metaverse applications adding yet more zeroes - we'll have 8K screens in front of our eyes, running at 120 frames per second, with Gbps speeds and sub-millisecond latencies needed to avoid nausea or other nasty effects. So we'll need 6G to be designed to cope.
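For what it's worth, the arithmetic behind those headline numbers is easy to reproduce - a rough sketch with assumed figures (8-bit RGB, dual 8K displays, no compression), not anyone's actual spec:

```python
# Reproduce the "multiply up to the largest number" metaverse maths.
# Assumed figures: 8K per eye, 120fps, 8-bit RGB, uncompressed.
width, height = 7680, 4320        # "8K" resolution
eyes = 2
fps = 120
bits_per_pixel = 24

raw_bps = width * height * eyes * fps * bits_per_pixel
print(f"Uncompressed: {raw_bps / 1e9:.0f} Gbps")          # ~191 Gbps

# Even an aggressive ~100:1 video codec leaves ~2 Gbps - and there's
# your multi-Gbps "6G use-case" headline figure.
print(f"With ~100:1 compression: {raw_bps / 100 / 1e9:.1f} Gbps")
```

Multiply every parameter out to its maximum and the result is always a scary number - which is rather the point of the sections below.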

The issue is that many in the network industry often don't realise that not every technical problem needs a network-based solution, with smarter core network policies and controls, or huge extra capacity over the radio-network (and the attendant extra spectrum and sites to go with it).

Often, there are other non-network solutions that achieve (roughly) the same effects and outcomes. There's a mix of approaches, each with different levels of sophistication and practicality. Some are elegant technical designs. Others are best described as "Heath Robinson" or "MacGyver" approaches, depending on which side of the Atlantic you live.

I think they can be classified into four groups:

  • Software: Most obviously, a lot of data can be compressed. Buffers can be used to smooth out fluctuations. Clever techniques can correct for dropped or delayed packets. There's a lot more going on here though - some examples are described below.
  • Hardware / physical: Some problems have a "real world" workaround. Sending someone a USB memory stick is a (high latency) alternative to sending large volumes of data across a network. Phones with dual SIM-slots (or, now, eSIM profiles) allow coverage gaps or excess costs to be arbitraged.
  • Architectural: What's better? One expensive QoS-managed connection, or two cheaper unmanaged ones bonded together or used for diverse routing? The success of SDWAN provides a clue. Another example is the use of onboard compute (and Moore's Law) in vehicles, rather than processing telemetry data in the cloud or network-edge. In-built sound and image recognition in smart speakers or phones is a similar approach to distributed-compute architecture. That may have an extra benefit of privacy, too.
  • Behavioural: The other set of workarounds exploits human psychology. Setting expectations - or warning of possible glitches - is often preferable to fixing or apologising for problems after they occur. Skype was one of the first communications apps to warn of dodgy connections - and also had the ability to reconnect when the network performance improved. Compare that with a normal PSTN/VoLTE call drop - it might have network QoS, but if you lose signal in an elevator, you won't get a warning, apology or a simplified reconnection.

These aren't cure-alls. Obviously if you're running a factory, you'd prefer not to have the automation system cough politely and quietly tell you to expect some downtime because of a network issue. And we certainly *will* need more bandwidth for some future immersive experiences, especially for uplink video in mixed reality.

But recently I've come across a few examples of clever workarounds or hacks, that people in the network/telecom industry probably wouldn't have anticipated. They potentially reduce the opportunity for "monetised QoS", or reduce future network capacity or coverage requirements, by shifting the burden from traffic to something else.

The first example relates to the bandwidth needs for AR/VR/metaverse connectivity - although I first saw this mentioned in the context of videoconferencing a few years ago. It's called "foveated rendering". (The fovea is the densest part of the eye's retina). In essence, it uses the in-built eye tracking in headsets or good quality cameras. The system knows what part of a screen or virtual environment you are focusing on, and reduces the resolution or frame-rate of the other sections in your peripheral vision. Why waste compute or network capacity on large swathes of an image that you're not actually noticing?
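A toy model shows why this matters for the bandwidth predictions. The fractions below are illustrative assumptions of mine, not measured figures from any headset vendor:

```python
# Toy foveated-rendering model: full resolution only where the eye is
# focused, reduced resolution everywhere else. Fractions are illustrative.
foveal_area = 0.1        # ~10% of the frame is in sharp focus
periphery_density = 0.25 # periphery at half resolution per axis = 1/4 pixels

relative_cost = foveal_area * 1.0 + (1 - foveal_area) * periphery_density
print(f"Bandwidth vs full-res frame: {relative_cost:.3f}")   # 0.325
print(f"Saving factor: {1 / relative_cost:.1f}x")            # 3.1x
```

Even with these conservative assumptions the stream shrinks to about a third - before any of the cleverer prediction tricks discussed below.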

I haven't seen many "metaverse bandwidth requirement" predictions take account of this. They all just count the pixels & frame rate and multiply up to the largest number - usually in the multi-Gbps range. Hey presto, a 6G use-case! But perhaps don't build your business case around it yet...

Network latency and jitter is another area where there are growing numbers of plausible workarounds. In theory, lots of applications such as gaming require low latency connections. But actually, they mostly require consistent and predictable but low-ish latency. A player needs to have a well-defined experience, and especially for multi-player games there needs to be fairness.

The gaming industry - and also other sectors including future metaverse apps - have created a suite of clever approaches to dealing with network issues, as well as more fundamental problems where some players are remote and there are hard speed-of-light constraints. They can monitor latency, and actually adjust and balance the lags experienced by participants, even if it means slowing some participants.

There are also numerous techniques for predicting or anticipating movements and actions, so network-delivered data might not be needed continually. AI software can basically "fill in the gaps", and even compensate for some sorts of errors if needed. Similar concepts are used for "packet loss concealment" in VoIP or video transmissions. Apps can even subtly speed up or slow down streams to allow people to "catch up" with each other, or have the same latency even when distributed across the world.
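As a flavour of those prediction techniques, here's a minimal dead-reckoning sketch - a deliberately simplified illustration of the idea, not how any particular game engine actually does it:

```python
# Minimal dead-reckoning sketch: extrapolate a player's position during
# a network gap, then blend towards the authoritative update when it
# arrives. Deliberately simplified - real engines do far more.

def predict(last_pos, velocity, elapsed_s):
    """Linear extrapolation while no fresh update has arrived."""
    return tuple(p + v * elapsed_s for p, v in zip(last_pos, velocity))

def reconcile(predicted, authoritative, blend=0.2):
    """Nudge the guess towards the server state to avoid a visible snap."""
    return tuple(p + blend * (a - p) for p, a in zip(predicted, authoritative))

# Player last seen at (10, 5) moving at (2, 0) units/s; 150ms of silence:
guess = predict((10.0, 5.0), (2.0, 0.0), 0.150)      # roughly (10.3, 5.0)
corrected = reconcile(guess, (10.25, 5.0))           # eased back towards server
print(guess, corrected)
```

The point is that the on-screen result stays smooth even when the network delivers nothing for 100-200ms - no QoS contract required.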

We can expect much more of this type of software-based mitigation of network flaws in future. We may even get to the point where sending full video/image data is unnecessary - maybe we just store a high-quality 3D image of someone's face and room (with lighting) and just send a few bytes describing what's happening. "Dean turned his head left by 23 degrees, adopted a sarcastic expression and said 'who needs QoS and gigabit anyway?' A cloud outside the window cast a dramatic shadow half a second later". It's essentially a more sophisticated version of Siri + Instagram filters + ChatGPT. (Yes, I know I'm massively oversimplifying, but you get the direction of travel here).

The last example is a bit more left-field. I did some work last year on wireless passenger connectivity on trains. There's a huge amount of complexity and technical effort being done on dedicated trackside wireless networks, improving MNO 5G coverage along railways, on-train repeaters for better signal and passenger Wi-Fi using multi-SIM (or even satellite) gateways. None of these are easy or cheap - the reality is that there will be a mix of dedicated and public network connectivity, with cities and rural areas getting different performance, and each generation of train having different systems. Worse, the coated windows of many new trains, needed for anti-glare and insulation, effectively act as Faraday cages, blocking outdoor/indoor wireless signals.

It's really hard to take existing rolling-stock out of service for complex retrofits, install anything along operational tracks / inside tunnels, and anything electronic like repeaters or new access points needs a huge set of certifications and installation procedures.

So I was really surprised when I went to the TrainComms conference last year and heard three big train operators say they were looking at a new way to improve wireless performance for their passengers. Basically, someone very clever realised that it's possible to laser-etch the windows with a fine grid of lines - which makes them more transparent to 4G/5G, without changing the thermal or visual properties very much. And that can be done much more quickly and easily for in-service trains, one window at a time.

I have to say, I wasn't expecting a network QoS vs. Glazing Technology battle, and I suspect few others did either.

The story here is that while network upgrades and QoS are important, there are often highly inventive workarounds - and very motivated software, hardware and materials-science specialists hoping to solve the same problems via a different path.

Do you think a metaverse app developer would rather work on a cool "foveated rendering" approach, or deal with 800 sets of network APIs and telco lawyers to obtain QoS contracts instead? And how many team-building exercises just involve hiring a high-quality boat to go across a lake, rather than working out how to build rafts from barrels and planks?

We'll certainly need faster, more reliable, lower-latency networks. But we need to be aware that they're not the only source of solutions, and that payments and revenue uplift for network performance and QoS are not pre-ordained.


#QoS #Networks #Regulation #NetNeutrality #5G #FTTX #metaverse #videoconferencing #networkslicing #6G

Thursday, July 14, 2022

Network Slicing is a huge error for the 5G industry

(Initially posted on LinkedIn, here. Probably best to use LI for comments & discussion)

I've started calling myself a "Slice Denier" or "Slicing Skeptic" on client calls and conference speeches on #5G.

Increasingly, I believe that #NetworkSlicing is one of the worst strategic errors made by the #mobile industry, since the catastrophic choice of IMS for communications applications. The latter has led to the fiascos of #VoLTE and #RCS, and loss of relevance of telcos in communications more broadly.

At best, slicing is an internal toolset that might allow telco operations or product teams (or their vendors) to manage their network resources. For instance, it could be used to separate part of a cell's capacity for FWA, and dynamically adjust that according to demand. It might be used as an "ingredient" to create a higher class of service for enterprise customers, for instance for trucks on a highway, or as part of an "IoT service" sold by MNOs. Public safety users might have an expensive, artisanal "hand-carved" slice which is almost a separate network. Maybe next-gen MVNOs.

(I'm talking proper 3GPP slicing here - not rebranded QoS QCI classes, private APNs, or something that looks like a VLAN, which will probably get marketed as "slices")

But the idea that slicing is itself a *product*, or that application developers or enterprises will "buy a slice" is delusional.

Firstly, slices will be dependent on [good] coverage and network control. A URLLC slice likely won't work reliably indoors, underground, in remote areas, on a train, on a neutral-host network, or while roaming. This has been a basic failure of every differentiated-QoS monetisation concept for many years, and 5G's often-higher frequencies make it worse, not better.

Secondly, there is no mature machinery for buying, selling, testing, supporting, pricing or monitoring slices. No, the 5G Network Exposure Function won't do it all. I haven't met a Slice salesperson yet, or a Slice-procurement team.

Thirdly, a "local slice" of a national 5G network will run headlong into a battle with the desire for separate private/dedicated local 5G networks, which may well be cheaper and easier. It also won't work well with the enterprise's IT/OT/IP domains, out of the box.

There are also many challenges around multi-operator slices, device OS links to slice APIs, slice "boundary controllers" between operators, aligning RAN and core slices, regulatory question marks and much more.

To use an appropriate analogy, consider an actual toaster, with settings for different timing, or a setting for bagels. Now imagine Toaster 5.0 with extra software smarts, perhaps cloud-native. Nobody wants to buy a single slice of toast, or a software profile. They'll just buy a toaster for their kitchen, or get an "integrated breakfast solution" including toast in a cafe. They won't care about the slicing software. The chef might, but it's doubtful.

If you see 5G Network Slicing as a centrepiece of future "monetisation", you're in for an unpleasant smell of burning, and probably a blaring smoke alarm too.


 

Thursday, October 08, 2020

Platform regulation? Are you *sure*?

There's currently a lot of focus on regulation of technology platforms, because of concerns over monopoly power or privacy/data violations.

It's a central focus of the Digital Services Act proposed by the European Commission

It's under scrutiny as part of the US Congress House Judiciary Committee report on antitrust

Other governments also focus on "platforms", especially Amazon, Facebook, Google, Apple and a few others.

Typically, traditional telcos cheer on these moves against companies they (still!) wrongly refer to as "OTTs".

Yet there's a paradox here. While there are indeed concerns about big-tech monopoly abuse that must be addressed by regulators... they're not the only platforms that could be captured by the law.

I've lost count of the times I've heard "the network as a platform", or "5G is a platform", with QoS, network slicing etc often hyped as the basis for the future economy.

Yet telcos can have as much lock-in as Apple or Amazon. I can't get an EE phone service on my Vodafone mobile connection. I can't port-out my call detail records & online behaviour to a new operator. There's no "smart home portability law" if I sign up to my broadband provider's service. Or slice portability laws for enterprises.
 
On my LinkedIn version of this post [link], a GSMA strategist commented that unbundling some telco services "does not solve a customer pain point". Yet unbundling *does* often enable greater competition, innovation & lower consumer prices. You only have to look at the total lack of innovation in MNO/3GPP telephony & messaging services in the last 20 years to see the negative effects of lock-in & too-tight integration here. (VoLTE is not innovative, RCS is regressionary). 
 
Even more awkwardly, most of the mobile industry is currently using the exact same arguments in its push to get vendors to disaggregate the RAN.
 
Want 5G to be a platform? You'll be subject to the rules too. Be careful what you wish for... 
 
(By the way, I first wrote about this issue 6 years ago. The arguments haven't changed much at all since then: https://disruptivewireless.blogspot.com/2014/07/so-called-platform-neutrality-nothing.html )
 

Tuesday, September 15, 2020

Low-latency and 5G URLLC - A naked emperor?

Originally published as a LinkedIn Newsletter Article - see here

I think the low-latency 5G Emperor is almost naked. Not completely starkers, but certainly wearing some unflattering Speedos.

Much of the promise around the 5G – and especially the “ultra-reliable low-latency” URLLC versions of the technology – centres on minimising network round-trip times, for demanding applications and new classes of device.


 

Edge-computing architectures like MEC also often focus on latency as a key reason for adopting regional computing facilities - or even servers at the cell-tower. Similar justifications are being made for LEO satellite constellations.

The famous goal of 1 millisecond is often mentioned, usually in the context of applications like autonomous vehicles with snappy responses, AR/VR headsets without nausea, cloud-gaming, the “tactile Internet” and remote drone/robot control.

(In theory this is for end-to-end "user plane latency" between the user and server, so includes both the "over the air" radio and the backhaul / core network parts of the system. This is also different to a "roundtrip", which is there-and-back time).

Usually, that 1ms objective is accompanied by some irrelevant and inaccurate mention of 20 or 50 billion connected devices by [date X], and perhaps some spurious calculation of trillions of dollars of (claimed) IoT-enabled value. Gaming usually gets a mention too.

I think there are two main problems here:

  • Supply: It’s not clear that most 5G networks and edge-compute will be able to deliver 1ms – or even 10ms – especially over wide areas, or for high-throughput data.
  • Demand: It’s also not clear there’s huge value & demand for 1ms latency, even where it can be delivered. In particular, it’s not obvious that URLLC applications and services can “move the needle” for public MNOs’ revenues.

Supply

Delivering URLLC requires more than just “network slicing” and a programmable core network with a “slicing function”, plus a nearby edge compute node for application-hosting and data processing, whether that's in the 5G network (MEC or AWS Wavelength) or some sort of local cloud node like AWS Outposts. That low-latency slice needs to span the core, the transport network and critically, the radio.

Most people I speak to in the industry look through the lens of the core network slicing or the edge – and perhaps IT systems supporting the 5G infrastructure. There is also sometimes more focus on the UR part than the LL, which actually have different enablers.

Unfortunately, it looks to me as though the core/edge is writing low-latency checks that the radio can’t necessarily cash.

Without going into the abstruse nature of radio channels and frame-structure, it’s enough to note that ultra-low latency means the radio can’t wait to bundle a lot of incoming data into a packet, and then get involved in to-and-fro negotiations with the scheduling system over when to send it.

Instead, it needs to have specific (and ideally short) timed slots in which to transmit/receive low-latency data. This means that it either needs to have lots of capacity reserved as overhead, or the scheduler has to de-prioritise “ordinary” traffic to give “pre-emption” rights to the URLLC loads. Look for terms like Transmission Time Interval (TTI) and grant-free UL transmission to drill into this in more detail.

It’s far from clear that on busy networks, with lots of smartphone or “ordinary” 5G traffic, there can always be a comfortable coexistence of MBB data and more-demanding URLLC. If one user gets their 1ms latency, is it worth disrupting 10 – or 100 – users using their normal applications? That will depend on pricing, as well as other factors.

This gets even harder where the spectrum used is a TDD (time-division duplexing) band, where there’s also another timeslot allocation used for separating up- and down-stream data. It’s a bit easier in FDD (frequency-division) bands, where up- and down-link traffic each gets a dedicated chunk of spectrum, rather than sharing it.

There’s another radio problem here as well – spectrum license terms, especially where bands are shared in some fashion with other technologies and users. For instance, the main “pioneer” band for 5G in much of the world is 3.4-3.8GHz (which is TDD). But current rules – in Europe, and perhaps elsewhere - essentially prohibit the types of frame-structure that would enable URLLC services in that band. We might get to 20ms, or maybe even 10-15ms if everything else stacks up. But 1ms is off the table, unless the regulations change. And of course, by that time the band will be full of smartphone users using lots of ordinary traffic. There may be some Net Neutrality issues around slicing, too.

There's a lot of good discussion - some very technical - on this recent post and comment thread of mine: https://www.linkedin.com/posts/deanbubley_5g-urllc-activity-6711235588730703872-1BVn

Various mmWave bands, however, have enough capacity to be able to cope with URLLC more readily. But as we already know, mmWave cells also have very short range – perhaps just 200 metres or so. We can forget about nationwide – or even full citywide – coverage. And outdoor-to-indoor coverage won’t work either. And if an indoor network is deployed by a 3rd party such as neutral host or roaming partner, it's far from clear that URLLC can work across the boundary.

Sub-1GHz bands, such as 700MHz in Europe, or perhaps refarmed 3G/4G FDD bands such as 1.8GHz, might support URLLC and have decent range/indoor reach. But they’ll have limited capacity, so again coexistence with MBB could be a problem, as MNOs will also want their normal mobile service to work (at scale) indoors and in rural areas too.

What this means is that we will probably get (for the foreseeable future):

  • Moderately Low Latency on wide-area public 5G Networks (perhaps 10-20ms), although where network coverage forces a drop back to 4G, then 30-50ms.
  • Ultra* Low Latency on localised private/enterprise 5G Networks and certain public hotspots (perhaps 5-10ms in 2021-22, then eventually 1-3ms maybe around 2023-24, with Release 17, which also supports deterministic "Time Sensitive Networking" in devices)
  • A promised 2ms on Wi-Fi 6E, when it gets access to big chunks of 6GHz spectrum

This really isn't ideal for all the sci-fi low-latency scenarios I hear around drones, AR games, or the cliched surgeon performing a remote operation while lying on a beach. (There's that Speedo reference, again).

* see the demand section below on whether 1-10ms is really "ultra-low" or just "very low" latency

Demand

Almost 3 years ago, I wrote an earlier article on latency (link), some of which I'll repeat here. The bottom line is that it's not clear that there's a huge range of applications and IoT devices that URLLC will help, and where they do exist they're usually very localised and more likely to use private networks rather than public.

One paragraph I wrote stands out:

I have not seen any analysis that tries to divide the billions of devices, or trillions of dollars, into different cohorts of time-sensitivity. Given the assumptions underpinning a lot of 5G business cases, I’d suggest that this type of work is crucial. Some of these use-cases are slow enough that sending data by 2G is fine (or by mail, in some cases!). Others are so fast they’ll need fibre – or compute capability located locally on-device, or even on-chip, rather than in the cloud, even if it’s an “edge” node.

I still haven't seen any examples of that analysis. So I've tried to do a first pass myself, albeit using subjective judgement rather than hard data*. I've put together what I believe is the first attempted "heatmap" for latency value. It includes both general cloud-compute and IoT, both of which are targeted by 5G and various forms of edge compute. (*get in touch if you'd like to commission me to do a formal project on this)

A lot of the IoT examples I hear about are either long time-series collections of sensor data (for asset performance-management and predictive maintenance), or have fairly loose timing constraints. A farm’s moisture sensors and irrigation pumps don’t need millisecond response times. Conversely, a chemical plant may need to measure and alter pressures or flows in microseconds.

I've looked at time-ranges for latency from microseconds to days, spanning 12 orders of magnitude (see later section for more examples). As I discuss below, not everything hinges on the most-mentioned 1-100 millisecond range, or the 3-30ms subset of it that 5G addresses.

I've then compared those latency "buckets" with distances from 1m to 1000km - 7 orders of magnitude. I could have gone out to geostationary satellites, and down to chip scales, but I'll leave that exercise to the reader.

[Heatmap chart: latency requirement (microseconds to days) vs. distance (1m to 1000km)]

The question for me is - are the three or four "battleground" blocks really that valuable? Is the 2-dimensional Goldilocks zone of not-too-distant / not-too-close and not-too-short / not-too long, really that much of a big deal?

And that's without considering the third dimension of throughput rate. It's one thing having a low-latency "stop the robot now!" message, but quite another doing hyper-realistic AR video for a remote-controlled drone or a long session of "tactile Internet" haptics for a game, played indoors at the edge of a cell.

If you take all those $trillions that people seem to believe are 5G-addressable, what % lies in those areas of the chart? And what are the sensitivities to coverage and pricing, and what substitute risks apply - especially private networks rather than MNO-delivered "slices" that don't even exist yet?

Examples

Here are some more examples of timing needs for a selection of applications and devices. Yes, we can argue some of them, but that's not the point - it's that this supposed magic range of 1-100 milliseconds is not obviously the source of most "industry transformation" or consumer 5G value:

  • Sensors on elevator doors may send sporadic data, to predict slowly-worsening mechanical problems – so an engineer might be sent a month before the normal maintenance visit. Similarly, readings from sensors monitoring a building’s structural condition, vegetation cover in the Amazon, or oceanic acidity aren’t going to shift much month-by-month.
  • A car might download new engine-management software once a week, and upload traffic observations and engine-performance data once a day (maybe waiting to do it over WiFi, in the owner’s garage, as it's not time-critical).
  • A large oil storage tank, or a water well, might have a depth-gauge giving readings once an hour.
  • A temperature sensor and thermostat in an elderly person’s home, to manage health and welfare, might track readings and respond with control messages every 10 minutes. Room temperatures change only slowly.
  • A shared bicycle might report its position every minute – and unlock in under 10 seconds when the user buys access with their smartphone app
  • A payment or security-access tag should check identity and open a door, or confirm a transaction, in a second or two.
  • Voice communication seems laggy with anything longer than 200 milliseconds of latency.
  • A networked video-surveillance system may need to send a facial image, and get a response in 100ms, before the person of interest moves out of camera-shot.
  • An online video-game ISP connection will be considered “low ping” at maybe 50ms latency.
  • A doctor’s endoscope or microsurgery tool might need to respond to controls (and send haptic feedback) 100 times a second – ie every 10ms
  • Teleprotection systems for high-voltage utility grids can demand 6-10ms latency times
  • A rapidly-moving drone may need to react in 2-3 milliseconds to a control signal, or a locally-recognised risk.
  • A sensitive industrial process-control system may need to be able to respond in 10s or 100s of microseconds to avoid damage to finely-calibrated machinery
  • Image sensors and various network sync mechanisms may require response times measured in nanoseconds
  • Photon sensors for various scientific uses may operate at picosecond durations
  • Ultra-fast laser pulses for machining glass or polymers can be measured in femtoseconds

Conclusion

Latency is important, for application developers, enterprises and many classes of IoT device and solution. But we have been spectacularly vague at defining what "low-latency" actually means, and where it's needed.

A lot of what gets discussed in 5G and edge-computing conferences, webinars and marketing documents is either hyped, or is likely to remain undeliverable. A lot of the use-cases can be adequately serviced with 4G mobile, Wi-Fi - or a person on a bicycle delivering a USB memory stick.

What is likely is that average latencies will fall with 5G. An app developer that currently expects a 30-70ms latency on 4G (or probably lower on Wi-Fi) will gradually adapt to 20-40ms on mostly-5G networks and eventually 10-30ms. If it's a smartphone app, they likely won't use URLLC anyway.

Specialised IoT developers in industrial settings will work with specialist providers (maybe MNOs, maybe fully-private networks and automation/integration firms) to hit more challenging targets, where ROI or safety constraints justify the cost. They may get to 1-3ms at some point in the medium term, but it's far from clear they will be contributing massively to MNOs or edge-providers' bottom lines.

As for wide-area URLLC? Haptic gaming from the sofa on 5G, at the edge of the cell? Remote-controlled drones with UHD cameras? Two cars approaching each other on a hill-crest on a country road? That's going to be a challenge for both demand and supply.

Monday, February 24, 2020

3rd Neutral Host Workshop + OpenRAN for shared networks. Early bird still available


NOTE: Owing to uncertainty around the impact of Coronavirus on travel, event attendance, company policies & venues, this workshop has been postponed from 31st March until 7th July. We have contacted existing registered attendees to discuss the options.

On July 7th (postponed from March 31st) I'll be running my 3rd public workshop on Neutral Host Networks in central London, together with colleague Peter Curnow-Ford.

As well as covering the basics of new wholesale/sharing models for MNOs, both with and without dedicated spectrum, we will also be looking more closely at the fit between NHNs and new virtualised vRAN / OpenRAN technologies. 

We'll cover all the various use-cases: metro-area network densification, indoor systems for various venues, road/rail coverage, rural wholesale models, FWA and more. 

The links (and differences) between neutral-host and private LTE/5G will be discussed, as well as alternative models such as multi-MNO sharing or national roaming. (see this post for some previous thoughts on this)

Different countries' competitive, regulatory and spectrum positions will be covered, to assess how that will impact the evolution of NHNs. 
 
Early bird pricing is available before June 7th.

Full details and registration are available here

Friday, January 03, 2020

Predictions for the next decade: looking out to 2030 for telecoms, wireless & adjacent technologies


It's tempting to emulate every other analyst & commentator and write a list of 2020 predictions of success and failure. In fact, I got part-way into a set of bulletpoints about what’s overhyped and underhyped. 

But to be honest, if you read my articles and tweets, you probably know what I think about 2020 already. Private cellular networks will be important (4G, initially). 5G fixed wireless is interesting and will grow the FWA market - but won't replace fibre. 5G is Just Another G and is overhyped, especially until the new core matures. RCS is still a worthless zombie, eating brains. But I don't need to repeat all this in detail, just because I'm a bit more sharp-worded than most observers. It wouldn't tell you much new.

But seeing as I spend a fair amount of time advising clients about the longer-term future, 5-10 years out or even further, I thought I'd set my sights higher. I use the term "telco-futurism" to look at the impacts of technology and broader society on telecoms, and vice versa.

So, at the start of the 2020s, what about the next decade? Assuming I haven't retired to my palatial Mars-orbiting private Moon in 10 years' time, what do I think I'll be writing, podcasting (or neural-transmitting) about in 2030?

So, let's have a few shots at this more-distant target...

  • 6G: In 2030, the first 6G networks are already gaining traction in the marketplace. The first users are still fixed connections to homes, and personal devices that look a bit similar to phones and wearables, but with a variety of new display and UI technologies, including contact lenses and advanced audio/haptic interfaces. 6G represents the maturing of various 5G concepts (such as the new core), plus greater intelligence to allow efficient operation. 
  • Details, details: Much of the 2020s will have been spent dealing with numerous "back-office" problems that have stopped many early 5G visions becoming real. Network-slicing will have thrown up huge operationalisation and security issues. Dealing with QoS/slice roaming or handoff, at borders between networks (outdoor / indoor / private / neutral / international) will be hugely complex. Edge computing scenarios will turn out to need local peering or interconnection points. All of these will have huge extra complexities with billing, pricing and monitoring. mmWave planning and design tools will need to have matured, as well as the processes for installation and operation. Training and skills for all of this will have been time-consuming and expensive - we'll need hundreds of thousands of experts - often multi-domain experts. By the time all these issues get properly fixed, 6G radios and vendors will exploit them, rather than the "legacy 5G" infrastructure. See this post for my discussion about the telecom industry's problems with accurate timelines.
  • Device-Network cooperation: By 2030, mobile ecosystems and control software will break today's silos between radio network, devices and applications much more effectively. Sensors in users' devices, cell-towers and elsewhere will be linked to AI which works out how, why and where people or IoT objects need connectivity and how best to deliver it. Recognise a moving truck with machine-vision, and bounce signals off it opportunistically. Work out that someone is approaching the front of a building, and pre-emptively look for Wi-Fi, or negotiate with the in-building neutral host on a marketplace before they enter the door. Spot behavioural patterns such as driving the same route to work, and optimise connectivity accordingly. Recognise a low battery, and tweak the "best-connected" algorithm for power efficiency, and downrate apps' energy demand. Integrate with crowd-flow patterns or weather forecasts. There will be thousands of ways to improve operations if networks stop just thinking of a "terminal" as just an endpoint, and look for external sources of operational data - that's a 20th Century approach. Expect Google's work on its Fi MVNO & Android/Pixel phones, and similar efforts by Samsung and maybe Apple, Qualcomm and ARM, to have driven much of this cross-domain evolution.
  • Energy-aware networks: Far more energy-awareness will be designed into all aspects of the network, cloud and device/app ecosystem. I'm not predicting some sort of monolithic and integrated cascading-payments system linked into CO2-taxes, but I expect "energy budget" to be linked much more closely to costs (including externalities) in different areas. How best to optimise wired/wireless data for power demand, where best to charge devices, "scavenging" for power and so on. Maybe even "nudge" people to lower-energy applications or consumption behaviours by including "power-shaming" indicators. If 3GPP and governments get their act together, as well as vendors & CSPs, overall 6G energy use will be a higher priority design-goal than throughput speed and latency.
  • Wi-Fi: We'll probably be on Wi-Fi 9 by 2030. It will continue to dominate connectivity inside buildings, especially homes and business premises with FTTX broadband (i.e. most of them in developed markets). It will continue to be used for primary connectivity on high-throughput / low-margin / low-mobility devices like TVs and display screens, PC-type devices, AR/VR headsets and so on. It will be bonded together with 5G/6G and other technologies with ever-better multi-path mechanisms, including ad-hoc device meshes. Ease of use will have improved, with the success of approaches like OpenRoaming. Fairly little public Wi-Fi will be delivered by "service providers" as we think of them today.  We'll probably still have to suffer the "6G will kill Wi-Fi" pundit-pieces and hype, though.
  • Spectrum: The spectrum world changes slowly at a global level, thanks to the glacial 4-year cycle of ITU WRCs. By 2030 we will have had 2023 and 2027 conferences, which will probably harmonise more spectrum for 5G/6G, satellites & high-altitude platforms (HAPS) and Wi-Fi type unlicensed use. The more interesting developments will occur at national / regional levels, below the ITU's role, in how these bands actually get released / authorised - and especially whether that's for localised or shared usage suitable for private networks and other innovators. By 2030 we should have been through 2+ cycles of US CBRS and UK/Germany/Japan/France style local licensing experiments, allocation methods, databases and sensing systems. I think we'll be closer to some of the "spectrum-as-a-service" models and marketplaces I've been discussing over the last 24 months, with more fluid resale and temporary usage permits. International allocations will still differ though. We will also see whether other options, such as "national licenses with lots of extra conditions" (eg MVNO access, rural coverage, sharing, power use etc) has helped maintain today's style of MNOs, despite the grumbling. We will also see much more opportunism and flexibility in band support in silicon/devices, and more sophisticated approaches to in-band sharing between different technologies. I'm less certain whether we will have progressed much with commercialisation of mmWave bands 20-100GHz, especially for mobile and indoor use. It's possible and we'll certainly see lots of R&D, but the practicalities may prove insuperable for wide usage.
  • Private/neutral cellular: Today, there are around 1,000 MNOs globally (public and private). By 2030, I'd expect there to be between 100,000 and a million networks, probably with various new types of service provider, aggregation hubs and consortia. These will span industrial, city, office, rural, utility, "public venue" and many other domains. It will be increasingly hard to distinguish private from public, eg with MNOs' campus networks with private cores and hybrid public/private spectrum. We might even get another zero, if the goals of making private 4G/5G as easy and cheap to build as Wi-Fi prove feasible, although I have doubts. Most of these networks will be user-specific, but a decent fraction will be multi-tenant, either offering wholesale access or roaming to "legacy MNOs" as neutral hosts, or with some sort of landlord model such as a property company running a network with each occupied floor or building on campus as a "semi-private" network. Some such networks will look like micro-telcos (eg an airport providing access to caterers & airlines) and will need billing, management & security tools - and perhaps new forms of regulation. This massive new domain will help catalyse various shifts in the vendor community as well - especially cloud-native core and BSS/OSS, and probably various forms of open RAN, and also "neutral edge".
  • Security & privacy: I'm not a security expert, so I hesitate to imagine the risks and responses 10 years out. Both good and bad guys will be armed to the teeth with AI. We'll see networks attacked physically as well as logically. We'll see sophisticated thefts of credentials and what we quaintly term "secrets" today. There will be cameras and mics everywhere. Quantum threats may compromise encryption - and other quantum tools may enhance it, as well as provide new forms of identity and authentication. We will need to be wary of threats within core networks, especially where orchestration and oversight is automated. I think we will be wise to avoid "monocultures" of technologies at various levels of the network - we need to trade off efficiency and scale vs. resilience.
  • Satellite / HAPS: We'll definitely have more satellite constellations by 2030, including some huge ones from SpaceX or others. I have my doubts that they will be "game-changers" in terms of our overall broadband use, except in rural/remote areas. They won't have the capacity of terrestrial networks, and signals will struggle with indoor penetration and uplink from anything battery-powered. Vehicles, planes, boats and remote IoT will be much better-connected, though. Space junk & cascading-collision scenarios like the movie Gravity will be a worry. I'm not sure about drones and balloons as HAPS for mass-market use, although I suspect they'll have some cool applications we don't know today.
  • Cloud & edge: Let's get one thing clear - the bulk of the world's computing cycles & data storage will continue to occur in massive datacentres (perhaps heading towards a terawatt of aggregate power by 2030) and on devices themselves, or nearby gateways. But there will be a thriving mid-market of different sorts of "edge" as I've covered in many posts and presentations recently. This will partly be about low-latency, but not as much as most people think. It will be more about saving mass data-transport costs, protecting "data sovereignty" and perhaps optimising energy consumption. A certain amount will be inside telcos' networks, but without localised peering / aggregation this will be fairly niche, or else it will be wholesaled out to the big cloud players. There will be a lot of value in the overall orchestration of compute tasks for applications between multiple locations in the ecosystem, from chip-level to hyperscale and back again. The fundamental physical quantum of much edge compute will be mundane: a 40ft shipping container, plonked down near sources of power and fibre.
  • Multi-network: We should expect all connectivity to be "software-defined" and "multi-network". Devices will have lots of radios, connecting simultaneously, with different paths and providers (and multiple eSIM / other identities). Buildings will have multiple fibres, wireless connections and management tools. Device-to-device connections and relaying will be prevalent. IoT will use a selection of LPWAN technologies as well as Wi-Fi, cellular and short-range connections. Satellite and maybe LiFi (light-based) connections will play new roles. Arbitrage, bonding, load-balancing will occur at multiple levels from silicon to OS to gateway to mid-network. Very few things will be locked to a single network or provider - unless it has unique value such as managed security or power consumption.
  • Voice & messaging: Telephony will be 150 years old in 2026. By 2030 we'll still be making some retro-style "phone calls", although it will seem even more clunky, interruptive, unnatural and primitive than today. (It won't stop the cellular industry spending billions upgrading to Vo6G though). SMS won't have disappeared, either. But most consumers will communicate through a broad variety of voice and video interaction models, in-app, group-based, mediated by an array of assistants, and veracity-checked to avoid "fake voice" and man-in-the-middle attacks of ever-increasing subtlety. Another 10 years of evolution beyond emojis, stories, filters and live broadcasts will allow communication which is expressive, emotion-first, and perhaps even richer and more nuanced than in-person body language. I'm not sure about AR/VR comms, although it will still be more important than RCS which will no doubt be celebrating its 23rd year of irrelevance, hype and refusal to die.
  • Enterprise comms: UCaaS, cPaaS and related collaboration tools will progress steadily, if unspectacularly - although with ever more cloud focus. There will be more video, more AI-enriched experiences for knowledge management, translation, whispered coaching and search. There will be attempts to reduce travel to meetings and events as carbon taxes bite, although few will come close to the in-person experience or effectiveness. We'll still have some legacy phone calls and numbers (as with consumer communications) although these will be progressively pushed to the margins of B2B and E2E interactions. Ever more communications will take place "contextually" - within apps, natively supported in IoT devices, or with AI-based assistants. Contact centres and customer interactions will be battlegrounds for bots and assistants on both sides. ("Alexa, renegotiate my subscription for a better price - you have permission to emulate my voice"). Security and verification will be highly prized - just because something is heard doesn't mean it will match what was originally spoken.
  • Network ownership models: Some networks of today will still look mostly like "telcos" in 2030, but as I wrote in this post the first industry to be transformed by 5G will be the telecom industry itself. We'll see many new stakeholders, some of which look like SPs, some which are private network operators, and many new forms of aggregator, virtual operator, wholesale or neutral mobile/fibre provider. I'm not expecting a major shift back to nationalised or government-run networks, but I think regulations will favour more sharing of assets where it makes sense. Individual industries will take control of their own connectivity and communications, perhaps using standardised 5G, or mild variations of it. There will be major telcos of today still around - but most will not be providing "slices" to companies and offering deep cross-vertical managed services. There will be M&A which means that we'll have a much more heterogeneous telco/CSP market by 2030 than today's 800 identikit national MNOs. Fixed and fibre providers will be diverse as well - especially with the addition of cloud, utility and municipal providers. I think the towerco / property-telco model will be important as asset owners / builders as well.
I realise that I could go on at length about many other topics here - autonomous and connected vehicles, the future of cities and socio-political spheres, shifts in entertainment models, the second wave of blockchain/ledgers, the role of human enhancement & biotech, new sources of energy and environmental technology, new forms of regulation and so forth. But this list is already long enough, I think. Various of these topics will also appear in podcasts - which I'm intending to ramp up in 2020. At the moment I'm on SoundCloud (link) but watch out here or on Twitter for announcements of other platforms.

If this has piqued your interest, please comment on my blog or LinkedIn article. This is a vision for 2030, which I hope is self-consistent and reasonable - but it is not the only plausible future scenario.

If you're interested in running a private workshop to discuss, debate and strategise around any of these topics, please get in touch via private message, or information AT disruptive-analysis DOT com. I work with numerous operators, vendors, regulators, industry bodies and investors to imagine the future of networks and other advanced technologies - and steer the path of evolution.

Happy New Year! (and New Decade)