Dean Bubley's Disruptive Wireless: Thought-leading wireless industry analysis
This post originally appeared in September 2023 on my LinkedIn feed, which is now my main platform for both short posts and longer-form articles. It can be found here, along with the comment stream. Please follow / connect to me on LinkedIn, to receive regular updates (about 1-3 / week)
One line I heard yesterday at #ConnectedBritain that really struck me came from BT Group Network/Security head Howard Watson during his keynote.
He was hoping #6G arrived later rather than earlier, "For the Brisbane Olympics, not LA", ie 2032.
This is not the first time I've heard an MNO exec expressing a desire to let #5G
run longer, before 6G prompts more Capex and infrastructure changes.
They want to get payback on existing investments before thinking about
the next round.
This is unsurprising. The industry itself now
recognises that it overhyped 5G before launch, and completely forgot to
mention that it would arrive in phases, with all the "cool stuff" really
only arriving in later versions, with the features in 3GPP Releases 16,
17 & 18.
Instead, we started with 4G++ (ie non-standalone
5G, with sometimes higher speeds but not much else) and then the first
versions of "proper 5G" with the Release 15 standalone cloud-native
core.
5G SA gives somewhat lower latency, and some rudimentary
QoS and other features, but it's far from the ubiquitous millisecond /
gigabit / slicing nirvana that everyone promised in 2018.
I was skeptical from the beginning - and I'm still a "slice denier". (I think #networkslicing
remains a critical strategic error and distraction for the industry).
But my view is that the really useful stuff in 5G - such as
time-sensitive networking, RedCap and vertical-specific elements such
as FRMCS for railways - is still a long way from mainstream.
So I
can understand that MNOs look at the proposed 6G timeline of 2030, and
think "we're still making heavy work of moving to cloud-native 5G
standalone. How are we going to do successive iterations of R15 SA,
R16, R17, R18, R19... and make money, all within 6 years?"
[Note:
technically 6G should start with Release 21, but based on past
experience we'll see R20, or maybe even R19, marketed as 6G by some
MNOs]
There is a possible uncomfortable answer that's starting to
get discussed quietly. What if 6G isn't primarily about MNOs, at least
at first?
6G will happen in 2030, one way or another. The world's
universities and R&D labs aren't going to down tools for two years,
while MNOs are still trying to "monetise" 5G. There will be a bunch of
technologies and standards that get called IMT2030 / 6G.
There might even be multiple standards, either because of geopolitics leading to regional versions, or because my niggling of IEEE and Wi-Fi Alliance eventually prompts them to submit a candidate 6G technology (#WiFi 9 or 10, I guess).
So
the question then becomes - will traditional MNOs be the main buyers of
6G in the 2028-2030 timeframe? Or will it be enterprises, new-entrant
and niche MNOs, infracos, neutral-hosts, satcos, governments and others
building greenfield wireless networks?
Is the failure of 5G to
live up to inflated expectations actually going to be the pivot point
for the (slow) demise of the legacy MNO model? Are we watching #pathdependency effects in play?
This post originally appeared on October 2 on my LinkedIn newsletter, which is now my main platform for both short posts and longer-form articles. It can be found here, along with the comment stream. Please subscribe / connect to me on LinkedIn, to receive regular updates (about 1-3 / week)
Note: This article has been commissioned by the Dynamic Spectrum Alliance,
based on my existing well-known analysis and positions, which I have
been discussing for many years both publicly and privately. I believe
that in-building wireless - irrespective of technology - receives far
too little attention from policymakers and regulators. 6GHz should be
indoor-primary spectrum.
Abstract & summary: The vast bulk of
wireless data traffic today is for indoor applications. In future,
in-building wireless will become even more important. It is
ideally-suited to 6GHz spectrum, made available on an unlicensed basis. A
licensed model for 5G mobile in 6GHz would be unable to deliver
coverage consistently for more than a small number of sites.
Indoor wireless is already critical & often overlooked
Industry estimates suggest that 60-80% of cellular data is
delivered to indoor users, predominantly on smartphones. Additional
statistics show that smartphones also typically consume
another 2-5x the cellular data volume on Wi-Fi, almost all of which is
indoors or in vehicles. In other words, 90%+ of total smartphone data is consumed inside buildings.
In addition, residential fixed broadband traffic volumes are
roughly 10-20x that of mobile broadband, with final delivery mostly over
Wi-Fi, often to non-smartphone devices such as smart TVs, laptops, game
consoles and voice assistants.
Outside the consumer market, a great deal of non-residential wireless connectivity is also indoors
– healthcare, education, manufacturing, conventions, hospitality and
office environments are all increasingly dependent on wireless,
especially with the rise of industrial automation systems, IoT, robots,
connected cameras and displays. These map to the rise in cloud- and
video-based business processes.
Most wireless uses & devices are indoor-centric
This does not imply that outdoor wireless use is either trivial
or unimportant. Most obviously, everyone uses their phones for calling,
messaging, mapping and various transport and other apps while
on-the-move. Vehicle connectivity is becoming essential, as well as
wireless use for safety, utilities and smart-city infrastructure. Some
sectors such as agriculture, logistics and construction are
predominantly outdoor-oriented, albeit often at specific locations and
sites.
But to a rough approximation, if 80%+ of wireless use is indoors, then 80%+ of economic and social benefit of wireless will accrue indoors as well. This applies irrespective of the technology involved – Wi-Fi, 4G/5G cellular, or even Bluetooth.
Future growth of indoor wireless
The demands for indoor connectivity are likely to grow in both
scale and scope in coming years. There will be huge demand for
high-throughput, low-latency access for both consumer and enterprise
use-cases.
Gigabit broadband, especially delivered with fibre, is
becoming the default for both residential and business premises. In the
medium term, we can expect 10Gig services to become more common as well.
In many cases, the bottleneck is now inside the building, and local wireless systems need to keep pace with the access network.
There is a growing array of demanding devices and applications connected inside homes and enterprise
premises: 4K and 8K screens, automation systems, healthcare products,
AR/VR systems, cameras for security and industrial purposes, robots and
much more.
Wireless devices will increasingly be located in any room or space inside a building, including bedrooms, garages, basements, meeting rooms, factory-floors and hospital operating theatres.
The density of devices
per-building or per-room will increase exponentially. While some will
be low-traffic products such as sensors, ever more appliances and
systems will feature screens, cameras and cloud/AI capabilities
demanding greater network performance.
There will be growing emphasis on the efficiency
of networks, in terms of both energy and spectrum usage. “Blasting
through walls” with wireless signals will be viewed negatively on both
counts.
Yet only some policymakers and regulators have explicit focus on indoor wireless in their broadband and spectrum policies. There has been some positive movement recently,
with regulators in markets such as the UK, Germany, Canada and Saudi
Arabia addressing the requirements. But it is now time for all governments
and regulators to specifically address indoor wireless needs – and
acknowledge the need for more spectrum, especially if they eventually
want to achieve “gigabit to each room” as a policy goal.
Wi-Fi can satisfy indoor requirements, but needs 6GHz
Almost all indoor devices discussed here have Wi-Fi
capabilities. A subset have 5G cellular radios as well. Very few are
5G-only. This situation is unlikely to change much, especially with a
5-10 year view.
Yet Wi-Fi faces a significant limit to its performance,
if it just has access to traditional 2.4GHz and 5GHz bands. Not only
are these limited in frequency range, but they also have a wide variety
of legacy devices, using multiple technologies, that must coexist with
any new systems.
While mesh systems have helped extend the reach to all rooms in a
home, and Wi-Fi 6 brings new techniques to improve performance and
device density in consumer and enterprise settings, much more will be
required in future.
Now, the Wi-Fi 6E and 7 generations are able to use the 6GHz band.
This adds up to 1.2GHz of extra spectrum, with almost no sources of
interference indoors, and almost no risk of indoor use creating extra
interference to incumbent outdoor users, especially at lower power levels.
6GHz Wi-Fi would be able to address all the future requirements
discussed in the previous section, as well as reducing system latency,
improving indoor mobility and providing greater guarantees of QoS /
reliability.
6GHz 5G is unsuitable for indoor use, and of limited use outdoors
By contrast, 6GHz is a poor fit for indoor 5G.
Most buildings will be unable to use outdoor-to-indoor propagation
reliably, given huge propagation challenges through walls. This would be
hugely wasteful of both energy and spectrum resource anyway. This
situation will worsen in future as well, with greater use of insulated
construction materials and glass.
That leaves dedicated indoor systems such as small cells or
distributed antenna or radio systems. Current DAS systems cannot support
6GHz radios – most struggle even with 3.5GHz. It may be possible to
upgrade some of the more advanced systems with new radio heads, but few
building owners would be willing to pay, and almost no MNOs would. In
any case, only a fraction of buildings have indoor cellular systems, especially beyond the top tier of shopping malls, airports and other large venues.
The industry lacks the human and financial resources to implement new 6GHz-capable indoor systems in more than a tiny proportion of the millions of buildings worldwide, especially residential homes and small businesses.
Enabling public 5G services to work reliably indoors with 6GHz is therefore a decade-long project, at least. It would likely be the mid-2030s before 5G (or 6G) devices could routinely use 6GHz inside buildings.
Lobbyists' estimates of the notional GDP uplift from IMT use of the
band ignore both the timing and the practical challenges for indoor
applications. A very heavy discount should be applied to any such
calculations, even if the baseline assumptions are seen as credible.
Private 5G systems in factories or warehouses could
theoretically use 6GHz licensed cellular, but most developed countries
now have alternative bands being made available on a localised basis,
such as CBRS, 3.8-4.2GHz or 4.9GHz. Many countries also have (unused)
mmWave options for indoor private 5G networks. In theory, 5G systems
could also use an unlicensed 6GHz band for private networks, although
previous unlicensed 4G variants in 5GHz never gained much market
traction.
It is worth noting that there are also very few obvious use-cases for outdoor, exclusive-licensed 6GHz for 5G
either, beyond a generic increment in capacity, which could also be
provided by network densification or other alternative bands. Most
markets still have significant headroom in midband 3-5GHz spectrum for
5G, especially if small cells are deployed. The most-dense environments
in urban areas could also exploit the large amount of mmWave spectrum
made available for cellular use, typically in the 24-28GHz range, which
is already in some handsets and is still mostly unused.
Conclusions
Regulators and policymakers need to specifically analyse the use and supply/demand for indoor wireless,
and consider the best spectrum and technology options for such
applications and devices. Analysis will show that in-building wireless
accounts for the vast bulk of economic and social benefits from
connectivity.
This is best delivered by using Wi-Fi, which is
already supported by almost all relevant device types. With the
addition of 6GHz, it can address the future expected growth delivered by
FTTX broadband, as well as video, cloud and AR/VR applications.
The ultra-demanding uses that specifically require cellular indoors can use existing bands
with enhanced small cells and distributed radios, neutral-host
networks, or private 5G networks in the 3-5GHz range. There are also
ample mmWave allocations for 5G.
A final fundamental element here is timing. 6GHz Wi-Fi chipsets and user devices are already shipping
in their hundreds of millions. Access points are widely available today and
becoming more sophisticated with Wi-Fi 7 and future 8+ versions. By
contrast, 5G/6G use of the band for indoor use is unlikely until well into the next decade, if at all.
Indoor wireless is critically important, growing, and needs Wi-Fi.
This post originally appeared on June 7 on my LinkedIn feed, which is now my main platform for both short posts and longer-form articles. It can be found here, along with the comment stream. Please follow / connect to me on LinkedIn, to receive regular updates (about 1-3 / week)
I'm in Brussels this week at the Forum Europe European Spectrum Management Conference.
There's a lot to discuss, especially around #6GHz and 3.8-4.2GHz and the role of unlicensed and local/shared bands, as well as the upcoming World Radiocommunication Conference (WRC-23).
I'll
have more to say, but here I just want to highlight one particular
theme that has been evident over the last couple of days: the tone of
the satellite sector, which is here in force, especially with GSOA and Intelsat.
In the past at these #spectrum events, the #satellite industry has turned up with a familiar script:
"Hi,
we're from the satellite industry. Please don't take our spectrum. We
help with defence, aviation & connecting the unconnected. Please
don't take our spectrum. We work tightly with the mobile industry, doing
backhaul & IoT and timing sync. They're our friends & vice
versa. Oh, and did we mention our spectrum? Please don't take any more
of it"
But this time, it's different. The message is now closer to:
"We're
doing all this cool new stuff, including for wireless broadband, direct
to device and defence. So actually, we want to keep all our spectrum.
And maybe give back the old #mmWave
spectrum you took years ago, that the mobile industry hasn't even used.
Seriously, you want *more* spectrum to be taken from us and
pre-allocated to 6G now? Are you having a laugh?"
There was a
whole panel on direct-to-device, and satellite has fought its corner on
the upper 6GHz (it can coexist with low/medium power WiFi, but not high
power 5G) and fixed satellite links in the 4GHz band. The future-looking 6G
panel started a fierce debate on 7-24GHz, which covers several of the
satellite incumbents' bands.
There have been a few references to South
Korea's regulator reclaiming unused 28GHz licenses from MNOs that
haven't used the band. And there's a broad opinion that mobile/IMT is
not a friendly partner for spectrum-sharing, at least for national MNO
macro networks at full power. (Local private networks are OK-ish, it
seems).
"An IMT identification is an eviction notice - the incumbents must leave".
"It's
disingenuous to discuss coexistence studies - we've been here before
and know how it ends. It's not our first rodeo with the mobile industry"
Now clearly this year, in the last few months before WRC-23, is when arguments get more vigorous. But some of the stuff at the #EUspectrum
event has been seriously punchy - Intelsat asked whether Europe should
be focused on primacy in an amorphous "race to 6G" or a more
geopolitically-crucial "space race".
My view is that the #5G
industry is seeing some chickens coming home to roost at the moment. It
overpromised Release 18 features on Release 15 timelines, got mmWave
spectrum years before it could be exploited, and has left politicians
and regulators with egg on their faces.
Meanwhile, the satellite
sector is positioning itself as super-cool and important. It has a
swagger that is being noticed by policymakers, and for good reason.
This post originally appeared on my LinkedIn feed, which is now my main platform for both short posts and longer-form articles. It can be found here, along with the comment stream. Please follow / subscribe to receive regular updates (about 1-3 / week)
Following on from my (rather controversial) post the other day about #6G and #IMT2030 needing to be indoor-primary and also have an IEEE / #WiFi candidate, I'm now going to *further* annoy various people.
There's a lot of talk about 6G being a "network of networks". This follows on from previous similar themes about #convergence and #HetNets. At one level I agree, but I think there needs to be a perspective shift.
There
has been a long string of attempts to blend Wi-Fi and cellular, going
all the way back to UMA in the 2G/3G era around 2005. (I was a
vociferous critic).
There's been an alphabet-zoo of acronyms
covering 3GPP gateway functions or selection/offload approaches - GAN,
ANDSF, TWAG, N3IWF, ATSSS - and probably others I've forgotten. From the
Wi-Fi side there's been Hotspot 2.0 and others. More recently we've
seen an attempt to bridge fixed and mobile networks, even going as far
as pitching 3GPP-type cores for fixed ISPs.
Pretty much all of
these have failed to gain traction. They've had limited deployments and
successes here and there, but nobody can claim that true "converged
wireless" is ubiquitous or even common. 99% of WiFi has no connection to
cellular. Genuine "offload" is tiny.
But despite this, the 6G
R&D and vision seems to be looking to do it all over again. This
phrase "network of networks" cropped up regularly at the 6GWorld #6GSymposium events I attended this week. It now usually includes integrating #satellite or non-terrestrial (NTN) capabilities as much as Wi-Fi.
But
there's a bit of an unstated assumption I think needs to be challenged.
There seems to be unquestioned acceptance that the convergence layer -
or perhaps the "umbrella" sheltering all the various technologies - is
necessarily the 3GPP core network.
I think this is a problem.
Many of the new and emerging 6G stakeholders (for instance enterprises,
satellite operators, or fixed providers) do not understand 3GPP cores,
nor have the almost religious devotion to that model common in the
legacy cellular sector.
So I think any "convergence" in IMT2030
must be defined as bi-directional. Yes, Wi-Fi and satellite can slot
into a 3GPP umbrella. But satellite operators need to be able to add
terrestrial 6G as an add-on to their systems, while Wi-Fi controllers
(on-prem or cloud based) should be able to look after "naked"
(core-free) 3GPP radios where appropriate.
This would also flow
through to authentication methods, spectrum coordination and so on. Also
it should get reflected in government policy & regulation.
My view is that 3GPP-led convergence has largely failed. Maybe it gets fixed in 5G/6G eras, but maybe it won't. We need #5G and 6G systems to have both northbound and southbound integration options.
I
also think we need to recognise that "convergence" is itself only one
example of "combination" of networks. There are numerous other models,
such as bonding or hybrids that connect 2+ separate networks in software
or hardware.
This post originally appeared on my LinkedIn feed, which is now my main platform for both short posts and longer-form articles. It can be found here, along with the comment stream. Please follow / subscribe to receive regular updates (about 1-3 / week)
I'm giving a lot of thought to #6G
design goals, priorities & technology / policy choices. Important
decisions are coming up. I'll be exploring them in coming weeks and
months. Two important ones I see:
The
first one is self-evident. The vast bulk of mobile use - and an
even-larger % of total wireless use - is indoors. It's inside homes,
offices, schools, factories, warehouses, public spaces like malls and
stadia - as well as inside vehicles like trains. Even outdoors, a large %
of usage is on private sites like industrial complexes or hospital
campuses.
Roughly 80% of mobile use is indoors - more if you
include wireless streaming to smart TVs and laptops/tablets. By the
2030s 6G era, there will be more indoor wireless use for #industrialautomation, #gaming, education, healthcare, #robotics and #AR / #VR / #metaverse and so on.
This
implies that economic, social, welfare and cultural upsides will be
indoor-primary. 80%+ of any GDP uplift will be indoor-generated. This
suggests 6G tech design & standards - and associated business models
and regulation - should be indoor-oriented too.
The IEEE / #WiFi
idea follows on from this. The default indoor wireless tech today is
Wi-Fi. There is a lot of indoor cellular use, but 5G is currently
poorly supported indoors - and certainly not everywhere.
While 5G and future 6G indoor #smallcells, #neutralhost
and repeaters / DAS are evolving fast, *nobody* expects true ubiquity.
Indoor cellular will remain patchy, especially multi-operator. And many
devices (eg TVs) don't have cellular radios anyway.
This means that WiFi - likely future #WiFi8 and #WiFi9
- will remain central to in-building connectivity in the 6G era, no
matter how good the tech for reconfigurable surfaces or other cellular
innovations become.
IEEE decided not to pitch WiFi6 formally for
5G / IMT2020, but instead just showed that it surpassed all the metrics. But
"we could have done it if we wanted" isn't good enough. There are no
government-funded "WiFi Testbed Programs" or "WiFi Innovation Centres of
Excellence" because of this lower visibility.
Governments are
ITU members and listen to it. If policymakers want the benefits of full
connectivity, they need to support it with spectrum, targets and
funding, across *all* indoor options.
And if the WiFi industry
wants full / easy access to new resources, it needs to be an official 6G
/ IMT2030 technology. It needs access to IMT licensed spectrum,
especially for local licenses with AFC.
This idea will be very
unpopular among both cellular industry (3GPP pretends it is the "keeper
of the G's") and the WiFi sector, which sees it as a lot of extra work
& politics.
But I think it's essential for IMT2030 to
embrace network diversity, plus ownership- & business-model
diversity as central elements of 6G.
Sometimes, upgrading the network isn't the answer to every problem.
For as long as I can remember, the telecom industry has talked about
quality-of-service, both on fixed and mobile networks. There has always
been discussion around "fast lanes", "bit-rate guarantees" and more
recently "network slicing". Videoconferencing and VoIP were touted as
needing priority QoS, for instance.
There have also always been predictions about future needs of
innovative applications, which would at a minimum need much higher
downlink and uplink speeds (justifying the next generation of access
technology), but also often tighter requirements on latency or
predictability.
Cloud gaming would need millisecond-level latency, connected cars would send terabytes of data across the network and so on.
We see it again today, with predictions for metaverse applications
adding yet more zeroes - we'll have 8K screens in front of our eyes,
running at 120 frames per second, with Gbps speeds and the sub-millisecond
latencies needed to avoid nausea or other nasty effects. So we'll need 6G
to be designed to cope.
The issue is that many in the network industry often don't realise
that not every technical problem needs a network-based solution, with
smarter core network policies and controls, or huge extra capacity over
the radio-network (and the attendant extra spectrum and sites to go with
it).
Often, there are other non-network solutions that achieve (roughly)
the same effects and outcomes. There's a mix of approaches, each with
different levels of sophistication and practicality. Some are elegant
technical designs. Others are best described as "Heath Robinson" or
"MacGyver" approaches, depending on which side of the Atlantic you live.
I think they can be classified into four groups:
Software: Most obviously, a lot of data can
be compressed. Buffers can be used to smooth out fluctuations. Clever
techniques can correct for dropped or delayed packets. There's a lot
more going on here though - some examples are described below.
Hardware / physical:
Some problems have a "real world" workaround. Sending someone a USB
memory stick is a (high latency) alternative to sending large volumes of
data across a network. Phones with dual SIM-slots (or, now, eSIM
profiles) allow coverage gaps or excess costs to be arbitraged.
Architectural:
What's better? One expensive QoS-managed connection, or two cheaper
unmanaged ones bonded together or used for diverse routing? The success
of SDWAN provides a clue. Another example is the use of onboard compute
(and Moore's Law) in vehicles, rather than processing telemetry data in
the cloud or network-edge. In-built sound and image recognition in smart
speakers or phones is a similar approach to distributed-compute
architecture. That may have an extra benefit of privacy, too.
Behavioural: The
other set of workarounds exploits human psychology. Setting expectations -
or warning of possible glitches - is often preferable to fixing or
apologising for problems after they occur. Skype was one of the first
communications apps to warn of dodgy connections - and also had the
ability to reconnect when the network performance improved. Compare that
with a normal PSTN/VoLTE call drop - it might have network QoS, but if
you lose signal in an elevator, you won't get a warning, apology or a
simplified reconnection.
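As a rough illustration of the hardware/physical point above, it's easy to work out the effective throughput of posting storage instead of streaming it. The figures below (a 1TB stick, a 24-hour courier run) are purely illustrative assumptions, not measurements:

```python
# Back-of-the-envelope: effective throughput of the "memory stick in
# the post" workaround. All figures are illustrative assumptions.

def sneakernet_mbps(capacity_gb: float, transit_hours: float) -> float:
    """Effective throughput (Mbit/s) of physically shipping storage."""
    bits = capacity_gb * 8e9          # decimal gigabytes to bits
    seconds = transit_hours * 3600
    return bits / seconds / 1e6       # convert bit/s to Mbit/s

# A 1 TB stick on a 24-hour courier run sustains roughly 93 Mbit/s -
# very high latency, but respectable bandwidth.
rate = sneakernet_mbps(1000, 24)
```

The point isn't that anyone should build products this way; it's that the "network-only" framing of capacity problems misses how cheap some of the alternatives are.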
These aren't cure-alls. Obviously if you're running a factory, you'd
prefer not to have the automation system cough politely and quietly
tell you to expect some downtime because of a network issue. And we
certainly *will* need more bandwidth for some future immersive
experiences, especially for uplink video in mixed reality.
But recently I've come across a few examples of clever workarounds
or hacks, that people in the network/telecom industry probably wouldn't
have anticipated. They potentially reduce the opportunity for "monetised
QoS", or reduce future network capacity or coverage requirements, by
shifting the burden from traffic to something else.
The first example relates to the bandwidth needs for AR/VR/metaverse
connectivity - although I first saw this mentioned in the context of
videoconferencing a few years ago. It's called "foveated rendering".
(The fovea is the most dense part of the eye's retina). In essence, it
uses the in-built eye tracking in headsets or good quality cameras. The
system knows what part of a screen or virtual environment you are
focusing on, and reduces the resolution or frame-rate of the other
sections in your peripheral vision. Why waste compute or network
capacity on large swathes of an image that you're not actually noticing?
I haven't seen many "metaverse bandwidth requirement" predictions
take account of this. They all just count the pixels & frame rate
and multiply up to the largest number - usually in the multi-Gbps range.
Hey presto, a 6G use-case! But perhaps don't build your business case
around it yet...
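To see why the "count every pixel" forecasts overstate things, here's a minimal sketch of the arithmetic. Every parameter is an assumption for illustration (8K per eye, 120fps, ~1 bit per pixel after video coding, a 5% fovea region at full quality) - not anyone's measured figures:

```python
# Hedged sketch: naive pixel-counting vs foveated rendering.
# All parameter values are illustrative assumptions.

def stream_gbps(width: int, height: int, fps: int, bits_per_pixel: float) -> float:
    """Raw stream rate in Gbit/s for one eye/screen."""
    return width * height * fps * bits_per_pixel / 1e9

# Naive forecast: full 8K per eye, 120 fps, ~1 bit/pixel after coding
naive = 2 * stream_gbps(7680, 4320, 120, 1.0)       # ~8 Gbps: "a 6G use-case!"

# Foveated: ~5% of the field at full quality, the rest at 1/16 resolution
fovea_share = 0.05
foveated = naive * (fovea_share + (1 - fovea_share) / 16)

print(f"naive ~{naive:.1f} Gbps, foveated ~{foveated:.2f} Gbps")
```

Under these (made-up) assumptions the multi-Gbps requirement collapses to well under 1 Gbps - roughly an order of magnitude, before any other optimisation.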
Network latency and jitter is another area where there are growing
numbers of plausible workarounds. In theory, lots of applications such
as gaming require low latency connections. But actually, they mostly
require consistent and predictable but low-ish
latency. A player needs to have a well-defined experience, and
especially for multi-player games there needs to be fairness.
The gaming industry - and also other sectors including future
metaverse apps - have created a suite of clever approaches to dealing
with network issues, as well as more fundamental problems where some
players are remote and there are hard speed-of-light constraints. They
can monitor latency, and actually adjust and balance the lags experienced by participants, even if it means slowing some participants.
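A minimal sketch of that balancing idea - assuming a game server that measures each player's latency and pads the faster links so everyone experiences the same lag. The names and numbers are illustrative, not from any real title:

```python
# Hedged sketch: latency equalisation for multiplayer fairness.
# Rather than minimising each player's lag, add artificial delay so
# every player sees the same effective round-trip.

def added_delays(latencies_ms: dict[str, float]) -> dict[str, float]:
    """Delay to add per player so everyone matches the slowest link."""
    target = max(latencies_ms.values())
    return {player: target - lag for player, lag in latencies_ms.items()}

measured = {"alice": 18.0, "bob": 45.0, "carol": 30.0}
padding = added_delays(measured)
# → {'alice': 27.0, 'bob': 0.0, 'carol': 15.0} - fairness, not raw speed
```

Note what this does to "monetised QoS": the fast players' connections are deliberately slowed, so paying for a lower-latency slice would buy nothing.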
There are also numerous techniques for predicting or anticipating
movements and actions, so network-delivered data might not be needed
continually. AI software can basically "fill in the gaps", and even
compensate for some sorts of errors if needed. Similar concepts are used
for "packet loss concealment" in VoIP or video transmissions. Apps can
even subtly speed up or slow down streams to allow people to "catch up"
with each other, or have the same latency even when distributed across the world.
We can expect much more of this type of software-based mitigation of
network flaws in future. We may even get to the point where sending
full video/image data is unnecessary - maybe we just store a
high-quality 3D image of someone's face and room (with lighting) and
just send a few bytes describing what's happening. "Dean turned his
head left by 23degrees, adopted a sarcastic expression and said 'who
needs QoS and gigabit anyway?' A cloud outside the window cast a
dramatic shadow half a second later". It's essentially a more
sophisticated version of Siri + Instagram filters + ChatGPT. (Yes, I
know I'm massively oversimplifying, but you get the direction of travel
here).
The last example is a bit more left-field. I did some work last year
on wireless passenger connectivity on trains. There's a huge amount of
complexity, with technical effort going into dedicated trackside
wireless networks, improving MNO 5G coverage along railways, on-train
repeaters for better signal and passenger Wi-Fi using multi-SIM (or even
satellite) gateways. None of these are easy or cheap - the reality is
that there will be a mix of dedicated and public network connectivity,
with cities and rural areas getting different performance, and each
generation of train having different systems. Worse, the coated windows
of many new trains, needed for anti-glare and insulation, effectively
act as Faraday cages, blocking outdoor/indoor wireless signals.
It's really hard to take existing rolling-stock out of service for
complex retrofits, install anything along operational tracks / inside
tunnels, and anything electronic like repeaters or new access points
needs a huge set of certifications and installation procedures.
So I was really surprised when I went to the TrainComms conference
last year and heard three big train operators say they were looking at a
new way to improve wireless performance for their passengers.
Basically, someone very clever realised that it's possible to laser-etch
the windows with a fine grid of lines - which makes them more
transparent to 4G/5G, without changing the thermal or visual properties
very much. And that can be done much more quickly and easily for
in-service trains, one window at a time.
I have to say, I wasn't expecting a network QoS vs. Glazing Technology battle, and I suspect few others did either.
The story here is that while network upgrades and QoS are important,
there are often highly inventive workarounds - and very motivated
software, hardware and materials-science specialists hoping to solve the
same problems via a different path.
Do you think a metaverse app developer would rather work on a cool
"foveated rendering" approach, or deal with 800 sets of network APIs and
telco lawyers to obtain QoS contracts instead? And how many
team-building exercises just involve hiring a high-quality boat to go
across a lake, rather than working out how to build rafts from barrels
and planks?
We'll certainly need faster, more reliable, lower-latency networks.
But we need to be aware that they're not the only source of solutions,
and that payments and revenue uplift for network performance and QoS are
not pre-ordained.
(Initially posted on LinkedIn, here. Probably best to use LI for comments & discussion)
Published this week: my full STL Partners report on Enterprise Wi-Fi. Click here to get the full summary & extract.
Key takeout: Telcos, MNOs & other service providers need to take Wi-Fi 6, 6E & (soon) 7 much more seriously. So do policymakers.
5G is not enough for solving enterprises' connectivity problems on its own. It has important roles, especially in Private 5G guise, but cannot replace Wi-Fi in the majority of situations. They will coexist.
Wi-Fi will remain central to most businesses' on-site connectivity needs, especially indoors, for employees, guests and IoT systems.
Telcos should support Wi-Fi more fully. They need a full toolkit to drive
relevance in enterprise, not just a 5G hammer that makes everything
look like a nail. CIOs and network purchasers know what they want - and it's
not 5G hype or slice-wash.
Newer versions of Wi-Fi
solve many of the oft-cited challenges of legacy systems, and are often
a better fit with existing IT and networks (and staff skills) than 5G,
whether private or public.
Deterministic latency, greater reliability and higher device density
make Wi-Fi 6/6E/7 more suitable for many demanding industrial and
cloud-centric applications, especially in countries where 6GHz spectrum
is available. Like 5G, it's not a universal solution, but it has far
greater potential than some mobile industry zealots seem to think.
Some recommendations:
- Study the roadmaps for Wi-Fi versions & enhancements carefully. There's a lot going on over the next couple of years.
- CSP executives should ensure that 5G "purists" do not control efforts on technology strategy, regulatory engagement, standards or marketing.
- Instead, push a vision of "network diversity", not an unrealistic monoculture. (Read my recent skeptical post on slicing, too.)
- Don't compare old versions of Wi-Fi with future versions of 5G. It is more reasonable to compare Wi-Fi 6 performance with 5G Release 15, or future Wi-Fi 7 with Release 17 (and note: Wi-Fi 7 will arrive much earlier).
- 5G & Wi-Fi will sometimes be converged... and sometimes kept separate (diverged). It depends on the context, applications & multiple other factors. Don't overemphasise convergence anchored in 3GPP cores.
- Consider new service opportunities from OpenRoaming, motion-sensing and mesh enhancements.
- The Wi-Fi industry itself is getting better at addressing specific vertical sectors, but still needs more focus and communication on individual industries. There should be far more "Wi-Fi for Vertical X, Y, Z" associations, events and articles.
- Downplay clunky & privacy-invasive Wi-Fi "monetisation" platforms for venues and transport networks.
- Policymakers & regulators should look at "Advanced Connectivity" as a whole, not focus solely on 5G. Issue 6GHz spectrum for unlicensed use, ideally the whole band.
- Support Wi-Fi in local licensed spectrum bands (maybe Wi-Fi 8). Look at 60GHz opportunities.
- Insist Wi-Fi is included as an IMT-2030 / 6G candidate.
This post was originally published on my LinkedIn Newsletter (here). Please sign up, and join the discussion thread there.
Background
I'm increasingly finding myself drawn into discussions of
#geopolitics and how it relates to #telecoms. This goes well beyond
normal regulatory and policymaking involvement, as it means that rules -
and opportunities and risks - are driven by much larger "big picture"
strategic global trends, including the war in Ukraine.
As well as predicting strategic shifts, there are also lessons to be
learned from events at a local, tactical level which have wider
ramifications. Often, there will be trade-offs against normal telecoms
preoccupations with revenue growth, theoretical "efficiency" of spectrum
or network use, standardisation, competition and consumer welfare.
This is the first of what will probably be a regular set of articles
on this broader theme. Here, I'm focusing on the Ukraine war, in the
context of some of the other geopolitical factors that I think are
important. I'm specifically thinking about what they may mean for the types
of network technology that are used, deployed and developed in future.
This has implications for #5G, #6G, #satellite networks, #WiFi, #FTTX
and much more, including the cloud/edge domains that support much of it.
Ukraine and other geopolitical issues
This article especially drills into how the conflict in Ukraine has
manifested in terms of telecoms and connectivity, and attempts to
extrapolate to some early recommendations for policymakers more broadly.
I'm acutely conscious of the ongoing devastation and hideous war
crimes being perpetrated there - I hope this isn't too early to try to
analyse the narrow field of networking dispassionately, while conflict
still rages.
For context, as well as Ukraine, other geopolitical issues impacting telecoms include:
- US / West vs. China tensions, from trade wars to broader restrictions on the use of Huawei and other vendors' equipment, as well as sanctions on the export of components.
- The impact of the pandemic on supply chains, plus the greater strategic and political importance of resilient telecom networks and devices over the past two years.
- The politics of post-pandemic recovery, industrial strategy and stimulus funds. Does this go to broadband deployment, themes such as Open RAN, national networks, smart cities/infrastructure, satellite networks... or somewhere else?
- Tensions within the US, and between the US and Europe, over the role and dominance of "Big Tech": personal data, monopoly behaviour, censorship, regional sovereignty etc. This mostly doesn't touch networks today, but cloud-native may draw attention.
- Semiconductor supply-chain challenges and the geopolitical fragility of Taiwan's chip-fabrication sector.
- How telecoms (and cloud) fit within Net Zero strategies, either as a consumer of energy or as an enabler of green solutions.
- Cyber threats from nation-state actors, criminal cartels and terrorist-linked groups - especially aimed at critical infrastructure and health/government/finance systems.
In other words, there's a lot going on. It will impact 5G, 6G
development, vendor landscapes, cloud - and also other areas such as
spectrum policy and Internet governance.
Network diversity as a focus
I've written and spoken before about the importance of "network
diversity" and the dangers of technology monocultures, including
over-reliance on particular standards (eg 5G) or particular business
models (eg national MNOs) as some sort of universal platform. It is now
clear that such diversity is more important than ever.
The analogy I made with agriculture, or ecological biodiversity, is proving to be robust.
(Previous work includes this article from 2020 about private enterprise networks, or my 2017 presentation
keynote on future disruptions, at Ofcom's spectrum conference. (The
blue/yellow image of wheat fields, repeated here in this post, was
chosen long before it became so resonant as the Ukrainian flag). I've
also covered the shift towards Open RAN and telecoms supplier
diversification – including a long report I submitted to the UK
Government's Diversification Task Force last year - see this post and download the report).
A key takeout from my Open RAN report was that demand diversity is as important as creating more supply
choices in a given product domain. Having many classes of network
operator and owner – for instance national MNOs, enterprise private
4G/5G, towercos, industrial MNOs and neutral hosts – tends to pull
through multiple options for supply in terms of both vendor diversity and technology diversity. They have different requirements, different investment criteria and different operational models.
In Ukraine, the "demands" for connectivity are arising from an even
broader set of sources, including improvised communications for
refugees, drones and military personnel.
The war in Ukraine & telecoms
There have been numerous articles published which highlight the
surprising resilience and importance of Ukrainian telecoms during the
war so far. Synthesising multiple sources highlights a number of
important issues around network connectivity:
The original “survivability” concept of IP networks seems to
have been demonstrated convincingly. Whether used for ISPs’ Internet
access, or for internal backhaul and transport in public fixed and mobile
networks, the availability of diverse and resilient routing paths seems
mostly to have worked.
Public national mobile networks -
mostly 4G in Ukraine's case - have proven essential in many ways,
whether that has been for reporting information about enemy combatants'
locations and activities, obtaining advice from government authorities,
or managing their own evacuation as refugees. (I'm not sure if subway
stations used as shelters have underground cellular coverage, or if
there is WiFi). Authorities also seem to have had success in getting
citizens to self-censor, to avoid disclosing sensitive details to their
enemies.
Reportedly, Russian forces haven't generally
targeted telecoms infrastructure on a wide scale. This was partly
because they have been using commercial mobile networks themselves.
However, because roaming was disabled, Russian military use of their
encrypted handsets and SIMs on public 3G/4G networks seems to have
failed. Two articles here and here
give good insight, and also suggest there may be network surveillance
backdoors which Russia may have exploited. There have also been reports
of stingrays ("fake" base stations used to intercept calls and
identities) being deployed. It also appears that some towns and cities -
notably the destroyed city of Mariupol - have been mostly knocked
offline, partly because the electrical grid was attacked first.
Ukraine’s
competitive telecoms market has probably helped its resilience. There
is a highly fragmented fixed ISP landscape, with very inexpensive
connections. There are over a dozen public peering-points across the
country. There are over a dozen public peering-points across the country. There are three main MNOs, with many users having SIMs from 2+ operators. (This is a good overview article - https://ukraineworld.org/articles/ukraine-explained/key-facts-about-ukraines-telecom-industry). It seems they have enabled some form of national roaming, allowing subscribers to attach to each other's networks.
WiFi hotspots (likely with mobile backhaul) have been used by NGOs evacuating refugees by buses.
Although
it is still only being used at a small scale, the LEO satellite
terminals from SpaceX’s StarLink seem to be an important contributor to
connectivity – not least as a backup option. Realistically, satellite
isn’t appropriate for millions of individual homes – and especially not
personal vehicles and smartphones – but is an important part of the
overall network-diversity landscape. Various commentators have suggested
it is useful as a backup for critical infrastructure connectivity, as
well as for mobile units such as special forces.
Another satellite broadband provider, Viasat, apparently suffered a cyberattack at the start of the war (link here),
which knocked various modem users offline (or even "bricked" the
devices), reportedly including Ukrainian government organisations.
Investigations haven't officially named Russia, but a coincidence seems
improbable. This attack also impacted users outside Ukraine.
Various
peer-to-peer apps using Bluetooth or WiFi allow direct connections
between phones, even if wide area connections are down (see link)
There
have been some concerning reports about the impact of GPS jammers on
the operation of cellular networks, which may rely on GPS as a source of
“timing synchronisation” to operate properly, especially for TDD radio
bands. While this has long been a risk for individual cell-sites from
low-power transmitters, the deliberate use of electronic-warfare tools
could point to broader vulnerabilities in future.
There
has been wide use of commercial drones like the DJI Mavic-3 for
surveillance (video and thermal imaging), or modified to deliver
improvised weaponry. These use WiFi to connect to controllers on the
ground, as well as a proprietary video-transmission protocol (called
O3+) which apparently has a range of up to 15km using unlicensed spectrum.
Some of the "Aerorozvidka" units reportedly then use StarLink terminals
to connect back to command sites to coordinate artillery attacks (link).
In short, it seems that Ukraine has been well served by having lots
of connectivity options - probably including some additional military
systems that aren't widely discussed. It has benefited from multiple
fixed, cellular and satellite networks, with potential for interconnect,
plus inventive "quick fixes" after failures and collaboration between
providers. It is exploiting licensed and unlicensed spectrum, with
cellular, Wi-Fi and other technologies.
In other words, network diversity is working properly. There appears
to be no single point of failure, despite deliberate attacks by
invading forces and hackers. Connectivity is far from perfect, but it
has held up remarkably well. Perhaps the full range of electronic
warfare options hasn't been used - but given the geographical size of
Ukraine and the inability of Russian forces to maintain supply-lines to
distant units, that is also unsurprising.
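The survivability point above can be sketched with a toy model (this is purely illustrative, and assumes nothing about Ukraine's actual network topologies): a hub-and-spoke design dies with its hub, while a meshed design with diverse routing paths survives the loss of any single node.

```python
from collections import deque

def connected(adj, src, dst, removed=frozenset()):
    """BFS reachability over an adjacency dict, ignoring any nodes in `removed`."""
    if src in removed or dst in removed:
        return False
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in adj.get(node, []):
            if nxt not in seen and nxt not in removed:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Star topology: all traffic hairpins through one hub - a single point of failure.
star = {"hub": ["a", "b", "c"], "a": ["hub"], "b": ["hub"], "c": ["hub"]}
# Mesh topology: every node has at least two diverse paths to every other.
mesh = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}

print(connected(star, "a", "b", removed={"hub"}))  # False: hub gone, network gone
print(connected(mesh, "a", "b", removed={"c"}))    # True: a diverse path survives
```

Real routing protocols are vastly more complicated, of course, but the underlying property - reachability despite node loss - is the same one the original ARPANET design aimed for.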
Another set of issues that I haven't really examined are around
connectivity within sanctions-hit Russia. Maybe it will have to develop
more local network equipment manufacturers - if they can get the
necessary silicon and other components. It probably will not wish to
over-rely on Huawei & ZTE any more than some Western countries have
been happy with Nokia and Ericsson as primary options. More problematic
may be fixed-Internet routers, servers, WiFi APs and other
Western-dominated products. I can't say I'm sympathetic, and I certainly
don't want to offer suggestions. Let's see what happens.
Recommendations for policymakers, industry bodies and regulators
So what are the implications of all this? Hopefully, few other
countries face a similar invasion by a large and hostile army. But
preparedness is wise, especially for countries with unfriendly
neighbours and territorial disputes. And even for everywhere else, the
risks of cyberattacks, terrorism, natural disasters - or even just
software bugs or human error - are still significant.
I should stress that I'm not a cybersecurity or critical
infrastructure specialist. But I can read across from other trends I'm
seeing in telecoms, and in particular I'm doing a lot of work on "path
dependency", where small, innocent-seeming actions end up having
long-term strategic impacts and can lock in technology trajectories.
My initial set of considerations and recommendations:
As a general principle, divergence in technology should be
considered at least as positively as convergence. It maintains
optionality, fosters innovation and reduces single-point-of-failure
risks.
National networks and telcos (fixed and mobile) are
essential - but cannot do everything. They also need to cooperate during
emergencies - a spirit of collaboration which seems to have worked well
during the pandemic in many countries.
Normal ideas about
cyber-resilience and security may not extend to the impact of full-scale
military electronic warfare units, as well as more "typical" online
hacking and malware attacks.
Having separate "air-gapped"
networks available makes sense not just for critical communications
(military, utilities etc) but for more general use. It isn't inefficient
- it's insurance. There may be implications here for network-sharing in
some instances.
Thought needs to be given to emergency
fallbacks and improvised work-arounds, for instance in the event of mass
power outages or sabotage. This is particularly important for
software/cloud-based networks, which may be less "fixable" in the field.
Can a 5G network be "bodged"? (That's "MacGyvered", to my US friends.) As a sidenote - how have electric vehicles fared in Ukraine?
Unlicensed
spectrum and "permissionless communications" are hugely important during
emergency situations. Yes, they lack central control and lawful intercept.
But that's entirely acceptable in extreme circumstances.
Linkages
between technologies, access networks and control/identity planes
should generally be via gateways that can be closed, controlled or
removed if necessary. If one is attacked, the rest should be firewalled
off from it. For the same reason "seamless" should be a red-flag word
for cross-tech / cross-network roaming. Seams are important. They offer
control and the ability to partition if necessary. "Frictionless" is OK,
as long as friction can be re-imposed if needed.
Governments should be extremely
cautious of telcos extending 3GPP control mechanisms – especially the
core network and slicing – to fixed broadband infrastructure. Fixed
broadband is absolutely critical, and complex software dependencies may
trade off fine-grained control vs. resilience - and offer additional
threat surfaces.
Democratising and improving satellite
communications looks like an ever-wiser move, for all sorts of
reasons. It's not a panacea, but it's certainly "air-gapped" as above.
3GPP-based "non-terrestrial" networks, eg based on drones or balloons,
also have potential - but should ideally be able to work independently of
terrestrial networks if needed.
I haven't heard much about LPWAN and LoRa-type networks, but I can imagine that being useful in emergency situations too.
Sanctions,
trade wars and supply-chain issues are highly unpredictable in terms of
intended and unintended consequences. Technology diversity helps
mitigate this, alongside supplier diversity in any one network domain.
Spectrum
policy should enable enough scale economies to ensure good supply of
products (and viability of providers), but not *so* much scale that any
one option drives out alternatives.
The role and impact of
international bodies like ITU, GSMA and 3GPP needs careful scrutiny. We
are likely to see them become even more political in future. If
necessary, there may have to be separate "non-authoritarian" and
"authoritarian" versions of some standards (and spectrum policies).
De-coupling and de-layering technologies' interdependency - especially
radio and core networks - could isolate "disagreements" in certain
layers, without undermining the whole international collaboration.
There
should be a basic minimum level of connectivity that uses
"old" products and standards. Maybe we need to keep a small slice of
900MHz spectrum alive for generator-powered GSM cells, plus a box of cheap
phones in bunkers - essentially a future variant of Ham Radio.
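The "it isn't inefficient - it's insurance" argument above can be put into rough numbers. As a deliberately crude toy model (assuming, unrealistically, that each connectivity option fails independently with the same probability - correlated failures would erode the benefit):

```python
# Toy model: N independent connectivity options, each failing with probability p.
# If failures were perfectly correlated, extra options would buy nothing; if
# they are independent, the chance that *everything* fails shrinks geometrically.
def all_fail_probability(p: float, n: int) -> float:
    """Probability that all n options fail at once, assuming independence."""
    return p ** n

p = 0.10  # assumed per-option outage probability (illustrative, not measured)
print(all_fail_probability(p, 1))  # a monoculture is down 10% of the time
print(all_fail_probability(p, 4))  # four diverse options: roughly 1000x better
```

The real world sits somewhere between the correlated and independent extremes, which is exactly why air-gaps, seams and genuinely distinct technologies matter: they push failure modes towards independence.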
So to wrap up, I'm ever more convinced that Network Diversity is
essential. Not only does it foster innovation, and limit oligopoly risk,
but it also enables more options in tragic circumstances. We should
also consider the potential risks of too much sophistication, and of
pursuing efficiency and performance at all costs. What happens when things
break (or get deliberately broken)?
In the meantime, I'm hoping for a quick resolution to this awful war. Slava Ukraini!
Sidenote: I am currently researching the areas of “technology
lock-in” and “path dependence”. In particular, I have been investigating
the various mechanisms by which lock-in occurs and strategies for
spotting its incipience, or breaking out of it. Please get in touch with
me, if this is an area of interest for you.
Copied from my LinkedIn. Please click here for the download page & comments
I'm publishing my full
report & recommendations on telecoms supplier diversification,
especially for 5G, but more broadly for "advanced connectivity" overall.
This follows my "10 Principles" article from 2 months ago.
It
covers both near-term RAN diversification and a long-term roadmap for a
better telecoms/networking landscape towards 2030, with 6G and other
connectivity enabling "biodiversity" rather than monoculture.
Although it has been triggered by UK Department for Digital, Culture, Media and Sport (DCMS)
work via its Diversification Task Force - and will be submitted
directly to it - it is applicable more broadly to global policymakers
considering 5G, private networks, Open RAN, Wi-Fi, spectrum and vendor
policy issues.
My view is that Open RAN is important, but
overhyped (like 5G itself). Much of the value from 5G is in settings
where there is already good vendor choice (eg indoors, or for private
cellular).
Governments should focus more on context for deployment, ownership models and substitutive options like WiFi6. All bring extra supply options.
In short - *Demand* diversification catalyses *Supply* diversification.
(To download from LinkedIn, display in full screen & select download PDF)