Speaking Engagements & Private Workshops - Get Dean Bubley to present or chair your event

Need an experienced, provocative & influential telecoms keynote speaker, moderator/chair or workshop facilitator?
To see recent presentations, and discuss Dean Bubley's appearance at a specific event, click here

Showing posts with label IOT. Show all posts

Thursday, February 09, 2023

What does an AI think about Net Neutrality?

Originally published on my LinkedIn Newsletter, 9th Feb 2023. See here for comment thread

Two very important trends are occurring in tech that I'm following at the moment, so I thought it might be fun to combine them:

  • The emergence of #GenerativeAI, for answering questions, generating images and sounds, and potentially a whole lot more. OpenAI #ChatGPT is the current best-known, but there are dozens of others using language models, transformers & other techniques. Some people are suggesting it will redefine web search - and potentially an awful lot more than that. Some even see it as a pivotal shift in technology, society and "skilled" employment.
  • The re-emergence of discussions around #NetNeutrality and associated regulation relating to technology platforms, telcos and networks, like the ridiculous (un)#fairshare & #InternetTrafficTax concept being pitched in Europe by lobbyists. In the UK, Ofcom recently concluded a consultation on whether changes to NN rules should be made (I sent in a reply myself - I'll discuss it another time).

So, I asked ChatGPT what it thought about NN, over a series of questions. I specifically focused on whether it helps or hinders innovation.

The transcript is below, but some thoughts from me first:

  • The text is good. Almost alarmingly good. I found myself saying "good point" a few times. This is probably because it gives reasons in fairly long lists, not just "3 bulletpoints for a slide".
  • It seems pretty even-handed, with "proponents say this, others say that"
  • You can sense that its training base tends to give it "common" responses and frequently-cited arguments. That doesn't mean they're wrong, but it can lack nuance, or repeat some common *fallacies* as well.
  • The long-list approach, with comments, is quite good at highlighting the need for multiple approaches and for reconciling them. A single statistical approach isn't enough - there are also qualitative questions to ask, surveys to undertake and multiple angles to absorb. ChatGPT understands the issue is complex, even if it cannot resolve the complexity itself. Simple answers are wrong.
  • I was particularly impressed with the response to my question on cross-border effects, and to separating NN vs. funding effects.
  • It struggled with the last question, about Internet and non-Internet innovation. EDIT: I just realised that it ignored my typo, and understood that I meant "directly" when I wrote "direct" in my question
  • I'd be very worried if I did bog-standard "content generation" for clickbait blogs, "bucketshop" market research, or corporate social posts. This is all perfectly-usable general blurb if you need slabs of text nobody is going to really delve into too deeply.

All that said, I don't think this is (yet) a gamechanger in terms of analysis and argument. It's good for collating a lot of "food for thought", but as others have pointed out, it can also be "confidently wrong", sometimes in amusing ways.

In a separate chat, I asked it for a contrarian's view of #5G use-cases. While it was right on some things ("Some argue that the vast majority of #IoT devices do not require the high speed and low latency that 5G provides and that the hype around IoT is overblown.") others were less convincing: ("Some argue that 5G may not provide the low latency and high reliability that is being promised, and that there are other technologies, such as wired connections, that are better suited for applications such as autonomous vehicles"). If there's one thing less relevant to self-driving cars than 5G, it's fixed broadband!

Anyway - enjoy the debate, transcribed in full below.

(Note: the image for this article came from Stable Diffusion, when I asked for a picture showing "An advanced AI debating Net Neutrality")



Me: Does net neutrality help innovation?

ChatGPT: Net neutrality has been a topic of much debate, with proponents arguing that it promotes innovation and others arguing that it stifles it.

Proponents of net neutrality argue that it helps to level the playing field for new entrants and small businesses by preventing Internet Service Providers (ISPs) from discriminating against certain types of traffic or services. This, in turn, encourages innovation, as entrepreneurs and startups are free to bring new products and services to market without fear of being blocked or slowed down by ISPs.

On the other hand, opponents of net neutrality argue that it creates a regulatory burden that stifles innovation and investment in the broadband industry. They argue that without the ability to prioritize different types of traffic, ISPs will be unable to effectively manage their networks and may be forced to limit investment in new infrastructure and technologies.

Ultimately, the impact of net neutrality on innovation is a complex and highly debated issue, and there is no clear consensus on whether it helps or hurts innovation. However, proponents of net neutrality generally believe that it promotes a free and open internet that supports innovation and competition, while opponents believe that it imposes unnecessary regulation that stifles growth and investment.


Me: What research is necessary to reach a better conclusion?

ChatGPT: To reach a more definitive conclusion on the impact of net neutrality on innovation, several types of research would be necessary, including:

  1. Economic studies: Economic studies could analyze the impact of net neutrality regulations on investment and innovation in the broadband industry, as well as the impact on consumer prices and access to broadband services.
  2. Case studies: Case studies of countries or regions with and without net neutrality regulations could provide valuable insights into the real-world effects of these regulations on innovation and investment in the broadband industry.
  3. Surveys and interviews: Surveys and interviews with entrepreneurs, startups, and industry experts could provide insights into the impact of net neutrality on the ability of new entrants to bring new products and services to market.
  4. Technical studies: Technical studies could examine the impact of net neutrality regulations on network management and traffic prioritization, and assess the potential impact on network performance, reliability, and security.
  5. Long-term data analysis: Long-term data analysis could track changes in investment, innovation, and consumer prices over time in regions with and without net neutrality regulations, providing valuable insights into the long-term impact of these regulations.

Overall, a comprehensive and multi-disciplinary approach to research, combining economic, case study, survey, technical, and data analysis methods, would be necessary to reach a more definitive conclusion on the impact of net neutrality on innovation.


Me: How would you measure innovation in this context?

ChatGPT: Measuring the impact of net neutrality on innovation is a complex task, as innovation can take many forms and can be difficult to quantify. However, several metrics could be used to measure innovation in this context, including:

  1. Investment in research and development (R&D): This could be measured by tracking the amount of investment in R&D by broadband companies and startups, as well as the number of patents filed in relevant areas.
  2. New product and service launches: This could be measured by tracking the number of new products and services introduced by broadband companies and startups, as well as the market adoption and success of these offerings.
  3. Entrepreneurial activity: This could be measured by tracking the number of new startups founded in the broadband industry, as well as the success of these startups in attracting investment and bringing new products and services to market.
  4. Consumer benefits: This could be measured by tracking changes in consumer prices, access to broadband services, and the availability of new and innovative online services and applications.
  5. Network performance and reliability: This could be measured by tracking changes in network speed, latency, and uptime, as well as the ability of networks to handle new and innovative applications and services.

While these metrics are not exhaustive, they provide a good starting point for measuring the impact of net neutrality on innovation in the broadband industry. By tracking changes in these metrics over time in regions with and without net neutrality regulations, researchers could gain valuable insights into the impact of these regulations on innovation.


Me: A lot of those metrics are international. More regions with net neutrality might increase innovation globally, not locally. How do you account for that?

ChatGPT: You are correct, some of the metrics mentioned could be influenced by factors beyond the borders of a single region. To account for this, it would be important to consider both the local and global impact of net neutrality on innovation.

For example, to measure the local impact of net neutrality, researchers could compare the metrics mentioned in regions with and without net neutrality regulations, taking into account factors such as the local economy, cultural and regulatory environment, and the state of the broadband industry in each region. This would provide valuable insights into the specific impact of net neutrality on innovation within those regions.

To measure the global impact of net neutrality, researchers could compare these metrics on a global scale, taking into account factors such as the state of the global economy, the state of the broadband industry globally, and the regulatory environment in different regions. This would provide a broader perspective on the impact of net neutrality on innovation globally, and could help to identify any trends or patterns that may not be apparent from a regional analysis alone.

Overall, it is important to consider both the local and global impact of net neutrality on innovation to gain a complete and accurate understanding of the impact of these regulations on the broadband industry and innovation more broadly.


Me: How would we know if net neutrality had a larger effect on innovation than availability of financing for startups?

ChatGPT: Determining the relative impact of net neutrality versus the availability of financing on innovation can be challenging, as these factors are often interrelated and can have complex and interconnected effects. However, several approaches could be used to isolate the impact of each factor, including:

  1. Comparative studies: Researchers could compare regions with and without net neutrality regulations, but with similar levels of financing available for startups. By controlling for the availability of financing, researchers could isolate the impact of net neutrality on innovation.
  2. Historical analysis: Researchers could analyze data from periods of time when net neutrality regulations were in place or not in place, and compare this data with changes in the availability of financing for startups. This would help to identify any correlations between net neutrality and innovation, independent of changes in financing.
  3. Causal inference methods: Researchers could use causal inference methods, such as regression analysis, to model the relationship between net neutrality, financing, and innovation, while controlling for other factors that could influence these relationships.
  4. Survey and interview-based research: Researchers could survey and interview startups, entrepreneurs, and industry experts to gain insights into the relative importance of net neutrality and financing for innovation. This would provide valuable qualitative data to complement the quantitative data obtained from other research methods.

Overall, a multi-disciplinary and multi-method approach, incorporating comparative studies, historical analysis, causal inference methods, and survey and interview-based research, would be necessary to determine the relative impact of net neutrality and financing on innovation in a robust and comprehensive manner.
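The "causal inference" idea in point 3 can be illustrated with a toy regression. This is a purely hypothetical sketch on synthetic data: the variable names, effect sizes and sample are all invented, simply to show how including financing as a control lets you estimate a separate net-neutrality coefficient.

```python
import numpy as np

# Synthetic illustration of separating a net-neutrality effect from
# startup-financing availability via OLS. All numbers are made up.
rng = np.random.default_rng(42)
n = 500                              # region-year observations
nn = rng.integers(0, 2, n)           # 1 = NN rules in force
financing = rng.normal(100, 20, n)   # hypothetical VC-funding index
noise = rng.normal(0, 5, n)

# Ground truth used to generate the synthetic "innovation" outcome
innovation = 10 + 3.0 * nn + 0.5 * financing + noise

# Design matrix: intercept, NN dummy, financing control
X = np.column_stack([np.ones(n), nn, financing])
beta, *_ = np.linalg.lstsq(X, innovation, rcond=None)

print(f"NN effect estimate: {beta[1]:.2f} (true value: 3.0)")
print(f"Financing effect estimate: {beta[2]:.2f} (true value: 0.5)")
```

With enough observations, the estimate for the NN dummy recovers the simulated effect even though financing also drives the outcome - which is exactly the point of "controlling for" a confounder. Real-world data would, of course, be far messier.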


Me: How do you distinguish innovations that direct relate to Internet access, compared to those which involve non-Internet applications and technologies?

ChatGPT: Distinguishing innovations that directly relate to Internet access versus those that involve non-Internet applications and technologies can be challenging, as many innovations are influenced by both Internet access and non-Internet technologies. However, several approaches could be used to differentiate these types of innovations, including:

  1. Technical classification: Innovations could be classified based on their underlying technologies, with innovations that rely primarily on Internet access being classified as Internet-related innovations and innovations that rely primarily on non-Internet technologies being classified as non-Internet-related innovations.
  2. Market classification: Innovations could be classified based on the markets they serve, with innovations that serve primarily Internet-based markets, such as online commerce or online communication, being classified as Internet-related innovations and innovations that serve primarily non-Internet-based markets, such as manufacturing or agriculture, being classified as non-Internet-related innovations.
  3. End-user classification: Innovations could be classified based on their end-users, with innovations that are primarily designed for use by individuals accessing the Internet being classified as Internet-related innovations and innovations that are primarily designed for use by individuals not accessing the Internet being classified as non-Internet-related innovations.

It is important to note that these classifications are not mutually exclusive, and that many innovations may have elements of both Internet-related and non-Internet-related technologies, markets, and end-users. Nevertheless, using these classifications can provide valuable insights into the extent to which innovations are influenced by Internet access, and can help to distinguish Internet-related innovations from non-Internet-related innovations.
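The "technical classification" approach in point 1 could be prototyped as a crude keyword tagger. A toy sketch, where the keyword list and examples are entirely my own assumptions rather than any real taxonomy:

```python
# Toy keyword-based classifier: tag an innovation as Internet-related or
# not. The keyword set below is illustrative only.
INTERNET_KEYWORDS = {"streaming", "cloud", "saas", "cdn", "browser", "api"}

def classify(description: str) -> str:
    """Return 'internet' if any keyword appears in the description."""
    words = set(description.lower().split())
    return "internet" if words & INTERNET_KEYWORDS else "non-internet"

innovations = [
    "adaptive streaming codec",
    "precision agriculture soil sensor",
    "cloud gaming latency optimiser",
]
for item in innovations:
    print(item, "->", classify(item))
```

A real study would need something far richer (patent classes, market codes, human coding), but even this shows how quickly the edge cases appear - the soil sensor may still report over the Internet.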

 

Sunday, July 24, 2022

New Report on Enterprise Wi-Fi: No, 5G is not enough

(Initially posted on LinkedIn, here. Probably best to use LI for comments & discussion)

Published this week: my full STL Partners report on Enterprise Wi-Fi. Click here to get the full summary & extract.

Key takeaway: Telcos, MNOs & other service providers need to take Wi-Fi 6, 6E & (soon) 7 much more seriously. So do policymakers.

5G is not enough for solving enterprises' connectivity problems on its own. It has important roles, especially in Private 5G guise, but cannot replace Wi-Fi in the majority of situations. They will coexist.

Wi-Fi will remain central to most businesses' on-site connectivity needs, especially indoors, for employees, guests and IoT systems.

Telcos should support Wi-Fi more fully. They need a full toolkit to drive relevance in enterprise, not just a 5G hammer that makes everything look like a nail. CIOs and network purchasers know what they want - and it's not 5G hype or slice-wash.

Newer versions of Wi-Fi solve many of the oft-cited challenges of legacy systems, and are often a better fit with existing IT and networks (and staff skills) than 5G, whether private or public. 




Deterministic latency, greater reliability and higher density of devices make 6/6E/7 more suitable for many demanding industrial and cloud-centric applications, especially in countries where 6GHz spectrum is available. Like 5G it's not a universal solution, but has far greater potential than some mobile industry zealots seem to think.

Some recommendations:

- Study the roadmaps for Wi-Fi versions & enhancements carefully. There's a lot going on over the next couple of years.
- CSP executives should ensure that 5G "purists" do not control efforts on technology strategy, regulatory engagement, standards or marketing.
- Instead, push a vision of "network diversity", not an unrealistic monoculture. (Read my recent skeptical post on slicing, too)
- Don't compare old versions of Wi-Fi with future versions of 5G. It is more reasonable to compare Wi-Fi 6 performance with 5G Release 15, or future Wi-Fi 7 with Rel17 (and note: it will arrive much earlier)
- 5G & Wi-Fi will sometimes be converged... and sometimes kept separate (diverged). Depends on the context, applications & multiple other factors. Don't overemphasise convergence anchored in 3GPP cores.
- Consider new service opportunities from OpenRoaming, motion-sensing and mesh enhancements.
- The Wi-Fi industry itself is getting better at addressing specific vertical sectors, but still needs more focus and communication on individual industries
- There should be far more "Wi-Fi for Vertical X, Y, Z" associations, events and articles.
- Downplay clunky & privacy-invasive Wi-Fi "monetisation" platforms for venues and transport networks.
- Policymakers & regulators should look at "Advanced Connectivity" as a whole, not focus solely on 5G. Issue 6GHz spectrum for unlicensed use, ideally the whole band
- Support Wi-Fi for local licensed spectrum bands (maybe WiFi8). Look at 60GHz opportunities.
- Insist that Wi-Fi is included as an IMT2030 / 6G candidate.

See link for report extract & Exec Summary


Thursday, January 06, 2022

Private 4G/5G: Three Markets, Not One

Private 5G segmentation: Introduction & Overview

Private 4G and 5G networks are rapidly becoming mainstream. This isn’t news.

But from recent conversations, client engagements and events, it’s becoming increasingly clear that many don’t quite grasp how private cellular use-cases are segmented – and why it’s going to get even more complex in the next 2-3 years.

In reality, this isn’t really “a market” in a singular sense. It’s currently at least three separate and distinct markets, with only minimal overlap at present. The main common thread is the deployment of cellular (3GPP 4G/5G) networks by non-MNOs.


 

A common fallacy involves talking about “vertical industries” as the main way to divide up the sector. But that doesn’t really work, as any given vertical has dozens of sub-categories and hundreds of potential applications and deployment scenarios. For instance, the “energy vertical” covers everything from a gas station, to an offshore windfarm, a 1000km pipeline or an oil-futures trading floor in a financial district.

Verticals are useful ways to divide up sales and marketing efforts, and make sense for cohesive reports, papers or webinars, but also blend together elements of three very different markets for private 4G/5G:

  • Critical communications networks
  • Indoor mobile phone networks
  • Cloud and IT/IoT networks

It is worth discussing each of these in turn.

Critical communications networks

These have made up the bulk of major private network deployments over the last 5-10 years. They are typically deployed for utilities, oil & gas, mining, public safety, airports and military purposes. Often, they are used in rugged environments, for human communications (typically push-to-talk), as well as in-vehicle gateways and specific automation systems such as remote sensors and monitoring systems. The specialised GSM-R system for railways fits in this category as well.

Usually, they are replacing alternatives such as private mobile radio (PMR), TETRA and microwave fixed-links. They have typically been packaged and deployed by specialist integrators for sectors like oil-rigs or field-deployment by military units. There is limited “replicability”. They vary widely in size, from a single portable network for public safety, up to a national network for a utility company.

There is little need for interconnection with public mobile networks; indeed it may be specifically avoided in order to maintain isolation for optimal security and “air-gapping” for critical applications.

Most are 4G, reflecting mission-criticality and its frequent need for proven, mature technology and wide product availability. 5G is however used in certain niches and is being tested widely, although the most useful features will only arrive when Release 16/17 versions are commercialised in the next few years.

Indoor mobile phone networks

This includes some of both the oldest and newest deployments. Early local private 2G/3G networks essentially used GSM phones and thin slices of light-licensed/unlicensed spectrum to replace DECT cordless phones in a few markets – notably the UK, Netherlands and Japan.

They could also work with multi-SIM phones to blend public and private modes. I first saw an enterprise-grade GSM picocell in 2001, and an on-premise core network box in 2005. There are still several thousand such networks around, including ones updated to 4G and some that run on ships or onboard private jets.

More recently, there has been growing interest in using private 4G/5G to create neutral host networks for in-building, or on-campus coverage. There are multiple models for neutral host (I’ve counted around 10-15 variations), with some needing a full local network with its own spectrum and core, and others just relying on the tenant MNOs’ active equipment. In the US, CBRS-based options may turn out to be among the more sophisticated.

Whether used to support public MNOs more effectively than alternative indoor systems such as DAS (distributed antenna systems), or perhaps for linking to a UC / UCaaS system for enterprise voice, the main use-cases are for phones. They are almost always deployed for a single building or campus.

This segment is the most likely to require interconnection with the public mobile infrastructure, as well as supporting normal “phone calls” rather than push-to-talk voice.

Cloud and IT/IoT networks

This category of private cellular is probably receiving the greatest attention from many newcomers to the sector, as well as external observers such as analysts and journalists.

It ties in with many of the newest trends around cloud and edge-computing, AI and machine vision in factories, robots and AGVs in warehouses, security cameras and more general IoT / smart building use-cases. It aligns with many of the "transformation" projects in IT, plus some parts of the OT (operational technology) space such as smart manufacturing.

As such, it tends to be viewed as a complement – or alternative – to other IT-type network technologies like Wi-Fi and fibre-based ethernet. And given that many of the use-cases have a heavy cloud (or at least multi-site WAN) orientation, there is more acceptance of virtualisation of cores and perhaps in future the RAN.

This is currently the area with the greatest amounts of experimentation and innovation – although actual large-scale operational deployments are still relatively few. There is more focus on 5G than 4G, although that might change as executives learn more about the practicalities and economics. Vendors often orient on the soundbite that "private 5G should be as easy as Wi-Fi".

There is a major focus on automation, replicability and ease-of-use. This was exemplified by the recent AWS Private 5G announcement, which seems squarely aimed at this segment.

However, there is perhaps a divide opening between the IT-type scenarios (where it can be seen as a sort of enterprise Wi-Fi-on-steroids vision) and OT deployments in which it gets embedded into larger industrial automation or other systems, such as factory robots or dockside cranes. In the latter scenarios we can see companies like Siemens integrating cellular into their wider systems, just as they have historically used Wi-Fi/WLAN and fibre.

Although the main focus is on building / campus networks for this model, it may also extend to larger domains such as smart cities, as well as multi-location users such as retail chains.

There is some overlap with the critical communications segment, but that is fairly rare at the moment, especially given the lesser role (and trust) of public cloud in many of those areas.

In addition, there is a fair amount of talk about interconnection with the public mobile network (especially where telcos are acting as vendors), but in reality, that's a secondary consideration that doesn't go much beyond a PowerPoint slide for now. There are certain exceptions which are interesting, but they're far from typical.

Conclusions and the Future of Private Networks Segmentation

At present, the "private 5G market" is actually at least three separate markets. And it's mostly about private 4G rather than 5G. Critical communications networks, indoor mobile phone networks and cloud/IT/IoT networks are largely distinct in terms of motivations, channels, economics, devices and applications. There is much less overlap than many observers expect.

(There are also smaller adjacent sectors such as community networks, 4G/5G-based FWA and other specialities).

But over the next 1-2 years, we can expect the three bubbles on the Venn diagram to overlap more – although asymmetrically. Critical and cloud/IoT networks will start to become hybridised. Critical 4G/5G networks in mines or utility sites will start to support extra IT-like applications, for instance (although that probably won't need formal network slicing).

Some enterprise private cellular networks will examine adding neutral-host and inbound roaming or interconnect from public MNOs' subscribers – although there are assorted regulatory and security/operational hurdles to address.

There won't be much overlap between critical networks and neutral/guest cellular, though. Nobody's smartphone will be roaming from their normal consumer 5G network onto the utility company's private infrastructure, I think. A few employees' devices might have special arrangements though.

But we will also see the emergence of a number of additional bubbles on the chart, some of which are more like "quasi-private" models, such as outdoor neutral host networks, selling wholesale capacity to MNOs. There will be various forms of Wi-Fi integration (but probably less than many expect / want). And we will undoubtedly see maturity of both cloud-delivered private cellular like AWS's, and (belatedly) some sort of MNO-based network slice integration.

And if you want an "outlier" to ponder, consider the potential for grassroots private "consumer-grade" 5G. There's a lot of hype about things like Helium's decentralised and blockchain-based model, but I'm deeply sceptical of this (that's for another post, though). More likely is the emergence of a true Wi-Fi hotspot approach, where we start to see lightweight "free 5G" options, using unlicensed (or maybe CBRS GAA) spectrum, with a cheap core and small cell. Scan the QR code next to the barista to download your eSIM, and you're good to go….

 



The bottom line is that the private 4G/5G market is complex and nuanced. Market statistics frequently combine everything from a nationwide utility's or railway's critical infrastructure, to a few small-cells connecting up digital signs in a mall car-park. It's easy to assume it's all about millisecond-latency robots zipping about factories, rather than a security guard with a handheld radio, or indoor network coverage for a hotel.

Operators, vendors, enterprises and governments need to delve a bit more deeply than just talking about "verticals" for private cellular, or else they risk making errors with their product portfolios or regulatory direction.

Dean Bubley (@disruptivedean) is a wireless technology analyst & futurist, who advises a broad range of companies and institutions active in the 5G, Wi-Fi and cloud marketplaces. He has covered private cellular networks for more than 20 years. He is a regular speaker and moderator at live and virtual events. Please get in touch on LinkedIn or via information AT disruptive-analysis DOT com for advisory or speaking requests.

#Private5G #Private4G #CriticalCommunications #5G #IoT #IIoT #Cloud #WiFi #verticals

Thursday, May 06, 2021

Why does the Edge Computing sector ignore Wi-Fi?

We should be talking more about Wi-Fi-Edge as well as 5G-edge. Arguably, it is more important (along with fibre-connected edge)

Yes, the ETSI term MEC has been upgraded from "mobile edge computing" to "multi-access edge computing", but there's still little focus on local edge-cloud use-cases that rely on fixed (usually fixed + Wi-Fi) broadband.

Given today's Wi-Fi often has lower latency than current 5G versions (2-5 milliseconds is common), and many devices such as AR/VR headsets don't have 5G radios, this seems odd.
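One rough way to sanity-check that kind of latency claim on your own network is to time TCP handshakes to the local gateway. A hypothetical sketch - the gateway address is an assumption, and TCP connect time is only a proxy for link round-trip time, not a proper latency benchmark:

```python
import socket
import statistics
import time

def tcp_rtt_ms(host: str, port: int = 80, samples: int = 5) -> float:
    """Median time (ms) to complete a TCP handshake to host:port.

    This approximates one network round trip, since connect() returns
    once the SYN-ACK arrives.
    """
    times = []
    for _ in range(samples):
        t0 = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass
        times.append((time.perf_counter() - t0) * 1000)
    return statistics.median(times)

# Example (assumes your Wi-Fi router answers on its web UI port):
# print(f"median RTT: {tcp_rtt_ms('192.168.1.1'):.1f} ms")
```

Over good Wi-Fi 6 to a local gateway you would typically see low single-digit milliseconds, which is the comparison point being made here.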

Many of the use-cases for advanced connectivity, especially IoT in smart buildings and smart homes, as well as gaming and content/video display, use Wi-Fi predominantly. 5G won't replace it.

On enterprise sites, Edge Computing applications will terminate to end-devices connected with a mix of 5G (public and private), 4G, Wi-Fi, fibre, Ethernet, LPWAN & other tech. This isn't just about low-latency, but connections for IoT devices, cameras, screens etc. that require local processing - and local storage ("data sovereignty"). 

They might use cloud-type software stacks, and use hyperscale cloud for deep analytics, but there will be various reasons for on/near-prem edge.

Offices connect all laptops, collaboration/meeting systems and screens with Wi-Fi. Wi-Fi dominates in education. Even in retail settings and #smartcities, there's a lot of Wi-Fi or proprietary industrial WLAN variants.

In homes, the opportunity is almost entirely about #WiFiEdge. TVs, laptops, voice assistants, smartphones, tablets, AR/VR headsets and most other residential devices connect with Wi-Fi (plus some short-range Bluetooth, ZigBee etc). Very few end-devices inside the home connect with 4G/5G, and even in future the low-band 5G connections that penetrate the walls likely won't support the ultra-low latencies that many talk about.

All of these have significant links to #cloud platforms and applications. Indeed, many higher-end Wi-Fi systems are themselves cloud-controlled. 

Outdoors, especially for mobile and vehicular use-cases, #5GEdge (& 4G for years) will be important plus maybe SatelliteEdge & LoRaEdge

In general, I'd expect "fixed edge" of one sort or another to be far more important than "mobile edge" or MEC. In many ways, it already is, given #CDNs largely service fixed broadband use-cases.

Possibly this is just reflecting a lack of marketing - or perhaps the cloud/edge/datacentre sector has been blinded by #5Gwash hype and has forgotten to focus on often more-important technologies for some critical applications - whether that's security-camera analytics or multiplayer games. They may well need low-latency or secure on-premise compute, but won't (often) be using 5G.

This also perhaps reflects the fact that 5G needs some edge-compute for its own operation (especially Open RAN), so the industry is trying to offset the costs by hyping the potential revenues of using that infrastructure for customer applications as well. That's less true for other connectivity types, although fixed/cable broadband has a lot of localised compute infrastructure too.

I'm curious to see if this blending of #WiFiEdge has resonance.
At the very least I think the Wi-Fi and fixed-broadband providers should be making much more noise about it. Seems bizarre that 5G-edge gets all the attention when it is, well, a bit of an edge-case.

Thursday, April 08, 2021

Free-to-download report on Creating Enterprise-Friendly 5G Policies (for governments & regulators)

Copied from my LinkedIn. Please click here for the download page & comments

I'm publishing a full report & recommendations on Enterprise & Private 5G, especially aimed at policymakers and regulators.

It explains the complex dynamics linking enterprises, MNOs and governments – covering the motivations of each around connectivity, 5G deployment choices, IoT and the broader impacts and trade-offs around the economy and productivity.

This is not a simple calculus – MNOs want to exploit 5G opportunities for verticals, but businesses have their own priorities and preferences. Governments want to satisfy both groups – and also act as both major network users themselves and “suppliers” of spectrum.

A supporting cast of cloud players, network vendors, other classes of service providers and other stakeholders have important roles as well.

This report is a “Director’s Cut” extended version of a paper originally commissioned for internal use by Microsoft, now made available for general distribution.

(To download on LinkedIn, display in full screen & select download PDF)




#5G #policy #telecoms #private5G #cloud #IoT #spectrum #WiFi

Tuesday, September 15, 2020

Low-latency and 5G URLLC - A naked emperor?

Originally published as a LinkedIn Newsletter Article - see here

I think the low-latency 5G Emperor is almost naked. Not completely starkers, but certainly wearing some unflattering Speedos.

Much of the promise around the 5G – and especially the “ultra-reliable low-latency” URLLC versions of the technology – centres on minimising network round-trip times, for demanding applications and new classes of device.


 

Edge-computing architectures like MEC also often focus on latency as a key reason for adopting regional computing facilities - or even servers at the cell-tower. Similar justifications are being made for LEO satellite constellations.

The famous goal of 1 millisecond latency is often mentioned, usually in the context of applications like autonomous vehicles with snappy responses, AR/VR headsets without nausea, cloud-gaming, the “tactile Internet” and remote drone/robot control.

(In theory this is for end-to-end "user plane latency" between the user and server, so includes both the "over the air" radio and the backhaul / core network parts of the system. This is also different to a "roundtrip", which is there-and-back time).
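Before even considering the radio, simple physics constrains that 1ms figure. Here's a back-of-envelope sketch (my own illustrative numbers: roughly 200,000 km/s signal propagation in fibre, and assumed radio and processing overheads):

```python
# Back-of-envelope: how far away can a server be within a given one-way
# latency budget? Light in fibre travels roughly 200,000 km/s, i.e. about
# 200 km per millisecond (around 2/3 of c in vacuum).
FIBRE_KM_PER_MS = 200.0

def max_server_distance_km(budget_ms, radio_ms, processing_ms):
    """Distance budget left for transport, after radio and processing legs."""
    transport_ms = budget_ms - radio_ms - processing_ms
    return max(transport_ms, 0) * FIBRE_KM_PER_MS

# Even with an optimistic 0.5ms radio leg and 0.2ms of processing, a 1ms
# end-to-end budget leaves only ~60km of fibre to reach the edge node.
print(max_server_distance_km(1.0, 0.5, 0.2))
# A 10ms budget is far more forgiving - roughly 1400km:
print(max_server_distance_km(10.0, 2.0, 1.0))  # 1400.0
```

The radio and processing figures are assumptions for illustration, but the propagation term alone shows why 1ms implies very local compute, whatever the network does.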

Usually, that 1ms objective is accompanied by some irrelevant and inaccurate mention of 20 or 50 billion connected devices by [date X], and perhaps some spurious calculation of trillions of dollars of (claimed) IoT-enabled value. Gaming usually gets a mention too.

I think there are two main problems here:

  • Supply: It’s not clear that most 5G networks and edge-compute will be able to deliver 1ms – or even 10ms – especially over wide areas, or for high-throughput data.
  • Demand: It’s also not clear there’s huge value & demand for 1ms latency, even where it can be delivered. In particular, it’s not obvious that URLLC applications and services can “move the needle” for public MNOs’ revenues.

Supply

Delivering URLLC requires more than just “network slicing” and a programmable core network with a “slicing function”, plus a nearby edge compute node for application-hosting and data processing, whether that's in the 5G network (MEC or AWS Wavelength) or some sort of local cloud node like AWS Outposts. That low-latency slice needs to span the core, the transport network and, critically, the radio.

Most people I speak to in the industry look through the lens of core-network slicing or the edge – and perhaps the IT systems supporting the 5G infrastructure. There is also sometimes more focus on the UR part than the LL part – and the two actually have different enablers.

Unfortunately, it looks to me as though the core/edge is writing low-latency checks that the radio can’t necessarily cash.

Without going into the abstruse nature of radio channels and frame-structure, it’s enough to note that ultra-low latency means the radio can’t wait to bundle a lot of incoming data into a packet, and then get involved in to-and-fro negotiations with the scheduling system over when to send it.

Instead, it needs to have specific (and ideally short) timed slots in which to transmit/receive low-latency data. This means that it either needs to have lots of capacity reserved as overhead, or the scheduler has to de-prioritise “ordinary” traffic to give “pre-emption” rights to the URLLC loads. Look for terms like Transmission Time Interval (TTI) and grant-free UL transmission to drill into this in more detail.
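To put rough numbers on those transmission intervals: in 5G NR, subcarrier spacing scales as 15 × 2^μ kHz with numerology μ, a 14-symbol slot lasts 1/2^μ milliseconds, and shorter "mini-slots" (2, 4 or 7 symbols) are one enabler of pre-emptive URLLC scheduling. A simplified sketch of the arithmetic (ignoring cyclic-prefix details):

```python
# 5G NR slot timing per numerology mu: subcarrier spacing is 15 * 2**mu kHz,
# and a 14-symbol slot lasts 1 / 2**mu milliseconds. A URLLC transmission
# using a 2-symbol mini-slot needn't wait for a full slot boundary.
def slot_timing(mu):
    scs_khz = 15 * 2**mu
    slot_ms = 1.0 / 2**mu
    symbol_ms = slot_ms / 14
    return scs_khz, slot_ms, symbol_ms

for mu in range(4):
    scs, slot, sym = slot_timing(mu)
    mini_slot_ms = 2 * sym  # shortest 2-symbol mini-slot
    print(f"mu={mu}: {scs} kHz SCS, slot {slot:.3f} ms, "
          f"2-symbol mini-slot {mini_slot_ms:.4f} ms")
```

Even at μ=3 (120kHz spacing, typical of mmWave), a full slot is 0.125ms, so the air-interface transmission itself is only one small part of the end-to-end budget once scheduling, retransmissions and transport are added.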

It’s far from clear that on busy networks, with lots of smartphone or “ordinary” 5G traffic, there can always be a comfortable coexistence of MBB data and more-demanding URLLC. If one user gets their 1ms latency, is it worth disrupting 10 – or 100 – users using their normal applications? That will depend on pricing, as well as other factors.

This gets even harder where the spectrum used is a TDD (time-division duplexing) band, where there’s also another timeslot allocation used for separating up- and down-stream data. It’s a bit easier in FDD (frequency-division) bands, where up- and down-link traffic each gets a dedicated chunk of spectrum, rather than sharing it.

There’s another radio problem here as well – spectrum license terms, especially where bands are shared in some fashion with other technologies and users. For instance, the main “pioneer” band for 5G in much of the world is 3.4-3.8GHz (which is TDD). But current rules – in Europe, and perhaps elsewhere – essentially prohibit the types of frame-structure that would enable URLLC services in that band. We might get to 20ms, or maybe even 10-15ms if everything else stacks up. But 1ms is off the table, unless the regulations change. And of course, by that time the band will be full of smartphone users generating lots of ordinary traffic. There may be some Net Neutrality issues around slicing, too.

There's a lot of good discussion - some very technical - on this recent post and comment thread of mine: https://www.linkedin.com/posts/deanbubley_5g-urllc-activity-6711235588730703872-1BVn

Various mmWave bands, however, have enough capacity to be able to cope with URLLC more readily. But as we already know, mmWave cells have very short range – perhaps just 200 metres or so. We can forget about nationwide – or even full citywide – coverage. And outdoor-to-indoor coverage won’t work either. And if an indoor network is deployed by a 3rd party such as a neutral host or roaming partner, it's far from clear that URLLC can work across the boundary.

Sub-1GHz bands, such as 700MHz in Europe, or perhaps refarmed 3G/4G FDD bands such as 1.8GHz, might support URLLC and have decent range/indoor reach. But they’ll have limited capacity, so again coexistence with MBB could be a problem, as MNOs will also want their normal mobile service to work (at scale) indoors and in rural areas too.

What this means is that we will probably get (for the foreseeable future):

  • Moderately Low Latency on wide-area public 5G Networks (perhaps 10-20ms), although where network coverage forces a drop back to 4G, then 30-50ms.
  • Ultra* Low Latency on localised private/enterprise 5G Networks and certain public hotspots (perhaps 5-10ms in 2021-22, then eventually 1-3ms maybe around 2023-24, with Release 17, which also supports deterministic "Time Sensitive Networking" in devices)
  • A promised 2ms on Wi-Fi6E, when it gets access to big chunks of 6GHz spectrum

This really isn't ideal for all the sci-fi low-latency scenarios I hear around drones, AR games, or the cliched surgeon performing a remote operation while lying on a beach. (There's that Speedo reference, again).

* see the demand section below on whether 1-10ms is really "ultra-low" or just "very low" latency

Demand

Almost 3 years ago, I wrote an earlier article on latency (link), some of which I'll repeat here. The bottom line is that it's not clear that there's a huge range of applications and IoT devices that URLLC will help, and where they do exist they're usually very localised and more likely to use private networks rather than public.

One paragraph I wrote stands out:

I have not seen any analysis that tries to divide the billions of devices, or trillions of dollars, into different cohorts of time-sensitivity. Given the assumptions underpinning a lot of 5G business cases, I’d suggest that this type of work is crucial. Some of these use-cases are slow enough that sending data by 2G is fine (or by mail, in some cases!). Others are so fast they’ll need fibre – or compute capability located locally on-device, or even on-chip, rather than in the cloud, even if it’s an “edge” node.

I still haven't seen any examples of that analysis. So I've tried to do a first pass myself, albeit using subjective judgement rather than hard data*. I've put together what I believe is the first attempted "heatmap" for latency value. It includes both general cloud-compute and IoT, both of which are targeted by 5G and various forms of edge compute. (*get in touch if you'd like to commission me to do a formal project on this)

A lot of the IoT examples I hear about are either long time-series collections of sensor data (for asset performance-management and predictive maintenance), or have fairly loose timing constraints. A farm’s moisture sensors and irrigation pumps don’t need millisecond response times. Conversely, a chemical plant may need to measure and alter pressures or flows in microseconds.

I've looked at time-ranges for latency from microseconds to days, spanning 12 orders of magnitude (see later section for more examples). As I discuss below, not everything hinges on the most-mentioned 1-100 millisecond range, or the 3-30ms subset of it that 5G addresses.

I've then compared those latency "buckets" with distances from 1m to 1000km - 7 orders of magnitude. I could have gone out to geostationary satellites, and down to chip scales, but I'll leave that exercise to the reader.
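As a crude companion to that grid, here's a sketch of the physics floor across those buckets, assuming one-way fibre propagation of roughly 5 microseconds per km plus an illustrative fixed 1ms radio/scheduling/processing overhead (both figures are my assumptions, not measurements):

```python
# Rough "physics floor" for the latency-vs-distance grid: one-way fibre
# propagation (~5 microseconds per km) plus an assumed fixed ~1ms overhead
# for radio, scheduling and processing. Any latency bucket below this floor
# is unreachable at that distance, whatever the network does.
PROP_US_PER_KM = 5.0
OVERHEAD_MS = 1.0  # assumed radio + scheduling + processing floor

def feasible(latency_ms, distance_km):
    floor_ms = OVERHEAD_MS + distance_km * PROP_US_PER_KM / 1000
    return latency_ms >= floor_ms

buckets_ms = [0.001, 0.01, 0.1, 1, 10, 100, 1000]
distances_km = [0.001, 0.01, 0.1, 1, 10, 100, 1000]
for d in distances_km:
    row = ["Y" if feasible(b, d) else "-" for b in buckets_ms]
    print(f"{d:>8} km: {' '.join(row)}")
```

With any non-trivial fixed overhead, everything at or below the 1ms column is infeasible at every distance, which is exactly the supply-side concern above.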

[Chart: heatmap of latency buckets vs distance]

The question for me is - are the three or four "battleground" blocks really that valuable? Is the 2-dimensional Goldilocks zone of not-too-distant / not-too-close and not-too-short / not-too-long really that much of a big deal?

And that's without considering the third dimension of throughput rate. It's one thing having a low-latency "stop the robot now!" message, but quite another doing hyper-realistic AR video for a remote-controlled drone or a long session of "tactile Internet" haptics for a game, played indoors at the edge of a cell.

If you take all those $trillions that people seem to believe are 5G-addressable, what % lies in those areas of the chart? And what are the sensitivities to coverage and pricing, and what substitute risks apply - especially private networks rather than MNO-delivered "slices" that don't even exist yet?

Examples

Here are some more examples of timing needs for a selection of applications and devices. Yes, we can argue some of them, but that's not the point - it's that this supposed magic range of 1-100 milliseconds is not obviously the source of most "industry transformation" or consumer 5G value:

  • Sensors on elevator doors may send sporadic data, to predict slowly-worsening mechanical problems – so an engineer might be sent a month before the normal maintenance visit. Similarly, sensors monitoring a building’s structural condition, vegetation cover in the Amazon, or oceanic acidity aren’t going to register much change month-by-month.
  • A car might download new engine-management software once a week, and upload traffic observations and engine-performance data once a day (maybe waiting to do it over WiFi, in the owner’s garage, as it's not time-critical).
  • A large oil storage tank, or a water well, might have a depth-gauge giving readings once an hour.
  • A temperature sensor and thermostat in an elderly person’s home, to manage health and welfare, might track readings and respond with control messages every 10 minutes. Room temperatures change only slowly.
  • A shared bicycle might report its position every minute – and unlock in under 10 seconds when the user buys access with their smartphone app
  • A payment or security-access tag should check identity and open a door, or confirm a transaction, in a second or two.
  • Voice communication seems laggy with anything longer than 200 millisecond latency.
  • A networked video-surveillance system may need to send a facial image, and get a response in 100ms, before the person of interest moves out of camera-shot.
  • An online video-game ISP connection will be considered “low ping” at maybe 50ms latency.
  • A doctor’s endoscope or microsurgery tool might need to respond to controls (and send haptic feedback) 100 times a second – ie every 10ms
  • Teleprotection systems for high-voltage utility grids can demand 6-10ms latency times
  • A rapidly-moving drone may need to react in 2-3 milliseconds to a control signal, or a locally-recognised risk.
  • A sensitive industrial process-control system may need to be able to respond in 10s or 100s of microseconds to avoid damage to finely-calibrated machinery
  • Image sensors and various network sync mechanisms may require response times measured in nanoseconds
  • Photon sensors for various scientific uses may operate at picosecond durations
  • Ultra-fast laser pulses for machining glass or polymers can be measured in femtoseconds
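A rough bucketing of those examples by order of magnitude (the timing figures below are my indicative values taken from the list above, not measured data) shows how few actually sit in the much-hyped 1-100ms band:

```python
# Bucket the example applications by order of magnitude (in seconds),
# and flag which fall inside the 1-100 millisecond "battleground".
import math

examples_s = {
    "structural-health sensor": 30 * 24 * 3600,  # ~monthly change
    "car software update": 24 * 3600,            # daily upload
    "tank depth gauge": 3600,                    # hourly reading
    "welfare thermostat": 600,                   # every 10 minutes
    "shared-bicycle unlock": 10,
    "door access / payment": 1,
    "voice-call lag threshold": 0.2,
    "video-surveillance match": 0.1,
    "low-ping game connection": 0.05,
    "endoscope haptics": 0.01,
    "grid teleprotection": 0.008,
    "drone control": 0.003,
    "industrial process control": 0.0001,
    "network sync": 1e-9,
}

for name, t in sorted(examples_s.items(), key=lambda kv: kv[1]):
    marker = "*** 1-100ms band ***" if 0.001 <= t <= 0.1 else ""
    print(f"10^{math.floor(math.log10(t)):>3} s  {name:28s} {marker}")
```

On these (debatable) figures, only around a third of the examples land in the band that 5G URLLC marketing revolves around - the point of the heatmap exercise.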

Conclusion

Latency is important, for application developers, enterprises and many classes of IoT device and solution. But we have been spectacularly vague at defining what "low-latency" actually means, and where it's needed.

A lot of what gets discussed in 5G and edge-computing conferences, webinars and marketing documents is either hyped, or is likely to remain undeliverable. A lot of the use-cases can be adequately serviced with 4G mobile, Wi-Fi - or a person on a bicycle delivering a USB memory stick.

What is likely is that average latencies will fall with 5G. An app developer that currently expects a 30-70ms latency on 4G (or probably lower on Wi-Fi) will gradually adapt to 20-40ms on mostly-5G networks and eventually 10-30ms. If it's a smartphone app, they likely won't use URLLC anyway.

Specialised IoT developers in industrial settings will work with specialist providers (maybe MNOs, maybe fully-private networks and automation/integration firms) to hit more challenging targets, where ROI or safety constraints justify the cost. They may get to 1-3ms at some point in the medium term, but it's far from clear they will be contributing massively to MNOs' or edge-providers' bottom lines.

As for wide-area URLLC? Haptic gaming from the sofa on 5G, at the edge of the cell? Remote-controlled drones with UHD cameras? Two cars approaching each other on a hill-crest on a country road? That's going to be a challenge for both demand and supply.

Saturday, April 18, 2020

Rethinking wireless networks for post-COVID19 Smart Buildings

For the past month or so, I've been thinking about the longer-term technology, policy and business trends that might emerge in the wake of the current pandemic. I'm especially interested in those that could directly or indirectly affect the use and deployment of networks and communications.

I wrote up my initial scenarios for what might lie ahead for the telecoms industry in the recent STL Partners report on COVID-19: Now, Next & After (link), and also discussed them on the STL webinar on the same topic (link). The next update webinar is on May 6th - link. 

I've also participated in other client webinars and podcasts on campus networks (link), private 4G/5G (link) and Wi-Fi6E (link) recently - and I always include a section considering the pandemic's impact. (Any market analysis or opinion formed more than 2 months ago now needs to be reconsidered in the light of the pandemic and coming economic recession).

With that in mind, one area I've started thinking about is that of in-building wireless and smart buildings, especially relating to business locations. (Residential coverage of cellular and better home broadband / Wi-Fi is also top-of-mind, but I'll tackle that separately another time).

Obviously, offices and shopping malls are currently empty in much of the world, but eventually they will return to regular use, to some level at least. Even buildings sadly vacated by companies that cannot survive the economic impact will likely gain new tenants and uses.  


Making buildings pandemic-proof 

We already have building codes and regulations to protect us against fire risks, and even earthquakes. For fires, we have sensors, alarms, fire escapes, drills, signage and so on. In parts of the world there are specific rules governing indoor coverage for public safety radios, and they are being updated as agencies upgrade from P25 / TETRA systems to 4G / 5G critical-communications cellular alternatives.

So what else would it take to make a building "pandemic-proof"? I'm especially interested in how we manage social distancing - both during the next phase of recovery and a gradual return to near-normal when a vaccine becomes available, but also during possible future waves of COVID or entirely new outbreaks.

In the wake of the 2008 financial crisis there was a big focus on banks' transparency, financial stability and regulatory "stress tests". I'd be very surprised if equivalent changes don't take place over the next few years - especially as many coronavirus infections are understood to occur indoors.

I've found various articles about smart buildings and the pandemic already, where the main focus seems to be on general hygiene and infection control. Thermal cameras (perhaps combined with facial recognition) can automate detection of people with fevers. LED lights can provide disinfection in some cases, and bathroom sensors can help enforce hand-washing. Remote access to building-management systems allows facilities personnel to work from home. Better management of temperature and humidity may reduce the survival time of viruses and bacteria.

I can imagine a range of strategies being adopted in coming years:
  • Temperature-detection and hygiene management, as above.
  • Ability for remote building-management wherever possible
  • Design guidelines for wider corridors and stairways, better ventilation, virus-unfriendly surfaces, automated doors rather than handles, and so on
  • Ways to impose, measure and enforce social-distancing rules in emergencies - for example by dynamically lowering the maximum number of permitted people in enclosed spaces, or using digital signs to make corridors or aisles into one-way systems.
  • Use of sensors to measure occupancy, density and flow of people, and control entry/exit better
  • Automated disinfection systems or processes (maybe using robots)
  • Use of occupants' / visitors' phones or other devices to help them navigate / work more safely
  • Ability for authorities to use cameras, admission-control and other data for contact-tracing purposes (subject to emergency laws on privacy etc).
Clearly, not all of these can apply to all buildings - and there is obviously a huge spectrum of venue types with different requirements. A supermarket is different to an office block, a corner-shop, a factory or warehouse full of robots. Older buildings are not likely to be able to widen corridors, while a "cube farm" has more flexibility. 

But what that means is that in a future outbreak, a government could say: "Workplaces certified to standard PNDMC-A can remain open, if they reduce occupancy to X, Y & Z metrics. PNDMC-B locations must comply with emergency rules A, B & C. All others must close."
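As a purely hypothetical illustration of how such tiered rules might be encoded (the tier names follow my invented PNDMC example above, and the occupancy thresholds are arbitrary):

```python
# Hypothetical sketch of tiered emergency rules for certified buildings:
# PNDMC-A sites may stay open at reduced occupancy, PNDMC-B sites face
# tighter restrictions, and uncertified sites must close. Tier names and
# thresholds are invented for illustration only.
def emergency_status(cert_tier, occupancy, capacity):
    density = occupancy / capacity
    if cert_tier == "PNDMC-A":
        return "open" if density <= 0.5 else "reduce occupancy"
    if cert_tier == "PNDMC-B":
        return "open-restricted" if density <= 0.25 else "close"
    return "close"

print(emergency_status("PNDMC-A", 40, 100))  # open
print(emergency_status("PNDMC-B", 40, 100))  # close
print(emergency_status(None, 10, 100))       # close
```

The point is less the specific thresholds than that machine-readable certification tiers plus live occupancy data would let such rules be applied automatically, per building.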

Clearly, those types of rules will incentivise building owners and developers to upgrade their sites wherever possible. While it is too early to guess exactly how the specific regulations might be formulated, there are nonetheless some initial ideas and steps to think through.


The role of networks

Given my own focus on mobile and wireless systems, a key theme immediately leaps to my mind: many of these techniques and practices will require better and wider indoor connectivity than is common today in many places. 

While some building-management systems will be based on wired connections (not least as they'll need cables for power anyway), I expect wireless networks to be extremely important for much of this.

I see wireless networks being employed both indirectly (for connection of sensors, cameras or other devices, such as smartphones used for distancing apps) and directly by using the network itself as a sensing and measurement tool. Indoor mapping and positioning will be needed in tandem with wireless for various use-cases.

There are particular challenges and opportunities for indoor wireless systems here:
  • There will be a need to support both public networks (for indoor use of nationwide MNO networks and services) and localised private wireless, for the building or company's own needs.
  • Almost inevitably, both 3GPP cellular (4G/5G) and Wi-Fi (5/6/7) will be essential for different use-cases and device types, plus public-safety wireless such as TETRA. In some cases, additional technologies such as Bluetooth Low Energy, ZigBee or proprietary systems will also be required. 
  • All of this will occur while major transitions to 5G (at different frequencies) and private cellular networks are ongoing in coming years.
  • Any real-time mobile app, whether it is giving alerts, or uploading updates on location, will be dependent on good wireless connectivity, either via Wi-Fi or in-building cellular connections
  • Proximity-based apps (for instance using Bluetooth) will risk false-positives if they are not integrated with building location and indoor-mapping systems. You can safely stay 2 metres from someone infected, if there is a wall or floor/ceiling between you.
  • IoT systems such as disinfectant robots will also need access to indoor maps and granular positioning technology.
  • Next-generation networks such as private/campus 5G and also recent Wi-Fi meshes have improved wireless-positioning abilities. This could allow both real-time and reported proximity-monitoring - as well as enabling remote working & even "lights out" full automation in industrial settings
  • Both Wi-Fi and cellular networks can work out how many devices/users are not just connected, but detected, even if they do not attempt - or are not permitted - to connect to a given system. That could yield good data on user-density, especially if they are personal devices such as smartphones.
  • Wi-Fi enhancements already enable motion-detection - which can be considerably more accurate than traditional infra-red, and also work through walls. One technology innovator here is Cognitive Systems (link) but there are others as well. I've also seen suggestions that future 5G variants may be able to do something similar, if deployed with small cells. (I'm not sure how it would work with other in-building shared networks, though).
  • Potential to use localised cell-broadcast messaging, or Wi-Fi hotspot captive-portal pages, to distribute public health information and advice
  • There may be a growing need to align the indoor wireless network(s) with nearby outdoors connectivity, or link multiple buildings together well. Campus networks are already growing in importance for multiple reasons (link) and social-distancing and control adds another set of use-cases. (Consider private/public spaces such as courtyards, rooftop bars, parking lots and so on).
  • The use of virtualised radio networks (or specific variants such as OpenRAN) could also prove valuable here - for instance to enable operators to scale up/down capacity dedicated to indoor 5G wireless systems, or switch radio VNFs between indoor and outdoor coverage. (This goes far beyond pandemic-proofing and I will write about it another time). 
  • Neutral-host indoor wireless systems will be able to onboard new tenant networks such as public safety, or private building management networks, depending on future requirements and spectrum licensing policy.
  • There may be edge-computing requirements driven by pandemic-proofing, although that doesn't necessarily imply on-prem or very granular nearby edge facilities - metro-level may well suffice.
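As a sketch of the "detected, not just connected" density idea in the list above (device IDs and sightings here are invented; a real system would also have to handle MAC randomisation and privacy rules):

```python
# Sketch: estimating occupancy per zone from devices *detected* (not
# necessarily connected) by Wi-Fi or cellular infrastructure in a time
# window. Device IDs below are placeholders for illustration.
from collections import defaultdict

def occupancy_by_zone(sightings, window_start, window_end):
    """sightings: iterable of (timestamp, zone, device_id) tuples."""
    seen = defaultdict(set)
    for ts, zone, dev in sightings:
        if window_start <= ts < window_end:
            seen[zone].add(dev)  # a set, so repeat sightings count once
    return {zone: len(devs) for zone, devs in seen.items()}

sightings = [
    (100, "lobby", "dev-a"), (110, "lobby", "dev-b"),
    (115, "lobby", "dev-a"),                             # repeat sighting
    (120, "floor-2", "dev-c"), (500, "lobby", "dev-d"),  # 500 is outside window
]
print(occupancy_by_zone(sightings, 0, 200))  # {'lobby': 2, 'floor-2': 1}
```

Feeding this kind of rolling per-zone count into the building-management system is what would let density thresholds or one-way-flow rules be enforced dynamically.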
This is still just a very rough draft of my ideas - and clearly there are various policy / regulatory hypotheses here as well as technology direction. I'm not a specialist on building regulations, so it's quite possible I've made unreasonable assumptions. But this is intended as the start of a discussion, rather than a definitive forecast. I expect this topic and more detailed discussion to surface in coming months and years.

Your comments are very welcome - and if you want to get in touch with me directly, please connect on my LinkedIn, or send me a Twitter DM. If you're hosting any webinars, or holding internal brainstorms on this, I'd be very interested in participating.


Friday, January 03, 2020

Predictions for the next decade: looking out to 2030 for telecoms, wireless & adjacent technologies


It's tempting to emulate every other analyst & commentator and write a list of 2020 predictions of success and failure. In fact, I got part-way into a set of bulletpoints about what’s overhyped and underhyped. 

But to be honest, if you read my articles and tweets, you probably know what I think about 2020 already. Private cellular networks will be important (4G, initially). 5G fixed wireless is interesting and will grow the FWA market - but won't replace fibre. 5G is Just Another G and is overhyped, especially until the new core matures. RCS is still a worthless zombie, eating brains. But I don't need to repeat all this in detail, just because I'm a bit more sharp-worded than most observers. It wouldn't tell you much new.

But seeing as I spend a fair amount of time advising clients about the longer-term future, 5-10 years out or even further, I thought I'd set my sights higher. I use the term "telco-futurism" to look at the impacts of technology and broader society on telecoms, and vice versa.

So, at the start of the 2020s, what about the next decade? Assuming I haven't retired to my palatial Mars-orbiting private Moon in 10 years' time, what do I think I'll be writing, podcasting (or neural-transmitting) about in 2030?

So, let's have a few shots at this more-distant target...

  • 6G: In 2030, the first 6G networks are already gaining traction in the marketplace. The first users are still fixed connections to homes, and personal devices that look a bit similar to phones and wearables, but with a variety of new display and UI technologies, including contact lenses and advanced audio/haptic interfaces. 6G represents the maturing of various 5G concepts (such as the new core), plus greater intelligence to allow efficient operation. 
  • Details, details: Much of the 2020s will have been spent dealing with numerous "back-office" problems that have stopped many early 5G visions becoming real. Network-slicing will have thrown up huge operationalisation and security issues. Dealing with QoS/slice roaming or handoff, at borders between networks (outdoor / indoor / private / neutral / international) will be hugely complex. Edge computing scenarios will turn out to need local peering or interconnection points. All of these will have huge extra complexities with billing, pricing and monitoring. mmWave planning and design tools will need to have matured, as well as the processes for installation and operation. Training and skills for all of this will have been time-consuming and expensive - we'll need hundreds of thousands of experts - often multi-domain experts. By the time all these issues get properly fixed, 6G radios and vendors will exploit them, rather than the "legacy 5G" infrastructure. See this post for my discussion about the telecom industry's problems with accurate timelines.
  • Device-Network cooperation: By 2030, mobile ecosystems and control software will break today's silos between radio network, devices and applications much more effectively. Sensors in users' devices, cell-towers and elsewhere will be linked to AI which works out how, why and where people or IoT objects need connectivity and how best to deliver it. Recognise a moving truck with machine-vision, and bounce signals off it opportunistically. Work out that someone is approaching the front of a building, and pre-emptively look for Wi-Fi, or negotiate with the in-building neutral host on a marketplace before they enter the door. Spot behavioural patterns such as driving the same route to work, and optimise connectivity accordingly. Recognise a low battery, and tweak the "best-connected" algorithm for power efficiency, and downrate apps' energy demand. Integrate with crowd-flow patterns or weather forecasts. There will be thousands of ways to improve operations if networks stop just thinking of a "terminal" as just an endpoint, and look for external sources of operational data - that's a 20th Century approach. Expect Google's work on its Fi MVNO & Android/Pixel phones, and similar efforts by Samsung and maybe Apple, Qualcomm and ARM, to have driven much of this cross-domain evolution.
  • Energy-aware networks: Far more energy-awareness will be designed into all aspects of the network, cloud and device/app ecosystem. I'm not predicting some sort of monolithic and integrated cascading-payments system linked into CO2-taxes, but I expect "energy budget" to be linked much more closely to costs (including externalities) in different areas. How best to optimise wired/wireless data for power demand, where best to charge devices, "scavenging" for power and so on. Maybe even "nudge" people to lower-energy applications or consumption behaviours by including "power-shaming" indicators. If 3GPP and governments get their act together, as well as vendors & CSPs, overall 6G energy use will be a higher priority design-goal than throughput speed and latency.
  • Wi-Fi: We'll probably be on Wi-Fi 9 by 2030. It will continue to dominate connectivity inside buildings, especially homes and business premises with FTTX broadband (i.e. most of them in developed markets). It will continue to be used for primary connectivity on high-throughput / low-margin / low-mobility devices like TVs and display screens, PC-type devices, AR/VR headsets and so on. It will be bonded together with 5G/6G and other technologies with ever-better multi-path mechanisms, including ad-hoc device meshes. Ease of use will have improved, with the success of approaches like OpenRoaming. Fairly little public Wi-Fi will be delivered by "service providers" as we think of them today.  We'll probably still have to suffer the "6G will kill Wi-Fi" pundit-pieces and hype, though.
  • Spectrum: The spectrum world changes slowly at a global level, thanks to the glacial 4-year cycle of ITU WRCs. By 2030 we will have had 2023 and 2027 conferences, which will probably harmonise more spectrum for 5G/6G, satellites & high-altitude platforms (HAPS) and Wi-Fi type unlicensed use. The more interesting developments will occur at national / regional levels, below the ITU's role, in how these bands actually get released / authorised - and especially whether that's for localised or shared usage suitable for private networks and other innovators. By 2030 we should have been through 2+ cycles of US CBRS and UK/Germany/Japan/France style local licensing experiments, allocation methods, databases and sensing systems. I think we'll be closer to some of the "spectrum-as-a-service" models and marketplaces I've been discussing over the last 24 months, with more fluid resale and temporary usage permits. International allocations will still differ though. We will also see whether other options, such as "national licenses with lots of extra conditions" (eg MVNO access, rural coverage, sharing, power use etc) have helped maintain today's style of MNOs, despite the grumbling. We will also see much more opportunism and flexibility in band support in silicon/devices, and more sophisticated approaches to in-band sharing between different technologies. I'm less certain whether we will have progressed much with commercialisation of mmWave bands 20-100GHz, especially for mobile and indoor use. It's possible and we'll certainly see lots of R&D, but the practicalities may prove insuperable for wide usage.
  • Private/neutral cellular: Today, there are around 1,000 MNOs globally (public and private). By 2030, I'd expect there to be between 100,000 and a million networks, probably with various new types of service provider, aggregation hubs and consortia. These will span industrial, city, office, rural, utility, "public venue" and many other domains. It will be increasingly hard to distinguish private from public, eg with MNOs' campus networks with private cores and hybrid public/private spectrum. We might even get another zero, if the goals of making private 4G/5G as easy and cheap to build as Wi-Fi prove feasible, although I have doubts. Most of these networks will be user-specific, but a decent fraction will be multi-tenant, either offering wholesale access or roaming to "legacy MNOs" as neutral hosts, or with some sort of landlord model such as a property company running a network with each occupied floor or building on campus as a "semi-private" network. Some such networks will look like micro-telcos (eg an airport providing access to caterers & airlines) and will need billing, management & security tools - and perhaps new forms of regulation. This massive new domain will help catalyse various shifts in the vendor community as well - especially cloud-native core and BSS/OSS, and probably various forms of open RAN, and also "neutral edge".
  • Security & privacy: I'm not a security expert, so I hesitate to imagine the risks and responses 10 years out. Both good and bad guys will be armed to the teeth with AI. We'll see networks attacked physically as well as logically. We'll see sophisticated thefts of credentials and what we quaintly term "secrets" today. There will be cameras and mics everywhere. Quantum threats may compromise encryption - and other quantum tools may enhance it, as well as provide new forms of identity and authentication. We will need to be wary of threats within core networks, especially where orchestration and oversight is automated. I think we will be wise to avoid "monocultures" of technologies at various levels of the network - we need to trade off efficiency and scale vs. resilience.
  • Satellite / HAPS: We'll definitely have more satellite constellations by 2030, including some huge ones from SpaceX or others. I have my doubts that they will be "game-changers" in terms of our overall broadband use, except in rural/remote areas. They won't have the capacity of terrestrial networks, and signals will struggle with indoor penetration and uplink from anything battery-powered. Vehicles, planes, boats and remote IoT will be much better-connected, though. Space junk & cascading-collision scenarios like the movie Gravity will be a worry. I'm not sure about drones and balloons as HAPS for mass-market use, although I suspect they'll have some cool applications we don't know today.
  • Cloud & edge: Let's get one thing clear - the bulk of the world's computing cycles & data storage will continue to occur in massive datacentres (perhaps heading towards a terawatt of aggregate power by 2030) and on devices themselves, or nearby gateways. But there will be a thriving mid-market of different sorts of "edge" as I've covered in many posts and presentations recently. This will partly be about low-latency, but not as much as most people think. It will be more about saving mass data-transport costs, protecting "data sovereignty" and perhaps optimising energy consumption. A certain amount will be inside telcos' networks, but without localised peering / aggregation this will be fairly niche, or else it will be wholesaled out to the big cloud players. There will be a lot of value in the overall orchestration of compute tasks for applications between multiple locations in the ecosystem, from chip-level to hyperscale and back again. The fundamental physical quantum of much edge compute will be mundane: a 40ft shipping container, plonked down near sources of power and fibre.
  • Multi-network: We should expect all connectivity to be "software-defined" and "multi-network". Devices will have lots of radios, connecting simultaneously, with different paths and providers (and multiple eSIM / other identities). Buildings will have multiple fibres, wireless connections and management tools. Device-to-device connections and relaying will be prevalent. IoT will use a selection of LPWAN technologies as well as Wi-Fi, cellular and short-range connections. Satellite and maybe LiFi (light-based) connections will play new roles. Arbitrage, bonding, load-balancing will occur at multiple levels from silicon to OS to gateway to mid-network. Very few things will be locked to a single network or provider - unless it has unique value such as managed security or power consumption.
  • Voice & messaging: Telephony will be 150yo in 2026. By 2030 we'll still be making some retro-style "phone calls" although it will seem even more clunky, interruptive, unnatural and primitive than today. (It won't stop the cellular industry spending billions upgrading to Vo6G though). SMS won't have disappeared, either. But most consumers will communicate through a broad variety of voice and video interaction models, in-app, group-based, mediated by an array of assistants, and veracity-checked to avoid "fake voice" and man-in-the-middle attacks of ever increasing subtlety. Another 10 years of evolution beyond emojis, stories, filters and live broadcasts will allow communication which is expressive, emotion-first, and perhaps even richer and more nuanced than in-person body language. I'm not sure about AR/VR comms, although it will still be more important than RCS which will no doubt be celebrating its 23rd year of irrelevance, hype and refusal to die.
  • Enterprise comms: UCaaS, cPaaS and related collaboration tools will progress steadily, if unspectacularly - although with ever more cloud focus. There will be more video, more AI-enriched experiences for knowledge management, translation, whispered coaching and search. There will be attempts to reduce travel to meetings and events as carbon taxes bite, although few will come close to the in-person experience or effectiveness. We'll still have some legacy phone calls and numbers (as with consumer communications) although these will be progressively pushed to the margins of B2B and E2E interactions. Ever more communications will take place "contextually" - within apps, natively supported in IoT devices, or with AI-based assistants. Contact centres and customer interactions will be battlegrounds for bots and assistants on both sides. ("Alexa, renegotiate my subscription for a better price - you have permission to emulate my voice"). Security and verification will be highly prized - just because something is heard doesn't mean it will match what was originally spoken.
  • Network ownership models: Some networks of today will still look mostly like "telcos" in 2030, but as I wrote in this post the first industry to be transformed by 5G will be the telecom industry itself. We'll see many new stakeholders, some of which look like SPs, some which are private network operators, and many new forms of aggregator, virtual operator, wholesale or neutral mobile/fibre provider. I'm not expecting a major shift back to nationalised or government-run networks, but I think regulations will favour more sharing of assets where it makes sense. Individual industries will take control of their own connectivity and communications, perhaps using standardised 5G, or mild variations of it. There will be major telcos of today still around - but most will not be providing "slices" to companies or offering deep cross-vertical managed services. There will be M&A which means that we'll have a much more heterogeneous telco/CSP market by 2030 than today's 800 identikit national MNOs. Fixed and fibre providers will be diverse as well - especially with the addition of cloud, utility and municipal providers. I think the towerco / property-telco model will be important as asset owners / builders as well.
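The "software-defined, multi-network" idea in the multi-network bullet above can be sketched in a few lines of code. This is a purely hypothetical illustration, not any real device API: the link names, measurements and weights are invented assumptions, but they show how an OS or gateway might arbitrate between radios per traffic class (latency-sensitive vs. power-constrained), as described.

```python
# Hypothetical sketch of policy-driven multi-network path selection.
# All link names, numbers and weights below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    latency_ms: float    # measured round-trip latency
    cost_per_gb: float   # monetary cost of carrying traffic
    power_mw: float      # radio power draw while active

def pick_link(links, weights):
    """Score each available link (lower is better) and return the best.

    The weights express an application's priorities, so the same set of
    radios can yield different choices for different traffic classes."""
    def score(link):
        return (weights["latency"] * link.latency_ms
                + weights["cost"] * link.cost_per_gb
                + weights["power"] * link.power_mw)
    return min(links, key=score)

links = [
    Link("wifi6",   latency_ms=8,  cost_per_gb=0.0, power_mw=120),
    Link("5g",      latency_ms=25, cost_per_gb=2.0, power_mw=300),
    Link("leo_sat", latency_ms=45, cost_per_gb=5.0, power_mw=800),
]

# A latency-insensitive IoT sensor mostly cares about power and cost:
iot_policy = {"latency": 0.1, "cost": 1.0, "power": 0.05}
print(pick_link(links, iot_policy).name)  # → wifi6
```

In practice the real arbitration would happen at several layers at once (silicon, OS, gateway, mid-network, as the bullet notes) and would also handle bonding and failover, not just single-path selection - but the core idea is the same: connectivity becomes a continuous optimisation over multiple available networks rather than a static attachment to one.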
I realise that I could go on at length about many other topics here - autonomous and connected vehicles, the future of cities and socio-political spheres, shifts in entertainment models, the second wave of blockchain/ledgers, the role of human enhancement & biotech, new sources of energy and environmental technology, new forms of regulation and so forth. But this list is already long enough, I think. Several of these topics will also appear in podcasts - which I'm intending to ramp up in 2020. At the moment I'm on SoundCloud (link) but watch out here or on Twitter for announcements of other platforms.

If this has piqued your interest, please comment on my blog or LinkedIn article. This is a vision for 2030, which I hope is self-consistent and reasonable - but it is not the only plausible future scenario.

If you're interested in running a private workshop to discuss, debate and strategise around any of these topics, please get in touch via private message, or information AT disruptive-analysis DOT com. I work with numerous operators, vendors, regulators, industry bodies and investors to imagine the future of networks and other advanced technologies - and steer the path of evolution.

Happy New Year! (and New Decade)