Speaking Engagements & Private Workshops - Get Dean Bubley to present or chair your event


Wednesday, November 25, 2020

Interoperability is often good – but should not be mandated

Note: this post was first published via my LinkedIn Newsletter. Please subscribe (here) & also join the comment & discussion thread on LI

Context: I'm going to be spending more time on telecom/tech policy & geopolitics over the next few months, spanning UK, US, Europe & Global issues. I'll be sharing opinions & analysis on the politics of 5G & Wi-Fi, spectrum, broadband plans, supply-chain diversity & competition.

Recently, I've seen more calls for governments to demand mandatory interoperability between technology systems (or between vendors) as a regulatory tool. I think this would be a mistake - although incentivising interop can sometimes be a good move for various reasons. This is a fairly long post to explain my thinking, with particular reference to Open RAN and messaging.

Background & history

The telecoms industry has thrived on interoperability. Phone calls work from anywhere to anywhere, while handsets and other devices are tested & certified for proper functioning on standardised networks. Famously, interoperability between different “islands” of SMS led to the creation of a huge market for mobile data services, although that didn't happen overnight in many countries.

Much the same is true in the IT world as well, with everything from email standards to USB connections and Wi-Fi certification proving the point. The web and open APIs make it easier for cloud applications to work together harmoniously.

Image source: https://pixabay.com/illustrations/rings-wooden-rings-intertwined-100181/

But not everything valuable is interoperable, and interoperability isn't the only approach. Proprietary and vertically-integrated solutions remain important too.

Many social media and communications applications have very limited touch-points with each other. The largest 4G/5G equipment companies don’t allow operator customers to mix-and-match components in their radio systems. Many IT systems remain closed, without public APIs. Consumers can’t choose to subscribe to network connectivity from MNO A, but telephony & SMS from ISP B, and exclusive content belonging to cable company C.

This isn't just a telecom or IT thing. It’s difficult to get different industrial automation systems to work together. An airline can’t buy an airframe from Boeing, but insist that it has avionics from Airbus. The same is true for cars' sub-systems and software.

Tight coupling or vertical integration between different subsystems can enable better overall efficiency, or more fluid consumer experience - but at the cost of creating "islands". Sometimes that's a problem, but sometimes it's actually an advantage.

Well-known examples of interoperability in a narrow market subset can obscure broader use of proprietary systems in a wider domain. Most voice-related applications, beyond traditional "phone calls", do not interoperate by default. You could probably connect a podcast platform to a karaoke app, home voice assistant and a critical-communications push-to-talk system.... but why would you? (This is one reason why I always take care never to treat "voice" and "telephony" as synonyms).

Hybrid, competitive markets are optimal

So there is value in interoperable systems, and also in proprietary alternatives and niches. Some sectors gravitate towards openness, such as federation between different email systems. Others may create de-facto proprietary approaches - which might risk harmful monopolies, or which may be transferred to become open standards (for instance, Adobe's PDF document format).

And even if something is based on theoretically interoperable underpinnings, it might still not interoperate in practice. Most enterprise Private 4G and 5G networks are not connected to public mobile networks, even though they use the same standards.


Interoperability can be both a positive and negative for security. Open and published interfaces can be scrutinised for vulnerabilities, and third-parties can test anything that can be attached to something else. Yet closed systems have fewer entry points – the “attack surface” may be smaller. Having a private technology for a specific purpose – from a military communications infrastructure to a multiplayer gaming network – may make commercial or strategic sense.

In many areas of technology, we see a natural pendulum swing between open and proprietary approaches. From open flexibility to closed-system optimisation, and back again. Often there are multiple layers of technology, where the pendulum swings with a different cadence for each. Software-isation of many hardware products means a given system might employ multiple layers at the same time.

 Consider this (incomplete and sometimes overlapping) set of scenarios for interoperability:

  • Between products: A device needs to be able to connect to a network, using the right radio frequencies and protocols. Or an electrical plug needs to fit into a standardised socket.
  • Within products or solutions (between components): A product or service can be considered to be just a collection of sub-systems. A computer might be able to support different suppliers’ memory chips or disks, using the same sockets. A browser could support multiple ad-blockers. A telco’s virtualised network could support different vendors for certain functions.
  • Application-to-application / service-to-service: An application can link to, integrate or federate with another - for instance a reader could share this article on their Twitter feed, a mobile user can roam onto another network, or a bank can share data access with an accounting tool.
  • Data portability: Data formats can be common from one system to another, so users can own and move their "state" data and history. This could range from porting a phone number, to moving uploaded photos from one social platform to another.

There’s also a large and diverse industry dedicated to gluing together things which are not directly interoperable – and acting as important boundaries to enforce security, charging or other functions. Session Border Controllers link different voice systems, with transcoders to translate between different codecs. Gateways link Wi-Fi or Bluetooth IoT devices to fixed or wireless broadband backhaul. Connectors enable different software platforms to work together. Mapping functions will eventually allow 5G network slicing to work across core, transport and radio domains, abstracting the complexities at the boundaries.
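The adapter-style "glue" described above can be sketched in a few lines. This is a conceptual illustration only: the function names and message formats are invented, and it does not represent any real SBC or gateway API.

```python
# Illustrative sketch of the "glue" pattern described above. All names and
# message formats here are invented for illustration; this is not a real
# SBC or gateway API.
from typing import Optional

def from_system_a(msg_a: dict) -> dict:
    """Translate System A's hypothetical format into a neutral form."""
    return {"sender": msg_a["from"], "body": msg_a["text"]}

def to_system_b(neutral: dict) -> dict:
    """Translate the neutral form into System B's hypothetical format."""
    return {"origin": neutral["sender"], "payload": neutral["body"]}

def gateway(msg_a: dict, allowed_senders: set) -> Optional[dict]:
    """Bridge A to B while also acting as a policy boundary, much as an
    SBC enforces security or charging at the edge of a voice network."""
    neutral = from_system_a(msg_a)
    if neutral["sender"] not in allowed_senders:
        return None  # boundary role: reject unauthorised traffic
    return to_system_b(neutral)

print(gateway({"from": "alice", "text": "hello"}, {"alice"}))
# prints: {'origin': 'alice', 'payload': 'hello'}
```

The point of the sketch is that the translator sits at the boundary and can do more than translate - it is also the natural place to enforce security, charging or other functions, which is exactly why this "glue" industry exists.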

Added to this is the entire sphere of systems integration – the practice of connecting disparate systems and components together, to create solutions. While interoperability helps SIs in some ways, it also commoditises some of their business.

Coexistence vs. interoperation

Yet another option for non-interoperable systems is to define rules for how they can coexist, without damaging each other’s operation. This is seen in unlicensed or shared wireless spectrum bands, to avoid “tragedies of the commons” where interference would jam all the disparate systems. Even licensed bands can be "technology neutral".

Analogous approaches enable the safe coexistence of different types of road users on the same highway - or in the voice/video arena, technologies such as WebRTC which embed "codec negotiation" procedures into the standards.
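The codec-negotiation idea mentioned above can be illustrated with a toy example. The codec names are real, but the function is a deliberate simplification of WebRTC's SDP offer/answer procedure (it ignores payload types, codec parameters and rejection handling):

```python
# Toy illustration of codec negotiation, in the spirit of WebRTC's SDP
# offer/answer: each side lists codecs in preference order, and the first
# mutually supported codec wins. This is a simplification, not the real
# SDP machinery.

def negotiate(offer_codecs, answer_codecs):
    """Return the first codec in the offerer's preference list that the
    answerer also supports, or None if there is no overlap."""
    supported = set(answer_codecs)
    for codec in offer_codecs:
        if codec in supported:
            return codec
    return None

print(negotiate(["opus", "G722", "PCMU"], ["PCMU", "opus"]))  # prints: opus
```

If there is no overlap at all, the call fails to set up - which is where the transcoders mentioned earlier come in, bridging two endpoints with no common codec.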

Arguably, improving software techniques, automation, containerisation and AI will make such interworking and coexistence approaches even easier in future. The resulting kludginess might not please engineering purists who value “elegance”, but that’s not the way the world works – and certainly shouldn’t be how it’s regulated.

In a healthy and competitive market, customers should be able to choose between open and closed options, understanding the various trade-offs involved, yet be protected from abusive anti-competitive power.

A great example of consumer gains and "generativity" in innovation is that of the Internet itself, which works alongside walled-garden, telco or private-network alternatives to access content and applications.

Customers can have the best of both worlds - accelerated, because of the competitive tensions involved. The only risk is that of monopolies or oligopolies, which requires oversight.

Where does government & regulatory policy fit in this?

This highlights an important and central point: the role of government, and its attitude to technology standards, interoperability and openness. This topic is exemplified by various recent initiatives, ranging from enthusiasm around Open RAN for 5G in the US, UK and elsewhere, to the EU’s growing attempts to force Internet platform businesses to interoperate and enable portability of data or content, as part of its Digital Services Act.

My view is that governments should, in general, let technology markets, vendors and suppliers make their own choices.

It is reasonable that governments often want to frame regulation in ways to protect citizens from monopolists, or risks of harm such as cybersecurity. In general, competition rules are developed across industries, without specific rules about products, unless there is unfair vertical integration and cross-subsidy.

Governments can certainly choose to adopt or even incentivise interoperability for various reasons – but they should not enshrine it in laws as mandatory. If you're a believer in interventionist policies, then incentivising market changes that favour national champions, foster inward investment and increase opportunities can make sense - although others will clearly differ.

(Personally, I think major tranches of intervention and state-aid should only apply to game-changers with huge investment needs - so perhaps for carbon capture technology, or hydrogen-powered aviation).

Open RAN can be incentivised, but should not be mandated

A particular area of focus by many in telecoms is around open radio networks. The O-RAN Alliance and the TIP OpenRAN project are at the forefront, with many genuinely impressive innovations and evolutions occurring. Rakuten's deployment is proving to be a beacon - at least for greenfield networks - while others such as Vodafone are using this architectural philosophy for rural coverage improvements.

Governments are increasingly involved as well - seeing a possible way to meet voters' desires for better/cheaper coverage, while also offsetting perceived risks from concentrations of power in a few large integrated vendors. This latter issue has been pushed further into the limelight by Huawei's fall from favour in a number of countries, which then see a challenge from a smaller number of alternative providers - Nokia, Ericsson and in some cases Samsung and NEC or niche providers.

This combination of factors then gets further conflated with industrial policy goals. For instance, if a country is good at creating software but not manufacturing radios, then Open RAN is an opportunity, that might merit some form of R&D stimulus, government-funded testbeds and so on.

So I can see some arguments for incentives - but I would be very wary of a step to enshrine any specific interop requirements into law (or rules for licenses), or for large-scale subsidies or plans for government-run national infrastructure. The world has largely moved to "tech neutral" approaches in areas such as spectrum awards. In the past, governments would mandate certain technologies for certain bands - but that is now generally frowned upon.

No, message apps should not interoperate

Another classic example of undesirable "forced interoperability" is in messaging applications. I've often heard many in the telecoms industry assert that it would be much better if WhatsApp, iMessage, Telegram, Snap - and of course the mobile industry's own useless RCS standard - could interconnect. Recently, some government and lobbying groups have suggested much the same, especially in Brussels.

Yet this would instantly hobble the best and most unique features of each - how would ephemeral (disappearing) messages work on systems that keep them stored perpetually? How would an encrypted platform interoperate with a non-encrypted platform? How could an invite/accept contact system interwork with a permissive any-to-any platform? How would a phone-number identity system work with a screen-name one?

... and that's before the real unintended consequences kick in, when people realise that their LinkedIn messages now interoperate with Tinder, corporate Slack and telemedicine messaging functions.

That doesn't mean there's never a reason to interoperate between message systems. In particular, if there's an acquisition it can be useful and important - imagine if Zoom and Slack merged, for instance. Or a gaming platform's messaging might want users to send invitations on social media. I could see some circumstances (for business) where it might be helpful linking Twitter and LinkedIn - but also others where it would be a disaster (I'm looking at you, Sales Navigator spamming tools).

So again - interoperability should be an option. Not a default. And in this case, I see zero reasons for governments to incentivise.

Conclusion

Interoperability between technology solutions or sub-systems should be possible - but it should not be assumed as a default, nor legislated in areas with high levels of innovation. It risks creating lowest common denominators which do not align with users' needs or behaviours. Vertical integration often brings benefits, and as long as the upsides and downsides are transparent, users can make informed trade-offs and choices.

Lock-in effects can occur in both interoperable and proprietary systems. I'll be writing more about the concept of path dependence in future.

Regulating or mandating interoperability risks various harms - not just a reduction in innovation and differentiation, but also unexpected and unintended consequences. Many cite the European standardisation of GSM 2G/3G mobile networks as a triumph - yet the US, Korea, Japan, China and others allowed a mix of GSM, CDMA and local oddities such as iDEN, WiBro and PHS. No prizes for guessing which parts of the world now lead in 5G, although correlation doesn't necessarily imply causation here.

There's also a big risk from setting precedents that could lead to unintended consequences. Perhaps car manufacturers would be next in line to be forced to have open interfaces for all the electronic systems, impacting many automakers' potential revenues. Politicians need to think more broadly. As a general rule, if someone uses the obsolete term "digital" in the context of interop, they're not thinking much at all.

I've written before about the possible risks to telcos from the very "platform neutrality" concept that many have campaigned for. Do they imagine regulators wouldn't notice that many have their own ambitions to be platform providers too?

In my view, an ideal market is made up of a competitive mix of interoperable and proprietary options. As long as abuses are policed effectively, customers should be able to make their own trade-offs - and their own mistakes.



As always - please comment and discuss this. I'll participate in the discussions as far as possible. If you've found this thought-provoking, please like and share on LinkedIn, Twitter and beyond. And get in touch if I can help you with internal advisory work, or external communications or speaking / keynote needs.

Note: this post was first published via my LinkedIn Newsletter. Please subscribe (here) & also join the comment & discussion thread on LI

#5G #openran #regulation #telecom #mobile #interoperability #competition #messaging #voice #innovation


Thursday, October 08, 2020

Platform regulation? Are you *sure*?

There's currently a lot of focus on regulation of technology platforms, because of concerns over monopoly power or privacy/data violations.

It's a central focus of the Digital Services Act proposed by the European Commission

It's under scrutiny as part of the US Congress House Judiciary Committee report on antitrust

Other governments also focus on "platforms", especially Amazon, Facebook, Google, Apple and a few others.

Typically, traditional telcos cheer on these moves against companies they (still!) wrongly refer to as "OTTs".

Yet there's a paradox here. While there are indeed concerns about big-tech monopoly abuse that must be addressed by regulators... they're not the only platforms that could be captured by the law.

I've lost count of the times I've heard "the network as a platform" or "5G is a platform", with QoS, network slicing etc often hyped as the basis for the future economy.

Yet telcos can have as much lock-in as Apple or Amazon. I can't get an EE phone service on my Vodafone mobile connection. I can't port-out my call detail records & online behaviour to a new operator. There's no "smart home portability law" if I sign up to my broadband provider's service. Or slice portability laws for enterprises.
 
On my LinkedIn version of this post [link], a GSMA strategist commented that unbundling some telco services "does not solve a customer pain point". Yet unbundling *does* often enable greater competition, innovation & lower consumer prices. You only have to look at the total lack of innovation in MNO/3GPP telephony & messaging services in the last 20 years to see the negative effects of lock-in & too-tight integration here. (VoLTE is not innovative, and RCS is regressive.)
 
Even more awkwardly, most of the mobile industry is currently using the exact same arguments in its push to get vendors to disaggregate the RAN.
 
Want 5G to be a platform? You'll be subject to the rules too. Be careful what you wish for... 
 
(By the way, I first wrote about this issue 6 years ago. The arguments haven't changed much at all since then: https://disruptivewireless.blogspot.com/2014/07/so-called-platform-neutrality-nothing.html )
 

Wednesday, September 30, 2020

Rakuten 5G launch - quick takes

A quick post, copied from my LinkedIn (link) which is probably where comment / discussion will flow:

I just watched the Rakuten Mobile, Inc. #5G press conference.

Quick takeouts (+see Twitter thread link in comments):

- Rakuten is following Jio in undercutting incumbent MNOs with a greenfield / low-cost infrastructure & lightweight organisation
- Simple consumer-centric plan called Un-Limit V (ie V=5) with some of its own phones. It reckons it's 70% cheaper than rivals
- Big pitch for cloud + #OpenRAN
- Doing sub-GHz with NEC + Intel , plus Qualcomm for #mmWave radios
- Initial 870Mbps, upgraded to 2Gbps in a few months
- Unclear on NSA vs. SA support for new phones & network
- No mention of enterprise, verticals, Industry 4.0 etc. All about entertainment & "experience", with XR, gaming & streaming. Maybe enterprise is via APIs
- New "Big" 5G phone available from today
- I'll politely ignore the RCS-based communicator app

If I was a legacy MNO elsewhere in the world, I'd be nervously looking at my strategy team (& advisors) right now:
- Is enterprise really the key to #5G ?
- Will consolidation 4>3 or 3>2 MNOs just allow in a new greenfield entrant in our market?
- How fast can we reduce our legacy cost base?
- Is our government watching this as well?
- What happens when Rakuten pitches its platform internationally? Could *it* directly enter our market?


See also my Twitter thread with more screenshots & comment: https://twitter.com/disruptivedean/status/1311184039274074112?s=20

Monday, September 28, 2020

Verticals 5G: It's more than just MNOs vs. Private Networks, there's a whole new universe of other service providers too

For the last few years, I've written and spoken extensively about 4G or 5G cellular networks optimised for enterprises, whether that's for a factory, a port, an electricity grid - or even just a medium-sized office building. Recent trends confirm the acceleration of this model.

  • CBRS in the US is growing rapidly, including for local and industrial/utility uses
  • Localised 4G/5G spectrum is now available in UK, Germany, Netherlands, France, Japan and elsewhere, with many new countries examining the options
  • Many campus/dedicated network strategies by traditional mobile operators (MNOs)
  • Assorted testbeds and trials sponsored by governments, groups like 5G ACIA etc.
  • Growing intersections with Open RAN and neutral host models

An inflection point has now been reached.

Enterprise/local cellular is happening, finally

It's been a long time coming. In fact, I've been following the broad concept of enterprise cellular since about 2001, when I first met with a small-cell vendor called ip.access. Around 2005-2009 there was a lot of excitement about local 2G/3G networks, with the UK and Netherlands releasing thin slices of suitable spectrum. A number of organisations deployed networks, although it never hit the mass market, for various reasons.

Now, however, private 4G and 5G is becoming "real". There's a critical mass of enterprises that are seriously interested, as this intersects with ongoing trends around IoT deployment, workforce automation, smart factory / city / building / etc concepts, and the availability of localised spectrum and cloud-based elements like network cores. It's still not easy, but the ingredients are much more accessible and easier to "cook".

A binary choice of MNOs vs enterprise?

But throughout this whole story we've had an underlying narrative of a two-way choice:

  • Enterprises can obtain private / on-premise cellular networks from major MNOs as a service, perhaps with dedicated coverage plus a "slice" of the main macro network and core functions.
  • Enterprises can build their own cellular networks, in the same way they build Wi-Fi or wired ethernet LANs today, or operate their wider private mobile radio (PMR) system.

This is a "false binary". A fallacy that there's only two options. Black & white. Night & day.

In reality, there's a whole host of shades-of-grey - or perhaps a better analogy, multi-coloured dawns and sunsets.

Not just MNOs

There is a lengthening cast-list of other types of service provider that can build, run and sell 4G and 5G networks to enterprises or "verticals" (the quaint & rather parochial term that classical telcos use to describe the other 97% of the economy).

An incomplete list of non-traditional MNOs targeting private mobile networks includes:

  • Fixed and cable operators, especially those which have traditionally had large enterprise customer bases for broadband, VPNs, PBXs / UC, managed Wi-Fi etc.
  • MVNOs wanting to deploy some of their own radio infrastructure to "offload" traffic from their usual host provider in select locations.
  • TowerCos moving up the value chain into private or neutral networks (for instance, Cellnex and Digital Colony / Freshwave)
  • IT services firms affiliated to specific enterprises (for example, HubOne, the IT subsidiary of the company running Paris's airports)
  • Industrial automation suppliers acting as "industrial mobile operators" on behalf of customers (maybe a robot or crane supplier running/owning a local 5G network for a manufacturer or port, as an integral part of their systems)
  • Utility companies running private 4G/5G and providing critical communications to other utilities and sectors (for instance Southern Linc in the US), or perhaps acting as a neutral host, such as a client in Asia that I've advised.
  • Dedicated MNOs for particular industries, such as oil & gas, often in specific regions
  • Municipalities and local authorities deploying networks for internal use, citizen services or as public neutral-host networks for MNOs. The Liverpool 5G testbed in the UK is a good example, while Sunderland's authority is looking at becoming an NHN.
  • Railway companies either for neutral-host along tracks, or acting as FWA service providers in their own right, to nearby homes and businesses.
  • Specialist IoT connectivity providers, perhaps focusing on LPWAN connectivity, such as Puloli in the US.
  • FWA / WISP networks shifting to 4G/5G and targeting enterprises (eg for agricultural IoT)
  • Overseas MNOs without national spectrum in a market, but which want to service multinational enterprise clients' sites and offices. Verizon is looking at private cellular in the UK, for instance - and it wouldn't surprise me if Rakuten expands its footprint outside Japan.
  • Property and construction companies, especially for major regeneration districts or whole new smart-city developments.
  • UC/UCaaS and related voice & communications-centric enterprise SPs, such as Tango Networks with CBRS
  • Universities creating campus networks for students, or other education/research organisations servicing students, staff and visitors
  • Major cloud providers creating 4G / 5G networks for a variety of use-cases and enterprise groups - Amazon and Google are both tightly involved (albeit opaquely, beyond Google's SAS business), while Microsoft's acquisition of Metaswitch points to cloud-delivered private 5G, albeit perhaps not with spectrum and RAN managed itself.
  • Tourism and hospitality service providers providing connectivity solutions to hotels or resorts - although that's probably taking a backseat given economic & pandemic woes.
  • Broadcasters, event-management and content-production companies deploying private networks on behalf of sports and entertainment venues, festivals
  • Dozens more options - I'm aware of numerous additional categories and more will inevitably emerge in coming years. Ask me for details.

Conclusion: beyond the MNO/Enterprise binary fallacy

You get the picture. The future of 4G / 5G isn't just going to split between traditional "public mobile operators" (typically the GSMA membership) vs. individual enterprises creating DIY networks. There will be an entire new universe of SPs of many different types.

You can call them "new telcos", "Specialist Wireless SPs", "Alternative Mobile Operators" or create assorted other categories. Many will be multi-site operators. Some may be regional or national.

We will see MNOs set up divisions that look like these new SP types, or perhaps acquire them. Some vendors will become quasi-SPs for enterprise, too. This is a hugely dynamic area, and trying to create fixed buckets and segments is a fool's errand.


Understanding this new and heterogeneous landscape is critical for enterprises, policymakers, vendors and investors - as well as traditional MNOs. I've been saying for years that "telecoms is too important to be left to the telcos", and it appears to be becoming true at a rapid pace.

Many in the mobile industry assert that 5G will transform industries. In many cases it will.... but the first industry to get transformed is the mobile industry itself.

This newsletter & my services

Thanks for reading this article. If you haven't subscribed to my LinkedIn Newsletter updates, please look for the "subscribe" button here. If it has resonated, please like this post and share it with others, either on LinkedIn or on other channels.

If you have a relevant interest in this and related topics around the future of telecoms and technology, please connect with me. (But no spammers and "lead generation" people, please).

I do advisory projects, strategy workshops and brainstorms, or real/virtual speaking engagements on the 5G, spectrum, private network and broader "telecom futurism" space. Drop me a message about how I can help you.

Tuesday, September 15, 2020

Low-latency and 5G URLLC - A naked emperor?

Originally published as a LinkedIn Newsletter Article - see here

I think the low-latency 5G Emperor is almost naked. Not completely starkers, but certainly wearing some unflattering Speedos.

Much of the promise around the 5G – and especially the “ultra-reliable low-latency” URLLC versions of the technology – centres on minimising network round-trip times, for demanding applications and new classes of device.


 

Edge-computing architectures like MEC also often focus on latency as a key reason for adopting regional computing facilities - or even servers at the cell-tower. Similar justifications are being made for LEO satellite constellations.

The famous goal of 1 millisecond time is often mentioned, usually in the context of applications like autonomous vehicles with snappy responses, AR/VR headsets without nausea, cloud-gaming, the “tactile Internet” and remote drone/robot control.

(In theory this is for end-to-end "user plane latency" between the user and server, so includes both the "over the air" radio and the backhaul / core network parts of the system. This is also different to a "roundtrip", which is there-and-back time).
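To make the distinction above concrete, here is a toy latency budget. All the component figures are invented for illustration; the point is simply that end-to-end latency is the sum of several segments, and a round trip doubles it:

```python
# Illustrative (made-up) latency budget, to show why the end-to-end
# figure is larger than the over-the-air number alone. Every value is an
# assumption for illustration, not a measurement.

budget_ms = {
    "radio (over the air)": 4.0,
    "backhaul / transport": 3.0,
    "core network": 1.0,
    "edge server processing": 2.0,
}

one_way = sum(budget_ms.values())
round_trip = 2 * one_way  # "there-and-back" time, as distinguished above

print(f"one-way: {one_way} ms, round-trip: {round_trip} ms")
# prints: one-way: 10.0 ms, round-trip: 20.0 ms
```

Even with an optimistic radio figure, the other segments mean the famous 1ms target applies only if every element of the chain is squeezed - which is precisely why edge computing gets invoked to shorten the server leg.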

Usually, that 1ms objective is accompanied by some irrelevant and inaccurate mention of 20 or 50 billion connected devices by [date X], and perhaps some spurious calculation of trillions of dollars of (claimed) IoT-enabled value. Gaming usually gets a mention too.

I think there are two main problems here:

  • Supply: It’s not clear that most 5G networks and edge-compute will be able to deliver 1ms – or even 10ms – especially over wide areas, or for high-throughput data.
  • Demand: It’s also not clear there’s huge value & demand for 1ms latency, even where it can be delivered. In particular, it’s not obvious that URLLC applications and services can “move the needle” for public MNOs’ revenues.

Supply

Delivering URLLC requires more than just “network slicing” and a programmable core network with a “slicing function”, plus a nearby edge compute node for application-hosting and data processing, whether that’s in the 5G network (MEC or AWS Wavelength) or some sort of local cloud node like AWS Outpost. That low-latency slice needs to span the core, the transport network and, critically, the radio.

Most people I speak to in the industry look through the lens of the core network slicing or the edge – and perhaps IT systems supporting the 5G infrastructure. There is also sometimes more focus on the UR part than the LL part, even though the two actually have different enablers.

Unfortunately, it looks to me as though the core/edge is writing low-latency checks that the radio can’t necessarily cash.

Without going into the abstruse nature of radio channels and frame-structure, it’s enough to note that ultra-low latency means the radio can’t wait to bundle a lot of incoming data into a packet, and then get involved in to-and-fro negotiations with the scheduling system over when to send it.

Instead, it needs to have specific (and ideally short) timed slots in which to transmit/receive low-latency data. This means that it either needs to have lots of capacity reserved as overhead, or the scheduler has to de-prioritise “ordinary” traffic to give “pre-emption” rights to the URLLC loads. Look for terms like Transmission Time Interval (TTI) and grant-free UL transmission to drill into this in more detail.
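To see why the transmission interval matters, it helps to look at 5G NR's numerologies. Per 3GPP TS 38.211, subcarrier spacing scales as 15 * 2^mu kHz, and slot duration shrinks as 1/2^mu ms, so higher numerologies give the scheduler shorter transmission opportunities (mini-slots shorten them further still):

```python
# 5G NR numerologies (3GPP TS 38.211): for numerology mu, the subcarrier
# spacing is 15 * 2^mu kHz and a slot lasts 1 / 2^mu ms. Shorter slots
# are one enabler of lower air-interface latency, alongside mini-slots
# and grant-free uplink transmission.

for mu in range(4):
    scs_khz = 15 * 2**mu
    slot_ms = 1 / 2**mu
    print(f"mu={mu}: SCS={scs_khz} kHz, slot={slot_ms} ms")
# prints:
# mu=0: SCS=15 kHz, slot=1.0 ms
# mu=1: SCS=30 kHz, slot=0.5 ms
# mu=2: SCS=60 kHz, slot=0.25 ms
# mu=3: SCS=120 kHz, slot=0.125 ms
```

The catch, as discussed above, is that the higher numerologies are mostly used in mid-band and mmWave spectrum, so the shortest slots arrive bundled with the coverage and coexistence problems described in the following paragraphs.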

It’s far from clear that on busy networks, with lots of smartphone or “ordinary” 5G traffic, there can always be a comfortable coexistence of MBB data and more-demanding URLLC. If one user gets their 1ms latency, is it worth disrupting 10 – or 100 – users using their normal applications? That will depend on pricing, as well as other factors.

This gets even harder where the spectrum used is a TDD (time-division duplexing) band, where there’s also another timeslot allocation used for separating up- and down-stream data. It’s a bit easier in FDD (frequency-division) bands, where up- and down-link traffic each gets a dedicated chunk of spectrum, rather than sharing it.

There’s another radio problem here as well – spectrum license terms, especially where bands are shared in some fashion with other technologies and users. For instance, the main “pioneer” band for 5G in much of the world is 3.4-3.8GHz (which is TDD). But current rules – in Europe, and perhaps elsewhere - essentially prohibit the types of frame-structure that would enable URLLC services in that band. We might get to 20ms, or maybe even 10-15ms if everything else stacks up. But 1ms is off the table, unless the regulations change. And of course, by that time the band will be full of smartphone users using lots of ordinary traffic. There may be some Net Neutrality issues around slicing, too.

There's a lot of good discussion - some very technical - on this recent post and comment thread of mine: https://www.linkedin.com/posts/deanbubley_5g-urllc-activity-6711235588730703872-1BVn

Various mmWave bands, however, have enough capacity to be able to cope with URLLC more readily. But as we already know, mmWave cells also have very short range – perhaps just 200 metres or so. We can forget about nationwide – or even full citywide – coverage. And outdoor-to-indoor coverage won’t work either. And if an indoor network is deployed by a 3rd party such as a neutral host or roaming partner, it's far from clear that URLLC can work across the boundary.

Sub-1GHz bands, such as 700MHz in Europe, or perhaps refarmed 3G/4G FDD bands such as 1.8GHz, might support URLLC and have decent range/indoor reach. But they’ll have limited capacity, so again coexistence with MBB could be a problem, as MNOs will also want their normal mobile service to work (at scale) indoors and in rural areas too.

What this means is that we will probably get (for the foreseeable future):

  • Moderately Low Latency on wide-area public 5G Networks (perhaps 10-20ms), although where network coverage forces a drop back to 4G, then 30-50ms.
  • Ultra* Low Latency on localised private/enterprise 5G Networks and certain public hotspots (perhaps 5-10ms in 2021-22, then eventually 1-3ms maybe around 2023-24, with Release 17, which also supports deterministic "Time Sensitive Networking" in devices)
  • A promised 2ms on Wi-Fi 6E, when it gets access to big chunks of 6GHz spectrum

This really isn't ideal for all the sci-fi low-latency scenarios I hear around drones, AR games, or the cliched surgeon performing a remote operation while lying on a beach. (There's that Speedo reference, again).

* see the demand section below on whether 1-10ms is really "ultra-low" or just "very low" latency

Demand

Almost 3 years ago, I wrote an earlier article on latency (link), some of which I'll repeat here. The bottom line is that it's not clear that there's a huge range of applications and IoT devices that URLLC will help, and where they do exist they're usually very localised and more likely to use private networks rather than public.

One paragraph I wrote stands out:

I have not seen any analysis that tries to divide the billions of devices, or trillions of dollars, into different cohorts of time-sensitivity. Given the assumptions underpinning a lot of 5G business cases, I’d suggest that this type of work is crucial. Some of these use-cases are slow enough that sending data by 2G is fine (or by mail, in some cases!). Others are so fast they’ll need fibre – or compute capability located locally on-device, or even on-chip, rather than in the cloud, even if it’s an “edge” node.

I still haven't seen any examples of that analysis. So I've tried to do a first pass myself, albeit using subjective judgement rather than hard data*. I've put together what I believe is the first attempted "heatmap" for latency value. It includes both general cloud-compute and IoT, both of which are targeted by 5G and various forms of edge compute. (*get in touch if you'd like to commission me to do a formal project on this)

A lot of the IoT examples I hear about are either long time-series collections of sensor data (for asset performance-management and predictive maintenance), or have fairly loose timing constraints. A farm’s moisture sensors and irrigation pumps don’t need millisecond response times. Conversely, a chemical plant may need to measure and alter pressures or flows in microseconds.

I've looked at time-ranges for latency from microseconds to days, spanning 12 orders of magnitude (see later section for more examples). As I discuss below, not everything hinges on the most-mentioned 1-100 millisecond range, or the 3-30ms subset of it that 5G addresses.

I've then compared those latency "buckets" with distances from 1m to 1000km - 7 orders of magnitude. I could have gone out to geostationary satellites, and down to chip scales, but I'll leave that exercise to the reader.
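One constraint is worth baking into any such heatmap: propagation delay sets a hard physical floor on latency at a given distance, whatever the access technology. A minimal Python sketch (the ~200km-per-millisecond figure for light in fibre is a standard rule of thumb; the axis values here are illustrative):

```python
# Physics floor: round-trip time over fibre, before any radio, queueing or
# compute delay is added. Rule of thumb: light in fibre covers ~200 km per ms.

SPEED_IN_FIBRE_KM_PER_MS = 200.0  # roughly 2/3 of c in a vacuum

def rtt_floor_ms(distance_km: float) -> float:
    """Lower bound on round-trip latency over a given fibre distance."""
    return 2.0 * distance_km / SPEED_IN_FIBRE_KM_PER_MS

def cell_feasible(budget_ms: float, distance_km: float) -> bool:
    """Could any system at this distance ever meet this latency budget?"""
    return rtt_floor_ms(distance_km) <= budget_ms

# Sweep the heatmap's axes: latency budgets (ms) vs. distance to the far end
for km in (0.001, 0.1, 1, 10, 100, 1000):
    row = ["Y" if cell_feasible(budget, km) else "-"
           for budget in (0.1, 1, 10, 100)]
    print(f"{km:>8} km : {' '.join(row)}")
```

So a 1ms round-trip budget is physically impossible if the far end is more than ~100km away, regardless of 5G, edge or anything else – which is why the short-distance columns of the chart matter most for URLLC.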

  

The question for me is - are the three or four "battleground" blocks really that valuable? Is the 2-dimensional Goldilocks zone of not-too-distant / not-too-close and not-too-short / not-too-long, really that much of a big deal?

And that's without considering the third dimension of throughput rate. It's one thing having a low-latency "stop the robot now!" message, but quite another doing hyper-realistic AR video for a remote-controlled drone or a long session of "tactile Internet" haptics for a game, played indoors at the edge of a cell.

If you take all those $trillions that people seem to believe are 5G-addressable, what % lies in those areas of the chart? And what are the sensitivities to coverage and pricing, and what substitution risks apply - especially private networks rather than MNO-delivered "slices" that don't even exist yet?

Examples

Here are some more examples of timing needs for a selection of applications and devices. Yes, we can argue about some of them, but that's not the point - it's that this supposed magic range of 1-100 milliseconds is not obviously the source of most "industry transformation" or consumer 5G value:

  • Sensors on elevator doors may send sporadic data, to predict slowly-worsening mechanical problems – so an engineer might be sent a month before the normal maintenance visit. Similarly, a building’s structural condition, vegetation cover in the Amazon, or oceanic acidity isn’t going to shift much month-by-month.
  • A car might download new engine-management software once a week, and upload traffic observations and engine-performance data once a day (maybe waiting to do it over WiFi, in the owner’s garage, as it's not time-critical).
  • A large oil storage tank, or a water well, might have a depth-gauge giving readings once an hour.
  • A temperature sensor and thermostat in an elderly person’s home, to manage health and welfare, might track readings and respond with control messages every 10 minutes. Room temperatures change only slowly.
  • A shared bicycle might report its position every minute – and unlock in under 10 seconds when the user buys access with their smartphone app.
  • A payment or security-access tag should check identity and open a door, or confirm a transaction, in a second or two.
  • Voice communication seems laggy with anything longer than about 200ms of latency.
  • A networked video-surveillance system may need to send a facial image, and get a response in 100ms, before the person of interest moves out of camera-shot.
  • An online video-game ISP connection will be considered “low ping” at maybe 50ms latency.
  • A doctor’s endoscope or microsurgery tool might need to respond to controls (and send haptic feedback) 100 times a second – ie every 10ms
  • Teleprotection systems for high-voltage utility grids can demand 6-10ms latency times
  • A rapidly-moving drone may need to react in 2-3 milliseconds to a control signal, or a locally-recognised risk.
  • A sensitive industrial process-control system may need to be able to respond in 10s or 100s of microseconds to avoid damage to finely-calibrated machinery
  • Image sensors and various network sync mechanisms may require response times measured in nanoseconds
  • Photon sensors for various scientific uses may operate at picosecond durations
  • Ultra-fast laser pulses for machining glass or polymers can be measured in femtoseconds
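As a crude check on that claim, here's a Python sketch that buckets the figures above against the roughly 3-30ms window that public 5G URLLC plausibly addresses (the numbers are the illustrative ones from the list, so treat the result as indicative only):

```python
# Approximate latency needs (in seconds) for the examples above.
EXAMPLES = {
    "structural / environmental sensors": 30 * 24 * 3600,  # monthly trends
    "car software update": 7 * 24 * 3600,                  # weekly
    "tank depth gauge": 3600,                              # hourly
    "welfare thermostat": 600,                             # every 10 minutes
    "shared bicycle unlock": 10,
    "payment / access tag": 1.5,
    "voice call": 0.2,
    "video surveillance match": 0.1,
    "low-ping gaming": 0.05,
    "endoscope haptics": 0.01,
    "grid teleprotection": 0.008,
    "drone control": 0.003,
    "industrial process control": 100e-6,
    "image sensor sync": 10e-9,
}

def in_5g_urllc_window(seconds: float) -> bool:
    """Roughly the 3-30ms band that public 5G URLLC plausibly targets."""
    return 3e-3 <= seconds <= 30e-3

hits = sorted(name for name, t in EXAMPLES.items() if in_5g_urllc_window(t))
print(f"{len(hits)} of {len(EXAMPLES)} examples fall in the 5G URLLC window:")
print(", ".join(hits))
```

Only three of the fourteen land in the window – and those three (drones, surgical haptics, grid teleprotection) are exactly the localised, safety-critical cases most likely to run on private networks rather than public 5G.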

Conclusion

Latency is important, for application developers, enterprises and many classes of IoT device and solution. But we have been spectacularly vague at defining what "low-latency" actually means, and where it's needed.

A lot of what gets discussed in 5G and edge-computing conferences, webinars and marketing documents is either hyped, or is likely to remain undeliverable. A lot of the use-cases can be adequately serviced with 4G mobile, Wi-Fi - or a person on a bicycle delivering a USB memory stick.

What is likely is that average latencies will fall with 5G. An app developer that currently expects a 30-70ms latency on 4G (or probably lower on Wi-Fi) will gradually adapt to 20-40ms on mostly-5G networks and eventually 10-30ms. If it's a smartphone app, they likely won't use URLLC anyway.

Specialised IoT developers in industrial settings will work with specialist providers (maybe MNOs, maybe fully-private networks and automation/integration firms) to hit more challenging targets, where ROI or safety constraints justify the cost. They may get to 1-3ms at some point in the medium term, but it's far from clear they will be contributing massively to MNOs or edge-providers' bottom lines.

As for wide-area URLLC? Haptic gaming from the sofa on 5G, at the edge of the cell? Remote-controlled drones with UHD cameras? Two cars approaching each other on a hill-crest on a country road? That's going to be a challenge for both demand and supply.

Tuesday, August 25, 2020

Voice: So much more than Phone Calls

 [Originally published on LinkedIn. Please subscribe to my new LinkedIn Newsletter here]

Trivia Question: When was the first example of network-based music streaming launched?

I'll bet many of you guessed that it was Spotify in 2006, or Pandora in 2000. Maybe some of you guessed RealAudio, back in 1995.

But the actual answer is over a century earlier. It was the Théâtrophone, first demonstrated in 1881 in Paris, with commercial services around Europe from 1890. It allowed people to listen to concerts or operas with a telephone handset, from another location across town. It even supported stereo audio, using a headset. It finally went out of business in the 1930s, killed by radio. Although by then, another form of remote audio streaming - Muzak, delivering cabled background music for shops and elevators - was also popular.


Why is this important? Because these services used "remote sound" (from the Greek tele+phonos) over networks. They were voice/audio communications services.

Yet they were not "phone calls".

Over the last century, we've started to use the words "voice communications", "telephony" and "phone calls" interchangeably, especially in the telecoms industry. But they're actually different. We often talk about "voice" services being a core component of today's fixed and mobile operators' service portfolios.

But actually, most telcos just do phone calls, not voice in general. One specific service, out of a voice universe of hundreds or thousands of possibilities. And a clunky, awkward service at that - one designed 100+ years ago for fixed networks, or 30+ years ago for mobile networks.

*Phone rings, interrupting me*

"Hello?"

"Oh, is that Dean Bubley?"

"Yes, that's me"

"Hi, I'm from Company X. How are you today?"

"I'm fine, thanks. How can I help you?"

... and so on.

It's unnatural, interruptive and often unwanted. A few years ago a 20-something told me some words of wisdom: "The only people who phone me are my parents, or people I don't want to talk to". He's pretty much right. Lots of people hate unsolicited calls, especially from withheld numbers. They'll leave their phones on silent. (They hate voicemails even more.)

I used to go into meetings at operators and ask them "Why do people make phone calls? Give me the top 10 reasons". I'd usually get "to speak to someone" as an answer. Or maybe a split between B2B and B2C. But never a list of actual reasons - "calling a doctor", "chatting to a relative", "politely speaking to an acquaintance but wishing they'd get to the point".

Now don't get me wrong - ad-hoc, unscheduled phone calls can still be very useful. Person A calling Person B for X minutes is not entirely obsolete. It's been good to speak to friends and relatives during lockdown, or a doctor, or a bank or prospective client. There are a lot of interactions where we don't have an app to coordinate timings, or an email address to schedule a Zoom call.

But overall, the phone call is declining in utility and popularity. It's an undifferentiated, lowest-common denominator form of communications, with some serious downsides. Yet it's viewed as ubiquitous and somehow "official". Why do web forms always insist on a number, when you never want to receive a call from that organisation?

Partly this relates to history and regulation - governments impose universal service obligations, release numbering, collect stats & make regulations about minutes (volume or price), determine interconnect and wholesale rates and so on. In turn, that has driven revenues for quite a lot of the telecom industry - and defined pricing plans.

But it's a poor product. There are no fine-grained controls - perhaps turning up the background noise-cancellation for a call from a busy street, and turning it down on a beach so a friend can hear the waves crashing on the shore. There's no easy one-click "report as spam" button. I can't give cold-callers a score for relevance, or see their "interruption reputation" stats. I can't thread phone calls into a conversation. Yes, there's some wizardry that can be done with cPaaS (comms platforms-as-a-service) but that takes us beyond telephony and the realm of the operators.

Beyond that, there's a whole wider universe of non-call voice (and audio) applications that operators don't even consider - or that only a few do. For instance:

  • Easy audioconferencing
  • Push-to-talk
  • Voice-to-text transcription (for consumers)
  • Voice analytics (e.g. for behavioural cues)
  • Voice collaboration
  • Voice assistants (like Alexa)
  • Audio streaming
  • Podcasts
  • Karaoke
  • One-way voice / one-way video (eg for a doorbell)
  • Telecare and remote intercom functions for elderly people
  • Telemedicine with sensor integration (eg ultrasound)
  • IoT integrations (from elevator alarms to smartwatches)
  • "Whisper mode" or "Barge-in" for 3-person calls
  • Stereo
  • De-accenting
  • Voice biometric security
  • Data-over-sound
  • In-game voice with 3D-positioning
  • Veterinary applications - who says voices need to be human?

There are dozens, maybe hundreds of possibilities. Some could be blended with a "call" model, while others have completely different user-interaction models. Some of these functions are implemented in contact-centre and enterprise UCaaS systems, but others don't really fit well with the call/session metaphor of voice.

I've talked about contextual communications in the past, especially with WebRTC as an enabling technology, which allows voice/video elements to be integrated into apps and browser pages. I've also written before about the IoT integration opportunities - something which is only now starting to pick up (Disclosure: I'm currently working with specialist platform provider iotcomms.io to describe "people to process" and event-triggered communications).

But what irritates me is that the mainstream telecoms industry has just totally abdicated its role as a provider and innovator of voice services and applications. You only have to look at the mobile industry currently talking about Vo5G ("5G Voice") as a supposed evolution from the VoLTE system used with 4G. It's basically the same thing - phone calls - that we've had for over 100 years on fixed networks, and 30 years on mobile. It's still focused on IMS as a platform, dedicated QoS metrics, roaming, interconnection and so on. But it's still exactly the same boring, clunky, obsolescent model of "calls".

There was a golden opportunity to rethink everything for 5G and say "Hey, what *is* this voice thing in the 2020s? What do people actually want to use voice communications *for*? What interaction models and use-cases? What would make it broader & more general-purpose?" In fact, I said exactly the same thing around 10 years ago, when VoLTE was being dreamed up.

Nothing's changed, except better codecs (although HD voice was around on 3G) and lame attempts to integrate it with the even-worse ViLTE video and perennially-useless RCS messaging functions. The focus is on interoperability, not utility. Interop & interconnection is a nice-to-have for communications. Users need to actually like the thing first.

Some of the vendors pay lip-service to device integration and IoT. But unless you can tune the underlying user interface, codecs, acoustic parameters, audio processing, numbering/identity and 100 other variables in some sort of cPaaS, it's useless.

I don't want a phone call on a smartwatch - I want an ad-hoc voice-chat with a friend to ask what beer he wants when I'm at the bar. I want tap-to-record-and-upload of conversations, from my sunglasses, when someone's trying to sell me something & I suspect they're scamming me. I want realtime audio-effects like an audio Instagram filter that make me sound like I'm a cartoon character, or 007. (I don't want karaoke, but I imagine millions do)

So remember: the telecoms industry doesn't do "voice". It just does one or two voice applications. VoLTE is actually ToLTE. It's not too late - but telcos and their suppliers need to take a much broader view of voice than just interoperable PSTN-type phone calls. Maybe start with Théâtrophone 2.0?

This post was first published via my LinkedIn Newsletter - see here + also the comment stream on LI

#voice #telecoms #volte #phone #telephony #IMS #VoLTE #telcos #cPaaS #conferencing

If you're interested in revisiting your voice strategy, get in touch via email or LinkedIn, to discuss projects, workshops and speaking engagements. We can even discuss it by phone, if you insist.

Saturday, August 08, 2020

A rant about 5G myths - chasing unicorns​

Exasperated rant & myth-busting time.

I actually got asked by a non-tech journalist recently "will 5G change our lives?"

Quick answer: No. Emphatically No.


#5G is Just Another G. It's not a unicorn

Yes, 5G is an important upgrade. But it's also *massively* overhyped by the mobile industry, by technology vendors, by some in government, and by many business and technology journalists.

- There is no "race to 5G". That's meaningless geopolitical waffle. Network operators are commercial organisations and will deploy networks when they see a viable market, or get cajoled into it by the terms & timing of spectrum licenses.

- Current 5G is like 4G, but faster & with extra capacity. Useful, but not world-changing.

- Future 5G will mean better industrial systems and certain other cool (but niche) use-cases.

- Most 5G networks will be very patchy, without ubiquitous coverage, except for very rudimentary performance. That means 5G-only applications will be rare - developers will have to assume 4G fallback (& WiFi) are common, and that dead-spots still exist.

- Lots of things get called 5G, but actually aren't 5G. It's become a sort of meaningless buzzword for "cool new wireless stuff", often by people who couldn't describe the difference between 5G, 4G or a pigeon carrying a message.

- Anyone who talks about 5G being essential for autonomous cars or remote surgery is clueless. 5G might get used in connected vehicles (self-driving or otherwise) if it's available and cheap, but it won't be essential - not least as it won't work everywhere (see above).

- Yes, there will be a bit more fixed wireless FWA broadband with 5G. But no, it's not replacing fibre or cable for normal users, especially in competitive urban markets. It'll help take FWA from 5% to 10-12% of global home broadband lines.

- The fact the 5G core is "a cloud-native service based architecture" doesn't make it world-changing. It's like raving about a software-defined heating element for your toaster. Fantastic for internal flexibility. But we expect that of anything new, really. It doesn't magically turn a mobile network into a "platform". Nor does it mean it's not Just Another G.

- No, enterprises are not going to "buy a network slice". The amount of #SliceWash I'm hearing is astonishing. It's a way to create some rudimentary virtualised sub-networks in 5G, but it's not a magic configurator for 100s or 1000s of fine-grained, dynamically-adjusted different permutations all coexisting in harmony. The delusional vision is very far removed from the mundane reality.

- The more interesting stuff in 5G happens in Phase 2/3, when 3GPP Release 16 & then Release 17 are complete, commercialised & common. R16 has just been finalised. From 2023-4 onward we should expect some more massmarket cool stuff, especially for industrial use. Assuming the economy recovers by then, that is.

- Ultra-reliable low-latency communications (URLLC) sounds great, but it's unclear there's a business case except at very localised levels, mostly for private networks. Actually, UR and LL are two separate things anyway. MNOs aren't going to be able to sell reliability unless they also take legal *liability* if things go wrong. If the robot's network goes down and it injures a worker, is the telco CEO going to take the rap in court?

- Getting high-performance 5G working indoors will be very hard, need dedicated systems, and will take lots of time, money and trained engineers. It'll be a decade or longer before it's very common in public buildings - especially if it has to support mmWave and URLLC. Most things like AR/VR will just use Wi-Fi. Enterprises may deploy 5G in factories or airport hangars or mines - but will engineer it very carefully, examine the ROI - and possibly work with a specialist provider rather than a telco.

- #mmWave 5G is even more overhyped than most aspects. Yes, there's tons of spectrum and in certain circumstances it'll have huge speed and capacity. But it's got short range and needs line-of-sight. Outdoor-to-indoor coverage will be near zero. Having your back to a cell-site won't help. It will struggle to go through double-glazed windows, the shell of a car or train, and maybe even your bag or pocket. Extenders & repeaters will help, but it's going to be exceptionally patchy (and need tons of fibre everywhere for backhaul).

- 5G + #edgecomputing is not going to be a big deal. If low-latency connections were that important, we'd have had localised *fixed* edge computing a decade ago, as most important enterprise sites connect with fibre. There's almost no FEC, so MEC seems implausible except for niches. And even there, not much will happen until there's edge federation & interconnect in place. Also, most smartphone-type devices will connect to someone else's WiFi between 50-80% of the time, and may have a VPN which means the network "egress" is a long way from the obvious geographically-proximal edge.

- Yes, enterprise is more important in 5G. But only for certain uses. A lot can be done with 4G. "Verticals" is a meaningless term; think about applications.

- No, it won't displace Wi-Fi. Obviously. I've been through this multiple times.

- No, all laptops won't have 5G. (As with 3G and 4G. Same arguments).

- No, 5G won't singlehandedly contribute $trillions to GDP. It's a less-important innovation area than many other things, such as AI, biotech, cloud, solar and probably quantum computing and nuclear fusion. So unless you think all of those will generate 10's or 100's of $trillions, you've got the zeros wrong.

- No, 5G won't fry your brain, or kill birds, or give you a virus. Conspiracy theorists are as bad as the hypesters. 5G is neither Devil nor Deity. It's just an important, but ultimately rather boring, upgrade.

There's probably a ton more 5G fallacies I've forgotten, and I might edit this with a few extra ones if they occur to me. Feel free to post comments here, although the majority of debate is on my LinkedIn version of this post (here). This is also the inaugural post for a new LinkedIn newsletter. Most of my stuff is not quite this snarky, but it depends on my mood. I'm @disruptivedean on Twitter, so follow me there too.

If you like my work, and either need a (more sober) business advisory session or workshop, let me know. I'm also a frequent speaker, panellist and moderator for real and virtual events.

Just remember: #5GJAG. Just Another G.