Speaking Engagements & Private Workshops - Get Dean Bubley to present or chair your event

Need an experienced, provocative & influential telecoms keynote speaker, moderator/chair or workshop facilitator?
To see recent presentations, and discuss Dean Bubley's appearance at a specific event, click here

Thursday, April 08, 2021

Free-to-download report on Creating Enterprise-Friendly 5G Policies (for governments & regulators)

Copied from my LinkedIn. Please click here for the download page & comments

I'm publishing a full report & recommendations on Enterprise & Private 5G, especially aimed at policymakers and regulators.

It explains the complex dynamics linking Enterprises, MNOs and Governments – explaining the motivations of each around connectivity, 5G deployment choices, IoT and the broader impacts and trade-offs around the economy and productivity.

This is not a simple calculus – MNOs want to exploit 5G opportunities for verticals, but businesses have their own priorities and preferences. Governments want to satisfy both groups – and also act as both major network users themselves and “suppliers” of spectrum.

A supporting cast of cloud players, network vendors, other classes of service providers and other stakeholders have important roles as well.

This report is a “Director’s Cut” extended version of a paper originally commissioned for internal use by Microsoft, now made available for general distribution.

(To download on LinkedIn, display in full screen & select download PDF)

#5G #policy #telecoms #private5G #cloud #IoT #spectrum #WiFi

Wednesday, March 03, 2021

The Worst Metrics in Telecoms

 (This post was initially published as an article on my LinkedIn Newsletter - here - please see that version for comments and discussion)

GDP isn't a particularly good measure of the true health of a country's economy. Most economists and politicians know this.

This isn't a plea for non-financial measures such as "national happiness". It's a numerical issue. GDP is hard to measure, with definitions that vary widely by country. Important aspects of the modern world such as "free" online services and family-provided eldercare aren't really counted properly.

However, people won't abandon GDP, because they like comparable data with a long history. They can plot trends, curves, averages... and don't need to revise spreadsheets and models from the ground up with something new. Other metrics are linked to GDP - R&D intensity, NATO military spending commitments and so on - which would need to be re-based if a different measure was used. The accounting and political headaches would be huge.

A poor metric often has huge inertia and high switching costs.

Telecoms, like many sub-sectors of the economy, is no different. There are many old-fashioned metrics that are really not fit for purpose any more - and even some new ones that are badly conceived. They often lead to poor regulatory decisions, poor optimisation and investment approaches by service providers, flawed incentives and large tranches of self-congratulatory overhype.

Some of the worst telecoms metrics I see regularly include:

  • Voice traffic measured in minutes of use (or messages counted individually)
  • Cost per bit (or increasingly energy use per bit) for broadband
  • $ per MHz per POP (population) for radio spectrum auctions
  • ARPU
  • CO2 savings "enabled" by telecom services, especially 5G

That's not an exhaustive list by any means. But the point of this article is to make people think twice about commonplace numbers - and ideally think of meaningful metrics rather than easy or convenient ones.

The sections below give some quick thoughts on why these metrics either won't work in the future - or have simply been terrible all along.

(As an aside, if you ever see numbers - especially forecasts - with too many digits and "spurious accuracy", that's an immediate red flag: "The Market for Widgets will be $27.123bn in 2027". It tells you that the source really doesn't understand numbers - and you really shouldn't trust, or base decisions on, someone that mathematically inept)

Minutes and messages

The reason we count phone calls in minutes (rather than, say, conversations or just a monthly access fee) is based on an historical accident. Original human switchboard operators were paid by the hour, so a time-based quantum made the most sense for billing users. And while many phone plans are now either flat-rate, or use per-second rates, many regulations are still framed in the language of "the minute". (Note: some long-distance calls were also based on length of cable used, so "per mile" as well as minute)

This is a ridiculous anachronism. We don't measure or price other audiovisual services this way. You don't pay per-minute for movies or TV, or value podcasts, music or audiobooks on a per-minute basis. Other non-telephony voice communications modes such as push-to-talk, social audio like ClubHouse, or requests to Alexa or Siri aren't time-based.

Ironically, shorter calls are often more valuable to people. There's a fundamental disconnect between price and value.

A one-size-fits-all metric for calls stops telcos and other providers from innovating around context, purpose and new models for voice services. It's hard to charge extra for "enhanced voice" in a dozen different dimensions. They should call on governments to scrap minute-based laws and reporting requirements, and rejig their own internal systems to a model that makes more sense.





The same applies to counting individual messages/SMS as well. It's a meaningless quantum that doesn't align with how people use IMs / DMs / group chats and other similar modalities. It's like counting or charging for documents by the pixel. Threads, sessions or conversations are often more natural units, albeit harder to measure.

Cost per bit

"5G costs less per bit than 4G". "Traffic levels increase faster than revenues!".

Cost-per-bit is an often-used but largely meaningless metric, which drives poor decision-making and incentives, especially in the 5G era of multiple use-cases - and essentially infinite ways to calculate the numbers.

Different bits have very different associated costs. A broad average is very unhelpful for investment decisions. The cost of a “mobile” bit (for an outdoor user in motion, handing off from cell to cell) is very different to an FWA bit delivered to a house’s external fixed antenna, or a wholesale bit used by an MVNO.

Costs can vary massively by spectrum band, to a far greater degree than technology generation - with the cost of the spectrum itself a major component. Convergence and virtualisation means that the same costs (eg core and transport networks) can apply to both fixed and mobile broadband, and 4G/5G/other wireless technologies. Uplink and downlink bits also have different costs - which perhaps should include the cost of the phone and power it uses, not just the network.

The arrival of network slicing (and URLLC) will mean “cost per bit” is an ever-worse metric, as different slices will inherently be more or less "expensive" to create and operate. Same thing with local break-out, delivery of content from a nearby edge-server or numerous other wrinkles.

But in many ways, the "cost" part of cost/bit is perhaps the easiest to analyse, despite the accounting variabilities. Given enough bean-counters and some smarts in the network core/OSS, it would be possible to create some decent numbers, at least theoretically.

But the bigger problem is the volume of bits. This is not an independent variable, which flexes up and down just based on user demand and consumption. Faster networks with more instantaneous "headroom" actually create many more bits, as adaptive codecs and other application intelligence mean that traffic expands to fill the space available. And pricing strategy can basically dial up or down the number of bits customers use, with minimal impact on costs.

A video application might automatically increase the frame rate, or upgrade from SD to HD, with no user intervention - and very little extra "value". There might be 10x more bits transferred for the same costs (especially if delivered from a local CDN). Application developers might use tools to predict available bandwidth, and change the behaviour of their apps dynamically.
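This elasticity is easy to sketch numerically. The figures below are entirely invented, purely to illustrate the point: with largely fixed network costs, "cost per bit" is driven by a traffic denominator that applications can inflate at will.

```python
# Toy illustration (all numbers invented): network costs are largely fixed,
# so "cost per bit" is really driven by how much traffic happens to flow.
NETWORK_COST = 1_000_000  # monthly network cost in $, roughly fixed

def cost_per_gb(monthly_gb: float) -> float:
    """Naive cost-per-GB: fixed cost spread over whatever volume flows."""
    return NETWORK_COST / monthly_gb

# Same network, same spend - but an adaptive codec upgrades SD -> HD and
# traffic grows 10x, so the "metric" improves 10x with no real change.
sd_traffic = 2_000_000    # GB/month at SD quality
hd_traffic = 20_000_000   # GB/month after codecs fill the headroom

print(cost_per_gb(sd_traffic))  # 0.5  $/GB
print(cost_per_gb(hd_traffic))  # 0.05 $/GB
```

The denominator moved 10x while the cost side barely changed - which is why a falling cost-per-bit trend line tells you very little about network efficiency.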

So - if averaged costs are incalculable, and bit-volume is hugely elastic, then cost/bit is meaningless. Ironically, "cost per minute of use" might actually be more relevant here than it is for voice calls. At the very least, cost per bit needs separate calculations for MBB / FWA / URLLC, and by local/national network scale.

(By a similar argument, "energy consumed per bit" is pretty useless too).

Spectrum prices for mobile use

The mobile industry has evolved around several generations of technology, typically provided by MNOs to consumers. Spectrum has typically been auctioned for exclusive use on a national / regional basis, in fixed-size chunks perhaps 5/10/20MHz wide, with licenses often specifying rules on coverage of population.

For this reason, it's not surprising that a very common metric is "$ per MHz / Pop" - the cost per megahertz, per addressable population in a given area.

Up to a point, this has been pretty reasonable, given that the main use of 2G, 3G and even 4G has been for broad, wide-area coverage for consumers' phones and sometimes homes. It has been useful for investors, telcos, regulators and others to compare the outcomes of auctions.

But for 5G and beyond (actually the 5G era, rather than 5G specifically), this metric is becoming ever less-useful. There are three problems here:

  • Growing focus on smaller areas of licenses: county-sized in CBRS in the US, and site-specific in Germany, UK and Japan for instance, especially for enterprise sites and property developments. This makes comparisons much harder, especially if areas are unclear.
  • Focus of 5G and private 4G on non-consumer applications and uses. Unless the idea of "population" is expanded to include robots, cars, cows and IoT gadgets, the "pop" part of the metric clearly doesn't work. As the resident population of a port or offshore windfarm zone is zero, then a local spectrum license would effectively have an infinite $ / MHz / Pop.
  • Spectrum licenses are increasingly being awarded with extra conditions such as coverage of roads, land-area - or mandates to offer leases or MVNO access. Again, these are not population-driven considerations.
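The second bullet's "infinite $/MHz/Pop" problem can be made concrete. The licence prices and populations below are hypothetical, just to show how the metric degenerates for local industrial licences:

```python
# Hypothetical illustration: $ per MHz per Pop breaks down for local
# licences covering sites with (almost) no resident population.
def price_per_mhz_pop(price_usd: float, bandwidth_mhz: float, population: int) -> float:
    if population == 0:
        return float("inf")  # a port or offshore windfarm: metric is undefined
    return price_usd / (bandwidth_mhz * population)

# A national consumer licence yields a sensible-looking number...
national = price_per_mhz_pop(1_000_000_000, 100, 60_000_000)  # ~0.17 $/MHz/Pop

# ...but a cheap private licence for an automated port scores "infinity".
port = price_per_mhz_pop(50_000, 100, 0)  # inf
```

Comparing those two numbers tells you nothing about whether either licence was good value - the metric simply wasn't built for population-free deployments.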

Over the next decade we will see much greater use of mobile spectrum-sharing, new models of pooled ("club") spectrum access, dynamic and database-driven access, indoor-only licenses, secondary-use licenses and leases, and much more.

Taken together, these issues are increasingly rendering $/MHz/Pop a legacy irrelevance in many cases.


ARPU

"Average Revenue Per User" is a longstanding metric used in various parts of telecoms, but especially by MNOs for measuring their success in selling consumers higher-end packages and subscriptions. It has long come under scrutiny for its failings, and various alternatives such as AMPU (M for margin) have emerged, as well as ways to carve out dilutive "user" groups such as low-cost M2M connections. There have also been attempts to distinguish "user" from "SIM", as some people have multiple SIMs, while other SIMs are shared.

At various points in the past it used to "hide" effective loan repayments for subsidised handsets provided "free" in the contract, although that has become less of an issue with newer accounting rules. It also faces complexity in dealing with allocating revenues in converged fixed/mobile plans, family plans, MVNO wholesale contracts and so on.

A similar issue to "cost per bit" is likely to happen to ARPU in the 5G era. Unless revenues and user numbers are broken out more finely, the overall figure is going to be a meaningless amalgam of ordinary post/prepaid smartphone contracts, fixed wireless access, premium "slice" customers and a wide variety of new wholesale deals.
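A quick sketch shows how the blending works. The segment sizes and prices below are invented for illustration only:

```python
# Illustrative (invented) figures: blended ARPU averages together very
# different businesses - smartphones, FWA, low-cost IoT - into one number.
segments = {
    # name: (connections, monthly revenue per connection in $)
    "smartphone": (10_000_000, 30.00),
    "fwa":        (   500_000, 50.00),
    "iot_m2m":    ( 5_000_000,  0.50),
}

revenue = sum(n * arpu for n, arpu in segments.values())
users = sum(n for n, _ in segments.values())

blended_arpu = revenue / users  # ~21.13 - resembles none of the segments
```

The blended figure sits nowhere near any real segment, and shifts whenever the mix shifts - so a "falling ARPU" headline might just mean the operator added a million cheap IoT connections.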

The other issue is that ARPU further locks telcos into the mentality of the "monthly subscription" model. While fixed monthly subs or "pay as you go top-up" models still dominate in wireless, other models are important too, especially in the IoT world. Some devices are sold with connectivity included upfront.

Enterprises buying private cellular networks specifically want to avoid per-month or per-GB "plans" - it's one of the reasons they are looking to create their own dedicated infrastructure. MNOs may need to think in terms of annual fees, systems integration and outsourcing deals, "devices under management" and all sorts of other business models. The same is true if they want to sell "slices" or other blended capabilities - perhaps geared to SLAs or business outcomes.

Lastly - what is a "user" in future? An individual human with a subscription? A family? A home? A group? A device?

ARPU is another metric overdue for obsolescence.

CO2 "enablement" savings

I posted last week about the growing trend of companies and organisations to cite claims that a technology (often 5G or perhaps IoT in general) allows users to "save X tons of CO2 emissions".

You know the sort of thing - "Using augmented reality conferencing on your 5G phone for a meeting avoids the need for a flight & saves 2.3 tons of CO2" or whatever. Even leaving aside the thorny issue of the Jevons Paradox - whereby efficiency tends to expand usage rather than replace it - there's a big problem here:


There's no attempt at allocating this notional CO2 "saving" between the device(s), the network(s), the app, the cloud platform, the OS & 100 other elements. There's no attempt such as "we estimate that 15% of this is attributable to 5G for x, y, z reasons".

Everyone takes 100% credit. And then tries to imply it offsets their own internal CO2 use.

"Yes, 5G needs more energy to run the network. But it's lower CO2 per bit, and for every ton we generate, we enable 2 tons in savings in the wider economy".

Using that logic, the greenest industry on the planet is industrial sand production, as it's the underlying basis of every silicon chip in every technological solution for climate change.
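The double-counting is easy to see with a toy calculation (all figures invented): if every layer of the stack claims 100% of the same avoided flight, the claimed savings multiply far beyond the real one.

```python
# Sketch of the double-counting problem (numbers invented): each layer of
# the stack claims 100% credit for the same avoided flight.
avoided_flight_tco2 = 2.3  # tons of CO2 from the flight that didn't happen
claimants = ["5G network", "device maker", "conferencing app",
             "cloud platform", "operating system"]

# Everyone takes full credit, so claims sum to 5x the real saving...
total_claimed = len(claimants) * avoided_flight_tco2  # 11.5 "tons saved"

# ...whereas any honest allocation must sum back to the actual saving,
# e.g. equal shares (the allocation rule itself is the hard part).
allocated = {c: avoided_flight_tco2 / len(claimants) for c in claimants}
assert abs(sum(allocated.values()) - avoided_flight_tco2) < 1e-9
```

Equal shares is obviously arbitrary - the real difficulty is agreeing an attribution rule at all - but any rule whose shares sum to more than 100% is marketing, not accounting.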

There's some benefit from CO2 enablement calculations, for sure - and there's more work going into reasonable ways to allocate savings (look in the comments for the post I link to above), but readers should be super-aware of the limitations of "tons of CO2" as a metric in this context.

So what's the answer?

It's fairly easy to poke holes in things. It's harder to find a better solution. Having maintained spreadsheets of company and market performance and trends myself, I know that analysis is often held hostage by what data is readily available. Telcos report minutes-of-use and ARPU, so that's what everyone else uses as a basis. Governments may demand that reporting, or frame rules in those terms (for instance, wholesale voice termination rates have "per minute" caps in some countries).

It's very hard to escape from the inertia of a long and familiar dataset. Nobody wants to recreate their tables and try to work out historic comparables. There is huge path dependence at play - small decisions years ago, which have been entrenched in practices in perpetuity, even though the original rationale has long since gone. (You may have noticed me mention path dependence a few times recently. It's a bit of a focus of mine at the moment....)

But there's a circularity here. Certain metrics get entrenched and nobody ever questions them. They then get rehashed by governments and policymakers as the basis for new regulations or measures of market success. Investors and competition authorities use them. People ignore the footnotes and asterisks warning of limitations.

The first thing people should do is question the definitions of familiar public or private metrics. What do they really mean? For a ratio, are the assumptions (and definitions) for both denominator and numerator still meaningful? Is there some form of allocation process involved? Are there averages which amalgamate lots of dissimilar categories?

I'd certainly recommend Tim Harford's book "How to Make the World Add Up" (link) as a good backgrounder to questioning how stats are generated and sometimes misused.

But the main thing I'd suggest is asking whether metrics can either hide important nuance - or can set up flawed incentives for management.

There's a long history of poor metrics having unintended consequences. For example, it would be awful (but not inconceivable) to raise ARPUs by cancelling the accounts of low-end users. Or perhaps an IoT-focused vertical service provider gets punished by the markets for "overpaying" for spectrum in an area populated by solar panels rather than people.

Stop and question the numbers. See who uses them / expects them and persuade them to change as well. Point out the fallacies and flawed incentives to policymakers.

If you have any more examples of bad numbers, feel free to add them in the comments. I forecast there will be 27.523 of them, by the end of the year.

The author is an industry analyst and strategy advisor for telecoms companies, governments, investors and enterprises. He often "stress-tests" qualitative and quantitative predictions and views of technology markets. Please get in touch if this type of viewpoint and analysis interests you - and also please follow @disruptivedean on Twitter.

Friday, February 05, 2021

New Report & Recommendations on Telecoms Supplier Diversification

Copied from my LinkedIn. Please click here for the download page & comments

I'm publishing my full report & recommendations on telecoms supplier diversification, especially for 5G, but more broadly for "advanced connectivity" overall. This follows my "10 Principles" article from 2 months ago.

It covers both near-term RAN diversification and a long-term roadmap for a better telecoms/networking landscape towards 2030, with 6G and other connectivity enabling "biodiversity" rather than monoculture.

Although it has been triggered by UK Department for Digital, Culture, Media and Sport (DCMS) work via its Diversification Task Force - and will be submitted directly to it - it is applicable more broadly to global policymakers considering 5G, private networks, Open RAN, Wi-Fi, spectrum and vendor policy issues.

My view is that Open RAN is important, but overhyped (like 5G itself). Much of the value from 5G is in settings where there is already good vendor choice (eg indoors, or for private cellular).

Governments should focus more on context for deployment, ownership models and substitutive options like WiFi6. All bring extra supply options.

In short - *Demand* diversification catalyses *Supply* diversification.

(To download from LinkedIn, display in full screen & select download PDF)

Monday, January 11, 2021

The Myth of "Always Best Connected"

 (This was originally posted as a LinkedIn Newsletter article. See this link, read the comment thread, and please subscribe)

It Was the Best of Times, it Was the Worst of Times

One of the most ludicrous phrases in telecoms is "Always Best Connected", or ABC. It is typically used by an operator, network vendor or standards organisation attempting to glue together cellular and Wi-Fi connections. It's a term that pretends that some sort of core network function can automatically and optimally switch a user between wireless networks, without them caring - or even knowing - that it's happening.

Often, it's used together with the equally-stupid term "seamless handover", and perhaps claims that applications are "network agnostic" or that it doesn't matter what technology or network is used, as long as the user can "get connected". Often, articles or papers will go on to describe all Wi-Fi usage on devices as "offload" from cellular (it isn't - perhaps 5% of Wi-Fi traffic from phones is genuine offload).

There's been a long succession of proposed technologies and architectures, mostly from the 3GPP and cellular industry, keen to embrace but downplay Wi-Fi as some sort of secondary access mechanism. Acronyms abound - UMA, GAN, IWLAN, ANDSF, ATSSS, HetNets and so on. There have been attempts to allow a core network to switch a device's Wi-Fi radio on/off, and even hide the Wi-Fi logo so the user doesn't realise that's being used. It's all been a transparent and cynical attempt to sideline Wi-Fi - and users' independent choice of connection options - in the name of so-called "convergence". Pretty much all of these have been useless (or worse) except in very narrow circumstances.

To be fair, accurate and genuine descriptions - let's say "Rarely Worst-Connected" or "Usually Good-Enough Connected" or "You'll Take What Connection We Give You & Shut Up" - probably don't have the same marketing appeal.

Who's Better, Who's Best?

The problem is that there is no singular definition of "best". There are numerous possible criteria, many of which are heavily context-dependent.

Which "best" is being determined?

  • Highest connection speed (average, or instantaneous?)
  • Lowest latency & jitter
  • Lowest power consumption (including network, device and cloud)
  • Highest security
  • Highest visibility and control
  • Lowest cost (however defined)
  • Greatest privacy
  • Best coverage / lowest risk of drops while moving around
  • Highest redundancy (which might mean 2+ independent connections)
  • Connection to the public Internet vs. an edge server

In most cases involving smartphones, the basic definition of "best" is "enough speed and reliability so I can use my Internet / cloud application with OK performance, without costing me any extra money or inconvenience". Yet people and applications are becoming more discerning, and the network is unaware of important contextual information.

For instance, someone with flatrate data may view "best" very differently to someone with a limited data quota. Someone in a vehicle at traffic lights may have a different connection preference to someone sitting on the sofa at home. Someone playing a fast-paced game has a different best to someone downloading a software update. A user on a network with non-neutral policies, or one which collects and sells data on usage patterns, may want to use alternatives where possible.

In an era of private cellular, IoT, multiple concurrent applications, encryption, cloud/edge computing and rising security and privacy concerns, all this gets even more complex.

In addition to a lack of a single objective "best", there are many stakeholders, each of which may have their own view of what is "best", according to their particular priorities.

  • The user
  • The application developer
  • The network operator(s)
  • The user's employer or parents
  • The building / venue owner
  • The device or OS vendor
  • A third-party connection management provider (eg SD-WAN vendor)
  • The government

On some occasions, all these different "bests" will align. But on others, there will be stark divergence, especially where the stakeholders have access to different options for connectivity. A mobile phone network won't know that the user has access to an airport lounge's premium Wi-Fi, because of their frequent flyer status. A video-streaming app can't work out whether 5G or Wi-Fi will route to a closer, lower-power edge server.

So who or what oversees these conflicts and makes a final decision on which connection (or, increasingly, connections plural) is chosen? Who's the ultimate arbiter - and what do the other stakeholders do about it?

This problem isn't unique to network connectivity - it's true for transport as well. I live in London, and if I want to get from my home to somewhere else, I have lots of "best" options. Tube, bus, drive, taxi, walk, cycle and so on. Do I want to get there via the fastest route? Cheapest? Least polluting? Easiest for social-distancing? Have a chance to listen to a podcast on the way? If I want to put the best smile on the most people's faces, maybe I should go by camel or unicycle? And what's best for the city's air, Transport for London's finances, other travellers' convenience, or whoever I'm meeting (probably not the unicycle)?


There are multiple apps that give me all the options, and let me define preferences and constraints. The same is true for device operating systems, or connection-management software tools.

Hit Me With Your Best Shot

There are also all sorts of weird possible effects where "application-aware networks" end up in battle with "network-aware applications". Many applications are designed to work differently on different networks - perhaps "only auto-download video on Wi-Fi" or "ask the user before software updates download over metered connections". Some might try to work out the user's preferences intelligently, and compress / cache / adjust the flow when they appear to be on cellular, or uprate video when the user is home - or perhaps casting content to a larger screen. The network has little grasp of true context or user/developer desire and preferences.

Networks might attempt to treat a given application, user or traffic flow differently - perhaps giving it priority, or slowing or blocking it, or assigning it to a particular "slice". The application on the other hand might try to second-guess or game the network - either by spoofing another application's signature, or just using heuristics to reverse-engineer any "policy" or "optimisation" that might get applied.

You're My Best Friend

So what's the answer? How can the connectivity for a device or application be optimised?

There's no simple answer here, given the number of parameters discussed. But some general outlines can be created.

  • Firstly, there needs to be multiple connections available, and ways to choose, switch, arbitrage between them - or bond them together.
  • The operating system and radios / wired connections of the device should allow the user (or apps) to know what's available, with which characteristics - and any heuristics that can be deduced from current and previous behaviour.
  • The user or device-owner needs to know "who or what is in charge of connections" and be able to delegate and switch that decision function when desired. It might be outsourced to their MNO, or their device supplier, or a third party. Or it could be that each application gets to choose its own connection.
  • As a default, the user should always be aware of any automated changes - and be given the option to disable them. These should not be "seamless" but "frictionless" or low-friction. (Seams are important. They're there for a reason. Anyone disagreeing with this statement must post a picture of themselves wearing a seamless Lycra all-in-one along with their comment).
  • Connectivity providers (whether SPs or privately-owned) should provide rich status information about their services - expected/guaranteed speed & latency, ownership, pricing, congestion, the nature of any data-collection or traffic inspection practices, and so on. This will be useful as input to the decision engines. Over time, it will be good to standardise this information. (Governments and policymakers - take note as well)
  • We can expect connectivity decisions to be partly driven by external context - location, movement, awareness of indoor/outdoor situation, environment (eg home, work, travelling, roaming), use of accessories like headphones or displays, and so on.
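The multi-stakeholder "best" problem above can be sketched as a simple scoring function. The networks, numbers and weights are all hypothetical - the point is only that different weightings (i.e. different stakeholders' priorities) pick different winners from identical data:

```python
# Hypothetical sketch: "best" depends entirely on whose weights you use,
# so different stakeholders pick different winners from the same data.
candidates = {
    # network: (speed in Mbps, latency in ms, price in $ per GB)
    "5G":        (300, 20, 2.00),
    "home_wifi": (100, 10, 0.00),
}

def score(net: str, w_speed: float, w_latency: float, w_cost: float) -> float:
    """Higher is better: reward speed, penalise latency and cost."""
    speed, latency, cost = candidates[net]
    return w_speed * speed - w_latency * latency - w_cost * cost * 100

def best(w_speed: float, w_latency: float, w_cost: float) -> str:
    return max(candidates, key=lambda n: score(n, w_speed, w_latency, w_cost))

best(1.0, 0.0, 0.0)  # a speed-hungry app's "best" -> "5G"
best(0.0, 0.0, 1.0)  # a quota-limited user's "best" -> "home_wifi"
```

A real connection manager would need far richer inputs (coverage, privacy, redundancy, edge routing) - but even this toy version shows there is no single objective "best", only a "best according to whoever set the weights".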

Going forward, we can expect wireless devices to have some form of SD-WAN type control function. Using technologies such as multipath TCP, it will become easier to use multiple simultaneous connections - perhaps dedicating some to specific applications, or bonding them together. For security and privacy, the software may send packets via diverse routes, stopping any individual network monitoring function from seeing the entire flow.

Growing numbers of devices will have eSIM capability, allowing new network identities / owners to be added. Some may have 2+ cellular radios, as well as Wi-Fi (again, perhaps 2+ independent connections), USB and maybe in future satellite or other options as well.

Add in the potential for Free 5G (link), beamforming, private 5G, local-licensed spectrum WiFi, relaying & assorted other upcoming innovations to add even more layers here.

The bottom line is that "best connected" will become even more mythical in future than it already is. But there will be more options - and more tools - to try to optimise it, based on a dynamic and complex set of variables - especially when going beyond connectivity towards overall "quality of experience" metrics spanning eyeball-to-cloud. There are likely to be plenty of opportunities for AI, user-experience designers, standards bodies and numerous others.

But (with apologies to Tina Turner), users should always be wary of any software or service provider that claims to be "Simply the Best".

If you've enjoyed this article, please sign up for my LinkedIn Newsletter (link). Please also reach out to me for advisory workshops, consulting projects, speaking slots etc.

#5G #WiFi #cellular #mobile #telecoms #satellite #wireless #smartphones #connectionmanagement