
Showing posts with label statistics. Show all posts

Monday, June 19, 2023

CAPEX in telecoms - beware of headline numbers

This post originally appeared on June 12 on my LinkedIn feed, which is now my main platform for both short posts and longer-form articles. It can be found here, along with the comment stream. Please follow / connect to me on LinkedIn, to receive regular updates (about 1-3 / week)

CAPEX numbers are important in #telecoms. But they're also often collected and analysed in a haphazard fashion, or sometimes twisted and misinterpreted. There are examples that wrongly imply causal links, or that are carefully selected to drive specific policy choices.

- Telco execs watch CAPEX stats as they're important elements of cashflow & also signify key strategies and technology transitions
- Vendors watch #CAPEX stats to understand demand for new products
- Investors watch CAPEX as inputs to their valuation models, and as a barometer for company/industry health and prospects
- Policymakers watch CAPEX as it gets captured in "investment" statistics, and as an indicator for potential regulatory changes (or as a metric of success of previous policies)

Various ratios are commonplace, for both companies and the industry:
- CAPEX vs. revenues
- CAPEX vs. EBITDA
- CAPEX of telecoms vs. tech/hyperscalers
- CAPEX vs. R&D spending
- Fixed vs. Mobile CAPEX
... and so on

The problem is that "telco CAPEX" is also a very vague and malleable concept. Digging into it reveals many more questions - and problems with the methodologies and conclusions drawn, especially where headline numbers are concerned.

Some of the questions I'm currently looking at include:

- What counts as a "telco"? Are you including towercos, subsea fibre operators, municipalities building networks, MVNOs and many others?
- Are historic CAPEX numbers restated when telcos sell or acquire other businesses, especially tower spin-outs?
- Is it meaningful to compare CAPEX for 10 / 30 / 50 year assets such as #FTTP, which will generate decades of new revenue, with last year's figures?
- How do you separate CAPEX for basic coverage vs. incremental capacity vs. "generational" upgrades to fibre or #5G? A lot of CAPEX occurs even if usage is low
- How do you deal with leasing or other financing models? If CAPEX shifts to OPEX, how is it captured in the stats?
- What happens with "cloudified" networks? Firstly they rely on shared (often 3rd-party) assets, and secondly they are *supposed* to lower costs / investments. But will the lower CAPEX be viewed as a sign of distress, not modernisation?
- Is non-network CAPEX broken out (eg retail sites, central offices, datacentres etc)?
- Is "adjacent capex" included, and if so, how? Eg in-building #wireless, #spectrum licenses, software development
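One of those questions - restatement after tower spin-outs - is easy to illustrate with a toy calculation. All figures below are invented, but the mechanism is real: when tower builds become lease payments, the same underlying network spend shifts from CAPEX to OPEX and the headline ratio moves with no change in actual investment.

```python
# Illustrative sketch (all figures hypothetical): how a tower spin-out can
# move the headline CAPEX/revenue ratio with no change in real investment.

def capex_intensity(capex: float, revenue: float) -> float:
    """Headline CAPEX-to-revenue ratio, as commonly quoted (e.g. 0.20 = 20%)."""
    return capex / revenue

# Before the spin-out: the telco builds and owns its towers outright.
before = capex_intensity(capex=4.0, revenue=20.0)   # $4bn CAPEX on $20bn revenue

# After: tower builds become lease payments to a towerco, i.e. OPEX.
# $1bn of the same network spend now never appears as CAPEX at all.
after = capex_intensity(capex=3.0, revenue=20.0)

print(f"CAPEX/revenue before spin-out: {before:.0%}")  # 20%
print(f"CAPEX/revenue after spin-out:  {after:.0%}")   # 15%
# The network is being built out at the same pace, but the headline
# ratio suggests a 5-point drop in "investment".
```

Anyone comparing the "after" year to the "before" year without restating the history would wrongly conclude investment had collapsed.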

I hear many commentators and lobbyists claim "#NetNeutrality led to lower CAPEX!" or "Streaming traffic leads to higher CAPEX!" or "There's an investment gap!". Without detailed data - and an analysis of causality - you have to question the veracity & meaningfulness of such rhetoric.

In summary - CAPEX is indeed important. But it's so important that headline numbers, taken alone, are often useless or misleading.

Ask for details on segmentation, methodology and definitions - if they aren't available, treat the numbers with deep skepticism.

#FTTX #telcos #regulations #networks #fairshare

Wednesday, March 03, 2021

The Worst Metrics in Telecoms

 (This post was initially published as an article on my LinkedIn Newsletter - here - please see that version for comments and discussion)

GDP isn't a particularly good measure of the true health of a country's economy. Most economists and politicians know this.

This isn't a plea for non-financial measures such as "national happiness". It's a numerical issue. GDP is hard to measure, with definitions that vary widely by country. Important aspects of the modern world such as "free" online services and family-provided eldercare aren't really counted properly.

However, people won't abandon GDP, because they like comparable data with a long history. They can plot trends, curves, averages... and don't need to revise spreadsheets and models from the ground up with something new. Other metrics are linked to GDP - R&D intensity, NATO military spending commitments and so on - which would need to be re-based if a different measure were used. The accounting and political headaches would be huge.

A poor metric often has huge inertia and high switching costs.

Telecoms is no different from many other sub-sectors of the economy. There are many old-fashioned metrics that are really not fit for purpose any more - and even some new ones that are badly-conceived. They often lead to poor regulatory decisions, poor optimisation and investment approaches by service providers, flawed incentives and large tranches of self-congratulatory overhype.

Some of the worst telecoms metrics I see regularly include:

  • Voice traffic measured in minutes of use (or messages counted individually)
  • Cost per bit (or increasingly energy use per bit) for broadband
  • $ per MHz per POP (population) for radio spectrum auctions
  • ARPU
  • CO2 savings "enabled" by telecom services, especially 5G

That's not an exhaustive list by any means. But the point of this article is to make people think twice about commonplace numbers - and ideally think of meaningful metrics rather than easy or convenient ones.

The sections below give some quick thoughts on why these metrics either won't work in the future - or were simply terrible all along.

(As an aside, if you ever see numbers - especially forecasts - with too many digits and "spurious accuracy", that's an immediate red flag: "The Market for Widgets will be $27.123bn in 2027". It tells you that the source really doesn't understand numbers - and you shouldn't trust, or base decisions on, someone that mathematically inept.)

Minutes and messages

The reason we count phone calls in minutes (rather than, say, conversations, or just a monthly access fee) is a historical accident. The original human switchboard operators were paid by the hour, so a time-based quantum made the most sense for billing users. And while many phone plans are now either flat-rate or use per-second rates, many regulations are still framed in the language of "the minute". (Note: some long-distance calls were also priced by the length of cable used - so "per mile" as well as per minute.)

This is a ridiculous anachronism. We don't measure or price other audiovisual services this way. You don't pay per-minute for movies or TV, or value podcasts, music or audiobooks on a per-minute basis. Other non-telephony voice communications modes such as push-to-talk, social audio like ClubHouse, or requests to Alexa or Siri aren't time-based.

Ironically, shorter calls are often more valuable to people. There's a fundamental disconnect between price and value.

A one-size-fits-all metric for calls stops telcos and other providers from innovating around context, purpose and new models for voice services. It's hard to charge extra for "enhanced voice" in a dozen different dimensions. They should call on governments to scrap minute-based laws and reporting requirements, and rejig their own internal systems to a model that makes more sense.

Much the same argument applies to counting individual messages/SMS as well. It's a meaningless quantum that doesn't align with how people use IMs / DMs / group chats and other similar modalities. It's like counting or charging for documents by the pixel. Threads, sessions or conversations are often more natural units, albeit harder to measure.

Cost per bit

"5G costs less per bit than 4G". "Traffic levels increase faster than revenues!".

Cost-per-bit is an often-used but largely meaningless metric, which drives poor decision-making and incentives, especially in the 5G era of multiple use-cases - and essentially infinite ways to calculate the numbers.

Different bits have very different associated costs. A broad average is very unhelpful for investment decisions. The cost of a “mobile” bit (for an outdoor user in motion, handing off from cell to cell) is very different to an FWA bit delivered to a house’s external fixed antenna, or a wholesale bit used by an MVNO.

Costs can vary massively by spectrum band, to a far greater degree than technology generation - with the cost of the spectrum itself a major component. Convergence and virtualisation means that the same costs (eg core and transport networks) can apply to both fixed and mobile broadband, and 4G/5G/other wireless technologies. Uplink and downlink bits also have different costs - which perhaps should include the cost of the phone and power it uses, not just the network.

The arrival of network slicing (and URLLC) will mean “cost per bit” is an ever-worse metric, as different slices will inherently be more or less "expensive" to create and operate. Same thing with local break-out, delivery of content from a nearby edge-server or numerous other wrinkles.

But in many ways, the "cost" part of cost/bit is perhaps the easiest to analyse, despite the accounting variabilities. Given enough bean-counters and some smarts in the network core/OSS, it would at least theoretically be possible to create some decent numbers.

But the bigger problem is the volume of bits. Traffic is not an independent variable which flexes up and down based purely on user demand and consumption. Faster networks with more instantaneous "headroom" actually create many more bits, as adaptive codecs and other application intelligence mean that traffic expands to fill the space available. And pricing strategy can basically dial up or down the number of bits customers use, with minimal impact on costs.

A video application might automatically increase the frame rate, or upgrade from SD to HD, with no user intervention - and very little extra "value". There might be 10x more bits transferred for the same costs (especially if delivered from a local CDN). Application developers might use tools to predict available bandwidth, and change the behaviour of their apps dynamically.

So - if averaged costs are incalculable, and bit-volume is hugely elastic, then cost/bit is meaningless. Ironically, "cost per minute of use" might actually be more relevant here than it is for voice calls. At the very least, cost per bit needs separate calculations for MBB / FWA / URLLC, and by local/national network scale.

(By a similar argument, "energy consumed per bit" is pretty useless too).
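The elasticity problem is easy to demonstrate with a toy calculation (all numbers invented): hold the network's cost constant, let adaptive codecs multiply the bits, and watch the "efficiency" metric improve by itself.

```python
# Hypothetical figures: the same network, same annual cost, before and after
# adaptive video codecs silently upgrade streams from SD to HD quality.

network_cost = 1_000_000_000      # annual network cost in $ (unchanged)
bits_sd = 2.0e18                  # annual traffic with SD-quality defaults
bits_hd = bits_sd * 10            # ~10x more bits for the same content & users

cost_per_bit_sd = network_cost / bits_sd
cost_per_bit_hd = network_cost / bits_hd

ratio = cost_per_bit_sd / cost_per_bit_hd
print(f"Apparent cost-per-bit improvement: {ratio:.0f}x")
# "Cost per bit fell 10x!" - yet nothing about the network's economics,
# the users' experience, or the operator's efficiency actually changed.
```

The denominator moved on its own; the metric "improved" without anyone investing, optimising or benefiting.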

Spectrum prices for mobile use

The mobile industry has evolved around several generations of technology, typically provided by MNOs to consumers. Spectrum has typically been auctioned for exclusive use on a national / regional basis, in fixed-size chunks perhaps 5/10/20MHz wide, with licenses often specifying rules on population coverage.

For this reason, it's not surprising that a very common metric is "$ per MHz / Pop" - the cost per megahertz, per addressable population in a given area.

Up to a point, this has been pretty reasonable, given that the main use of 2G, 3G and even 4G has been for broad, wide-area coverage for consumers' phones and sometimes homes. It has been useful for investors, telcos, regulators and others to compare the outcomes of auctions.

But for 5G and beyond (actually the 5G era, rather than 5G specifically), this metric is becoming ever less-useful. There are three problems here:

  • Growing focus on smaller areas of licenses: county-sized in CBRS in the US, and site-specific in Germany, UK and Japan for instance, especially for enterprise sites and property developments. This makes comparisons much harder, especially if areas are unclear.
  • Focus of 5G and private 4G on non-consumer applications and uses. Unless the idea of "population" is expanded to include robots, cars, cows and IoT gadgets, the "pop" part of the metric clearly doesn't work. As the resident population of a port or offshore windfarm zone is zero, a local spectrum license there would effectively have an infinite $ / MHz / Pop.
  • Spectrum licenses are increasingly being awarded with extra conditions such as coverage of roads, land-area - or mandates to offer leases or MVNO access. Again, these are not population-driven considerations.
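The zero-population failure mode is mechanical: the formula literally divides by zero. A quick sketch (auction figures invented for illustration):

```python
def price_per_mhz_pop(price: float, mhz: float, population: int) -> float:
    """The classic $/MHz/Pop auction benchmark: price / (bandwidth x people)."""
    if population == 0:
        # A port or offshore windfarm: the metric breaks down entirely.
        return float("inf")
    return price / (mhz * population)

# National consumer license: the metric behaves as intended.
national = price_per_mhz_pop(price=1.4e9, mhz=20, population=67_000_000)
print(f"National license: ${national:.2f} per MHz/Pop")

# Local industrial license at an uninhabited site: same formula, nonsense answer.
port = price_per_mhz_pop(price=50_000, mhz=100, population=0)
print(f"Port site: {port} per MHz/Pop")   # inf
```

A metric that returns infinity for a perfectly sensible commercial transaction is telling you its assumptions no longer hold.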

Over the next decade we will see much greater use of mobile spectrum-sharing, new models of pooled ("club") spectrum access, dynamic and database-driven access, indoor-only licenses, secondary-use licenses and leases, and much more.

Taken together, these issues are increasingly rendering $/MHz/Pop a legacy irrelevance in many cases.

ARPU

"Average Revenue Per User" is a longstanding metric used in various parts of telecoms, but especially by MNOs for measuring their success in selling consumers higher-end packages and subscriptions. It has long come under scrutiny for its failings, and various alternatives such as AMPU (M for margin) have emerged, as well as ways to carve out dilutive "user" groups such as low-cost M2M connections. There have also been attempts to distinguish "user" from "SIM", as some people have multiple SIMs, while other SIMs are shared.

At various points in the past it used to "hide" effective loan repayments for subsidised handsets provided "free" in the contract, although that has become less of an issue with newer accounting rules. It also faces complexity in dealing with allocating revenues in converged fixed/mobile plans, family plans, MVNO wholesale contracts and so on.

A similar issue to "cost per bit" is likely to affect ARPU in the 5G era. Unless revenues and user numbers are broken out more finely, the overall figure is going to be a meaningless amalgam of ordinary post/prepaid smartphone contracts, fixed wireless access, premium "slice" customers and a wide variety of new wholesale deals.
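As a toy illustration (all segment figures invented), blending dissimilar businesses into one ARPU produces a number that describes none of them:

```python
# Hypothetical subscriber mix: a single blended ARPU hides very different businesses.

segments = {
    # segment: (monthly revenue per connection in $, number of connections)
    "postpaid smartphone":  (35.00, 10_000_000),
    "fixed wireless access":(50.00,  1_000_000),
    "wholesale MVNO":       ( 8.00,  2_000_000),
    "IoT / M2M":            ( 0.50, 20_000_000),
}

total_revenue = sum(arpu * n for arpu, n in segments.values())
total_users = sum(n for _, n in segments.values())
blended_arpu = total_revenue / total_users

print(f"Blended ARPU: ${blended_arpu:.2f}")   # ~$12.91
# The blended figure matches none of the four segments - and signing up
# 10m more low-value IoT connections would "crash" reported ARPU even
# while total revenue rises.
```

Note the perverse incentive in the last comment: a growing, profitable IoT business makes the headline metric look worse.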

The other issue is that ARPU further locks telcos into the mentality of the "monthly subscription" model. While fixed monthly subs, or "pay as you go top-up" models still dominate in wireless, others are important too, especially in the IoT world. Some devices are sold with connectivity included upfront.

Enterprises buying private cellular networks specifically want to avoid per-month or per-GB "plans" - it's one of the reasons they are looking to create their own dedicated infrastructure. MNOs may need to think in terms of annual fees, systems integration and outsourcing deals, "devices under management" and all sorts of other business models. The same is true if they want to sell "slices" or other blended capabilities - perhaps geared to SLAs or business outcomes.

Lastly - what is a "user" in future? An individual human with a subscription? A family? A home? A group? A device?

ARPU is another metric overdue for obsolescence.

CO2 "enablement" savings

I posted last week about the growing trend for companies and organisations to claim that a technology (often 5G, or perhaps IoT in general) allows users to "save X tons of CO2 emissions".

You know the sort of thing - "Using augmented reality conferencing on your 5G phone for a meeting avoids the need for a flight & saves 2.3 tons of CO2" or whatever. Even leaving aside the thorny issue of Jevons' Paradox - efficiency tends to expand usage rather than replace it - there's a big problem here:

Double-counting.

There's no attempt at allocating this notional CO2 "saving" between the device(s), the network(s), the app, the cloud platform, the OS & 100 other elements. There's no attempt such as "we estimate that 15% of this is attributable to 5G for x, y, z reasons".

Everyone takes 100% credit. And then tries to imply it offsets their own internal CO2 use.
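To make the double-counting concrete, here's a trivial sketch (the claimant list and figures are invented, but the accounting pattern is the one described above):

```python
# Hypothetical: one avoided flight, 2.3 tonnes of CO2 saved - but every link
# in the value chain books 100% of the credit in its own sustainability report.

actual_saving_tonnes = 2.3
claimants = ["device maker", "5G network operator", "conferencing app",
             "cloud platform", "OS vendor"]

# No allocation between the parties: each simply claims the full saving.
total_claimed = actual_saving_tonnes * len(claimants)

print(f"Real saving:   {actual_saving_tonnes} t")
print(f"Total claimed: {total_claimed:.1f} t")   # 11.5 t - 5x the real figure
```

Add up the industry's "enablement" press releases and the claimed savings can exceed the actual avoided emissions several times over.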

"Yes, 5G needs more energy to run the network. But it's lower CO2 per bit, and for every ton we generate, we enable 2 tons in savings in the wider economy".

Using that logic, the greenest industry on the planet is industrial sand production, as it's the underlying basis of every silicon chip in every technological solution for climate change.

There's some benefit from CO2 enablement calculations, for sure - and there's more work going into reasonable ways to allocate savings (look in the comments for the post I link to above), but readers should be super-aware of the limitations of "tons of CO2" as a metric in this context.

So what's the answer?

It's fairly easy to poke holes in things. It's harder to find a better solution. Having maintained spreadsheets of company and market performance and trends myself, I know that analysis is often held hostage by what data is readily available. Telcos report minutes-of-use and ARPU, so that's what everyone else uses as a basis. Governments may demand that reporting, or frame rules in those terms (for instance, wholesale voice termination rates have "per minute" caps in some countries).

It's very hard to escape from the inertia of a long and familiar dataset. Nobody wants to recreate their tables and try to work out historic comparables. There is huge path dependence at play - small decisions made years ago, which have become entrenched in practice in perpetuity, even though the original rationale has long since gone. (You may have noticed me mention path dependence a few times recently. It's a bit of a focus of mine at the moment...)

But there's a circularity here. Certain metrics get entrenched and nobody ever questions them. They then get rehashed by governments and policymakers as the basis for new regulations or measures of market success. Investors and competition authorities use them. People ignore the footnotes and asterisks warning of limitations.

The first thing people should do is question the definitions of familiar public or private metrics. What do they really mean? For a ratio, are the assumptions (and definitions) for both denominator and numerator still meaningful? Is there some form of allocation process involved? Are there averages which amalgamate lots of dissimilar categories?

I'd certainly recommend Tim Harford's book "How to Make the World Add Up" (link) as a good backgrounder to questioning how stats are generated and sometimes misused.

But the main thing I'd suggest is asking whether metrics can either hide important nuance - or can set up flawed incentives for management.

There's a long history of poor metrics having unintended consequences. For example, it would be awful (but not inconceivable) to raise ARPUs by cancelling the accounts of low-end users. Or perhaps an IoT-focused vertical service provider gets punished by the markets for "overpaying" for spectrum in an area populated by solar panels rather than people.

Stop and question the numbers. See who uses them / expects them and persuade them to change as well. Point out the fallacies and flawed incentives to policymakers.

If you have any more examples of bad numbers, feel free to add them in the comments. I forecast there will be 27.523 of them, by the end of the year.

The author is an industry analyst and strategy advisor for telecoms companies, governments, investors and enterprises. He often "stress-tests" qualitative and quantitative predictions and views of technology markets. Please get in touch if this type of viewpoint and analysis interests you - and also please follow @disruptivedean on Twitter.

Monday, December 04, 2017

5G & IoT? We need to talk about latency



Much of the discussion around the rationale for 5G – and especially the so-called “ultra-reliable” high QoS versions – centres on minimising network latency. Edge-computing architectures like MEC also focus on this. The worthy goal of 1 millisecond roundtrip time is often mentioned, usually in the context of applications like autonomous vehicles with snappy responses, AR/VR headsets without nausea, the “tactile Internet” and remote drone/robot control.

Usually, that is accompanied by some mention of 20 or 50 billion connected devices by [date X], and perhaps trillions of dollars of IoT-enabled value.

In many ways, this is irrelevant at best, and duplicitous and misleading at worst.

IoT devices and applications will likely span 10 or more orders of magnitude for latency, not just the two between 1-10ms and 10-100ms. Often, the main value of IoT comes from changes over long periods, not realtime control or telemetry.

Think about timescales a bit more deeply:

  • Sensors on elevator doors may send sporadic data, to predict slowly-worsening mechanical problems – so an engineer might be sent a month before the normal maintenance visit.
  • A car might download new engine-management software once a week, and upload traffic observations and engine-performance data once a day (maybe waiting to do it over WiFi, in the owner’s garage, as it's not time-critical).
  • A large oil storage tank, or a water well, might have a depth-gauge giving readings once an hour.
  • A temperature sensor and thermostat in an elderly person’s home, to manage health and welfare, might track readings and respond with control messages every 10 minutes. Room temperatures change only slowly.
  • A shared bicycle might report its position every minute – and unlock in under 10 seconds when the user buys access with their smartphone app
  • A payment or security-access tag should check identity and open a door, or confirm a transaction, in a second or two.
  • A networked video-surveillance system may need to send a facial image, and get a response in a tenth of a second, before they move out of camera-shot.
  • A doctor’s endoscope or microsurgery tool might need to respond to controls (and send haptic feedback) 100 times a second – ie every 10ms
  • A rapidly-moving drone may need to react in a millisecond to a control signal, or a locally-recognised risk.
  • A sensitive industrial process-control system may need to be able to respond in 10s or 100s of microseconds to avoid damage to finely-calibrated machinery
  • Image sensors and various network sync mechanisms may require response times measured in nanoseconds
I have not seen any analysis that tries to divide the billions of devices, or trillions of dollars, into these very-different cohorts of time-sensitivity. Given the assumptions underpinning a lot of 5G business cases, I’d suggest that this type of work is crucial. Some of these use-cases are slow enough that sending data by 2G is fine (or by mail, in some cases!). Others are so fast they’ll need fibre – or compute capability located locally on-device, or even on-chip, rather than in the cloud, even if it’s an “edge” node.
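One way to start that division would be a simple bucketing by order of magnitude. Here's a rough sketch using the examples above (the timescales are illustrative guesses, not measurements):

```python
import math

# Bucket the article's example IoT use-cases by the rough order of magnitude
# of their latency / response-time tolerance (all figures illustrative).

use_cases_seconds = {
    "elevator wear sensor":        30 * 24 * 3600,  # ~a month of headroom
    "car software download":       24 * 3600,       # daily
    "tank depth gauge":            3600,            # hourly
    "home thermostat":             600,             # every 10 minutes
    "shared-bike unlock":          10,
    "payment / door-access tag":   1,
    "surveillance face match":     0.1,
    "surgical haptic feedback":    0.01,
    "drone control":               0.001,
    "industrial process control":  0.0001,
    "sensor / network sync":       1e-9,
}

# Nearest power of ten for each use-case's timescale.
decades = [round(math.log10(t)) for t in use_cases_seconds.values()]

for (name, _), d in zip(use_cases_seconds.items(), decades):
    print(f"{name:28s} ~10^{d:+d} s")

print(f"Span: {max(decades) - min(decades)} orders of magnitude")
# Only a couple of these sit in the contested 1ms-100ms band that
# dominates 5G latency marketing.
```

Even this crude bucketing spans some 15 orders of magnitude, which is why a single "low latency" story can't cover the whole IoT market.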

I suspect (this is a wild guess, I'll admit) that the proportion of IoT devices, for which there’s a real difference between 1ms and 10ms and 100ms, will be less than 10%, and possibly less than 1% of the total. 

(Separately, the network access performance might be swamped by extra latency added by security functions, or edge-computing nodes being bypassed by VPN tunnels)

The proportion of accrued value may be similarly low. A lot of the IoT examples I hear about are either long time-series collections of sensor data (for asset performance-management and predictive maintenance), or have fairly loose timing constraints. A farm's moisture sensors and irrigation pumps don't need millisecond response times. Conversely, a chemical plant may need to measure and alter pressures or flows in microseconds.

Are we focusing 5G too much on the occasional Goldilocks of not-too-fast and not-too-slow?