Speaking Engagements & Private Workshops - Get Dean Bubley to present or chair your event

Need an experienced, provocative & influential telecoms keynote speaker, moderator/chair or workshop facilitator?
To see recent presentations, and discuss Dean Bubley's appearance at a specific event, click here

Showing posts with label forecasts. Show all posts

Thursday, June 29, 2023

5G data traffic growth - the devil (FWA) is in the detail

This blog combines two separate, linked LinkedIn articles published in June 2023 on consecutive days. The original posts and comment threads are here and here.

Measuring #mobile data traffic is important for operators, vendors, and policymakers.

As I've said before, we should use *good* #metrics to measure the #telecoms industry, rather than just *easy* metrics. This post is an example of what I mean.

Yesterday, Ericsson released its latest Mobility Report. It's always an interesting trove of statistics on mobile subscribers, networks and usage, with extra topical articles, sometimes written by customers or guests.

While obviously it's very oriented to cellular technologies and has an optimistic pro-3GPP stance, it has a long pedigree and a lot of work goes into it. It's partly informed by private stats from Ericsson's real-world, in-service networks run by MNO customers.

This edition includes extra detail, such as breaking out fixed-wireless access & separating video traffic into VoD #streaming (eg Netflix) vs. social media like TikTok and YouTube.

It has plenty of golden "information nuggets". For instance, traffic density can be 500-1000x higher in dense urban locations than in sparse rural areas. I'll come back to that another time.

Global mobile data grew 36% from Q1'22 to Q1'23. The full model online predicts 31% growth in CY2023, falling to just 15% in 2028, despite adding in AR/VR applications towards the end of the decade. That's a fairly rapid s-curve flattening.

For Europe, MBB data growth is predicted at 29% in 2023, falling to only 12% in 2028. That's a *really* important one for all sorts of reasons, and is considerably lower than many other forecasts.
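To get a feel for what that flattening implies, here's a quick Python sketch. The intermediate years are my own interpolation between the report's 2023 and 2028 endpoints, not Ericsson's actual year-by-year figures:

```python
# Compound effect of the forecast slowdown: annual mobile data growth declining
# from 31% (2023) to 15% (2028). Intermediate-year rates are my own rough
# interpolation, not figures from the Ericsson Mobility Report.
growth = {2023: 0.31, 2024: 0.28, 2025: 0.25, 2026: 0.21, 2027: 0.18, 2028: 0.15}

traffic = 100.0  # index: end-2022 traffic = 100
for year, rate in growth.items():
    traffic *= 1 + rate
    print(year, round(traffic))

# Traffic still ends up around 3.4x the 2022 level by 2028 - but a constant
# 36%/year (the Q1'22-Q1'23 rate) would have meant more than 6x.
```

Even a gently declining growth rate compounds to a very different capacity requirement than a sustained one, which is why the headline percentage matters so much.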

But what really caught my eye was this: "#FWA data traffic represented 21% of global mobile data traffic at the end of 2022". Further, it is projected to grow much faster than mobile broadband (MBB) and account for *30%* of total traffic in 2028, mostly #5G. When the famous "5G triangle" of use-cases was developed by ITU, it didn't even mention FWA.

However, the report didn't break out this split by region. So I decided to estimate it myself based on the regional split of FWA subscribers, which was shown in a graphic. I also extended the forecasts out to 2030.

I then added an additional segmentation of my own - an indoor vs outdoor split of MBB data. I've pegged this at 75% indoors, aligning with previous comments from Ericsson and others. Some indoor MBB is served by dedicated in-building wireless systems, and some is outdoor-to-indoor from macro RAN or outdoor small cells.

The result is fascinating. By 2030, it is possible that over 40% of European 5G data traffic will be from FWA. Just 14% of cellular data will be for outdoor mobile broadband. So what's generating the alleged 5G GDP uplift?

That has massive implications for spectrum policy (eg on #6GHz) and proposed #fairshare traffic fees. It also highlights the broad lack of attention paid to indoor cellular and FWA.

Note: This is a quick, rough estimate, but it's the type of data we need for better decision-making. I hope to catalyse others to do similar analysis.
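The arithmetic behind an estimate like this is trivial - the value is in choosing the splits. A toy version in Python, where the FWA shares and the 75% indoor figure are my own assumptions for illustration, not Ericsson's published regional data:

```python
# Back-of-envelope split of cellular data traffic. All inputs are illustrative
# assumptions, not Ericsson's published figures:
#   - FWA share of total cellular traffic: 21% (2022, global) rising towards
#     ~44% (2030, Europe, my extrapolation)
#   - 75% of mobile broadband (MBB, i.e. non-FWA) traffic is consumed indoors

def traffic_split(fwa_share, indoor_mbb_fraction=0.75):
    """Return (fwa, indoor_mbb, outdoor_mbb) as shares of total cellular traffic."""
    mbb = 1.0 - fwa_share
    return fwa_share, mbb * indoor_mbb_fraction, mbb * (1.0 - indoor_mbb_fraction)

for year, fwa in [(2022, 0.21), (2028, 0.30), (2030, 0.44)]:
    f, i, o = traffic_split(fwa)
    print(f"{year}: FWA {f:.0%}, indoor MBB {i:.0%}, outdoor MBB {o:.0%}")
```

With a 44% FWA share, outdoor mobile broadband - the thing the "5G triangle" was actually drawn for - ends up at just 14% of total traffic.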

 


A separate second post then looked at the policy aspects of this:

Yesterday's post on mobile data traffic - and contribution from 5G FWA and indoor use - seems to have struck a chord. Some online and offline comments have asked about the policy implications.

There are several conclusions for regulators and telecoms/infrastructure ministries:

- Collect more granular data, or make reasoned estimates, of breakdowns of data traffic in your country & trends over time. As well as #FWA vs #MBB & indoor vs outdoor, there should be a split between rural / urban / dense & ideally between macro #RAN vs outdoor #smallcell vs dedicated indoor system. Break out rail / road transport usage.
- Develop a specific policy (or at least gather data and policy drivers) for FWA & indoor #wireless. That feeds through to many areas including spectrum, competition, consumer protection, #wholesale, rights-of-way / access, #cybersecurity, inclusion, industrial policy, R&D, testbeds and trials etc. Don't treat #mobile as mostly about outdoor or in-vehicle connectivity.
- View demand forecasts of mobile #datatraffic and implied costs for MNO investment / capacity-upgrade through the lens of detailed stats, not headline aggregates. FWA is "discretionary"; operators know it creates 10-20x more traffic per user. In areas with poor fixed #broadband (typically rural) that's potentially good news - but those areas may have spare mobile capacity rather than needing upgrades. Remember 4G-to-5G upgrade CAPEX is needed irrespective of traffic levels. FWA in urban areas likely competes with fibre and is a commercial choice, so complaints about traffic growth are self-serving.
- Indoor & FWA wireless can be more "tech neutral" & "business model neutral" than outdoor mobile access. #WiFi, #satellite and other technologies play more important roles - and may be lower-energy too. Shared / #neutralhost infrastructure is very relevant.
- Think through the impact of detailed data on #spectrum requirements and bands. In particular, the FWA/MBB & indoor splits are yet more evidence that the need for #6GHz for #5G has been hugely overstated. In particular, because FWA is "deterministic" (ie it doesn't move around or cluster in crowds) it's much more tolerant of using different bands - or unlicensed spectrum. Meanwhile indoor MBB can be delivered with low-band macro 5G, dedicated in-building systems (perhaps mmWave), or offloaded to WiFi. Using midband 5G and MIMO to "blast through walls" is not ideal use of either spectrum or energy.
- View 5G traffic data/forecasts used in so-called #fairshare or #costrecovery debates with skepticism. Check if discretionary FWA is inflating the figures. Question any GDP impact claims. Consider how much RAN investment is actually serving indoor users, maybe inefficiently. And be aware that home FWA traffic skews towards TVs and VoD #streaming (Netflix, Prime etc) rather than smartphone- or upload-centric social #video like TikTok & FB/IG.

Telecoms regulation needs good input data, not convenient or dramatic headline stats.

 

Thursday, June 22, 2023

Data traffic growth forecasts - AD Little's new report has much better methodology than most

This post originally appeared on June 5 on my LinkedIn feed, which is now my main platform for both short posts and longer-form articles. It can be found here, along with the comment stream. Please follow / connect with me on LinkedIn to receive regular updates (about 1-3 per week).

When I saw that Arthur D. Little had published a report on “The evolution of data growth in Europe”, on behalf of ETNO Association & GSMA, I rolled my eyes.
 
Both organisations have previously published terrible studies by consultants, riddled with flawed assumptions and dodgy multiplier "fiddle factors". I’ve loudly criticised Axon and Coleago reports related to the (un)#fairshare and #6GHz #spectrum debates respectively.
 
So I started the ADL report with trepidation, not helped by a strange typo / editing error in the first paragraph.
 
But actually, the report is pretty good, and I broadly agree with both methodology and conclusions, albeit with one major caveat.
 
It estimates usage of home and mobile broadband on the basis of hours-per-day of active use of heavy applications such as video streaming, gaming and possible metaverse-type experiences.
 
I’ve used GB-per-hour myself, to model passenger data-traffic demand on trains. It makes more sense than the usual Gbps, as most applications are “bursty”. It also fits the typical heuristics of human behaviour. How many seconds a day do you spend on social media?
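To illustrate the approach - active hours of use times a GB-per-hour heuristic, rather than peak Gbps - here's a minimal sketch. The intensity figures are my own illustrative assumptions, not ADL's:

```python
# Demand modelled from active hours per day, not peak Gbps.
# GB-per-hour intensities are illustrative assumptions, not from the ADL report.
GB_PER_HOUR = {"hd_video": 1.5, "4k_video": 7.0, "cloud_gaming": 10.0, "social_video": 0.8}

def monthly_gb(daily_hours):
    """daily_hours maps application -> hours of active use per day."""
    return sum(GB_PER_HOUR[app] * hours for app, hours in daily_hours.items()) * 30

# A household doing 2h of HD video and 1h of social video per day:
print(round(monthly_gb({"hd_video": 2, "social_video": 1})), "GB/month")  # 114 GB/month
```

The nice property is that the inputs map onto human behaviour (hours in front of a screen), which is bounded, rather than onto link speeds, which are not.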
 
The central prediction of 20% growth in fixed traffic and 25% for mobile usage seems reasonable. I could argue for 25/20 rather than 20/25, but it's fine as a rough estimate.

Importantly, these growth rates for the next few years are well within the capabilities of both fixed broadband (moving to #FTTP) and mobile (#5G) networks, without incremental investments in extra capacity beyond the main "generational" shift & CAPEX. And that shift is driven by government policy and competition, not traffic load and congestion. The report convincingly shows that nobody really needs/values more than 100Mbps for current apps, so #gigabit networks have plenty of headroom.

My main criticism is there is no analysis of mobile device traffic carried over fixed networks and #WiFi. Smartphones used at home for video, gaming or social media will be c80% on Wi-Fi, and indoor usage is c80% of the total.

The report also talks about AI pre-emptively downloading content for “infinite scrolling”, but doesn't suggest it could be smart enough to do so mostly over cheap / low-energy fixed connections. (IMO, by 2030, governments may *mandate* cellular offload via neutral-host or Wi-Fi for indoor use).

I agree with the report's assertions that VR is an indoor/fixed application, that most #IoT traffic is a rounding error and that #Web3 is probably irrelevant. The #metaverse scenarios seem mostly plausible.
 
One area I think ADL underestimates is fixed broadband for video streaming. While Netflix and YouTube are “active” viewing, historically, many people just leave broadcast TV switched on, even if nobody is in the room except the cat.

If TV really goes online-only, then that becomes a genuine “waste” of capacity, unless you can advertise to pets.

Overall - really quite good analysis, which (ironically, given the sponsors) fatally undermines the #InternetTrafficTax rhetoric.

 


Wednesday, March 03, 2021

The Worst Metrics in Telecoms

 (This post was initially published as an article on my LinkedIn Newsletter - here - please see that version for comments and discussion)

GDP isn't a particularly good measure of the true health of a country's economy. Most economists and politicians know this.

This isn't a plea for non-financial measures such as "national happiness". It's a numerical issue. GDP is hard to measure, with definitions that vary widely by country. Important aspects of the modern world such as "free" online services and family-provided eldercare aren't really counted properly.

However, people won't abandon GDP, because they like comparable data with a long history. They can plot trends, curves, averages... and don't need to rebuild spreadsheets and models from the ground up for something new. Other metrics are linked to GDP - R&D intensity, NATO military spending commitments and so on - which would need to be re-based if a different measure was used. The accounting and political headaches would be huge.

A poor metric often has huge inertia and high switching costs.

Telecoms is no different from many other sub-sectors of the economy. There are many old-fashioned metrics that are really no longer fit for purpose - and even some new ones that are badly conceived. They often lead to poor regulatory decisions, poor optimisation and investment approaches by service providers, flawed incentives and large tranches of self-congratulatory overhype.

Some of the worst telecoms metrics I see regularly include:

  • Voice traffic measured in minutes of use (or messages counted individually)
  • Cost per bit (or increasingly energy use per bit) for broadband
  • $ per MHz per POP (population) for radio spectrum auctions
  • ARPU
  • CO2 savings "enabled" by telecom services, especially 5G

That's not an exhaustive list by any means. But the point of this article is to make people think twice about commonplace numbers - and ideally think of meaningful metrics rather than easy or convenient ones.

The sections below give some quick thoughts on why these metrics either won't work in the future - or are simply terrible, and always have been.

(As an aside, if you ever see numbers - especially forecasts - with too many digits and "spurious accuracy", that's an immediate red flag: "The Market for Widgets will be $27.123bn in 2027". It tells you that the source really doesn't understand numbers - and you shouldn't trust, or base decisions on, anyone that mathematically inept.)

Minutes and messages

The reason we count phone calls in minutes (rather than, say, conversations or just a monthly access fee) is based on an historical accident. Original human switchboard operators were paid by the hour, so a time-based quantum made the most sense for billing users. And while many phone plans are now either flat-rate, or use per-second rates, many regulations are still framed in the language of "the minute". (Note: some long-distance calls were also based on length of cable used, so "per mile" as well as minute)

This is a ridiculous anachronism. We don't measure or price other audiovisual services this way. You don't pay per-minute for movies or TV, or value podcasts, music or audiobooks on a per-minute basis. Other non-telephony voice communications modes such as push-to-talk, social audio like ClubHouse, or requests to Alexa or Siri aren't time-based.

Ironically, shorter calls are often more valuable to people. There's a fundamental disconnect between price and value.

A one-size-fits-all metric for calls stops telcos and other providers from innovating around context, purpose and new models for voice services. It's hard to charge extra for "enhanced voice" in a dozen different dimensions. They should call on governments to scrap minute-based laws and reporting requirements, and rejig their own internal systems to a model that makes more sense.

Much.

the

same

argument...

.... applies to counting individual messages/SMS as well. It's a meaningless quantum that doesn't align with how people use IMs / DMs / group chats and other similar modalities. It's like counting or charging for documents by the pixel. Threads, sessions or conversations are often more natural units, albeit harder to measure.

Cost per bit

"5G costs less per bit than 4G". "Traffic levels increase faster than revenues!".

Cost-per-bit is an often-used but largely meaningless metric, which drives poor decision-making and incentives, especially in the 5G era of multiple use-cases - and essentially infinite ways to calculate the numbers.

Different bits have very different associated costs. A broad average is very unhelpful for investment decisions. The cost of a “mobile” bit (for an outdoor user in motion, handing off from cell to cell) is very different to an FWA bit delivered to a house’s external fixed antenna, or a wholesale bit used by an MVNO.

Costs can vary massively by spectrum band, to a far greater degree than technology generation - with the cost of the spectrum itself a major component. Convergence and virtualisation means that the same costs (eg core and transport networks) can apply to both fixed and mobile broadband, and 4G/5G/other wireless technologies. Uplink and downlink bits also have different costs - which perhaps should include the cost of the phone and power it uses, not just the network.

The arrival of network slicing (and URLLC) will mean “cost per bit” is an ever-worse metric, as different slices will inherently be more or less "expensive" to create and operate. Same thing with local break-out, delivery of content from a nearby edge-server or numerous other wrinkles.

But in many ways, the "cost" part of cost/bit is perhaps the easiest to analyse, despite the accounting variabilities. Given enough bean-counters and some smarts in the network core/OSS, it would be possible to create some decent numbers, at least theoretically.

But the bigger problem is the volume of bits. This is not an independent variable, which flexes up and down just based on user demand and consumption. Faster networks with more instantaneous "headroom" actually create many more bits, as adaptive codecs and other application intelligence mean that traffic expands to fill the space available. And pricing strategy can basically dial up or down the number of bits customers use, with minimal impact on costs.

A video application might automatically increase the frame rate, or upgrade from SD to HD, with no user intervention - and very little extra "value". There might be 10x more bits transferred for the same costs (especially if delivered from a local CDN). Application developers might use tools to predict available bandwidth, and change the behaviour of their apps dynamically.
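A toy calculation shows how this elasticity wrecks the metric (all numbers invented, purely to make the point):

```python
# Same network, same cost - but adaptive applications inflate the bit count.
# Figures are invented, purely to show the elasticity.
network_cost = 1_000_000        # annual cost of a cell-site cluster (unchanged)
bits_sd = 100e15                # traffic if apps stream standard definition
bits_hd = 10 * bits_sd          # apps auto-upgrade to HD: 10x the bits, same user value

print(network_cost / bits_sd)   # "cost per bit"...
print(network_cost / bits_hd)   # ...magically "improves" 10x with no real change
```

The denominator moved by an order of magnitude while nothing about the network, its cost or the value delivered changed - which is exactly why the ratio tells you so little.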

So - if averaged costs are incalculable, and bit-volume is hugely elastic, then cost/bit is meaningless. Ironically, "cost per minute of use" might actually be more relevant here than it is for voice calls. At the very least, cost per bit needs separate calculations for MBB / FWA / URLLC, and by local/national network scale.

(By a similar argument, "energy consumed per bit" is pretty useless too).

Spectrum prices for mobile use

The mobile industry has evolved around several generations of technology, typically provided by MNOs to consumers. Spectrum has typically been auctioned for exclusive use on a national / regional basis, in fixed-size chunks perhaps 5/10/20MHz wide, with licenses often specifying rules on population coverage.

For this reason, it's not surprising that a very common metric is "$ per MHz / Pop" - the cost per megahertz, per addressable population in a given area.

Up to a point, this has been pretty reasonable, given that the main use of 2G, 3G and even 4G has been for broad, wide-area coverage for consumers' phones and sometimes homes. It has been useful for investors, telcos, regulators and others to compare the outcomes of auctions.

But for 5G and beyond (actually the 5G era, rather than 5G specifically), this metric is becoming ever less-useful. There are three problems here:

  • Growing focus on smaller areas of licenses: county-sized in CBRS in the US, and site-specific in Germany, UK and Japan for instance, especially for enterprise sites and property developments. This makes comparisons much harder, especially if areas are unclear.
  • Focus of 5G and private 4G on non-consumer applications and uses. Unless the idea of "population" is expanded to include robots, cars, cows and IoT gadgets, the "pop" part of the metric clearly doesn't work. As the resident population of a port or offshore windfarm zone is zero, then a local spectrum license would effectively have an infinite $ / MHz / Pop.
  • Spectrum licenses are increasingly being awarded with extra conditions such as coverage of roads, land-area - or mandates to offer leases or MVNO access. Again, these are not population-driven considerations.

Over the next decade we will see much greater use of mobile spectrum-sharing, new models of pooled ("club") spectrum access, dynamic and database-driven access, indoor-only licenses, secondary-use licenses and leases, and much more.

Taken together, these issues are increasingly rendering $/MHz/Pop a legacy irrelevance in many cases.
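The failure mode is easy to show in a few lines (licence prices and populations below are hypothetical examples, not real auction results):

```python
# $/MHz/Pop works for national consumer licences, but breaks for local ones.
# Licence prices and populations are hypothetical examples.
def dollars_per_mhz_pop(price_usd, bandwidth_mhz, population):
    if population == 0:
        return float("inf")  # e.g. a port or offshore windfarm with no residents
    return price_usd / bandwidth_mhz / population

print(dollars_per_mhz_pop(1_400_000_000, 100, 67_000_000))  # national licence: ~$0.21
print(dollars_per_mhz_pop(50_000, 100, 0))                  # private site licence: inf
```

A metric that returns infinity for a perfectly sensible industrial licence is not a metric - it's a relic of the consumer-smartphone era baked into a formula.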

ARPU

"Average Revenue Per User" is a longstanding metric used in various parts of telecoms, but especially by MNOs for measuring their success in selling consumers higher-end packages and subscriptions. It has long come under scrutiny for its failings, and various alternatives such as AMPU (M for margin) have emerged, as well as ways to carve out dilutive "user" groups such as low-cost M2M connections. There have also been attempts to distinguish "user" from "SIM", as some people have multiple SIMs, while other SIMs are shared.

At various points in the past it used to "hide" effective loan repayments for subsidised handsets provided "free" in the contract, although that has become less of an issue with newer accounting rules. It also faces complexity in dealing with allocating revenues in converged fixed/mobile plans, family plans, MVNO wholesale contracts and so on.

A similar issue to "cost per bit" is likely to happen to ARPU in the 5G era. Unless revenues and user numbers are broken out more finely, the overall figure is going to be a meaningless amalgam of ordinary post/prepaid smartphone contracts, fixed wireless access, premium "slice" customers and a wide variety of new wholesale deals.

The other issue is that ARPU further locks telcos into the mentality of the "monthly subscription" model. While fixed monthly subs, or "pay as you go top-up" models still dominate in wireless, others are important too, especially in the IoT world. Some devices are sold with connectivity included upfront.

Enterprises buying private cellular networks specifically want to avoid per-month or per-GB "plans" - it's one of the reasons they are looking to create their own dedicated infrastructure. MNOs may need to think in terms of annual fees, systems integration and outsourcing deals, "devices under management" and all sorts of other business models. The same is true if they want to sell "slices" or other blended capabilities - perhaps geared to SLAs or business outcomes.

Lastly - what is a "user" in future? An individual human with a subscription? A family? A home? A group? A device?

ARPU is another metric overdue for obsolescence.

CO2 "enablement" savings

I posted last week about the growing trend of companies and organisations to cite claims that a technology (often 5G or perhaps IoT in general) allows users to "save X tons of CO2 emissions".

You know the sort of thing - "Using augmented reality conferencing on your 5G phone for a meeting avoids the need for a flight & saves 2.3 tons of CO2" or whatever. Even leaving aside the thorny issue of Jevons Paradox - by which efficiency tends to expand usage rather than replace it - there's a big problem here:

Double-counting.

There's no attempt at allocating this notional CO2 "saving" between the device(s), the network(s), the app, the cloud platform, the OS & 100 other elements. There's no attempt such as "we estimate that 15% of this is attributable to 5G for x, y, z reasons".

Everyone takes 100% credit. And then tries to imply it offsets their own internal CO2 use.

"Yes, 5G needs more energy to run the network. But it's lower CO2 per bit, and for every ton we generate, we enable 2 tons in savings in the wider economy".

Using that logic, the greenest industry on the planet is industrial sand production, as it's the underlying basis of every silicon chip in every technological solution for climate change.
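The double-counting is easy to see as a simple sum (all figures invented for illustration):

```python
# Toy illustration of "enablement" double-counting. All figures invented.
actual_saving = 2.3  # tons of CO2 saved by one avoided flight

# Each element of the value chain claims 100% credit for the same saving,
# with no attempt to allocate shares between them:
claimants = ["5G network", "smartphone", "AR app", "cloud platform", "device OS"]
total_claimed = sum(actual_saving for _ in claimants)

print(round(total_claimed, 1))  # 11.5 tons "enabled" - five times the real saving
```

Any allocation scheme ("we estimate 15% of this is attributable to 5G") would at least force the claimed shares to sum to 100% rather than 500%.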

There's some benefit from CO2 enablement calculations, for sure - and there's more work going into reasonable ways to allocate savings (look in the comments for the post I link to above), but readers should be super-aware of the limitations of "tons of CO2" as a metric in this context.

So what's the answer?

It's fairly easy to poke holes in things. It's harder to find a better solution. Having maintained spreadsheets of company and market performance and trends myself, I know that analysis is often held hostage by what data is readily available. Telcos report minutes-of-use and ARPU, so that's what everyone else uses as a basis. Governments may demand that reporting, or frame rules in those terms (for instance, wholesale voice termination rates have "per minute" caps in some countries).

It's very hard to escape from the inertia of a long and familiar dataset. Nobody wants to recreate their tables and try to work out historic comparables. There is huge path dependence at play - small decisions years ago, which have been entrenched in practices in perpetuity, even though the original rationale has long since gone. (You may have noticed me mention path dependence a few times recently. It's a bit of a focus of mine at the moment....)

But there's a circularity here. Certain metrics get entrenched and nobody ever questions them. They then get rehashed by governments and policymakers as the basis for new regulations or measures of market success. Investors and competition authorities use them. People ignore the footnotes and asterisks warning of limitations.

The first thing people should do is question the definitions of familiar public or private metrics. What do they really mean? For a ratio, are the assumptions (and definitions) for both denominator and numerator still meaningful? Is there some form of allocation process involved? Are there averages which amalgamate lots of dissimilar categories?

I'd certainly recommend Tim Harford's book "How to Make the World Add Up" (link) as a good backgrounder to questioning how stats are generated and sometimes misused.

But the main thing I'd suggest is asking whether metrics can either hide important nuance - or can set up flawed incentives for management.

There's a long history of poor metrics having unintended consequences. For example, it would be awful (but not inconceivable) to raise ARPUs by cancelling the accounts of low-end users. Or perhaps an IoT-focused vertical service provider gets punished by the markets for "overpaying" for spectrum in an area populated by solar panels rather than people.

Stop and question the numbers. See who uses them / expects them and persuade them to change as well. Point out the fallacies and flawed incentives to policymakers.

If you have any more examples of bad numbers, feel free to add them in the comments. I forecast there will be 27.523 of them, by the end of the year.

The author is an industry analyst and strategy advisor for telecoms companies, governments, investors and enterprises. He often "stress-tests" qualitative and quantitative predictions and views of technology markets. Please get in touch if this type of viewpoint and analysis interests you - and also please follow @disruptivedean on Twitter.

Monday, January 16, 2017

My 2017 Plans: Research, Events & Client Focus

Excuse the narcissism: This blog post is about me. 

It's intended to clarify my current research focus, the ways I engage with clients, events I get involved in, and the other people and companies I work with.

Most of my work falls into 3 broad and overlapping areas:
  • Network Technology, Policy & Strategy: Evolution of telecom networks & operator business models. Fixed & mobile infrastructure, 5G, WiFi, LPWAN, NFV/SDN, spectrum policy, net neutrality, SD-WAN, MEC, MVNOs, eSIM, policy, mobile broadband, OSS/BSS and so on. (I don't do much on photonics & transport, or detailed product analysis or economic modelling though).
  • Communications Applications & Services: How humans & machines communicate & what that enables. Voice, telephony, video comms, messaging, WebRTC, cPaaS, VoLTE, UC/UCaaS, role of telcos, contextual communications, social communications, VoIP apps, bots & speech-tech, wholesale, numbering, collaboration etc.
  • TelcoFuturism: The intersection points of the telecoms / enterprise comms industry, with other orthogonal trends such as AI, blockchain, AR/VR, robotics, drones, IoT, self-driving vehicles, quantum technology, technological (un)employment, future government, human enhancement, geopolitics, advanced healthcare and demography.
In terms of client engagement and business model, I work as an analyst, consultant and futurist. This means several areas of activity:

  • Written reports, sometimes under my own Disruptive Analysis brand (eg recently on eSIM - link - and soon on Blockchain + Telecoms & maybe WebRTC/cPaaS once again). 
  • But in much greater volume, my report output goes through STL Partners / Telco 2.0, for which I act as Associate Director of the "Future of the Network" research stream (link). Recent FoN reports have covered 5G strategy, eSIM, LPWAN, Net Neutrality, SDN/NFV, SD-WAN. I'll be writing for STL on those topics plus also spectrum policy, VoLTE, satellite communications, vendor positioning & value-chain, network slicing & edge-computing in 2017. (If you're interested in subscribing to the Future of the Network programme, please contact me at information AT disruptive-analysis DOT com, or speak to an existing STL Partners sales contact).
  • Internal advisory projects and workshops for operators, vendors, regulators and investors. I participate in various private consulting assignments, under-NDA roundtables and presentations, or advisory workshops - sometimes for C-level executives and sometimes for departmental/product/strategy teams. Much of my work is on assisting companies to understand future market context & opportunities (especially across multiple silos), answer complex questions about value-chain & competitive dynamics, or "stress-testing" of existing plans and world-views. I'm happy to provide proposals & references on request.
  • Keynote speaker at public and private events. This spans everything from technology-specific issues ("what will 5G look like?", "what are the uses of blockchain in telecoms?") through to broader futurism ("what will the telecom industry look like in 2030, and what can we do about it?"). Get in touch if you want me to speak at something - fees/expenses apply for events that are company-specific, or require significant travel.
  • Providing input into M&A due-diligence, regulatory & policymaking processes or investment theses. I'm no longer a certified financial analyst, though.
  • Advisory boards and retainer relationships. I'm happy to work with clients on an ongoing basis, as long as it does not compromise my independence (eg ability to criticise). 
  • Writing white papers or custom reports for vendors and operators. I only write documents where my opinion is already aligned with my client's, or where they are looking for a contrarian or "provocative" piece. I retain editorial control. Given my trenchant and well-publicised views on many technology areas, there's no point asking me to write a glowing testimonial for stuff I criticise regularly. (Also, I don't do product comparisons or endorsements).
  • Some of my work is conducted in partnership with other independent consultants and analysts. I've worked with Martin Geddes (link), Alan Quayle (link) and Chris Lewis (link) before, and am open to other collaborations if they are mutually beneficial.
  • Interviews and other contributions for press and broadcast media. As well as industry specialists like TelecomTV, I've also been quoted by BBC, Economist, FT & many others.
I attend and speak at/moderate a lot of events - probably around 30-40 a year. These are mostly in the UK, rest of Europe and US, although I intend to spend more time at conferences in Asia and the rest of the world. My favourite events are those with 100-300 people, run by small-to-midsize event companies, and not over-controlled by sponsors paying for speaking slots or trying to censor the agenda. Any credible event has dissenting voices and debate. 

Conferences I visit or speak at are mostly a mix of public industry events (eg TADSummit, Great Telco Debate, Terrapinn, Layer123, WiFiNow, Cambridge Wireless & Upperside are among the best), company-specific forums run by vendors (eg Comptel Nexterday, Metaswitch Forum, GenBand Perspectives) and regulatory/policy workshops. Some Meetups are good as well - in particular London Futurists.

I go to a few midsize trade shows (eg Enterprise Connect, TMForum) but not the ones with tens of thousands of people (CES, MWC, CeBIT etc). The latter I find a complete waste of time, as I'm spread too thinly to be able to focus on particular themes. In the past I've had 400+ briefing invitations for MWC, and it takes weeks just to process emails and say "no thanks" without being excessively rude. 

My current roster of upcoming events (some speaking, some just attending) includes:
Please get in touch if you're looking for a speaker, moderator, or just an attendee prepared to ask difficult questions & post a bunch of commentary on Twitter during the event. Also, let me know if you're an AR professional running an analyst summit - I try to get to as many as I can.

In the past, I've also co-run small workshop-style events with Martin Geddes (eg on "Future of Voice") and that's something I may well return to in 2017.

In terms of publishing short-form pieces, this blog will continue to be my main vehicle. I also republish most longer pieces on my LinkedIn page (link), which often gets more comments and engagement - and also I put some on Medium (link), which doesn't. Occasionally people ask to syndicate my posts - it depends on the site and whether it gets a different audience to mine. I don't often write guest posts for other people, except occasionally for consulting / retainer clients - I'm quite a bit more costly than freelance writers.

I put up quite a lot of my public conference presentations on SlideShare (link), although I intend to update it more frequently. There are also quite a few of my recent presentations on YouTube (link) & a few on Vimeo (link). I'm going to be doing - and collating - more video content in 2017.

Otherwise, for 2017 I'm hopefully going to carry on my usual broad & pithy coverage & commentary on the telecoms industry, plus spend a rather larger fraction of my time on more general futurism and tech-policy topics. If you don't know already, I'm @disruptivedean on Twitter, and can be reached by email at information at disruptive-analysis dot com.


Thursday, April 02, 2015

Report update: WebRTC market expanding and maturing, but in unexpected ways

I've just published a major update to the Disruptive Analysis WebRTC Market Status & Forecast report, which originally came out in September 2014. The update revises the key forecasts, and considers the shifts in industry structure and use-case that I've been seeing & talking about recently.

The headline numbers: 6.7bn devices are forecast to support WebRTC (on a broadly-defined basis) by the end of 2019. At that point, there are expected to be 2bn active consumer users, and 900m business users of WebRTC (with considerable overlap).


But digging beyond the updated market forecasts, it's important to recognise some key underlying trends:

  • The definition of "WebRTC" is becoming blurry. ORTC, app-embedded WebRTC, plug-ins, 3rd-party PaaS & SDKs etc. are changing the landscape. However, only the purists really care - others just exploit the "democratisation" of creating comms apps and capabilities more easily
  • In numerical terms, mobile implementations of WebRTC are starting to out-accelerate desktop browser-based ones, outside the enterprise. This favours either sophisticated developers able to build apps around the various WebRTC client frameworks, or those using 3rd-party PaaS solutions
  • Many "big names" have launched WebRTC products and services in recent months, ranging from Cisco & Avaya, to AT&T, Tata and Facebook. This is a strong endorsement of the technology - and often integrated with a parallel shift to cloud-based services.
  • Developer mindshare is increasing - helped by hackathons and presence at vertical events - but many in the web/app world remain unaware of WebRTC's potential. Enterprise comms professionals seem much more aware of it.
  • While contact-centres are still the major WebRTC hotspot in enterprise, there is growing interest in mobile customer-service apps, and video-integrated collaboration tools. This overlaps the trend towards cloud-based apps, as well as new styles of corporate messaging / social-timeline approaches to communication.
  • This is driving the "disunification" of business comms, as I discussed about 3 months ago. WebRTC-based DUC will grow much faster than WebRTC-based UC, although that has large potential too. There will be >300m business DUC users by 2019.
  • The market for vendors selling WebRTC gateways (telco/enterprise) or commercial WebRTC platform-as-a-service is comparatively slow-moving, but starting to pick up steam. The last 6 months have seen considerable advances in uptake of "interoperable" use-cases.
  • However, developers often have a variety of open-source alternatives - and there is a growing suspicion that PaaS indirectly competes with vendor-driven products. Indeed, some vendors now have their own PaaS platform (Genband Kandy, Digium Respoke, Acision Forge etc).
  • There are now more than 10 telecom operators with some sort of commercial implementation involving WebRTC, and several more have well-advanced plans and prototypes that Disruptive Analysis is aware of. Some have multiple initiatives.
  • For major consumer web services, WebRTC is creeping in, often with limited tests and deployments for niche user groups - such as Facebook's video-messaging for Chromebook users. It's still unclear if WhatsApp's long-awaited voice service is based on WebRTC or not.
In other words, there is a lot of noise and action - and indeed growing usage - but comparatively little hard cash at the moment. However, that is starting to change - CafeX's recent funding round is a good indicator, while discussions with vendors & PaaS players have shown growing awareness of better marketing and partnerships. This is also not unusual - there was a considerable lag between people using the web in its early days, and anyone (beyond ISPs) making real money from services or application infrastructure.

Ultimately, WebRTC is a technology which lowers the bar for both true innovators, and others doing today's services more easily/cheaply. In many cases, WebRTC adds value to something else - whether it's extending the reach of a conferencing system, or helping reduce churn by better customer-service. 

Overall, Disruptive Analysis remains bullish about the technology, both in the short-to-medium term, and in the long run as it converges with cloud, contextual communications and even aspects of IoT. WebRTC remains a fundamentally disruptive technology, and its ramifications are only at the first stage of being realised.

The new update is sold along with the full "reference report" from September, plus a one-hour briefing call and additional update later in the year. Contents and pricing/ordering details are here

I'll also be speaking or moderating at various upcoming WebRTC-related events.
Lastly, if you have any questions, or represent a WebRTC company or user, interested in setting up a briefing with me, please contact me via information AT disruptive-analysis DOT com

Tuesday, February 03, 2015

Inevitabilities, adjacencies and anti-forecasting



I get annoyed by “unintended consequences”. Too often, they are only “unintended” because they were not predicted. Similarly, many forecasts fail to become real, as they overlook predictable problems – or perhaps, distant external factors that cut the ground from under them. Another category of “predictable failure” comes from wishful-thinking “visions” that ignore other, unstoppable trends that make them impossible.

When it comes to analysing the future direction of technology markets – be it my normal stamping ground of telecoms, my broader futurism work, or even politics – I am constantly aware that companies, industry-bodies and self-appointed visionaries fail to look outside their narrow silos. Consistently, and near-universally.

Now, obviously, nobody has a perfect crystal ball about what will happen. But it is often possible to determine what won’t happen, or at least has a vanishingly-small probability. And it is also possible to identify other factors which will almost-certainly happen in the same time frame – and see their possible inter-dependence.

That’s all quite abstract, so a current real-world example: the use of online encryption, and the recent reaction from telcos and governments.

It’s been pretty clear to me for years that once Moore’s Law meant that data could routinely be encrypted with minimal cost (monetary, power, latency, inconvenience etc), then it would be. It was locked-in. Pretty much inevitable. All other things being equal, people like increased privacy and security – or at least, the perception of it. And even if “people” didn’t want it, then it seemed likely that a lot of companies would, on their behalf.

Encryption – like any security measure – can always be forgone, in specific circumstances, where it makes sense and all actors make rational decisions. It's safer to make non-encryption the exception, not vice-versa.

The exact timelines, technologies and architectures were less clear (to me at least). And it certainly wasn't obvious that the largest catalyst would be fears raised by a whistle-blower about state surveillance – although the growing use of VPNs, Tor and other techniques in repressive regimes was pointing in that direction anyway. Continued examples of hacking of apps and servers, data leaks, credit card databases stolen and so forth also made "more encryption" a safe bet as a generic forecast. Invasive actions by telcos inspecting or modifying data traffic have also been a contributor – albeit perhaps less than I expected in pre-Snowden days.

Now given that "more encryption" was what I'd call an "inevitability", you might have thought that companies impacted by it would have started preparing long ago. Instead, the last 12 months have seen governments and telcos panicked by the rise of HTTPS and SPDY, as well as proprietary encryption techniques used in apps, peer-to-peer technologies and VPNs. (I suspect that many security services had been quietly predicting this, but politicians seem to be treating it as a sudden surprise too).

Now I’m not going to make a call on how to “deal with” encryption or not here, whether it’s a good or bad thing in certain circumstances – I just want to point out that the situation has been predictable. Yet nobody seems to have run a filter of “hmm, what happens when it all goes dark?” over their existing products, services and practices, over the last five years or so.

The same concept of “inevitability” has also been the curse of many other technology domains. It has been inevitable since at least 2005 that, sooner or later, somebody would realise that sending 160 characters of text was pretty simple and cheap, and not worth 1-10c per message. Yet we had to go through years of vendors saying “mobile data can be worth $10000/MB, look at SMS!” without many people considering the inevitable conclusion that it wasn’t really “worth” that, when decent competition finally emerged. Instead, industry groupthink tried to pitch “value-based pricing”, when in reality it was “grudging-acceptance pricing”.

The fact that the mobile industry has probably pocketed an extra trillion dollars in profits from over-priced SMS over the last decade is a fortunate accident. The emergence of WhatsApp, LINE & WeChat was a predictable – nay, inevitable – eventuality. In many ways, it was overdue. It was only some fairly clunky UIs and low penetration that stopped it happening during the Symbian/J2ME era, as there were plenty of early pre-WhatsApp attempts. The signs were there.

Yet once again, that inevitability was ignored. Rather than making sensible attempts to defend SMS by adding value, the golden goose was taken for granted. Rather than reinvesting 5%, 10% or 20% in service innovation, SMS revenues were classified as "data" in operators' financial reports and used to help justify 3G/4G licences and investments – despite the inevitability that faster networks would make the risks even greater.

Telephony is next up. We already know the inevitabilities there – and although some vendors and software developers are (finally) trying to make phone calls more useful and “friendly”, that message hasn’t percolated through to many in charge of investment and service-innovation at network operators. Instead, they are focused on recreating Telephony 1.0 and putting the bulk of investment into things like VoLTE.

But telephony – and messaging – also have to counter effects other than just the “inevitability” of free/low-cost VoIP and IM. They also have to factor in the power of adjacency – things going on in the “silo next door” – or perhaps the silo down the street or over the horizon.

For telephony, adjacencies come from various use-cases for WebRTC-type contextual communications, as well as concepts like hypervoice. But they also come from changes in human communication more broadly – the replacement of some “voice” tasks with apps (eg booking taxis), or perhaps richer forms of interaction like augmented/virtual reality.

In the networking and broadband space, some adjacencies are starting to become visible – such as competition from new platforms like satellites, drones and balloons, as well as direct peer-to-peer communications between devices. Others are less obvious, such as the slow move of governments and large non-telecom companies into the domain of network ownership, as well as cloud services. Only today, the bank Santander announced that it would offer online storage to businesses. Expect automotive, utility, healthcare and other providers to take prominent roles in IoT development – potentially including network ownership.

As another example, there's a lot going on in the arcane world of SIM cards. Everyone remembers a couple of weeks of excitement around the Apple SIM last year. But that's just the tip of the iceberg. Have you heard about the liberalisation of MNC codes, and what that might imply in future, for example? What about downloadable IMSIs? Blinded by the acronyms and obscurity? Well, that's where some of the disruptions are potentially coming from. Forewarned is forearmed.

The trick here is to think not just in terms of projections and forecasts – but in terms of intersections. What other lines are coming up to meet your beautiful curves stretching out to the future? What happens to your assumptions when those lines cross?

This gets much harder when one tries to apply the same principles to more general forecasting, or futurology. It’s easy to get caught out by automobiles when you try to predict the evolution of the horse-and-cart. The “paperless office” failed to take account of cheap printers, better online document-publishing – and human psychology and behaviour. Many, many predictors of “convergence” have completely missed other trends which actually favour “divergence” and fragmentation.

There’s a lot of predictions about AI around at the moment – both utopian and (especially) dystopian. But few factor in other parallel trends, such as enhancement of human cognition, whether by software, pharmaceutical means, or even genetics. A whole host of societal trends also take on a new complexion, if one factors in increasing longevity, biomedical advances, robotics, nanotech, 3D printing and so on. What happens to our security (and encryption) when the smart lightbulb reads your email on your screen over your shoulder, or listens to your conversation, before you even get to tunnel it through a VPN?

The story here is to look beyond the upbeat, positive predictions and hockey-stick curves. They’re seductive – but you also need to have a Devil’s advocate view, trying to pick holes in the narrative. Ideally, the ideas are not just “robust” to criticism, but as per Nassim Taleb, they are “antifragile” and strengthened by the challenge.

I haven’t mentioned politics much in this post, but that’s an important domain for this type of analysis too. Many of the more populist agendas fall prey to the “unintended consequences” flaw. For example, imagine an anti-capitalist agenda that inadvertently breaks Moore’s Law, or the investment case for new factories to make chips, smartphones or PCs. Or perhaps, penalises entrepreneurs who make huge exits when they sell startups. At one level, it might be seen as preventing “planned obsolescence”, or reducing inequality. But if a knock-on effect is a slowing in technology needed for climate models, environmental sensor networks, design and development of new clean energy technologies – then it will have been a Pyrrhic victory, with severely negative consequences.

Visionaries tend to think in terms of clean, idealistic utopias. Or of one over-arching metaphor like a “personal AI”, or a centrally-determined allocation & orchestration of processing or networking resources in a perfect cloud/NFV/SDN telecoms industry. They forget about legacy technologies, second-order effects, human behaviour, regulatory/political concerns and practical issues getting from “here and now” to the sunny uplands of the future. If the route goes via predictably-dangerous territory in between, the idealists have a duty to scrutinise it in advance.

So in reality, the future is messy. And analysis of inevitabilities, and the practice of "anti-forecasting" (what won't happen), need to form part of any visionary's or forecaster's arsenal.

The quote that the future is “already here, but unevenly distributed” comes from William Gibson, whose awkward, heterogeneous, sometimes-jarring worlds of the near-future are much more realistic than the beneficent, techno-utopian, AI-assisted Culture envisioned by my other favourite author, the late Iain M. Banks.

Beware of “elegance” in technology (or socio-political) predictions – it’s almost inevitably wrong. Wishful-thinking is a useful thought experiment. But it’s not a “vision” – it’s just a screenplay or fictional plot, and usually a rigid one at that. Hybrids, overlaps, complexity, gaps, inefficiencies, blurred definitions, political and human realities, flexibility - those are the signs of a realistic forecast or prediction.

If you are interested in due diligence, Devil’s Advocacy, or an open workshop/brainstorm on possibilities, inevitabilities or anti-forecasting, please get in touch, at information AT disruptive-analysis DOT com.