Speaking Engagements & Private Workshops - Get Dean Bubley to present or chair your event

Need an experienced, provocative & influential telecoms keynote speaker, moderator/chair or workshop facilitator?
To see recent presentations, and discuss Dean Bubley's appearance at a specific event, click here


Sunday, October 08, 2023

RCS messaging: still a zombie, but now wearing a suit

This post originally appeared on October 4 on my LinkedIn feed, which is now my main platform for both short posts and longer-form articles. It can be found here, along with the comment stream. Please follow or connect with me on LinkedIn to receive regular updates (about 1-3 per week).

Yesterday I followed the Mobile Ecosystem Forum stream of its #RCSWorld conference, on #RCS #messaging, especially business messages. I thought it was time to get an update.
 
As regular followers know, I’m a long-time critic of RCS. I saw it announced in 2008, wrote reports & advised telco clients about its many problems in 2010-2013, called it a zombie tech in 2015 (“28 quarters later”) and have been sniping at it ever since, including at Google’s acquisition of Jibe and its attempt to turn it into Android’s equivalent of Apple #iMessage.
 
Some flaws have been addressed (it finally uses E2E encryption), while Google’s tightening control of its features has maybe fixed its “design by committee” paralysis and historic fragmentation. Google is now hosting the whole application for many MNOs, rather than telcos relying on (and paying for) in-network IMS integration, but with an implicit threat of end-running them if they don’t support the services to customers.

There are about 1.2bn phones with RCS active - mostly Google #Android, but also about 200m in China. This has been driven by its adoption as the default messaging client on new phones, rather than by consumer download.

I didn't hear any stats on genuine active use - ie beyond just using it as a pseudo-#SMS/MMS app because it's the default. Numbers always seem to be MAUs (monthly actives) rather than more meaningful DAUs (daily actives). No anecdotes of teenagers who swapped from FB / WA / iMessage / WeChat / TikTok / whatever because RCS is cooler, with better emojis, birthday-greeting fireworks or cat-ear image filters.
 
To be fair, the conference name was misleading. Almost the entire event was about RCS Business Messaging (RBM) rather than personal or group messaging. It was about targeted marketing campaigns (that’s spam to most of us), customer interaction with so-called “brands”, multichannel whatnot, and blather about engagement and “digital” marketing.

Apparently A2P revenues for SMS are flattening, but the addition of "rich" interactive in-messaging customer experience functions will reignite growth. One operator in the audience asked why the same forecasts have been shown (and not come true) for the past 4-5 years. Apparently it's too complex for most developers.

So the big innovation is "basic RCS" with 160 characters: SMS with a brand logo, a verification tick and read receipts. It's aiming at the #CPaaS market to get more devs/marketers onto the first rung, hoping to catalyse fancier use-cases later.
 
IMO this is why Apple isn’t going to support it anytime soon, despite Google's cringey social media exhortations. The notion that RCS is a standard for P2P messaging is a smokescreen. It’s an ad & CRM platform, not an SMS replacement or a default way to chat with friends. It’s not going to be the messaging equivalent of USB-C chargers, forced on Apple by the European Commission.
 
In a nutshell, it’s still a zombie. But now it’s a zombie in a suit, spamming you with ads and "engagement" while it eats your brain.


 


Monday, November 29, 2021

Update: Recent Posts & Themes

(This article was initially posted on my LinkedIn newsletter. If you are not already signed up, please subscribe here)

I have a couple of other deep-dive themes cued up for articles in coming weeks, but I wanted to put out a quick newsletter update covering a few recent themes, posts and events that have been occupying me.


 

The last month has featured a lot of thinking, speaking & client engagements on private 5G, infrastructure-sharing and neutral-host business models, network slicing and capability/API exposure, Wi-Fi 6E & 6GHz, Open RAN and the interaction of cellular & other wireless technologies.

Some recent short-form posts that you may have missed:

  • Telecom operators (and their partners & regulators) should be giving as much consideration to *buying* APIs and capabilities as selling them - LINK
  • Thoughts on the Ericsson / Vonage acquisition - LINK
  • Should we be thinking more about "micro-churn" incidents, where subscribers temporarily switch between operators, using technologies such as eSIM? - LINK
  • Want me to speak at, or moderate your 2022 event? Or present at an internal workshop or offsite? - LINK
  • RCS messaging is still a purposeless zombie technology, continuing to eat brains after 13 years. Google's involvement hasn't changed much - LINK
  • The telecoms industry still hasn't gone beyond telephony, to think more broadly about "voice" services & applications - LINK

I've been to a couple of recent "verticals" events, about networking in ports and for railways. There's a lot of interest in private cellular - but also a huge amount of emphasis on Wi-Fi, including specialised versions with 60GHz or unique forms of QoS intended for industrial or trackside use.

I also presented on a webinar recently on behalf of iBwave, about the scope for Private 4G/5G networks for utilities and energy companies (LINK to view on-demand). Watch out for an upcoming eBook on the same topic. Another webinar, on the competition/convergence between Wi-Fi 6 and 5G, was for Spirent (LINK).


 

Scott and Iain at Telecoms.com invited me onto their weekly podcast for a (rather irreverent) chat about the current trends and news from the industry, over a couple of beers. We took aim at 5G, the Metaverse, Open RAN & a lot more. YouTube link embedded above!

In addition, I moderated a panel on Infrastructure Sharing for the 5G Techritory event. I'm not sure if an archived version will be put online, but keep a watch out for it here.

And on a personal note, I also took part in my first improv comedy performance. If you book me to speak at one of your events, I can't promise to wear the same shirt as in the picture, but I will certainly be happy to make things up on the spot, or deal ruthlessly with any hecklers!

#5G #WiFi #verticals #PrivateLTE #Private5G #mobile #telecoms #spectrum #voice #messaging #networkslicing #neutralhost #regulation

Wednesday, March 03, 2021

The Worst Metrics in Telecoms

 (This post was initially published as an article on my LinkedIn Newsletter - here - please see that version for comments and discussion)

GDP isn't a particularly good measure of the true health of a country's economy. Most economists and politicians know this.

This isn't a plea for non-financial measures such as "national happiness". It's a numerical issue. GDP is hard to measure, with definitions that vary widely by country. Important aspects of the modern world such as "free" online services and family-provided eldercare aren't really counted properly.

However, people won't abandon GDP, because they like comparable data with a long history. They can plot trends, curves, averages... and don't need to rebuild spreadsheets and models from the ground up for something new. Other metrics are linked to GDP - R&D intensity, NATO military spending commitments and so on - which would need to be re-based if a different measure were used. The accounting and political headaches would be huge.

A poor metric often has huge inertia and high switching costs.

Telecoms is no different from many other sub-sectors of the economy. There are many old-fashioned metrics that are really no longer fit for purpose - and even some new ones that are badly conceived. They often lead to poor regulatory decisions, poor optimisation and investment approaches by service providers, flawed incentives, and large tranches of self-congratulatory overhype.

Some of the worst telecoms metrics I see regularly include:

  • Voice traffic measured in minutes of use (or messages counted individually)
  • Cost per bit (or increasingly energy use per bit) for broadband
  • $ per MHz per POP (population) for radio spectrum auctions
  • ARPU
  • CO2 savings "enabled" by telecom services, especially 5G

That's not an exhaustive list by any means. But the point of this article is to make people think twice about commonplace numbers - and ideally think of meaningful metrics rather than easy or convenient ones.

The sections below give some quick thoughts on why these metrics either won't work in the future - or are simply terrible even now.

(As an aside, if you ever see numbers - especially forecasts - with too many digits and "spurious accuracy", that's an immediate red flag: "The Market for Widgets will be $27.123bn in 2027". It tells you that the source really doesn't understand numbers - and you really shouldn't trust, or base decisions on, anyone that mathematically inept.)

Minutes and messages

The reason we count phone calls in minutes (rather than, say, conversations or just a monthly access fee) is based on an historical accident. Original human switchboard operators were paid by the hour, so a time-based quantum made the most sense for billing users. And while many phone plans are now either flat-rate, or use per-second rates, many regulations are still framed in the language of "the minute". (Note: some long-distance calls were also based on length of cable used, so "per mile" as well as minute)

This is a ridiculous anachronism. We don't measure or price other audiovisual services this way. You don't pay per-minute for movies or TV, or value podcasts, music or audiobooks on a per-minute basis. Other non-telephony voice communications modes such as push-to-talk, social audio like ClubHouse, or requests to Alexa or Siri aren't time-based.

Ironically, shorter calls are often more valuable to people. There's a fundamental disconnect between price and value.

A one-size-fits-all metric for calls stops telcos and other providers from innovating around context, purpose and new models for voice services. It's hard to charge extra for "enhanced voice" in a dozen different dimensions. They should call on governments to scrap minute-based laws and reporting requirements, and rejig their own internal systems to a model that makes more sense.

Much.

the

same

argument...

.... applies to counting individual messages/SMS as well. It's a meaningless quantum that doesn't align with how people use IMs / DMs / group chats and other similar modalities. It's like counting or charging for documents by the pixel. Threads, sessions or conversations are often more natural units, albeit harder to measure.

Cost per bit

"5G costs less per bit than 4G". "Traffic levels increase faster than revenues!".

Cost-per-bit is an often-used but largely meaningless metric, which drives poor decision-making and incentives, especially in the 5G era of multiple use-cases - and essentially infinite ways to calculate the numbers.

Different bits have very different associated costs. A broad average is very unhelpful for investment decisions. The cost of a “mobile” bit (for an outdoor user in motion, handing off from cell to cell) is very different to an FWA bit delivered to a house’s external fixed antenna, or a wholesale bit used by an MVNO.

Costs can vary massively by spectrum band, to a far greater degree than technology generation - with the cost of the spectrum itself a major component. Convergence and virtualisation means that the same costs (eg core and transport networks) can apply to both fixed and mobile broadband, and 4G/5G/other wireless technologies. Uplink and downlink bits also have different costs - which perhaps should include the cost of the phone and power it uses, not just the network.

The arrival of network slicing (and URLLC) will mean “cost per bit” is an ever-worse metric, as different slices will inherently be more or less "expensive" to create and operate. Same thing with local break-out, delivery of content from a nearby edge-server or numerous other wrinkles.

But in many ways, the "cost" part of cost/bit is perhaps the easiest to analyse, despite the accounting variabilities. Given enough bean-counters and some smarts in the network core/OSS, it would at least theoretically be possible to create some decent numbers.

But the bigger problem is the volume of bits. This is not an independent variable that flexes up and down based purely on user demand and consumption. Faster networks with more instantaneous "headroom" actually create many more bits, as adaptive codecs and other application intelligence mean that traffic expands to fill the space available. And pricing strategy can basically dial the number of bits customers use up or down, with minimal impact on costs.

A video application might automatically increase the frame rate, or upgrade from SD to HD, with no user intervention - and very little extra "value". There might be 10x more bits transferred for the same costs (especially if delivered from a local CDN). Application developers might use tools to predict available bandwidth, and change the behaviour of their apps dynamically.

So - if averaged costs are incalculable, and bit-volume is hugely elastic, then cost/bit is meaningless. Ironically, "cost per minute of use" might actually be more relevant here than it is for voice calls. At the very least, cost per bit needs separate calculations for MBB / FWA / URLLC, and by local/national network scale.
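The elasticity point above can be sketched numerically. This is an illustrative toy with entirely hypothetical figures (the $1m cost pool and the bit volumes are made up), just to show how an adaptive application can move "cost per bit" by an order of magnitude while nothing real changes:

```python
# Illustrative sketch (hypothetical numbers): the same network cost base
# yields very different "cost per bit" depending on how elastic traffic is.

def cost_per_bit(total_cost: float, bits: float) -> float:
    """Naive averaged cost-per-bit metric."""
    return total_cost / bits

# Assume a fixed monthly cost pool of $1m for a cell cluster.
COST = 1_000_000.0

# Scenario A: baseline mobile broadband traffic.
mbb_bits = 500e12

# Scenario B: adaptive video codecs detect headroom and push 10x the bits,
# at essentially the same network cost and little extra user value.
elastic_bits = 5000e12

a = cost_per_bit(COST, mbb_bits)
b = cost_per_bit(COST, elastic_bits)

# "Cost per bit" falls 10x, yet the network's economics and the users'
# experience have barely changed - the denominator did all the work.
print(f"Scenario A: ${a:.2e}/bit, Scenario B: ${b:.2e}/bit, ratio {a/b:.0f}x")
```

The denominator, not the cost base, drives the headline number - which is exactly why the averaged metric misleads.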

(By a similar argument, "energy consumed per bit" is pretty useless too).

Spectrum prices for mobile use

The mobile industry has evolved around several generations of technology, typically provided by MNOs to consumers. Spectrum has typically been auctioned for exclusive use on a national/regional basis, in fixed-size chunks perhaps 5/10/20MHz wide, with licenses often specifying rules on population coverage.

For this reason, it's not surprising that a very common metric is "$ per MHz / Pop" - the cost per megahertz, per addressable population in a given area.

Up to a point, this has been pretty reasonable, given that the main use of 2G, 3G and even 4G has been for broad, wide-area coverage for consumers' phones and sometimes homes. It has been useful for investors, telcos, regulators and others to compare the outcomes of auctions.

But for 5G and beyond (actually the 5G era, rather than 5G specifically), this metric is becoming ever less-useful. There are three problems here:

  • Growing focus on smaller license areas: county-sized in CBRS in the US, and site-specific in Germany, the UK and Japan, for instance - especially for enterprise sites and property developments. This makes comparisons much harder, especially if areas are unclear.
  • Focus of 5G and private 4G on non-consumer applications and uses. Unless the idea of "population" is expanded to include robots, cars, cows and IoT gadgets, the "pop" part of the metric clearly doesn't work. As the resident population of a port or offshore windfarm zone is zero, then a local spectrum license would effectively have an infinite $ / MHz / Pop.
  • Spectrum licenses are increasingly being awarded with extra conditions such as coverage of roads, land-area - or mandates to offer leases or MVNO access. Again, these are not population-driven considerations.
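The second bullet's "infinite $/MHz/Pop" problem is easy to see in a toy calculation. The figures here are hypothetical (a $500m national licence vs. a $100k port licence), purely to show the metric degenerating when population is zero:

```python
# Illustrative sketch (hypothetical figures): $/MHz/Pop works for a national
# consumer licence but breaks down for a local industrial one.

def usd_per_mhz_pop(price_usd: float, bandwidth_mhz: float, population: float) -> float:
    # Returns infinity when the covered population is zero,
    # e.g. a port or offshore windfarm licence with no residents.
    if population == 0:
        return float("inf")
    return price_usd / (bandwidth_mhz * population)

# National 100MHz licence, $500m, covering 60m people.
national = usd_per_mhz_pop(500e6, 100, 60e6)

# Local 100MHz licence for a port: genuine economic value, zero residents.
port = usd_per_mhz_pop(100_000, 100, 0)

print(f"National: ${national:.4f}/MHz/pop; port licence: {port} /MHz/pop")
```

A metric that returns infinity for a perfectly sensible transaction is telling you its denominator no longer describes the market.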

Over the next decade we will see much greater use of mobile spectrum-sharing, new models of pooled ("club") spectrum access, dynamic and database-driven access, indoor-only licenses, secondary-use licenses and leases, and much more.

Taken together, these issues are increasingly rendering $/MHz/Pop a legacy irrelevance in many cases.

ARPU

"Average Revenue Per User" is a longstanding metric used in various parts of telecoms, but especially by MNOs for measuring their success in selling consumers higher-end packages and subscriptions. It has long come under scrutiny for its failings, and various alternatives such as AMPU (M for margin) have emerged, as well as ways to carve out dilutive "user" groups such as low-cost M2M connections. There have also been attempts to distinguish "user" from "SIM", as some people have multiple SIMs, while other SIMs are shared.

At various points in the past it used to "hide" effective loan repayments for subsidised handsets provided "free" in the contract, although that has become less of an issue with newer accounting rules. It also faces complexity in dealing with allocating revenues in converged fixed/mobile plans, family plans, MVNO wholesale contracts and so on.

A similar issue to "cost per bit" is likely to happen to ARPU in the 5G era. Unless revenues and user numbers are broken out more finely, the overall figure is going to be a meaningless amalgam of ordinary post/prepaid smartphone contracts, fixed wireless access, premium "slice" customers and a wide variety of new wholesale deals.
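The "meaningless amalgam" effect is simple arithmetic. A sketch with invented segment figures (the user counts and revenues below are hypothetical, not drawn from any operator's accounts) shows how one blended number hides three very different businesses:

```python
# Illustrative sketch (hypothetical segment figures): a single blended ARPU
# hides wildly different per-segment economics.

segments = {
    # segment name: (users, monthly revenue in $)
    "postpaid smartphone":  (10_000_000, 300_000_000),
    "fixed wireless access": (1_000_000,  50_000_000),
    "IoT / M2M":            (20_000_000,  10_000_000),
}

total_users = sum(users for users, _ in segments.values())
total_rev = sum(rev for _, rev in segments.values())
blended_arpu = total_rev / total_users

for name, (users, rev) in segments.items():
    print(f"{name}: ${rev / users:.2f}/user/month")

# The blended figure is dragged down by millions of cheap IoT SIMs,
# telling you little about any of the underlying businesses.
print(f"Blended ARPU: ${blended_arpu:.2f}/user/month")
```

Here the segments range from $0.50 to $50 per "user" per month, yet the single reported number sits near $12 - a figure that describes nobody.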

The other issue is that ARPU further locks telcos into the mentality of the "monthly subscription" model. While fixed monthly subs, or "pay as you go top-up" models still dominate in wireless, others are important too, especially in the IoT world. Some devices are sold with connectivity included upfront.

Enterprises buying private cellular networks specifically want to avoid per-month or per-GB "plans" - it's one of the reasons they are looking to create their own dedicated infrastructure. MNOs may need to think in terms of annual fees, systems integration and outsourcing deals, "devices under management" and all sorts of other business models. The same is true if they want to sell "slices" or other blended capabilities - perhaps geared to SLAs or business outcomes.

Lastly - what is a "user" in future? An individual human with a subscription? A family? A home? A group? A device?

ARPU is another metric overdue for obsolescence.

CO2 "enablement" savings

I posted last week about the growing trend of companies and organisations to cite claims that a technology (often 5G or perhaps IoT in general) allows users to "save X tons of CO2 emissions".

You know the sort of thing - "Using augmented reality conferencing on your 5G phone for a meeting avoids the need for a flight & saves 2.3 tons of CO2" or whatever. Even leaving aside the thorny issue of Jevons' Paradox - whereby efficiency tends to expand usage rather than replace it - there's a big problem here:

Double-counting.

There's no attempt at allocating this notional CO2 "saving" between the device(s), the network(s), the app, the cloud platform, the OS & 100 other elements. There's no attempt such as "we estimate that 15% of this is attributable to 5G for x, y, z reasons".

Everyone takes 100% credit. And then tries to imply it offsets their own internal CO2 use.

"Yes, 5G needs more energy to run the network. But it's lower CO2 per bit, and for every ton we generate, we enable 2 tons in savings in the wider economy".

Using that logic, the greenest industry on the planet is industrial sand production, as it's the underlying basis of every silicon chip in every technological solution for climate change.
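The double-counting is worth making concrete. In this sketch (the list of claimants and the 2.3-ton figure are just illustrative, echoing the hypothetical flight example above), every element of the value chain claims 100% of the same saving:

```python
# Illustrative sketch: if every element in the value chain claims 100% of the
# same "enabled" CO2 saving, the aggregate claim is a multiple of reality.

actual_saving_tons = 2.3  # e.g. one avoided flight (hypothetical figure)

# Each of these takes full credit for the same avoided flight.
claimants = ["device", "5G network", "app", "cloud platform", "OS"]
claimed = {name: actual_saving_tons for name in claimants}

total_claimed = sum(claimed.values())
overcount = total_claimed / actual_saving_tons

print(f"Actual saving: {actual_saving_tons}t; "
      f"claimed in aggregate: {total_claimed:.1f}t "
      f"({overcount:.0f}x overcounted)")
```

With five claimants the "enabled" savings are reported five times over - and a real value chain has far more than five elements taking credit.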

There's some benefit from CO2 enablement calculations, for sure - and there's more work going into reasonable ways to allocate savings (look in the comments for the post I link to above), but readers should be super-aware of the limitations of "tons of CO2" as a metric in this context.

So what's the answer?

It's fairly easy to poke holes in things. It's harder to find a better solution. Having maintained spreadsheets of company and market performance and trends myself, I know that analysis is often held hostage by what data is readily available. Telcos report minutes-of-use and ARPU, so that's what everyone else uses as a basis. Governments may demand that reporting, or frame rules in those terms (for instance, wholesale voice termination rates have "per minute" caps in some countries).

It's very hard to escape from the inertia of a long and familiar dataset. Nobody wants to recreate their tables and try to work out historic comparables. There is huge path dependence at play - small decisions made years ago, entrenched in practice in perpetuity, even though the original rationale has long since gone. (You may have noticed me mention path dependence a few times recently. It's a bit of a focus of mine at the moment....)

But there's a circularity here. Certain metrics get entrenched and nobody ever questions them. They then get rehashed by governments and policymakers as the basis for new regulations or measures of market success. Investors and competition authorities use them. People ignore the footnotes and asterisks warning of their limitations.

The first thing people should do is question the definitions of familiar public or private metrics. What do they really mean? For a ratio, are the assumptions (and definitions) for both denominator and numerator still meaningful? Is there some form of allocation process involved? Are there averages which amalgamate lots of dissimilar categories?

I'd certainly recommend Tim Harford's book "How to Make the World Add Up" (link) as a good backgrounder to questioning how stats are generated and sometimes misused.

But the main thing I'd suggest is asking whether metrics can either hide important nuance - or can set up flawed incentives for management.

There's a long history of poor metrics having unintended consequences. For example, it would be awful (but not inconceivable) to raise ARPUs by cancelling the accounts of low-end users. Or perhaps an IoT-focused vertical service provider gets punished by the markets for "overpaying" for spectrum in an area populated by solar panels rather than people.

Stop and question the numbers. See who uses them / expects them and persuade them to change as well. Point out the fallacies and flawed incentives to policymakers.

If you have any more examples of bad numbers, feel free to add them in the comments. I forecast there will be 27.523 of them, by the end of the year.

The author is an industry analyst and strategy advisor for telecoms companies, governments, investors and enterprises. He often "stress-tests" qualitative and quantitative predictions and views of technology markets. Please get in touch if this type of viewpoint and analysis interests you - and also please follow @disruptivedean on Twitter.

Wednesday, November 25, 2020

Interoperability is often good – but should not be mandated

Note: this post was first published via my LinkedIn Newsletter. Please subscribe (here) & also join the comment & discussion thread on LI

Context: I'm going to be spending more time on telecom/tech policy & geopolitics over the next few months, spanning UK, US, Europe & Global issues. I'll be sharing opinions & analysis on the politics of 5G & Wi-Fi, spectrum, broadband plans, supply-chain diversity & competition.

Recently, I've seen more calls for governments to demand mandatory interoperability between technology systems (or between vendors) as a regulatory tool. I think this would be a mistake - although incentivising interop can sometimes be a good move for various reasons. This is a fairly long post to explain my thinking, with particular reference to Open RAN and messaging.

Background & history

The telecoms industry has thrived on interoperability. Phone calls work from anywhere to anywhere, while handsets and other devices are tested & certified for proper functioning on standardised networks. Famously, interoperability between different “islands” of SMS led to the creation of a huge market for mobile data services, although that didn't happen overnight in many countries.

Much the same is true in the IT world as well, with everything from email standards to USB connections and Wi-Fi certification proving the point. The web and open APIs make it easier for cloud applications to work together harmoniously.

Image source: https://pixabay.com/illustrations/rings-wooden-rings-intertwined-100181/

But not everything valuable is interoperable, and interoperability isn't the only approach. Proprietary and vertically-integrated solutions remain important too.

Many social media and communications applications have very limited touch-points with each other. The largest 4G/5G equipment companies don’t allow operator customers to mix-and-match components in their radio systems. Many IT systems remain closed, without public APIs. Consumers can’t choose to subscribe to network connectivity from MNO A, but telephony & SMS from ISP B, and exclusive content belonging to cable company C.

This isn't just a telecom or IT thing. It’s difficult to get different industrial automation systems to work together. An airline can’t buy an airframe from Boeing, but insist that it has avionics from Airbus. The same is true for cars' sub-systems and software.

Tight coupling or vertical integration between different subsystems can enable better overall efficiency, or more fluid consumer experience - but at the cost of creating "islands". Sometimes that's a problem, but sometimes it's actually an advantage.

Well-known examples of interoperability in a narrow market subset can obscure broader use of proprietary systems in a wider domain. Most voice-related applications, beyond traditional "phone calls", do not interoperate by default. You could probably connect a podcast platform to a karaoke app, home voice assistant and a critical-communications push-to-talk system.... but why would you? (This is one reason why I always take care to never treat "voice" and "telephony" synonymously).

Hybrid, competitive markets are optimal

So there is value in interoperable systems, and also in proprietary alternatives and niches. Some sectors gravitate towards openness, such as federation between different email systems. Others may create de-facto proprietary approaches - which might risk harmful monopolies, or which may be transferred to become open standards (for instance, Adobe's PDF document format).

And even if something is based on theoretically interoperable underpinnings, it might still not interoperate in practice. Most enterprise Private 4G and 5G networks are not connected to public mobile networks, even though they use the same standards.


Interoperability can be both a positive and negative for security. Open and published interfaces can be scrutinised for vulnerabilities, and third-parties can test anything that can be attached to something else. Yet closed systems have fewer entry points – the “attack surface” may be smaller. Having a private technology for a specific purpose – from a military communications infrastructure to a multiplayer gaming network – may make commercial or strategic sense.

In many areas of technology, we see a natural pendulum swing between openness and proprietary approaches. From open flexibility to closed-system optimisation, and back again. Often there are multiple layers of technology, where the pendulum swings with a different cadence for each. Software-isation of many hardware products means a given system might employ multiple layers at the same time.

 Consider this (incomplete and sometimes overlapping) set of scenarios for interoperability:

  • Between products: A device needs to be able to connect to a network, using the right radio frequencies and protocols. Or an electrical plug needs to fit into a standardised socket.
  • Within products or solutions (between components): A product or service can be considered to be just a collection of sub-systems. A computer might be able to support different suppliers’ memory chips or disks, using the same sockets. A browser could support multiple ad-blockers. A telco’s virtualised network could support different vendors for certain functions.
  • Application-to-application / service-to-service: An application can link to, integrate or federate with another - for instance a reader could share this article on their Twitter feed, or mobile user can roam onto another network, or a bank can share data access with an accounting tool.
  • Data portability: Data formats can be common from one system to another, so users can own and move their "state" data and history. This could range from porting a phone number, to moving uploaded photos from one social platform to another.

There’s also a large and diverse industry dedicated to gluing together things which are not directly interoperable – and acting as important boundaries to enforce security, charging or other functions. Session Border Controllers link different voice systems, with transcoders to translate between different codecs. Gateways link Wi-Fi or Bluetooth IoT devices to fixed or wireless broadband backhaul. Connectors enable different software platforms to work together. Mapping functions will eventually allow 5G network slicing to work across core, transport and radio domains, abstracting the complexities at the boundaries.

Added to this is the entire sphere of systems integration – the practice of connecting disparate systems and components together, to create solutions. While interoperability helps SIs in some ways, it also commoditises some of their business.

Coexistence vs. interoperation

Yet another option for non-interoperable systems is rules for how they can coexist, without damaging each other’s operation. This is seen in unlicensed or shared wireless spectrum bands, to avoid “tragedies of the commons” where interference would jam all the disparate systems. Even licensed bands can be "technology neutral".

Analogous approaches enable the safe coexistence of different types of road users on the same highway - or in the voice/video arena, technologies such as WebRTC which embed "codec negotiation" procedures into the standards.

Arguably, improving software techniques, automation, containerisation and AI will make such interworking and coexistence approaches even easier in future. Such kludginess might not please engineering purists who value “elegance”, but that’s not the way the world works – and certainly shouldn’t be how it’s regulated.

In a healthy and competitive market, customers should be able to choose between open and closed options, understanding the various trade-offs involved, yet be protected from abusive anti-competitive power.

A great example of consumer gains and "generativity" in innovation is that of the Internet itself, which works alongside walled-garden, telco or private-network alternatives to access content and applications.

Customers can have the best of both worlds - accelerated, because of the competitive tensions involved. The only risk is that of monopolies or oligopolies, which requires oversight.

Where does government & regulatory policy fit in this?

This highlights an important and central point: the role of government, and its attitude to technology standards, interoperability and openness. This topic is exemplified by various recent initiatives, ranging from enthusiasm around Open RAN for 5G in the US, UK and elsewhere, to the EU’s growing attempts to force Internet platform businesses to interoperate and enable portability of data or content, as part of its Digital Services Act.

My view is that governments should, in general, let technology markets, vendors and suppliers make their own choices.

It is reasonable that governments often want to frame regulation in ways to protect citizens from monopolists, or risks of harm such as cybersecurity. In general, competition rules are developed across industries, without specific rules about products, unless there is unfair vertical integration and cross-subsidy.

Governments can certainly choose to adopt or even incentivise interoperability for various reasons – but they should not enshrine it in laws as mandatory. If you're a believer in interventionist policies, then incentivising market changes that favour national champions, foster inward investment and increase opportunities can make sense - although others will clearly differ.

(Personally, I think major tranches of intervention and state-aid should only apply to game-changers with huge investment needs - so perhaps for carbon capture technology, or hydrogen-powered aviation).

Open RAN may be incentivised, not mandated

A particular area of focus by many in telecoms is around open radio networks. The O-RAN Alliance and the TIP OpenRAN project are at the forefront, with many genuinely impressive innovations and evolutions occurring. Rakuten's deployment is proving to be a beacon - at least for greenfield networks - while others such as Vodafone are using this architectural philosophy for rural coverage improvements.

Governments are increasingly involved as well - seeing a possible way to meet voters' desires for better/cheaper coverage, while also offsetting perceived risks from concentrations of power in a few large integrated vendors. This latter issue has been pushed further into the limelight by Huawei's fall from favour in a number of countries, which then see a challenge from a smaller number of alternative providers - Nokia, Ericsson and in some cases Samsung and NEC or niche providers.

This combination of factors then gets further conflated with industrial policy goals. For instance, if a country is good at creating software but not manufacturing radios, then Open RAN is an opportunity, that might merit some form of R&D stimulus, government-funded testbeds and so on.

So I can see some arguments for incentives - but I would be very wary of a step to enshrine any specific interop requirements into law (or rules for licenses), or for large-scale subsidies or plans for government-run national infrastructure. The world has largely moved to "tech neutral" approaches in areas such as spectrum awards. In the past, governments would mandate certain technologies for certain bands - but that is now generally frowned upon.

No, message apps should not interoperate

Another classic example of undesirable "forced interoperability" is in messaging applications. I've often heard many in the telecoms industry assert that it would be much better if WhatsApp, iMessage, Telegram, Snap - and of course the mobile industry's own useless RCS standard - could interconnect. Recently, some government and lobbying groups have suggested much the same, especially in Brussels.

Yet this would instantly hobble the best and most unique features of each - how would ephemeral (disappearing) messages work on systems that keep them stored perpetually? How would an encrypted platform interoperate with a non-encrypted platform? How could an invite/accept contact system interwork with a permissive any-to-any platform? How would a phone-number identity system work with a screen-name one?

... and that's before the real unintended consequences kick in, when people realise that their LinkedIn messages now interoperate with Tinder, corporate Slack and telemedicine messaging functions.

That doesn't mean there's never a reason to interoperate between message systems. In particular, if there's an acquisition it can be useful and important - imagine if Zoom and Slack merged, for instance. Or a gaming platform's messaging might want users to send invitations on social media. I could see some circumstances (for business) where it might be helpful to link Twitter and LinkedIn - but also others where it would be a disaster (I'm looking at you, Sales Navigator spamming tools).

So again - interoperability should be an option. Not a default. And in this case, I see zero reasons for governments to incentivise.

Conclusion

Interoperability between technology solutions or sub-systems should be possible - but it should not be assumed as a default, nor legislated in areas with high levels of innovation. It risks creating lowest-common-denominator outcomes which do not align with users' needs or behaviours. Vertical integration often brings benefits, and as long as the upsides and downsides are transparent, users can make informed trade-offs and choices.

Lock-in effects can occur in both interoperable and proprietary systems. I'll be writing more about the concept of path dependence in future.

Regulating or mandating interoperability risks various harms - not just a reduction in innovation and differentiation, but also unexpected and unintended consequences. Many cite the European standardisation of GSM 2G/3G mobile networks as a triumph - yet the US, Korea, Japan, China and others allowed a mix of GSM, CDMA and local oddities such as iDEN, WiBro and PHS. No prizes for guessing which parts of the world now lead in 5G, although correlation doesn't necessarily imply causation here.

There's also a big risk from setting precedents that could lead to unintended consequences. Perhaps car manufacturers would be next in line to be forced to have open interfaces for all the electronic systems, impacting many automakers' potential revenues. Politicians need to think more broadly. As a general rule, if someone uses the obsolete term "digital" in the context of interop, they're not thinking much at all.

I've written before about the possible risks to telcos from the very "platform neutrality" concept that many have campaigned for. Do they imagine regulators wouldn't notice that many have their own ambitions to be platform providers too?

In my view, an ideal market is made up of a competitive mix of interoperable and proprietary options. As long as abuses are policed effectively, customers should be able to make their own trade-offs - and their own mistakes.



As always - please comment and discuss this. I'll participate in the discussions as far as possible. If you've found this thought-provoking, please like and share on LinkedIn, Twitter and beyond. And get in touch if I can help you with internal advisory work, or external communications or speaking / keynote needs.

Note: this post was first published via my LinkedIn Newsletter. Please subscribe (here) & also join the comment & discussion thread on LI

#5G #openran #regulation #telecom #mobile #interoperability #competition #messaging #voice #innovation


Friday, January 03, 2020

Predictions for the next decade: looking out to 2030 for telecoms, wireless & adjacent technologies


It's tempting to emulate every other analyst & commentator and write a list of 2020 predictions of success and failure. In fact, I got part-way into a set of bullet points about what's overhyped and underhyped.

But to be honest, if you read my articles and tweets, you probably know what I think about 2020 already. Private cellular networks will be important (4G, initially). 5G fixed wireless is interesting and will grow the FWA market - but won't replace fibre. 5G is Just Another G and is overhyped, especially until the new core matures. RCS is still a worthless zombie, eating brains. But I don't need to repeat all this in detail, just because I'm a bit more sharp-worded than most observers. It wouldn't tell you much new.

But seeing as I spend a fair amount of time advising clients about the longer-term future, 5-10 years out or even further, I thought I'd set my sights higher. I use the term "telco-futurism" to look at the impacts of technology and broader society on telecoms, and vice versa.

So, at the start of the 2020s, what about the next decade? Assuming I haven't retired to my palatial Mars-orbiting private Moon in 10 years' time, what do I think I'll be writing, podcasting (or neural-transmitting) about in 2030?

So, let's have a few shots at this more-distant target...

  • 6G: In 2030, the first 6G networks are already gaining traction in the marketplace. The first users are still fixed connections to homes, and personal devices that look a bit similar to phones and wearables, but with a variety of new display and UI technologies, including contact lenses and advanced audio/haptic interfaces. 6G represents the maturing of various 5G concepts (such as the new core), plus greater intelligence to allow efficient operation. 
  • Details, details: Much of the 2020s will have been spent dealing with numerous "back-office" problems that have stopped many early 5G visions becoming real. Network-slicing will have thrown up huge operationalisation and security issues. Dealing with QoS/slice roaming or handoff, at borders between networks (outdoor / indoor / private / neutral / international) will be hugely complex. Edge computing scenarios will turn out to need local peering or interconnection points. All of these will have huge extra complexities with billing, pricing and monitoring. mmWave planning and design tools will need to have matured, as well as the processes for installation and operation. Training and skills for all of this will have been time-consuming and expensive - we'll need hundreds of thousands of experts - often multi-domain experts. By the time all these issues get properly fixed, 6G radios and vendors will exploit them, rather than the "legacy 5G" infrastructure. See this post for my discussion about the telecom industry's problems with accurate timelines.
  • Device-Network cooperation: By 2030, mobile ecosystems and control software will break today's silos between radio network, devices and applications much more effectively. Sensors in users' devices, cell-towers and elsewhere will be linked to AI which works out how, why and where people or IoT objects need connectivity and how best to deliver it. Recognise a moving truck with machine-vision, and bounce signals off it opportunistically. Work out that someone is approaching the front of a building, and pre-emptively look for Wi-Fi, or negotiate with the in-building neutral host on a marketplace before they enter the door. Spot behavioural patterns such as driving the same route to work, and optimise connectivity accordingly. Recognise a low battery, and tweak the "best-connected" algorithm for power efficiency, and downrate apps' energy demand. Integrate with crowd-flow patterns or weather forecasts. There will be thousands of ways to improve operations if networks stop just thinking of a "terminal" as just an endpoint, and look for external sources of operational data - that's a 20th Century approach. Expect Google's work on its Fi MVNO & Android/Pixel phones, and similar efforts by Samsung and maybe Apple, Qualcomm and ARM, to have driven much of this cross-domain evolution.
  • Energy-aware networks: Far more energy-awareness will be designed into all aspects of the network, cloud and device/app ecosystem. I'm not predicting some sort of monolithic and integrated cascading-payments system linked into CO2-taxes, but I expect "energy budget" to be linked much more closely to costs (including externalities) in different areas. How best to optimise wired/wireless data for power demand, where best to charge devices, "scavenging" for power and so on. Maybe even "nudge" people to lower-energy applications or consumption behaviours by including "power-shaming" indicators. If 3GPP and governments get their act together, as well as vendors & CSPs, overall 6G energy use will be a higher priority design-goal than throughput speed and latency.
  • Wi-Fi: We'll probably be on Wi-Fi 9 by 2030. It will continue to dominate connectivity inside buildings, especially homes and business premises with FTTX broadband (i.e. most of them in developed markets). It will continue to be used for primary connectivity on high-throughput / low-margin / low-mobility devices like TVs and display screens, PC-type devices, AR/VR headsets and so on. It will be bonded together with 5G/6G and other technologies with ever-better multi-path mechanisms, including ad-hoc device meshes. Ease of use will have improved, with the success of approaches like OpenRoaming. Fairly little public Wi-Fi will be delivered by "service providers" as we think of them today.  We'll probably still have to suffer the "6G will kill Wi-Fi" pundit-pieces and hype, though.
  • Spectrum: The spectrum world changes slowly at a global level, thanks to the glacial 4-year cycle of ITU WRCs. By 2030 we will have had 2023 and 2027 conferences, which will probably harmonise more spectrum for 5G/6G, satellites & high-altitude platforms (HAPS) and Wi-Fi type unlicensed use. The more interesting developments will occur at national / regional levels, below the ITU's role, in how these bands actually get released / authorised - and especially whether that's for localised or shared usage suitable for private networks and other innovators. By 2030 we should have been through 2+ cycles of US CBRS and UK/Germany/Japan/France style local licensing experiments, allocation methods, databases and sensing systems. I think we'll be closer to some of the "spectrum-as-a-service" models and marketplaces I've been discussing over the last 24 months, with more fluid resale and temporary usage permits. International allocations will still differ though. We will also see whether other options, such as "national licenses with lots of extra conditions" (eg MVNO access, rural coverage, sharing, power use etc) has helped maintain today's style of MNOs, despite the grumbling. We will also see much more opportunism and flexibility in band support in silicon/devices, and more sophisticated approaches to in-band sharing between different technologies. I'm less certain whether we will have progressed much with commercialisation of mmWave bands 20-100GHz, especially for mobile and indoor use. It's possible and we'll certainly see lots of R&D, but the practicalities may prove insuperable for wide usage.
  • Private/neutral cellular: Today, there are around 1,000 MNOs globally (public and private). By 2030, I'd expect there to be between 100,000 and a million networks, probably with various new types of service provider, aggregation hubs and consortia. These will span industrial, city, office, rural, utility, "public venue" and many other domains. It will be increasingly hard to distinguish private from public, eg with MNOs' campus networks with private cores and hybrid public/private spectrum. We might even get another zero, if the goals of making private 4G/5G as easy and cheap to build as Wi-Fi prove feasible, although I have doubts. Most of these networks will be user-specific, but a decent fraction will be multi-tenant, either offering wholesale access or roaming to "legacy MNOs" as neutral hosts, or with some sort of landlord model such as a property company running a network with each occupied floor or building on campus as a "semi-private" network. Some such networks will look like micro-telcos (eg an airport providing access to caterers & airlines) and will need billing, management & security tools - and perhaps new forms of regulation. This massive new domain will help catalyse various shifts in the vendor community as well - especially cloud-native core and BSS/OSS, and probably various forms of open RAN, and also "neutral edge".
  • Security & privacy: I'm not a security expert, so I hesitate to imagine the risks and responses 10 years out. Both good and bad guys will be armed to the teeth with AI. We'll see networks attacked physically as well as logically. We'll see sophisticated thefts of credentials and what we quaintly term "secrets" today. There will be cameras and mics everywhere. Quantum threats may compromise encryption - and other quantum tools may enhance it, as well as provide new forms of identity and authentication. We will need to be wary of threats within core networks, especially where orchestration and oversight is automated. I think we will be wise to avoid "monocultures" of technologies at various levels of the network - we need to trade off efficiency and scale vs. resilience.
  • Satellite / HAPS: We'll definitely have more satellite constellations by 2030, including some huge ones from SpaceX or others. I have my doubts that they will be "game-changers" in terms of our overall broadband use, except in rural/remote areas. They won't have the capacity of terrestrial networks, and signals will struggle with indoor penetration and uplink from anything battery-powered. Vehicles, planes, boats and remote IoT will be much better-connected, though. Space junk & cascading-collision scenarios like the movie Gravity will be a worry. I'm not sure about drones and balloons as HAPS for mass-market use, although I suspect they'll have some cool applications we don't know today.
  • Cloud & edge: Let's get one thing clear - the bulk of the world's computing cycles & data storage will continue to occur in massive datacentres (perhaps heading towards a terawatt of aggregate power by 2030) and on devices themselves, or nearby gateways. But there will be a thriving mid-market of different sorts of "edge" as I've covered in many posts and presentations recently. This will partly be about low-latency, but not as much as most people think. It will be more about saving mass data-transport costs, protecting "data sovereignty" and perhaps optimising energy consumption. A certain amount will be inside telcos' networks, but without localised peering / aggregation this will be fairly niche, or else it will be wholesaled out to the big cloud players. There will be a lot of value in the overall orchestration of compute tasks for applications between multiple locations in the ecosystem, from chip-level to hyperscale and back again. The fundamental physical quantum of much edge compute will be mundane: a 40ft shipping container, plonked down near sources of power and fibre.
  • Multi-network: We should expect all connectivity to be "software-defined" and "multi-network". Devices will have lots of radios, connecting simultaneously, with different paths and providers (and multiple eSIM / other identities). Buildings will have multiple fibres, wireless connections and management tools. Device-to-device connections and relaying will be prevalent. IoT will use a selection of LPWAN technologies as well as Wi-Fi, cellular and short-range connections. Satellite and maybe LiFi (light-based) connections will play new roles. Arbitrage, bonding, load-balancing will occur at multiple levels from silicon to OS to gateway to mid-network. Very few things will be locked to a single network or provider - unless it has unique value such as managed security or power consumption.
  • Voice & messaging: Telephony will be 150 years old in 2026. By 2030 we'll still be making some retro-style "phone calls" although it will seem even more clunky, interruptive, unnatural and primitive than today. (It won't stop the cellular industry spending billions upgrading to Vo6G though). SMS won't have disappeared, either. But most consumers will communicate through a broad variety of voice and video interaction models, in-app, group-based, mediated by an array of assistants, and veracity-checked to avoid "fake voice" and man-in-the-middle attacks of ever increasing subtlety. Another 10 years of evolution beyond emojis, stories, filters and live broadcasts will allow communication which is expressive, emotion-first, and perhaps even richer and more nuanced than in-person body language. I'm not sure about AR/VR comms, although it will still be more important than RCS which will no doubt be celebrating its 23rd year of irrelevance, hype and refusal to die.
  • Enterprise comms: UCaaS, cPaaS and related collaboration tools will progress steadily, if unspectacularly - although with ever more cloud focus. There will be more video, more AI-enriched experiences for knowledge management, translation, whispered coaching and search. There will be attempts to reduce travel to meetings and events as carbon taxes bite, although few will come close to the in-person experience or effectiveness. We'll still have some legacy phone calls and numbers (as with consumer communications) although these will be progressively pushed to the margins of B2B and E2E interactions. Ever more communications will take place "contextually" - within apps, natively supported in IoT devices, or with AI-based assistants. Contact centres and customer interactions will be battlegrounds for bots and assistants on both sides. ("Alexa, renegotiate my subscription for a better price - you have permission to emulate my voice"). Security and verification will be highly prized - just because something is heard doesn't mean it will match what was originally spoken.
  • Network ownership models: Some networks of today will still look mostly like "telcos" in 2030, but as I wrote in this post the first industry to be transformed by 5G will be the telecom industry itself. We'll see many new stakeholders, some of which look like SPs, some which are private network operators, and many new forms of aggregator, virtual operator, wholesale or neutral mobile/fibre provider. I'm not expecting a major shift back to nationalised or government-run networks, but I think regulations will favour more sharing of assets where it makes sense. Individual industries will take control of their own connectivity and communications, perhaps using standardised 5G, or mild variations of it. There will be major telcos of today still around - but most will not be providing "slices" to companies or offering deep cross-vertical managed services. There will be M&A which means that we'll have a much more heterogeneous telco/CSP market by 2030 than today's 800 identikit national MNOs. Fixed and fibre providers will be diverse as well - especially with the addition of cloud, utility and municipal providers. I think the towerco / property-telco model will be important as asset owners / builders as well.
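The arbitrage / bonding / load-balancing idea in the multi-network bullet above can be sketched as a simple link-scoring policy. This is a purely illustrative toy under my own assumptions - the names, weights and units are hypothetical, not any real device stack's API:

```python
# Hypothetical sketch: a "multi-network" device scoring its candidate links.
# Weights and metrics are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Link:
    name: str
    latency_ms: float   # measured round-trip latency
    cost_per_gb: float  # monetary cost of carrying traffic on this path
    power_mw: float     # radio power draw while active

def score(link: Link, low_battery: bool = False) -> float:
    """Lower is better. Weight power draw more heavily when the battery is low."""
    power_weight = 3.0 if low_battery else 1.0
    return (link.latency_ms * 0.5
            + link.cost_per_gb * 10.0
            + link.power_mw * 0.01 * power_weight)

def best_link(links: list[Link], low_battery: bool = False) -> Link:
    # Pick the lowest-scoring (i.e. cheapest overall) available path
    return min(links, key=lambda l: score(l, low_battery))

links = [
    Link("wifi", latency_ms=12, cost_per_gb=0.0, power_mw=300),
    Link("5g",   latency_ms=25, cost_per_gb=2.0, power_mw=900),
    Link("leo",  latency_ms=45, cost_per_gb=5.0, power_mw=1500),
]

print(best_link(links).name)                    # prints "wifi"
print(best_link(links, low_battery=True).name)  # prints "wifi"
```

A real implementation would of course run this continuously across silicon, OS and gateway layers, and split or bond flows across several paths rather than picking a single winner - but the core trade-off (latency vs. cost vs. energy, with context-dependent weights) is the same.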
I realise that I could go on at length about many other topics here - autonomous and connected vehicles, the future of cities and socio-political spheres, shifts in entertainment models, the second wave of blockchain/ledgers, the role of human enhancement & biotech, new sources of energy and environmental technology, new forms of regulation and so forth. But this list is already long enough, I think. Various of these topics will also appear in podcasts - which I'm intending to ramp up in 2020. At the moment I'm on SoundCloud (link) but watch out here or on Twitter for announcements of other platforms.

If this has piqued your interest, please comment on my blog or LinkedIn article. This is a vision for 2030, which I hope is self-consistent and reasonable - but it is not the only plausible future scenario.

If you're interested in running a private workshop to discuss, debate and strategise around any of these topics, please get in touch via private message, or information AT disruptive-analysis DOT com. I work with numerous operators, vendors, regulators, industry bodies and investors to imagine the future of networks and other advanced technologies - and steer the path of evolution.

Happy New Year! (and New Decade)