Saturday, February 25, 2017

A Core Problem for Telcos: One Network, or Many?

In my view the central question - maybe an existential dilemma - facing the telecoms industry is this:

Is it better to have one integrated, centrally-managed and feature-rich network, or several less feature-rich ones, operated independently?

Most of the telecoms "establishment" - operators, large vendors, billing/OSS suppliers, industry bodies - tends to prefer the first option. So we get notions of networks with differentiated QoS levels, applications embedded in-network with NFV and mobile edge computing (MEC), and perhaps the "slicing" of future 5G networks, with external customer groups or applications becoming virtual operators. There is an assumption that all the various standards are tightly coupled - radio, core network, so-called "telco cloud", IMS and so on. Everything is provided as a "network function" or "network service" in integrated fashion, and monetised by a single CSP.

It's not just the old guard either. New "non-establishment" approaches to managing quality also appear, such as my colleague Martin Geddes' views on clever and deterministic contention-management mechanisms (link), which take a fresh look at statistical multiplexing.

Yet users, device vendors and cloud/Internet application providers often prefer a different approach. Using multiple network connections, either concurrently or with the ability to switch between them easily, is seen as a way to reduce costs, improve coverage and spread risk. I've written before about using independent connections to create "Quasi-QoS" (link), especially in fixed networks with SD-WAN. In mobile, hundreds of millions of users have multi-SIM handsets, while (especially in IoT) we see multi-IMSI SIM cards that can be combined with roaming deals to give access to all mobile networks in a given country, or to optimise for cost/performance in other ways. Google's Fi service famously combines multiple MVNO deals, as well as WiFi. Others are looking to blend LPWAN with cellular, or satellite, and so on. The incremental cost of adding another connection (especially wireless) is getting ever lower. At the other end of the spectrum, data centres will often want redundant fibre connections from different providers, to offset the risk of a digger cutting a duct, as well as the ability to arbitrage on pricing or performance.
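As a rough illustration of the "Quasi-QoS" idea, here is a minimal sketch (my own example, not a description of any real SD-WAN product) of per-flow selection across several independent links. The link names, metrics and thresholds are hypothetical assumptions:

```python
# Minimal sketch of "Quasi-QoS": approximating quality by choosing per-flow
# among several independent connections, rather than relying on a single
# operator's in-network QoS. All link names and metrics are hypothetical.
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    latency_ms: float      # recently measured round-trip latency
    loss_pct: float        # recently measured packet loss
    cost_per_gb: float     # commercial cost of sending data on this link

def pick_link(links, flow_kind):
    """Latency-sensitive flows prefer the best-performing healthy link;
    bulk flows prefer the cheapest one that is not visibly degraded."""
    usable = [l for l in links if l.loss_pct < 5.0]
    if not usable:
        usable = links  # degrade gracefully rather than fail outright
    if flow_kind == "realtime":
        return min(usable, key=lambda l: (l.latency_ms, l.loss_pct))
    return min(usable, key=lambda l: l.cost_per_gb)

links = [
    Link("fibre_ispA", latency_ms=8, loss_pct=0.1, cost_per_gb=0.012),
    Link("fibre_ispB", latency_ms=12, loss_pct=0.0, cost_per_gb=0.008),
    Link("lte_backup", latency_ms=45, loss_pct=1.5, cost_per_gb=0.50),
]

print(pick_link(links, "realtime").name)  # lowest-latency healthy link: fibre_ispA
print(pick_link(links, "bulk").name)      # cheapest healthy link: fibre_ispB
```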

I have spoken to "connected car" specialists who want their vehicles to have access not just to (multiple) cellular networks, but also to satellite and, in some locations, WiFi - and to work acceptably in offline mode too. Many software developers create apps which are "network aware", with connectivity preferences and fallbacks. We can expect future AI-based systems to be much smarter as well - perhaps your car will know that your regular route to work has 10 miles of poor 4G coverage, so it learns to pre-cache data, or uses a temporary secondary cellular link from a different provider.
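To make that concrete, here is a hypothetical sketch of such a "network-aware" client: an ordered connectivity preference list with fallback, plus a crude pre-caching decision for a route segment with learned poor coverage. The names, thresholds and data structures are illustrative assumptions, not any vendor's API:

```python
# Hypothetical "network-aware" client: preference-ordered fallback between
# connection types, plus a predictive pre-cache decision for an upcoming
# route. Everything here is illustrative, not a real framework.
PREFERENCES = ["wifi", "cellular_primary", "cellular_secondary", "satellite"]

def choose_connection(available):
    """Return the most-preferred connection that is currently available,
    or None, in which case the app should switch to offline mode."""
    for option in PREFERENCES:
        if option in available:
            return option
    return None

def should_precache(route_segments, threshold_km=5):
    """Pre-fetch content if the upcoming route contains a long stretch of
    predicted poor coverage (e.g. learned from previous journeys)."""
    poor_km = sum(seg["km"] for seg in route_segments if seg["coverage"] == "poor")
    return poor_km >= threshold_km

available_now = {"cellular_secondary", "satellite"}
print(choose_connection(available_now))   # -> "cellular_secondary"

commute = [{"km": 12, "coverage": "good"}, {"km": 10, "coverage": "poor"}]
print(should_precache(commute))           # -> True: pre-cache before leaving
```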

There are some middle grounds as well. Technologies such as MIMO in wireless networks give "managed multiplicity", using multiple antennas and bounced radio signals. Plenty of operators offer 4G backup for fixed connections, or integrate WiFi into the same core infrastructure. The question then is whether the convergence occurs in the network, or perhaps just in the billing system. Is there a single point of control (or failure)?

The problem for the industry is this: multi-network users want all the other features of the network (security, identity, applications etc) to work irrespective of their connection. Smartphone users want to be able to use WiFi wherever they are, and get access to the same cloud services - not just the ones delivered by their "official" network operator. They also want to be able to switch provider and keep access - the exact opposite of the type of "lock-in" that many in the telecoms industry would prefer. Google Fi does this, as it can act as an intermediary platform. That's also true for various international MVNO/MNO operators like Truphone.

A similar problem occurs at an application level: can operators push customers to be loyal to a single network-resident service such as telephony, SMS or (cough) RCS? Or are alternative forces pushing customers to choose multiple different services, either functionally-identical or more distant substitutes? It's pretty clear that the low marginal cost of adding another VoIP, IM or social-network app outweighs the benefits of having one "service to rule them all", no matter how smart it is. In this case, it's not just redundancy and arbitrage, but the ability to choose fine-grained features and user-experience elements.

In the past, the trump card for the mono-network approach has been QoS and guarantees. But ironically, the shift to mobile usage has reduced the potential here - operators cannot really guarantee QoS on wireless networks, as they are not in control of local interference, mobility or propagation risks. It is hard to imagine a credible SLA guaranteeing network connection quality, or application performance, that only held as long as it wasn't raining, or there wasn't a crowd of people outside your house.

In other words, the overall balance is shifting towards multiplicity of networks. This tends to pain many engineers, as it means networks will (often) be less deterministic, because they are (effectively) inverse-multiplexed. Rather than one network being shared between many users/applications, we will see one user/device sharing many networks.

While there will still be many use-cases for well-managed networks - even if users ultimately combine several of them - this means that future developments around NFV and network-slicing need to be realistic, rather than utopian. Your "slice" or QoS-managed network may only be used a % of the time, rather than exclusively. It's also likely that your "customer" will be an AI or smart application, rather than an end-user susceptible to being offered loyalty incentives. That has significant implications for pricing and the value chain - for example, it means that aggregators and brokers will become much more important in future.
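As a sketch of what that broker-mediated, machine-driven purchasing might look like (purely illustrative - the offer fields, weights and operator names are my assumptions, not any real broker's interface), an automated "customer" could simply score competing connectivity offers on measurable terms:

```python
# Illustrative sketch of an aggregator scoring connectivity offers on behalf
# of an automated "customer" (e.g. a fleet-management application): machines
# compare offers on measurable terms, not loyalty. Fields and weights are
# assumptions for the example, not a real broker's API.
def score(offer, needs):
    """Lower is better: monthly data cost plus penalties for missing the
    application's stated coverage and latency requirements."""
    penalty = 0.0
    if offer["coverage_pct"] < needs["min_coverage_pct"]:
        penalty += 100.0
    if offer["latency_ms"] > needs["max_latency_ms"]:
        penalty += 50.0
    return offer["price_per_gb"] * needs["gb_per_month"] + penalty

offers = [
    {"operator": "MNO_A", "price_per_gb": 0.8, "coverage_pct": 97, "latency_ms": 40},
    {"operator": "MNO_B", "price_per_gb": 0.5, "coverage_pct": 88, "latency_ms": 60},
    {"operator": "MVNO_C", "price_per_gb": 0.6, "coverage_pct": 95, "latency_ms": 55},
]
needs = {"gb_per_month": 20, "min_coverage_pct": 90, "max_latency_ms": 70}

best = min(offers, key=lambda o: score(o, needs))
print(best["operator"])  # the broker picks on price/performance, not brand: MVNO_C
```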

My view is that there are various options open to operators to mitigate the risks. But they need to be realistic and assume that a good % of their customers will, inevitably, be "promiscuous". They need to think more about competing for a larger share of a user's/device's connectivity, and less about loading up each connection with lots of QoS machinery which adds cost rather than agility. Nobody will pay full price for QoS (or a dedicated slice) that only applies 70% of the time. Some users will be happy with a mono-connection option - but those need to be identified, and specifically-relevant solutions developed for them. Hoping that software-defined arbitrage and multi-connection devices simply disappear is wishful (and harmful) thinking. Machiavellian approaches to stopping multi-connection won't work either - forget about switching off WiFi remotely, or connecting the user to a different network than the one they prefer.

This is one of the megatrends and disruptions I often discuss in workshops with telco and vendor clients. If you would like to arrange a private Telecoms Strategic Disruptions session or custom advisory project, please get in touch with me via information AT disruptive-analysis DOT com.
