

Thursday, July 20, 2017

Mobile Multi-Connection & SD-WAN is coming


I’ve written before (link) about the impact of SD-WAN on fixed (enterprise) operators, where it is having significant effects on the market for MPLS VPNs, allowing businesses to bond together, or arbitrage between, normal Internet connection(s), small-capacity MPLS links and perhaps an LTE modem in the same box. Now, similar things are being seen in the mobile world. This is the "multi-network" threat I've discussed before (link).

Sometimes provided through a normal CSP, and sometimes managed independently, SD-WAN has had a profound impact on MPLS pricing in some corporate sectors. It has partly been driven by an increasing % of branch-site data traffic going into the HQ network and straight out again to the web or a cloud service. That “tromboning” is expensive, especially if it is using premium MPLS capacity.



The key enabler has been the software used to combine multiple connections – either bonding them together, sending traffic over different connections based on type or speed, adding security and cloud-management functions, or offering arbitrage capabilities of various sorts. It has also disrupted network operators hoping to offer NFV- and SDN-based services alongside access: if only a fraction of the traffic goes through that operator’s core, while the rest breaks out straight to the Internet, or via a different carrier, it’s difficult to add valuable functionality with network software.
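To make that concrete, here is a minimal sketch (in Swift, with all link names, rules and the classifier invented for illustration – real SD-WAN products use far richer traffic classification and live path measurement) of the per-flow decision such a box makes:

```swift
// Illustrative sketch only: per-flow path selection in an SD-WAN edge
// device that has an MPLS link, a broadband line and an LTE modem.
enum Link { case mplsVPN, broadbandISP, lteModem }

struct Flow {
    let isLatencySensitive: Bool   // e.g. VoIP or virtual-desktop traffic
    let isHQBound: Bool            // destined for the corporate network?
}

func selectLink(for flow: Flow, available: Set<Link>) -> Link {
    // Keep latency-sensitive, HQ-bound traffic on the premium MPLS link.
    if flow.isLatencySensitive, flow.isHQBound, available.contains(.mplsVPN) {
        return .mplsVPN
    }
    // Break everything else out locally over the cheap broadband line,
    // avoiding the expensive "trombone" via the HQ network.
    if available.contains(.broadbandISP) {
        return .broadbandISP
    }
    // The LTE modem is the last-resort backup if the wired links fail.
    return .lteModem
}
```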

But thus far, the main impact has been on business fixed-data connections, especially MPLS which can be 30-40x the cost of a “vanilla” ISP broadband line, for comparable throughput speeds. Many network providers have now grudgingly launched SD-WAN services of their own – the “if you can’t beat them, then join them” strategy aiming to keep customer relevance, and push their own cloud-connect products. Typically they’ve partnered with SD-WAN providers like VeloCloud, while vendors such as Cisco have made acquisitions.

I’ve been wondering for a while if we’d see the principle extended to mobile devices or users – whether multiple mobile connections, or a mix of mobile / fixed, might create similar problems for either business or consumer devices. It fits well with my broader belief of “arbitrage everywhere” (link).

Up to a point, WiFi on smartphones and other devices already delivers part of this multi-connection vision, but most implementations have been either/or – cellular or WiFi, not both together. Either the user, the OS, or one of the various cellular hand-off standards has done the switching.

This is now starting to change. We are seeing early examples of mobile / WiFi / fixed combinations, where connections from multiple SPs and MNOs are being bonded, or where traffic is intelligently switched between multiple “live” connections. (This is separate from things like eSIM- or multi-IMSI enabled mobile devices, or services like Google Fi, which can connect to different networks, but only one at a time).

The early stages of mobile bonding / SD-WAN are mostly appearing in enterprise or IoT scenarios. The onboard WiFi in a growing number of passenger trains is often based on units combining multiple LTE radios. (And perhaps satellite). These can use multiple operators’ SIMs in order to maximise both coverage and throughput along the track. I’ve seen similar devices used for in-vehicle connections for law enforcement, and for some fixed-IoT implementations such as road-tolling or traffic-flow monitors.

At a trade show recently I saw the suitcase-sized unit below. It has 12 LTE radios and SIMs, plus a switch, so it can potentially combine 3 or 4 connections to each network operator. It’s used in locations like construction sites, to create a “virtual fibre” connection for the project office and workers, where normal fixed infrastructure is not available. Usually, the output is via WiFi or fixed-ethernet, but it can also potentially support site-wide LPWAN (or conceivably even a local private unlicensed/shared-band LTE network). 



It apparently costs around $6,000, although the vendor prefers to offer it as a service, bundled with the various backhaul SIMs / data plans, rather than on a BYO basis. Similar systems are made by other firms – and I can certainly imagine less-rugged or fewer-radio versions having a much lower price point.
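For a flavour of how the "bonding" side of such a unit might work, here is an illustrative Swift sketch – emphatically not the vendor's design – of weighted scheduling across several modems, in proportion to each link's measured throughput:

```swift
// Illustrative sketch: spread traffic across multiple LTE modems by
// picking each modem with probability proportional to its recently
// measured throughput. All names are invented.
struct LTELink {
    let simOperator: String   // e.g. "OperatorA" (hypothetical)
    var measuredMbps: Double  // rolling throughput estimate for this modem
}

func nextLink(_ links: [LTELink]) -> LTELink? {
    let total = links.reduce(0) { $0 + $1.measuredMbps }
    guard total > 0 else { return links.first }
    // Weighted random pick: a 40 Mbps link gets twice the traffic of a 20 Mbps one.
    var point = Double.random(in: 0..<total)
    for link in links {
        point -= link.measuredMbps
        if point < 0 { return link }
    }
    return links.last
}
```

Real products are considerably more sophisticated, not least because per-packet spraying causes TCP reordering; assigning whole flows to links is the more usual approach.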

But what really caught my eye recently is a little-discussed announcement from Apple about the new iOS 11: it supports “Multipath TCP”. (This link is a good description, and the full Apple slide-deck from WWDC is here). This should enable a device to use multiple simultaneous connections – notably cellular and WiFi – although conceivably a future device could even support two cellular radios (perhaps in an iPad with enough space and battery capacity).

That on its own could yield some interesting results, especially as iOS already allows applications to distinguish between network connections (“only download video in high quality over WiFi”, etc.). It also turns out that Apple has been privately using Multipath TCP for four years, for Siri – with, it claims, a 5x drop in network connection failure rates.

The iOS11 APIs offer various options for developers to combine WiFi and cellular (see slide 37 onward here). But I’m also wondering what future generations of developer controls over such multipath connectivity might enable. It could allow novel approaches to security, performance optimisation on a per-application or per-flow basis, offload and on-load, and perhaps integration with other similar devices, or home WiFi multi-AP solutions that are becoming popular. Where multiple devices cooperate, many other possibilities start to emerge.
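For developers, the visible surface of this is pleasingly small. A minimal example of the new iOS 11 option on URLSessionConfiguration (as I understand the WWDC material, shipping apps also need Apple's multipath entitlement, and the aggregation mode is limited to development builds; the URL below is hypothetical):

```swift
import Foundation

// iOS 11's Multipath TCP support, surfaced as one property on
// URLSessionConfiguration.
let config = URLSessionConfiguration.ephemeral
config.multipathServiceType = .handover   // WiFi preferred, seamless failover to cellular
// Alternatives: .interactive keeps a low-latency cellular subflow available
// alongside WiFi; .aggregate uses both paths at once for extra throughput.

let session = URLSession(configuration: config)
// Hypothetical URL, for illustration only.
session.dataTask(with: URL(string: "https://example.com/data")!) { data, _, error in
    // The OS schedules MPTCP subflows across interfaces; app code is unchanged.
    print(data?.count ?? 0, error?.localizedDescription ?? "ok")
}.resume()
```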



What we may well see in future is multi-device, multi-access, P2P meshes. Imagine a family at home, with each member having a subscription and data-plan with a different mobile network. Either via some sort of gateway, or perhaps using WiFi or Bluetooth directly between devices, they can effectively share each other’s connections (and the fixed broadband), while simultaneously using their own “native” cellular data. Potentially, they can share phone numbers / identities this way as well. An advanced connection-management tool could optimise for throughput, latency or simply coverage anywhere in the house or garden.
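A purely speculative Swift sketch of what such a connection-manager might do – all names and fields invented – scoring every reachable path against the objective of the moment:

```swift
// Speculative sketch: score each reachable path (own cellular, the home
// broadband, links relayed via other family members' devices) against
// whichever objective matters right now.
struct SharedPath {
    let via: String          // e.g. "own 4G", "home router", "Dad's phone"
    let mbps: Double         // estimated throughput
    let latencyMs: Double    // estimated round-trip latency
    let signalBars: Int      // crude proxy for coverage where the user is standing
}

enum Objective { case throughput, latency, coverage }

func bestPath(_ paths: [SharedPath], optimisingFor objective: Objective) -> SharedPath? {
    switch objective {
    case .throughput: return paths.max { $0.mbps < $1.mbps }
    case .latency:    return paths.min { $0.latencyMs < $1.latencyMs }
    case .coverage:   return paths.max { $0.signalBars < $1.signalBars }
    }
}
```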



This could have a number of profound implications. It would lead to much greater substitution between different networks and plans. It would indirectly improve network coverage, especially indoors. It could either increase or decrease demand for small cells (are they still needed, if phones can act as multi-network relays? Or perhaps operators try to keep people “on net” and give them away for free?). From a regulatory or law-enforcement standpoint it means serious challenges around identifying individual users. It could mean that non-neutral network policies could be “gamed”, as could pricing plans.

Now I’ll fully admit that I’m extrapolating quite a bit from a seemingly simple enhancement of iOS. (I’m also not sure how this would work with Android devices). But to me, this looks analogous to another Apple move last year – adding CallKit to iOS, which allowed other voice applications to become “first-class citizens” on iPhones, with multiple diallers and telephony experiences sharing call-logs and home-screen answerability.

Potentially, multipath in iOS allows other networks to become (effectively) first-class citizens alongside the “native” MNO connection controlled by the SIM.

I’m expecting other examples of mobile connection-bonding and arbitrage to emerge in the coming months and years. The lessons from SD-WAN in the fixed domain should be re-examined by carriers through a wireless lens: expect more arbitrage in future.

Thursday, July 13, 2017

Both sides are wrong in the Net Neutrality debate

I've been watching the ongoing shouting-match about Net Neutrality (in the US & Europe) with increasing exasperation. Recently there was a "day of action" by pro-neutrality activists, which raised the temperature yet further.

The problem? Pretty much everyone, on both sides (and on both sides of the Atlantic), is dead wrong a good % of the time. They're not necessarily wrong on the same things, but overall the signal-to-noise ratio on NN is very poor.

There are countless logical fallacies perpetrated by lobbyists and commentators of all stripes: strawman arguments, false dichotomies, tu-quoque, appeals to authority and all the rest. (This is a great list of fallacies, by the way. Everyone should read it). 

Everyone's analogies are useless too - networks aren't pipes, or dumb. Packets don't behave like fluids. Or cars on a road. There are no "senders". It's not like physical distribution or logistics. Even the word "neutrality" is dubious as a metaphor. The worst of all is "level playing field". Anyone using it is being duplicitous, ignorant, or probably both. (See this link).

I receive lots of exhortations from both sides - I get well-considered, but too-narrow network-science commentary & Twitter debates from friend & colleague Martin Geddes. I read detailed and learned regulatory snark and insider stories from John Strand. I see telco/vendor CEOs speaking (OK, grandstanding) at policy conferences. I get reports of egregious telco- and state-based blocking of certain Internet services from Access Now, EFF and elsewhere. I see VCs and investors lining up on both sides, depending on whether they have web interests, or network vendor/processing positions. I watch comments from the FCC, Ofcom, EU Commission, BEREC, TRAI and others - as well as politicians. And I read an absolute ton of skewed & partial soundbites from lobbyists on Twitter or assorted articles/papers.

And I see the same, tired - often fallacious or irrelevant - arguments trotted out again and again. Let me go through some of the common ones:
  • Some network purists insist routers & IP itself are (at core) non-neutral, because there are always vagaries & choices in how the internals, such as buffers, are configured. They try to use this to invalidate the whole NN concept, or claim that the Internet is broken/obsolete and needs to be replaced. Other Internet purists insist that the original "end-to-end" principle was to get as close as possible to "equal treatment" for packets, and either don't recognise the maths - or suggest that the qualitative description should be treated as a goal, even if the precise mechanisms involve some fudges. Everyone is wrong.
  • In the US, the current mechanism for NN was to incorporate it under the FCC's Title II rules. That was a clunky workaround, after an earlier NN ruling was challenged by Verizon in 2011. In many ways, the original version was a much cleaner way to do it, as it risked less regulatory creep. Everyone is wrong.
  • Many people talk about prioritisation of certain traffic (eg movies) and how that could either (a) allow innovative business models, or (b) disenfranchise startups unable to match web giants' payments. Yet the technology doesn't work properly (and won't), it's almost impossible to price/market/sell/manage in practice, and there is no demand. Conspicuously, there have been no lobbyists demanding the right to pay for priority. There is no market for it, and it won't work. It's irrelevant. Everyone is wrong.
  • Some people assert that NN will reduce "investment" in networks, as it will preclude innovation. Others assert that NN increases overall investment (on networks plus servers/apps/devices). When I tried to quantify the possible revenues from 25 suggested non-neutral business models (link), I concluded the incremental revenue would barely cover the extra costs of implementation, if that. There are many reasons for investments in networks (eg 4G then 5G deployment cycles), while we also see CapEx being replaced by OpEx or software licences for managed or virtual networks. Drawing meaningful correlations is hard enough, let alone causation from an individual issue out of dozens. Everyone is wrong.
  • Most of the debate seems to centre on content - notably video streaming. This ties in with operators wanting to bundle TV and related programming, or Netflix and YouTube being seen as dominating Internet traffic and therefore as pivot-points for neutrality. Yet in most markets, IPTV is not delivered via the public Internet anyway, and is considered OK to prioritise as it's a basic service. On the opposite side, upgrades to high-speed consumer broadband are partly driven by the desire for streaming video - revenues would fall if it were blocked, while efforts to charge extra fees to Netflix and co would likely backfire: they'd insist on fees flowing the other way in return for being carried, as TV channels do. Meanwhile, most of the value in the Internet doesn't come from content, but from applications, communications, cloud services and data transmission. However, these are all much techier, so they get mostly overlooked by lobbyists and politicians entranced by Hollywood, Netflix or the TV channels. Everyone is wrong.
  • Lots of irrelevant comments on all sides about CDNs or paid-peering being examples of prioritisation (or of craven content companies paying for special favours). Fascinating area, but irrelevant to discussion about access-network ISPs. Everyone is wrong.
  • Lots of discussion about zero-rating or "sponsored data" paid for by 3rd-parties, and whether they are right/wrong/distortions. Lots of debate about whether they have to be offered to all music / video streaming services, and whether they should just be promotional or can be permanent. And so on. Neither relates to the treatment of data transmission by the network - and differential pricing is, like CDNs, interesting but irrelevant to NN. And sponsored-data models don't work technically or commercially, with a handful of minor exceptions. Ignore silly analogies to 1-800 phone numbers - they are totally flawed comparisons (see my 2014 rant here). Upshot: zero-rating isn't an NN issue, and sponsored data (with prioritisation or not) doesn't work (for at least 10 reasons). Everyone is wrong.
  • Almost everyone in the US and European regulatory scene now agrees that outright blocking of certain services (eg VoIP), or trying to force specific application/web providers to pay an "access" toll fee, is both undesirable and unworkable. It would just drive use of VPNs (which ISPs would block at their peril) - or, amusingly, could mean that Telco1.com could legally block the website of Telco2.com, which would make future marketing campaigns a lot of fun. In other words, it's not going to happen, except maybe for special cases such as children's use, or on planes. It's undesirable, regulatorily unacceptable, easy to spot and impossible anyway. Forget about it. Everyone is wrong.
  • Lots of discussion about paid-for premium QoS on broadband, and whether or not it should apply to IoT, 5G, NFV/SDN, network-slicing, general developer-facing APIs and therefore allow different classes of service to be created, and winners/losers to be based on economic firepower. Leaving aside enterprise-grade MPLS and VPN services (where this is both permissible and possible), there's a lot of nonsense talked here. For consumer fixed broadband, many of the quality issues relate to in-home wiring and WiFi interference, for which ISP-provided QoS is irrelevant. For mobile, the radio environment is inherently unpredictable (concrete walls, sudden crowds of people, interference etc). Better packet scheduling can tilt the odds a bit, but forget about hard SLAs or even predictability. Coverage is far more a limiting factor. Dealing with 800 ISPs around the world with different systems/pricing is impossible. The whole area is a non-starter: bigger web companies know how much of a minefield this is, and smaller ones don't care. Everyone is wrong.
In summary - nearly anyone weighing in on Net Neutrality, on either side, is talking nonsense a good % of the time. (And yes, probably me too - I'm sure people will pick holes in a couple of things here).


So what's the answer?
  • First, tone down the rhetoric on both sides. The whole thing is a cacophony of nonsense, mostly from lobbyists representing two opposing cheeks of the same arse. Acknowledge the hyperbole. Get some reputable fact-checkers involved, perhaps funded by government and/or crowdsourcing.
  • Second, recognise that many of the threatened non-neutral models are either impossible or obviously unprofitable. Arguing about them is sophistry and a waste of everyone's time. There are more important things at stake.
  • Third, design and create proper field-trials to prove or disprove assertions about innovation, cost structures etc. Select a state, a city, a class of users, or specially-licensed ISPs to run prototypes and actually get some proper data. Don't try to change anything on a national or international basis overnight, no matter how many theoretical "studies" have been done. Create a space for operators and developers to try out creating "specialised services", see if they work, and see what happens to everything else. Then develop policy based on evidence - and yes, you'll have to wait a few years. You should have done it sooner instead of arguing. I suspect it'll prove my second point above, anyway.
  • Fourth, consider "inevitabilities" (see this link for discussion). VPNs will get more common. NFV and edge-computing will get more common. Multiple connections will get more common. New networks (eg private cellular, LPWAN) will get more common. Multi-hop connections with WiFi and ZigBee & meshes will get more common. Devices & applications will fragment, cloudify, become "serverless", being componentised with micro-services, and be harder to decode and classify in the network. AI will get more common, to "game" the network policies, as well as help manage the infrastructure. All this changes the landscape for NN over the next couple of years, so we'll end up debating it all again. Think about these things (and others) now.
  • Fifth, try some rules on branding Internet / other access. Maybe allow specialised services, but force them to be sold separately from Internet access, and called something else (Ain'ternet? I Can't Believe it's Not Internet?)
  • Sixth, get ISP executives (and maybe web/content companies' execs too) to make a public promise about acting in consumers' interests on Internet matters, as I suggested a few years ago - an IPocratic Oath. (link)
  • Seventh, train and empower the judiciary to understand, collect data and adjudicate quickly on Internet-related issues. It may be that competition law could be applied, or injunctions granted, even in the absence of hard NN laws. Let's get 24x7 Internet courts able to take an initial view on the permissibility of traffic management overnight - not wait two years of appeals, during which time an app-developer slowly dies.
  • Eighth, let's get more accountability on traffic-management and network configurations, so that neutrality/competition law can be applied at a later date anyway. We already have data-retention rules for customer calls and network access. Let's have all internal network configuration and operational data in ISPs' networks securely captured, encrypted, held in escrow and available to prosecutors if needed, under warrant. A blockchain use-case, perhaps? We're going to need that data anyway, to guarantee that customer data hasn't been tampered with by the network.
  • Ninth, ask software (and content, IoT-device and cloud) developers what they actually want from the networks. Most seem to be absent from the debate - the forgotten stakeholders. Understand how important "permissionless innovation" actually is. Query whether they care about network QoS, or understand how it links to overall QoS, which covers everything from servers to displays to device chipsets to user-interfaces. Find out how they deal with network glitches and dodgy coverage - and whether "fallback" strategies mean the primary network is getting more or less important. Do they want better networks, and are they prepared to pay for them - or would they just rather have better visibility and predictability of when problems are likely to occur?
Apologies for the length of this piece. I'll happily pay someone 0.0000001c for it to load faster, as long as the transaction cost is less than 5% of that.

Get in touch with me at information AT disruptive-analysis dot com if you'd like to discuss it more, or have a sane discussion about Neutrality and what it really means for broadband, policy, 5G, network slicing, IoT and all the rest.

Saturday, February 25, 2017

A Core Problem for Telcos: One Network, or Many?

In my view the central question - maybe an existential dilemma - facing the telecoms industry is this:

Is it better to have one integrated, centrally-managed and feature-rich network, or several less feature-rich ones, operated independently?

Most of the telecoms "establishment" - operators, large vendors, billing/OSS suppliers, industry bodies - tends to prefer the first option. So we get notions of networks with differentiated QoS levels, embedding applications in-network with NFV and mobile edge computing (MEC) and perhaps "slicing" future 5G networks, with external customer groups or applications becoming virtual operators. There is an assumption that all the various standards are tightly coupled - radio, core network, so-called "telco cloud", IMS and so on. Everything is provided as a "network function" or "network service" in integrated fashion, and monetised by a single CSP.

It's not just the old guard either. New "non-establishment" approaches to managing quality also appear, such as my colleague Martin Geddes' views on clever and deterministic contention-management mechanisms (link). That takes a fresh look at statistical multiplexing.

Yet users, device vendors and cloud/Internet application providers often prefer a different approach. Using multiple network connections, either concurrently or being able to switch between them easily, is seen to help reduce costs, improve coverage and spread risks better. I've written before about using independent connections to create "Quasi-QoS" (link), especially in fixed networks with SD-WAN. In mobile, hundreds of millions of users have multi-SIM handsets, while (especially in IoT) we see multi-IMSI SIM cards that can be combined with roaming deals to give access to all mobile networks in a given country, or optimise for costs/performance in other ways. Google's Fi service famously combines multiple MVNO deals, as well as WiFi. Others are looking to blend LPWAN with cellular, or satellite and so on. The incremental cost of adding another connection (especially wireless) is getting ever lower. At the other end of the spectrum, data centres will often want redundant fibre connections from different providers, to offset the risk of a digger cutting a duct, as well as the ability to arbitrage on pricing or performance.

I have spoken to "connected car" specialists who want their vehicles to have access not just to (multiple) cellular networks, but also satellite, WiFi in some locations - and also work OK in offline mode as well. Many software developers create apps which are "network aware", with connectivity preferences and fallbacks. We can expect future AI-based systems to be much smarter as well - perhaps your car will know that your regular route to work has 10 miles of poor 4G coverage, so it learns to pre-cache data, or uses a temporary secondary cellular link from a different provider.
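As a minimal, hypothetical sketch of that kind of "network awareness" - an ordered preference list with graceful degradation to offline mode, plus pre-caching ahead of a known coverage gap (the bearer names and helpers are invented):

```swift
// Hypothetical sketch of "network aware" app behaviour: an ordered
// preference list with graceful degradation to offline mode.
enum Bearer { case wifi, cellularPrimary, cellularSecondary, satellite, offline }

func chooseBearer(available: [Bearer]) -> Bearer {
    // Cheap and fast bearers first; expensive fallbacks later; offline last.
    let preference: [Bearer] = [.wifi, .cellularPrimary, .cellularSecondary, .satellite]
    return preference.first(where: available.contains) ?? .offline
}

// A car that has learned about a coverage gap on the regular route can
// pre-cache content before entering it, instead of failing mid-journey.
func prepareForRoute(coverageGapAhead: Bool, prefetch: () -> Void) {
    if coverageGapAhead { prefetch() }
}
```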

There are some middle grounds as well. Technologies such as MIMO in wireless networks give "managed multiplicity", using bouncing radio signals and multiple antennas. Plenty of operators offer 4G backups for fixed connections, or integrate WiFi into their same core infrastructure. The question then is whether the convergence occurs in the network, or perhaps just in the billing system. Is there a single point of control (or failure)?

The problem for the industry is this: multi-network users want all the other features of the network (security, identity, applications etc) to work irrespective of their connection. Smartphone users want to be able to use WiFi wherever they are, and get access to the same cloud services - not just the ones delivered by their "official" network operator. They also want to be able to switch provider and keep access - the exact opposite of the type of "lock-in" that many in the telecoms industry would prefer. Google Fi does this, as it can act as an intermediary platform. That's also true for various international MVNO/MNO operators like Truphone.

A similar problem occurs at an application level: can operators push customers to be loyal to a single network-resident service such as telephony, SMS or (cough) RCS? Or are alternative forces pushing customers to choose multiple different services, either functionally-identical or more distant substitutes? It's pretty clear that the low marginal cost of adding another VoIP, IM or social-network service outweighs the benefits of having one "service to rule them all", no matter how smart it is. In this case, it's not just redundancy and arbitrage, but the ability to choose fine-grained features and user-experience elements.

In the past, the trump card for the mono-network approach has been QoS and guarantees. But ironically, the shift to mobile usage has reduced the potential here - operators cannot really guarantee QoS on wireless networks, as they are not in control of local interference, mobility or propagation risks. You couldn't imagine an SLA that guaranteed network connection quality or application performance - except with caveats like "as long as it isn't raining", or "unless there's a crowd of people outside your house".




In other words, the overall balance is shifting towards multiplicity of networks. This tends to pain many engineers, as it means networks will (often) be less-deterministic as they are (effectively) inverse-multiplexed. Rather than one network being shared between many users/applications, we will see one user/device sharing many networks. 

While there will still be many use-cases for well-managed networks - even if users ultimately combine several of them - this means that future developments around NFV and network-slicing need to be realistic, rather than utopian. Your "slice" or QoS-managed network may only be used a % of the time, rather than exclusively. It's also likely that your "customer" will be an AI or smart application, rather than an end-user susceptible to loyalty incentives. That has significant implications for pricing and the value-chain - for example, meaning that aggregators and brokers will become much more important in future.

My view is that there are various options open to operators to mitigate the risks. But they need to be realistic, and assume that a good % of their customers will, inevitably, be "promiscuous". They need to think more about competing for a larger share of a user's/device's connectivity, and less about loading up each connection with lots of QoS machinery which adds cost rather than agility. Nobody will pay for QoS (or a dedicated slice) that is only used 70% of the time. Some users will be happy with a mono-connection option - but those need to be identified, and specifically-relevant solutions developed accordingly. Hoping that software-defined arbitrage and multi-connection devices simply disappear is wishful (and harmful) thinking. Machiavellian approaches to stopping multi-connection won't work either - forget about switching off WiFi remotely, or connecting to a different network than the one the user prefers.

This is one of the megatrends and disruptions I often discuss in workshops with telco and vendor clients. If you would like to arrange a private Telecoms Strategic Disruptions session or custom advisory project, please get in touch with me via information AT disruptive-analysis DOT com.