Tuesday, December 19, 2017

Emerging risks to telcos from "Cuckoo Platforms"

Summary
  • Telcos want to be platform players at varying points in their network architecture and service offerings. 
  • But successful platforms generally need "anchor tenants" to gain scale.
  • The problem comes when anchor tenants are themselves other 3rd-party platforms.
  • There is a risk of platforms-on-platforms acting as "cuckoos", pushing the native owner's eggs out of the nest.
  • Telcos face a risk from major cloud platforms overwhelming their MEC edge-compute platforms.
  • ... and a risk from major AI-based commerce platforms overwhelming their messaging, voice and IoT platforms.
  • Other future platforms also face similar challenges.
  • To succeed as platform providers, telecom operators need to have their own anchor-type services, and to have a well-designed approach to combating the risk of parasitic cuckoo platforms.

Background: the Internet overcame its broadband host

The cuckoo bird is infamous for laying its eggs in other birds' nests. The young cuckoos grow much faster than the rightful occupants, forcing the other chicks out - if they haven't already physically knocked the other eggs overboard. (See "brood parasitism", here).


Analogies exist quite widely in technology - a faster-growing "tenant" sometimes pushes out the offspring of the host. Arguably Microsoft's original Windows OS was an early "cuckoo platform" on top of IBM's PC, removing much of IBM's opportunity for selling additional software. 

In many ways, Internet access itself has outgrown its own host: telco-provided connectivity. Originally, fixed broadband (and the first iterations of 3G mobile broadband) were supposed to support a wide variety of telco-supplied services. Various "service delivery platforms" were conceived, including IMS, yet apart from ordinary operator telephony/VoIP and some IPTV, very little emerged in the way of saleable services.

Instead, Internet access - which started using dial-up modems and normal phone lines before ADSL and cable and 3G/4G were deployed - has been the interloping bird which has thrived in the broadband nest instead of telcos' own services. It's interesting to go back and look at the 2000-era projections for walled-garden, non-Internet services.


The need for an anchor tenant

The problem is that everyone wants to be a platform player. And when you're building and scaling a new potential platform, it's really hard to turn down a large and influential "anchor tenant", even if you worry it might ultimately turn out to be a Trojan Horse (apologies for the mixed metaphor). You need the scale, the validation, and the draw for other developers and partners.

This is why the most successful platforms are always the ones which have one of their own products as the key user. It reduces the cannibalisation risk. Office is the anchor tenant on Windows. iTunes, iMessage and the camera app are anchors on iOS. Amazon.com is the anchor tenant for AWS.

Unfortunately, the telecoms industry looks like it will have to learn a(nother) tough lesson or two about "cuckoo platforms".


MEC is a tempting nest

The more I look at Multi-Access Edge Computing (MEC), the more I see the risks of a questionable platform strategy. Some people I met at the Small Cells event in the US a couple of weeks ago genuinely believe it can allow telcos to become some sort of distributed competitor to Amazon AWS. They see MEC as a general-purpose edge cloud for mainstream app and IoT developers, especially those needing low-latency applications.

I think this is delusional - firstly because no developer will want to deal with 800 worldwide operators with individual edge-cloud services and pricing, secondly because the issue of latency is overstated and oversimplified (see my recent post, link), and thirdly because a lot of edge-computing tasks will actually be designed to reduce the use of the network and reliance/spend on network operators.

But also, this "MEC as quasi-Amazon" strategy will fail mostly because the edge/distributed version of Amazon will be Amazon. The recent announcement by Nokia that it will be implementing AWS Greengrass in its MEC servers is a perfect example (link). I suspect that other MEC operators and vendors will end up acting as "nests" for Azure, IBM Bluemix and various other public cloud providers.
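
To make the "nest" point concrete, here is a minimal sketch (my own illustration, not anything from Nokia's implementation) of the kind of AWS Greengrass Lambda function a cloud provider's IoT customer might run on a telco-hosted MEC server - filtering sensor readings locally and only publishing exceptions back to the AWS cloud. The topic name and threshold are hypothetical.

```python
# Illustrative sketch only: a Greengrass-style Lambda that could run on an
# edge server (e.g. a telco MEC node hosting AWS Greengrass Core).
# Topic name and threshold are hypothetical, for illustration.
import json
import greengrasssdk  # AWS Greengrass Core SDK (runs on the edge device)

# Client for publishing messages back to AWS IoT in the cloud
iot_client = greengrasssdk.client("iot-data")

TEMP_ALERT_THRESHOLD_C = 80.0  # hypothetical alert threshold


def function_handler(event, context):
    """Invoked for each sensor message routed to this edge function."""
    reading = float(event.get("temperature_c", 0.0))

    # Most readings are handled entirely at the edge: no backhaul traffic,
    # and no dependency on the operator beyond basic hosting.
    if reading < TEMP_ALERT_THRESHOLD_C:
        return {"handled_at": "edge", "alert": False}

    # Only exceptions are published up to the cloud platform - the telco
    # carries little of the value, even though it hosts the compute.
    iot_client.publish(
        topic="factory/alerts/temperature",  # hypothetical topic
        payload=json.dumps({"temperature_c": reading, "alert": True}),
    )
    return {"handled_at": "edge", "alert": True}
```

The point of the sketch is that the developer relationship, the SDK and the billing all belong to the cloud provider; the operator's edge server is just somewhere convenient for the code to run.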

Apologies for the awful pun, but these "cloud-cuckoos" will use the ready-made servers at the telco edge to house their young distributed-computing services, especially for IoT - if the wholesale price is right. They will also build their own sites in other "deeper" network locations (link). 

In other words, telcos' MEC deployments are going to help the cloud providers become even larger. They may get a certain revenue stream from their tenancy, but this will likely be at the cost of further entrenching the major players overall. The prices paid by an Amazon-scale provider for MEC hosting are likely to be far lower than the prices that individual "retail" developers might pay.

(The real opportunity for MEC, in my view, lies in hosting the internal network-centric applications of the operators themselves, probably linked to NFV. Think distributed EPCs, security gateways, CDN nodes and so on. Basically, stuff that lives in the network already, but is more flexible/responsive if located at the edge rather than a big data centre).


End-running Messaging-as-a-Platform (MaaP)

Another example of platform-on-platform cannibalisation is around the concept of "messaging as a platform", MaaP. Notwithstanding WeChat's amazing success in China, my sense is that it's being vastly over-hyped as a potential channel for marketing and customer interaction. 

I just don't see the majority of people in other markets forgoing the web or optimised native apps, and using WhatsApp or iMessage or SnapChat or SMS as the centrepiece of their future purchases or "engagement" (ugh) with companies and A2P functions. But where they do decide to use messaging apps for B2C reasons, the chatbots they interact with will not be MaaP-dedicated or MaaP-exclusive.

These chatbots will themselves be general "conversational platforms" that work across multiple channels, not just messaging, with voice as well as text, and with a huge AI-based back-end infrastructure and ongoing research/deployment effort. They'll work in messaging apps, browsers, smart speakers, wearables, cars, and via general APIs for embedding in apps and all sorts of other contexts.

Top of the list of conversational platforms are likely to be Google Assistant, Amazon Alexa, Apple Siri, Microsoft Cortana and Facebook M, plus probably other emergent ones from the Internet realm.
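
To illustrate why these assistants treat any single messaging channel as interchangeable, here's a deliberately generic, hypothetical webhook sketch: the same intent-handling logic serves requests whether they arrive from a messaging app, a smart speaker or an embedded API. The endpoint, field names and intents are all invented for illustration; real platforms (DialogFlow, Alexa Skills and so on) each wrap this pattern in their own request/response schemas, but the structure is the same.

```python
# Hypothetical, channel-agnostic bot fulfilment webhook (illustrative only).
# Field names, intents and responses are invented; real conversational
# platforms each impose their own request/response formats.
from flask import Flask, request, jsonify

app = Flask(__name__)


def handle_intent(intent: str, params: dict) -> str:
    """Core 'brain' - identical regardless of which channel the user is on."""
    if intent == "check_flight_status":
        flight = params.get("flight_number", "unknown")
        return f"Flight {flight} is currently on time."  # stubbed answer
    if intent == "track_order":
        order = params.get("order_id", "unknown")
        return f"Order {order} is out for delivery."  # stubbed answer
    return "Sorry, I didn't understand that."


@app.route("/webhook", methods=["POST"])
def webhook():
    body = request.get_json(force=True)
    # The channel is just metadata: RCS, WhatsApp, a smart speaker, an in-app SDK...
    channel = body.get("channel", "unknown")
    reply = handle_intent(body.get("intent", ""), body.get("parameters", {}))
    # Voice channels use the speech text, messaging channels the display text;
    # either way the content comes from the same back-end.
    return jsonify({"channel": channel, "text": reply, "speech": reply})


if __name__ == "__main__":
    app.run(port=8080)
```

In other words, the valuable part - intent recognition, the AI back-end, the developer relationship - sits behind the webhook, not in whichever messaging channel happens to deliver the message.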


MaaP is "just another channel" for broad conversational/commerce platforms

In other words, some messaging apps might theoretically become "platforms", but the anchor tenants will be "wholesale" conversational platforms, not individual brands or developers. In some cases they will again be in-house assistants (iMessage + Siri, or Google Allo + Assistant for instance). In other cases, they may be 3rd-party bot ecosystems - we already see Amazon Alexa integrated into numerous other devices.

Now consider what telcos are doing around MaaP. As well as extending their existing SMS business towards A2P (application-to-person), they have also allowed third-parties like Twilio to absorb much of the added value as cPaaS providers. And when it comes to RCS*, which has an explicit MaaP strategy, they have welcomed Google as a key enabler on Android, despite its obvious desire to use it mainly as a free iMessage rival. (*Obviously, I'm not a believer in RCS succeeding, for many other reasons as well, but let's leave that aside for this argument).

What the GSMA seems to have also missed is that Google isn't really interested in RCS MaaP per se - it simply wants as many channels as possible for its Assistant and its DialogFlow developer toolkit. To be fair, Google announced Assistant, and acquired API.AI (the original source of DialogFlow), after it acquired Jibe. It has moved from mobile-first to AI-first since September 2015.

The Google conversational interface is not going to be exclusive to RCS, or especially optimised for it. (I asked the DialogFlow keynote speaker about this at last week's AI World conference in Boston, and it was pretty clear that it wasn't exactly top-of-mind. Or even bottom-of-mind). Google's conversational platform will be native in Android, in other messaging apps like Allo, in Chrome, in Google Home, and presumably in 1,000 other outlets.

From an RCS MaaP perspective, it's a huge cuckoo that will be more important than the Jibe platform. There is no telco "anchor tenant" for RCS-MaaP as far as I can tell - I haven't even seen large-scale deployments of MNOs' own customer-care apps using it. If I were an airline's or a retailer's customer experience manager, and were looking beyond my own Android & iOS apps for message-based interactions, I wouldn't be looking at creating an RCS chatbot. I'd be creating an Assistant chatbot, plus one for Alexa and maybe Siri.


Can you cuckoo-proof a platform?

Apple, incidentally, has a different strategy. It tends to view its own services as integrated parts of a holistic experience. It tries to make its various platforms cuckoo-proof, especially where it doesn't have an anchor tenant app. This is a major reason for the App Store policies being so restrictive - it doesn't want apps to be mini-platforms in their own right, especially around transactions. Currently, Google and Amazon are fighting their own mutual anti-cuckoo war over YouTube on Fire TV, and sales of Google Home on Amazon.com (link). Amazon and Apple are also mutually wary.

It's worth noting that telcos are sometimes pretty good at cuckoo-deterrence too. In theory, wholesale mobile networks could have been a platform for all manner of disruptive interlopers, but in reality, MVNO deals have been carefully chosen to avoid commoditisation. A similar reticence exists around eSIM and remote SIM provisioning - probably wisely, given the various platform-on-platform concepts for network arbitrage that have been suggested.


Conclusions

In my view, both MEC and (irrespective of its many other failings) RCS are susceptible to cuckoo platforms. I also wonder whether various telco-run IoT initiatives, and potentially network-slicing, will become platforms for other platforms in future too.

One of the key factors here is the "rush to platformisation". Platforms only succeed when they evolve out of already-successful products, which can become in-house anchor tenants. Amazon's marketplace platform grew on the back of its own book and other retail sales. AWS's success grew on the back of Amazon using its own APIs and cloud computing.

MEC needs to succeed on the basis of telcos' own use of their edge-computing resources - which don't currently exist in a meaningful way, partly because NFV has been slower than expected. MaaP needs telcos' own messaging services and use-cases to be successful before it should look at external developers. With RCS, that's not going to happen.

Network-slicing needs to have telcos' own slices in place, before pitching to car manufacturers (or Internet players, again). IoT is the same too. Otherwise, expect even more telco eggs to be pushed out of the nest, as they help to foster other birds' offspring.

Monday, December 04, 2017

5G & IoT? We need to talk about latency



Much of the discussion around the rationale for 5G – and especially the so-called “ultra-reliable” high QoS versions – centres on minimising network latency. Edge-computing architectures like MEC also focus on this. The worthy goal of 1 millisecond roundtrip time is often mentioned, usually in the context of applications like autonomous vehicles with snappy responses, AR/VR headsets without nausea, the “tactile Internet” and remote drone/robot control.

Usually, that is accompanied by some mention of 20 or 50 billion connected devices by [date X], and perhaps trillions of dollars of IoT-enabled value.

In many ways, this is irrelevant at best, and duplicitous and misleading at worst.

IoT devices and applications will likely span 10 or more orders of magnitude for latency, not just the two covered by 1-10ms and 10-100ms. Often, the main value of IoT comes from changes over long periods, not realtime control or telemetry.

Think about timescales a bit more deeply:

  • Sensors on elevator doors may send sporadic data, to predict slowly-worsening mechanical problems – so an engineer might be sent a month before the normal maintenance visit.
  • A car might download new engine-management software once a week, and upload traffic observations and engine-performance data once a day (maybe waiting to do it over WiFi, in the owner’s garage, as it's not time-critical).
  • A large oil storage tank, or a water well, might have a depth-gauge giving readings once an hour.
  • A temperature sensor and thermostat in an elderly person’s home, to manage health and welfare, might track readings and respond with control messages every 10 minutes. Room temperatures change only slowly.
  • A shared bicycle might report its position every minute – and unlock in under 10 seconds when the user buys access with their smartphone app.
  • A payment or security-access tag should check identity and open a door, or confirm a transaction, in a second or two.
  • A networked video-surveillance system may need to send a facial image, and get a response in a tenth of a second, before the subject moves out of camera-shot.
  • A doctor’s endoscope or microsurgery tool might need to respond to controls (and send haptic feedback) 100 times a second – i.e. every 10ms.
  • A rapidly-moving drone may need to react in a millisecond to a control signal, or a locally-recognised risk.
  • A sensitive industrial process-control system may need to be able to respond in 10s or 100s of microseconds to avoid damage to finely-calibrated machinery.
  • Image sensors and various network sync mechanisms may require response times measured in nanoseconds.
I have not seen any analysis that tries to divide the billions of devices, or trillions of dollars, into these very-different cohorts of time-sensitivity. Given the assumptions underpinning a lot of 5G business cases, I’d suggest that this type of work is crucial. Some of these use-cases are slow enough that sending data by 2G is fine (or by mail, in some cases!). Others are so fast they’ll need fibre – or compute capability located locally on-device, or even on-chip, rather than in the cloud, even if it’s an “edge” node.
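
As a back-of-envelope illustration of how wide that span is, the sketch below simply buckets the example use-cases above by the order of magnitude of their latency tolerance. The figures are my own indicative guesses taken from the list, not measured data - but they show how few cases land in the 1-100ms band that 5G and MEC marketing tends to emphasise.

```python
# Toy illustration only: indicative latency tolerances (in seconds) for the
# example use-cases listed above. Figures are illustrative, not measured data.
import math

LATENCY_TOLERANCE_S = {
    "elevator-door wear sensor":     30 * 24 * 3600,  # ~a month
    "car software/telemetry upload": 24 * 3600,       # ~a day
    "oil-tank depth gauge":          3600,            # an hour
    "home thermostat":               600,             # 10 minutes
    "shared-bicycle position":       60,              # a minute
    "bicycle unlock / payment tag":  2,               # a second or two
    "video-surveillance face match": 0.1,             # a tenth of a second
    "endoscope haptic control":      0.01,            # 10 ms
    "drone control loop":            0.001,           # 1 ms
    "industrial process control":    0.0001,          # 100 microseconds
    "image-sensor / network sync":   1e-9,            # nanoseconds
}


def order_of_magnitude(seconds: float) -> int:
    """Return the power of ten closest below the tolerance, e.g. 0.01s -> -2."""
    return int(math.floor(math.log10(seconds)))


# Group the use-cases by order of magnitude of latency tolerance.
buckets = {}
for name, tolerance in LATENCY_TOLERANCE_S.items():
    buckets.setdefault(order_of_magnitude(tolerance), []).append(name)

for exponent in sorted(buckets, reverse=True):
    print(f"10^{exponent} s: {', '.join(buckets[exponent])}")

# Only the handful of cases in the 10^-3 to 10^-1 range are plausibly
# sensitive to the difference between 1ms, 10ms and 100ms of network latency.
```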

I suspect (this is a wild guess, I'll admit) that the proportion of IoT devices, for which there’s a real difference between 1ms and 10ms and 100ms, will be less than 10%, and possibly less than 1% of the total. 

(Separately, the network-access performance might be swamped by extra latency added by security functions, or by edge-computing nodes being bypassed by VPN tunnels.)

The proportion of accrued value may be similarly low. A lot of the IoT examples I hear about are either long time-series collections of sensor data (for asset performance-management and predictive maintenance), or have fairly loose timing constraints. A farm’s moisture sensors and irrigation pumps don’t need millisecond response times. Conversely, a chemical plant may need to measure and alter pressures or flows in microseconds.

Are we focusing 5G too much on the occasional Goldilocks of not-too-fast and not-too-slow?