

Sunday, November 07, 2021

No, the Metaverse is not the killer app for 5G

(This article was initially published on my LinkedIn Newsletter - click here to see the original, plus comment thread. And please subscribe!)

Let's stop the next cliche before it even starts.

Most knowledgeable people now roll their eyes in derision whenever they hear the words 5G and autonomous driving (or robotic surgery) mentioned in the same sentence. But the mobile industry's hypesters are always casting around for some new trope - and especially the mythical "killer app" that could help to justify the costs and complexity.

And as if on cue, the Metaverse - essentially a buzzword meaning a hybrid of AR/VR with the social web, collaboration and gaming - has captured the headlines.


The growing noise around Metaverse technologies - and especially Facebook's recent rebrand to Meta - is attracting a whole slew of bandwagon-jumpers. The cryptocurrency community has been the first to trumpet its assumed future role - perhaps unsurprisingly, since they tend to be even more fervent and boosterish than the mobile sector. But we're also seeing the online shopping, advertising and gaming worlds hail the 'Verse as the next big thing.

Next up - I can pretty much guarantee it - will be the 5G industry talking about millisecond latency and buying a "Metaverse network slice". We'll probably get the edge-computing crowd popping up shortly afterwards too. I've already seen a few posts hailing the Metaverse as the possible next big thing for MNOs (mobile network operators).

They're wrong.

The elephant in the room

If you've found this article without knowing my normal coverage themes, you might be surprised to read that the single biggest issue for connecting Metaverse devices and users will be real, physical walls.

If you go through Mark Zuckerberg's lengthy video intro to Meta and his view of future technologies, you'll notice that a high % of scenarios and use-cases are indoors. Gaming from your sofa. Virtual living rooms. Hybrid work environments blending WFH with in-person meetings, and so on.

This shouldn't be a huge surprise. The more immersive a technology is - and especially if it's VR- rather than AR-based - the more likely people are to take part while seated, or at least not while walking around an outdoor environment with obstacles and dangers. Most gaming and most business collaboration take place indoors too.

And indoor environments tend to have particular ways that connectivity is delivered to devices. Generally that means Wi-Fi, as the access points are themselves indoors, at the end of a broadband connection or an office local area network.

Basically, wireless signals at frequencies above 2-3GHz don't get inside buildings very well from outside, and the higher the performance, the worse that propagation tends to be. Put simply, 5G-connected headsets and other devices will generally not work reliably indoors, especially if they have to deliver consistent high data speeds and low latencies which need higher frequencies. We can also expect the massive push for Net Zero in coming years to mean ever-better insulated buildings, which will make matters even worse for wireless signals as a side-effect.
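To put rough numbers on this, here's a quick back-of-the-envelope sketch. The free-space path-loss formula is standard; the wall-loss figures are my own illustrative assumptions for a well-insulated modern building, not measured values:

```python
import math

def fspl_db(distance_m, freq_ghz):
    """Free-space path loss in dB: 20log10(d) + 20log10(f) - 147.55 (d in metres, f in Hz)."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_ghz * 1e9) - 147.55

# Illustrative outdoor-to-indoor penetration losses in dB -- assumed figures
# for the sketch, in the spirit of 3GPP building-entry models, not exact values.
wall_loss_db = {3.5: 18, 28: 40}   # mid-band vs mmWave, through a modern insulated wall

for f in (3.5, 28):
    total = fspl_db(100, f) + wall_loss_db[f]
    print(f"{f} GHz over 100m: {fspl_db(100, f):.0f} dB free-space + {wall_loss_db[f]} dB wall = {total:.0f} dB")
```

The mmWave case loses roughly 18 dB more in free space alone at the same distance, before the (much larger) wall penalty - which is why higher-performance 5G bands fare worst indoors.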

For sure, certain locations will have well-engineered indoor 5G systems that will work effectively - but software developers generally won't be able to assume this. Airports, big sports venues, shopping malls and some industrial sites like factories will be at the top of the list for these types of solutions. For those locations, 5G Metaverse connections may well be widely used and effective. However, those are the exceptions - and it will take many years to deploy new in-building systems, or upgrade existing infrastructure anyway.

In particular, most homes and offices will have patchy or sometimes no 5G coverage, especially in internal rooms, elevators or basements. (There might be a 5G signal or logo displayed on the device, but that doesn't mean that the famously-promised gigabit speeds or millisecond latencies will actually be deliverable).

In those locations, expect Metaverse devices to use Wi-Fi as a baseline - and increasingly the Wi-Fi 6/6E/7 generations with better capabilities than previous versions.

What the Meta video tells us

I'm aware that the Metaverse is more than just Facebook / Meta, but the 1h17 video from Zuck (link) is not a bad overview of what to expect in terms of experiences, devices and business models. Obviously there will be different views from Epic Games, Microsoft's various initiatives around Hololens and Mesh, plus whatever Apple is quietly cooking up, but this is a decent place to start.

The first thing to note is the various Horizon visions that Meta is pitching - Home, Worlds and Workrooms. These are (broadly) for close social interaction, gaming/larger-scale social and business collaboration - especially hybrid work.

Mostly, the demos and visions are expected to take place from the participant's home, office, school or similar venue. There are a couple of outdoor examples as well - enhanced sports, and outdoor art/advertising. Virtual desktops, avatars that mimic eye and facial movements and so on.

In terms of devices, there's a large emphasis on headsets (obviously the Oculus Quest, and also the new high-end Cambria device promised for 2022) as well as discussions of AR glasses, from the RayBan Stories recently launched, to a forthcoming project called Nazare.

The technology discussion is all around the functional elements, not the connectivity. Optics, sensors, batteries, displays, speakers, cameras and so on. There are developer tools for hand and voice interaction, and presence / placement of objects in the virtual realm. There's lots of discussion around creators, advertising and the ability to own (and interoperate) virtual avatars, costumes and furniture. There are also nods to privacy, as would be expected.

There's no mention of connectivity, apart from noting that Cambria will have radios of some sort. The section on the "Dozen major technological breakthroughs for next-gen metaverse" doesn't mention wireless, 5G or anything else.


It's worth noting that Oculus devices and the RayBan glasses today use Wi-Fi. We can also expect that gesture-control will likely lean on UWB sensors in future. Outside of Facebook / Meta, essentially all of today's dedicated AR/VR headsets connect with Wi-Fi or a cable, to a local network or broadband line. (That might be 5G fixed-wireless to the building for a few % of homes, but that will still use Wi-Fi on the inside).

Where cellular 4G/5G takes a role in XR is where the device is tethered to a phone or modem, or is experienced actually on the smartphone itself - think Pokemon Go, or the IKEA app that lets you design a room with virtual furniture.

We can expect the same with the Metaverse. If you're using a smartphone to access it, then obviously 5G will play a role, just as it will for all mobile apps in 3-4 years time when penetration has increased.

Will Cambria and future iterations feature 5G built-in? Maybe, but I doubt it - not least because of the extra cost and engineering involved, as well as the multiple versions needed to support different regional frequency options. Would a future Apple AR/Metaverse headset feature cellular, like some versions of the Watch? Again, that's possible, but I wouldn't bet on it.

In the second half of the decade, later versions of 5G (Release 17 & 18) will have useful new features like centimetre-accuracy positioning that could be useful for Metaverse purposes - but again, that's reliant on having decent coverage in the first place. There will likely be some useful aspects outdoors though - for instance accurate measurement of vehicles on roadways.

Facebook Connectivity becomes Meta too

One other thing I noticed is a reference on LinkedIn to Facebook's often-overlooked Connectivity division, which does all sorts of interesting programmes and initiatives like TIP (which does OpenRAN and other projects), Terragraph 60GHz mesh, Express Wi-Fi and the low-end Basics "FB-lite" platform for developing markets with limited network infrastructure.



Apparently it's now being renamed Meta Connectivity - partly, I guess, because of the reorganisation and rebranding of the group overall, but also as a long-term part of the Metaverse landscape.

To me, that also indicates that the Metaverse is going to use multiple wireless (and wired) technologies - which aligns with Zuckerberg's view that it's more of a reinvention of the Internet/Web overall, rather than a particular app or experience.

Bandwidth-heavy? Or perhaps not....

One other thing needs to be considered around the Metaverse and connectivity. The immediate assumption is that such a "rich" environment, either full-virtual or overlaid onto a view of the real world, will need lots of data - and therefore the types of bandwidths promised by 5G. If we all use Metaverse devices to project "virtual TV screens" onto virtual surfaces, it will use lots of capacity, supposedly.

But it strikes me that avatars (even photo-realistic ones) & 3D reconstructions of real-world scenes will likely need less bandwidth than actual video. Realtime rendering will likely be done on-device in most cases, just sending the motion/sensor data or metadata about objects over the network.

Clearly this will depend on the exact context and application, but if your PC or phone or headset has a model of your friend's virtual house, or your virtual conference room - and all the objects and people/avatars in it - then it doesn't actually need realtime 4K video feeds to show different views.
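Some crude arithmetic illustrates the gap. All the figures here (joint count, update rate, video bitrate) are assumptions I've picked for illustration, not measurements of any real system:

```python
# Avatar pose stream: assume 25 skeletal joints, 7 floats each
# (xyz position + orientation quaternion), 4 bytes per float, 60 updates/sec.
pose_bps = 25 * 7 * 4 * 60 * 8          # bits per second

# A typical compressed 4K video feed -- assume ~20 Mbit/s.
video_bps = 20_000_000

print(f"pose stream : {pose_bps / 1e3:.0f} kbit/s")
print(f"4K video    : {video_bps / 1e6:.0f} Mbit/s")
print(f"ratio       : ~{video_bps / pose_bps:.0f}x")
```

Even uncompressed, the motion-data stream is a couple of orders of magnitude lighter than video - the heavy lifting (rendering the scene from that data) happens on the device, not on the network.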

In addition, the integration of eye-tracking allows pre-emptive downloads or actions, so "pseudo-latency" can seem very low, irrespective of the network's actual performance. If the headset sees you looking at a football, it can start working on the trajectory of a kick tens or even hundreds of milliseconds before you move your virtual leg.
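This latency-hiding idea is essentially the client-side prediction (dead reckoning) long used in multiplayer games: render a locally extrapolated state instead of waiting for the round trip. A minimal sketch, with arbitrary numbers:

```python
def predict_position(pos, vel, latency_s):
    """Extrapolate an object's last-reported position ahead by the network
    latency, so the locally rendered scene doesn't stall on the round trip."""
    return tuple(p + v * latency_s for p, v in zip(pos, vel))

# Ball last reported at (1.0, 0.0, 2.0) m, moving at (0.0, 0.0, 5.0) m/s,
# with an assumed 80 ms network latency:
rendered = predict_position((1.0, 0.0, 2.0), (0.0, 0.0, 5.0), 0.080)
```

When the authoritative update eventually arrives, the client quietly corrects any drift - so the perceived responsiveness is decoupled from the network's actual latency.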

That said, the sensor data uplink & motion control downlink will need low latency, but I suspect that will be more about driving localised breakout and peering rather than genuine localised compute. If you're in a hybrid conference with distant colleagues, the main role for edge-computing is to offload your data to the nearest Internet exchange with as few hops as possible.

(Some of the outdoor scenes in the Meta video from Connect seem rather unrealistic. They show groups of people playing table tennis and a virtual basketball match with "friends on the other side of the world", which would involve some interesting issues with the speed of light and how that would impact latency.)

Conclusion

In a nutshell - no, the Metaverse isn't the killer app for 5G.

The timelines align between the two, so where 'Verse apps are used on smartphones they'll increasingly use 5G if it's available and the user is out-and-about. But that's correlation, not causation. Those smartphones will typically be connected via Wi-Fi when at home, school or work. I suspect the main impact on smartphones will be the need for better 3D graphics capability and enhanced sensors and cameras, rather than the network side.

Will we see some headsets or glasses with built-in cellular radios, some with 5G support? Sure, there will certainly be a few emerging in coming years, especially for enterprise / private network use. I'd expect field-workers, military, or industrial employees to exploit various forms of AR and VR in demanding situations well-suited to cellular, although many will tether a headset or glasses to a separate modem / module to reduce weight.

Many devices will also include various other wireless technologies too - Wi-Fi, Bluetooth, maybe Thread/Matter, UWB and so on.

But if anything, I suspect that the Metaverse may turn out to be the killer app for Wi-Fi 7, especially for home and office usage. That doesn't mean that 5G won't benefit as well - but I don't see it as a central enabler, given the probable heavy indoor bias of the main applications. (I don't think that cryptocurrency or edge-computing are key enablers either, but those are debates for another day.)


#Metaverse #Facebook #Meta #AugmentedReality #VirtualReality #5G #WiFi #MixedReality #Mobile #Wireless #Devices #Gaming #Collaboration #HybridWorking

Tuesday, December 19, 2017

Emerging risks to telcos from "Cuckoo Platforms"

Summary
  • Telcos want to be platform players at varying points in their network architecture and service offerings. 
  • But successful platforms generally need "anchor tenants" to gain scale.
  • The problem comes when anchor-tenants are themselves other 3rd-party platforms.
  • There is a risk of platforms-on-platforms acting as "cuckoos", pushing the native owner's eggs out of the nest.
  • Telcos face a risk from major cloud platforms overwhelming their MEC edge-compute platforms.
  • ... and a risk from major AI-based commerce platforms overwhelming their messaging, voice and IoT platforms.
  • Other future platforms also face similar challenges.
  • To succeed as platform providers, telecom operators need to have their own anchor-type services, and to have a well-designed approach to combating the risk of parasitic cuckoo platforms.

Background: the Internet overcame its broadband host

The cuckoo bird is infamous for laying its eggs in other birds' nests. The young cuckoos grow much faster than the rightful occupants, forcing the other chicks out - if they haven't already physically knocked the other eggs overboard. (See "brood parasitism", here).


Analogies exist quite widely in technology - a faster-growing "tenant" sometimes pushes out the offspring of the host. Arguably Microsoft's original Windows OS was an early "cuckoo platform" on top of IBM's PC, removing much of IBM's opportunity for selling additional software. 

In many ways, Internet access itself has outgrown its own host: telco-provided connectivity. Originally, fixed broadband (and the first iterations of 3G mobile broadband) were supposed to support a wide variety of telco-supplied services. Various "service delivery platforms" were conceived, including IMS, yet apart from ordinary operator telephony/VoIP and some IPTV, very little emerged as saleable services.

Instead, Internet access - which started using dial-up modems and normal phone lines before ADSL and cable and 3G/4G were deployed - has been the interloping bird which has thrived in the broadband nest instead of telcos' own services. It's interesting to go back and look at the 2000-era projections for walled-garden, non-Internet services.


The need for an anchor tenant

The problem is that everyone wants to be a platform player. And when you're building and scaling a new potential platform, it's really hard to turn down a large and influential "anchor tenant", even if you worry it might ultimately turn out to be a Trojan Horse (apologies for the mixed metaphor). You need the scale, the validation, and the draw for other developers and partners.

This is why the most successful platforms are always the ones which have one of their own products as the key user. It reduces the cannibalisation risk. Office is the anchor tenant on Windows. iTunes, iMessage and the camera app are anchors on iOS. Amazon.com is the anchor tenant for AWS.

Unfortunately, the telecoms industry looks like it will have to learn a(nother) tough lesson or two about "cuckoo platforms".


MEC is a tempting nest

The more I look at Multi-Access Edge Computing (MEC), the more I see the risks of a questionable platform strategy. Some people I met at the Small Cells event, in the US a couple of weeks ago, genuinely believe it can allow telcos to become some sort of distributed competitor to Amazon AWS. They see MEC as a general-purpose edge cloud for mainstream app and IoT developers, especially those needing low-latency applications. 

I think this is delusional - firstly because no developer will want to deal with 800 worldwide operators with individual edge-cloud services and pricing, secondly because this issue of latency is overstated & oversimplified (see my recent post, link), and thirdly because a lot of edge-computing tasks will actually be designed to reduce the use of the network and reliance/spend on network operators.

But also, this "MEC as quasi-Amazon" strategy will fail mostly because the edge/distributed version of Amazon will be Amazon. The recent announcement by Nokia that it will be implementing AWS Greengrass in its MEC servers is a perfect example (link). I suspect that other MEC operators and vendors will end up acting as "nests" for Azure, IBM Bluemix and various other public cloud providers.

Apologies for the awful pun, but these "cloud-cuckoos" will use the ready-made servers at the telco edge to house their young distributed-computing services, especially for IoT - if the wholesale price is right. They will also build their own sites in other "deeper" network locations (link). 

In other words, telcos' MEC deployments are going to help the cloud providers become even larger. They may get a certain revenue stream from their tenancy, but this will likely be at the cost of further entrenching the major players overall. The prices paid by an Amazon-scale provider for MEC hosting are likely to be far lower than the prices that individual "retail" developers might pay.

(The real opportunity for MEC, in my view, lies in hosting the internal network-centric applications of the operators themselves, probably linked to NFV. Think distributed EPCs, security gateways, CDN nodes and so on. Basically, stuff that lives in the network already, but is more flexible/responsive if located at the edge rather than a big data centre).


End-running Messaging-as-a-Platform (MaaP)

Another example of platform-on-platform cannibalisation is around the concept of "messaging as a platform", MaaP. Notwithstanding WeChat's amazing success in China, my sense is that it's being vastly over-hyped as a potential channel for marketing and customer interaction. 

I just don't see the majority of people in other markets forgoing the web or optimised native apps, and using WhatsApp or iMessage or SnapChat or SMS as the centrepiece of their future purchases or "engagement" (ugh) with companies and A2P functions. But where they do decide to use messaging apps for B2C reasons, the chatbots they interact with will not be MaaP-dedicated or MaaP-exclusive.

These chatbots will themselves be general "conversational platforms" that work across multiple channels, not just messaging, with voice as well as text, and with a huge AI-based back-end infrastructure and ongoing research/deployment effort. They'll work in messaging apps, browsers, smart speakers, wearables, cars, and general APIs for embedding in apps and all sorts of other contexts.

Top of the list of conversational platforms are likely to be Google Assistant, Amazon Alexa, Apple Siri, Microsoft Cortana and Facebook M, plus probably other emergent ones from the Internet realm.


MaaP is "just another channel" for broad conversational/commerce platforms

In other words, some messaging apps might theoretically become "platforms", but the anchor tenants will be "wholesale" conversational platforms, not individual brands or developers. In some cases they will again be in-house assistants (iMessage + Siri, or Google Allo + Assistant for instance). In other cases, they may be 3rd-party bot ecosystems - we already see Amazon Alexa integrated into numerous other devices.

Now consider what telcos are doing around MaaP. As well as extending their existing SMS business towards A2P (application-to-person), they have also allowed third-parties like Twilio to absorb much of the added value as cPaaS providers. And when it comes to RCS*, which has an explicit MaaP strategy, they have welcomed Google as a key enabler on Android, despite its obvious desire to use it mainly as a free iMessage rival. (*Obviously, I'm not a believer in RCS succeeding, for many other reasons as well, but let's leave that aside for this argument.)

What the GSMA seems to have also missed is that Google isn't really interested in RCS MaaP per-se - it simply wants as many channels as possible for its Assistant, and its DialogFlow developer toolkit. To be fair, Google announced Assistant, and acquired API.AI (DialogFlow's original source) after it acquired Jibe. It's moved from mobile-first, to AI-first, since September 2015.

The Google conversational interface is not going to be exclusive to RCS, or especially optimised for it. (I asked the DialogFlow keynote speaker about this at last week's AI World conference in Boston, and it was pretty clear that it wasn't exactly top-of-mind. Or even bottom-of-mind). Google's conversational platform will be native in Android, in other messaging apps like Allo, Chrome, Google Home and presumably 1000 other outlets.

From an RCS MaaP perspective, it's a huge cuckoo that will be more important than the Jibe platform. There is no telco "anchor tenant" for RCS-MaaP as far as I can tell - I haven't even seen large deployment of MNOs' own customer-care apps using it. If I was an airline's or a retailer's customer experience manager, and I was looking beyond my own Android & iOS apps for message-based interactions, I wouldn't be looking at creating an RCS chatbot. I'd be creating an Assistant chatbot, plus one for Alexa and maybe Siri.


Can you cuckoo-proof a platform?

Apple, incidentally, has a different strategy. It tends to view its own services as integrated parts of a holistic experience. It tries to make its various platforms cuckoo-proof, especially where it doesn't have an anchor tenant app. This is a major reason for the AppStore policies being so restrictive - it doesn't want apps to be mini-platforms in their own right, especially around transactions. Currently, Google and Amazon are fighting their own mutual anti-cuckoo war over YouTube on Fire TV, and sales of Google Home on Amazon.com (link). Amazon and Apple are also mutually wary.

It's worth noting that telcos are sometimes pretty good at cuckoo-deterrence too. In theory, wholesale mobile networks could have been a platform for all manner of disruptive interlopers, but in reality, MVNO deals have been carefully chosen to avoid commoditisation. A similar reticence exists around eSIM and remote SIM provisioning - probably wisely, given the various platform-on-platform concepts for network arbitrage that have been suggested.


Conclusions

In my view, both MEC and (irrespective of its many other failings) RCS are susceptible to cuckoo platforms. I also wonder if various telco-run IoT initiatives, and potentially network-slicing will become a platform for other platforms in future too.

One of the key factors here is the "rush to platformisation". Platforms only succeed when they evolve out of already-successful products, which can become inhouse anchor tenants. Amazon's marketplace platform grew on the back of its own book and other retail sales. AWS's success grew on the back of Amazon using its own APIs and cloud-computing.

MEC needs to succeed on the basis of telcos' own use of their edge-computing resources - which don't currently exist in a meaningful way, partly because NFV has been slower than expected. MaaP needs telcos' own messaging services and use-cases to be successful before it should look at external developers. With RCS, that's not going to happen.

Network-slicing needs to have telcos' own slices in place, before pitching to car manufacturers (or Internet players, again). IoT is the same too. Otherwise, expect even more telco eggs to be pushed out of the nest, as they help to foster other birds' offspring.

Monday, October 30, 2017

Debunking the Network QoS myth

Every few years, the network industry - vendors, operators & industry bodies - suffers a mass delusion: that there is a market for end-to-end network QoS for specific applications. The idea is that end-users, application developers - or ideally both - will pay telcos for prioritised/optimised connections of specific "quality", usually defined in terms of speed, latency & jitter (variability).

I've watched it for at least a dozen years, usually in 3-year waves:
  • We had enterprise networks promising differentiated classes of service on VPNs or the corporate connection to the Internet. Avoid the impact of the marketing department watching cat videos!
  • We had countless failed iterations of the "turbo boost" button for broadband, fixed or mobile.
  • We had the never-realised "two-sided markets" for broadband, featuring APIs that developers would use to pay for "guaranteed QoS".
  • We had numerous cycles of pointless Net Neutrality arguments, talking about "paid prioritisation", a strawman of massive proportions. (Hint: no content or app developer has ever had lobbyists pleading for their right to buy QoS, only telcos asking to be able to sell it. Compare with, say, campaigns for marijuana decriminalisation).
  • We currently have 5G "network slicing" concepts, promising that future MNOs will be able to "sell a slice" to an enterprise, a car manufacturer, a city or whatever.
  • My long-standing colleague & interlocutor Martin Geddes is pitching a concept of app-focused engineering of networks, including stopping "over-delivering" broadband to best-efforts applications, thus forcing them to predict and right-size their demands on the network.
In my view, most of these attempts will fail, especially when applied to last-mile Internet access technologies, and even more especially to wireless/mobile access. There isn't, nor will there ever be, a broad and open market for "end-to-end network QoS" for Internet applications. We are seeing network-aware applications accelerating much faster than application-aware networks. (See this paper I wrote 2 years ago - link).

Where QoS works is where one organisation controls both ends of a connection AND also tightly-defines and controls the applications:
  • A fixed-broadband provider can protect IP telephony & IPTV on home broadband between central office & the home gateway.
  • An enterprise can build a private network & prioritise its most important application(s), plus maybe a connection to a public cloud or UCaaS service.
  • Mobile operators can tune a 4G network to prioritise VoLTE.
  • Telco core and transport networks can apply differential QoS to particular wholesale customers, or to their own various retail requirements (eg enterprise users' data vs. low-end consumers, or cell-site timing signals and backhaul vs. user data). 
  • Industrial process & control systems use a variety of special realtime connection protocols and networks. Vendors of "OT" (operational technology) tend to view IT/telecoms and TCP/IP as quaint. The IT/OT boundary is the real "edge".
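In the single-organisation cases above, prioritisation is typically signalled in-band with DiffServ (DSCP) markings, which only work because the same party configures the switches and routers that honour them. A minimal sketch of an app marking its own voice traffic (assuming the private network is set up to trust the marking):

```python
import socket

# DSCP "Expedited Forwarding" (EF, code point 46) is conventionally used for
# voice. The IP_TOS byte carries the DSCP value in its top six bits.
EF_DSCP = 46

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_DSCP << 2)
# From here, datagrams sent on this socket carry the EF marking -- but the
# marking does nothing unless every hop is configured to queue on it, which
# is exactly why this only works inside a single administrative domain.
```

Across the public Internet, these markings are routinely re-written or ignored at domain boundaries - which is the crux of why "end-to-end" paid QoS never materialises.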
Typically these efforts are costly and complex (VoLTE was frequently-described as one of the hardest projects to implement by MNOs), make it hard to evolve the application rapidly because of dependencies on the network and testing requirements, and often have very limited or negative ROI. More importantly, they don't involve prioritising chunks of the public Internet - the telco-utopian "but Netflix will pay" story.

There are a number of practical reasons why paid-QoS is a myth. And there's also a growing set of reasons why it won't exist (for the most part) in future either, as new techniques are applied to deal with variable/unpredictable networks.

An incomplete list of reasons why Internet Access QoS isn't a "market" include:
  • Coverage. Networks aren't - and won't be - completely ubiquitous. Self-driving cars need to be able to work offline, whether in a basement car-park, during a network outage in a hurricane, or in the middle of a forest. The vehicle won't ask the cloud for permission to brake, even if it's got promised millisecond latency. Nobody pays for 99.99% access only 80% of the time.
  • The network can't accurately control or predict wireless effects at micro-scale, ie RF absorption or interference. It can minimise the damage (eg with MIMO, multiple antennas) or anticipate problems (weather forecast of rain = impact on mmWave signals).
  • End-user connections to applications generally go via local WiFi or LAN connections, which service providers cannot monitor or control.
  • No application developer wants to cut QoS deals with 800 different global operators, with different pricing & capabilities. (Or worse, 800 million different WiFi owners).
  • 5G, 4G, 3G and zero-G all coexist. There is no blanket coverage. Nobody will pay for slicing or QoS (if it works) on the small islands of 5G surrounded by an ocean of lesser networks.

  • "Applications" are usually mashups of dozens of separate components created by different companies. Ads, 3rd-party APIs, cloud components, JavaScript, chunks of data from CDNs, security layers and so on. Trying to map all of these to separate (but somehow linked) quality agreements is a combinatorial nightmare.
  • Devices and applications have multiple features and functions. A car manufacturer wouldn't want one slice, but ten - engine telemetry, TV for the kids in the back seat, assisted-driving, navigation, security updates, machine-vision uploads and so on all have very different requirements and business models.
  • Lots of IoT stuff is latency-insensitive. For an elevator maintenance company, a latency of a week is fine to see if the doors are sticking a bit, and an engineer needs to arrive a month earlier than scheduled.
  • I don't know exactly how "serverless computing" works, but I suspect that it - and future software/cloud iterations - will take us even further from having apps asking the network for permission/quality on the fly. 
  • Multiple networks are becoming inevitable, whether they are bonded (eg SD-WANs or Apple's use of TCP Multipath), used in tandem for different functions (4G + SigFox combo chips), meshed in new ways, or linked to some sort of arbitrage function (multi-IMSI MVNOs, or dual-SIM/radio devices).  See also my piece on "Quasi-QoS" from last year (link)

  • Wider use of VPNs, proxies and encryption will mean the network can't unilaterally make decisions on Internet QoS, even if the laws allow it.
  • Increasing use of P2P technologies (or D2D devices) which don't involve service providers' control infrastructure at all.
  • Network APIs would probably have to be surfaced to developers via OS/browser functions. Which then means getting Apple, Google, Microsoft et al to act as some sort of "QoS storefront". Good luck with that.
  • No developer will pay for QoS when "normal" service is running fine. And when it isn't, the network has a pricing/delivery challenge when everyone tries to get premium QoS during congestion simultaneously. (I wrote about this in 2009 - link).
  • Scaling the back-end systems for application/network QoS, to perhaps billions of transactions per second, is a non-starter. (Or wishful thinking, if you're a vendor).
  • There's probably some extra horribleness from GDPR privacy regulations in Europe and information-collection consent, which further complicates QoS as it's "processing". I'll leave that one to the lawyers, though.
  • It's anyone's guess what new attack-surfaces emerge from a more QoS-ified Internet. I can think of a few.
But the bigger issue here is that application and device developers generally neither know nor care how networks work, and have little willingness to pay. Yes, there's a handful of exceptions - maybe mobile operators wanting timing sync for their femtocells, for example. Safety-critical communications obviously need quality guarantees, but don't use the public Internet. Again, these link back to predictable applications and a willingness to engineer the connection specifically for them.

But the usually-cited examples, such as videoconferencing providers, IoT specialists, car companies, AR/VR firms and so on are not a viable market for Internet QoS. They have other problems to solve, and many approaches to delivering "outcomes" for their users.

A key issue is that "network performance" is not considered separately and independently. Many developers balance network usage against other variables such as battery life and power consumption. They also think about other constraints - CPU and screen limitations, user behaviour and psychology, the costs of cloud storage/compute, device OS variations and updates, and so on. So for instance, an app might choose a given video codec based on what it estimates about available network bandwidth, plus what it knows about the user, battery and so on. It's a multi-variable problem, not just "how can the network offer better quality".
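As a hypothetical illustration of that multi-variable trade-off (the codec names, resolutions and thresholds are mine, not any real app's logic):

```python
def choose_encoding(est_bandwidth_kbps, battery_pct, hw_av1_decode):
    """Pick a (codec, resolution) pair from several constraints at once."""
    if est_bandwidth_kbps < 300:
        return ("h264", "240p")   # survive a bad connection first
    if battery_pct < 20:
        return ("h264", "480p")   # cheap to decode, spare the battery
    if hw_av1_decode and est_bandwidth_kbps < 1500:
        return ("av1", "720p")    # best quality-per-bit if hardware helps
    return ("h264", "1080p")      # plenty of bandwidth: just spend it

choose_encoding(200, 80, True)    # -> ("h264", "240p")  network dominates
choose_encoding(1000, 15, True)   # -> ("h264", "480p")  battery dominates
choose_encoding(1000, 80, True)   # -> ("av1", "720p")
```

Note that only one of the three inputs is a network property - which is exactly why "pay the network for quality" misses the developer's actual problem.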

Linked to this are analytics, machine learning and AI. There are huge advances in tuning applications (or connection-managers) to deal with network limitations, whether that relates to performance, cost or battery use. Applications can watch rates of packet throughput and drops from both ends, and make decisions about how to limit the impact of congestion. (See also this link to an earlier piece I wrote on AI vs. QoS.)
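A crude sketch of this kind of endpoint adaptation - essentially the additive-increase/multiplicative-decrease idea TCP has used for decades, reduced to a few lines with illustrative constants:

```python
def adapt_rate(rate_kbps, observed_loss_pct):
    """Crude AIMD: back off sharply on observed loss, probe upwards gently."""
    if observed_loss_pct > 2.0:
        return max(100.0, rate_kbps * 0.7)   # multiplicative decrease
    return min(4000.0, rate_kbps + 50.0)     # additive increase

rate = 1000.0
rate = adapt_rate(rate, 0.1)   # clean network: probe up to 1050 kbps
rate = adapt_rate(rate, 5.0)   # congestion seen: back off toward ~735 kbps
```

The sender needs nothing from the network except its own observations of what got through - which is why smarter versions of this loop keep eroding the case for network-side QoS APIs.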

Self-driving vehicles use onboard image-recognition. Data (real-world sensed data and "training" data) gets uploaded to the cloud, and algorithms downloaded. The collision-avoidance system will recognise a risk locally, in milliseconds - far faster than any network round-trip could guarantee.

Developers can also focus resources on the most important aspects: I saw a videoconference developer last week talk about using AI to spot "points of interest" such as a face, and prioritise "face packets" over "background packets" in their app. Selective forwarding units (SFUs) act as video-switches which are network-aware, device-aware, cost-aware and "importance-aware" - for example, favouring the main "dominant" speaker.
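An "importance-aware" forwarding decision can be sketched in a few lines. This is illustrative only - real SFUs are far more sophisticated, and the importance tags are assumed to come from the sending app, as in the example above:

```python
import heapq

def forward(packets, budget):
    """Under congestion, forward only the `budget` most important packets."""
    chosen = heapq.nlargest(budget, packets, key=lambda p: p["importance"])
    return [p["id"] for p in chosen]

queue = [
    {"id": 1, "importance": 9},  # dominant speaker's face region
    {"id": 2, "importance": 2},  # static background
    {"id": 3, "importance": 7},  # audio
]
forward(queue, budget=2)   # -> [1, 3]: background dropped first
```

Again, all the "quality" decisions happen in the application layer, where the app knows what a packet actually contains - something no network QoS mechanism can see.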
 
Another comms developer (from Facebook, which has 400 million monthly users of voice/video chat) talked about the variables it collects about calls, to optimise quality and user experience "outcome": network conditions, battery level before & after, duration, device type & CPU patterns, codec choice and much more. I suspect they will also soon be able to work out how happy/annoyed the participants are based on emotional analysis. I asked about what FB wanted from network APIs and capabilities - hoping for a QoS reference - and got a blank look. It's not even on the radar screen.

At another event, GE's Minds and Machines, I heard how the "edge" nodes run a cut-down version of the Predix software which can work without the cloud-based mothership when offline - essential when you consider that the node could be on a locomotive in a desert, or on a plane at 35,000ft.
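The offline-first pattern here is simple to sketch - this is a generic illustration, not Predix's actual design:

```python
class EdgeNode:
    """Buffer readings locally; flush to the cloud only when connected."""
    def __init__(self):
        self.buffer = []      # local store - survives disconnection
        self.uploaded = []    # stands in for the cloud backend

    def record(self, reading):
        self.buffer.append(reading)   # always succeeds, even offline

    def sync(self, online):
        if online:                    # opportunistic flush when in coverage
            self.uploaded.extend(self.buffer)
            self.buffer.clear()

node = EdgeNode()
node.record({"sensor": "axle_temp", "value": 81})
node.sync(online=False)   # in the desert: nothing lost, nothing sent
node.record({"sensor": "axle_temp", "value": 84})
node.sync(online=True)    # back in coverage: both readings uploaded
```

An architecture built to tolerate zero connectivity has, by definition, no use for paid guarantees of marginally better connectivity.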

The simple truth is that there is no "end to end QoS" for Internet applications. Nobody controls every step from a user's retina to a server, for generic "permissionless innovation" applications and services. Paid prioritisation is a nonsense concept - the Net Neutrality crowd should stop using that strawman.

Yes, there's a need for better QoS (or delta-Q or performance management or slicing or whatever other term you want to use) in the middle of networks, and for very specific implementations like critical communications for public safety. 

The big unknown is for specific, big, mostly mono-directional flows of "content", such as streaming video. There could be an argument for Netflix and YouTube and peers, given they already pay CDNs, although that's a flawed analogy on many levels. But I suspect there's a risk there that any QoS payments to non-neutral networks get (more than?) offset by reverse payments by those networks to the video players. If telcos charge Netflix for QoS, it wouldn't surprise me to see Netflix charge telcos for access. It's unclear whether the net result would be zero-sum, positive or negative.

But for the wider Public Internet, for consumer mobile apps or enterprise cloud? Guaranteed (or paid) QoS is a myth, and a damaging one. Yes, better-quality, better-managed networks are desirable. Yes, internal core-network use of better performance-management, slicing and other techniques will be important for telcos. Private wireless or fixed broadband networks, where the owner controls the apps and devices, might be an opportunity too.

But the concept of general, per-app QoS-based Internet access remains a dud. Both network innovation and AI are taking it ever further from reality. Some developers may work to mark certain packets to assist routing - but they won't be paying SPs for an abstract notion of "quality". The notion of an "application outcome" is itself a wide and moving target, which the network industry only sees through a very narrow lens.

Friday, February 20, 2015

The myth of "Telcos winning back revenue from OTT players"

In the run-up to MWC, I'm seeing a spate of news articles in the telco press/blogosphere, or vendor press releases, which are titled something like: 

"How Telcos can Win Back Revenue From OTT Providers"

These are almost uniformly wrong - or at least misleading marketing hype or clickbait.

Let's parse that phrase, "win back revenue from OTT providers". I'll tackle the continued use of "OTT" later in the post - but it's a legacy term that has no place in the telecom industry going forward.

But first: whatever you name them, so-called OTT providers generally do not take revenue from telcos. They take customers or usage, by offering either cheap/free or better services - often both. People use Whatsapp or SnapChat as a free, more-functional and cooler upgrade to SMS. They use Skype for a better user experience than telephony, and we're seeing switching to myriad new voice/video apps and WebRTC-powered services (my report here) for contextual comms. Internet app providers often derive value in other ways (ecosystem, advertisers, stickers, recording, cloud services etc) by giving away message or voice transport for free. There is no - or very little - revenue to "win back".

Telephony and SMS are not going to disappear entirely, but they are old and clunky lowest-common-denominator services in a world of unlimited choice, and best-of-breed applications targeting individual use-cases and preferences. "Winning back revenue" requires there to be revenue to win - and renewed consumer appeal, competing against alternatives strongly enough to encourage payment.

Person-to-person SMS has historically been a rip-off. It's never been "value-based pricing"; it's been grudge- or resentment-based pricing. We used to hear people say it worked out at $10,000/MB - and that's the problem. It was orders of magnitude too expensive for a service that never evolved over a 20-year period. Sending 160 characters from A to B was cool in 1995. It's not rocket science in 2015. Similarly, telephony transport is priced expensively compared to its costs and (for most uses) its value. That said, VoLTE puts the implementation & production costs back up, in the unproven hope of future gains from spectrum re-farming.
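That per-megabyte figure is easy to sanity-check. An SMS carries at most 140 bytes of payload (160 seven-bit characters); the per-message prices below are illustrative assumptions, not quoted tariffs:

```python
SMS_PAYLOAD_BYTES = 140   # 160 chars of 7-bit GSM text per message

def sms_price_per_mb(price_per_message):
    """Effective $/MB if all data were bought at SMS rates."""
    return price_per_message * (1_000_000 / SMS_PAYLOAD_BYTES)

round(sms_price_per_mb(0.10))   # -> 714    at a 10-cent domestic rate
round(sms_price_per_mb(1.40))   # -> 10000  at premium/roaming rates
```

Even a 10-cent message works out at hundreds of dollars per megabyte; the oft-quoted $10,000/MB corresponds to roughly $1.40 per message - premium or roaming territory. Either way, it's orders of magnitude above data pricing.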

The telecom industry used to make over $100bn a year from SMS. It still makes a decent fraction of that, although the exact amount depends heavily on accounting and bundle-allocation chicanery. Excess SMS profits of close to a trillion dollars over the last decade or two seem probable - with minimal service innovation from the reinvested cashflow. To put that in context, it's probably larger than all banking bonuses worldwide over the same period.

That $100bn+ revenue is not coming back from simply sending mobile messages. It might partly come back from adding value to other ecosystems, or enabling particular purposes through A2P messaging integrated into business processes, but in terms of straightforward A-to-B transmission of text or pictures, it's gone. An SMS is not much more valuable, inherently, than an email, and will converge with email in terms of pricing. 


In any case, increasing A2P revenues is not "winning back" revenues lost from P2P. It's completely distinct, and isn't occurring at the expense of Internet-based alternatives.

(Obviously, RCS just worsens the situation, by consuming extra costs & staff resources, for zero extra usage, zero extra revenue, a major opportunity-cost impact, and possible brand damage. It is worse than useless and needs the industry to capitulate entirely. I believe RCS needs to die with an obvious bang, not a whimper, for everyone to "accept & move on").

Similarly, the decline in mobile telephony revenues isn't going to be slowed much by VoLTE, and certainly not reversed. It's just telephony v1.1, and although HD voice and fast call-setup are nice, they don't provide an obvious basis for billions in new revenue. VoLTE (and WiFi calling as well) are moderate feature upgrades - they don't change the value proposition of telephony, or the use-cases to which it can be applied. They will not "win back" revenue that has shifted from "vanilla phone calls" to other modes of communication.

Enterprise services are slightly more complex - but there the "OTT" services are essentially just IP-PBX or UC platforms from major vendors, or else they are 3rd-party cloud services for conferencing, contact centres and so on. Those have been in place for years, and while telco-hosted UC or SIP-trunking have important roles, few in the industry would suggest they are seriously "winning back" revenues from WebEx or Microsoft Lync.

We can also forget about the silly ideas that some suggest, about arbitrarily charging/taxing the Whatsapps and Skypes of this world - as Dutch, Indian and Singaporean operators, for example, have proposed in the past, before getting intense public and regulatory push-back.

Firstly, most Internet app providers and developers don't have the ability to pay tens or hundreds of billions of dollars. Secondly, unless compelled by telco-lobbied (bribed?) regulators, they have no reason to do so. They don't need interconnect, nor QoS, nor sponsored data. They simply need half-decent Internet access, to offer applications that consumers deem to be valuable. Thirdly, there are no obvious mechanisms for this - especially for peer-to-peer communications, or new formats. In many cases, interconnect doesn't make sense, as there is no feature-parity with humdrum "standard" services like SMS and telephony. There is no pot of money in saying "we're dumb, so please tax the clever people" - if telcos want to make money from selling Internet access, they need to balance it against the likelihood that users will shift some of their communications away from monopoly, legacy, unappealing services.

If operators want to "regain revenue from Internet players" there is only one way to do it: innovate at a service/application level, either internally or with specialist external help, and compete. Probably, that innovation will itself require the open Internet, the web, mobile apps - or perhaps, proprietary communications platforms for certain uses. 

It will need a combination of both service development (for direct monetisation) and platform innovation (to attract developers). Both require a culture of risk-taking, software development, innovation management, partnership, and a willingness to "act first, standardise later, if ever". It's possible that VoLTE, or network-based telecom app & API platforms, or A2P SMS might play a role, but they still need multiple layers of genuinely novel service elements that add value and differentiation.

Telcos need to solve specific user or business problems. There are no new generic, standardised services that will pass muster on a standalone basis. (No, ViLTE video-calling won't make a difference).

And yes, some vendor solutions might help here. Telecom application-development platforms, new billing and OSS systems, gateways and WebRTC systems (my report here) of various types, SDPs and their evolutionary descendants, virtualised NFV components that are flexible and scalable and so on. 

And potentially, all of these allow operators to create new services - as discussed in yesterday's post on NFV and SDN. But those will be incremental revenues - not somehow displaced from Facebook or Google, unless they specifically address the online advertising sector. The telecom & Internet business is not a zero-sum game. Revenues for plain-vanilla standalone phone calls and SMS are declining. Other things will rise, but the idea that telcos will "win back" revenues that have evaporated from services nearing obsolescence is a flawed and false narrative.

Not only that, but most operators are hoping to offer API-based capabilities to Internet firms, and act as developer platforms. And a golden rule of such business models is that the platform owner has to help the developers make more money even if they then take a cut. If telcos want to make money from Facebook, WeChat, Skype, they will need to help them earn yet higher revenues. They will have to employ developer-relations or partner management staff whose job will be increasing Viber's and YouTube's and Netflix's and SnapChat's scale and value.

So a more reasonable slogan might be "Telcos can win a share of OTT's future accelerated growth". They won't "win back" revenue unless they compete head-on and win.

Back to the terminology: as a general guideline, anyone who uses the term "OTT" is in the wrong job, especially if talking about voice/video/messaging. It betrays an antiquated sense of "entitlement" and "network privilege" - and a lack of understanding of the Internet and software development. (People in the IPTV/online video sector tend to use OTT in a different way that is less belligerent and confrontational.) All telcos have so-called "OTT" activities - none could even exist without their telco.com website on the Internet, for sales, customer service and even investor relations. To say otherwise is hypocrisy and ignorance.

Internet app providers are just peers and equals to telcos, at an application level. To "win back" revenues, telcos need to compete with them, not just mildly refresh ancient services or transfer them to virtualised infrastructure.

Note: Dean Bubley is a telecoms industry analyst & strategy consultant, working with many of the world's leading operators, vendors, regulators & innovative startups. Please get in touch if you would like to discuss advisory work, internal workshops or public speaking engagements. (Also see WebRTC report here & Mobile Broadband report here)

Sunday, November 16, 2014

Retiring the term “Telco-OTT”. "Digital services" is useless too. Long live “Telco-Apps”

I’ve long railed against the telecoms industry term “OTT”, standing for “over-the-top”. It is pointlessly divisive and arbitrary, and often said in a pejorative fashion, by people who don’t understand what it means and implies. On Twitter, I’ve often called for people using the term OTT in a serious way to be summarily fired for gross incompetence by their employers. (Given that many of the worst offenders are themselves CEOs, this is impractical, unfortunately). I generally prefix it with “so-called”, or use quote-marks, to give it the disrespect it deserves.

“OTT” is used to describe a subset of Internet-based services or applications, which are thought to compete with traditional telecoms services like telephony and SMS, or hoped-for future services, such as IM or video-calling. Skype, Whatsapp, LINE and SnapChat are examples of applications which have earned the despised “OTT” tag, usually uttered by people whose PR and legal departments told them not to use stronger epithets.

None of those companies call themselves "OTT players" any more than a washing-machine manufacturer considers itself to be running over the top of the electricity supply. They are simply web or Internet companies, offering communication apps or services. Call them CSPs or some other acronym, if you must. In future, as "OTT" communications capabilities get absorbed into most applications and websites as features, with WebRTC or other APIs, it will be a fairly pointless distinction anyway.

There is also a considerably different interpretation in the content space, where “OTT video” is used to describe channels or streaming platforms such as Hulu and NetFlix or BBC iPlayer, which go direct-to-customer and don’t need to work with normal digital TV aggregators such as cable MSOs or IPTV platforms. There seems to be less animosity in that area among telcos, perhaps because most don't have legacy businesses there.

Some other Internet companies often get lumped into the “OTT” category too, even though their main offerings don’t overlap with typical telecoms service domains. Facebook and Google, for example, often get called OTTs simply because they are seen as a strategic threat to the telecoms industry, so it makes sense to demonise and caricature them as “the other”. Web search, social networking and online advertising are not traditional telecom businesses - they are new and purely Internet-based.

Most other Internet services and applications don’t attract the same opprobrium. Nobody calls Salesforce or Wikipedia or Tinder or Cisco's IP-PBXs & WebEx an “OTT service”, even though they also “use our pipes for free”.

I’ve made the point in the past that if Internet services are “over the top”, then surely telecoms networks are better called “under the floor”, as that’s where the pipes and plumbing go. Yet oddly enough, I don’t encounter many telcos proudly declaiming their “UTF” status.

In a nutshell, "OTT" is simply a duplicitous, mealy-mouthed term for "bits of the Internet we don't like". "Dumb pipe" is a dumb term too - networks are neither pipes nor stupid. What "dumb pipe" means, translated from telco-ese, is "please tax the clever people for us, or let us do it instead".

I coined the term “Telco-OTT” in 2011, to describe the growing phenomenon of telecom operators launching their own services that use the public Internet as a platform, rather than their own managed network infrastructure. As well as grabbing attention, it was intended to highlight the hypocrisy - and sometimes outright lies - of many industry executives and observers (and sometimes regulators) when it comes to the Internet.

Now, following a tweet from Chad Hart, I've decided to take his advice and kill the term.

Almost all Telcos have so-called OTT offerings, whether in the field of voice/messaging, cloud offers, content/video or even home-automation. These span both fixed and mobile networks, covering both “pure OTT” standalone applications and “extension” models linked to existing on-net services. Some are in-house developed, others created through partnerships. I identified well over 100 such services in 2011, and there are probably 200+ today.

And of course, every single telecoms company on the planet has its own Internet-based website, gladly using other telcos’ networks as sales, marketing and support channels for both their existing customers, and their rivals' subscribers they hope will switch. Vodafone.com, att.com and kddi.com are all “OTTs” in the broad sense of the word. Of course, all the industry associations and regulators happily make use of the public Internet as well, at the same time as some are trying to limit its reach and scope.

Curiously, none of these telco-run Internet and app properties have ever openly suggested paying for QoS on their rivals’ infrastructure, or sponsoring their users’ data consumption. Surely, given Telefonica’s distaste for OTTs (“It's not a level playing field”), it would have proactively sought to recompense its rivals forced to carry traffic from Terra, Tuenti or TuGo, as a good example? One would have also thought that GSMA’s or ETNO’s webmasters would have long ago volunteered to pay for visitors’ traffic, to demonstrate “innovative” broadband business models? Or perhaps Verizon would have sought to accelerate user transactions on Verizon.com, when viewed from an AT&T broadband connection, and pleaded with the FCC to allow it to buy a “fast lane”?

Oddly, all the CEOs conveniently overlook their own Internet businesses when it comes to grandstanding in front of the FCC or EU or investors, about Net Neutrality and similar issues.

The bottom-line: ALL telcos are “OTTs”. All of them exploit the Internet, and would complain bitterly if they were prevented from doing so. They’re not as successful in some areas as their rivals, but that’s a separate discussion.

When telecom industry representatives clamour about the lack of “a level playing field”, most are either ignorant, disingenuous, or unwilling to confront the organisational and cultural blockages in their own businesses. Plenty of telcos do launch and run “pure OTT” apps and services, in exactly the same fashion as any other firm. That said, telcos do face limitations in areas such as user-data collection and exploitation, and I'd support broader equivalency of laws and rules there, versus Internet players. It's up for discussion whether data privacy laws should be relaxed on telcos, or tightened on web firms.

Some also have actual - or merely perceived - regulatory hurdles on things like lawful intercept. But they have had 10 years to convince regulators and ministries to be more relaxed on communications areas outside of traditional telephony. Seriously, if an operator launches a karaoke app, are they expected to record hours of terrible singing, and keep metadata of the tracks sung, for the authorities? Instead, too many operators argue for new rules to be imposed on Internet companies, rather than arguing for relaxing rules on themselves.

The time has come for me to retire the term "Telco-OTT". It is now in mainstream use, and various vendors and media outlets have come to embrace it more fully. The market has understood that telcos need to have web and mobile apps and services, decoupled from their own networks. WiFi-calling exploits third-party wireless connections. TV-anywhere apps use whatever networks are available. WebRTC services are quite clearly expected to be accessed from any Internet entry point. Many telco SaaS/cloud offers are accessible from anywhere. Numerous operators have VoIP apps intended for expats, travellers and the "diaspora" outside their home market, and away from their controlled and managed home networks.

Continuing with the term Telco-OTT now just lends legitimacy to the unvarnished OTT label, and the phony war that is continually perpetuated by vendors and regulators in that regard. I want those that use the term OTT to be accused of blinkered "entitlement", as evidenced by ignorant comments about "OTT stealing revenues". Communications and content provision are open battlegrounds. Nobody is "entitled" to market share, revenues or profits for telecom and Internet services. They are up for competition. And if you offer Internet access to your customers, you should understand and accept the risk that Internet applications will be better/cheaper/cooler than on-net alternatives.

So what to call these services now that "Telco-OTT" is to be consigned to history?

Easy. Let's just call them "Telco Apps" (or Telco-Apps with a hyphen - I'm open to persuasion on the punctuation). Certain things may have to be called Telco Platforms or Telco Enablers, if they are thin delaminated Internet service "slices" rather than full applications.

I'm also calling time on "Digital Services". It's a stupid term as well. Apart from AM/FM radio, I can't think of any analogue communications services. They're all digital, as is the entirety of Internet & telecoms networking. As Alan Quayle often points out, "Digital" hasn't been a useful adjective since it was used to describe Casio watches in the 1970s, or perhaps the replacement of old phone exchanges in the 1980s. Today, "digital" is most often associated with techno-illiterate fools in the marketing and advertising industries, who talk about "digital marketing", or use cringeworthy phrases when you meet them like "Hi, I'm in digital".

So. "Telco-OTT" is dead, "OTT" is for telecom people who don't like the Internet but are too scared & hypocritical to say so as they use it too, and "Digital" is for people who haven't understood the last 50 years of technology. 

Internet companies make apps, websites & Internet services. Telcos exploiting the Internet do the same. Telcos are Internet companies. Call their Internet activities Telco-Apps, if you need to distinguish them from network-integrated services - although even those will be extended over the Internet anyway. The Internet - and the Web & Apps - has won.

Oh, and make sure you understand the difference between the Internet and the Web, too. Or else, once again, you should be fired for incompetence.