
Friday, January 21, 2011

Is mobile video traffic quite the threat that everyone thinks? Is the so-called "optimisation" approach flawed?

I smell the "Tyranny of Consensus", about mobile video data traffic.

This is a long blog post examining the "optimisation" of video for mobile. It forms part of Disruptive Analysis' ongoing research into policy & traffic management. For more details on custom work, please contact Dean Bubley directly.

Recently I’ve been bombarded with vendor announcements of video optimisation solutions, DPI and charging products, and lots of slides suggesting that operators might try to charge extra for mobile video traffic that is "swamping" 3G and 4G networks. Everyone is gearing up for "personalisation", tiered services and so forth – and even trying to talk up the prospect of video-optimised data plans. Everyone's putting out PR-driven surveys with predictably trite answers to loaded questions, usually ignoring cause-and-effect.

Normally when there's this level of consistency in stance, it's wrong. Or it's hiding something.

Most of the noise is coming from vendors of traffic management solutions sitting in the GGSN or in a box on the Gi interface – the link between the operator's core and the Internet itself. Typically, these compress, transcode, buffer, block or generally fiddle about with video traffic, with varying levels of sophistication and subtlety. Vendors in this general area include Acision, ByteMobile, Cisco, Flash Networks, Mobixell, OpenWave, Vantrix and assorted others.

Some of them try to second-guess what’s going on in the radio by looking at throughput rates and other indirect indicators and only act when there's a perceived problem. Some can do relatively benign "lossless" compression which doesn't actually change the content. Some try to limit the amount of video that gets downloaded to the player's buffer, in case the user abandons viewing, closes the session and "wastes" the data already transmitted. Some control the numbers of parallel (concurrent) IP connections the device can use.
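The "lossless" case is worth making concrete: a transparent compression hop sends fewer bytes over the link but restores the content byte-for-byte at the far end, which is exactly what lossy transcoding cannot do. This is a minimal sketch using Python's zlib purely as an illustration – it is not any vendor's actual implementation, and the payload is a made-up stand-in for a media segment.

```python
import zlib

def lossless_hop(payload: bytes) -> bytes:
    """Compress a payload for the constrained link; the far end restores it exactly."""
    return zlib.compress(payload, level=6)

# Hypothetical stand-in for a chunk of repetitive media/protocol data.
original = b"GET /video/segment-01.ts HTTP/1.1\r\n" * 200
on_the_wire = lossless_hop(original)
restored = zlib.decompress(on_the_wire)

assert restored == original              # lossless: the content is unchanged
assert len(on_the_wire) < len(original)  # but fewer bytes crossed the link
```

The contrast with transcoding is the first assertion: after a lossy transcode, `restored == original` is false by design, and neither the user nor the content owner gets a say in what was discarded.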

Conspicuously, the more radio-centric vendors have been comparatively quiet about video – although it’s possible they’re just looking forward to selling upgrades to LTE, rather than reducing traffic.

What's the problem?

So, is mobile video (a) really the problem that everyone suggests, and (b) if it is a problem, are they going about fixing it in the right way?

Obviously, it needs to be acknowledged that there's a ton of published information (and obviously more in private) that talks about the % of traffic volumes attributable to "video". So we know from Bytemobile, for example, that >40% of total global mobile data traffic in 2010 was video - at least from the networks they can measure. But then there's a bit of a "so what" there - as other analysis points out that usually only a few cells in a mobile network are actually congested, and we also know that there are plenty of other weak spots beyond data volume, such as RNC signalling storms, that cause congestion.

(The actual definition of "video" is itself something that's rarely or poorly pinned down. Is near-photorealistic cloud gaming video? Augmented reality overlays? A Flash animation of a cartoon? Discuss.)

It's about User Experience. Really?
So should we take at face value the claims of the video-optimisers that it's really about "enhancing user experience"? (Everyone hates stalling video streams, don't they?) Or is it more about manufacturing a convenient bogeyman: an excuse to sell boxes to help operators "personalise" their mobile broadband or raise prices?

Video "consumes" lots of bits, so it must be evil, right? Never mind that in most of the world, mobile data is now priced by MB / GB tiers and quotas, so surely 3GB of email is just as evil as 3GB of video? Or more, given the greater signalling load. And doesn't the act of compressing traffic *unnecessarily* reduce the chance of upselling the user to a larger quota? Surely, when there's no real cost impact or congestion risk, doesn't it make sense to let users download as much as they want? Let the user, or app/content provider take the responsibility for reducing volumes if needed.

And never mind that the majority of mobile video still goes to PCs with USB modems, not tablets or smartphones, especially outside the US. Modems that have generally been sold by operators as alternatives or complements to fixed broadband, so it should be neither surprising nor a cause for action when customers use them for that purpose. (Although it’s interesting that T-Mobile UK has recently suggested that you should save your video usage for your proper network at home). It's desktop versions of YouTube or iPlayer, using Flash or HTTP, that are generating the tonnage - typically indoors. And people are tolerant of delays & buffering on PCs, and also know how to use the toggle for 360/480p or SD/HD.

In other words, a sizeable proportion of mobile video traffic is solely generated by operators' mis-selling and mis-pricing of PC dongles as real alternatives to ADSL and cable broadband.

Now obviously here there is a major issue: *when* a given cell or sector is actually congested, or a backhaul link, then one incremental video causes much more of a problem than an extra email. But all the coffee-table stats about "10% of users consuming 90% of the bandwidth" generally fail to state whether they also cause 90% of the problems. It's an anecdote, not a problem statement - especially as most of that 10% are PC users who've been sold a product designed specifically (and knowingly) for high consumption.

There's much less data to suggest that actual congestion is caused by bulky things like video streaming, or just that there are 20,000 people at Kings Cross Station at 9am on a Monday morning checking their email simultaneously. Anecdotally, I've spoken to operators that have agreed that the real busy-hour, busy-cell situation is much more complex than just blaming YouTube.

To give an analogy: I drive my car 4000 miles a year, as I live in central London and mostly use public transport. But I contribute more to road congestion than someone living in the middle of Wales, driving 40,000 miles annually from their cottage to town and back. Data "tonnage", like mileage, is a lousy predictor of problem causation – yes, there's probably a positive correlation, but it's a weak one. But volume is easily measured, and to an uneducated ear it "sounds" fair to penalise volume, especially when prefixed with emotive PR gibberish like "data hogs".


Everyone's seen the Cisco VNI forecasts for mobile video network traffic... but are they (and others) quite as accurate and meaningful as many seem to think?

I certainly agree that larger screen sizes mean that video data will expand for the same duration of viewing. But other mobile applications for smartphones are proliferating too. Yet unless I'm missing something, the biggest growth in usage (minutes / events) is around social networking, gaming and the like. Yes, Facebook friends can embed or link to a video clip... but doesn't the growing amount of time spent messaging and communicating reduce the number of "snacking" opportunities to watch video? If you have 3 minutes waiting for a bus, do you watch YouTube on your phone, or message your friends or send a cleverly-crafted tweet or status update?

I'm also not convinced that there's going to be perpetual growth in minutes of video delivered over cellular macro networks to mobile devices *in congested cells*. There are also going to be many ways to offload and cache the content (eg podcasts, or connecting via WiFi or femtocells). Many of the places where people watch video or use 2-way video applications will be those with WiFi available. Those homes with both fixed and mobile broadband will also increasingly use femtocells. Both almost completely eliminate congestion - especially if they route traffic promptly to the Internet rather than piping it back through the core. In any case, video will need to be treated differently by charging, optimisation and policy servers if it is delivered by a dedicated indoor solution. All such elements need to be made femto-aware to be useful.

So is the overall "problem" of macro-cellular mobile video going to get worse? Or is it manageable just with normal capacity upgrades and tiered pricing rather than complex control infrastructure, and questionable "optimisation" practices that are perhaps not optimal for user, content owner or operator?

Why optimise in the network anyway?

It's also worth noting that two of the most widely-used and successful mobile applications actually monetised by operators - BlackBerry's email service and Opera's Mini web browsing proxy-based service - are both self-optimised and data-compressed. The special magic is in RIM's or Opera's own servers and clients, not in the operator network in between. So why should video be any different? YouTube, the BBC and others are much more mindful of their users' quality of experience when watching their content than the operators.

The only way to measure real QoE is directly from the device or software client. Trying to decode "user experience" from the Gi interface is like a doctor diagnosing an illness by looking at how red your nose is, through a telescope from a mile away. Maybe the throughput dropped because the user went into the basement for 30 seconds? Maybe a window was minimised? Maybe another app on a multitasking device squeezed out the video client with a big download? Maybe there's an OS glitch, or some other user intervention? The network does not - and cannot - know this.
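To make that concrete, here is a minimal sketch of the kind of client-side instrumentation that could report real QoE – stall counts, stall duration, window visibility – none of which a Gi-interface probe can observe. The class and method names are my own invention, not any real player's API.

```python
import time
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PlaybackQoE:
    """Session metrics visible only on the device, never at the Gi interface."""
    stalls: int = 0
    stall_seconds: float = 0.0
    window_minimised: bool = False
    _stall_start: Optional[float] = field(default=None, repr=False)

    def on_buffer_empty(self) -> None:
        # Playback has actually stalled - the one thing users really notice.
        self.stalls += 1
        self._stall_start = time.monotonic()

    def on_buffer_refilled(self) -> None:
        if self._stall_start is not None:
            self.stall_seconds += time.monotonic() - self._stall_start
            self._stall_start = None

    def on_visibility_change(self, visible: bool) -> None:
        # A minimised window explains a throughput drop far better than
        # "congestion" ever could - and only the client knows it happened.
        self.window_minimised = not visible

qoe = PlaybackQoE()
qoe.on_buffer_empty()
qoe.on_buffer_refilled()
qoe.on_visibility_change(visible=False)
```

A network box seeing this session's throughput collapse has no way to distinguish the minimised window from a congested cell; the client knows immediately.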

There has even been some talk of network boxes checking that the video size doesn't exceed the phone's screen resolution, and reducing it to fit. This is astonishingly arrogant and apt to create huge user dissatisfaction (and perhaps lawsuits), as it ignores the possibility of the user zooming-in to part of the video, saving it to memory for later use, outputting via the TV port on some phones, or just using the device as a tether or modem.

Yes, some network elements can, in some instances, improve experience some of the time. But it's inconsistent and unprovable, and may well have hidden side-effects elsewhere. It's like a spark-plug manufacturer claiming it can improve your driving experience with a new type of "optimising" plug. Yes, sometimes it could make the engine smoother and the driver experience better. But it's not much help if the tyres are bald - and it might also mess up the fine calibration and actions of the engine-management chip.

The gating factor is the radio network

The other elephant in the room is radio *coverage*, not capacity. Many 3G networks are still like Swiss cheese, with "not-spots" still common, especially indoors. The woolly survey questions about video user experience generally don't ask respondents if they live in a basement, or a building with metallised windows. Much of the problem is nothing to do with congestion - it's the pesky nature of radio link budgets. You can’t “optimise” video through brickwork.

Not only that, but the risible notion of offering "premium video" mobile data plans with so-called silver/gold/platinum service tiers generally overlooks the realities of physics, and the inter-dependence of shared media. Who's going to pay extra for "priority" video when often you can't get 3G at all? I'm writing this on a train – where I might actually want to watch video on my PC via mobile – and I'm lucky if I can get a GPRS connection most of the time. I might pay extra for national roaming onto a competing network which can actually supply me with connectivity - but I'd rather just churn outright.
And what happens when a platinum user is right at the edge of the cell and wants a video? Do you boot off 70 gold-tier users who've got better radio conditions in the middle of the cell, just to serve that one user? In fact, doing this potentially raises issues with consumer protection laws - the only way that HSPA can even theoretically get to 7.2 / 14.4 / 21 Mbit/s is by biasing traffic towards those people who've got the best signal. If you change the algorithm, it potentially makes the advertising claims false.
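The bias towards well-placed users comes from the proportional-fair style of scheduling that HSPA and LTE broadly use: serve whichever user currently has the highest ratio of instantaneous achievable rate to their long-run average rate. A toy sketch (user names and rates are purely illustrative) shows why the cell-edge "platinum" user loses under the standard metric, and what overriding it would cost the cell:

```python
def proportional_fair_pick(users):
    """Serve the user maximising instantaneous rate / long-run average rate.

    users: list of (name, inst_rate_mbps, avg_rate_mbps) tuples.
    """
    return max(users, key=lambda u: u[1] / u[2])[0]

# Hypothetical snapshot of one transmission interval.
users = [
    ("edge_platinum", 0.5, 0.4),   # poor radio at the cell edge: metric 1.25
    ("centre_gold", 12.0, 4.0),    # good radio mid-cell:         metric 3.0
]

picked = proportional_fair_pick(users)
# The standard metric serves the mid-cell user. Forcing the edge user
# instead would spend the same air-time delivering 0.5 Mbit/s rather
# than 12 - the whole cell's throughput pays for one "priority" stream.
```

Tilting the metric towards tier rather than radio conditions is exactly the algorithm change that would undermine the advertised headline speeds.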

In conclusion

Overall, Disruptive Analysis believes that there is a good chance that the fears about mobile video traffic are being overstated, and being used as a smoke-screen to permit arbitrary degradation of video content, by unsophisticated boxes placed in the wrong part of the network, ill-equipped to understand the real causes of congestion and poor end-user experience.

Regulators should scrutinise very carefully whether covert transcoding of video on uncongested links contravenes new views on Net Neutrality that permit only necessary and proportionate management. They should also enforce requirements on transparency stringently – a subtle “we reserve the right to manage traffic” in the terms & conditions is insufficient: there should be detailed and possibly realtime information on any active management / compression of traffic.

Content providers and aggregators should track whether their output is being modified over cellular networks without their permission or awareness, and should consider using encryption to protect against covert manipulation. They should embed agents in the browser or client app to detect unwanted "optimisation" (perhaps through digital watermarks or steganography) and alert the user when the operator is modifying the content in the background.

Operators should aim to work with content providers to enable them to optimise their content in the best fashion – either via rate-adaptive codecs, user alerts about congestion, altering frame rates, editing the video or other mechanisms. Some form of congestion API (ideally real-time but initially less-accurate) will help immensely. They should also pressure all of their core network and so-called "optimisation" vendors to adopt strong measures for radio-awareness and device/user-awareness.
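No such congestion API exists today; as a straw-man, even a coarse three-state hint would give a rate-adaptive player something actionable. This sketch is entirely hypothetical – the cell identifier, the 0.7/0.9 thresholds and the advice vocabulary are invented for illustration.

```python
import json

def congestion_advice(cell_load: float, backhaul_load: float) -> str:
    """Map utilisation fractions to a coarse hint a rate-adaptive codec can act on.

    Thresholds are arbitrary, chosen only to illustrate the three states.
    """
    worst = max(cell_load, backhaul_load)
    if worst > 0.9:
        return "reduce"   # step bitrate / frame rate down now
    if worst > 0.7:
        return "hold"     # don't step quality up
    return "free"         # uncongested: no intervention warranted

# A hypothetical API response for one (made-up) cell.
response = json.dumps({"cell": "12345A", "advice": congestion_advice(0.95, 0.40)})
```

The point of keeping the signal coarse is that the content provider, not a mid-network box, then decides *how* to adapt – codec rate, frame rate, user alert or deferral.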

Despite the hype about charging upstream content providers for "premium" QoS, operators should recognise that this will remain a pipe dream owing to the complexities of radio. If it works, it'll happen first in fixed networks, not mobile, where there's much more control and predictability. There's certainly no chance that a video provider, or user, will pay extra for so-called QoS if the network still has poor coverage.

For those operators looking at offload, they should reject any policy management or optimisation solution which cannot distinguish between macro and femtocell traffic, or which cannot "optimise" by helping the user to connect to faster/less-congested WiFi if available. They should also be extremely sceptical of so-called personalisation services, pricing different traffic types (not “services”) differently, that look good on paper, but which are completely impractical and don’t reflect the reality of applications and the web, such as mashups.

Remember, video is not “an application”, it’s 10,000 applications, each of which needs individual consideration if personalisation is to be applied usefully. Unless there's someone with the skill and authority to think about this sensibly, there's a huge risk of failures and hidden gotchas.

Overall, I'm increasingly moving to the view that GGSN/Gi-based solutions for video traffic management are OK for urgent firefighting, but need to be very well-integrated with both RAN probes and, crucially, device-side intelligence to have longevity of usefulness. Most of the network vendors (understandably) shy away from working directly with handset OS's, connection managers and other tools - this will need to change.

Disruptive Analysis is one of the leading analysts covering mobile broadband business models, policy management and next-generation "holistic" models for controlling traffic and user experience. Please contact information AT disruptive-analysis DOT com for information about workshops and custom research projects.


Pal Zarandy said...

Well said. What I can add to this is that as online video is becoming mainstream on smartphones, tablets and laptops, that will nicely pave the way for speed-tiered data tariffs. Subscribe for 0.5 Mbps: you cannot watch HQ on YouTube. Subscribe for 1 Mbps: you can watch HQ but not HD. And so on.

A much less restrictive and more customer friendly way of data tariff tiering than volume caps.

Here in Finland this is the norm by the way. Luckily the networks are strong enough to give 5-6 Mbps sustained user throughput (HSPA) even in the busy hour, despite having the highest MBB/dongle penetration in Europe (over 22% pop).

So our (Rewheel) advice to operators is to focus their efforts on investing in the necessary network modernisation (at the right price) instead of restricting the digital needs of their consumers by "video optimisation" or throttling.

Considering the latest NodeB/RNC/SGSN/GGSN platforms the price of the hardware is extremely cheap, and 90-95% of vendor CAPEX goes to so called software license fees. That applies for the "signalling capacity" too. And software is always negotiable ;)

Todd Spraggins said...

I think we have been in agreement over this topic through past conversations/tweets, although from the network side I would still look at transrating IFF it is correlated to actual cell congestion. Losing FFW functionality is a lot better customer experience than losing fidelity or size (artifacts of transcoding), or not having it work at all.

Furthermore, If I had to place money in this market I would follow the successful investments already being made by RipCode and MediaExcel who focus their products very profitably with the content owners and distributors e.g. CDNs.

Finally, what is the mental block operators have in seriously (all I see is lip service) using offload to solve congestion, especially WiFi? Only in rare instances is it a trivial task to manage WiFi connectivity (although it is much better than, say, 5 years ago), and it is in no way like macro-handover. This should be a top priority in the list of customizations they perform on the mobile OS. Then they should back it up with commercial incentives. If consumers are so cost conscious, then why would an operator expect them to pay additional tariffs for femto? Even if the tariff is neutral, having the data path go all the way to the GGSN/PGW (potentially another congestion point) and forcing the user through the same caps, filters and, yes, network congestion mangling/management will only drive users away. Internet offload early in the data path for femto would be a win-win, as then the operator has fewer sessions to pay GGSN/PGW vendors for (sorry to my pals at Cisco and E//).

Andy Barnard said...

Good post, with which I agree. The 'tyranny of consensus' is probably rather a load of 'sound and fury signifying nothing' from the marketing departments of the said vendors.

whitelassiblog said...

Optimizing the network for 'video' alone is a flawed approach, and I agree with the post.

Conceptually speaking, when we plan the RF capacity of a 3G/4G network; it is done keeping in mind the following basic service types -

1. Symmetric services (Which are both UL and DL intensive)

2. Asymmetric services (Which are skewed either on the UL or on the DL)

To add to these categories, certain target codecs need to be assumed to calculate expected throughput.

Usually RF designers take a certain DL to UL ratio to assume the expected throughput in the network once the service mix is known.

Talking about video -

Unicast Video streaming/download fits into category 2. So does file download ! Hence, from an optimization perspective, I view unicast video as one of the asymmetric services which are downlink intensive.

Video calling on the other hand fits into category 1 as it is both UL and DL intensive. Hence, delivering video calling is much more 'expensive' to the network as compared to video streaming (which is only DL intensive).

Hence, it all boils down to efficient planning and engineering of the RF and subsequently the backhaul.

This P&E has to take place with proper assumptions on the expected service mix and the UL/DL ratios assumed logically based on the expected services.

The service mix has to be carefully priced by the business team, so that the network RF design assumptions (expected concurrent sessions per service) are not horribly off track from the ground reality.

Here, TDD LTE has a significant advantage over FDD LTE. In TDD LTE, the DL/UL ratios can be dynamically adjusted to cater to unexpected network congestion. This is not possible in FDD LTE for obvious reasons.

Any anomalies in the above considerations (or lack of sync between technology and business teams), gives rise to unexpected network congestion and we see unnecessary 'video optimization products' being mandated by the vendors. They only add to the confusion.

-- Aayush

Chen Barnea-Didi said...

Mobile operators are looking for ways to offer tiered services all the time (at least they declare they are). Since they will never be able to commit to a specific throughput, it is impractical for them to offer QoS-based pricing plans to subscribers - although these kinds of plans do work for fixed networks (this is the way it works with my fixed operator).

Another possibility for operators to offer tiered pricing is to offer them per application (now that we agreed not to agree on mobile net neutrality). For example - free access to Facebook only for $3/month. Some Russian operators are already offering such plans.

Regarding the network optimization, I agree with the author that it should be done very carefully and if possible only in times of congestion in the local cell. Let me just add that in terms of user experience, network-based solutions do not need to care about the network coverage or the depth of the windows. If you have insight into the real-time throughput and the content's original throughput, it's enough to transcode the video and bring benefit both to the network and the user (personally I hate buffering...). This is of course true also for browsing on the train, when most of the time you browse in GPRS-like conditions.

As always, operators will need to find a way not to throw the baby out with the bathwater.

Steve Crowley said...

Thank you for your insightful blog. I think video is indeed a problem on 3G/4G networks, and that 3GB of video is worse than 3GB of e-mail. The problem is that for maximum downlink throughput from a cell, these high-performance packet data systems depend on being able to serve those users getting the best signal quality at any instant. Is your signal getting weak for a few milliseconds? Serve a different user with a better signal for that instant. Downlink throughput is thus maximized.

So, with e-mail, when the link is poor, my device pauses until the link is better. With video, my device has to stay with the stream (more so) through both poor and good quality links. More physical-layer resources have to be used to serve that poor link compared to a good one. That takes resources away from other users, pulling down the whole cell.

Optimization helps to some extent. Longer term I think flat rates will be rationalized more toward "pay as you go" (including for the dongles), which will incentivize users to offload video data to, say, Wi-Fi, or download at home beforehand.

There will always be video on 4G. If you pay enough, you can have all you want. I think, though, that mobile video data estimates, say by Cisco (both last year's and for this year) overstate the amount of video data we will see on these networks.