I've written a few times in the past about the "capacity crunch" for mobile broadband, as well as the potential for offloading traffic or policy-managing the network to prioritise certain users or data types.
More recently, I've discussed the role of signalling as an important factor in driving network congestion, especially from smartphones.
But there is a fair amount of uninformed comment about what's causing the problems - it "must" be laptop users downloading HD video or doing P2P filesharing over their 3G dongles, or it "must" be iPhone users using Google Maps and Facebook.
My view is that these over-simplistic analyses are leading to knee-jerk reactions around capacity upgrades, or stringent policy-management and traffic-shaping installations. Many vendors don't want to (or can't) give a fully-detailed picture of what really causes problems for the network and user experience.
It is in many suppliers' interests to market a neat single-cause and single-solution message - "You need to upgrade to LTE to add more capacity"; "You need PCRFs and charging solutions to limit video"; "You need to upgrade your GGSNs to really big routers" and so on.
The truth is rather more complex. Different situations cause different problems, needing different solutions. Smartphone chipsets playing fast-and-loose with radio standards may cause RNCs to get blocked with signalling traffic. Clusters of users at a new college might overload the local cell's backhaul. A faulty or low-capacity DNS server might limit users' speed of accessing websites. And so on.
Or, as many parts of London are experiencing today, a fire at a BT office might knock out half the local exchanges' ADSL and also the leased-line connections to a bunch of cell towers.
Now, in the long run there certainly will be a need to carefully husband the finite amount of radio resources deployed for mobile broadband. My order-of-magnitude estimates suggest that the macrocellular environment (across all operators, with the latest technology) will struggle to exceed a total of maybe 3Gbit/s per square kilometre, even on a 10-year view. So offload to pico / femto / WiFi will certainly be needed.
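Purely as a back-of-envelope illustration, a capacity-density estimate of that sort can be sketched in a few lines - the spectrum, spectral-efficiency and site-density numbers below are my own illustrative assumptions, not the actual inputs behind that figure:

    # Back-of-envelope macrocellular capacity density. All inputs are illustrative
    # assumptions, not the actual figures behind the estimate above.
    operators        = 4        # networks serving the same area
    spectrum_mhz     = 40       # assumed downlink spectrum per operator, across bands (MHz)
    spectral_eff     = 1.5      # assumed average bit/s/Hz per sector in a loaded cell
    sectors_per_site = 3
    sites_per_km2    = 3        # assumed dense-urban macro grid

    per_operator_bps = spectrum_mhz * 1e6 * spectral_eff * sectors_per_site * sites_per_km2
    total_gbps = operators * per_operator_bps / 1e9
    print(f"~{total_gbps:.1f} Gbit/s per km2 across all operators")   # ~2.2 Gbit/s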
But in the meantime, I'm moving to a view that Stage 1 for most operators involves getting a better insight into exactly what is going on with their mobile data networks. Who is using what capacity, in which place, with which device, for how long? And exactly what problems are they - or the network - experiencing?
In recent weeks I've spoken to three suppliers of products that try to analyse the "root cause" of mobile data congestion [Velocent, Compuware and Theta], and I'm starting to hear a consistent story that there's "more than meets the eye" with regard to network pains.
Some of the outputs can be eye-opening. It may be that a lot of customer complaints about poor data speeds can be traced back to a single cell or aggregation box that is mis-configured. It could be that a particular group of devices are experiencing unusually high problem rates, that may be due to a fault in the protocol stack. It might be that viruses (or anti-virus updates) are responsible for problems. Or it might just be that all the iPhone users are using YouTube too much.
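As a rough illustration of the sort of per-element analysis these tools perform, here's a minimal Python/pandas sketch over a hypothetical table of session records - the columns and values are invented, not any vendor's real schema:

    # A minimal sketch of per-element "root cause" analysis on a hypothetical table
    # of data-session records; columns and values are invented for illustration.
    import pandas as pd

    sessions = pd.DataFrame({
        "cell_id":         ["C17", "C17", "C42", "C42", "C42", "C99"],
        "device_model":    ["A",   "B",   "A",   "B",   "B",   "A"],
        "throughput_kbps": [850,   790,   60,    75,    40,    910],
        "failed":          [0,     0,     1,     1,     0,     0],
    })

    # Failure rate and median throughput per cell: a single mis-configured cell or
    # aggregation box shows up as an outlier here
    per_cell = sessions.groupby("cell_id").agg(
        failure_rate=("failed", "mean"),
        median_kbps=("throughput_kbps", "median"),
    )
    print(per_cell.sort_values("failure_rate", ascending=False))

    # The same cut by device model would expose a buggy protocol stack instead
    print(sessions.groupby("device_model")["failed"].mean())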
One thing is for certain - the yardstick of "Dollars per gigabyte downloaded" is an extremely blunt tool to measure the profitability of mobile broadband, especially when opex costs around support and retention are included in the equation. There's no value in having a blazing-fast and ultra-cheap network, if users end up spending an extra 4 hours on the phone to customer care, complaining that they can't get access because of flaky software.
Note: The new Disruptive Analysis / Telco 2.0 report on Broadband Business Models is now available. Please email information AT disruptive-analysis DOT com for details.
Tuesday, March 30, 2010
LTE and offload - a few random questions
A quick series of observations:
1) It is highly likely that LTE will have to be provided (at least in part) by femtocells and/or WiFi access points, rather than being solely transmitted via the macro network. This will be for coverage reasons, especially for 2.6GHz, but also because of the limits of capacity which will be quickly reached in some areas.
2) Given rising traffic volumes, there will also be a strong desire to offload bulk IP traffic direct to the Internet close to the edge of the network, rather than backhauling it all through to the operator core & then out again
3) Voice will have to be carried in some form on LTE networks - whether it's packetised as VoLTE, VoLGA or Skype, or handled via CSFB.
So... there will need to be a mechanism to send VoIP to the operator core, but dump bulk web traffic locally. I guess that could be achieved by using separate APNs (or whatever they're called in LTE-speak).
Or, will it need the new "Selective IP Traffic Offload" standard being worked on by 3GPP?
Or, could you use some sort of dual-radio solution in handsets, sending certain traffic over the macro LTE network (or HSPA), while sending low-value data via WiFi / femto?
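To make the options a bit more concrete, here's a hypothetical sketch of the per-flow decision such a breakout function might take - the APN names and port ranges are placeholders, not anything taken from the 3GPP SIPTO work:

    # Hypothetical per-flow routing decision for selective offload: keep operator
    # voice/IMS traffic in the core, break bulk Internet traffic out locally.
    # APN names and port numbers are invented placeholders.
    from dataclasses import dataclass

    CORE_APN  = "ims.operator.example"      # assumed APN whose traffic stays in the core
    SIP_PORTS = {5060, 5061}
    RTP_PORTS = range(16384, 32768)

    @dataclass
    class Flow:
        dst_port: int
        apn: str

    def route(flow: Flow) -> str:
        # Operator VoIP (SIP signalling / RTP media) stays on the core path...
        if flow.apn == CORE_APN or flow.dst_port in SIP_PORTS or flow.dst_port in RTP_PORTS:
            return "core"
        # ...everything else gets dumped to the Internet close to the edge
        return "local breakout"

    print(route(Flow(dst_port=5060, apn=CORE_APN)))   # -> core
    print(route(Flow(dst_port=443, apn="internet")))  # -> local breakout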
Separately... presumably this also means that any VoIP that *does* go via an offload path (femto / WiFi) will need to be tunnelled back to the operator core via some sort of VPN.
So potentially we may see some LTE phones using CS-Fallback over a GAN-type tunnel.....
Monday, March 29, 2010
Hopefully, we'll be rid of the Twitter obsession soon
I've been a long-term Twitter skeptic.
I think it's value-negative, and of total irrelevance to anyone outside an unholy alliance of geeks, narcissists (politicians, celebrities etc), marketeers, "media" and their drone-like followers. It's mostly used by lazy journalists and broadcasters, as far as I can see.
I highlighted it as one of my "zeros" in my predictions for 2010.
So, it's edifying to see that the "growth" stats are proving my point for me.
The appearance of incredibly annoying floating Twitter buttons on some websites is a sign of desperation - and is hugely counter-productive, as visual spam of that sort is a great way to alienate people.
About time to swap the silly bird logo for a dodo, methinks.
Extinction beckons.
Network policy management and "corner cases"
I've been speaking to a lot of people about policy management recently, fitting in with the work I'm doing on mobile broadband traffic management, as well as the Business Models aspect of my newly published report on Broadband Access for Telco 2.0.
A lot of what I hear makes sense, at least at a superficial level. Certainly, I can see the argument for using PCRFs to enable innovative tariffing plans, such as offering users higher maximum speeds at different times of day, or using DPI or smarter GGSNs to limit access by children to undesirable sites.
But there's a paradox I see on the horizon. In the past, telcos (fixed and mobile) have been pretty obsessed with corner-cases. "What happens if a user tries to set up a 3-way call while they're switching between cells?", "What happens to calling-line ID when I'm roaming?" and so on. Sometimes this is because of regulatory requirements, sometimes it's because they're worried about the impact on legacy systems not being supported - and sometimes it just seems to be preciousness about some minor complementary service that nobody really cares about.
So what happens with *data* policy management and corner cases? What happens if I'm roaming and the local operator's policy conflicts with my home operator's? Do I get a subset or a superset? The lowest common denominator, or some sort of transparency? Imagine my home operator allows VoIP on its mobile broadband, but limits YouTube viewing to 100MB a month. But the visited network doesn't allow VoIP for its local customers, but also doesn't have the ability to discriminate video traffic - or, perhaps, applies some sort of compression via a proxy. Sure, everything might be backhauled via my home network.... or it might be offloaded locally.
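As a thought experiment, a "most restrictive wins" merge of home and visited policies might look like the sketch below - the policy fields and values are invented purely for illustration:

    # "Most restrictive wins" merge of a home and a visited policy. The policy
    # fields and values are invented purely for illustration.
    home_policy    = {"voip_allowed": True,  "video_cap_mb": 100,  "max_speed_mbps": 7.2}
    visited_policy = {"voip_allowed": False, "video_cap_mb": None, "max_speed_mbps": 3.6}

    def most_restrictive(home, visited):
        merged = {}
        for key, h in home.items():
            v = visited.get(key)
            if isinstance(h, bool):
                merged[key] = h and (v if v is not None else True)   # blocked anywhere -> blocked
            else:
                caps = [x for x in (h, v) if x is not None]
                merged[key] = min(caps) if caps else None            # tightest numeric cap wins
        return merged

    print(most_restrictive(home_policy, visited_policy))
    # {'voip_allowed': False, 'video_cap_mb': 100, 'max_speed_mbps': 3.6}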
[Side question - what happens to international data roaming traffic on a visited operator that does WiFi offload, provided by a separate managed offload operator?]
In a nutshell, I guess this boils down to "Policy Interoperability". And a need for policy IOT testing, on an ongoing basis. I strongly suspect this won't be as easy as many think.
Whether the "corner case" problems impact the overall use of policy management will probably depend on hard problems around local regulations and laws, I suspect. But as a customer, will I really be happy with having the most stringent superset of policies applied, if there are multiple operators involved in providing my connectivity?
Friday, March 26, 2010
Nokia acquisition of Novarra - fragmentation of optimisation?
Very interesting to see Nokia's acquisition of web/video optimisation & transcoding vendor Novarra, which has been quite widely deployed in operators looking to reduce data traffic sent over mobile networks.
The fascinating thing for me is that it's being pitched as a way to optimise web browsing on low-end Series 40 devices - in other words, it's *not* primarily about reduction in outright traffic levels for operators, which are dominated by laptops & top-end smartphones.
The other stand-out is that the acquisition is by *Nokia* and not NSN.
I've been giving a lot of thought recently to various ways to optimise / compress / offload / policy-manage mobile broadband networks, trying to work out a way of reconciling the different options available to operators.
As part of this, I've been looking at the approaches to transcoding and proxying of web traffic - either in centralised boxes from Novarra or peers like ByteMobile and Flash Networks - or by specific client/server implementations like RIM's NOC for BlackBerry Internet traffic, or the Opera Mini platform.
My view is that there will be no "one size fits all" approach to traffic management, and that operators will have to be smart about treating different use cases in different ways. This is less about treating traffic differently on a per-application basis, and more about device/business model/customer scenario segmentation.
My current thinking is that laptop traffic will probably be offloaded close to the edge of the network, especially with WiFi, or femtocells once it's easier to use techniques like SIPTO or Direct Tunnel to avoid congesting the mobile core with traffic that is 99.99% destined for the wider Internet.
Smartphone traffic will be part-offloaded, and part-optimised or policy-planned.
And, based on this acquisition, it increasingly looks like featurephone traffic will be optimised at the application level.
In addition, there are likely to be a range of general capacity improvements and efficiency gains in the RAN, backhaul and elsewhere - dealing both with total traffic volumes and signalling load.
More on this topic to come...
Non-user revenue models for broadband - excellent example from Vodafone
One of the major themes I explore in the new Telco 2.0 / Disruptive Analysis report on broadband is that of "non-user revenues", otherwise known as two-sided business models.
The basic concept behind a 2SBM is for an operator (fixed or mobile) to use its network and IT platform to derive revenues directly from end-users, and also from various "upstream" companies like advertisers, governments, content providers, application developers and so on.
The idea is that retail broadband revenues will start to flatten - and will need to be supplemented by new, advanced wholesale propositions. Some of these will be evolutions of current telco-to-telco wholesale (roaming, interconnect, MVNOs, dark fibre and so on), while others will involve selling broadband capacity to "non-users". The Amazon Kindle is a good example of this - it's Amazon paying for the connectivity for book downloads, not the end user through a separate subscription.
One particular opportunity identified in the report is for governments to pay for broadband services (either outright, or for specific capacity / capabilities) on behalf of their citizens. It might be that data connections are bundled into an e-Healthcare service, or perhaps in the context of Smart Grids.
Or, as Vodafone has illustrated this morning, a government agency like a local council or development authority might choose to sponsor fixed or mobile broadband connections for those beyond the "digital divide". In this example, it's unclear whether Voda is providing fully "open Internet" broadband, or a more restricted service just providing access to educational websites. Either way, it's a perfect example of "non-user revenue streams" and highlights the power of two-sided models to add incremental opportunities to an operator's existing maturing propositions.
This type of sponsored broadband is just one of a class of "new wholesale" approaches to selling access. Telco 2.0 / Disruptive Analysis has developed a unique forecast model which suggests that these types of innovative propositions could ultimately account for over 15% of the total broadband access market value globally.
The full dataset, analysis and modelling methodology is featured in the new Fixed & Mobile Broadband Business Models Report, which is now available for purchase.
To inquire, please contact Disruptive Analysis
Thursday, March 25, 2010
New Research Report on Fixed & Mobile Broadband Business Models
Disruptive Analysis' founder and director, Dean Bubley, has produced a new 248-page Strategy Report, written and published in collaboration with the Telco 2.0 Initiative, on the future of operator business models for both mobile and fixed broadband, spanning retail propositions and new, advanced wholesale offers.
The report examines critical issues such as:
- Whether operators can use policy management and deep packet inspection as the basis for new revenue streams from Internet companies
- The prospect of greater government intervention in broadband, through regulation, stimulus investment or major national projects like Smart Grids
- The opportunities from new approaches to selling broadband - tiering, prepaid, 3rd-party paid, capped, bundled with devices etc.
- Differentiated wholesale models for mobile and fibre-based networks, including "comes with data" propositions and advanced roaming.
Complete with comprehensive forecasts spanning retail and wholesale tiers, this report is a unique analysis of Business Model Innovation in broadband, separating the practical from the "wishful thinking".
The report also includes a very detailed "use case" chapter, looking at the opportunities for fixed/cable operators to assist their mobile-industry peers by providing "managed offload" capabilities via WiFi or femtocells.
I'll be highlighting specific themes from the report in coming weeks in future posts.
For more details, contents pages and an extract/summary, please contact Dean Bubley at Disruptive Analysis
Wednesday, March 24, 2010
Picocells and the return of the DECT guard band - New service in Netherlands
Many people forget that a similar technology - picocells - predates femtocells, having been around since 2001 or earlier. Picos have more capacity, but are considerably more expensive than femtos, and have required more expensive controllers and specialised installation procedures.
While many picos have been deployed by mobile operators for cheap "fill-in" coverage, or used in niche locations like oil rigs, ships or small islands, a more interesting business model was "Low Power GSM", pioneered in the UK with the auction of the 1800MHz DECT guard band four years ago. This enabled multiple new operators to bid for low-cost licences for indoor wireless services, using a thin sliver of unused 2G spectrum - especially enabling low-cost or free indoor private cellular.
I wrote about this here and closely watched the evolution of service launches, although uptake turned out to be comparatively slow. Cable & Wireless launched a corporate service for clients including Tesco, and Teleware has had some success with its Private Mobile Network. Two years ago, the market status was still limited.
One of the big problems has been for the new "indoor" operators to gain some sort of MVNO or roaming deal with the incumbent "outdoor" service providers, so they can provide a universal mobile coverage service. Perhaps unsurprisingly, the traditional "macro" operators have been unwilling to assist their new indoor-only cut-price competitors.
But something more interesting is occurring in the Netherlands. I wrote about 4 months ago that LPGSM was being enabled on a licence-exempt basis. And one of the companies that is now exploiting it has solved the indoor/outdoor conundrum, as it is *already* an MVNE, operating on Vodafone's Dutch network. Teleena announced its converged service yesterday.
Now obviously this is just GSM - so perhaps not much use for today's 3G smartphone-toting executive who finds that data services are sent over EDGE when in the office. Nevertheless, I'm considerably more positive about this type of approach than enterprise femtocells, which I continue to believe are unlikely to gain traction for many years.
Tuesday, March 23, 2010
How much mobile broadband traffic is outside the user's awareness?
There's a lot of talk about controlling mobile broadband traffic by segmenting customers, or by trying to use tariffing to "modify behaviour".
Some of this makes sense - perhaps offering discounts for certain times of day, or even for using uncongested cells.
But I see a problem, literally "in the background", which may stifle attempts to "make customers use networks more responsibly" by being more careful about which applications they choose to use, or how they consume bandwidth.
I suspect that the amount of data traffic and events that are outside users' conscious control is going to rise inexorably. It will be very difficult to charge users for downloads that did not arise from them specifically clicking on a link, or firing up a particular application.
I'm thinking about:
- Javascript on web pages fetching extra data or applications
- Automated software updates running in the background (eg virus profiles, OS security patches)
- Interrupted downloads resuming while mobile
- Retransmitted data when someone reloads a web page because it hangs
- Repeated downloads of the same content (eg email attachments) because a device doesn't allow "save as", or because the file system is too convoluted to find it anyway
- Push notifications which drive both signalling and media consumption
- Pings and keep-alives between applications and servers
- Tracking data like cookies
- Monitoring of data about device "state" by the operator, OEM or 3rd party
- Encryption overhead
- Firmware updates and patches
- Lack of clear delineation between local applications and cloud-based components
.. and so forth.
I think that factors like these will make any application-based policy and charging very difficult to realise for mobile broadband. Expecting the user to know what their phone or PC is doing on their behalf, in the background, is going to be a pretty tough sell. It's not user behaviour, it's device and application behaviour.
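To illustrate, here's a trivial sketch that tallies a hypothetical per-flow log into "user-chosen" versus "background" volume - the categories and numbers are made up:

    # Tallying a hypothetical per-flow log into user-chosen vs background volume.
    # Categories and megabyte figures are made up for illustration.
    BACKGROUND = {"os_update", "av_signatures", "keepalive", "push_notification",
                  "js_prefetch", "telemetry"}

    flows_mb = [
        ("youtube", 45.0), ("os_update", 120.0), ("web", 12.0),
        ("av_signatures", 8.0), ("keepalive", 0.5), ("email_attach_redownload", 6.0),
    ]

    background = sum(mb for kind, mb in flows_mb if kind in BACKGROUND)
    total      = sum(mb for _, mb in flows_mb)
    print(f"background share: {background / total:.0%} of {total:.0f} MB")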
Does any operator have liability insurance that covers them for the consequences of throttling virus update downloads?
And should the user be aware of any compression being conducted on their behalf by the network? If I download a 10MB attachment, but it was only 2MB over-the-air, how can I be certain I'm charged for the lesser amount? Or if I'm drip-fed a downloaded video from an operator's buffer, is there extra signalling involved that I'm supposed to pay for?
Later this week, I'm going to be announcing the publication of a report on Mobile and Fixed Broadband Business Models, written on behalf of my associates at Telco 2.0. One of the themes it looks at is the role of policy in helping operators define new operating and revenues models. My view is that it is very important, but some of the theoretical possibilities about charging are overstated, because of complexities like these. If you're interested in getting some more detailed information about the new report, please email information AT disruptive-analysis DOT com.
Monday, March 22, 2010
A quick tip for North American readers....
... if you represent a vendor (or 3rd party agency) and you want to convey relevance and understanding of the global telecom marketplace, you might want to consider putting an international dialling code in your email footers.
It's a small thing, but just putting 123 456 789 reduces your credibility and makes a company look really parochial, compared to +1 123 456 789.
Wednesday, March 17, 2010
WiFi offload will not always win out over femtocells
There seems to be an undercurrent of skepticism about some of the usage scenarios for femtos - in particular, the notion of why anyone should bother with femto offload, if an increasing set of devices all have WiFi anyway.
The Femto Forum has released a study on femto/WiFi coexistence which highlights some reasons it sees for combined deployment, but I think a couple of additional ideas are also useful.
One important factor is the difference between laptop PCs and smartphones:
- In a 3G-enabled laptop, WiFi is almost always switched on, but 3G is usually only on part of the time (not least because most users will only plug in the dongle when needed)
- In a 3G-enabled smartphone, 3G is almost always switched on, but WiFi is usually only on part of the time
And for both, 3G (or at least data access on the phone) is likely to be switched off when roaming internationally.
This will mean that laptop 3G data offload is probably best done via WiFi, but smartphones may be more femto-centric - especially as smartphones are more likely to have operator-branded services that make it advantageous to keep the traffic on-net. Laptop data is 99.999% straight to and from the web, with almost zero operator value-add, so it makes sense to dump it to the cheapest connection as often as possible.
But even that overlooks yet more layers of subtlety in terms of user behaviour.
I've been using an iPhone 3GS the last few weeks, and the battery life is atrocious. So I've done the usual power-management tricks of turning down screen brightness, turning off GPS, and manually switching off WiFi when I leave my house, only switching it back on when I know I have access to free WiFi elsewhere such as a hotel or certain cafes.
So although it's not hitting the cellular network all the time, and my home WiFi certainly takes quite a lot of the strain, it's certainly not able to offload everywhere where the operator might like, or might have WiFi offload deals with hotspot providers.
My two local branches of Starbucks give me a real sense of the paradox:
- I know I can get access to the BT Openzone WiFi for free with my Vodafone iPhone contract.... but I've got the WiFi switched off by default, it requires me to do some sort of registration process with my account number (which is at home) and 3G coverage works fine in both cafes, so I don't bother.
- For my 3UK dongle for my laptop, one cafe has good HSPA signal and the other is lousy. In the one which is lousy, I use the WiFi, which I can again get for free with a Starbucks loyalty card (there's no offload deal in place with 3). In the other, I always use the dongle as the WiFi access controller seems to have a 20-second setup time before hitting the splash page, and another 20 seconds before I get authenticated. And if I'm using the 3G, I'll switch off WiFi to save battery.
Now re-imagine these scenarios with a femtocell viewpoint. At home, I'd probably still use WiFi rather than a femto, as I'd expect it to be faster - and some pretty innocuous websites I visit fall foul of Vodafone's over-zealous content filter and get blocked. VoIP provider Fring's website is censored, for example. I can't be bothered to phone up for "adult access" permission, and anyway I'm sure there are other things in the T's and C's about fair usage that I can obviously ignore when I'm on my own WiFi. I obviously never use the 3UK dongle when I'm at home within range of WiFi, as my laptop just connects and authenticates immediately and without extra intervention.
The cafes are different - if there was a Vodafone femto and the iPhone switched onto it, I'd probably notice an improved performance and likely lower power consumption. Same thing with the 3UK dongle in the branch where I currently don't have coverage - I'd probably switch back from the WiFi with its more-cumbersome login process with passwords and splash screens.
But you've probably already spotted the problem - does Starbucks want 4 or 5 femtos in every branch, from different operators? Who would pay for them? The cafes already have "sufficient" connectivity for everyone via WiFi - they're unlikely to want to bear the cost of making things marginally more convenient for a select group of their customers.
The point here is that neat monoculture ideas - WiFi offload everywhere, or 3G offload to femtocells - do not fit with the annoying peculiarities of consumer behaviour. People very quickly find the most convenient / cheapest / fastest / best battery-saving strategies for their personal circumstances. It's very difficult for operators wanting to conduct offload to work their way around those optimisations - unless they use either very smart connection-management software, or very brute-force ways of ignoring the subtleties.
One other thing occurs to me: I wonder if it's possible to get the presence of a WiFi SSID to trigger a device to switch on 3G and look for a femto in that location. Or for the presence of a specific femto (on any carrier's network) to trigger a client to power up smartphone WiFi if it's switched off, and use that instead.
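Purely speculatively, that trigger logic might look something like this - the SSIDs, cell identifiers and radio controls are hypothetical placeholders, not any real platform API:

    # Speculative connection-manager triggers: a known SSID wakes up 3G to look for
    # a femto, and a known femto wakes up WiFi. All names are hypothetical, and the
    # Radio class is just a stand-in for real platform radio controls.
    from dataclasses import dataclass

    @dataclass
    class Radio:
        cellular_data_enabled: bool = False
        wifi_enabled: bool = False

    KNOWN_OFFLOAD_SSIDS = {"HomeNet", "Cafe-Partner-WiFi"}
    KNOWN_FEMTO_CELLS   = {"femto-0xF001"}

    def on_scan(visible_ssids, serving_cell, radio):
        # a familiar SSID hints a femto may also be deployed here: power up 3G data
        if visible_ssids & KNOWN_OFFLOAD_SSIDS:
            radio.cellular_data_enabled = True
        # camping on a known femto hints a partner hotspot is nearby: power up WiFi
        if serving_cell in KNOWN_FEMTO_CELLS:
            radio.wifi_enabled = True

    r = Radio()
    on_scan({"Cafe-Partner-WiFi"}, "macro-1234", r)
    print(r)   # cellular data switched on, WiFi still off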
Friday, March 12, 2010
The right way to sell mobile broadband....
Just seen at London's Victoria station - a (Sandisk-branded) vending machine selling memory cards, phone accessories... and mobile broadband USB dongles. Shrink-wrapped, no cumbersome sales or registration process, no annoying salesperson trying to upsell you.
Just data connectivity, on a stick.
Like memory, on a stick.
Yes, there's still absolutely a role for contracts, retail stores, expert sales executives and so on. But solely going that route leaves money on the table when someone wants a quick transactional purchase. When I get to Victoria, I want to hail a taxi, not sign up for a cab account. Same thing if I were dashing to catch a train and needed an Internet connection for my trip.
Monday, March 08, 2010
"You can't use my eyeballs for free"
Let's look forward 10 years.
We've all got augmented reality browsers on our handsets, or perhaps our 4G-connected sunglasses. They can overlay all sorts of data and images onto our field of view.
There's a plethora of micropayment systems available, accessible via APIs to any developer with the right tools.
There are open and closed appstores. Any app you can imagine is available for unlocked devices.
Operators are starting to monetise contextual advertising - there are digital posters, sponsorship of content, location-based coupons.
And then there's the sudden backlash.
"You can't use my eyes for free"
"My visual cortex isn't a dumb pipe"
"I spend lots of money on contact lenses & eyetests"
"Let's prioritise certain aspects of our visual input"
"We should charge advertisers for access to our retinas"
Until, finally, the inevitable:
"We've created a deep-photon inspection (DPI) application for your smart AR glasses, which uses new visual-processing chips to recognise low-value-per-wavepacket images. It blocks or degrades incoming advertising traffic, unless the advertisers pay you a fee for guaranteed quality-of-sight (QoS) and delivery"
So there's the challenge for all of you marketeers talking about the future of personalised, contextual advertising, based on data-mining our phones and location and Internet usage. If you reckon you're so good.... well, let's see you pay us a deposit before you beam your messages to us. If it's relevant, informative or entertaining, we'll give you a refund and might even buy your product.
Friday, March 05, 2010
Will MIMO work indoors?
This post is more of a question than an answer.
Many larger buildings (airports, shopping malls etc) have various forms of indoor coverage - active and passive distributed antenna systems (DAS), in particular. These usually involve connecting small base stations - often from multiple network operators - to a network of antennas, splitters and other paraphernalia around the building.
All of which is fine.... until we get to technologies like LTE and some variants of WiMAX and HSPA+, which use MIMO. Multiple-input, multiple-output technology relies on multiple antennas at each end of the radio link, exploiting separate spatial paths to boost speed and capacity.
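To see why the antenna arrangement matters, here's a small numpy sketch comparing the Shannon capacity of a 2x2 channel with independent spatial paths against one where both branches effectively arrive via a single shared feed (as might happen over a legacy single-run DAS) - the SNR and channel values are illustrative assumptions:

    # Comparing 2x2 MIMO capacity with independent spatial paths against a rank-1
    # channel (e.g. both branches fed through one shared antenna run). SNR and
    # channel values are illustrative assumptions only.
    import numpy as np

    def mimo_capacity(H, snr_linear):
        """Shannon capacity in bit/s/Hz, equal power split across transmit antennas."""
        n_rx, n_tx = H.shape
        m = np.eye(n_rx) + (snr_linear / n_tx) * (H @ H.conj().T)
        return float(np.log2(np.linalg.det(m).real))

    snr = 10 ** (20 / 10)                      # 20 dB SNR, linear
    rng = np.random.default_rng(0)
    h_rich = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
    h_single_feed = np.ones((2, 2), dtype=complex)   # fully correlated: no multiplexing gain

    print(f"independent paths : {mimo_capacity(h_rich, snr):.1f} bit/s/Hz")
    print(f"single shared feed: {mimo_capacity(h_single_feed, snr):.1f} bit/s/Hz")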
I've recently asked a few people the question "So, how does MIMO work with DAS systems installed historically in large buildings?"
The usual response has been "Errr...... that's a good question. Not sure".
I bounced this one off a DAS vendor this morning (Commscope's Andrew division), and got an answer that it *should* all work with their recently installed systems. Asking whether older systems, or other vendors' installs, will need upgrading got a less clear answer.
So, a set of questions for anyone who might have looked at this already:
- Do older DAS installations work OK with MIMO?
- Does LTE (or WiMAX or HSPA+) work properly when MIMO doesn't do what it's supposed to? What are the side-effects? (Slower speeds? Lower aggregate capacity?)
- Are the effects made worse when you go to 4x4 or more complex versions?
- How do you test all this?
- If certain installations don't work OK, how much will it cost to fix them?
- And while we're on the in-building topic, will the older implementations support new bands like 700MHz and 2600MHz OK as well?
Netbooks - a skewed view on mobile broadband
In recent months, I've noticed an interesting misconception.
Some observers - notably from North America - seem to be under the impression that netbooks are, by default, mobile-connected. And, in particular, that they are major contributors to 3G data traffic.
As far as I know, this is not really true. Yes, in the US (and, I think, Japan), a fairly decent proportion of netbooks are sold through mobile carrier channels, with embedded or bundled 3G modems, usually on monthly plans.
Elsewhere, although that model exists, it is far from the most important. The majority of netbooks are sold through ordinary PC retail, corporate or online channels. And these generally do not have in-built wireless modules. Yes, some get used with USB 3G dongles (such as the one I'm writing this post on, from my local cafe), but many are just used with WiFi or even ethernet.
In 2010, there will probably be around 40m netbooks shipped. I'd be surprised if more than 10% are sold with built-in 3G, with maybe another 10-15% used with separate dongles. Ordinary retail netbooks rarely ship with 3G modules, as the cost is a very large % of the manufacturer gross margin - so the OEM won't wear the cost unless they're certain of either getting a higher retail price (unlikely as dongles are cheap), or some form of bounty from operators when/if customers sign up (rare).
The majority of PC-based mobile broadband traffic is generated by ordinary, larger *notebooks*, not netbooks, as both the installed base and new shipments are far higher than those for netbooks.
Thursday, March 04, 2010
CTIA....
... I'm not going to be there, so please don't bother inviting me to events or meetings. (If it's like MWC, for which I got about 250 invites, I probably won't get a chance to write individual "no thanks" replies)
Thanks.
Tuesday, March 02, 2010
Mobile traffic management - video confusion
A lot of my meetings at MWC two weeks ago were about managing mobile broadband traffic - by offload, by compression, by policy and various other means.
There is a total lack of agreement on where the emphasis should be. Everyone has a solution - and it's far from obvious that they aren't pulling in opposite directions. Dump most traffic to WiFi, and it becomes much harder to justify restrictive policies when the phone is "on net". Offer "premium" or "platinum" connectivity - and get let down by poor coverage. Put femtocells in place - and then try to distinguish femto vs. macro traffic in the policy engine, because throttling someone's 3G access *on their own cell & broadband* is a recipe for disaster.
And all this is before we get to the fun-and-games which will ensue in a world with widespread connection-sharing.
The starkest choices seem to be around "what to do about mobile video". Endless copies of Cisco's Visual Networking Index charts were trotted out (or other large vendors' inhouse equivalents) showing terrifying increases in forecast mobile video traffic, swamping all other types. Even leaving aside that other new apps might be even more traffic-generative (maybe augmented reality, for example), there are still huge issues about how to treat video.
First off, let's be clear - there is no single application called "video". There's a mix of downloads and streaming, one-way and two-way, embedded and standalone, numerous codec and frame rate options and so on.
In the eyes of a user, clicking "I like" for a YouTube clip shared on Facebook by a friend is a very different application from watching the same thing on YouTube.com in a separate window. Context is very difficult to manage in a policy engine, especially if it's a web mash-up.
One theme that I heard repeated a few times was the idea that some sort of server or gateway would intercept inbound video traffic on an MNO's main Internet connection, spot who/what was requesting it, and (let's be polite here) "mess about with it" in order to reduce the impact on the network.
It's a very similar idea to the various Web messing-about-with engines ("transcoders" or "content adaptation") we've seen before from the likes of Novarra, ByteMobile and Flash Networks. Yes, they sometimes reduce load, but when first introduced they mangled content and advertising, caused problems with mobile commerce and did some horrible things to sites that were *already* mobile-optimised by the original website owner. Over time, the worst effects of these seem to have been ameliorated, with some sort of grudging consensus now almost attained.
The same set of companies, as well as a group of newer entrants like Acision and Ericsson, are now talking up the idea of compressing video traffic on the fly to protect the network. One comment made was that the network could choose to buffer video traffic itself, rather than let the phone/PC buffer fill up quickly over the air. YouTube was cited as being "aggressive" in encouraging users to download whole videos upfront, when sometimes they closed the video window and "wasted" a lot of bandwidth because they hadn't watched the whole thing.
To punish this sort of profligate downloading behaviour, it's envisaged that the network could either compress the video traffic, or "drip feed" the buffer.
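To be clear about what "drip-feeding" means in practice, here's a minimal, purely illustrative sketch. It assumes a hypothetical gateway that releases chunks at roughly 1x playback rate rather than letting the client buffer ahead - real products work on TCP flows, not neat chunks, so treat this as a cartoon of the idea:

```python
import time

def send_to_client(chunk: bytes) -> None:
    """Placeholder for whatever transport the gateway actually uses."""
    pass

def drip_feed(video_chunks, playback_bitrate_bps: int) -> None:
    # Release each chunk no faster than ~1x the nominal playback rate,
    # rather than letting the client fill its buffer as fast as the radio allows.
    for chunk in video_chunks:
        send_to_client(chunk)
        time.sleep(len(chunk) * 8 / playback_bitrate_bps)
```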
Cue consternation from other people they haven't yet spoken to.
First up are the radio guys, who, paradoxically, would much *rather* see whole videos downloaded fast than drip-fed. It's more efficient for base stations to blast as much traffic as possible at a user in a short space of time, if the cell isn't actually congested. And it's also much better for the user's battery to download a big gulp of data in one go and then switch the radio off. Keeping the connection up for longer may also increase the signalling load, even if the actual traffic is reduced.
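A crude model shows why the radio people get twitchy. The power and tail-time figures below are illustrative assumptions, not measurements, but the shape of the result holds: the drip-fed case keeps the radio up for far longer, and burns far more energy for the same video.

```python
# Illustrative-only comparison of radio energy for a ~50 MB video.
# Power and tail-time figures are placeholder assumptions, not measurements.
VIDEO_BYTES = 50 * 1024 * 1024
RADIO_ACTIVE_POWER_W = 1.2      # assumed 3G radio power while transferring
RADIO_TAIL_SECONDS = 10         # assumed time the radio stays up after the transfer

def radio_energy_joules(throughput_bps: float) -> float:
    transfer_s = VIDEO_BYTES * 8 / throughput_bps
    return (transfer_s + RADIO_TAIL_SECONDS) * RADIO_ACTIVE_POWER_W

burst = radio_energy_joules(6_000_000)   # fast burst at ~6 Mbit/s
drip = radio_energy_joules(500_000)      # drip-fed at ~0.5 Mbit/s
print(f"Burst download: {burst:.0f} J, drip-feed: {drip:.0f} J")
```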
There is also an interesting regulatory angle here - if mobile broadband speeds have to be advertised on the basis of average actual speeds, rather than theoretical peaks, then the drip-feed approach could have serious marketing consequences.
The compression side of the equation is also likely to be extremely controversial. There's an interesting set of issues of liability if you've paid for HD video (or an advertiser has), and the network arbitrarily steps in and mangles it for you during delivery. There's another interesting set of questions of whether the network can work out exactly what video stream to compress, if it's sent via a CDN like Akamai, which might peer at various points in the network.
But all of these are minor compared to the likely outrage from various video content and aggregation companies that have their content "messed about with".
Put simply, any attempts to "optimise" video in the core of the network are likely to be very ham-fisted, especially if they attempt to compress or transcode. The core is unlikely to be able to know how best to compress a video as it will vary immensely on a genre-by-genre or even scene-by-scene basis.
Content can be delivery-optimised in various ways: frame rate, frame size, key-frame interval, audio bitrate and so on all affect bandwidth. But the best encoding recipe depends heavily on the type of content - sport, talking heads and music video arguably each demand a different recipe. Think of an interviewer static in a chair with a fixed background, versus a live-streamed F1 race. Applying a single policy to both videos is likely to be very crude, and result in a very poor user experience.
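As a rough illustration of why one recipe can't fit all genres, consider a simple bits-per-pixel estimate. The profiles below are invented for the example - they're not real operator or publisher settings - but they show how far apart "acceptable" bitrates can sit:

```python
# Hypothetical per-genre encoding profiles. The numbers are illustrative,
# chosen only to show why a single network-wide policy cannot fit all content.
PROFILES = {
    "talking_heads": {"width": 320, "fps": 15, "bits_per_pixel": 0.08},
    "music_video":   {"width": 480, "fps": 25, "bits_per_pixel": 0.12},
    "live_sport":    {"width": 480, "fps": 30, "bits_per_pixel": 0.20},
}

def estimated_kbps(profile: dict) -> float:
    height = profile["width"] * 9 // 16                 # assume 16:9 frames
    pixels_per_second = profile["width"] * height * profile["fps"]
    return pixels_per_second * profile["bits_per_pixel"] / 1000

for genre, p in PROFILES.items():
    print(f"{genre}: ~{estimated_kbps(p):.0f} kbit/s")
```

A blanket "compress everything by X%" rule would barely matter to the talking-heads clip, but would visibly mangle the sport stream.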
Some policies are already causing consternation - for example, blocking low-overhead delivery protocols such as RTSP/RTP over UDP, forcing publishers onto less efficient alternatives.
Other side-effects of network-centralised video policy management may be to push a greater processing burden down onto the handsets (and hence their batteries) - for example, by re-coding video into formats like H.264. As well as requiring a lot of network-side processing power to do this in real time, we could end up with the interesting situation of power consumption differing on a per-network basis. That could make for some interesting marketing angles for operators: "Get your XYZ handset on the network that gives you 3 hours longer between recharges!"
(Hat-tip here to input from my friend David Springall, CTO of Yospace, which provides platforms to mobile video content publishers).
One answer seems to be to optimise mobile video through a balanced combination of editorial control and network technology. If operators (or their network partners) provide adequate guidelines on making mobile-friendly video, it may be that providers all the way back through the value chain can help - for example, content can be editorially controlled to be more compressible, such as by avoiding unnecessarily moving cameras during production. This is similar to attempts by operators and device vendors to encourage their app developers to follow "network friendly" approaches like fewer server pings.
It seems unlikely that video publishers will pay operators "cold hard cash" for QoS (or even an SLA) on mobile broadband, despite the rhetoric around DPI / policy management boxes. It's simply too difficult to control end-to-end quality as far as the device UI - and *prove* it to the content provider. But publishers would likely be willing to do what they can with their video content to make it 'work better' on mobile networks. Although they generally lack the technical ability to do this themselves, they use third-party platform providers to deliver their video, and it is these third parties that can implement whatever the networks consider best practice.
Operators could publish clear guidelines to help third parties optimise video on their networks; platform providers should work to these guidelines in the interests of delivering a better user experience (assuming some measure of consistency across operators, of course - it's unlikely that they would wish to see 800 different sets of rules across the world, or worse, 2400 if they vary between 2G, 3G and pseudo-4G networks). It would also help video content publishers if MNOs provided APIs giving more information about network quality in real time, allowing platform providers to be more sympathetic to the network. While this may also be feasible via handset-side APIs, the network could provide a more consolidated view. Over time, this could even evolve into cell-by-cell "traffic reports", allowing video to be delivered differentially to users in the best or worst radio or backhaul congestion conditions.
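No such API exists today, but a sketch shows how simple the consumption side could be for a platform provider. The endpoint, response schema and bitrate ladder here are entirely hypothetical - the point is only that the logic on the publisher side is not hard, if the network exposes the data:

```python
import json
import urllib.request

# Entirely hypothetical endpoint and schema - no such standard API exists.
# This sketches how a video platform might consume cell-level quality data.
QUALITY_API = "https://api.example-mno.net/v1/cell-quality"

RENDITIONS_KBPS = [1200, 700, 350, 150]   # assumed pre-encoded bitrate ladder

def pick_rendition(cell_id: str) -> int:
    with urllib.request.urlopen(f"{QUALITY_API}?cell={cell_id}") as resp:
        report = json.load(resp)           # e.g. {"available_kbps": 420}
    budget = report["available_kbps"]
    # Choose the highest rendition that fits within the reported headroom.
    for kbps in RENDITIONS_KBPS:
        if kbps <= budget:
            return kbps
    return RENDITIONS_KBPS[-1]             # fall back to the lowest rendition
```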
Depending on the laws in each country, well-behaving video traffic (however that is measured and defined) could be prioritised in some circumstances.
One issue that is likely to crop up is how to deal with PC-based video traffic transiting mobile broadband networks via 3G dongles. This is a little more problematic than smartphone video, as dongle-based broadband is usually marketed as a direct alternative to ADSL or cable. Nevertheless, some form of in-application smarts could still allow optimisation - it's been common for years to offer a choice of broadband speeds or connection types before starting a video. A simple prompt ("Are you watching on ADSL, LAN, 3G or WiFi?") might put the user back in their normal state of control. There's also a very strong argument to prefer outright offload of PC-Internet traffic to WiFi (or femto), avoiding the core network and the compression argument altogether.
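On the client side, that sort of prompt maps straightforwardly onto a delivery profile. Again, the categories and bitrates below are illustrative assumptions, not anyone's real product settings:

```python
# Sketch of the "which connection are you on?" prompt mapped to a delivery
# profile. Categories and bitrates are illustrative assumptions only.
DELIVERY_PROFILES_KBPS = {
    "LAN": 2500,
    "ADSL": 1500,
    "WiFi": 1200,
    "3G": 400,
}

def choose_profile(user_answer: str) -> int:
    # Fall back to the most conservative profile if the answer is unrecognised.
    return DELIVERY_PROFILES_KBPS.get(user_answer,
                                      min(DELIVERY_PROFILES_KBPS.values()))

print(choose_profile("3G"))   # -> 400
```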
Overall, I definitely think that the question of how to deal with mobile video is still at very early stages. Although various solutions are emerging, it seems probable that the initial knee-jerk attempts to reduce network load will have some fairly nasty unintended consequences in terms of user experience, device battery life, radio performance and content-mangling - and hence generate support calls and dissatisfaction. The idea that the network can somehow magically reduce the impact of video traffic, without any input from video publishers or aggregators / platforms, seems misguided.