Speaking Engagements & Private Workshops - Get Dean Bubley to present or chair your event

Need an experienced, provocative & influential telecoms keynote speaker, moderator/chair or workshop facilitator?
To discuss Dean Bubley's appearance at a specific event, contact information AT disruptive-analysis DOT com

Wednesday, December 03, 2008

The mobile industry buzzword of 2009 will be......


This year has been all about mobile broadband revenue and traffic growth. Dongles, iPhones, embedded PCs, Android, consumer BlackBerries, Nokia E/N series.

But there is a mismatch. While operator data revenues might have risen 50% or 100%, 3G traffic has gone up by 500% or 1000%.
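
To make that squeeze concrete, here's a quick back-of-envelope calculation. The growth figures are just the midpoints of the ranges above, normalised to a starting value of 1 - not real operator data:

```python
# Illustrative only: assumed midpoint growth figures, not operator data.
revenue_growth = 0.75   # midpoint of the 50-100% revenue rise
traffic_growth = 7.50   # midpoint of the 500-1000% traffic rise

rev0, traf0 = 1.0, 1.0                 # normalised starting revenue & traffic
rev1 = rev0 * (1 + revenue_growth)     # 1.75
traf1 = traf0 * (1 + traffic_growth)   # 8.5

per_mb_before = rev0 / traf0
per_mb_after = rev1 / traf1
print(f"Revenue per unit of traffic falls to "
      f"{per_mb_after / per_mb_before:.0%} of its old level")
```

In other words, on those assumptions the revenue earned per megabyte carried drops to roughly a fifth of what it was - which is fine while spare capacity is free, and painful once it isn't.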

Until now this has largely been absorbed by existing 3G/HSDPA capacity that had been lying dormant since the original deployments. Clearly, this has been perceived as beneficial - generating at least some revenue from data is better than nothing, and there are also signs of additional upside in using mobile broadband as a retention tool.

But the storm clouds are gathering, in my view. Not everywhere - some operators, and some parts of their networks, are more exposed than others. In the US, traffic is being driven more by the iPhone and other "superphones", while in Europe it's consumer use of 3G dongles. Given variations in population density, cellsite locations (and planning processes), spectrum allocations, speed of backhaul upgrade & numerous other factors, it's certainly unlikely that the whole industry will grind to a congested halt.

But while some networks will be more robust than others, that doesn't mask a simple fact - the macrocell capacity of 3G - or even WiMAX or LTE - is not unlimited. While it can be tweaked and optimised, with more spectrum and MIMO and improved coding and other tricks, the laws of physics start to intervene.

Put simply, I reckon that the theoretical, mid-term, aggregate capacity of all operators' macrocell mobile broadband in a given urban location is in the range of 1-3Gbit/s per square kilometre. In other words, all the mobile capacity in that area equates to a single fibre used for current-generation metro ethernet.
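
For what it's worth, here's the flavour of arithmetic behind that estimate. Every input below is an assumption chosen for illustration - real site densities and busy-hour sector throughputs vary widely:

```python
# Rough back-of-envelope; all inputs are illustrative assumptions.
operators = 4          # networks covering the same urban area
sites_per_km2 = 3      # macrocell sites per operator per square km
sectors_per_site = 3   # typical tri-sector macrocell
mbps_per_sector = 30   # assumed busy-hour average per HSPA/LTE sector

total_mbps = operators * sites_per_km2 * sectors_per_site * mbps_per_sector
print(f"Aggregate macrocell capacity: ~{total_mbps / 1000:.1f} Gbit/s per km^2")
```

With those numbers the total comes out at about 1.1 Gbit/s per square kilometre - the bottom of the 1-3 Gbit/s range; more aggressive assumptions on spectrum and sector throughput push towards the top of it.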

Yes, that's quite a lot of traffic. But it would get absorbed very quickly if used for real "heavy lifting" applications like corporate data, HD TV, mass use of P2P and so on. The growing availability of HSPA and WiMAX devices with good browsers and big screens represents an ideal breeding ground for the next "viral" application after social networking.

It's not just the radio network that's a future bottleneck either. It's also the backhaul transport, the core & gateway elements like SGSN and GGSN, any ancillaries like DNS servers and so on. The usual steady onward march of mobile technology generations is impressive: HSPA+, LTE, SAE etc - but it's not quite up to scaling at growth rates more generally expected of fixed-line ISPs.

The only answer I can see to this is offload. Take the traffic off the macro network, and off the existing backhaul and core, as far and as fast as possible.

There are various solutions to this:

  • Femtocells - these are the most visible heroes of the offload strategy, but I'm not convinced they'll ride in for the rescue quite quickly enough. There's also not enough emphasis on local breakout onto the Internet - the mobile industry still wants to funnel everything through the femto gateway & GGSN to retain control.
  • WiFi and dual-mode devices are due a resurgence - both in homes/offices and in public locations. There's a lot out there already that can be exploited: hence AT&T's acquisition of Wayport
  • Flattened IP cores, bypassing the SGSN. Ericsson and Nokia-Siemens Networks have already been deploying these for certain carriers.
  • Optimised backhaul - there are various strategies here, including shunting all the Internet-destined traffic onto higher-bandwidth / lower-QoS / lower-cost connections, keeping voice and other priority traffic separate.
  • Smarter and ultimately software-defined radios that can choose less-congested frequencies or technologies, or operate in shared spectrum like white spaces.
  • Content delivery networks (CDNs) can also spare the operator core network the pain of dealing with some of the really high-volume traffic - although these don't yet deliver rich media like video direct to the base station. As we move towards IP-based RANs, that should also improve.
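
As a toy illustration of the backhaul-splitting idea in the list above - the traffic classes and link names here are invented for the example, not operator terminology:

```python
# Toy sketch: steer priority traffic (voice, signalling) onto the
# guaranteed-QoS backhaul link and bulk Internet traffic onto a
# cheaper best-effort link. Class and link names are made up.
PRIORITY_CLASSES = {"voice", "signalling"}

def route(traffic_class: str) -> str:
    """Pick a backhaul link based on the traffic class."""
    if traffic_class in PRIORITY_CLASSES:
        return "guaranteed-backhaul"
    return "best-effort-backhaul"

for cls in ["voice", "web", "p2p", "signalling"]:
    print(cls, "->", route(cls))
```

The economics work because the best-effort link can be over-provisioned cheaply, while the expensive guaranteed link stays small and lightly loaded.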

Of course, all these are very network-centric approaches. My expectation is that device, OS and application vendors will also take matters into their own hands, and develop their own offload approaches. There will be a rise in smarter connection managers and APIs that allow apps to pick the appropriate bearer and adjust their traffic profile to suit it. They'll monitor congestion, latency and packet loss. They'll actively look for their own offload channels, especially via WiFi.
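
A hypothetical sketch of what such a bearer-aware connection manager might do - none of these class or function names come from any real device API, they just illustrate the selection logic:

```python
# Hypothetical connection-manager sketch; all names are invented.
from dataclasses import dataclass

@dataclass
class Bearer:
    name: str
    latency_ms: float
    loss_rate: float   # fraction of packets lost
    congested: bool

def pick_bearer(bearers, max_latency_ms=150, max_loss=0.02):
    """Prefer an uncongested bearer meeting the app's latency/loss needs."""
    candidates = [b for b in bearers
                  if not b.congested
                  and b.latency_ms <= max_latency_ms
                  and b.loss_rate <= max_loss]
    # Fall back to the full list if nothing qualifies.
    pool = candidates or bearers
    # Uncongested first, then lowest latency.
    return min(pool, key=lambda b: (b.congested, b.latency_ms))

links = [Bearer("3G macro", 120, 0.010, True),
         Bearer("WiFi",      40, 0.005, False),
         Bearer("femto",     60, 0.000, False)]
print(pick_bearer(links).name)
```

Here the congested macro layer loses out and WiFi wins on latency - exactly the kind of app-side offload decision I'm expecting.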

The bottom line - 2009 will be about "offload" from a network viewpoint, and "connection optimisation" from an app/handset viewpoint. Much of the time the strategies will be aligned, but there will also be some conflicts.

I also refer to the "capacity crunch" issue in the new December 2008 Disruptive Analysis research report on Mobile Broadband Computing. For details see here.


Anonymous said...


Very interesting post. I know you have been talking about this - 90%+ of mobile data traffic is generated by PCs, smartphones and superphones, the common thread being WiFi availability in these terminals. How is the femto solution (as opposed to WiFi offload) useful here, given all its other baggage? As a solution for voice coverage, I can see why femtocells are being deployed (even if there are several technical issues, as I point out in my blog: http://mobilebroadbandblog.wordpress.com/2008/11/14/femtocell-deployments-how-are-they-doing/), but as a solution for data offload, I am still scratching my head. Am I missing something here?

Dean Bubley said...


It's a tricky one this.

Yes, lots of devices have WiFi in them, which in theory reduces the potential for femtos.

However, the software involved in operators integrating with WiFi is non-trivial, especially where it is owned by an enterprise, or provided by a 3rd-party ISP's home gateway.

On the other hand, offload to femtos also obviously isn't mature yet at various levels.

I expect it's going to be a mix - with nothing being an easy win. Much will depend on work on the connection manager layer in the device OS, plus auto-authentication tools etc.

In some cases, it may be easier (practically) to offload to femto, as it can essentially be done transparently in the modem. On the other hand, that might not be optimal for certain applications.

I still think that in an ideal world, apps are bearer-aware, and choose the optimal connection route - macro, femto, WiFi, whatever.


Anonymous said...

You have some eye catching numbers in this post. It would be great if you would share any specific data on the 500-1000% traffic statistic (e.g. country locations, urban/metro?, constant or peak etc.).

It would also be interesting to hear your views on profitability; revenue is rocketing, traffic is sky-rocketing and the cost of carrying that traffic (2G -> 3G) is plummeting. Overall, do you have any numbers on profitability?

Anjan said...


I think the debate of Femto vs WiFi for data offload is alive and well, and in my opinion boils down to a trade-off.

Both connect to the core network via a generally untrusted network over which the carriers have no control (i.e. internet). So on this part both are even.

The pro for the Femto is that it better integrates with the core network and works with any cellular device. The downside, of course, is having to deploy new equipment (the femtocell) at the customer site - both a cost and a logistics challenge.

The biggest thing going for WiFi is that it is ubiquitous and requires no additional CPE. The downside is that it only works when you have WiFi coverage and a WiFi-enabled handset, and in a lot of cases it requires a special client on the handset.

What we see is that the 'forward-leaning' operators are willing to take on a Femto solution. The more risk-averse players may stick with WiFi to meet their offload needs.

SolMan said...

Thank you, Dean, your insights are just that.
Here's one: a start-up company, creator of carrier-neutral metro-wide WiFi networks, integrating a full suite of 3G-quality value-added services and utilizing long-range super-WiFi antennas (GoNet, BelAir).

Check out Blu Linx Technology
[ www.blu-linx.com ]

You can contact "SolMan" at turner.esq@gmail.com