This year has been all about mobile broadband revenue and traffic growth. Dongles, iPhones, embedded PCs, Android, consumer BlackBerries, Nokia E/N series.
But there is a mismatch. While operator data revenues might have risen 50% or 100%, 3G traffic has gone up by 500% or 1000%.
Until now this has, largely, been absorbing existing 3G/HSDPA capacity that has been lying dormant since original deployment. Clearly, this has been perceived as beneficial - generating at least some revenue from data is better than nothing, and there are also signs of additional upside in using mobile broadband as a retention tool.
But the storm clouds are gathering, in my view. Not everywhere - some operators, and some parts of their networks, are more exposed than others. In the US, traffic is being driven more by the iPhone and other "superphones", while in Europe it's consumer use of 3G dongles. Given variations in population density, cellsite locations (and planning processes), spectrum allocations, speed of backhaul upgrade & numerous other factors, it's certainly unlikely that the whole industry will grind to a congested halt.
But while some networks will be more robust than others, that doesn't mask a simple fact - the macrocell capacity of 3G - or even WiMAX or LTE - is not unlimited. While it can be tweaked and optimised, with more spectrum and MIMO and improved coding and other tricks, the laws of physics start to intervene.
Put simply, I reckon that the theoretical, mid-term, aggregate capacity of all operators' macrocell mobile broadband in a given urban location is in the range of 1-3Gbit/s per square kilometre. In other words, all the mobile capacity in that area equates to a single fibre used for current-generation metro ethernet.
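That per-square-kilometre figure can be sanity-checked with some back-of-the-envelope arithmetic. The parameter values below (operator count, site density, spectrum, spectral efficiency) are illustrative assumptions, not measurements:

```python
# Rough sanity check of aggregate macrocell capacity per square kilometre.
# All parameter values are illustrative assumptions, not measured data.

OPERATORS = 4            # networks sharing the same urban area
SITES_PER_KM2 = 4        # macrocell sites per operator per km^2
SECTORS_PER_SITE = 3     # typical tri-sector configuration
SPECTRUM_HZ = 20e6       # downlink spectrum per operator (20 MHz)
SPECTRAL_EFF = 1.5       # average bit/s/Hz across a loaded cell (MIMO, good coding)

def aggregate_capacity_bps() -> float:
    """Aggregate downlink capacity of all operators' macrocells in 1 km^2."""
    per_sector = SPECTRUM_HZ * SPECTRAL_EFF
    return OPERATORS * SITES_PER_KM2 * SECTORS_PER_SITE * per_sector

if __name__ == "__main__":
    gbps = aggregate_capacity_bps() / 1e9
    print(f"~{gbps:.2f} Gbit/s per km^2")
```

With these assumed numbers the result lands at roughly 1.4 Gbit/s, inside the 1-3 Gbit/s range; vary site density or spectrum and it moves around within the same band.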
Yes, that's quite a lot of traffic. But it would get absorbed very quickly if used for real "heavy lifting" applications like corporate data, HD TV, mass use of P2P and so on. The growing availability of HSPA and WiMAX devices with good browsers and big screens represents an ideal breeding ground for the next "viral" application after social networking.
It's not just the radio network that's a future bottleneck either. It's also the backhaul transport, the core & gateway elements like SGSN and GGSN, any ancillaries like DNS servers and so on. The usual steady onward march of mobile technology generations is impressive: HSPA+, LTE, SAE etc - but it's not quite up to scaling at growth rates more generally expected of fixed-line ISPs.
The only answer I can see to this is offload. Take the traffic off the macro network, and off the existing backhaul and core, as far and as fast as possible.
There are various solutions to this:
- Femtocells - these are the most visible heroes of the offload strategy, but I'm not convinced they'll ride to the rescue quite quickly enough. There's also not enough emphasis on local breakout onto the Internet - the mobile industry still wants to funnel everything through the femto gateway & GGSN to retain control.
- WiFi and dual-mode devices are due a resurgence - both in homes/offices and in public locations. There's a lot out there already that can be exploited: hence AT&T's acquisition of Wayport.
- Flattened IP cores, bypassing the SGSN. Ericsson and Nokia-Siemens Networks have already been deploying these for certain carriers.
- Optimised backhaul - there are various strategies here, including shunting all the Internet-destined traffic onto higher-bandwidth / lower-QoS / lower-cost connections, keeping voice and other priority traffic separate.
- Smarter and ultimately software-defined radios that can choose less-congested frequencies or technologies, or operate in shared spectrum like white spaces.
- Content delivery networks (CDNs) can also spare the operator core network the pain of dealing with some of the real high-volume traffic - although these don't yet deliver rich media like video direct to the base station. As we move towards IP-based RANs, that should also improve.
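The backhaul-optimisation idea in that list - voice and other priority traffic on a guaranteed-QoS link, bulk Internet traffic shunted onto a cheaper best-effort one - amounts to a simple classifier. The traffic classes and link names in this sketch are hypothetical:

```python
# Sketch of backhaul traffic separation: priority traffic stays on a
# guaranteed-QoS link, bulk Internet-destined traffic is shunted to a
# cheaper, higher-bandwidth best-effort link.
# Class names and link names are hypothetical, for illustration only.

PRIORITY_CLASSES = {"voice", "signalling", "operator_services"}

def pick_backhaul_link(traffic_class: str) -> str:
    """Return the backhaul link a given traffic class should use."""
    if traffic_class in PRIORITY_CLASSES:
        return "leased_line_qos"      # low-jitter, guaranteed bandwidth
    return "metro_ethernet_bulk"      # cheap, high-capacity, best-effort

if __name__ == "__main__":
    for tc in ("voice", "web_browsing", "p2p"):
        print(tc, "->", pick_backhaul_link(tc))
```

The point is that the expensive, scarce resource (guaranteed backhaul) only carries the traffic that actually needs it.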
Of course, all these are very network-centric approaches. My expectation is that device, OS and application vendors will also take matters into their own hands, and develop their own offload approaches. There will be a rise of smarter connection managers and APIs that will allow the apps to pick the appropriate bearer and adjust their traffic profile to suit it. They'll monitor congestion, latency and packet loss. They'll actively look for their own offload channels, especially via WiFi.
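A connection manager of that kind might score each available bearer on measured latency and packet loss, with a mild preference for paths that offload the macro network. A minimal sketch, with made-up bearer names and measurement figures:

```python
# Minimal bearer-selection sketch for a "smarter connection manager".
# Bearer names, weights and measurement figures are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Bearer:
    name: str
    latency_ms: float   # measured round-trip latency
    loss_pct: float     # measured packet loss
    offload: bool       # True if traffic bypasses the mobile core (e.g. WiFi)

def score(b: Bearer) -> float:
    """Lower is better: weight latency and loss, reward offload paths."""
    s = b.latency_ms + b.loss_pct * 50
    if b.offload:
        s *= 0.8        # mild preference for offloading the macro network
    return s

def pick_bearer(bearers: list[Bearer]) -> Bearer:
    return min(bearers, key=score)

if __name__ == "__main__":
    candidates = [
        Bearer("hspa_macro", latency_ms=120, loss_pct=1.0, offload=False),
        Bearer("home_wifi", latency_ms=40, loss_pct=0.5, offload=True),
    ]
    print(pick_bearer(candidates).name)
```

A real connection manager would refresh these measurements continuously and would also weigh battery cost and user policy, but the bearer-scoring core would look much like this.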
The bottom line - 2009 will be about "offload" from a network viewpoint, and "connection optimisation" from an app/handset viewpoint. Much of the time the strategies will be aligned, but there will also be some conflicts.
I also refer to the "capacity crunch" issue in the new December 2008 Disruptive Analysis research report on Mobile Broadband Computing. For details see here.