Network capacity appears to be the theme of the week.
First off, I spent a couple of days at IIR's conference on LTE - how will it fit against a background of WiMAX & UMB, when will it appear, what speeds can it deliver, what are the applications & services? Lots of slides mentioning 100Mbit/s and similar numbers, plus a tacit recognition that, give or take some IPR issues, LTE = WiMAX = UMB in terms of much of the underpinning technology.
There are still plenty of unanswered questions, though - especially whether LTE has any real use if operators can't get hold of enough spectrum to run it in 2x10MHz or 2x20MHz channel widths, which is the only way to approach the peak bandwidths being mentioned. If they're stuck with the current UMTS networks' 2x5MHz channels, operators might just be better off sticking with HSPA.
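As a rough illustration of why those channel widths matter, here's a back-of-envelope sketch; the ~5bit/s/Hz peak spectral efficiency is purely an illustrative assumption (roughly what 64QAM plus 2x2 MIMO implies on paper), not a measured or promised figure:

    # Peak downlink rate scales roughly linearly with channel width.
    # The ~5 bit/s/Hz figure below is an illustrative assumption
    # (roughly 64QAM + 2x2 MIMO on paper), not a measured value.
    ASSUMED_PEAK_BITS_PER_HZ = 5.0

    def peak_rate_mbps(channel_width_mhz, bits_per_hz=ASSUMED_PEAK_BITS_PER_HZ):
        """Theoretical peak rate in Mbit/s for a given downlink channel width."""
        return channel_width_mhz * bits_per_hz   # MHz x bit/s/Hz = Mbit/s

    for width_mhz in (5, 10, 20):   # i.e. 2x5, 2x10, 2x20 MHz paired spectrum
        print(f"{width_mhz} MHz -> ~{peak_rate_mbps(width_mhz):.0f} Mbit/s peak")

    # 5 MHz  -> ~25 Mbit/s (not far off evolved HSPA in the same spectrum)
    # 10 MHz -> ~50 Mbit/s
    # 20 MHz -> ~100 Mbit/s (the headline number)

In other words, the 100Mbit/s headline only exists at the widest channel allocation - halve the spectrum and the peak halves with it.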
Another theme which emerged was around applications - realistically, are there any mobile/handset applications that could exploit 100Mbit/s? Can screens or browsers actually make use of that much data? (I'll leave as a separate question whether anything running at 100Mbit/s without a fan could be held by hand without asbestos gloves).
So, what does need that much bandwidth?
...on mobile, anyway. But in terms of fixed networks, I've just heard a presentation that really raises the bar on bandwidth requirements and makes you realise just how many zeros we can use if we get the opportunity. I'm now at NetEvents in Malta, and the keynote speaker was the Network & Comms chief supremo from the CERN particle physics lab in Geneva. I was blown away to find out that the new Large Hadron Collider (LHC), when it finally comes online, will generate peak 'raw data' outputs of about 1 Petabyte per second. Not MB, not GB, not TB, that's PB. He bemoaned the fact that this had to be filtered down in realtime electronics to 'a few hundred GBit/s' of the most useful data, with the rest just discarded as it couldn't all be networked or stored even by their state-of-the-art private optical network. (As a sidenote about backhaul/transmission capacity, these guys also have dedicated 10Gbit/s links to international research labs). They'd love to upgrade to full speed if they get a chance & network technology catches up.
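Just for scale, taking the speaker's round numbers at face value (the 300Gbit/s figure below is my own stand-in for 'a few hundred Gbit/s'):

    # Rough scale check on the quoted LHC figures; 1 PB/s raw output and
    # ~300 Gbit/s retained are round-number assumptions, not official specs.
    raw_bps = 1e15 * 8        # 1 petabyte per second, expressed in bit/s
    kept_bps = 300e9          # "a few hundred Gbit/s" taken as ~300 Gbit/s

    print(f"raw output   : {raw_bps / 1e12:,.0f} Tbit/s")
    print(f"kept         : {kept_bps / 1e9:.0f} Gbit/s")
    print(f"fraction kept: {kept_bps / raw_bps:.4%}")   # ~0.004%

    # i.e. roughly 99.996% of the raw data is thrown away in realtime
    # before it ever touches the network.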
Now the awkwardness of carrying around 1000+ 30-ton superconducting magnets means that we won't all be detecting Higgs Bosons or leptoquarks on our phones any time soon. But it's also worth pointing out that when I asked him whether they use any wireless tech anywhere at all at CERN, I got a shrug and a comment that there's a bit of WiFi in the labs and experimental chambers, but nothing in 'the critical data path'.
OK, this is probably the most extreme example... but it's also the ultimate reality check for those who suggest we can ever have too much capacity - wireless or wireline.
Dean, people often quote data rates that are unrealistic. I've seen it many times with WiMAX, when somebody says the throughput is something like 40Mbit/s. In actuality, that's only correct for a backhaul link or a setup with a few users on a cell. In other words, as a real-world figure it's complete bunk.
Does the 100Mbit/s you quote suffer from the same hyperbole?
Hi Dean,
I agree with Rick. Of the 100 MBit/s they might get 20 MBit/s per sector if they are lucky. Let's say there are 3 sectors per base station, which would give you 60 MBit/s per base station. Let's further assume there is one base station per square kilometre. That means your per-km2 network capacity is 60 MBit/s. Sounds good? But also consider that in an average city there are about 3000-4000 people living per km2. Let's say the operator shares the area with 3 other operators, which quadruples that figure to 240 MBit/s. But that's still a lot of people to serve with a fraction of the capacity you would get over DSL (or fiber for that matter).
So big numbers become quite small very quickly :-)
Cheers,
Martin
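For what it's worth, here's Martin's arithmetic above as a quick sketch - every input is one of his illustrative assumptions, not a measured figure:

    # Sketch of the back-of-envelope numbers in Martin's comment above.
    # All inputs are his illustrative assumptions, not measurements.
    mbps_per_sector = 20      # "maybe 20 MBit/s per sector if they are lucky"
    sectors_per_site = 3
    sites_per_km2 = 1         # one base station per square kilometre
    operators = 4             # the operator plus three others sharing the area
    people_per_km2 = 3500     # mid-point of the 3000-4000 assumption

    capacity_mbps_per_km2 = mbps_per_sector * sectors_per_site * sites_per_km2 * operators
    per_person_kbps = capacity_mbps_per_km2 / people_per_km2 * 1000

    print(f"aggregate capacity: {capacity_mbps_per_km2} Mbit/s per km2")   # 240
    print(f"per person        : ~{per_person_kbps:.0f} kbit/s")            # ~69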
Hi Dean
The idea I got from Madrid was more that LTE flat-rate data plans would top out at around 20 GB/month, instead of the meager 2-4 GB monthly allowance we are getting with HSPA.
In any case, comparing HSPA, LTE and their kin for their maximum data rate only makes sense in Matlab. Choosing among competing cellular radio technologies has become more about interference resilience, fringe coverage areas, weird band allocations and weirdly planned towns.
Hi Dean, thx for sharing your view.
I agree with Rick and Martin. The speed HYPE has done much damage already (e.g. on WiMAX). It's amazing how thoroughly the media have misinformed people (the operators themselves got caught out). The biggest b.s. was never mentioning that this THEORETICAL throughput was per sector, not per user!
The reality today is around 10Mbit/s per sector, which would translate to 256kbit/s to 1Mbit/s for end users (slightly better than HSPA).
The other point regarding LTE is how much it will cost to deploy, and where the revenues will come from.
The operators have spent billions on spectrum and equipment but are still waiting for the related ARPU!
Good to remember that the killer data app is... SMS! Yep, good old SMS still drives 60 to 70% of data revenues!
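A quick sanity check on the '10Mbit/s per sector' point above, treating the commenter's round numbers as assumptions:

    # How many simultaneously active users does ~10 Mbit/s per sector support
    # at the quoted per-user rates? (Round-number assumptions from the comment.)
    sector_kbps = 10_000
    for per_user_kbps in (256, 512, 1024):
        users = sector_kbps / per_user_kbps
        print(f"{per_user_kbps:4d} kbit/s each -> ~{users:.0f} active users per sector")

    # i.e. a sector shared fairly supports only ~10-40 concurrently active
    # users at those rates; everyone else is waiting their turn.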
A couple of issues with what Martin said: a dense urban environment as described would have a site-to-site distance of less than 200m, so in the km2 mentioned there could be 20 LTE base stations from each of 4 operators. That's 80 x 60Mbit/s, or 4.8Gbit/s in total - around 1.2Mbit/s per person. Considering that contended DSL today offers 8Mbit/s at 50:1, or 160kbit/s, I fail to see any severe limitation of LTE here.
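And the dense-urban counter-argument as the same kind of sketch - again, every figure (20 sites per km2, 4 operators, 60Mbit/s per site, ~4000 people per km2, 50:1 DSL contention) is the commenter's assumption, not a measured deployment:

    # Dense-urban counter-example from the comment above; all figures assumed.
    sites_per_km2 = 20        # site-to-site distance under 200m
    operators = 4
    mbps_per_site = 60
    people_per_km2 = 4000

    lte_mbps_per_km2 = sites_per_km2 * operators * mbps_per_site   # 4800
    lte_mbps_per_person = lte_mbps_per_km2 / people_per_km2        # 1.2
    dsl_mbps_per_person = 8 / 50           # 8 Mbit/s line at 50:1 contention

    print(f"LTE : {lte_mbps_per_km2 / 1000:.1f} Gbit/s per km2, "
          f"~{lte_mbps_per_person:.1f} Mbit/s per person")
    print(f"DSL : ~{dsl_mbps_per_person * 1000:.0f} kbit/s per contended subscriber")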
Regardless of the hype, WiMAX/LTE/UMB/XYZ will max out at a spectral efficiency of about 2.5 bps/Hz/cell. So an operator with 20 MHz of spectrum can expect about 50 Mbps. The number of users over which that will be shared obviously depends on the cell size. Sure, you can have site-to-site distances of 200m (i.e. a 100m cell radius), but that's a LOT of infrastructure (you're getting into WiFi hotspot economics, and we all know how well that's working out in the world of muni-WiFi).
A lot of WiMAX business cases are looking at numbers like 1000 subs/cell (split between 3 sectors). So you're talking 50 kbps per sub on average (though users will be able to burst at much higher rates, 50kbps is the number that they'd see if every subscriber on the cell was downloading simultaneously).
Now 50 kbps isn't really that impressive, is it? Well, many DSL operators in the US budget 10 kbps of backhaul per sub... that's 100x overbooking if you assume an advertised service of 1 Mbps! It works as long as people don't continuously download or watch streaming video. That's why operators have terms of service that limit their "unlimited" service packages. The networks don't have the capacity to actually offer all the services that they entice customers with.
BTW, the 2.5 bps/Hz/cell figure is a big improvement over 3G. HSDPA/HSUPA get less than 1 bps/Hz/cell. These numbers reflect real deployments. They're not the fantasy numbers where everyone's parked underneath the base-station, getting 64-QAM while doing 4x4 MIMO.
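The spectral-efficiency and overbooking arithmetic above, sketched out - the 2.5bps/Hz/cell, 1000 subs/cell and 10kbps-of-backhaul-per-sub figures are the commenter's planning assumptions, not guarantees:

    # Cell capacity and overbooking, using the commenter's planning assumptions.
    bits_per_hz_per_cell = 2.5     # assumed spectral efficiency ceiling
    spectrum_hz = 20e6             # 20 MHz of spectrum
    subs_per_cell = 1000

    cell_capacity_bps = bits_per_hz_per_cell * spectrum_hz        # 50 Mbit/s
    per_sub_bps = cell_capacity_bps / subs_per_cell               # 50 kbit/s

    advertised_bps = 1e6           # 1 Mbit/s advertised DSL service
    backhaul_per_sub_bps = 10e3    # 10 kbit/s of backhaul budgeted per sub
    overbooking = advertised_bps / backhaul_per_sub_bps           # 100x

    print(f"cell capacity        : {cell_capacity_bps / 1e6:.0f} Mbit/s")
    print(f"per sub (all active) : {per_sub_bps / 1e3:.0f} kbit/s")
    print(f"DSL overbooking ratio: {overbooking:.0f}x")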
I agree with most commenters here. I'm not expecting real-world 100Mbit/s for end-users - the main point of the post was that demand for bandwidth will always outstrip supply, especially in the wireless domain where you don't get the capacity scalability of fibre.
I also agree that headline numbers are misused - I've repeatedly talked about backhaul limitations, bandwidth shared between a cell's users and so forth.
One thing to bear in mind when comparing LTE with fixed broadband is that the fixed infrastructure is partly subsidised by corporations buying dedicated or less-contended links, especially in urban areas. There's not really a cellular equivalent of an STM-1 connection to the Internet, or an MPLS VPN. Those types of services really aren't going to migrate to wireless - so at best you end up with a split of wireless consumers & fixed enterprises, which doesn't help anyone's economics.