Speaking Engagements & Private Workshops - Get Dean Bubley to present or chair your event


Thursday, March 17, 2011

WiFi highlights an inconvenient truth about QoS...

... it's not always needed.

Increasingly, smartphones get used with WiFi. Some estimates suggest that up to half of data usage now goes over WiFi. Most of that WiFi is connected from homes, offices or public hotspots over backhaul provided by an operator other than that providing the cellular connection to the smartphone. Although in some cases there is an offload agreement in place, there is usually no direct measurement or control of QoS end-to-end.

But some operators have (or are launching) their own data and content services - whether it's a content site, their appstore, remote backup or even RCS. This means that some of the access will come into the operator domain via the open Internet. This isn't new in itself - technologies such as UMA/GAN have been around for a while, as have assorted softphones, remote access clients and so forth. But what this implicitly means is that for some of the time, at least, operators are happy to have their services accessed by their customers over the public Internet. With all of the potential downsides that suggests.

Plus, this means that in those situations, the operator is itself acting as a so-called "OTT" provider, riding for free on somebody else's pipes. Are they first in the queue to offer to pay their ADSL/cable saviours for QoS guarantees? No, I thought not.

So the obvious question has to be - if it's OK to connect via an unmanaged network some of the time, then why not all of the time? Are they warning their customers that reliability might be lower if they connect via WiFi? What rights do their customers have if performance is below par?

Now obviously in most cases here the fixed connection used for WiFi is faster than the mobile network would have been - so "quality" in some regards is arguably actually better. But it's still not actively monitored and managed, and both the Internet portion of the access and the WiFi radio itself are subject to all sorts of contention, congestion, packet loss and other threats.

I know that various attempts are being made to bring WiFi into the operator's control - or at least visibility and policy oversight - with selective offload and ANDSF and I-WLAN and various proprietary equivalents. But even these will not cover all situations, even when viewed through the rosiest-tinted glasses.

But if a QoS-managed and policy-controllable network is that critical, surely there ought to be explicit notifications to users that they are accessing the service via an unmanaged connection? Maybe, in extremis, such access should even be blocked?

Flipping this around the other way.... if it's OK for your access customers to access your services over the Internet on an OTT basis, at least some of the time, why not also let other people access those services as well?

Tuesday, March 15, 2011

UK ISPs Code of Practice on Traffic Management - OK as a start, but major flaws

A group of the UK's largest fixed and mobile ISPs have published a "Code of Practice" about managing traffic on their broadband networks. The full document is here with the announcement press release here. The group includes BT, Vodafone, 3, O2, Virgin, BSkyB and TalkTalk, but currently excludes others, notably Everything Everywhere, the Orange/T-Mobile joint venture.

(Regular readers may remember that I put up a suggested draft Code of Conduct for traffic management last year - there seems to be a fair amount that has been picked up in the UK document. My input also fed into the manifesto published by my partners at Telco 2.0, here.)

There's some good stuff, and some less-good stuff, in the new Code of Practice. Of course, if you're a Net Neutrality purist, your good/bad scale will shift a bit.

On the positive side, the general principle of transparency is extremely important. The commitment to being "Understandable, Appropriate, Accessible, Current, Comparable, Verifiable" is entirely the right thing to do. I think there is a lot of good stuff in the Code here, going as far as the need for independent verification (although that would probably happen anyway - I'm sure Google and others have their own techniques for watching how traffic shaping is used by telcos).

The fact that it has been signed by both fixed and mobile operators is also a good thing, although there isn't much in the document about the specific issues inherent in wireless networks.

But the main problem is that it attempts to define traffic management policies by "type of traffic" in terms of descriptions that are only meaningful to boxes in the network, not to users themselves. Ironically, this fails the Code's own insistence on being understandable and appropriate. There are also no clear definitions on what constitutes the various categories such as "gaming" or "browsing".

The problem here is that DPI boxes don't really understand applications and services in the way that users perceive them. "Facebook" is an example of an application, which includes links and video displayed on the web page or inside a mobile app. "WebEx" is another application, which might include video streaming, messaging, file transfer and so on. Add in HTML5 browsers and it all gets messier still.

Having a traffic policy that essentially says "some features of some applications might not work" isn't very useful. It's a bit like saying that you've got different policies for the colour red, vs. green. Or that a telephone call is #1 priority, unless a voice-recognition DPI box listens and senses that you're singing, in which case it gets reclassified as music and gets down-rated.
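To make this classification problem concrete, here is a minimal sketch (all hostnames, rules and bucket names are invented for illustration, not drawn from any real DPI product): a hostname-based classifier splits one user session of "using Facebook" across several policy classes, so a rule that throttles "video" silently degrades part of the Facebook experience.

```python
# Minimal sketch of why per-"traffic type" rules diverge from the user's
# view of an application. All hostnames and rules here are invented.

def classify_flow(hostname):
    """Naive DPI-style rule: bucket a flow by its server hostname suffix."""
    rules = {
        "fbcdn.net": "video",        # CDN serving Facebook's video
        "facebook.com": "browsing",
        "webex.com": "conferencing",
    }
    for suffix, bucket in rules.items():
        if hostname.endswith(suffix):
            return bucket
    return "unknown"

# One session of "using Facebook" spans several policy classes.
session = ["facebook.com", "video.fbcdn.net", "scontent.fbcdn.net"]
buckets = {classify_flow(h) for h in session}
print(sorted(buckets))  # ['browsing', 'video'] - one app, two policy classes
```

The point is not the specific rules but the mismatch: the user experiences one application, while the network sees, and manages, several unrelated "traffic types".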

And even in terms of traffic types, the CoP conspicuously misses out how to deal with encrypted and VPN traffic, which is increasingly important with the use of HTTPS by websites such as YouTube and Facebook. Given that SSL actually is a protocol and a "traffic type", this is pretty important. At the moment, the footnote "***If no entry is shown against a particular traffic type, no traffic management is typically applied to it." implies to me that encrypted traffic passes through unmolested under this Code of Practice. (I'd be interested in a lawyer's view of this, though).
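A rough illustration of the encryption gap (the field names and the function here are invented for illustration): once a flow is inside TLS, a middlebox can read handshake metadata such as the SNI hostname, but not the content type that the Code's categories depend on.

```python
# Rough illustration: what a traffic-management box can read from an
# HTTPS flow without breaking the encryption. Field names are invented.

def visible_metadata(dst_port, sni):
    """Return the fields a middlebox can inspect for one flow."""
    if dst_port == 443:
        # TLS: the SNI hostname is sent in the clear; the payload is not.
        return {"transport": "TLS", "server": sni, "content_type": None}
    return {"transport": "plaintext", "server": sni,
            "content_type": "inspectable"}

meta = visible_metadata(443, "www.youtube.com")
# The box knows this is YouTube over TLS, but cannot tell video streaming
# from comment browsing - so per-category shaping has nothing to act on.
print(meta["content_type"])  # None
```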

Another problem is that there is an assumption that traffic management is applied only at specified times (evening, weekends etc), and therefore not just when or where there is *actual* congestion. I suspect Ofcom will take a dim view of this - my sense is that regulators want traffic management to be proportionate and "de minimis" and there seems no justification for heavy-handed throttling or shaping when there is no significant congestion on the access network or first-stage backhaul.

There is also no reference to what happens to any company which fails to meet its obligations under the Code (which is "voluntary"), or how enforcement might happen in the future.

Lastly, there is no reference to bearer-type issues important in mobile. In particular, whether the same policies apply to femtocell or WiFi offload.

Overall, on first read I'd give it a 5 out of 10. A useful start, but with some serious limitations.

Thursday, March 10, 2011

Revenue from content/app transport? Operators need to be part of solution, not part of the problem

I'm still seeing a lot of discussions that go along the traditional and rather tired lines of saying that Facebook / YouTube / Hulu / BBC etc should "pay for their use of our pipes". I've just been debating on Twitter with Flash Networks, an optimisation company, about the fact that YouTube is now watched by a huge proportion of broadband-enabled people in India (mostly fixed, not mobile).

Flash asked the question "should YouTube be financially accountable", to which the answer I think is pretty clearly "no" - the users are financially accountable for buying Internet access services. If they all seem to prefer the same website for video, so what? Maybe at some point it becomes a question for competition authorities, but I really can't see what difference it makes if people watch videos from one site or 10 different ones.

If I have a mobile phone plan with 600 minutes, and use 500 of them calling my best friend and 100 calling everyone else, you wouldn't send my friend a bill for "generating traffic".

But that doesn't preclude the operator doing a deal with YouTube for something extra. Maybe they offer QoS guarantees (empty promises won't cut it, there needs to be proof and an SLA) for prioritisation or low-latency. Maybe they have a way to over-provision extra bandwidth - for example the customer subscribes for a 6Mbit/s line speed, but YouTube pays extra to boost it to 10Mbit/s if the copper can handle it. Maybe the operator gives YouTube a way to target its advertising better, through exposing some customer data. Maybe the operator improves performance and reduces costs by using caching or CDN technology.

But all that is on top of the basic Internet access - and of course, YouTube will be doing its own clever things to squeeze better performance out of basic access as well. It will be playing with clever codecs and buffering and error-correction and so on, so the telco has to make sure its value-add "happy pipe" services give YouTube a better ROI than spending the same money on more R&D tweaking the software.

What won't fly (in most competitive markets) is attempting to erect a tollgate for the baseline service. The telco gets a chance to participate in the upside beyond that, if it can prove that it's adding value. It can't just exploit YouTube's R&D, user loyalty and server farms "for free".

The same is true in mobile - the operator needs to be part of the solution, not part of the problem. Which means that before it has the moral authority to say it's providing value from "extras", it needs to get the basics right, such as adequate coverage and reasonable capacity. It also has to demonstrate neutrality on the basic Internet access service - it can't be seen to transcode or otherwise "mess about" with traffic.

But assuming that there is good - and provable - coverage (including indoors, for something like YouTube), then once again the operator has a chance to participate in improving the performance of vanilla Internet access. It can offer device management, user data, possibly higher speeds and prioritisation and so forth. But there are many more complexities to getting this right, as mobile is less predictable and "monitorable" than fixed-line. Ideally, quality needs to be seen and measured from the user's perspective, not inferred imperfectly from the network. And there needs to be some pretty complex algorithmic stuff going on in the radio network too - how do you deal with a situation where you have both "Gold users" and "Gold applications" competing for resources in a cell? And just how much impact should one Gold user/app right at the cell-edge have on 50 Silver users in the middle?
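To make the Gold/Silver dilemma concrete, here is a toy weight-proportional scheduler (the tiers, weights and spectral efficiencies are all invented, and real RAN schedulers are far more sophisticated): a single Gold user running a Gold app at the cell edge can absorb most of the cell's resources while delivering fewer bits than the Silver users it displaces.

```python
# Toy cell scheduler: share 100 resource units per frame in proportion to
# tier weight. Tiers, weights and spectral efficiencies are all invented.

TIER_WEIGHT = {"gold": 4, "silver": 1}

def allocate(bearers, capacity=100.0):
    """bearers: (user_tier, app_tier, efficiency in bits per resource unit).
    Returns (resource_share, delivered_bits) per bearer."""
    weights = [TIER_WEIGHT[u] * TIER_WEIGHT[a] for u, a, _ in bearers]
    total = sum(weights)
    return [(capacity * w / total, capacity * w / total * eff)
            for w, (_, _, eff) in zip(weights, bearers)]

# One Gold user running a Gold app at the cell edge (poor radio, 0.2 bits
# per unit) versus five Silver users mid-cell (good radio, 1.0):
cell = [("gold", "gold", 0.2)] + [("silver", "silver", 1.0)] * 5
result = allocate(cell)
gold_share, gold_bits = result[0]
# The Gold bearer takes ~76% of the cell's resources yet delivers ~15 bits;
# the same resources given to a mid-cell Silver user would carry 5x more.
```

The numbers are arbitrary, but the shape of the problem is real: tier weights say nothing about radio conditions, so a priority scheme that ignores spectral efficiency can be hugely wasteful of cell capacity.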

All of this needs to be based on offering upside beyond what is possible with a best-effort standard mobile Internet connection, where the user and app provider are in control and can alter their behaviour according to personal preferences. The operator and network need to show a demonstrable solution which offers more than can reasonably be expected, and not just try to extract fees by creating an artificial problem.

So in pharmaceutical terms, the performance of the baseline, unmodified transmission is like a placebo in a double-blind test of a new drug. Any new network "treatment" such as higher QoS or optimisation has to show measurable and repeatable benefits against the placebo. It is also possible (and necessary) to double-check that the placebo is uncontaminated.
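A sketch of that placebo comparison (the throughput samples here are invented, purely to show the shape of the test): measure the same user-perceived metric with and without the network "treatment", and require a repeatable, material uplift before anyone pays for it.

```python
# Sketch of the "placebo test": compare user-measured performance with a
# network treatment against untouched best-effort, on the same metric.
# All sample values are invented.

from statistics import mean

def uplift(treated, baseline):
    """Relative improvement of the treatment over the placebo."""
    return (mean(treated) - mean(baseline)) / mean(baseline)

placebo = [4.8, 5.1, 4.9, 5.0]   # Mbit/s, plain best-effort access
treated = [5.0, 5.2, 4.9, 5.1]   # Mbit/s, with a "premium QoS" treatment

# A treatment worth paying for has to show a repeatable, material uplift;
# here it's about 2%, i.e. barely distinguishable from the placebo.
print(f"{uplift(treated, placebo):.1%}")
```

In practice this also needs repeated runs, varied locations and times, and a check that the "placebo" path really is untouched - the uncontaminated-control point made above.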

This is the challenge for mobile operators in particular, looking to derive extra fees from users and/or content and application providers from "smarter" networks. They need to get the basics right (coverage), and provide an acceptable basic service (unmolested Internet). And then they have to offer something more (proven quality or targeting), at a cost and effectiveness better than the app provider could achieve in its own software, or than simply providing more capacity.

Tuesday, March 08, 2011

Insistence on a single, real-name identity will kill Facebook - gives telcos a chance for differentiation

Note: This post was written before Google+, Google's stance on pseudonyms, and the rise of #nymwars. Most of this article applies just as much to Google as to Facebook.

There's been a fair amount of debate about online identity in recent days, partly spurred by Techcrunch's shift to using Facebook IDs for blog comments in an effort to reduce trolling and spamming. Various web luminaries have weighed in on one side of the debate or the other.

Mark Zuckerberg, founder of Facebook, has been quoted in David Kirkpatrick's The Facebook Effect: "You have one identity. The days of you having a different image for your work friends or co-workers and for the other people you know are probably coming to an end pretty quickly ... Having two identities for yourself is an example of a lack of integrity."

I think that's narrow-minded nonsense, and I also believe that this gives the telcos a chance to fight back against the all-conquering Facebook - if, and only if, they have the courage to stand up for some beliefs, and possibly even push back against political pressure in some cases. They will also need to consider de-coupling identity from network-access services.

Operators could easily enable people to have multiple IDs, disposable IDs, anonymity, pseudonyms or nicknames, to tell lies, to allow their past history to "fade", and various other options.

In other words, they could offer "privacy as a service".

There are numerous reasons why people might wish to use a "fake" identity - segmenting work and personal lives, segmenting one social circle from another and so on. There are many real-world situations in which you want to participate online, but with a different name or identity: perhaps because you have a stage or performance name, perhaps you have a (legal) "guilty secret" of some sort, or maybe because you want to whistleblow against people in authority or those that you perceive as dangerous. It can even be because your name is just too common (JohnSmith16785141), or too unusual or difficult to spell (Bubley). It is also common for people to want to participate as part of a company, not an individual.

I know plenty of people who use pseudonyms on Facebook and other social media sites, and for *personal* things I'd say that's good for all sorts of reasons. In a business context, I agree with websites such as LinkedIn and Quora that enforce real names, because there is a strong "reputation" angle to their businesses. But on the other hand, if I had to deal with 300 LinkedIn requests a day from random people I haven't met, I'd probably change my mind.

There is another, important side to anonymity and multiple identities - obfuscating parts of your persona and contact details from advertisers and spammers. Being able to give a secondary (and ideally disposable) email address or mobile phone number to untrusted parties is important. I still use my fixed number for most online forms in the UK, because there's a legally-enforced telemarketing opt-out, while giving a mobile number risks spam SMS. The same is true of online identities - I want to be able to corral spammers and unwanted advertisers in a corner of my Internet world that I can safely nuke if I have to.
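A minimal sketch of that disposable-contact idea (the class, addresses and method names are all invented, not any real operator service): issue revocable aliases that forward to the real address until the user nukes them, at which point anything sent to the alias is simply dropped.

```python
# Sketch of "privacy as a service": disposable, revocable aliases that
# forward to a real address until nuked. All names here are invented.

import secrets

class AliasVault:
    def __init__(self, real_address):
        self.real = real_address
        self.aliases = {}                     # alias -> still active?

    def issue(self):
        """Mint a fresh, random, untraceable-looking alias."""
        alias = f"{secrets.token_hex(4)}@alias.example"
        self.aliases[alias] = True
        return alias

    def deliver(self, alias, message):
        """Forward only while the alias is live; spam dies with the alias."""
        if self.aliases.get(alias):
            return (self.real, message)
        return None                           # nuked or unknown: dropped

    def nuke(self, alias):
        self.aliases[alias] = False

vault = AliasVault("me@example.net")
throwaway = vault.issue()
assert vault.deliver(throwaway, "special offer!") is not None
vault.nuke(throwaway)
assert vault.deliver(throwaway, "more spam") is None  # safely corralled
```

The same pattern works for secondary phone numbers as well as email-style addresses: the value is in the revocation, not the forwarding.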

So, there is an opportunity for operators to offer - either individually or collectively - a more friendly set of identity options. This probably relates more to mobile operators than fixed operators, but not necessarily. A critical element here is that ID *cannot* be always tied to a SIM card or phone number, for most of these use cases. Users will not wish to be tied to a single access provider, not least because many times they will not be using a single, operator-issued device or that provider's access network. They will also not want to pay for an access account in perpetuity, just to make blog comments or something equally trivial. And, painful though it is to telcos, they *will* churn, and using identity as a lock-in will reduce trust and take-up of the services.

In other words, a telco-provided custom ID will need to be provided OTT-style - something like Orange's ON service, a cross-network app which enshrines principles from studies of psychology and anthropology, such as the right to lie. You need to be able to "take your privacy/identity profile with you" when you move to another operator. Unless we want to wait 10 years to force through "identity portability" laws, operators will fail to exploit this opportunity if they just see it as a churn-reduction tool.

This also means that interoperability between privacy providers is unnecessary and even undesirable. Operators can - and should - go it alone to start with, which is why fixed operators have a chance as well as mobile. Living in the UK, would I use AT&T or Telenor as a privacy provider? Maybe, depending on whether I like a specific service and trust them, but I'd be more keen on that than on going with one of the UK operators who'd try to link the capability into other services. Although that said, I'd probably use certain aspects of this broader idea from my current telecom providers - perhaps a second "fake" number I could use for advertisers and potential spammers.

(It goes without saying that most or all of this will need to be built outside rigid architectures such as IMS or RCS, which also have centralised repositories for subscriber information, unique personal identifiers attached to credentials such as SIMs, and an assumption of access/service coupling).

Now there is an open question here about full anonymity. A lot will come down to local attitudes and laws. Some countries already force users of previously-anonymous services such as Internet cafes or prepaid mobile phones to register with the authorities - for example Italy, Spain and India. Others like the UK and Portugal are still OK with off-the-shelf purchases of SIM cards, anonymous web access and so forth - luckily our new government binned the hideous UK ID card project when it came to power last year. As events in the Middle East have shown, anonymous and easy access to communications helps protesters against despotism - possibly a price worth paying for a minuscule rise in terrorism risk. Personally I have the luxury of democracy, and I tend to vote for libertarianism rather than nannying state intervention, but your opinion may vary.

(And yes, I understand that real, true anonymity is almost impossible - both online and in the real world. We are traceable via credit cards, mobile phone records, facial-recognition CCTV, and probably online semantics and other behaviours. But at the moment, it's difficult to join the dots unless you are Google or a government security agency).

Don't get me wrong, I'm a huge fan of Facebook and believe that in many ways it is going to eat the telcos' collective lunch. Friend lists are already usurping the notion of a phone "address book", and web-based approaches make social networks much more flexible than a telecoms infrastructure can be. It's tempting to believe that Facebook is now too big to fail - but don't underestimate the fickleness of social groups. I've had a few friends who have had pseudonym-based profiles deleted, and they are definitely no longer loyal users.

I strongly suspect this is not an area in which the telcos will move together, en masse. It is an opportunity for some of the more forward-thinking and perhaps renegade operators (or specific product teams) to move aggressively and across network boundaries. If ID gets mired in years of interop talks and nonsense about support of roaming, it will go the same way as other "coalitions of the losers". This needs to be done NOW and done aggressively by those brave enough to step up - perhaps in partnership with a web provider or two.

Monday, March 07, 2011

Time for the word "terminal" to reach the end of the line

I stirred up a bit of debate over the weekend via posts on Twitter suggesting that the use of the word "terminal" in the telecoms industry is always a good sign that the speaker is stuck in a legacy age. (Twitter being the terrible medium for debate that it is, I was unable to discuss this meaningfully - hence this post).

Typically used by network-centric, standards-centric, telephony-centric members of the industry, "terminal" has long seemed to me to exemplify the denial of reality endemic in many "old school" telecoms professionals. Nobody outside of the network fraternity uses the word "terminal". You'll never hear Steve Jobs, or even most of Nokia's current and former execs, utter the term. People say "mobile", "device", "cellphone", "smartphone".

This is not a new stance of mine either - I made the same point almost exactly 5 years ago in this blog post.

After a bit of a verbal ping-pong match with @TMGB this morning (I'm tempted to describe him as the dinosaurs' "Chief Asteroid Denier", but that's perhaps a bit unfair), I've reached a slightly clearer position. In historic telephony standards, there is indeed still a specific technical notion of a "terminal" defined. It's a bit similar to the old mainframe/green-screen architecture, or various other technology domains like industrial SCADA systems.

But in the past, being a terminal was pretty much the only thing that a phone did. Even more recently, being a terminal was the main or most important thing it did, even if it was as an SMS terminal rather than a telephone terminal. Therefore it was fairly natural for people to refer to any mobile phone as a "terminal", firstly because that was the only type of device, and secondly because it was - to all intents and purposes - the only useful thing it did.

But obviously, over the last 10 years, things have changed. Modern devices do a huge range of things - often simultaneously. Acting as network terminal in a standards-based, telephony sense is simply one of a smartphone's functions, and increasingly not the most important. Many of those functions are not even anything to do with a network connection - the camera, MP3 player and so on. Arguably, connectionless technologies like HTTP and IP do not have "terminals" in the telecoms sense of the word. The majority of device value thus resides in "non-terminal" functions.

Using the word "terminal" now to refer to a smartphone or other new device is therefore extremely sloppy. Today, in mobile, terminal = function, not terminal = physical product. And yes, this is more than just an abstruse semantic discussion, because perpetuating the idea that the terminal function is somehow the paramount use case of a device - and, moreover, independent of the other functions - is a huge fallacy which may drive the industry down blind alleys.

The idea that a telephony call (the most obvious example of the terminal function) should over-ride anything else the device or user may be doing is not just arrogant, but a huge error in understanding user behaviour and modern OSs. Yet that remains an unspoken assumption among many in the industry.

Often a smartphone (or, certainly, tablet) user will be doing many things more important than receiving a phone call, particularly a trivial one from somebody they don't want to talk to. Yet the "terminal is the #1 application" mentality is insidious - standards like Circuit-Switched Fallback for LTE telephony assume it to be true. Multi-tasking, multi-connection devices mean that the terminal capability does not exist in isolation - and concurrent tasks need to be considered and sometimes given priority. This will need clever UI design, as well as various user interactions in the device's upper software layers that are not generally considered in network-centric views of "terminal" behaviour.

Furthermore, as we move towards smarter devices and especially VoIP-based telephony, the idea that the "terminating software client" is actually the last point of the chain becomes ever less true. The OS, or another application or browser, might intercept a phone call before it reaches you, or initiate an outbound one on your behalf. The ultimate "voice" application may simply be calling a telephony API - or may pick-and-choose other non-service based voice capabilities.

In other words, even the word "terminal" becomes factually incorrect.

So, to be clearer:

The word "terminal" is a legacy of a time when mobile devices were primarily intended for connection to specific services (especially voice telephony), over a network access run by the same service provider. Nowadays, a mobile device may have a terminal function but can also operate in many other modes - standalone & offline, connected to another network (eg WiFi), using a specific installed app. It is therefore not just factually wrong, but dangerously naive to continue referring to it as just a "terminal" - and thus I believe I am justified in my views that continued misuse of the term is a good indicator of the mindset of the person saying it.

Wednesday, March 02, 2011

I want to report a 3G coverage problem - how difficult can it be?

Various emerging business models demand good, reliable, near-ubiquitous mobile data coverage, especially in dense urban areas. We hear a lot about congestion, but rather less about the more basic problems of getting a signal. Whether it's a "not-spot" because of buildings, poor setup of the antennas, inability to site a base station, a recurring equipment fault or just some other RF weirdness, gaps and other coverage-free zones are going to be an increasing problem.

In particular, cloud-based services are going to be very sensitive to the quality of a given operator's network. It's bad enough losing access to the web and email in certain locations - think how much more problematic it would be for critical business processes dependent on hosted applications, used via mobile devices.

Because of this, you'd expect that operators would want to get prompt feedback from their customers about any real-world problems they've missed. Surely in this area of their business, they'd recognise that overall "quality of experience" is best monitored and reported by the end-user, not simply inferred from boxes, probes and software in the network.

Well, that's certainly not the case for Vodafone UK. Over the last year I've been on its network for my main phone, I've noticed quite a lot of coverage gaps and holes around central London. Sometimes I get bumped down to 2G, sometimes nothing at all. And some of those gaps are in absolutely predictable and consistent physical locations - I've encountered them repeatedly, at different times of day, to the extent that I can even plan my usage around them on certain trips around town. To me, this suggests that congestion and capacity isn't the problem - it's plain and simple coverage.
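The congestion-versus-coverage distinction above could in principle be automated from exactly this kind of repeated user report. A toy sketch (the locations, hours, data structure and threshold are all invented): a grid square that fails at widely different times of day is a coverage hole, whereas one that only fails at peak hours looks like congestion.

```python
# Sketch: distinguishing a coverage hole from congestion using repeated
# user reports. Locations, hours and thresholds are invented.

from collections import defaultdict

def find_notspots(reports, min_reports=3):
    """reports: iterable of (grid_square, hour_of_day, has_signal).
    A square that fails repeatedly, across different hours, is a coverage
    hole rather than peak-time congestion."""
    by_square = defaultdict(list)
    for square, hour, ok in reports:
        by_square[square].append((hour, ok))
    holes = []
    for square, obs in by_square.items():
        fail_hours = [h for h, ok in obs if not ok]
        if len(fail_hours) >= min_reports and len(set(fail_hours)) > 1:
            holes.append(square)
    return holes

reports = [
    ("euston", 9, False), ("euston", 14, False), ("euston", 22, False),
    ("soho", 18, False), ("soho", 9, True), ("soho", 14, True),
]
print(find_notspots(reports))  # ['euston'] - fails morning, noon and night
```

This is the kind of crowd-sourced triage one might expect a "service assurance" team to run on customer reports, rather than asking the customer to supply postcodes and three timestamped failures by email.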

I've put them on this personalised Google Map - http://goo.gl/maps/hTv3 - both are near Regents Park and Camden in London. One is right in between two of the busiest train stations in the country - Euston and Kings Cross - right outside the British Library and near the Eurostar terminal at St Pancras.

In the big scheme of things, the two most obvious gaps are not a huge problem for me. Given my typical travel patterns around London, I probably lose 2 mins of mobile data access a week, usually when I'm on a couple of specific bus routes and using my phone for a mix of email, personal apps and so forth. But they contribute to my sense that Vodafone's London network isn't that great - especially as the company hasn't detected and fixed the (very consistent) problems proactively using whatever "service assurance" tools it presumably has at its disposal.

So I decided to report the issue.

I've heard good things about the @vodafoneUK Twitter team, so I thought I'd try that route rather than calling customer service on the phone, especially as I was reporting outdoor locations without knowing the postcodes. The @vodafoneUK team pointed me towards the VFUK online e-forums, rather than (say) giving me a direct phone line or email address to report coverage issues.

Already feeling like this was a lot of work, I nevertheless proceeded to register for the eforum (which needs a different login from other VF services, naturally), and read through their harsh instructions to search for pre-existing forum posts that might cover the problem already. Then I had to go to the coverage-checker engine to see if there were any existing problems reported - which meant that I had to use Google to find two appropriate post-codes to enter, as you can't just click on the map.

Both inquiries gave the response "Important service information - we're working on correcting a network problem that may affect the performance of your device".

Given that both problems have been ongoing for months, I didn't have too much confidence in this being accurate, so I put this post up on the eforum. Nothing too controversial, just a quick note to tell Voda they've got some issues. I gave a link to this blog so that their support people would know I'm not just an "average user" but have some knowledge of the industry.

The first response almost beggars belief: "Now I'm not saying there isn't a problem, but the investigation I've just done points to this at the moment." Yes, that's right, I spend all day signing up for forums and posting messages about non-existent problems. I've got nothing better to do. And your "open cases" support system is obviously better than a real-world customer with a real-world device, reporting on a real-world problem. Unreal.

Somehow, I remain civil, writing another post pointing out that yes, these issues are still real, and giving some hints on how the VF engineers might replicate them if they want to do tests.

The next reply takes the biscuit: "If you can provide 3 examples of these drops for every area you experience these in then I will definitely raise this case." Coupled with a request by email (with a spam-tastic "Customer Service" as sender and "No subject") for my information. So if I wanted to "raise a case", I had to send through not just my phone number, but also my full name (OK), and also "for security" - two digits of my VF security code (!!! very secure via email), my address (irrelevant to the question, and they know this from my number), and my date of birth.

Because "security" is always important when reporting network problems.... perhaps I am some evil-doer wanting to do a "denial of service" attack on their radio engineers' time by submitting fake faults?

Oh and then the email asks for a few more details, copy-and-paste from some stupid template (possibly the wrong one too, voice not data):
  • Fault description: (please detail the exact nature of the fault)
  • Tests performed (Manual roam SIM in different handset)
  • Date issue started:
  • Device make an model:
  • Results of trying SIM in another handset:
  • IMEI number of the handset:
  • Postcode of location:
  • How far do you have to travel to get signal?
  • Address of issue:
  • Error tone/wording:
  • Numbers effected (Please provide 3 failures, including Number called, date, time and location when call made/received):
As you can understand, I decided that a more profitable use of my time was to write this blog post instead. I'm shaking my head in disbelief about how hard it is to report an important - but simple - problem. Without basic coverage, a whole host of future business models are rendered useless. The idea, for example, of getting media companies or Internet firms to pay for "priority delivery" for 3G data, or some other sort of non-neutral network approach, is totally contingent upon delivering a reliable service.

So just to spice things up a bit more, I've also reported some other holes.... in the road.... to my local council, Westminster. I pay them about the same per month as I pay Vodafone. The road in question is less than a mile from the other sites mentioned. Let's see which one has better processes & more efficient engineering. The Council has a head start, as they have a simple page to report problems, including doing it via street name (not postcode) or "pinpoint on a map". Asks for details, gives a reference number, sends an email acknowledgement. Not a complex customer interface, but about 10x better than a supposedly customer-centric phone company worried about churn.

So - it's definitely easier to report holes in the road, than holes in the air. Let's see if it's quicker to get them fixed too.

Tuesday, March 01, 2011

Policy and traffic management moves to the edge of the network - the device

One of the hidden trends that I've been watching for a while, in the complex world of mobile broadband traffic management, is now starting to come to the surface: the action is moving down to the device/handset itself.

While a lot of manufacturers of "big iron" boxes like to imagine that the core network or the RAN is all-seeing and all-powerful, the truth is that any discussion of "end-to-end" is only true if it extends out to the user's hand (or ideally, retina or ear-drum). That is where quality of experience (QoE) really manifests itself and where radio decisions (especially about WiFi) are controlled. Anything observed or inferred from within the network about the handset is a second-best simulacrum, if that.

That's not to say that the network-side elements are not useful - clearly the policy engines, offload and femto gateways, and analytical probes in the RAN have major (even critical) roles to play, as do the billing/charging functions that allow the setting of caps and tiers - even if I am less convinced by the various optimisation servers sitting behind the GGSN on the way to the Internet.

But most major network equipment vendors avoid getting involved in client software for devices for a number of reasons:

  • The standards bodies are generally very poor at specifying on-handset technology beyond the radio and low-level protocols, and even worse at encouraging OEMs to support it. Few network equipment firms are willing to go too far down the proprietary route
  • There is a huge variety of device types and configurations, which means that vendors are likely to need to develop multiple complex solutions in parallel - a costly and difficult task. It is also unclear how device-level software can be easily monetised by network vendors, except in the case of integrated end-to-end solutions.
  • There are various routes to market for devices, which makes it very difficult to put operator-centric software on more than a fraction of products. In particular, buyers of unlocked devices such as PCs or "vanilla" smartphones are going to be very wary of installing software seen as controlling and restricting usage, rather than offering extra functionality
  • Testing, support, localisation, upgrades and management are all headaches
But despite these difficulties, some vendors are (sometimes grudgingly) starting to change their stance and are dipping their toes into the on-handset realm.

There are various use cases and software types emerging around device "smarts" for assisting in mobile traffic management, for example:

  • Offload assistance and WiFi connection management
  • Security such as on-device application policy and encryption
  • User alerting - or operator feedback - on congestion and realtime network conditions from the handset's point of view
  • Quota / data-plan management
  • Feedback to the network on device status (eg power level, processor load etc)
  • User control of application data traffic
  • Low-level connectivity aspects
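To make the quota / data-plan management use case concrete, here is a minimal sketch of what an on-device client might do: count cellular usage locally and alert the user as they approach their cap, rather than relying on the network to tell them after the fact. All class and threshold names here are hypothetical illustrations, not any vendor's actual API.

```python
from dataclasses import dataclass

@dataclass
class DataPlan:
    monthly_cap_mb: float      # plan allowance for the billing period
    used_mb: float = 0.0       # usage recorded so far

class QuotaManager:
    """Hypothetical on-device quota manager: tracks cellular usage
    and raises user alerts as consumption approaches the cap."""

    WARN_THRESHOLDS = (0.5, 0.8, 0.95)  # illustrative fractions of the cap

    def __init__(self, plan: DataPlan):
        self.plan = plan
        self._warned = set()   # thresholds already alerted on

    def record_usage(self, mb: float) -> list:
        """Record cellular traffic; return any newly triggered warnings."""
        self.plan.used_mb += mb
        frac = self.plan.used_mb / self.plan.monthly_cap_mb
        alerts = []
        for t in self.WARN_THRESHOLDS:
            if frac >= t and t not in self._warned:
                self._warned.add(t)
                alerts.append("You have used %d%% of your data plan" % int(t * 100))
        return alerts
```

A client like this could also feed the same counters back to the operator, covering the "feedback to the network" use case with no extra measurement infrastructure.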
I'm maintaining a list of vendors active in these areas (and a few others), as well as my thoughts on who really "gets it", but I'm going to hold off on naming them all on this occasion, as I know many of my esteemed rivals occasionally drop by this blog.

However, one that I will highlight as being very interesting is Mobidia [not a client], which aims to put control into users' hands, rather than having boxes in the network make arbitrary policy decisions. For example, it's one thing for an optimisation server to guess whether the user prefers a "non-stalling" but degraded video - but quite another (and much better) for a software client to let the user participate directly in that decision, trading off quality vs. impact on their monthly data quota via an app. I was very impressed when speaking to them, especially in comparison with some of the purely network-centric DPI/policy/optimisation vendors I met in Barcelona. I think this type of user involvement in policy will be an important piece of the puzzle.
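The user-in-the-loop trade-off described above can be sketched very simply: instead of the network silently degrading a video, the client shows the user what each quality level would cost against their remaining quota and lets them choose. This is a hypothetical illustration of the idea, not Mobidia's actual implementation; the function name and quality labels are my own.

```python
def describe_options(remaining_mb, sizes_mb):
    """Return human-readable choices an app could present to the user,
    showing each video quality's cost against the remaining data quota.

    remaining_mb -- data left in the user's monthly plan (MB)
    sizes_mb     -- dict mapping quality label to estimated download size (MB)
    """
    options = []
    for quality, size in sizes_mb.items():
        pct = 100.0 * size / remaining_mb if remaining_mb else float("inf")
        options.append("%s: %.0f MB (%.0f%% of remaining quota)" % (quality, size, pct))
    return options
```

For example, with 500 MB left and an HD stream estimated at 300 MB, the user sees that HD would consume 60% of their remaining quota and can decide for themselves whether SD is good enough.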

Management of WiFi connectivity is another area where device-level touch points are important. Although some aspects can be managed from a device management / configuration box in the network - or via standards like 802.11u - that is only ever going to be a partial answer. There will need to be a proper on-device client with a UI, in order to get the experience right in all contexts. (I'll do another post on WiFi offload soon, as there are other important issues, especially around the idea of backhauling traffic through the core).
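The core decision such a connection-management client makes can be sketched in a few lines: prefer WiFi when it is actually usable, and fall back to cellular otherwise. The signal threshold and data structures below are illustrative assumptions only; a real client would also weigh policy, cost, authentication state and the application's needs.

```python
from dataclasses import dataclass

@dataclass
class Network:
    name: str
    is_wifi: bool
    signal_dbm: int        # received signal strength

def pick_network(candidates, min_wifi_dbm=-75):
    """Prefer the strongest usable WiFi network over cellular;
    otherwise fall back to the strongest available link.
    The -75 dBm usability threshold is illustrative only."""
    usable_wifi = [n for n in candidates
                   if n.is_wifi and n.signal_dbm >= min_wifi_dbm]
    if usable_wifi:
        return max(usable_wifi, key=lambda n: n.signal_dbm)
    return max(candidates, key=lambda n: n.signal_dbm)
```

Even this toy version shows why the decision belongs on the device: the handset is the only place where the instantaneous WiFi signal quality is actually known.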

Overall - device-based policy management is messy, heterogeneous and hard to monetise. But it is going to be increasingly important, and the most far-sighted network vendors would do well to incorporate the "real edge" into their architectures.