
Thursday, June 30, 2011

Something to watch - voice comms and voice apps in the browser

Tomorrow I'll be running the first of the Future of Voice Masterclasses I've been developing with Martin Geddes. We'll cover a broad array of topics around the value, business model and technology of voice communications, especially as we go beyond the basic telephony service we're so familiar with.

I've spent the last couple of days at the wonderful eComm conference in San Francisco, listening to a challenging series of speakers cover everything from telecom regulation to wireless sensors to the psychology of motivation.

One of the presentations that most surprised me was from Voxeo. It referred to the potential for running voice (specifically VoIP) inside HTML5 browsers and apps, rather than through standalone applications like a Skype client.

This made me wake up, as I've previously been following the whole native-apps/web-apps debate without really being swayed by the web side of the argument. Indeed, I went to a Mobile Monday London event recently which did actually debate the issue formally, with two opposing teams. I asked a question about whether web applications would be suitable for demanding apps like VoIP - and even the web advocates said no, that was out of scope.

The Voxeo presentation covered the WebRTC and RTCWeb standards. (RTC = realtime communications). In a nutshell, there's a lot of work going on to enhance HTML5 so that it can deal with various codecs and streaming protocols, as well as Javascript APIs to control media - for example with access to the microphone and speaker.

But the really interesting things are that:
- Signalling is all done with web protocols like HTTP and XMPP, not SIP.
- Google has donated a ton of its GIPS code to the project, which does clever acoustic stuff like dealing with echo and packet loss.
- Voxeo's platform, Phono.com, has a variety of software functions which enable all this capability to be bundled into useful formats - such as initiating a call.
- Using something called PhoneGap, web developers can create web apps for Apple iOS which incorporate voice connections and calls natively.

So one web page (or browser) could make a voice connection with another server or browser. These are not phone calls. These are additional voice applications, which could theoretically connect to the public phone network, but don't need to. They might be voice-enabled game sites, or social networks, or whatever, where voice just works.

Think about that for a moment. Voice communications becomes a feature of a web page, the same way that tables, or style sheets, or embedded images are. Voice as a feature, not a service.

Now all of this is still some way away from being fully practical for mainstream phones. But over the next few years, it seems likely this is going to get built into future browsers as part of the HTML5 standard. In other words, an HTML5-compliant mobile browser in 2013 may *have* to support this, although that may be dependent on whether a given device gives API access to all the relevant bits & pieces like microphone and speaker in the right way, without latency or other glitches.

I'm still trying to get my head around the ramifications of this - but either way, it's deeply, deeply important and potentially represents more of an alternative for LTE Voice than even some OTT apps like Skype. Because this isn't voice as a separate OTT application - it's voice *in* the web itself.

Wednesday, June 22, 2011

Is mobile voice revenue being hugely overstated? And if so, what does that imply for VoLTE?

In our upcoming Masterclasses on "The Future of Voice", Martin Geddes and I introduce the idea of "peak telephony". This is the point at which today's traditional telephony services, fixed or mobile, hit the top of the curve for both revenue and importance, after which price erosion and substitution by alternative applications means decline for normal operator voice.

Various mobile operators have already reported declining voice ARPU - even allowing for distortions from users spreading their spend over multiple SIMs and accounts. This is not just a mature-market problem either - at the Femtocell Summit yesterday, I saw a presentation from a Malaysian operator forecasting an overall drop in mobile voice revenues over the next few years in that market.

In order to stave off the inevitable, we believe that operators need to innovate in both technology and business model, looking beyond "plain old phone calls" to new ways of delivering and monetising voice services and functions.

Mobile operators also have to deal with a second disruption, as LTE networks force a push towards VoIP. They need to absorb the costs of implementation - without a clear path to delivering more revenue to justify that investment. The guest post I wrote on Visionmobile about The Future of Voice argued that VoLTE is merely "old telephony" reinvented to run on LTE, rather than a platform for enhanced "neo-phone" services that could significantly add value to operator voice business models.

However, all this potentially pales into insignificance compared to a third possible disruption for mobile voice revenues: The Revenge of the Accountants.


In a nutshell, the adoption of new international accounting standards may mean that users' repayment of handset subsidies has to be "unbundled" from the underlying service revenues. This applies most critically to postpaid users, who are given a "free" or heavily-discounted phone at the start of their contract. It is not uncommon for a $600 iPhone to be sold to a user for a headline price of $200, with the other $400 essentially recouped as part of the monthly service fees over two years.

At the moment, the whole of the user's billed payments - let's say $75 a month - is recognised as service revenues, and then sliced up into voice / data / SMS in their financial reports. So maybe $40 is deemed to be voice, $15 is messaging, and $20 is data. The $400 handset subsidy gets buried in the accounts as a cost of sale, or subscriber acquisition cost.

Now I'm not going to pretend to be an accountant or fully understand all the nuances here. I'm sure there are various wizards at some operators who can make the numbers "dance". But if I'm reading things right, the key thing to watch is Draft IAS (Intl. Accounting Standard) 18: Revenue in Relation to Bundled Sales, which forms part of the IFRS (International Financial Reporting Standards) approach to bean-counting.

My original reading was that the subsidy ($400 in this case) was divided up and stripped out of the monthly revenues. So for a 24-month contract, this would mean that $16.67 each month was essentially a loan repayment, meaning that the service component would have been $75-$17 = $58 "real ARPU".

Actually, that's oversimplified. This document from Etisalat gives an accountant's view of IFRS and treatment of handset subsidy, which actually involved the "fair value" of the standalone, SIM-free price of the handset - maybe $700, not $600.

Applying that here, we take the "total consideration" of the contract as $200 (upfront handset payment) plus 24x$75 to yield $2000 overall spend over the lifetime of the contract. That's set against the value of the deliverables as $2500, including the $700 fair-value of the handset. That then translates out to recognising 700/2500*2000 = $560 for the handset purchase, and $1440 for the service, over the life of the contract.

In other words, the allocated amount to operator services should be $60 a month, not $75. Which means that the voice portion is also reduced by 20%, from $40/month to $32. Obviously, the data element is also reduced.
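To make the arithmetic concrete, here is a small sketch of the relative fair-value allocation described above. The figures are the illustrative ones used in this post, not real operator data, and the function name is mine:

```python
# Illustrative sketch of the draft-IFRS style "fair value" allocation for a
# bundled handset + service contract, using this post's hypothetical numbers.

def allocate_revenue(upfront, monthly_fee, months, handset_fair_value):
    """Split the contract's total consideration between handset and service,
    pro-rata to their standalone "fair values"."""
    total_consideration = upfront + monthly_fee * months        # 200 + 24*75 = 2000
    service_fair_value = monthly_fee * months                   # 1800
    total_fair_value = handset_fair_value + service_fair_value  # 700 + 1800 = 2500
    handset_revenue = total_consideration * handset_fair_value / total_fair_value
    service_revenue = total_consideration - handset_revenue
    return handset_revenue, service_revenue

handset, service = allocate_revenue(200, 75, 24, 700)
print(handset)                 # 560.0 recognised as a handset sale
print(service / 24)            # 60.0 per month of "real" service revenue
print(service / 24 * 40 / 75)  # 32.0 voice portion, down from the reported $40
```

The same pro-rata scaling (60/75 = 80%) is what cuts the notional voice element from $40 to $32 a month.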

I'm trying to work out where we are in the accounting standards draft / ratification cycle. This document from PWC seems to suggest that the regulations are likely to come in from 2014/2015, with a need to start preparing parallel accounts as early as next year. There are also various national bodies (eg the FASB in the US, working jointly with the IASB) with their own variations, detailed rules and so forth. PWC references the proposal "Revenue from contracts with customers" published in the US in June 2010. To be honest though, the details are beside the point for this post. The key thing is that real mobile voice revenues (and data as well) are almost unarguably being overstated because of the blurring effect of handset subsidies. Exactly how, when and where the financial reports change doesn't change the fundamentals.

It could well be argued that these changes should be applied retrospectively anyway - so maybe it all just nets out, with the peak of peak telephony simply lower but the shape of the curve the same. And that's absolutely fair if we look at the past - maybe we just say "Oh, the market was worth $600bn, not $700bn, because we didn't split out the $100bn we spent on handsets" and leave things as they stand. But going forward, if we are specifically going to look at business cases for new voice-related capex, this all starts to matter much more - especially if we consider the relative business case for keeping voice on circuit-switched networks (2G, 3G) versus migrating it to VoIP on 4G.

There is also a separate discussion to be had about service bundling and whether we should keep thinking of data services as something added "on top" of a voice and text plan, as many operators do today. Especially with LTE, there is a strong argument to say we should have a general "IP line access" fee, on top of which services - telephony, SMS, Internet access, content etc are layered. So maybe the $75 monthly fee should be allocated as $15 handset, $15 IP access, $24 IP telephony, $9 (IP) SMS and $12 Internet access.

That's a topic for another time, although I'd previously written about this type of approach here.

Either way, I think that today's mobile voice revenues are significantly overstated - perhaps by as much as 30-40%. As I said before, I'm not an accountant, but I think it is very important to recognise that some of our cherished data-points, which we're using to make investment decisions, are much more fluid and badly-defined than we might think.

EDIT - Another thought: it will be interesting to see if the accounting treatment of VoLTE (which clearly needs an IP connection and therefore 'data service' in order to work), will be different to that of either circuit-switched fallback or "Velcro-like" dual radio solutions. There is an argument that being able to continue selling separate voice plans on a voice-only network will be much easier for auditors to agree on, rather than the everything-over-IP approach.

One last plug for our Masterclasses on the Future of Voice (which I promise won't have more than a few minutes on accountancy): The first one is next week in Santa Clara, with the second one in London on July 14th. Details are at http://www.futureofcomms.com/ and booking is either via Amiando (linked from that site), or direct from me or Martin. (information AT disruptive-analysis DOT com )

Friday, June 17, 2011

Key takeouts from the Mobile Data Offload conference in Berlin - and why we need to keep some seams


I’ve just spent a couple of days at the first offload-specific conference I’ve come across, organised by IIR. It’s been useful, giving me some good new contacts and allowing me to reconnect with some existing clients and friends. This blog post is just a summary of some of my take-outs and reflections – some people may already have seen some tweets I posted with the #mobileoffload tag.

One thing that seems to be coming through quite strongly is that WiFi offload is currently taking the high ground in comparison to femtocells, especially in the residential marketplace. Conversely, there seems to be growing momentum for outdoor “small cells” of various types compared to WiFi hotzones. 

Neither of these is absolute, but both fit into a general narrative that:
  • Outdoor coverage and capacity is “classical” mobile network territory in terms of personnel, planning and operator processes. Consequently, the idea of small cells fits with the notion of “more of the same, but smaller and cheaper and easy to site and manage”.
  • Indoor data offload tends to be driven by consumers’ familiarity with – and often preference for – WiFi. The heaviest mobile data users are almost certainly a subset of those that are happy with the setup and operation of WiFi in their homes. Operators' ability to deal with WiFi's vagaries through device-side software or inbuilt standards is also improving.
The conference focused heavily on this last point - what I'm calling the “telco-isation” of WiFi. There are various standards and specifications being worked on by the WiFi Alliance, Wireless Broadband Alliance, 3GPP, GSMA and others. There’s an alphabet-soup of acronyms here – 802.1x, 802.11u, Hotspot 2.0, ANDSF, I-WLAN, WISPr and plenty of others. There was lots of talk of EAP-SIM authentication and so-called "seamless" mobility. There are various approaches to dealing with the operators' own and partnered WiFi accesses, especially around extensions of the mobile roaming mechanisms.

Some of this is very important and is being done intelligently and effectively. The idea of improved "network discovery" so that you can tell more about a WiFi access than its SSID makes a lot of sense. It is also important in some cases for operators to be able to "steer" users to particular APs or SSIDs, and collect information and maybe enforce certain policies. In some cases, SIM-based authentication can make sense as well - an area where my opinion has shifted a bit recently.

However – and this is a big however – I think there are some serious issues. In my view, the industry is in danger of making the same mistakes it made with UMA about 5 years ago. The giveaway is in this clichéd word "seamless". I spent a lot of time criticising this aspect of UMA, and I can see myself having the same conversations all over again. Seamlessness is not the utopian ideal, just as it wasn't in 2006.

In a nutshell - sometimes, and for some use-cases - automated and seamless (ie zero user-touch) connection to WiFi is absolutely desirable, ideally with session continuity and all that other fine stuff. But, critically there are also various use cases where seams are important, and need to be made visible to the user and/or applications running on the device. The tricky part is designing the end-to-end system, and especially the user interface on the connection manager, to cope with both sets of scenarios.

Seams might be "messy", but they are appropriate for certain contexts. To reiterate the analogy I made in my presentation, we don't all go around wearing Lycra catsuits. Our clothes still have seams for good reasons, and the same is true of networks. Ultimately a seam is a border, at which things change - speed, latency, security, cost, ownership, policy, power consumption and many other parameters. The idea that the border should always be crossed with the user kept unawares risks a whole host of problems.

There are various angles here:
  • The user will often wish to connect to WiFi networks that are not "approved" or linked to the operator's network or WISP partnerships. Most obviously, the user will want to use home broadband WiFi, private enterprise networks (often behind a firewall and with the corporate network's own security and authentication) and free public WiFi where it is available. The operator-driven WiFi software must not get in the way of this type of scenario - and neither should it try to tunnel back via the operator core in these cases.
  • The user may have access to multiple WiFi networks in a given location. The operator-preferred one may not be the best - perhaps because it costs more, perhaps because it is slower, perhaps because policies are enforced that the user would rather were not. Auto-connecting anyway may be an undesirable outcome.
  • The same WiFi network may be available locally on better terms. I'd be annoyed if my phone automatically logged on to a hotel WiFi (at a cost or lower performance), when the conference organiser was giving out free pass-codes. (Not at the useless Kempinski in Berlin though, obviously - no inclusive delegate WiFi at the offload conference, ironically).
  • Some applications may "come with WiFi" themselves. Skype partners with Boingo, for instance. A presentation by Sky's recently-acquired WiFi network The Cloud suggests that Sky's future video apps and content will be tightly coupled to its own WiFi footprint. If I am watching Sky HD movies on my phone in a public place, I'll want to connect to its own optimised connectivity (apparently guaranteeing 1MBit/s per user) rather than someone else's that is heavily contended and which routes traffic through a video-compression box.
More generally, this fits with my concern that the telco-isation of WiFi is starting to look quite Machiavellian and unrealistic. Speaking to people with a view on the evolution of standards, some operators are apparently attempting to own and control WiFi on smartphones outright. While some level of improved control is understandable, we should be wary of the idea that an operator might control the overall WiFi connectivity on a device. 

There are plenty of use-cases for WiFi which are not service-provider centric but "private"- notably enterprise connectivity, or connected-home technologies such as DLNA. If someone sends photos from their phone to their TV or home media server, that is not a "service", but merely data transferred locally over the individual's own network. You wouldn't expect an operator to be involved if you just moved the memory card, which is functionally identical to local WiFi use.

Ultimately, WiFi is a form of wireless LAN. It's a form of Ethernet. In general, companies that don't understand LANs are not the right ones to get the wireless versions working properly. Most Ethernet use is private, and WLAN is no different.

Some other points from the event:
  • Offloading signalling does not appear to be well-understood yet, but was at least a topic of discussion
  • There wasn't as much talk about on-device client software for offload control/management as I'd expected, although there were companies such as Roke and Onavo in attendance
  • The session on Net Neutrality was lively, but didn't really touch on offload that much. The AT&T speaker was very vocal against the hard neutrality laws being mooted in the Netherlands, but conspicuously silent on how non-neutrality might impact its own femtocell traffic when carried over competing fixed/cable ISPs' broadband.
  • Some very good sessions on mobile broadband economics - especially around the mix of data from different devices, and the fact that for most operators, only a few cells really face congestion at the moment. 
  • It's worth bearing in mind that for those MNOs selling USB dongles as an alternative to fixed broadband, their customers won't have home DSL/cable to which to attach a femto, or use for WiFi offload....
  • Offloading traffic from a MiFi-style personal hotspot (or smartphone tethering) clearly makes no sense
  • There are plenty of complex connection-management scenarios to deal with around offload, for instance selecting between LTE macrocell, HSPA femtocell and various WiFi connections, especially with multi-radio capable devices and multi-tasking where certain apps have different needs.
  • LTE offload is going to get tricky around managing VoIP, whether it's operator-based or third party.
  • Use of WiFi when travelling internationally is going to be an important part of operator strategy. I wouldn't be surprised to see aggregate WiFi+3G roaming statistics being used to convince regulators that "average" data roaming prices are falling fast, even though the cellular portion remains very high-cost.

One other thing I'm becoming aware of: there’s quite a lot of smoke & mirrors about WiFi offload stats. In particular, a lot of the published numbers for “% of data offloaded by operator X” need to be viewed through a lens of scepticism. 

In my view, WiFi usage on smartphones falls into three main categories:
  • Private WiFi use, typically either in the home or office but also elsewhere. As discussed above, this is WiFi access that is used with the mindset of “having a small computer connected to a LAN or broadband” – ie those applications and content that might have been used even without having a cellular data plan anyway. One way to think about it is the type of use that you might see with an iPod Touch – which clearly isn’t offload WiFi traffic as it doesn’t have a 3G modem.
  • Offload WiFi – this is the traffic which is directly moving from 3G/4G connectivity over to the WiFi access, directly substituted. This is the number that is the most important in terms of the economics – traffic which would otherwise have gone over the macro-cellular infrastructure.
  • Elastic WiFi – this is linked to offloaded traffic, but represents the extra amount that users will tend to consume given faster speeds or (perceived) lower price. In other words, this is incremental and not substitutional, even in mobility-centric use cases (eg watching video in a café or airport)
I suspect that we'll see a lot of over-inflated WiFi offload business cases based on spurious calculations that don't take this into account.
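To illustrate the gap, here is a sketch with entirely made-up traffic figures (the function name and numbers are mine, not from any operator): a headline "% offloaded" stat counting all WiFi bytes looks far more impressive than the genuinely substituted traffic warrants.

```python
# Hedged illustration: the headline "% offloaded" counts all WiFi bytes, but
# only the directly substituted "offload WiFi" category relieves the macro network.

def offload_stats(cellular_gb, private_wifi_gb, offload_wifi_gb, elastic_wifi_gb):
    total_wifi = private_wifi_gb + offload_wifi_gb + elastic_wifi_gb
    # Headline stat: all WiFi traffic as a share of total device traffic.
    headline = total_wifi / (total_wifi + cellular_gb)
    # Economically meaningful stat: only traffic that would otherwise have used
    # the macro network (private use happens anyway; elastic use is incremental).
    substituted = offload_wifi_gb / (offload_wifi_gb + cellular_gb)
    return headline, substituted

# Made-up monthly figures for one smartphone user, in GB
headline, substituted = offload_stats(cellular_gb=1.0, private_wifi_gb=2.0,
                                      offload_wifi_gb=0.5, elastic_wifi_gb=0.5)
print(f"{headline:.0%}")     # 75% - the impressive number in the press release
print(f"{substituted:.0%}")  # 33% - the number that matters for the business case
```

The business case for offload capex should rest on the second figure, not the first.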

I'll be uploading my presentation to Scribd and Slideshare soon & will post accordingly.

Tuesday, June 07, 2011

A classic example of app complexity that network DPI would find hard to resolve

Today seems to be the day for me to needle some of my main targets. This morning I had another shot at the hapless RCS service, and now it's the turn of my biggest network-side punchbag, application-based charging.

I've just been given a classic example of why this is going to be nigh-on impossible to ever get right.

In theory, the network should be able to pick out the fact that I'm using Google Maps. I'm sure it's got a pretty predictable "signature" that the average DPI can spot.

But what it probably can't spot is *why* there is Google Maps traffic being used. I've just downloaded the latest version of the Vodafone "MyVodafone" app for my iPhone. It's pretty useful, with a good dashboard feature showing how much data I've used against my cap and so on. This version also comes with a WiFi logon feature.

The sign-up for this has a warning message, telling you that in order to find the nearest WiFi access point, the app uses (guess what) Google Maps. And that I am liable for the data charges incurred in doing so. Now I'm guessing that this is done for a good reason - most probably speed and expediency of getting the thing released, plus I also expect it doesn't use *that* much data in the big scheme of things.

In theory, Vodafone ought to have set up some sort of rule in its network to obviate this, and zero-rate its own offload-location data consumption, especially as its reduced macro network load makes it the main beneficiary. But that would have needed to somehow check that the offload app was indeed the "user" of Google Maps, rather than just me trying to find my way around normally. And that's rather hard, without some sort of agent on the device watching what's going on and trying to decode which GMaps packets are "native" in the mapping client, and which are used via the local API for specific apps.

This is precisely the sort of hard and complex situation that I have in mind when I say that app-specific charging is going to be a nightmare. Imagine for a moment that Vodafone had a "menu-driven" non-neutral pricing model, where I got charged £3 a month for using the Google Maps app. I'd be rightly irritated if *I* didn't use it, but the operator did itself through its own software, charging me for the privilege anyway. I don't expect the regulator would be too happy either.

On another note, let's see how the Vodafone WiFi app manages to coexist with my other WiFi finder (BT Fon) on my handset. I don't think either is auto-logon, but I can imagine some interesting situations if they are, as both use BT Openzone. Will I be able to tell which "virtual" WISP I've logged into? 

Creating user engagement in RCS and other communications services

I've been having many more discussions recently about my vehement views on RCS and why I think it is (still) destined for failure. In short, the current hoopla about various operators and vendors doing a big push to "make it happen" is not enough.

Yes, it helps that DT, FT, the US & Korean operators and (belatedly) Vodafone seem to be getting their marketing machines & spin-merchants lined up. Yes, it helps that RCS-e ditches the early RCS presence function which normally kills batteries and generates large amounts of extra signalling traffic. Yes it helps that Android is "malleable" so operators can get RCS-e clients onto some future handsets without too much pain. Yes, Orange and others are reportedly trying to strong-arm handset vendors into implementing it. Yes, executives from DT and other operators are name-checking it wherever possible on the conference circuit. Yes, I've even heard the word "freemium" mentioned in the same sentence as RCS.

All good stuff. But falling under the banner of "necessary but not sufficient".

These improvements still don't mean that RCS-e somehow overcomes the other dozen or so problems I identified last year in my report on its near-inevitable demise. I predicted it would launch, splutter along for a bit, and then fail.

It's notable that when I have discussions with operators or vendors about what the problems really are, the one theme that seems to resonate is that of user engagement. How do you encourage people to actually use and exploit RCS rather than the myriad of other messaging and sharing and social-networking tools at their disposal? What makes them "invite friends" and others to accept those invitations? What makes them "invest" in the service?

Top of the list of things that versions of RCS I've seen *don't* do is permit the little snippets of user interaction that make alternatives like Facebook or Twitter or BlackBerry BBM so engaging.

The most obvious is "Like". On Facebook, you get instant validation that you've posted a cool picture, added a fun status update, attended a great event, listened to a great music track or whatever. It's a single click, but it communicates involvement, friendship, respect, attention, humour and all those other human qualities. It's a way to say "No, I haven't forgotten you, I am reading your stuff but don't have time to write a full message". It's like smiling at a friend, rubbing your partner's back, winking at someone in a crowd.

"Retweet" is similar. As are a whole host of "Vote up/down", "+1", "Recommend", "Share" and so forth.

These create user involvement and engagement, with a simple HTML link. They also tend to be extensible - as seen by the amount of Facebook Connect logos around the web.

Maybe a future version of RCS - or perhaps some operator-specific variants - will do something similar. Because if not, the services are likely to be very "dry".

There's another form of user interaction for messaging I've just become aware of in this context as well, triggered by this article about Apple's new iMessage service. It has something that most PC-based IM software has had for years, as well as BlackBerry Messenger - "typing indication". That's the little animation on a Skype or Yahoo IM window that shows that the other person is composing a reply. It will be interesting to see if any RCS clients can do the same thing - some of the specification documentation suggests it should.

The problem is that in future, communications users will have a very low tolerance of "clunkiness" - and they will also expect features to be upgraded like today's best apps, on a monthly or quarterly basis. There will also need to be a mechanism for operators to test different types of apps on certain groups of *live* customers. Google and Facebook can change their web page layouts, or app behaviours, for certain groups of their users, to see what works best. In my experience, it's pretty rare for telcos to do comparison-testing of different versions of services on their "production" customer base.

Overall, I still think that RCS is going to face insurmountable challenges - especially with newcomers like iMessage and whatever Google does with adding communications services into the browser. I think there will be a few niche usage cases - and perhaps specific countries where local conditions are unique enough to support it. But unless they get the user experience not just "good", but "fun" and "engaging" as well, it will struggle to gain traction.

Monday, June 06, 2011

Can telcos compete in an era of fashion-driven services?

NEW: Download the Future of Voice Masterclass flyer here

We're all used to the descriptions of the mobile phone business being (to some extent) fashion-driven. Just like clothes, some things go in and out of style - touchscreens, clamshells, big, small, black, coloured and so on. We've also heard plenty of handset brands described as cool / uncool - obviously with variations around the world.

I remember a few years ago, for example, SonyEricsson was very much an edgier and slightly counter-cultural brand in the UK, back in the pre-iPhone / Android era. I remember being at a gig and noticing which phones were being lofted overhead to take photos or videos of the band - S-E handsets were dominant among the younger fans.

So we see device brands - SonyEricsson, Apple, Nokia, HTC, Motorola and so forth - compared with cars (Audi, BMW, Ford, Nissan or whatever) or clothes (Ted Baker, Calvin Klein, Marks & Spencer, Armani and so on).

Up to a point, that's been mostly irrelevant to the mobile operators - barring the need to subsidise the more expensive ones, but that's usually (pre-iPhone) meant particular models rather than the whole brand. Sure, they've been able to exploit exclusive deals or other arrangements - but I don't think they've particularly cared if LG is seen as the equivalent of Mercedes or Citroen or Hyundai.

But now there is another issue - one already seen in the fixed-Internet world.

*Services* are now being driven by fashion, as well as hardware. With the coming of smartphones and apps - and fast access to the public Internet, with new ways of creating "viral" adoption among communities - we have seen the rapid rise (and often fall) of novel ways to communicate. Facebook, Twitter, WhatsApp, BBM, Skype, Viber, LinkedIn and so on have grown in part because of adoption within groups. They can be tribal, cliquey, ephemeral - used for a season and then discarded (remember MySpace, Bebo, MSN?). Or they can be regional (Hyves, Friendster, Cyworld, vKontakt, Orkut, QQ etc).

This is much more problematic for telcos, as operators are used to egalitarian, very long-lived service offerings that don't vary much in popularity, awareness or coolness. This has been because in the past, there were very few communications services - phone calls, SMS, email, fax. All were essentially "designed by committee" and so none could possibly be thought of as cool or fashionable - they just "were there".

Sure, there are parts of the communications-using population which aren't particularly fashion-driven, but fewer than you might think. Plenty of CEOs want to connect their latest, shiniest i-Toy to the corporate network. Plenty of businesspeople were using BBM long before the teenagers got hold of their 'Berries. Even 10 years ago, people in finance were sending messages (and jokes) via the proprietary Bloomberg messaging system rather than corporate email.

But in any case, two important groups - people with money, and younger people - often *are* fashion-driven, or at least status-driven.

Now there's an important distinction to draw when comparing phones and services with other non-tech brands such as cars and clothes. Phones are similar to cars in that most people only have one, or maybe two, keeping them for a considerable time. But people have wardrobes full of clothes, some new, some old, some cool, some utilitarian - and buy new ones regularly. They might buy the trendiest new shirt or coat for socialising, or something cheap and comfortable to chill out with on the sofa.

I think the PSTN and SMS and basic mobile telephony are going sofa-wards. They're not going to be made obsolete, but relegated to the status of lowest common denominator clothes essentials that everyone has. Underwear that gets worn when nobody else is likely to see it. Sweat-pants for doing the gardening. Comfy shoes for a long-haul flight. Stuff that gets worn when you don't care about being fashionable.

It's quite common even for the coolest of hipsters to buy their socks from Marks & Spencer. Plenty of people pair one unique, expensive item with another which is totally generic. Prada + Primark. Zegna + Zara. Missoni + M&S. Tiffany + TopShop. (Not sure of the US or China or India equivalents here...)

The question is whether - and how - telcos could either turn into Primark equivalents, or develop platforms that could form the basis of continually-churning fashion-driven services. Primark, for those unaware, is hugely popular and quite profitable, even for low-end clothes. Its shop on London's Oxford Street is always swarming with people buying basic, cheap, almost-disposable clothes which nevertheless have an essence of coolness. Like Zara, it's been radically engineered to be responsive, with great back-office supply chain management. Conversely, other higher-end clothes brands have developed the annual cycles of fashion shows and manage to reinvent themselves regularly - and you also have fashion houses with multiple brands.

Some operators - notably DoCoMo in Japan - have long been pitching "this season's new services", but that's still not common given the lengthy cycle times for development and standardisation.

It's really not obvious to me how standards-based telecoms offerings can ever again play at the top end of communications services. Even if industry initiatives like RCS succeed, I suspect that the best they can aim for is being the next universal telecoms equivalent of a pack of £6-for-three Primark Y-fronts, worn underneath a pair of £300 designer/developer jeans. And to get to where Primark is today, they will still need prime retail space, a very hard-working team and flawless back-office functions.

NEW: Download the Future of Voice Masterclass flyer here
For Santa Clara event tickets on June 30th, book here  
For London tickets for July 14th, please contact me at information AT disruptive-analysis DOT com

Wednesday, June 01, 2011

Inspecting the inspectors & throttlers - reverse engineering network policy

I first wrote three years ago about the likelihood of various companies or other organisations starting to "reverse engineer" operators' traffic management policies.

Indeed, one of the common features of most regulators' pronouncements on more "flexible" regimes for Net Neutrality is that any traffic management must be absolutely transparent to the user. Clearly, that transparency will need to be tested, either by regulators, consumer advocacy organisations or application providers.

So a hat-tip to Azi Ronen's great blog on Traffic Management for spotting this research paper from the US state of Georgia, which does some great analysis of US ISPs' throttling activities. A whole range of other tools are also listed on this page: http://rk.posterous.com/tools-for-testing-your-internet-connection

Over time, I'm expecting to see much more granular approaches to this - for example tracking application-specific policies or other rules and controls. I've seen some analysis by Epitiro presented at a conference, which showed a certain ISP degrading IPsec traffic at certain times each day. It seems likely that many others will join this trend as well - the EFF has certainly been doing it for a while, for example. 

I also expect that Google, Apple, Netflix or others are collecting a huge amount of their own data and measurements about application performance metrics from smartphones and other devices. They probably have very good views on what looks like "natural" variation in congestion and throughput, versus that which looks "unnatural". As is the case with the Georgia study, any "messing about" with the IP stream will stick out like a sore thumb - as well any background optimisation, content adaptation and so forth.
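To make the "natural versus unnatural" distinction concrete, here's a minimal sketch of the kind of comparison such a study might run. All the numbers and the 0.5 threshold are invented for illustration; real measurement tools use far more samples and proper statistics:

```python
from statistics import median

# Hypothetical active-probe throughput samples (Mbps), keyed by hour of day.
# Real data would come from measurement tools like those in the Georgia study.
samples = {
    "http":  {9: [8.1, 7.9], 18: [6.2, 6.0]},
    "ipsec": {9: [7.8, 8.0], 18: [1.1, 0.9]},
}

def degradation(cls):
    """Ratio of busy-hour (18:00) to off-peak (09:00) median throughput."""
    return median(samples[cls][18]) / median(samples[cls][9])

# If all traffic slows down together, that looks like natural congestion.
# One protocol slowing far more than the rest sticks out like a sore thumb.
baseline = degradation("http")
flagged = [c for c in samples if degradation(c) < 0.5 * baseline]
print(flagged)
```

Here everything slows a little in the evening, but the IPsec class collapses far beyond the general-web baseline, so it gets flagged as possible shaping rather than congestion.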

In other words, operators' network policies are likely to be transparent - whether they want it or not.

What will be interesting is what happens in circumstances in which the network's performance appears to have been modified - in direct contradiction to an operator's marketing campaigns or the local laws. It will be unsurprising if we see some prosecutions for mis-selling or outright fraud in some cases.



Tuesday, May 24, 2011

Gaming the new peering arrangements - will non-neutrality really work?

I'm hearing a lot of discussion at the moment about peering being the new battleground for "non-neutrality" of networks, and especially a mechanism for operators to try and monetise traffic from Internet/video providers.

In theory, the argument goes along the lines of "symmetrical peering is fine, but if there's heavy asymmetry, eg mass downloads outweighing uploads, then we've got a right to renegotiate our deals with our Internet peers".

Surely, this is just going to result in Netflix, Google, BBC and others developing new services that are predominantly upload-centric, to try and redress the balance of traffic? Rather than either reducing downloads, or actually paying cold hard cash?

So instead of combatting traffic load, this surely just encourages Google to offer realtime augmented reality with video-processing in the cloud, clogging uplink as well as downlink? Or for the BBC to archive your old VHS tapes somehow? Or for Apple to push an audio "black box" that records all your ambient sounds during the day & streams them to a server? Or for all of them to start using peer-to-peer techniques for content distribution? (all of this encrypted and hidden from DPI, obviously).

This is just an idle musing for now, and I've never seen the details of a peering arrangement. I guess there could be some way of tying it to actual congestion (eg traffic from YouTube that gets downloaded in busy cell / busy hour is accounted for differently to that during quiet periods). But in that case, there would need to be some serious OSS/BSS integration work to actually be able to *prove* this.
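As a back-of-envelope illustration of what congestion-tied accounting might look like, here's a toy weighting scheme. The busy-hour window and the 3x weight are entirely invented - real agreements (if they exist) would need per-cell congestion data, which is exactly the OSS/BSS problem:

```python
# Illustrative only: a toy "congestion-weighted" accounting rule of the
# kind a peering agreement *might* use. Hours and weights are invented.
BUSY_HOURS = set(range(18, 23))          # assume 18:00-22:59 is busy
BUSY_WEIGHT, QUIET_WEIGHT = 3.0, 1.0     # busy-hour bytes "cost" 3x

def weighted_gb(hourly_gb):
    """hourly_gb: dict of hour-of-day -> GB delivered to the peer's users."""
    return sum(gb * (BUSY_WEIGHT if h in BUSY_HOURS else QUIET_WEIGHT)
               for h, gb in hourly_gb.items())

# Two peers each delivering 200GB/day, with different time profiles:
flat    = {h: 200 / 24 for h in range(24)}                       # spread evenly
evening = {h: (30 if h in BUSY_HOURS else 50 / 19) for h in range(24)}

print(weighted_gb(flat), weighted_gb(evening))
```

The same daily tonnage gets billed very differently depending on when it arrives - which of course just hands the content providers yet another thing to game, by time-shifting delivery into the quiet hours.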

Monday, May 23, 2011

Future of Voice Masterclasses: June 30th Santa Clara, July 14th London

NEW: Download the Future of Voice flyer here

Regular readers of this blog and my Twitter stream ( @disruptivedean ) will have noticed an increase in focus on voice and communications services and applications in the last couple of months. I've also written a guest post for Visionmobile on The Future of Voice, and last week spoke at the LTE World Summit in Amsterdam about Voice, VoLTE and the Future of Personal Communications. (A copy of my presentation can be downloaded here)

So, I am pleased to formally announce the first Future of Voice masterclass, which I am launching in collaboration with Martin Geddes. The event will be held at Intel's headquarters in Santa Clara on 30th June, the day after the eComm conference (which I would exhort people to make every effort to attend - I'm speaking about telcos' own-brand OTT services).

To book a space for the first event on June 30th, click here

The event is not going to be a one-way bombardment of slides, but will instead be a mix of presentation, collaborative group exercises and realtime consultation with Martin & myself. We are aiming for 20-25 people with a diverse mix of operators, vendors, Internet companies and other market stakeholders.

In terms of themes, we will be covering new business models and delivery technologies for voice, including:
  • The difference between "voice" and "telephony" - and what that means for telco and Internet companies
  • Delivering voice on LTE and "direct-to-cloud" voice on fixed networks
  • Understanding "voice as a service" and the new fragmented voice landscape - particularly after the Microsoft/Skype deal.
  • How to make money from free voice and messaging with B2C unified comms
For details of the event, please contact me at information AT disruptive-analysis DOT com, and I can send you a copy of the event flyer, and pricing information.

If you can't make it to California, then there is an event in London on 14th July (venue TBC) and US East Coast, probably the week after Labor Day.

To book a space for the second event in London on July 14th, click here

I look forward to seeing some of you there!

Download the Future of Voice flyer here


Friday, May 20, 2011

Thoughts from the LTE World Summit


Earlier this week, I spent two days at the LTE World Summit in Amsterdam. More than 2000 great & good members of the telecom industry, including a ton of operators and most of the major vendors. Multiple streams of presentations, a decent-sized exhibition show floor and “big conference” production rather than a small meeting room in a hotel. I was hosting an analyst roundtable on Voice, VoLTE and the Future of Communications, and also giving a 30-minute presentation on similar topics.

Those of you following my Twitter stream (@disruptivedean) will have seen a fair amount of ongoing commentary, but I thought a few issues were worth drilling into here. I'll be writing a separate post about the Future of Voice, and my upcoming workshops with Martin Geddes, so I won't overdo the VoLTE analysis here.

Overall though, I’ve come away rather pessimistic, despite all the bombastic hyperbole I’ve heard. I’m hearing the same old stories I heard at last year's event – and a lot of them are getting worse rather than better. Loads of hoary old clichés about peak rates and “exponential” data growth. How flatrate plans don’t cut it, long after most of them have been phased out anyway. A ton of unrealistic vendor hype about application-specific policy and charging “business models”.

The big story was also much the same as last year, albeit stated a bit more loudly rather than just implied – there are too many spectrum bands for LTE. At least 8 “core” bands, and another 10-20 also being deployed or considered. Europe will probably get by with three main ones – 800, 1800 and 2600MHz. With perhaps a little bit of 2100. And some use of bits of TDD spectrum knocking around. Then there’s a variety of US bands, Japanese-specific ones, Chinese ones and a variety waiting in the wings to get approved.

That is *much* worse than 3G, which had one core band for much of the world (2100MHz) and still took a long time to get either coverage or good handset performance.

Bottom line is that LTE spectrum fragmentation is not going to go away. This has a number of implications – firstly, roaming is going to be a real pain when moving “off-net” beyond a single operator’s OpCos, or between regions of the world. In all likelihood, HSPA will continue to be used for roaming in a lot of cases. Secondly, handset vendors will likely have to create either regional versions of handset hardware platforms, or make “world phones” that suffer from coverage issues in some markets. Either way, scale economies will be lower, prices higher, testing more problematic and time-to-market longer.

It will not be possible, for example, to have one iPhone variant that supports 3 European FDD bands, Verizon and AT&T 700MHz, the Chinese LTE-TDD variant, something for Japan, and perhaps another US band like AWS or LightSquared. I reckon that Apple will need to create three, possibly four distinct versions of future LTE iPhones.
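The variant problem is essentially a set-cover exercise. Here's a deliberately simplified sketch - the band labels are invented shorthand, and real constraints are RF design and antenna tuning, not just a count of bands per radio:

```python
# Invented band labels, purely to illustrate the variant-count problem.
# Real allocations (and carrier sub-bands) are messier than this.
REGIONS = {
    "Europe":  {"800", "1800", "2600"},
    "US-VZW":  {"700-VZW"},
    "US-ATT":  {"700-ATT"},
    "China":   {"TDD-38"},
    "Japan":   {"2100-J"},
}

def coverage(variant_bands):
    """Which regions does a handset variant fully cover?"""
    return [r for r, bands in REGIONS.items() if bands <= variant_bands]

# A hypothetical 4-band European-led variant:
eu_variant = {"800", "1800", "2600", "2100-J"}
print(coverage(eu_variant))   # covers Europe and Japan, neither US carrier, not China
```

Even this toy model shows that one radio supporting a handful of bands leaves whole regions uncovered - hence the need for several distinct hardware spins.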

Now Apple can afford to do that - it only has a single model introduced at a time, it sells in high volumes per device model/version and makes a huge margin on each. In other words, even if each "spin" costs an extra $100m to develop, it's still a drop in the ocean. If it creates three versions and sells 10m of each, it will probably make $2-3bn gross margin on each variant, so it can "wear" the extra hardware development and test cost quite easily.

But it would get very painful for lower-volume devices, or manufacturers that have broad ranges of devices. This in turn means it's probably going to be painful for operators with unusual spectrum bands (eg LightSquared) to get a decent range of decent handsets. 

In Amsterdam, we heard repeated pleading from operators - even DoCoMo - essentially saying "Support *my* band! Please! It's really good, and we can get economies of scale & support from all the vendors!". 

There are going to be some disappointed players left standing in this game of musical frequency chairs. And everyone else is likely to feel the knock-on effects of component suppliers' hesitation and uncertainty. Some operators will likely hold off on LTE decisions until the spectrum situation becomes a bit clearer.

One other option for LTE that got a little exposure - but was obviously still highly contentious - was that of wholesale-only shared networks like Yota (and LightSquared and a couple of others). I think that although that model makes sense in terms of spectrum usage efficiency, it also poses a risk for incumbent operators that will start to lose control over their core business enabler (the network) and may face a future where all differentiation comes in terms of the (often mythical, and always competitive) "services" layer.

I'll be writing more about the threat from "under the floor" players in the coming months - and why shared/outsourced/structurally-separate mobile infrastructure plays are both inevitable and highly disruptive. I'll be at the network-sharing conference in London next week as well.

One interesting angle on voice and VoLTE that is starting to bubble up - and which I've been suggesting / advocating for some time is that of dual-radio phones. We already see dual-radio CDMA/LTE phones for Verizon and Metro PCS, which use CDMA for voice and LTE for data. This has a distinct advantage over the proposed "Circuit Switched Fallback" standard, in that an incoming voice call doesn't switch off the LTE data channel. I'm expecting to see the same approach appear for GSM/LTE dual-radio phones, but that is much more complex as (unlike CDMA) both radios will probably need separate SIM cards, or two IMSIs on one card. At least one major vendor was openly discussing this approach - but at the moment the lack of standards about handling this type of device is a concern for operators.

Like VoLGA before it, dual-radio "velcro" GSM/LTE is a solution that *works* conceptually very well, but it will be interesting to see if the politics of the standards world - and some entrenched interests wanting to ensure that nothing detracts from VoLTE/IMS's uncontested anointment as top solution - get in its way. My view is that this should be the main backup plan or straight replacement for VoLTE: as telephony revenues start to fall, why would many operators want to invest in a new core network and applications when their existing GSM telephony works perfectly?

In my view, operators should invest their future voice/telephony budget in creating new voice products and platforms - and do the absolute minimum necessary to get decent "old school" telephony working on LTE smartphones. I think the Velcro (yes I know it's a trademark) approach could free the operators to concentrate on creating new and possibly more valuable voice and VoIP applications - before Skype/Microsoft does it for them.

The last comment in this post is about WiFi and LTE. I've had a few conversations recently about the rising star of WiFi usage for offload, onload, roaming and other operator use cases. I think that all of these are extremely important.... but I also sense a dangerous level of groupthink around the "telco-isation" of WiFi. There's a host of new standards and solutions that make bolting WiFi onto 3G/4G networks more "seamless" or more controllable. 

Those of you with long memories will know that I have an intense suspicion of the word "seamless". It represented all that was wrong with the ill-fated UMA technology. More than four years ago, I wrote what I thought was the requiem of seamlessness. But it's back, it seems. In a nutshell - seams are important. They're boundaries. Sometimes I want to know when I reach a boundary, sometimes I don't. Things change at boundaries - speeds, policies, price, ownership, security, latency and so on. In particular with WiFi, it is absolutely critical to enable a good user experience when choosing between "operator WiFi" and "private WiFi".

I see far too few advocates of the "private WiFi" use cases - there seems to be an assumption that WiFi access on smartphones will default to being "service"-mode. I think that is a deeply flawed belief, and unless addressed, will come back to haunt some of the new approaches to offload or operator-provisioned WiFi. More to come in later posts, conference presentations and so forth.

A few quick bullet points of "other" interesting items:
  • Apparently, TeliaSonera intends to charge extra for VoIP on its LTE network. Good luck with that. Maybe you can start by providing us with a clear legal definition of "voice"? Downloading a spoken poem? Audio telepresence? Skype video with "mute" switched on? Accessing voicemail? Encrypted speech inside HTML streams? If you're a Swedish-speaking telecoms lawyer, you're going to make a lot of money over the next few years....
  • Verizon was being very coy about its rollout and recent outage. Its conference speaker was not even from Verizon Wireless but from the EMEA arm of the company which is mostly the former MCI/WorldCom enterprise services division. Unsurprisingly, probing questions about the progress of VoLTE testing were not especially illuminating.
  • Apparently, SMS over the SGs interface *is* working. Just that vendors haven't bothered to tell anyone about it as it's not considered sexy. Let's see how the full SMS-over-LTE experience works on future phones though.
  • It was good to hear an anecdote from T-Mobile Netherlands that the biggest problem isn't "tonnage" of data traffic, but simultaneous signalling from lots of smartphones and apps in the same place. More interesting still was the massive explosion of the SMS-replacing "WhatsApp" service in Holland, which apparently got to 70% penetration (of smartphones I assume) in just 3 months. Hence KPN's profit warning a couple of weeks ago. (It's worth noting that Netherlands is slightly unusual when it comes to messaging, as it's historically been a low-Facebook use country, instead using its own local social network Hyves)
There were certainly more nuances I picked up about LTE, but the overwhelming sense was that, in Europe at least, there is "no hurry" to push it to the massmarket. That's a big contrast to the US, where a 4G marketing frenzy is taking place, dragging network deployment in its wake.


Monday, May 16, 2011

Telcos paying OTT players - balance of payments will look ugly

Through my work with Telco 2.0, I spend quite a lot of time thinking about how telcos can get "two-sided business models" (2SBM) to work. This involves deriving revenues from companies "upstream" of the users themselves, who pay to use the operator as a "platform" for doing business with the users more effectively.

An easy example of 2SBM is advertising, with the telco facilitating a brand by helping it market to the telco's (paying) users. Google does much the same, but only monetises the upstream (ads) and not the downstream (users searching). Another example of existing telco 2SBM is "bill on behalf of" - for example collecting payment for apps through carrier billing, and taking a rev-share from the developer.

Harder examples of 2SBM are where the operator wants to act as an identity/authentication provider, enabling various network-based APIs like location, or where it wants to provide some form of QoS or "sliced and diced" bandwidth for fixed or mobile broadband. Notwithstanding the ongoing wrangling about Net Neutrality, operators would dearly love to charge Internet companies such as Google or Facebook or Netflix for using "their pipes". As I've written before, simply acting as a bottleneck or tollgate is improbable - for any chance of getting "cold hard cash" for broadband 2SBM, the operators need to help the so-called OTT players do something extra, which best-efforts connectivity cannot do.

This is proving tricky, because the Internet companies have proven quite adept at making the most of ordinary Internet access connections, while the operators have found it hard to deliver "provable" enhanced QoS, especially in mobile, even where the law permits.

So at present, the amount of revenue flowing to operators from YouTube, Hulu, eBay and so on is vanishingly small, once you exclude basic connectivity from their servers - and perhaps some newer trends about peering / transit for those generating the greatest volume of video. Many of these companies have developed their own in-house alternatives to operator APIs (location has been the easiest, but others such as messaging and identity are evolving too).

So despite some ridiculous, sycophantic and wishful-thinking "telcowash" (4MB PDF) from consultants such as ATKearney, the chances of deriving extra revenues from Internet companies, by just sitting in the middle of the network with a couple of DPI and optimisation boxes, seem as slim today as they did three years ago.

Instead, there's a slow trickle of cash going *the other direction*. Operators are paying OTT companies for their unique applications and capabilities. DoCoMo has just cut a deal with Twitter to embed its apps into featurephones, and use its "firehose" feed for location-based services. Verizon has partnered with Skype, as has H3G - something I feel might evolve much further now, given the Microsoft acquisition. Facebook is reportedly charging for bulk access to its own APIs - which makes those RCS visions of handset addressbooks ingesting profile pictures and statuses look unlikely. And then we've had the acquisitions - France Telecom buying a major stake in local YouTube rival DailyMotion, Telefonica buying Jajah and so on.

And then of course there's the huge amount that operators spend on Google Advertising.

In other words, despite all the rhetoric, it seems like the OTT players are charging the telcos, not vice versa. The reason is simple - the OTT players are typically selling innovation and new value *first*, not attempting to monetise control. Enhanced Twitter will add value to DoCoMo's customers. Google's clever advertising and analytics help operators sell more stuff.

When the operators can demonstrate that their 2SBM offers add value (and revenue) to upstream players, especially on broadband, those players are likely to buy them. But they are unlikely to pay a "control point tax" without upside.

How many operators employ marketing staff to show that they can help Facebook, Google et al make more money if they use the operator APIs or QoS mechanisms?

Until that point, the balance of payments between telcos and OTTs will stay in the red.

Wednesday, May 11, 2011

Another reason why app / service based pricing for mobile broadband will fail

Imagine, if you will, that you are the CEO of a mobile operator that's just launched a new tiered-pricing model for mobile data on laptops and smartphones. It's based on differential pricing and QoS for different data/Internet applications. You've bought a ton of DPI and policy boxes to detect traffic and enforce the rules, and you've proudly announced a new "menu" of pricing tiers.

$10 per month = email, IM, basic web browsing
$15 per month = adds in social networking & mapping
$25 per month = adds in low-quality video & selected cloud services & basic VoIP
$35 per month = adds in high-quality video & high-quality VoIP

You've nicely defined all the different web services into the different buckets, and set up the T's and C's and the policy boxes appropriately.
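A tiering setup like this boils down to a classification table mapping DPI signatures to service classes, and classes to price buckets. Here's a minimal sketch - the tiers come from the menu above, but the app names and the mapping logic are my own illustrative inventions:

```python
# Toy version of the app -> tier mapping such a pricing menu implies.
TIERS = {
    10: {"email", "im", "web"},
    15: {"social", "maps"},
    25: {"video-sd", "cloud", "voip-basic"},
    35: {"video-hd", "voip-hq"},
}

# DPI signature -> service class (hypothetical examples)
CLASSIFIER = {"skype": "voip-basic", "msn": "im"}

def min_tier(app):
    """Cheapest monthly tier ($) that includes this app - or None if the
    DPI boxes have never seen it, which is where the model falls apart."""
    svc = CLASSIFIER.get(app)
    if svc is None:
        return None
    return min(t for t, svcs in TIERS.items() if svc in svcs)
```

The brittleness is obvious even in the toy: the moment Skype's VoIP gets bundled into an MSN-style client, is the flow "im" ($10) or "voip-basic" ($25)? Every such product change forces a reclassification across the whole policy estate.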

Now this morning, you've woken up to find that Microsoft has bought Skype. So now Microsoft has extra IM, VoIP and video-calling, as well as its own way of doing WiFi offload via the Skype/Boingo relationship. There's likely to be a whole host of mashed-up applications, launched over the next couple of years - some fixed, some mobile, some consumer, some business, some free, some paid.

So how exactly does that fit with the carefully-crafted pricing model and network policy setup? What's the business process for evaluating what has to change? What are the technical implications? What are the legal implications? How does it fit with partnering deals? How will users be informed? Does Microsoft have VPN services? What happens to stuff Microsoft / Skype does in the cloud? Does everything look the same on different devices & OS's? And how fast can any updates be made?

The list of headaches is endless. The scope for messing up is huge. And it's all highly dynamic & will change continually.

For me, this is yet another example of why app-specific pricing & policy is doomed to be limited to a few niches (eg anti-virus, throttling P2P uplink). Never mind the Net Neutrality legal debate - it is practical problems like this that make service-specific tariffs and so-called "personalisation" service menus irrelevant at best, and outright damaging at worst.

Tuesday, May 10, 2011

Microsoft + Skype + Nokia = NextGen 4G Mobile VoIP & messaging done properly

NOTE: The Microsoft / Skype deal is not yet confirmed, as I write this.

But if it goes ahead "as leaked" this is another major step for Microsoft's aggressive pursuit of Google and Apple, which also may have a secondary effect: further pain for the telcos and especially mobile IMS and its flag-waving applications VoLTE and RCS.

[Plug: I'm running a series of upcoming "Future of Voice" Masterclasses if you want to understand more about the implications & rationale for this. Contact details below to learn more]

I'm pretty sure that a lot of the comment and analysis today will be around whether Microsoft can execute better than eBay, why the price is so high, whether this is "all about Google" and whether Skype would have been better off living inside Facebook.

For me, this actually looks like a near-perfect fit for Skype. The other candidates I had in mind were Vodafone, AT&T, Cisco and Ericsson. No, not the most intuitive choices indeed - but companies with deep pockets, an interest in innovative services models and a willingness to pick and choose among standards vs. proprietary solutions where it suits them.

Some comments that help to explain my conclusions:

  • A substantial part of Skype's current user base is from PCs. Although mobile devices get all the glory at the moment, Skype epitomises what's best about desktop VoIP. More importantly, a laptop is probably the perfect device for many video-calling use cases, as the keyboard+hinge and upright camera is much better ergonomically than a propped-up tablet or mobile phone. This would have been lost in a purely handset-focused company (eg Nokia in the past, RIM or perhaps Qualcomm). This may have ruled out Vodafone too, I guess.
  • Skype gets widely-used in business - often only semi-officially, but it's a critical tool for many travellers, people doing conference calls and so forth. It is also increasingly working on corporate-grade solutions. This would have been lost inside a Facebook or similar company.
  • I think that some of the operators that are less aggressive about deploying LTE - especially for smartphones - are doing so partly because of doubts about getting VoIP to work properly, to a degree comparable with GSM telephony today. Skype has a significant chance of being the only massmarket VoIP that has a big user base, and works well on LTE, by 2014. The "option value" for that is potentially huge. Hence AT&T and Vodafone on my "other possible acquirers" list - I also would have added Hutchison 3 and maybe Telenor, but the price is too high.
  • Skype is class-leading in terms of understanding and helping to manage QoE (quality of experience) for IP communications *from the user device*. It doesn't control QoS (in the middle of the network), but involves the user and the device hardware to make the best of what's available, and alerts the user not just to problems "in the middle", but also to other things like the microphone not working properly, temporarily poor WiFi or 3G reception, or the device's processor running too slowly. Both Cisco and Ericsson urgently need device-side expertise to really understand "end to end" performance, but both know how hard it is to get that across numerous classes and brands of device. Skype has that knowledge. They have missed out today - but I suspect that Cisco's investors would have been wary, and Ericsson probably would worry too much about annoying its telco customers. It is also why it would have been a poor fit with Apple, which is much less platform-agnostic than Microsoft, especially in mobile.
  • Skype is leading the way on personal video communications. I don't use it personally, but many users do - the % of minutes that are video-based is astonishing. I remember speaking to a friend recently who didn't know Skype could work in voice-only mode. He thought it was JUST a video comms tool. It just works, and is cross-platform unlike FaceTime.
  • In the massmarket, Skype is probably the only platform that has (by skill or luck) worked out a way to get users to adopt "permission-based" voice communications. Many Skype voice or video calls are pre-arranged, or "escalate" from an IM chat and presence in a way that telcos have long dreamed about. Its desktop-first strategy (and timing) has enabled it to do what IMS should have done, had it been universally available and using a Freemium model in 2005. As such, this would have been a near-perfect (if expensive) Telco-OTT proposition - and would also help craft a voice experience that is much more than "just telephony", but fits with the Future of Voice vision I wrote about recently for VisionMobile
Would Skype have fitted well inside Google? It's difficult to tell. Google doesn't have much heritage of making and integrating large acquisitions, while Microsoft is "not bad", with some successes (eg Hotmail, Great Plains) and some failures (eg Danger). More importantly, Google has its own voice/VoIP initiatives, and internal politics would probably have been horrible with a Skype acquisition.

There are many other issues to explore around the Microsoft/Skype deal - especially the missed opportunity for one of the telecom operator community to "escape the herd" and lead the emerging Telco-OTT space with a head start.

But it's worth stepping back and focusing here on the impact on IMS, VoLTE and RCS. I still take the view that VoLTE is "necessary but not sufficient" - it's very late in coming, but there definitely needs to be a "simple circuit telephony replacement" technology for 3G/4G networks. GSMA and its partners are heavily focused on getting VoLTE working, especially focusing on interoperability and familiar themes like roaming. However, there also needs to be a focus on two other things that I reckon are being overlooked:

  • There needs to be a view about the Future of Voice angle. If VoLTE had started as VoHSPA 5 years ago, it could have just been Telephony v1.1 and that would have been fine. But the timing now is wrong - LTE is a key transition point, further catalysed by the smartphone explosion. In the next few years, voice *will* change irrevocably, expanding well beyond mere telephony to a huge diversity of applications and use cases. If VoLTE gets delayed, it will have missed its window of opportunity - and I think that's a significant risk.
  • More practically, I think that VoLTE will have to contend with a ton of real-world horribleness about getting VoIP to work while mobile and on cellphones. RF issues, battery issues, echo, poor acoustics, sound glitches, codec choices, packet-loss concealment and so forth. QoS only gets you so far - and then you need software and proper audio expertise to fix what's left. The network companies and standards bodies in VoLTE aren't really focused on microphones and sound volume levels - they're hoping that the handset companies will fix all that. Have a look back at the history of fixed-line VoIP to see how "easy" all that is to get right, even on relatively predictable home broadband or corporate LANs. Skype has been doing mobile VoIP for many years - and while it's not perfect, it's got a huge head-start.
In other words, Microsoft is buying an $8bn option on the future of the mobile telephony industry. If we get to 2014 and VoLTE isn't working as well as it should, Microsoft (and its partners like Nokia) will have both an OTT option and a "white label" proposition for operators. Don't forget, too, that Microsoft sells IP-PBX functionality in its Lync / OCS product - it doesn't think that all call control should reside in the operator domain.

As for RCS.... well I think this is just another nail in an already well-sealed coffin. Microsoft has never really bothered to grasp IMS ("Oh, that's just SIP isn't it?" was one response I got in an interview at MWC a few years ago) and it's now looking even more of a poor fit when combined with MSN, Live, corporate UC products and so forth. It seems likely that none of the big smartphone OS providers - Apple, Google, RIM or Microsoft - will be particularly well-disposed to RCS. Sure, there will be various third-party add-ons for Android and perhaps other platforms, but it's unlikely to be a key priority for Windows Phone now.

I'll try and update this post or add another later on, after some reflection, but this should be enough to catalyse some discussion.

Also, now seems like a good time to highlight some upcoming events on "The Future of Voice" I'm running together with Martin Geddes. These will be small-group collaborative Masterclasses, drilling into the use cases, technologies, applications and user behaviour for voice communications, as we pass the point of "peak telephony" and move on to other modes of B2B, B2C and C2C interaction. The first one is in Santa Clara on June 30th, followed by London on July 14th. Martin and I will also be conducting customised private workshops for specific clients. Email information AT disruptive-analysis DOT com for details.

Monday, May 09, 2011

My Top 10 blog posts from the past few years

I don't check Google Analytics that often for this blog - most of the time I write what I want, and don't stress about the levels of readership beyond a link on Twitter or two. I'm not selling advertising space, and much of my "target audience" seems to find the blog anyway, so why bother with all that SEO nonsense?

But I thought it was interesting to look at the most popular posts I'd written and see if there's a pattern somewhere. Over the past 3 years or so, these are the ones that come out on top:

  1. A post about sharing 3G dongle modems via a docking-station and WiFi. Not sure why this was so popular, except for the fact I coined the term "Dongle Dock". They never really took off as a concept, as MiFi-style products make for a simpler solution of 3G over WiFi.
  2. My (in)famous post re-writing the Monty Python dead parrot sketch - in which the deceased Mobile IMS has been nailed to its LTE perch by the 3GPP/GSMA pet store owner. I know this one got very widely circulated - I still get comments about it today - so no surprises there.
  3. A complaint about poor customer service I got from Carphone Warehouse. "Personal" rants tend to get seen by large numbers of similarly-dissatisfied people looking for others who share their pain. In fact, this would probably be at #1 as it was written in 2006, but I only started tracking hits this way in 2008.
  4. Back to IMS, LTE and VoIP, with a post discussing the original announcement of GSMA OneVoice (now VoLTE)
  5. This was a very short post from 2006, talking about how to integrate SMS into IP and IMS. It's notable that I still haven't seen any live demos (or even major announcements) about getting SMS over SGs working in LTE.
  6. An ongoing theme of mine is about the over-hype of NFC and mobile payments. In particular, I expect NFC is going to be about interactions not transactions. One of my first & best-read posts on the theme was a couple of years ago. A more recent analysis is here
  7. LTE voice once again - and specifically looking at the ill-fated VoLGA. I still think that it makes sense - and if VoLTE encounters the problems I anticipate, I wouldn't be surprised to see it get reincarnated somehow, perhaps as the basis for a Telco-OTT VoIP service on other telcos' LTE phones.
  8. Another post stemming from personal frustration - this time about Vodafone's egregious data roaming pricing strategy about a year ago. Who knows, maybe I contributed to their eventual decision about adoption of the £2/day plan.
  9. And *another* post on IMS, LTE, VoLGA and SMS. I hadn't fully realised how much traffic this discussion had got. For all those interested in my views, I'm hosting both a breakfast session on LTE Voice (Tues 17th at 8am), and an Analyst Spotlight on VoLTE and the Future of Voice (Weds 18th at 11am) at next week's LTE Summit in Amsterdam.
  10. And finally in the Top 10, my predictions for 2010, from the perspective of end-2009. Not too bad on the whole. But yes, I know I'm still complaining about Twitter despite using it for a year now. It's still awful, but unfortunately essential. I'm @disruptivedean.

Friday, May 06, 2011

Business model innovation in mobile broadband - the insurance model?

At the recent Telco 2.0 event in Palo Alto, I was on the panel discussing mobile broadband economics.

I had an idea there, on the spur of the moment, that I haven't had a chance to write up until now. It's still in "prototype" form and definitely not 100% practical straight away, but nevertheless represents the sort of lateral thinking I have yet to see in the mobile industry.

I pay my car insurance based on an annual premium payment. I phone around (or look online) for quotes, which typically ask me for my age, address, type of car, security I use, history of accidents or convictions, some history of my actual insurance usage (ie claims) and a bunch of other questions that help them categorise my risk level with some very complex software. Some specialist insurers target particular demographics, or have detailed underwriting expertise that allows them to provide custom quotes, taking into account unusual circumstances. I also get a discount if my previous year's driving didn't result in excess "usage" - ie a no-claims discount.

It got me thinking - why don't we price for mobile data in a similar way? A 37yo male living in central London with an iPhone 4, commuting during busy periods, with a history of video downloads & obsessive Facebook use might get quoted £500 a year for mobile data, while a 57yo female with a BlackBerry living in a rural area and working from home might get a quote of £200. And if someone "abuses" the service, the operator has the right to decline to quote them for a continuation of service next year - or raise the premium considerably - so there's an incentive to be sensible.

Now clearly, this would need a major change to IT and billing systems - as well as some interesting discussions with regulators and re-training of customer service. I'm certainly not saying it's easy. But leave that aside for a second - do you really believe that if the *insurance* industry (hardly the most dynamic group of companies....) can do something like this, then the telecoms industry couldn't as well?

The nice thing about it is that the actual metrics that the telco uses to estimate risk are hidden privately inside the system. It might be a measure of GB data "tonnage". It might bias against people who use lots of signalling-intensive applications. It might involve clever location-based algorithms. It might give discounts for people who have use of 2+ phones. It might discount people prepared to accept a higher "excess" (eg policy management downgrades during busy periods). There's an infinity of clever ways to tweak the system.
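To make the idea concrete, here's a minimal sketch of how such a risk-rated annual quote might be computed. Every factor name and weighting below is invented purely for illustration - a real underwriting engine would use far more sophisticated actuarial models, and the telco would keep its actual loadings private:

```python
# Hypothetical sketch of an insurance-style annual mobile data quote.
# All factors and weightings are illustrative assumptions, not a real tariff.

BASE_PREMIUM = 200.0  # baseline annual quote in GBP

def annual_quote(gb_per_month, peak_hour_share, urban=False,
                 devices=1, accepts_excess=False):
    """Return an illustrative annual data 'premium' in GBP.

    gb_per_month    - average monthly data 'tonnage' in GB
    peak_hour_share - fraction of usage during busy periods (0.0-1.0)
    urban           - whether the user mostly attaches to congested cells
    devices         - number of devices on the account
    accepts_excess  - whether the user accepts an 'excess', ie policy
                      downgrades during busy periods
    """
    quote = BASE_PREMIUM
    quote += gb_per_month * 10.0          # tonnage loading
    quote += peak_hour_share * 150.0      # congestion-contribution loading
    if urban:
        quote *= 1.25                     # busy-cell location loading
    if devices >= 2:
        quote *= 0.90                     # multi-device discount
    if accepts_excess:
        quote *= 0.85                     # 'excess' discount
    return round(quote, 2)

# A heavy urban commuter vs a light rural home-worker:
print(annual_quote(5, 0.6, urban=True))               # -> 425.0
print(annual_quote(1, 0.1, accepts_excess=True))      # -> 191.25
```

The point isn't the specific numbers, but that the quote is an opaque function of many observable risk characteristics - exactly as a car insurer's premium is - and the inputs and weightings can be tuned without ever exposing them to the customer.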

I'm sure that there are other industries whose pricing schemes might be borrowed as well - energy, airlines, hotels and so on. Once again, it's about getting rid of the notion that "subscriptions" - especially monthly-based - are the only way to bill or market for telecoms services.

There's lots of nonsense being talked at the moment about "personalisation" of mobile data - picking from a menu of apps and other such implausibilities. *This* is an example of true personalisation - a unique price and policy, just for you, calculated by examining your individual "risk" characteristics based on network cost and contribution to congestion.