
Friday, July 29, 2011

What changes when "opened" vendor-specific technologies are better than "official" standards?

I've just been reading up on the history of PDF (Portable Document Format) on Wikipedia. A couple of lines to consider:


"PDF was originally a proprietary format controlled by Adobe, and was officially released as an open standard on July 1, 2008, and published by the International Organization for Standardization as ISO 32000-1:2008.......  granting a royalty-free rights for all patents owned by Adobe that are necessary to make, use, sell and distribute PDF compliant implementations"

"PDF's adoption in the early days of the format's history was slow. Adobe Acrobat, Adobe's suite for reading and creating PDF files, was not freely available..[....].... required longer download times over the slower modems common at the time; and rendering PDF files was slow on the less powerful machines of the day. Additionally, there were competing formats such as  [.....] Adobe soon started distributing its Acrobat Reader program at no cost, and continued supporting the original PDF, which eventually became the de facto standard for printable documents on the web"

Imagine, back in 1999, that you were a service provider, or the standardisation group for a number of SPs. And you'd just invented the concept of a "document conversion and viewing service". You'd created the .xyz document format, worked out the billing system and knew how much you wanted to charge to interconnect with the leading word processors and other applications of the day. You were going to sell monthly subscriptions to end users, allowing them to read web documents.

Sounds silly now, doesn't it? PDF instead took document viewing/creation down the route of being an application (free reader and paid authoring tool), through to being a feature of some web browsers, to today's existence of PDF-ing something as a mere function on a menu, or right-click-save-as. Early attempts to do PDF-creation-as-a-service disappeared.

I often use PDF as an example of the difference between delivering value as a service or as merely a feature/function of something else. This is hugely relevant in voice, and features in the Future of Voice Masterclass discussions around voice-enabled applications.

But this has also got me thinking about the general case of large technology companies releasing an existing successful or de-facto-standard technology as a fully open one, especially where it is better than an "official" standard developed through the usual committee-and-politics process.

What is the impact of this? Why would that company open up that standard in the first place - how do they monetise it? What's the other strategic value? My thoughts are that it:
  • Needs to be based on something so widespread already (eg PDF), or something so superior, that it can gain firm and enduring traction, even though it has a proprietary heritage.
  • Weakens any related technology that is rigidly dependent on the official standard, and which can't flex to accommodate the superior now-open one. This might be deliberate or an accidental side-effect.
  • Allows the original company to retain a strong share of the necessary software, even though it's free. And it can add in extra features or capabilities that help them monetise it via different products. For example, you don't need Adobe Reader to view PDFs, but most people have it anyway - and it also allows various still-proprietary technologies to be displayed.
  • Gets more developers involved in using that standard
  • Helps to commoditise part of the value chain, shifting value (implicitly) elsewhere
There's probably some more, but I've only just started thinking about this.

Now, why does this matter in mobile?

Three things come to mind:

  • Skype's release of the SILK codec for VoIP
  • Google's release of WebRTC for browser-based communications, which also includes the iSAC codec it obtained with its GIPS acquisition
  • Apple's release of HLS (HTTP Live Streaming)
There's also Google's release of the WebM video format, and Real's Helix technology a few years ago, plus others from Microsoft and probably a variety of others. Others such as Jabber/XMPP [for IM interoperability] have started life as open-source and then been adopted by large companies like Google and Cisco. Many of these are around audio and video, for which it's necessary to have a good population of viewers/clients in the field to avoid chicken-and-egg problems with content developers.

What I've been trying to work out is the impact of all these new standards (or drafts) on "official" alternatives that are baked-in to some wireless network infrastructure offerings and standards.

So for example, quite a number of people seem to believe that SILK is better than the AMR-WB codec, which forms a core part of VoLTE for delivering telephony on LTE. Given that VoLTE is less flexible than various other OTT-style voice platforms, in terms of creating "non-telephony" voice applications, this might have a serious long-term strategic impact on the overall voice marketplace. Coupled with smart use of the ex-GIPS Google acoustic toolkit, this could mean that OTT-style VoIP on LTE might actually have better performance than the official, QoS-integrated, IMS-enabled version, at least in certain circumstances.

Apple HLS is another teaser. Along with a couple of other web-based streaming protocols, this is an "adaptive rate" video format that can vary the quality/bandwidth used based on prevailing realtime network throughput. In other words, it watches network congestion and cleverly self-adjusts the bitrate to minimise delays and stalls from buffering. The upshot is that it kills quite a lot of the touted benefits of so-called "transparent video optimisation" in the operator's network, not least because HLS is (indirectly) under the control and visibility of the video publisher.
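To make that self-adjustment concrete, here's a minimal sketch in Python - purely illustrative, not the HLS spec: the bitrate ladder and safety margin are invented, and real players use far more elaborate heuristics - of a client picking the highest-quality stream variant that fits within its measured throughput:

```python
# Hypothetical bitrate ladder for one video, in kbit/s. Real HLS streams
# advertise their variants in a "master playlist" rather than a list like this.
VARIANTS_KBPS = [200, 600, 1200, 2500]

def pick_variant(measured_kbps, margin=0.8):
    """Pick the highest-bitrate variant that fits within a safety margin
    of recently measured throughput; fall back to the lowest otherwise."""
    budget = measured_kbps * margin
    fitting = [v for v in VARIANTS_KBPS if v <= budget]
    return max(fitting) if fitting else min(VARIANTS_KBPS)

# As the cell gets congested, the player steps itself down -
# no in-network optimisation box required:
for throughput in (3000, 1400, 500):
    print(throughput, "kbit/s measured ->", pick_variant(throughput), "kbit/s stream")
```

The point is that the adaptation loop lives at the edges (publisher-encoded variants, client-side selection), which is exactly why it pre-empts much of what "transparent optimisation" in the network claims to do.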

WebRTC and in-browser communications is probably the most direct analogy to PDF. Potentially, it turns voice (and that's voice generally, not just "telephony" as an application) into a function, rather than a service. Now clearly there may need to be other services at the back end for certain use cases (eg interconnect with the PSTN), but it has the potential to completely disrupt parts of the communications infrastructure and operator business model - because it doesn't need infrastructure. It does the whole thing "in the cloud" - not as a dedicated technology like Skype, but simply as an integral part of the web.

The open question is why Apple, Google and Skype are doing this. Apple is probably the easiest - HLS seems to be part of its anti-Adobe crusade, plus it helps it to perpetuate iTunes and potentially use it to sell to non-Apple devices. Google and Skype might be trying to run a "codec war" with each other with iSAC vs. SILK (why? I'm not sure yet), and might just take out AMR-WB (and by extension, VoLTE) as collateral damage.

This is an area I want to dig into more deeply - and please paste comments and theories here to support / attack / extend this argument, as it's still only part-formed in my mind.

Thursday, July 28, 2011

Deep inspection of Allot's mobile data trends report

Quite a few technology vendors put out interesting research reports on mobile data - the best-known probably being Cisco's VNI data and forecasts, which get cited by about half the rest of the industry.

However, a number of the smaller DPI and policy vendors (also WiFi specialists) also put out papers and reports, sometimes based on observed data from their own implementations, and sometimes from commissioned surveys.

(I've got absolutely no problem with this in principle - I've done various reports and papers for companies myself, although typically they've been for those wanting a "thought leadership" position associated with my often-contrarian opinions and analyses).

Clearly, all these reports are principally for the purpose of raising awareness and acting as marketing vehicles, by providing interesting newsworthy soundbites and grabbing the attention of network purchasing folk at operators. But it's worth scrutinising them for what they say, what they don't say, and the methodology/assumptions involved.

One of these companies is Allot Communications, which has issued a series of reports on mobile data bandwidth use, applications and so forth. It's just published its H1 2011 report, downloadable here.

There's some good stuff in there, but also plenty which raises questions.


  • The source of the data for the report is Allot's own installed base of network elements, spanning "networks representing more than 250 million subscribers". It's entirely unclear which operators these are. That's critical, because it determines if this is a representative sample of the world's 5 billion or so subscriptions, or if it's somehow skewed because of particular operators' or countries' specific local circumstances. It's also unclear how many of the 250m are active data users at all - my reading is that's the "potential" reach of those networks, not the current user base. This matters because if the survey is based on (say) developing-world operators, you'd expect to see a much higher overall growth rate for data than in mature markets.
  • The most glaring omission is any reference to the volume of traffic from laptops (3G dongles) versus smartphones or other devices. This is hugely important in terms of interpreting the other statistics: many dongles are sold as alternatives to fixed broadband, so you'd expect a broadly similar usage profile. It may well be that a large part of VoIP, P2P and mobile video streaming is consumed on PCs, which most operators cannot change easily - it's pretty hard to say that a USB modem service is "just like ADSL, except you can't use Skype. Or YouTube in HD" and remain competitive.
  • However, an important guide is filesharing. Generally, smartphones aren't significantly responsible for P2P traffic as far as I know. That suggests that the bulk of the 29% will come from PCs - and therefore, so will much of the video and web browsing, as almost nobody *just* uses a PC and dongle to swap files (as evidenced by Allot's chart on *fixed* broadband traffic). In other words, I expect that the contribution of PC mobile broadband is hugely skewing the overall study. I'm going to call out Allot and say it's trying to avoid this discussion - there is no mention of the word laptop, notebook, PC, modem or dongle in the whole document. The word "smartphone" appears 5 times.
  • There are no absolutes in terms of MB or GB, or in terms of actual numbers of unique users. So it's impossible to tell if growth is coming from more subscribers or more use per subscriber. I suspect that the average might actually be tending down as we see a shift from dongles to smartphones, and as late-adopters start getting smartphones.
  • It's unclear whether *all* data traffic gets funnelled through the Allot box in those networks. Does some get siphoned off "in front" of the box? (eg telco-hosted data services, BlackBerry traffic or corporate VPNs) Does some get injected in deeper in the network (eg with a CDN)? Where there are video compression / optimisation boxes or caches, is the data showing the compressed or uncompressed amounts of data? Are there any proprietary direct-tunnel or offload solutions involved that bypass the core network?
  • The report misuses the term "application" - video is not "an application" but a traffic type. An application (at a user level) can involve several different traffic types, for instance a web page with an embedded video advert or an audio plug-in.
  • Application-aware charging is something that most DPI vendors are huge fans of (unsurprisingly, as it typically needs DPI boxes), but which I'm a huge critic of. The study is based on analysis of vendors' stated policies or tariffs on the web, but it's unclear exactly what "application" means here - for example, many operators have very different charging for M2M data devices and applications than for smartphones or dongles. It's not clear from the Allot survey that the reported 32% of operators using app-aware charging are doing this with DPI, rather than (say) using a separate APN for BlackBerries or Facebook Zero or whatever. (It's worth noting that essentially *all* operators zero-rate internal data traffic used for device management)
  • Some of the definitions are pretty woolly. So-called VoIP traffic also includes video communications (eg Skype) and presumably also the associated IM and advertising data, although those are small at present. Given that a huge % of Skype calls are video-based (from laptops!) that's rather important. So we have the strange situation that some of the most-used mobile IM platforms - Skype, Facebook Chat and BlackBerry Messenger - don't appear in the chart at all, although BBM's absence could be attributed to a laptop-centric overall sample.
  • The word "signalling" doesn't appear at all
  • Neither does the word "encryption" or HTTPS - both of which are becoming increasingly important, and which are essentially opaque to most forms of DPI
Overall, there are some good data points there, but a "deep inspection" suggests that there's rather a lot that's not being said. In particular, the downplaying of the role of PC mobile broadband seems deliberate. Allot is also very keen to talk up "personalisation" in terms of "app-aware charging", but seems to have been pretty selective with the evidence supporting its assertions.

Edit: it's just struck me that this is a piece of analysis that is based on interpretation and inferences, rather than direct collaboration or discussion with the content provider. Just like DPI and application-aware networking, in other words. (My self-determined acceptable rate for "false positives" on a piece of this type is 10%. If I've got more than that amount wrong, I'll do a re-write or retraction. I've yet to see a DPI vendor give a false-positive threshold).

    Monday, July 11, 2011

    Beware of traffic statistics....

    One of the problems with Twitter is that it forces people to abbreviate important details. Compounded with multiple layers of interpretation, it's quite possible for information to get filtered & misrepresented.

    A case in point - I've just seen a tweet linking to this blog post about "traffic" from mobile and "non-computer" devices hitting 6% of the total in the US. The blog post originated from this Comscore survey which looks like it's generating some interesting and useful data.

    However, that data is very specifically about Web Page viewing traffic by device type.

    The blog post and tweet don't really make it clear (a) that this conflates two definitions of "traffic" - one is essentially web page hits, the other is a measure of the volume of data being sent across the network; or (b) that this covers just the web, not the whole Internet (ie it excludes most sorts of streaming, email, VoIP, presumably a lot of non-HTTP app traffic and so forth).

    If it had been a bit further in the future, there would be further confusion with HTML5 applications - what would constitute traffic / hits / consumption then?

    I'll bet that over the next few days, we see that data recycled to suggest that mobile devices are generating 6% of overall GB / TB / EB of data "tonnage" across the generalised Internet, probably linking the story to cellular capacity crunches, offload, spectrum etc etc.

    Incidentally, the red alarm for me when I spotted this was the lack of any mention of Internet-connected TVs and set-tops. If I had to guess which "non-computer" devices generated *bulk* data across the Internet at large, I would expect a relatively small number of Rokus and TiVos and similar TV-connected devices to consume huge volumes of video (HDTV = about 5GB per hour). There's also presumably a huge chunk of (non-web) Internet traffic which is server-to-server.

    Edit - in future, it'll get even more complex because of things like adaptive rate streaming, which divides videos into "chunks" a few seconds long, typically each with a unique URL. Is each one a web-page hit?

    Thursday, July 07, 2011

    UK phone-hacking scandal - does this go beyond an issue about journalism?

    Like everyone in the UK, I've been listening in horror to the recent reports that the News of the World's journalists have listened to the private voicemails not just of celebrities and politicians, but those of victims of crime and terrorism.

    I certainly think that those responsible must face the force of both the law and public opprobrium.

    But it's also made me think about the process they used. While dastardly, it doesn't sound that difficult - basically either guessing users' default voicemail PIN codes (0000 etc) or - allegedly - bribing somebody to divulge them.

    This leads me to three conclusions:

    • I can't believe that the NoTW journalists were the only ones who invented and used this technique. Firstly, other journalists are probably equally implicated, as there's a lot of job mobility in that industry. But secondly, this technique has most probably also been used in other countries, and in other contexts. I've got to believe that this goes beyond news, and probably extends to industrial espionage, financial insider-dealing and assorted other forms of snooping and spying.
    • The mobile operators (and by implication their vendors/integrators) appear to have been seriously remiss about defining good practice and standards for voicemail security. This does not just extend to allowing default passwords to remain in use indefinitely; it also involves the accessibility of PINs to customer service or other staff. It seems that these PINs are much more weakly locked-down than banks' ATM codes. I also find it hard to believe that UK operators are uniquely lax about this - presumably it's an equal issue around the world.
    • Lastly, this is another example of the "cloud" failing in its security. Just because this involved some "social engineering" does not make voicemail hacking any less scary than Sony's loss of customer details or other recent failures. Maybe there should be questions about whether the network is the right default place to store voicemails, rather than downloading them to handsets when connectivity is available.
    To my mind, the UK Information Commissioner needs to do a full review into how voicemail privacy and security is run in the telecoms industry. And other countries' authorities ought to be following suit. I think the unique intensity of the UK journalism / political sphere has broken the dam on this issue, but I'll be very surprised if one newspaper is the sole culprit when the rest of the story floods out.

    EDIT: This blog post (found easily on Google) discusses voicemail snooping and vulnerabilities, specifically as related to US mobile operators. Apparently many voicemail services just use Caller ID to identify when the inbound call is coming from a handset - so easily spoofed. They don't even use SIM-based authentication when calling from the phone itself.

    Friday, July 01, 2011

    Zero-rating, sender-pays, toll-free data... the next business model for mobile broadband?

    I've noticed a sudden upswing in discussion around the idea of "zero-rating" of mobile data traffic recently. This is where certain types of data - specific websites, apps, times of day, locations etc - do not count against the user's monthly data cap or prepaid quota. Clearly, zero-rating makes no sense if the user has a completely flat dataplan anyway.

    Cisco has a blog post about the idea here, Andrew Bud of mBlox has been talking a good game on "sender-pays data" for some time, a company called BoxTop presented on its idea of "toll-free apps" at eComm, it's cropped up in numerous discussions with operators recently - and it's something I've been talking about for years in reports such as Mobile Broadband Computing (Dec 2008) and Telco 2.0 Fixed & Mobile Broadband Business Models (Mar 2010).

    It's got the great advantage of being easy to understand - and there's often a zero-rate function built into existing billing systems anyway (eg to zero-rate internal "operational" data usage by the telcos for updates etc) so there isn't the headache of re-writing half the BSS/OSS stack that some other business models imply.

    But a major question remains in my mind. Yes, certain data will definitely be zero-rated to the end user, but will it be paid for by anyone else (ie an upstream party like an advertiser or app developer)? Or will the operator give away certain traffic "for free" as a marketing tool, or even as a way of (paradoxically) reducing its own costs?

    Cisco's article points out advertisers as low-hanging fruit, something I wrote about myself last year. This is also a discussion I've had with companies such as Yospace in the mobile video arena - although when I raised the notion of paying for bandwidth with an advertising agency at a recent mobile conference, the response was a look of bemusement.

    However, there are some extra complexities to the model to consider:

    - Excess usage and fraud risk / management. Would the upstream party be effectively signing a blank cheque for an unlimited amount of data use? I'm not sure how this works for 1-800 numbers, for example.
    - Offload awareness. How does the model work for traffic which either does - or could - go via WiFi or femtocell access? Especially in the case where the data is backhauled through the operator core (femtos, or some new flavours of WiFi integration), I'd be mightily annoyed as the content provider if I was charged the same fee for data transmission even though the operator's costs were 10x lower.
    - Is there any discrimination between data sent to busy cells during busy hour, vs. data sent during quiet periods?
    - What happens with CDNs? Firstly, how do you account for and bill stuff routed via Akamai to a particular service provider? Secondly what happens if content comes from an operator's cache?
    - Do you charge for the amount of raw data sent by the content company, or that which comes out of the compression/optimisation box in the operator's network and sent to the user?
    - How do you deal with uplink traffic? And if the other party is paying, can I bankrupt the content company by emailing them a terabyte of random numbers?
    - How do you sell and market this to media and content companies? How do you bill them? Do you need a completely new IT system to manage all of this?
    - If the upstream company is paying, will they expect a strict SLA in terms of coverage, throughput rates - and for evidence that the telco has delivered on its obligations?
    - Roaming will need to be considered - few content companies will want to pay $20,000 for delivering a movie downloaded by a user on holiday.
    - Various types of problems identifying unique traffic streams when all this runs inside an HTML5 browser. Web mashups generally will cause a problem, for example if a "free" website has a YouTube video embedded on a page. Who pays for the YouTube traffic?

    As a result, I expect that the short-term approach for zero-rating will be for those use cases where no money changes hands. Getting "cold hard cash" from this type of two-sided model is fraught with complexity. Instead, we'll see this type of zero-rating used mostly for promotional purposes: "1GB a month plus free zero-rated YouTube!", or for zero-rating the operator's own content and apps, especially where they are done "telco-OTT style". For example, I'd expect Orange to zero-rate traffic for its 50%-owned DailyMotion Internet video arm for some subscribers.

    We may also see some zero-rating done as a way of encouraging content providers to use local CDNs, especially if they are run by the operators themselves. It would make sense for an Australian provider to tell Netflix that any content delivered from local servers (and therefore saving the operator from needlessly shipping gigabytes of data across the Pacific) would be zero-rated to the end user. Obviously that would need to be set against radio and backhaul network load, and would probably be part of a wider partnership deal.

    There is also a promotional angle to giving away a certain amount of usage to non-data subscribers, in the hope that some will see the value and sign up for a data plan at a later date. Facebook Zero seems to fall into this camp at the moment.

    Maybe some companies would stump up for the equivalent of 1-800 numbers. Maybe an airline's app, or a bank's? But in reality, the amounts are likely to be so small unless the apps are really heavy and frequently used (maybe 1MB per user per month for an airline app?) that the cost of sale might outweigh the revenues.

    Overall, I expect to see zero-rating becoming more important in various guises. But I'm doubtful that it's as easy to monetise as some seem to think.

    Thursday, June 30, 2011

    Something to watch - voice comms and voice apps in the browser

    Tomorrow I'll be running the first of the Future of Voice Masterclasses I've been developing with Martin Geddes. We'll cover a broad array of topics around the value, business model and technology of voice communications, especially as we go beyond the basic telephony service we're so familiar with.

    I've spent the last couple of days at the wonderful eComm conference in San Francisco, listening to a challenging series of speakers cover everything from telecom regulation to wireless sensors to the psychology of motivation.

    One of the presentations that most surprised me was one from Voxeo. It referred to the potential for running voice (specifically VoIP) inside HTML5 browsers and apps, rather than through standalone applications like a Skype client.

    This made me wake up, as I've previously been following the whole native-apps/web-apps debate without really being swayed by the web side of the argument. Indeed, I went to a Mobile Monday London event recently which did actually debate the issue formally, with two opposing teams. I asked a question about whether web applications would be suitable for demanding apps like VoIP - and even the web advocates said no, that was out of scope.

    The Voxeo presentation covered WebRTC and RTCWeb standards. (RTC=realtime communications). In a nutshell, there's basically a lot of work going on to enhance HTML5 so that it can deal with various codecs and streaming protocols, as well as Javascript APIs to control media - for example with access to the microphone and speaker.

    But the really interesting things are that:
    - Signalling is all done with web protocols like HTTP and XMPP, not SIP.
    - Google has donated a ton of its GIPS code to the project, which does clever acoustic stuff like dealing with echo and packet loss.
    - Voxeo's platform called Phono.com has a variety of software functions which enable all this capability to be bundled into useful formats - such as initiating a call.
    - Using something called Phonegap, web developers can create web apps for Apple iOS which incorporate voice connections and calls natively.

    So one web page (or browser) could make a voice connection with another server or browser. These are not phone calls. These are additional voice applications, which could theoretically connect to the public phone network, but don't need to. They might be voice-enabled game sites, or social networks, or whatever, where voice just works.

    Think about that for a moment. Voice communications becomes a feature of a web page, the same way that tables, or style sheets, or embedded images are. Voice as a feature, not a service.

    Now all of this is still some way away from being fully practical for mainstream phones. But over the next few years, it seems likely this is going to get built into future browsers as part of the HTML5 standard. In other words, an HTML5-compliant mobile browser in 2013 may *have* to support this, although that may be dependent on whether a given device gives API access to all the relevant bits & pieces like microphone and speaker in the right way, without latency or other glitches.

    I'm still trying to get my head around the ramifications of this - but either way, it's deeply, deeply important and potentially represents more of an alternative for LTE Voice than even some OTT apps like Skype. Because this isn't voice as a separate OTT application - it's voice *in* the web itself.

    Wednesday, June 22, 2011

    Is mobile voice revenue being hugely overstated? And if so, what does that imply for VoLTE?

    In our upcoming Masterclasses on "The Future of Voice", Martin Geddes and I introduce the idea of "peak telephony". This is the point at which today's traditional telephony services, fixed or mobile, hit the top of the curve for both revenue and importance, after which price erosion and substitution by alternative applications means decline for normal operator voice.

    Various mobile operators have already reported declining voice ARPU - even allowing for distortions from users spreading their spend over multiple SIMs and accounts. This is not just a mature-market problem either - at the Femtocell Summit yesterday, I saw a presentation from a Malaysian operator forecasting an overall drop in mobile voice revenues over the next few years in that market.

    In order to stave off the inevitable, we believe that operators need to innovate in both technology and business model, looking beyond "plain old phone calls" to new ways of delivering and monetising voice services and functions.

    Mobile operators also have to deal with a second disruption, as LTE networks force a push towards VoIP. They need to absorb the costs of implementation - without a clear path to delivering more revenue to justify that investment. The guest post I wrote on Visionmobile about The Future of Voice reflected that VoLTE is merely "old telephony" reinvented to run on LTE, rather than a platform for enhanced "neo-phone" services that could significantly add value to operator voice business models.

    However, all this potentially pales into insignificance compared to a third possible disruption for mobile voice revenues: The Revenge of the Accountants.


    In a nutshell, the adoption of new international accounting standards may mean that users' repayment of handset subsidies has to be "unbundled" from the underlying service revenues. This applies most critically to postpaid users, who are given a "free" or heavily-discounted phone at the start of their contract. It is not uncommon for a $600 iPhone to be sold to a user for a headline price of $200, with the other $400 essentially recouped as part of the monthly service fees over two years.

    At the moment, the whole of the user's billed payments - let's say $75 a month - is recognised as service revenues, and then sliced up into voice / data / SMS in their financial reports. So maybe $40 is deemed to be voice, $15 is messaging, and $20 is data. The $400 handset subsidy gets buried in the accounts as a cost of sale, or subscriber acquisition cost.

    Now I'm not going to pretend to be an accountant or fully understand all the nuances here. I'm sure there are various wizards at some operators who can make the numbers "dance". But if I'm reading things right, the key thing to watch is Draft IAS (Intl. Accounting Standard) 18: Revenue in Relation to Bundled Sales  which forms part of the IFRS (International Financial Reporting Standards) approach to bean-counting.

    My original reading was that the subsidy ($400 in this case) was divided up and stripped out of the monthly revenues. So for a 24-month contract, this would mean that $16.67 each month was essentially a loan repayment, leaving the service component as $75 - $17 = $58 of "real ARPU".

    Actually, that's oversimplified. This document from Etisalat gives an accountant's view of IFRS and the treatment of handset subsidy, which actually involves the "fair value" of the standalone, SIM-free price of the handset - maybe $700, not $600.

    Applying that here, we take the "total consideration" of the contract as $200 (upfront handset payment) plus 24x$75 to yield $2000 overall spend over the lifetime of the contract. That's set against the value of the deliverables as $2500, including the $700 fair-value of the handset. That then translates out to recognising 700/2500*2000 = $560 for the handset purchase, and $1440 for the service, over the life of the contract.

    In other words, the allocated amount to operator services should be $60 a month, not $75. Which means that the voice portion is also reduced by 20%, from $40/month to $32. Obviously, the data element is also reduced.
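    To make the arithmetic concrete, here is a short sketch of the fair-value allocation, using only the illustrative figures from the example above ($200 upfront, $75/month for 24 months, $700 handset fair value - not real operator data):

```python
def allocate_revenue(upfront, monthly_fee, months, handset_fair_value):
    """Split the total contract consideration between handset and service,
    pro-rata to each element's standalone 'fair value' (IFRS-style treatment).
    Assumes the service's fair value equals the billed monthly fee."""
    total_consideration = upfront + monthly_fee * months          # 200 + 24*75 = 2000
    fair_value_total = handset_fair_value + monthly_fee * months  # 700 + 1800 = 2500
    handset_rev = total_consideration * handset_fair_value / fair_value_total
    service_rev = total_consideration - handset_rev
    return handset_rev, service_rev

handset, service = allocate_revenue(200, 75, 24, 700)
monthly_service = service / 24         # 1440 / 24 = 60.0 "real" service ARPU

# The deemed voice portion shrinks by the same 60/75 (ie 20%) ratio
voice = 40 * monthly_service / 75      # 40 * 0.8 = 32.0
print(handset, service, monthly_service, voice)  # 560.0 1440.0 60.0 32.0
```

    The same pro-rata logic would apply to the messaging and data components of the bundle.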

    I'm trying to work out where we are in the accounting standards draft / ratification cycle. This document from PWC seems to suggest that the regulations are likely to come in from 2014/2015, with a need to start preparing parallel accounts as early as next year. There are also various standards bodies (eg the US FASB, working with the international IASB) that have their own variations, detailed rules and so forth. PWC references the proposal "Revenue from contracts with customers", published in the US in June 2010. To be honest though, the details are beside the point for this post. The key thing is that real mobile voice revenues (and data as well) are almost unarguably being overstated because of the blurring effect of handset subsidies. Exactly how, when and where the financial reports change doesn't alter the fundamentals.

    It could well be argued that these changes should be applied retrospectively anyway, so maybe it all just nets out so that the peak of peak telephony was simply lower, but the shape of the curve remained the same. And that's absolutely fair if we look at the past - maybe we just say "Oh, the market was worth $600bn, not $700bn, because we didn't split out the $100bn we spent on handsets" and leave things stand. But going forward, if we are specifically going to look at business cases for new voice-related capex, this all starts to matter much more - especially if we consider the relative business case against keeping voice on circuit-switched networks (2G, 3G) instead of migrating it to VoIP on 4G.

    There is also a separate discussion to be had about service bundling, and whether we should keep thinking of data services as something added "on top" of a voice and text plan, as many operators do today. Especially with LTE, there is a strong argument to say we should have a general "IP line access" fee, on top of which services - telephony, SMS, Internet access, content etc - are layered. So maybe the $75 monthly fee should be allocated as $15 handset, $15 IP access, $24 IP telephony, $9 (IP) SMS and $12 Internet access.
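    As a sanity check, that hypothetical "IP line access" split can be expressed as a simple allocation - all figures are the illustrative ones from the paragraph above, not a real tariff:

```python
# Hypothetical re-allocation of a $75/month bundle under an
# "IP line access" model (illustrative figures only)
bundle = {
    "handset repayment": 15,
    "IP line access": 15,
    "IP telephony": 24,
    "IP SMS": 9,
    "Internet access": 12,
}

total = sum(bundle.values())                 # should reconstruct the $75 fee
voice_share = bundle["IP telephony"] / total # telephony's share of the bundle
print(total, voice_share)  # 75 0.32
```

    Under this framing, telephony becomes just one layered service among several, rather than the anchor that the rest of the bundle hangs off.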

    That's a topic for another time, although I'd previously written about this type of approach here.

    Either way, I think that today's mobile voice revenues are significantly overstated - perhaps by as much as 30-40%. As I said before, I'm not an accountant, but I think it is very important to recognise that some of our cherished data-points, which we're using to make investment decisions, are much more fluid and badly-defined than we might think.

    EDIT - Another thought: it will be interesting to see if the accounting treatment of VoLTE (which clearly needs an IP connection and therefore 'data service' in order to work), will be different to that of either circuit-switched fallback or "Velcro-like" dual radio solutions. There is an argument that being able to continue selling separate voice plans on a voice-only network will be much easier for auditors to agree on, rather than the everything-over-IP approach.

    One last plug for our Masterclasses on the Future of Voice (which I promise won't have more than a few minutes on accountancy): The first one is next week in Santa Clara, with the second one in London on July 14th. Details are at http://www.futureofcomms.com/ and booking is either via Amiando (linked from that site), or direct from me or Martin. (information AT disruptive-analysis DOT com )

    Friday, June 17, 2011

    Key takeouts from the Mobile Data Offload conference in Berlin - and why we need to keep some seams


    I’ve just spent a couple of days at the first offload-specific conference I’ve come across, organised by IIR. It’s been useful, giving me some good new contacts and allowing me to reconnect with some existing clients and friends. This blog post is just a summary of some of my take-outs and reflections – some people may already have seen some tweets I posted with the #mobileoffload tag.

    One thing that seems to be coming through quite strongly is that WiFi offload is currently taking the high ground in comparison to femtocells, especially in the residential marketplace. Conversely, there seems to be growing momentum for outdoor “small cells” of various types compared to WiFi hotzones. 

    Neither of these is absolute, but both fit into a general narrative that:
    • Outdoor coverage and capacity is “classical” mobile network territory in terms of personnel, planning and operator processes. Consequently, the idea of small cells fits with the notion of “more of the same, but smaller and cheaper and easier to site and manage”.
    • Indoor data offload tends to be driven by consumers’ familiarity with – and often preference for – WiFi. The heaviest mobile data users are almost certainly a subset of those that are happy with the setup and operation of WiFi in their homes. Operators' ability to deal with WiFi's vagaries through device-side software or inbuilt standards is also improving.
    The conference focused heavily on this last point - what I'm calling the “telco-isation” of WiFi. There are various standards and specifications being worked on by the WiFi Alliance, Wireless Broadband Alliance, 3GPP, GSMA and others. There’s an alphabet-soup of acronyms here – 802.1x, 802.11u, Hotspot 2.0, ANDSF, I-WLAN, WISPr and plenty of others. There was lots of talk of EAP-SIM authentication and so-called "seamless" mobility. There are various approaches to dealing with the operators' own and partnered WiFi accesses, especially around extensions of the mobile roaming mechanisms.

    Some of this is very important and is being done intelligently and effectively. The idea of improved "network discovery", so that you can tell more about a WiFi access point than its SSID, makes a lot of sense. It is also important in some cases for operators to be able to "steer" users to particular APs or SSIDs, and collect information and maybe enforce certain policies. In some cases, SIM-based authentication can make sense as well - an area where my opinion has shifted a bit recently.

    However – and this is a big however – I think there are some serious issues. In my view, the industry is in danger of making the same mistakes it made with UMA about 5 years ago. The giveaway is in this clichéd word "seamless". I spent a lot of time criticising this aspect of UMA, and I can see myself having the same conversations all over again. Seamlessness is not the utopian ideal, just as it wasn't in 2006.

    In a nutshell - sometimes, and for some use-cases - automated and seamless (ie zero user-touch) connection to WiFi is absolutely desirable, ideally with session continuity and all that other fine stuff. But, critically, there are also various use cases where seams are important, and need to be made visible to the user and/or applications running on the device. The tricky part is designing the end-to-end system, and especially the user interface on the connection manager, to cope with both sets of scenarios.

    Seams might be "messy", but they are appropriate for certain contexts. To reiterate the analogy I made in my presentation, we don't all go around wearing Lycra catsuits. Our clothes still have seams for good reasons, and the same is true of networks. Ultimately a seam is a border, at which things change - speed, latency, security, cost, ownership, policy, power consumption and many other parameters. The idea that the border should always be crossed with the user kept unawares risks a whole host of problems.

    There are various angles here:
    • The user will often wish to connect to WiFi networks that are not "approved" or linked to the operator's network or WISP partnerships. Most obviously, the user will want to use home broadband WiFi, private enterprise networks (often behind a firewall and with the corporate network's own security and authentication) and free public WiFi where it is available. The operator-driven WiFi software must not get in the way of this type of scenario - and neither should it try to tunnel back via the operator core in these cases.
    • The user may have access to multiple WiFi networks in a given location. The operator-preferred one may not be the best - perhaps because it costs more, perhaps because it is slower, perhaps because policies are enforced that the user would rather were not. Auto-connecting anyway may be an undesirable outcome.
    • The same WiFi network may be available locally on better terms. I'd be annoyed if my phone automatically logged on to a hotel WiFi (at a cost or lower performance), when the conference organiser was giving out free pass-codes. (Not at the useless Kempinski in Berlin though, obviously - no inclusive delegate WiFi at the offload conference, ironically).
    • Some applications may "come with WiFi" themselves. Skype partners with Boingo, for instance. A presentation by Sky's recently-acquired WiFi network The Cloud suggests that Sky's future video apps and content will be tightly coupled to its own WiFi footprint. If I am watching Sky HD movies on my phone in a public place, I'll want to connect to its own optimised connectivity (apparently guaranteeing 1MBit/s per user) rather than someone else's that is heavily contended and which routes traffic through a video-compression box.
    More generally, this fits with my concern that the telco-isation of WiFi is starting to look quite Machiavellian and unrealistic. Speaking to people with a view on the evolution of standards, some operators are apparently attempting to own and control WiFi on smartphones outright. While some level of improved control is understandable, we should be wary of the idea that an operator might control the overall WiFi connectivity on a device. 

    There are plenty of use-cases for WiFi which are not service-provider centric but "private"- notably enterprise connectivity, or connected-home technologies such as DLNA. If someone sends photos from their phone to their TV or home media server, that is not a "service", but merely data transferred locally over the individual's own network. You wouldn't expect an operator to be involved if you just moved the memory card, which is functionally identical to local WiFi use.

    Ultimately, WiFi is a form of wireless LAN - a form of Ethernet. In general, companies that don't understand LANs are not the right ones to get wireless versions working properly. Most Ethernet use is private, and WLAN is no different.

    Some other points from the event:
    • Offloading signalling does not appear to be well-understood yet, but was at least a topic of discussion
    • There wasn't as much talk about on-device client software for offload control/management as I'd expected, although there were companies such as Roke and Onavo in attendance
    • The session on Net Neutrality was lively, but didn't really touch on offload that much. The AT&T speaker was very vocal against the hard neutrality laws being mooted in the Netherlands, but conspicuously silent on how non-neutrality might impact its own femtocell traffic when carried over competing fixed/cable ISPs' broadband.
    • Some very good sessions on mobile broadband economics - especially around the mix of data from different devices, and the fact that for most operators, only a few cells really face congestion at the moment. 
    • It's worth bearing in mind that for those MNOs selling USB dongles as an alternative to fixed broadband, their customers won't have home DSL/cable to which to attach a femto, or WiFi to use for offload....
    • Offloading traffic from a MiFi-style personal hotspot (or smartphone tethering) clearly makes no sense
    • There are plenty of complex connection-management scenarios to deal with around offload, for instance selecting between LTE macrocell, HSPA femtocell and various WiFi connections, especially with multi-radio capable devices and multi-tasking where certain apps have different needs.
    • LTE offload is going to get tricky around managing VoIP, whether it's operator-based or third party.
    • Use of WiFi when travelling internationally is going to be an important part of operator strategy. I wouldn't be surprised to see aggregate WiFi+3G roaming statistics being used to convince regulators that "average" data roaming prices are falling fast, even though the cellular portion remains very high-cost.

    One other thing I'm becoming aware of: there’s quite a lot of smoke & mirrors about WiFi offload stats. In particular, a lot of the published numbers for “% of data offloaded by operator X” need to be viewed through a lens of scepticism. 

    In my view, WiFi usage on smartphones falls into three main categories:
    • Private WiFi use, typically either in the home or office but also elsewhere. As discussed above, this is WiFi access that is used with the mindset of “having a small computer connected to a LAN or broadband” – ie those applications and content that might have been used even without having a cellular data plan anyway. One way to think about it is the type of use that you might see with an iPod Touch – which clearly isn’t offload WiFi traffic as it doesn’t have a 3G modem.
    • Offload WiFi – this is the traffic which is directly moving from 3G/4G connectivity over to the WiFi access, directly substituted. This is the number that is the most important in terms of the economics – traffic which would otherwise have gone over the macro-cellular infrastructure.
    • Elastic WiFi – this is linked to offloaded traffic, but represents the extra amount that users will tend to consume given faster speeds or (perceived) lower price. In other words, this is incremental and not substitutional, even in mobility-centric use cases (eg watching video in a café or airport) 
    I suspect that we'll see a lot of over-inflated WiFi offload business cases based on spurious calculations that don't take this into account.
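    To see how a headline "% offloaded" figure can overstate the economics, here is a back-of-envelope sketch. All the traffic volumes are hypothetical, purely to illustrate the three categories above:

```python
# Hypothetical monthly smartphone traffic, in MB
private_wifi = 2100  # home/office use that would have happened anyway
offload_wifi = 500   # traffic genuinely diverted from 3G/4G
elastic_wifi = 800   # extra use induced by speed/price, not substituted
cellular = 700       # traffic that stayed on the macro network

total_wifi = private_wifi + offload_wifi + elastic_wifi

# Headline claim: share of all traffic carried over WiFi
headline_pct = total_wifi / (total_wifi + cellular) * 100

# Economically relevant: traffic actually removed from the macro network
true_offload_pct = offload_wifi / (offload_wifi + cellular) * 100

print(round(headline_pct), round(true_offload_pct))  # 83 42
```

    With these (made-up) numbers, an operator could claim "over 80% of traffic is on WiFi" while the macro network has only genuinely shed around 40% of its load - which is why business cases built on the headline figure are suspect.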

    I'll be uploading my presentation to Scribd and Slideshare soon & will post accordingly.

    Tuesday, June 07, 2011

    A classic example of app complexity that network DPI would find hard to resolve

    Today seems to be the day for me to needle some of my main targets. This morning I had another shot at the hapless RCS service, and now it's the turn of my biggest network-side punchbag, application-based charging.

    I've just been given a classic example of why this is going to be nigh-on impossible to ever get right.

    In theory, the network should be able to pick out the fact that I'm using Google Maps. I'm sure it's got a pretty predictable "signature" that the average DPI can spot.

    But what it probably can't spot is *why* there is Google Maps traffic being used. I've just downloaded the latest version of the Vodafone "MyVodafone" app for my iPhone. It's pretty useful, with a good dashboard feature showing how much data I've used against my cap and so on. This version also comes with a WiFi logon feature.

    The sign-up for this has a warning message, telling you that in order to find the nearest WiFi access point, the app uses (guess what) Google Maps. And that I am liable for the data charges incurred in doing so. Now I'm guessing that this is done for a good reason - most probably speed and expediency of getting the thing released, plus I also expect it doesn't use *that* much data in the big scheme of things.

    In theory, Vodafone ought to have set up some sort of rule in its network to obviate this, and zero-rate its own offload-location data consumption, especially as its reduced macro network load makes it the main beneficiary. But that would have needed to somehow check that the offload app was indeed the "user" of Google Maps, rather than just me trying to find my way around normally. And that's rather hard, without some sort of agent on the device watching what's going on and trying to decode which GMaps packets are "native" in the mapping client, and which are used via the local API by specific apps.

    This is precisely the sort of hard and complex situation that I have in mind when I say that app-specific charging is going to be a nightmare. Imagine for a moment that Vodafone had a "menu-driven" non-neutral pricing model, where I got charged £3 a month for using the Google Maps app. I'd be rightly irritated if *I* didn't use it, but the operator did, through its own software, charging me for the privilege anyway. I don't expect the regulator would be too happy either.

    On another note, let's see how the Vodafone WiFi app manages to coexist with my other WiFi finder (BT Fon) on my handset. I don't think either is auto-logon, but I can imagine some interesting situations if they are, as both use BT Openzone. Will I be able to tell which "virtual" WISP I've logged into? 

    Creating user engagement in RCS and other communications services

    I've been having many more discussions recently about my vehement views on RCS and why I think it is (still) destined for failure. In short, the current hoopla about various operators and vendors doing a big push to "make it happen" is not enough.

    Yes, it helps that DT, FT, the US & Korean operators and (belatedly) Vodafone seem to be getting their marketing machines & spin-merchants lined up. Yes, it helps that RCS-e ditches the early RCS presence function which normally kills batteries and generates large amounts of extra signalling traffic. Yes it helps that Android is "malleable" so operators can get RCS-e clients onto some future handsets without too much pain. Yes, Orange and others are reportedly trying to strong-arm handset vendors into implementing it. Yes, executives from DT and other operators are name-checking it wherever possible on the conference circuit. Yes, I've even heard the word "freemium" mentioned in the same sentence as RCS.

    All good stuff. But falling under the banner of "necessary but not sufficient".

    These improvements still don't mean that RCS-e somehow overcomes the other dozen or so problems I identified last year in my report on its near-inevitable demise. I predicted it would launch, splutter along for a bit, and then fail.

    It's notable that when I have discussions with operators or vendors about what the problems really are, the one theme that seems to resonate is that of user engagement. How do you encourage people to actually use and exploit RCS rather than the myriad of other messaging and sharing and social-networking tools at their disposal? What makes them "invite friends" and others to accept those invitations? What makes them "invest" in the service?

    Top of the list of things that versions of RCS I've seen *don't* do is permit the little snippets of user interaction that make alternatives like Facebook or Twitter or BlackBerry BBM so engaging.

    The most obvious is "Like". On Facebook, you get instant validation that you've posted a cool picture, added a fun status update, attended a great event, listened to a great music track or whatever. It's a single click, but it communicates involvement, friendship, respect, attention, humour and all those other human qualities. It's a way to say "No, I haven't forgotten you, I am reading your stuff but don't have time to write a full message". It's like smiling at a friend, rubbing your partner's back, winking at someone in a crowd.

    "Retweet" is similar. As are a whole host of "Vote up/down", "+1", "Recommend", "Share" and so forth.

    These create user involvement and engagement, with a simple HTML link. They also tend to be extensible - as seen by the amount of Facebook Connect logos around the web.

    Maybe a future version of RCS - or perhaps some operator-specific variants - will do something similar. Because if not, the services are likely to be very "dry".

    There's another form of user interaction for messaging I've just become aware of in this context as well, triggered by this article about Apple's new iMessage service. It has something that most PC-based IM software has had for years, as well as BlackBerry Messenger - "typing indication". That's the little animation on a Skype or Yahoo IM window that shows that the other person is composing a reply. It will be interesting to see if any RCS clients can do the same thing - some of the specification documentation suggests it should.

    The problem is that in future, communications users will have a very low tolerance of "clunkiness" - and they will also expect features to be upgraded like today's best apps, on a monthly or quarterly basis. There will also need to be a mechanism for operators to test different types of apps on certain groups of *live* customers. Google and Facebook can change their web page layouts, or app behaviours, for certain groups of their users, to see what works best. In my experience, it's pretty rare for telcos to do comparison-testing of different versions of services on their "production" customer base.

    Overall, I still think that RCS is going to face insurmountable challenges - especially with newcomers like iMessage and whatever Google does with adding communications services into the browser. I think there will be a few niche usage cases - and perhaps specific countries where local conditions are unique enough to support it. But unless they get the user experience not just "good", but "fun" and "engaging" as well, it will struggle to gain traction.

    Monday, June 06, 2011

    Can telcos compete in an era of fashion-driven services?

    NEW: Download the Future of Voice Masterclass flyer here

    We're all used to the descriptions of the mobile phone business being (to some extent) fashion-driven. Just like clothes, some things go in and out of style - touchscreens, clamshells, big, small, black, coloured and so on. We've also heard plenty of handset brands described as cool / uncool - obviously with variations around the world.

    I remember a few years ago, for example, SonyEricsson was very much an edgier and slightly counter-cultural brand in the UK, back in the pre-iPhone / Android era. I remember being at a gig and noticing which phones were being lofted overhead to take photos or videos of the band - S-E's were dominant among the younger fans.

    So we see device brands - SonyEricsson, Apple, Nokia, HTC, Motorola and so forth - compared with cars (Audi, BMW, Ford, Nissan or whatever) or clothes (Ted Baker, Calvin Klein, Marks & Spencer, Armani and so on).

    Up to a point, that's been mostly irrelevant to the mobile operators - barring the need to subsidise the more expensive ones, but that's usually (pre-iPhone) meant particular models rather than the whole brand. Sure, they've been able to exploit exclusive deals or other arrangements - but I don't think they've particularly cared if LG is seen as the equivalent of Mercedes or Citroen or Hyundai.

    But now there is another issue - one already seen in the fixed-Internet world.

    *Services* are now being driven by fashion, as well as hardware. With the coming of smartphones and apps - and fast access to the public Internet, with new ways of creating "viral" adoption among communities - we have seen the rapid rise (and often fall) of novel ways to communicate. Facebook, Twitter, WhatsApp, BBM, Skype, Viber, LinkedIn and so on have grown in part because of adoption within groups. They can be tribal, cliquey, ephemeral - used for a season and then discarded (remember MySpace, Bebo, MSN?). Or they can be regional (Hyves, Friendster, Cyworld, VKontakte, Orkut, QQ etc).

    This is much more problematic for telcos, as operators are used to egalitarian, very long-lived service offerings that don't vary much in popularity, awareness or coolness. This has been because in the past, there were very few communications services - phone calls, SMS, email, fax. All were essentially "designed by committee" and so none could possibly be thought of as cool or fashionable - they just "were there".

    Sure, there are parts of the communications-using population which aren't particularly fashion-driven, but fewer than you might think. Plenty of CEOs want to connect their latest, shiniest i-Toy to the corporate network. Plenty of businesspeople were using BBM long before the teenagers got hold of their 'Berries. Even 10 years ago, people in finance were sending messages (and jokes) via the proprietary Bloomberg messaging system rather than corporate email.

    But in any case, two important groups - people with money, and younger people - often *are* fashion-driven, or at least status-driven.

    Now there's an important distinction to draw when equating phones and services with non-tech brands such as cars and clothes. Phones are similar to cars in that most people only have one, or maybe two, keeping them for a considerable time. But people have wardrobes full of clothes, some new, some old, some cool, some utilitarian - and buy new ones regularly. They might buy the trendiest new shirt or coat for socialising, or something cheap and comfortable to chill out with on the sofa.

    I think the PSTN and SMS and basic mobile telephony are going sofa-wards. They're not going to be made obsolete, but relegated to the status of lowest common denominator clothes essentials that everyone has. Underwear that gets worn when nobody else is likely to see it. Sweat-pants for doing the gardening. Comfy shoes for a long-haul flight. Stuff that gets worn when you don't care about being fashionable.

    It's quite common even for the coolest of hipsters to buy their socks from Marks & Spencer. Plenty of people pair one item unique and expensive, with another which is totally generic. Prada + Primark. Zegna + Zara. Missoni + M&S. Tiffany + TopShop. (Not sure of the US or China or India equivalents here...)

    The question is whether - and how - telcos could either turn into Primark equivalents, or develop platforms that could form the basis of continually-churning fashion-driven services. Primark, for those unaware, is hugely popular and quite profitable, even for low-end clothes. Its shop on London's Oxford Street is always swarming with people buying basic, cheap, almost-disposable clothes which nevertheless have an essence of coolness. Like Zara, it's been radically engineered to be responsive, with great back-office supply chain management. Conversely, other higher-end clothes brands have developed the annual cycles of fashion shows and manage to reinvent themselves regularly - and there are also fashion houses with multiple brands.

    Some operators - notably DoCoMo in Japan - have long been pitching "this season's new services", but that's still not common given the lengthy cycle times for development and standardisation.

    It's really not obvious to me how standards-based telecoms offerings can ever again play at the top end of communications services. Even if industry initiatives like RCS succeed, I suspect that the best they can aim for is being the next universal telecoms equivalent of a pack of £6-for-three Primark Y-fronts, worn underneath a pair of £300 designer/developer jeans. And to get to where Primark is today, they will still need prime retail space, a very hard-working team and flawless back-office functions.

    NEW: Download the Future of Voice Masterclass flyer here
    For Santa Clara event tickets on June 30th, book here  
    For London tickets for July 14th, please contact me at information AT disruptive-analysis DOT com

    Wednesday, June 01, 2011

    Inspecting the inspectors & throttlers - reverse engineering network policy

    I first wrote three years ago about the likelihood of various companies or other organisations starting to "reverse engineer" operators' traffic management policies.

    Indeed, one of the common features of most regulators' pronouncements on more "flexible" regimes for Net Neutrality is that any traffic management must be absolutely transparent to the user. Clearly, that transparency will need to be tested, either by regulators, consumer advocacy organisations or application providers.

    So a hat-tip to Azi Ronen's great blog on Traffic Management for spotting this research paper from the US state of Georgia, which does some great analysis of US ISPs' throttling activities. A whole range of other tools are also listed on this page: http://rk.posterous.com/tools-for-testing-your-internet-connection

    Over time, I'm expecting to see much more granular approaches to this - for example tracking application-specific policies or other rules and controls. I've seen some analysis by Epitiro presented at a conference, which showed a certain ISP degrading IPsec traffic at certain times each day. It seems likely that many others will join this trend as well - the EFF has certainly been doing it for a while, for example. 

    I also expect that Google, Apple, Netflix or others are collecting a huge amount of their own data and measurements about application performance metrics from smartphones and other devices. They probably have very good views on what looks like "natural" variation in congestion and throughput, versus that which looks "unnatural". As is the case with the Georgia study, any "messing about" with the IP stream will stick out like a sore thumb - as will any background optimisation, content adaptation and so forth.

    In other words, operators' network policies are likely to be transparent - whether they want it or not.

    What will be interesting is what happens in circumstances in which the network's performance appears to have been modified - in direct contradiction to an operator's marketing campaigns or the local laws. It will be unsurprising if we see some prosecutions for mis-selling or outright fraud in some cases.



    Tuesday, May 24, 2011

    Gaming the new peering arrangements - will non-neutrality really work?

    I'm hearing a lot of discussion at the moment about peering being the new battleground for "non-neutrality" of networks, and especially a mechanism for operators to try and monetise traffic from Internet/video providers.

    In theory, the argument goes along the lines of "symmetrical peering is fine, but if there's heavy asymmetry, eg mass downloads outweighing uploads, then we've got a right to renegotiate our deals with our Internet peers".

    Surely, this is just going to result in Netflix, Google, BBC and others developing new services that are predominantly upload-centric, to try and redress the balance of traffic? Rather than either reducing downloads, or actually paying cold hard cash?

    So instead of combatting traffic load, this surely just encourages Google to offer realtime augmented reality with video-processing in the cloud, clogging uplink as well as downlink? Or for the BBC to archive your old VHS tapes somehow? Or for Apple to push an audio "black box" that records all your ambient sounds during the day & streams them to a server? Or for all of them to start using peer-to-peer techniques for content distribution? (all of this encrypted and hidden from DPI, obviously).

This is just an idle musing for now, and I've never seen the details of a peering arrangement. I guess there could be some way of tying it to actual congestion (eg traffic from YouTube that gets downloaded in a busy cell / busy hour is accounted for differently to that during quiet periods). But in that case, there would need to be some serious OSS/BSS integration work to actually be able to *prove* this.
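To make the busy-hour idea concrete, here is a purely hypothetical sketch of how peered traffic might be weighted by time of day in a settlement calculation. The hours, multipliers and numbers are all invented for illustration - no real peering agreement works this way as far as I know.

```python
from datetime import datetime

# Hypothetical scheme: bytes delivered in the busy hour "cost" more in a
# settlement calculation than bytes delivered off-peak. All thresholds
# and weights here are invented for the sketch.
BUSY_HOURS = range(18, 23)   # 6pm-11pm, assumed peak
BUSY_WEIGHT = 3.0            # busy-hour bytes count triple
QUIET_WEIGHT = 1.0

def weighted_bytes(transfers):
    """transfers: list of (timestamp, bytes_downloaded) tuples."""
    total = 0.0
    for ts, nbytes in transfers:
        weight = BUSY_WEIGHT if ts.hour in BUSY_HOURS else QUIET_WEIGHT
        total += weight * nbytes
    return total

# Under this scheme, 1GB downloaded at 8pm is accounted for the same as
# 3GB downloaded at 3am.
peak = weighted_bytes([(datetime(2011, 5, 24, 20, 0), 1_000_000_000)])
offpeak = weighted_bytes([(datetime(2011, 5, 24, 3, 0), 3_000_000_000)])
assert peak == offpeak
```

Of course, the hard part isn't the arithmetic - it's proving to a peer (or a regulator) that the congestion was real, which is where the OSS/BSS integration pain comes in.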

    Monday, May 23, 2011

    Future of Voice Masterclasses: June 30th Santa Clara, July 14th London

    NEW: Download the Future of Voice flyer here

    Regular readers of this blog and my Twitter stream ( @disruptivedean ) will have noticed an increase in focus on voice and communications services and applications in the last couple of months. I've also written a guest post for Visionmobile on The Future of Voice, and last week spoke at the LTE World Summit in Amsterdam about Voice, VoLTE and the Future of Personal Communications. (A copy of my presentation can be downloaded here)

So, I am pleased to formally announce the first Future of Voice masterclass, which I am launching in collaboration with Martin Geddes. The event will be held at Intel's headquarters in Santa Clara on 30th June, the day after the eComm conference (which I would exhort people to make every effort to attend - I'm speaking about telcos' own-brand OTT services).

    To book a space for the first event on June 30th, click here

    The event is not going to be a one-way bombardment of slides, but will instead be a mix of presentation, collaborative group exercises and realtime consultation with Martin & myself. We are aiming for 20-25 people with a diverse mix of operators, vendors, Internet companies and other market stakeholders.

    In terms of themes, we will be covering new business models and delivery technologies for voice, including:
    • The difference between "voice" and "telephony" - and what that means for telco and Internet companies
    • Delivering voice on LTE and "direct-to-cloud" voice on fixed networks
    • Understanding "voice as a service" and the new fragmented voice landscape - particularly after the Microsoft/Skype deal.
    • How to make money from free voice and messaging with B2C unified comms
    For details of the event, please contact me at information AT disruptive-analysis DOT com, and I can send you a copy of the event flyer, and pricing information.

If you can't make it to California, then there is an event in London on 14th July (venue TBC), and a US East Coast event, probably the week after Labor Day.

    To book a space for the second event in London on July 14th, click here

    I look forward to seeing some of you there!

    Download the Future of Voice flyer here


    Friday, May 20, 2011

    Thoughts from the LTE World Summit


Earlier this week, I spent two days at the LTE World Summit in Amsterdam. More than 2000 great & good members of the telecom industry, including a ton of operators and most of the major vendors. Multiple streams of presentations, a decent-sized exhibition show floor and “big conference” production rather than a small meeting room in a hotel. I was hosting an analyst roundtable on Voice, VoLTE and the Future of Communications, and also giving a 30-minute presentation on similar topics.

Those of you following my Twitter stream (@disruptivedean) will have seen a fair amount of ongoing commentary, but I thought a few issues were worth drilling into here. I'll be writing a separate post about the Future of Voice, and my upcoming workshops with Martin Geddes, so I won't overdo the VoLTE analysis here.

    Overall though, I’ve come away rather pessimistic, despite all the bombastic hyperbole I’ve heard. I’m hearing the same old stories I heard at last year's event – and a lot of them are getting worse rather than better. Loads of hoary old clichés about peak rates and “exponential” data growth. How flatrate plans don’t cut it, long after most of them have been phased out anyway. A ton of unrealistic vendor hype about application-specific policy and charging “business models”.

The big story was also much the same as last year, albeit stated a bit more loudly rather than just implied – there are too many spectrum bands for LTE. At least 8 “core” bands, and another 10-20 also being deployed or considered. Europe will probably get by with three main ones – 800, 1800 and 2600MHz. With perhaps a little bit of 2100. And some use of bits of TDD spectrum knocking around.  Then there’s a variety of US bands, Japanese-specific ones, Chinese ones and a variety waiting in the wings to get approved.

    That is *much* worse than 3G, which had one core band for much of the world (2100MHz) and still took a long time to get either coverage or good handset performance.

Bottom line is that LTE spectrum fragmentation is not going to go away. This has a number of implications – firstly, roaming is going to be a real pain when moving “off-net” beyond a single operator’s OpCos, or between regions of the world. In all likelihood, HSPA will continue to be used for roaming in a lot of cases. Secondly, handset vendors will likely have to create either regional versions of handset hardware platforms, or make “world phones” that suffer from coverage issues in some markets. Either way, scale economies will be lower, prices higher, testing more problematic and time-to-market longer.
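The roaming implication boils down to a simple band-overlap check. A toy sketch of the logic, with example (not authoritative) band lists:

```python
# Toy illustration of the roaming problem: a handset can only attach via
# LTE where its supported bands overlap the visited network's bands;
# otherwise it drops back to HSPA. Band sets are examples only.
DEVICE_LTE_BANDS = {800, 1800, 2600}        # a "European" LTE handset
NETWORK_BANDS = {
    "EU operator": {800, 2600},
    "US operator": {700, 1700},             # eg 700MHz / AWS
    "JP operator": {2100, 1500},
}

def radio_for(network):
    """LTE if any band overlaps, else assume an HSPA fallback for roaming."""
    if DEVICE_LTE_BANDS & NETWORK_BANDS[network]:
        return "LTE"
    return "HSPA fallback"

for net in NETWORK_BANDS:
    print(net, "->", radio_for(net))
# EU operator -> LTE; US and JP operators -> HSPA fallback
```

The point of the sketch: with a dozen-plus bands in play, the overlap set is empty far more often than it was in the one-core-band 3G world.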

    It will not be possible, for example, to have one iPhone variant that supports 3 European FDD bands, Verizon and AT&T 700MHz, the Chinese LTE-TDD variant, something for Japan, and perhaps another US band like AWS or LightSquared. I reckon that Apple will need to create three, possibly four distinct versions of future LTE iPhones.

    Now Apple can afford to do that - it only has a single model introduced at a time, it sells in high volumes per device model/version and makes a huge margin on each. In other words, even if each "spin" costs an extra $100m to develop, it's still a drop in the ocean. If it creates three versions and sells 10m of each, it will probably make $2-3bn gross margin on each variant, so it can "wear" the extra hardware development and test cost quite easily.
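The back-of-envelope arithmetic behind that claim can be written out explicitly. All the numbers are the rough assumptions from the paragraph above (the $250/unit margin figure is my assumed midpoint), not real Apple financials:

```python
# Back-of-envelope version of the argument above. All inputs are the
# post's own rough assumptions, not real figures.
variants = 3
dev_cost_per_variant = 100e6     # extra hardware/test cost per LTE "spin"
units_per_variant = 10e6         # assumed unit sales per variant
gross_margin_per_unit = 250      # assumed ~$250/unit, i.e. ~$2.5bn midpoint

margin_per_variant = units_per_variant * gross_margin_per_unit
cost_share = dev_cost_per_variant / margin_per_variant

print(f"Gross margin per variant: ${margin_per_variant / 1e9:.1f}bn")
print(f"Extra dev cost as share of margin: {cost_share:.0%}")
```

Under these assumptions the extra development cost is around 4% of a variant's gross margin - a drop in the ocean, as long as you have Apple-scale volumes per model.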

But it would get very painful for lower-volume devices, or manufacturers that have broad ranges of devices. This in turn means it's probably going to be painful for operators with unusual spectrum bands (eg LightSquared) to get a decent range of handsets.

    In Amsterdam, we heard repeated pleading from operators - even DoCoMo - essentially saying "Support *my* band! Please! It's really good, and we can get economies of scale & support from all the vendors!". 

    There are going to be some disappointed players left standing in this game of musical frequency chairs. And everyone else is likely to feel the knock-on effects of component suppliers' hesitation and uncertainty. Some operators will likely hold off on LTE decisions until the spectrum situation becomes a bit clearer.

    One other option for LTE that got a little exposure - but was obviously still highly contentious - was that of wholesale-only shared networks like Yota (and LightSquared and a couple of others). I think that although that model makes sense in terms of spectrum usage efficiency, it also poses a risk for incumbent operators that will start to lose control over their core business enabler (the network) and may face a future where all differentiation comes in terms of the (often mythical, and always competitive) "services" layer.

    I'll be writing more about the threat from "under the floor" players in the coming months - and why shared/outsourced/structurally-separate mobile infrastructure plays are both inevitable and highly disruptive. I'll be at the network-sharing conference in London next week as well.

One interesting angle on voice and VoLTE that is starting to bubble up - and which I've been suggesting and advocating for some time - is that of dual-radio phones. We already see dual-radio CDMA/LTE phones for Verizon and Metro PCS, which use CDMA for voice and LTE for data. This has a distinct advantage over the proposed "Circuit Switched Fallback" standard, in that an incoming voice call doesn't switch off the LTE data channel. I'm expecting to see the same approach appear for GSM/LTE dual-radio phones, but that is much more complex as (unlike CDMA) both radios will probably need separate SIM cards, or two IMSIs on one card. At least one major vendor was openly discussing this approach - but at the moment the lack of standards for handling this type of device is a concern for operators.

Like VoLGA before it, dual-radio "velcro" GSM/LTE is a solution that *works* conceptually very well, but it will be interesting to see if the politics of the standards world - and some entrenched interests wanting to ensure that nothing detracts from VoLTE/IMS's uncontested anointment as top solution - get in its way. My view is that this should be the main backup plan or straight replacement for VoLTE: as telephony revenues start to fall, why would many operators want to invest in a new core network and applications when their existing GSM telephony works perfectly?

In my view, operators should invest their future voice/telephony budget in creating new voice products and platforms - and do the absolute minimum necessary to get decent "old school" telephony working on LTE smartphones. I think the Velcro (yes I know it's a trademark) approach could free the operators to concentrate on creating new and possibly more valuable voice and VoIP applications - before Skype/Microsoft does it for them.

    The last comment in this post is about WiFi and LTE. I've had a few conversations recently about the rising star of WiFi usage for offload, onload, roaming and other operator use cases. I think that all of these are extremely important.... but I also sense a dangerous level of groupthink around the "telco-isation" of WiFi. There's a host of new standards and solutions that make bolting WiFi onto 3G/4G networks more "seamless" or more controllable. 

Those of you with long memories will know that I have an intense suspicion of the word "seamless". It represented all that was wrong with the ill-fated UMA technology. More than four years ago, I wrote what I thought was the requiem of seamlessness. But it's back, it seems. In a nutshell - seams are important. They're boundaries. Sometimes I want to know when I reach a boundary, sometimes I don't. Things change at boundaries - speeds, policies, price, ownership, security, latency and so on. In particular with WiFi, it is absolutely critical to enable a good user experience when choosing between "operator WiFi" and "private WiFi".

I see far too few advocates of the "private WiFi" use cases - there seems to be an assumption that WiFi access on smartphones will default to being "service"-mode. I think that is a deeply flawed belief, and unless addressed, will come back to haunt some of the new approaches to offload or operator-provisioned WiFi. More to come in later posts, conference presentations and so forth.

    A few quick bullet points of "other" interesting items:
• Apparently, TeliaSonera intends to charge extra for VoIP on its LTE network. Good luck with that. Maybe you can start by providing us with a clear legal definition of "voice"? Downloading a spoken poem? Audio telepresence? Skype video with "mute" switched on? Accessing voicemail? Encrypted speech inside HTTP streams? If you're a Swedish-speaking telecoms lawyer, you're going to make a lot of money over the next few years....
    • Verizon was being very coy about its rollout and recent outage. Its conference speaker was not even from Verizon Wireless but from the EMEA arm of the company which is mostly the former MCI/WorldCom enterprise services division. Unsurprisingly, probing questions about the progress of VoLTE testing were not especially illuminating.
• Apparently, SMS over the SGs interface *is* working. Just that vendors haven't bothered to tell anyone about it as it's not considered sexy. Let's see how the full SMS-over-LTE experience works on future phones though.
• It was good to hear an anecdote from T-Mobile Netherlands that the biggest problem isn't "tonnage" of data traffic, but simultaneous signalling from lots of smartphones and apps in the same place. More interesting still was the massive explosion of the SMS-replacing "WhatsApp" service in Holland, which apparently got to 70% penetration (of smartphones, I assume) in just 3 months. Hence KPN's profit warning a couple of weeks ago. (It's worth noting that the Netherlands is slightly unusual when it comes to messaging, as it's historically been a low-Facebook-use country, instead using its own local social network Hyves)
    There were certainly more nuances I picked up about LTE, but the overwhelming sense was that, in Europe at least, there is "no hurry" to push it to the massmarket. That's a big contrast to the US, where a 4G marketing frenzy is taking place, dragging network deployment in its wake.