
Wednesday, October 31, 2012

Interoperable does not mean "federated" - lessons from email

When I've recently debated telecoms "traditionalists" about #TelcoOTT and the Future of Voice, I've noticed that a familiar theme is around interoperability.

The argument sometimes goes along the lines of insisting that we shouldn't have "islands" for messaging or voice or social networks. There is then a rapid leap to conclude that the traditional model of interconnected telecoms networks/services is something to be emulated in future. (And, indeed, enshrined in platforms like IMS).

In general, I think that fragmentation is always a stronger and more powerful trend than convergence. I have been presenting slides highlighting the importance & value of "divergence" for more than 15 years now.

But let's leave aside the hypothetical discussion about silos for a minute. I will agree that there is definite value in having *some* services or applications interoperate at least *some* of the time.

But there is a critical distinction that is not made by many:

Interoperability does NOT imply "Federation".

Federation is where every network has its own, dedicated service or application platform, and they interoperate via standardised network-network interfaces. This is familiar from services such as today's telephony and SMS. In those cases, there is a direct link between network infrastructure and services - so each telco has its own services and network, and interoperates/interconnects at their "border". A call starts on one telco's network, and ends on another's - AND, critically, it uses BOTH telcos' application platforms as well. The user has a service identifier (number) which is directly linked to the access line.

But that is not the only model of interoperation.

There is another hugely-popular form of fully-interoperable communications, which is not "federated" in the same way. It operates in a fashion completely divorced from the network and access mechanisms.

That application is email. It interoperates almost perfectly. But it is not tied to an access provider (although your ISP can give you a dedicated email address too). It can be accessed on any device, via various protocols (POP3, IMAP and so on). It has been enhanced over the years. It can be web-based or client-based. And it works pretty flawlessly, most of the time.

A good way to think about it is that you can email yourself, using multiple accounts or apps on the same device - or across multiple devices. You, the user, might choose a primary email account, but it's decoupled from the access part. In theory, you can have private email "islands", for example inside companies, that don't interoperate.
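The decoupling is easy to see in code. As an illustrative sketch (the addresses and server below are invented), Python's standard library can compose a standards-compliant message that is identical whichever provider or access network eventually carries it - the submission step is a separate, swappable concern:

```python
from email.message import EmailMessage

# Compose a message; nothing here ties it to an access line,
# a SIM, or any particular provider.
msg = EmailMessage()
msg["From"] = "me@provider-a.example"
msg["To"] = "me@provider-b.example"   # emailing yourself across "islands"
msg["Subject"] = "Interoperable, not federated"
msg.set_content("Same message, any SMTP submission server, any access network.")

# Handing it off is a separate step (hypothetical server & credentials):
# import smtplib
# with smtplib.SMTP("smtp.provider-a.example", 587) as s:
#     s.starttls()
#     s.login("me", "secret")
#     s.send_message(msg)

print(msg["To"])
```

The point of the sketch: the message format and addressing are independent of who delivers it, which is exactly the property federated telephony lacks.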

I think the email model is a possible way to evolve voice telephony and make it more useful and enduring, especially in mobile. You could have multiple "lines" from multiple service providers, on a single device. At one level they would interoperate perfectly, but they might have separate special features or business models, in the same way that Gmail is different to Hosted Exchange or assorted others. (I still pay a subscription for Yahoo Mail Plus, because I like the disposable email aliases & the spam filtering is really good).

However, for this to happen, phone numbers need to be disaggregated from SIM cards, which will likely be done via Telco-OTT and LTE networks (or perhaps WiFi). It might be possible to have multiple "VMVNOs" on a single SIM as well, I guess - perhaps using multi-IMSI.
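To make the "multiple lines, one device" idea concrete, here is a toy data model (all names and identifiers invented) of what number/SIM disaggregation might look like: one handset hosting several voice providers, each with its own identifier and business model, much as an email client hosts several accounts:

```python
from dataclasses import dataclass, field

@dataclass
class VoiceLine:
    """One 'line' from one provider - analogous to one email account."""
    provider: str
    identifier: str          # could be an E.164 number, could be a URI
    business_model: str      # subscription, freemium, sponsored...

@dataclass
class Device:
    """A single handset hosting several decoupled lines (no SIM binding)."""
    lines: list = field(default_factory=list)

    def add_line(self, line: VoiceLine) -> None:
        self.lines.append(line)

    def identifiers(self) -> list:
        return [line.identifier for line in self.lines]

phone = Device()
phone.add_line(VoiceLine("TelcoA", "+44 20 7946 0000", "subscription"))
phone.add_line(VoiceLine("OTT-B", "sip:dean@ottb.example", "freemium"))
print(phone.identifiers())
```

This is only a sketch of the relationship, not an implementation - the point is that neither identifier is bound to the access network underneath.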

I think that future service/app interoperation will be driven by business model needs, or customer demand. If 100m users demand that Skype interoperates with Google Voice (or VoLTE), I'm sure Microsoft will consider it. Various IM and VoIP services already interoperate, either directly like MSN-Yahoo messengers, or via an exchange like Xconnect.

But while interoperability will continue to have value, I see zero - or perhaps negative - value in the legacy federated model. If anything, it enshrines business model and technical rigidity. It would be difficult to have an interconnect agreement between a fully-paid and a freemium telephony service, for example, as payments would have to depend on the status of each party.

In my view, email is the "forgotten" ubiquitous service. While it might be deeply unfashionable, and with revenues that are both small and hard to extricate from wider Internet usage, it is worth examining as a model for future interoperability. In particular, its standards do not require a specific type of access service, or enshrine a business model. Also, because it is easy to sign up for 2nd, 3rd, 4th or n'th accounts, it is a low-key way to extend your brand or ecosystem, without the pain of negotiating users' switching barriers.

As such, an interoperable but non-federated model could also be the solution to the "speaking agency" problem I outlined in yesterday's post.

Tuesday, October 30, 2012

Telcos' role as "speaking agents" in voice telephony will inevitably be disintermediated

Imagine, if you will, a business where hundreds or thousands of independent service providers link millions of customers with a complex network which enables long-distance communications, earning a commission each time the users make a payment.

I am, of course, talking about travel agents selling flights.

Like many other agency or brokerage-type business models, they have seen revenues and margins drop precipitously because of the Internet, especially where they historically occupied bricks-and-mortar premises. They have often been disintermediated by new web-based businesses with different cost structures (eg Expedia) or airlines selling direct (EasyJet, JetBlue etc).

The buying of airline tickets is not a "service" as such. It's just a normal function of the airline along with flying the planes, baggage-handling and maintenance. (Yes, I know they contract out certain bits, but that's not a service from the passengers' viewpoint).

Some offline travel agents have nevertheless survived, along with a new tier of online aggregators / affiliates (Kayak etc). If you want to book a round-the-world flight, you'll probably still go to an expert, although even there I've seen online systems gain in sophistication.

Other travel agents have gone down a bundling route (eg lowest-common denominator package holidays) or specialised in unusual destinations, demographics or unique adventures or experiences (eg The Adventurists). Often they will make money from ancillary services (accommodation, visas etc) rather than the flights. Some package operators have vertically-integrated and now own their own airlines.

Why is this relevant?

Well, firstly because this illustrates an important problem with the Future of Voice that many of my august peers seem to overlook. We over-focus on increased supply of telephony (so-called OTTs, MVNOs, new entrants like Free in France) and don't focus enough on peaking or decreasing demand for telephony. When was the last time you actually phoned a travel agent to book a flight? There are simpler, better (not just cheaper) ways to do it.

But it's the second point that's more important - philosophical, even.

Today's telcos are often just acting as "speaking agents". They just intermediate between you and the person you want to talk to, over a distance, taking (effectively) a commission from the value of your conversation.

That is not a sustainable business model. It is ripe for disintermediation.

Speaking at a distance is not obviously (or exclusively) a "service" proposition. You don't need an agency involved, unless it adds significant value. In some cases, bundling can be "value" but only if it's cheaper or much more convenient than buying the components separately. Some of my colleague Martin Geddes' ideas on Hypervoice (adding context & actions to voice streams) are valuable. DoCoMo's cloud translation service is valuable. Numbers have value for instances where you're calling someone new, or to a place (eg for a pizza). Emergency calls have value. There is some value in "quality", but it's not really enshrined in simple network QoS.

But connecting two friends or work colleagues together for a basic phone call does not involve any provision of value beyond the access layer. In fact, the restrictions of the phone call format may detract from the value of the conversation.

I know the analogy is not perfect. But the "speaking agent" model is going to become ever more niche. We only think of it as a service because of the ancient history of the telegraph, and then the use of manual intervention to connect you to your recipient's line.

(That model still exists in some instances with personal assistants: "Hello, is that Mr Bubley? I'm connecting you to Mr X now". Ironically, the only time in recent memory that's happened to me has been when speaking to representatives of the ITU before the recent conference in Dubai).

While telecom users might sometimes be lazy in switching, they're not stupid. Trying to eke out the last bits of growth in the voice telephony-agent business makes sense, but blaming the decline on those "dastardly OTTs" is completely missing the point. Voice communications is already moving to cheaper/richer applications (eg Skype) and it's about to become embedded in the web (via WebRTC - I'm speaking at the conference in SF in Nov).

The idea that regulators (who are usually tasked with improving value to consumers, as well as competition fairness) will happily sustain a basic speaking-agency model long into the future is over-optimistic. Once ministers and regulators pick up on the idea that "voice" doesn't have to be a service, but can just be a function or application, the world will likely change rapidly. We will see efforts to decouple the valuable aspects (eg emergency calls) and provide them perhaps as a standalone service or basic citizen right.


If telecom operators want to continue to fight their corner, they need to:

  • Think deeply about the "agency" dilemma. Are you really just brokering (and metering) peoples' conversations?
  • Work out how to add real value to conversations. This will need careful segmentation of *why* people make calls, and a search for unmet needs in specific contexts. It's ridiculous that we use the exact same product for a sales call, as we do for calling a relative overseas.
  • Promote the use of telephony and other voice services much better. 
  • Stop focusing myopically on the OTT bogeymen on the supply-side, or you'll miss the real elephant in the room, which is falling demand for an ageing and clumsy product.
  • Decouple the number from the service and access. While there is still some value in E164 numbers, the perspective of number=identity is extremely flawed. 
  • Review how your accountants do revenue allocation of voice telephony from bundles. I believe that it is often massively overstated to begin with.
  • Understand the difference between voice & telephony, and between services & apps/features
  • Ensure your billing & OSS systems are up to the challenge of new business models - freemium, sponsored, differentiated or affiliated services etc.  Stop thinking that the "minute" is the fundamental unit of telephony.
  • Tell regulators to stop thinking in minutes, and to understand that the very nature of voice comms is changing.
  • Warn your investors & explain what you're doing (if you think Utility valuations are bad, have a look at Agencies of various types)
  • Get up to speed on the threats & opportunities from WebRTC. It's probably the most disruptive thing I've seen in more than 5 years. I'm doing a presentation on what it means for telcos, and also sitting on a panel at this conference next month.
There's a ton of other stuff I could add here. But it's critical to avoid the complacency of some of my rival analysts, who suggest it's all fixable if you just hang on to the number and do some clever bundle-pricing. That is pure wishful thinking.

Martin Geddes & I did a Future of Voice / Telco-OTT workshop in London last week. Sign up to both his & my mailing lists and we'll let you know about our 2013 schedule soon. Or if you'd like to arrange a private workshop or brainstorm session, contact me at information AT disruptive-analysis DOT com .

Friday, October 12, 2012

Essential reading for ITU World attendees: Ubiquity is EARNED not ASSUMED



I've written before about the “Death of Ubiquity”.

In the run-up to next week’s ITU World summit in Dubai, where I’m on two panels on ecosystems & future service platforms, I thought it was a theme worth revisiting. I’ve also just spent two days at the VoLTE and WebRTC joint events in Paris, which has given me further food for thought.

Many operators, industry associations and vendors refer to the “ubiquity” of the PSTN, and the universal “reachability” of E164 telephone numbers. This is also sometimes linked to the history of universal availability of emergency calling as well. This ubiquitous nature is taken to be, unarguably, a benefit, and something to be maintained at all costs.

I’ve identified three important implicit assumptions that don’t bear scrutiny:
1) The assumption that the phone call – and phone number – will remain our primary – and ubiquitous - means of communication in the future
2) That IP-based versions of those services (generally IMS-based) are somehow “entitled” to ride on the coat-tails of their ubiquitous circuit-based predecessors, and will also inevitably become ubiquitous. 
3) That the classical model of each telecom company owning its own (commodity) switches and applications, and interoperating/federating them, will remain central in future

In a week in which we learned that Facebook has passed a billion users, yet Verizon has pushed back commercial VoLTE rollout to 2014, there is clearly a reality-check needed about what will be “ubiquitous” and when.

Voice is more than just phone calls

Let’s start with the phone call. As Martin Geddes and I point out in our Future of Voice workshops** , it was a pretty decent idea 100 years ago – but it really isn’t a great reflection of how we’d ideally like to communicate in spoken form. It’s interruptive, rather unnatural and suffers from what is termed the “Hegemony of the Caller”. It’s fine for certain types of interaction, but looking increasingly poor for others – which is why many people now pre-schedule calls, or “escalate” from IM. (It also has numerous other limitations which I can discuss on request). We’re seeing the emergence of new forms of voice communication (eg ambient voice or app-embedded communications such as in-game chat). Some markets like the UK have now gone past “peak telephony”, with minutes-of-use falling as we discover other mechanisms that work better for certain functions.

**(a few spaces left for Oct 23-24 in London - sign up now!)

That said, I don’t expect the phone call to disappear quickly or entirely (although revenues will). What is less certain is whether the phone number will endure. I’m increasingly irritated that web-based forms expect me to enter a phone number, when often I’d rather be contacted by the service of my choosing such as Skype (or not at all by voice). As technologies like WebRTC start to turn voice communications into a function of certain websites or apps, rather than just a service, we’ll have ever more voice interactions that don’t need a numerical identifier attached to a subscription or SIM card.

Cut the number?

I don’t think it’s feasible yet, but I’d quite like to “cut the number” when I get the chance. I only use the phone for a few functions, and my call minutes diminish year on year, despite getting more in my mobile or fixed broadband allowance. It’s a hassle having to port the mobile number whenever I churn, as it’s linked to my account/SIM.

Overall I think that numbers could still remain quite useful – but only if they can be totally decoupled from access line and SIM. Increasingly, we will have multiple access providers anyway (especially as we use multiple devices and assorted 3rd-party WiFi connections), so there is ever-less argument to have a single “master” access against which everything is tied. (Separately, I think that WhatsApp and Viber are storing up problems by using the number as an identifier for their services).

But today, phone numbers and phone service are pretty ubiquitous, I agree. They grew up in an era in which there were no alternatives, offered good reliability and extra features like emergency connectivity, and have served us well and gained popularity.

BUT….

This means that the PSTN has *earned* its ubiquity. Billions of people have seen it to be good, and bought into it. Mobile telephony (and SMS) has gradually usurped fixed telephony and extended its reach.

Fake it till you make it?

The problem is that the IP-based successors of telephony – such as IMS-based VoLTE – have conspicuously not earned their ubiquity yet. Some in the industry are assuming that will happen in the future, but it’s far from obvious. RCS is even further from being “entitled” to ubiquity as it doesn’t have a circuit-switched predecessor as heritage (no, it's not SMS 2.0).

That VoLTE and RCS5 are being combined by some operators undermines this assumed “right” even further – it has not been given a “mandate” by end-users yet, and in the new world of choice it is wrong to pre-suppose that it necessarily will.

Some vendors and the GSMA are taking a stance of “fake it till you make it”, in order to scare recalcitrant operators into adoption against their better judgement. But CFOs don’t want to invest in expensive new systems to service a declining market.

It’s indisputably clear that such operator-based services will not be the only games in town. Indeed, at this point in time, Facebook, Skype and WhatsApp are all more “ubiquitous” than IMS-based services. Even for fixed telephony, IMS-based VoIP solutions compete with simpler NGN-VoIP, 3rd-party services such as Vonage or Skype over “naked DSL”, and of course circuit telephony, which is still leading after 10 years of grindingly-slow substitution.

While many in the industry claim that so-called OTT players are “silos” or “islands”, that is neither accurate nor relevant. In its current state, it is IMS that is the silo, albeit one managed by an arguing committee rather than an individual company. Not to mention that there are many ways by which Internet-based services can and do interoperate – not all the time, or for all examples, but it is a trivial problem where there is demand. (Indeed, email is the best example of an OTT communications application which interoperates perfectly).

Which brings us to a more important issue: it seems abundantly clear that users positively like silos. (Note to regulators/ministers: users are also voters). By and large, people don’t seem to mind that Twitter or Facebook are run by individual companies, and they have plenty of choice if they do mind. 

(Edit: It's also clear that users don't always mind about variable quality or reliability either. But then you already knew that, if you'd witnessed the original uptake of patchy/drop-ridden cellular telephony. I will address the wider issue of QoS, QoE and Net Neutrality in another upcoming post)

No, you can't reach me

This supposed issue of “ubiquitous reachability” is also a chimera. Increasingly, people don’t want it. They want something much more nuanced – easy reach by some people (friends, clients, colleagues), slightly more difficult reach by others (loose contacts, who ought to make a bit of effort, as a filter), and no reach at all by others (eg telesales). Facebook, LinkedIn and various other social networks get this – they build in ideas like “mutual contacts”, contact requests, “how do you know X” functions and so on.
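That tiered model is simple enough to sketch. The tier names and policy below are my own invention, purely to illustrate the point that "reachability" is a graduated policy decision, not a binary property of a network:

```python
from enum import Enum

class Tier(Enum):
    FRIEND = "easy"        # rings straight through
    LOOSE = "filtered"     # loose contacts ought to make a bit of effort
    UNKNOWN = "blocked"    # eg telesales: no reach at all

# Hypothetical address book mapping callers to tiers
contacts = {
    "alice": Tier.FRIEND,
    "old_colleague": Tier.LOOSE,
}

def handle_call(caller: str) -> str:
    """Decide what happens to an inbound contact attempt."""
    tier = contacts.get(caller, Tier.UNKNOWN)
    if tier is Tier.FRIEND:
        return "connect"
    if tier is Tier.LOOSE:
        return "ask-how-do-you-know-me"   # the social-network style filter
    return "reject"

print(handle_call("alice"), handle_call("telesales_bot"))
```

Social networks implement exactly this kind of graduated filter; classical telephony's "ubiquitous reachability" gives every caller the FRIEND tier by default.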

In a busy, networked, multitasking world, we simply don’t want ubiquitous reach.

The problem is that IMS proponents, most vendors & operators, and industry bodies, never bother to think about behavioural psychology, or social anthropology. They develop technical standards based purely on engineering principles, not human ones.

We now have enough advanced technology that we can engineer pretty much any form of communication that we want. So telecoms companies need to start with “want” not “engineer”. It is conspicuous how few IMS, RCS and “ubiquity” advocates mention end-users, or talk about actual behaviour and preferences, rather than how they’d like the world to work.

Ubiquity might occur again in telecoms, but it will be earned, not assumed or mandated.

A federal imperative?

Once you understand that, you understand why the federation approach to telecoms also fails. Not only does it take far too long to evolve – and with too many compromises based on committees – but the underlying economics are bunk as well. Federation of services means that each operator produces, distributes and sells the same commodity product. You can call these “dumb services”. No other industry has 1000+ manufacturers of an undifferentiated commodity with falling prices and zero shipping costs.

Federation may occur after services are successful, and the owners/users think that there is a good rationale. They will use tools like SBCs and other gateways, which are getting ever-cheaper and more powerful. Federation is not a starting point. You can federate from a position of strength, not in anticipation of it, or else you risk creating a brittle, inflexible, slow-moving bureaucracy which is incapable of backtracking when it makes a mistake.

For all their size and power, companies like Facebook and Microsoft and Google have changed direction – often very rapidly – when faced with challenges from their users (or competition authorities). I can’t remember 3GPP or GSMA ever doing the same.

Again, the difference revolves around users. There is no mechanism for end-users to force a change in “ubiquitous” services, especially if they are somehow viewed as special. Regulators can play around with pricing and a few other issues, but cannot easily drive changes in the underlying technology or characteristics of the services themselves, especially in a short timespan.

For Facebook, every change it makes risks key people – ie social “hubs” with lots of friends – abandoning the service and switching allegiance elsewhere, potentially taking hundreds of people with them. It is those individuals, if anyone, who hold the “hegemony” – much like real life, where the most popular and connected people determine the success or failure of restaurants or theatres or fashion brands.

There is no path for end-users to petition the 3GPP to change the nature of deep-packet inspection, or the role of SIM cards. For federated services, churning doesn’t help, because there is no competition at the basic layer of service features and capabilities. You have to take what you’re given.

Unsurprisingly, in a world of choice and crowd-sourced product direction, this is not popular any more. Users are rejecting federated services for better, more-tailored and often free/cheap alternatives, delivered via open Internet access and apps. The more egregious moves (eg on privacy and net neutrality) are sometimes ultimately tackled through the ballot box, although the obfuscating noise of politics and lobbying makes that a tricky path - one which of course gets conflated with a hundred other non-telecoms issues.

Dial 911!

Emergency calling usually rears its head at this point in the argument, as an example of the “greater good” that customers are only aware of when they really need it. It is used as an excuse for continuing the controlled, centralised, federated-telco model.

I think that is a non-sequitur.

I agree that good emergency communications is a must. It also needs a bottom-up rethink. Nobody sensible would suggest being able to call 911 from inside voice chat in World of Warcraft (“Police? My sword’s been stolen”). But nobody sensible would say it’s a bad idea to allow SMSs to emergency services either, yet 20 years on it’s still not possible in most countries.

We need to start thinking about decoupling emergency comms from the telephone network and look – in the round – at better evolution paths. For $50-100bn, we could probably find a global 5MHz of spectrum, build dedicated networks (high range/low capacity is fine given the loads) and give every person on the planet a cheap cellular emergency keyfob or bracelet. Alternatively, banging together the heads of Microsoft, Google, Apple, Facebook AND some telcos could yield a rich and extensible “Emergency API” that far exceeds today’s voice-only 911.

It shouldn’t be the emergency tail wagging the broader communications dog. In fact, the tail shouldn’t be attached to the dog anyway, solely because of a 100-year old legacy technology and industry structure that is breaking up. Let’s use the ITU event next week to start thinking about imaginative alternatives for the next 100 years more seriously.

Conclusion
 
So, overall I think it is time to rethink this term “ubiquitous”. What ought to be ubiquitous is the right for the individual to be contacted primarily on their terms, not those of whoever is trying to contact them. If we decide as a group that we still want a lowest-common-denominator telephony service in perpetuity, then we should optimise GSM and CS voice even more, for maximum efficiency and lowest cost and power consumption.

GSM, unlike VoLTE, RCS or IMS, has earned its ubiquity. IMS and VoLTE might succeed and become ubiquitous eventually (RCS certainly won't). But the industry shouldn't assume or pretend that it's inevitable, because it isn't.


Friday, September 14, 2012

How to save NFC: Kill the idea of mobile payments & operator involvement

There's been lots of hand-wringing over Apple's decision to exclude NFC support from the iPhone 5. It's not because it can't. It's because it won't. Apple's marketing VP Phil Schiller is quoted as saying “It’s not clear that NFC is the solution to any current problem”.

Spot on.

I've been critical of NFC for some considerable time, and it feels pleasing to be vindicated by Apple's doubtless consumer-centric and design-first approach.

I see three main problems with NFC:

1) Focus on mobile payments & other transaction-based use cases
2) Complexities around the secure element stemming from telcos' insistence on being involved in the NFC value chain
3) Ergonomic deficiencies.

The third one is easiest to explain. Simply: tapping a piece of expensive, glass-encased electronics on solid objects is stupid. Furthermore, making people interrupt whatever they're doing on a phone to buy something / get on a train / whatever is equally stupid. We all multitask. Let me use my phone & a card/cash simultaneously.

1) and 2) are linked. The belief that the "killer app" for NFC is paying for things - or other "monetised" apps like ticketing - has led mobile operators to say "we want a slice of that!". This has then led to interminable wrangling over the architecture for security, and in particular the linkage of NFC to SIM cards. This has had numerous side-effects:

  • It's delayed the whole thing through massive bureaucratic & political procrastination
  • It's created a technical structure which means that transactions are actually too slow on many phones (eg turnstiles on London's Tube need to work in 300ms from tap-to-open, so people can walk through without breaking stride. Oyster cards work, phones don't)
  • It's meant that NFC hasn't been properly opened up to developers as a general API to do cool stuff with.
  • It wouldn't be able to work well on non-SIM devices (eg tablets) and would likely have a hard time dealing with dual-SIM devices, or the half of the planet which either has 2 phones, or swaps SIMs all the time
  • It's led to ridiculous protracted trials & consortium-forming which has earned a lot for lawyers and PR people and totally messed it up for everyone else.
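The turnstile problem above is just arithmetic. The per-step latencies in this sketch are invented for illustration (I have no measured figures), but they show how routing a transaction through a SIM-based secure element can blow a 300ms tap-to-open budget that a simple stored-value card meets comfortably:

```python
BUDGET_MS = 300  # London Tube tap-to-open target cited above

# Illustrative (invented) per-step latencies in milliseconds
oyster = {"rf_handshake": 20, "stored_value_read_write": 80}
sim_nfc_phone = {
    "rf_handshake": 20,
    "wake_nfc_controller": 60,
    "route_to_sim_secure_element": 120,
    "applet_crypto": 150,
}

def total(steps: dict) -> int:
    """Sum the latency of each step in a tap transaction."""
    return sum(steps.values())

for name, steps in [("Oyster", oyster), ("SIM-SE phone", sim_nfc_phone)]:
    t = total(steps)
    print(f"{name}: {t}ms -> {'OK' if t <= BUDGET_MS else 'too slow'}")
```

Whatever the real component numbers are, every extra hop in the secure-element architecture eats into a budget that was tight to begin with.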
But the simple fact is that the whole mobile payments thing is a chimera. Yes, there are some corner cases (unbanked people in Africa on M-Pesa, specific apps like Starbucks, Square for accepting payments). But the basic notion of "paying for stuff with a mobile phone" is simply flawed. Firstly, cash & cards work perfectly well. I've never had a problem buying a sandwich & thought "what a terrible experience taking £3 out of my pocket". I can use cards anywhere on the planet with a pretty good acceptance rate. Chip & pin means it's more secure than before. And I never see anyone bothering to tap their cards on the contactless readers either.

The idea of your purchases "going on your phone bill" completely ignores the fact that most people on the planet use PAYG prepaid and don't get a bill. Average outstanding prepay balance is something like $5, I believe. Most contract users won't want a sandwich or a flight on their phone bill - especially corporate expense managers. It just doesn't fit with our mental model of "phone bill", which many people don't bother looking at anyway as they're on a standard plan. Linking purchases to credit cards stored virtually in your phone just seems pointlessly geeky & needs interruptive apps to be useful. I don't buy all this couponing & analytics hype either - it's just putting lipstick on the pig.

The idea of electronically transferring money without so much as a PIN or a signature scares me and most other people. I don't trust any of the parties involved except the card provider and my bank, and adding in the handset-maker and mobile operator just increases the already-too-high perceived risk. Note: this is totally different to my Tube Oyster card as that is stored-value & decoupled from my bank & credit accounts. The most I can lose is the £20 I top it up with. I can also use the card while I'm on the phone - I like to multitask when I'm travelling.

It's notable that the much-vaunted Japanese FeliCa system is still little-used for actual purchases of goods with phones. And that's despite NTT DoCoMo spending something like a billion dollars buying a bank and a stake in a convenience-store retailer to catalyse the market.

Schiller is right. The "tap-to-pay" thing is a nonsense, a solution looking for a problem. The involvement of a telco adds zero value and lots of friction. At some point I might want to use the phone to make transactions against a loyalty account (hence Starbucks), but that's likely to be very specific to a particular brand or store & I'd like to do it "in the app". QR codes (as used for airline mobile boarding passes) are not a bad option for this, and *maybe* NFC in the much longer term, but even then I still prefer the "visible" code - and no need for physical contact with the device. [I don't believe in phones for *reading* QR codes, but displaying seems OK]

Where the real value of NFC might lie (and I'm still not 100% convinced on these either) is in what I refer to as "interactions", not "transactions". Stuff like a "click to connect to WiFi" pad in a cafe, or a "touch-to-like" Facebook icon in a restaurant. The WiFi example has already been done by Blue Butterfly, and is much more elegant & sensible than the pointless and wrong-headed "seamless" approach suggested by some carrier-WiFi advocates. I wrote about this 18 months ago - the volume of free non-monetised NFC interactions will outstrip paid ones by orders of magnitude, like free apps in the AppStore. Operators probably won't want to be involved in that loop - and will likely slow down the developer app-creation process anyway.

We need to get rid of cellular operators from the NFC value chain, except as just another class of app developer. There won't be many transactions for them to take a cut from anyway, and their involvement in "interactions" just adds extra complexity and bureaucracy without providing any value. We also need to bin the idea of NFC being transactions-first entirely, as it has perverted the entire development course of the technology. It *might* come later, once normal uses of contactless have crossed the barrier of public acceptance - along with trust that it's OK to tap your precious device on things.

Apple has not "killed" NFC with the iPhone 5. It's merely pointed out that NFC is on its deathbed already, dying from a virulent payments & telco-involvement disease. It might be resuscitated, but I doubt it - and Apple has cleverly avoided contamination from its corpse.

One last thing: If a miracle happens & NFC does start to recover from its debilitating case of payments, then tablets will make great NFC *readers* for many applications. Apple, Google, MS, Samsung & co. should be embedding that functionality, before Square takes even more of the merchant market away from them.

Thursday, September 06, 2012

Nokia's wireless charging proposition - very clever indeed

Yesterday I watched Nokia's launch of its new Lumia 920. Apart from making me want to throttle whoever came up with the hideous canary yellow colour, it seemed fairly impressive. I've been playing with a Lumia 900 for a while now as a secondary data-only device, and I like it and the Windows Phone 7.5 OS quite a lot. The 920 takes it further - the PureView camera technology, in a less bulky form than the 808's and unencumbered by Symbian, is an obvious winner.

One other thing that has taken a few hours to sink in has been the wireless charging idea. Now this isn't new - I've been shown demos of mats and pads for about five years, I think (Powermat was founded in 2007). Various companies have tried either selling it themselves or partnering (eg Duracell), but it's remained resolutely unsuccessful and over-geeky.

My initial reaction to Nokia's inclusion of it was "gimmick".

But on reflection, I'm no longer so sure. Smartphones with big batteries still go flat quite quickly, especially if used "in anger". We shouldn't really be surprised about this - if you use something as a miniature version of a laptop & perform similar tasks on a high-res screen with lots of processing, the energy still has to come from somewhere. Even a big 1500-2000mAh phone battery holds a fraction of a typical notebook's energy.
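To put rough, illustrative numbers on that comparison (the figures below are my assumptions, not Nokia's: a nominal 3.7V phone cell, and a typical ~60Wh mid-size notebook pack):

```python
# Back-of-envelope comparison of phone vs notebook battery energy.
# All figures are illustrative assumptions.

phone_mah = 2000           # big 2012-era smartphone battery, in mAh
phone_voltage = 3.7        # nominal Li-ion cell voltage
phone_wh = phone_mah / 1000 * phone_voltage  # mAh -> Ah, then x volts = watt-hours

notebook_wh = 60.0         # typical mid-size laptop battery pack

print(f"Phone battery:    {phone_wh:.1f} Wh")
print(f"Notebook battery: {notebook_wh:.0f} Wh")
print(f"Ratio:            {notebook_wh / phone_wh:.1f}x")
```

Roughly an 8x gap - so a phone asked to do laptop-class work will inevitably feel short on energy, whichever way you charge it.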

So. We all complain about battery life. Yes, there's some stuff the network vendors & operators can do to improve the radio's consumption, but that's only part of the story - the screen & chips still drain a lot of power even when offline or on WiFi. (Incidentally - remember when WiFi used to be a "battery killer" on phones? Now it's a saviour compared to 3G/4G).

Yes, we can charge most phones from our PCs' USB sockets these days, but while we might sometimes carry the cable, we probably don't carry the main standalone phone charger around in our bags (if we have a bag). I've been in cars or on planes with USB sockets to charge things, but that's pretty rare - and also a bit inconvenient with the wires anyway.

Nokia has come up with two clever options with its wireless charging that make me think the idea could have legs:
 
  • Integration with other bits of hardware / accessories (the JBL speaker / charger thing)
  • Establishment of public wireless charging hotspots with its deals with Coffee Bean Cafes and Virgin Atlantic lounges (I'm going to call them "Powerspots" and see if I get to claim coining rights in a few years' time)
The interesting thing is that this is all standards-based, using something called Qi, proposed by the Wireless Power Consortium. So Qi-enabled phones and chargers should be able to interoperate in future.

One Nokia exec is quoted as saying "The Virgin deal is a first step in our plan to make wireless charging as ubiquitous as Wi-Fi is today."

It's an obvious link to make (hence "Powerspots" - see, you like it already, don't you?).

But what's really interesting is how today's WiFi models came about. Initially used for industrial applications and driven by companies like Symbol, WiFi spread in the early 2000s to enterprise offices, and then to consumers' homes and public hotspots.
  
But do you know which company first "consumerised" WiFi and ultimately catalysed its adoption in laptops and homes? No, not Intel with the Centrino chip in 2003, although that was a "crossing the chasm" moment. Four years earlier, in 1999, a certain Mr Steve Jobs of Apple introduced the AirPort card and base station, based - perhaps unusually for Apple - on the standardised 802.11 technology.

While other people already thought about public hotspots, the home use of WiFi was still very new - unsurprising as ADSL and cable modems were only just starting to emerge. Importantly, the timing meant that third-party WiFi became popular enough - and usable enough - that telecom operators were not able to exert much power over it, especially as it operates in licence-exempt spectrum.

Now clearly, there's no obvious telco business model for wireless charging anyway. But what Nokia has (perhaps) catalysed here is a separate trend towards places offering - and maybe in some cases charging for - power for phones, or offering it as a value-add to gain loyalty, much like WiFi today.

(Sidenote: there are already various phone-charging-for-money business models, eg with assorted safe-boxes in hotels/bars or even whole shops in developing countries, which just act as power-points for people without electricity at home).
 
So, I think Nokia's been quite smart here. Whether it can directly monetise Qi and wireless charging is yet to be seen (I can't see it setting up a big Powerspot network itself), but it might just buy it a couple of points of market share at a critical period. It's been sensible going down the standards route, because it catalyses the public-Powerspot market and has therefore (maybe) spiked Apple's or Samsung's guns if they had anything proprietary in development.

One last thought-experiment for network operators though:

What would happen to your network if all your customers had fully-charged phones, all the time? What's the incremental use, and is it "more of the same", or fundamentally different in character?

Friday, August 24, 2012

Telcos will suffer because of "subscription myopia". WebRTC & WiFi don't need subs

I've been thinking a lot about WebRTC recently. How and where it will become important, and what it might do to our concepts of voice/video communications and the existing telecom value chain.

It's still very early days, but the momentum and details suggest that it will be incredibly important. There are certainly complexities - not least of which is Apple not yet revealing its intentions - but overall the general premise "feels" right. There are no obvious irreversible "gotchas", there are plenty of interesting use-cases, and a whole plethora of innovators from both small and large companies alike.

See more recent posts on WebRTC here and here, and watch out for the forthcoming Disruptive Analysis research report here.

This is diametrically opposite to things like NFC payments or RCS, for which there are plenty of hard, easily-described and unfixable flaws in the basic concept, and where support and innovation are thin.

WebRTC fits well with the idea that much of what we consider as communications "services" are in fact just "applications", and increasingly drifting further down to become "features" and eventually "functions". Messaging is already a long way down that curve - IM chat inside apps such as Facebook or Yammer or Bloomberg is not a "service", any more than the bold-type button is a service. It just sends words from A to B, rather than highlighting them on the page.

WebRTC extends that metaphor to spoken words or visual images. They will just be sent via a browser or web widget (obviously needing access to camera, microphone, codecs & acoustic processing). It is already possible to have direct browser-to-browser conversations without plug-ins or downloaded applications on the desktop. Massmarket versions of Chrome, Firefox and IE are all likely to support WebRTC during 2013, with a steady move onto mobile over the next couple of years.

This will mean that voice communications (and in some cases video, although I think that will be minor) will become much more pervasive, cropping up in all sorts of interesting contexts. I've long talked about "non-telephony" forms of voice, such as Siri, in-game voice chat, push-to-talk, business-process integrated voice and so on. WebRTC is likely to be the single biggest catalyst enabling "voice as a feature" to be used by web developers in the same fashion as any other aspect of HTML.

Maybe in two years' time, you'll be on the Amazon website and you'll suddenly hear a voice saying "Hey, congratulations to all of you browsing right now - there's a 10% discount on everything for the next 5 minutes!". It could be the web equivalent of a tannoy in a supermarket: "Special on Aisle 3!". That's not a phone call. It's not a service, either. But it is voice communications. Other possibilities are too numerous to mention, but many have observed that this means that "the website becomes the call centre". Not "click to call" or even "Skype me", but just having an in-browser real-time voice interaction in the same fashion we already see with IM chat. Adding WebRTC voice to LinkedIn, Facebook and numerous other sites is obvious, and so are things like web-karaoke without plugins, or voiceprint-based authentication instead of passwords.

This is disruptive to both traditional phone calls, and also to "legacy" standalone VoIP clients such as Skype's. It is doubly disruptive to new VoIP platforms such as telcos' IMS-based VoLTE, which is mostly just a recreation of the old telephone mindset, and is having enough problems even doing that.

At the core of this is a central problem for the telecoms industry. It is addicted to - perhaps even enslaved by - the idea of the "subscription". All operators report subscriber numbers, the word SIM means Subscriber Identity Module, and many of the technology elements such as HSSs and most billing systems assume subscription-type relationships. Regulation is also heavily subscriber-centric.

Now, subscriptions are a very valuable business model. Ongoing payments are attractive for companies, and predictable for users. Many businesses - including a lot of technology analyst firms - are heavily dependent on subscription revenue streams.

But they're certainly not the only business model, and neither are they without flaws. They mandate an ongoing customer relationship. They assume that the capability being provided is an identifiable and separable service.

While that has been fine for the past 100 years of telephony, it is clear that the landscape is changing. Voice or video communications is going to appear in lots of contexts - service, application, feature, function. Sometimes it will be based on the need for enduring relationships and "reachability", for example with a phone number and subscription. Sometimes it will be transient and in-app.

Some communications capabilities will continue with ongoing identities and billing relationships. Others will be sponsored, free, ad-hoc, one-offs, occasional use, ambient, ad-supported and so on. I'll get my spoken words delivered - and paid for - in as many ways as I get my italic words. Sometimes I'll get italics in my subscribed and paid-for magazines. Sometimes they'll be on a website or billboard for free.

If I want to speak to an Ikea customer service agent with a query on how to put my cupboard together, I don't need their number, and they won't need mine. I'll just click the "help!" button in the Ikea app, which has already tried to show me where I'm going wrong, perhaps with a one-off fee associated with it.

Now it's possible that could be done with Telco APIs, hooking into an IMS core and telephony app server. But I might be using a WiFi-only tablet with no associated phone number or operator relationship. And Ikea, in this example, is not going to want to deal with either 100 telcos or the constraints of some collaboration like OneAPI, when it could just add the function simply and easily into the browser or app, at no cost or hassle.

My view is that WebRTC will ultimately be the "ubiquitous" voice and video communications service. There will be more browsers, and voice-embedded websites and apps than mobile and fixed phones. The telco/IMS world will be a subset of this, constrained by the narrow formula of subscription-style relationships and defined identities.

Yes, there will be security issues around the perceived dangers of anonymised voice communications. Yes, in some cases network quality will be too poor to support good-enough voice using best-efforts connections. But those will be (fixable) corner cases, and not things to derail the wider trend.

We already see service providers looking at opportunities around WebRTC - addressing services, legacy interoperability, premium billing, perhaps quality enhancement or emergency-calling-as-a-service. AT&T, China Mobile, Telefonica and others have spoken publicly about WebRTC, and I know many more that are watching or involved in the standards work. Vendors like Ericsson are looking too - this is not just a Google / Microsoft / Apple (??) fight, with traditional telecoms getting squashed in the middle.

There are still plenty of questions, and this won't all happen overnight. But one thing is, to my mind, utterly inevitable. Those companies who refuse to see beyond the "subscription" - and those technologies which cannot flex enough for non-subscription relationships - are facing decline into niches or outright irrelevance.

(Footnote: WiFi doesn't need a subscription either. LTE does)
(Footnote #2: One good way for Telcos to get around the legacy subscription mindset & infrastructure base is to pursue Telco-OTT services and business models. Buy the report!)

Thursday, August 23, 2012

Upcoming Telco-OTT & Future of Voice events: US, UK, Asia & MidEast

The debate about Telcos & OTT / Telcos vs. OTT services refuses to die down, spanning VoIP, messaging, content and cloud services.

Recent months have seen continued debate at all levels of the industry. There have been countless articles written with various levels of apocalyptic and/or messianic tone. Skype, Google, Apple, Amazon and Facebook are becoming more entrenched in users' minds and smartphones, along with newcomers such as WhatsApp, Pinterest and Twitter.  We've felt the rumblings of WebRTC (more on that in coming months from me). We have seen operator CEOs opine on both perceived threats and opportunities (most notably Telefonica). We've seen organisations like ETNO try to flex political muscles, lobbying the ITU about the whole structure of the Internet and about permitting operators to levy transport charges (more like telecom-style termination fees) on 3rd-party applications and content.

We've also seen the difficulties of some Internet business models - notably Facebook, Netflix and Zynga. As such, these firms are unlikely to ever pay rent-seeking telcos any form of transport toll without additional value-add that helps their business. Google, the BBC and others have ways around network-quality issues and are unlikely to ever pay for QoS, even if it becomes feasible. The only ways to "monetise" OTT services are those which enhance their current business and revenues, not tax what they're doing already.

We might see some traction in areas like customer data intelligence, identity, congestion or billing APIs, but the operators are desperately slow, especially where they attempt to collaborate.

In the meantime, customer expectations from voice and messaging are changing significantly. Users seem entirely happy without the need for "ubiquity", except as a last-ditch common denominator for people they contact outside their normal network. For almost every use-case of communications, there's something better, cheaper or cooler than phone calls or SMS.

With all this in mind, I'm going to be on the road over the next couple of months, participating in a broad variety of events, and speaking on the central themes of:

  • Clash of ecosystems: telecoms standards, apps & web
  • Telco-OTT services and strategies
  • Future of Voice & Messaging
  • Why telcos cannot hope to "monetise" OTT comms/content/cloud services if all they are offering is data transport
  • At some of the events I'll also be looking at network-side issues like policy & charging, or WiFi offload/onload models

The format of these events varies. Some are private vendor-led customer conferences at which I am a "stimulus speaker". Some are paid public conferences. I'm also doing various private behind-closed-doors workshops.
 
First up, on 20th-21st September, I'll be running a 2-day telecoms excellence course in Singapore with Clariden Events. Titled "Managing and Understanding Disruptive New Technologies in the Mobile Telecommunication Business", it will cover a broad array of trends around both communications services (voice, Telco-OTT, WebRTC etc), and the underlying infrastructure. It will cover both global and Asia-specific developments.

On 27th September I'm doing a webinar on Telco-OTT with Acme Packet. Details here.

Next up is the US, where Martin Geddes & I will be running a shortened version of our Future of Voice / Telco-OTT workshops as the pre-conference for Metaswitch's customer forum in Orlando, on October 1st. We're both also speaking or moderating panels in the main part of the conference.

Then, from 14-18th October, I'll be in Dubai at the ITU Telecom World Summit, attended by a variety of global telecom luminaries, including national telecom ministers and operator CEOs. I'm on a couple of panel sessions, including the "Battle of the Ecosystems", which will examine telco business models in the world of OTT services and consumer data. I'll be voicing a number of opinions, including the pivotal role of OTT, and the need to maintain a strict view of the "Real Internet" alongside any other non-neutral varieties of data service. My other session will be on "Service delivery", which will cover areas like IMS, RCS and WebRTC. I'll be arguing that operators and standards bodies need to look beyond legacy platforms such as IMS, if they are to survive the next decade.

October 23rd & 24th in London is the next iteration of my and Martin's full 2-day workshop series on Future of Voice / Telco-OTT. We'll be revamping the material to cover recent developments - from WebRTC, through outcomes from ITU, to updates on RCS/VoLTE launches. Full details are at www.futureofcomms.com and sign-up is here. These interactive, small-group events feature a careful mix of operators, vendors, Internet companies and developers - often with some regulators and investors thrown in as well. We've got a very strict 25-person maximum so we can give personal attention to everyone's specific situation, and encourage collaboration between people in the room.

I'll also be chairing the Total Telecom World conference in London on November 13th, which will also examine OTT disruption, and how to rebuild the telecom ecosystem to recapture growth and revenue.

There will probably also be a few other events I'll be at in 2012 - I've already got a Telco-OTT webinar (details soon) and a couple of private presentations/workshops booked in. I'll also probably be wearing my "Telco 2.0" hat at STL's Digital Asia event in Singapore on 3-5 December.

If you're interested in booking me as a stimulus speaker, event chair or panel moderator, please get in touch via information AT disruptive-analysis DOT com.

Lastly, one bit of advance warning. I will NOT be attending MWC'13 in Barcelona next year, for all the same reasons that Alan Quayle eloquently discusses in this blog post. I think the move to the new out-of-town venue will destroy the nature of the event, and I've got no desire to suffer the "commute" to and from central Barcelona. However, I'll be making extra time available for meetings in London the week before - and if anyone fancies joining me, maybe we can organise some sort of Mobile London Congress instead, or at least a few drinks.

Saturday, August 04, 2012

London & Technology: The Mayoral 2012 Debate & my city's future direction

I'm a native of the world's greatest city. And despite my extensive travel schedule, I still live a mile from where I was born, right in the centre of London.

The Olympics - and Team GB's performance - are making me doubly proud of my home. And so I'm absolutely delighted to have been asked to take part in one of the "Mayor of London 2012 Debates" this afternoon - unsurprisingly, the one called "Technology: Disruption or Convergence". The lead speaker is Jimmy Wales, of Wikipedia fame. I'm down as one of eight "spotlight speakers" who will assist the debate with questions to him.

The theme of the event overall is this: "London has demonstrated resilience over many economic cycles, but what role will it play in the global economy as we shift to meet new challenges, and how can it incubate innovation?"

I've watched with interest in recent years as the technology industry in London has surged once again, especially in the parts of East London around Shoreditch - sometimes called "Silicon Roundabout", in reference to the Old Street road system. A bunch of interesting startups have emerged, companies like Google have set up shop, and - since it's just a mile from the City of London - sources of investment have filtered in. There's definitely a sense that innovation is indeed being "incubated", and there's certainly no shortage of encouragement for digital businesses in areas like e-commerce, social media and so on.

But I worry slightly that London focuses too much on the glossy - and rather ephemeral - part of the tech value-chain. It's very much "all about digital" - an extension of the city's heritage in media, advertising and trade. What's missing, to my mind, is hardware and other more engineering-led disciplines. While design is clearly critical to many firms' success (as Jony Ive's recent knighthood highlights), there is also a need for nuts and bolts to underpin the sexier, flashier part of the Internet and mobile experience.

Ironically, there are no companies involved in silicon anywhere near Silicon Roundabout.

To me, the reason that Silicon Valley in the US has been so successful is that it has had everything from university-led research at Stanford, to silicon vendors such as Intel, and leading IT/networking players such as HP and Cisco and Apple and Sun, all alongside software (enterprise and consumer), finance and assorted supporting functions. Yes, media in the US tends to congregate in New York or LA, but the Bay Area has had pretty much everything else "on site".

I worry that London doesn't have the same depth - and despite having centres of excellence in places such as Bristol and Cambridge, there isn't the same "corridor" effect. Cambridge is almost exactly the same distance from Old Street as San Jose is from Market Street in San Francisco. Yet the M11 motorway most certainly isn't Highway 101. While a drive from SF to SJ takes you past numerous famous tech locations - Redwood Shores, Cupertino, Palo Alto, Santa Clara - the equivalent trip in the UK embraces Walthamstow, Bishop's Stortford and lots of pretty scenery - as well as some rather unpleasant bits of Northeast London. Although Stansted Airport is directly in between, most international travellers go via Heathrow, which is a battle with traffic and transport right across town.

In the past, UK technology seemed centred on the "M4 Corridor", from West London, past Heathrow, out towards Reading and Swindon and Bristol. Yet with a few exceptions, most of the many offices there are just the UK sales and marketing HQs of US or other international players. Not much innovation and R&D happens there, and the area also lacks investment firepower (not many VCs would want offices in Slough or Basingstoke - they're hardly Sand Hill Road equivalents). Not to mention that London's rather sniffy media industry doesn't really fit with the business parks of Berkshire.

My question this afternoon is therefore going to be on these lines:

At the moment, London has a great deal of creativity and investment in the digital space, from mobile to e-commerce to social media. Yet despite the name ‘Silicon Roundabout’ to refer to the Old Street and East London technology hub, the one thing London lacks is expertise in the ‘silicon’ aspect – there are no IT hardware or semiconductor firms, unlike California. Can London’s technology sector continue to prosper without that hardware-engineering baseline – or can it rely on satellites such as Cambridge and Bristol to supplement its software and design competencies?
 
Personally, I'd like to see an East London / Cambridge corridor being considered in more concrete terms, with better transport links and perhaps enterprise investment incentives being extended there, probably also including the Olympic Park legacy area around Stratford as well. Otherwise, I fear that for all the hype around London's tech renaissance, we're going to lose out on the synergies and self-reinforcement gained from combining all parts of the value chain. Semiconductors, network design and enterprise software might not appeal to Number 10 Downing Street's desire for photo-opportunities, but that's where a lot of success - and employment and tax revenues - could come from.
 
It might be the Internet age, but Silicon Valley proves that geography - and the chance for people to meet and travel - remains an incredibly strong factor in sustaining innovation and the relevance of a local economy on a global stage. And the valuations of Apple, Cisco, Intel and their peers also illustrate that engineering and hardware make up for their relative lack of sexiness against social media in other - perhaps more important - ways.
 
I'd like to see London build on its current success by extending its investment and innovation in both physical location, and new parts of the broader technology industry.
 

Friday, August 03, 2012

Mobile data traffic growth - a thought experiment & forecast

I'm deeply skeptical about a lot of the rhetoric about "mobile data explosions" and "tsunamis". In particular, I believe that a lot of the forecasts are unrealistic and often self-serving. Cisco's VNI is the best-known, but many other vendors (eg Ericsson, Huawei) and analysts put out their own data as well.

The predictions of several more years of 100% traffic growth seem a particularly poor fit, given that numerous operators (eg Vodafone) have reported notable falls in growth rate, often to below 50% annualised in developed markets with tiered/capped plans.

Clearly, suggesting that networks might get overwhelmed is a great way to suggest that operators should "buy more kit". There's also usually a particular focus on video and the percentage of traffic it makes up.

I also think that overstated & misrepresented data traffic forecasts are mis-used by the operators and industry bodies, especially when it comes to talking about "spectrum shortages", or the supposedly onerous effects of Internet content that should justify non-neutrality of service provision. There is a bigger battle being fought here, with the telecoms industry trying to claim spectrum previously used for TV or government functions. This has two purposes - it makes network expansion simpler than using alternative approaches, and it also reduces the strategic and competitive power of the broadcast industry, which is a gating factor on telcos' IPTV opportunity.


So it is easy to understand why there is a good reason to high-ball estimates of future mobile data growth. There is also a desire to try and create - or at least influence - self-fulfilling prophecies about the role of mobile broadband.

In any case, market forecasting is imprecise, because often the market itself is dependent on decisions made in the light of people reading them. To be accurate, you'd really need to forecast what new actions will occur as a result of your forecasts being believed, which is clearly a circular argument.

In any event, these discussions generally overlook numerous inconvenient issues that ought to be front-and-centre for mobile data, before we run to the hills (or the regulators) from the "tsunami":

- Tiered/capped pricing seems to "work" very well in limiting data consumption and congestion, especially if users have a "fuel gauge" and some idea of which activities burn the most of their quota
- Gross measures of traffic "tonnage" don't translate either to costs or congestion. It's traffic in busy hours and busy cells that matters. An extra 10GB of video downloaded at 3am in a rural cell is essentially free. An "offpeak" dataplan might increase reported traffic volumes but have no impact on costs or congestion. In fact, it might generate incremental and very profitable revenue by increasing capacity utilisation during quiet periods.
- For many networks, signalling load (against both the RAN and the core network) is the problem, not data tonnage. Multiple short bursts of data or "pings" clog up the network in different ways to a single, consistent stream. But it's harder to measure and bill for signalling, so it tends to get ignored
- Smaller cells give greater capacity density, at a lower price. They allow better spectrum re-use, reducing the need for new bands. In the long run, we get much more extra capacity by reducing cell size rather than adding extra radio channels - but this conflicts with the desire to grab more spectrum from alternative/competing users.
- Other new technologies and processes will improve network efficiency too: beam-forming, better sectorisation, MIMO and so on. But these are less well-proven than simply adding extra bands.
- "Video" isn't an application, it's hundreds. Amalgamating 500+ different applications and services under a single banner is completely arbitrary and meaningless. It's like saying the web is disproportionately dominated by the colour blue, and it should therefore be treated differently. (It might be green, I don't know).
- The dynamics of demand for mobile broadband are over-simplified. Much of the historic growth has been from "more users" rather than "more use per user". There are additional issues (discussed below) about coverage, device capability and so on.
- Demand growth and capacity growth are not directly linked - especially because "capacity" is impacted by numerous factors such as backhaul as well as radio-network scale. We also find that some base stations are "congested" because they haven't yet been upgraded to the maximum number of radio carriers already available.
- The dynamics of the mobile data market differ for post-paid and pre-paid users. As PAYG (which makes up the bulk of the planet's mobile customer base) becomes predominant, the idea of a monthly allowance will shift to a more usage-linked model. Early evidence suggests that PAYG data users - with smartphones - consume much less data than those on fixed plans. This is not factored into most forecasts.
- Some forecasts start from 2009 or 2010, therefore building in a huge initial leap from a low base. Ignore anything that doesn't use 2011 or 2012 as a start year, as otherwise the statistics will get swamped by vague and patchy measurements of historic data.
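The tonnage-vs-busy-hour point above can be sketched numerically (a toy model with entirely hypothetical figures I've chosen for illustration): two cells carrying identical monthly tonnage can need very different capacity, depending on how peaked their demand is.

```python
# Toy model: cell capacity is dimensioned for the busy hour, not monthly tonnage.
# All figures are hypothetical, for illustration only.

def busy_hour_load_gb(monthly_gb: float, busy_hour_share: float) -> float:
    """Traffic carried in the single busiest hour of an average day."""
    daily_gb = monthly_gb / 30
    return daily_gb * busy_hour_share

# Two cells with identical reported monthly tonnage...
peak_heavy = busy_hour_load_gb(monthly_gb=300, busy_hour_share=0.15)     # commuter-hour cell
offpeak_heavy = busy_hour_load_gb(monthly_gb=300, busy_hour_share=0.05)  # overnight-download cell

print(f"Peak-heavy cell, busy hour:  {peak_heavy:.2f} GB")
print(f"Off-peak cell, busy hour:    {offpeak_heavy:.2f} GB")
```

Same "tonnage", but the first cell needs roughly three times the capacity - which is why gross traffic statistics say little about cost or congestion.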

NEW REPORT - 10 REASONS WHY 1-800 TOLLFREE APPS MODEL WON'T WORK

We're already satisfying much of the latent demand

I'd argue that in developed markets such as the US and UK, we are already at 50%+ of potential mobile data usage saturation given TODAY's devices, data-plans, apps and user behaviour. Almost anyone who really wants a smartphone already has one.

Many people who want cellular-connected tablets or laptops already have them too. Sure, there are still a few demographics that want them but can't afford them, but that group is diminishing rapidly. The remainder are mostly "laggards" - folk who might like a new device, but who are likely to be "unenthusiastic" in usage behaviour, at least in comparison to those with a 4S or S3 or L900 in their hands 24x7. There are surprising numbers of mobile data "refuseniks" whose attitude is unlikely to change.

If you magically increased the smartphone penetration of all developed countries to 100% right NOW, I'd be surprised if it added more than 50% to tomorrow's data consumption stats.


A thought experiment

Consider a fictional place where 50% of people have mobile broadband (smartphones with data plans, plus some fraction have 2nd/3rd devices), and 50% don't have MBB, either because it's too expensive, or because they live outside coverage, or they're just apathetic.

Let's say that the 50% of current data users are using 1GB/month as an average (mean). There's a mix of capped and uncapped plans, some people are heavy users (with one or two devices), others are more parsimonious, or perhaps just use WiFi a lot.

What is the "unconstrained demand" from these people at around current data-plan prices? I think we can assume that the mobile broadband marketplace is now pretty efficient at giving people roughly what they want, at roughly the price they're prepared to spend. I'd be surprised if the true unconstrained demand from existing mobile data users would be much more than 50-100% higher than today. (Yes, if we dropped the average price massively there'd be an elasticity effect and demand would rise, but let's set that option aside for a moment).

What are those constraints?


Thinking about Mobile Data Demand Constraints

A good way to think about this is to consider "what would happen to overall data traffic if we removed certain constraints?". This is often a counter-intuitive thought process because it involves going beyond the raw statistics and thinking about the real world and user behaviour.

So it's tempting to say that going from 50% penetration to 100% overnight would result in an instant doubling of traffic. But actually, it wouldn't because the remaining 50% would be much less enthusiastic users than the early adopters, especially on Day 1. Similarly, if we had perfect 3G/4G coverage tomorrow, we'd see traffic growth but not that much overall because all the busiest areas are already covered. What's left are big zones of occasional use (rural), and quiet corners of some indoor spaces.

This type of analysis is inherently much more complex, and goes beyond most statisticians' comfort zones. But it's a critical application of common sense. It's a sanity-checking phase too, that often seems absent in a lot of the mobile data forecasts I've seen.

For the thought experiment, let's consider I've got a magic wand to remove constraints. Each time, I'm going to leave the other variables untouched, especially price. What might happen?

1) Device penetration - if you gave everyone a smartphone tomorrow with a dataplan they could afford, plus cellular tablets/dongles at a pro-rata penetration to the early-adopter base, I expect we'd get around another 40% of traffic. (Heavily dependent on existing penetration of smartphones, eg 30% vs 50% - by the time some people read this post, we'll be nearing saturation anyway as it's moving so fast). In fact, the figure might be much lower - maybe just +20-30%, because most of those users will be prepay subscribers who tend to have lower data consumption anyway.

2) If we had perfect cellular coverage everywhere, I reckon we'd get a 20% uplift in aggregate traffic. Network planners aren't stupid - they know where the demand is, and the economics of satisfying it. Providing coverage to every mile of road and rail, or to every small village, would definitely be nice - but added together it doesn't compare to a big metro area.

3) Now for a biggie. If we improved network speeds to 4G-type rates, with better latency, what would that do to user behaviour? More video streaming? Probably. More web use? Sure. More cloud-based apps? Perhaps. This is a tough one to predict, but we see some indications from people who move from 3G to LTE (although some of that is about upgrading to a new & better device rather than having a better network). Some stats say that LTE users typically use 50% more data than 3G users - BUT, that is skewed by early adopters switching first. Given today's apps and data plans and user expectations - which are often met pretty well on 3G after all - I'd say 40% is reasonable if the speed constraints were removed.

4) Device performance is another variable here. A lot of people have quite old devices that are slow, clunky, have low-res screens or are otherwise constrained, irrespective of network capabilities or coverage. But how much of a big deal is this really? Again, most of the real heavy users and enthusiasts do have the latest devices. If you waved the proverbial magic wand and upgraded all the old 3GS's and Galaxy S2's and assorted BlackBerries to today's state of the art, what would happen? Not much I reckon, again if all other variables were kept constant. (There's a bit of a co-dependency with LTE availability as noted above, though). I reckon that across the user base as a whole, we'd see perhaps a 30% uplift in data usage.

Now let's bring all these together to see what might happen if we removed the constraints (except price, and again bear in mind this is with today's typical apps and behaviours):

Device penetration = 1.4x (maybe lower)
Coverage = 1.2x
Network speed = 1.4x
Device performance = 1.3x

Multiplying through, we get an estimate of unconstrained demand = 3x today's constrained demand
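Spelling out that multiplication as a quick sanity check (the four factors are the rough guesses from the thought experiment, not measured data):

```python
# Rough uplift factors estimated above - illustrative guesses, not measured data
factors = {
    "device penetration": 1.4,   # maybe lower in prepay-heavy markets
    "coverage": 1.2,
    "network speed": 1.4,
    "device performance": 1.3,
}

unconstrained = 1.0
for f in factors.values():
    unconstrained *= f

print(round(unconstrained, 2))  # 3.06 - i.e. roughly 3x today's constrained demand
```

The factors compound multiplicatively because each constraint is assumed to act independently on the whole user base.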

But some of this is - in all reality - never going to happen. We're always going to have a spread of device ages and capabilities. We're never going to get 100% coverage. Some people will never use Facebook or Netflix or Dropbox, no matter how fast the network.  PAYG prepay users will use less data for various reasons. And some people will hold onto their Nokia 6310 from 2003, even if you try and bribe them with the latest smartphone for free.

So in the confines of this thought experiment, we're probably addressing 50% of *current* unconstrained demand for mobile data, at current prices.

That's probably true of a bunch of other industries as well. I'd guess that we're at around 50% of unconstrained price-constant demand for anything from flights (people don't have enough holiday time, or are scared of flying etc) to beer (can't drink at lunchtime before a meeting, too young, health/religious issues etc).

It also passes the "taste test". Most people don't spend all day moaning about how they're only getting a fraction of the mobile data they want. Generally, apart from a few minor gripes (coverage mostly, plus congestion in some hot-spots), people seem pretty happy that their mobile data demands are being met.

Obviously, this is macro-level stuff. Specific places (eg Olympic Park) clearly see much faster growth in demand as there will be localised drivers. Also, the calculation in the thought experiment above will vary by country a lot too - there are different levels of network rollout, smartphone/dataplan adoption and so on. India, for example, is starting from a much lower base for traffic, and so many of the variables will be considerably higher to work out latent demand.


Mapping future demand

So. Let's say that with current devices, networks, apps, behaviour and pricing, we are dampening consumption of mobile data by a factor of maybe two from the theoretical realistic demand in today's developed markets. That's a useful number to bear in mind, as it means that:

Any future mobile data traffic growth is primarily going to come from new demand, not from satisfying current latent demand

That's important in technology. It's often said that "usage always expands to fill the capability available", or "build it and they will come", but that's not actually true. The reason that computer processor speed has always been exploited (and so quickly) has been that companies such as Intel have spent lots of time and resources on "demand creation", seeding developers with new technology, running marketing programmes and so on.

So where are all these forecasts for 10x, 20x - even 1000x - growth for mobile data coming from?

Is it just the mobile industry's normal ridiculous arrogance (almost a sense of entitlement about growth) and its propensity to ignore lessons painfully learnt elsewhere in the technology industry?

Well, firstly there's still a lot of untapped growth in emerging markets, although it again needs to be borne in mind that at current network costs and dataplan prices, per-capita data use is likely to be lower. A $2 prepay data ARPU is not going to fund 100% coverage, a $300 subsidy on an iPhone 4S, and a 1GB per-month plan. In general, that part of the market will be using cheaper/less-capable devices, on thinner/slower networks, with more restrictive tariffs than elsewhere. They will also likely adopt different behaviour to "squeeze more from less".

And, as mentioned above, we'll also see growth in smartphone use among late-adopters in developed markets.

So while mobile data user numbers should continue to grow rapidly, this will paradoxically drive down average data consumption, as heavy users (who may well be using more data year-on-year) get diluted by newer and ever more numerous lighter users.

I have not seen this mathematical inevitability called out in any forecasts, yet it happens in virtually all markets as they mature. A grandparent in rural Bolivia getting their first smartphone is unlikely to be using Facebook and streaming video 24x7 on Day 1.
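A purely hypothetical illustration of this dilution effect (all numbers invented to show the mechanism): suppose 100 existing heavy users grow from 1GB to 1.5GB/month, while 100 new light users arrive at 0.2GB each. Total traffic rises 70%, yet the per-user average falls:

```python
# Invented numbers illustrating "dilution of the average"
heavy_users, heavy_gb = 100, 1.5   # existing users, up from 1.0GB/month last year
light_users, light_gb = 100, 0.2   # new late-adopter / prepay users

total_gb = heavy_users * heavy_gb + light_users * light_gb
avg_gb = total_gb / (heavy_users + light_users)

print(total_gb)  # 170.0 - total traffic up 70% vs last year's 100GB
print(avg_gb)    # 0.85 - average per user DOWN from 1.0GB
```

Total traffic and average-per-user can thus move in opposite directions, which is why headline "per-user" forecasts mislead.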

So, where else is the extra usage going to come from? Various sources are possible:

- Even faster / better networks making apps more usable
- Device improvements like bigger screens
- New apps and services (eg mobile cloud-based offers)
- More cellular devices per person (either "attach rates" of personal devices like 3G tablets, or others like M2M that bump up the total)
- Behaviour changes meaning greater time spent on mobile apps
- Lower prices driving elasticity (eg through changing behaviour faster/further)
- Better structured data-plans driving off-peak usage volumes
- Switching from using fixed broadband or WiFi to LTE


Many of these are interlinked, obviously - if the network is faster, it enables new apps, and alters people's behaviour.

It is also important to note some downward pressures on users' average consumption of mobile data:

- Better devices and OS's that compress data (eg similar to BlackBerry, Opera Mini & Nokia Asha which route "optimised" data via a server), or third-party software like Onavo's
- Various types of network-based optimisation and compression, especially for streamed video
- More use of adaptive applications (eg HLS-encoded video) which self-optimise to network conditions, or more efficient codecs
- Substitution of 3G/4G data with WiFi, either as true "offload", or (much more importantly) user-driven preference for accessing private WiFi, usually for free
- Older devices being retired (for example, I'm about to cancel my 3G dongle contract as I never use it - I get WiFi almost everywhere I want it)
- App developers becoming wiser and more parsimonious about how their software consumes data, especially if they get better development/testing tools, or are "shamed" into it in appstore ratings

I don't have quantitative forecasts for all of these drivers and constraints. But some appear to me to be especially important:

- More usage per person driven by better apps and devices, and behavioural change. Let's tackle the latter first. I honestly don't think that the average mobile data user is going to increase their time spent on mobile devices by another 2x, 3x, whatever. We're close to saturation on that one already. Better apps and devices? Definitely. I agree that there will be more video and cloud-app usage. I can certainly see scope for an extra 2x or 3x over a 5-8 year period. However, the swing factor here is likely to be tablets, and the evidence suggests most of that extra usage will be WiFi-based.
- WiFi is to my mind the biggest "decelerant" here. Operators are (often) trying to do their own controlled offload of traffic, although I still believe that most examples are going down the wrong path of "seamlessness" with things like ANDSF and Hotspot 2.0. However, that is becoming less relevant anyway, given the huge global explosion of non-operator WiFi and increasing sophistication of users in exploiting it. Partly because of data pricing and caps, users are actively seeking "free WiFi" wherever they go, and becoming adept at using it - especially the high-end power users that normally generate the most traffic. Unless we see lots of WiFi congestion (possible), that move now seems irreversible. Recent Ofcom data shows that more people are becoming "WiFi-primary", just using 3G/4G where they have to.
- More devices per person. Yes, this is going to rise, even though most tablets will likely remain WiFi-only. We'll see various new mobile-enabled gadgets in all walks of life. M2M can, however, be dismissed as a major traffic driver as the vast bulk of products are low-bandwidth. Despite a few high-consuming categories like digital signage or in-car telemetry, there are none that obviously have the scope to scale to billions of units. Against phones and, to a lesser degree, PCs, tablets & MiFis, they are lost in the noise.
- Despite better uplink speeds and slow rollout of fibre, I don't see signs of much shift from ADSL/cable to mobile-only. Coverage limitations and need for IPTV and WiFi are likely to keep fixed broadband largely protected except for a few niches.
- I do see quite a lot of traffic growth being driven by "off-peak" data plans. Marketeers and their billing systems are becoming smarter. However, this is essentially irrelevant from the point of view of network capex, spectrum needs and so on.

The price elasticity issue is a difficult one to address. If 4G was completely free & ubiquitous, you'd find people getting LTE-enabled 42" TVs in their home, running HDTV over cellular even when there is nobody in the room. In that case, even the most bullish forecasts would still be too low. Clearly, that's an extreme example, but various other milder scenarios are possible.

This is the paradox in all the forecasts: there seem to be no clear assumptions on pricing. Cisco's VNI projections probably are achievable, but only if mobile data is priced at levels at which nobody would make any profit. As we found with flat-rate, it's easy to drive usage if you're throwing away money. That said, we may see users being encouraged to migrate to LTE with offers of larger data bundles, which effectively reduces prices.

Taken together, my belief is that the bulk of forecasts are over-hyped. I think that too many of the projections are being made without considering how devices, apps or users are actually changing. There also seems little recognition of the "dilution of the average" as late-adopters, prepay users and developing-market subscribers bring down the headline "per-user" numbers.


Quantitative estimates and forecasts

I haven't done a full spreadsheet model analysis & forecast, but since I'm sure everyone's going to ask me anyway, I'll "take a punt" on overall global data traffic volume growth rates, based on existing published stats & bearing in mind the qualitative factors described in this post. It's worth noting that most spreadsheets I've seen are long on detail but a bit short on common sense, especially with assumptions on per-user data growth continuing inexorably, despite the "average dilution" effects described here. There is also a general assumption that "subscription" remains the main business model, rather than more ad-hoc usage through PAYG. (Although, obviously, the overhyped 1-800 tollfree models won't be happening)

My prediction for overall global data traffic growth (Indexed to full-year 2011) is as follows:

2012: 70%
2013: 55%
2014: 45%
2015: 40%
2016: 35%
2017-2020: CAGR 30%

In other words, 2011-2016 growth is 7.2x globally - and in developed markets probably more like 4x - and by 2015 we should see growth rates broadly comparable to those in fixed broadband.
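Compounding those year-on-year rates shows where the 7.2x comes from - a quick sketch:

```python
# Year-on-year global mobile data traffic growth estimates from this post
growth = {2012: 0.70, 2013: 0.55, 2014: 0.45, 2015: 0.40, 2016: 0.35}

index = 1.0  # traffic indexed to full-year 2011
for year in sorted(growth):
    index *= 1 + growth[year]
    print(year, round(index, 1))  # by 2016 the index reaches ~7.2x 2011 traffic
```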

Given advances in technology, this should be "sustainable" with ongoing evolution of networks to LTE, especially small cells & better radio technology like beamforming. Some of the traffic growth is likely to be off-peak, as pricing & policy becomes smarter.

This means that growth will be more of a regular tide than a "tsunami", and definitely not something for operators to panic about - and regulators need to learn to be skeptical of shrill demands for more spectrum.



Some comparisons with other forecasts:

Cisco VNI expects 18x growth from 2011-2016 (vs. 7.2x from my best estimates)
Ericsson predicts 15x growth 2011-2017 (me: 9.4x for that period)
Reading from a chart in this ALU presentation suggests 30x from 2010-2015, or about 10x from 2011-2015 (me: 5.5x)

My peers over at ABI reckon 8x from 2012-2017 (me: 5.5x for that period)
Informa predicts 10x for 2011 to 2016 (me: 7.2x)
Morgan Stanley gives scenarios for 5x, 9x & 23x for 2011-2015 (me: 5.4x)
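The "me" figures in these comparisons are just the yearly rates above compounded over each forecast's own period, with 2017 assumed at the 30% CAGR - a sketch of the conversion:

```python
# My year-on-year growth estimates (2017 taken from the 30% CAGR assumption)
growth = {2012: 0.70, 2013: 0.55, 2014: 0.45, 2015: 0.40, 2016: 0.35, 2017: 0.30}

def multiple(start_year, end_year):
    """Compound growth from the end of start_year to the end of end_year."""
    m = 1.0
    for year in range(start_year + 1, end_year + 1):
        m *= 1 + growth[year]
    return round(m, 1)

print(multiple(2011, 2016))  # 7.2x - vs Cisco's 18x and Informa's 10x
print(multiple(2011, 2017))  # 9.4x - vs Ericsson's 15x
print(multiple(2012, 2017))  # 5.5x - vs ABI's 8x
print(multiple(2011, 2015))  # 5.3x - close to the 5.4-5.5x quoted above
```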
 
Edit, 14th September - AnalysysMason makes very similar arguments to me (and has been similarly pessimistic to me for a while). They reckon 5.5x global growth for 2012-17 (me: 5.6x) and "dangerously low" for Europe. I agree - overcapacity is the risk, not a mythical "spectrum crunch"