A very quick post...
Apple is now in an extremely happy and unusual place. It has worked out a magic formula (big fan base, good products, expectations of pricing), where it can pretty much guarantee that it can earn at least $200 of gross margin on any mid-to-high end product it sells - iPhones, Macs, iPads and so on.
So as long as there is a decent-sized market, at low enough risk, it can afford to treat certain new things as "projects" even though they may not be game-changing. If they do change the game, even better.
For example - the iPad. Even if it only sold 10 million, Apple would be up perhaps $2bn in gross profit. Even if the R&D upfront was $500m, that's still a pretty decent return.
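As a back-of-envelope sketch of that arithmetic (the unit, margin and R&D figures are the rough assumptions from above, not Apple's actual numbers):

```python
# Rough return on a 10m-unit "project", using the assumptions above.
units = 10_000_000        # assumed unit sales
margin_per_unit = 200     # assumed gross margin per unit, USD
upfront_rd = 500_000_000  # assumed upfront R&D, USD

gross_profit = units * margin_per_unit   # $2.0bn
net_of_rd = gross_profit - upfront_rd    # $1.5bn
print(f"Gross profit ${gross_profit/1e9:.1f}bn, net of R&D ${net_of_rd/1e9:.1f}bn")
```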
And so now, everyone is talking about a Verizon iPhone. A year or so ago, I would have been skeptical - a CDMA iPhone would have been a risky distraction. Now... with an extra year's traction, it seems like 10m units is pretty much a baseline. And I'll assume the cost/profit structure will look pretty similar to the HSPA ones.
In other words, it's money in the bank, assuming that nothing goes horribly wrong. The only argument against it might be the opportunity cost - could those engineers be doing something *even more profitable*? But I'd imagine the company has had the time & resources to get its hiring aligned with its business opportunities.
By the same token.... could Apple make & sell 10m LTE iPhones, at $200+ gross margin, at equivalent low risk next year? No. And probably not in 2012 either. There's no installed base of existing customers to sell to, the technology isn't mature, the chipsets are expensive, and the user experience would likely have issues - making "something going horribly wrong" a much higher probability.
If it can stop its margins creeping down, I'm sure there are plenty more alternative 10-30m unit segments that Apple can target for its next few billion, while it's waiting for LTE to make the cut.
Thursday, October 28, 2010
Sprint's mobile wallet sounds sensible
I'm not generally a big believer in mobile payment solutions for developed-world countries.
Cash, cards and online payments already work perfectly well for me and other people, thanks. I don't need to "store value" in my phone, I don't want to scan it across an NFC reader, and I certainly don't want my operator bumping up inflation by taking a slice of everything I spend.
I especially don't buy the "bill it to my phone bill" concept of mobile payments - like many people, I have a very different relationship with my bank & my telco(s) and I'm quite happy to keep it that way. You'd have to be crazy to have a financial-services arrangement with an operator which tied it to an access account provision, although if it was access-independent it might make a more sensible proposition. There's also no way I'd exclusively use my phone for payment when travelling, unless all the transaction data was very explicitly zero-rated for roaming, and I had a guarantee of 100% coverage.
I'm also very happy with my existing payment mechanisms - Visa, Paypal, Mastercard, Amex and so forth. I might set up another, but I'd need a lot of persuasion.
But Sprint's announcement of its mobile wallet solution is much more appealing - you get to keep all your existing accounts, but get access to them through your phone. Makes sense. Adds to what people already have, doesn't try to substitute it. Doesn't stop you carrying a physical wallet around as well as a virtual one if you choose. Doesn't try to bill things to your phone account. Maybe over time, if it's got a good UI and proves itself, you might change your approach to physical payments, some or all of the time. That's fair enough.
In other words, it doesn't force a behavioural change, but works with what people are already happy with. Which is good.
Now it's not 100% clear to me what the business model is, but in terms of "will this fly", my gut feel is that it has 100x the chances of all the various NFC and other mobile payments nonsense that's been trotted out in recent years.
Bottom line is that unlike most people in the industry, Sprint has actually bothered to look up the dictionary definition of a wallet: something that contains various different payment mechanisms from third parties.
Edit - looks like AT&T is also entering the fray. But they are going to go for the bill-to-the-phone approach. Let's see if people are actually prepared to wear that. My money's on "no" - although I'd rather not use an MNO-powered betting application & account....
Wednesday, October 27, 2010
Ray Ozzie, the so-called "post PC" era, and the naivete of the software industry
A post on ForumOxford pointed me towards Ray Ozzie's monologue about Microsoft and the future direction of the IT industry, "Dawn of a New Day".
Beautifully-written, yes. And containing much wisdom and knowledge.
But displaying, once again, the arrogance of the software mindset which believes it has conquered physics.
Contrast this with Jobs' comment last week:
"We create our own A4 chip, our own software, our own battery chemistry, our own enclosure, our own everything"
The idea that software engineering can beat hard limits and challenges in RF, power management, silicon and so on is myopic. Apple understands this and works to juggle all of them appropriately. (As does RIM, incidentally.) But then Apple values hardware at least as much as software, from milled aluminium shells to customised bits of radio front-ends.
Ozzie assumes that wireless networks will be fast and pervasive. But what the software crowd (whether it's in Redmond, Silicon Valley or London) fails to comprehend is that there are no 3G networks to connect tablets or clouds to, in most parts of the world. Nor the money to justify the build-out in the depth needed to realise Ozzie's vision. Nor the willingness to support subscription-based business models. Even in the developed world, ubiquitous, high-performance indoor networks would need fibre everywhere for WiFi or femtos.
And let's not even touch on the legal and regulatory hurdles. Good luck trying to persuade the GSMA to ditch roaming charges for data, especially for cloud devices. Maybe give the WTO a call and see if they can sort it out over the next decade or two? Until then, I'll keep my local storage and processing, thank you very much.
It's interesting that none of the best-known software billionaires talk about the "post-PC era". Perhaps that's because, through their non-IT philanthropic work, they get exposure to true "ecosystems", ones based on far more complexity than Ozzie's filtered view of computing. We're not yet living in a post-malaria world, despite Gates' heroic efforts through his Foundation.
At a recent conference, I crossed swords with a well-known and outspoken financial analyst, who complained that PCs hadn't evolved in 20 years. I pointed out that sharks & crocodiles haven't evolved in over 100 million years. They are still occupying and controlling their own (literal) ecosystems rather better than humanity does with its.
Ozzie's comment that devices are "instantly usable, interchangeable, and trivially replaceable without loss" also displays the overwhelming naivete of the software mindset.
It ignores the fact that end-users (you know, customers) like expensive, unique and tangible hardware. It performs many social and behavioural functions, not just acting as a cloud-services end-point. Software isn't loved and cherished, and neither will cloud services be. Software isn't polished, used as a status symbol, or oohed-and-aahed over. Nobody walks across a cafe to excitedly ask a stranger if they've *really* got the new OS7.3.
Yes, there will be some "trivially replaceable" devices (3G dongles, for example, already are), but anything truly personal will stay expensive. Again, Apple understands this - as does Nokia, squeezing out an extra few bucks at the low-end, differentiating hardware against no-brand competitors in the developing world.
Telecom dinosaurs refer to "dumb pipes". I predict that software/cloud dinosaurs will refer to "dumb devices". Both are wrong. (Yes, I know - Larry Ellison already got this one wrong in the 1990s)
Yes, cloud services will be more important and we'll see more devices accessing them, or optimised for them.
But the notion that this means that the world is somehow destined for a "post PC" era remains as risible now as it did a year or two ago, when the term was first coined.
Tuesday, October 26, 2010
Moving away from measuring mobile broadband "tonnage" (the mythical $/GB)
There are a couple of charts on mobile broadband that I'm now heartily sick of seeing:
1) The "diverging curves" chart, with data traffic rising exponentially, and revenue only growing anaemically, if at all.
2) The "comparative $ per GB" for LTE vs. HSPA and so forth.
The first one made an interesting point about two years ago, but is now well past its sell-by date. In particular, it always gets positioned as a "problem statement" when it's just an illustration.
I don't think I've ever heard a compelling argument for why it's actually an issue - not least because a similar chart is possible for virtually every technology industry out there. Imagine a similar chart of "computer processing cycles" vs. revenue spent on silicon chips. Or "packets processed" vs. router revenues. Or flight passenger-miles vs. airfares, and so forth.
Not only that, but it encapsulates price elasticity - average price goes down, so average usage goes up. Surprising?
A much more useful chart would be one which directly links weak revenue growth to stronger growth in costs for the operators (ideally combining aspects of capex + opex).
The problem is that the measure of mobile broadband by "tonnage" (ie GB of data) has become the single over-riding metric used by much of the industry, certainly in public. Sure, much more subtle calculations are used by the people who actually have to dimension and build the infrastructure, but the public persona of the "evil Gigabyte of traffic" as the key culprit dragging down the industry has a wide and damaging impact elsewhere.
Policymakers, marketers, vendors and investors often gloss over the details - and their opinions have knock-on impacts based on this simplistic and often-meaningless analysis.
In particular, I see a risk that the "tonnage" argument is used spuriously in debates about Net Neutrality or other aspects of competition and regulatory affairs. That chart is a lobbyist's dream if used effectively - never mind if the message and true chain of causality get a bit blurred. It's a pretty good tool for "educating" users with simple messages about pricing and tiering, too. Only yesterday, I saw a presentation from a very well-known strategy consulting firm that pursued exactly this line of debate, with highly questionable conclusions and assertions.
Given that we already know the real problems are often to do with busy-hour/busy-cell issues and signalling load rather than tonnage - or uplink rather than downlink load - the trite comments about "20% of users consuming 80% of the capacity" are not meaningful.
On the other hand, if there was a way to prove that "20% of users are causing 80% of the problems, and driving 80% of the extra costs", that would be a very different - and much more compelling - story. But at the moment, the diverging-curves chart is simply an exercise in non-sequiturs.
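To make that distinction concrete, here's a toy sketch with entirely invented per-user numbers, showing how the heaviest *tonnage* users and the heaviest *cost-driving* users need not be the same people:

```python
# Invented figures, purely to illustrate tonnage vs cost-driving usage.
users = [
    # (label, GB per month, share of usage hitting busy hours / busy cells)
    ("overnight bulk downloader", 20.0, 0.05),
    ("peak-hour commuter streamer", 3.0, 0.70),
    ("chatty low-volume app user", 0.5, 0.60),
]

for label, gb, busy_share in users:
    # Assume (for illustration) that cost is driven by busy-hour traffic,
    # since off-peak capacity is essentially free once the network is built.
    cost_driving_gb = gb * busy_share
    print(f"{label:28s} tonnage={gb:5.1f}GB  cost-driving={cost_driving_gb:5.2f}GB")
```

The bulk downloader tops the tonnage league table, but it's the commuter who is actually loading the busy cells.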
(On a related topic - if you use the chart to assert that traffic should always be proportional to revenues, aren't you also just implicitly arguing that SMS prices should really be cut by 99.9% to fit your neat desired pattern?)
The fact is that the cost side of the equation for mobile broadband is very, very complex - which is why the second chart is misleading too. Yes, I absolutely agree that LTE can squeeze more bits per Hz per cell across the network, which makes it much more efficient than older WCDMA deployments - but surely, the baseline for most of today's MBB is using R5 or R6 or R7 HSPA? And yes, the flatter IP architecture should also be cheaper.
But then there's the question of whether we're talking about deploying LTE in 5, 10 or 20MHz channel widths, which spectrum band it's going to be used in, whether it can use the existing cell-site grid, whether it costs extra to have an overlay on top of the old 3G networks, and of course any incremental costs of devices, testing and so forth.
It's not much use having a chart which essentially says "all other things being equal....", when in the real world they're patently not equal. It's the same as arguing that hydrogen-powered cars are much more efficient than petrol, but conveniently forgetting about the infrastructure required to support it, or what to do with the rather large legacy installed base.
Not to mention, of course, that it again uses the mythical Gigabyte as the arbiter of all things to do with broadband economics. No mention, once again, of signalling, offload, spectrum, up/downlink and so forth. No consideration of any additional costs for supporting voice as well as data.
Plus, I'm pretty sure it's the same chart I've seen recycled through about 17 iterations, with little clarity on the original source, or how old it is. I think it pre-dates most of the discussion around small cells, SIPTO and various other innovations - and quite possibly also comes from before the finding that things like fast dormancy and RNC load are often limiting factors for smartphone-heavy networks.
If freight companies bought vehicles based on "tonnage", then a small container ship would cost more than a Boeing 747. There is very little rationale for telecom networks to be based on such a narrow metric either.
To me, the fact that these two charts are repeated almost as mantras points to this being another "tyranny of consensus". Obviously, tonnage measured in GB is an easy metric for the industry and end-users to understand. But that doesn't mean it's the right one.
Wednesday, October 20, 2010
Will telcos have a positive "balance of trade" in APIs?
I had an interesting discussion today, about mobile cloud applications and syncing.
In part, it covered my regularly-expressed disdain for social network aggregation, and the metaphor of the extended address book on mobile phones. I remain unconvinced that users really want to have their contact lists heavily integrated with Facebook, Skype, Twitter or whatever.
Nevertheless, it highlighted an interesting angle I wasn't previously aware of. I'd recognised that most of the major Internet players had exposed APIs so that developers can hook into them for various mashup purposes - but I hadn't realised how the terms and conditions worked.
In particular, I didn't realise that if you're a large-scale user of some APIs (Facebook, say), then you have to pay. So an operator or handset vendor wanting to do a complex enhanced-phonebook with imported photos & status updates, for millions of users, is not only competing with the standalone downloadable Facebook client, they may also be writing a cheque for the privilege. Ditto if they want to give an MNO-customised variant to their users out-of-the-box.
I've been saying for a while that the power of Apple, Google, Facebook et al was such that operators play Net Neutrality games with them at their peril ("No, how about *you* pay *us*?"). I hadn't realised that they - or some of them, at least; I don't have details, which will undoubtedly be confidential - were already throwing their weight about.
Now, I've also been talking and writing about operator-provided APIs for a long time as well, including through my work with Telco 2.0. Initiatives like the GSMA's OneAPI, as well as telco-specific services like BT Ribbit and many others in the mobile world, point the way towards operators selling access to messaging, billing, authentication, voice call control and numerous other features and functions.
In theory. In the real world, telcos' commercial sales of API access have been evolving at a glacially slow pace, hamstrung by painful back-end integration work and lots of design-by-committee delays. In the meantime, some supposedly valuable telco "assets", such as location, have now depreciated to being (essentially) worthless. I expect "identity" to be the next to be replicated and improved by Internet players.
So... the telcos' API revenue streams haven't yet materialised to a meaningful degree. But instead, they're starting to have to spend money on other companies' APIs in order to provide the services their customers actually want.
I wonder where we'll be in a few years' time, in terms of the "balance of trade" in APIs - will operators be "exporting" or "importing" more? In which direction will the net flow of cash be going? It will probably be difficult to measure, but it's certainly going to be an important analytical question to answer.
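If anyone does try to measure it, the calculation itself is trivial - a sketch with entirely invented categories and figures:

```python
# Hypothetical annual API cash flows for one operator (all names and
# numbers invented, purely to illustrate the "balance of trade" idea).
api_exports = {"messaging_api": 2_000_000, "billing_api": 1_500_000}      # sold
api_imports = {"social_network_api": 3_000_000, "mapping_api": 1_000_000}  # bought

balance = sum(api_exports.values()) - sum(api_imports.values())
print(f"Net API balance of trade: ${balance:,}")  # negative = net importer
```

The hard part, of course, is getting anyone to disclose the real numbers.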
Webinar on Holistic approach to Mobile Broadband Traffic Management *tomorrow*
A quick heads-up that I'm doing a webinar tomorrow morning (Thursday 21st) on holistic approaches to mobile broadband traffic management. I'll be covering various themes around Net Neutrality, the need for radio-, location- and device-awareness in mobile policy management, and the challenges of "silo" solutions such as video compression operating without understanding the user's and network's context.
I'll also be covering the role of what I'm terming a "Congestion API" as a goal to work towards - an eventual win/win/win for user, operator and content/app providers.
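For the avoidance of doubt, no such API exists today. But purely as a speculative sketch, a "Congestion API" response might look something like this - every field name here is my invention, not from any real specification:

```python
# Speculative sketch of a "Congestion API" response - field names and
# values are invented, not from any real specification.
import json

congestion_report = {
    "cell_id": "example-cell-1234",       # hypothetical cell identifier
    "timestamp": "2010-10-21T09:15:00Z",
    "radio_load_pct": 85,                 # radio-layer load, not just tonnage
    "signalling_load_pct": 60,            # often the real bottleneck
    "hints": {
        "defer_background_sync": True,    # apps: please wait for quiet hours
        "suggested_max_video_kbps": 250,  # content providers: throttle down
    },
}
print(json.dumps(congestion_report, indent=2))
```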
Sign-up details are here: http://bit.ly/9q9vLk
Disclosure: this webinar is being conducted in conjunction with (and sponsored by) a client of mine, CCPU
Monday, October 18, 2010
Upstream billing in two-sided telco models will face same challenges as normal user billing
I just read this post on the Telco 2.0 blog. While it's an interesting idea (to try to create a business model for free broadband, using apps as a means to create cable-TV style service bundles and sell lots of advertising), at first glance I can see multiple flaws in the theory and approach.
I'm not going to do a forensic dissection of it for now, because it highlighted a specific set of problems likely to be generic across all "upstream"-paid telco models: how to do fair and secure billing to the advertisers / content / app provider, especially where there's some form of volume-based charging involved.
Let's assume for a moment that non-neutrality of access is permitted, at least to specific upstream walled gardens, rather than the full Internet. So to use that blog post's idea, maybe the user has a set of apps from "useful" services such as Facebook, YouTube, Salesforce or whatever, rather than the full open Internet through a browser. Each app's access profile can be analysed and accurately modelled, based on the various new app-analytics frameworks evolving.
So, in theory, each app's owner could be charged for the data consumed within its application - eg YouTube gets billed $x per GB and so on. It's a bit like freephone or freepost services, paid for by the company, not the user.
Sounds like the Holy Grail for monetising broadband, no?
No. I'm unconvinced, on several levels.
Firstly, what are the protections for the upstream provider? Are they exposed to a potentially unlimited bill, for example if a virus or unscrupulous competitor racks up huge downloads? I'm not sure how this works with Freephone, but presumably it's a lot easier to track usage and spot fraud and abuse - if a robot keeps calling your number, your call-centre agents will spot it pretty fast. For freepost, I'm pretty sure that if you stuck the prepaid envelope to a brick or something else heavy, the recipient wouldn't get stung for a huge postage bill.
Next, how are errors and re-sends accounted for? Netflix is probably not going to be happy if a network outage or other glitch means resending large volumes of data and thus having to pay twice.
What are the appropriate metrics for billing, anyway? A simple cost per MB or GB of data downloaded is a spectacularly poor way of pricing data, especially on mobile, as it doesn't reflect the way that network costs build up. How are uploads priced compared to downloads? For mobile, how do you deal with small-volume / high-signalling apps? Do you charge apps that download during quiet hours the same amount as those that are oriented towards peak-hour commuters?
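As a sketch of why flat $/GB is so crude, imagine a rating function that at least acknowledged uplink, signalling and time-of-day. All the rates and multipliers below are invented for illustration, not real tariffs:

```python
# Illustrative per-app rating function - rates and multipliers are invented.
def rate_app_session(downlink_mb, uplink_mb, signalling_events, peak_hour):
    base_rate = 0.05                 # assumed $ per downlink MB
    uplink_multiplier = 2.0          # uplink capacity is scarcer
    signalling_fee = 0.002           # $ per signalling event
    peak_multiplier = 3.0 if peak_hour else 1.0
    charge = (downlink_mb * base_rate
              + uplink_mb * base_rate * uplink_multiplier
              + signalling_events * signalling_fee)
    return round(charge * peak_multiplier, 2)

# A chatty peak-hour app can cost more than a bulk overnight download:
print(rate_app_session(5, 5, 2000, peak_hour=True))    # 14.25
print(rate_app_session(100, 10, 50, peak_hour=False))  # 6.1
```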
Then there are future "gotchas". How are mashups dealt with? What happens when one app on a device discovers the other apps running concurrently on a PC or phone or set-top box, and routes some traffic through their exposed APIs? On the server side, is Facebook expected to pay for YouTube traffic displayed within its own app, without a way to "cascade" payments onward?
There are plenty more issues I could list.
The bottom line is that upstream charging is going to need just as much sophistication in terms of rating, realtime capabilities, anti-fraud, revenue assurance, policy and so forth, as we currently see in normal downstream billing. For some of these, even more sophistication will be needed as things like anti-fraud will need to be bi-directional.
Overall, this isn't going to be easy - and it's not obvious that the operators or billing vendors have yet gone far enough down this path in terms of ready solutions.
Tuesday, October 12, 2010
New Report: Zero chance that IMS RCS will become a massmarket service, but niches may be possible
I have just published a new research report on the many failings of the Rich Communications Suite (RCS), which is a proposed IMS-based service for enhanced mobile messaging, phonebooks and communications services. The industry effort is coordinated by the GSMA's RCS Initiative.
My belief is that RCS is not "fit for purpose" as a massmarket application on mobile devices. It is late, it is inflexible, and it has been designed with many flawed in-built assumptions. The report identifies at least 12 different reasons why it cannot and will not become a universal standard. I've written about RCS several times before, since its inception more than 2.5 years ago.
Although I originally intended it as an "epitaph" for a dead technology, I've tried to be open-minded and see if something might be salvaged. The report gives a few possible ways that RCS might be reincarnated.
I've been writing about mobile implementations of IMS for more than 4 years. In mid-2006, Disruptive Analysis published a report I authored, analysing the complete lack of useful standards defining what constituted an IMS-capable phone. I've subsequently talked about the failings of the MMtel standard for mobile VoIP in another report on VoIPo3G, and the recklessness of attempting to tie the LTE sports car to the IMS boat-anchor (or indeed, a dead parrot to its perch).
On the other hand, I've been broadly supportive of the GSMA's initiative for VoLTE, as it essentially does what I suggested in the 2007 VoIPo3G report - create a workable "bare bones" replacement for circuit voice over mobile IP. Although it doesn't preclude later adoption of some of the extra baggage that MMtel carries (video and so-called "multimedia", for example), it focuses on the here-and-now problem of making mobile VoIP phone calls, and interconnecting them.
If you look at why IMS has had moderate success in the fixed world, it's because it has been driven by direct PSTN-replacement, often with absolutely no additional features and gloss for the end-user. Yes, there can be more bells-and-whistles for corporate users, or people who want a fancy home screenphone. But it also works for seamless replacement of ordinary phone services for people who want to re-use their 1987-vintage handset plugged into a terminal adapter.
VoLTE also seems to be an attempt to "start simple" - essential, because there will be enough problems with optimising VoIP for mobile (coverage, battery, QoS, call setup, emergency calls etc), without layering on additional stuff that most customers won't want, initially at least. There will also, critically, be a ton of competition from other voice technologies, so speed and flexibility are critical.
Lastly, VoLTE can be deployed by individual LTE operators, without the need for all cellular providers (or handset manufacturers) to adopt it. VoLTE can interconnect quite easily with any other telco or Internet voice service, much as circuit voice, fixed VoIP or Skype can today.
RCS is another matter entirely. Rather than being a "bare-bones" way to migrate SMS (and, OK, MMS) to IP, in both network and handset, it attempts to redefine the messaging interface and phonebook UI at the same time. Rather than just getting SMS-over-mobile-IP to work as well as the original, it layers on additional complexities such as presence, IM and a reinvention of the handset's contacts list. It has been presented as enabling operators to create their own social network platforms, and interoperate amongst themselves.
All this extra functionality is intended to be a "core service" of future phones and networks, with an RCS software client functioning at the very heart of a handset's OS, especially on feature-phones. It is intended to ship (out of the box) in new handsets - although aftermarket RCS clients should also be available on Android and other devices.
Most of the initial thought seems to be an attempt to replicate a 2005-era MSN or Yahoo Messenger IM client. It certainly doesn't appear to have been invented for an era of downloadable smartphone apps, current-generation mobile browsers, mashups - or the harsh realities of a marketplace in which alternatives such as Facebook, BBM and various Google, Apple and Ovi services are already entrenched.
Much of the RCS Initiative effort is focused around interoperability. While this is very worthy, it is unfortunately demonstrating inter-operation with the wrong things: other operators' RCS platforms, rather than the hundreds of millions of people happily using other services already. There are some hooks into Facebook and related services, but these fall into the already-failed effort to put a clunky layer of social network aggregation on top of already-refined services.
The net result is that if an interoperable RCS is going to do something *better* than Facebook, it needs to be available to *everyone* and be adopted by all their friends. In reality, this means that all operators in a country will need to deploy it (roughly) together. And it means RCS functionality needs to be in all handsets - even those bought unsubsidised by prepay users.
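A toy calculation makes the network-effect problem obvious. The market shares and per-operator adoption rates below are invented, purely to show the shape of the maths:

```python
# Chance that *both* ends of a message can use RCS, under partial rollout.
# Market shares and adoption fractions are invented for illustration.
operators = [
    # (market share, fraction of that operator's base with working RCS)
    (0.40, 0.30),
    (0.35, 0.20),
    (0.25, 0.00),  # one operator hasn't deployed at all
]

penetration = sum(share * adoption for share, adoption in operators)
both_ends = penetration ** 2  # assumes sender and receiver are independent
print(f"RCS penetration: {penetration:.0%}; RCS-to-RCS chance: {both_ends:.1%}")
```

At 19% penetration, fewer than 1 in 25 messages would be RCS-to-RCS. The service has to be everywhere, or it is effectively nowhere.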
As a result, RCS is yet another of a long series of "coalitions of the losers". It has not been designed with either web mashups or 3rd-party developers in mind. (At the 2010 RCS developer competition, there were "almost 40" entries). It has not been designed with the idea that an individual operator could launch a blockbuster service across all networks. It comes from an era of mobile central-planning, where a roomful of people could determine a single universal architecture and service, despite a lack of awareness of users' actual behavioural needs.
Smartphones, and the emergence of Apple, Google, Nokia, RIM, Facebook and others as applications powerhouses, have now guaranteed that there will never again be another single, ubiquitous mobile service controlled solely by the operators. That ship has sailed, MMS being the last vessel limping from the port.
Let me ram that point home. There are now more users of open and flexible 3G smartphones than there were total cellular subscribers when MMS was first invented. Almost all of them know what a good IM or social network service looks like - and which ones their friends are on.
The report covers all the topics raised here in greater depth, and also looks in more detail at some of the other "minor" gotchas around RCS, such as its impact on radio network signalling and handset battery life. Or its complete lack of obvious business model. Or any enterprise focus.
As I mentioned above, the report does contain some suggestions about possible "salvage" use cases for RCS - perhaps single-operator niche services, such as a music or movie fan service.
Contents, pricing and online purchase of the report are available here, or contact me directly via information AT disruptive-analysis DOT com.
(Note: I'm trying out a new online payment / download service, please drop me a message if there are any problems and I can send the report by conventional invoice and email)
Thursday, October 07, 2010
A quick Net Neutrality paradox...
Let's say, hypothetically, that mobile Internet connections are allowed to be discriminated between. So, for example, a major operator like AT&T or Orange or Vodafone can charge "upstream" providers, such as an online gaming firm, for higher-quality delivery of their services to mobile users, over and above, say, Facebook traffic.
[Note that this is for differential performance of *Internet* delivered services, within the context of overall "Internet Access", not separate non-Internet operator-hosted applications]
The problem is that WoW or EA or whoever will want a guarantee, not just a vague promise of better-than-average service. Would you pay for a business class airfare if you only knew you had an unspecified *probability* of a larger seat and better food?
In other words, they'll want an SLA, a mechanism for recourse if they don't get what they've paid for, and a means of monitoring/reporting that better quality was delivered as promised.
But the gating factor for mobile performance often isn't things like latency & congestion.... it's basic coverage. And for the operator, it's especially difficult to guarantee performance if the user is on the edge of the cell, or if mobility means that lots of high-priority people suddenly cluster together (eg a gaming convention).
Realistically, the only way to give reasonable, statistical *guarantees* and SLAs for mobile data QoS is to deploy lots of femtocells and/or use WiFi offload or other cast-iron approaches to coverage (a ton of DAS, or repeaters). And then actually do measurements and tests on what the indoor coverage is really like. That means rather than "drive tests", they ought to be doing "walk tests" inside buildings.
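Even the verification side is non-trivial. Here's a minimal sketch of checking walk-test samples against an SLA - the latency threshold and target percentile are invented for illustration:

```python
# Minimal SLA check against indoor walk-test latency samples.
# Threshold and target percentile are invented for illustration.
def sla_met(latency_samples_ms, threshold_ms=100, target_fraction=0.95):
    """True if at least target_fraction of samples beat threshold_ms."""
    within = sum(1 for s in latency_samples_ms if s <= threshold_ms)
    return within / len(latency_samples_ms) >= target_fraction

# Coverage holes show up as a few terrible samples at the cell edge:
samples = [40, 55, 60, 80, 90, 95, 120, 350, 45, 70]
print(sla_met(samples))  # False - 2 of 10 samples miss the 100ms threshold
```

A couple of cell-edge outliers are enough to blow a 95th-percentile guarantee, which is exactly why coverage, not average congestion, is the gating factor.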
So.... lots of femtos or WiFi. Which, almost certainly, will need to be (partly) run over other telcos' networks. And connected via... the Internet.
In other words, any operator hoping to build a non-neutral mobile Internet service had better hope that either:
- the fixed-access Internet *is* neutral, or..
- ... that their CFO is happy to pay for lots of QoS/prioritisation from the fixed broadband guys themselves, for guarantees for the femto traffic (and signalling).
Wednesday, October 06, 2010
Any smartphone you like, as long as it's Nokia
I'm in Georgia (the country, not the state) at the moment. Escaping the continual rain, I wandered into a store of mobile operator Beeline (owned by Russia's VimpelCom) this morning. This is probably a flagship store, on the main shopping street called Rustaveli in the capital Tbilisi.
There was a large glass cabinet on one side, with maybe 40-60 phones on display. All switched off, and with unsubsidised retail price stickers.
About 60% of the phones were Nokias - basically the complete range from low-end handsets right up to the N900 and all the E-series smartphones, although I didn't see the very newest announcements like the N8. Most were in the range of 600-1000 Lari (about $330-550).
There were also a fair few Samsungs, a few SonyEricssons, and a couple of no-name $20 ultra-basic own-brand devices. One of the Samsungs might have been a Symbian device, but there were no Androids I could see. No LGs, no HTCs, no BlackBerries.
Basically, if you want a smartphone, it's going to be Symbian-based. (However, there were also a number of dongles on display, and I've seen quite a few people with PCs and modems around the country).
(Oh, and there was also an iPhone 3GS, lurking without fanfare in the middle of the S-Es, at a cost of 1700 Lari ($940 I guess excluding tax). For reference, this is in a city where the average cost of a one-bedroom apartment rental is about $200 a month, so iPhones aren't exactly the aspirational device of students or normal families)
Now for the real kicker.... Beeline only operates a 2G network here. And all its consumer tariffs are prepaid, with no default access to data. You can get WAP or full Internet provisioned - if you're prepared to mess around with APN settings. To be fair, I have seen one person using WAP on a low-end device in the past few days, so it's not a completely voice+SMS centric country.
Another operator, Magti, does have BlackBerry devices prominent on its website, although its store nearby also seemed to major on dongles (CDMA-450 EVDO) and even fixed-wireless deskphones. I haven't been to a Geocell store yet, although that has a 3G UMTS network - but its website is still firmly in voice/SMS territory.
One takeaway from this was that the Beeline dongles only cost 49 Lari - less than $30. So if you have a family, it's a lot cheaper to buy a low-end PC and a dongle, using prepaid data, than it is to get 3 or 4 smartphones - especially given device life-expectancy.
In fact, if you live in a developing country, probably the best bet for a family is PC+dongle (maybe $300-400) and 3x $20 basic phones. OK, so you don't get web access while mobile, but for a relatively immature Internet marketplace, that's really an aspirational nice-to-have for several years yet, for all except a small handful of the Tbilisi elite.
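For what it's worth, the arithmetic of that comparison, made explicit. The PC+dongle and basic-phone prices are the ones quoted above; the $300 smartphone figure is an assumed illustrative price, not taken from the store's stickers.

```python
# The cost comparison from the text. PC+dongle and basic-phone prices
# are quoted above; the smartphone price is an illustrative assumption.

PC_PLUS_DONGLE = 350   # midpoint of the "$300-400" estimate above
BASIC_PHONE = 20       # no-name ultra-basic handset
SMARTPHONE = 300       # assumption: plausible mid-range smartphone

family_size = 3
shared_pc_option = PC_PLUS_DONGLE + family_size * BASIC_PHONE
smartphone_option = family_size * SMARTPHONE

print(f"PC + dongle + {family_size} basic phones: ${shared_pc_option}")
print(f"{family_size} smartphones: ${smartphone_option}")
# -> $410 vs $900: the shared-PC setup costs less than half as much
```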
The other takeaway is that Nokias retain popularity outside the more visible North American and Western European markets. Certainly in the other cities I've visited away from the capital, it's still a solidly Nokia-centric country. I suspect that's partly because Symbian smartphones tend to be much better at "offline" uses (eg as cameras) than their peers. Certainly, I couldn't imagine an iPhone or Android being much use without an always-available 3G data plan.