
Wednesday, November 25, 2020

Interoperability is often good – but should not be mandated

Note: this post was first published via my LinkedIn Newsletter. Please subscribe (here) & also join the comment & discussion thread on LI

Context: I'm going to be spending more time on telecom/tech policy & geopolitics over the next few months, spanning UK, US, European and global issues. I'll be sharing opinions & analysis on the politics of 5G & Wi-Fi, spectrum, broadband plans, supply-chain diversity & competition.

Recently, I've seen more calls for governments to demand mandatory interoperability between technology systems (or between vendors) as a regulatory tool. I think this would be a mistake - although incentivising interop can sometimes be a good move for various reasons. This is a fairly long post to explain my thinking, with particular reference to Open RAN and messaging.

Background & history

The telecoms industry has thrived on interoperability. Phone calls work from anywhere to anywhere, while handsets and other devices are tested & certified for proper functioning on standardised networks. Famously, interoperability between different “islands” of SMS led to the creation of a huge market for mobile data services, although that didn't happen overnight in many countries.

Much the same is true in the IT world as well, with everything from email standards to USB connections and Wi-Fi certification proving the point. The web and open APIs make it easier for cloud applications to work together harmoniously.


But not everything valuable is interoperable, and interoperability isn't the only approach. Proprietary and vertically-integrated solutions remain important too.

Many social media and communications applications have very limited touch-points with each other. The largest 4G/5G equipment companies don’t allow operator customers to mix-and-match components in their radio systems. Many IT systems remain closed, without public APIs. Consumers can’t choose to subscribe to network connectivity from MNO A, but telephony & SMS from ISP B, and exclusive content belonging to cable company C.

This isn't just a telecom or IT thing. It’s difficult to get different industrial automation systems to work together. An airline can’t buy an airframe from Boeing, but insist that it has avionics from Airbus. The same is true for cars' sub-systems and software.

Tight coupling or vertical integration between different subsystems can enable better overall efficiency, or more fluid consumer experience - but at the cost of creating "islands". Sometimes that's a problem, but sometimes it's actually an advantage.

Well-known examples of interoperability in a narrow market subset can obscure broader use of proprietary systems in a wider domain. Most voice-related applications, beyond traditional "phone calls", do not interoperate by default. You could probably connect a podcast platform to a karaoke app, a home voice assistant and a critical-communications push-to-talk system... but why would you? (This is one reason why I always take care never to treat "voice" and "telephony" as synonymous).

Hybrid, competitive markets are optimal

So there is value in interoperable systems, and also in proprietary alternatives and niches. Some sectors gravitate towards openness, such as federation between different email systems. Others may create de-facto proprietary approaches - which might risk harmful monopolies, or which may be transferred to become open standards (for instance, Adobe's PDF document format).

And even if something is based on theoretically interoperable underpinnings, it might still not interoperate in practice. Most enterprise Private 4G and 5G networks are not connected to public mobile networks, even though they use the same standards.


Interoperability can be both a positive and negative for security. Open and published interfaces can be scrutinised for vulnerabilities, and third-parties can test anything that can be attached to something else. Yet closed systems have fewer entry points – the “attack surface” may be smaller. Having a private technology for a specific purpose – from a military communications infrastructure to a multiplayer gaming network – may make commercial or strategic sense.

In many areas of technology, we see a natural pendulum swing between open and proprietary approaches - from open flexibility to closed-system optimisation, and back again. Often there are multiple layers of technology, and the pendulum swings with a different cadence for each. The software-isation of many hardware products means a given system might combine open and proprietary elements at different layers at the same time.

 Consider this (incomplete and sometimes overlapping) set of scenarios for interoperability:

  • Between products: A device needs to be able to connect to a network, using the right radio frequencies and protocols. Or an electrical plug needs to fit into a standardised socket.
  • Within products or solutions (between components): A product or service can be considered to be just a collection of sub-systems. A computer might be able to support different suppliers’ memory chips or disks, using the same sockets. A browser could support multiple ad-blockers. A telco’s virtualised network could support different vendors for certain functions.
  • Application-to-application / service-to-service: An application can link to, integrate or federate with another - for instance, a reader could share this article on their Twitter feed, a mobile user can roam onto another network, or a bank can share data access with an accounting tool.
  • Data portability: Data formats can be common from one system to another, so users can own and move their "state" data and history. This could range from porting a phone number, to moving uploaded photos from one social platform to another.

There’s also a large and diverse industry dedicated to gluing together things which are not directly interoperable – and acting as important boundaries to enforce security, charging or other functions. Session Border Controllers link different voice systems, with transcoders to translate between different codecs. Gateways link Wi-Fi or Bluetooth IoT devices to fixed or wireless broadband backhaul. Connectors enable different software platforms to work together. Mapping functions will eventually allow 5G network slicing to work across core, transport and radio domains, abstracting the complexities at the boundaries.
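To make the idea concrete, here's a minimal, entirely hypothetical sketch of what such a connector does at its core - translating one system's message format into another's, at a boundary where policy can also be enforced. All the names and fields below are invented for illustration.

```python
# Hypothetical illustration of a "connector" at the boundary between two
# non-interoperable messaging systems. Field names are invented; real
# gateways also handle authentication, charging, transcoding and policy.

def system_a_to_system_b(msg_a: dict) -> dict:
    """Translate a System A message into System B's expected schema."""
    return {
        "recipient": msg_a["to_handle"],        # screen-name identity on System B
        "body": msg_a["text"],
        "expires_at": msg_a.get("ttl_seconds"),  # may be ignored if B has no ephemerality
    }

class BoundaryGateway:
    """Sits between the two systems, enforcing policy before forwarding."""
    def __init__(self, allowed_senders):
        self.allowed_senders = set(allowed_senders)

    def forward(self, msg_a: dict) -> dict | None:
        if msg_a["from"] not in self.allowed_senders:
            return None   # the boundary is also a security/charging control point
        return system_a_to_system_b(msg_a)

gw = BoundaryGateway(allowed_senders={"+441234567890"})
print(gw.forward({"from": "+441234567890", "to_handle": "@dean",
                  "text": "hi", "ttl_seconds": 60}))
```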

Added to this is the entire sphere of systems integration – the practice of connecting disparate systems and components together, to create solutions. While interoperability helps SIs in some ways, it also commoditises some of their business.

Coexistence vs. interoperation

Yet another option for non-interoperable systems is rules for how they can coexist, without damaging each other’s operation. This is seen in unlicensed or shared wireless spectrum bands, to avoid “tragedies of the commons” where interference would jam all the disparate systems. Even licensed bands can be "technology neutral".

Analogous approaches enable the safe coexistence of different types of road users on the same highway - or in the voice/video arena, technologies such as WebRTC which embed "codec negotiation" procedures into the standards.
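As a much-simplified illustration of the codec-negotiation idea - real WebRTC/SIP negotiation uses the SDP offer/answer procedure and covers far more than this sketch - two endpoints can interwork as long as their codec lists overlap:

```python
# Simplified sketch of offer/answer codec negotiation, loosely inspired by the
# SDP offer/answer model used in WebRTC and SIP. Real negotiation also covers
# payload types, bitrates, FEC and more; this just shows the core idea.

def negotiate(offered: list[str], supported: list[str]) -> str | None:
    """Return the first codec in the offerer's preference order that both sides support."""
    supported_set = set(supported)
    for codec in offered:
        if codec in supported_set:
            return codec
    return None   # no common codec: the endpoints coexist but cannot interwork directly

browser = ["opus", "G.722", "PCMU"]
legacy_phone = ["PCMU", "PCMA"]
print(negotiate(browser, legacy_phone))   # -> "PCMU"
```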

Arguably, improving software techniques, automation, containerisation and AI will make such interworking and coexistence approaches even easier in future. Such kludginess might not please engineering purists who value “elegance”, but that’s not the way the world works – and certainly shouldn’t be how it’s regulated.

In a healthy and competitive market, customers should be able to choose between open and closed options, understanding the various trade-offs involved, yet be protected from abusive anti-competitive power.

A great example of consumer gains and "generativity" in innovation is that of the Internet itself, which works alongside walled-garden, telco or private-network alternatives to access content and applications.

Customers can have the best of both worlds - accelerated, because of the competitive tensions involved. The only risk is that of monopolies or oligopolies, which requires oversight.

Where does government & regulatory policy fit in this?

This highlights an important and central point: the role of government, and its attitude to technology standards, interoperability and openness. This topic is exemplified by various recent initiatives, ranging from enthusiasm around Open RAN for 5G in the US, UK and elsewhere, to the EU’s growing attempts to force Internet platform businesses to interoperate and enable portability of data or content, as part of its Digital Services Act.

My view is that governments should, in general, let technology markets, vendors and suppliers make their own choices.

It is reasonable that governments often want to frame regulation in ways to protect citizens from monopolists, or risks of harm such as cybersecurity. In general, competition rules are developed across industries, without specific rules about products, unless there is unfair vertical integration and cross-subsidy.

Governments can certainly choose to adopt or even incentivise interoperability for various reasons – but they should not enshrine it in laws as mandatory. If you're a believer in interventionist policies, then incentivising market changes that favour national champions, foster inward investment and increase opportunities can make sense - although others will clearly differ.

(Personally, I think major tranches of intervention and state-aid should only apply to game-changers with huge investment needs - so perhaps for carbon capture technology, or hydrogen-powered aviation).

Open RAN may be incentivised, but should not be mandated

A particular area of focus by many in telecoms is around open radio networks. The O-RAN Alliance and the TIP OpenRAN project are at the forefront, with many genuinely impressive innovations and evolutions occurring. Rakuten's deployment is proving to be a beacon - at least for greenfield networks - while others such as Vodafone are using this architectural philosophy for rural coverage improvements.

Governments are increasingly involved as well - seeing a possible way to meet voters' desires for better/cheaper coverage, while also offsetting perceived risks from concentrations of power in a few large integrated vendors. This latter issue has been pushed further into the limelight by Huawei's fall from favour in a number of countries, which are then left with a smaller pool of alternative providers - Nokia, Ericsson and, in some cases, Samsung, NEC or niche providers.

This combination of factors then gets further conflated with industrial policy goals. For instance, if a country is good at creating software but not at manufacturing radios, then Open RAN is an opportunity that might merit some form of R&D stimulus, government-funded testbeds and so on.

So I can see some arguments for incentives - but I would be very wary of a step to enshrine any specific interop requirements into law (or rules for licenses), or for large-scale subsidies or plans for government-run national infrastructure. The world has largely moved to "tech neutral" approaches in areas such as spectrum awards. In the past, governments would mandate certain technologies for certain bands - but that is now generally frowned upon.

No, message apps should not interoperate

Another classic example of undesirable "forced interoperability" is in messaging applications. I've often heard many in the telecoms industry assert that it would be much better if WhatsApp, iMessage, Telegram, Snap - and of course the mobile industry's own useless RCS standard - could interconnect. Recently, some government and lobbying groups have suggested much the same, especially in Brussels.

Yet this would instantly hobble the best and most unique features of each - how would ephemeral (disappearing) messages work on systems that keep them stored perpetually? How would an encrypted platform interoperate with a non-encrypted platform? How could an invite/accept contact system interwork with a permissive any-to-any platform? How would a phone-number identity system work with a screen-name one?

... and that's before the real unintended consequences kick in, when people realise that their LinkedIn messages now interoperate with Tinder, corporate Slack and telemedicine messaging functions.
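A crude way to see the problem: mandated any-to-any interop can only safely expose whatever features every platform has in common, and that intersection is tiny. A toy sketch, with invented feature lists:

```python
# Toy illustration of the "lowest common denominator" problem with mandated
# messaging interop. Feature sets are invented purely for illustration.

platforms = {
    "AppA": {"text", "e2e_encryption", "ephemeral_messages", "phone_number_identity"},
    "AppB": {"text", "stickers", "screen_name_identity", "stored_history"},
    "AppC": {"text", "e2e_encryption", "invite_accept_contacts"},
}

# Features usable across every platform = the intersection of their feature sets
common = set.intersection(*platforms.values())
print(common)   # -> {'text'}: everything distinctive gets stripped away
```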

That doesn't mean there's never a reason to interoperate between message systems. In particular, if there's an acquisition it can be useful and important - imagine if Zoom and Slack merged, for instance. Or a gaming platform's messaging might want users to send invitations on social media. I could see some circumstances (for business) where it might be helpful to link Twitter and LinkedIn - but also others where it would be a disaster (I'm looking at you, Sales Navigator spamming tools).

So again - interoperability should be an option. Not a default. And in this case, I see zero reason for governments to incentivise it.

Conclusion

Interoperability between technology solutions or sub-systems should be possible - but it should not be assumed as a default, nor legislated in areas with high levels of innovation. It risks creating lowest-common-denominator outcomes which do not align with users' needs or behaviours. Vertical integration often brings benefits, and as long as the upsides and downsides are transparent, users can make informed trade-offs and choices.

Lock-in effects can occur in both interoperable and proprietary systems. I'll be writing more about the concept of path dependence in future.

Regulating or mandating interoperability risks various harms - not just a reduction in innovation and differentiation, but also unexpected and unintended consequences. Many cite the European standardisation of GSM 2G/3G mobile networks as a triumph - yet the US, Korea, Japan, China and others allowed a mix of GSM, CDMA and local oddities such as iDen, WiBro and PHS. No prizes for guessing which parts of the world now lead in 5G, although correlation doesn't necessarily imply causation here.

There's also a big risk from setting precedents that could lead to unintended consequences. Perhaps car manufacturers would be next in line to be forced to have open interfaces for all the electronic systems, impacting many automakers' potential revenues. Politicians need to think more broadly. As a general rule, if someone uses the obsolete term "digital" in the context of interop, they're not thinking much at all.

I've written before about the possible risks to telcos from the very "platform neutrality" concept that many have campaigned for. Do they imagine regulators wouldn't notice that many have their own ambitions to be platform providers too?

In my view, an ideal market is made up of a competitive mix of interoperable and proprietary options. As long as abuses are policed effectively, customers should be able to make their own trade-offs - and their own mistakes.



As always - please comment and discuss this. I'll participate in the discussions as far as possible. If you've found this thought-provoking, please like and share on LinkedIn, Twitter and beyond. And get in touch if I can help you with internal advisory work, or external communications or speaking / keynote needs.

Note: this post was first published via my LinkedIn Newsletter. Please subscribe (here) & also join the comment & discussion thread on LI

#5G #openran #regulation #telecom #mobile #interoperability #competition #messaging #voice #innovation


Tuesday, December 19, 2017

Emerging risks to telcos from "Cuckoo Platforms"

Summary
  • Telcos want to be platform players at varying points in their network architecture and service offerings. 
  • But successful platforms generally need "anchor tenants" to gain scale.
  • The problem comes when anchor-tenants are themselves other 3rd-party platforms.
  • There is a risk of platforms-on-platforms acting as "cuckoos", pushing the native owner's eggs out of the nest.
  • Telcos face a risk from major cloud platforms overwhelming their MEC edge-compute platforms.
  • ... and a risk from major AI-based commerce platforms overwhelming their messaging, voice and IoT platforms.
  • Other future platforms also face similar challenges.
  • To succeed as platform providers, telecom operators need to have their own anchor-type services, and to have a well-designed approach to combating the risk of parasitic cuckoo platforms.

Background: the Internet overcame its broadband host

The cuckoo bird is infamous for laying its eggs in other birds' nests. The young cuckoos grow much faster than the rightful occupants, forcing the other chicks out - if they haven't already physically knocked the other eggs overboard. (See "brood parasitism", here).


Analogies exist quite widely in technology - a faster-growing "tenant" sometimes pushes out the offspring of the host. Arguably Microsoft's original Windows OS was an early "cuckoo platform" on top of IBM's PC, removing much of IBM's opportunity for selling additional software. 

In many ways, Internet access itself has outgrown its own host: telco-provided connectivity. Originally, fixed broadband (and the first iterations of 3G mobile broadband) were supposed to support a wide variety of telco-supplied services. Various "service delivery platforms" were conceived, including IMS, yet apart from ordinary operator telephony/VoIP and some IPTV, very little emerged as saleable services.

Instead, Internet access - which started using dial-up modems and normal phone lines before ADSL and cable and 3G/4G were deployed - has been the interloping bird which has thrived in the broadband nest instead of telcos' own services. It's interesting to go back and look at the 2000-era projections for walled-garden, non-Internet services.


The need for an anchor tenant

The problem is that everyone wants to be a platform player. And when you're building and scaling a new potential platform, it's really hard to turn down a large and influential "anchor tenant", even if you worry it might ultimately turn out to be a Trojan Horse (apologies for the mixed metaphor). You need the scale, the validation, and the draw for other developers and partners.

This is why the most successful platforms are always the ones which have one of their own products as the key user. It reduces the cannibalisation risk. Office is the anchor tenant on Windows. iTunes, iMessage and the camera app are anchors on iOS. Amazon.com is the anchor tenant for AWS.

Unfortunately, the telecoms industry looks like it will have to learn a(nother) tough lesson or two about "cuckoo platforms".


MEC is a tempting nest

The more I look at Multi-Access Edge Computing (MEC), the more I see the risks of a questionable platform strategy. Some people I met at the Small Cells event, in the US a couple of weeks ago, genuinely believe it can allow telcos to become some sort of distributed competitor to Amazon AWS. They see MEC as a general-purpose edge cloud for mainstream app and IoT developers, especially those needing low-latency applications. 

I think this is delusional - firstly because no developer will want to deal with 800 worldwide operators with individual edge-cloud services and pricing, secondly because this issue of latency is overstated & oversimplified (see my recent post, link), and thirdly because a lot of edge-computing tasks will actually be designed to reduce the use of the network and reliance/spend on network operators.

But also, this "MEC as quasi-Amazon" strategy will fail mostly because the edge/distributed version of Amazon will be Amazon. The recent announcement by Nokia that it will be implementing AWS Greengrass in its MEC servers is a perfect example (link). I suspect that other MEC operators and vendors will end up acting as "nests" for Azure, IBM Bluemix and various other public cloud providers.

Apologies for the awful pun, but these "cloud-cuckoos" will use the ready-made servers at the telco edge to house their young distributed-computing services, especially for IoT - if the wholesale price is right. They will also build their own sites in other "deeper" network locations (link). 

In other words, telcos' MEC deployments are going to help the cloud providers become even larger. They may get a certain revenue stream from their tenancy, but this will likely be at the cost of further entrenching the major players overall. The prices paid by an Amazon-scale provider for MEC hosting are likely to be far lower than the prices that individual "retail" developers might pay.

(The real opportunity for MEC, in my view, lies in hosting the internal network-centric applications of the operators themselves, probably linked to NFV. Think distributed EPCs, security gateways, CDN nodes and so on. Basically, stuff that lives in the network already, but is more flexible/responsive if located at the edge rather than a big data centre).


End-running Messaging-as-a-Platform (MaaP)

Another example of platform-on-platform cannibalisation is around the concept of "messaging as a platform", MaaP. Notwithstanding WeChat's amazing success in China, my sense is that it's being vastly over-hyped as a potential channel for marketing and customer interaction. 

I just don't see the majority of people in other markets forgoing the web or optimised native apps, and using WhatsApp or iMessage or SnapChat or SMS as the centrepiece of their future purchases or "engagement" (ugh) with companies and A2P functions. But where they do decide to use messaging apps for B2C reasons, the chatbots they interact with will not be MaaP-dedicated or MaaP-exclusive.

These chatbots will themselves be general "conversational platforms" that work across multiple channels, not just messaging, with voice as well as text, and with a huge AI-based back-end infrastructure and ongoing research/deployment effort. They'll work in messaging apps, browsers, smart speakers, wearables, cars, and via general APIs for embedding in apps and all sorts of other contexts.
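To illustrate the architecture (a hypothetical sketch, not any particular vendor's API): the conversational "brain" is shared, and each messaging app, smart speaker or browser is just another thin channel adapter bolted onto it.

```python
# Hypothetical sketch of a channel-agnostic conversational platform.
# The "brain" is shared; RCS, WhatsApp, a smart speaker or a web widget
# would each be a thin adapter. All names are invented for illustration.

class ConversationEngine:
    def reply(self, user_id: str, utterance: str) -> str:
        # Real platforms call NLU/AI back-ends here; this is a stub.
        if "boarding pass" in utterance.lower():
            return "Here is your boarding pass."
        return "Sorry, I didn't understand that."

class ChannelAdapter:
    """Base class: each channel only does transport and formatting."""
    def __init__(self, engine: ConversationEngine):
        self.engine = engine

class MessagingAdapter(ChannelAdapter):
    def on_message(self, user_id: str, text: str) -> str:
        return self.engine.reply(user_id, text)

class VoiceAdapter(ChannelAdapter):
    def on_speech(self, user_id: str, transcript: str) -> str:
        return self.engine.reply(user_id, transcript)   # then passed to text-to-speech

engine = ConversationEngine()
print(MessagingAdapter(engine).on_message("u1", "Where is my boarding pass?"))
print(VoiceAdapter(engine).on_speech("u1", "boarding pass please"))
```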

Top of the list of conversational platforms are likely to be Google Assistant, Amazon Alexa, Apple Siri, Microsoft Cortana and Facebook M, plus probably other emergent ones from the Internet realm.


MaaP is "just another channel" for broad conversational/commerce platforms

In other words, some messaging apps might theoretically become "platforms", but the anchor tenants will be "wholesale" conversational platforms, not individual brands or developers. In some cases they will again be in-house assistants (iMessage + Siri, or Google Allo + Assistant for instance). In other cases, they may be 3rd-party bot ecosystems - we already see Amazon Alexa integrated into numerous other devices.

Now consider what telcos are doing around MaaP. As well as extending their existing SMS business towards A2P (application-to-person), they have also allowed third-parties like Twilio to absorb much of the added value as CPaaS providers. And when it comes to RCS*, which has an explicit MaaP strategy, they have welcomed Google as a key enabler on Android, despite its obvious desire to use it mainly as a free iMessage rival. (*Obviously, I'm not a believer in RCS succeeding for many other reasons as well, but let's leave that aside for this argument).

What the GSMA seems to have also missed is that Google isn't really interested in RCS MaaP per se - it simply wants as many channels as possible for its Assistant, and its DialogFlow developer toolkit. To be fair, Google announced Assistant, and acquired API.AI (DialogFlow's original source), after it acquired Jibe. It has moved from mobile-first to AI-first since September 2015.

The Google conversational interface is not going to be exclusive to RCS, or especially optimised for it. (I asked the DialogFlow keynote speaker about this at last week's AI World conference in Boston, and it was pretty clear that it wasn't exactly top-of-mind. Or even bottom-of-mind). Google's conversational platform will be native in Android, in other messaging apps like Allo, in Chrome, in Google Home, and presumably in 1000 other outlets.

From an RCS MaaP perspective, it's a huge cuckoo that will be more important than the Jibe platform. There is no telco "anchor tenant" for RCS-MaaP as far as I can tell - I haven't even seen large deployments of MNOs' own customer-care apps using it. If I were an airline's or a retailer's customer experience manager, and I was looking beyond my own Android & iOS apps for message-based interactions, I wouldn't be looking at creating an RCS chatbot. I'd be creating an Assistant chatbot, plus one for Alexa and maybe Siri.


Can you cuckoo-proof a platform?

Apple, incidentally, has a different strategy. It tends to view its own services as integrated parts of a holistic experience. It tries to make its various platforms cuckoo-proof, especially where it doesn't have an anchor tenant app. This is a major reason for the AppStore policies being so restrictive - it doesn't want apps to be mini-platforms in their own right, especially around transactions. Currently, Google and Amazon are fighting their own mutual anti-cuckoo war over YouTube on Fire TV, and sales of Google Home on Amazon.com (link). Amazon and Apple are also mutually wary.

It's worth noting that telcos are sometimes pretty good at cuckoo-deterrence too. In theory, wholesale mobile networks could have been a platform for all manner of disruptive interlopers, but in reality, MVNO deals have been carefully chosen to avoid commoditisation. A similar reticence exists around eSIM and remote SIM provisioning - probably wisely, given the various platform-on-platform concepts for network arbitrage that have been suggested.


Conclusions

In my view, both MEC and (irrespective of its many other failings) RCS are susceptible to cuckoo platforms. I also wonder if various telco-run IoT initiatives, and potentially network-slicing, will become platforms for other platforms in future too.

One of the key factors here is the rush to "platformisation". Platforms only succeed when they evolve out of already-successful products, which can become in-house anchor tenants. Amazon's marketplace platform grew on the back of its own book and other retail sales. AWS's success grew on the back of Amazon using its own APIs and cloud computing.

MEC needs to succeed on the basis of telcos' own use of their edge-computing resources - which don't currently exist in a meaningful way, partly because NFV has been slower than expected. MaaP needs telcos' own messaging services and use-cases to be successful before it should look at external developers. With RCS, that's not going to happen.

Network-slicing needs to have telcos' own slices in place, before pitching to car manufacturers (or Internet players, again). IoT is the same too. Otherwise, expect even more telco eggs to be pushed out of the nest, as they help to foster other birds' offspring.

Tuesday, August 22, 2017

Blockchain for telecoms and networks: the emergence of ICOs & token-based platforms

There's a new trend I'm currently seeing emerge: ICOs (Initial Coin Offerings) for network/Internet-related businesses and communities. These use blockchain-based "tokens" (or coins) as a way to build decentralised marketplaces, for Internet connectivity or other communications capabilities like phone calls. Most have visions for long-term disruption of existing models, although they tend to start from more humble niches.

ICOs both establish a "currency" for these future markets, and provide funding for organisations responsible for their creation and maintenance. At least five network-related ICOs have been announced already, and more seem likely to follow in due course. (Disclosure: I'm an advisor to one of these five - more details below).

Note: If you've found this post through a link from a mainstream ICO/Bitcoin site or link, a quick introduction: I'm primarily a mobile and telecoms analyst. I study and advise on technology and business-model trends relating to network evolution and communications applications. I cover areas like 5G, IoT-oriented networks, voice & video communications, regulatory policy, the future role of telecom operators, and the impact of "futures" innovations like AI / ML, blockchain and drones on telecoms. Most of my clients are telcos or network equipment/software vendors. I'm not a fintech or blockchain generalist.

Note 2: I am also not an investment advisor of any sort. I'm not making recommendations here.


I've been covering the role of blockchains and distributed ledgers in telecoms and networks for well over a year now. I've spoken at events run by TMForum, IIT, Comptel and others about the telecom-sector use-cases (and complexities), and ran a recent public workshop in London alongside Caroline Gabriel (link). I recently participated in a webinar for Juniper Networks (link) and have a forthcoming white-paper in preparation for Juniper as well.

My general stance is "pragmatic optimism": Blockchain technology has many possible touch-points with the telecoms industry, from data-integrity management to back-office systems to billing - but maturity will take time. Some of the utopian "it'll change the world" and "telcos are obsolete" rhetoric is overblown. Distributed ledgers will have many uses and opportunities in telecoms/networking - but are unlikely to overturn or radically-disrupt industry structures, at least on a 5-10 year view.


Most of the uses I've seen discussed until recently have been around private (permissioned) blockchains, intended to improve processes and security within or between telcos and their suppliers. Another set have been around new services/capabilities to be delivered by telcos - for example, using smart contracts to enforce SLAs (service-level agreements), or for identity-management in IoT networks.

The ICO trend is different - this is about public blockchain-based functions that anyone can participate in - hence the "offering". The idea is to create common, distributed, dynamic ways of storing (and pricing) network-related value - especially for Internet access, but also voice communications and potentially other capabilities. 

Actually, telecoms is lagging here: there's been a much broader rush towards ICOs across many sectors over the past year. This website (link) lists hundreds, while this article from the Economist is a useful intro (link). It should also be acknowledged that they have attracted not-always-favourable attention from financial regulators, as there is limited official oversight and most are launched as "crowdsales" on the back of a white paper and some PR, rather than a regulated prospectus and well-monitored issuance on a specific stock exchange. There are some questionable-quality ICOs and a few dubious individuals involved, it seems. Nevertheless, they are a popular way for blockchain-based initiatives to get funding and early traction - and some will undoubtedly become stars, even if others flame out like supernovae.

In a way, a system for exchanging telecoms capacity or data quotas already exists - it's possible to send prepay account "top-ups" between people or companies today, although those are usually in monetary form (ie PAYG credit), rather than being denominated in minutes or MB. That is unsurprising, given the diversity of different pricing models and network operators - it would be hard for me to gift a GB of data to a friend on a different network, but I can send them a £5 / $5 / €5 credit and let them buy the data themselves. There are also other ways to share network capacity, such as FON's WiFi community.

The various ICOs are attempting to "tokenise" aspects of networks and communications, allowing different models of monetisation, with pricing driven by an external market rather than telcos' / ISPs' internal marketing functions. Some link to an existing cryptocurrency and blockchain like Ethereum, while others are trying to create something new.

The ones I've discovered that are clearly related to telecoms/networks include:
  • DENT Wireless (The website is here & white paper is here): This aims to act as a clearinghouse for mobile data quotas / allocations, between users, between MNOs, or for roaming "local breakout" via visited networks, using its tokens as a common currency. Its ICO, based on Ethereum, was in July. It is aiming to build up enough members as a "buying consortium" to exert pressure on operators to cooperate. It's got some interesting execs and advisors, notably including Rainer Deutschmann, who has been instrumental in getting Reliance Jio off the ground in India. One of the use-cases is "donating GB of data to Africa" as a way to improve Internet access in emerging markets. One interesting angle is a tie-up with sponsored-data software company Aquto, which works with AT&T and others. My long-term doubts about the general sponsored-data model continue (the concept of "1-800 apps" is palpable nonsense), but this could be a possible workable use-case. The key differentiator appears to be its willingness to partner with operators, rather than trying to displace them - and its knowledge of how to do so. Given the wide variations of mobile data pricing (& conditions) by operator, country and tariff - especially postpaid vs prepaid - I'm not sure there's an easy common denominator, though. The inbound roaming scenario may be very tough as well, especially as it may need users to manually select networks, which they may be locked out of doing on subsidised/customised handsets.
  • AirFox (The website is here & the white paper is here): This platform attempts to draw a link between mobile prepay credits, advertising, user-data and potentially micro-loans in future. It extends the current model of gifting or sending "recharges" to many international mobile operators' prepay customers, by shifting from normal payments to a cryptocurrency bought in a marketplace or earned by viewing ads. The model of "watch these ads and get free calls/credit/data" is not a new one (eg Blyk in the UK between 2007-09), but this is the first decentralised and tokenised one I've seen, linked to a global recharge network. It relies on a customised browser and also a dedicated ad-viewer/recharge app. The browser blocks native ads and replaces them with its own (and can also fingerprint the user by looking at other apps installed). Users can thus earn Ethereum-based "AirTokens" or alternatively they can buy them at market rate, to exchange for prepay credit / recharges. It's not obvious to me how AirFox proposes to "bulk buy" data from operators without wholesale/MVNO deals - in most cases I suspect it'll have to use the usual recharge channels. Its aspiration to "replace the current mobile ecosystem (applications, sites, advertisers, data purchases) with a more efficient new decentralized AirFox mobile ecosystem" seems unrealistic given that most mobile users prefer native apps (or web-pages rendered in apps). Nevertheless, the existing model of sending real ("fiat") money or top-ups seems to work, so there's a basis for an ad-supported model, although its existing stats imply a revenue of 1/17th of a US cent per ad. The ICO / crowdsale launches on August 29th.
  • Ammbr (The website is here & the white paper is here): [Note - I am an advisor - see below]. This is an attempt to blend custom mesh-network silicon and hardware units with a blockchain and token-based model for identity and a marketplace. While AirFox and DENT focus on sharing credits/quotas for normal personal mobile access, Ammbr wants to share the access network itself, and ultimately encourage build-out of extra coverage and capacity. Its network units (initially WiFi, but with other radios in future) support decentralised micropayments, allowing the node owners to earn tokens and essentially act as their own local ISPs with very little friction or setup cost. While these will obviously need backhaul from normal telcos (fixed and/or mobile), once sufficient density is reached, meshes may reduce the total number of wide-area connections needed. An initial use-case is likely to be in developing countries, where micro-loans and other local (and often informal) sharing-model businesses have grown. The hardware-based model is obviously ambitious, but also means future potential to support multiple radios (imagine a CBRS-type shared spectrum or LPWAN module), and could also potentially host distributed edge-computing or NFV capabilities. There are both opportunities and various complexities and possible pitfalls I can imagine, plus there are alternative options for community/rural connectivity (I'm writing a piece on Facebook's Telecom Infra Project & OpenCellular for my STL Partners research stream at present [link]). One aspect that's interesting, but which I'm not able to comment on authoritatively, is the unique blockchain model, based on Proof of Elapsed Time / Velocity, which differs from Bitcoin & Ethereum's Proof of Work. In Ammbr, it is linked to a custom silicon processor, with claims of much better power consumption than other approaches. The ICO is upcoming in September.
  • EncryptoTel (Web page is here and white paper is here): This is very different from the other network-type ICOs, as it's more about (business) voice communications than data access. It is a version of an enterprise cloud PBX / UCaaS platform, with encryption, privacy protections and (anonymous) cryptocurrency payments. It allows both on-net VoIP calls (using standard SIP endpoints or dialler apps) and integration with the public phone network, as well as (in future) interconnecting with various messaging applications. It will offer both monthly subscriptions and a pay-as-you-go model. The white paper references video calls, but it does not appear to offer full-fledged UC functions. It describes a progressive roadmap of development and deployment, with full commercial launch expected in Summer 2018. The ICO occurred in May 2017.
  • Mysterium (Web page is here and white paper is here) is a distributed VPN and data-encryption platform - essentially a higher-performing, blockchain-based version of Tor. It uses an Ethereum-based token system of micropayments. In its earliest phases it retains some central control, with the intention of removing this further down the roadmap. It will compete with commercial VPN products. Its ICO started at the end of May 2017.
[Note: some white papers get updated, so the URL might change with the version number - check the main websites for the latest versions] 
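As a very rough illustration of the "tokenisation" pattern these projects share - a toy model only, ignoring the blockchain, consensus, wallets and market pricing entirely - the sketch below shows network access being metered and settled in tokens between a user and a node owner. All figures and names are invented.

```python
# Toy model of token-denominated network access, illustrating the general
# pattern behind these ICOs. Entirely hypothetical: no blockchain, consensus,
# wallets or market pricing - just a ledger of token balances.

class TokenLedger:
    def __init__(self, balances: dict[str, float]):
        self.balances = dict(balances)

    def transfer(self, payer: str, payee: str, amount: float) -> bool:
        if self.balances.get(payer, 0.0) < amount:
            return False
        self.balances[payer] -= amount
        self.balances[payee] = self.balances.get(payee, 0.0) + amount
        return True

PRICE_PER_MB = 0.002   # invented token price, purely for illustration

def buy_data(ledger: TokenLedger, user: str, node_owner: str, megabytes: float) -> bool:
    """User pays the access-node owner in tokens for a data allowance."""
    return ledger.transfer(user, node_owner, megabytes * PRICE_PER_MB)

ledger = TokenLedger({"alice": 5.0, "node_owner_1": 0.0})
print(buy_data(ledger, "alice", "node_owner_1", 500))   # True: 500 MB costs 1.0 token
print(ledger.balances)                                  # alice: 4.0, node_owner_1: 1.0
```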

There are also various other ICOs relating to cloud-computing, storage and other related areas, such as Filecoin and Internxt. Another company called Crypviser (link) is developing a secure messaging app and also references secure voice calls in its white paper, although with few details.

So - will any of these, or future, ICOs lead to commercial, scalable networking or communications platforms? It's too early to tell. While the white papers typically give enough "vision" and a tentative roadmap, it's likely that most or all of these projects will encounter challenges and pitfalls, and may end up pivoting as events unfold (and as customers'/users' behaviour develops).

One of the risks is that tokenisation itself may limit the possible business and pricing models - for example, how can any of them offer hybrid centralised/decentralised services, if that's what the market seems to want? Can they support sponsored/free models, or allow more granular differentiation? What happens if they contravene other services' T's & C's? How is customer support provided for decentralised capabilities? It is also unlikely that any such proprietary mechanisms or payment instruments will become globally dominant, so there will need to be paths to standardisation - as well as ways to deal with the beady eyes of regulators if they become successful.


Nevertheless, this is an interestingly different direction-of-travel for telecoms/network blockchain, as it sits separately from the main thrust of work around private/permissioned use-cases I'm seeing from some vendors, various operators, bodies like TMForum etc. I still think that some of the back-office applications for blockchain in the telecoms sector have more short-to-medium term opportunity, but it's possible we could see a break-out here by a new entrant of the type discussed in this post. I'll definitely be keeping a watchful eye on all of these.


Please drop me a message at information AT disruptive-analysis DOT com if you want to discuss this more, or want a telecom/blockchain speaker or analyst for an event or workshop.


Footnote on Ammbr: Close contacts may have noticed I recently added an advisory role to my LinkedIn profile, for an organisation called Ammbr, mentioned above. At present, I'm just working on a consultative basis, but unlike most of my other advisory clients, it's not purely "behind the scenes" with execs in private under-NDA workshops, but has a public aspect to it as well. It's got a genuinely interesting combination of technologies (mesh, blockchain, custom silicon, potentially private cellular etc), some talented people, and while that means a lot of moving parts to fit together, there are some intriguing possibilities I'm glad to be able to help refine and prioritise.

Internally, my role is as a telecoms-sector expert and (to nobody's surprise) a general curmudgeon pointing out any risks, technical or commercial "gotchas", competition/substitution threats and anything that seems like wishful thinking. I should point out that this is a small part of my overall activities, and I'm not "endorsing" it as such; my normal Disruptive Analysis work on all areas of analysis & futurism is continuing. It's also not going to bias my views on other wireless technologies or business models, many of which are more developed and which I'm also enthused about (eg private cellular). Drop me a message if you want to discuss this further (or want to discuss other consulting or advisory roles).

Monday, July 18, 2016

My comments on BEREC's Net Neutrality guidelines consultation

I've been meaning to submit a response to the BEREC consultation on its draft implementation guidelines for the new EU Net Neutrality guidelines for some time. However, a combination of project-work and vacation has meant I've had to do just a fairly rapid set of comments at the last moment. 

I'm posting them here as a reference and further discussion-point. 

As a background, I think the guidelines are quite comprehensive - but have shifted the needle somewhat from the final EU regulation back towards the Internet-centric view of the world. However, the permissiveness around both zero-rating and (in certain circumstances) so-called "specialised services" seems a pragmatic compromise position. I tend to think that zero-rating is fine "in moderation" - it's basically the Internet equivalent of promotions and coupons. "Sponsored data" is an almost-unworkable concept anyway, so the regulatory aspect is largely irrelevant.

Specialised services are OK as long as they are genuinely "special" - something I've been saying for a long time (see post here). It should also be possible to watch for genuine innovation being catalysed / inhibited by the new rules - and then regulators and policymakers can take a more-educated view to revising them in a few years, based on hard evidence.

Anyway - the contents of my submission (reformatted slightly) are below:



Preamble

I am an independent telecom industry analyst and futurist, representing my own advisory company Disruptive Analysis. I advise a broad variety of telecom operators, network and software vendors, investors, NRAs, IT/Internet firms and others on technology evolution paths, business models and applications, and regulatory issues. I look at the issue of Net Neutrality particularly through the lens of what is, or what is not, possible – and also how the Internet value-chain, applications and user-behaviour are likely to evolve in future.

In the past, I have published research studies examining the possible roles and scale of “non-neutral” broadband & IAS business models. My primary conclusions have been that, irrespective of regulation, most proposed commercial models such as “paid prioritisation”, application-based charging or “sponsored data” are broadly unworkable, for many different technical and business reasons – such as growing use of encryption, plus the risks of false positives/negatives.

Overall, I see the guidelines as broadly positive, as they help clarify some of the many grey areas around implementing NN, and clearly try to close off future potential loopholes. Some aspects will likely be difficult to implement technically – notably the precise definitions and measurements of QoS and "quality" – but the guidelines are good in setting the "spirit" of the law, even though in some cases the "letter" may be harder to achieve.

Listed below are comments that I feel could help to:

  • Clarify the guidelines further
  • Help future-proof them against changes in technology
  • Raise questions about possible evolution of the guidelines in response to those changes
  • Lock down a few additional possible loopholes

Specific points on individual paragraphs: (reference to the guidelines doc here)

#10 – in locations where “WiFi guest access” is made available (eg visitors to a company’s offices), there is sometimes a sign-up or registration required, either via a splash-page, or simply via obtaining a password. Does this count as “publicly available”?

#11 – it should be clarified that there is a difference between corporate VPNs for connecting to a central site, and personal VPNs that are designed to secure/encrypt normal users’ access to the Internet. There is also a growing trend for corporate VPNs to be replaced by a new technology, software-defined WAN, which may itself use Internet access or even multiple accesses as transport.

#12 – Consideration of WiFi hotspots needs to distinguish between voluntary access (eg if a user obtains the café password & registers independently) vs. automated “WiFi offload” by ISPs as an integral part of their IAS offering. The latter is a form of “public access”. Also, there are growing examples of ISPs using WiFi in public places, including outdoors, sometimes as part of “WiFi-Primary” public IAS.

#14 - It is worth distinguishing between capital-I “The Internet” (ie public Internet, addressable via the DNS system & IAS) and lower-case-I “internets” (internetworks) that are private domains.

#23-25 – This needs to reference what happens when "terminal equipment" becomes virtualised, through the imminent release of NFV (network function virtualisation) architectures. This could mean that either the "terminal" becomes a software function in the ISP's data-centre, or that it is (in part) pushed down as a "virtual network function" (VNF) to a general-purpose box at the customer site. Some providers are already discussing the concept of a "VNF AppStore", where the user can choose between different software "terminal" functions. It is unclear if this is permissible – or even mandatory.

#39 & #45 – the nature of software and Internet applications makes it increasingly hard to define categories. There are many blurred boundaries, overlapping categories, "mashups" and differentiated offers. How is the categorisation achieved, for example, where a social network includes a large amount of video-streaming in its timelines? Is that equivalent to a "pure" video application? What about streaming of games? Is there a distinction between video-on-demand and live-streaming? This is particularly difficult where some functions, such as voice communication, are being included as "secondary features" embedded in many other applications, often via the use of 3rd-party platforms and APIs (application programming interfaces). There needs to be stronger guidance on how "categories" are defined and how disputed or ambiguous categorisation can be addressed.

#40 & #45 – a possible implementation option is to require ISPs to report the % of overall traffic (or % of particular user-classes) that is zero-rated. If the total amount provided "for free" is less than (say) 10% of the total, it can a priori be considered acceptable, as it is unlikely to materially affect users' choices. However, if it is higher, this could trigger closer investigation by the NRA.
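For illustration only, the kind of mechanical test this implies could be as simple as the following sketch (the 10% figure is indicative, not a recommended value):

```python
# Illustrative only: a trivial threshold test for zero-rated traffic share.
# The 10% threshold is indicative; actual values would be for NRAs to determine.

def zero_rating_flag(zero_rated_gb: float, total_gb: float, threshold: float = 0.10) -> bool:
    """Return True if the zero-rated share exceeds the threshold and merits closer investigation."""
    if total_gb <= 0:
        return False
    return (zero_rated_gb / total_gb) > threshold

print(zero_rating_flag(zero_rated_gb=8.0, total_gb=100.0))   # False: below 10%, acceptable a priori
print(zero_rating_flag(zero_rated_gb=25.0, total_gb=100.0))  # True: would trigger closer investigation
```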

#43 – this section seems to focus more on established CAPs or possible new-entrants. It is unclear if this explicitly covers the needs of open-source initiatives and general software developers.

#56 – There is a possible implementation option for NRAs to collect and hold configuration details for ISPs’ network equipment or software-equivalent VNFs, to allow retrospective analysis of network setup if disputes occur. This could be done on an encrypted / escrowed basis to maintain normal commercial confidentiality

#57 – the reference to encryption needs to explicitly include both app-level encryption (eg HTTPS / HTTP2) and more general “all-traffic” encryption using corporate or personal VPN “tunnels”

#57 & 58 – an implementation option for NRAs could be provision of a contact-point for internal ISP whistle-blowers to report infringement, or 3rd-party monitoring organisations (eg that use pattern-recognition to detect abuses)

#60 & #61 & #63 – categorisation is extremely hard, owing to application differentiation, complex hybrid and “mashup” applications, different levels of fault-tolerance built into applications by developers etc. For example, different VoIP applications use different approaches to error-correction, or are used differently (eg ordinary telephony vs. karaoke). In future there will also be a difference based on whether the application (at either end) is a machine rather than a person. Implied QoS when speaking to “Siri” or “Alexa” may have very different characteristics to speaking to a friend, despite being carried over VoIP. There may also be other dependencies – eg if network conditions have worse impact on badly-designed applications, or devices with other constraints (memory, CPU power, processing chips etc)

#64 – does “network management traffic” also include other types of operational (internal) ISP traffic such as billing records, customer-service inquiries & apps and so forth?

#71 – does “alteration” cover so-called “optimisation”, whereby various content such as a video or image can be paused, down-rated, reformatted etc.? Does it also cover “insertion” of additional data such as tracking codes / “supercookies”, or additional overlay advertising? Are “splash pages” (eg for WiFi registration) allowed?

#89 – Dimensioning may well be affected by other constraints, such as spectrum availability, location, economics of network coverage/capacity, or “emergent” unexpected trends in demand

#98 & #123 – this appears to define specialised services as “actually being special” rather than those capabilities that are normally delivered over IAS. How are hybrid specialised/non-specialised services to be treated?

#101 & #104 – technologies such as SD-WAN (software-defined WAN) allow improved QoS by linking together multiple IAS connections, which in aggregate can perform as well (or even better/cheaper) than one QoS-optimised connection. Should NRAs consider this option when determining if specialised services are valid? See http://disruptivewireless.blogspot.co.uk/2016/06/arbitrage-everywhere-inevitable.html  and http://disruptivewireless.blogspot.co.uk/2016/03/is-sd-wan-quasi-qos-overlay-for.html for more detailed discussion of this point

#111 – It is important to recognise that VPNs are increasingly used by consumers as well as businesses, often to provide a secure & privacy-protected path to the Internet over both public IAS and localised WiFi hotspots. The guidelines should specifically reference consumer VPNs.

#113 to #115 & #117 & #119 – It may be difficult to guarantee coexistence of IAS and specialised services over cellular/other radio networks, where factors such as location in a cell, mobility, density of users, coverage/interference etc are non-deterministic. Potentially the guidelines could advise use of different spectrum bands for IAS and specialised services, to mitigate these problems.

#113 & #116 – in future 5G architectures, we may see a concept called “network slicing”, where the radio and core networks are logically divided into “slices” suitable for different application classes – either broadly between Internet & specialised services of different types, or resold more granularly a bit like “super-MVNOs” to particular 3rd-parties on a wholesale basis. Where those parties are themselves CAPs, this could make interpretation of this section very difficult. If Netflix or Google or even a rival ISP/telco buy rights to a “slice”, how do the guidelines apply?

#131 – This guideline should potentially also include information/transparent guidance for application developers, who may be creating applications intended to run over the IAS provided

#152 – should coverage maps be 2-dimensional, or also include z-axis detail (eg speed in a basement / on the 50th floor of a tower block)? How can such maps cope with the trend towards self-optimising / self-reconfiguring networks of various types?

#167 & #180 – NRAs should potentially seek to maintain records of network configuration status (which may change abruptly with the advent of NFV & SDN). This could perhaps be stored securely & reliably using technologies such as Blockchain.

#172 & #179 – monitoring of aggregate volumes of traffic subject to price-discrimination (eg % of IAS traffic that is zero-rated) would be useful


General comments:
  • There needs to be consideration of meshed, relayed or shared connections which run directly between users’ devices. In device-to-device scenarios, does the owner/operator of an intermediate device become responsible for the neutrality of the “onward” link to 3rd parties? (which could be via any technology such as WiFi, Bluetooth, wired USB port etc) 
  •  There needs to be consideration that some of the more invasive mechanisms for traffic discrimination and control will in future move from “the network” to becoming virtualised software (provided by an ISP) that reside in edge-nodes at the customer premise, or even in customers’ mobile devices. It is unclear how the implementation guidelines deal with predictable near/mid-term trends in NFV/SDN technology, especially where there is no clear “demarcation point” in ownership between ISP and end-user. 
  • Equally, in future there may well be CAP companies that offer their services “in the network” itself, also with NFV/SDN. There needs to be careful thought given to how this intersects with Net Neutrality guidelines 
  • The evolution of artificial intelligence & machine-learning means that workarounds or infringements may become automated, and perhaps even invisible to ISPs, in future. This may also impact the nature of QoS as used for different applications. See http://disruptivewireless.blogspot.co.uk/2016/04/telcofuturism-will-ai-machine-learning.html for more details 
  •  Where wholesale relationships occur – eg MNO/MVNO, “neutral host” networks using unlicenced-band LTE, or secondary ID on the same WiFi hotspot – and the traffic-management / IAS functions are co-managed, how do the guidelines apply? Which party/parties is responsible?

Wednesday, November 11, 2015

Telcos need to emulate SoftBank & decouple network / services businesses

I've just spent two days at an event called Nexterday North, run by Finnish OSS specialist CompTel. It wasn't the usual vendor product-based user conference, but more a quasi-TED flavoured "anti-seminar" with assorted general futurists (notably Rohit Talwar and Patrick Dixon), as well as outspoken telco-industry provocateurs like myself and Alan Quayle. It was fun and refreshing, held in a warehouse by the Helsinki docks. My talk was on "Top 10 Myths in Telecoms" and will be uploaded to the event site soon.

There were also some good, high-level operator presentations. Tele2 and Globe (from the Philippines) had some interesting angles, but the one that really struck me was SoftBank's.

A bit of history - SoftBank is basically a web and software company with a fixed broadband arm, which bought a mobile operator (Vodafone Japan) rather than vice versa. It also now owns Sprint in the US. The Japanese mobile business's profits are now 9x what they were under Vodafone's stewardship, and it has increased its market share considerably.

But at the same time, the company has been involved in numerous other successful Internet businesses, notably Yahoo Japan and a major stake in Alibaba. More recently it has invested in Indian online players SnapDeal and Ola.

The key point that the speaker made was that the network businesses and service businesses are decoupled. Yes, there are certain services that are integrated - the mobile network has telephony and so forth. But SoftBank's mobile network activities were specifically described as following the much-derided "dumb pipe" model, with heavy utilisation of WiFi (both in homes and at public hotspots) for offload / load-balancing.



I'd disagree with the assertion that mobile operators will benefit from NFC payments, although Japan may be an exception. (That said, the last time I was in Tokyo, nobody used NFC for train tickets - I stood watching people coming through the barriers at Shinjuku Station).

But the key message is that, as a network operator, the money comes from driving data traffic usage whilst keeping costs manageable. There was no angst about so-called "OTTs". Virtualisation and heavy use of indoor coverage solutions are seen as critical, especially given the urban-heavy bias of Japanese mobile usage. (Sidenote: global urbanisation was cited as a trend by numerous speakers; a topic for another post).

The reason for the lack of concern about OTTs is that SoftBank is "hedged". It has its own Internet/online footprint and benefits from the growth of the web (and, implicitly, neutrality).



So while the Japanese network business is looking to create some network-based services (it has implemented VoLTE, for example), this is not the core of its overall Group-level hopes for deriving value from the web and apps. It is, in reality, there to support the sales of data network services.

The real value comes from entirely separate ecosystems like Alibaba - which primarily derives revenues from people and countries for whom SoftBank doesn't offer connectivity. In other words:  "The upper layer and lower layer has no need to be integrated".

In many ways this is like the oil industry. Many major integrated companies do exploration and production (E&P) in areas of the world where they do not also do refining and marketing (R&M). They produce crude oil in one place (analogy: network capacity) and process/consume it in others (analogy: network traffic & applications). It's not a perfect metaphor because there's a global marketplace for crude, but it's a useful conceptual tool.

SoftBank is successful on both sides of the connectivity/application divide because it does not try to integrate them. Its recent investment in the Indian Internet industry is because it sees growth in its own right, not as a way to "add value" to its network assets. Meanwhile most other telcos try to "leverage the network" with IMS, QoS, complex policy-control, APIs, numbering, integrated IoT platforms and so on. 

As far as I know, none of the SoftBank Internet businesses is particularly interested in "specialised services", paid priority or any of the other non-neutrality myths. Maybe there will be isolated examples in future, but the bulk of its "digital" activities just use plain vanilla Internet access.

Verizon is perhaps heading in the right direction with its purchase of AOL (although one could question the choice of target). Telefonica's TokBox remains the rare example of a telco-owned Silicon Valley company that hasn't been messed up by its new owners, three years after acquisition. But for now, SoftBank is the pre-eminent example of a successful dual telco + Internet strategy. It exemplifies what I was referring to when I first talked about Telco-OTT businesses, 4 years ago. 

Other operators should take its lessons to heart, and decouple their network and Internet business units. Regulators should consider whether structural separation would be healthy for the industry, if operators don't do it themselves.

Monday, August 10, 2015

Trip report: a tale of mobile/WiFi in two developing countries, Haiti & Cuba

I've been away for the past 3 weeks in two very different, but very close countries: Haiti & Cuba, separated by less than 100km. While it was a vacation and I was mostly "off-grid", there were still a few interesting things I noticed about local use of communications and the Internet in each place.

Haiti is one of the poorest countries in the Western hemisphere, and has some of the worst conditions of poverty and restricted infrastructure I've seen anywhere. Outside the Petionville district of the capital Port-au-Prince, there are still many signs of 2010's devastating earthquake, and often severe poverty. It's a beautiful country, with some fascinating places - but it's also one of the hardest places to travel that I've visited.

However, cellphones are pretty ubiquitous, with growing use of low-end smartphones, but still a lot of basic voice/SMS devices around. There are two mobile networks - Digicel, which is prominent throughout the Caribbean and various other island nations, plus fixed/mobile Natcom, which is majority-owned by Vietnamese telco Viettel but apparently holds a small market share. Most users (except roamers and wealthy UN/NGO/government types) use prepaid SIM cards, with top-ups available from many locations, often street-side or even sold through bus windows during stops. Adverts for mobile networks are everywhere, often permanently painted onto the walls of houses or shops.



3G coverage is present across a fair amount of the larger Haitian cities and by major roads - I visited Jacmel and Cap Haitien as well as the capital. (By coincidence, one of the press releases in my post-holiday inbox is from Astellia, talking about a contract with Digicel for 3G network optimisation in the country).

Data charges are reasonable to external eyes (eg 20c/day for 90MB, or bizarrely an anti-discount of $8/mo for 2GB - see here), but still pretty expensive for many of the inhabitants (my guide used a BlackBerry because of its compression abilities, primarily for email & WhatsApp). Even so, mobile data seems to be of growing importance to many. I noticed various Facebook-centric per-app or zero-rating plans being advertised.
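A quick back-of-envelope (assuming a 30-day month) shows why the monthly bundle looks like an anti-discount:

    # Rough comparison of the two advertised plans (assumes a 30-day month)
    daily_cost_per_month = 0.20 * 30     # $6.00 for 30 daily top-ups
    daily_gb_per_month = 0.09 * 30       # about 2.7 GB in total
    monthly_cost = 8.00                  # $8 for the monthly bundle
    monthly_gb = 2.0
    print(f"Daily plan:   ${daily_cost_per_month / daily_gb_per_month:.2f} per GB")   # ~$2.22/GB
    print(f"Monthly plan: ${monthly_cost / monthly_gb:.2f} per GB")                   # $4.00/GB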

The few good hotels in the country typically had decent fixed broadband and WiFi, but I saw very little of this in other locations, fairly unsurprisingly. Interestingly, Haiti is included in Vodafone's £5/day WorldTraveller programme, so I could use my iPhone in most places at relatively reasonable cost, rather than getting a local SIM for my spare phone - although had I known the 20c/day rate and been staying more than a few days, I would still have taken that approach.

My overall sense was that Haitian use of mobile and the Internet is broadly on a par with other developing nations at a comparable level of GDP, and apart from patchy competition it seems to have similar business and deployment models. I noticed various schools offering IT and Internet lessons, although some of the poverty I saw suggests that adoption across the whole population will be slow - there are more important problems to fix first.

Cuba, by contrast, could not have been more different.

Firstly, I switched off data roaming, as it isn't covered by any decent plan and would have cost me £3/US$5 per MB. Roaming for phone calls & SMS seemed to work OK in some places but not others - I had three days of "No Service" in the middle of the country. One SMS I sent to a local Cuban took 4 days to arrive. Apparently there is 3G data available in some places, but it's aimed at tourists rather than local inhabitants. Relatively few Cubans seem to have phones anyway - although that is changing (see below). It's the only country I've visited recently that has payphones everywhere on the street - and people using them.




It seemed to be possible for tourists to get local SIM cards (eg to call hotels or the small number of private restaurants), but I didn't bother, as I suspected it would mean navigating assorted bureaucracy in Spanish. I did, however, get a calling card for the payphones - the first time I've used one in about a decade. There is one state monopoly provider of fixed and mobile communications, ETECSA, and I didn't see the proliferation of mobile-related advertising you get in almost any other nation. There are quite a few places that are agents for top-up cards or payphone cards, but they're not the "phone shops" you'd get elsewhere.

I'd already told all my friends and clients to expect me to be offline, as I knew Internet access was near-impossible to find except on PCs in hotel lobbies, or telecom operator offices where I'd likely have to stand in a queue in the heat outside for ages. I'd heard that prices were $4.50-$10 per hour - well outside the reach of most Cubans, for many of whom (under the dual-currency system there) that would be a sizeable % of their monthly wage packet. Looking at some guides & reports online, I found that a few locations had WiFi, but they were mostly in Havana rather than country-wide (I visited about 7 different places). There are very few URLs displayed anywhere.

But that has changed, and very recently.

On July 1, ETECSA cut access rates by 60%, to $2/hour (bought via scratch-cards from their booths and offices, or some hotel lobby desks). They also fired up WiFi access points in various public squares and parks - often the social hubs where hundreds gather in the cooler evenings anyway.

As a result, when I visited 3 weeks later, I saw small clusters of both tourists and Cubans sitting in the shadier bits of the squares, clutching phones and tablets, wherever was closest to the telecom office or WiFi AP. I saw mostly cheaper Android phones and no-brand tablets, but quite a lot of Samsungs and a few Apples. Amusingly, some devices sported US carrier branding, suggesting recycling/unlocking of old phones. 

The WiFi (still expensive by local standards, but about the price of 2 cans of beer) was actually pretty speedy. None of the services I normally used seemed to be blocked, either.

But the most amazing thing was the realtime behavioural change I could witness. There were huge queues outside all the ETECSA offices. Out of hours (or alongside the queue) some enterprising folk re-sold the scratch-cards at a markup for the impatient. Some hotels ran out of stock of the cards, which could be used at the (ETECSA-run) indoor WiFi as well as the public venues. Online access had very quickly become the teenagers' entertainment of choice.



And then I started seeing cross-generational groups standing on the street in the evenings, using video-calls (Skype or Viber?) to connect kids and grandparents to relatives elsewhere in the world (presumably many in the US, to which numerous Cubans had emigrated during the last decades).

It will be very interesting to see what other societal shifts occur in coming months in Cuba. The Internet genie is very much "out of the bottle" - unlike a lot of countries, the low pre-existing use of the web means that the first use for many is on a smartphone or tablet. And, interestingly, via public outdoor WiFi, rather than cellular or a fixed PC. 

Ironically, the only city where I couldn't easily find any public WiFi was Varadero, the package-tourist capital of the country. And in Havana, I couldn't find a square with "vanilla" ETECSA WiFi access, so I instead ended up using the networks at hotels or Hemingway's favourite daiquiri bar, Floridita.