Speaking Engagements & Private Workshops - Get Dean Bubley to present or chair your event

Need an experienced, provocative & influential telecoms keynote speaker, moderator/chair or workshop facilitator?
To discuss Dean Bubley's appearance at a specific event, contact information AT disruptive-analysis DOT com

Friday, August 10, 2018

Thoughts on roaming, local SIM cards and eSIMs

I spend a large part of my life travelling, both for work and leisure. But while I find connectivity to be hugely important, I refuse to pay ludicrous per-MB data roaming prices.

So until a couple of years ago, this meant that I had a large collection of (mostly non-functioning) local mobile SIM cards I'd bought in various countries. Typically, I'd use them in a spare phone, so I could keep my normal phone on my home SIM to get inbound SMS or missed voice-call notifications. I'd also often use the second phone as a WiFi tether for my primary iPhone.

At one point I found old SIMs from the US, Singapore, Mozambique, Vanuatu, UAE and Australia in my wallet. In some places it was easy to get local SIMs, while in others it involved cumbersome registration with a passport or other documents. Places like India and Japan were a real pain, and I just didn't bother, relying on WiFi & an occasional extortionate SMS.

That has changed in recent years - and there are now multiple options for travellers:
  • Local SIMs are often easier to obtain. Booths at airports are well-practised at registering documents, sorting APN settings and so on, in a couple of minutes
  • In the EU, roaming prices have fallen progressively to zero - often including non-EU European countries as well. Various other groups of countries or regional operator groups have also created free-roaming zones.
  • Some operators offer customers flat-rate or even free roaming to other countries, such as T-Mobile US's free (but 2G-only) international data, or $5/day for capped LTE (link). I use Vodafone UK's £6/day "roam further" plan quite a lot, especially when visiting the US (link).
  • Many travellers can get dual-SIM phones, so they can easily switch between home and local SIMs without fiddling about with trays & pins. (There's no dual-SIM iPhone though. Grrrr. More on this later). 
  • Various companies (eg Truphone) offer global/roaming SIMs, and have hoped that frequent travellers would use these as their primary/only SIM. The problem with this is that they typically rely on MVNO relationships in each country, including the user's home market - which often means poorer data plans than can be bought domestically from the main MNOs. You also don't get to benefit from multi-play plans, bundled content and so forth. I'm also not entirely convinced that MVNO traffic always gets as well-treated as the host MNO's own customer data - and that's likely to get worse with 5G and network-slicing.
  • Some providers pitch global SIMs alongside rented/bought portable WiFi hotspots, such as TEP Wireless (link). The problem is that these often just cover the same countries as the better roaming plans from normal mobile operators.
So... in July I went on holiday to the Cape Verde islands, off the coast of West Africa. Beautiful archipelago of 9 inhabited islands, with beaches, mountains, volcanoes, hiking trails and small villages nestled in sheer-sided valleys. Neither Vodafone nor any of the travel-SIM companies seemed to cover either of its two main networks. So I went and bought an unlocked WiFi hotspot (from TP-Link), and hoped to get a local SIM on arrival, as I'd read a few suggestions it was possible.

It wasn't just possible, but remarkably easy. Walking through the arrivals door from customs at the airport, I was handed a free SIM by a representative of one of the operators (Unitel) within seconds. When I unwrapped it later in the day, I found it had 200MB of data included for free. No registration needed, no upfront payment, nothing. 3G network only, but that was fine to assure myself it worked OK. The next day I found a branded store & decided to stick with that network rather than check the other one (good marketing / customer acquisition strategy!) as the price-plans seemed fine. 

I paid €12 for 5GB of data, valid for a month. There was also a 7GB and maybe a 10 or 12GB one, but I wasn't planning on streaming video. In other words, €1 a day with about 500MB available per day, for normal mobile usage during my 11-day visit. The helpful lady in the shop sorted it all out for me, including temporarily switching my new SIM into her phone to send the setup / dataplan-purchase messages, which were tricky from a device with no keypad.

This compared to the roaming-advice SMS telling me that data would cost £0.60/MB [about €0.70]. In other words, roaming data was about 300x overpriced - quite astonishing, in 2018. And the mobile industry wonders why users have so little loyalty and respect.
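As a quick sanity-check on that markup, here is a sketch using the post's own figures; the £→€ exchange rate of ~1.13 is my assumption for mid-2018.

```python
# Compare the local prepaid data price with the quoted roaming price.
local_eur_per_mb = 12.0 / (5 * 1024)   # €12 for 5GB prepaid (per-MB price)
roaming_eur_per_mb = 0.60 * 1.13       # £0.60/MB, roughly €0.70/MB (assumed FX rate)

markup = roaming_eur_per_mb / local_eur_per_mb
print(f"Local: €{local_eur_per_mb:.4f}/MB, roaming: €{roaming_eur_per_mb:.2f}/MB")
print(f"Roaming markup: ~{markup:.0f}x")
```

At that assumed rate the ratio comes out just under 300x; small changes to the FX rate or GB/GiB convention move it either side of 300.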

(It's also worth noting that WiFi was ubiquitous in any hotel, cafe, restaurant or other place that visitors might go. There were telephone cables strung along all the valleys on poles, and decently-fast broadband was common. Given the mountainous topography, you could sometimes get WiFi more readily than cellular).

How would eSIM change things?

But this got me thinking about how the experience might be different in the coming era of eSIMs and remote SIM provisioning (RSP). Firstly, let's assume that one or both Cape Verdean operators actually had the requisite server-side gear for RSP. And let's assume that my future iPhone either has a multi-profile eSIM capability, or has dual removable/embedded SIM capability. (Remember, I still want to get my normal SMSs from my UK Vodafone number). Potentially, a future WiFi hotspot could be eSIM-enabled too.

But then the question is, how does the user find out about the available networks, and the available plans on those networks? What's the user journey?

And there are lots of other questions too:
  • Would I get a popup alert when I switched my phone on after the flight? 
  • Would it give me menus for all the available plans or just a subset? 
  • Would I need to have signed up in advance, either with a local CV telco, or perhaps facilitated by Apple, Vodafone or a third party? 
  • When and how would I download the new profile? What data would that require me to send back (or what would be collected automatically)?
  • Would it be easier to get an eSIM-capable WiFi device? 
  • But would that just be the same global MVNO providers who didn't have a Cape Verde relationship for roaming?
  • What happens if something goes wrong, or you need to buy more data? Can local stores give you any help, or top-ups?
Bottom line: this whole experience would likely have been worse with eSIM, not better. And probably more costly too. Maybe in a less unusual country, with MVNOs and better roaming partnerships, it could be much more slick.

But for most "normal" countries, I'll probably stick to the £6/day plan from Vodafone for ease, even if that's 5x overpriced and should really be £1-2/day. It's annoying, but basically the equivalent of a beer, and there are probably other ways I can save money faster when on a trip. That said, now I've got my new WiFi puck, I might sometimes switch back to local SIMs, if they're easy and available at the airport. I'll certainly take it along with me as a Plan B.

Sunday, June 03, 2018

Telecom regulation and blockchain - is #RegTech the killer application?

One of the most interesting developments in telecoms technology for a while occurred this week – India’s telecom regulator TRAI issued a set of draft regulations aimed at combating spam and nuisance calls. (link)

At first glance, you could be forgiven for asking why anti-spam rules could possibly be more important than all the hoopla about 5G, market consolidation, network-slicing and, especially, “digital transformation” or RCS messaging (I jest).

The reason is in the details: TRAI has stipulated that telcos should use blockchain-based technologies to enforce its proposed rules, creating a tamper-proof and encrypted ledger of consent records, given by users for opt-in telemarketing. If the rules translate to reality, this is a major step forward in commercialisation of digital ledger technology, and at scale.

"Access Providers shall adopt Distributed Ledger Technology (DLT) with permissioned and private DLT networks for implementation of system, functions and processes as prescribed in Code(s) of Practice:
(1) to ensure that all necessary regulatory pre-checks are carried out for sending Commercial Communication;
(2) to operate smart contracts among entities for effectively controlling the flow of Commercial Communication;
Access Providers may authorise one or more DLT network operators, as deemed fit, to provide technology solution(s) to all entities to carry out the functions as provided for in these regulations."

But in my view, this could be just the tip of a quite large iceberg. I'm starting to think that regulatory uses for blockchain (especially private/permissioned versions) could be central to the technology's success in telecoms.

Innovation in Regulation Technology, or RegTech, is already a huge domain, especially in sectors like financial services and healthcare. Historic methods for regulatory enforcement, from money-laundering rules, to certification of professionals, have often used reams of paperwork and had cumbersome processes. There is a huge need for automation, better provision of security and authentication, and simpler online access to regulatory resources and approval.

Obviously, telecoms has itself long had technical means for creating and enforcing rules, from spectrum-monitoring and radio-coverage tools, through automated platforms for telecoms licensing, to software aimed at checking broadband QoS and spotting net-neutrality violations.

But given that many telecoms rules involve multiple parties (eg user, telco and advertiser as here, or multiple telcos with interconnect or wholesale agreements), requirements for "credentials", and often registries and other databases, the whole sphere looks like an archetypal match for the types of capability normally found in blockchains.

In particular, I think there are many potential use-cases for regulators to assist - or keep tabs on - telco activities that relate to regulatory policy. Adding unarguable timestamps to tamper-proof data storage has huge potential, in particular. Ones that immediately leap out to me include:
  • Number portability databases and porting requests
  • Storage of call detail records, that may be subject to lawful request at a later date
  • Spectrum allocations and permissions, especially for shared, local and dynamic spectrum models.
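The common thread in these use-cases is an append-only, timestamped record store whose history can't be quietly rewritten. As an illustration only (the class and method names here are hypothetical, not taken from TRAI's draft rules or any real DLT platform), a hash-chained consent log might look like:

```python
import hashlib
import json
import time

class ConsentLedger:
    """Minimal tamper-evident log of telemarketing consent records.

    A real permissioned DLT adds replication and access control across
    multiple parties; this sketch shows only the hash-chaining idea.
    """

    def __init__(self):
        self.entries = []  # each entry carries the hash of its predecessor

    def append(self, subscriber, telemarketer, opted_in, ts=None):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {
            "subscriber": subscriber,
            "telemarketer": telemarketer,
            "opted_in": opted_in,
            "timestamp": ts if ts is not None else time.time(),
            "prev_hash": prev_hash,
        }
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(record)
        return record["hash"]

    def verify(self):
        """Recompute every hash; any retrospective edit breaks the chain."""
        prev = "0" * 64
        for e in self.entries:
            if e["prev_hash"] != prev:
                return False
            body = {k: v for k, v in e.items() if k != "hash"}
            if hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

ledger = ConsentLedger()
ledger.append("+91XXXXXXXXXX", "acme-telemarketing", opted_in=True)
ledger.append("+91XXXXXXXXXX", "acme-telemarketing", opted_in=False)
print(ledger.verify())               # True for an untampered chain
ledger.entries[0]["opted_in"] = True  # retrospective tampering...
print(ledger.verify())               # ...is now detectable: False
```

The same pattern, with regulator-held read access, would apply equally to porting requests, spectrum grants or network-configuration snapshots.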
One other that I think has longer-term potential, but which nobody has talked about yet, is in secure and encrypted storage of network configuration and log files. One of the problems with regulating wholesale interconnect, peering, net neutrality and other rules, is that it is exceptionally hard to prove what happened retrospectively, if someone makes a complaint. This issue will be exacerbated with NFV/SDN, and the move to network slicing, when network configurations will be temporary and highly dynamic.

Given that law-enforcement insists that ISPs retain their users' data records, it doesn't seem unreasonable to retain the ISPs' own information as well - obviously in a form that's secure and encrypted unless needed for evidence in the case of a legal intervention. It could also make a clear distinction between a problem of network failure (or happenstance in the way the maths of contention works), and deliberate actions.

The Net Neutrality angle here is particularly potent - it would allow any egregious behaviour to be dealt with post-hoc. Most anti-neutrality lobbyists dislike ex-ante regulation, but few could argue against allowing competition authorities or others to investigate alleged infringements that occurred deep inside the network's configurations and policies.

I'm just musing here, but I definitely feel that there's a lot more to telecom #RegTech using #blockchain than just tracking spam calls and SMS. 

This is one of the topics that will get discussed at my upcoming workshop on telecoms blockchain, on July 3 in London. Full details are here (link) or email information AT disruptive-analysis dot COM

Saturday, March 17, 2018

MEC and network-edge computing is overhyped and underpowered

I keep hearing that Edge Computing is the next big thing - and specifically, in-network edge computing models such as MEC. (See here for a list of all the different types of "edge"). 

I hear it from network vendors, telcos, some consultants, blockchain-based startups and others. But, oddly, very rarely from developers of applications or devices.

My view is that it's important, but it's also being overhyped. Network-edge computing will only ever be a small slice of the overall cloud and computing domain. And because it's small, it will likely be an addition to (and integrated with) web-scale cloud platforms. We are very unlikely to see edge-first providers become "the next Amazon AWS, only distributed".

Why do I think it will be small? Because I've been looking at it through a different lens to most: power. It's a metric used by those at the top and bottom ends of the computing industry, but only rarely by those in the middle, such as network owners. This means they're ignoring a couple of orders of magnitude.

(This is a long post. You might want to grab a coffee first....)

How many zeroes?

Cloud computing involves huge numbers. There are many metrics that you can use - numbers of servers, processors, standard-sized equipment racks, floorspace and so on. But the figure that gets used most among data-centre folk is probably power consumption in watts, or more commonly here kW, MW & GW. (Yes, it's a lower-case k for kilo). 

Power is useful, as it covers the needs not just of compute CPUs and GPUs, but also storage and networking elements in data centres. It's not perfect, but given that organising and analysing information is ultimately about energy it's a valid, top-level metric. [Hey, I've got a degree in physics, not engineering. Helloooo, thermodynamics & entropy!]

Roughly speaking, the world's big data centres have a total power consumption of about 100GW. A typical one might have a capacity of 30MW, but a number of the world's largest data centres already use over 100MW individually, and there are enormous plans for locations with 600MW or even 1GW (link). No, they're not all running at full power, all the time - but that's true of any computing platform.

This growth is partly driven by an increase in the number of servers and equipment racks needed (hence growing floor-space for these buildings). But it also reflects power consumption for each server, as chips get more powerful. Most equipment racks use 3-5kW of power, but some can go as high as 20kW if that power - and cooling - are available.

So powering "the cloud" needs about 100GW, a figure that is continuing to grow rapidly. We are also seeing a rise in smaller, regional data-centres in second- and third-tier cities. Companies and governments often have private data-centres as well. These vary quite a bit, but 1-5MW is a reasonable benchmark.

How many decimal places?

At the other end of the computing power spectrum, are devices, and the components inside them. Especially for battery-powered devices, managing the power-budget down to watts or milliwatts is critical. This is the "device edge".

  • Sensors might use less than 10mW when idle & 100mW when actively processing data
  • A Raspberry Pi might use 0.5W
  • A smartphone processor might use 1-3W
  • An IoT gateway (controlling various local devices) might be 5-10W
  • A laptop might draw 50W
  • A decent crypto mining rig might use 1kW

New innovations are pushing the boundaries. Some researchers are working on sub-milliwatt vision processors (link). ARM has designs able to run machine-learning algorithms on very low-powered devices.

But perhaps the most interesting "device edge" is the future top-end Nvidia Pegasus board, aimed at self-driving vehicles. It is a 500W supercomputer. That might sound a lot, but it's still less than 1% of the engine power on most cars. A top-end Tesla P100D puts over 500kW to the wheels in "ludicrous mode", or 1000x that figure. Cars' aircon might use 2kW, to give context.

Of course, all of these device-edge computing platforms are numerous. There are billions of phones, and hundreds of millions of vehicles and PCs. Potentially, we'll get 10s of billions of sensors. Most aren't coordinated, though. 

And in the middle?

So we have milliwatts at one end of distributed computing, and gigawatts at the other, from device to cloud.

So what about the middle, where the network lives?

There are many companies talking about MEC (multi-access edge computing) and fog-computing products, with servers designed to run at cellular base stations, network aggregation points, and also in fixed-network nodes and elsewhere. 

Some are "micro-data-centres" capable of holding a few racks of servers near the largest cell towers. The very largest might be 50kW shipping-container sized units, but those will be pretty rare and will obviously need a dedicated power supply.

It's worth noting here that a typical macro-cell tower might have a power supply of 1-2kW. So if we consider that maybe 10% could be dedicated to a compute platform rather than the radio (a generous assumption), we get 100-200W, in theory. Or in other words, a cell tower edge-node will be less than half as powerful as a single car's computer.

Others are smaller server units, intended to hook into cellular small-cells, home gateways, cable street-side cabinets or enterprise "white boxes". For these, 10-30W is more reasonable.

Imagine the year 2023

Let's think 5 years ahead. By then, there could probably be 150GW of large-scale data centres, plus a decent number of midsize regional data-centres, plus private enterprise facilities.

And we could have 10 billion phones, PCs, tablets & other small end-points contributing to a distributed edge, although obviously they will spend a lot of time in idle-mode. We might also have 10 million almost-autonomous vehicles, with a lot of compute, even if they're not fully self-driving. 

Now, imagine we have a very-bullish 10 million "deep" network-compute nodes, at cell sites large and small, built into WiFi APs or controllers, and perhaps in cable/fixed streetside cabinets. They will likely have power ratings between 10W and 300W, although the largest will be few in number. Choose 100W on average, for a simpler calculation. (Frankly, this is a generous forecast, but let's run with it for now).

And let's add in 20,000 container-sized 50kW units, or repurposed central-offices-as-datacentres, as well. (Also generous)

In other words, we might end up with:

150GW large data centres
50GW regional and corporate data centres
20,000x 50kW = 1GW big/aggregation-point "network-edge"
10m x 100W = 1GW "deep" network-edge nodes
1bn x 50W = 50GW of PCs
10bn x 1W = 10GW "small" device edge compute nodes
10m x 500W = 5GW of in-vehicle compute nodes
10bn x 100mW = 1GW of sensors & low-end devices
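Summing the scenario above (all figures are the post's own generous assumptions, expressed in gigawatts):

```python
# Aggregate the 2023 scenario's power budget, in GW.
GW = {
    "large data centres":          150,
    "regional/corporate DCs":       50,
    "aggregation network-edge":    20_000 * 50e3 / 1e9,  # 20k x 50kW containers
    "deep network-edge":           10e6 * 100 / 1e9,     # 10m nodes x 100W avg
    "PCs":                         1e9 * 50 / 1e9,
    "small device edge":           10e9 * 1 / 1e9,
    "in-vehicle compute":          10e6 * 500 / 1e9,
    "sensors & low-end devices":   10e9 * 0.1 / 1e9,
}

total = sum(GW.values())
edge = GW["aggregation network-edge"] + GW["deep network-edge"]
print(f"Total: {total:.0f}GW, network-edge: {edge:.0f}GW "
      f"({100 * edge / total:.1f}% of aggregate compute power)")
# → Total: 268GW, network-edge: 2GW (0.7% of aggregate compute power)
```

Even with these optimistic inputs, the in-network edge is under 1% of the total; halving the node counts pushes it towards 0.1%.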

Now admittedly this is a very crude analysis. And a lot of devices will be running idle most of the time, and may need to offload functions to save battery power. Laptops are often switched off entirely. But equally, network-edge computers won't be running at 100%, 24x7 either.

The 1% edge

So at a rough, order-of-magnitude level, we can see that the total realistic "network edge", with optimistic assumptions, will account for less than 1% of total aggregate compute capability. And with more pessimistic assumptions, it might easily be just 0.1%. 

Any more will simply not be possible to power, unless there are large-scale upgrades to the electricity supply to network infrastructure - installed at the same time as backhaul upgrades for 5G, or deployment of FTTH. (And unlike copper, fibre can't even power small devices on its own). And I haven't seen announcements of any telcos building hydroelectric power stations anywhere.

Decentralised, blockchain-based edge "fogs" are unlikely to really solve this problem either, even if they also use decentralised, blockchain-based power supply and management.

Now it could be argued that this 0.1-1% of computing workloads will be of such pivotal importance, that they will bring everything else into their orbit and indirect control. Could the "edge" really be the new frontier? 

I think not.

In reality, the reverse is more likely. Either device-based applications will selectively offload certain workloads to the network, or the webscale clouds will distribute certain functions. Yes, there will be some counter-examples, where the network-edge is the control point for certain verticals or applications - I think some security functions make sense, for instance, as well as an evolution of today's CDNs. But will IoT management, or AI, be concentrated in these edge nodes? It seems improbable.

Conclusion & TL;DR

In-network edge-computing architectures, such as MEC, will become more important. There are various interesting use-cases. But despite that, they will struggle to live up to the hype. 

There will be almost no applications that run *only* in the network-edge - it’ll be used just for specific workloads or microservices, as a subset of a broader multi-tier application. The main compute heavy-lifting will be done on-device, or on-cloud. As such, collaboration between edge-compute providers and industry/webscale cloud will be needed, as the network-edge will only be a component in a bigger solution, and will only very rarely be the most important component. 

One thing is definite: mobile operators won’t become distributed quasi-Amazons, running image-processing for all nearby cars or industry 4.0 robots in their networks, linked via 5G. 

Yes, MEC nodes could host Amazon Greengrass or other functions on a wholesale basis, but few developers will want to write directly to telcos' distributed-cloud APIs on a standalone basis, with or without network-slicing or 5G QoS mechanisms.

Indeed, this landscape of compute resource may throw up some unintended consequences. Ironically, it seems more likely that a future car's hefty computer, and abundant local power, could be used to offload tasks from the network, rather than vice versa.

Comments and feedback are very welcome. I'm aware I've made many assumptions here, and will doubtless generate various comments and detailed responses, either on my blog or LinkedIn posts. I haven't seen an "end to end" analysis of compute power before - if there's any tweaks to my back-of-envelope calculations, I'd welcome suggestions. If you'd like to contact me about projects or speaking engagements, I can be reached via information at disruptive-analysis dot com.