Speaking Engagements & Private Workshops - Get Dean Bubley to present or chair your event

Need an experienced, provocative & influential telecoms keynote speaker, moderator/chair or workshop facilitator?
To discuss Dean Bubley's appearance at a specific event, contact information AT disruptive-analysis DOT com

Sunday, June 03, 2018

Telecom regulation and blockchain - is #RegTech the killer application?


One of the most interesting developments in telecoms technology for a while occurred this week – India’s telecom regulator TRAI issued a set of draft regulations aimed at combating spam and nuisance calls. (link)

At first glance, you could be forgiven for asking why anti-spam rules could possibly be more important than all the hoopla about 5G, market consolidation, network-slicing and, especially, “digital transformation” or RCS messaging (I jest).

The reason is in the details: TRAI has stipulated that telcos should use blockchain-based technologies to enforce its proposed rules, creating a tamper-proof and encrypted ledger of consent records, given by users for opt-in telemarketing. If the rules translate to reality, this is a major step forward in the commercialisation of distributed ledger technology, and at scale.

"Access Providers shall adopt Distributed Ledger Technology (DLT) with permissioned and private DLT networks for implementation of system, functions and processes as prescribed in Code(s) of Practice: -
(1) to ensure that all necessary regulatory pre-checks are carried out for sending Commercial Communication;
(2) to operate smart contracts among entities for effectively controlling the flow of Commercial Communication;
Access Providers may authorise one or more DLT network operators, as deemed fit, to provide technology solution(s) to all entities to carry out the functions as provided for in these regulations."
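To make the consent-ledger idea concrete, here's a minimal sketch of how a tamper-evident record of telemarketing opt-ins might be hash-chained. The field names and structure are my own illustration, not TRAI's actual schema, and a real permissioned DLT would add signatures, replication and consensus on top:

```python
import hashlib
import json
import time

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 over a canonical (sorted-key) JSON encoding."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_consent(ledger: list, subscriber: str, telemarketer: str, consent: bool) -> None:
    """Append a consent record, chained to the hash of the previous entry."""
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {
        "subscriber": subscriber,      # hypothetical field names
        "telemarketer": telemarketer,
        "consent": consent,
        "timestamp": int(time.time()),
        "prev_hash": prev,
    }
    entry["hash"] = record_hash({k: v for k, v in entry.items() if k != "hash"})
    ledger.append(entry)

def verify(ledger: list) -> bool:
    """Re-derive every hash and check the chain links; any edit breaks it."""
    prev = "0" * 64
    for e in ledger:
        if e["prev_hash"] != prev:
            return False
        if e["hash"] != record_hash({k: v for k, v in e.items() if k != "hash"}):
            return False
        prev = e["hash"]
    return True

ledger = []
append_consent(ledger, "+91-98xxxxxx01", "TM-1234", True)
append_consent(ledger, "+91-98xxxxxx01", "TM-5678", False)
assert verify(ledger)

ledger[0]["consent"] = False  # retrospective tampering with an old record...
assert not verify(ledger)     # ...is detected, because the hash chain no longer matches
```

The point of the chaining is that a telemarketer cannot quietly fabricate consent after the fact: changing any historic record invalidates every subsequent hash.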


But in my view, this could be just the tip of a quite large iceberg. I'm starting to think that regulatory uses for blockchain (especially private/permissioned versions) could be central to the technology's success in telecoms.

Innovation in Regulation Technology, or RegTech, is already a huge domain, especially in sectors like financial services and healthcare. Historic methods for regulatory enforcement, from money-laundering rules to certification of professionals, have often relied on reams of paperwork and cumbersome processes. There is a huge need for automation, better security and authentication, and simpler online access to regulatory resources and approvals.

Obviously, telecoms has itself long had technical means for creating and enforcing rules, from spectrum-monitoring and radio-coverage tools, through automated platforms for telecoms licensing, to software aimed at checking broadband QoS and spotting net-neutrality violations.

But a lot of telecoms rules involve multiple parties (eg user, telco and advertiser here, or multiple telcos with interconnect or wholesale agreements), plus requirements for "credentials", and there are often registries and other databases involved. The whole sphere looks like an archetypal match for the types of capability normally found in blockchains.

In particular, I think there are many potential use-cases for regulators to assist - or keep tabs on - telco activities that relate to regulatory policy. Adding unarguable timestamps to tamper-proof data storage has especially huge potential. Ones that immediately leap out at me include:
  • Number portability databases and porting requests
  • Storage of call detail records, that may be subject to lawful request at a later date
  • Spectrum allocations and permissions, especially for shared, local and dynamic spectrum models.
One other that I think has longer-term potential, but which nobody has talked about yet, is in secure and encrypted storage of network configuration and log files. One of the problems with regulating wholesale interconnect, peering, net neutrality and other rules, is that it is exceptionally hard to prove what happened retrospectively, if someone makes a complaint. This issue will be exacerbated with NFV/SDN, and the move to network slicing, when network configurations will be temporary and highly dynamic.

Given that law-enforcement insists that ISPs retain their users' data records, it doesn't seem unreasonable to retain the ISPs' own information as well - obviously in a form that's secure and encrypted unless needed as evidence in a legal intervention. It could also make a clear distinction between a problem of network failure (or happenstance in the way the maths of contention works) and deliberate actions.
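One simple way to square "secure and encrypted" with later evidential use is a commit-then-reveal pattern: the operator publishes only a salted hash of each configuration snapshot to the shared ledger, and discloses the underlying data only under legal process. A minimal stdlib-only sketch (the snapshot content and workflow here are hypothetical; a real deployment would also encrypt the retained logs and manage keys properly):

```python
import hashlib
import os

def commit(snapshot: bytes, salt: bytes) -> str:
    """Digest published to the ledger; reveals nothing about the log contents."""
    return hashlib.sha256(salt + snapshot).hexdigest()

def verify_disclosure(snapshot: bytes, salt: bytes, commitment: str) -> bool:
    """Later, the operator reveals snapshot + salt; anyone can check that
    it matches exactly what was committed at the time."""
    return commit(snapshot, salt) == commitment

# Operator side, at configuration time:
config_log = b"slice-42: priority=high, peer=AS64500, shaped=false"  # illustrative
salt = os.urandom(16)                 # prevents dictionary-guessing of log contents
published = commit(config_log, salt)  # only this digest goes on the ledger

# Regulator side, investigating a complaint months later:
assert verify_disclosure(config_log, salt, published)       # genuine record checks out
assert not verify_disclosure(b"tampered", salt, published)  # an altered record fails
```

The timestamped commitment proves what the configuration was at the time of an alleged incident, without the operator having to expose commercially sensitive network data up front.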

The Net Neutrality angle here is particularly potent - it would allow any egregious behaviour to be dealt with post-hoc. Most anti-neutrality lobbyists dislike ex-ante regulation, but few could argue against allowing competition authorities or others to investigate alleged infringements that occurred deep inside the network's configurations and policies.

I'm just musing here, but I definitely feel that there's a lot more to telecom #RegTech using #blockchain than just tracking spam calls and SMS. 

This is one of the topics that will get discussed at my upcoming workshop on telecoms blockchain, on July 3 in London. Full details are here (link) or email information AT disruptive-analysis dot COM



Saturday, March 17, 2018

MEC and network-edge computing is overhyped and underpowered

I keep hearing that Edge Computing is the next big thing - and specifically, in-network edge computing models such as MEC. (See here for a list of all the different types of "edge"). 

I hear it from network vendors, telcos, some consultants, blockchain-based startups and others. But, oddly, very rarely from developers of applications or devices.

My view is that it's important, but it's also being overhyped. Network-edge computing will only ever be a small slice of the overall cloud and computing domain. And because it's small, it will likely be an addition to (and integrated with) web-scale cloud platforms. We are very unlikely to see edge-first providers become "the next Amazon AWS, only distributed".

Why do I think it will be small? Because I've been looking at it through a different lens to most: power. It's a metric used by those at the top and bottom ends of the computing industry, but only rarely by those in the middle, such as network owners. This means they're ignoring a couple of orders of magnitude.

(This is a long post. You might want to grab a coffee first....)


How many zeroes?

Cloud computing involves huge numbers. There are many metrics that you can use - numbers of servers, processors, standard-sized equipment racks, floorspace and so on. But the figure that gets used most among data-centre folk is probably power consumption in watts, or more commonly here kW, MW & GW. (Yes, it's a lower-case k for kilo). 

Power is useful, as it covers the needs not just of compute CPUs and GPUs, but also the storage and networking elements in data centres. It's not perfect, but given that organising and analysing information is ultimately about energy, it's a valid top-level metric. [Hey, I've got a degree in physics, not engineering. Helloooo, thermodynamics & entropy!]

Roughly speaking, the world's big data centres have a total power consumption of about 100GW. A typical one might have a capacity of 30MW, but a number of the world's largest data centres already use over 100MW individually, and there are enormous plans for locations with 600MW or even 1GW (link). No, they're not all running at full power, all the time - but that's true of any computing platform.

This growth is partly driven by an increase in the number of servers and equipment racks needed (hence growing floor-space for these buildings). But it also reflects power consumption for each server, as chips get more powerful. Most equipment racks use 3-5kW of power, but some can go as high as 20kW if that power - and cooling - is available.

So, powering "the cloud" needs about 100GW, a figure that is continuing to grow rapidly. We are also seeing a rise in smaller, regional data-centres in second- and third-tier cities. Companies and governments often have private data-centres as well. These vary quite a bit, but 1-5MW is a reasonable benchmark.


How many decimal places?

At the other end of the computing power spectrum, are devices, and the components inside them. Especially for battery-powered devices, managing the power-budget down to watts or milliwatts is critical. This is the "device edge".

  • Sensors might use less than 10mW when idle & 100mW when actively processing data
  • A Raspberry Pi might use 0.5W
  • A smartphone processor might use 1-3W
  • An IoT gateway (controlling various local devices) might be 5-10W
  • A laptop might draw 50W
  • A decent crypto mining rig might use 1kW

New innovations are pushing the boundaries. Some researchers are working on sub-milliwatt vision processors (link). ARM has designs able to run machine-learning algorithms on very low-powered devices.

But perhaps the most interesting "device edge" is the future top-end Nvidia Pegasus board, aimed at self-driving vehicles. It is a 500W supercomputer. That might sound a lot, but it's still less than 1% of the engine power on most cars. A top-end Tesla P100D puts over 500kW to the wheels in "ludicrous mode", or 1000x that figure. Cars' aircon might use 2kW, to give context.

Of course, all of these device-edge computing platforms are numerous. There are billions of phones, and hundreds of millions of vehicles and PCs. Potentially, we'll get 10s of billions of sensors. Most aren't coordinated, though. 


And in the middle?

So we have milliwatts at one end of distributed computing, and gigawatts at the other, from device to cloud.

So what about the middle, where the network lives?

There are many companies talking about MEC (multi-access edge computing) and fog-computing products, with servers designed to run at cellular base stations, network aggregation points, and also in fixed-network nodes and elsewhere. 

Some are "micro-data-centres" capable of holding a few racks of servers near the largest cell towers. The very largest might be 50kW shipping-container-sized units, but those will be pretty rare and will obviously need a dedicated power supply.

It's worth noting here that a typical macro-cell tower might have a power supply of 1-2kW. So if we consider that maybe 10% could be dedicated to a compute platform rather than the radio (a generous assumption), we get 100-200W, in theory. Or in other words, a cell tower edge-node will be less than half as powerful as a single car's computer.

Others are smaller server units, intended to hook into cellular small-cells, home gateways, cable street-side cabinets or enterprise "white boxes". For these, 10-30W is more reasonable.




Imagine the year 2023

Let's think 5 years ahead. By then, there could probably be 150GW of large-scale data centres, plus a decent number of midsize regional data-centres, plus private enterprise facilities.

And we could have 10 billion phones, PCs, tablets & other small end-points contributing to a distributed edge, although obviously they will spend a lot of time in idle-mode. We might also have 10 million almost-autonomous vehicles, with a lot of compute, even if they're not fully self-driving. 

Now, imagine we have a very bullish 10 million "deep" network-compute nodes, at cell sites large and small, built into WiFi APs or controllers, and perhaps in cable/fixed streetside cabinets. They will likely have power ratings between 10W and 300W, although the largest will be few in number. Choose 100W on average, for a simpler calculation. (Frankly, this is a generous forecast, but let's run with it for now.)

And let's add in 20,000 container-sized 50kW units, or repurposed central-offices-as-datacentres, as well. (Also generous)

In other words, we might end up with:

150GW large data centres
50GW regional and corporate data centres
20,000x 50kW = 1GW big/aggregation-point "network-edge"
10m x 100W = 1GW "deep" network-edge nodes
1bn x 50W = 50GW of PCs
10bn x 1W = 10GW "small" device edge compute nodes
10m x 500W = 5GW of in-vehicle compute nodes
10bn x 100mW = 1GW of sensors & low-end devices
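As a sanity-check, those rough figures can be added up directly. A quick sketch using the same round numbers as the list above:

```python
# Back-of-envelope totals for the 2023 scenario, all figures in watts
GW, kW = 1e9, 1e3

capacity = {
    "large data centres":       150 * GW,
    "regional/corporate DCs":    50 * GW,
    "big network-edge":          20_000 * 50 * kW,  # container units: ~1 GW
    "deep network-edge":         10e6 * 100,        # cell-site nodes: ~1 GW
    "PCs":                       1e9 * 50,          # ~50 GW
    "small devices":             10e9 * 1,          # phones etc: ~10 GW
    "in-vehicle compute":        10e6 * 500,        # ~5 GW
    "sensors":                   10e9 * 0.1,        # ~1 GW
}

total = sum(capacity.values())
network_edge = capacity["big network-edge"] + capacity["deep network-edge"]

print(f"total: {total / GW:.0f} GW")                      # 268 GW
print(f"network-edge share: {network_edge / total:.2%}")  # ~0.75%
```

Even with these deliberately generous assumptions, the in-network edge comes out at roughly 2GW of a ~268GW total - under 1%.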

Now admittedly this is a very crude analysis. And a lot of devices will be running idle most of the time, and may need to offload functions to save battery power. Laptops are often switched off entirely. But equally, network-edge computers won't be running at 100%, 24x7 either.


The 1% edge

So at a rough, order-of-magnitude level, we can see that the total realistic "network edge", with optimistic assumptions, will account for less than 1% of total aggregate compute capability. And with more pessimistic assumptions, it might easily be just 0.1%. 

Any more will simply not be possible to power, unless there are large-scale upgrades to the electricity supply to network infrastructure - installed at the same time as backhaul upgrades for 5G, or deployment of FTTH. (And unlike copper, fibre can't even power small devices on its own.) And I haven't seen announcements of any telcos building hydroelectric power stations anywhere.

Decentralised, blockchain-based edge "fogs" are unlikely to really solve this problem either, even if they also use decentralised, blockchain-based power supply and management.

Now it could be argued that this 0.1-1% of computing workloads will be of such pivotal importance, that they will bring everything else into their orbit and indirect control. Could the "edge" really be the new frontier? 

I think not.

In reality, the reverse is more likely. Either device-based applications will selectively offload certain workloads to the network, or the webscale clouds will distribute certain functions. Yes, there will be some counter-examples, where the network-edge is the control point for certain verticals or applications - I think some security functions make sense, for instance, as well as an evolution of today's CDNs. But will IoT management, or AI, be concentrated in these edge nodes? It seems improbable.


Conclusion & TL;DR

In-network edge-computing architectures, such as MEC, will become more important. There are various interesting use-cases. But despite that, they will struggle to live up to the hype. 

There will be almost no applications that run *only* in the network-edge - it’ll be used just for specific workloads or microservices, as a subset of a broader multi-tier application. The main compute heavy-lifting will be done on-device, or on-cloud. As such, collaboration between edge-compute providers and industry/webscale cloud will be needed, as the network-edge will only be a component in a bigger solution, and will only very rarely be the most important component. 

One thing is definite: mobile operators won’t become distributed quasi-Amazons, running image-processing for all nearby cars or industry 4.0 robots in their networks, linked via 5G. 

Yes, MEC nodes could host Amazon Greengrass or other functions on a wholesale basis, but few developers will want to write directly to telcos' distributed-cloud APIs on a standalone basis, with or without network-slicing or 5G QoS mechanisms.

Indeed, this landscape of compute resource may throw up some unintended consequences. Ironically, it seems more likely that a future car's hefty computer, and abundant local power, could be used to offload tasks from the network, rather than vice versa.


Comments and feedback are very welcome. I'm aware I've made many assumptions here, and will doubtless generate various comments and detailed responses, either on my blog or LinkedIn posts. I haven't seen an "end to end" analysis of compute power before - if there's any tweaks to my back-of-envelope calculations, I'd welcome suggestions. If you'd like to contact me about projects or speaking engagements, I can be reached via information at disruptive-analysis dot com.