
Tuesday, February 05, 2019

3 Emerging Models for Edge-Computing: Single-Network, Interconnected & Federated

Summary

Edge-computing enables applications to access cloud resources with lower latencies, more local control, less load on transport networks and other benefits.

There are 3 main models emerging for organising edge-computing services and infrastructure:
  • Single-Network Telco Edge, where a fixed or mobile operator puts compute resources at its own cell-sites, aggregation points, or fixed-network central offices.
  • Local / Interconnected Datacentre Edge, where an existing or new DC provider puts smaller facilities in tier-2/3 cities or other locations, connected to multiple networks.
  • Federated / Open Edge, where a software player aggregates numerous edge facilities and provides a single mechanism for developers to access them.
These are not 100% mutually-exclusive - various hybrids are possible, as well as "private edge" facilities directly owned by enterprises or large cloud providers. They will also interact or integrate with hyperscale cloud in a variety of ways.

But there is a major issue. All of these will be impacted by even faster-evolving changes in the ways that users access networks and applications, such as "fallback" from 5G to 4G, or switching to WiFi. In other words, the most relevant "edge" will often move or blur. Superficially "good" edge-compute ideas will be forced to play catch-up to deal with the extra network complexity. 
 
(Also - this model excludes the "device edge" - the huge chunk of compute resource held in users' phones, PCs, cars, IoT gateways and other local devices).

Note: this is a long post. Get a coffee. 

There is also an accompanying podcast / audio-track I've recorded on SoundCloud that explains this post if you'd rather listen than read (link)



Background and Overview 

A major area of focus for me in 2019 is edge-computing. It’s a topic I’ve covered in various ways over the last two years or so, especially contrasting the telecom industry’s definitions/views of “in-network” edge with those of enterprise IT and IoT providers. The latter tend to be more focused on “edge datacentres” in “edge markets” [2nd-tier cities], or something more localised still, such as on-premise cloud-connected gateways.

I wrote a detailed post in 2018 (link) about computing power consumption and supply, which looked at the future constraints on edge, and whether it could ever really compete with / substitute for hyperscale cloud (spoiler: it can't at an overall level, as it will only have a small % of the total power).

I’m speaking at or moderating various edge-related events this year, including four global conferences run by data-centre information and event firm BroadGroup (link). The first one, Edge Congress in Amsterdam, was on 31st January, and followed PTC’19 (link) the week before, which also had a lot of edge-related sessions.


(I’m also collaborating with long-time WebRTC buddy Tsahi Levent-Levi [link] to write a ground-breaking paper on the intersection of edge-computing with realtime communications. Contact me for details of participating / sponsoring)


Different drivers, different perspectives

A huge diversity of companies are looking at the edge, including both established large companies and a variety of startups:
  • Mobile operators want to exploit the low latencies & distributed sites of 5G networks, as well as decentralising some of their own (and newly-virtualised) internal network / operational software
  • Fixed and cable operators want to turn central offices and head-ends into local datacentres, and to house their own virtualised systems there too. Many are hybrid fixed/mobile SPs.
  • Long-haul terrestrial and sub-sea fibre providers see opportunities to add new edge data-centre services and locations, e.g. for islands or new national markets. A handful of satellite players are looking at this too.
  • Large data-centre companies are looking to new regional / local markets to differentiate their hosting facilities, reduce long-distance latencies, exploit new subsea fibres and provide space and interconnect to various cloud providers (and telcos).
    At PTC’19 I heard places like Madrid, Fiji, Johannesburg and Minneapolis described as “edge markets”.
  • Hyperscale cloud players are also latency-aware, as well as recognising that some clients have security or regulatory needs for local data-storage. They may use third-party local DCs, build their own (Amazon & Whole Foods sites?) or even deploy on-premise at enterprises (AWS Outposts).
  • Property-type players (eg towercos) see edge-compute as a way to extend their businesses beyond siting radios or network gear.
  • Startups such as Vapor.io, EdgeMicro and EdgeInfra want to offer pre-built micro-DC physical units to many of the above.
  • Other startups want to offer developers convenient (software-based) ways to exploit diverse edge resources without individual negotiations. This includes both federations and software tools for application deployment and management. MobiledgeX and Ori are examples here.
  • Enterprises want a mix of localised low-latency cloud options, either shared or owned/controlled by themselves (and perhaps on-site, essentially Server Room 2.0). They need to connect them to hyperscale cloud(s) and internal resources, especially for new IoT, AI, video and mobility use-cases.
  • Network vendors are interested either in pitching edge-oriented network capabilities (eg segment-routing), or directly integrating extra compute resource into network switches/routers.
  • Others: additional parties interested in edge compute include PaaS providers, security companies, SD-WAN providers, CDN players, neutral-host firms etc
Each of these brings a different definition of edge - but also has a different set of views about networks and access, as well as business models.


Application diversity

Set against this wide array of participants is an even more diverse range of potential applications being considered. They differ in numerous ways too - exact latency needs (<1ms to 100ms+), mobility requirements (eg handoff between edge sites for moving vehicles), type of compute functions used (CPUs, GPUs, storage etc), users with one or multiple access methods, security (physical or logical) and so on.

However, in my view there are two key distinctions to make. These are between:
  • Single-network vs. Multiple-network access: Can the developer accurately predict or control the connection between user and edge? Or are multiple different connection paths more probable? And are certain networks (eg a tier-1 telco's) large enough to warrant individual edge implementations anyway?
  • Single-cloud vs. Multi-cloud: Can all or most of the application's data and workloads be hosted on a single cloud/edge provider's platform? Or are they inherently dispersed among multiple providers (eg content on one, adverts from another, analytics on a third, legacy integration with a fourth / in-house system)?
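As a rough illustration, the toy sketch below maps these two axes onto the three models from the summary. The enum names, the mapping and the example output are my own simplification for this post, not a formal framework or anyone's actual API.

```python
from enum import Enum

class NetworkScope(Enum):
    SINGLE = "single-network"   # access path is known / controlled (eg one telco's contract)
    MULTI = "multi-network"     # users arrive over many telcos, ISPs, Wi-Fi, VPNs

class CloudScope(Enum):
    SINGLE = "single-cloud"     # data and workloads fit on one cloud/edge provider
    MULTI = "multi-cloud"       # components dispersed across several providers

def suggest_edge_model(network: NetworkScope, cloud: CloudScope) -> str:
    """Crude mapping of the two distinctions onto the three emerging edge models."""
    if network is NetworkScope.SINGLE and cloud is CloudScope.SINGLE:
        return "Single-Network Telco Edge (eg in-network MEC for a smart-city contract)"
    if network is NetworkScope.MULTI and cloud is CloudScope.SINGLE:
        return "Local / Interconnected Datacentre Edge (multi-network ingress, one platform)"
    # Anything multi-cloud - and especially multi-network plus multi-cloud - points
    # towards some form of Federated / Open Edge or aggregation layer.
    return "Federated / Open Edge (aggregation / broker layer across sites and networks)"

print(suggest_edge_model(NetworkScope.MULTI, CloudScope.MULTI))
```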
For telcos in particular, there is an important subset of edge applications which definitely are single-network and internal, rather than client-facing: running their own VNFs (virtual network functions), security functions, distributed billing/charging, and managing cloud/virtualised radio networks (C-RAN/vRAN). They also typically have existing relationships with content delivery networks (CDNs), both in-house and third-party.

This "anchor tenant" of on-network, single-telco functions is what is driving bodies like ETSI to link MEC to particular access networks and (largely) individual telcos. Some operators are looking at deploying MEC deep into the network, at individual cell towers or hub sites. Others are looking at less-distributed aggregation tiers, or regional centres.

The question is whether this single-network vision fits well with the broader base of edge-oriented applications, especially for IoT and enterprise.




How common will single-network access be?

The telco edge evolution (whether at region/city-level or down towards cells and broadband-access fibre nodes) is not happening in isolation. A key issue is that wide availability of such edge-cloud services - especially linked to ultra-low-latency 5G networks - will come after the access part of the network gets much more complex.



From a developer perspective, it will often be hard to be certain about a given user’s connectivity path, and therefore which or whose edge facilities to use, and what minimum latency can be relied upon:

  • 5G coverage will be very patchy for several years, and for reliable indoor usage perhaps 10 years or more. Users will regularly fall back to 4G or below, particularly when mobile.
  • Users on smartphones will continue to use 3rd-party Wi-Fi in many locations. PC and tablet users, and many domestic IoT devices, will use Wi-Fi almost exclusively. Most fixed-wireless 5G antennas will be outdoor-mounted, connecting to Wi-Fi for in-building coverage.
  • Users and devices may use VPN security software with unknown egress points (possibly in another country entirely)
  • Not all 5G spectrum bands or operator deployments will offer ultra-low latency, and operators may take different approaches to RAN virtualisation.
  • Increasing numbers of devices will support multi-path connections (eg Multipath TCP on iOS), or have multiple radios (eg cars).
  • Security functions in the network path (eg firewalls) may add latency
  • Growing numbers of roaming, neutral-host and MVNO scenarios involving third-party SPs are emerging. These will add latency, extra network paths and other complexities.
  • eSIM growth may enable more rapid network-switching, or multi-MNO MVNOs like Google Fi.
  • Converged operators will want to share compute facilities between their mobile and fixed networks.

This means that only very tightly-specified “single-network” edge applications make sense, unless there is a good mechanism for peering and interconnect, for instance with some form of “local breakout”.



So for instance, if Telco X operates a smart-city contract connecting municipal vehicles and street lighting, it could offer edge-compute functions, confident that the access paths are well-defined. Similarly it could offer deep in-network CDN functions for its own quad-play streaming, gaming or commerce services. 

But by contrast, an AR game that developers hope will be played by people globally, on phones & PCs, could connect via every telco, ISP & 3rd-party Wi-Fi connection. It will need to be capable of dealing with multiple, shifting access networks. An enterprise whose employees use VPN software on their PCs, or whose vehicles have multi-network SIMs for roaming, may have similar concerns.
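One plausible coping strategy is to stop assuming the access path and probe it at runtime instead. The sketch below is a minimal illustration of that idea, with entirely hypothetical endpoints, ports and thresholds: it estimates round-trip time to a short list of candidate edge and cloud sites, and falls back towards a more central one when the nearest edge is unreachable or misses the latency budget (for example after a drop from 5G to 4G, a switch to third-party Wi-Fi, or a VPN with a distant egress point).

```python
import socket
import time
from typing import Optional, Tuple

# Hypothetical candidate endpoints, ordered from "most edge" to "most central".
CANDIDATES = [
    ("telco-mec.example.net", 443),         # in-network telco MEC (may only be reachable on-net)
    ("metro-edge-dc.example.net", 443),     # interconnected local datacentre
    ("eu-central.cloud.example.net", 443),  # hyperscale regional cloud
]

def measure_rtt(host: str, port: int, timeout: float = 1.0) -> Optional[float]:
    """Rough RTT estimate from TCP connect time, in milliseconds; None if unreachable."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000.0
    except OSError:
        return None  # eg the telco MEC is invisible from third-party Wi-Fi or a VPN egress

def pick_endpoint(budget_ms: float = 20.0) -> Optional[Tuple[str, float]]:
    """Return the most 'local' reachable endpoint that meets the latency budget,
    otherwise fall back to the first reachable (most local) one."""
    fallback = None
    for host, port in CANDIDATES:
        rtt = measure_rtt(host, port)
        if rtt is None:
            continue
        if rtt <= budget_ms:
            return host, rtt
        if fallback is None:
            fallback = (host, rtt)
    return fallback

if __name__ == "__main__":
    print(pick_endpoint())
```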
 

The connected edge



I had a bit of an epiphany while listening to an Equinix presentation at PTC recently. The speaker talked about the “Interconnected Edge”, which I realised is very distinct from this vision of a single-telco edge.

Most of the datacentre industry tries to create facilities with multiple telco connections - ideally sitting on as many fibres as possible. This allows many ingress paths from devices/users, and egress paths to XaaS players or other datacentres. (This is not always possible for the most "remote" edges such as Pacific islands, where a single fibre and satellite backup might be the only things available).



And even for simple applications / websites, there may be multiple components coming from different servers (ads, storage, streaming, analytics, security etc) so the immediate edge needs to connect to *those* services with the easiest path. Often it’s server-to-server latency that’s more important than server-to-device, so things like peering and “carrier density” (ie lots of fibres into the building) make a big difference.

In other words, there are a number of trade-offs here. Typically, greater interconnectedness means more distance/latency from each individual access point (as the facility sits further back in the network, and data may have to transit a mobile core first), but that is set against flexibility elsewhere in the system.

A server sitting underneath a cell-tower, or even in a Wi-Fi access point, will have ultra-low latency. But it will also have low interconnectedness. A security camera might have very fast local image-recognition AI to spot an intruder via edge-compute. But if it needs to match their face against a police database, or cross-check with another camera on a different network, that will take significantly longer.
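A back-of-envelope comparison illustrates the trade-off. The figures below are purely illustrative assumptions, not measurements: a cell-site server might be ~2ms from the device but ~20ms from third-party services reached via the mobile core and peering, while an interconnected metro DC might be ~10ms from the device but only ~2ms from co-located ad, analytics or database servers.

```python
# Illustrative, assumed latency figures (milliseconds) - not measured data.
TOPOLOGIES = {
    "cell-site edge":          {"device_to_edge": 2,  "edge_to_services": 20},
    "interconnected metro DC": {"device_to_edge": 10, "edge_to_services": 2},
}

def request_time(t: dict, service_calls: int) -> float:
    """One device round-trip plus N server-to-server round-trips to other services."""
    return 2 * t["device_to_edge"] + service_calls * 2 * t["edge_to_services"]

for name, t in TOPOLOGIES.items():
    for calls in (0, 1, 3):
        print(f"{name:26s} {calls} service call(s): {request_time(t, calls):5.0f} ms")
```

With no external dependencies the cell-site edge wins comfortably; once each request fans out to two or three other services, the interconnected facility comes out ahead in this toy model.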

But edge datacentres also face problems - they will typically only be in certain places. This might be fine for individual smart-city applications, or localised "multi-cloud" access, but it still isn't great for multinational companies or the game/content app-developers present in 100 countries.


Is edge-aggregation the answer?

The answer seems to be some form of software edge-federation or edge-broking layer, which can tie together a whole set of different edge resources, and hopefully have intelligence to deal with some of the network-access complexity as well.

I've been coming across various companies hoping to take on the role of aggregator, whether that's primarily for federating different telcos' edge networks (eg MobiledgeX), or helping developers deploy to a wider variety of edge-datacentre and other locations (eg Ori). 

I'm expecting this space to become a lot more complex and nuanced - some will focus on being true "horizontal" exchanges / APIs for multi-edge aggregation. The telco ones will focus on aspects like roaming, combined network+MEC quality of service and so on. Others will probably look to combine edge with SD-WAN for maximum resilience and lowest cost.

Yet more - probably including Amazon, Microsoft and other large cloud companies - will instead look to balance between edge vs. centralised cloud for different workloads, using their own partnerships with edge datacentres (perhaps including telcos) and edge-deployment software like AWS Greengrass.

Lastly, we may see the emergence of "neutral-host" networks of edge facilities, not linked to specific telcos, data-centre providers or fibre owners. These could be "open" collaborations, or even decentralised / blockchain-based approaches.
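What such an aggregation layer might look like to a developer is still an open question. Conceptually, it is a broker that takes a workload plus constraints and places it across telco MEC sites, local DCs and hyperscale regions. Below is a minimal, entirely hypothetical sketch of that kind of interface; none of the class names, methods or site names correspond to any real product (MobiledgeX, Ori or otherwise).

```python
from dataclasses import dataclass

@dataclass
class EdgeSite:
    name: str
    kind: str             # "telco-mec", "local-dc" or "hyperscale-region"
    region: str
    typical_rtt_ms: float
    interconnected: bool  # multiple carriers / peering available on-site?

@dataclass
class Workload:
    image: str            # container image to deploy
    max_rtt_ms: float     # latency budget to end users
    needs_multi_cloud: bool
    regions: list         # regions where users are expected

class EdgeBroker:
    """Hypothetical aggregation layer spanning many providers' edge sites."""

    def __init__(self, sites):
        self.sites = sites

    def place(self, wl: Workload):
        chosen = []
        for region in wl.regions:
            candidates = [
                s for s in self.sites
                if s.region == region
                and s.typical_rtt_ms <= wl.max_rtt_ms
                and (s.interconnected or not wl.needs_multi_cloud)
            ]
            if candidates:
                # Prefer the lowest-latency site that satisfies the constraints.
                chosen.append(min(candidates, key=lambda s: s.typical_rtt_ms))
        return chosen

broker = EdgeBroker([
    EdgeSite("operator-x-mec-ams", "telco-mec", "nl", 5, interconnected=False),
    EdgeSite("metro-dc-ams", "local-dc", "nl", 12, interconnected=True),
    EdgeSite("cloud-eu-west", "hyperscale-region", "nl", 35, interconnected=True),
])
print(broker.place(Workload("registry.example/ar-game:1.0", 20, True, ["nl"])))
```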

The "magic bullet" here will be the ability to cope with all the network complexities I mentioned above (which drive access paths and thus latencies), plus having a good geographic footprint of locations and interconnections. 

In a way, this is somewhat similar to the historic CDN model, where Akamai and others grew by placing servers in many ISPs' local networks - but that was more about reducing latency from core-to-edge, rather than device-to-edge, or edge-to-edge.

I doubt that this will resolve to a single monopoly player, or even an oligopoly - there are too many variables, dimensions and local issues / constraints.


 
Summary and conclusions

There are 3 main models emerging for organising edge-computing services and infrastructure:
  • Single-Network Telco Edge
  • Local / Interconnected Datacentre Edge
  • Federated / Open Edge
These will overlap, and hybrids and private/public splits will occur as well.

My current view remains that power constraints mean that in-network [telco-centric] edge cannot ever realistically account for more than 2% of overall global computing workloads, or perhaps 3-5% of public cloud services provision, in volume terms – although pricing & revenue share may be higher for provably lower latencies. Now that is certainly non-trivial, but it’s also not game-changing.

I also expect that in-network edge will be mostly delivered by telcos as wholesale capacity to larger cloud providers, or through edge-aggregation/federation players, rather than as “retail” XaaS sold directly to enterprises or application/IoT developers.

I’m also expecting a lot of telco-edge infrastructure to serve mostly fixed-network edge use-cases, not 5G or 4G mobile ones. 5G needs edge more than edge needs 5G. While there are some early examples of companies deploying mini-datacentres at large cell-tower “hub” sites (eg Vapor.io), other operators are focusing further back in the network, at regional aggregation points or fixed-operator central offices. It is still very early days, however.

The edge datacentre business has a lot of scope to grow, both in terms of networks of micro-datacentres, and in terms of normal-but-small datacentres in tier-2/3/4 cities and towns. However, it too will face complexities relating to multi-access users, and limited footprints across many locations.


The biggest winners will be those able to link together multiple standalone edges into a more cohesive and manageable developer proposition that is both network-aware and cloud-integrated.

The multi-network, multi-cloud edge will be tough to manage, but essential for many applications.

It is doubtful that telco-only edge clouds (solo or federated) can work for the majority of use-cases, although there will be some instances where the tightest latency requirements overlap with the best-defined connectivity models.

I'm tempted to create a new term for these players - we already have a good term for a meeting point of multiple edges: a corner. Remember where you first heard about Corner Computing...


If you are interested in engaging me for private consulting, presentations, webinars, or white papers, please get in touch via information at disruptive-analysis dot com, or my LinkedIn and Twitter

I will be writing a paper soon on "Edge Computing meets Voice & Video Communications" - get in touch if you are interested in sponsoring it. Please also visit deanbubley.com for more examples of my work and coverage.

3 comments:

InfoStack said...

Last June standards were ratified paving the way for SD cards that can store 125 terabytes of information. Let's assume they'll be in the market in 3 years for under $200.

I think your analysis misses the fact that there will be entirely new supply/demand models that will be almost 100% local in nature: to wit smart-retail, smart buildings, tele-medicine, tele-education. No need to up/downstream traffic over distances for a host of reasons.

And these will have nothing to do with carriers; in fact it is almost 100% certain that the carriers WILL NOT participate in these local markets.

Lastly, I think edge is simply the opposite of core; whatever core might be. From a technical perspective, there will always be trade-offs between layers 1, 2 and 3 with regard to storage and processing, depending on density and type of traffic. Core and edge are mutually inter-dependent.

Dean Bubley said...

I mention in the text that this analysis excludes on-device or gateway edge compute.

It's something I covered in depth in my previous post & conference presentations on the overall provision of compute power - basically it will look like a dumb-bell, with a big chunk of processing & storage on-device or nearby, an even larger chunk in hyperscale cloud, and a thin bit in the middle connecting them (ie in-network and small edge DCs).

And I mostly agree that carriers won't touch much on-prem edge compute, except maybe where it's in set-top boxes or managed access gateways.

However, most of that on-prem compute will still use various cloud-based functions as well, eg for analytics, developer tools & 3rd-party apps etc. It won't be purely offline in most cases.
