

Wednesday, September 08, 2021

Drawing flawed conclusions from public misconceptions about wireless

(Cross-posted from my LinkedIn Newsletter - see original + comment thread here)

In the last couple of weeks, I’ve come across several clear examples of general confusion about connectivity and wireless technologies – including among smart and otherwise tech-savvy people.

  • A recent survey came up with the remarkable result that over a million people in the UK think they already have “satellite broadband”. The real number is likely under 100k. But many still associate the telecom brand Sky with its early involvement with satellite TV. (Expect Dish to face the same issue in the US).
  • On a client workshop discussing future devices, a user-interface expert referred to “Wi-Fi towers”, rather than mobile/cellular towers. I've also heard someone talk about "satellite Wi-Fi" when referring to things like LEO constellations.
  • A friend posted a photo of mobile antennas in London, in black enclosures to match the structure they were mounted on. One comment was that they were “definitely 5G” with no explanation why they distinguished them from 4G (or indeed, multi-radio RAN units as I suspect they were). Another confidently asserted they were definitely “boosters”, whatever that means.
  • A fascinating Nokia-produced podcast, with a visionary from Disney, covered a huge amount about AR/VR, branding and new experiences. The only problem was the assertion that this would all depend on 5G – even indoors on the sofa, where we can expect essentially all headsets and most smartphones to be connected to Wi-Fi.
  • Another podcast referenced Mavenir's acquisition of cPaaS provider Telestax, with the farcical suggestion that it tied in with B2B uses of 5G. Instead, it's more about platforms for enterprise messaging and calling. Getting an automated dentist-appointment reminder or automating a call-centre process doesn't depend on 5G (or any other G, or even wireless).
  • I've lost count of the people who think 5G enables 1 millisecond latencies everywhere.
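That last misconception collides with basic physics: in optical fibre, signals travel at roughly two-thirds the vacuum speed of light, so a 1ms round-trip budget caps the server at around 100km away - before any radio access, queueing or processing delay is even counted. A back-of-envelope sketch (all figures approximate):

```python
# Back-of-envelope check on the "1ms everywhere" claim.
# Signals in optical fibre travel at roughly 2/3 the vacuum speed of light.
SPEED_IN_FIBRE_KM_PER_MS = 300_000 / 1000 * (2 / 3)  # ~200 km per ms

def max_server_distance_km(rtt_budget_ms: float, processing_ms: float = 0.0) -> float:
    """Furthest a server can be for a given round-trip budget,
    ignoring radio access, queueing and retransmissions."""
    propagation_ms = rtt_budget_ms - processing_ms
    # A round trip covers the distance twice, hence the division by 2.
    return max(propagation_ms, 0.0) * SPEED_IN_FIBRE_KM_PER_MS / 2

print(round(max_server_distance_km(1.0)))   # ~100 km with zero processing time
print(round(max_server_distance_km(10.0)))  # ~1000 km
```

In other words, "1ms everywhere" would require a server within about 100km of every user - and that is before the air interface, which alone typically consumes most of that budget.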

At one level, we can shrug and say this is normal. People often fail to grasp distinctions between categories of similar things that are obvious (and important) to the experts involved in their production or classification.

 

Source: https://pixabay.com/users/peterdargatz-5783/

How many people confuse bulldozers and excavators, a flan vs. a quiche, or even a spider and an insect? Yet we don’t pay much attention to the exasperated sighs and teeth-grinding of civil engineers, chefs or arachnologists. We in the industry don’t help much either – how many Wi-Fi SSIDs are named “5G” instead of “5GHz”?

Yet for connectivity, these distinctions do matter in many real ways. They can lead to poor decision-making, flawed regulation, misled investors and wasted effort. In some cases there is real, physical harm too – think about all the crazy conspiracy theories about 5G (especially "60GHz mmWave 5G" - which doesn't even exist yet), or previously Wi-Fi.

Think too about the huge hype around 5G from politicians – despite many of the use-cases either working perfectly well on older 4G, or in reality being more likely to use fibre or Wi-Fi connections. That can feed through to poor policy on spectrum, competition – and as seen in many places recently, vendor diversification rules which ignore the vibrant ecosystem of indoor and private cellular suppliers.

Think too about the ludicrous assertions that LEO satellite constellations like SpaceX’s Starlink could replace normal home broadband or terrestrial mobile, despite the real practicalities meaning endpoint numbers will be 100x fewer, even with optimistic projections.

This all puts a new angle on a common refrain in telecoms: “users don’t care what network they’re connected to”. In reality, this could be more accurately rephrased as “users don’t understand what network they’re connected to… although they really should”.

This also applies to the myth of "seamless" interconnection between different technologies, such as Wi-Fi and 5G networks. The border (i.e. the seam) is hugely important: it can change the speed, cost, ownership, security, privacy and predictability of the connection. Not just users, but also application and device developers need to understand this - and if possible, control it. Frictionless can be OK. Seamless is useless, or worse.

What should be our practical steps to deal with this? Realistically, we're not going to get the population to take "Wireless 101" courses, even if we could agree amongst ourselves what to tell them. We're certainly not going to give people a grasp of radio propagation through walls, nor ITU IMT-Advanced definitions and how that relates to "5G".

But on a more mundane level, there are some concrete recommendations we can follow:

  • Use generic terms such as "advanced connectivity" without specifying 5G, Wi-Fi or whatever, wherever possible. At least that's relatively accurate.
  • Ignore any surveys of the general public about wireless technology. Assume that 90% of people won't understand the questions, and the other 10% will lie. Actually, ignore most surveys of the industry as well - most have appallingly biased samples, usually over-represented by people trying to sell things.
  • Don't repost, retweet or otherwise circulate hyped-up articles or comments. If someone claims that $X Trillion will be generated by 5G, ask if they've looked into what the baseline would be for 4G, and what the assumptions and sensitivities are.
  • I'll be bad at this myself, but we should try to gently point out to people they're wrong, rather than either shrug-and-ignore, or ridicule-and-point. If a politician or marketer or broadcaster talks about 5G or Wi-Fi or satellite with clear factual errors, point it out online, or in person.
  • Ask open-ended questions such as "why do you think satellite broadband can really do that?" or "have you considered how that would work indoors?" and see if people have actually given it any real thought.
  • Don't let your boss or your clients get away with these misconceptions, even if you think correcting them could cause a negative reaction. Don't be a yes-person. (If you need to, let me know & I can debunk their claims for you. I'll probably enjoy it too much though....)
  • Do NOT hire clueless "content marketing" people to write gibberish about "Why Tech X will Change the World"
  • Watch out for logical fallacies like "appeal to authority". There's no shortage of very senior and well-known people spouting the type of nonsense I describe here.
  • Run internal training sessions on "myth vs. reality" about wireless and telecoms. Make them fun.

I don't know whether this campaign to improve genuine understanding (and a bit of skepticism of hyperbole) will pay off. But I think it's important to try. Feel free to add other examples or suggestions in the comments! Also, please subscribe to this LinkedIn newsletter & follow @disruptivedean on Twitter.

(And yes, that's an excavator in the image above).

#5G #WiFi #mobile #wireless #satellite #broadband

Monday, January 11, 2021

The Myth of "Always Best Connected"

 (This was originally posted as a LinkedIn Newsletter article. See this link, read the comment thread, and please subscribe)

It Was the Best of Times, it Was the Worst of Times

One of the most ludicrous phrases in telecoms is "Always Best Connected", or ABC. It is typically used by an operator, network vendor or standards organisation attempting to glue together cellular and Wi-Fi connections. It's a term that pretends some sort of core-network function can automatically and optimally switch a user between wireless networks, without them caring - or even knowing - that it's happening.

Often, it's used together with the equally-stupid term "seamless handover", and perhaps claims that applications are "network agnostic" or that it doesn't matter what technology or network is used, as long as the user can "get connected". Often, articles or papers will go on to describe all Wi-Fi usage on devices as "offload" from cellular (it isn't - perhaps 5% of Wi-Fi traffic from phones is genuine offload).

There's been a long succession of proposed technologies and architectures, mostly from the 3GPP and cellular industry, keen to embrace but downplay Wi-Fi as some sort of secondary access mechanism. Acronyms abound - UMA, GAN, IWLAN, ANDSF, ATSSS, HetNets and so on. There have been attempts to allow a core network to switch a device's Wi-Fi radio on/off, and even hide the Wi-Fi logo so the user doesn't realise that's being used. It's all been a transparent and cynical attempt to sideline Wi-Fi - and users' independent choice of connection options - in the name of so-called "convergence". Pretty much all of these have been useless (or worse) except in very narrow circumstances.

To be fair, accurate and genuine descriptions - let's say "Rarely Worst-Connected" or "Usually Good-Enough Connected" or "You'll Take What Connection We Give You & Shut Up" - probably don't have the same marketing appeal.

Who's Better, Who's Best?

The problem is that there is no singular definition of "best". There are numerous possible criteria, many of which are heavily context-dependent.

Which "best" is being determined?

  • Highest connection speed (average, or instantaneous?)
  • Lowest latency & jitter
  • Lowest power consumption (including network, device and cloud)
  • Highest security
  • Highest visibility and control
  • Lowest cost (however defined)
  • Greatest privacy
  • Best coverage / lowest risk of drops while moving around
  • Highest redundancy (which might mean 2+ independent connections)
  • Connection to the public Internet vs. an edge server
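One way to see why "best" is ill-defined: any automated chooser must collapse these criteria into a single score, and the weights differ per user, per application and per moment. A toy sketch - all names, numbers and weights below are invented for illustration, not drawn from any real system:

```python
# Illustrative only: there is no single "best" connection, just a score
# against whichever criteria a given stakeholder chooses to weight.

def score(conn: dict, weights: dict) -> float:
    """Higher is better. Speed is rewarded; latency, cost and power penalised."""
    return (weights.get("speed", 0) * conn["speed_mbps"]
            - weights.get("latency", 0) * conn["latency_ms"]
            - weights.get("cost", 0) * conn["cost_per_gb"]
            - weights.get("power", 0) * conn["power_mw"])

candidates = [
    # Congested home Wi-Fi: slow but free and low-latency.
    {"name": "wifi",     "speed_mbps": 40,  "latency_ms": 12, "cost_per_gb": 0.0, "power_mw": 300},
    # Fast cellular, but metered and with higher latency.
    {"name": "cellular", "speed_mbps": 200, "latency_ms": 25, "cost_per_gb": 2.0, "power_mw": 700},
]

# A gamer weights latency heavily; a bulk downloader weights speed and cost.
gamer      = {"speed": 0.1, "latency": 2.0, "cost": 1.0,  "power": 0.0}
downloader = {"speed": 0.5, "latency": 0.1, "cost": 20.0, "power": 0.0}

best_for = lambda w: max(candidates, key=lambda c: score(c, w))["name"]
print(best_for(gamer), best_for(downloader))  # wifi cellular
```

Same two networks, two different "bests" - and that is before the other stakeholders listed below get a vote.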

In most cases involving smartphones, the basic definition of "best" is "enough speed and reliability so I can use my Internet / cloud application with OK performance, without costing me any extra money or inconvenience". Yet people and applications are becoming more discerning, and the network is unaware of important contextual information.

For instance, someone with flatrate data may view "best" very differently to someone with a limited data quota. Someone in a vehicle at traffic lights may have a different connection preference to someone sitting on the sofa at home. Someone playing a fast-paced game has a different best to someone downloading a software update. A user on a network with non-neutral policies, or one which collects and sells data on usage patterns, may want to use alternatives where possible.

In an era of private cellular, IoT, multiple concurrent applications, encryption, cloud/edge computing and rising security and privacy concerns, all this gets even more complex.

In addition to a lack of a single objective "best", there are many stakeholders, each of which may have their own view of what is "best", according to their particular priorities.

  • The user
  • The application developer
  • The network operator(s)
  • The user's employer or parents
  • The building / venue owner
  • The device or OS vendor
  • A third-party connection management provider (eg SD-WAN vendor)
  • The government

On some occasions, all these different "bests" will align. But on others, there will be stark divergence, especially where the stakeholders have access to different options for connectivity. A mobile phone network won't know that the user has access to an airport lounge's premium Wi-Fi, because of their frequent flyer status. A video-streaming app can't work out whether 5G or Wi-Fi will route to a closer, lower-power edge server.

So who or what oversees these conflicts and makes a final decision on which connection (or, increasingly, connections plural) is chosen? Who's the ultimate arbiter - and what do the other stakeholders do about it?

This problem isn't unique to network connectivity - it's true for transport as well. I live in London, and if I want to get from my home to somewhere else, I have lots of "best" options. Tube, bus, drive, taxi, walk, cycle and so on. Do I want to get there via the fastest route? Cheapest? Least polluting? Easiest for social-distancing? Have a chance to listen to a podcast on the way? If I want to put the best smile on the most people's faces, maybe I should go by camel or unicycle? And what's best for the city's air, Transport for London's finances, other travellers' convenience, or whoever I'm meeting (probably not the unicycle)?

 



There are multiple apps that give me all the options, and define preferences and constraints. The same is true for device operating systems, or connection-management software tools.

Hit Me With Your Best Shot

There are also all sorts of weird possible effects where "application-aware networks" end up in battle with "network-aware applications". Many applications are designed to work differently on different networks - perhaps "only auto-download video on Wi-Fi" or "ask the user before software updates download over metered connections". Some might try to work out the user's preferences intelligently, and compress / cache / adjust the flow when they appear to be on cellular, or uprate video when the user is home - or perhaps casting content to a larger screen. The network has little grasp of true context or user/developer desire and preferences.
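The app-side half of that battle can be sketched very simply. The connection metadata below (metered flag, access type) is hypothetical - real platforms expose it through their own connectivity APIs, and in different shapes:

```python
# Sketch of app-side "network-aware" policy, as described above.
# All field names and task names are invented for illustration.

def should_autodownload(task: str, conn: dict) -> bool:
    """Decide whether a background transfer should run on this connection."""
    policy = {
        "video_preload":   not conn["is_metered"],   # unmetered (e.g. Wi-Fi) only
        "software_update": not conn["is_metered"] or conn["user_approved_metered"],
        "message_sync":    True,                     # small payload, always allowed
    }
    return policy[task]

home_wifi = {"type": "wifi",     "is_metered": False, "user_approved_metered": False}
cellular  = {"type": "cellular", "is_metered": True,  "user_approved_metered": False}

print(should_autodownload("video_preload", home_wifi))  # True
print(should_autodownload("video_preload", cellular))   # False
```

Note what the network never sees here: the user's quota, their approval settings, or which task is in flight - exactly the contextual gap described above.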

Networks might attempt to treat a given application, user or traffic flow differently - perhaps giving it priority, or slowing or blocking it, or assigning it to a particular "slice". The application on the other hand might try to second-guess or game the network - either by spoofing another application's signature, or just using heuristics to reverse-engineer any "policy" or "optimisation" that might get applied.

You're My Best Friend

So what's the answer? How can the connectivity for a device or application be optimised?

There's no simple answer here, given the number of parameters discussed. But some general outlines can be created.

  • Firstly, there need to be multiple connections available, and ways to choose, switch and arbitrage between them - or bond them together.
  • The operating system and radios / wired connections of the device should allow the user (or apps) to know what's available, with which characteristics - and any heuristics that can be deduced from current and previous behaviour.
  • The user or device-owner needs to know "who or what is in charge of connections" and be able to delegate and switch that decision function when desired. It might be outsourced to their MNO, or their device supplier, or a third party. Or it could be that each application gets to choose its own connection.
  • As a default, the user should always be aware of any automated changes - and be given the option to disable them. These should not be "seamless" but "frictionless" or low-friction. (Seams are important. They're there for a reason. Anyone disagreeing with this statement must post a picture of themselves wearing a seamless Lycra all-in-one along with their comment).
  • Connectivity providers (whether SPs or privately-owned) should provide rich status information about their services - expected/guaranteed speed & latency, ownership, pricing, congestion, the nature of any data-collection or traffic inspection practices, and so on. This will be useful as input to the decision engines. Over time, it will be good to standardise this information. (Governments and policymakers - take note as well)
  • We can expect connectivity decisions to be partly driven by external context - location, movement, awareness of indoor/outdoor situation, environment (eg home, work, travelling, roaming), use of accessories like headphones or displays, and so on.
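The "rich status information" point might look something like a machine-readable connection descriptor that decision engines (in the OS, an app, or a delegated manager) could compare. Nothing like this is standardised today; every field name below is invented for illustration:

```python
# Hypothetical "connection descriptor" a connectivity provider might publish.
# No such standard exists - this is a sketch of what one could contain.
from dataclasses import dataclass, asdict
import json

@dataclass
class ConnectionDescriptor:
    provider: str
    access_type: str              # "wifi", "cellular", "satellite", ...
    expected_downlink_mbps: float
    expected_latency_ms: float
    price_per_gb: float           # 0.0 for flat-rate / free access
    collects_usage_data: bool     # input to privacy-weighted decisions
    traffic_inspection: bool

lounge_wifi = ConnectionDescriptor(
    provider="AirportLounge", access_type="wifi",
    expected_downlink_mbps=80, expected_latency_ms=15,
    price_per_gb=0.0, collects_usage_data=True, traffic_inspection=False,
)

# Serialised, this is the kind of payload a decision engine could ingest.
print(json.dumps(asdict(lounge_wifi), indent=2))
```

Standardising even a handful of such fields would let third-party connection managers compare options from providers they have no commercial relationship with.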

Going forward, we can expect wireless devices to have some form of SD-WAN type control function. Using technologies such as multipath TCP, it will become easier to use multiple simultaneous connections - perhaps dedicated some to specific applications, or bonding them together. For security and privacy, the software may send packets via diverse routes, stopping any individual network monitoring function from seeing the entire flow.
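Multipath TCP is already reachable from ordinary application code on some platforms - Linux kernels from 5.6 expose it via the IPPROTO_MPTCP protocol constant (surfaced in Python 3.10+). A hedged sketch that requests MPTCP where available and falls back to plain TCP elsewhere:

```python
# Sketch: request a multipath-TCP socket where the platform supports it
# (Linux 5.6+ via IPPROTO_MPTCP), falling back to ordinary TCP elsewhere.
import socket

def open_stream_socket() -> socket.socket:
    mptcp = getattr(socket, "IPPROTO_MPTCP", None)  # present on Python 3.10+/Linux
    if mptcp is not None:
        try:
            return socket.socket(socket.AF_INET, socket.SOCK_STREAM, mptcp)
        except OSError:
            pass  # kernel built without MPTCP support - fall back to TCP
    return socket.socket(socket.AF_INET, socket.SOCK_STREAM)

s = open_stream_socket()
print(s.type)  # a stream socket either way; path management happens below the API
s.close()
```

Which subflows actually get used (cellular, Wi-Fi, both) is then a path-manager decision - which brings back exactly the "who is in charge" question above.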

Growing numbers of devices will have eSIM capability, allowing new network identities / owners to be added. Some may have 2+ cellular radios, as well as Wi-Fi (again, perhaps 2+ independent connections), USB and maybe in future satellite or other options as well.

Add in the potential for Free 5G (link), beamforming, private 5G, local-licensed spectrum WiFi, relaying & assorted other upcoming innovations to add even more layers here.

The bottom line is that "best connected" will become even more mythical in future than it already is. But there will be more options - and more tools - to try to optimise it, based on a dynamic and complex set of variables - especially when going beyond connectivity towards overall "quality of experience" metrics spanning eyeball-to-cloud. There are likely to be plenty of opportunities for AI, user-experience designers, standards bodies and numerous others.

But (with apologies to Tina Turner), users should always be wary of any software or service provider that claims to be "Simply the Best".

If you've enjoyed this article, please sign up for my LinkedIn Newsletter (link). Please also reach out to me for advisory workshops, consulting projects, speaking slots etc.

#5G #WiFi #cellular #mobile #telecoms #satellite #wireless #smartphones #connectionmanagement

Tuesday, February 05, 2019

3 Emerging Models for Edge-Computing: Single-Network, Interconnected & Federated

Summary

Edge-computing enables applications to access cloud resources with lower latencies, more local control, less load on transport networks and other benefits.

There are 3 main models emerging for organising edge-computing services and infrastructure:
  • Single-Network Telco Edge, where a fixed or mobile operator puts compute resources at its own cell-sites, aggregation points, or fixed-network central offices.
  • Local / Interconnected Datacentre Edge, where an existing or new DC provider puts smaller facilities in tier-2/3 cities or other locations, connected to multiple networks.
  • Federated / Open Edge, where a software player aggregates numerous edge facilities and provides a single mechanism for developers to access them.
These are not 100% mutually-exclusive - various hybrids are possible, as well as "private edge" facilities directly owned by enterprises or large cloud providers. They will also interact or integrate with hyperscale cloud in a variety of ways.

But there is a major issue. All of these will be impacted by even faster-evolving changes in the ways that users access networks and applications, such as "fallback" from 5G to 4G, or switching to WiFi. In other words, the most relevant "edge" will often move or blur. Superficially "good" edge-compute ideas will be forced to play catch-up to deal with the extra network complexity. 
 
(Also - this model excludes the "device edge" - the huge chunk of compute resource held in users' phones, PCs, cars, IoT gateways and other local devices).

Note: this is a long post. Get a coffee. 

There is also an accompanying podcast / audio-track I've recorded on SoundCloud that explains this post if you'd rather listen than read (link)



Background and Overview 

A major area of focus for me in 2019 is edge-computing. It’s a topic I’ve covered in various ways over the last two years or so, especially contrasting the telecom industry’s definitions/views of “in-network” edge with those of enterprise IT and IoT providers. The latter tend to be more focused on “edge datacentres” in “edge markets” [2nd-tier cities], or more-localised still, such as on-premise cloud-connected gateways.

I wrote a detailed post in 2018 (link) about computing power consumption and supply, which looked at the future constraints on edge, and whether it could ever really compete with / substitute for hyperscale cloud (spoiler: it can't at an overall level, as it will only have a small % of the total power).

I’m speaking at or moderating various edge-related events this year, including four global conferences run by data-centre information and event firm BroadGroup (link). The first one, Edge Congress in Amsterdam, was on 31st January, and followed PTC’19 (link) the week before, which also had a lot of edge-related sessions.


(I’m also collaborating with long-time WebRTC buddy Tsahi Levent-Levi [link] to write a ground-breaking paper on the intersection of edge-computing with realtime communications. Contact me for details of participating / sponsoring)


Different drivers, different perspectives

A huge diversity of companies are looking at the edge, including both established large companies and a variety of startups:
  • Mobile operators want to exploit the low latencies & distributed sites of 5G networks, as well as decentralising some of their own (and newly-virtualised) internal network / operational software
  • Fixed and cable operators want to turn central offices and head-ends into local datacentres - and also house their own virtualised systems too. Many are hybrid fixed/mobile SPs.
  • Long-haul terrestrial and sub-sea fibre providers see opportunities to add new edge data-centre services and locations, e.g. for islands or new national markets. A handful of satellite players are looking at this too.
  • Large data-centre companies are looking to new regional / local markets to differentiate their hosting facilities, reduce long-distance latencies, exploit new subsea fibres and provide space and interconnect to various cloud providers (and telcos).
    At PTC’19 I heard places like Madrid, Fiji, Johannesburg and Minneapolis described as “edge markets”.
  • Hyperscale cloud players are also latency-aware, as well as recognising that some clients have security or regulatory need for local data-storage. They may use third-party local DCs, build their own (Amazon & Whole Food sites?) or even deploy on-premise at enterprises (Amazon Outposts)
  • Property-type players (eg towercos) see edge-compute as a way to extend their businesses beyond siting radios or network gear.
  • Startups want to offer micro-DCs to many of the above as pre-built physical units, such as Vapor.io, EdgeMicro and EdgeInfra.
  • Other startups want to offer developers convenient (software-based) ways to exploit diverse edge resources without individual negotiations. This includes both federations, or software tools for application deployment and management. MobiledgeX and Ori are examples here.
  • Enterprises want a mix of localised low-latency cloud options, either shared or owned/controlled by themselves (and perhaps on-site, essentially Server Room 2.0). They need to connect them to hyperscale cloud(s) and internal resources, especially for new IoT, AI, video and mobility use-cases.
  • Network vendors are interested either in pitching edge-oriented network capabilities (eg segment-routing), or directly integrating extra compute resource into network switches/routers.
  • Others: additional parties interested in edge compute include PaaS providers, security companies, SD-WAN providers, CDN players, neutral-host firms etc
Each of these brings a different definition of edge - but also has a different set of views about networks and access, as well as business models.


Application diversity

Set against this wide array of participants is an even more diverse range of potential applications. They differ in numerous ways too - exact latency needs (<1ms to 100ms+), mobility requirements (eg handoff between edge sites for moving vehicles), type of compute functions used (CPUs, GPUs, storage etc), users with one or multiple access methods, security (physical or logical) and so on.

However, in my view there are two key distinctions to make. These are between:
  • Single-network vs. Multiple-network access: Can the developer accurately predict or control the connection between user and edge? Or are multiple different connection paths more probable? And are certain networks (eg a tier-1 telco's) large enough to warrant individual edge implementations anyway?
  • Single-cloud vs. Multi-cloud: Can all or most of the application's data and workloads be hosted on a single cloud/edge provider's platform? Or are they inherently dispersed among multiple providers (eg content on one, adverts from another, analytics on a third, legacy integration with a fourth / inhouse system)?
For telcos in particular, there is an important subset of edge applications which definitely are single-network and internal, rather than client-facing: running their own VNFs (virtual network functions), security functions, distributed billing/charging, and managing cloud/virtualised radio networks (CRAN/vRAN). They also typically have existing relationships with content delivery networks (CDNs), both in-house and third-party.

This "anchor tenant" of on-network, single-telco functions is what is driving bodies like ETSI to link MEC (multi-access edge computing) to particular access networks and (largely) individual telcos. Some operators are looking at deploying MEC deep in the network, at individual cell towers or hub sites. Others are looking at less-distributed aggregation tiers, or regional centres.

The question is whether this single-network vision fits well with the broader base of edge-oriented applications, especially for IoT and enterprise.




How common will single-network access be?

The telco edge evolution (whether at region/city-level or down towards cells and broadband-access fibre nodes) is not happening in isolation. A key issue is that wide availability of such edge-cloud services - especially linked to ultra-low-latency 5G networks - will come after the access part of the network gets much more complex.



From a developer perspective, it will often be hard to be certain about a given user’s connectivity path, and therefore which or whose edge facilities to use, and what minimum latency can be relied upon:

  • 5G coverage will be very patchy for several years, and for reliable indoor usage perhaps 10 years or more. Users will regularly fall back to 4G or below, particularly when mobile.
  • Users on smartphones will continue to use 3rd-party WiFi in many locations. PC and tablet users, and many domestic IoT devices, will use Wi-Fi almost exclusively. Most fixed-wireless 5G antennas will be outdoor-mounted, connecting to Wi-Fi for in-building coverage.
  • Users and devices may use VPN security software with unknown egress points (possibly in another country entirely)
  • Not all 5G spectrum bands or operator deployments will offer ultra-low latency and may have different approaches to RAN virtualisation. 
  • Increasing numbers of devices will support multi-path connections (eg iOS TCP Multipath), or have multiple radios (eg cars).
  • Security functions in the network path (eg firewalls) may add latency
  • Growing numbers of roaming, neutral-host and MVNO scenarios involving third-party SPs are emerging. These will add latency, extra network paths and other complexities.
  • eSIM growth may enable more rapid network-switching, or multi-MNO MVNOs like Google Fi.
  • Converged operators will want to share compute facilities between their mobile and fixed networks.

This means that only very tightly-specified “single-network” edge applications make sense, unless there is a good mechanism for peering and interconnect, for instance with some form of “local breakout”.
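Given all that path uncertainty, a pragmatic developer measures rather than assumes: probe each candidate edge endpoint over the live connection and pick the one that actually responds fastest. The sketch below simulates the probe with invented site names and RTT figures; a real implementation would time a small request over the network:

```python
# Because the access path can't be assumed, an application can measure it.
# Probe each candidate edge a few times and pick the lowest median RTT.
import random
import statistics

def pick_edge(candidates, probe, samples=5):
    """Return (best_site, median_rtt_by_site) using the supplied probe function."""
    medians = {c: statistics.median(probe(c) for _ in range(samples))
               for c in candidates}
    return min(medians, key=medians.get), medians

# Simulated RTTs in ms - hypothetical sites, with a little jitter added.
BASE_RTT = {"edge-city-a": 8.0, "edge-region-b": 22.0, "cloud-central": 55.0}
fake_probe = lambda site: BASE_RTT[site] + random.uniform(0.0, 3.0)

best, seen = pick_edge(BASE_RTT, fake_probe)
print(best)  # "edge-city-a" - the nearest site wins on measured latency
```

The measured answer can change as the user moves, falls back to 4G, or joins third-party Wi-Fi - which is exactly why a static "use this telco's edge" assumption is fragile.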



So for instance, if Telco X operates a smart-city contract connecting municipal vehicles and street lighting, it could offer edge-compute functions, confident that the access paths are well-defined. Similarly it could offer deep in-network CDN functions for its own quad-play streaming, gaming or commerce services. 

But by contrast, an AR game that developers hope will be played by people globally, on phones & PCs, could connect via every telco, ISP & 3rd-party WiFi connection. It will need to be capable of dealing with multiple, shifting, access networks. An enterprise whose employees use VPN software on their PCs, or whose vehicles have multi-network SIMs for roaming, may have similar concerns.
 

The connected edge



I had a bit of an epiphany while listening to an Equinix presentation at PTC recently. The speaker talked about the “Interconnected Edge”, which I realised is very distinct from this vision of a single-telco edge.

Most of the datacentre industry tries to create facilities with multiple telco connections - ideally sitting on as many fibres as possible. This allows many ingress paths from devices/users, and egress paths to XaaS players or other datacentres. (This is not always possible for the most "remote" edges such as Pacific islands, where a single fibre and satellite backup might be the only things available).



And even for simple applications / websites, there may be multiple components coming from different servers (ads, storage, streaming, analytics, security etc) so the immediate edge needs to connect to *those* services with the easiest path. Often it’s server-to-server latency that’s more important than server-to-device, so things like peering and “carrier density” (ie lots of fibres into the building) make a big difference.

In other words, there are a number of trade-offs here. Typically the level of interconnectedness means more distance/latency from each individual access point (as it's further back in the network and may mean data transits a mobile core first), but that is set against flexibility elsewhere in the system. 

A server sitting underneath a cell-tower, or even in a Wi-Fi access point, will have ultra-low latency. But it will also have low interconnectedness. A security camera might have very fast local image-recognition AI to spot an intruder via edge-compute. But if it needs to match their face against a police database, or cross-check with another camera on a different network, that will take significantly longer.
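The arithmetic of that trade-off is worth spelling out. All figures below are invented for illustration: a deeply-embedded edge node wins on device-to-compute latency but loses badly once the workload needs anything else on the network:

```python
# Illustrative latency arithmetic for the camera example above (figures invented).
paths = {
    # (device-to-compute ms, compute-to-other-services ms)
    "on-tower edge":    (3, 40),   # next to the radio, but poorly interconnected
    "metro datacentre": (12, 5),   # further back, but with dense peering
}

for name, (to_compute, to_services) in paths.items():
    local_task = to_compute                  # e.g. on-site intruder detection
    lookup_task = to_compute + to_services   # e.g. cross-network database match
    print(f"{name}: local {local_task} ms, with lookup {lookup_task} ms")
```

The on-tower site wins 3ms vs 12ms for the purely local task, but loses 43ms vs 17ms once a remote lookup is involved - so "which edge is best" depends on the traffic mix, not just proximity.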

But edge datacentres also face problems - they will typically only be in certain places. This might be fine for individual smart-city applications, or localised "multi-cloud" access, but it still isn't great for multinational companies or the game/content app-developers present in 100 countries.


Is edge-aggregation the answer?

The answer seems to be some form of software edge-federation or edge-broking layer, which can tie together a whole set of different edge resources, and hopefully have intelligence to deal with some of the network-access complexity as well.

I've been coming across various companies hoping to take on the role of aggregator, whether that's primarily for federating different telcos' edge networks (eg MobiledgeX), or helping developers deploy to a wider variety of edge-datacentre and other locations (eg Ori). 

I'm expecting this space to become a lot more complex and nuanced - some will focus on being true "horizontal" exchanges / APIs for multi-edge aggregation. The telco ones will focus on aspects like roaming, combined network+MEC quality of service and so on. Others will probably look to combine edge with SD-WAN for maximum resilience and lowest cost.

Yet more - probably including Amazon, Microsoft and other large cloud companies - will instead look to balance between edge vs. centralised cloud for different workloads, using their own partnerships with edge datacentres (perhaps including telcos) and containerisation approaches like Amazon's Greengrass.

Lastly, we may see the emergence of "neutral-host" networks of edge facilities, not linked to specific telcos, data-centre providers or fibre owners. These could be "open" collaborations, or even decentralised / blockchain-based approaches.

The "magic bullet" here will be the ability to cope with all the network complexities I mentioned above (which drive access paths and thus latencies), plus having a good geographic footprint of locations and interconnections. 

In a way, this is somewhat similar to the historic CDN model, where Akamai and others grew by placing servers in many ISPs' local networks - but that was more about reducing latency from core-to-edge, rather than device-to-edge, or edge-to-edge.

I doubt that this will resolve to a single monopoly player, or even an oligopoly - there are too many variables, dimensions and local issues / constraints.


 
Summary and conclusions

There are 3 main models emerging for organising edge-computing services and infrastructure:
  • Single-Network Telco Edge
  • Local / Interconnected Datacentre Edge
  • Federated / Open Edge
These will overlap, and hybrids and private/public splits will occur as well.

My current view remains that power constraints mean that in-network [telco-centric] edge cannot ever realistically account for more than 2% of overall global computing workloads or perhaps 3-5% of public cloud services provision, in volume terms – although pricing & revenue share may be higher for provable lower latencies. Now that is certainly non-trivial, but it’s also not game-changing. 

I also expect that in-network edge will be mostly delivered by telcos as wholesale capacity to larger cloud providers, or through edge-aggregation/federation players, rather than as “retail” XaaS sold directly to enterprises or application/IoT developers.

I’m also expecting a lot of telco-edge infrastructure to serve mostly fixed-network edge use-cases, not 5G or 4G mobile ones. 5G needs edge, more than edge needs 5G. While there are some early examples of companies deploying mini-datacentres at large cell-tower “hub” sites (eg Vapor.io), other operators are focusing further back in the network, at regional aggregation points, or fixed-operator central offices. It is still very early days, however.

The edge datacentre business has a lot of scope to grow, both in terms of networks of micro-datacentres, and in terms of normal-but-small datacentres in tier-2/3/4 cities and towns. However, it too will face complexities relating to multi-access users, and limited footprints across many locations.


The biggest winners will be those able to link together multiple standalone edges into a more cohesive and manageable developer proposition, that is both network-aware and cloud-integrated. 

The multi-network, multi-cloud edge will be tough to manage, but essential for many applications.

It is doubtful that telco-only edge clouds (solo or federated) can work for the majority of use-cases, although there will be some instances where the tightest latency requirements overlap with the best-defined connectivity models.

I'm tempted to coin a new term for these players - we already have a good term for a meeting point of multiple edges: a corner. Remember where you first heard about Corner Computing...


If you are interested in engaging me for private consulting, presentations, webinars, or white papers, please get in touch via information at disruptive-analysis dot com, or my LinkedIn and Twitter

I will be writing a paper soon on "Edge Computing meets Voice & Video Communications" - get in touch if you are interested in sponsoring it. Please also visit deanbubley.com for more examples of my work and coverage.