
Tuesday, February 05, 2019

3 Emerging Models for Edge-Computing: Single-Network, Interconnected & Federated

Summary

Edge-computing enables applications to access cloud resources with lower latencies, more local control, less load on transport networks and other benefits.

There are 3 main models emerging for organising edge-computing services and infrastructure:
  • Single-Network Telco Edge, where a fixed or mobile operator puts compute resources at its own cell-sites, aggregation points, or fixed-network central offices.
  • Local / Interconnected Datacentre Edge, where an existing or new DC provider puts smaller facilities in tier-2/3 cities or other locations, connected to multiple networks.
  • Federated / Open Edge, where a software player aggregates numerous edge facilities and provides a single mechanism for developers to access them.
These are not 100% mutually-exclusive - various hybrids are possible, as well as "private edge" facilities directly owned by enterprises or large cloud providers. They will also interact or integrate with hyperscale-cloud in a variety of ways.

But there is a major issue. All of these will be impacted by even faster-evolving changes in the ways that users access networks and applications, such as "fallback" from 5G to 4G, or switching to WiFi. In other words, the most relevant "edge" will often move or blur. Superficially "good" edge-compute ideas will be forced to play catch-up to deal with the extra network complexity. 
 
(Also - this model excludes the "device edge" - the huge chunk of compute resource held in users' phones, PCs, cars, IoT gateways and other local devices).

Note: this is a long post. Get a coffee. 

There is also an accompanying podcast / audio-track I've recorded on SoundCloud that explains this post if you'd rather listen than read (link)



Background and Overview 

A major area of focus for me in 2019 is edge-computing. It’s a topic I’ve covered in various ways in the last two years or so, especially contrasting the telecom industry’s definitions/views of “in-network” edge with those of enterprise IT and IoT providers. The latter tend to be more focused on “edge datacentres” in “edge markets” [2nd-tier cities], or more-localised still, such as on-premise cloud-connected gateways.

I wrote a detailed post in 2018 (link) about computing power consumption and supply, which looked at the future constraints on edge, and whether it could ever really compete with / substitute for hyperscale cloud (spoiler: it can't at an overall level, as it will only have a small % of the total power).

I’m speaking at or moderating various edge-related events this year, including four global conferences run by data-centre information and event firm BroadGroup (link). The first one, Edge Congress in Amsterdam, was on 31st January, and followed PTC’19 (link) the week before, which also had a lot of edge-related sessions.


(I’m also collaborating with long-time WebRTC buddy Tsahi Levent-Levi [link] to write a ground-breaking paper on the intersection of edge-computing with realtime communications. Contact me for details of participating / sponsoring)


Different drivers, different perspectives

A huge diversity of companies are looking at the edge, including both established large companies and a variety of startups:
  • Mobile operators want to exploit the low latencies & distributed sites of 5G networks, as well as decentralising some of their own (and newly-virtualised) internal network / operational software
  • Fixed and cable operators want to turn central offices and head-ends into local datacentres - and to house their own virtualised systems there too. Many are hybrid fixed/mobile SPs.
  • Long-haul terrestrial and sub-sea fibre providers see opportunities to add new edge data-centre services and locations, e.g. for islands or new national markets. A handful of satellite players are looking at this too.
  • Large data-centre companies are looking to new regional / local markets to differentiate their hosting facilities, reduce long-distance latencies, exploit new subsea fibres and provide space and interconnect to various cloud providers (and telcos).
    At PTC’19 I heard places like Madrid, Fiji, Johannesburg and Minneapolis described as “edge markets”.
  • Hyperscale cloud players are also latency-aware, as well as recognising that some clients have security or regulatory needs for local data-storage. They may use third-party local DCs, build their own (Amazon & Whole Foods sites?) or even deploy on-premise at enterprises (Amazon Outposts)
  • Property-type players (eg towercos) see edge-compute as a way to extend their businesses beyond siting radios or network gear.
  • Startups such as Vapor.io, EdgeMicro and EdgeInfra want to offer pre-built micro-DC physical units to many of the above.
  • Other startups want to offer developers convenient (software-based) ways to exploit diverse edge resources without individual negotiations. This includes both federations and software tools for application deployment and management. MobiledgeX and Ori are examples here.
  • Enterprises want a mix of localised low-latency cloud options, either shared or owned/controlled by themselves (and perhaps on-site, essentially Server Room 2.0). They need to connect them to hyperscale cloud(s) and internal resources, especially for new IoT, AI, video and mobility use-cases.
  • Network vendors are interested either in pitching edge-oriented network capabilities (eg segment-routing), or directly integrating extra compute resource into network switches/routers.
  • Others: additional parties interested in edge compute include PaaS providers, security companies, SD-WAN providers, CDN players, neutral-host firms etc
Each of these brings a different definition of edge - but also has a different set of views about networks and access, as well as business models.


Application diversity

Set against this wide array of participants is an even more-diverse range of potential applications being considered. They differ in numerous ways too - exact latency needs (<1ms to 100ms+), mobility requirements (eg handoff between edge sites for moving vehicles), type of compute functions used (CPUs, GPUs, storage etc), users with one or multiple access methods, security (physical or logical) and so on.

However, in my view there are two key distinctions to make. These are between:
  • Single-network vs. Multiple-network access: Can the developer accurately predict or control the connection between user and edge? Or are multiple different connection paths more probable? And are certain networks (eg a tier-1 telco's) large enough to warrant individual edge implementations anyway?
  • Single-cloud vs. Multi-cloud: Can all or most of the application's data and workloads be hosted on a single cloud/edge provider's platform? Or are they inherently dispersed among multiple providers (eg content on one, adverts from another, analytics on a third, legacy integration with a fourth / inhouse system)
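
As a rough illustration of how these two distinctions might map onto the three models above, here is a minimal sketch in Python. The class, thresholds and decision rules are my own assumptions for the sake of the example, not any real framework's API:

```python
# Hypothetical sketch only: maps an application profile onto the three edge
# models discussed above. Thresholds and rules are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class EdgeAppProfile:
    max_latency_ms: float   # e.g. <1ms to 100ms+, as discussed above
    single_network: bool    # can the access path be predicted or controlled?
    single_cloud: bool      # can all workloads sit on one cloud/edge platform?

def suggest_edge_model(app: EdgeAppProfile) -> str:
    """Very rough mapping of an application profile to an edge model."""
    if app.single_network and app.single_cloud:
        # Well-defined access path plus one platform: an operator's own edge can fit.
        return "Single-Network Telco Edge"
    if app.single_cloud and app.max_latency_ms >= 10:
        # Latency budget loose enough for a metro/regional site with rich interconnect.
        return "Local / Interconnected Datacentre Edge"
    # Multi-network and/or multi-cloud: needs an aggregation or federation layer.
    return "Federated / Open Edge"

# Example: a global AR game on phones and PCs, arriving over many different networks.
print(suggest_edge_model(EdgeAppProfile(max_latency_ms=30,
                                        single_network=False,
                                        single_cloud=False)))
```

In practice the mapping is far fuzzier than this, but it captures the basic logic: the less predictable the network path, and the more dispersed the workloads, the stronger the case for interconnection or federation.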
For telcos in particular, there is an important subset of edge applications which definitely are single-network and internal, rather than client-facing: running their own virtual network functions (VNFs), security functions, distributed billing/charging, and managing cloud/virtualised radio networks (CRAN/vRAN). They also typically have existing relationships with content delivery networks (CDNs), both in-house and third-party.

This "anchor tenant" of on-network, single-telco functions is what is driving bodies like ETSI to link MEC to particular access networks and (largely) individual telcos. Some operators are looking at deploying MEC deep into the network, at individual cell towers or hub sites. Others are looking at less-distributed aggregation tiers, or regional centres.

The question is whether this single-network vision fits well with the broader base of edge-oriented applications, especially for IoT and enterprise.




How common will single-network access be?

The telco edge evolution (whether at region/city-level or down towards cells and broadband-access fibre nodes) is not happening in isolation. A key issue is that wide availability of such edge-cloud service - especially linked to ultra-low-latency 5G networks - will come after the access part of the network gets much more complex.



From a developer perspective, it will often be hard to be certain about a given user’s connectivity path, and therefore which or whose edge facilities to use, and what minimum latency can be relied upon:

  • 5G coverage will be very patchy for several years, and for reliable indoor usage perhaps 10 years or more. Users will regularly fall back to 4G or below, particularly when mobile.
  • Users on smartphones will continue to use 3rd-party WiFi in many locations. PC and tablet users, and many domestic IoT devices, will use Wi-Fi almost exclusively. Most fixed-wireless 5G antennas will be outdoor-mounted, connecting to Wi-Fi for in-building coverage.
  • Users and devices may use VPN security software with unknown egress points (possibly in another country entirely)
  • Not all 5G spectrum bands or operator deployments will offer ultra-low latency, and operators may take different approaches to RAN virtualisation.
  • Increasing numbers of devices will support multi-path connections (eg iOS TCP Multipath), or have multiple radios (eg cars).
  • Security functions in the network path (eg firewalls) may add latency
  • Growing numbers of roaming, neutral-host and MVNO scenarios involving third-party SPs are emerging. These will add latency, extra network paths and other complexities.
  • eSIM growth may enable more rapid network-switching, or multi-MNO MVNOs like Google Fi.
  • Converged operators will want to share compute facilities between their mobile and fixed networks.

This means that only very tightly-specified “single-network” edge applications make sense, unless there is a good mechanism for peering and interconnect, for instance with some form of “local breakout”.



So for instance, if Telco X operates a smart-city contract connecting municipal vehicles and street lighting, it could offer edge-compute functions, confident that the access paths are well-defined. Similarly it could offer deep in-network CDN functions for its own quad-play streaming, gaming or commerce services. 

But by contrast, an AR game that developers hope will be played by people globally, on phones & PCs, could connect via every telco, ISP & 3rd-party WiFi connection. It will need to be capable of dealing with multiple, shifting access networks. An enterprise whose employees use VPN software on their PCs, or whose vehicles have multi-network SIMs for roaming, may have similar concerns.
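
A hedged sketch of the kind of runtime logic such a multi-network application might need: probe the latency to candidate edge sites over whatever access the device currently has, and fall back to a regional or central cloud when nothing meets the budget. The endpoint names and thresholds below are invented purely for illustration:

```python
# Illustrative only: pick the first reachable endpoint that meets a latency
# budget over the device's current (unknown, possibly shifting) access network.
import socket
import time

CANDIDATE_EDGES = [
    ("edge-cell-site.example.net", 443),    # hypothetical on-network MEC site
    ("edge-metro-dc.example.net", 443),     # hypothetical interconnected metro DC
    ("eu-central.cloud.example.net", 443),  # hypothetical regional cloud fallback
]

def probe_rtt_ms(host: str, port: int, timeout: float = 1.0) -> float | None:
    """Crude RTT estimate via TCP connect time; None if unreachable."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - start) * 1000.0
    except OSError:
        return None

def pick_endpoint(latency_budget_ms: float = 30.0) -> tuple[str, int]:
    """Return the first candidate meeting the budget, else the final (central) fallback."""
    for host, port in CANDIDATE_EDGES:
        rtt = probe_rtt_ms(host, port)
        if rtt is not None and rtt <= latency_budget_ms:
            return host, port
    return CANDIDATE_EDGES[-1]
```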
 

The connected edge



I had a bit of an epiphany while listening to an Equinix presentation at PTC recently. The speaker talked about the “Interconnected Edge”, which I realised is very distinct from this vision of a single-telco edge.

Most of the datacentre industry tries to create facilities with multiple telco connections - ideally sitting on as many fibres as possible. This allows many ingress paths from devices/users, and egress paths to XaaS players or other datacentres. (This is not always possible for the most "remote" edges such as Pacific islands, where a single fibre and satellite backup might be the only things available).



And even for simple applications / websites, there may be multiple components coming from different servers (ads, storage, streaming, analytics, security etc) so the immediate edge needs to connect to *those* services with the easiest path. Often it’s server-to-server latency that’s more important than server-to-device, so things like peering and “carrier density” (ie lots of fibres into the building) make a big difference.

In other words, there are a number of trade-offs here. Typically, a higher level of interconnectedness means more distance/latency from each individual access point (as the facility sits further back in the network, and data may have to transit a mobile core first), but that is set against flexibility elsewhere in the system.

A server sitting underneath a cell-tower, or even in a Wi-Fi access point, will have ultra-low latency. But it will also have low interconnectedness. A security camera might have very fast local image-recognition AI to spot an intruder via edge-compute. But if it needs to match their face against a police database, or cross-check with another camera on a different network, that will take significantly longer.
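
To make the trade-off concrete, here is a tiny back-of-envelope calculation. The millisecond figures are assumptions chosen only to illustrate the point, not measurements:

```python
# Illustrative arithmetic only - the numbers below are assumptions, not data.
scenarios = {
    # device->edge latency, plus edge->other-services latency (ads, databases, other clouds)
    "cell-site edge, low interconnect": {"device_to_edge": 2, "edge_to_services": 40},
    "metro DC edge, high interconnect": {"device_to_edge": 12, "edge_to_services": 5},
}

for name, ms in scenarios.items():
    total = ms["device_to_edge"] + ms["edge_to_services"]
    print(f"{name}: ~{total} ms end-to-end")

# With these made-up numbers, the "further back" but better-connected site wins
# whenever server-to-server paths dominate the overall round trip.
```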

But edge datacentres also face problems - they will typically only be in certain places. This might be fine for individual smart-city applications, or localised "multi-cloud" access, but it still isn't great for multinational companies or the game/content app-developers present in 100 countries.


Is edge-aggregation the answer?

The answer seems to be some form of software edge-federation or edge-broking layer, which can tie together a whole set of different edge resources, and hopefully have intelligence to deal with some of the network-access complexity as well.
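
As a sketch of what such a broking layer might expose to developers: a single placement call that hides which underlying edge (telco MEC, metro datacentre, etc.) actually hosts the workload. The provider classes and method names here are hypothetical, not any real vendor's API:

```python
# Hypothetical federation/broker layer: one deploy() call over many edge providers.
from abc import ABC, abstractmethod

class EdgeProvider(ABC):
    @abstractmethod
    def regions(self) -> list[str]: ...
    @abstractmethod
    def deploy(self, image: str, region: str) -> str: ...

class TelcoMecProvider(EdgeProvider):
    def regions(self) -> list[str]:
        return ["uk-london-cellsites", "de-berlin-aggregation"]
    def deploy(self, image: str, region: str) -> str:
        return f"telco-mec://{region}/{image}"

class MetroDcProvider(EdgeProvider):
    def regions(self) -> list[str]:
        return ["nl-amsterdam-metro", "us-minneapolis-metro"]
    def deploy(self, image: str, region: str) -> str:
        return f"metro-dc://{region}/{image}"

class EdgeBroker:
    """Aggregates many providers behind one placement call."""
    def __init__(self, providers: list[EdgeProvider]):
        self.providers = providers
    def deploy_near(self, image: str, wanted_region: str) -> str:
        for p in self.providers:
            if wanted_region in p.regions():
                return p.deploy(image, wanted_region)
        raise ValueError(f"no edge capacity advertised in {wanted_region}")

broker = EdgeBroker([TelcoMecProvider(), MetroDcProvider()])
print(broker.deploy_near("my-ar-game:1.0", "nl-amsterdam-metro"))
```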

I've been coming across various companies hoping to take on the role of aggregator, whether that's primarily for federating different telcos' edge networks (eg MobiledgeX), or helping developers deploy to a wider variety of edge-datacentre and other locations (eg Ori). 

I'm expecting this space to become a lot more complex and nuanced - some will focus on being true "horizontal" exchanges / APIs for multi-edge aggregation. The telco ones will focus on aspects like roaming, combined network+MEC quality of service and so on. Others will probably look to combine edge with SD-WAN for maximum resilience and lowest cost.

Yet more - probably including Amazon, Microsoft and other large cloud companies - will instead look to balance between edge vs. centralised cloud for different workloads, using their own partnerships with edge datacentres (perhaps including telcos) and containerisation approaches like Amazon's Greengrass.

Lastly, we may see the emergence of "neutral-host" networks of edge facilities, not linked to specific telcos, data-centre providers or fibre owners. These could be "open" collaborations, or even decentralised / blockchain-based approaches.

The "magic bullet" here will be the ability to cope with all the network complexities I mentioned above (which drive access paths and thus latencies), plus having a good geographic footprint of locations and interconnections. 

In a way, this is somewhat similar to the historic CDN model, where Akamai and others grew by placing servers in many ISPs' local networks - but that was more about reducing latency from core-to-edge, rather than device-to-edge, or edge-to-edge.

I doubt that this will resolve to a single monopoly player, or even an oligopoly - there are too many variables, dimensions and local issues / constraints.


 
Summary and conclusions

There are 3 main models emerging for organising edge-computing services and infrastructure:
  • Single-Network Telco Edge
  • Local / Interconnected Datacentre Edge
  • Federated / Open Edge
These will overlap, and hybrids and private/public splits will occur as well.

My current view remains that power constraints mean that in-network [telco-centric] edge cannot ever realistically account for more than 2% of overall global computing workloads or perhaps 3-5% of public cloud services provision, in volume terms – although pricing & revenue share may be higher for provably-lower latencies. Now that is certainly non-trivial, but it’s also not game-changing.

I also expect that in-network edge will be mostly delivered by telcos as wholesale capacity to larger cloud providers, or through edge-aggregation/federation players, rather than as “retail” XaaS sold directly to enterprises or application/IoT developers.

I’m also expecting a lot of telco-edge infrastructure to serve mostly fixed-network edge use-cases, not 5G or 4G mobile ones. 5G needs edge more than edge needs 5G. While there are some early examples of companies deploying mini-datacentres at large cell-tower “hub” sites (eg Vapor.io), other operators are focusing further back in the network, at regional aggregation points, or fixed-operator central offices. It is still very early days, however.

The edge datacentre business has a lot of scope to grow, both in terms of networks of micro-datacentres, and in terms of normal-but-small datacentres in tier-2/3/4 cities and towns. However, it too will face complexities relating to multi-access users, and limited footprints across many locations.


The biggest winners will be those able to link together multiple standalone edges into a more cohesive and manageable developer proposition that is both network-aware and cloud-integrated.

The multi-network, multi-cloud edge will be tough to manage, but essential for many applications.

It is doubtful that telco-only edge clouds (solo or federated) can work for the majority of use-cases, although there will be some instances where the tightest latency requirements overlap with the best-defined connectivity models.

I'm tempted to create a new term for these players - we already have a good term for a meeting point of multiple edges: a corner. Remember where you first heard about Corner Computing...


If you are interested in engaging me for private consulting, presentations, webinars, or white papers, please get in touch via information at disruptive-analysis dot com, or my LinkedIn and Twitter

I will be writing a paper soon on "Edge Computing meets Voice & Video Communications" - get in touch if you are interested in sponsoring it. Please also visit deanbubley.com for more examples of my work and coverage.

Monday, January 07, 2019

Private cellular networks - why Ofcom's UK spectrum proposals are so innovative

On December 18th 2018, Ofcom announced two consultations about new 5G-oriented spectrum releases (link), and potential new models for spectrum-sharing, rural mobile coverage and related innovation (link). 

I've already commented briefly on Twitter (link) and LinkedIn (link), but it's worth going a bit deeper in a full post on this - particularly on the aspects relating to private networks and spectrum-sharing.

NOTE: this is a long post. Get a coffee now. Or listen to my audio commentary (Part 1 on the background to private mobile networks is here and Part 2 on the Ofcom proposals is here)

My view is that 2019 is a key breakout year for new mobile network ownership and business models - whether that's fully-private enterprise networks, various types of neutral-host, or a revitalised version of MVNO-type wholesale perhaps enriched by network-slicing. 

This trend touches everything from IoT to 5G verticals, to enterprise voice/comms & UCaaS. I'll be covering it in depth. I also discussed it when I presented to Ofcom's technology team in November (see slides halfway down this page), and it's good to see my thinking seems to align fairly closely with theirs.


This was the future, long ago

Localised or private cellular networks - sometimes called Micro-MNOs - are not a new concept.

Twelve years ago, in 2006, the UK telecoms regulator Ofcom made an unusual decision - to auction off a couple of small slices* of 2G mobile spectrum, for use on a low-power, localised basis by a number of innovative service providers or private companies. (Link). A few launches occurred, and the Dutch regulator later did something similar, but it didn't really herald a sudden flourishing of private mobile networks.

*(The slices were known as the DECT Guard Bands, which separated GSM mobile bands from those used for older cordless phones, widely used in homes and businesses)

Numerous practical glitches were blamed, including the costs / complexities involved in deploying small-cells, the need for roaming or MVNO deals for wide-area coverage, and the fact that the spectrum was mostly suitable for voice calls, at a time when the world was moving to mobile data and smartphones. 

Unfortunately, there was also no real international momentum or consensus on the concept, despite Ofcom's hope to set a trend - although it did catalyse a good UK-based cottage industry of small-cell and niche core-network vendors.


Going mainstream: private / virtualised networks for enterprise & verticals

At the start of 2019, the world looks very different. There is a broad consensus that new models of mobile network are needed - whether that is fully-owned private cellular, more-sophisticated MVNOs with their own core networks, or future visions of 5G with privately-run "network slices". 


There's a focus on neutral-host networks for in-building coverage, proponents of wholesale national "open" networks, and a growing number of large non-telecoms enterprises wanting more control and ownership.




It is unrealistic to expect the main national MNOs to be able to pay for, deploy, customise, integrate and operate networks for every industry vertical, indoor location or remote area. They have constraints on capital, personnel, management resource, specialised knowledge and appetite for risk. Other types of network operator or service provider are needed as well.

In a nutshell, there is a wide recognition that "telecoms is too important to just leave up to the telcos".

I've been talking about this for several years now - the rise of unlicensed cellular technologies such as MulteFire or Huawei's eLTE, the growing focus on locally-licensed or shared spectrum for IoT or industry use, and the specific demands of rural, indoor or industrial network coverage and business models.

(As well as non-MNO deployed and owned 4G/5G networks, we will also see a broad range of other ways to deliver private capabilities, including various evolutions of MVNO, mobile SD-WAN and future network-slicing and private cores. But this particular consultation is more about the radio-centric innovations).


Where is the action?

But while there has been a lot of discussion in the UK (including my own presentations to Ofcom, the Spectrum Policy Forum and others), the main sources of action on private (licensed) cellular have been elsewhere. 

In particular, the US push on its CBRS 3-tier model of spectrum sharing - expected to yield the first local service launches in 2019 - and German and Dutch approaches to local-licensed spectrum for industry, have been notable. Unlicensed cellular adoption is (fairly quietly) emerging in Japan and China as well.

Plenty of other trials and regulatory manoeuvring have occurred elsewhere too, with encouraging signs from bodies like ITU, BEREC and assorted national authorities that private/local sharing is becoming important.

In the UK, various bodies including Ofcom, National Infrastructure Commission, DCMS (the ministry in charge), TechUK/Spectrum Policy Forum (link) and others have referenced the potential for shared/private spectrum - and even invited me to talk about it - but until now, not much concrete has happened.


What use-cases are important here?

From my perspective, the main focus in actual deployment of private LTE has been for industrial IoT and especially the ability for large enterprises to run their own networks for factories, robots, mining facilities, (air)ports or process plants. Some of these also want human communications as well, such as replacing TETRA mobile radio / walkie-talkie units with more sophisticated cellular smartphone-type devices, or links to UCaaS systems.

These are all seen as future 5G opportunities by vendors too. They are also often problematic for many MNOs to cover directly - few are really good at dealing with the specialised demands of industrial equipment and installations, and the liability, systems-integration and customisation work required.

There has been some lobbying action as well, from big companies like GE, Bosch and BMW. CBRS has had a broader appeal, with numerous other categories showing interest too, from sports stadium owners to cable operators looking for out-of-home coverage for quad-play, or fixed-wireless extensions.

But I'd say that rural coverage, and more generic in-building use-cases, have had less emphasis from regulators or proponents of Micro-MNO spectrum licensing. That's partly because rural use-cases often struggle to generate viable business cases and have fragmented stakeholders by definition, while in-building represents an awkward mix of rights, responsibilities and willingness-to-pay.

Yet it is these areas - especially rural - that Ofcom is heavily focused on, partly in response to some UK Government policy priorities, notably around rural broadband coverage.


What has been announced?

There are two separate announcements / consultations:
  • An immediate, specific proposal for 700MHz and 3.6-3.8GHz auctions to have additional coverage conditions added to "normal" national mobile licenses, especially for rural areas. This includes provisions for cheaper license fees for operators that agree to build new infrastructure in under-served rural areas, and cover extra homes in "not-spots" today.
  • A more general consultation on innovation, which focuses on various interesting sharing models for three bands: the 1800MHz DECT guard bands (as discussed above), the 3.8-4.2GHz range and also 10MHz around 2.3GHz.

The first proposal is essentially just a variation of "normal 3.5GHz-band national 5G licenses", similar to the earlier 3.4-3.6GHz tranche which has already been released in the UK. Some were hoping that this would have some sort of sharing option, for instance for neutral-host networks in rural or industrial sectors, but that has been sidelined. 

Unlike Germany, which has just 3 MNOs and a powerful industrial lobby wanting private spectrum, the UK has to squeeze 4 MNOs' 5G needs into this band, with a big chunk already belonging to 3/UK Broadband. So Ofcom has stuck with fairly normal national licenses. Instead, there are some tweaks to incentivise MNOs to build out better rural coverage. This helps address some of the UK government's and voters' loudly voiced complaints, but doesn't really affect this post's core theme of private/novel network types.

It is the second consultation that is the most radical - and the one which could potentially reshape the mobile industry in the UK. There are two central elements to its proposals:
  • Local-licensed spectrum in three "shared" bands, with Ofcom managing authorisations itself, and a fixed pricing structure based just on the cost of administration rather than raising large sums for the Treasury. There are proposals for low-power and mid-power deployments, suitable respectively for individual buildings or sparsely-populated rural areas.
  • Secondary re-use of existing national licensed bands. In essence, this means that any existing mobile band could be subject to 3rd-party localised, short-term licensing in areas where there is no existing coverage. This is likely to be hugely controversial, but makes inherent sense - essentially it's a form of "use it or lose it" rule for MNOs. 

Local licensing in shared bands
 
The local licensing idea has numerous potential applications, from industrial sites to neutral-hosts to fixed-wireless access in rural districts. It updates the 1.8GHz 2006 low-power wireless licenses to the new approach, and adds in the new bands in 2.3GHz and 3.8-4.2GHz. 

While I'm sure that some objections will be raised - for example, perhaps around the low-cost aspects of these new licenses - I struggle to find many grounds for substantive disagreement. It is, essentially, a decent pitch for a halfway-house between national licenses and complete WiFi-style unlicensed access. Like CBRS in the US (which is much more complex in many ways), it could drive a lot of innovative network deployments, but at a smaller geographic scale: CBRS is aimed at county-sized areas, whereas these licenses could cover local areas as small as 50m in diameter.
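
Purely to illustrate the shape of the scheme, here is a hypothetical record for a local-licence application under this kind of model. The field names, power classes and values are my assumptions, not Ofcom's actual schema:

```python
# Illustrative sketch only - not Ofcom's application format.
from dataclasses import dataclass

@dataclass
class LocalLicenceRequest:
    band: str            # e.g. "1800MHz shared", "2.3GHz", "3.8-4.2GHz"
    power_class: str     # "low" (indoor/building-scale) or "medium" (rural outdoor)
    centre_lat: float
    centre_lon: float
    area_diameter_m: int # coverage areas as small as ~50m diameter are envisaged
    applicant: str

request = LocalLicenceRequest(
    band="3.8-4.2GHz",
    power_class="low",
    centre_lat=51.5072,
    centre_lon=-0.1276,
    area_diameter_m=50,
    applicant="Example Factory Ltd",
)
print(request)
```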



There are numerous innovations here - and considerable pragmatism too, plus plenty of homework that's already been done. The medium-power band, and the rural restrictions for outdoor use, are both definitely interesting angles - and well-designed to ensure that this doesn't allow full national/mobile competition "on the cheap" by aggressive new entrants. The "what if?" consultation sections on "possible unintended consequences" and ways to mitigate them are especially smart - frankly all governmental policy documents should do something similar.

Ofcom also discusses options for database-driven dynamic spectrum approaches (similar to CBRS, white spaces and others) but thinks that would take too long to develop. It essentially wants a quasi-static authorisation mechanism, but with short enough terms - 3 years - that it can transition to some DSA-type option when it's robust and flexible enough. 

(As an aside, I wonder if the ultimate version is some sort of blockchain-ish decentralised-database platform for dynamic spectrum, which in theory sounds good, but has not been tried in practice yet. And no, it shouldn't be based on SpectrumCoin cryptocurrency tokens).


Secondary licensing of existing bands

This is the really controversial one.

It basically tells the MNOs that their existing - or future - national licenses don't allow them to "bank" spectrum in places where it's not going to be actively used. If there's no coverage now, and no credible mid-term plans for build-out, then other parties can apply to use it instead, as long as Ofcom agrees that there's no risk of interference.

Unlike the shared-band approach (except for 1800MHz), this means that devices will be available immediately, as they would operate in the same bands that already exist. It would also potentially apply for the new 5G bands, especially 3.4-3.8GHz. 

There's a proposed outline mechanism from Ofcom to verify that suggested parallel licenses should be able to go ahead, and again a fairly low-cost pricing mechanism.
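
As a hedged sketch of the decision logic the consultation implies: grant a short-term local licence only where the national licensee has no current coverage or credible build-out plans, and interference can be ruled out. The function and its inputs are invented for illustration, and Ofcom's actual process will differ (the £950 / 3-year figures come from the consultation itself):

```python
# Illustrative only - a toy version of a secondary-licensing check, not Ofcom's process.
from dataclasses import dataclass

LICENCE_TERM_YEARS = 3   # term proposed in the consultation
LICENCE_FEE_GBP = 950    # proposed fee for the 3-year period

@dataclass
class SecondaryApplication:
    band: str
    location: str
    mno_has_coverage: bool        # does the national licensee already cover this spot?
    mno_has_credible_plans: bool  # credible mid-term build-out plans?
    interference_risk: bool       # would the proposed use interfere with the MNO?

def assess(app: SecondaryApplication) -> str:
    if app.mno_has_coverage or app.mno_has_credible_plans:
        return "refuse: spectrum is (or soon will be) in use by the national licensee"
    if app.interference_risk:
        return "refuse: interference cannot be ruled out"
    return f"grant: {LICENCE_TERM_YEARS}-year local licence in {app.band} at £{LICENCE_FEE_GBP}"

print(assess(SecondaryApplication("3.5GHz", "remote farm, Cumbria",
                                  mno_has_coverage=False,
                                  mno_has_credible_plans=False,
                                  interference_risk=False)))
```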



Clearly, this is just a broad outline, and there are a lot of details to consider before this could become a reality. But the general principle is almost "use it or lose it", although more accurately it's "use it, or don't complain if someone else uses it until you're ready".

There are a few possible options that have been suggested in the past for this type of thing - leasing or sub-licencing of spectrum by MNOs, or some form of spectrum trading, for instance. In some countries / places this has worked OK, for example for mines in the Australian Outback running private cellular, that have been able to do a deal with one of the national MNOs. But it's complex to administer, and often the MNOs don't really have incentives or mechanisms to do this at scale. They're not interested in doing site-surveys, or drawing up unique contracts for £1000 a year for a couple of farmhouses or a wind-turbine on a hilltop. Plus, there are complexities about liability with leasing (it's still the original licensee's name on the license).

While there will be costs for Ofcom to manage this process, it thinks they should be reasonable - it's pricing the licenses at £950 for a 3 year period. 

All this is pretty radical. And I expect MNOs and industry bodies to scream blue murder about this in the consultation. Firstly, they will complain about possible interference, which is valid enough, but can be ruled out in some locations. They'll talk about the internal costs of the acceptance process. And above all, they may talk about "cherry-picking" and perceived competitive distortions.

The most interesting aspect for me is how this changes the calculus for building networks indoors, in offices, factories or public buildings. This could limit the practice of MNOs sometimes insisting that enterprises pay for their own indoor systems, for delivery of the MNOs' network coverage and capacity. It could incentivise operators to focus on indoor coverage, if they want to offer managed services for IoT, for example.

There are a lot of other implications, opportunities and challenges I don't have time to address in this post, but will pick up on over the next weeks and months. There are technical, regulatory, commercial, practical and political dimensions.

I'm really curious to read the responses to this consultation, and see what comes out of the next round of statements from Ofcom. I'm probably going to submit something myself, as I can see a bunch of questions and complexities. Let me know if you'd like me to brainstorm any of this with you.
 

Spectrum is not enough

One thing is definitely critical for both proposals. The availability of local-licensed spectrum is not enough for innovators and enterprises to build networks. There are many other "moving parts" as well - affordable radio infrastructure such as small-cells, inexpensive (likely cloud-based) core and transport networks, numbering resources, SIMs, billing/operations software, voice and messaging integration, and so on. The consultations cover numbering concerns and MNCs (mobile network codes), at least up to a point.

In some cases, roaming deals with national networks will be needed - and may be hard to negotiate, unless regulatory pressure is applied. As I've been discussing recently (including in this report for STL - link) this ties in with a wider requirement for revisiting wholesale mobile business models and regulation.


Conclusions

This is all very exciting, and underscores a central forecast of mine: that mobile network business / ownership models will change a lot in the next few years. We'll see new network owners, wholesalers and tenants - even as normal MNOs consolidate and merge with fixed-line players.

I'd like to think I've played a small part in this myself. I've advised clients, presented and run many workshops on the topic, including my own public events in May and November 2017 (link), and numerous speeches to regulators, industry groups and policymakers. Industry, rural and in-building users need both more coverage and sometimes more control / ownership of cellular networks in licensed bands. 

There will need to be customisation, systems integration and a wide variety of "special cases" for future cellular. The MNOs are not always willing or able to deliver that, so alternatives are needed. (Most will admit, privately at least, that they cannot cover all verticals and all use-cases for 4G/5G). WiFi works fine for many applications, but in some cases private cellular is more suitable.

We're seeing a variety of new network-sharing and private-spectrum models emerge around the world, and a general view that they are (in some fashion) needed. What's unclear is the best approach (or approaches): CBRS, German industrial networks, Dutch localised licenses, or something else. I'd say that Ofcom's various ideas are very powerful - and in the case of the secondary re-use proposal, highly disruptive.

Edit & footnote: rather than "secondary re-use", perhaps a better name for this proposal is "Cellular White-Space", given that it is, in essence, the mobile-spectrum equivalent of the TVWS model.

If you'd like to discuss this with me - or engage me for a presentation or input on strategy or regulatory submissions - please reach out and connect. I'm available via information AT disruptive-analysis DOT com

Also, please subscribe to this blog, follow me on Twitter and LinkedIn - and (new for 2019!) look out for new audio/podcast and YouTube content. 

There are two audio segments that relate to this blog post:
Part 1 covers the general background to private cellular (here)
Part 2 covers the specific Ofcom proposals (here)