Speaking Engagements & Private Workshops - Get Dean Bubley to present or chair your event

Need an experienced, provocative & influential telecoms keynote speaker, moderator/chair or workshop facilitator?
To see recent presentations, and discuss Dean Bubley's appearance at a specific event, click here

Showing posts with label LPWAN. Show all posts

Tuesday, July 11, 2017

Sensors: implications for wireless connectivity & video communications

Quick summary
  • Sensor technology is complex, diverse, fascinating & fast-evolving.
  • There are dozens of sensor types & technologies.
  • Nobody believes the 20-50bn devices forecasts, especially if they are based on assumptions that 1 sensor = 1 device
  • Some sensors improve the capabilities of already-connected devices, like phones or (increasingly) cars.
  • Some sensors enable creation of new forms of connected device & application.
  • Most sensors connect first via one or two tiers of local gateways, sub-systems or controllers, rather than directly connect to the Internet / cloud individually
  • While the amount of sensor-generated data is growing hugely, not all of this needs real-time collection and analysis, and so network needs are less-extreme.
  • Many industrial sensors use niche or unfamiliar forms of connectivity.
  • Genuine real-time controls often need sensors linked to "closed-loop" systems, rather than using Internet connections / cloud.
  • WiFi & short-range wireless technologies like Bluetooth & ZigBee are growing in importance. There is limited concern about using unlicensed spectrum
  • LoRa radios (sometimes but not always with LoRaWAN protocols) are growing in importance rapidly
  • Cellular connectivity is important for certain (especially standalone, remote/mobile & costly) sensor types, or sensor-rich complex objects like vehicles. 
  • The US seems more keen on LTE Cat-1 / Cat-M than NB-IoT for sensor-based standalone devices. Europe and Asia seem more oriented towards NB-IoT
  • There are no obvious & practical sensor use-cases that need 5G, but it will likely improve the performance / economics / reach of some 4G applications.
  • Camera / image sensors are becoming hugely important and diverse. These are increasingly linked to either AI systems (machine vision) or new forms of IoT-linked communication applications
  • "Ordinary" video sensors/modules are being supplemented by 3D, depth-sensing, emotion-sensing, 360degs, infra-red, microscopy and other next-gen capabilities.
  • AI and analytics will sometimes be performed on the sensor or controller/gateway itself, and sometimes in the cloud. This may reduce the need for realtime data transmission, but increase the need for batch transfer of larger files.
  • Conclusion: sensors are central to IoT and evolving fast, but the impact on network connectivity - especially new cellular 4G and 5G variants - is diffuse and non-linear.

Narrative
 
A couple of weeks ago I went to Sensors Expo 2017 in San Jose. This topic is slightly outside my normal beat, but fits with my ongoing interest in "telcofuturism", especially around the intersection of IoT, networks and AI. It also dovetails well with recent writing I've done on edge computing (link & link), a webinar [this week] and paper on IoT+video for client Dialogic (link), and an upcoming report I'll be writing on LPWAN for my Future of the Network research stream at STL Partners (link).

First things first: listening to some of the conference speeches, and then walking around the show floor, made me realise just how little I actually knew about sensors, and how they fit into the rest of the IoT industry. I suspect a lot of people in telecoms - or more broadly in wireless networking and equipment - don't really understand the space that well either.

For a start, there's a bewildering array of sensor types and technologies - from tiny silicon accelerometers that can be built into a chip (based on MEMS - micro-electromechanical systems), right up to sensors woven into large-scale fabrics, that can be used to make tarpaulins or tents which know when someone tries to cut them. There's all manner of detectors for gases, proximity, light, pressure, force, airflow, air quality, humidity, torque, electrical current, vibration, magnetic fields, temperature, distance, and so forth.

Secondly, a lot of sensors have historically been part of "closed-loop" systems, without much in the way of "fully-connected" computing, permanent data collection, networking, cloud platforms or analysis. 

An easy example to think about is an old-fashioned thermostat for a heating system. It senses temperature - and switches a boiler or radiator on or off accordingly - without "compute" or networking resource. This has been reinvented by Nest and others. Plenty of other sensors just interact with "real-time" systems - for example older cars' airbags, or motion-detection alarms which switch on lights.

In industry, a lot of sensors hook into the "real-time control" systems, whether that's for industrial production machinery, quality control, aircraft avionics or whatever. These often use fixed connectivity, with a bewildering array of network and interface types. It's not just TCP/IP or familiar wireless technologies. If you haven't come across things like Modbus or Profibus, or terms like RS485 physical connections, you perhaps don't realise the huge complexity and unfamiliarity of some of these systems. This is not telco territory.

This is important, as it brings in an entire new realm to think about. From a telco perspective, we're comfortable talking about the touch-points of networks and IT. We don't often talk about OT, or "operational technology". A lot of people seem to naively believe that we can hook up a sensor or a robot or a piece of industrial machinery straight to a 4G/5G/WiFi connection, then via Internet or VPN to a cloud application to control it, and that's all there is to it. 

In fact, there may well be one, two or three layers of other technology involved first, notably PLC units (programmable logic controllers) as well as local gateways. A lot of this is the intranet-of-things, not the Internet-of-things - and may well not even be using IP as most people in networking and telecoms normally think about it.
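To make the layering concrete, here's a minimal sketch of that intranet-of-things pattern: a gateway polls a Modbus-style register map from a PLC over the local fieldbus, and only batched readings ever travel upstream to the cloud. All register addresses, scaling factors and class names here are invented for illustration, not any vendor's actual API:

```python
import json

# Illustrative register map, standing in for a Modbus/Profibus PLC poll.
# Addresses and scaling (temperature x10) are hypothetical.
PLC_REGISTERS = {40001: 215, 40002: 87, 40003: 1}

def poll_plc(registers):
    """Local fieldbus read: this loop never touches the Internet."""
    return {
        "temperature_c": registers[40001] / 10.0,
        "pressure_kpa": registers[40002],
        "status_ok": bool(registers[40003]),
    }

class Gateway:
    """Tier between the PLC and the cloud: buffers locally, uploads rarely."""
    def __init__(self, batch_size=3):
        self.batch_size = batch_size
        self.buffer = []
        self.uploads = []  # stand-in for an HTTPS/MQTT publish upstream

    def ingest(self, reading):
        self.buffer.append(reading)
        if len(self.buffer) >= self.batch_size:
            self.uploads.append(json.dumps(self.buffer))
            self.buffer = []

gw = Gateway(batch_size=3)
for _ in range(7):
    gw.ingest(poll_plc(PLC_REGISTERS))

print(len(gw.uploads), len(gw.buffer))  # 2 batches uploaded, 1 reading pending
```

The point of the sketch is that the wide-area link only sees the gateway's occasional batch uploads - the per-sensor chatter stays on the local control network.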

In other words, there's a lot more optionality around ISO layers - there's a broad range of sector-specific or proprietary protocols that control sensors or IoT devices over a particular "physical layer". That contrasts with most users' (and telco-world observers') day-to-day expectations of "IP everywhere" and using HTTP and TCP/IP and similar protocols over ethernet, WiFi, 4G or whatever. The sensor world is much more fragmented than that.

These are some of the specific themes I noted at the event:
  • Despite the protocol zoo I've discussed, WiFi is everywhere nonetheless. Pretty much all the sensor types have WiFi connectivity options somewhere, unless they're ultra-low power. There's quite a bit of Bluetooth and ZigBee / other varieties of IEEE 802.15.4 for short-range access too.
  • Almost nobody seems bothered about the vagaries of unlicensed spectrum, apart from a few seriously mission-critical, time-critical applications, in which case they'll probably use fixed connections if they can. Bear in mind that a lot of sensors are actually fairly time-insensitive, so temporary interference or congestion doesn't matter much. Temperatures usually only change over seconds / minutes, not milliseconds, for example. Note, though, that this is for sensing (ie gathering data), not actuating (doing stuff, eg controlling machines or robots).
  • Most sensors send small bursts of data - either at set intervals, or when something changes. There are exceptions (notably camera / image sensors)
  • I saw a fair amount of talk about 5G (and also 4G and NB-IoT) but comparatively little action. Unlike Europe, the US seems more interested in LTE Cat-1 and Cat-M than in NB-IoT. Cat-M can support VoLTE, which makes it interesting for applications like elder/child-trackers, wearables and building security. NB-IoT seems fairly well-suited to things like parking meters, environmental sensors, energy metering etc. where each unit is comparatively standalone, and needs to link to cloud/external resources like payments.
  • There's also a lot of interest in LoRa, both as a public network service (Senet was prominently involved), and also as privately-owned infrastructure. I think we're going to see a lot of private LoRa embedded into medium-area sensor networks. Imagine 100 moisture sensors for a farm, connected back to a central gateway on top of the barn, and then on to a wide-area connection (fixed or mobile) and a cloud-based application. The 100 sensors don't need a wireless "service" - they'll be owned by the farmer, or else perhaps the connectivity will be offered as a part of a broader "managed irrigation service" by the software company.
  • There's an interest in wireless connectivity to reduce regulatory burdens for some sensors. For example, to connect a temperature sensor in an area of an oil refinery with explosion risks, to a server in another building, requires all manner of paperwork and certification. The trenching, ducting and physical wire between them needs approval, inspection and so on. It's much simpler to do it with wireless transmitters and receivers.
  • A lot of the extra sensors getting connected are going to be bundled with existing sensors. Rather than just a vibration sensor, the unit might also include temperature and pressure sensors in integrated form. That probably adds quite a lot to the IoT billions number-count, without needing separate network links.
  • A lot of sensors will get built into already-connected objects. Cars and aircraft will continue to add cameras, material stress sensors, chemical analysis probes for exhaust gases, air/fluid flow sensors, battery sensors of numerous types, more accelerometers and so on. This means more data being collected, and perhaps more ways to justify always-on connections because of new use-cases - but it also means a greater need for onboard processing and "bulk" transfers of data in batches.
  • Safety considerations often come ahead of security, and a long way ahead of performance. A factory robot needs sensors to avoid killing humans first. Production quality, data for machine learning and efficiency come further down the list. That means that connecting devices and sensors via wider-range networks might make theoretical or economic sense - but it'll need to be seen through a safety lens (and often sector-specific regulation) first. Taking things away from realtime connections and control systems, into a non-deterministic IP or wireless domain, will need careful review.
  • Discussion of sensor security issues is multi-layer, and encouragingly pervasive. Plenty of discussions around data integrity, network protection, even device authenticity and counterfeiting.
  • Imaging sensors (cameras and variants of them) are rapidly proliferating in terms of both capabilities and reach into new device categories. 3D depth-sensing cameras are expected on phones soon, for example for facial recognition. 360-degree video is rapidly growing, for example with drones. Vehicles will use cameras not just for awareness of surroundings, but also to identify drivers or check for attentiveness and concentration. Rooms or public spaces will use cameras to count occupancy numbers or footfall data. New video endpoints will link into UC and collaboration systems. "Sensed video" will need greater network capacity in many instances. [I am doing a webinar with Dialogic about IoT+video on July 13th - sign up here: link]
  • Microphones are sensors too, and are also getting smarter and more capable. Expect future audio devices to be aware of directionality, correct for environmental issues such as wind noise, recognise audio events as triggers - and even do their own voice recognition in the sensor itself.
  • Textile and fabric sensors are really cool - anything from smart tarpaulins for trucks to stop theft, through to bandages which can measure moisture and temperature changes, to signal a need for medical attention. 
  • There's a lot of modularity being built into sensors - they can work with multiple different network types depending on the use-case, and evolve over time. A vibration sensor module might be configurable to ship with WiFi, BLE, LoRa, NB-IoT, ZigBee and various combinations. I spoke to Advantech and Murata and TE Connectivity, among others, who talked about this.
  • Not many people seemed to have thought about SIMs/eSIMs much, at a sensor level. The expectation is that they will be added by solution integrators, eg vehicle manufacturers or energy-meter suppliers, as needed.
  • AI will have a range of impacts both positive and negative from a connectivity standpoint. The need for collecting and pooling large volumes of data from sensors will increase the need for network transport... but conversely, smarter endpoints might process the data locally more effectively, with just occasional bulk uploads to help train a central system.
Overall - this has really helped to solidify some of my thinking about IoT, connectivity, the implications for LPWAN and also future 4G/5G coverage and spectrum requirements. I'd recommend that readers in the mainstream telecom sector drop in to any similar events for a day or two - it's a good way to frame your understanding of the broader IoT space and recognise that "sensors" are diverse and have varying impacts on network needs.

Thursday, May 11, 2017

Spectrum-Sharing: Europe & Asia need something like CBRS

The more I look at enterprise mobile, especially its focus on verticals and IoT, the more convinced I am that there needs to be a change in industry structure, regulation and network ownership/operation. And that means new spectrum policy, as well.

In particular, private licensed-band wireless networks will be essential - that is, networks (using cellular, WiFi, LPWAN or other technology) that can be directly managed by organisations that are not traditional MNOs (mobile network operators), to provide high-QoS, reliable wireless connections. I'm thinking large companies running their own networks, industrial network specialists, local cooperatives, perhaps new government-sector initiatives, and various other aggregators, outsourcers and intermediaries. These will mostly be in-building / on-campus, but some may need to be genuinely wide-area, or even national, as well.

This is in addition to enterprise-centric initiatives in the MVNO/E space, vertical activities by fixed telcos and MNOs, unlicensed-band WiFi and LPWAN deployments and so on.

 There are three main models for licensing radio spectrum today:
  • Exclusive licenses: Dedicated access to certain bands is very common today, for example for mobile networks, fixed microwave links, broadcasters, satellite access and many government-sector uses, such as military radios and radar. Particular organisations have rights to solo access to particular frequencies, in a given country/region, subject to complying with various rules on power and so forth.
  • Unlicensed (also license-exempt): Beyond some basic rules on power and antenna siting, some bands are essentially "open to all". The 2.4GHz and 5GHz bands used by WiFi, Bluetooth and many other technologies are prime examples, as are bands used for consumer walkie-talkies and various medical and automotive applications.
  • Shared spectrum: This covers various models for allowing multiple users for certain frequencies. It could involve temporary usage (eg for event broadcast), bands that haven't been "cleared" fully and still have incumbent users that newcomers need to "work around". It might be spectrum assigned in geographic chunks, or at low power levels and mandating "polite" protocols so that multiple users can co-exist. We've seen TV "white spaces" where under-used bands are opened up to others, and so forth.
The latter approach of sharing is becoming much more important - despite continued clamour for exclusive licenses, especially from the mobile industry. Given that the demand for spectrum is rising from all sides - mobile, WiFi, utilities, broadcast, satellite, Internet and many others - and each has a different demand profile (global / national / regional and subscription / private / amenity etc), a one-size-fits-all model cannot work, given limited spectrum resources. More spectrum-sharing will be essential.

More models are now emerging for sharing spectrum bands. Depending on the details, these open up opportunities for a greater number of stakeholders. The US' innovative CBRS model (see link) for 3.5GHz is worth examining, and perhaps replicating elsewhere, especially Europe. It is much more sophisticated - but more complex to implement - than the Licensed Shared Access (LSA) that Europe has leaned towards historically. In Disruptive Analysis' view this extra complexity is worthwhile, as it allows a much broader group of stakeholders to access spectrum, fostering greater innovation.
 
The important differentiator for CBRS is that there are three tiers of users:
  • Incumbents, primarily the military, which gets the top level of access rights for radar and other uses in the band
  • Licensed access providers which can get dedicated slices in specific geographic areas. These are "protected" but subject to pre-emption by the top tier. They will also generate revenue for the government in terms of license fees - although awards will be for shorter periods than normal bands (3 years is being discussed).
  • General access - basically this is like unlicensed access, but it has to work around the other tiers, if they are present.
To make all this work, the CBRS system needs databases of who is using what spectrum and where, and sensors to detect any changes in the top tier's usage. (The military, as incumbents, isn't keen on spending any money to actually tell the system what it's doing - it needs to be securely automated).

When all this is up and running, there will be many potential user groups for shared spectrum such as this, using either the priority licenses, or general access tiers:
  • Incumbent mobile operators needing more capacity in specific areas
  • MVNOs wanting to "offload" some traffic from their host MNO networks, onto their own infrastructure, without the expense of full national coverage. This could work either alongside, or as an alternative to, WiFi-based offload or WiFi-primary models.
  • Enterprises wanting to deploy private cellular networks indoors or over large campuses (eg across an airport or oil-refinery for IoT usage)
  • Potentially, large-scale WiFi deployments in new bands, less subject to interference than mainstream unlicensed bands - although this would require devices/chipsets supporting new frequencies that are currently outside the proper WiFi standards.
  • Various "neutral host" wholesale LTE models, for example run by city authorities for metropolitan users, or cloud-providers for enterprise - or as a way to provide better indoor coverage for existing incumbent "outdoor" operators, without their needing individual infrastructure in each building. This could allow the pooling of back-end / administrative functions and costs across multiple local LTE networks in shared bands. Imagine an Amazon AWS approach to buying cellular capacity, on-demand.
  • Various approaches to roaming or "un-roaming" providers - for example, a theme-park operator or hotel owner could offer its foreign guests "free LTE" while on-site.
  • Potential new classes of cellular operator, such as an Industrial Mobile Operator (imagine GE or ABB integrating cellular access into machinery & plant equipment), various IoT platform providers, and integration opportunities with Internet, healthcare, transport and other systems.

This approach may not work for enterprise wireless users requiring national (or very broad-area) coverage, such as utility companies or transport providers. There are separate arguments for utility and rail companies getting slices of dedicated spectrum, or some other model of national sharing.

Importantly, CBRS means that LTE-U variants like MuLTEfire can be used to create private cellular networks. Coupled with cheap, virtualised (& probably cloud-based) core networks, this means that mobile networks are much more accessible to new entrants. The scale economies of national licenses will no longer apply to lock out alternative providers.

In other words, we will see consolidation of national MNOs, but fragmentation of localised MNOs (or PNOs, as some are calling private network operators). 

While some MNOs and their industry bodies may be concerned at more competition, privately many of them acknowledge that a lot of the use-cases above cannot realistically be offered by today's industry. 

Even large MNOs can probably only pick 2 or 3 verticals to really get deep expertise in - maybe smart cities, or rail, or utilities, say. But they cannot get enough expertise to effectively build customised, small networks in all the possible contexts - car factories, ports, hospitals, mines, hotels, shopping malls, airports, public safety agencies, universities, oil refineries, power stations and so on. Each will have its own requirements, its own industry standards to observe, its own systems to integrate with, its own insurance/liability issues and so on. They need wireless for all sorts of reasons from robots to visitors - but today's MNOs will not be able to satisfy all those needs, especially indoors.

For many governments' visions of future factories, cities and public services, good quality wireless will be essential. But it will need to be provided by many new types of providers, with business models we can only guess at.

While CBRS is still at an early stage, and will be tricky to implement, we need something similar to it - with multiple tiers including a "permissionless" one - in Europe and the rest of the world. Enterprise and private cellular networks (and other licensed-band options for WiFi and LPWAN) are critical - and policymakers and regulators need to acknowledge and support this.




If you are interested in discussing this topic further, I will be running a workshop day on private cellular on May 30th in Central London, in a joint effort with Caroline Gabriel of Rethink Research. Details and booking are here: (link) or else email information AT disruptive-analysis DOT com.

Saturday, February 25, 2017

A Core Problem for Telcos: One Network, or Many?

In my view the central question - maybe an existential dilemma - facing the telecoms industry is this:

Is it better to have one integrated, centrally-managed and feature-rich network, or several less feature-rich ones, operated independently?

Most of the telecoms "establishment" - operators, large vendors, billing/OSS suppliers, industry bodies - tends to prefer the first option. So we get notions of networks with differentiated QoS levels, embedding applications in-network with NFV and mobile edge computing (MEC) and perhaps "slicing" future 5G networks, with external customer groups or applications becoming virtual operators. There is an assumption that all the various standards are tightly coupled - radio, core network, so-called "telco cloud", IMS and so on. Everything is provided as a "network function" or "network service" in integrated fashion, and monetised by a single CSP.

It's not just the old guard either. New "non-establishment" approaches to managing quality also appear, such as my colleague Martin Geddes' views on clever and deterministic contention-management mechanisms (link). That takes a fresh look at statistical multiplexing.

Yet users, device vendors and cloud/Internet application providers often prefer a different approach. Using multiple network connections, either concurrently or being able to switch between them easily, is seen to help reduce costs, improve coverage and spread risks better. I've written before about using independent connections to create "Quasi-QoS" (link), especially in fixed networks with SD-WAN. In mobile, hundreds of millions of users have multi-SIM handsets, while (especially in IoT) we see multi-IMSI SIM cards that can be combined with roaming deals to give access to all mobile networks in a given country, or optimise for costs/performance in other ways. Google's Fi service famously combines multiple MVNO deals, as well as WiFi. Others are looking to blend LPWAN with cellular, or satellite and so on. The incremental cost of adding another connection (especially wireless) is getting ever lower. At the other end of the spectrum, data centres will often want redundant fibre connections from different providers, to offset the risk of a digger cutting a duct, as well as the ability to arbitrage on pricing or performance.

I have spoken to "connected car" specialists who want their vehicles to have access not just to (multiple) cellular networks, but also satellite, WiFi in some locations - and also work OK in offline mode as well. Many software developers create apps which are "network aware", with connectivity preferences and fallbacks. We can expect future AI-based systems to be much smarter as well - perhaps your car will know that your regular route to work has 10 miles of poor 4G coverage, so it learns to pre-cache data, or uses a temporary secondary cellular link from a different provider.

There are some middle grounds as well. Technologies such as MIMO in wireless networks give "managed multiplicity", using bouncing radio signals and multiple antennas. Plenty of operators offer 4G backups for fixed connections, or integrate WiFi into their same core infrastructure. The question then is whether the convergence occurs in the network, or perhaps just in the billing system. Is there a single point of control (or failure)?

The problem for the industry is this: multi-network users want all the other features of the network (security, identity, applications etc) to work irrespective of their connection. Smartphone users want to be able to use WiFi wherever they are, and get access to the same cloud services - not just the ones delivered by their "official" network operator. They also want to be able to switch provider and keep access - the exact opposite of the type of "lock-in" that many in the telecoms industry would prefer. Google Fi does this, as it can act as an intermediary platform. That's also true for various international MVNO/MNO operators like Truphone.

A similar problem occurs at an application level: can operators push customers to be loyal to a single network-resident service such as telephony, SMS or (cough) RCS? Or are alternative forces pushing customers to choose multiple different services, either functionally-identical or more distant substitutes? It's pretty clear that the low marginal cost of adding another VoIP or IM or social network outweighs the benefits of having one "service to rule them all", no matter how smart it is. In this case, it's not just redundancy and arbitrage, but the ability to choose fine-grained features and user-experience elements.

In the past, the trump card for the mono-network approach has been QoS and guarantees. But ironically, the shift to mobile usage has reduced the potential here - operators cannot really guarantee QoS on wireless networks, as they are not in control of local interference, mobility or propagation risks. You couldn't imagine an SLA that guaranteed network connection quality, or application performance - just as long as it wasn't raining, or there wasn't a crowd of people outside your house. 




In other words, the overall balance is shifting towards multiplicity of networks. This tends to pain many engineers, as it means networks will (often) be less-deterministic as they are (effectively) inverse-multiplexed. Rather than one network being shared between many users/applications, we will see one user/device sharing many networks. 

While there will still be many use-cases for well-managed networks - even if users ultimately combine several of them - this means that future developments around NFV and network-slicing need to be realistic, rather than utopian. Your "slice" or QoS-managed network may only be used a % of the time, rather than exclusively. It's also likely that your "customer" will be an AI or smart application, rather than an end-user susceptible to being offered loyalty incentives. That has significant implications for pricing and value-chain - for example, meaning that aggregators and brokers will become much more important in future.

My view is that there are various options open to operators to mitigate the risks. But they need to be realistic and assume that a good % of their customers will, inevitably, be "promiscuous". They need to think more about competing for a larger share of a user's/device's connectivity, and less about loading up each connection with lots of QoS machinery which adds cost rather than agility. Nobody will pay for QoS (or a dedicated slice) only 70% of the time. Some users will be happy with a mono-connection option. But those need to be identified and specifically-relevant solutions developed accordingly. Hoping that software-defined arbitrage and multi-connection devices simply disappear is wishful (and harmful) thinking. Machiavellian approaches to stopping multi-connection won't work either - forget about switching off WiFi remotely, or connecting to a different network than the one the user prefers.

This is one of the megatrends and disruptions I often discuss in workshops with telco and vendor clients. If you would like to arrange a private Telecoms Strategic Disruptions session or custom advisory project, please get in touch with me via information AT disruptive-analysis DOT com.

Wednesday, February 15, 2017

My presentation at Ofcom: What the year 2030 implies for wireless trends & spectrum policy

On the 8th of February, I gave a presentation at Ofcom (the UK telecom regulator). The event was a day-long discussion of the "Future Wireless World", looking at longer-term trends towards IoT and connectivity (5G, WiFi, mesh, satellite and more), with an implied impact on how spectrum policy needs to be reshaped to meet the changes. It was introduced and moderated by Philip Marnick (Group Director, Spectrum) and also attended by the Ofcom CEO Sharon White. On the same day, Ofcom released its latest thoughts on 5G spectrum (link)

There were about 150 attendees from a range of operators, broadcasters, government bodies, vendors, consultants, Internet and industrial players and internal Ofcom staff. There may be an audio/video recording of the sessions put up online at some point, but I'm not certain of this.

My presentation was a very broad one - I was tasked with imagining what the future economy, consumer and business environment might look like in the year 2030, what disruptions and innovations may occur between now and then, and how that flows back into the use of wireless networks and therefore spectrum. 

In other words, I was wearing my "telcofuturist" hat, where I take generic futurist themes and apply them to the specifics of telecoms and the broader wireless industry. After my presentation, I joined Philip Marnick for a Q&A session with the audience, which was a mix of regulatory, futurist and general analyst-type discussion.

The rest of the event was made up of a series of presentations and panel debates between a broad set of industry luminaries and innovators, including Dino Flore of 3GPP & Qualcomm, Simon Saunders of Google (& formerly the Femto Forum), plus others from O3B, Ericsson, Veniam, BT/EE, Vodafone, Silver Spring and others.
There was a really interesting session on mesh networks later in the day, which I also think has a lot of potential. It was a really refreshing change from some of the usual sponsor-driven snorefests, although there was clearly a strong "lobbying" flavour to some of the questions, with people taking advantage of access to the regulator in an open forum.

One thing that struck me about both this event, and another event I attended recently at Tech-UK's Spectrum Policy Forum (link), is a growing frustration in the regulatory community that some people now view spectrum purely as a "mobile" thing, without simultaneously mentioning broadcast, government, WiFi, LPWAN, industrial, satellite, fixed-access and all the other users of the airwaves.

The mobile industry tends to be very good at pitching for more and more slices of spectrum, ideally provided on an exclusive basis with long licence terms (in exchange for quite a lot of cash in terms of fees, to be fair). It has a far bigger and more cohesive lobbying and publicity engine than the broad set of other spectrum stakeholders.



My own view - and, it seems, many regulators' - is that given the finite amount of spectrum, there is ever less rationale for exclusivity. Various forms of sharing and private networks are rising up the agenda. My recent piece on Industrial IoT and sharing [link] has garnered a lot of good feedback, while the National Infrastructure Commission's Dec'16 report [link] recommended that "Government and Ofcom should review how unlicensed, lightly licensed spectrum, spectrum sharing and similar approaches can be utilised for higher frequencies to maximise access to the radio spectrum".

In other words, spectrum-sharing - of various types - is moving up the regulatory agenda very fast in the UK. I think onsite industrial IoT coverage, via private cellular or licensed-band WiFi deployments, is the easiest to conceptualise and "sell", but there are plenty of other angles too.

But as well as the challenges of IIoT, I covered a lot of other topics in my presentation (slides are embedded below the list - apologies that the bullets aren't in the same sequence):
  • The impact of AI will be felt on both network "supply" side (eg more efficiently-optimised networks, churn management etc) and "demand" (smarter use of wireless connectivity, least-cost routing and so forth). I wrote a post on this a while back (link)
  • Whether the emphasis on mobile uses of spectrum, and the 3GPP/GSMA "national MNO" view of the world could lead to a "monoculture" of cellular connectivity. As in agriculture, the superficial efficiency/yield needs to be considered in the context of risks. Might there be long-term benefits in "network diversity", and should regulators look to protect it, the same way environmental rules protect biodiversity?
  • On a similar environmental theme, I considered habitats that are primarily "mono-platform" and fragile to external events (eg coral reefs) vs. "multi-platform" ecosystems which are more resilient (eg rainforests). Obviously this doesn't translate precisely to wireless networks, but the metaphor seems apt. I'm not a biologist, but a quick word with someone who does study ecosystems afterwards suggested my analogy is worth further exploration.
  • "Arbitrage Everywhere": future networks - and by extension both spectrum and telecom competition rules - should anticipate devices and applications using multiple connections / service providers, and picking and choosing/bonding connectivity from several options. This is already seen in the fixed world for enterprise with SD-WAN, and should be expected in wireless too. This means that "partial competition" (eg from WiFi, LPWAN, satellite, private cellular) should be considered as well as like-for-like rival infrastructure from other national MNOs.
  • Redefining the nature of a "service" - what do we actually mean, when we frame our regulation of "service providers"? Many more organisations are offering connectivity services, while many other models of delivering a "capability" are emerging. WiFi can be a service, owned by a venue, given away for free, provided as an amenity, self-provisioned by a user and so forth. ITU's definition of a service ("a set of functions offered to a user by an organisation") seems to be too narrow given the rise of developers, embedded connectivity in IoT, private networks and more.
  • I discussed the relative timing of various industry trends - and the fact that various look like swinging "pendulums". For instance we see a back-and-forth between centralised vs. distributed control, standards vs. proprietary technologies, local vs. national vs. global and so on. I noted that the timing of the various pendulums' swings are not all in sync - and therefore the actual outcome for the wireless sector is really complex to predict. Various external trends (eg open source, Moore's Law, AI, geopolitics, specific companies) can act as weights on the pendulums.
  • I noted that many different and new organisations may own/operate/embed wireless connectivity in future. Aircraft engine manufacturers use satellite telemetry and download sensor data via WiFi to optimise their analytics for selling "power by the hour". IoT platforms & MVNOs for specific sectors are springing up (eg Cubic Telecom for automotive). Theoretically, Elon Musk could use SpaceX to launch his own satellites - and provide vertically-integrated connectivity to Tesla cars. Google has numerous wireless initiatives, from Fi to WiFi to white spaces to its Loon balloon project. The Governor of California has suggested launching the state's own earth-sensing satellites, if the current administration cuts federal funding for environmental monitoring. Then there are public-safety LTE networks, WiFi everywhere, new mesh concepts, private LoRa deployments and so on.
  • In the Q&A, I also discussed 5G bands, NFV, network-slicing and more. I noted that 5G is being driven initially by fixed-access and 28GHz in the US & S Korea, not the three "mainstream" uses of critical IoT, ultra-mobile broadband and massive IoT. This is outside the "official" bands being pushed by Ofcom as "pioneer" options, and slowly being explored internationally for the ITU WRC event in 3 years' time. This was explored in another post of mine (link). I also expressed doubts that NFV-led network-slicing will deliver all the agility required for creating vertical-specific networks - even if it allows "super-MVNOs", will the host network provide enough fine-grained control and liability-bearing SLAs?
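The "Arbitrage Everywhere" idea in the list above can be sketched as a simple selection policy. This is purely an illustrative sketch - every link name, tariff and latency figure below is hypothetical - but it shows the kind of per-application choosing between connectivity options that devices may increasingly perform:

```python
# Illustrative sketch of "arbitrage everywhere": a device scoring several
# available connections and picking one per application. All names and
# numbers are hypothetical, purely to demonstrate the selection logic.

from dataclasses import dataclass

@dataclass
class Link:
    name: str
    cost_per_mb: float   # hypothetical tariff, pence per MB
    latency_ms: float    # typical round-trip latency
    up: bool             # currently available?

def pick_link(links, max_latency_ms, weight_cost=1.0, weight_latency=0.1):
    """Choose the cheapest acceptable link. A real policy engine would
    also weigh reliability, battery drain, data caps, roaming rules, etc."""
    candidates = [l for l in links if l.up and l.latency_ms <= max_latency_ms]
    if not candidates:
        return None
    return min(candidates,
               key=lambda l: weight_cost * l.cost_per_mb
                           + weight_latency * l.latency_ms)

links = [
    Link("venue-wifi", cost_per_mb=0.0, latency_ms=40,  up=True),
    Link("mno-4g",     cost_per_mb=0.5, latency_ms=60,  up=True),
    Link("satellite",  cost_per_mb=2.0, latency_ms=600, up=True),
]

# A latency-tolerant app still prefers the free venue WiFi...
print(pick_link(links, max_latency_ms=700).name)   # venue-wifi
# ...while a latency-critical app may find nothing acceptable at all.
print(pick_link(links, max_latency_ms=10))         # None
```

Regulators assessing "partial competition" would need to recognise that this kind of multi-provider selection - or bonding across several links at once, as SD-WAN already does for fixed enterprise connections - blurs the boundaries between rival infrastructures.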


Overall, my session seemed to be very well-received. Hopefully I've prodded some parts of the industry. I'd like to see a wider recognition of the changes to some of our fundamental assumptions that will occur over the next decade and beyond. 

A key point is that 5G, delivered by traditional MNOs as a subscription service, is exciting and important - but it must not be allowed to totally dominate discussions around spectrum. Governments and regulators must push for "network diversity" of technologies, stakeholders and business/operational models - including private networks for businesses. Short-term focus on "efficiency" of a monoculture approach may mask wider ecosystem-level risks. 

A key theme is the need for flexibility and agility in wireless networks and related regulation - many of the more radical changes will occur at timespans of 1-5 years, which is much shorter than the investment and planning horizon for a lot of the industry. Whether we need more malleable licences, better secondary marketplaces for spectrum, new forms of sharing (eg using blockchain as a basis for a distributed database of allocations), or a rethink on how competition is measured, there are plenty of options.

Spectrum policy is several steps away from the actual world of consumer and business needs for wireless networks. But it's for that reason that it's worth thinking deeply about the long chain of implications of seemingly small decisions or baked-in business models that are created now.

If you'd like to have a similar presentation and discussion at your own event, or at a private workshop, please contact me via information AT disruptive-analysis dot com

Friday, October 28, 2016

A realistic 5G view: Timelines, Standards & Politics

Things are moving incredibly fast for 5G!

...or are they? A couple of recent headlines make it a little hard to tell:

Verizon Eyes "Wireless Fibre" Launch in 2017 

Verizon Rejects AT&T-led Effort to Speed Up Release of 5G Standard

So, does Verizon want early 5G, or not? Are we looking at a 2017 launch, or still 2019-20? Why the apparent contradiction? And what about other operators in Asia and Europe?

I've been to recent 5G events including NGMN's conference (link), and a smaller one this week organised by Cambridge Wireless and the UK's National Infrastructure Commission (link). I've also been debating with assorted fellow-travellers online and at this week's WiFi Now event (link).

In my view, Verizon (and SKT in South Korea) are gunning hard for early "pre-5G" well in advance of the full standards, but are also subtly trying to push back the development of "proper" 5G so that they're able to influence it to their advantage. That's especially true for Verizon, which seems to be trying to out-game AT&T with its 5G strategy.

It's helpful to note a few things going on in the background:
  • 28GHz is definitely "a thing". The FCC released huge chunks of spectrum for 5G this summer (link). Even though 28GHz wasn't even identified as a candidate 5G band by ITU originally, and mmWave wasn't expected to be standardised until 2020, it is starting to look like an early "done deal", as it's also available for use in S Korea and Japan.
The Winter Olympics in Korea in 2018 has prompted local operators KT and SKT, as well as Samsung, to look for pre-5G solutions. They've already spent quite a lot of effort on 28GHz trials (as has DoCoMo in Japan, which hosts the 2020 Summer Olympics), and the trials have gone well. They have mostly been interested in mobile broadband.
Verizon (and to an extent AT&T) have a different driver - gigabit-speed fixed broadband. They have been stung by the rapid growth of cable, which has far outpaced DSL in speed and market share. They also want to shut down the old PSTN and go to all-IP architectures. The problem is that much of the US is too sparsely-populated to run FTTH everywhere - putting new fibre in a trench down rural roads and driveways in Idaho to serve a handful of homes is not appealing. But running fibre to a pole or cabinet distribution point, and then using 5G as a "drop" to say 10-100 homes nearby, is much cheaper. T-Mobile US and USCellular have also been trialling fixed-wireless 5G, although any deployment would be harder without their own fibre backhaul and transport infrastructure. Ericsson and Nokia are also involved in the trials.
  • Fixed-access 5G won't need complex network-slicing & NFV cores to be useful, as it can be functionally similar to other forms of broadband access. It also won't need mobility, or fallback to 4G, and will be able to run in big wall-mounted terminals connected to a power supply - and sold/branded by the carrier rather than Apple et al. In other words, it's a lot simpler, and a lot faster-to-market.
  • Meanwhile, the other "headline" use-case groups for 5G have some issues. "Massive IoT" is probably going to have to wait until after the 4G variant NB-IoT has been deployed and matured. A 5G version of low-power IoT networking seems unlikely before 2020-22. And the ultra-low latency IoT use-cases (drones and self-driving cars et al) introduce some unpleasant compromises in IP frame structure, and given probable low volumes are something of a "tail wagging the 5G dog". In other words, the IoT business models for 5G don't really exist yet.
  • Linked to the IoT argument, it seems that the much-vaunted NFV "network slicing" approach to combine all these myriad use-cases is going to be late, expensive, complex and in need of better integration with BSS/OSS and legacy domains. I wrote about my doubts over slicing last month - link
So in other words, the original 3-Bubble Venn diagram for 5G use-cases (Enhanced Mobile Broadband, Massive IoT & Low-Latency IoT) was wrong. There's a 4th bubble - fixed wireless, which is going to come first.
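The fixed-wireless economics behind this can be made concrete with a back-of-envelope comparison. Every figure below is hypothetical - the point is simply to show why sharing the fibre run and radio head across a few dozen homes transforms the per-home cost in sparse areas:

```python
# Back-of-envelope comparison: FTTH vs fibre-to-pole plus a 5G fixed-wireless
# "drop", per home. All figures are hypothetical illustrations, not real
# deployment costs.

def ftth_cost_per_home(trench_per_home, drop_install):
    # With FTTH, every home needs its own trenched fibre run and drop.
    return trench_per_home + drop_install

def fwa_cost_per_home(trench_to_pole, radio_head, homes_per_radio, cpe):
    # With fixed-wireless, the fibre run and radio head are shared across
    # every home a single radio head can serve; only the CPE is per-home.
    return (trench_to_pole + radio_head) / homes_per_radio + cpe

ftth = ftth_cost_per_home(trench_per_home=2000, drop_install=500)
fwa = fwa_cost_per_home(trench_to_pole=5000, radio_head=3000,
                        homes_per_radio=30, cpe=300)
print(ftth, round(fwa, 2))   # 2500 vs 566.67 per home
```

Note how sensitive the result is to `homes_per_radio` - which is exactly the question AT&T was reportedly wary about (10 homes per radio head, or 30?).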



And this is massively important in the new technology reality. Increasingly often these days, fast-to-market beats perfect and then often defines future direction as well. We have seen various disruptions from adjacency, where expedient "narrow" solutions beat theoretical elegant-but-grandiose architectures to the punch. SD-WAN's rapid rise is disrupting the original NFV/NaaS plan for enterprise services, for example (link). Similarly, the rise of Internet VoIP and chat apps signalled the death-knell for IMS as a platform for anything except IP-PSTN.

In this case, I believe that fixed-wireless 5G - even if "pre-standard" and relatively small in volume - is going to set the agenda for later mobile broadband 5G, and then even-later IoT 5G. If it gets traction, there's a good chance the inertia will create de-facto standards and also skew official standards to ensure interoperability. This is already evident in steam-rollering 28GHz into the picture. (It's also worth remembering that Apple's surprise early decision to support 1.8GHz for LTE shifted the market a few years ago - while that had been an "official" band, it hadn't been expected to be popular so soon).

The critical element here is that AT&T is much more bullish and focused on mobile broadband (especially in urban hotspots) as a lead use-case for 5G, plus backhaul-type point-to-point connections. It expects that "the coverage layer will be 4G for many years to come". At the NGMN conference its speaker noted that fixed uses were also of interest, but was wary of the business case - for instance, whether it was possible to reach 10 homes or 30 from a single radio head. It also seems more interested in 70-80GHz links to apartment blocks, using existing in-building wiring, rather than Verizon's 28GHz rural-area drops. Coupled with its CEO's rather implausible assertion that mobile 5G will compete with cable broadband (link), this suggests it is somewhat distant from the Verizon/SKT/DoCoMo group. 

The kicker for me is the delay to the 3GPP standardisation of the "non-standalone" (NSA) version of the 5G radio, which uses a 4G control plane and is suitable for mobile devices (link). Despite its bullishness on fixed-5G, Verizon has pushed the timeline for the more mobile-friendly version back 6 months, against AT&T's wishes. The NSA and SA versions will now both be targeted for the June 2018 meeting of the standards body, rather than December 2017.

The official reason given is fairly turgid: "in order to effectively define a non-standalone option which can then migrate to a standalone, a complete study standalone would be required to derisk the migration". But I suspect the truth is rather more political: it gives Verizon and its partners (notably Samsung) another 6 months to get their 28GHz fixed-access solution into the market. Qualcomm has just announced a pre-5G chip that can accommodate just that, too. This means that standardised eMBB devices probably won't arrive until mid-2019, although there may be a few pre-standard ones for the 2018 Winter Olympics and elsewhere.

This will cement not just the 28GHz band in place, but also the fixed-5G uses and the idea that 5G doesn't need the full, fancy network-slicing NFV back-end. Given AT&T's huge emphasis on its ECOMP virtualisation project, that reduces the possible future advantage that might accrue if 5G was "always virtualised". It may also mean that lessons from real-world deployment get fed into the 2018 standards in some fashion, further advantaging the early movers. This is especially the case if it turns out that 28GHz can support some form of mobility - and early comments from Samsung suggest they've already experimented with beam-steering successfully.

Meanwhile... what about Europe? Well, to be honest, I'm a bit despondent. The European operators seem to be using 5G as a political football, playing with the European Commission and aiming at the goal marked "less net-neutrality and more consolidation". In July, a ridiculously-political "manifesto" was announced by a group of major telcos (link), trying to promise some not-very-demanding 5G rollouts if the EU agrees to a massive watering-down of regulation. The European 5G community also seems to be seduced by academia and the promise of lots of complex network-slicery and equally-dubious edge-computing visions. It's much more interested in the (late, uncertain-revenue) IoT use-cases rather than fixed-access and mobile broadband. And it has earmarked 26GHz (not 28) as a possible band for the ITU's WRC-19 conference to consider. 

In other words, it's missing the boat. By the time the EU, the European operators and European research institutions get their 5G act together, we'll have had a repeat of 4G, with the US, Korea and Japan leading the way. 

So overall, I see Verizon outmanoeuvring AT&T, once again. The Koreans and Japanese will benefit from VZ's extra scale and heft in moving vendors faster (notably Samsung, it seems, as Nokia and Ericsson seem more equivocal). The Europeans will be late to the party, once again. And the "boring" use-cases for 5G (fixed access and mobile broadband) will come out first, while the various IoT categories are still scratching their heads and waiting for the promised NFV slice-utopia to catch up.