

Sunday, November 07, 2021

No, the Metaverse is not the killer app for 5G

(This article was initially published on my LinkedIn Newsletter - click here to see the original, plus comment thread. And please subscribe!)

Let's stop the next cliche before it even starts.

Most knowledgeable people now roll their eyes in derision whenever they hear the words 5G and autonomous driving (or robotic surgery) mentioned in the same sentence. But the mobile industry's hypesters are always casting around for some new trope - and especially the mythical "killer app" that could help to justify the costs and complexity.

And as if on cue, the Metaverse - essentially a buzzword meaning a hybrid of AR/VR with the social web, collaboration and gaming - has captured the headlines.


The growing noise around Metaverse technologies - and especially Facebook's recent rebrand to Meta - is attracting a whole slew of bandwagon-jumpers. The cryptocurrency community has been the first to trumpet its assumed future role - perhaps unsurprisingly, since they tend to be even more fervent and boosterish than the mobile sector. But we're also seeing the online shopping, advertising and gaming worlds hail the 'Verse as the next big thing.

Next up - I can pretty much guarantee it - will be the 5G industry talking about millisecond latency and buying a "Metaverse network slice". We'll probably get the edge-computing crowd popping up shortly afterwards too. I've already seen a few posts hailing the Metaverse as the possible next big thing for MNOs (mobile network operators).

They're wrong.

The elephant in the room

If you've found this article without knowing my normal coverage themes, you might be surprised to read that the single biggest issue for connecting Metaverse devices and users will be real, physical walls.

If you go through Mark Zuckerberg's lengthy video intro to Meta and his view of future technologies, you'll notice that a high % of scenarios and use-cases are indoors. Gaming from your sofa. Virtual living rooms. Hybrid work environments blending WFH with in-person meetings, and so on.

This shouldn't be a huge surprise. The more immersive a technology is - and especially if it's VR-based rather than AR-based - the more likely people will take part while seated, or at least not while walking around an outdoor environment with obstacles and dangers. Most gaming and most business collaboration take place indoors too.

And indoor environments tend to have particular ways that connectivity is delivered to devices. Generally, Wi-Fi tends to be used a lot, as the access points are themselves indoors, at the end of a broadband connection or office local area network.

Basically, wireless signals at frequencies above 2-3GHz don't get inside buildings very well from outside, and the higher the performance, the worse that propagation tends to be. Put simply, 5G-connected headsets and other devices will generally not work reliably indoors, especially if they have to deliver consistent high data speeds and low latencies which need higher frequencies. We can also expect the massive push for Net Zero in coming years to mean ever-better insulated buildings, which will make matters even worse for wireless signals as a side-effect.
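To make the propagation point concrete, here's a rough back-of-envelope sketch. The free-space path-loss formula is standard physics; the wall-penetration figures are purely illustrative assumptions (broadly in line with published building-entry-loss studies), not measurements of any particular building:

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 3e8
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Illustrative external-wall penetration losses in dB (assumed figures).
WALL_LOSS_DB = {2.4e9: 5, 3.5e9: 12, 28e9: 30}

for f in (2.4e9, 3.5e9, 28e9):
    total = fspl_db(100, f) + WALL_LOSS_DB[f]
    print(f"{f/1e9:g} GHz at 100m: {fspl_db(100, f):.1f} dB free-space + "
          f"{WALL_LOSS_DB[f]} dB wall = {total:.1f} dB")
```

The point is the direction of travel, not the exact numbers: the combined free-space and wall loss climbs steeply with frequency, which is why outdoor-to-indoor 5G at 3.5GHz or mmWave struggles.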

For sure, certain locations will have well-engineered indoor 5G systems that will work effectively - but software developers generally won't be able to assume this. Airports, big sports venues, shopping malls and some industrial sites like factories will be at the top of the list for these types of solutions. For those locations, 5G Metaverse connections may well be widely used and effective. However, those are the exceptions - and it will take many years to deploy new in-building systems, or upgrade existing infrastructure anyway.

In particular, most homes and offices will have patchy or sometimes no 5G coverage, especially in internal rooms, elevators or basements. (There might be a 5G signal or logo displayed on the device, but that doesn't mean that the famously-promised gigabit speeds or millisecond latencies will actually be deliverable).

In those locations, expect Metaverse devices to use Wi-Fi as a baseline - and increasingly the Wi-Fi 6/6E/7 generations with better capabilities than previous versions.

What the Meta video tells us

I'm aware that the Metaverse is more than just Facebook / Meta, but the 1h17 video from Zuck (link) is not a bad overview of what to expect in terms of experiences, devices and business models. Obviously there will be different views from Epic Games, Microsoft's various initiatives around Hololens and Mesh, plus whatever Apple is quietly cooking up, but this is a decent place to start.

The first thing to note is the various Horizon visions that Meta is pitching - Home, Worlds and Workrooms. These are (broadly) for close social interaction, gaming/larger-scale social and business collaboration - especially hybrid work.

Mostly, the demos and visions are expected to take place from the participant's home, office, school or similar venue. There are a couple of outdoor examples of enhanced sports, or outdoor art/advertising as well. Virtual desktops, avatars that mimic eye and facial movements and so on.

In terms of devices, there's a large emphasis on headsets (obviously the Oculus Quest, and also the new high-end Cambria device promised for 2022) as well as discussions of AR glasses, from the RayBan Stories recently launched, to a forthcoming project called Nazare.

The technology discussion is all around the functional elements, not the connectivity. Optics, sensors, batteries, displays, speakers, cameras and so on. There are developer tools for hand and voice interaction, and presence / placement of objects in the virtual realm. There's lots of discussion around creators, advertising and the ability to own (and interoperate) virtual avatars, costumes and furniture. There are also nods to privacy, as would be expected.

There's no mention of connectivity, apart from noting that Cambria will have radios of some sort. The section on the "Dozen major technological breakthroughs for next-gen metaverse" doesn't mention wireless, 5G or anything else.


It's worth noting that Oculus devices and the RayBan glasses today use Wi-Fi. We can also expect that gesture-control in future will likely lean on UWB sensors. Outside of Facebook / Meta, essentially all of today's dedicated AR/VR headsets connect with Wi-Fi or a cable, to a local network or broadband line. (That might be 5G fixed-wireless to the building for a few % of homes, but that will still use Wi-Fi on the inside).

Where cellular 4G/5G takes a role in XR is where the device is tethered to a phone or modem, or is experienced actually on the smartphone itself - think Pokemon Go, or the IKEA app that lets you design a room with virtual furniture.

We can expect the same with the Metaverse. If you're using a smartphone to access it, then obviously 5G will play a role, just as it will for all mobile apps in 3-4 years' time when penetration has increased.

Will Cambria and future iterations feature 5G built-in? Maybe, but I doubt it - not least because of the extra cost and engineering involved, as well as the multiple versions needed to support different regional frequency options. Would a future Apple AR/Metaverse headset feature cellular, like some versions of the Watch? Again, that's possible but I wouldn't bet on it.

In the second half of the decade, later versions of 5G (Release 17 & 18) will have useful new features like centimetre-accuracy positioning that could be useful for Metaverse purposes - but again, that's reliant on having decent coverage in the first place. There will likely be some useful aspects outdoors though - for instance accurate measurement of vehicles on roadways.

Facebook Connectivity becomes Meta too

One other thing I noticed is a reference on LinkedIn to Facebook's often-overlooked Connectivity division, which runs all sorts of interesting programmes and initiatives like TIP (which covers OpenRAN and other projects), Terragraph 60GHz mesh, Express Wi-Fi and the low-end Basics "FB-lite" platform for developing markets with limited network infrastructure.



Apparently it's now being renamed Meta Connectivity - partly, I guess, because of the reorganisation and rebranding of the group overall, but also as a long-term part of the Metaverse landscape.

To me, that also indicates that the Metaverse is going to use multiple wireless (and wired) technologies - which aligns with Zuckerberg's view that it's more of a reinvention of the Internet/Web overall, rather than a particular app or experience.

Bandwidth-heavy? Or perhaps not....

One other thing needs to be considered around the Metaverse and connectivity. The immediate assumption is that such a "rich" environment, either full-virtual or overlaid onto a view of the real world, will need lots of data - and therefore the types of bandwidths promised by 5G. If we all use Metaverse devices to project "virtual TV screens" onto virtual surfaces, it will use lots of capacity, supposedly.

But it strikes me that avatars (even photo-realistic ones) & 3D reconstructions of real-world scenes will likely need less bandwidth than actual video. Realtime rendering will likely be done on-device in most cases, just sending the motion/sensor data or metadata about objects over the network.

Clearly this will depend on the exact context and application, but if your PC or phone or headset has a model of your friend's virtual house, or your virtual conference room - and all the objects and people/avatars in it - then it doesn't actually need realtime 4K video feeds to show different views.
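As a rough illustration of why synchronising avatars can be far lighter than streaming video, here's a back-of-envelope comparison. All the figures (bone count, update rate, video bitrate) are illustrative assumptions, not measurements of any real system:

```python
# Back-of-envelope: streaming avatar pose data vs. compressed 4K video.
# All figures are illustrative assumptions.

BONES = 60           # joints in a humanoid avatar skeleton (assumed)
FLOATS_PER_BONE = 7  # position (x,y,z) + rotation quaternion (w,x,y,z)
BYTES_PER_FLOAT = 4
UPDATE_HZ = 90       # typical VR refresh rate

pose_bps = BONES * FLOATS_PER_BONE * BYTES_PER_FLOAT * UPDATE_HZ * 8
video_bps = 25_000_000  # ballpark bitrate for compressed 4K video

print(f"Pose stream: {pose_bps/1e6:.2f} Mbit/s")
print(f"4K video:    {video_bps/1e6:.0f} Mbit/s")
print(f"Video needs roughly {video_bps/pose_bps:.0f}x more bandwidth")
```

Even uncompressed, the pose stream is an order of magnitude or two lighter than video - the heavy lifting (rendering) happens on-device against a locally-stored scene model.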

In addition, the integration of eye-tracking allows pre-emptive downloads or actions, so "pseudo-latency" can seem very low, irrespective of the network's actual performance. If the headset sees you looking at a football, it can start working on the trajectory of a kick tens or even hundreds of milliseconds before you move your virtual leg.
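The pre-emptive idea can be sketched as simple dead-reckoning: render the object where it *will* be when the network round-trip completes. A real headset would use far more sophisticated filtering and prediction; this is purely illustrative:

```python
# Minimal dead-reckoning sketch: extrapolate a tracked object's position
# ahead of the network round-trip, so rendering can start early.
# Purely illustrative; real systems use Kalman-style filters.

def predict(position, velocity, lookahead_s):
    """Linear extrapolation of a 3D position by lookahead_s seconds."""
    return tuple(p + v * lookahead_s for p, v in zip(position, velocity))

ball_pos = (0.0, 1.0, 5.0)    # metres
ball_vel = (0.0, -2.0, 10.0)  # metres/second
rtt = 0.080                   # assume an 80ms network round trip

# Render the ball where it will be when the server's response arrives.
print(predict(ball_pos, ball_vel, rtt))  # approximately (0.0, 0.84, 5.8)
```

When the authoritative update eventually arrives, the client reconciles any difference - the same trick multiplayer games have used for decades to hide network latency.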

That said, the sensor data uplink & motion control downlink will need low latency, but I suspect that will be more about driving localised breakout and peering rather than genuine localised compute. If you're in a hybrid conference with distant colleagues, the main role for edge-computing is to offload your data to the nearest Internet exchange with as few hops as possible.

(Some of the outdoor scenes in the Meta video from Connect seem rather unrealistic. They show groups of people playing table tennis and a virtual basketball match with "friends on the other side of the world", which would involve some interesting issues with the speed of light and how that would impact latency.)
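The speed-of-light constraint is easy to quantify. Light in optical fibre travels at roughly c/1.47, so a lower bound on round-trip time - before counting any equipment, routing or processing delay at all - looks like this (distances are rough great-circle figures):

```python
# Why "real-time" table tennis with someone on the other side of the
# world is physically hard: a hard floor on RTT from fibre propagation.

C = 299_792_458     # speed of light in vacuum, m/s
FIBRE_INDEX = 1.47  # typical refractive index of optical fibre

def min_rtt_ms(distance_km: float) -> float:
    """Lower bound on round-trip time over fibre, ignoring all equipment."""
    one_way_s = (distance_km * 1000) * FIBRE_INDEX / C
    return 2 * one_way_s * 1000

print(f"London-Sydney (~17,000 km): {min_rtt_ms(17000):.0f} ms minimum RTT")
print(f"London-New York (~5,600 km): {min_rtt_ms(5600):.0f} ms minimum RTT")
```

No amount of 5G or edge-computing investment gets an antipodean round trip below roughly 170ms - far too slow for a shared physical-reflex game, whatever the marketing video shows.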

Conclusion

In a nutshell - no, the Metaverse isn't the killer app for 5G.

The timelines align between the two, so where 'Verse apps are used on smartphones they'll increasingly use 5G if it's available and the user is out-and-about. But that's correlation, not causation. Those smartphones will typically be connected via Wi-Fi when at home, school or work. I suspect the main impact on smartphones will be on the need for better 3D graphics capability and enhanced sensors and cameras, rather than the network side.

Will we see some headsets or glasses with built-in cellular radios, some with 5G support? Sure, there will certainly be a few emerging in coming years, especially for enterprise / private network use. I'd expect field-workers, military, or industrial employees to exploit various forms of AR and VR in demanding situations well-suited to cellular, although many will tether a headset or glasses to a separate modem / module to reduce weight.

Many devices will also include various other wireless technologies too - Wi-Fi, Bluetooth, maybe Thread/Matter, UWB and so on.

But if anything, I suspect that the Metaverse may turn out to be the killer app for Wi-Fi 7, especially for home and office usage. That doesn't mean that 5G won't benefit as well - but I don't see it as a central enabler, given the probable heavy indoor bias of the main applications. (I don't think that cryptocurrency or edge-computing are key enablers either, but those are debates for another day.)


#Metaverse #Facebook #Meta #AugmentedReality #VirtualReality #5G #WiFi #MixedReality #Mobile #Wireless #Devices #Gaming #Collaboration #HybridWorking

Friday, October 01, 2021

5G hype and exaggeration - be clear and realistic about your claims!

 This was originally posted on my LinkedIn (here) & the main comment thread is on LI

I'm getting really fed up with a lot of the hype and exaggeration around #5G at the moment, especially PR and marketing puff that creates unclear or misleading claims. It's damaging to the credibility of the industry overall & the specific organisations involved.

In recent weeks I've seen examples of:

  • "Ultra-low #latency" claimed for a manufacturing network that uses non-standalone 5G (so, using a 4G core network & incapable of getting anywhere near 1 millisecond)
  • Augmented reality demos claimed as 5G when actually they're using Wi-Fi or a wired tether
  • Use of a 5G fixed-wireless access link to a building (distributed with #WiFi locally via a hotspot or router) leading to an application described as 5G-enabled
  • A healthcare application with an internal diagnostic wireless camera within the patient's body, connecting to an external gateway or handheld device. The press release was vague on which bit of the solution was 5G, but a social media reply asserted it was a "virtual assistant" (5G? really?) and refused to detail the system publicly, trying to get me to take the discussion offline
  • A CBRS "hotspot" described as 5G, despite no 5G #CBRS standalone standards or devices being available yet
  • 60GHz wireless (mostly using 802.11ad or ay) described as "5G" because it might be able to connect to a 5G core. There is no 60GHz 5G NR yet.
  • Spurious claims that 5G will generate $Xbn in GDP, or save Y tons of CO2. What's the baseline for 4G/other wireless & what's the uplift attributable to 5G? What % of CO2 savings are from the wireless rather than 100 other system elements, or are you double-counting?
  • Regular comments that compare the performance of old versions of WiFi with future versions of 5G - rather than, say, comparing WiFi 6E vs. 5G Rel 16, or WiFi 7 vs. Rel 17.
  • Cliched use of "billions of IoT devices" when we all know only a tiny % will ever connect with 5G
  • Small 5G pilots being deliberately misused to imply large-scale or “production” use by a company.


The commentary is often along the lines of "Oh, well it might be proper 5G in the next version. This is just the demo".

In which case, be honest and transparent and SAY SO CLEARLY.

Do not just release a press statement claiming yet another wondrous 5G use case. Be specific:

  • Is it *actually* 5G? Or is this just using 5G as a buzzword?
  • Which specific wireless connection in the solution uses 5G? Between which points / devices?
  • What version/features of 5G are used? What frequency band & coverage are needed?
  • What technology was used in the past for similar solutions? What problems does 5G fix?
  • Does the application work equally well over other wireless technologies such as 4G or Wi-Fi6?


It's not just marketing - this actually matters, as things like government funding or spectrum policy may be justified on the basis of spurious claims.

Let's have some more honesty here about what 5G can do today & what might be possible tomorrow. And let's all call out the chancers in public.


Tuesday, September 15, 2020

Low-latency and 5G URLLC - A naked emperor?

Originally published as a LinkedIn Newsletter Article - see here

I think the low-latency 5G Emperor is almost naked. Not completely starkers, but certainly wearing some unflattering Speedos.

Much of the promise around the 5G – and especially the “ultra-reliable low-latency” URLLC versions of the technology – centres on minimising network round-trip times, for demanding applications and new classes of device.

Edge-computing architectures like MEC also often focus on latency as a key reason for adopting regional computing facilities - or even servers at the cell-tower. Similar justifications are being made for LEO satellite constellations.

The famous goal of 1 millisecond is often mentioned, usually in the context of applications like autonomous vehicles with snappy responses, AR/VR headsets without nausea, cloud-gaming, the "tactile Internet" and remote drone/robot control.

(In theory this is for end-to-end "user plane latency" between the user and server, so includes both the "over the air" radio and the backhaul / core network parts of the system. This is also different to a "roundtrip", which is there-and-back time).

Usually, that 1ms objective is accompanied by some irrelevant and inaccurate mention of 20 or 50 billion connected devices by [date X], and perhaps some spurious calculation of trillions of dollars of (claimed) IoT-enabled value. Gaming usually gets a mention too.

I think there are two main problems here:

  • Supply: It’s not clear that most 5G networks and edge-compute will be able to deliver 1ms – or even 10ms – especially over wide areas, or for high-throughput data.
  • Demand: It’s also not clear there’s huge value & demand for 1ms latency, even where it can be delivered. In particular, it’s not obvious that URLLC applications and services can “move the needle” for public MNOs’ revenues.

Supply

Delivering URLLC requires more than just "network slicing" and a programmable core network with a "slicing function", plus a nearby edge compute node for application-hosting and data processing - whether that's in the 5G network (MEC or AWS Wavelength) or some sort of local cloud node like AWS Outpost. That low-latency slice needs to span the core, the transport network and, critically, the radio.

Most people I speak to in the industry look through the lens of the core network slicing or the edge - and perhaps the IT systems supporting the 5G infrastructure. There is also sometimes more focus on the UR part than the LL part - which actually have different enablers.

Unfortunately, it looks to me as though the core/edge is writing low-latency checks that the radio can’t necessarily cash.

Without going into the abstruse nature of radio channels and frame-structure, it’s enough to note that ultra-low latency means the radio can’t wait to bundle a lot of incoming data into a packet, and then get involved in to-and-fro negotiations with the scheduling system over when to send it.

Instead, it needs to have specific (and ideally short) timed slots in which to transmit/receive low-latency data. This means that it either needs to have lots of capacity reserved as overhead, or the scheduler has to de-prioritise “ordinary” traffic to give “pre-emption” rights to the URLLC loads. Look for terms like Transmission Time Interval (TTI) and grant-free UL transmission to drill into this in more detail.
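For a sense of the numbers behind TTI: 5G NR's slot duration halves with each step of "numerology" (subcarrier spacing), and mini-slots of 2, 4 or 7 OFDM symbols shrink the transmission opportunity further. A quick illustrative sketch of the standard relationship:

```python
# 5G NR slot duration shrinks as subcarrier spacing (numerology mu) grows:
# slot = 1 ms / 2**mu, always carrying 14 OFDM symbols. Shorter slots and
# "mini-slots" are among the levers for low air-interface latency.

def slot_ms(mu: int) -> float:
    """NR slot duration in milliseconds for numerology mu."""
    return 1.0 / (2 ** mu)

SCS_KHZ = {0: 15, 1: 30, 2: 60, 3: 120}

for mu, scs in SCS_KHZ.items():
    # A 2-symbol mini-slot occupies 2/14 of the full slot.
    mini_us = slot_ms(mu) * 2 / 14 * 1000
    print(f"mu={mu} ({scs} kHz SCS): slot {slot_ms(mu):.3f} ms, "
          f"2-symbol mini-slot ~{mini_us:.0f} us")
```

Note that these are air-interface scheduling units only - retransmissions, queuing and the scheduling negotiation described above can easily dominate the end-to-end figure.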

It’s far from clear that on busy networks, with lots of smartphone or “ordinary” 5G traffic, there can always be a comfortable coexistence of MBB data and more-demanding URLLC. If one user gets their 1ms latency, is it worth disrupting 10 – or 100 – users using their normal applications? That will depend on pricing, as well as other factors.

This gets even harder where the spectrum used is a TDD (time-division duplexing) band, where there’s also another timeslot allocation used for separating up- and down-stream data. It’s a bit easier in FDD (frequency-division) bands, where up- and down-link traffic each gets a dedicated chunk of spectrum, rather than sharing it.

There’s another radio problem here as well – spectrum licence terms, especially where bands are shared in some fashion with other technologies and users. For instance, the main “pioneer” band for 5G in much of the world is 3.4-3.8GHz (which is TDD). But current rules – in Europe, and perhaps elsewhere – essentially prohibit the types of frame-structure that would enable URLLC services in that band. We might get to 20ms, or maybe even 10-15ms if everything else stacks up. But 1ms is off the table, unless the regulations change. And of course, by that time the band will be full of smartphone users generating lots of ordinary traffic. There may be some Net Neutrality issues around slicing, too.

There's a lot of good discussion - some very technical - on this recent post and comment thread of mine: https://www.linkedin.com/posts/deanbubley_5g-urllc-activity-6711235588730703872-1BVn

Various mmWave bands, however, have enough capacity to be able to cope with URLLC more readily. But as we already know, mmWave cells also have very short range – perhaps just 200 metres or so. We can forget about nationwide – or even full citywide – coverage. And outdoor-to-indoor coverage won’t work either. And if an indoor network is deployed by a 3rd party such as neutral host or roaming partner, it's far from clear that URLLC can work across the boundary.

Sub-1GHz bands, such as 700MHz in Europe, or perhaps refarmed 3G/4G FDD bands such as 1.8GHz, might support URLLC and have decent range/indoor reach. But they’ll have limited capacity, so again coexistence with MBB could be a problem, as MNOs will also want their normal mobile service to work (at scale) indoors and in rural areas too.

What this means is that we will probably get (for the foreseeable future):

  • Moderately Low Latency on wide-area public 5G Networks (perhaps 10-20ms), although where network coverage forces a drop back to 4G, then 30-50ms.
  • Ultra* Low Latency on localised private/enterprise 5G Networks and certain public hotspots (perhaps 5-10ms in 2021-22, then eventually 1-3ms maybe around 2023-24, with Release 17, which also supports deterministic "Time Sensitive Networking" in devices)
  • A promised 2ms on Wi-Fi6E, when it gets access to big chunks of 6GHz spectrum

This really isn't ideal for all the sci-fi low-latency scenarios I hear around drones, AR games, or the cliched surgeon performing a remote operation while lying on a beach. (There's that Speedo reference, again).

* see the demand section below on whether 1-10ms is really "ultra-low" or just "very low" latency

Demand

Almost 3 years ago, I wrote an earlier article on latency (link), some of which I'll repeat here. The bottom line is that it's not clear that there's a huge range of applications and IoT devices that URLLC will help, and where they do exist they're usually very localised and more likely to use private networks rather than public.

One paragraph I wrote stands out:

I have not seen any analysis that tries to divide the billions of devices, or trillions of dollars, into different cohorts of time-sensitivity. Given the assumptions underpinning a lot of 5G business cases, I’d suggest that this type of work is crucial. Some of these use-cases are slow enough that sending data by 2G is fine (or by mail, in some cases!). Others are so fast they’ll need fibre – or compute capability located locally on-device, or even on-chip, rather than in the cloud, even if it’s an “edge” node.

I still haven't seen any examples of that analysis. So I've tried to do a first pass myself, albeit using subjective judgement rather than hard data*. I've put together what I believe is the first attempted "heatmap" for latency value. It includes both general cloud-compute and IoT, both of which are targeted by 5G and various forms of edge compute. (*get in touch if you'd like to commission me to do a formal project on this)

A lot of the IoT examples I hear about are either long time-series collections of sensor data (for asset performance-management and predictive maintenance), or have fairly loose timing constraints. A farm’s moisture sensors and irrigation pumps don’t need millisecond response times. Conversely, a chemical plant may need to measure and alter pressures or flows in microseconds.

I've looked at time-ranges for latency from microseconds to days, spanning 12 orders of magnitude (see later section for more examples). As I discuss below, not everything hinges on the most-mentioned 1-100 millisecond range, or the 3-30ms subset of it that 5G addresses.

I've then compared those latency "buckets" with distances from 1m to 1000km - 7 orders of magnitude. I could have gone out to geostationary satellites, and down to chip scales, but I'll leave that exercise to the reader.
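The physics floor of that grid can be sketched directly: for any latency budget and distance, one-way propagation in fibre alone may already consume the budget, regardless of what radio or compute sits at each end. Illustrative code, with rough "ok"/"far" labels:

```python
# For each (distance, latency-budget) cell, check whether one-way
# light-in-fibre time alone already exceeds the budget. Illustrative only.

C_FIBRE_KM_PER_S = 299_792.458 / 1.47  # light speed in fibre, km/s

budgets_ms = [1, 10, 100, 1000]
distances_km = [1, 10, 100, 1000]

print("          " + "  ".join(f"{b:>4}ms" for b in budgets_ms))
for d in distances_km:
    propagation_ms = d / C_FIBRE_KM_PER_S * 1000
    row = ["ok  " if propagation_ms < b else "far " for b in budgets_ms]
    print(f"{d:>6} km  " + "  ".join(row))
```

The takeaway: a 1ms budget caps the server at a couple of hundred kilometres even in theory, while a 100ms budget reaches almost anywhere on Earth - which is exactly why the "battleground" cells in the middle of the heatmap matter so much.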


The question for me is - are the three or four "battleground" blocks really that valuable? Is the 2-dimensional Goldilocks zone of not-too-distant / not-too-close and not-too-short / not-too long, really that much of a big deal?

And that's without considering the third dimension of throughput rate. It's one thing having a low-latency "stop the robot now!" message, but quite another doing hyper-realistic AR video for a remote-controlled drone or a long session of "tactile Internet" haptics for a game, played indoors at the edge of a cell.

If you take all those $trillions that people seem to believe are 5G-addressable, what % lies in those areas of the chart? And what are the sensitivities to coverage and pricing, and what substitute risks apply - especially private networks rather than MNO-delivered "slices" that don't even exist yet?

Examples

Here are some more examples of timing needs for a selection of applications and devices. Yes, we can argue some of them, but that's not the point - it's that this supposed magic range of 1-100 milliseconds is not obviously the source of most "industry transformation" or consumer 5G value:

  • Sensors on elevator doors may send sporadic data, to predict slowly-worsening mechanical problems – so an engineer might be sent a month before the normal maintenance visit. Similarly, sensors monitoring a building’s structural condition, vegetation cover in the Amazon, or oceanic acidity aren’t going to see much change month-by-month.
  • A car might download new engine-management software once a week, and upload traffic observations and engine-performance data once a day (maybe waiting to do it over WiFi, in the owner’s garage, as it's not time-critical).
  • A large oil storage tank, or a water well, might have a depth-gauge giving readings once an hour.
  • A temperature sensor and thermostat in an elderly person’s home, to manage health and welfare, might track readings and respond with control messages every 10 minutes. Room temperatures change only slowly.
  • A shared bicycle might report its position every minute – and unlock in under 10 seconds when the user buys access with their smartphone app
  • A payment or security-access tag should check identity and open a door, or confirm a transaction, in a second or two.
  • Voice communication seems laggy with anything longer than about 200 milliseconds of latency.
  • A networked video-surveillance system may need to send a facial image, and get a response in 100ms, before the person of interest moves out of camera-shot.
  • An online video-game ISP connection will be considered “low ping” at maybe 50ms latency.
  • A doctor’s endoscope or microsurgery tool might need to respond to controls (and send haptic feedback) 100 times a second – ie every 10ms
  • Teleprotection systems for high-voltage utility grids can demand 6-10ms latency times
  • A rapidly-moving drone may need to react in 2-3 milliseconds to a control signal, or a locally-recognised risk.
  • A sensitive industrial process-control system may need to be able to respond in 10s or 100s of microseconds to avoid damage to finely-calibrated machinery
  • Image sensors and various network sync mechanisms may require response times measured in nanoseconds
  • Photon sensors for various scientific uses may operate at picosecond durations
  • Ultra-fast laser pulses for machining glass or polymers can be measured in femtoseconds
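To make the orders-of-magnitude point explicit, here's a quick bucketing of a selection of the examples above, using the article's rough figures. Note how few land in the 1-100ms zone that 5G marketing focuses on:

```python
import math

# A selection of the latency examples, grouped by order of magnitude
# (seconds). Figures are the article's rough estimates, not measurements.

EXAMPLES_S = {
    "structural sensor": 30 * 24 * 3600,   # ~monthly
    "car software update": 7 * 24 * 3600,  # ~weekly
    "tank depth gauge": 3600,
    "home thermostat": 600,
    "bike unlock": 10,
    "door access": 1,
    "voice call": 0.2,
    "video surveillance": 0.1,
    "game ping": 0.05,
    "surgical tool": 0.01,
    "drone control": 0.003,
    "process control": 0.0001,
    "image sensor sync": 1e-9,
}

for name, secs in sorted(EXAMPLES_S.items(), key=lambda kv: -kv[1]):
    magnitude = math.floor(math.log10(secs))
    flag = "  <- 1-100ms zone" if 0.001 <= secs <= 0.1 else ""
    print(f"10^{magnitude:>3} s  {name}{flag}")
```

Run this and the flagged rows are a thin slice of the full range - the rest is either far too slow to need 5G, or too fast for any wide-area network to help.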

Conclusion

Latency is important, for application developers, enterprises and many classes of IoT device and solution. But we have been spectacularly vague at defining what "low-latency" actually means, and where it's needed.

A lot of what gets discussed in 5G and edge-computing conferences, webinars and marketing documents is either hyped, or is likely to remain undeliverable. A lot of the use-cases can be adequately serviced with 4G mobile, Wi-Fi - or a person on a bicycle delivering a USB memory stick.

What is likely is that average latencies will fall with 5G. An app developer that currently expects a 30-70ms latency on 4G (or probably lower on Wi-Fi) will gradually adapt to 20-40ms on mostly-5G networks and eventually 10-30ms. If it's a smartphone app, they likely won't use URLLC anyway.

Specialised IoT developers in industrial settings will work with specialist providers (maybe MNOs, maybe fully-private networks and automation/integration firms) to hit more challenging targets, where ROI or safety constraints justify the cost. They may get to 1-3ms at some point in the medium term, but it's far from clear they will be contributing massively to MNOs or edge-providers' bottom lines.

As for wide-area URLLC? Haptic gaming from the sofa on 5G, at the edge of the cell? Remote-controlled drones with UHD cameras? Two cars approaching each other on a hill-crest on a country road? That's going to be a challenge for both demand and supply.

Monday, October 29, 2018

Quick thoughts on 5G

I've been doing a lot of work - and events - on 5G recently. 
I've noticed a few recent shifts in perception and focus amongst vendors, regulators and operators. Some quick take-outs (a few more than appear on my similar LinkedIn post, as I'm not limited to 1300 characters!)
  • 5G smartphones launch in 2019, but will be low-volume until 2020/21. Expect the first 5G iPhone towards the end of 2020
  • Fixed-wireless use cases for 5G are high on the agenda in some markets (eg US, S Korea, Turkey, Germany), but seemingly almost absent in others.
  • Commercial, large-scale, automated network slicing only becomes real from around 2023 onwards. A few "hand-carved" slices will be sooner, for example for internal use by MNOs' own business units, or perhaps public safety
  • URLLC (ultra-reliable low latency) use-cases seem to have shifted from sci-fi fantasies around automated vehicles and surgical robots, to industrial IoT and factory automation... 
  • ... but industrial use will often be controlled by industry itself, via one of several forms of private network, either using shared spectrum, private cores or private slices / enterprise MVNOs. MNOs' role may be minor
  • Some claim that NB-IoT is the 5G version for "massive IoT", despite it being developed as a 4G variant. This is revisionist nonsense; if it was true then DT, VF and others would have been putting out PR 2+ years ago, claiming to be first to launch 5G
  • 3.5GHz should be OK-ish outdoors but will struggle with outdoor-to-indoor coverage. mmWave will be worse. Beware of demos showing good indoor performance - ask about uplink from inside-out, or whether signals penetrate double-glazing, or at oblique angles to walls/windows. In any case, #WiFi will continue to dominate in the home.
  • There will be some small-cells and neutral-host deployments for 3.5GHz (and similar bands) in enterprises and other large buildings, but this will take a long time to become widespread. 
  • Existing in-building DAS systems will need some serious upgrades to support higher 5G frequency bands - most of today's systems top out at 2.6GHz and can't handle MIMO very well.
  • Despite it not being an "official" 5G candidate band, 28GHz seems to be the most popular option, at least for test networks. This is partly because of chipset support, notably Qualcomm's X50. The European-proposed 26GHz hasn't seen much action yet
  • Two of the largest 5G "verticals" associations, for Automotive (5GAA) and Industrial (5GACIA), seem to be heavily driven by German companies - and the German regulator looks like it's going to award 100MHz of spectrum to verticals directly (not 100% certain, but getting clearer). In other countries apart from the US (CBRS) and China (Huawei's enterprise LTE), there doesn't seem to be as much action from large firms knocking on regulators' and governments' doors.
  • The 5G New Core is getting a lot of discussion and attention... but given that some of the existing NFV deployments have been slow, and the cost-savings somewhat illusory, I don't expect much near-term action on this.
  • Some of the visions for 5G seem to lean heavily on automation and AI in the back-office for optimising radio, core, user-plane etc. Yet those are also still at an early stage - and few telcos have many skilled engineers - so could act as a brake. There are also some emerging questions on the security of network AI, and whether the algorithms might be single points of failure, especially in networks used for critical national infrastructure. 
  • Connected-car companies are interested in 5G, but not as enthusiastic as some might imagine. One told me "it's a nice-to-have" - especially as vehicles will need to be able to work offline, and have prodigious on-board compute capabilities.
  • I'm more positive about some of the discussion around Cloud RAN for 5G. In many ways, it's going to be necessary, given the complexity of NR. That said, there are some serious practical challenges around the radio, such as the size/weight/cost of the massive-MIMO antennas.
  • There's lots of talk about network-slicing for 5G, but nobody has really thought about whether today's MNO wholesale departments are up to the task of selling "slice as a service". Speaking to some of today's MVNOs, it seems like they will have to do a lot of homework before they can become effective slicemongers.
That's a quick list of things off the top of my head. Plenty more observations and comments to come, or on my Twitter feed from various events I've attended.
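On the coverage bullet above: the frequency penalty can be sanity-checked with the standard free-space path-loss formula. A quick illustrative sketch (the 200m cell radius is an assumed figure for comparison only; real outdoor-to-indoor paths add wall and glazing penetration losses on top of this):

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (standard Friis-derived formula)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Compare bands mentioned above, at an assumed 200m from the cell site
for freq in (2600, 3500, 28000):  # MHz
    print(f"{freq / 1000:>4.1f} GHz: {fspl_db(0.2, freq):.1f} dB")
```

Even before any walls or windows get involved, 28GHz loses roughly 20dB more than 2.6GHz over the same distance - which is why mmWave indoor demos deserve the sceptical questions listed above.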


If you'd like me to give an unvarnished presentation at an event, on "5G opportunities, realities & myths", please get in touch via:  information AT disruptive-analysis DOT com
And if you're interested in my last point, on 5G+MVNOs+Slicing+Wholesale, please look at my upcoming workshop doing a deep-dive on this (link)

Monday, December 04, 2017

5G & IoT? We need to talk about latency



Much of the discussion around the rationale for 5G – and especially the so-called “ultra-reliable” high QoS versions – centres on minimising network latency. Edge-computing architectures like MEC also focus on this. The worthy goal of 1 millisecond roundtrip time is often mentioned, usually in the context of applications like autonomous vehicles with snappy responses, AR/VR headsets without nausea, the “tactile Internet” and remote drone/robot control.
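As a reality check on that 1 millisecond figure: propagation delay alone consumes much of the budget, which is why edge computing enters the picture at all. A rough sketch, assuming signals travel at roughly 200,000 km/s in fibre (about two-thirds of c - a common rule of thumb) and ignoring all processing, queuing and radio-interface delays:

```python
C_FIBRE_KM_S = 200_000  # approx. speed of light in glass fibre (~2/3 of c)

def max_server_distance_km(rtt_budget_s: float) -> float:
    """Upper bound on server distance if the entire round-trip
    budget were spent on fibre propagation alone."""
    return C_FIBRE_KM_S * rtt_budget_s / 2

print(max_server_distance_km(0.001))   # 1ms budget -> 100km ceiling
print(max_server_distance_km(0.00001)) # 10 microseconds -> 1km
```

In other words, a hard 1ms round-trip caps the server at under 100km away even in a physically ideal network - and microsecond-class requirements push the compute onto the premises, the device, or the chip.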

Usually, that is accompanied by some mention of 20 or 50 billion connected devices by [date X], and perhaps trillions of dollars of IoT-enabled value.

In many ways, this is irrelevant at best, and duplicitous and misleading at worst.

IoT devices and applications will likely span 10 or more orders of magnitude for latency, not just the two covered by the 1-10ms and 10-100ms ranges. Often, the main value of IoT comes from changes over long periods, not realtime control or telemetry.

Think about timescales a bit more deeply:

  • Sensors on elevator doors may send sporadic data, to predict slowly-worsening mechanical problems – so an engineer might be sent a month before the normal maintenance visit.
  • A car might download new engine-management software once a week, and upload traffic observations and engine-performance data once a day (maybe waiting to do it over WiFi, in the owner’s garage, as it's not time-critical).
  • A large oil storage tank, or a water well, might have a depth-gauge giving readings once an hour.
  • A temperature sensor and thermostat in an elderly person’s home, to manage health and welfare, might track readings and respond with control messages every 10 minutes. Room temperatures change only slowly.
  • A shared bicycle might report its position every minute – and unlock in under 10 seconds when the user buys access with their smartphone app
  • A payment or security-access tag should check identity and open a door, or confirm a transaction, in a second or two.
  • A networked video-surveillance system may need to send a facial image, and get a response in a tenth of a second, before the subject moves out of camera-shot.
  • A doctor’s endoscope or microsurgery tool might need to respond to controls (and send haptic feedback) 100 times a second – ie every 10ms
  • A rapidly-moving drone may need to react in a millisecond to a control signal, or a locally-recognised risk.
  • A sensitive industrial process-control system may need to be able to respond in 10s or 100s of microseconds to avoid damage to finely-calibrated machinery
  • Image sensors and various network sync mechanisms may require response times measured in nanoseconds
I have not seen any analysis that tries to divide the billions of devices, or trillions of dollars, into these very-different cohorts of time-sensitivity. Given the assumptions underpinning a lot of 5G business cases, I’d suggest that this type of work is crucial. Some of these use-cases are slow enough that sending data by 2G is fine (or by mail, in some cases!). Others are so fast they’ll need fibre – or compute capability located locally on-device, or even on-chip, rather than in the cloud, even if it’s an “edge” node.
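To make that spread concrete, here's a toy bucketing of the examples above by order of magnitude (all the tolerance values are rough illustrations taken from the list, not measurements or forecasts):

```python
import math

# Illustrative latency tolerances in seconds, from the examples above
use_cases = {
    "elevator wear sensor":    30 * 24 * 3600,  # ~monthly
    "car software download":    7 * 24 * 3600,  # ~weekly
    "tank depth gauge":                  3600,  # hourly
    "home thermostat":                    600,  # 10 minutes
    "shared-bike unlock":                  10,
    "payment/door tag":                     1,
    "video face-match":                   0.1,
    "surgical haptics":                  0.01,
    "drone control":                    0.001,
    "industrial process control":     0.00005,  # tens of microseconds
    "sensor/network sync":        0.000000005,  # nanoseconds
}

for name, t in use_cases.items():
    print(f"{name:<28} ~10^{math.floor(math.log10(t)):>3} s")

span = math.log10(max(use_cases.values()) / min(use_cases.values()))
print(f"Span: ~{span:.0f} orders of magnitude")
```

Even this crude tally spans around 15 orders of magnitude - and only two or three of those buckets are ones where 5G's URLLC capabilities make any difference at all.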

I suspect (this is a wild guess, I'll admit) that the proportion of IoT devices, for which there’s a real difference between 1ms and 10ms and 100ms, will be less than 10%, and possibly less than 1% of the total. 

(Separately, the network access performance might be swamped by extra latency added by security functions, or edge-computing nodes being bypassed by VPN tunnels)

The proportion of accrued value may be similarly low. A lot of the IoT examples I hear about are either long time-series collections of sensor data (for asset performance-management and predictive maintenance), or have fairly loose timing constraints. A farm’s moisture sensors and irrigation pumps don’t need millisecond response times. Conversely, a chemical plant may need to measure and alter pressures or flows in microseconds.

Are we focusing 5G too much on the occasional Goldilocks of not-too-fast and not-too-slow?