Speaking Engagements & Private Workshops - Get Dean Bubley to present or chair your event

Need an experienced, provocative & influential telecoms keynote speaker, moderator/chair or workshop facilitator?
To see recent presentations, and discuss Dean Bubley's appearance at a specific event, click here

Showing posts with label edge computing. Show all posts

Thursday, February 23, 2023

Local networks: when telecoms becomes "pericoms"

Published via my LinkedIn Newsletter - see here to subscribe / see comment thread

"Telecoms" or "telecommunications" is based on the Greek prefix "tele-".

It means "at a distance, or far-off". It is familiar from its use in other terms such as telegraph, television or teleport. And for telecoms, that makes sense - we generally make phone calls to people across medium or long distances, or send them messages. Even our broadband connections generally tend to link to distant datacentres. The WWW is, by definition, worldwide.

The word "communications" actually comes from a Latin root, meaning to impart or share. Which at the time, would obviously have been done mostly through talking to other people directly, but could also have involved writing or other distance-independent methods.

This means that distant #communications, #telecoms, has some interesting properties:

  • The 2+ distant ends are often (but not always) on different #networks. Interconnection is therefore often essential.
  • Connecting distant points tends to mean there's a good chunk of infrastructure in between them, owned by someone other than the users. They have to pay for it, somehow.
  • Because the communications path is distant, it usually makes sense for the control points (switches and so on) to be distant as well. And because there's typically payment involved, the billing and other business functions also need to be sited "somewhere", probably in a #datacentre, which is also distant.
  • There are a whole host of opportunities and risks with distant communications, that mean that governments take a keen interest. There are often licenses, regulations and internal public-sector uses - notably emergency services.
  • The infrastructure usually crosses the "public domain" - streets, airwaves, rooftops, dedicated tower sites and so on. That brings additional stakeholders and rule-makers into the system.
  • Involving third parties tends to suggest some sort of "service" model of delivery, or perhaps government subsidy / provision.
  • Competition authorities need to take into account huge investments and limited capacity/scope for multiple networks. That also tends to reduce the number of suppliers to the market.

That is telecommunications - distant communications.

But now consider the opposite - nearby communications.

Examples could include a private 5G network in a factory, a LAN in an office, a WiFi connection in the home, a USB cable, or a Bluetooth headset with a phone. There are plenty of other examples, especially for IoT.

These nearby examples have very different characteristics to telecoms:

  • Endpoints are likely to be on the same network, without interconnection
  • There's usually nobody else's infrastructure involved, except perhaps a building owner's ducts and cabinets.
  • Any control points will generally be close - or perhaps not needed at all, as the devices work peer-to-peer.
  • There's relatively little involvement of the "public domain", unless there are risks like radio interference beyond the network boundaries.
  • It's not practical for governments to intervene too much in local communications - especially when it occurs on private property, or inside a building or machine.
  • There might be a service provider, but equally the whole system could be owned outright by the user, or embedded into another larger system like a robot or vehicle.
  • Competition is less of an issue, as is supplier diversity. You can buy 10 USB cables from different suppliers if you want.
  • Low-power, shared or unlicensed spectrum is typical for local #wireless networks.

I've been trying to work out a good word for this. Although "#telecommunications" is itself an awkward Greek / Latin hybrid, I think the best prefix might be Greek again: "peri", which means "around", "close" or "surrounding" - think of perimeter, peripheral, or the perigee of an orbit.

So I'm coining the term pericommunications, to mean nearby or local connectivity. (If you want to stick to all-Latin, then proxicommunications would work quite well too).

Just because a company is involved in telecoms does not mean it necessarily can expect a role in pericoms as well. (Or indeed, vice versa). It certainly can participate in that market, but there may be fewer synergies than you might imagine.

Some telcos are established and successful pericos as well. Many home broadband providers have done an excellent job of providing whole-home #WiFi systems with mesh technology, for example. In-building mobile coverage systems in large venues are often led by one telco, with others onboarding as secondary operators.

But other nearby domains are trickier for telcos to address. You don't expect to get your earbuds as an accessory from your mobile operator - or indeed, pay extra for them. Attempts to add on wearables as an extra SIM on a smartphone account have had limited success.

And the idea of running on-premise enterprise private networks as a "slice" of the main 4G/5G macro RAN has clearly failed to gain traction, for a variety of reasons. The more successful operators are addressing private wireless in much the same way as other integrators and specialist SPs, although they can lean on their internal spectrum team, test engineers and other groups to help.

Some are now "going the extra mile" (sorry for the pun) for pericoms. Vodafone has just announced its prototype 5G mini base-station, the size of a Wi-Fi access point and based on a Raspberry Pi and a Lime Microsystems radio chip. It can support a small #5G standalone core and is even #OpenRAN compliant. Other operators have selected new vendors or partners for campus 4G/5G deployments. The four UK MNOs have defined a set of shared in-building design guidelines for neutral-host networks.

It can be hard for regulators and policymakers to grasp the differences, however. The same is true for consultants and lobbyists. An awful lot of the suggested upsides of 5G (or other forms of connectivity) have been driven by a tele-mindset rather than a peri-view.

I could make a very strong argument that countries should really have a separate pericoms regulator, or a dedicated unit within the telecoms regulator and ministry. The stakeholders, national interests and economics are completely different.

A similar set of differences can be seen in #edgecomputing: regional datacentres and telco MEC are still "tele". On-premise servers or on-device CPUs and GPUs are peri-computing, with very different requirements and economics. Trying to blur the boundary doesn't work well at present - most people don't even recognise it exists.

Overall, we need to stop assuming that #pericoms is merely a subset of #telecoms. It isn't - it's almost completely different, even if it uses some of the same underlying components and protocols.

(If this viewpoint is novel or interesting and you would like to explore it further and understand what it means for your organisation - or get a presentation or keynote about it at an event - please get in touch with me)

Tuesday, April 26, 2022

Telcos should focus on "connected data" not just "edge computing"

Note: A version of this article first appeared as a guest blog post written for Cloudera, linked to a webinar presentation on May 4, 2022. See the sign-up link in the comments. This version has minor changes to fit the tone & audience of this newsletter, and tie in with previous themes. This version is also published on my LinkedIn newsletter with a comments thread (here).

Telcos and other CSPs are rethinking their approach to enterprise services in the era of advanced wireless connectivity - including their 5G, fibre and Software-Defined Wide Area Network (SD-WAN) portfolios. 

Many consumer-centric operators are developing propositions for “verticals”, often combining on-site or campus mobile networks with edge computing, plus deeper solutions for specific industries or horizontal applications. Part of this involves helping enterprises deal with their data and overall cloud connectivity as well as local networks. (The original MNO vision of delivering enterprise networks as "5G network slices" partitioned from their national infrastructure has taken a back seat. There is more interest currently in the creation of dedicated on-premise private 5G networks, via telcos' enterprise or integrator units).


At the same time, telecom operators are also becoming more data- and cloud-centric themselves. They are using disaggregated systems such as Open RAN and cloud-native 5G cores, plus distributed compute and data, for their own requirements. This is aimed at running their networks more efficiently, and dealing with customers and operations more flexibly. There are both public and private cloud approaches to this, with hyperscalers like Amazon and disruptors such as Rakuten Symphony and Totogi promising revolutions in future.

As I've said for some time, “The first industry that 5G will transform is the telecom industry itself.”

This poses both opportunities and challenges. Telcos’ internal data and cloud needs may not mirror their corporate customers’ strategies and timing perfectly, especially given the diverse connectivity landscape.

If operators truly want to blend their own transformation journey with that of their customers, what is needed is a much broader view of the “networked cloud” and "distributed data", not just the “telco cloud” or "telco edge" that many like to discuss.

Networked data and cloud are not just “edge computing”

Telecom operators’ discussions around edge/cloud have gone in two separate directions in recent years:

  • External edge computing: The desire by MNOs to deploy in-network edge nodes for end-user applications such as V2X, IoT control, smart city functions, low-latency cloud gaming, or enterprise private networks. Often called “MEC” (mobile edge computing), this spans both in-house edge solutions and a variety of collaborations with hyperscalers such as Azure, Google Cloud Platform, and Amazon Web Services.
  • Internal: The use of cloud platforms for telcos’ own infrastructure and systems, especially for cloud-native cores, flexible billing, and operational support systems (BSS/OSS), plus new open and virtualised RAN technology for disaggregated 4G/5G deployments. Some functions need to be deployed at the edge of the network (such as 5G DUs and UPF cores), while others can be more centralised.

Of these two trends, the latter has seen more real-world utilisation. It is linked to solving clear and immediate problems for the CSPs themselves.

Many operators are working with public and private clouds for their operational needs—running networks, managing subscriber data and experience, and enabling more automation and control. While there are raging debates about “openness” vs. outsourcing to hyperscalers, the underlying story—cloudification of telcos’ networks and IT estates—is consistent and accelerating. The timing constraints of radio signal processing in Open RAN, and the desire to manage ultra-low latency 5G “slices” in future 3GPP releases are examples that need edge compute. There may also be roles for edge billing/charging, and various security functions.

In contrast, telcos' customer-facing cloud, edge and data offers have been much slower to emerge. The focus and hype around MEC has meant operators’ emphasis has been on deploying “mini data centres” deep in their networks—at cell towers or aggregation sites, or fixed operators’ existing central office locations. Discussion has centred on “low latency” applications as the key differentiator for CSP-enabled 5G edge, and on compute rather than data storage and analysis. Few telcos have given much consideration to "data at rest" rather than "data in motion" - but both are important for developers.

This has meant a disconnect between the original MEC concept and the real needs of enterprises and developers. In reality, enterprises need their data and compute to occur in multiple locations, and to be used across multiple time frames—from real time closed-loop actions, to analysis of long-term archived data. It may also span multiple clouds—as well as on-premise and on-device capabilities beyond the network itself.

What is needed is a more holistic sense of “networked cloud” to tie these diverse data storage and processing needs together, along with documentation of connectivity and the physical source and path of data transmission.
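As a thought experiment, the "documentation of connectivity and the physical source and path of data" could be sketched as a simple chain-of-custody record. This is purely illustrative - the class and field names below are hypothetical, not taken from any real telco or cloud API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of a chain-of-custody record for one item of
# enterprise data moving through a "networked cloud". All names here
# are illustrative assumptions, not a real schema.
@dataclass
class DataCustodyRecord:
    source_device: str                 # where the data originated
    collected_at: datetime             # when it was captured
    storage_tier: str                  # "on-device", "on-prem", "mec", "hyperscale"
    network_path: list[str] = field(default_factory=list)  # hops traversed

record = DataCustodyRecord(
    source_device="camera-042",
    collected_at=datetime.now(timezone.utc),
    storage_tier="mec",
    network_path=["private-5g-upf", "metro-aggregation", "mec-node-3"],
)
print(record.storage_tier, "via", " -> ".join(record.network_path))
```

The point is that the record captures storage location *and* transit path together - exactly the combination that a compute-only "edge" view tends to miss.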


Potentially there are some real sources of telco differentiation here - as opposed to some of the more fanciful MEC visions, which are more realistically MNOs just acting as channel partners for AWS Outposts and Azure's equivalent Private MEC.

An example of the “networked cloud”

Consider an example: video cameras for a smart city. There are numerous applications, ranging from public transit and congestion control, to security and law enforcement, identification of free parking spots, road toll enforcement, or analysing footfall trends for retailers and urban planners. In some places, cameras have been used to monitor social-distancing or mask-wearing during the pandemic. The applications vary widely in terms of immediacy, privacy issues, use of historical data, or the need for correlation between multiple cameras. 

CSPs have numerous potential roles here, both for underlying connectivity and the higher-value services and applications.

But there may be a large gap between when “compute” occurs, compared to when data is collected and how it is stored. Short-term image data storage and real-time analysis might be performed on the cameras themselves, an in-network MEC node, or at a large data centre, perhaps with external AI resources or combined with other data sets. Longer-term data for trend analysis or historic access to event footage could be archived either in a city-specific facility or in hyperscale sites.
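The placement trade-off in the camera example can be sketched as a simple decision rule. The thresholds below are assumptions chosen for illustration, not standards - real deployments would weigh bandwidth, cost and data-sovereignty rules as well:

```python
# Illustrative placement rule for the smart-city camera example: where a
# workload runs depends on how quickly results are needed and how long
# the data must be kept. All thresholds are assumed, not prescriptive.
def placement(latency_budget_ms: float, retention_days: int) -> str:
    if latency_budget_ms < 50:
        return "on-camera"           # real-time analysis at the sensor
    if latency_budget_ms < 200:
        return "in-network MEC"      # near-real-time, regional node
    if retention_days > 90:
        return "hyperscale archive"  # long-term trends, evidentiary footage
    return "city data centre"

print(placement(10, 1))      # e.g. free-parking-spot detection
print(placement(100, 7))     # e.g. congestion control
print(placement(5000, 365))  # e.g. footfall trend analysis
```

Even this toy version shows why a single "edge node" answer is too narrow: different applications on the same camera fleet land in different tiers.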

(I wrote a long article about Edge AI and analytics last year - see here)


For some applications, there will need to be strong proofs of security and data custody, especially if there are evidentiary requirements for law enforcement. That may extend to knowing (and controlling) the specific paths across which data transits, how it is stored, and the privacy and tamper-resistance compliance mechanisms employed.

Similar situations—with both opportunities and challenges—exist in verticals from vehicle-to-everything to healthcare to education to financial services and manufacturing. CSPs could become involved in the “networked cloud” and data-management across these areas—but they need to look beyond narrow views of edge-compute. Telcos are far from being the only contenders to run these types of services, but some operators are taking it seriously - Singtel offers video analytics for retail stores, for instance.

Location-specific data

As a result, the next couple of years may see something of a shift in telcos’ discussions and ambitions around enterprise data. There will be huge opportunities emerging around enterprise data’s chain-of-custody and audit trails—not only defining where processing takes place, but also where and how data is stored, when it is transmitted, and the paths it takes across the network(s) and cloud(s).

(A theme for another newsletter article or LI post is on enterprises' growing compliance headaches for data transit - especially for international networks. There may be cybersecurity risks or sanctions restrictions on transit through some countries or intermediary networks, for instance. Some corporations are even getting direct access into Internet exchanges and peering-points for greater control).

In some cases, CSPs will take a lead role here, especially where they own and control the endpoints and applications involved. Then they can better coordinate the compute and data-storage resources. In other cases, they will play supporting roles to others that have true end-to-end visibility. There will need to be bi-directional APIs—essentially, telcos become both importers and exporters of data and connectivity. This is especially true in the mobile and 5G domain, where there will inevitably be connectivity “borders” that data will need to transit. (A recent post on the need for telcos to take on both lead and support roles is here)

There may be particular advantages for location-specific data collected or managed by operators. For example, weather sensors co-located with mobile towers could provide useful situational awareness both for the telco’s own operational purposes as well as to enterprise or public-sector customers, such as smart city authorities or agricultural groups. 

Telcos also have a variety of end-device fleets that they directly own, or could offer as a managed service—for instance their own vehicles, or city-wide security cameras. These can leverage the operator’s own connectivity (typically 5G) as well as anchor some of the data origination and consumption.

Conclusion

Telecom operators should shift their enterprise focus from mobile edge computing (MEC) to a wider approach built around "networked data". Much of the enterprise edge will reside beyond the network and telco control, in devices or on-premise gateways and servers. Essentially no enterprise IT/IoT systems will be wholly run "in" the 5G or fixed telco network, as virtual functions in a 3GPP or ORAN stack.

They instead should look for involvement in end-point devices, where data is generated, where and when it is stored and processed—and also the paths through the network it takes. This would align their propositions with connectivity (between objects or applications) as well as property (the physical location of edge data centres or network assets).

There are multiple stages to get to this new proposition of “networked cloud”, and not all operators will be willing or able to fulfil the whole vision. They will likely need to partner with the cloud players, as well as think carefully about treatment of network and regulatory boundaries.

Nevertheless, the broadening of scope from “edge compute” to “networked cloud” seems inevitable. The role of telcos as pure-play "edge" specialists makes little sense and may even be a distraction from the real opportunities emerging at higher levels of abstraction.

The original version of this article is at https://blog.cloudera.com/telco-5g-returns-will-come-from-enterprise-data-solutions/

I'll be speaking on an upcoming webinar with @cloudera about "Enterprise data in the #5G era" on May 4, 2022 - https://register.gotowebinar.com/register/3531625172953644816

#cloud #edgecomputing #5G #telecoms #latency #IoT #smartcities #mobile #telcos

Sunday, November 07, 2021

No, the Metaverse is not the killer app for 5G

(This article was initially published on my LinkedIn Newsletter - click here to see the original, plus comment thread. And please subscribe!)

Let's stop the next cliche before it even starts.

Most knowledgeable people now roll their eyes in derision whenever they hear the words 5G and autonomous driving (or robotic surgery) mentioned in the same sentence. But the mobile industry's hypesters are always casting around for some new trope - and especially the mythical "killer app" that could help to justify the costs and complexity.

And as if on cue, the Metaverse - essentially a buzzword meaning a hybrid of AR/VR with the social web, collaboration and gaming - has captured the headlines.


The growing noise around Metaverse technologies - and especially Facebook's recent rebrand to Meta - is attracting a whole slew of bandwagon-jumpers. The cryptocurrency community has been the first to trumpet its assumed future role - perhaps unsurprisingly, since they tend to be even more fervent and boosterish than the mobile sector. But we're also seeing the online shopping, advertising and gaming worlds hail the 'Verse as the next big thing.

Next up - I can pretty much guarantee it - will be the 5G industry talking about millisecond latency and buying a "Metaverse network slice". We'll probably get the edge-computing crowd popping up shortly afterwards too. I've already seen a few posts hailing the Metaverse as the possible next big thing for MNOs (mobile network operators).

They're wrong.

The elephant in the room

If you've found this article without knowing my normal coverage themes, you might be surprised to read that the single biggest issue for connecting Metaverse devices and users will be real, physical walls.

If you go through Mark Zuckerberg's lengthy video intro to Meta and his view of future technologies, you'll notice that a high % of scenarios and use-cases are indoors. Gaming from your sofa. Virtual living rooms. Hybrid work environments blending WFH with in-person meetings, and so on.

This shouldn't be a huge surprise. The more immersive a technology is - and especially if it's VR rather than AR based - the more likely people will take part while seated, or at least not while walking around an outdoor environment with obstacles and dangers. Most gaming, and most business collaboration, takes place indoors too.

And indoor environments tend to have particular ways that connectivity is delivered to devices. Generally, Wi-Fi is used a lot, as the access points are themselves indoors, at the end of a broadband connection or office local area network.

Basically, wireless signals at frequencies above 2-3GHz don't get inside buildings very well from outside, and the higher the performance, the worse that propagation tends to be. Put simply, 5G-connected headsets and other devices will generally not work reliably indoors, especially if they have to deliver consistent high data speeds and low latencies which need higher frequencies. We can also expect the massive push for Net Zero in coming years to mean ever-better insulated buildings, which will make matters even worse for wireless signals as a side-effect.
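The frequency effect can be illustrated with a rough link-budget sketch. The free-space part follows the standard Friis path-loss formula; the wall-penetration figures below are assumed round numbers for illustration (real building-entry loss varies enormously with materials):

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Rough outdoor-to-indoor budget over 100 m, with an assumed
# wall-penetration loss that grows with frequency (illustrative values).
for freq_ghz, wall_loss_db in [(0.8, 10), (3.5, 20), (26, 35)]:
    total = fspl_db(100, freq_ghz * 1e9) + wall_loss_db
    print(f"{freq_ghz:>4} GHz: ~{total:.0f} dB total loss")
```

Doubling the frequency adds 6 dB of free-space loss on its own, and penetration loss worsens on top of that - which is why mid-band and mmWave 5G struggle indoors from outdoor sites.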

For sure, certain locations will have well-engineered indoor 5G systems that will work effectively - but software developers generally won't be able to assume this. Airports, big sports venues, shopping malls and some industrial sites like factories will be at the top of the list for these types of solutions. For those locations, 5G Metaverse connections may well be widely used and effective. However, those are the exceptions - and it will take many years to deploy new in-building systems, or upgrade existing infrastructure anyway.

In particular, most homes and offices will have patchy or sometimes no 5G coverage, especially in internal rooms, elevators or basements. (There might be a 5G signal or logo displayed on the device, but that doesn't mean that the famously-promised gigabit speeds or millisecond latencies will actually be deliverable).

In those locations, expect Metaverse devices to use Wi-Fi as a baseline - and increasingly the Wi-Fi 6/6E/7 generations with better capabilities than previous versions.

What the Meta video tells us

I'm aware that the Metaverse is more than just Facebook / Meta, but the 1h17 video from Zuck (link) is not a bad overview of what to expect in terms of experiences, devices and business models. Obviously there will be different views from Epic Games, Microsoft's various initiatives around Hololens and Mesh, plus whatever Apple is quietly cooking up, but this is a decent place to start.

The first thing to note is the various Horizon visions that Meta is pitching - Home, Worlds and Workrooms. These are (broadly) for close social interaction, gaming/larger-scale social and business collaboration - especially hybrid work.

Mostly, the demos and visions are expected to take place from the participant's home, office, school or similar venue. There's a couple of outdoor examples of enhanced sports, or outdoor art/advertising as well. Virtual desktops, avatars that mimic eye and facial movements and so on.

In terms of devices, there's a large emphasis on headsets (obviously the Oculus Quest, and also the new high-end Cambria device promised for 2022) as well as discussions of AR glasses, from the RayBan Stories recently launched, to a forthcoming project called Nazare.

The technology discussion is all around the functional elements, not the connectivity. Optics, sensors, batteries, displays, speakers, cameras and so on. There are developer tools for hand and voice interaction, and presence / placement of objects in the virtual realm. There's lots of discussion around creators, advertising and the ability to own (and interoperate) virtual avatars, costumes and furniture. There are also nods to privacy, as would be expected.

There's no mention of connectivity, apart from noting that Cambria will have radios of some sort. The section on the "Dozen major technological breakthroughs for next-gen metaverse" doesn't mention wireless, 5G or anything else.


It's worth noting that Oculus devices and the RayBan glasses today use Wi-Fi. We can also expect the gesture-control in future will likely lean on UWB sensors. Outside of Facebook / Meta essentially all of today's dedicated AR/VR headsets connect with Wi-Fi or a cable, to a local network or broadband line. (That might be 5G fixed-wireless to the building for a few % of homes, but that will still use Wi-Fi on the inside).

Where cellular 4G/5G takes a role in XR is where the device is tethered to a phone or modem, or is experienced actually on the smartphone itself - think Pokemon Go, or the IKEA app that lets you design a room with virtual furniture.

We can expect the same with the Metaverse. If you're using a smartphone to access it, then obviously 5G will play a role, just as it will for all mobile apps in 3-4 years time when penetration has increased.

Will Cambria and future iterations feature 5G built-in? Maybe, but I doubt it - not least because of the extra cost and engineering involved, as well as the multiple versions needed to support different regional frequency options. Would a future Apple AR/Metaverse headset feature cellular, like some versions of the Watch? Again, that's possible, but I wouldn't bet on it.

In the second half of the decade, later versions of 5G (Release 17 & 18) will have useful new features like centimetre-accuracy positioning that could be useful for Metaverse purposes - but again, that's reliant on having decent coverage in the first place. There will likely be some useful aspects outdoors though - for instance accurate measurement of vehicles on roadways.

Facebook Connectivity becomes Meta too

One other thing I noticed is a reference on LinkedIn to Facebook's often-overlooked Connectivity division, which does all sorts of interesting programmes and initiatives like TIP (which does OpenRAN and other projects), Terragraph 60GHz mesh, Express Wi-Fi and the low-end Basics "FB-lite" platform for developing markets with limited network infrastructure.



Apparently it's now being renamed Meta Connectivity - partly, I guess, because of the reorganisation and rebranding of the group overall, but also as a long-term part of the Metaverse landscape.

To me, that also indicates that the Metaverse is going to use multiple wireless (and wired) technologies - which aligns with Zuckerberg's view that it's more of a reinvention of the Internet/Web overall, rather than a particular app or experience.

Bandwidth-heavy? Or perhaps not....

One other thing needs to be considered around the Metaverse and connectivity. The immediate assumption is that such a "rich" environment, either full-virtual or overlaid onto a view of the real world, will need lots of data - and therefore the types of bandwidths promised by 5G. If we all use Metaverse devices to project "virtual TV screens" onto virtual surfaces, it will use lots of capacity, supposedly.

But it strikes me that avatars (even photo-realistic ones) & 3D reconstructions of real-world scenes will likely need less bandwidth than actual video. Realtime rendering will likely be done on-device in most cases, just sending the motion/sensor data or metadata about objects over the network.

Clearly this will depend on the exact context and application, but if your PC or phone or headset has a model of your friend's virtual house, or your virtual conference room - and all the objects and people/avatars in it - then it doesn't actually need realtime 4K video feeds to show different views.
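A back-of-envelope comparison makes the point. The figures below are illustrative assumptions (joint counts, update rates and video bitrates vary by system), not measurements:

```python
# Rough comparison: an avatar pose stream vs a compressed 4K video stream.
# All parameters are illustrative assumptions.

# Pose stream: ~25 tracked joints, each with a position (3 floats) and a
# rotation quaternion (4 floats), 4 bytes per float, sent 90 times/second.
joints, floats_per_joint, bytes_per_float, update_hz = 25, 7, 4, 90
pose_bps = joints * floats_per_joint * bytes_per_float * update_hz * 8

# A typical compressed 4K video stream is on the order of 25 Mbit/s.
video_bps = 25_000_000

print(f"pose stream: {pose_bps / 1e6:.2f} Mbit/s")  # ~0.5 Mbit/s
print(f"4K video:    {video_bps / 1e6:.0f} Mbit/s")
print(f"ratio:       ~{video_bps // pose_bps}x")
```

Even uncompressed, the pose stream is a tiny fraction of a video feed - consistent with the argument that on-device rendering shifts the load from bandwidth to local compute.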

In addition, the integration of eye-tracking allows pre-emptive downloads or actions, so "pseudo-latency" can seem very low, irrespective of the network's actual performance. If the headset sees you looking at a football, it can start working on the trajectory of a kick tens or even hundreds of milliseconds before you move your virtual leg.

That said, the sensor data uplink & motion control downlink will need low latency, but I suspect that will be more about driving localised breakout and peering rather than genuine localised compute. If you're in a hybrid conference with distant colleagues, the main role for edge-computing is to offload your data to the nearest Internet exchange with as few hops as possible.

(Some of the outdoor scenes in the Meta video from Connect seem rather unrealistic. They show groups of people playing table tennis and a virtual basketball match with "friends on the other side of the world", which would involve some interesting issues with the speed of light and how that would impact latency.)
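The physics constraint is easy to quantify. Assuming light in optical fibre travels at roughly two-thirds of c, and (generously) a perfectly direct route to the far side of the world:

```python
# Best-case round-trip time imposed by physics for an antipodal connection.
# Assumes fibre propagation at ~2/3 of c and a perfectly direct route;
# real routes are longer, so real latency is worse.
c = 299_792_458                     # speed of light in vacuum, m/s
fibre_speed = c * 2 / 3             # typical propagation speed in fibre
half_circumference_m = 20_000_000   # ~half of Earth's circumference

one_way_s = half_circumference_m / fibre_speed
rtt_ms = 2 * one_way_s * 1000
print(f"best-case RTT: {rtt_ms:.0f} ms")  # ~200 ms
```

A ~200 ms floor, before any processing or queuing, rules out genuinely real-time intercontinental table tennis - and no amount of edge computing or network slicing can beat the speed of light.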

Conclusion

In a nutshell - no, the Metaverse isn't the killer app for 5G.

The timelines align between the two, so where 'Verse apps are used on smartphones they'll increasingly use 5G if it's available and the user is out-and-about. But that's correlation, not causation. Those smartphones will typically be connected via Wi-Fi when at home, school or work. I suspect the main impact on smartphones will be on the need for better 3D graphics capability and enhanced sensors and cameras, rather than the network side.

Will we see some headsets or glasses with built-in cellular radios, some with 5G support? Sure, there will certainly be a few emerging in coming years, especially for enterprise / private network use. I'd expect field-workers, military, or industrial employees to exploit various forms of AR and VR in demanding situations well-suited to cellular, although many will tether a headset or glasses to a separate modem / module to reduce weight.

Many devices will also include various other wireless technologies too - Wi-Fi, Bluetooth, maybe Thread/Matter, UWB and so on.

But if anything, I suspect that the Metaverse may turn out to be the killer app for WiFi7, especially for home and office usage. That doesn't mean that 5G won't benefit as well - but I don't see it as a central enabler, given the probable heavy indoor bias of the main applications. (I don't think that cryptocurrency or edge-computing are key enablers either, but those are debates for another day)

(This article was initially published on my LinkedIn Newsletter - click here to see the original, plus comment thread. And please subscribe!)

#Metaverse #Facebook #Meta #AugmentedReality #VirtualReality #5G #WiFi #MixedReality #Mobile #Wireless #Devices #Gaming #Collaboration #HybridWorking

Sunday, May 09, 2021

Telcos: Stop Thinking You're Always the Leading Actor

Hubris: "an extreme and unreasonable feeling of pride and confidence in yourself"

I've followed developments in the telecoms industry for over 25 years. I've seen positives (eg broadband, SMS, LTE) and negatives (UMA, RCS) as well as a shifting landscape of regulation, the rise of the Internet, and multiple generations of network technology and services infrastructure.

Undoubtedly, both fixed and mobile networks have added massively to economies, society and our current way of life. It's understandable that network operators - and their vendors and governments - feel proud of their legacy and want to perpetuate it.

Yet it's possible to take this too far. Even beyond obviously-silly pronouncements such as "5G is as important as electricity", there remains a constant thread among the telecoms industry that it is absolutely central to all future developments, and that the network's finely-engineered QoS mechanisms are the wellspring of technology-derived value, as well as pivotal to future GDP and world happiness.

But while self-belief and aspiration is helpful, arrogance and self-delusion is not.

 



Starring role, or supporting cast?

There is an assumption that the (public, traditional) network is always the leading actor in any movie about Industry 4.0, IoT, smart homes, AI, pandemic recovery & the "new normal", combating climate change, or creating new modes of communications and entertainment like AR/VR.

And yet in reality, the telecom network - especially public 5G - is often going to be a supporting actor. Or perhaps just have a walk-on role, or be relegated to an extra who gets dubbed in a different language.

You can almost imagine a C-list celebrity arriving at a busy party and shouting: "Guys, guys! Listen up! You can get rid of all your old stuff, all your Internet apps, all your legacy Industry 3.0 gear... just use our new [Technology X] instead, and we'll offer it all with a nice monthly per-GB subscription. You can even buy a slice!"

Heads swivel. Eyes roll. People refill their glasses & continue their conversations.

A bit more realism and humility is required. Telecoms isn't always the star of the show, and neither does it write the screenplay for the rest of the infrastructure or solution.

That doesn't mean it lacks value, or has a limited opportunity - but that it has to play nicely alongside others... and accept that the director and producer have other priorities to focus on - and a wide choice of alternatives to cast in the same roles.

Leaving the acting analogy aside, it's also important to understand that the nature of the word "telco" is itself changing. Looking out to 2030, the "telco of the future" isn't like today's - there won't just be 3-4 national MNOs and a handful of converged/fibre/fixed-line operators. There will be a vast diversity of service provider types and private/community networks. I've written before about the "new telcos" and this is a critical aspect for traditional ("legacy"?) operators to understand and even embrace.

This isn't just 5G-related

It's tempting to just see this as a problem with how 5G is being positioned and hyped. But while I discuss that below, it's far from being unique. This attitude has been around for years, and pervades the entire industry. Some examples of this mindset include:

  • Telcos consistently assume that "voice" means the same as "telephony", since they only do the latter. Telephony is just one voice application of hundreds - and a 140yr-old clunky and poorly-optimised one at that. This is why telcos don't have a foothold in voice assistants, critical comms, gaming voice, podcasts and so on - and get out-competed by cloud players for UCaaS and cPaaS. (For more: see my upcoming workshop series on the future of Realtime Comms, Voice & Video, starting May 19th)
  • 20 years ago, 3G networks were pitched as platforms for telco-created and telco-delivered videoconferencing, games, "value-added services" (ringtones, basically) and much more inside "walled gardens". The killer app was, in fact, plain vanilla Internet access - despite early dataplans trying to restrict the use of VoIP and IM.
  • Some 1980s & '90s telcos saw themselves as central to enterprises' telephony systems and pitched "Centrex" services - basically a precursor to today's cloud-based UCaaS. Most businesses decided that running their own PBXs was a better option - it fit with their internal organisation and operations much better.
  • Telcos' MEC edge-compute was supposed to take centre-stage against hyperscale cloud providers. Instead, MEC's main use is to host internal NFV or vRAN functions that run the network itself. Or enable some hyperscalers' own edge platforms on a wholesale basis, where they don't have other options. Meanwhile, edge-compute evolves in many other (non-telco) domains much faster, including on-device / gateway, or linked to non-3GPP technologies such as Wi-Fi and fibre.
  • RCS was initially supposed to replace all Internet-based messaging apps. Then its believers pivoted to pitch it as a universal B2C tool for mobile customer interactions. In reality, it's (at best) just another slow-moving messaging app with few users and no loyalty, or special features. It turns out to be channel #17 for consumers dealing with companies that don't merit downloading a proper app or which have a lousy website. RBM's best hope is for things like tickets from that 3rd-tier airline you're forced to use to get to an obscure airport, or ordering a new recycling bin from the local council's chatbot. It's competing with the browser, not apps or Internet messaging.
  • MNOs' public 5G with network-slicing was supposed to replace all the cumbersome enterprise network gear such as ethernet and Wi-Fi. There are still visions within obscure 3GPP work-groups about "5G LANs" and I still read and hear nonsense from the cellular industry about it replacing Wi-Fi at scale....
  • ... or alternatively, the new story is that the 5G core is going to be the centrepiece of all telecoms and networking - it'll control Wi-Fi, fixed broadband, satellite connectivity etc. on operators' terms and policies, of course. (See the Broadband Forum's rather Machiavellian efforts here - led unsurprisingly by behemoths like Verizon & Deutsche Telekom that want the core network as a "control point" all the way to end-devices in the home). Yes, maybe Wi-Fi can easily just slot into 5G's shiny new cloud-native core - but in reality, 99% of Wi-Fi has nothing to do with cellular networks, offload, or non-trusted / non-3GPP access
  • As I mentioned recently, the telecom industry tries to take 100% of the (carbon) credit for new technologies reducing energy consumption or emissions.

The ridiculous and judgmental term "OTT" exemplifies this - creating a them-and-us fallacy of "web" companies using "our" pipes. Never mind the fact those technology companies build their own infrastructure, and invest billions in R&D for everything from AI to chip design. Or that all telcos themselves deploy "OTT" apps, websites and Internet-delivered functions.

To use a more sociological phrasing, many network operators still have a "sense of entitlement". They feel that they should be running everything from voice and video communications to networked entertainment, smart homes, or B2B commerce and industrial automation.

This attitude extends into public policy, and discussions on topics like spectrum, where there is a sense of exerting "license privilege". There is often an attempt to exert control before earning it. This is different to (say) Apple's control of its AppStore.

(*Sidenote [And apologies to my clients if this stings!]: if you work in telecoms & talk casually about "OTTs" for anything other than TV streaming, you should be fired, and so should your boss. It's not only wrong, it's flat-out ignorant and damaging. It indicates gross incompetence. It's not quite a "hate crime", but it is a them-and-us divisive term for a distinction that simply does not exist).

Actions have consequences

There are several reasons why this problem is more than just "attitude" or normal marketing-related hyperbole. It directly translates to business successes and failures.

  • Many telco technologies don't just benefit from n-squared network effects, but depend on them. They degrade "non-gracefully" if they're not ubiquitous - which means they need to be adopted by other telcos at the same time. Messaging is a good example - at 50% uptake, across 50% of operators that implement a new standard, there's a high % chance that two people on different networks won't be able to communicate, especially internationally. There's no focus on saturating small niches, or communities of interest, then expanding over time.
  • Telcos spend so much time envisioning themselves as "platforms" that they fail to realise that pretty much every tech platform evolves from a great (and widely-used/loved) product. Google indexed the web & created a great search function, before it started selling ads. Apple sold the iPhone for a while before launching the AppStore. It also had a loyal base of iPod users who wanted a music-phone, too. Amazon sold books before it launched AWS. All of them had platforms in mind earlier... but had to create a product before tuning the way the platform needed to behave for customers / developers.
  • The telecom industry always assumes that it will be a "net exporter" (or even pure exporter) of capabilities and APIs. It expects it will sell more "exposed functions" than it buys. It assumes a role at the top of the value chain, rather than the middle. This is starting to change now with the recognition of the role of buying public cloud services for virtualisation, but prior to that it just relied on Google Maps for "find the closest store", or credit-checking agencies for new subscriptions. Almost all successful tech businesses these days are more like trading hubs, importing AND exporting functions, APIs and data. The assumption that telcos will always be the OrchestratORS rather than OrchestratED is leading to an unrealistic world-view and poor decisions.
  • Conversations with regulators and governments try to amplify the supposed "special" status and reinforce the spurious divide with new telcos or Internet/tech firms. "We don't want to be dumb pipes, so please tax & regulate the clever people, because we can't compete". This might seem smart - and perhaps gets better access to new funds for rural coverage or pandemic recovery - but it also hampers and limits future options, for instance around international mergers and expansion. Domestic champions find it hard to live dual lives as global heroes.
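The first bullet above - the ubiquity-dependence of telco features - can be made concrete with a toy calculation (the numbers are mine, purely to illustrate the arithmetic): if half the operators implement a new standard, and half of their users adopt it, the chance that a random pair of users can actually use it together is tiny.

```python
# Toy model (illustrative numbers only) of a feature both ends must support.
operator_share = 0.5   # fraction of operators implementing the standard
user_uptake = 0.5      # fraction of those operators' users who adopt it

p_one_end = operator_share * user_uptake  # one randomly-chosen user is enabled
p_pair = p_one_end ** 2                   # both ends of a random pair are enabled

print(f"Chance a random pair can use the feature: {p_pair:.2%}")  # 6.25%
```

Internet messaging apps sidestep this by saturating a community of interest first, where effective "uptake" within the group is near 100%, and expanding from there.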

What needs to change?

There needs to be a frank, honest discussion about "Telcos' place in the world", which works out how to transition from a world of a few licensed network operators per country, to one in which the landscape is much more complex and nuanced.

  • Position the term "telco" as a broader church & consider the needs/roles of the wider group. MNOs and fixed telcos are important, but not alone here. TowerCos are telcos. Neutral hosts are telcos. WISPs are telcos. MVNOs are telcos. Governments can act as telcos. Community networks are telcos. Consider them peers. Insist that GSMA, CTIA, ETNO and others treat all telcos equally and offer membership (and governance) on reasonable terms.
  • Don't push back against governments trying to enable new forms of competition and new entrants. Instead, exploit them. Offer reference designs for Open RAN internationally (see Rakuten). Launch Private 5G services in new countries with local spectrum (Verizon is doing this). Run MVNOs in other countries (Turkcell, China Mobile etc).
  • Internet, IT and industrial automation (OT) companies need to be seen as equal and equivalent peers too. Amazon, Microsoft, Google, Siemens, Honeywell, IBM, HPE, Tech Mahindra, NTT Data & many others will often own the customer relationship. Sometimes telecoms fits into their frameworks, and sometimes theirs fits into telcos'. Maybe there are roles for gatekeepers, but only where there is enough competition.
  • Telecom standards need to become much more "loosely coupled". The traditional insistence that a 5G radio needs a 5G core and IMS/VONR telephony needs to stop. 3GPP standards and interfaces should be mix-and-match. Rather than trying to push complex core networks into fixed broadband architectures, the industry should instead make core-optional lightweight variants of 5G RANs, or expose interfaces that make them controllable by enterprise IT, or a Wi-Fi platform.
  • Offer both complete solutions and sub-component services. Don't assume primacy - sell what customers want. Maybe enterprises want their own Private 5G, but would happily use telcos to do the installation and maintenance, or to enable roaming or as a provider of eSIM-aaS
  • Use 3rd-party infrastructure and connectivity where it makes sense - for instance on neutral host networks. Attempt to automate onboarding, and remove friction wherever possible. Accept national roaming if it means your customers get better access in remote places, or indoors.
  • Work out better metrics to measure the business & communicate these to investors and regulators. See this article on what metrics are especially poor.
  • Understand software and app developers' mindsets. They don't want to pay for "premium QoS" on a thousand networks. They want warning of congestion, and how to adjust their apps' demands - when/how to use on-device compute vs. cloud, which codecs and compression, and so on.
  • Stop thinking that phone calls (and worse, video calls) are perfect manifestations of communications, with just an upgrade every 10 years from circuit to VoLTE to VoNR. Why doesn't the dialler app get updated once a month with new features, or give the user more controls?
  • Look at alternatives to subscription business models. Why not an insurance-style annual premium? Or "dark spectrum" just like "dark fibre"? Or 100 others?
  • Invent more stuff. Spend money on R&D rather than sports TV rights. Much of the current angst comes from competing against tech firms that actually create products and services that people want to buy/use.
  • Have a much clearer policy and stance on buying/selling technology and services. Make using platforms effectively seem as important as creating platforms. This is starting to happen with cloud and Open RAN, but it's very slow.

It is notable that the most interesting - and lauded - new telcos have come from different backgrounds, and have different attitudes. Rakuten is a cloud/eCommerce company first and foremost. Dish started as a satellite TV provider. Jio's parent Reliance Industries is a broad conglomerate. Although not a new company, South Korea's SKT is part of the SK Group, which also has a broad set of non-telco assets.

To be fair, one area where telcos are taking a more hybrid position is around physical assets. Some are operators/co-owners of shared networks, some spin-out tower businesses, some sell dark fibre and some buy - or both in different places. Some use public colocation and data-centres, while others are looking at local offices as possible edge compute sites.

Conclusions

This undoubtedly comes across as a bit of a rant (and not for the first time...) but it's coming from a position of frustration. I've seen the same issues play out for years - and at the core is this attitude of entitlement that I mention above.

It's totally counterproductive, even if the inertia - and sense of history - is understandable.

Everyone wants to be the star, especially if they've been the lead actor for decades. But sometimes, the role just involves a couple of scenes. And often, it's just the cameo roles - if played well - that get the headlines after all.

[A quick plug again: my upcoming Future of Video & RTC workshop series is here]

Cross-posted from my LinkedIn Newsletter article (here). Please see the comments there & subscribe!

#telecom #5G #telco #cloud #technology #regulation #voice #edgecomputing

Thursday, May 06, 2021

Why does the Edge Computing sector ignore Wi-Fi?

We should be talking more about Wi-Fi-Edge as well as 5G-edge. Arguably, it is more important (along with fibre-connected edge)

Yes, the 3GPP term MEC has been upgraded from "mobile edge compute" to "multi-access", but there's still little focus on local edge-cloud use-cases that rely on fixed (usually fixed + Wi-Fi) broadband.

Given today's Wi-Fi often has lower latency than current 5G versions (2-5 milliseconds is common), and many devices such as AR/VR headsets don't have 5G radios, this seems odd.

Many of the use-cases for advanced connectivity, especially IoT in smart buildings and smart homes, as well as gaming and content/video display, use Wi-Fi predominantly. 5G won't replace it.

On enterprise sites, Edge Computing applications will terminate to end-devices connected with a mix of 5G (public and private), 4G, Wi-Fi, fibre, Ethernet, LPWAN & other tech. This isn't just about low-latency, but connections for IoT devices, cameras, screens etc. that require local processing - and local storage ("data sovereignty"). 

They might use cloud-type software stacks, and use hyperscale cloud for deep analytics, but there will be various reasons for on/near-prem edge.

Offices connect all laptops, collaboration/meeting systems and screens with Wi-Fi. Wi-Fi dominates in education. Even in retail settings and #smartcities, there's a lot of Wi-Fi or proprietary industrial WLAN variants.

In homes, the opportunity is almost entirely about #WiFiEdge. TVs, laptops, voice assistants, smartphones, tablets, AR/VR headsets and most other residential devices connect with Wi-Fi (plus some short-range Bluetooth, ZigBee etc). Very few end-devices inside the home connect with 4G/5G, and even in future the low-band 5G connections that penetrate the walls likely won't support the ultra-low latencies that many talk about.

All of these have significant links to #cloud platforms and applications. Indeed, many higher-end Wi-Fi systems are themselves cloud-controlled. 

Outdoors, especially for mobile and vehicular use-cases, #5GEdge (& 4G for years) will be important plus maybe SatelliteEdge & LoRaEdge

In general, I'd expect "fixed edge" of one sort or another to be far more important than "mobile edge" or MEC. In many ways, it already is, given #CDNs largely service fixed broadband use-cases.

Possibly this is just reflecting a lack of marketing - or perhaps the cloud/edge/datacentre sector has been blinded by #5Gwash hype and has forgotten to focus on often more-important technologies for some critical applications - whether that's security-camera analytics or multiplayer games. They may well need low-latency or secure on-premise compute, but won't (often) be using 5G.

This also perhaps reflects the fact that 5G needs some edge-compute for its own operation (especially Open RAN), so the industry is trying to offset the costs by hyping the potential revenues of using that infrastructure for customer applications as well. That's less true for other connectivity types, although fixed/cable broadband has a lot of localised compute infrastructure too.

I'm curious to see if this blending of #WiFiEdge has resonance.
At the very least I think the Wi-Fi and fixed-broadband providers should be making much more noise about it. Seems bizarre that 5G-edge gets all the attention when it is, well, a bit of an edge-case.

Thursday, April 08, 2021

Free-to-download report on Creating Enterprise-Friendly 5G Policies (for governments & regulators)

Copied from my LinkedIn. Please click here for the download page & comments

I'm publishing a full report & recommendations on Enterprise & Private 5G, especially aimed at policymakers and regulators.

It explains the complex dynamics linking Enterprises, MNOs and Governments – explaining the motivations of each around connectivity, 5G deployment choices, IoT and the broader impacts and trade-offs around the economy and productivity.

This is not a simple calculus – MNOs want to exploit 5G opportunities for verticals, but businesses have their own priorities and preferences. Governments want to satisfy both groups – and also act as both major network users themselves and “suppliers” of spectrum.

A supporting cast of cloud players, network vendors, other classes of service providers and other stakeholders have important roles as well.

This report is a “Director’s Cut” extended version of a paper originally commissioned for internal use by Microsoft, now made available for general distribution.

(To download on LinkedIn, display in full screen & select download PDF)




#5G #policy #telecoms #private5G #cloud #IoT #spectrum #WiFi

Tuesday, September 15, 2020

Low-latency and 5G URLLC - A naked emperor?

Originally published as a LinkedIn Newsletter Article - see here

I think the low-latency 5G Emperor is almost naked. Not completely starkers, but certainly wearing some unflattering Speedos.

Much of the promise around the 5G – and especially the “ultra-reliable low-latency” URLLC versions of the technology – centres on minimising network round-trip times, for demanding applications and new classes of device.


 

Edge-computing architectures like MEC also often focus on latency as a key reason for adopting regional computing facilities - or even servers at the cell-tower. Similar justifications are being made for LEO satellite constellations.

The famous goal of 1 millisecond is often mentioned, usually in the context of applications like autonomous vehicles with snappy responses, AR/VR headsets without nausea, cloud-gaming, the “tactile Internet” and remote drone/robot control.

(In theory this is for end-to-end "user plane latency" between the user and server, so includes both the "over the air" radio and the backhaul / core network parts of the system. This is also different to a "roundtrip", which is there-and-back time).

Usually, that 1ms objective is accompanied by some irrelevant and inaccurate mention of 20 or 50 billion connected devices by [date X], and perhaps some spurious calculation of trillions of dollars of (claimed) IoT-enabled value. Gaming usually gets a mention too.

I think there are two main problems here:

  • Supply: It’s not clear that most 5G networks and edge-compute will be able to deliver 1ms – or even 10ms – especially over wide areas, or for high-throughput data.
  • Demand: It’s also not clear there’s huge value & demand for 1ms latency, even where it can be delivered. In particular, it’s not obvious that URLLC applications and services can “move the needle” for public MNOs’ revenues.

Supply

Delivering URLLC requires more than just “network slicing” and a programmable core network with a “slicing function”, plus a nearby edge compute node for application-hosting and data processing, whether that's in the 5G network (MEC or AWS Wavelength) or some sort of local cloud node like AWS Outpost. That low-latency slice needs to span the core, the transport network and critically, the radio.

Most people I speak to in the industry look through the lens of the core network slicing or the edge – and perhaps IT systems supporting the 5G infrastructure. There is also sometimes more focus on the UR part than the LL, which actually have different enablers.

Unfortunately, it looks to me as though the core/edge is writing low-latency checks that the radio can’t necessarily cash.

Without going into the abstruse nature of radio channels and frame-structure, it’s enough to note that ultra-low latency means the radio can’t wait to bundle a lot of incoming data into a packet, and then get involved in to-and-fro negotiations with the scheduling system over when to send it.

Instead, it needs to have specific (and ideally short) timed slots in which to transmit/receive low-latency data. This means that it either needs to have lots of capacity reserved as overhead, or the scheduler has to de-prioritise “ordinary” traffic to give “pre-emption” rights to the URLLC loads. Look for terms like Transmission Time Interval (TTI) and grant-free UL transmission to drill into this in more detail.
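To make the frame-structure point concrete: 3GPP NR defines slot length by "numerology" μ (subcarrier spacing of 15·2^μ kHz; a 14-symbol slot lasts 1ms/2^μ), with "mini-slots" of as few as 2 symbols for pre-emptive URLLC traffic. A quick sketch of the resulting TTIs:

```python
# 3GPP NR numerology: SCS = 15 * 2^mu kHz; a 14-symbol slot lasts 1ms / 2^mu.
# URLLC scheduling can use "mini-slots" of as few as 2 OFDM symbols.

for mu in range(4):
    scs_khz = 15 * 2 ** mu
    slot_ms = 1 / 2 ** mu               # full-slot TTI
    mini_slot_ms = slot_ms * 2 / 14     # 2-symbol mini-slot TTI
    print(f"mu={mu}: SCS {scs_khz:>3} kHz, slot {slot_ms:.3f} ms, "
          f"2-symbol mini-slot {mini_slot_ms:.3f} ms")
```

Even the shortest mini-slot only covers over-the-air transmit time - scheduling grants, HARQ retransmissions and processing delays all stack on top, which is why reserved capacity or pre-emption is needed to keep the total anywhere near 1ms.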

It’s far from clear that on busy networks, with lots of smartphone or “ordinary” 5G traffic, there can always be a comfortable coexistence of MBB data and more-demanding URLLC. If one user gets their 1ms latency, is it worth disrupting 10 – or 100 – users using their normal applications? That will depend on pricing, as well as other factors.

This gets even harder where the spectrum used is a TDD (time-division duplexing) band, where there’s also another timeslot allocation used for separating up- and down-stream data. It’s a bit easier in FDD (frequency-division) bands, where up- and down-link traffic each gets a dedicated chunk of spectrum, rather than sharing it.

There’s another radio problem here as well – spectrum license terms, especially where bands are shared in some fashion with other technologies and users. For instance, the main “pioneer” band for 5G in much of the world is 3.4-3.8GHz (which is TDD). But current rules – in Europe, and perhaps elsewhere - essentially prohibit the types of frame-structure that would enable URLLC services in that band. We might get to 20ms, or maybe even 10-15ms if everything else stacks up. But 1ms is off the table, unless the regulations change. And of course, by that time the band will be full of smartphone users using lots of ordinary traffic. There maybe some Net Neutrality issues around slicing, too.

There's a lot of good discussion - some very technical - on this recent post and comment thread of mine: https://www.linkedin.com/posts/deanbubley_5g-urllc-activity-6711235588730703872-1BVn

Various mmWave bands, however, have enough capacity to be able to cope with URLLC more readily. But as we already know, mmWave cells also have very short range – perhaps just 200 metres or so. We can forget about nationwide – or even full citywide – coverage. And outdoor-to-indoor coverage won’t work either. And if an indoor network is deployed by a 3rd party such as neutral host or roaming partner, it's far from clear that URLLC can work across the boundary.

Sub-1GHz bands, such as 700MHz in Europe, or perhaps refarmed 3G/4G FDD bands such as 1.8GHz, might support URLLC and have decent range/indoor reach. But they’ll have limited capacity, so again coexistence with MBB could be a problem, as MNOs will also want their normal mobile service to work (at scale) indoors and in rural areas too.

What this means is that we will probably get (for the foreseeable future):

  • Moderately Low Latency on wide-area public 5G Networks (perhaps 10-20ms), although where network coverage forces a drop back to 4G, then 30-50ms.
  • Ultra* Low Latency on localised private/enterprise 5G Networks and certain public hotspots (perhaps 5-10ms in 2021-22, then eventually 1-3ms maybe around 2023-24, with Release 17, which also supports deterministic "Time Sensitive Networking" in devices)
  • A promised 2ms on Wi-Fi6E, when it gets access to big chunks of 6GHz spectrum

This really isn't ideal for all the sci-fi low-latency scenarios I hear around drones, AR games, or the cliched surgeon performing a remote operation while lying on a beach. (There's that Speedo reference, again).

* see the demand section below on whether 1-10ms is really "ultra-low" or just "very low" latency

Demand

Almost 3 years ago, I wrote an earlier article on latency (link), some of which I'll repeat here. The bottom line is that it's not clear that there's a huge range of applications and IoT devices that URLLC will help, and where they do exist they're usually very localised and more likely to use private networks rather than public.

One paragraph I wrote stands out:

I have not seen any analysis that tries to divide the billions of devices, or trillions of dollars, into different cohorts of time-sensitivity. Given the assumptions underpinning a lot of 5G business cases, I’d suggest that this type of work is crucial. Some of these use-cases are slow enough that sending data by 2G is fine (or by mail, in some cases!). Others are so fast they’ll need fibre – or compute capability located locally on-device, or even on-chip, rather than in the cloud, even if it’s an “edge” node.

I still haven't seen any examples of that analysis. So I've tried to do a first pass myself, albeit using subjective judgement rather than hard data*. I've put together what I believe is the first attempted "heatmap" for latency value. It includes both general cloud-compute and IoT, both of which are targeted by 5G and various forms of edge compute. (*get in touch if you'd like to commission me to do a formal project on this)

A lot of the IoT examples I hear about are either long time-series collections of sensor data (for asset performance-management and predictive maintenance), or have fairly loose timing constraints. A farm’s moisture sensors and irrigation pumps don’t need millisecond response times. Conversely, a chemical plant may need to measure and alter pressures or flows in microseconds.

I've looked at time-ranges for latency from microseconds to days, spanning 12 orders of magnitude (see later section for more examples). As I discuss below, not everything hinges on the most-mentioned 1-100 millisecond range, or the 3-30ms subset of that which 5G addresses.

I've then compared those latency "buckets" with distances from 1m to 1000km - 7 orders of magnitude. I could have gone out to geostationary satellites, and down to chip scales, but I'll leave that exercise to the reader.
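One hard constraint underneath any such heatmap (my own arithmetic, using the usual ~2/3-of-c propagation speed in fibre): distance alone sets a floor on round-trip time, before any radio, queuing or compute delay is added.

```python
# Round-trip propagation floor over fibre (~2/3 c), by server distance.
# Illustrative arithmetic only - real paths add radio, queuing & processing.
C_FIBRE_KM_S = 299_792 * 2 / 3

for km in (0.001, 0.1, 1, 10, 100, 1000):
    rtt_ms = 2 * km / C_FIBRE_KM_S * 1000
    print(f"{km:>8} km -> RTT floor {rtt_ms:.4f} ms")
# A 1ms round-trip budget caps fibre distance at roughly 100 km -
# hence the case for regional edge nodes rather than distant datacentres.
```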

  

The question for me is - are the three or four "battleground" blocks really that valuable? Is the 2-dimensional Goldilocks zone of not-too-distant / not-too-close and not-too-short / not-too long, really that much of a big deal?

And that's without considering the third dimension of throughput rate. It's one thing having a low-latency "stop the robot now!" message, but quite another doing hyper-realistic AR video for a remote-controlled drone or a long session of "tactile Internet" haptics for a game, played indoors at the edge of a cell.

If you take all those $trillions that people seem to believe are 5G-addressable, what % lies in those areas of the chart? And what are the sensitivities to coverage and pricing, and what substitute risks apply - especially private networks rather than MNO-delivered "slices" that don't even exist yet?

Examples

Here are some more examples of timing needs for a selection of applications and devices. Yes, we can argue some of them, but that's not the point - it's that this supposed magic range of 1-100 milliseconds is not obviously the source of most "industry transformation" or consumer 5G value:

  • Sensors on elevator doors may send sporadic data, to predict slowly-worsening mechanical problems – so an engineer might be sent a month before the normal maintenance visit. Similarly, sensors monitoring a building’s structural condition, vegetation cover in the Amazon, or oceanic acidity aren’t going to shift much month-by-month.
  • A car might download new engine-management software once a week, and upload traffic observations and engine-performance data once a day (maybe waiting to do it over WiFi, in the owner’s garage, as it's not time-critical).
  • A large oil storage tank, or a water well, might have a depth-gauge giving readings once an hour.
  • A temperature sensor and thermostat in an elderly person’s home, to manage health and welfare, might track readings and respond with control messages every 10 minutes. Room temperatures change only slowly.
  • A shared bicycle might report its position every minute – and unlock in under 10 seconds when the user buys access with their smartphone app
  • A payment or security-access tag should check identity and open a door, or confirm a transaction, in a second or two.
  • Voice communication seems laggy with anything longer than 200 millisecond latency.
  • A networked video-surveillance system may need to send a facial image, and get a response in 100ms, before the person of interest moves out of camera-shot.
  • An online video-game ISP connection will be considered “low ping” at maybe 50ms latency.
  • A doctor’s endoscope or microsurgery tool might need to respond to controls (and send haptic feedback) 100 times a second – i.e. every 10ms.
  • Teleprotection systems for high-voltage utility grids can demand 6-10ms latency times
  • A rapidly-moving drone may need to react in 2-3 milliseconds to a control signal, or a locally-recognised risk.
  • A sensitive industrial process-control system may need to be able to respond in 10s or 100s of microseconds to avoid damage to finely-calibrated machinery
  • Image sensors and various network sync mechanisms may require response times measured in nanoseconds
  • Photon sensors for various scientific uses may operate at picosecond durations
  • Ultra-fast laser pulses for machining glass or polymers can be measured in femtoseconds
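
The scale of that list is easy to underplay. As a minimal sketch (in Python, with rough illustrative figures drawn from the examples above - the exact values are debatable, as noted), here is the order of magnitude of each response-time requirement. Roughly 21 orders of magnitude separate the slowest from the fastest, which is why a single "low latency" label is close to meaningless:

```python
import math

# Illustrative response-time needs in seconds, from the list above.
requirements_s = {
    "elevator wear sensor":       30 * 24 * 3600,  # ~a month
    "oil-tank depth gauge":       3600,            # hourly reading
    "shared-bike unlock":         10,
    "voice call":                 0.2,
    "online game ping":           0.05,
    "endoscope haptics":          0.01,
    "grid teleprotection":        0.006,
    "drone control":              0.002,
    "industrial process control": 100e-6,
    "image-sensor sync":          1e-9,
    "femtosecond laser pulse":    1e-15,
}

# Print each requirement as a power of ten, slowest first.
for name, t in sorted(requirements_s.items(), key=lambda kv: -kv[1]):
    print(f"{name:28s} ~1e{math.floor(math.log10(t)):+d} s")
```

Only a narrow slice of that range maps onto the 1-100ms band that 5G URLLC marketing focuses on.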

Conclusion

Latency is important, for application developers, enterprises and many classes of IoT device and solution. But we have been spectacularly vague at defining what "low-latency" actually means, and where it's needed.

A lot of what gets discussed in 5G and edge-computing conferences, webinars and marketing documents is either hyped, or is likely to remain undeliverable. A lot of the use-cases can be adequately serviced with 4G mobile, Wi-Fi - or a person on a bicycle delivering a USB memory stick.

What is likely is that average latencies will fall with 5G. An app developer that currently expects a 30-70ms latency on 4G (or probably lower on Wi-Fi) will gradually adapt to 20-40ms on mostly-5G networks and eventually 10-30ms. If it's a smartphone app, they likely won't use URLLC anyway.
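
That adaptation is usually done empirically. As a sketch - with thresholds that are purely illustrative, echoing the 4G/5G ranges above rather than any spec - an app might bucket its measured round-trip times and pick a behaviour band accordingly:

```python
import statistics

def choose_mode(rtt_samples_ms):
    """Pick an app behaviour band from measured round-trip times (ms).
    Thresholds are illustrative, not from any standard."""
    p95 = statistics.quantiles(rtt_samples_ms, n=20)[-1]  # ~95th percentile
    if p95 <= 30:
        return "realtime"     # plausible on good 5G or Wi-Fi
    if p95 <= 70:
        return "interactive"  # typical 4G experience
    return "buffered"         # degrade gracefully; no URLLC assumed
```

The point is that developers design for the latency distribution they actually measure, with graceful fallback - not for a guaranteed figure on a slide.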

Specialised IoT developers in industrial settings will work with specialist providers (maybe MNOs, maybe fully-private networks and automation/integration firms) to hit more challenging targets, where ROI or safety constraints justify the cost. They may get to 1-3ms at some point in the medium term, but it's far from clear they will be contributing massively to MNOs or edge-providers' bottom lines.

As for wide-area URLLC? Haptic gaming from the sofa on 5G, at the edge of the cell? Remote-controlled drones with UHD cameras? Two cars approaching each other on a hill-crest on a country road? That's going to be a challenge for both demand and supply.

Saturday, August 08, 2020

A rant about 5G myths - chasing unicorns​

Exasperated rant & myth-busting time.

I actually got asked by a non-tech journalist recently "will 5G change our lives?"

Quick answer: No. Emphatically No.


#5G is Just Another G. It's not a unicorn

Yes, 5G is an important upgrade. But it's also *massively* overhyped by the mobile industry, by technology vendors, by some in government, and by many business and technology journalists.

- There is no "race to 5G". That's meaningless geopolitical waffle. Network operators are commercial organisations and will deploy networks when they see a viable market, or get cajoled into it by the terms & timing of spectrum licenses.

- Current 5G is like 4G, but faster & with extra capacity. Useful, but not world-changing.

- Future 5G will mean better industrial systems and certain other cool (but niche) use-cases.

- Most 5G networks will be very patchy, without ubiquitous coverage, except for very rudimentary performance. That means 5G-only applications will be rare - developers will have to assume 4G fallback (& WiFi) are common, and that dead-spots still exist.

- Lots of things get called 5G, but actually aren't 5G. It's become a sort of meaningless buzzword for "cool new wireless stuff", often by people who couldn't describe the difference between 5G, 4G or a pigeon carrying a message.

- Anyone who talks about 5G being essential for autonomous cars or remote surgery is clueless. 5G might get used in connected vehicles (self-driving or otherwise) if it's available and cheap, but it won't be essential - not least as it won't work everywhere (see above).

- Yes, there will be a bit more fixed wireless FWA broadband with 5G. But no, it's not replacing fibre or cable for normal users, especially in competitive urban markets. It'll help take FWA from 5% to 10-12% of global home broadband lines.

- The fact the 5G core is "a cloud-native service based architecture" doesn't make it world-changing. It's like raving about a software-defined heating element for your toaster. Fantastic for internal flexibility. But we expect that of anything new, really. It doesn't magically turn a mobile network into a "platform". Nor does it mean it's not Just Another G.

- No, enterprises are not going to "buy a network slice". The amount of #SliceWash I'm hearing is astonishing. It's a way to create some rudimentary virtualised sub-networks in 5G, but it's not a magic configurator for 100s or 1000s of fine-grained, dynamically-adjusted different permutations all coexisting in harmony. The delusional vision is very far removed from the mundane reality.

- The more interesting stuff in 5G happens in Phase 2/3, when 3GPP Release 16 & then Release 17 are complete, commercialised & common. R16 has just been finalised. From 2023-4 onward we should expect some more mass-market cool stuff, especially for industrial use. Assuming the economy recovers by then, that is.

- Ultra-reliable low-latency communications (URLLC) sounds great, but it's unclear there's a business case except at very localised levels, mostly for private networks. Actually, UR and LL are two separate things anyway. MNOs aren't going to be able to sell reliability unless they also take legal *liability* if things go wrong. If the robot's network goes down and it injures a worker, is the telco CEO going to take the rap in court?

- Getting high-performance 5G working indoors will be very hard, need dedicated systems, and will take lots of time, money and trained engineers. It'll be a decade or longer before it's very common in public buildings - especially if it has to support mmWave and URLLC. Most things like AR/VR will just use Wi-Fi. Enterprises may deploy 5G in factories or airport hangars or mines - but will engineer it very carefully, examine the ROI - and possibly work with a specialist provider rather than a telco.

- #mmWave 5G is even more overhyped than most aspects. Yes, there's tons of spectrum and in certain circumstances it'll have huge speed and capacity. But it's got short range and needs line-of-sight. Outdoor-to-indoor coverage will be near zero. Having your back to a cell-site won't help. It will struggle to go through double-glazed windows, the shell of a car or train, and maybe even your bag or pocket. Extenders & repeaters will help, but it's going to be exceptionally patchy (and need tons of fibre everywhere for backhaul).

- 5G + #edgecomputing is not going to be a big deal. If low-latency connections were that important, we'd have had localised *fixed* edge computing a decade ago, as most important enterprise sites connect with fibre. There's almost no FEC, so MEC seems implausible except for niches. And even there, not much will happen until there's edge federation & interconnect in place. Also, most smartphone-type devices will connect to someone else's WiFi between 50-80% of the time, and may have a VPN which means the network "egress" is a long way from the obvious geographically-proximal edge.

- Yes, enterprise is more important in 5G. But only for certain uses. A lot can be done with 4G. "Verticals" is a meaningless term; think about applications.

- No, it won't displace Wi-Fi. Obviously. I've been through this multiple times.

- No, all laptops won't have 5G. (As with 3G and 4G. Same arguments).

- No, 5G won't singlehandedly contribute $trillions to GDP. It's a less-important innovation area than many other things, such as AI, biotech, cloud, solar and probably quantum computing and nuclear fusion. So unless you think all of those will generate 10's or 100's of $trillions, you've got the zeros wrong.

- No, 5G won't fry your brain, or kill birds, or give you a virus. Conspiracy theorists are as bad as the hypesters. 5G is neither Devil nor Deity. It's just an important, but ultimately rather boring, upgrade.

There's probably a ton more 5G fallacies I've forgotten, and I might edit this with a few extra ones if they occur to me. Feel free to post comments here, although the majority of debate is on my LinkedIn version of this post (here). This is also the inaugural post for a new LinkedIn newsletter. Most of my stuff is not quite this snarky, but it depends on my mood. I'm @disruptivedean on Twitter so follow me there too.

If you like my work, and either need a (more sober) business advisory session or workshop, let me know. I'm also a frequent speaker, panellist and moderator for real and virtual events.

Just remember: #5GJAG. Just Another G.