
Thursday, February 23, 2023

Local networks: when telecoms becomes "pericoms"

Published via my LinkedIn Newsletter - see here to subscribe / see comment thread

"Telecoms" or "telecommunications" is based on the Greek prefix "tele-".

It means "at a distance, or far-off". It is familiar from its use in other terms such as telegraph, television or teleport. And for telecoms, that makes sense - we generally make phone calls to people across medium or long distances, or send them messages. Even our broadband connections tend to link to distant datacentres. The WWW is, by definition, worldwide.

The word "communications" actually comes from a Latin root meaning to impart or share - something which, at the time, would mostly have been done by talking to other people directly, but could also have involved writing or other distance-independent methods.

This means that distant #communications, #telecoms, has some interesting properties:

  • The 2+ distant ends are often (but not always) on different #networks. Interconnection is therefore often essential.
  • Connecting distant points tends to mean there's a good chunk of infrastructure in between them, owned by someone other than the users. They have to pay for it, somehow.
  • Because the communications path is distant, it usually makes sense for the control points (switches and so on) to be distant as well. And because there's typically payment involved, the billing and other business functions also need to be sited "somewhere", probably in a #datacentre, which is also distant.
  • There are a whole host of opportunities and risks with distant communications, that mean that governments take a keen interest. There are often licenses, regulations and internal public-sector uses - notably emergency services.
  • The infrastructure usually crosses the "public domain" - streets, airwaves, rooftops, dedicated tower sites and so on. That brings additional stakeholders and rule-makers into the system.
  • Involving third parties tends to suggest some sort of "service" model of delivery, or perhaps government subsidy / provision.
  • Competition authorities need to take into account huge investments and limited capacity/scope for multiple networks. That also tends to reduce the number of suppliers to the market.

That is telecommunications - distant communications.

But now consider the opposite - nearby communications.

Examples could include a private 5G network in a factory, a LAN in an office, a WiFi connection in the home, a USB cable, or a Bluetooth headset with a phone. There are plenty of other examples, especially for IoT.

These nearby examples have very different characteristics to telecoms:

  • Endpoints are likely to be on the same network, without interconnection
  • There's usually nobody else's infrastructure involved, except perhaps a building owner's ducts and cabinets.
  • Any control points will generally be close - or perhaps not needed at all, as the devices work peer-to-peer.
  • There's relatively little involvement of the "public domain", unless there are risks like radio interference beyond the network boundaries.
  • It's not practical for governments to intervene too much in local communications - especially when it occurs on private property, or inside a building or machine.
  • There might be a service provider, but equally the whole system could be owned outright by the user, or embedded into another larger system like a robot or vehicle.
  • Competition is less of an issue, as is supplier diversity. You can buy 10 USB cables from different suppliers if you want.
  • Low-power, shared or unlicensed spectrum is typical for local #wireless networks.

I've been trying to work out a good word for this. Although "#telecommunications" is itself an awkward Greek / Latin hybrid, I think the best prefix might be Greek again - "peri", which means "around", "close" or "surrounding" - think of perimeter, peripheral, or the perigee of an orbit.

So I'm coining the term pericommunications, to mean nearby or local connectivity. (If you want to stick to all-Latin, then proxicommunications would work quite well too).

Just because a company is involved in telecoms does not mean it can necessarily expect a role in pericoms as well. (Or indeed, vice versa). It can certainly participate in that market, but there may be fewer synergies than you might imagine.

Some telcos are established and successful pericos as well. Many home broadband providers have done an excellent job with providing whole-home #WiFi systems with mesh technology, for example. In-building mobile coverage systems in large venues are often led by one telco, with others onboarding as secondary operators.

But other nearby domains are trickier for telcos to address. You don't expect to get your earbuds as an accessory from your mobile operator - or indeed, pay extra for them. Attempts to add on wearables as an extra SIM on a smartphone account have had limited success.

And the idea of running on-premise enterprise private networks as a "slice" of the main 4G/5G macro RAN has clearly failed to gain traction, for a variety of reasons. The more successful operators are addressing private wireless in much the same way as other integrators and specialist SPs, although they can lean on their internal spectrum team, test engineers and other groups to help.

Some are now "going the extra mile" (sorry for the pun) for pericoms. Vodafone has just announced its prototype 5G mini base-station, the size of a Wi-Fi access point, based on a Raspberry Pi and a Lime Microsystems radio chip. It can support a small #5G standalone core and is even #OpenRAN compliant. Other operators have selected new vendors or partners for campus 4G/5G deployments. The 4 UK MNOs have defined a set of shared in-building design guidelines for neutral-host networks.

It can be hard for regulators and policymakers to grasp the differences, however. The same is true for consultants and lobbyists. An awful lot of the suggested upsides of 5G (or other forms of connectivity) have been driven by a tele-mindset rather than a peri-view.

I could make a very strong argument that countries should really have a separate pericoms regulator, or a dedicated unit within the telecoms regulator and ministry. The stakeholders, national interests and economics are completely different.

A similar set of differences can be seen in #edgecomputing: regional datacentres and telco MEC are still "tele". On-premise servers or on-device CPUs and GPUs are peri-computing, with very different requirements and economics. Trying to blur the boundary doesn't work well at present - most people don't even recognise it exists.

Overall, we need to stop assuming that #pericoms is merely a subset of #telecoms. It isn't - it's almost completely different, even if it uses some of the same underlying components and protocols.

(If this viewpoint is novel or interesting and you would like to explore it further and understand what it means for your organisation - or get a presentation or keynote about it at an event - please get in touch with me)

Thursday, February 09, 2023

What does an AI think about Net Neutrality?

Originally published on my LinkedIn Newsletter, 9th Feb 2023. See here for comment thread

I'm following two very important trends in tech at the moment, so I thought it might be fun to combine them:

  • The emergence of #GenerativeAI, for answering questions, generating images and sounds, and potentially a whole lot more. OpenAI #ChatGPT is the current best-known, but there are dozens of others using language models, transformers & other techniques. Some people are suggesting it will redefine web search - and potentially an awful lot more than that. Some even see it as a pivotal shift in technology, society and "skilled" employment.
  • The re-emergence of discussions around #NetNeutrality and associated regulation relating to technology platforms, telcos and networks, like the ridiculous (un)#fairshare & #InternetTrafficTax concept being pitched in Europe by lobbyists. In the UK, Ofcom recently concluded a consultation on whether changes to NN rules should be made (I sent in a reply myself - I'll discuss it another time).

So, I asked ChatGPT what it thought about NN, over a series of questions. I specifically focused on whether it helps or hinders innovation.

The transcript is below, but some thoughts from me first:

  • The text is good. Almost alarmingly good. I found myself saying "good point" a few times. This is probably because it gives reasons in fairly long lists, not just "3 bulletpoints for a slide".
  • It seems pretty even-handed, with "proponents say this, others say that"
  • You can sense that its training base tends to give it "common" responses and frequently-cited arguments. That doesn't mean they're wrong, but it can lack nuance, or repeat some common *fallacies* as well.
  • The long-list approach and commentary are quite good at highlighting the need for multiple approaches, and for reconciling them. A single statistical approach isn't enough - there are also qualitative questions to ask, surveys to undertake and multiple angles to absorb. ChatGPT understands the issue is complex, even if it cannot resolve the complexity itself. Simple answers are wrong.
  • I was particularly impressed with the response to my question on cross-border effects, and to separating NN vs. funding effects.
  • It struggled with the last question, about Internet and non-Internet innovation. EDIT: I just realised that it ignored my typo, and understood that I meant "directly" when I wrote "direct" in my question
  • I'd be very worried if I did bog-standard "content generation" for clickbait blogs, "bucketshop" market research, or corporate social posts. This is all perfectly-usable general blurb if you need slabs of text nobody is going to really delve into too deeply.

All that said, I don't think this is (yet) a gamechanger in terms of analysis and argument. It's good for collating a lot of "food for thought", but as others have pointed out, it can also be "confidently wrong", sometimes in amusing ways.

In a separate chat, I asked it for a contrarian's view of #5G use-cases. While it was right on some things ("Some argue that the vast majority of #IoT devices do not require the high speed and low latency that 5G provides and that the hype around IoT is overblown.") others were less convincing: ("Some argue that 5G may not provide the low latency and high reliability that is being promised, and that there are other technologies, such as wired connections, that are better suited for applications such as autonomous vehicles"). If there's one thing less relevant to self-driving cars than 5G, it's fixed broadband!

Anyway - enjoy the debate, transcribed in full, below.

(Note: the image for this article came from Stable Diffusion, when I asked for a picture showing "An advanced AI debating Net Neutrality")



Me: Does net neutrality help innovation?

ChatGPT: Net neutrality has been a topic of much debate, with proponents arguing that it promotes innovation and others arguing that it stifles it.

Proponents of net neutrality argue that it helps to level the playing field for new entrants and small businesses by preventing Internet Service Providers (ISPs) from discriminating against certain types of traffic or services. This, in turn, encourages innovation, as entrepreneurs and startups are free to bring new products and services to market without fear of being blocked or slowed down by ISPs.

On the other hand, opponents of net neutrality argue that it creates a regulatory burden that stifles innovation and investment in the broadband industry. They argue that without the ability to prioritize different types of traffic, ISPs will be unable to effectively manage their networks and may be forced to limit investment in new infrastructure and technologies.

Ultimately, the impact of net neutrality on innovation is a complex and highly debated issue, and there is no clear consensus on whether it helps or hurts innovation. However, proponents of net neutrality generally believe that it promotes a free and open internet that supports innovation and competition, while opponents believe that it imposes unnecessary regulation that stifles growth and investment.


Me: What research is necessary to reach a better conclusion?

ChatGPT: To reach a more definitive conclusion on the impact of net neutrality on innovation, several types of research would be necessary, including:

  1. Economic studies: Economic studies could analyze the impact of net neutrality regulations on investment and innovation in the broadband industry, as well as the impact on consumer prices and access to broadband services.
  2. Case studies: Case studies of countries or regions with and without net neutrality regulations could provide valuable insights into the real-world effects of these regulations on innovation and investment in the broadband industry.
  3. Surveys and interviews: Surveys and interviews with entrepreneurs, startups, and industry experts could provide insights into the impact of net neutrality on the ability of new entrants to bring new products and services to market.
  4. Technical studies: Technical studies could examine the impact of net neutrality regulations on network management and traffic prioritization, and assess the potential impact on network performance, reliability, and security.
  5. Long-term data analysis: Long-term data analysis could track changes in investment, innovation, and consumer prices over time in regions with and without net neutrality regulations, providing valuable insights into the long-term impact of these regulations.

Overall, a comprehensive and multi-disciplinary approach to research, combining economic, case study, survey, technical, and data analysis methods, would be necessary to reach a more definitive conclusion on the impact of net neutrality on innovation.


Me: How would you measure innovation in this context?

ChatGPT: Measuring the impact of net neutrality on innovation is a complex task, as innovation can take many forms and can be difficult to quantify. However, several metrics could be used to measure innovation in this context, including:

  1. Investment in research and development (R&D): This could be measured by tracking the amount of investment in R&D by broadband companies and startups, as well as the number of patents filed in relevant areas.
  2. New product and service launches: This could be measured by tracking the number of new products and services introduced by broadband companies and startups, as well as the market adoption and success of these offerings.
  3. Entrepreneurial activity: This could be measured by tracking the number of new startups founded in the broadband industry, as well as the success of these startups in attracting investment and bringing new products and services to market.
  4. Consumer benefits: This could be measured by tracking changes in consumer prices, access to broadband services, and the availability of new and innovative online services and applications.
  5. Network performance and reliability: This could be measured by tracking changes in network speed, latency, and uptime, as well as the ability of networks to handle new and innovative applications and services.

While these metrics are not exhaustive, they provide a good starting point for measuring the impact of net neutrality on innovation in the broadband industry. By tracking changes in these metrics over time in regions with and without net neutrality regulations, researchers could gain valuable insights into the impact of these regulations on innovation.


Me: A lot of those metrics are international. More regions with net neutrality might increase innovation globally, not locally. How do you account for that?

ChatGPT: You are correct, some of the metrics mentioned could be influenced by factors beyond the borders of a single region. To account for this, it would be important to consider both the local and global impact of net neutrality on innovation.

For example, to measure the local impact of net neutrality, researchers could compare the metrics mentioned in regions with and without net neutrality regulations, taking into account factors such as the local economy, cultural and regulatory environment, and the state of the broadband industry in each region. This would provide valuable insights into the specific impact of net neutrality on innovation within those regions.

To measure the global impact of net neutrality, researchers could compare these metrics on a global scale, taking into account factors such as the state of the global economy, the state of the broadband industry globally, and the regulatory environment in different regions. This would provide a broader perspective on the impact of net neutrality on innovation globally, and could help to identify any trends or patterns that may not be apparent from a regional analysis alone.

Overall, it is important to consider both the local and global impact of net neutrality on innovation to gain a complete and accurate understanding of the impact of these regulations on the broadband industry and innovation more broadly.


Me: How would we know if net neutrality had a larger effect on innovation than availability of financing for startups?

ChatGPT: Determining the relative impact of net neutrality versus the availability of financing on innovation can be challenging, as these factors are often interrelated and can have complex and interconnected effects. However, several approaches could be used to isolate the impact of each factor, including:

  1. Comparative studies: Researchers could compare regions with and without net neutrality regulations, but with similar levels of financing available for startups. By controlling for the availability of financing, researchers could isolate the impact of net neutrality on innovation.
  2. Historical analysis: Researchers could analyze data from periods of time when net neutrality regulations were in place or not in place, and compare this data with changes in the availability of financing for startups. This would help to identify any correlations between net neutrality and innovation, independent of changes in financing.
  3. Causal inference methods: Researchers could use causal inference methods, such as regression analysis, to model the relationship between net neutrality, financing, and innovation, while controlling for other factors that could influence these relationships.
  4. Survey and interview-based research: Researchers could survey and interview startups, entrepreneurs, and industry experts to gain insights into the relative importance of net neutrality and financing for innovation. This would provide valuable qualitative data to complement the quantitative data obtained from other research methods.

Overall, a multi-disciplinary and multi-method approach, incorporating comparative studies, historical analysis, causal inference methods, and survey and interview-based research, would be necessary to determine the relative impact of net neutrality and financing on innovation in a robust and comprehensive manner.


Me: How do you distinguish innovations that direct relate to Internet access, compared to those which involve non-Internet applications and technologies?

ChatGPT: Distinguishing innovations that directly relate to Internet access versus those that involve non-Internet applications and technologies can be challenging, as many innovations are influenced by both Internet access and non-Internet technologies. However, several approaches could be used to differentiate these types of innovations, including:

  1. Technical classification: Innovations could be classified based on their underlying technologies, with innovations that rely primarily on Internet access being classified as Internet-related innovations and innovations that rely primarily on non-Internet technologies being classified as non-Internet-related innovations.
  2. Market classification: Innovations could be classified based on the markets they serve, with innovations that serve primarily Internet-based markets, such as online commerce or online communication, being classified as Internet-related innovations and innovations that serve primarily non-Internet-based markets, such as manufacturing or agriculture, being classified as non-Internet-related innovations.
  3. End-user classification: Innovations could be classified based on their end-users, with innovations that are primarily designed for use by individuals accessing the Internet being classified as Internet-related innovations and innovations that are primarily designed for use by individuals not accessing the Internet being classified as non-Internet-related innovations.

It is important to note that these classifications are not mutually exclusive, and that many innovations may have elements of both Internet-related and non-Internet-related technologies, markets, and end-users. Nevertheless, using these classifications can provide valuable insights into the extent to which innovations are influenced by Internet access, and can help to distinguish Internet-related innovations from non-Internet-related innovations.

 

Thursday, January 12, 2023

Workarounds, hacks & alternatives to network QoS

Originally published Jan 12th 2023 on my LinkedIn Newsletter - see here for comments

Sometimes, upgrading the network isn't the answer to every problem.

For as long as I can remember, the telecom industry has talked about quality-of-service, both on fixed and mobile networks. There has always been discussion around "fast lanes", "bit-rate guarantees" and more recently "network slicing". Videoconferencing and VoIP were touted as needing priority QoS, for instance. 

There have also always been predictions about future needs of innovative applications, which would at a minimum need much higher downlink and uplink speeds (justifying the next generation of access technology), but also often tighter requirements on latency or predictability.

Cloud gaming would need millisecond-level latency, connected cars would send terabytes of data across the network and so on.

We see it again today, with predictions for metaverse applications adding yet more zeroes - we'll have 8K screens in front of our eyes, running at 120 frames per second, with Gbps speeds and sub-millisecond latencies needed to avoid nausea or other nasty effects. So we'll need 6G to be designed to cope.
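For a sense of where those multi-Gbps claims come from, here's a back-of-envelope sketch (the resolution, frame-rate, colour-depth and compression figures are illustrative assumptions, not measurements):

```python
# Headline "metaverse bandwidth" figures are usually derived by multiplying
# up raw pixel counts, with no compression or foveation assumed.

def raw_bitrate_gbps(width, height, eyes, fps, bits_per_pixel):
    """Uncompressed video bitrate in Gbps."""
    return width * height * eyes * fps * bits_per_pixel / 1e9

# "8K per eye at 120 fps, 24-bit colour" - the sort of worst-case input
# that produces the biggest numbers.
raw = raw_bitrate_gbps(7680, 4320, eyes=2, fps=120, bits_per_pixel=24)
print(f"Uncompressed: {raw:.0f} Gbps")          # ~191 Gbps

# Even an aggressive (assumed) 100:1 video codec still leaves ~2 Gbps.
print(f"With 100:1 compression: {raw / 100:.1f} Gbps")
```

The point is not the exact figure, but how sensitive the headline number is to assumptions that rarely get stated.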

The issue is that many in the network industry don't realise that not every technical problem needs a network-based solution, with smarter core network policies and controls, or huge extra capacity over the radio network (and the attendant extra spectrum and sites to go with it).

Often, there are other non-network solutions that achieve (roughly) the same effects and outcomes. There's a mix of approaches, each with different levels of sophistication and practicality. Some are elegant technical designs. Others are best described as "Heath Robinson" or "MacGyver" approaches, depending on which side of the Atlantic you live.

I think they can be classified into four groups:

  • Software: Most obviously, a lot of data can be compressed. Buffers can be used to smooth out fluctuations. Clever techniques can correct for dropped or delayed packets. There's a lot more going on here though - some examples are described below.
  • Hardware / physical: Some problems have a "real world" workaround. Sending someone a USB memory stick is a (high latency) alternative to sending large volumes of data across a network. Phones with dual SIM-slots (or, now, eSIM profiles) allow coverage gaps or excess costs to be arbitraged.
  • Architectural: What's better? One expensive QoS-managed connection, or two cheaper unmanaged ones bonded together or used for diverse routing? The success of SDWAN provides a clue. Another example is the use of onboard compute (and Moore's Law) in vehicles, rather than processing telemetry data in the cloud or network-edge. In-built sound and image recognition in smart speakers or phones is a similar approach to distributed-compute architecture. That may have an extra benefit of privacy, too.
  • Behavioural: The other set of workarounds exploits human psychology. Setting expectations - or warning of possible glitches - is often preferable to fixing or apologising for problems after they occur. Skype was one of the first communications apps to warn of dodgy connections - and also had the ability to reconnect when the network performance improved. Compare that with a normal PSTN/VoLTE call drop - it might have network QoS, but if you lose signal in an elevator, you won't get a warning, apology or a simplified reconnection.
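As an illustration of the "buffers smooth out fluctuations" point in the software bucket, here is a minimal jitter-buffer sketch (the class, names and buffer depth are invented for illustration):

```python
import heapq

class JitterBuffer:
    """Minimal playout buffer: hold packets until a fixed delay after the
    first arrival, then release them in sequence order."""
    def __init__(self, playout_delay_ms=60):
        self.delay = playout_delay_ms
        self.heap = []  # (sequence_number, payload), ordered by sequence

    def push(self, seq, payload):
        heapq.heappush(self.heap, (seq, payload))

    def pop_ready(self, now_ms, first_arrival_ms):
        """Release the next in-order packet once the playout deadline passes."""
        if self.heap and now_ms >= first_arrival_ms + self.delay:
            return heapq.heappop(self.heap)[1]
        return None

# Packets arriving out of order due to network jitter...
buf = JitterBuffer(playout_delay_ms=60)
buf.push(2, "B")
buf.push(1, "A")
buf.push(3, "C")
# ...come out in order once the playout delay has elapsed.
out = [buf.pop_ready(now_ms=70, first_arrival_ms=0) for _ in range(3)]
print(out)  # ['A', 'B', 'C']
```

The trade-off, of course, is that the buffer converts jitter into a small fixed extra latency - which is exactly why it works so well for streaming and so poorly for twitch-reaction use-cases.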

These aren't cure-alls. Obviously if you're running a factory, you'd prefer not to have the automation system cough politely and quietly tell you to expect some downtime because of a network issue. And we certainly *will* need more bandwidth for some future immersive experiences, especially for uplink video in mixed reality.

But recently I've come across a few examples of clever workarounds or hacks that people in the network/telecom industry probably wouldn't have anticipated. They potentially reduce the opportunity for "monetised QoS", or reduce future network capacity or coverage requirements, by shifting the burden from traffic to something else.

The first example relates to the bandwidth needs for AR/VR/metaverse connectivity - although I first saw this mentioned in the context of videoconferencing a few years ago. It's called "foveated rendering". (The fovea is the densest part of the eye's retina). In essence, it uses the in-built eye-tracking in headsets or good-quality cameras. The system knows which part of a screen or virtual environment you are focusing on, and reduces the resolution or frame-rate of the other sections in your peripheral vision. Why waste compute or network capacity on large swathes of an image that you're not actually noticing?

I haven't seen many "metaverse bandwidth requirement" predictions take account of this. They all just count the pixels & frame rate and multiply up to the largest number - usually in the multi-Gbps range. Hey presto, a 6G use-case! But perhaps don't build your business case around it yet...
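A rough sketch of the foveation arithmetic (the fovea share and peripheral downsampling factor below are assumed, illustrative values, not measured ones):

```python
# Illustrative foveated-rendering arithmetic: only a small share of the
# display is rendered at full resolution; the periphery is downsampled.

def foveated_fraction(fovea_share=0.02, peripheral_downsample=16):
    """Fraction of the full-resolution pixel budget actually needed when
    the periphery is rendered at 1/peripheral_downsample resolution."""
    return fovea_share + (1 - fovea_share) / peripheral_downsample

f = foveated_fraction()
print(f"Pixel budget: {f:.1%} of the naive estimate (~{1/f:.0f}x reduction)")
```

With these assumptions the "count every pixel" forecasts are an order of magnitude too high before compression even enters the picture.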

Network latency and jitter is another area where there are growing numbers of plausible workarounds. In theory, lots of applications such as gaming require low latency connections. But actually, they mostly require consistent and predictable but low-ish latency. A player needs to have a well-defined experience, and especially for multi-player games there needs to be fairness.

The gaming industry - and also other sectors including future metaverse apps - have created a suite of clever approaches to dealing with network issues, as well as more fundamental problems where some players are remote and there are hard speed-of-light constraints. They can monitor latency, and actually adjust and balance the lags experienced by participants, even if it means slowing some participants.
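The lag-balancing trick can be sketched very simply: measure each player's latency, then pad everyone out to the worst case (names and numbers below are illustrative):

```python
def equalise_latency(latencies_ms):
    """Return the artificial delay to add per player so that all players
    experience the same (worst-case) effective latency."""
    worst = max(latencies_ms.values())
    return {player: worst - lat for player, lat in latencies_ms.items()}

players = {"alice": 20, "bob": 55, "carol": 35}
added = equalise_latency(players)
print(added)  # {'alice': 35, 'bob': 0, 'carol': 20}
# Everyone now sees an effective 55 ms - consistent and fair, if not minimal.
```

Note what this implies for network QoS: the game deliberately makes some connections *worse* in order to achieve fairness, which is not a behaviour any "premium low-latency tier" can sell against.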

There are also numerous techniques for predicting or anticipating movements and actions, so network-delivered data might not be needed continually. AI software can basically "fill in the gaps", and even compensate for some sorts of errors if needed. Similar concepts are used for "packet loss concealment" in VoIP or video transmissions. Apps can even subtly speed up or slow down streams to allow people to "catch up" with each other, or have the same latency even when distributed across the world.
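A minimal sketch of the prediction idea - "dead reckoning", as the gaming world calls it - extrapolating an entity's position when an update packet is late (all values are illustrative):

```python
# When a state-update packet is late, extrapolate from the last known
# position and velocity rather than stalling the simulation.

def dead_reckon(last_pos, last_vel, seconds_since_update):
    """Linear extrapolation of position from the last received state."""
    return tuple(p + v * seconds_since_update
                 for p, v in zip(last_pos, last_vel))

# Last update said the player was at (10, 5) moving (2, -1) units/s;
# 0.25 s later, with no fresh packet, we draw them here:
print(dead_reckon((10.0, 5.0), (2.0, -1.0), 0.25))  # (10.5, 4.75)
```

When the real packet finally arrives, the client blends the predicted and actual positions so the correction isn't visible as a jump.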

We can expect much more of this type of software-based mitigation of network flaws in future. We may even get to the point where sending full video/image data is unnecessary - maybe we just store a high-quality 3D image of someone's face and room (with lighting) and just send a few bytes describing what's happening. "Dean turned his head left by 23 degrees, adopted a sarcastic expression and said 'who needs QoS and gigabit anyway?' A cloud outside the window cast a dramatic shadow half a second later". It's essentially a more sophisticated version of Siri + Instagram filters + ChatGPT. (Yes, I know I'm massively oversimplifying, but you get the direction of travel here).

The last example is a bit more left-field. I did some work last year on wireless passenger connectivity on trains. There's a huge amount of complexity and technical effort going into dedicated trackside wireless networks, improving MNO 5G coverage along railways, on-train repeaters for better signal and passenger Wi-Fi using multi-SIM (or even satellite) gateways. None of these are easy or cheap - the reality is that there will be a mix of dedicated and public network connectivity, with cities and rural areas getting different performance, and each generation of train having different systems. Worse, the coated windows of many new trains, needed for anti-glare and insulation, effectively act as Faraday cages, blocking outdoor-to-indoor wireless signals.

It's really hard to take existing rolling-stock out of service for complex retrofits, or to install anything along operational tracks or inside tunnels, and anything electronic like repeaters or new access points needs a huge set of certifications and installation procedures.

So I was really surprised when I went to the TrainComms conference last year and heard three big train operators say they were looking at a new way to improve wireless performance for their passengers. Basically, someone very clever realised that it's possible to laser-etch the windows with a fine grid of lines - which makes them more transparent to 4G/5G, without changing the thermal or visual properties very much. And that can be done much more quickly and easily for in-service trains, one window at a time.

I have to say, I wasn't expecting a network QoS vs. Glazing Technology battle, and I suspect few others did either.

The story here is that while network upgrades and QoS are important, there are often highly inventive workarounds - and very motivated software, hardware and materials-science specialists hoping to solve the same problems via a different path.

Do you think a metaverse app developer would rather work on a cool "foveated rendering" approach, or deal with 800 sets of network APIs and telco lawyers to obtain QoS contracts instead? And how many team-building exercises just involve hiring a high-quality boat to go across a lake, rather than working out how to build rafts from barrels and planks?

We'll certainly need faster, more reliable, lower-latency networks. But we need to be aware that they're not the only source of solutions, and that payments and revenue uplift for network performance and QoS are not pre-ordained.


#QoS #Networks #Regulation #NetNeutrality #5G #FTTX #metaverse #videoconferencing #networkslicing #6G

Sunday, July 24, 2022

New Report on Enterprise Wi-Fi: No, 5G is not enough

(Initially posted on LinkedIn, here. Probably best to use LI for comments & discussion)

Published this week: my full STL Partners report on Enterprise Wi-Fi. Click here to get the full summary & extract.

Key takeaway: Telcos, MNOs & other service providers need to take Wi-Fi 6, 6E & (soon) 7 much more seriously. So do policymakers.

5G is not enough for solving enterprises' connectivity problems on its own. It has important roles, especially in Private 5G guise, but cannot replace Wi-Fi in the majority of situations. They will coexist.

Wi-Fi will remain central to most businesses' on-site connectivity needs, especially indoors, for employees, guests and IoT systems.

Telcos should support Wi-Fi more fully. They need a full toolkit to drive relevance in enterprise - not just a 5G hammer that makes everything look like a nail. CIOs and network purchasers know what they want - and it's not 5G hype or slice-wash.

Newer versions of Wi-Fi solve many of the oft-cited challenges of legacy systems, and are often a better fit with existing IT and networks (and staff skills) than 5G, whether private or public.

Deterministic latency, greater reliability and higher device density make Wi-Fi 6/6E/7 more suitable for many demanding industrial and cloud-centric applications, especially in countries where 6GHz spectrum is available. Like 5G, it's not a universal solution, but it has far greater potential than some mobile-industry zealots seem to think.

Some recommendations:

- Study the roadmaps for Wi-Fi versions & enhancements carefully. There's a lot going on over the next couple of years.
- CSP executives should ensure that 5G "purists" do not control efforts on technology strategy, regulatory engagement, standards or marketing.
- Instead, push a vision of "network diversity", not an unrealistic monoculture. (Read my recent skeptical post on slicing, too)
- Don't compare old versions of Wi-Fi with future versions of 5G. It is more reasonable to compare Wi-Fi 6 performance with 5G Release 15, or future Wi-Fi 7 with Rel17 (and note: it will arrive much earlier)
- 5G & Wi-Fi will sometimes be converged... and sometimes kept separate (diverged). Depends on the context, applications & multiple other factors. Don't overemphasise convergence anchored in 3GPP cores.
- Consider new service opportunities from OpenRoaming, motion-sensing and mesh enhancements.
- The Wi-Fi industry itself is getting better at addressing specific vertical sectors, but still needs more focus and communication on individual industries
- There should be far more "Wi-Fi for Vertical X, Y, Z" associations, events and articles.
- Downplay clunky & privacy-invasive Wi-Fi "monetisation" platforms for venues and transport networks.
- Policymakers & regulators should look at "Advanced Connectivity" as a whole, not focus solely on 5G. Issue 6GHz spectrum for unlicensed use, ideally the whole band.
- Support Wi-Fi for local licensed spectrum bands (maybe Wi-Fi 8). Look at 60GHz opportunities.
- Insist Wi-Fi is included as an IMT-2030 / 6G candidate.

See link for report extract & Exec Summary


Thursday, July 14, 2022

Network Slicing is a huge error for the 5G industry

(Initially posted on LinkedIn, here. Probably best to use LI for comments & discussion)

I've started calling myself a "Slice Denier" or "Slicing Skeptic" on client calls and conference speeches on #5G.

Increasingly, I believe that #NetworkSlicing is one of the worst strategic errors made by the #mobile industry, since the catastrophic choice of IMS for communications applications. The latter has led to the fiascos of #VoLTE and #RCS, and loss of relevance of telcos in communications more broadly.

At best, slicing is an internal toolset that might allow telco operations or product teams (or their vendors) to manage their network resources. For instance, it could be used to separate part of a cell's capacity for FWA, and dynamically adjust that according to demand. It might be used as an "ingredient" to create a higher class of service for enterprise customers - say, for trucks on a highway - or as part of an "IoT service" sold by MNOs. Public safety users might have an expensive, artisanal "hand-carved" slice which is almost a separate network. Maybe next-gen MVNOs could use slices too.

(I'm talking proper 3GPP slicing here - not rebranded QoS QCI classes, private APNs, or something that looks like a VLAN, which will probably get marketed as "slices")

But the idea that slicing is itself a *product*, or that application developers or enterprises will "buy a slice" is delusional.

Firstly, slices will be dependent on [good] coverage and network control. A URLLC slice likely won't work reliably indoors, underground, in remote areas, on a train, on a neutral-host network, or while roaming. This has been a basic failure of every differentiated-QoS monetisation concept for many years, and 5G's often-higher frequencies make it worse, not better.

Secondly, there is no mature machinery for buying, selling, testing, supporting, pricing and monitoring slices. No, the 5G Network Exposure Function won't do it all. I haven't met a Slice salesperson yet, or a Slice-procurement team.
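To make that missing machinery concrete, here is a minimal sketch of what even a bare-bones slice-order record would need to capture before anyone could sell, price or audit a slice. All the names and fields here are invented for illustration - nothing like this is standardised or productised today, which is exactly the point:

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: the commercial dimensions a slice-procurement
# system would have to negotiate, price and monitor. Field names are
# illustrative, not from any 3GPP specification.
@dataclass
class SliceOrder:
    customer: str
    coverage_areas: list            # where must the SLA actually hold?
    indoor_coverage: bool           # often the hard part
    max_latency_ms: float
    min_downlink_mbps: float
    reliability_pct: float          # e.g. 99.99 - who measures it, and how?
    roaming_supported: bool         # usually no - a key gap
    term_months: int
    penalties: dict = field(default_factory=dict)  # SLA-breach remedies

    def unresolved_questions(self) -> list:
        """Flag the commercial gaps discussed in the article."""
        gaps = []
        if self.roaming_supported:
            gaps.append("No inter-operator slice 'boundary controllers' exist")
        if self.indoor_coverage:
            gaps.append("Indoor/underground URLLC coverage is unproven")
        if self.penalties:
            gaps.append("No mature machinery to test or audit SLA breaches")
        return gaps

# A hypothetical enterprise order - trucks on a highway corridor:
order = SliceOrder("TruckCo", ["M4 corridor"], True, 20.0, 50.0, 99.99,
                   False, 12, {"latency_breach": "service credit"})
print(order.unresolved_questions())
```

Even this toy version surfaces open questions with no current owner - which is why "buy a slice" remains a slogan rather than a product.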

Thirdly, a "local slice" of a national 5G network will run headlong into a battle with the desire for separate private/dedicated local 5G networks, which may well be cheaper and easier. It also won't work well with the enterprise's IT/OT/IP domains, out of the box.

There are also many other challenges: multi-operator slices, device-OS links to slice APIs, slice "boundary controllers" between operators, aligning RAN and core slices, regulatory question marks and much more.

To use an appropriate analogy, consider an actual toaster, with settings for different timing, or a setting for bagels. Now imagine Toaster 5.0 with extra software smarts, perhaps cloud-native. Nobody wants to buy a single slice of toast, or a software profile. They'll just buy a toaster for their kitchen, or get an "integrated breakfast solution" including toast in a cafe. They won't care about the slicing software. The chef might, but it's doubtful.

If you see 5G Network Slicing as a centrepiece of future "monetisation", you're in for an unpleasant smell of burning, and probably a blaring smoke alarm too.


 

Tuesday, April 26, 2022

Telcos should focus on "connected data"​ not just "edge computing"​

Note: A version of this article first appeared as a guest blog post written for Cloudera, linked to a webinar presentation on May 4, 2022. See the sign-up link in the comments. This version has minor changes to fit the tone & audience of this newsletter, and tie in with previous themes. This version is also published on my LinkedIn newsletter with a comments thread (here).

Telcos and other CSPs are rethinking their approach to enterprise services in the era of advanced wireless connectivity - including their 5G, fibre and Software-Defined Wide Area Network (SD-WAN) portfolios. 

Many consumer-centric operators are developing propositions for “verticals”, often combining on-site or campus mobile networks with edge computing, plus deeper solutions for specific industries or horizontal applications. Part of this involves helping enterprises deal with their data and overall cloud connectivity as well as local networks. (The original MNO vision of delivering enterprise networks as "5G network slices" partitioned from their national infrastructure has taken a back seat. There is more interest currently in the creation of dedicated on-premise private 5G networks, via telcos' enterprise or integrator units).


At the same time, telecom operators are also becoming more data- and cloud-centric themselves. They are using disaggregated systems such as Open RAN and cloud-native 5G cores, plus distributed compute and data, for their own requirements. This is aimed at running their networks more efficiently, and dealing with customers and operations more flexibly. There are both public and private cloud approaches to this, with hyperscalers like Amazon and disruptors such as Rakuten Symphony and Totogi promising revolutions in future.

As I've said for some time, “The first industry that 5G will transform is the telecom industry itself.”

This poses both opportunities and challenges. Telcos’ internal data and cloud needs may not mirror their corporate customers’ strategies and timing perfectly, especially given the diverse connectivity landscape.

If operators truly want to blend their own transformation journey with that of their customers, what is needed is a much broader view of the “networked cloud” and "distributed data", not just the “telco cloud” or "telco edge" that many like to discuss.

Networked data and cloud are not just “edge computing”

Telecom operators’ discussions around edge/cloud have gone in two separate directions in recent years:

  • External edge computing: The desire by MNOs to deploy in-network edge nodes for end-user applications such as V2X, IoT control, smart city functions, low-latency cloud gaming, or enterprise private networks. Often called “MEC” (mobile edge computing), this spans both in-house edge solutions and a variety of collaborations with hyperscalers such as Azure, Google Cloud Platform, and Amazon Web Services.
  • Internal: The use of cloud platforms for telcos’ own infrastructure and systems, especially for cloud-native cores, flexible billing, and operational support systems (BSS/OSS), plus new open and virtualised RAN technology for disaggregated 4G/5G deployments. Some functions need to be deployed at the edge of the network (such as 5G DUs and UPF cores), while others can be more centralised.

Of these two trends, the latter has seen more real-world utilisation. It is linked to solving clear and immediate problems for the CSPs themselves.

Many operators are working with public and private clouds for their operational needs—running networks, managing subscriber data and experience, and enabling more automation and control. While there are raging debates about “openness” vs. outsourcing to hyperscalers, the underlying story—cloudification of telcos’ networks and IT estates—is consistent and accelerating. The timing constraints of radio signal processing in Open RAN, and the desire to manage ultra-low latency 5G “slices” in future 3GPP releases are examples that need edge compute. There may also be roles for edge billing/charging, and various security functions.

In contrast, telcos' customer-facing cloud, edge and data offers have been much slower to emerge. The focus and hype about MEC has meant operators’ emphasis has been on deploying “mini data centres” deep in their networks—at cell towers or aggregation sites, or fixed-operators’ existing central office locations. Discussion has centred on “low latency” applications as the key differentiator for CSP-enabled 5G edge. The focus has also been centred on compute rather than data storage and analysis. Few telcos have given much consideration to "data at rest" rather than "data in motion" - but both are important for developers.

This has meant a disconnect between the original MEC concept and the real needs of enterprises and developers. In reality, enterprises need their data and compute to occur in multiple locations, and to be used across multiple time frames—from real time closed-loop actions, to analysis of long-term archived data. It may also span multiple clouds—as well as on-premise and on-device capabilities beyond the network itself.

What is needed is a more holistic sense of “networked cloud” to tie these diverse data storage and processing needs together, along with documentation of connectivity and the physical source and path of data transmission.


Potentially there are some real sources of telco differentiation here - as opposed to some of the more fanciful MEC visions, which are more realistically MNOs just acting as channel partners for AWS Outposts and Azure's equivalent Private MEC.

An example of the “networked cloud”

Consider an example: video cameras for a smart city. There are numerous applications, ranging from public transit and congestion control, to security and law enforcement, identification of free parking spots, road toll enforcement, or analysing footfall trends for retailers and urban planners. In some places, cameras have been used to monitor social-distancing or mask-wearing during the pandemic. The applications vary widely in terms of immediacy, privacy issues, use of historical data, or the need for correlation between multiple cameras. 

CSPs have numerous potential roles here, both for underlying connectivity and the higher-value services and applications.

But there may be a large gap between when “compute” occurs, compared to when data is collected and how it is stored. Short-term image data storage and real-time analysis might be performed on the cameras themselves, an in-network MEC node, or at a large data centre, perhaps with external AI resources or combined with other data sets. Longer-term data for trend analysis or historic access to event footage could be archived either in a city-specific facility or in hyperscale sites.
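The split between where compute happens and where data rests can be sketched as a simple placement policy. This is an illustrative toy, not any real product's logic - the tier names and thresholds are invented for the smart-city camera example above:

```python
# Illustrative sketch: a trivial placement policy for the smart-city
# camera example, choosing where an analytics workload runs and where
# its footage is stored. Tiers and thresholds are invented assumptions.
def place_workload(max_latency_ms: float, retention_days: int) -> dict:
    """Pick compute and storage tiers for a camera analytics task."""
    # Real-time tasks (e.g. spotting free parking spots) must stay on
    # the camera or at an in-network MEC node; relaxed tasks can run
    # in large, distant data centres.
    if max_latency_ms < 20:
        compute = "on-camera"
    elif max_latency_ms < 100:
        compute = "in-network MEC node"
    else:
        compute = "hyperscale data centre"

    # Short-lived footage stays near the edge; long-term archives for
    # trend analysis or evidentiary access go to city or hyperscale sites.
    if retention_days <= 1:
        storage = "edge cache"
    elif retention_days <= 90:
        storage = "city data facility"
    else:
        storage = "hyperscale archive"

    return {"compute": compute, "storage": storage}

# A real-time congestion-control loop vs. year-long footfall trend analysis:
print(place_workload(10, 1))
print(place_workload(500, 365))
```

The point of the sketch is that the two answers differ per application: "where is the edge?" has no single response, which is why a networked-cloud view matters more than a single MEC node.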

(I wrote a long article about Edge AI and analytics last year - see here)


For some applications, there will need to be strong proofs of security and data custody, especially if there are evidentiary requirements for law enforcement. That may extend to knowing (and controlling) the specific paths across which data transits, how it is stored, and the privacy and tamper-resistance compliance mechanisms employed.

Similar situations—with both opportunities and challenges—exist in verticals from vehicle-to-everything to healthcare to education to financial services and manufacturing. CSPs could become involved in the “networked cloud” and data-management across these areas—but they need to look beyond narrow views of edge-compute. Telcos are far from being the only contenders to run these types of services, but some operators are taking it seriously - Singtel offers video analytics for retail stores, for instance.

Location-specific data

As a result, the next couple of years may see something of a shift in telcos’ discussions and ambitions around enterprise data. There will be huge opportunities emerging around enterprise data’s chain-of-custody and audit trails—not only defining where processing takes place, but also where and how data is stored, when it is transmitted, and the paths it takes across the network(s) and cloud(s).

(A theme for another newsletter article or LI post is on enterprises' growing compliance headaches for data transit - especially for international networks. There may be cybersecurity risks or sanctions restrictions on transit through some countries or intermediary networks, for instance. Some corporations are even getting direct access into Internet exchanges and peering-points for greater control).

In some cases, CSPs will take a lead role here, especially where they own and control the endpoints and applications involved. Then they can better coordinate the compute and data-storage resources. In other cases, they will play supporting roles to others that have true end-to-end visibility. There will need to be bi-directional APIs—essentially, telcos become both importers and exporters of data and connectivity. This is especially true in the mobile and 5G domain, where there will inevitably be connectivity “borders” that data will need to transit. (A recent post on the need for telcos to take on both lead and support roles is here)

There may be particular advantages for location-specific data collected or managed by operators. For example, weather sensors co-located with mobile towers could provide useful situational awareness both for the telco’s own operational purposes as well as to enterprise or public-sector customers, such as smart city authorities or agricultural groups. 

Telcos also have a variety of end-device fleets that they directly own, or could offer as a managed service—for instance their own vehicles, or city-wide security cameras. These can leverage the operator’s own connectivity (typically 5G) as well as anchor some of the data origination and consumption.

Conclusion

Telecom operators should shift their enterprise focus from mobile edge computing (MEC) to a wider approach built around "networked data". Much of the enterprise edge will reside beyond the network and telco control, in devices or on-premise gateways and servers. Essentially no enterprise IT/IoT systems will be wholly run "in" the 5G or fixed telco network, as virtual functions in a 3GPP or ORAN stack.

They instead should look for involvement in end-point devices, where data is generated, where and when it is stored and processed—and also the paths through the network it takes. This would align their propositions with connectivity (between objects or applications) as well as property (the physical location of edge data centres or network assets).

There are multiple stages to get to this new proposition of “networked cloud”, and not all operators will be willing or able to fulfil the whole vision. They will likely need to partner with the cloud players, as well as think carefully about treatment of network and regulatory boundaries.

Nevertheless, the broadening of scope from “edge compute” to “networked cloud” seems inevitable. The role of telcos as pure-play "edge" specialists makes little sense and may even be a distraction from the real opportunities emerging at higher levels of abstraction.

The original version of this article is at https://blog.cloudera.com/telco-5g-returns-will-come-from-enterprise-data-solutions/

I'll be speaking on an upcoming webinar with @cloudera about "Enterprise data in the #5G era" on May 4, 2022 - https://register.gotowebinar.com/register/3531625172953644816

#cloud #edgecomputing #5G #telecoms #latency #IoT #smartcities #mobile #telcos

Thursday, April 07, 2022

Geopolitics, war & network diversity

This post was originally published on my LinkedIn Newsletter (here). Please sign up, and join the discussion thread there.

Background

I'm increasingly finding myself drawn into discussions of #geopolitics and how it relates to #telecoms. This goes well beyond normal regulatory and policymaking involvement, as it means that rules - and opportunities and risks - are driven by much larger "big picture" strategic global trends, including the war in Ukraine.

As well as predicting strategic shifts, there are also lessons to be learned from events at a local, tactical level which have wider ramifications. Often, there will be trade-offs against normal telecoms preoccupations with revenue growth, theoretical "efficiency" of spectrum or network use, standardisation, competition and consumer welfare.

This is the first of what will probably be a regular set of articles on this broader theme. Here, I'm focusing on the Ukraine war, in the context some of the other geopolitical factors that I think are important. I'm specifically thinking about what they may mean for the types of network technology that are used, deployed and developed in future. This has implications for #5G, #6G, #satellite networks, #WiFi, #FTTX and much more, including the cloud/edge domains that support much of it. 

 



Ukraine and other geopolitical issues

This article especially drills into how the conflict in Ukraine has manifested in terms of telecoms and connectivity, and attempts to extrapolate to some early recommendations for policymakers more broadly.

I'm acutely conscious of the ongoing devastation and hideous war crimes being perpetrated there - I hope it isn't too early to try to analyse the narrow field of networking dispassionately, while conflict still rages.

For context, as well as Ukraine, other geopolitical issues impacting telecoms include:

  • US / West vs. China tensions, from trade wars to broader restrictions on the use of Huawei and other vendors' equipment, as well as sanctions on the export of components.
  • Impact of the pandemic on supply chains, plus the greater strategic and political importance of resilient telecom networks and devices in the past two years.
  • The politics of post-pandemic recovery, industrial strategy and stimulus funds. Does this go to broadband deployment, themes such as Open RAN, national networks, smart cities/infrastructure, satellite networks... or somewhere else?
  • Tensions within the US, and between US and Europe over the role and dominance of "Big Tech". Personal data, monopoly behaviour, censorship or regional sovereignty etc. This mostly doesn't touch networks today, but maybe cloud-native will draw attention.
  • Semiconductor supply-chain challenges and the geopolitical fragility of Taiwan's chip-fabrication sector.
  • How telecoms (and cloud) fits within Net Zero strategies, either as a consumer of energy, or as an enabler of green solutions.
  • Cyber threats from nation-state actors, criminal cartels and terrorist-linked groups - especially aimed at critical infrastructure and health/government/finance systems.

In other words, there's a lot going on. It will impact 5G, 6G development, vendor landscapes, cloud - and also other areas such as spectrum policy and Internet governance.

Network diversity as a focus

I've written and spoken before about the importance of "network diversity" and the dangers of technology monocultures, including over-reliance on particular standards (eg 5G) or particular business models (eg national MNOs) as some sort of universal platform. That argument now looks more important than ever.

The analogy I made with agriculture, or ecological biodiversity, is proving to be robust.

(Previous work includes this article from 2020 about private enterprise networks, or my 2017 presentation keynote on future disruptions, at Ofcom's spectrum conference. (The blue/yellow image of wheat fields, repeated here in this post, was chosen long before it became so resonant as the Ukrainian flag). I've also covered the shift towards Open RAN and telecoms supplier diversification – including a long report I submitted to the UK Government's Diversification Task Force last year - see this post and download the report).

A key takeout from my Open RAN report was that demand diversity is as important as creating more supply choices in a given product domain. Having many classes of network operator and owner – for instance national MNOs, enterprise private 4G/5G, towercos, industrial MNOs and neutral hosts – tends to pull through multiple options for supply in terms of both vendor diversity and technology diversity. They have different requirements, different investment criteria and different operational models.

In Ukraine, the "demands" for connectivity are arising from an even broader set of sources, including improvised communications for refugees, drones and military personnel.

The war in Ukraine & telecoms

There have been numerous articles published which highlight the surprising resilience and importance of Ukrainian telecoms during the war so far. Bringing together and synthesising multiple sources highlights a number of important issues around network connectivity:

  • The original “survivability” concept of IP networks seems to have been demonstrated convincingly. Whether used for ISPs’ Internet access, or internal backhaul and transport for public fixed and mobile networks, the ability for diverse and resilient routing paths seems to have mostly been successful.
  • Public national mobile networks - mostly 4G in Ukraine's case - have proven essential in many ways, whether for reporting information about enemy combatants' locations and activities, obtaining advice from government authorities, or coordinating evacuation as refugees. (I'm not sure if subway stations used as shelters have underground cellular coverage, or if there is WiFi). Authorities also seem to have had success in getting citizens to self-censor, to avoid disclosing sensitive details to their enemies.
  • Reportedly the Russian forces haven't generally targeted telecoms infrastructure on a wide scale. This was partly because they have been using commercial mobile networks themselves. However, because roaming was disabled, Russian military use of their encrypted handsets and SIMs on public 3G/4G networks seems to have failed. Two articles here and here give good insight, and also suggest there may be network surveillance backdoors which Russia may have exploited. There have also been reports of stingrays ("fake" base stations used for interception of calls / identity) being deployed. It also appears that some towns and cities - notably the destroyed city of Mariupol - have been mostly knocked offline, partly because the electrical grid was attacked first.
  • Ukraine’s competitive telecoms market has probably helped its resilience. There is a highly fragmented fixed ISP landscape, with very inexpensive connections. There are over a dozen public peering-points across the country. There are three main MNOs, with many users having SIMs from 2+ operators. (This is a good overview article - https://ukraineworld.org/articles/ukraine-explained/key-facts-about-ukraines-telecom-industry). It seems they have enabled some form of national roaming to allow subscribers to attach to each others' networks.
  • WiFi hotspots (likely with mobile backhaul) have been used by NGOs evacuating refugees by buses.
  • Although they are still only used at a small scale, the LEO satellite terminals from SpaceX's StarLink seem to be an important contributor to connectivity - not least as a backup option. Realistically, satellite isn't appropriate for millions of individual homes - and especially not personal vehicles and smartphones - but it is an important part of the overall network-diversity landscape. Various commentators have suggested it is useful as a backup for critical infrastructure connectivity, as well as for mobile units such as special forces.
  • Another satellite broadband provider, Viasat, apparently suffered a cyberattack at the start of the war (link here), which knocked various modem users offline (or even "bricked" the devices), reportedly including Ukrainian government organisations. Investigations haven't officially named Russia, but a coincidence seems improbable. This attack also impacted users outside Ukraine.
  • Various peer-to-peer apps using Bluetooth or WiFi allow direct connections between phones, even if wide area connections are down (see link)
  • There have been some concerning reports about the impact of GPS jammers on the operation of cellular networks, which may use it as a source of “timing synchronisation” to operate properly, especially for TDD radio bands. While this has long been a risk for individual cell-sites from low-power transmitters, the use of deliberate electronic warfare tools could potentially point to broader vulnerabilities in future.
  • There has been wide use of commercial drones like the DJI Mavic-3 for surveillance (video and thermal imaging), or modified to deliver improvised weaponry. These use WiFi to connect to controllers on the ground, as well as a proprietary video-transmission protocol (called O3+), which apparently has a range of up to 15km using unlicensed spectrum. Some of the "Aerorozvidka" units reportedly then use StarLink terminals to connect back to command sites to coordinate artillery attacks (link).

In short, it seems that Ukraine has been well served by having lots of connectivity options - probably including some additional military systems that aren't widely discussed. It has benefited from multiple fixed, cellular and satellite networks, with potential for interconnect, plus inventive "quick fixes" after failures and collaboration between providers. It is exploiting licensed and unlicensed spectrum, with cellular, Wi-Fi and other technologies.

In other words, network diversity is working properly. There appears to be no single point of failure, despite deliberate attacks by invading forces and hackers. Connectivity is far from perfect, but it has held up remarkably well. Perhaps the full range of electronic warfare options hasn't been used - but given the geographical size of Ukraine and the inability of Russian forces to maintain supply-lines to distant units, that is also unsurprising.

Another set of issues that I haven't really examined concerns connectivity within sanctions-hit Russia. Maybe it will have to develop more local network-equipment manufacturers - if they can get the necessary silicon and other components. It probably will not wish to over-rely on Huawei & ZTE any more than some Western countries have been happy with Nokia and Ericsson as primary options. More problematic may be fixed-Internet routers, servers, WiFi APs and other Western-dominated products. I can't say I'm sympathetic, and I certainly don't want to offer suggestions. Let's see what happens.

Recommendations for policymakers, industry bodies and regulators

So what are the implications of all this? Hopefully, few other countries face a similar invasion by a large and hostile army. But preparedness is wise, especially for countries with unfriendly neighbours and territorial disputes. And even for everywhere else, the risks of cyberattacks, terrorism, natural disasters - or even just software bugs or human error - are still significant.

I should stress that I'm not a cybersecurity or critical infrastructure specialist. But I can read across from other trends I'm seeing in telecoms, and in particular I'm doing a lot of work on "path dependency", where small, innocent-seeming actions end up having long-term strategic impacts and can lock in technology trajectories.

My initial set of considerations and recommendations:

  • As a general principle, divergence in technology should be considered at least as positively as convergence. It maintains optionality, fosters innovation and reduces single-point-of-failure risks.
  • National networks and telcos (fixed and mobile) are essential - but cannot do everything. They also need to cooperate during emergencies - a spirit of collaboration which seems to have worked well during the pandemic in many countries.
  • Normal ideas about cyber-resilience and security may not extend to the impact of full-scale military electronic warfare units, as well as more "typical" online hacking and malware attacks.
  • Having separate "air-gapped" networks available makes sense not just for critical communications (military, utilities etc) but for more general use. It isn't inefficient - it's insurance. There may be implications here for network-sharing in some instances.
  • Thought needs to be given to emergency fallbacks and improvised work-arounds, for instance in the event of mass power outages or sabotage. This is particularly important for software/cloud-based networks, which may be less "fixable" in the field. Can a 5G network be "bodged" (that's "MacGyvered" to my US friends)? As a sidenote - how have electric vehicles fared in Ukraine?
  • Unlicensed spectrum and "permissionless communications" are hugely important during emergency situations. Yes, they lack central control or lawful intercept. But that's entirely acceptable in extreme circumstances.
  • Linkages between technologies, access networks and control/identity planes should generally be via gateways that can be closed, controlled or removed if necessary. If one is attacked, the rest should be firewalled off from it. For the same reason "seamless" should be a red-flag word for cross-tech / cross-network roaming. Seams are important. They offer control and the ability to partition if necessary. "Frictionless" is OK, as long as friction can be re-imposed if needed.
  • Governments should be extremely cautious of telcos extending 3GPP control mechanisms – especially the core network and slicing – to fixed broadband infrastructure. Fixed broadband is absolutely critical, and complex software dependencies may trade off fine-grained control vs. resilience - and offer additional threat surfaces.
  • Democratising and improving satellite communications looks like an ever more wise move, for all sorts of reasons. It's not a panacea, but it's certainly "air-gapped" as above. 3GPP-based "non-terrestrial" networks, eg based on drones or balloons, also have potential - but will ideally be able to work independently of terrestrial networks if needed.
  • I haven't heard much about LPWAN and LoRa-type networks, but I can imagine that being useful in emergency situations too.
  • Sanctions, trade wars and supply-chain issues are highly unpredictable in terms of intended and unintended consequences. Technology diversity helps mitigate this, alongside supplier diversity in any one network domain.
  • Spectrum policy should enable enough scale economies to ensure good supply of products (and viability of providers), but not *so* much scale that any one option drives out alternatives.
  • The role and impact of international bodies like ITU, GSMA and 3GPP needs careful scrutiny. We are likely to see them become even more political in future. If necessary, there may have to be separate "non-authoritarian" and "authoritarian" versions of some standards (and spectrum policies). De-coupling and de-layering technologies' interdependency - especially radio and core networks - could isolate "disagreements" in certain layers, without undermining the whole international collaboration.
  • There should be a basic minimum level of connectivity that uses "old" products and standards. Maybe we need to keep a small slice of 900MHz spectrum alive for generator-powered GSM cells, and a box of cheap phones in bunkers - essentially a future variant of Ham Radio.

So to wrap up, I'm ever more convinced that Network Diversity is essential. Not only does it foster innovation, and limit oligopoly risk, but it also enables more options in tragic circumstances. We should also consider the potential risks of too much sophistication, and of pursuing efficiency and performance at all costs. What happens when things break (or get deliberately broken)?

In the meantime, I'm hoping for a quick resolution to this awful war. Slava Ukraini!

Sidenote: I am currently researching the areas of “technology lock-in” and “path dependence”. In particular, I have been investigating the various mechanisms by which lock-in occurs and strategies for spotting its incipience, or breaking out of it. Please get in touch with me, if this is an area of interest for you.