
Monday, May 01, 2023

A critical enabler for broadband competition - Marketplaces for buying and selling open access FTTP

This post originally appeared on Apr 18 on my LinkedIn feed, which is now my main platform for both short posts and longer-form articles. It can be found here, along with the comment stream. Please follow / subscribe to receive regular updates (about 1-3 / week)

Following yesterday's post on mobile #neutralhost operators as aggregators for wholesale access to municipality-level #smallcells and assets/permits, I think something roughly similar is happening in #FTTP.

An aggregation & marketplace tier is emerging for #ISPs, #AltNets and #infracos, spanning the UK fixed #broadband market's various groups:

- Incumbents with wholesale & retail units, although in theory separated - BT Retail & Openreach, and VMO2 (Virgin) with its new wholesale JV Nexfibre (with Liberty Global & InfraVia)
- AltNets with their own FTTP infrastructure solely for their own ISP retail services, eg Hyperoptic
- AltNets with FTTP for both inhouse ISP retail and wholesale to others
- Wholesale-only FTTP providers such as CityFibre
- Retail-only ISPs, such as Zen & TalkTalk, which buy wholesale fibre (and historically copper / FTTC)

The wholesale market is expanding rapidly, with infracos still building, Openreach accelerating (and trying to discount with its contentious Equinox 2 plan) and existing AltNets looking to supplement slow conversion of homes-passed to homes-connected by offering access to other ISPs.

But the patchwork quilt of wholesale FTTP is very messy. There is growing overbuild, lots of "passed" homes that need extra work to get to individual buildings (or inside them to flats), a mishmash of vendors and construction practices, variable-quality networks and processes - and ongoing consolidation and possible financial woes.

This brings a need for aggregation & simplification. There is both a "buy" and a "sell" side here.

Retail ISPs want well-defined and standardised wholesale fibre access across multiple FTTP owners - both major players like Openreach and AltNets. They want to sell consistent products to end-customers, with promises on provisioning ("live next Tuesday at 11am") and ways to deal with faults. They don't want 50 integration projects - but they do want good pricing.

The AltNets, meanwhile, want to be able to sell to those ISPs, even if they've built IT systems and processes that weren't originally designed for wholesale. They also need to conform to Ofcom's new one-touch-switching rules.
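To make the aggregation idea concrete, here's a minimal Python sketch of what such a marketplace tier might look like structurally. All names and fields are invented for illustration - this is not any real CWP, CityFibre or Ofcom-specified interface:

    # Hypothetical OASaaS-style aggregation layer (illustrative only)
    from dataclasses import dataclass
    from typing import Protocol

    @dataclass
    class WholesaleOrder:
        uprn: str              # UK property reference for the premises
        speed_mbps: int
        activation_slot: str   # e.g. "Tuesday 11am"

    class FibreProvider(Protocol):
        """Each AltNet / infraco implements one adapter to join the platform."""
        def covers(self, uprn: str) -> bool: ...
        def place_order(self, order: WholesaleOrder) -> str: ...

    class Aggregator:
        """One standard interface for retail ISPs, fanning out to whichever
        wholesale FTTP network actually passes the premises."""
        def __init__(self, providers: list[FibreProvider]):
            self.providers = providers

        def place_order(self, order: WholesaleOrder) -> str:
            for p in self.providers:
                if p.covers(order.uprn):
                    return p.place_order(order)  # one integration, many networks
            raise LookupError("No wholesale network passes this premises")

The retail ISP writes one integration against the aggregator, and each network owner writes one adapter to reach every ISP on the platform - instead of the 50 bilateral projects nobody wants.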

Maybe I'll think of a snappier term, but given that the #ConnectedNorth conference took place in Manchester, the term Open Access Solution as a Service, or #OASaaS, seems rather fitting...

There are already a number of OASaaS contenders. Some AltNets formed the Common Wholesale Platform (CWP) in 2020. CityFibre is working on its own ecosystem, with Toob as its first partner. There's also The Fibre Café, Vitrifi & BroadbandHub - as well as TOTSCo, which is purely focused on the one-touch switching process. Not all seem to focus equally on the buy and sell sides.

I wonder if agreed standards or specs (or even regulation) are needed. Perhaps an equivalent to JOTS (Joint Operator Technical Specification) for shared/mobile infrastructure such as neutral host systems? We don't want OASaaS to look back in anger...

 

Sunday, April 30, 2023

A new view on Neutral Host - the role of cities and municipalities

This post originally appeared on Apr 17 on my LinkedIn feed, which is now my main platform for both short posts and longer-form articles. It can be found here, along with the comment stream. Please follow / subscribe to receive regular updates (about 1-3 / week)

I'm at the #ConnectedNorth event in Manchester today and tomorrow. There's a lot about gigabit fibre rollouts and uptake, as well as a big emphasis on connected communities and cities - but this post is about mobile densification and small cells.

A key theme here is the fast-evolving model for #neutralhost mobile for small cells and network capacity in-fill in cities. An NH is a 3rd-party wholesale provider which enables multiple tenant 4G/5G mobile providers - generally MNOs, but potentially private networks as well.

A few years ago, when I was running NH workshops with Peter Curnow-Ford, we identified this area of metro infill as one with potential, but limited actual deployments.

There are numerous challenges - MNOs ideally don't want separate deals with each city authority, while cities don't want multiple MNOs independently requesting 100s of sites, with the associated street clutter, road closures and so on. Authorities also want both to make money from access to assets such as lampposts, and to improve connectivity for citizens and businesses as fast as possible.

One option floated was for authorities to build out their own private 4G/5G networks, then allow MNOs to roam onto them, or use some sort of MOCN network-sharing arrangement. But MNOs each have different coverage / capacity holes, different spectrum bands, different customer groups - and they also worry about security, the ability to manage radio units, do carrier aggregation and so on. The idea of a single cell network in its own spectrum, with multiple MNO tenants, is appealing but sometimes unworkable. (It might work OK in villages or indoors, though).

What's happening is that another model is evolving. Local authorities like city councils are contracting with several infrastructure specialists - companies like Cellnex UK, Freshwave, Ontix, BAI Communications and Shared Access - to run what are essentially small-cell-as-a-service offers. These act as intermediaries, allowing local authorities to create standard contracts, and MNOs to follow standardised processes for getting access at each site.

It reduces the frictions and costs of the paperwork - and also allows for infrastructure-sharing to evolve over time where it makes sense. Coupled with vRAN or open RAN it can put some of the electronics into central facilities, reducing street-side box numbers. And it means MNOs can get coverage in their preferred locations, with backhaul/fronthaul and power supplies simplified.

The competitive infraco/towerco angle, rather than exclusive area concessions, allows MNOs to choose the provider that is the best fit - and without needing different processes in each city.

It's not quite what I expected NH models to look like - and they may differ in the US or across Europe - but it seems to make good sense here in the UK.

 

Saturday, April 29, 2023

6G convergence or "network of networks" must be bi-directional, not assume a 3GPP umbrella

This post originally appeared on my LinkedIn feed, which is now my main platform for both short posts and longer-form articles. It can be found here, along with the comment stream. Please follow / subscribe to receive regular updates (about 1-3 / week)

Following on from my (rather controversial) post the other day about #6G and #IMT2030 needing to be indoor-primary and also have an IEEE / #WiFi candidate, I'm now going to *further* annoy various people.

There's a lot of talk about 6G being a "network of networks". This follows on from previous similar themes about #convergence and #HetNets. At one level I agree, but I think there needs to be a perspective shift.

There has been a long string of attempts to blend Wi-Fi and cellular, going all the way back to UMA in the 2G/3G era around 2005. (I was a vociferous critic).

There's been an alphabet-zoo of acronyms covering 3GPP gateway functions or selection/offload approaches - GAN, ANDSF, TWAG, N3IWF, ATSSS - and probably others I've forgotten. From the Wi-Fi side there's been Hotspot 2.0 and others. More recently we've seen an attempt to bridge fixed and mobile networks, even going as far as pitching 3GPP-type cores for fixed ISPs.

Pretty much all of these have failed to gain traction. They've had limited deployments and successes here and there, but nobody can claim that true "converged wireless" is ubiquitous or even common. 99% of WiFi has no connection to cellular. Genuine "offload" is tiny.

But despite this, the 6G R&D and vision seems to be looking to do it all over again. This phrase "network of networks" cropped up regularly at the 6GWorld #6Gsymposium events I attended this week. It now usually includes integrating #satellite or non-terrestrial (NTN) capabilities as much as Wi-Fi.

But there's a bit of an unstated assumption I think needs to be challenged. There seems to be unquestioned acceptance that the convergence layer - or perhaps the "umbrella" sheltering all the various technologies - is necessarily the 3GPP core network.

I think this is a problem. Many of the new and emerging 6G stakeholders (for instance enterprises, satellite operators, or fixed providers) do not understand 3GPP cores, nor have the almost religious devotion to that model common in the legacy cellular sector.

So I think any "convergence" in IMT2030 must be defined as bi-directional. Yes, Wi-Fi and satellite can slot into a 3GPP umbrella. But satellite operators need to be able to add terrestrial 6G as an add-on to their systems, while Wi-Fi controllers (on-prem or cloud based) should be able to look after "naked" (core-free) 3GPP radios where appropriate.
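As a very rough software sketch of what "bi-directional" means here - with all names invented by me, not taken from any 3GPP or IEEE specification - the umbrella becomes a role that either side can play:

    # Purely illustrative abstraction - not a real 3GPP or IEEE interface
    from typing import Protocol

    class Umbrella(Protocol):
        def attach(self, radio: str) -> None: ...

    class CellularCore:
        """Traditional direction: Wi-Fi and satellite slot under a 3GPP core."""
        def attach(self, radio: str) -> None:
            print(f"3GPP core managing {radio}")

    class WiFiController:
        """Reverse direction: looks after 'naked', core-free 3GPP radios."""
        def attach(self, radio: str) -> None:
            print(f"Wi-Fi controller managing {radio}")

    def bring_up(umbrella: Umbrella, radios: list[str]) -> None:
        for r in radios:
            umbrella.attach(r)

    bring_up(CellularCore(), ["wifi8-ap", "ntn-terminal"])  # Wi-Fi/NTN under 3GPP
    bring_up(WiFiController(), ["6g-radio"])                # 3GPP radio under Wi-Fi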

This would also flow through to authentication methods, spectrum coordination and so on. It should also be reflected in government policy & regulation.

My view is that 3GPP-led convergence has largely failed. Maybe that gets fixed in the 5G/6G eras, but maybe it won't. We need #5G and 6G systems to have both northbound and southbound integration options.

I also think we need to recognise that "convergence" is itself only one example of "combination" of networks. There are numerous other models, such as bonding or hybrids that connect 2+ separate networks in software or hardware.

 

Friday, April 28, 2023

6G must be indoor-primary and have a Wi-Fi candidate technology

This post originally appeared on my LinkedIn feed, which is now my main platform for both short posts and longer-form articles. It can be found here, along with the comment stream. Please follow / subscribe to receive regular updates (about 1-3 / week)

I'm giving a lot of thought to #6G design goals, priorities & technology / policy choices. Important decisions are coming up. I'll be exploring them in coming weeks and months. Two important ones I see:

- 6G / #IMT2030 must be "indoor-primary"
- There must be an IEEE / Wi-Fi Alliance candidate tech for 6G

The first one is self-evident. The vast bulk of mobile use - and an even larger % of total wireless use - is indoors. It's inside homes, offices, schools, factories, warehouses, public spaces like malls and stadia - as well as inside vehicles like trains. Even outdoors, a large % of usage is on private sites like industrial complexes or hospital campuses.

Roughly 80% of mobile use is indoors - more if you include wireless streaming to smart TVs and laptops/tablets. By the 6G era in the 2030s, there will be even more indoor wireless use for #industrialautomation, #gaming, education, healthcare, #robotics and #AR / #VR / #metaverse and so on.

This implies that economic, social, welfare and cultural upsides will be indoor-primary. 80%+ of any GDP uplift will be indoor-generated. This suggests 6G tech design & standards - and associated business models and regulation - should be indoor-oriented too.

The IEEE / #WiFi idea follows on from this. The default indoor wireless tech today is Wi-Fi. There is a lot of indoor cellular use, but currently 5G is poorly supported indoors - and certainly not everywhere.

While 5G and future 6G indoor #smallcells, #neutralhost and repeaters / DAS are evolving fast, *nobody* expects true ubiquity. Indoor cellular will remain patchy, especially multi-operator. And many devices (eg TVs) don't have cellular radios anyway.

This means that WiFi - likely future #WiFi8 and #WiFi9 - will remain central to in-building connectivity in the 6G era, no matter how good reconfigurable surfaces or other cellular innovations become.

IEEE decided not to pitch WiFi6 formally for 5G / IMT2020, but instead just showed that it surpassed all the metrics. But "we could have done it if we wanted" isn't good enough. It's because of this lower visibility that there are no government-funded "WiFi Testbed Programs" or "WiFi Innovation Centres of Excellence".

Governments are ITU members and listen to it. If policymakers want the benefits of full connectivity, they need to support it with spectrum, targets and funding, across *all* indoor options.

And if the WiFi industry wants full / easy access to new resources, it needs to be an official 6G / IMT2030 technology. It needs access to IMT licensed spectrum, especially for local licenses with AFC.

This idea will be very unpopular with both the cellular industry (3GPP pretends it is the "keeper of the G's") and the WiFi sector, which sees it as a lot of extra work & politics.

But I think it's essential for IMT2030 to embrace network diversity, plus ownership- & business-model diversity as central elements of 6G.

 

Thursday, February 23, 2023

Local networks: when telecoms becomes "pericoms"

Published via my LinkedIn Newsletter - see here to subscribe / see comment thread

"Telecoms" or "telecommunications" is based on the Greek prefix "tele-".

It means "at a distance, or far-off". It is familiar from its use in other terms such as telegraph, television or teleport. And for telecoms, that makes sense - we generally make phone calls to people across medium or long distances, or send then messages. Even our broadband connections generally tend to link to distant datacentres. The WWW is, by definition, worldwide.

The word "communications" actually comes from a Latin root, meaning to impart or share. Which at the time, would obviously have been done mostly through talking to other people directly, but could also have involved writing or other distance-independent methods.

This means that distant #communications, #telecoms, has some interesting properties:

  • The 2+ distant ends are often (but not always) on different #networks. Interconnection is therefore often essential.
  • Connecting distant points tends to mean there's a good chunk of infrastructure in between them, owned by someone other than the users. They have to pay for it, somehow.
  • Because the communications path is distant, it usually makes sense for the control points (switches and so on) to be distant as well. And because there's typically payment involved, the billing and other business functions also need to be sited "somewhere", probably in a #datacentre, which is also distant.
  • There are a whole host of opportunities and risks with distant communications that mean governments take a keen interest. There are often licenses, regulations and internal public-sector uses - notably emergency services.
  • The infrastructure usually crosses the "public domain" - streets, airwaves, rooftops, dedicated tower sites and so on. That brings additional stakeholders and rule-makers into the system.
  • Involving third parties tends to suggest some sort of "service" model of delivery, or perhaps government subsidy / provision.
  • Competition authorities need to take into account huge investments and limited capacity/scope for multiple networks. That also tends to reduce the number of suppliers to the market.

That is telecommunications - distant communications.

But now consider the opposite - nearby communications.

Examples could include a private 5G network in a factory, a LAN in an office, a WiFi connection in the home, a USB cable, or a Bluetooth headset with a phone. There are plenty of other examples, especially for IoT.

These nearby examples have very different characteristics to telecoms:

  • Endpoints are likely to be on the same network, without interconnection
  • There's usually nobody else's infrastructure involved, except perhaps a building owner's ducts and cabinets.
  • Any control points will generally be close - or perhaps not needed at all, as the devices work peer-to-peer.
  • There's relatively little involvement of the "public domain", unless there are risks like radio interference beyond the network boundaries.
  • It's not practical for governments to intervene too much in local communications - especially when it occurs on private property, or inside a building or machine.
  • There might be a service provider, but equally the whole system could be owned outright by the user, or embedded into another larger system like a robot or vehicle.
  • Competition is less of an issue, as is supplier diversity. You can buy 10 USB cables from different suppliers if you want.
  • Low-power, shared or unlicensed spectrum is typical for local #wireless networks.

I've been trying to work out a good word for this. Although "#telecommunications" is itself an awkward Greek / Latin hybrid, I think the best prefix might be Greek again - "peri", which means "around", "close" or "surrounding" - think of perimeter, peripheral, or the perigee of an orbit.

So I'm coining the term pericommunications, to mean nearby or local connectivity. (If you want to stick to all-Latin, then proxicommunications would work quite well too).

Just because a company is involved in telecoms does not mean it can necessarily expect a role in pericoms as well. (Or indeed, vice versa). It certainly can participate in that market, but there may be fewer synergies than you might imagine.

Some telcos are established and successful pericos as well. Many home broadband providers have done an excellent job of providing whole-home #WiFi systems with mesh technology, for example. In-building mobile coverage systems in large venues are often led by one telco, with others onboarding as secondary operators.

But other nearby domains are trickier for telcos to address. You don't expect to get your earbuds as an accessory from your mobile operator - or indeed, pay extra for them. Attempts to add-on wearables as an extra SIM on a smartphone account have had limited success.

And the idea of running on-premise enterprise private networks as a "slice" of the main 4G/5G macro RAN has clearly failed to gain traction, for a variety of reasons. The more successful operators are addressing private wireless in much the same way as other integrators and specialist SPs, although they can lean on their internal spectrum team, test engineers and other groups to help.

Some are now "going the extra mile" (sorry for the pun) for pericoms. Vodafone has just announced its prototype 5G mini base-station, the size of a Wi-Fi access point based on a Raspberry Pi and a Lime Microsystems radio chip. It can support a small #5G standalone core and is even #OpenRAN compliant. Other operators have selected new vendors or partners for campus 4G/5G deployments. The 4 UK MNOs have defined a set of shared in-building design guidelines for neutral-host networks.

It can be hard for regulators and policymakers to grasp the differences, however. The same is true for consultants and lobbyists. An awful lot of the suggested upsides of 5G (or other forms of connectivity) have been driven by a tele-mindset rather than a peri-view.

I could make a very strong argument that countries should really have a separate pericoms regulator, or a dedicated unit within the telecoms regulator and ministry. The stakeholders, national interests and economics are completely different.

A similar set of differences can be seen in #edgecomputing: regional datacentres and telco MEC are still "tele". On-premise servers or on-device CPUs and GPUs are peri-computing, with very different requirements and economics. Trying to blur the boundary doesn't work well at present - most people don't even recognise it exists.

Overall, we need to stop assuming that #pericoms is merely a subset of #telecoms. It isn't - it's almost completely different, even if it uses some of the same underlying components and protocols.

(If this viewpoint is novel or interesting and you would like to explore it further and understand what it means for your organisation - or get a presentation or keynote about it at an event - please get in touch with me)

Thursday, February 09, 2023

What does an AI think about Net Neutrality?

Originally published on my LinkedIn Newsletter, 9th Feb 2023. See here for comment thread

Two very important trends are occurring in tech that I'm following at the moment, so I thought it might be fun to combine them:

  • The emergence of #GenerativeAI, for answering questions, generating images and sounds, and potentially a whole lot more. OpenAI's #ChatGPT is currently the best-known, but there are dozens of others using language models, transformers & other techniques. Some people are suggesting it will redefine web search - and potentially an awful lot more than that. Some even see it as a pivotal shift in technology, society and "skilled" employment.
  • The re-emergence of discussions around #NetNeutrality and associated regulation relating to technology platforms, telcos and networks, like the ridiculous (un)#fairshare & #InternetTrafficTax concept being pitched in Europe by lobbyists. In the UK, Ofcom recently concluded a consultation on whether changes to NN rules should be made (I sent in a reply myself - I'll discuss it another time).

So, I asked ChatGPT what it thought about NN, over a series of questions. I specifically focused on whether it helps or hinders innovation.

The transcript is below, but some thoughts from me first:

  • The text is good. Almost alarmingly good. I found myself saying "good point" a few times. This is probably because it gives reasons in fairly long lists, not just "3 bulletpoints for a slide".
  • It seems pretty even-handed, with "proponents say this, others say that"
  • You can sense that its training base tends to give it "common" responses and frequently-cited arguments. That doesn't mean they're wrong, but it can lack nuance, or repeat some common *fallacies* as well.
  • The long-list approach and comments are quite good at highlighting the need for multiple approaches & reconciling them. A single statistical approach isn't enough - there are also qualitative questions to ask, surveys to undertake and multiple angles to absorb. ChatGPT understands the issue is complex, even if it cannot resolve the complexity itself. Simple answers are wrong.
  • I was particularly impressed with the response to my question on cross-border effects, and to separating NN vs. funding effects.
  • It struggled with the last question, about Internet and non-Internet innovation. EDIT: I just realised that it ignored my typo, and understood that I meant "directly" when I wrote "direct" in my question
  • I'd be very worried if I did bog-standard "content generation" for clickbait blogs, "bucketshop" market research, or corporate social posts. This is all perfectly-usable general blurb if you need slabs of text nobody is going to really delve into too deeply.

All that said, I don't think this is (yet) a gamechanger in terms of analysis and argument. It's good for collating a lot of "food for thought", but as others have pointed out, it can also be "confidently wrong", sometimes in amusing ways.

In a separate chat, I asked it for a contrarian's view of #5G use-cases. While it was right on some things ("Some argue that the vast majority of #IoT devices do not require the high speed and low latency that 5G provides and that the hype around IoT is overblown.") others were less convincing: ("Some argue that 5G may not provide the low latency and high reliability that is being promised, and that there are other technologies, such as wired connections, that are better suited for applications such as autonomous vehicles"). If there's one thing less relevant to self-driving cars than 5G, it's fixed broadband!

Anyway - enjoy the debate, transcribed in full below.

(Note: the image for this article came from Stable Diffusion, when I asked for a picture showing "An advanced AI debating Net Neutrality")



Me: Does net neutrality help innovation?

ChatGPT: Net neutrality has been a topic of much debate, with proponents arguing that it promotes innovation and others arguing that it stifles it.

Proponents of net neutrality argue that it helps to level the playing field for new entrants and small businesses by preventing Internet Service Providers (ISPs) from discriminating against certain types of traffic or services. This, in turn, encourages innovation, as entrepreneurs and startups are free to bring new products and services to market without fear of being blocked or slowed down by ISPs.

On the other hand, opponents of net neutrality argue that it creates a regulatory burden that stifles innovation and investment in the broadband industry. They argue that without the ability to prioritize different types of traffic, ISPs will be unable to effectively manage their networks and may be forced to limit investment in new infrastructure and technologies.

Ultimately, the impact of net neutrality on innovation is a complex and highly debated issue, and there is no clear consensus on whether it helps or hurts innovation. However, proponents of net neutrality generally believe that it promotes a free and open internet that supports innovation and competition, while opponents believe that it imposes unnecessary regulation that stifles growth and investment.


Me: What research is necessary to reach a better conclusion?

ChatGPT: To reach a more definitive conclusion on the impact of net neutrality on innovation, several types of research would be necessary, including:

  1. Economic studies: Economic studies could analyze the impact of net neutrality regulations on investment and innovation in the broadband industry, as well as the impact on consumer prices and access to broadband services.
  2. Case studies: Case studies of countries or regions with and without net neutrality regulations could provide valuable insights into the real-world effects of these regulations on innovation and investment in the broadband industry.
  3. Surveys and interviews: Surveys and interviews with entrepreneurs, startups, and industry experts could provide insights into the impact of net neutrality on the ability of new entrants to bring new products and services to market.
  4. Technical studies: Technical studies could examine the impact of net neutrality regulations on network management and traffic prioritization, and assess the potential impact on network performance, reliability, and security.
  5. Long-term data analysis: Long-term data analysis could track changes in investment, innovation, and consumer prices over time in regions with and without net neutrality regulations, providing valuable insights into the long-term impact of these regulations.

Overall, a comprehensive and multi-disciplinary approach to research, combining economic, case study, survey, technical, and data analysis methods, would be necessary to reach a more definitive conclusion on the impact of net neutrality on innovation.


Me: How would you measure innovation in this context?

ChatGPT: Measuring the impact of net neutrality on innovation is a complex task, as innovation can take many forms and can be difficult to quantify. However, several metrics could be used to measure innovation in this context, including:

  1. Investment in research and development (R&D): This could be measured by tracking the amount of investment in R&D by broadband companies and startups, as well as the number of patents filed in relevant areas.
  2. New product and service launches: This could be measured by tracking the number of new products and services introduced by broadband companies and startups, as well as the market adoption and success of these offerings.
  3. Entrepreneurial activity: This could be measured by tracking the number of new startups founded in the broadband industry, as well as the success of these startups in attracting investment and bringing new products and services to market.
  4. Consumer benefits: This could be measured by tracking changes in consumer prices, access to broadband services, and the availability of new and innovative online services and applications.
  5. Network performance and reliability: This could be measured by tracking changes in network speed, latency, and uptime, as well as the ability of networks to handle new and innovative applications and services.

While these metrics are not exhaustive, they provide a good starting point for measuring the impact of net neutrality on innovation in the broadband industry. By tracking changes in these metrics over time in regions with and without net neutrality regulations, researchers could gain valuable insights into the impact of these regulations on innovation.


Me: A lot of those metrics are international. More regions with net neutrality might increase innovation globally, not locally. How do you account for that?

ChatGPT: You are correct, some of the metrics mentioned could be influenced by factors beyond the borders of a single region. To account for this, it would be important to consider both the local and global impact of net neutrality on innovation.

For example, to measure the local impact of net neutrality, researchers could compare the metrics mentioned in regions with and without net neutrality regulations, taking into account factors such as the local economy, cultural and regulatory environment, and the state of the broadband industry in each region. This would provide valuable insights into the specific impact of net neutrality on innovation within those regions.

To measure the global impact of net neutrality, researchers could compare these metrics on a global scale, taking into account factors such as the state of the global economy, the state of the broadband industry globally, and the regulatory environment in different regions. This would provide a broader perspective on the impact of net neutrality on innovation globally, and could help to identify any trends or patterns that may not be apparent from a regional analysis alone.

Overall, it is important to consider both the local and global impact of net neutrality on innovation to gain a complete and accurate understanding of the impact of these regulations on the broadband industry and innovation more broadly.


Me: How would we know if net neutrality had a larger effect on innovation than availability of financing for startups?

ChatGPT: Determining the relative impact of net neutrality versus the availability of financing on innovation can be challenging, as these factors are often interrelated and can have complex and interconnected effects. However, several approaches could be used to isolate the impact of each factor, including:

  1. Comparative studies: Researchers could compare regions with and without net neutrality regulations, but with similar levels of financing available for startups. By controlling for the availability of financing, researchers could isolate the impact of net neutrality on innovation.
  2. Historical analysis: Researchers could analyze data from periods of time when net neutrality regulations were in place or not in place, and compare this data with changes in the availability of financing for startups. This would help to identify any correlations between net neutrality and innovation, independent of changes in financing.
  3. Causal inference methods: Researchers could use causal inference methods, such as regression analysis, to model the relationship between net neutrality, financing, and innovation, while controlling for other factors that could influence these relationships.
  4. Survey and interview-based research: Researchers could survey and interview startups, entrepreneurs, and industry experts to gain insights into the relative importance of net neutrality and financing for innovation. This would provide valuable qualitative data to complement the quantitative data obtained from other research methods.

Overall, a multi-disciplinary and multi-method approach, incorporating comparative studies, historical analysis, causal inference methods, and survey and interview-based research, would be necessary to determine the relative impact of net neutrality and financing on innovation in a robust and comprehensive manner.


Me: How do you distinguish innovations that direct relate to Internet access, compared to those which involve non-Internet applications and technologies?

ChatGPT: Distinguishing innovations that directly relate to Internet access versus those that involve non-Internet applications and technologies can be challenging, as many innovations are influenced by both Internet access and non-Internet technologies. However, several approaches could be used to differentiate these types of innovations, including:

  1. Technical classification: Innovations could be classified based on their underlying technologies, with innovations that rely primarily on Internet access being classified as Internet-related innovations and innovations that rely primarily on non-Internet technologies being classified as non-Internet-related innovations.
  2. Market classification: Innovations could be classified based on the markets they serve, with innovations that serve primarily Internet-based markets, such as online commerce or online communication, being classified as Internet-related innovations and innovations that serve primarily non-Internet-based markets, such as manufacturing or agriculture, being classified as non-Internet-related innovations.
  3. End-user classification: Innovations could be classified based on their end-users, with innovations that are primarily designed for use by individuals accessing the Internet being classified as Internet-related innovations and innovations that are primarily designed for use by individuals not accessing the Internet being classified as non-Internet-related innovations.

It is important to note that these classifications are not mutually exclusive, and that many innovations may have elements of both Internet-related and non-Internet-related technologies, markets, and end-users. Nevertheless, using these classifications can provide valuable insights into the extent to which innovations are influenced by Internet access, and can help to distinguish Internet-related innovations from non-Internet-related innovations.

 

Thursday, January 12, 2023

Workarounds, hacks & alternatives to network QoS

Originally published Jan 12th 2023 on my LinkedIn Newsletter - see here for comments

Sometimes, upgrading the network isn't the answer to every problem.

For as long as I can remember, the telecom industry has talked about quality-of-service, both on fixed and mobile networks. There has always been discussion around "fast lanes", "bit-rate guarantees" and more recently "network slicing". Videoconferencing and VoIP were touted as needing priority QoS, for instance. 

There have also always been predictions about future needs of innovative applications, which would at a minimum need much higher downlink and uplink speeds (justifying the next generation of access technology), but also often tighter requirements on latency or predictability.

Cloud gaming would need millisecond-level latency, connected cars would send terabytes of data across the network and so on.

We see it again today, with predictions for metaverse applications adding yet more zeroes - we'll have 8K screens in front of our eyes, running at 120 frames per second, with Gbps speeds and sub-millisecond latencies needed to avoid nausea or other nasty effects. So we'll need 6G to be designed to cope.

The issue is that many in the network industry don't realise that not every technical problem needs a network-based solution, whether smarter core network policies and controls, or huge extra capacity on the radio network (and the attendant extra spectrum and sites to go with it).

Often, there are other non-network solutions that achieve (roughly) the same effects and outcomes. There's a mix of approaches, each with different levels of sophistication and practicality. Some are elegant technical designs. Others are best described as "Heath Robinson" or "MacGyver" approaches, depending on which side of the Atlantic you live on.

I think they can be classified into four groups:

  • Software: Most obviously, a lot of data can be compressed. Buffers can be used to smooth out fluctuations. Clever techniques can correct for dropped or delayed packets. There's a lot more going on here though - some examples are described below.
  • Hardware / physical: Some problems have a "real world" workaround. Sending someone a USB memory stick is a (high latency) alternative to sending large volumes of data across a network. Phones with dual SIM-slots (or, now, eSIM profiles) allow coverage gaps or excess costs to be arbitraged.
  • Architectural: What's better? One expensive QoS-managed connection, or two cheaper unmanaged ones bonded together or used for diverse routing? The success of SDWAN provides a clue (see the rough availability sketch after this list). Another example is the use of onboard compute (and Moore's Law) in vehicles, rather than processing telemetry data in the cloud or network-edge. In-built sound and image recognition in smart speakers or phones is a similar approach to distributed-compute architecture. That may have an extra benefit of privacy, too.
  • Behavioural: The other set of workarounds exploits human psychology. Setting expectations - or warning of possible glitches - is often preferable to fixing or apologising for problems after they occur. Skype was one of the first communications apps to warn of dodgy connections - and also had the ability to reconnect when the network performance improved. Compare that with a normal PSTN/VoLTE call drop - it might have network QoS, but if you lose signal in an elevator, you won't get a warning, apology or a simplified reconnection.
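On the architectural point, the attraction of bonding two cheap connections is easy to show with back-of-envelope availability maths. The uptime figures below are my own illustrative assumptions, not vendor numbers:

    # Availability of one managed link vs. two bonded unmanaged links
    managed_uptime = 0.999   # assumed QoS-managed connection: 99.9%
    cheap_uptime = 0.99      # assumed commodity broadband line: 99%

    # Both cheap links must fail at once (assuming independent failures)
    two_cheap_uptime = 1 - (1 - cheap_uptime) ** 2

    print(f"One managed link: {managed_uptime:.4%}")    # 99.9000%
    print(f"Two cheap links:  {two_cheap_uptime:.4%}")  # 99.9900%

Under those (admittedly idealised) assumptions, the two cheap links come out an order of magnitude better on downtime - which is essentially the SDWAN pitch.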

These aren't cure-alls. Obviously if you're running a factory, you'd prefer not to have the automation system cough politely and quietly tell you to expect some downtime because of a network issue. And we certainly *will* need more bandwidth for some future immersive experiences, especially for uplink video in mixed reality.

But recently I've come across a few examples of clever workarounds or hacks, that people in the network/telecom industry probably wouldn't have anticipated. They potentially reduce the opportunity for "monetised QoS", or reduce future network capacity or coverage requirements, by shifting the burden from traffic to something else.

The first example relates to the bandwidth needs for AR/VR/metaverse connectivity - although I first saw this mentioned in the context of videoconferencing a few years ago. It's called "foveated rendering". (The fovea is the densest part of the eye's retina). In essence, it uses the in-built eye tracking in headsets or good-quality cameras. The system knows what part of a screen or virtual environment you are focusing on, and reduces the resolution or frame-rate of the other sections in your peripheral vision. Why waste compute or network capacity on large swathes of an image that you're not actually noticing?

I haven't seen many "metaverse bandwidth requirement" predictions take account of this. They all just count the pixels & frame rate and multiply up to the largest number - usually in the multi-Gbps range. Hey presto, a 6G use-case! But perhaps don't build your business case around it yet...
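Here's a quick back-of-envelope sketch of how much foveation changes the numbers. Every figure below (resolution, codec ratio, foveal fraction) is my own illustrative assumption, not a measured requirement:

    # Naive "count every pixel" bitrate vs. a foveated-rendering estimate
    W, H = 7680, 4320        # assumed "8K" per eye
    FPS = 120
    BITS_PER_PIXEL = 12      # raw chroma-subsampled video, pre-compression

    raw_bps = W * H * FPS * BITS_PER_PIXEL * 2   # two eyes
    print(f"Naive raw rate: {raw_bps / 1e9:.0f} Gbps")            # ~96 Gbps

    COMPRESSION = 200        # assumed codec ratio
    compressed_bps = raw_bps / COMPRESSION
    print(f"After compression: {compressed_bps / 1e6:.0f} Mbps")  # ~478 Mbps

    # Foveated: ~5% of the frame at full detail, the rest at 1/16 detail
    foveal_fraction, peripheral_detail = 0.05, 1 / 16
    foveated_bps = compressed_bps * (foveal_fraction
                                     + (1 - foveal_fraction) * peripheral_detail)
    print(f"Foveated estimate: {foveated_bps / 1e6:.0f} Mbps")    # ~52 Mbps

The headline multi-Gbps figure quietly becomes tens of Mbps - which is exactly why these predictions deserve scepticism.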

Network latency and jitter is another area where there are growing numbers of plausible workarounds. In theory, lots of applications such as gaming require low latency connections. But actually, they mostly require consistent and predictable but low-ish latency. A player needs to have a well-defined experience, and especially for multi-player games there needs to be fairness.

The gaming industry - and other sectors including future metaverse apps - has created a suite of clever approaches to dealing with network issues, as well as more fundamental problems where some players are remote and there are hard speed-of-light constraints. They can monitor latency, and actually adjust and balance the lags experienced by participants, even if that means slowing some of them down.
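A minimal sketch of that lag-balancing idea (illustrative only, not any particular game engine's netcode): pad everyone's input delay up to the slowest player's latency, trading raw speed for consistency and fairness.

    def added_delays_ms(latencies_ms: dict[str, float]) -> dict[str, float]:
        """Artificial delay per player so all inputs take equal total time."""
        target = max(latencies_ms.values())  # the slowest player sets the pace
        return {player: target - lat for player, lat in latencies_ms.items()}

    print(added_delays_ms({"alice": 18.0, "bob": 42.0, "carol": 27.0}))
    # {'alice': 24.0, 'bob': 0.0, 'carol': 15.0} -> everyone at an effective 42 ms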

There are also numerous techniques for predicting or anticipating movements and actions, so network-delivered data might not be needed continually. AI software can basically "fill in the gaps", and even compensate for some sorts of errors if needed. Similar concepts are used for "packet loss concealment" in VoIP or video transmissions. Apps can even subtly speed up or slow down streams to allow people to "catch up" with each other, or have the same latency even when distributed across the world.
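The best-known of those prediction techniques is "dead reckoning": if an update is late or lost, extrapolate from the last known state rather than waiting for the network. A toy version, for illustration:

    def predict_position(last_pos, last_vel, seconds_since_update):
        """Linear extrapolation; real engines blend in corrections on arrival."""
        return tuple(p + v * seconds_since_update
                     for p, v in zip(last_pos, last_vel))

    # Last update: at (10.0, 5.0), moving (2.0, -1.0) units/s, 0.25 s ago
    print(predict_position((10.0, 5.0), (2.0, -1.0), 0.25))  # (10.5, 4.75)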

We can expect much more of this type of software-based mitigation of network flaws in future. We may even get to the point where sending full video/image data is unnecessary - maybe we just store a high-quality 3D image of someone's face and room (with lighting) and send a few bytes describing what's happening. "Dean turned his head left by 23 degrees, adopted a sarcastic expression and said 'who needs QoS and gigabit anyway?' A cloud outside the window cast a dramatic shadow half a second later". It's essentially a more sophisticated version of Siri + Instagram filters + ChatGPT. (Yes, I know I'm massively oversimplifying, but you get the direction of travel here).

The last example is a bit more left-field. I did some work last year on wireless passenger connectivity on trains. There's a huge amount of complexity and technical effort being done on dedicated trackside wireless networks, improving MNO 5G coverage along railways, on-train repeaters for better signal and passenger Wi-Fi using multi-SIM (or even satellite) gateways. None of these are easy or cheap - the reality is that there will be a mix of dedicated and public network connectivity, with cities and rural areas getting different performance, and each generation of train having different systems. Worse, the coated windows of many new trains, needed for anti-glare and insulation, effectively act as Faraday cages, blocking outdoor/indoor wireless signals.

It's really hard to take existing rolling-stock out of service for complex retrofits, or to install anything along operational tracks / inside tunnels, and anything electronic like repeaters or new access points needs a huge set of certifications and installation procedures.

So I was really surprised when I went to the TrainComms conference last year and heard three big train operators say they were looking at a new way to improve wireless performance for their passengers. Basically, someone very clever realised that it's possible to laser-etch the windows with a fine grid of lines - which makes them more transparent to 4G/5G, without changing the thermal or visual properties very much. And that can be done much more quickly and easily for in-service trains, one window at a time.

I have to say, I wasn't expecting a network QoS vs. Glazing Technology battle, and I suspect few others did either.

The story here is that while network upgrades and QoS are important, there are often highly inventive workarounds - and very motivated software, hardware and materials-science specialists hoping to solve the same problems via a different path.

Do you think a metaverse app developer would rather work on a cool "foveated rendering" approach, or deal with 800 sets of network APIs and telco lawyers to obtain QoS contracts instead? And how many team-building exercises just involve hiring a high-quality boat to go across a lake, rather than working out how to build rafts from barrels and planks?

We'll certainly need faster, more reliable, lower-latency networks. But we need to be aware that they're not the only source of solutions, and that payments and revenue uplift for network performance and QoS are not pre-ordained.


#QoS #Networks #Regulation #NetNeutrality #5G #FTTX #metaverse #videoconferencing #networkslicing #6G