
Showing posts with label Net Neutrality. Show all posts

Thursday, June 22, 2023

Data traffic growth forecasts - AD Little's new report has much better methodology than most

This post originally appeared on June 5 on my LinkedIn feed, which is now my main platform for both short posts and longer-form articles. It can be found here, along with the comment stream. Please follow / connect to me on LinkedIn, to receive regular updates (about 1-3 / week)

When I saw that Arthur D. Little had published a report on “The evolution of data growth in Europe”, on behalf of ETNO Association & GSMA, I rolled my eyes.
 
Both organisations have previously published terrible studies by consultants, riddled with flawed assumptions and dodgy multiplier "fiddle factors". I’ve loudly criticised Axon and Coleago reports related to the (un)#fairshare and #6GHz #spectrum debates respectively.
 
So I started the ADL report with trepidation, not helped by a strange typo / editing error in the first paragraph.
 
But actually, the report is pretty good, and I broadly agree with both methodology and conclusions, albeit with one major caveat.
 
It estimates usage of home and mobile broadband on the basis of hours-per-day of active use of heavy applications such as video streaming, gaming and possible metaverse-type experiences.
 
I’ve used GB-per-hour myself, to model passenger data-traffic demand on trains. It makes more sense than the usual Gbps, as most applications are “bursty”. It also fits the typical heuristics of human behaviour. How many hours a day do you spend on social media?
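As a sketch of the sort of sums involved (the passenger numbers and GB-per-hour figures here are my own illustrative assumptions, not from any specific project):

```python
# Illustrative sketch: estimating aggregate demand from hours of active use
# and GB-per-hour, rather than headline per-user Gbps. All figures are
# hypothetical assumptions for illustration.

def daily_demand_gb(users: int, active_hours_per_day: float, gb_per_hour: float) -> float:
    """Total daily data demand in GB for a population of users."""
    return users * active_hours_per_day * gb_per_hour

# e.g. 500 train passengers, 0.5h of video streaming each at ~1.5 GB/hour
total_gb = daily_demand_gb(500, 0.5, 1.5)

# The average sustained rate of a single 1.5 GB/hour stream, in Mbps -
# far below a naive "everyone needs 25 Mbps at once" estimate.
avg_mbps = (1.5 * 8 * 1000) / 3600
print(total_gb, round(avg_mbps, 2))
```

The point of modelling this way is that bursty usage averages out: a whole carriage of streaming passengers needs far less sustained capacity than per-user peak-rate arithmetic suggests.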
 
The central prediction of 20% growth in fixed traffic and 25% for mobile usage seems reasonable. I could argue for 25/20 rather than 20/25, but it's fine as a rough estimate.

Importantly these rates for the next few years are well within the bounds of both fixed broadband (moving to #FTTP) and mobile (on #5G) without incremental investments in extra capacity, beyond the main "generational" shift & CAPEX. And that is driven by government policy and competition, not traffic load and congestion. The report convincingly shows that nobody really needs/values more than 100Mbps for current apps, so #gigabit networks have plenty of headroom.
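For a sense of scale, compounding those growth rates over a few years shows why they fit within a generational upgrade (the 10x capacity step is my own illustrative assumption, not a figure from the report):

```python
# Sketch: compound ~20-25% annual traffic growth and compare it with the
# capacity step from a generational upgrade. Multipliers are illustrative.

def traffic_multiplier(annual_growth: float, years: int) -> float:
    """Total traffic growth factor after compounding for N years."""
    return (1 + annual_growth) ** years

five_yr_mobile = traffic_multiplier(0.25, 5)   # ~3.05x
five_yr_fixed = traffic_multiplier(0.20, 5)    # ~2.49x

# A hypothetical FTTP or 5G build might add 10x+ usable capacity in one
# step, so 2.5-3x demand growth over five years leaves ample headroom.
print(round(five_yr_mobile, 2), round(five_yr_fixed, 2))
```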

My main criticism is there is no analysis of mobile device traffic carried over fixed networks and #WiFi. Smartphones used at home for video, gaming or social media will be c80% on Wi-Fi, and indoor usage is c80% of the total.
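The arithmetic behind that point is simple, but worth making explicit:

```python
# Quick arithmetic on the offload point: if ~80% of smartphone usage is
# indoors, and ~80% of indoor usage rides Wi-Fi, then Wi-Fi already
# carries the majority of "mobile device" traffic. Shares as in the post.

indoor_share = 0.8
wifi_share_of_indoor = 0.8

wifi_carried = indoor_share * wifi_share_of_indoor   # ~64% of all traffic
cellular_carried = 1 - wifi_carried                  # ~36% left on cellular
print(round(wifi_carried, 2), round(cellular_carried, 2))
```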

The report also talks about AI pre-emptively downloading content for “infinite scrolling”, but doesn't suggest it could be smart enough to do so mostly over cheap / low-energy fixed connections. (IMO, by 2030, governments may *mandate* cellular offload via neutral-host or Wi-Fi for indoor use).

I agree with the report's assertions that VR is an indoor/fixed application, that most #IoT traffic is a rounding-error and that #Web3 is probably irrelevant. The #metaverse scenarios seem mostly plausible.
 
One area I think ADL underestimates is fixed broadband for video streaming. While Netflix and YouTube are “active” viewing, many people have historically just left broadcast TV switched on, even if nobody is in the room except the cat.

If TV really goes online-only, then that becomes a genuine “waste” of capacity, unless you can advertise to pets.

Overall - really quite good analysis, which (ironically, given the sponsors) fatally undermines the #InternetTrafficTax rhetoric.

 


Monday, June 19, 2023

CAPEX in telecoms - beware of headline numbers

This post originally appeared on June 12 on my LinkedIn feed, which is now my main platform for both short posts and longer-form articles. It can be found here, along with the comment stream. Please follow / connect to me on LinkedIn, to receive regular updates (about 1-3 / week)

CAPEX numbers are important in #telecoms. But they're also often collected and analysed in a haphazard fashion, or sometimes twisted and misinterpreted. There are examples that wrongly imply causal links, or that are carefully selected to drive specific policy choices.

- Telco execs watch CAPEX stats as they're important elements of cashflow & also signify key strategies and technology transitions
- Vendors watch #CAPEX stats to understand demand for new products
- Investors watch CAPEX as inputs to their valuation models, and as a barometer for company/industry health and prospects
- Policymakers watch CAPEX as it gets captured in "investment" statistics, and as an indicator for potential regulatory changes (or as a metric of success of previous policies)

Various ratios are commonplace, for both companies and the industry:
- CAPEX vs. revenues
- CAPEX vs. EBITDA
- CAPEX of telecoms vs. tech/hyperscalers
- CAPEX vs. R&D spending
- Fixed vs. Mobile CAPEX
... and so on
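As a toy illustration of how a single corporate action can skew these ratios (all the figures below are invented):

```python
# Sketch: how a tower spin-out distorts the headline CAPEX/revenue ratio.
# Amounts are hypothetical, in $bn.

def capex_intensity(capex: float, revenue: float) -> float:
    """The commonly-quoted CAPEX-to-revenue ratio."""
    return capex / revenue

before = capex_intensity(capex=3.0, revenue=15.0)

# After spinning out towers: tower CAPEX leaves the books, and some of the
# cost returns as OPEX (lease payments), so reported "investment" falls
# even though the network itself is unchanged.
after = capex_intensity(capex=2.2, revenue=14.5)
print(round(before, 3), round(after, 3))
```

Nothing about the physical network changed in this example, yet the headline "investment intensity" dropped by several percentage points - exactly the sort of artefact that undermines naive cross-year or cross-company comparisons.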

The problem is that "telco CAPEX" is also a very vague and malleable concept. Digging into it reveals many more questions - and problems with the methodologies and conclusions drawn, especially where headline numbers are concerned.

Some of the questions I'm currently looking at include:

- What counts as a "telco"? Are you including towercos, subsea fibre operators, municipalities building networks, MVNOs and many others?
- Are historic CAPEX numbers restated when telcos sell or acquire other businesses, especially tower spin-outs?
- Is it meaningful to compare CAPEX for 10 / 30 / 50 year assets such as #FTTP, which will generate decades of new revenue, with last year's figures?
- How do you separate CAPEX for basic coverage vs. incremental capacity vs. "generational" upgrades to fibre or #5G? A lot of CAPEX occurs even if usage is low
- How do you deal with leasing or other financing models? If CAPEX shifts to OPEX, how is it captured in the stats?
- What happens with "cloudified" networks? Firstly they rely on shared (often 3rd-party) assets, and secondly they are *supposed* to lower costs / investments. But will the lower CAPEX be viewed as a sign of distress, not modernisation?
- Is non-network CAPEX broken out (eg retail sites, central offices, datacentres etc)?
- Is "adjacent CAPEX" included, and if so, how? (eg in-building #wireless, #spectrum licenses, software development)

I hear many commentators and lobbyists claim "#NetNeutrality led to lower CAPEX!" or "Streaming traffic leads to higher CAPEX!" or "There's an investment gap!". Without detailed data - and an analysis of causality - you have to question the veracity & meaningfulness of such rhetoric.

In summary - CAPEX is indeed important. But it's so important that headline numbers alone are often useless or misleading.

Ask for details on segmentation, methodology and definitions - if they aren't available, treat the numbers with deep skepticism.

#FTTX #telcos #regulations #networks #fairshare

Thursday, February 09, 2023

What does an AI think about Net Neutrality?

Originally published on my LinkedIn Newsletter, 9th Feb 2023. See here for comment thread

Two very important trends are occurring in tech right now, both of which I'm following, so I thought it might be fun to combine them:

  • The emergence of #GenerativeAI, for answering questions, generating images and sounds, and potentially a whole lot more. OpenAI #ChatGPT is the current best-known, but there are dozens of others using language models, transformers & other techniques. Some people are suggesting it will redefine web search - and potentially an awful lot more than that. Some even see it as a pivotal shift in technology, society and "skilled" employment.
  • The re-emergence of discussions around #NetNeutrality and associated regulation relating to technology platforms, telcos and networks, like the ridiculous (un)#fairshare & #InternetTrafficTax concept being pitched in Europe by lobbyists. In the UK, Ofcom recently concluded a consultation on whether changes to NN rules should be made (I sent in a reply myself - I'll discuss it another time).

So, I asked ChatGPT what it thought about NN, over a series of questions. I specifically focused on whether it helps or hinders innovation.

The transcript is below, but some thoughts from me first:

  • The text is good. Almost alarmingly good. I found myself saying "good point" a few times. This is probably because it gives reasons in fairly long lists, not just "3 bulletpoints for a slide".
  • It seems pretty even-handed, with "proponents say this, others say that"
  • You can sense that its training base tends to give it "common" responses and frequently-cited arguments. That doesn't mean they're wrong, but it can lack nuance, or repeat some common *fallacies* as well.
  • The long-list approach and comments are quite good at highlighting the need for multiple approaches & reconciling them. A single statistical approach isn't enough - there are also qualitative questions to ask, surveys to undertake and multiple angles to absorb. ChatGPT understands the issue is complex, even if it cannot resolve the complexity itself. Simple answers are wrong.
  • I was particularly impressed with the response to my question on cross-border effects, and to separating NN vs. funding effects.
  • It struggled with the last question, about Internet and non-Internet innovation. EDIT: I just realised that it ignored my typo, and understood that I meant "directly" when I wrote "direct" in my question
  • I'd be very worried if I did bog-standard "content generation" for clickbait blogs, "bucketshop" market research, or corporate social posts. This is all perfectly-usable general blurb if you need slabs of text nobody is going to really delve into too deeply.

All that said, I don't think this is (yet) a gamechanger in terms of analysis and argument. It's good for collating a lot of "food for thought", but as others have pointed out, it can also be "confidently wrong", sometimes in amusing ways.

In a separate chat, I asked it for a contrarian's view of #5G use-cases. While it was right on some things ("Some argue that the vast majority of #IoT devices do not require the high speed and low latency that 5G provides and that the hype around IoT is overblown.") others were less convincing: ("Some argue that 5G may not provide the low latency and high reliability that is being promised, and that there are other technologies, such as wired connections, that are better suited for applications such as autonomous vehicles"). If there's one thing less relevant to self-driving cars than 5G, it's fixed broadband!

Anyway - enjoy the debate, transcribed in full, below.

(Note: the image for this article came from Stable Diffusion, when I asked for a picture showing "An advanced AI debating Net Neutrality")



Me: Does net neutrality help innovation?

ChatGPT: Net neutrality has been a topic of much debate, with proponents arguing that it promotes innovation and others arguing that it stifles it.

Proponents of net neutrality argue that it helps to level the playing field for new entrants and small businesses by preventing Internet Service Providers (ISPs) from discriminating against certain types of traffic or services. This, in turn, encourages innovation, as entrepreneurs and startups are free to bring new products and services to market without fear of being blocked or slowed down by ISPs.

On the other hand, opponents of net neutrality argue that it creates a regulatory burden that stifles innovation and investment in the broadband industry. They argue that without the ability to prioritize different types of traffic, ISPs will be unable to effectively manage their networks and may be forced to limit investment in new infrastructure and technologies.

Ultimately, the impact of net neutrality on innovation is a complex and highly debated issue, and there is no clear consensus on whether it helps or hurts innovation. However, proponents of net neutrality generally believe that it promotes a free and open internet that supports innovation and competition, while opponents believe that it imposes unnecessary regulation that stifles growth and investment.


Me: What research is necessary to reach a better conclusion?

ChatGPT: To reach a more definitive conclusion on the impact of net neutrality on innovation, several types of research would be necessary, including:

  1. Economic studies: Economic studies could analyze the impact of net neutrality regulations on investment and innovation in the broadband industry, as well as the impact on consumer prices and access to broadband services.
  2. Case studies: Case studies of countries or regions with and without net neutrality regulations could provide valuable insights into the real-world effects of these regulations on innovation and investment in the broadband industry.
  3. Surveys and interviews: Surveys and interviews with entrepreneurs, startups, and industry experts could provide insights into the impact of net neutrality on the ability of new entrants to bring new products and services to market.
  4. Technical studies: Technical studies could examine the impact of net neutrality regulations on network management and traffic prioritization, and assess the potential impact on network performance, reliability, and security.
  5. Long-term data analysis: Long-term data analysis could track changes in investment, innovation, and consumer prices over time in regions with and without net neutrality regulations, providing valuable insights into the long-term impact of these regulations.

Overall, a comprehensive and multi-disciplinary approach to research, combining economic, case study, survey, technical, and data analysis methods, would be necessary to reach a more definitive conclusion on the impact of net neutrality on innovation.


Me: How would you measure innovation in this context?

ChatGPT: Measuring the impact of net neutrality on innovation is a complex task, as innovation can take many forms and can be difficult to quantify. However, several metrics could be used to measure innovation in this context, including:

  1. Investment in research and development (R&D): This could be measured by tracking the amount of investment in R&D by broadband companies and startups, as well as the number of patents filed in relevant areas.
  2. New product and service launches: This could be measured by tracking the number of new products and services introduced by broadband companies and startups, as well as the market adoption and success of these offerings.
  3. Entrepreneurial activity: This could be measured by tracking the number of new startups founded in the broadband industry, as well as the success of these startups in attracting investment and bringing new products and services to market.
  4. Consumer benefits: This could be measured by tracking changes in consumer prices, access to broadband services, and the availability of new and innovative online services and applications.
  5. Network performance and reliability: This could be measured by tracking changes in network speed, latency, and uptime, as well as the ability of networks to handle new and innovative applications and services.

While these metrics are not exhaustive, they provide a good starting point for measuring the impact of net neutrality on innovation in the broadband industry. By tracking changes in these metrics over time in regions with and without net neutrality regulations, researchers could gain valuable insights into the impact of these regulations on innovation.


Me: A lot of those metrics are international. More regions with net neutrality might increase innovation globally, not locally. How do you account for that?

ChatGPT: You are correct, some of the metrics mentioned could be influenced by factors beyond the borders of a single region. To account for this, it would be important to consider both the local and global impact of net neutrality on innovation.

For example, to measure the local impact of net neutrality, researchers could compare the metrics mentioned in regions with and without net neutrality regulations, taking into account factors such as the local economy, cultural and regulatory environment, and the state of the broadband industry in each region. This would provide valuable insights into the specific impact of net neutrality on innovation within those regions.

To measure the global impact of net neutrality, researchers could compare these metrics on a global scale, taking into account factors such as the state of the global economy, the state of the broadband industry globally, and the regulatory environment in different regions. This would provide a broader perspective on the impact of net neutrality on innovation globally, and could help to identify any trends or patterns that may not be apparent from a regional analysis alone.

Overall, it is important to consider both the local and global impact of net neutrality on innovation to gain a complete and accurate understanding of the impact of these regulations on the broadband industry and innovation more broadly.


Me: How would we know if net neutrality had a larger effect on innovation than availability of financing for startups?

ChatGPT: Determining the relative impact of net neutrality versus the availability of financing on innovation can be challenging, as these factors are often interrelated and can have complex and interconnected effects. However, several approaches could be used to isolate the impact of each factor, including:

  1. Comparative studies: Researchers could compare regions with and without net neutrality regulations, but with similar levels of financing available for startups. By controlling for the availability of financing, researchers could isolate the impact of net neutrality on innovation.
  2. Historical analysis: Researchers could analyze data from periods of time when net neutrality regulations were in place or not in place, and compare this data with changes in the availability of financing for startups. This would help to identify any correlations between net neutrality and innovation, independent of changes in financing.
  3. Causal inference methods: Researchers could use causal inference methods, such as regression analysis, to model the relationship between net neutrality, financing, and innovation, while controlling for other factors that could influence these relationships.
  4. Survey and interview-based research: Researchers could survey and interview startups, entrepreneurs, and industry experts to gain insights into the relative importance of net neutrality and financing for innovation. This would provide valuable qualitative data to complement the quantitative data obtained from other research methods.

Overall, a multi-disciplinary and multi-method approach, incorporating comparative studies, historical analysis, causal inference methods, and survey and interview-based research, would be necessary to determine the relative impact of net neutrality and financing on innovation in a robust and comprehensive manner.


Me: How do you distinguish innovations that direct relate to Internet access, compared to those which involve non-Internet applications and technologies?

ChatGPT: Distinguishing innovations that directly relate to Internet access versus those that involve non-Internet applications and technologies can be challenging, as many innovations are influenced by both Internet access and non-Internet technologies. However, several approaches could be used to differentiate these types of innovations, including:

  1. Technical classification: Innovations could be classified based on their underlying technologies, with innovations that rely primarily on Internet access being classified as Internet-related innovations and innovations that rely primarily on non-Internet technologies being classified as non-Internet-related innovations.
  2. Market classification: Innovations could be classified based on the markets they serve, with innovations that serve primarily Internet-based markets, such as online commerce or online communication, being classified as Internet-related innovations and innovations that serve primarily non-Internet-based markets, such as manufacturing or agriculture, being classified as non-Internet-related innovations.
  3. End-user classification: Innovations could be classified based on their end-users, with innovations that are primarily designed for use by individuals accessing the Internet being classified as Internet-related innovations and innovations that are primarily designed for use by individuals not accessing the Internet being classified as non-Internet-related innovations.

It is important to note that these classifications are not mutually exclusive, and that many innovations may have elements of both Internet-related and non-Internet-related technologies, markets, and end-users. Nevertheless, using these classifications can provide valuable insights into the extent to which innovations are influenced by Internet access, and can help to distinguish Internet-related innovations from non-Internet-related innovations.

 

Thursday, January 12, 2023

Workarounds, hacks & alternatives to network QoS

Originally published Jan 12th 2023 on my LinkedIn Newsletter - see here for comments

Sometimes, upgrading the network isn't the answer to every problem.

For as long as I can remember, the telecom industry has talked about quality-of-service, both on fixed and mobile networks. There has always been discussion around "fast lanes", "bit-rate guarantees" and more recently "network slicing". Videoconferencing and VoIP were touted as needing priority QoS, for instance. 

There have also always been predictions about future needs of innovative applications, which would at a minimum need much higher downlink and uplink speeds (justifying the next generation of access technology), but also often tighter requirements on latency or predictability.

Cloud gaming would need millisecond-level latency, connected cars would send terabytes of data across the network and so on.

We see it again today, with predictions for metaverse applications adding yet more zeroes - we'll have 8K screens in front of our eyes, running at 120 frames per second, with Gbps speeds and sub-millisecond latencies needed to avoid nausea or other nasty effects. So we'll need 6G to be designed to cope.
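The naive arithmetic behind those multi-Gbps headlines is easy to reproduce (the resolution, frame rate and ~200:1 compression ratio here are illustrative assumptions):

```python
# Sketch: the "count the pixels and multiply up" bandwidth estimate that
# underpins many metaverse forecasts. All parameters are assumptions.

def raw_bitrate_gbps(width: int, height: int, fps: int,
                     bits_per_pixel: int = 24, eyes: int = 2) -> float:
    """Uncompressed video bitrate in Gbps."""
    return width * height * fps * bits_per_pixel * eyes / 1e9

raw = raw_bitrate_gbps(7680, 4320, 120)   # "8K per eye at 120 fps"
compressed = raw / 200                     # assume ~200:1 video compression
print(round(raw, 1), round(compressed, 2))
```

Even before any of the cleverer techniques discussed below, ordinary video compression turns a ~190 Gbps raw number into something under 1 Gbps - which is why headline pixel-counting estimates deserve scepticism.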

The issue is that many in the network industry don't realise that not every technical problem needs a network-based solution, with smarter core network policies and controls, or huge extra capacity over the radio network (and the attendant extra spectrum and sites to go with it).

Often, there are other non-network solutions that achieve (roughly) the same effects and outcomes. There's a mix of approaches, each with different levels of sophistication and practicality. Some are elegant technical designs. Others are best described as "Heath Robinson" or "MacGyver" approaches, depending on which side of the Atlantic you live.

I think they can be classified into four groups:

  • Software: Most obviously, a lot of data can be compressed. Buffers can be used to smooth out fluctuations. Clever techniques can correct for dropped or delayed packets. There's a lot more going on here though - some examples are described below.
  • Hardware / physical: Some problems have a "real world" workaround. Sending someone a USB memory stick is a (high latency) alternative to sending large volumes of data across a network. Phones with dual SIM-slots (or, now, eSIM profiles) allow coverage gaps or excess costs to be arbitraged.
  • Architectural: What's better? One expensive QoS-managed connection, or two cheaper unmanaged ones bonded together or used for diverse routing? The success of SDWAN provides a clue. Another example is the use of onboard compute (and Moore's Law) in vehicles, rather than processing telemetry data in the cloud or network-edge. In-built sound and image recognition in smart speakers or phones is a similar approach to distributed-compute architecture. That may have an extra benefit of privacy, too.
  • Behavioural: The other set of workarounds exploits human psychology. Setting expectations - or warning of possible glitches - is often preferable to fixing or apologising for problems after they occur. Skype was one of the first communications apps to warn of dodgy connections - and also had the ability to reconnect when the network performance improved. Compare that with a normal PSTN/VoLTE call drop - it might have network QoS, but if you lose signal in an elevator, you won't get a warning, apology or a simplified reconnection.

These aren't cure-alls. Obviously if you're running a factory, you'd prefer not to have the automation system cough politely and quietly tell you to expect some downtime because of a network issue. And we certainly *will* need more bandwidth for some future immersive experiences, especially for uplink video in mixed reality.

But recently I've come across a few examples of clever workarounds or hacks, that people in the network/telecom industry probably wouldn't have anticipated. They potentially reduce the opportunity for "monetised QoS", or reduce future network capacity or coverage requirements, by shifting the burden from traffic to something else.

The first example relates to the bandwidth needs for AR/VR/metaverse connectivity - although I first saw this mentioned in the context of videoconferencing a few years ago. It's called "foveated rendering". (The fovea is the densest part of the eye's retina.) In essence, it uses the in-built eye tracking in headsets or good quality cameras. The system knows which part of a screen or virtual environment you are focusing on, and reduces the resolution or frame-rate of the other sections in your peripheral vision. Why waste compute or network capacity on large swathes of an image that you're not actually noticing?
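A rough sketch of the saving (the foveal fraction and periphery scale factor are illustrative guesses, not figures from any headset vendor):

```python
# Sketch of the foveated-rendering saving: render a small foveal region at
# full resolution and the periphery at much lower resolution. Region sizes
# and scale factors are illustrative assumptions.

def foveated_pixel_fraction(fovea_fraction: float = 0.05,
                            periphery_scale: float = 0.25) -> float:
    """Fraction of the full-resolution pixel load after foveation.

    fovea_fraction: share of the frame rendered at full resolution.
    periphery_scale: linear resolution scale applied to the rest.
    """
    periphery = (1 - fovea_fraction) * periphery_scale ** 2
    return fovea_fraction + periphery

frac = foveated_pixel_fraction()   # roughly a 9x reduction in pixel load
print(round(frac, 3))
```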

I haven't seen many "metaverse bandwidth requirement" predictions take account of this. They all just count the pixels & frame rate and multiply up to the largest number - usually in the multi-Gbps range. Hey presto, a 6G use-case! But perhaps don't build your business case around it yet...

Network latency and jitter is another area where there are growing numbers of plausible workarounds. In theory, lots of applications such as gaming require low latency connections. But actually, they mostly require consistent and predictable but low-ish latency. A player needs to have a well-defined experience, and especially for multi-player games there needs to be fairness.

The gaming industry - and also other sectors including future metaverse apps - has created a suite of clever approaches to dealing with network issues, as well as more fundamental problems where some players are remote and there are hard speed-of-light constraints. They can monitor latency, and actually adjust and balance the lags experienced by participants, even if that means slowing some of them down.
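In sketch form, the balancing logic is surprisingly simple:

```python
# Sketch of latency equalisation in multiplayer games: rather than
# minimising every player's latency, add artificial delay so all players
# experience the same (slightly higher) lag. Purely illustrative.

def equalise(latencies_ms: list[int]) -> list[int]:
    """Per-player added delay so everyone matches the slowest player."""
    target = max(latencies_ms)
    return [target - latency for latency in latencies_ms]

players = [20, 35, 80]        # measured round-trip latencies in ms
padding = equalise(players)   # all players end up effectively at 80 ms
print(padding)
```

Note what this implies for "monetised QoS": fairness is achieved by adding latency in software, not by buying a faster network path.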

There are also numerous techniques for predicting or anticipating movements and actions, so network-delivered data might not be needed continually. AI software can basically "fill in the gaps", and even compensate for some sorts of errors if needed. Similar concepts are used for "packet loss concealment" in VoIP or video transmissions. Apps can even subtly speed up or slow down streams to allow people to "catch up" with each other, or have the same latency even when distributed across the world.

We can expect much more of this type of software-based mitigation of network flaws in future. We may even get to the point where sending full video/image data is unnecessary - maybe we just store a high-quality 3D image of someone's face and room (with lighting) and then send a few bytes describing what's happening. "Dean turned his head left by 23 degrees, adopted a sarcastic expression and said 'who needs QoS and gigabit anyway?' A cloud outside the window cast a dramatic shadow half a second later". It's essentially a more sophisticated version of Siri + Instagram filters + ChatGPT. (Yes, I know I'm massively oversimplifying, but you get the direction of travel here).

The last example is a bit more left-field. I did some work last year on wireless passenger connectivity on trains. There's a huge amount of complexity and technical effort being done on dedicated trackside wireless networks, improving MNO 5G coverage along railways, on-train repeaters for better signal and passenger Wi-Fi using multi-SIM (or even satellite) gateways. None of these are easy or cheap - the reality is that there will be a mix of dedicated and public network connectivity, with cities and rural areas getting different performance, and each generation of train having different systems. Worse, the coated windows of many new trains, needed for anti-glare and insulation, effectively act as Faraday cages, blocking outdoor/indoor wireless signals.

It's really hard to take existing rolling-stock out of service for complex retrofits, or to install anything along operational tracks / inside tunnels - and anything electronic, like repeaters or new access points, needs a huge set of certifications and installation procedures.

So I was really surprised when I went to the TrainComms conference last year and heard three big train operators say they were looking at a new way to improve wireless performance for their passengers. Basically, someone very clever realised that it's possible to laser-etch the windows with a fine grid of lines - which makes them more transparent to 4G/5G, without changing the thermal or visual properties very much. And that can be done much more quickly and easily for in-service trains, one window at a time.

I have to say, I wasn't expecting a network QoS vs. Glazing Technology battle, and I suspect few others did either.

The story here is that while network upgrades and QoS are important, there are often highly inventive workarounds - and very motivated software, hardware and materials-science specialists hoping to solve the same problems via a different path.

Do you think a metaverse app developer would rather work on a cool "foveated rendering" approach, or deal with 800 sets of network APIs and telco lawyers to obtain QoS contracts instead? And how many team-building exercises just involve hiring a high-quality boat to go across a lake, rather than working out how to build rafts from barrels and planks?

We'll certainly need faster, more reliable, lower-latency networks. But we need to be aware that they're not the only source of solutions, and that payments and revenue uplift for network performance and QoS are not pre-ordained.


#QoS #Networks #Regulation #NetNeutrality #5G #FTTX #metaverse #videoconferencing #networkslicing #6G

Tuesday, November 27, 2018

Does the network need a "black box" as well as user data-retention?

What is the network equivalent of an aircraft's black-box? Is there an argument for governments pushing for more regulation on telco-side data-retention?

As far as I know, telcos are not under any obligation to maintain full logs of the state/operation of their network elements, either hardware or software – or make them available for authorities to inspect. As networks become more virtualised and complex, with NFV, orchestration, AI-led automation of network policies, slicing and so on, what happens if something goes seriously wrong? 

The industry is hoping that 5G and other networks will be used in safety-critical verticals, with "ultra-reliable" requirements, but that brings risks and responsibilities too.
That could mean authorities may need to do a diagnostic “post-mortem” if a network fails - or perhaps as a way to spot if the network is doing something it shouldn’t, such as discrimination in wholesale, or net neutrality violations.

Aviation has rigorous rules about flight data recorders (“black boxes”), and has an admirable record of learning lessons from catastrophe, and changing inspection and certification regimes, if needed. Air travel is a one-way ratchet, becoming ever-safer, because of this.

So, if a commercial 5G or FTTX network is being used for ultra-reliable uses (such as managing a power grid’s control systems, or a telemedicine app, or perhaps connected vehicles), is there a basis for countries having a “Network Accident Investigation Board” and better international cooperation? And would this not also imply that a better way to store crucial background data is required? If a plane crashes, investigators can examine the physical wreckage, but this problem is much harder for software-controlled networks with no moving parts.

This is also an issue if a network gets compromised by hacking or a bug - who is responsible, how can it be fixed, and what prevents recurrence? Something similar applies to keeping records that may prove/disprove competition problems, eg did a virtualised network resource do something illegal, perhaps on a temporary basis? How could a complaint be investigated, or a prosecution brought?

The problems multiply massively if AI is involved, as any flaw in an underlying machine-learning algorithm becomes a potential single point of failure, if that system is used widely (eg for coordinating hundreds or thousands of network slices in an automated fashion).

Do regulators have the legal rights, obligation or ability to forensically analyse what’s gone wrong in such situations? Or the various cybersecurity agencies, or police forces?

One option might be to encrypt network configuration and operational logs, and keep them “in escrow” using blockchain to ensure anti-tamper properties, so that they could only be examined after a warrant or other legal instrument ordered decryption. There are likely numerous other technical approaches to consider as well.
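As a loose illustration of that escrow idea - a sketch, not a proposed implementation - the anti-tamper property can come from a simple hash chain over the log entries, so that any retrospective edit to an earlier record breaks every subsequent link. The encryption layer (and the warrant-gated key release) would sit on top; here the payload is left in the clear for readability, and all names are hypothetical:

```python
import hashlib
import json
import time

def append_entry(chain, payload):
    """Append a log entry whose hash covers the previous entry's hash,
    making retrospective tampering detectable."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {
        "timestamp": time.time(),
        "payload": payload,  # in practice: an *encrypted* config/log blob
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps({k: entry[k] for k in ("timestamp", "payload", "prev_hash")},
                   sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)
    return entry

def verify(chain):
    """Recompute every hash; any edit to an earlier entry breaks the chain."""
    prev = "0" * 64
    for e in chain:
        expected = hashlib.sha256(
            json.dumps({k: e[k] for k in ("timestamp", "payload", "prev_hash")},
                       sort_keys=True).encode()
        ).hexdigest()
        if e["prev_hash"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True

chain = []
append_entry(chain, "slice-42: QoS policy v7 applied")
append_entry(chain, "slice-42: policy rolled back")
assert verify(chain)

chain[0]["payload"] = "slice-42: nothing happened"  # attempted tampering
assert not verify(chain)
```

A real deployment would distribute copies of the chain (or at least its head hash) across multiple parties - that replication, rather than the hashing itself, is what a blockchain adds.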

Whichever approach is taken, as public networks become part of critical systems, this topic will only rise in importance. Policymakers should start thinking about it now - and the telecoms industry should face up to its responsibilities here, rather than push back without thinking. Do Boeing or Airbus complain about the need for flight data recorders?

Sunday, June 03, 2018

Telecom regulation and blockchain - is #RegTech the killer application?


One of the most interesting developments in telecoms technology for a while occurred this week – India’s telecom regulator TRAI issued a set of draft regulations aimed at combating spam and nuisance calls. (link)

At first glance, you could be forgiven for asking why anti-spam rules could possibly be more important than all the hoopla about 5G, market consolidation, network-slicing and, especially, “digital transformation” or RCS messaging (I jest).

The reason is in the details: TRAI has stipulated that telcos should use blockchain-based technologies to enforce its proposed rules, creating a tamper-proof and encrypted ledger of consent records, given by users for opt-in telemarketing. If the rules translate to reality, this is a major step forward in commercialisation of distributed ledger technology, and at scale.

"Access Providers shall adopt Distributed Ledger Technology (DLT) with permissioned and private DLT networks for implementation of system, functions and processes as prescribed in Code(s) of Practice: -
(1) to ensure that all necessary regulatory pre-checks are carried out for sending Commercial Communication;
(2) to operate smart contracts among entities for effectively controlling the flow of Commercial Communication;
Access Providers may authorise one or more DLT network operators, as deemed fit, to provide technology solution(s) to all entities to carry out the functions as provided for in these regulations."
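To make the "regulatory pre-check" idea in that extract concrete - this is my own loose illustration, not TRAI's specification, and the registry here is just an in-memory stand-in for a permissioned DLT - a consent check gating each commercial message might look like:

```python
import hashlib
from datetime import datetime, timezone

# Hypothetical in-memory stand-in for the permissioned ledger's consent registry.
consent_registry = {}

def record_consent(subscriber, sender, categories):
    """Store an opt-in record, keyed by a digest of (subscriber, sender)."""
    key = hashlib.sha256(f"{subscriber}|{sender}".encode()).hexdigest()
    consent_registry[key] = {
        "categories": set(categories),
        "granted_at": datetime.now(timezone.utc).isoformat(),
        "revoked": False,
    }

def precheck(subscriber, sender, category):
    """The 'regulatory pre-check': deliver only if a live opt-in covers this category."""
    key = hashlib.sha256(f"{subscriber}|{sender}".encode()).hexdigest()
    rec = consent_registry.get(key)
    return bool(rec) and not rec["revoked"] and category in rec["categories"]

record_consent("+91-9999000001", "ACME-Bank", ["account-alerts"])
assert precheck("+91-9999000001", "ACME-Bank", "account-alerts")
assert not precheck("+91-9999000001", "ACME-Bank", "loan-offers")  # never opted in
```

On an actual DLT, the registry entries would be replicated across the access providers' nodes, so no single telco or telemarketer could quietly insert or delete a consent record.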


But in my view, this could be just the tip of a quite large iceberg. I'm starting to think that regulatory uses for blockchain (especially private/permissioned versions) could be central to the technology's success in telecoms.

Innovation in Regulation Technology, or RegTech, is already a huge domain, especially in sectors like financial services and healthcare. Historic methods of regulatory enforcement, from anti-money-laundering rules to certification of professionals, have often relied on reams of paperwork and cumbersome processes. There is a huge need for automation, better provision of security and authentication, and simpler online access to regulatory resources and approval.

Obviously, telecoms has itself long had technical means for creating and enforcing rules, from spectrum-monitoring and radio-coverage tools, through automated platforms for telecoms licensing, to software aimed at checking broadband QoS and spotting net-neutrality violations.

But given that a lot of telecoms rules involve multiple parties (eg user, telco and advertiser as here, or multiple telcos with interconnect or wholesale agreements), require "credentials", and often hinge on registries and other databases, the whole sphere looks like an archetypal match for the types of capability normally found in blockchains.

In particular, I think there are many potential use-cases for regulators to assist - or keep tabs on - telco activities that relate to regulatory policy. Adding unarguable timestamps to tamper-proof data storage has especially strong potential. Use-cases that immediately leap out to me include:
  • Number portability databases and porting requests
  • Storage of call detail records, that may be subject to lawful request at a later date
  • Spectrum allocations and permissions, especially for shared, local and dynamic spectrum models.
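The "unarguable timestamp" idea underlying all of these can be sketched as a commit-then-reveal scheme: publish only a digest of the record now, and disclose the record itself later only if legally required. A minimal stdlib-only sketch (the "ledger" here is just a list, standing in for an actual timestamping chain; the CDR content is invented):

```python
import hashlib
import time

anchored = []  # stand-in for a timestamping ledger (eg a permissioned chain)

def anchor(record_bytes):
    """Publish only the digest + timestamp; the record itself stays private."""
    digest = hashlib.sha256(record_bytes).hexdigest()
    anchored.append((digest, time.time()))
    return digest

def prove(record_bytes):
    """Later, revealing the record proves it existed at the anchored time."""
    digest = hashlib.sha256(record_bytes).hexdigest()
    for d, t in anchored:
        if d == digest:
            return t
    return None

cdr = b"caller=+44...;callee=+44...;start=2018-06-01T10:00Z;dur=120"
anchor(cdr)
assert prove(cdr) is not None                 # the anchored record checks out
assert prove(b"a different record") is None   # a substituted record does not
```

The point is that the regulator (or court) never needs to hold the sensitive data itself - only the digests - yet a telco cannot later substitute an altered record without detection.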
One other that I think has longer-term potential, but which nobody has talked about yet, is in secure and encrypted storage of network configuration and log files. One of the problems with regulating wholesale interconnect, peering, net neutrality and other rules, is that it is exceptionally hard to prove what happened retrospectively, if someone makes a complaint. This issue will be exacerbated with NFV/SDN, and the move to network slicing, when network configurations will be temporary and highly dynamic.

Given that law-enforcement insists that ISPs retain their users' data records, it doesn't seem unreasonable to retain the ISPs' own information as well - obviously in a form that's secure and encrypted unless needed as evidence in the case of a legal intervention. It could also make a clear distinction between a problem of network failure (or happenstance in the way the maths of contention works), and deliberate actions.

The Net Neutrality angle here is particularly potent - it would allow any egregious behaviour to be dealt with post-hoc. Most anti-neutrality lobbyists dislike ex-ante regulation, but few could object to competition authorities or others investigating alleged infringements that occurred deep inside the network's configurations and policies.

I'm just musing here, but I definitely feel that there's a lot more to telecom #RegTech using #blockchain than just tracking spam calls and SMS. 

This is one of the topics that will get discussed at my upcoming workshop on telecoms blockchain, on July 3 in London. Full details are here (link) or email information AT disruptive-analysis dot COM