
Showing posts with label FTTx. Show all posts

Saturday, June 24, 2023

UK FTTP: Consolidation and driving uptake

This post originally appeared on June 16 on my LinkedIn feed, which is now my main platform for both short posts and longer-form articles. It can be found here, along with the comment stream. Please follow / connect to me on LinkedIn, to receive regular updates (about 1-3 / week)

Last week I attended the ISPA UK Business Models event, primarily about #FTTP build & adoption.

Two themes dominated:

- Consolidation patterns. The UK has >150 ISPs building #FTTX networks, with a patchwork mix of small/large, urban/rural & vertical/wholesale-only. As interest rates rise & consumer spending is inflation-limited, not all can stay viable.
- How can uptake be accelerated? While many homes are "passed" by fibre, comparatively few are actually signing up for FTTP access services. The lack of revenue for new #AltNets exacerbates the first issue.

Not discussed: data traffic volumes or so-called #fairshare. All the investment is going into initial builds, not capacity upgrades. Streaming and >500GB/mo is actually good news, not a cause for lobbyist handwringing.

The consolidation pathway is complex. There are 3 elements:

- Distress: companies running out of cash, unable to raise fresh capital, and selling assets or the whole business to deeper-pocketed consolidators willing to take a long view of the market.
- Proximity: mergers, or perhaps wholesale/sharing deals, between geographically neighbouring ISPs, for scale efficiencies.
- Strategic: larger "mega-mergers" perhaps between wholesalers and integrated telcos, or between B2B and B2C specialists.

There are plenty of challenges. M&A means blending FTTP providers with different vendors, maybe different network engineering quality, different back-office systems (perhaps proprietary) and so on. There may be significant integration costs and practical headaches. Another issue to resolve is competing "overbuilt" fibre grids in urban areas, especially as Openreach gets to more locations and offers cheap "Equinox 2" wholesale.

The uptake question is also thorny. A few speakers pointed out that the UK's FTTC / VDSL broadband mostly proved itself "good enough" during the pandemic, so convincing people they need FTTP or gigabit speeds is a tough sell, especially given cost-of-living issues.

Unless they currently have really terrible connectivity, few people want to take a day off work to wait for an engineer, risk a day or two without Internet if the switch doesn't work straight away, or pay more and sign up for a new long-term contract.

For some, futureproofing can wait until the future, it seems.

I can think of a number of ways that uptake could be incentivised:

- Trumpet fibre's uses, reliability & maybe impact on property values
- Subsidise an overlap of the old service with the new FTTP, so customers' old connection wouldn't be switched off before it was fully live
- Offer funding to connect homes that are "passed" as long as the connection is fully open-access / wholesale-ready
- Measure, monitor and incentivise B2B use of fibre as well as residential (retail, schools, small offices, home-workers etc)
- Better mapping to find and deal with "exceptions"

All would be enhanced by a consistent view (or scenarios) for the UK #fibre "end state". At the moment that is too amorphous.

Monday, May 01, 2023

A critical enabler for broadband competition - Marketplaces for buying and selling open access FTTP

This post originally appeared on Apr 18 on my LinkedIn feed, which is now my main platform for both short posts and longer-form articles. It can be found here, along with the comment stream. Please follow / subscribe to receive regular updates (about 1-3 / week)

Following yesterday's post on mobile #neutralhost operators as aggregators for wholesale access to municipality-level #smallcells and assets/permits, I think something roughly similar is happening in #FTTP.

An aggregation & marketplace tier for #ISPs, #AltNets and #infracos is emerging, among the UK fixed #broadband market's various groups:

- Incumbents with wholesale & retail units, although in theory separated - BT Retail & Openreach, and VMO2 (Virgin) with its new wholesale JV Nexfibre (with Liberty Global & Infravia)
- AltNets with their own FTTP infrastructure solely for their own ISP retail services, eg Hyperoptic
- AltNets with FTTP for both inhouse ISP retail and wholesale to others
- Wholesale-only FTTP providers such as CityFibre
- Retail-only ISPs, such as Zen & TalkTalk, which buy wholesale fibre (and historically copper / FTTC)

The wholesale market is expanding rapidly, with infracos still building, Openreach accelerating (and trying to discount with its contentious Equinox 2 plan) and existing AltNets looking to supplement slow conversion of homes-passed to homes-connected by offering access to other ISPs.

But the patchwork quilt of wholesale FTTP is very messy. There is growing overbuild, lots of "passed" homes that need extra work to get to individual buildings (or inside them to flats), a mishmash of vendors and construction practices, variable-quality networks and processes - and ongoing consolidation and possible financial woes.

This brings a need for aggregation & simplification. There is both a "buy" and a "sell" side here.

Retail ISPs want well-defined and standardised wholesale fibre access, across multiple FTTP owners - both major players like Openreach and AltNets. They want to sell consistent products to end-customers, with firm provisioning promises ("live next Tuesday at 11am") and clear ways to deal with faults. They don't want 50 integration projects - but they do want good pricing.

The AltNets, meanwhile, want to be able to sell to those ISPs, even if they've built IT systems and processes that weren't originally designed for wholesale. They also need to conform to Ofcom's new one-touch-switching rules.

Maybe I'll think of a snappier term, but given that the #ConnectedNorth conference took place in Manchester, the term Open Access Solution as a Service, or #OASaaS, seems rather fitting...

There are already a number of OASaaS contenders. Some AltNets formed the Common Wholesale Platform | CWP in 2020. CityFibre is working on its own ecosystem, with Toob as its first partner. There's also The Fibre Café, Vitrifi & BroadbandHub - as well as TOTSCo which is purely focused on the one-touch switching process. Not all seem to focus equally on buy and sell sides.

I wonder if agreed standards or specs (or even regulation) are needed. Perhaps an equivalent to JOTS (Joint Operator Technical Specification) for shared/mobile infrastructure such as neutral host systems? We don't want OASaaS to look back in anger...


Thursday, January 12, 2023

Workarounds, hacks & alternatives to network QoS

Originally published Jan 12th 2023 on my LinkedIn Newsletter - see here for comments

Sometimes, upgrading the network isn't the answer to every problem.

For as long as I can remember, the telecom industry has talked about quality-of-service, both on fixed and mobile networks. There has always been discussion around "fast lanes", "bit-rate guarantees" and more recently "network slicing". Videoconferencing and VoIP were touted as needing priority QoS, for instance. 

There have also always been predictions about future needs of innovative applications, which would at a minimum need much higher downlink and uplink speeds (justifying the next generation of access technology), but also often tighter requirements on latency or predictability.

Cloud gaming would need millisecond-level latency, connected cars would send terabytes of data across the network and so on.

We see it again today, with predictions for metaverse applications adding yet more zeroes - we'll have 8K screens in front of our eyes, running at 120 frames per second, with Gbps speeds and sub-millisecond latencies needed to avoid nausea or other nasty effects. So we'll need 6G to be designed to cope.

The issue is that many in the network industry don't realise that not every technical problem needs a network-based solution, whether smarter core network policies and controls, or huge extra capacity over the radio network (and the attendant extra spectrum and sites to go with it).

Often, there are other non-network solutions that achieve (roughly) the same effects and outcomes. There's a mix of approaches, each with different levels of sophistication and practicality. Some are elegant technical designs. Others are best described as "Heath Robinson" or "MacGyver" approaches, depending on which side of the Atlantic you live.

I think they can be classified into four groups:

  • Software: Most obviously, a lot of data can be compressed. Buffers can be used to smooth out fluctuations. Clever techniques can correct for dropped or delayed packets. There's a lot more going on here though - some examples are described below.
  • Hardware / physical: Some problems have a "real world" workaround. Sending someone a USB memory stick is a (high latency) alternative to sending large volumes of data across a network. Phones with dual SIM-slots (or, now, eSIM profiles) allow coverage gaps or excess costs to be arbitraged.
  • Architectural: What's better? One expensive QoS-managed connection, or two cheaper unmanaged ones bonded together or used for diverse routing? The success of SDWAN provides a clue. Another example is the use of onboard compute (and Moore's Law) in vehicles, rather than processing telemetry data in the cloud or network-edge. In-built sound and image recognition in smart speakers or phones is a similar approach to distributed-compute architecture. That may have an extra benefit of privacy, too.
  • Behavioural: The other set of workarounds exploits human psychology. Setting expectations - or warning of possible glitches - is often preferable to fixing or apologising for problems after they occur. Skype was one of the first communications apps to warn of dodgy connections - and also had the ability to reconnect when the network performance improved. Compare that with a normal PSTN/VoLTE call drop - it might have network QoS, but if you lose signal in an elevator, you won't get a warning, apology or a simplified reconnection.
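The SD-WAN point in the "Architectural" bullet above is easy to quantify. Assuming independent failures (a big assumption in practice - diverse routing matters precisely because failures are often correlated), two cheap unmanaged links bonded together can beat one premium managed link on availability. A minimal sketch, with purely illustrative 99% / 99.9% figures:

```python
# Availability of bonded/diverse links, assuming independent failures.
# The 99% and 99.9% figures are illustrative, not from any real SLA.

def combined_availability(availabilities):
    """Probability that at least one of several independent links is up."""
    p_all_down = 1.0
    for a in availabilities:
        p_all_down *= (1.0 - a)
    return 1.0 - p_all_down

single_premium = 0.999                             # one QoS-managed connection
dual_cheap = combined_availability([0.99, 0.99])   # two unmanaged links, bonded

print(f"Premium link: {single_premium:.4%}")
print(f"Dual cheap:   {dual_cheap:.4%}")
```

Two "two nines" links come out at roughly four nines combined - better than the single "three nines" link, typically at lower cost. The independence assumption is the catch: both links sharing a duct or a power feed breaks the arithmetic.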

These aren't cure-alls. Obviously if you're running a factory, you'd prefer not to have the automation system cough politely and quietly tell you to expect some downtime because of a network issue. And we certainly *will* need more bandwidth for some future immersive experiences, especially for uplink video in mixed reality.

But recently I've come across a few examples of clever workarounds or hacks that people in the network/telecom industry probably wouldn't have anticipated. They potentially reduce the opportunity for "monetised QoS", or reduce future network capacity or coverage requirements, by shifting the burden from traffic to something else.

The first example relates to the bandwidth needs for AR/VR/metaverse connectivity - although I first saw this mentioned in the context of videoconferencing a few years ago. It's called "foveated rendering". (The fovea is the densest part of the eye's retina.) In essence, it uses the in-built eye tracking in headsets or good-quality cameras. The system knows what part of a screen or virtual environment you are focusing on, and reduces the resolution or frame-rate of the other sections in your peripheral vision. Why waste compute or network capacity on large swathes of an image that you're not actually noticing?

I haven't seen many "metaverse bandwidth requirement" predictions take account of this. They all just count the pixels & frame rate and multiply up to the largest number - usually in the multi-Gbps range. Hey presto, a 6G use-case! But perhaps don't build your business case around it yet...
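The "count the pixels and multiply up" arithmetic is easy to reproduce - and so is the effect of foveation on it. A back-of-envelope sketch, where every parameter is an illustrative assumption (8K-per-eye resolution, 120 fps, 24 bits per pixel uncompressed, a notional 10% foveal region at full quality, and the periphery at a quarter of full quality):

```python
# Back-of-envelope: "naive" headline bandwidth vs a foveated estimate.
# All parameters are illustrative assumptions, not measurements or standards.

def raw_bitrate_gbps(width, height, fps, bits_per_pixel=24):
    """Uncompressed bitrate in Gbps for one screen/eye: pixels x fps x depth."""
    return width * height * fps * bits_per_pixel / 1e9

def foveated_bitrate_gbps(width, height, fps, fovea_fraction=0.10,
                          periphery_quality=0.25, bits_per_pixel=24):
    """Full quality inside the foveal region, reduced quality outside it."""
    full = raw_bitrate_gbps(width, height, fps, bits_per_pixel)
    return full * (fovea_fraction + (1 - fovea_fraction) * periphery_quality)

naive = raw_bitrate_gbps(7680, 4320, 120)          # the "8K at 120fps" headline
foveated = foveated_bitrate_gbps(7680, 4320, 120)

print(f"Naive uncompressed: {naive:.1f} Gbps")
print(f"Foveated estimate:  {foveated:.1f} Gbps ({foveated / naive:.0%} of naive)")
```

Even before any video compression is applied, the foveated figure is roughly a third of the headline number under these assumptions - which is exactly the kind of term missing from most "metaverse bandwidth" forecasts.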

Network latency and jitter are another area with a growing number of plausible workarounds. In theory, lots of applications such as gaming require low-latency connections. But actually, they mostly require consistent and predictable, but low-ish, latency. A player needs to have a well-defined experience, and especially for multi-player games there needs to be fairness.

The gaming industry - and other sectors, including future metaverse apps - has created a suite of clever approaches to dealing with network issues, as well as more fundamental problems where some players are remote and there are hard speed-of-light constraints. They can monitor latency, and actually adjust and balance the lags experienced by participants, even if that means slowing down the best-connected players.

There are also numerous techniques for predicting or anticipating movements and actions, so network-delivered data might not be needed continually. AI software can basically "fill in the gaps", and even compensate for some sorts of errors if needed. Similar concepts are used for "packet loss concealment" in VoIP or video transmissions. Apps can even subtly speed up or slow down streams to allow people to "catch up" with each other, or have the same latency even when distributed across the world.
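The "fairness" technique described above can be sketched very simply: rather than minimising every player's latency, equalise them, by adding artificial buffer delay to the fastest connections so everyone experiences the same consistent lag. A minimal illustration with made-up latencies (real netcode combines this with the prediction and rollback techniques mentioned above):

```python
# Latency equalisation: pad each player's delay up to the slowest participant,
# so everyone gets the same consistent (low-ish but fair) lag.
# Player latencies in milliseconds are made-up illustrative values.

def equalise_latency(latencies_ms):
    """Return the artificial delay to add per player so all match the slowest."""
    target = max(latencies_ms.values())
    return {player: target - lat for player, lat in latencies_ms.items()}

players = {"alice": 20, "bob": 45, "carol": 80}
padding = equalise_latency(players)

for player, pad in padding.items():
    effective = players[player] + pad
    print(f"{player}: +{pad} ms buffer (effective {effective} ms)")
```

Note that nothing here needs network QoS at all - the slowest link sets the bar, and software does the rest, which is exactly why "guaranteed low latency" is a harder sell to game developers than the telecom industry assumes.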

We can expect much more of this type of software-based mitigation of network flaws in future. We may even get to the point where sending full video/image data is unnecessary - maybe we just store a high-quality 3D image of someone's face and room (with lighting) and just send a few bytes describing what's happening. "Dean turned his head left by 23 degrees, adopted a sarcastic expression and said 'who needs QoS and gigabit anyway?' A cloud outside the window cast a dramatic shadow half a second later". It's essentially a more sophisticated version of Siri + Instagram filters + ChatGPT. (Yes, I know I'm massively oversimplifying, but you get the direction of travel here).

The last example is a bit more left-field. I did some work last year on wireless passenger connectivity on trains. There's a huge amount of complex technical effort going into dedicated trackside wireless networks, improving MNO 5G coverage along railways, on-train repeaters for better signal, and passenger Wi-Fi using multi-SIM (or even satellite) gateways. None of these is easy or cheap - the reality is that there will be a mix of dedicated and public network connectivity, with cities and rural areas getting different performance, and each generation of train having different systems. Worse, the coated windows of many new trains, needed for anti-glare and insulation, effectively act as Faraday cages, blocking wireless signals between outdoors and indoors.

It's really hard to take existing rolling-stock out of service for complex retrofits, or to install anything along operational tracks or inside tunnels - and anything electronic, like repeaters or new access points, needs a huge set of certifications and installation procedures.

So I was really surprised when I went to the TrainComms conference last year and heard three big train operators say they were looking at a new way to improve wireless performance for their passengers. Basically, someone very clever realised that it's possible to laser-etch the windows with a fine grid of lines - which makes them more transparent to 4G/5G, without changing the thermal or visual properties very much. And that can be done much more quickly and easily for in-service trains, one window at a time.

I have to say, I wasn't expecting a network QoS vs. Glazing Technology battle, and I suspect few others did either.

The story here is that while network upgrades and QoS are important, there are often highly inventive workarounds - and very motivated software, hardware and materials-science specialists hoping to solve the same problems via a different path.

Do you think a metaverse app developer would rather work on a cool "foveated rendering" approach, or deal with 800 sets of network APIs and telco lawyers to obtain QoS contracts instead? And how many team-building exercises just involve hiring a high-quality boat to go across a lake, rather than working out how to build rafts from barrels and planks?

We'll certainly need faster, more reliable, lower-latency networks. But we need to be aware that they're not the only source of solutions, and that payments and revenue uplift for network performance and QoS are not pre-ordained.


#QoS #Networks #Regulation #NetNeutrality #5G #FTTX #metaverse #videoconferencing #networkslicing #6G

Friday, January 03, 2020

Predictions for the next decade: looking out to 2030 for telecoms, wireless & adjacent technologies


It's tempting to emulate every other analyst & commentator and write a list of 2020 predictions of success and failure. In fact, I got part-way into a set of bulletpoints about what’s overhyped and underhyped. 

But to be honest, if you read my articles and tweets, you probably know what I think about 2020 already. Private cellular networks will be important (4G, initially). 5G fixed wireless is interesting and will grow the FWA market - but won't replace fibre. 5G is Just Another G and is overhyped, especially until the new core matures. RCS is still a worthless zombie, eating brains. But I don't need to repeat all this in detail, just because I'm a bit more sharp-worded than most observers. It wouldn't tell you much new.

But seeing as I spend a fair amount of time advising clients about the longer-term future, 5-10 years out or even further, I thought I'd set my sights higher. I use the term "telco-futurism" to look at the impacts of technology and broader society on telecoms, and vice versa.

So, at the start of the 2020s, what about the next decade? Assuming I haven't retired to my palatial Mars-orbiting private Moon in 10 years' time, what do I think I'll be writing, podcasting (or neural-transmitting) about in 2030?

So, let's have a few shots at this more-distant target...

  • 6G: In 2030, the first 6G networks are already gaining traction in the marketplace. The first users are still fixed connections to homes, and personal devices that look a bit similar to phones and wearables, but with a variety of new display and UI technologies, including contact lenses and advanced audio/haptic interfaces. 6G represents the maturing of various 5G concepts (such as the new core), plus greater intelligence to allow efficient operation. 
  • Details, details: Much of the 2020s will have been spent dealing with numerous "back-office" problems that have stopped many early 5G visions becoming real. Network-slicing will have thrown up huge operationalisation and security issues. Dealing with QoS/slice roaming or handoff, at borders between networks (outdoor / indoor / private / neutral / international) will be hugely complex. Edge computing scenarios will turn out to need local peering or interconnection points. All of these will have huge extra complexities with billing, pricing and monitoring. mmWave planning and design tools will need to have matured, as well as the processes for installation and operation. Training and skills for all of this will have been time-consuming and expensive - we'll need hundreds of thousands of experts - often multi-domain experts. By the time all these issues get properly fixed, 6G radios and vendors will exploit them, rather than the "legacy 5G" infrastructure. See this post for my discussion about the telecom industry's problems with accurate timelines.
  • Device-Network cooperation: By 2030, mobile ecosystems and control software will break today's silos between radio network, devices and applications much more effectively. Sensors in users' devices, cell-towers and elsewhere will be linked to AI which works out how, why and where people or IoT objects need connectivity and how best to deliver it. Recognise a moving truck with machine-vision, and bounce signals off it opportunistically. Work out that someone is approaching the front of a building, and pre-emptively look for Wi-Fi, or negotiate with the in-building neutral host on a marketplace before they enter the door. Spot behavioural patterns such as driving the same route to work, and optimise connectivity accordingly. Recognise a low battery, and tweak the "best-connected" algorithm for power efficiency, and downrate apps' energy demand. Integrate with crowd-flow patterns or weather forecasts. There will be thousands of ways to improve operations if networks stop just thinking of a "terminal" as just an endpoint, and look for external sources of operational data - that's a 20th Century approach. Expect Google's work on its Fi MVNO & Android/Pixel phones, and similar efforts by Samsung and maybe Apple, Qualcomm and ARM, to have driven much of this cross-domain evolution.
  • Energy-aware networks: Far more energy-awareness will be designed into all aspects of the network, cloud and device/app ecosystem. I'm not predicting some sort of monolithic and integrated cascading-payments system linked into CO2-taxes, but I expect "energy budget" to be linked much more closely to costs (including externalities) in different areas. How best to optimise wired/wireless data for power demand, where best to charge devices, "scavenging" for power and so on. Maybe even "nudge" people to lower-energy applications or consumption behaviours by including "power-shaming" indicators. If 3GPP and governments get their act together, as well as vendors & CSPs, overall 6G energy use will be a higher priority design-goal than throughput speed and latency.
  • Wi-Fi: We'll probably be on Wi-Fi 9 by 2030. It will continue to dominate connectivity inside buildings, especially homes and business premises with FTTX broadband (i.e. most of them in developed markets). It will continue to be used for primary connectivity on high-throughput / low-margin / low-mobility devices like TVs and display screens, PC-type devices, AR/VR headsets and so on. It will be bonded together with 5G/6G and other technologies with ever-better multi-path mechanisms, including ad-hoc device meshes. Ease of use will have improved, with the success of approaches like OpenRoaming. Fairly little public Wi-Fi will be delivered by "service providers" as we think of them today.  We'll probably still have to suffer the "6G will kill Wi-Fi" pundit-pieces and hype, though.
  • Spectrum: The spectrum world changes slowly at a global level, thanks to the glacial 4-year cycle of ITU WRCs. By 2030 we will have had 2023 and 2027 conferences, which will probably harmonise more spectrum for 5G/6G, satellites & high-altitude platforms (HAPS) and Wi-Fi type unlicensed use. The more interesting developments will occur at national / regional levels, below the ITU's role, in how these bands actually get released / authorised - and especially whether that's for localised or shared usage suitable for private networks and other innovators. By 2030 we should have been through 2+ cycles of US CBRS and UK/Germany/Japan/France style local licensing experiments, allocation methods, databases and sensing systems. I think we'll be closer to some of the "spectrum-as-a-service" models and marketplaces I've been discussing over the last 24 months, with more fluid resale and temporary usage permits. International allocations will still differ though. We will also see whether other options, such as "national licenses with lots of extra conditions" (eg MVNO access, rural coverage, sharing, power use etc) has helped maintain today's style of MNOs, despite the grumbling. We will also see much more opportunism and flexibility in band support in silicon/devices, and more sophisticated approaches to in-band sharing between different technologies. I'm less certain whether we will have progressed much with commercialisation of mmWave bands 20-100GHz, especially for mobile and indoor use. It's possible and we'll certainly see lots of R&D, but the practicalities may prove insuperable for wide usage.
  • Private/neutral cellular: Today, there's around 1000 MNOs globally (public and private). By 2030, I'd expect there to be between 100,000 and a million networks, probably with various new types of service provider, aggregation hubs and consortia. These will span industrial, city, office, rural, utility, "public venue" and many other domains. It will be increasingly hard to distinguish private from public, eg with MNOs' campus networks with private cores and hybrid public/private spectrum. We might even get another zero, if the goals of making private 4G/5G as easy and cheap to build as Wi-Fi prove feasible, although I have doubts. Most of these networks will be user-specific, but a decent fraction will be multi-tenant, either offering wholesale access or roaming to "legacy MNOs" as neutral hosts, or with some sort of landlord model such as a property company running a network with each occupied floor or building on campus as a "semi-private" network. Some such networks will look like micro-telcos (eg an airport providing access to caterers & airlines) and will need billing, management & security tools - and perhaps new forms of regulation. This massive new domain will help catalyse various shifts in the vendor community as well - especially cloud-native core and BSS/OSS, and probably various forms of open RAN, and also "neutral edge".
  • Security & privacy: I'm not a security expert, so I hesitate to imagine the risks and responses 10 years out. Both good and bad guys will be armed to the teeth with AI. We'll see networks attacked physically as well as logically. We'll see sophisticated thefts of credentials and what we quaintly term "secrets" today. There will be cameras and mics everywhere. Quantum threats may compromise encryption - and other quantum tools may enhance it, as well as provide new forms of identity and authentication. We will need to be wary of threats within core networks, especially where orchestration and oversight is automated. I think we will be wise to avoid "monocultures" of technologies at various levels of the network - we need to trade off efficiency and scale vs. resilience.
  • Satellite / HAPS: We'll definitely have more satellite constellations by 2030, including some huge ones from SpaceX or others. I have my doubts that they will be "game-changers" in terms of our overall broadband use, except in rural/remote areas. They won't have the capacity of terrestrial networks, and signals will struggle with indoor penetration and uplink from anything battery-powered. Vehicles, planes, boats and remote IoT will be much better-connected, though. Space junk & cascading-collision scenarios like the movie Gravity will be a worry, though. I'm not sure about drones and balloons as HAPS for mass-market use, although I suspect they'll have some cool applications we don't know today.
  • Cloud & edge: Let's get one thing clear - the bulk of the world's computing cycles & data storage will continue to occur in massive datacentres (perhaps heading towards a terawatt of aggregate power by 2030) and on devices themselves, or nearby gateways. But there will be a thriving mid-market of different sorts of "edge" as I've covered in many posts and presentations recently. This will partly be about low-latency, but not as much as most people think. It will be more about saving mass data-transport costs, protecting "data sovereignty" and perhaps optimising energy consumption. A certain amount will be inside telcos' networks, but without localised peering / aggregation this will be fairly niche, or else it will be wholesaled out to the big cloud players. There will be a lot of value in the overall orchestration of compute tasks for applications between multiple locations in the ecosystem, from chip-level to hyperscale and back again. The fundamental physical quantum of much edge compute will be mundane: a 40ft shipping container, plonked down near sources of power and fibre.
  • Multi-network: We should expect all connectivity to be "software-defined" and "multi-network". Devices will have lots of radios, connecting simultaneously, with different paths and providers (and multiple eSIM / other identities). Buildings will have multiple fibres, wireless connections and management tools. Device-to-device connections and relaying will be prevalent. IoT will use a selection of LPWAN technologies as well as Wi-Fi, cellular and short-range connections. Satellite and maybe LiFi (light-based) connections will play new roles. Arbitrage, bonding, load-balancing will occur at multiple levels from silicon to OS to gateway to mid-network. Very few things will be locked to a single network or provider - unless it has unique value such as managed security or power consumption.
  • Voice & messaging: Telephony will be 150yo in 2026. By 2030 we'll still be making some retro-style "phone calls" although it will seem even more clunky, interruptive, unnatural and primitive than today. (It won't stop the cellular industry spending billions upgrading to Vo6G though). SMS won't have disappeared, either. But most consumers will communicate through a broad variety of voice and video interaction models, in-app, group-based, mediated by an array of assistants, and veracity-checked to avoid "fake voice" and man-in-the-middle attacks of ever increasing subtlety. Another 10 years of evolution beyond emojis, stories, filters and live broadcasts will allow communication which is expressive, emotion-first, and perhaps even richer and more nuanced than in-person body language. I'm not sure about AR/VR comms, although it will still be more important than RCS which will no doubt be celebrating its 23rd year of irrelevance, hype and refusal to die.
  • Enterprise comms:  UCaaS, cPaaS and related collaboration tools will progress steadily, if unspectacularly - although with ever more cloud focus. There will be more video, more AI-enriched experiences for knowledge management, translation, whispered coaching and search. There will be attempts to reduce travel to meetings and events as carbon taxes bite, although few will come close to the in-person experience or effectiveness. We'll still have some legacy phone calls and numbers (as with consumer communications) although these will be progressively pushed to the margins of B2B and E2E interactions. Ever more communications will take place "contextually" - within apps, natively supported in IoT devices, or with AI-based assistants. Contact centres and customer interactions will be battlegrounds for bots and assistants on both sides. ("Alexa, renegotiate my subscription for a better price - you have permission to emulate my voice"). Security and verification will be highly prized - just because something is heard doesn't mean it will match what was originally spoken
  • Network ownership models: Some networks of today will still look mostly like "telcos" in 2030, but as I wrote in this post the first industry to be transformed by 5G will be the telecom industry itself. We'll see many new stakeholders, some of which look like SPs, some which are private network operators, and many new forms of aggregator, virtual operator, wholesale or neutral mobile/fibre provider. I'm not expecting a major shift back to nationalised or government-run networks, but I think regulations will favour more sharing of assets where it makes sense. Individual industries will take control of their own connectivity and communications, perhaps using standardised 5G, or mild variations of it. There will be major telcos of today still around - but most will not be providing "slices" to companies and offering deep cross-vertical managed services. There will be M&A which means that we'll have a much more heterogeneous telco/CSP market by 2030 than today's 800 identikit national MNOs. Fixed and fibre providers will be diverse as well - especially with the addition of cloud, utility and municipal providers. I think the towerco / property-telco model will be important as asset owners / builders as well.
I realise that I could go on at length about many other topics here - autonomous and connected vehicles, the future of cities and socio-political spheres, shifts in entertainment models, the second wave of blockchain/ledgers, the role of human enhancement & biotech, new sources of energy and environmental technology, new forms of regulation and so forth. But this list is already long enough, I think. Various of these topics will also appear in podcasts - which I'm intending to ramp up in 2020. At the moment I'm on SoundCloud (link) but watch out here or on Twitter for announcements of other platforms.

If this has piqued your interest, please comment on my blog or LinkedIn article. This is a vision for 2030, which I hope is self-consistent and reasonable - but it is not the only plausible future scenario.

If you're interested in running a private workshop to discuss, debate and strategise around any of these topics, please get in touch via private message, or information AT disruptive-analysis DOT com. I work with numerous operators, vendors, regulators, industry bodies and investors to imagine the future of networks and other advanced technologies - and steer the path of evolution.

Happy New Year! (and New Decade)

Wednesday, February 25, 2015

FTTx and 4G: Speed sells... and it's addictive [+ link to partner analyst research report]

I'm a big believer in 4G. Not just because I can do certain *tasks* faster, but because it just *feels* fast. Some of that perception is from quicker connection and start-up times than 3G, some is from impressive headline numbers when I've run a speed-test, but a lot is from a sense of "potential", knowing it's there if I need it.

It transpires that something similar is true of fibre in the fixed world - especially in FTTP (fibre to the premises) guise, according to some proper consumer research done by friends at Diffraction Analysis. People like raw speed, are annoyed when they can't get it, and are prepared to upgrade further once they do. They might even use it to consume extra operator-enabled services as well.


One of the regular Twitter debates I have with colleague Martin Geddes is whether broadband "performance" is about specific, measurable, application outcomes or not ("QoS" to most of us). He links that to an argument for non-neutrality, as he contends that specific Internet application providers have different performance requirements and should be allowed to "trade" for them, given the limits of the network.

Conversely, I argue that a lot of the benefit and value from fast, open (and neutral) networks is not application-specific, but rather is intangible. It's the potential for both users and developers to do what they want today or in the future, without trading or risking potential competitive abuse or extra friction added by ISPs. 

It's like having a powerful car but sticking (mostly) to the speed limit - you know the "shove" is there if you need it. That extra acceleration on the highway on-ramp doesn't get you to the destination faster - but it feels good. Some even pay a premium for unreliable or twitchy supercars, despite the chance of a bad (even catastrophic) "outcome".

We're human. Perceived performance - and flexibility - is often more important than measured performance. Doesn't matter if it's broadband, cars, fashion, headline megapixels in a camera or a million other areas of life. There is almost always some correlation between "More of X" and "Better outcomes", but even if it's not a perfect correlation, we don't care. 

Purists myopically focused on "optimisation" often don't understand this, or other human emotions like agency/control, a sense of novelty, image and so forth. While these apply more to individuals than businesses, there are other factors like privacy, security and agility too.

In other words, "just give us faster networks & then get out of the way. Occasional glitches are a price well-worth paying for freedom and permissionless connectivity".

It's true of 4G - most people upgrading are happier than with 3G, although obviously some of that is down to the device itself, or perhaps better coverage with 7/800MHz vs. 2100MHz on some networks. Many notice the initial "whoosh factor" - that it's just fast even if they're doing something that doesn't need it. Sports-car feeling, again. And over time, developers are exploiting that, even if they know that there will sometimes be issues they have to work around.

And that's the thing - the vagaries of mobile networks (coverage, congestion and so on) have taught application and content providers to work around performance limitations and occasional failures. To expect them and plan for them - so they use caching, variable bitrates, UI interventions to warn users of problems, clever codecs and error-correction. There will always be a weakest link. It's not perfect, but it beats having the hassle, friction and perhaps commercial conflicts-of-interest involved in paid QoS. It's why "non-neutral" mobile business models won't succeed, even if the law allows.
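As a purely illustrative sketch (not from the post, and with an invented bitrate ladder and thresholds), the kind of client-side adaptation logic that streaming apps use to ride out variable mobile networks looks roughly like this: pick the highest quality the measured throughput can sustain, with headroom, and back off harder when the playback buffer is running dry.

```python
# Hypothetical adaptive-bitrate (ABR) selection sketch. The ladder values,
# 25% headroom and 5-second buffer threshold are invented for illustration;
# real players (DASH/HLS clients) use more elaborate heuristics.

BITRATE_LADDER_KBPS = [350, 700, 1500, 3000, 6000]  # example quality rungs

def pick_bitrate(measured_throughput_kbps, buffer_seconds):
    """Choose the highest rung the network can sustain, backing off
    further when the playback buffer is low (stall avoidance)."""
    # Leave ~25% headroom so short throughput dips don't stall playback
    budget = measured_throughput_kbps * 0.75
    if buffer_seconds < 5:        # buffer nearly empty: be more conservative
        budget *= 0.5
    candidates = [b for b in BITRATE_LADDER_KBPS if b <= budget]
    return candidates[-1] if candidates else BITRATE_LADDER_KBPS[0]

print(pick_bitrate(4000, 20))  # healthy network & buffer -> 3000
print(pick_bitrate(4000, 3))   # same network, low buffer -> 1500
print(pick_bitrate(300, 10))   # poor network -> lowest rung, 350
```

The point of the sketch is the design stance: the application assumes the network is unreliable and degrades gracefully on its own, rather than negotiating paid QoS guarantees from the ISP.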

But back to fibre and FTTx, and some hard data. I don't often reference other analysts and consultants. (Martin G is one I collaborate with on voice/comms, and disagree with on networks). But when it comes to the fixed-broadband world and especially fibre networks, access business models and wholesale metro, I'll gladly defer to Benoit Felten, who's been covering that beat for years. He now runs his own research firm, Diffraction Analysis, originally out of France although he's now living in China.

Many telcos have been slow at rolling out fibre, often because they have been unconvinced that consumers really want it, would pay more for it, or might adopt additional services as well.

Diffraction has been doing some interesting work in collaboration with the FTTH Council Europe, looking at the real-world experience of consumers who have fibre broadband, and comparing it to ADSL. Benoit has now published research on Sweden (one of the most developed FTTx markets in Europe), and has been working on France and Portugal studies as well.


I've had a chance to have a look through the Sweden report, and it corroborates my views in various ways (although it doesn't tackle neutrality, per se):
  • Fibre is perceived to be higher-quality than DSL, even by people who don't have it.
  • FTTP users are more "satisfied". And higher speed FTTP equates to even more satisfaction. It's not about individual "outcomes" specifically. 
  • Fibre improves satisfaction with various "speed" metrics - latency, upload, download, variability and so on.
  • Individual users are happier with FTTx when asked to compare with their past DSL experiences.
  • Most DSL users perceive fibre to be better (presumably because of friends who have it, or media coverage).
  • Upgrades, both DSL-to-FTTP and FTTP-to-faster-FTTP, are typically linked to wanting more "performance", ie speed.
  • Lots of DSL users won't be upgrading soon, either because they can't get FTTP where they live, or because it's perceived as too expensive.
  • FTTP correlates with higher use of triple- and quad-play, although it's not 100% clear which is cause & which is effect here.
  • FTTP users do more stuff like streaming, video-calling, VoIP, tele-education etc.
  • FTTP users seem interested in advanced services (perhaps with operator involvement) like telemedicine, digital home services, TV videoconferencing etc.
There's a lot more in the full report, and I'm looking forward to seeing the outputs from other countries too. But two inferences leap out for me, although the wording of the survey makes it hard to be 100% certain of respondent perception:
  • People like fast Internet access, for its own sake. Speed sells, and feels good irrespective of specific applications or outcomes.
  • People who like fast Internet also seem more interested in possible non-Internet network services too.

To me, this suggests that not only is there a business-case for investment in faster networks (FTTX, 5G etc) but that we need to consider both measured and perceived performance. Tangible and intangible. This is something missed by most of the economic-led studies on broadband - and certainly by all those debating the FCC's Title II Net Neutrality plans this week.

The Diffraction Analysis full report (32 pages) titled "FTTP Dynamics in a Mature Market - Swedish Quantitative Analysis" is available in two versions: 

Contents pages are available on request via email:
information AT disruptive-analysis dot com
 
The links are to Diffraction Analysis' billing (although my company Disruptive Analysis has a financial interest here).  You should get the report emailed through within 24hrs (NB the time difference given Benoit's location in China).


Note: if you're based in France you'll need to add VAT - if so, or if you want to pay by a method other than Paypal/credit card, or get more details about the report please get in touch via information AT disruptive-analysis dot com

(Note: I wouldn't be recommending research if it wasn't thorough, interesting, and analytically coherent with my own views. However, it's Benoit/Diffraction's product, so the T's and C's are not my own)