

Thursday, July 13, 2017

Both sides are wrong in the Net Neutrality debate

I've been watching the ongoing shouting-match about Net Neutrality (in the US & Europe) with increasing exasperation. Recently there was a "day of action" by pro-neutrality activists, which raised the temperature yet further.

The problem? Pretty much everyone, on both sides (and on both sides of the Atlantic), is dead wrong a good % of the time. They're not necessarily wrong on the same things, but overall the signal-to-noise ratio on NN is very poor.

There are countless logical fallacies perpetrated by lobbyists and commentators of all stripes: strawman arguments, false dichotomies, tu-quoque, appeals to authority and all the rest. (This is a great list of fallacies, by the way. Everyone should read it). 

Everyone's analogies are useless too - networks aren't pipes, or dumb. Packets don't behave like fluids. Or cars on a road. There are no "senders". It's not like physical distribution or logistics. Even the word "neutrality" is dubious as a metaphor. The worst of all is "level playing field". Anyone using it is being duplicitous, ignorant, or probably both. (See this link).

I receive lots of exhortations from both sides - I get well-considered, but too-narrow network-science commentary & Twitter debates from friend & colleague Martin Geddes. I read detailed and learned regulatory snark and insider stories from John Strand. I see telco/vendor CEOs speaking (OK, grandstanding) at policy conferences. I get reports of egregious telco- and state-based blocking of certain Internet services from Access Now, EFF and elsewhere. I see VCs and investors lining up on both sides, depending on whether they have web interests, or network vendor/processing positions. I watch comments from the FCC, Ofcom, EU Commission, BEREC, TRAI and others - as well as politicians. And I read an absolute ton of skewed & partial soundbites from lobbyists on Twitter or assorted articles/papers.

And I see the same, tired - often fallacious or irrelevant - arguments trotted out again and again. Let me go through some of the common ones:
  • Some network purists insist routers & IP itself are (at core) non-neutral, because there are always vagaries & choices in how the internals, such as buffers, are configured. They try to use this to invalidate the whole NN concept, or claim that the Internet is broken/obsolete and needs to be replaced. Other Internet purists insist that the original "end-to-end" principle was to get as close as possible to "equal treatment" for packets, and either don't recognise the maths - or suggest that the qualitative description should be treated as a goal, even if the precise mechanisms involve some fudges. Everyone is wrong.
  • In the US, the current mechanism for NN was to incorporate it under the FCC's Title II rules. That was a clunky workaround, after an earlier NN ruling was challenged by Verizon in 2011. In many ways, the original version was a much cleaner way to do it, as it risked less regulatory creep. Everyone is wrong.
  • Many people talk about prioritisation of certain traffic (eg movies) and how that could either (a) allow innovative business models, or (b) disenfranchise startups unable to match web giants' payments. Yet the technology doesn't work properly (and won't), it's almost impossible to price/market/sell/manage in practice, and there is no demand. Conspicuously, there have been no lobbyists demanding the right to pay for priority. There is no market for it, and it won't work. It's irrelevant. Everyone is wrong.
  • Some people assert that NN will reduce "investment" in networks, as it will preclude innovation. Others assert that NN increases overall investment (on networks plus servers/apps/devices). When I tried to quantify the possible revenues from 25 suggested non-neutral business models (link), I concluded the incremental revenue would barely cover the extra costs of implementation, if that. There are many reasons for investments in networks (eg 4G then 5G deployment cycles), while we also see CapEx being replaced by OpEx or software licences for managed or virtual networks. Drawing meaningful correlations is hard enough, let alone causation from an individual issue out of dozens. Everyone is wrong.
  • Most of the debate seems to centre on content - notably video streaming. This ties in with operators wanting to bundle TV and related programming, or Netflix and YouTube being seen as dominating Internet traffic and therefore as pivot-points for neutrality. Yet in most markets, IPTV is not delivered via the public Internet anyway, and is considered OK to prioritise as it's a basic service. On the opposite side, upgrades to high-speed consumer broadband are partly driven by the desire for streaming video - revenues would fall if it were blocked, while efforts to charge extra fees to Netflix and co would likely backfire - they'd demand carriage fees in return, like TV channels do. Meanwhile, most of the value in the Internet doesn't come from content, but from applications, communications, cloud services and data transmission. However, these are all much techier, so they get mostly overlooked by lobbyists and politicians entranced by Hollywood, Netflix or the TV channels. Everyone is wrong.
  • Lots of irrelevant comments on all sides about CDNs or paid-peering being examples of prioritisation (or of craven content companies paying for special favours). Fascinating area, but irrelevant to discussion about access-network ISPs. Everyone is wrong.
  • Lots of discussion about zero-rating or "sponsored data" paid for by 3rd-parties and whether they are right/wrong/distortions. Lots of debate whether they have to be offered to all music / video streaming services, whether they should just be promotional or can be permanent. And so on. Neither relates to treatment of data transmission by the network - and differential treatment of pricing is, like CDNs, interesting but irrelevant to NN. And sponsored data models don't work technically or commercially, with a handful of minor exceptions. Ignore silly analogies to 1-800 phone numbers - they are totally flawed comparisons (see my 2014 rant here). Upshot: zero-rating isn't an NN issue, and sponsored data (with prioritisation or not) doesn't work (for at least 10 reasons). Everyone is wrong.
  • Almost everyone in the US and Europe regulatory scene now agrees that outright blocking of certain services (eg VoIP) or trying to force specific application/web providers to pay an "access" toll fee is both undesirable and unworkable. It would just drive use of VPNs (which ISPs would block at their peril), or amusingly could mean that Telco1.com could legally block the website of Telco2.com, which would make future marketing campaigns a lot of fun. In other words, it's not going to happen, except maybe for special cases such as children's use, or on planes. It's undesirable, regulatorily unacceptable, easy to spot and impossible anyway. Forget about it. Everyone is wrong.
  • Lots of discussion about paid-for premium QoS on broadband, and whether or not it should apply to IoT, 5G, NFV/SDN, network-slicing, general developer-facing APIs and therefore allow different classes of service to be created, and winners/losers to be based on economic firepower. Leaving aside enterprise-grade MPLS and VPN services (where this is both permissible and possible), there's a lot of nonsense talked here. For consumer fixed broadband, many of the quality issues relate to in-home wiring and WiFi interference, for which ISP-provided QoS is irrelevant. For mobile, the radio environment is inherently unpredictable (concrete walls, sudden crowds of people, interference etc). Better packet scheduling can tilt the odds a bit, but forget about hard SLAs or even predictability. Coverage is far more a limiting factor. Dealing with 800 ISPs around the world with different systems/pricing is impossible. The whole area is a non-starter: bigger web companies know how much of a minefield this is, and smaller ones don't care. Everyone is wrong.
In summary - nearly anyone weighing in on Net Neutrality, on either side, is talking nonsense a good % of the time. (And yes, probably me too - I'm sure people will pick holes in a couple of things here).


So what's the answer?
  • First, tone down the rhetoric on both sides. The whole thing is a cacophony of nonsense, mostly from lobbyists representing two opposing cheeks of the same arse. Acknowledge the hyperbole. Get some reputable fact-checkers involved, perhaps sponsored by government and/or crowdsourcing.
  • Second, recognise that many of the threatened non-neutral models are either impossible or obviously unprofitable. Arguing about them is sophistry and a waste of everyone's time. There are more important things at stake.
  • Third, design and create proper field-trials to try to prove/disprove assertions about innovation, cost structures etc. Select a state, a city or a class of users, or specially-licensed ISPs to run prototypes and actually get some proper data. Don't try to change anything on a national or international basis overnight, no matter how many theoretical "studies" have been done. Create a space for operators and developers to try out creating "specialised services", see if they work, and see what happens to everything else. Then develop policy based on evidence - and yes, you'll have to wait a few years. You should have done it sooner instead of arguing. I suspect it'll prove my point 2 above, anyway.
  • Fourth, consider "inevitabilities" (see this link for discussion). VPNs will get more common. NFV and edge-computing will get more common. Multiple connections will get more common. New networks (eg private cellular, LPWAN) will get more common. Multi-hop connections with WiFi and ZigBee & meshes will get more common. Devices & applications will fragment, cloudify, become "serverless", being componentised with micro-services, and be harder to decode and classify in the network. AI will get more common, to "game" the network policies, as well as help manage the infrastructure. All this changes the landscape for NN over the next couple of years, so we'll end up debating it all again. Think about these things (and others) now.
  • Fifth, try some rules on branding Internet / other access. Maybe allow specialised services, but force them to be sold separately from Internet access, and called something else (Ain'ternet? I Can't Believe it's Not Internet?)
  • Sixth, get ISP executives (and maybe web/content companies' execs too) to make a public promise about acting in consumers' interests on Internet matters, as I suggested a few years ago - an IPocratic Oath. (link)
  • Seventh, train and empower the judiciary to be able to understand, collect data and adjudicate quickly on Internet-related issues. It may be that competition law could be applied, or injunctions granted, even in the absence of hard NN laws. Let's get 24x7 overnight Internet courts able to take an initial view on the permissibility of traffic management - not wait two years, plus appeals, during which time an app-developer slowly dies.
  • Eighth, let's get more accountability on traffic-management and network configurations, so that neutrality/competition law can be applied at a later date anyway. We already have rules on data-retention for customer calls & access to networks. Let's have all internal network configuration & operational data in ISPs' networks securely captured, encrypted, held in escrow and available to prosecutors if needed, under warrant. A blockchain use-case, perhaps? We're going to need that data anyway, to guarantee that customer data hasn't been tampered with by the network.
  • Ninth, ask software (and content and IoT device and cloud) developers what they actually want from the networks. Most seem to be absent from the debate - the forgotten stakeholders. Understand how important "permissionless innovation" actually is. Query whether they care about network QoS, or understand how it links to overall QoS, which covers everything from servers to displays to device chipsets to user-interfaces. Find out how they deal with network glitches and dodgy coverage - and whether "fallback" strategies mean that the primary network is getting more or less important. Do they want better networks, are they prepared to pay for them - or would they rather just have better visibility and predictability of when problems are likely to occur?
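The escrow idea above can be sketched as a simple tamper-evident hash chain: each configuration snapshot is stored alongside a hash linking it to the previous record, so any retrospective edit breaks verification. This is a toy illustration only - the record fields and function names are invented, and a real deployment would add encryption and access control:

```python
import hashlib
import json

def add_record(chain, config):
    """Append a config snapshot, chained to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"config": config, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain):
    """Recompute every hash; any tampered or reordered record fails the check."""
    prev_hash = "0" * 64
    for rec in chain:
        body = {"config": rec["config"], "prev_hash": rec["prev_hash"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev_hash"] != prev_hash or rec["hash"] != digest:
            return False
        prev_hash = rec["hash"]
    return True

chain = []
add_record(chain, {"router1": {"qos_policy": "default"}})
add_record(chain, {"router1": {"qos_policy": "video-throttled"}})
assert verify(chain)

chain[1]["config"]["router1"]["qos_policy"] = "default"  # retrospective tampering
assert not verify(chain)
```

A regulator or court holding only the latest hash could later confirm whether the escrowed history matches what the ISP's network actually ran.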
Apologies for the length of this piece. I'll happily pay someone 0.0000001c for it to load faster, as long as the transaction cost is less than 5% of that.

Get in touch with me at information AT disruptive-analysis dot com if you'd like to discuss it more, or have a sane discussion about Neutrality and what it really means for broadband, policy, 5G, network slicing, IoT and all the rest.

Monday, July 18, 2016

My comments on BEREC's Net Neutrality guidelines consultation

I've been meaning to submit a response to the BEREC consultation on its draft implementation guidelines for the new EU Net Neutrality guidelines for some time. However, a combination of project-work and vacation has meant I've had to do just a fairly rapid set of comments at the last moment. 

I'm posting them here as a reference and further discussion-point. 

As a background, I think the guidelines are quite comprehensive - but have shifted the needle somewhat from the final EU regulation back towards the Internet-centric view of the world. However, the permissiveness around both zero-rating and (in certain circumstances) so-called "specialised services" seems a pragmatic compromise position. I tend to think that zero-rating is fine "in moderation" - it's basically the Internet equivalent of promotions and coupons. "Sponsored data" is an almost-unworkable concept anyway, so the regulatory aspect is largely irrelevant.

Specialised services are OK as long as they are genuinely "special" - something I've been saying for a long time (see post here). It should also be possible to watch for genuine innovation being catalysed / inhibited by the new rules - and then regulators and policymakers can take a more-educated view to revising them in a few years, based on hard evidence.

Anyway - the contents of my submission (reformatted slightly) are below:



Preamble

I am an independent telecom industry analyst and futurist, representing my own advisory company Disruptive Analysis. I advise a broad variety of telecom operators, network and software vendors, investors, NRAs, IT/Internet firms and others on technology evolution paths, business models and applications, and regulatory issues. I look at the issue of Net Neutrality particularly through the lens of what is, or what is not, possible – and also how the Internet value-chain, applications and user-behaviour are likely to evolve in future.

In the past, I have published research studies examining the possible roles and scale of “non-neutral” broadband & IAS business models. My primary conclusions have been that, irrespective of regulation, most proposed commercial models such as “paid prioritisation”, application-based charging or “sponsored data” are broadly unworkable, for many different technical and business reasons – such as growing use of encryption, plus the risks of false positives/negatives.

Overall, I see the guidelines as broadly positive, as they help clarify some of the many grey areas around implementing NN, and clearly try to close off future potential loopholes. Some aspects will likely be difficult to implement technically – notably the precise definitions and measurements of QoS and “quality” – but the guidelines are good in setting the “spirit” of the law, even though in some cases the “letter” may be harder to achieve.

Listed below are comments that I feel could help to:

  • Clarify the guidelines further
  • Help future-proof them against changes in technology
  • Raise questions about possible evolution of the guidelines in response to those changes
  • Lock down a few additional possible loopholes
Specific points on individual paragraphs: (reference to the guidelines doc here)

#10 – in locations where “WiFi guest access” is made available (eg visitors to a company’s offices), there is sometimes a sign-up or registration required, either via a splash-page, or simply via obtaining a password. Does this count as “publicly available”?

#11 – it should be clarified that there is a difference between corporate VPNs for connecting to a central site, and personal VPNs that are designed to secure/encrypt normal users’ access to the Internet. There is also a growing trend for corporate VPNs to be replaced by a new technology, software-defined WAN, which may itself use Internet access or even multiple accesses as transport.

#12 – Consideration of WiFi hotspots needs to distinguish between voluntary access (eg if a user obtains the café password & registers independently) vs. automated “WiFi offload” by ISPs as an integral part of their IAS offering. The latter is a form of “public access”. Also, there are growing examples of ISPs using WiFi in public places, including outdoors, sometimes as part of “WiFi-Primary” public IAS.

#14 - It is worth distinguishing between capital-I “The Internet” (ie public Internet, addressable via the DNS system & IAS) and lower-case-I “internets” (internetworks) that are private domains.

#23-25 – This needs to reference what happens when “terminal equipment” becomes virtualised, through the imminent release of NFV (network function virtualisation) architectures. This could mean that either the “terminal” becomes a software-function in the ISP’s data-centre, or could be (in part) pushed down as a “virtual network function” (VNF) to a general-purpose box at the customer site. Some providers are already discussing the concept of a “VNF AppStore” where the user can choose between different software “terminal” functions. It is unclear if this is permissible – or even mandatory.

#39 & #45 – the nature of software and Internet applications makes it increasingly hard to define categories. There are many blurred boundaries, overlapping categories, “mashups” and differentiated offers. How is the categorisation achieved, for example where a social network includes a large amount of video-streaming in its timelines? Is that equivalent to a “pure” video application? What about streaming of games? Is there a distinction between video-on-demand and live-streaming? This is particularly difficult where some functions such as voice communication are being included as “secondary features” embedded in many other applications, often via the use of 3rd-party platforms and APIs (application programming interfaces). There needs to be stronger guidance on how “categories” are defined and how disputed or ambiguous categorisation can be addressed.

#40 & #45 – a possible implementation option is to require ISPs to report the % of overall traffic (or % of particular user-classes) that is zero-rated. If the total amount provided “for free” is less than (say) 10% of the total, it can a priori be considered acceptable, as it is unlikely to materially affect users’ choices. However, if it is higher, this could trigger closer investigation by the NRA.
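That reporting threshold would be trivial to compute from an ISP's traffic accounting. A minimal sketch, in which the data format, function name and the 10% trigger are all illustrative:

```python
ZERO_RATED_TRIGGER = 0.10  # illustrative 10% threshold from the comment above

def needs_investigation(flows):
    """flows: list of (bytes_carried, is_zero_rated) tuples from traffic accounting."""
    total = sum(b for b, _ in flows)
    zero_rated = sum(b for b, zr in flows if zr)
    share = zero_rated / total if total else 0.0
    return share > ZERO_RATED_TRIGGER, share

flagged, share = needs_investigation([(800, False), (200, True)])
# 20% of bytes are zero-rated here, so this ISP would be flagged for closer scrutiny
```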

#43 – this section seems to focus more on established CAPs or possible new-entrants. It is unclear if this explicitly covers the needs of open-source initiatives and general software-developers.

#56 – There is a possible implementation option for NRAs to collect and hold configuration details for ISPs’ network equipment or software-equivalent VNFs, to allow retrospective analysis of network setup if disputes occur. This could be done on an encrypted / escrowed basis to maintain normal commercial confidentiality.

#57 – the reference to encryption needs to explicitly include both app-level encryption (eg HTTPS / HTTP2) and more general “all-traffic” encryption using corporate or personal VPN “tunnels”

#57 & 58 – an implementation option for NRAs could be provision of a contact-point for internal ISP whistle-blowers to report infringement, or 3rd-party monitoring organisations (eg that use pattern-recognition to detect abuses)

#60 & #61 & #63 – categorisation is extremely hard, owing to application differentiation, complex hybrid and “mashup” applications, different levels of fault-tolerance built into applications by developers etc. For example, different VoIP applications use different approaches to error-correction, or are used differently (eg ordinary telephony vs. karaoke). In future there will also be a difference based on whether the application (at either end) is a machine rather than a person. Implied QoS when speaking to “Siri” or “Alexa” may have very different characteristics to speaking to a friend, despite being carried over VoIP. There may also be other dependencies – eg if network conditions have worse impact on badly-designed applications, or devices with other constraints (memory, CPU power, processing chips etc)

#64 – does “network management traffic” also include other types of operational (internal) ISP traffic such as billing records, customer-service inquiries & apps and so forth?

#71 – does “alteration” cover so-called “optimisation”, whereby various content such as a video or image can be paused, down-rated, reformatted etc.? Does it also cover “insertion” of additional data such as tracking codes / “supercookies”, or additional overlay advertising? Are “splash pages” (eg for WiFi registration) allowed?

#89 – Dimensioning may well be affected by other constraints, such as spectrum availability, location, economics of network coverage/capacity, or “emergent” unexpected trends in demand

#98 & #123 – this appears to define specialised services as “actually being special” rather than those capabilities that are normally delivered over IAS. How are hybrid specialised/non-specialised services to be treated?

#101 & #104 – technologies such as SD-WAN (software-defined WAN) allow improved QoS by linking together multiple IAS connections, which in aggregate can perform as well (or even better/cheaper) than one QoS-optimised connection. Should NRAs consider this option when determining if specialised services are valid? See http://disruptivewireless.blogspot.co.uk/2016/06/arbitrage-everywhere-inevitable.html  and http://disruptivewireless.blogspot.co.uk/2016/03/is-sd-wan-quasi-qos-overlay-for.html for more detailed discussion of this point
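The SD-WAN point can be illustrated with a toy scheduler that steers each new flow onto whichever ordinary IAS link currently measures best; the link names and scoring weights below are invented purely for illustration:

```python
def link_score(latency_ms, loss_rate):
    """Crude composite quality metric: lower is better (weights are arbitrary)."""
    return latency_ms + 1000 * loss_rate

def pick_link(links):
    """links: dict of name -> (latency_ms, loss_rate); return the best-scoring link."""
    return min(links, key=lambda name: link_score(*links[name]))

links = {
    "dsl": (35.0, 0.001),  # steady but slower
    "lte": (28.0, 0.020),  # lower latency, but lossy right now
}
best = pick_link(links)  # picks "dsl": score of roughly 36 vs roughly 48 for "lte"
```

With per-flow steering driven by real-time measurements, the aggregate of two unremarkable connections can behave better than either alone - which is exactly the arbitrage question raised for the NRAs above.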

#111 – It is important to recognise that VPNs are increasingly used by consumers as well as businesses, often to provide a secure & privacy-protected path to the Internet over both public IAS and localised WiFi hotspots. The guidelines should specifically reference consumer VPNs.

#113 to #115 & #117 & #119 – It may be difficult to guarantee coexistence of IAS and specialised services over cellular/other radio networks, where factors such as location in a cell, mobility, density of users, coverage/interference etc are non-deterministic. Potentially the guidelines could advise use of different spectrum bands for IAS and specialised services, to mitigate these problems.

#113 & #116 – in future 5G architectures, we may see a concept called “network slicing”, where the radio and core networks are logically divided into “slices” suitable for different application classes – either broadly between Internet & specialised services of different types, or resold more granularly a bit like “super-MVNOs” to particular 3rd-parties on a wholesale basis. Where those parties are themselves CAPs, this could make interpretation of this section very difficult. If Netflix or Google or even a rival ISP/telco buys rights to a “slice”, how do the guidelines apply?

#131 – This guideline should potentially also include information/transparent guidance for application developers, who may be creating applications intended to run over the IAS provided

#152 – should coverage maps be 2-dimensional, or also include z-axis detail (eg speed in a basement / on the 50th floor of a tower block)? How can such maps cope with the trend towards self-optimising / self-reconfiguring networks of various types?

#167 & #180 – NRAs should potentially seek to maintain records of network configuration status (which may change abruptly with the advent of NFV & SDN). This could perhaps be stored securely & reliably using technologies such as Blockchain.

#172 & #179 – monitoring of aggregate volumes of traffic subject to price-discrimination (eg % of IAS traffic that is zero-rated) would be useful


General comments:
  • There needs to be consideration of meshed, relayed or shared connections which run directly between users’ devices. In device-to-device scenarios, does the owner/operator of an intermediate device become responsible for the neutrality of the “onward” link to 3rd parties? (which could be via any technology such as WiFi, Bluetooth, wired USB port etc)
  • There needs to be consideration that some of the more invasive mechanisms for traffic discrimination and control will in future move from “the network” to becoming virtualised software (provided by an ISP) that resides in edge-nodes at the customer premises, or even in customers’ mobile devices. It is unclear how the implementation guidelines deal with predictable near/mid-term trends in NFV/SDN technology, especially where there is no clear “demarcation point” in ownership between ISP and end-user.
  • Equally, in future there may well be CAP companies that offer their services “in the network” itself, also with NFV/SDN. There needs to be careful thought given to how this intersects with Net Neutrality guidelines.
  • The evolution of artificial intelligence & machine-learning means that workarounds or infringements may become automated, and perhaps even invisible to ISPs, in future. This may also impact the nature of QoS as used for different applications. See http://disruptivewireless.blogspot.co.uk/2016/04/telcofuturism-will-ai-machine-learning.html for more details.
  • Where wholesale relationships occur – eg MNO/MVNO, “neutral host” networks using unlicenced-band LTE, or secondary ID on the same WiFi hotspot – and the traffic-management / IAS functions are co-managed, how do the guidelines apply? Which party/parties are responsible?

Monday, October 26, 2015

EU Telecoms: Why the European Parliament needs to enforce clear Net Neutrality laws


It’s Net Neutrality time again….

On Tuesday 27th October, the European Parliament is once again looking at proposed telecoms regulation for the EU – specifically around roaming and Net Neutrality,  the so-called “telecoms package”.

This is a fairly long post, but covers a set of important issues for policymakers to consider, and for MEPs and their advisers to use as a basis for deciding how to treat the EU legislative package on the table. It also applies to politicians and regulators considering Net Neutrality in other geographies.

The post gives some background to the current legal/regulatory situation in Europe, before critiquing some recent commentary from both pro- and anti-neutrality advocates, notably Barbara van Schewick & Martin Geddes respectively.

Background

In recent years there has been a regulatory ping-pong match played between the European Parliament and European Council. The former voted for “harder” forms of Neutrality in early 2014; the latter has been more accepting of various exceptions more favourable to the traditional telecom providers. The third body involved, the European Commission (which advises the other two), has also become more permissive over the last 18 months, especially with its new Digital Economy commissioners appearing to backtrack on their predecessor’s promises of strictly open-Internet rules. The proposals on mobile roaming have also changed, but that is not a particular focus here.

A rough compromise between Parliament and Council came out of discussions in mid-2015. It is this new set of proposals which is being voted on, with the hope of gaining a final consensus and law. However, as always, there is the possibility of amendments being suggested – and there is a fair amount of lobbying going on to persuade MEPs to do just that. (It is worth noting that amendments in 2014 brought in some surprising additional “hard NN” wording in the previous ruling).

The “pro-neutrality” proposed amendments are being loudly discussed, by bodies like Access Now, EFF and legal experts like Barbara van Schewick encouraging the public to demand “full neutrality” from their MEPs. Opposing views – and perhaps suggested amendments - are being pushed by the telcos and bodies such as ETNO, but with less emphasis on public engagement. This differs from the US process, which became a major public talking-point and political football. My colleague Martin Geddes has also given some strong views, which I also critique below.

I think that both polarised sides of the debate have it wrong. The proposed rules are actually very good for the most part - but more clarity & tweaking is needed on a few issues, and one particular exemption, about classes of service, needs to go. (As a general rule, if a policy annoys both sides equally, it's probably got it about right).

While most of this concerns network neutrality (how the network treats packets, essentially), other aspects are also appearing on the table. There's a lot of debate about whether pricing is part of "neutrality",  especially zero-rating. Some argue it is, others not.

There is also a not-very-subtle attempt by the telcos to conflate the network with "neutrality" for applications and OS platforms. This has some good points about Apple and Google’s autocratic control over iOS and Android (eg vague criteria for accepting/rejecting apps), but mostly the telcos are being duplicitous: it is a cloaked attempt to reclaim power and relevance for telephony and SMS, by suggesting that proprietary Internet messaging and communications tools should be forced to interoperate. I address that briefly at the end – it’s a specious and dangerous argument that’s utterly without merit. (Read this post about so-called platform neutrality).

The proposals also allow for “specialised services” with preferential network QoS, although not where they are direct substitutes for Internet services. This aligns with what I’ve written in the past: specialised services must be genuinely “special” to avoid competition risks. For example, I have no problem with remote monitoring of heart pacemakers having a “fast lane” – there are no likely competitive issues disenfranchising “pacemaker.com” as nobody is likely to run that service over the public Internet anyway.

My main personal concern is that the regulations protect open Internet access, foster innovation – but also don’t hamstring the technology industry and developers when it comes to non-Internet applications and services, especially in the timeframe of 5G. Forget the “I” in IoT – it actually just means Network, or more probably Networks, plural. Only a portion of IoT data will transit the public Internet – there will be many use-cases that need very different networking and regulatory controls, and it is important that legislation is framed in a way that recognises that. Not all of these networks and applications will be “broadband” either – a huge number of devices will continue to be narrowband, using lightweight and battery-friendly connections.

The core of the debate can be summarised like this:

“How can governments best protect and stimulate the value of the Internet for citizens, businesses and society – whilst also encouraging development of novel non-Internet services?”


Policy vs. Regulation, and Neutrality vs. Equality

My collaborator Martin Geddes has weighed in with an articulate post on the topic here – which makes some good points about the technology, but in my view misses badly on its conclusions and consideration of politics and economy.

He asserts that lawyers don’t understand engineering, but then neatly illustrates that engineers don’t understand politics, as he refers to “regulators”, when they’re not directly involved here – it’s a legal and policy debate at the European Parliament. Regulators are the ones who then have to work out how to implement the laws, within the frame of both policy AND technology – hopefully with the limits of the maths of network performance in mind. But that comes later.

First, the law (and democracy) sets up what we want the rules to achieve. Economics, business and social issues are the top-level concerns, not the performance of individual applications, or cost-structures of networks. The “outcomes” Governments are interested in are metrics like GDP, employment, digital inclusion, innovation and so forth. Those are the macro-level reasons why broadband investment is encouraged.

So laws on neutrality and networks should optimise for these lofty goals. If compromises need to be made later, to fit with the awkwardness of networking maths, then so be it. But they should be “de minimis”, with an over-riding objective of maintaining the status quo as far as possible.

Martin argues that because there’s no real, 100% objective “neutrality” when you consider the way IP networks actually work, the whole concept is without merit. But it’s not so much “neutrality” in a mathematically-objective sense regarding packet transport that politicians are considering, but “equality” for Internet application and content providers, and especially the minimisation of “friction” for innovation.

Friction arises from the potentially cumbersome processes and payments involved in a so-called “quality marketplace”, where applications need to signal (and maybe pay for) desired levels of network performance. Not only is this extremely hard or impossible to do at scale (especially as apps evolve in shape and function, with many components and variations), but it would bring in huge obstacles to the software and content development process.

For example, how would an app-developer know the network quality/performance requirements for every module of open-source code, or 3rd-party API embedded in their software? How would they vary by device, network type, OS and location? What are the inter-dependencies? How would P2P or self-optimising software interact with a “quality marketplace”? How would people be able to know they got the “quality” they paid for, and be refunded if they didn’t get it? I also suspect it brings a new attack-surface for privacy invasion and hacking. This level of friction is utterly unworkable for a general case – it can only apply to complex, expensive, time-consuming individual deals. I believe it cannot be automated, and thus could not scale.
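The scale problem can be illustrated with some back-of-envelope arithmetic. All the figures below are hypothetical, purely to show how quickly the combinations multiply:

```python
# Illustrative sketch (hypothetical numbers): even a modest app stack,
# multiplied across devices, network types and operators, yields far too
# many distinct "quality contracts" to negotiate or manage manually.
modules = 40          # open-source libraries / 3rd-party APIs in a typical app
device_types = 25     # phones, tablets, desktops, TVs...
network_types = 6     # 3G, 4G, 5G, WiFi, fixed, satellite
operators = 800       # networks an app's users might connect through worldwide

combinations = modules * device_types * network_types * operators
print(f"Distinct quality profiles to specify: {combinations:,}")
# ...and each would need its own performance spec, price and refund terms.
```

Even with deliberately conservative assumptions, the count runs into the millions – which is the sense in which such a marketplace "cannot be automated, and thus could not scale".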

The key point of evidence is that a (roughly) open and best-efforts Internet has been a huge boon for economy and society over the last 20 years. Contrary to Martin’s assertions, the right answer to his question “how does unpredictable and arbitrary performance help the development of the market?” is not “it doesn’t”. The correct response is “by providing 3 billion people with choice and opportunity to communicate, prosper and interact in ways that generate trillions of dollars, and huge social benefit”. He is right that some applications don’t work perfectly under this regime – but there is no evidence to suggest that these are remotely comparable in scope or scale to the ones that do. The overall value to humanity and nations of the status quo Internet is indisputable. This has arisen largely as a result of the historic friction-free, network-decoupled approach to Internet innovation.

It should also be noted that it has been the desire to access these open “equal” applications, over best-efforts connections, that has driven much of the investment in broadband in the first place.

For sure, other non-Internet communications have also been important (eg corporate networks and emergency infrastructure) and may be more so in future with a shift to IoT. But for now, the Internet model of “permissionless innovation” needs protection where there are strategic pinch-points, such as broadband access infrastructure. Any laws or technical policies which seek to change the current (non-)relationship between Internet application/content providers and networks, or diminish the viability of “best-efforts” access, must be rejected.

By all means frame laws to encourage operators to develop additional platforms which are “non-neutral”, or with "quality contracts" and allow them to try and disrupt the Internet model from adjacency, if they prove effective. There are already various private networks for IoT and other uses that are non-Internet connected. But unless and until there is clear evidence that such approaches don’t “break” what’s currently working, they are unacceptable from a policymaking standpoint, and governments should instruct their regulators accordingly.

The main complexity here is that one of the possible "loser" groups is the same as the one owning the pinch-points - telcos selling phone calls and other "old" products, as well as Internet access. While they benefit from additional demand for connectivity, they are also seeing greater substitution of historic revenue streams. This gives a large incentive to misbehave - and enough proof points of VoIP-blocking and other egregious behaviour to demonstrate the risk is real. The industry regularly tries to protect itself with spurious “level playing field” arguments, but that is mostly an attempt to excuse its lack of innovation in services over the last 20 years.

The telecoms industry should have focused on differentiating quality/capabilities between Internet and non-Internet services. Instead, it has tried to differentiate among or against Internet services, often with highly dubious motives. Behind the seemingly benign talk of “traffic management” or maybe “QoS monetisation” has often been a threat to extort money from application-layer competitors, or deliberately degrade the performance that could be reasonably expected.
 
The comparison of neutrality and equality is important here. Like every other form of equality in law (age, race, gender, sexuality etc), it’s often hard to come up with unambiguous definitions, and even harder to measure in practice. We typically know discrimination when we see it – and also know that sometimes it’s based on unconscious, even accidental, biases rather than external malice. It’s still important to have equality laws, even if we can’t measure or assure perfect equality. There is a close analogy with neutrality – in fact, if we’d called it “Network Equality” we’d probably have had less of the legal hoop-la in recent years.

It is right that the proposed EU rules allow non-Internet specialised services – and that they are kept distinct from Internet offers. This will make it difficult to create hybrid or mash-up services blending both worlds – but that seems a worthwhile price to pay. If we see genuine innovation around non-Internet prioritised services – and a maintained acceptable level of performance from Internet access at the same time – then perhaps the rules can be adapted later.

We need to be more careful with the term “broadband”. Plenty of studies document the role that broadband plays in economic growth, and this often drives government policy. But it’s much less common to estimate what % of “broadband” benefits come from Internet vs. non-Internet use-cases. Without that understanding, we risk mis-framing regulation as being about the enabling networks, rather than the most important service delivered over them.

The “Absolute Neutrality” argument is flawed too

While I am strongly sympathetic to the general concepts of Net Neutrality, I think that some of the more strident calls from the “fundamentalists” are naïve and harmful as well.

Most proponents seem to forget that we only have open Internet access because other integrated “walled garden” services failed to gain traction. The 3G and some 4G spectrum auctions were not conducted with an expectation that “plain vanilla” Internet access was going to be the predominant use-case and source of revenue. The original vision was for a world made up entirely of specialised services and managed connections – so it is hardly surprising or unreasonable that the telecom industry is going to keep trying to make them work.

There is also a view that “broadband = Internet” without an awareness of other non-Internet uses that already exist, and how they may expand in the future.

Indeed, it is worth noting that the nature of telecoms and the Internet itself has changed over time too – we are now actively talking about use-cases for 5G which extend well beyond traditional Internet-based services, while virtualisation of networks is also starting to raise the prospect of network “slices” which behave in different ways, with different traffic. (I wrote about the possible regulatory impacts of SDN and NFV recently, here). It is important that laws are not framed in a way that will make them conceptually obsolete in coming years.

Many of the neutrality “absolutists” also tend to go over-the-top (*cough*, sorry!) on fears that innovation might be stifled by large companies paying for prioritisation, putting start-ups at a disadvantage.

This is a straw-man. Everything I’ve seen or heard suggests that there is almost zero willingness-to-pay for priority from content/apps providers anyway – their business models can’t accommodate the fees, they doubt the technology would work well enough, and they don’t want clunky commercial relationships with hundreds of network operators. They also know that most mobile users are connected via 3rd-party WiFi a lot of the time, and coupled with other variables like radio coverage, the net benefits would be slim. In other words, the idea of paid prioritisation (in mobile) is a total dud. Things might be different in the fixed-broadband world, but even there most developers would prefer to pay for better adaptivity in their software, rather than notional “quality contracts”.

Schewick and many other lobby groups are also implacably opposed to the use of “zero-rating” of certain data against users’ quotas. This is something I’ve written on before as well. Again, the “absolutist” lobby engages in strawman arguments about telcos and ISPs “picking winners” via zero-rating of data traffic. I’d say that they are much more influential in picking winners when it comes to bundling of content or apps – real competition occurs when Spotify, rather than another service, gets the deal, not when its traffic is subsequently zero-rated.

It’s also worth noting that zero-rating (where nobody pays for data) is very different to sponsored data, where the app provider picks up the bill. The latter exists almost nowhere, and will not gain any major traction in future either. Like paid-quality, there is almost zero willingness to pay, and almost no technical way to get it to work properly, except on painstakingly-crafted individual deals.

My view is that operators should be made to report zero-rated traffic volumes, and that as long as it was below a certain amount (1%, 5%, 10% of total data etc) it could just be considered a promotional tool and unlikely to affect user behaviour and competition. This would reduce risks of harm around bulky uses like streaming video and cloud storage.
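As a sketch of how such a reporting rule might work in practice – the threshold value and function names below are my own, purely illustrative assumptions, not anything proposed in legislation:

```python
# Hypothetical sketch of the reporting rule suggested above: an operator
# reports zero-rated bytes alongside total bytes, and a regulator flags
# anything above an agreed promotional threshold for closer scrutiny.
THRESHOLD = 0.05  # illustrative 5% cap, not a real regulatory figure


def zero_rating_share(zero_rated_bytes: int, total_bytes: int) -> float:
    """Fraction of an operator's total traffic that was zero-rated."""
    return zero_rated_bytes / total_bytes if total_bytes else 0.0


def needs_scrutiny(zero_rated_bytes: int, total_bytes: int) -> bool:
    """True when zero-rated traffic exceeds the promotional threshold."""
    return zero_rating_share(zero_rated_bytes, total_bytes) > THRESHOLD


print(needs_scrutiny(zero_rated_bytes=3_000, total_bytes=100_000))   # 3% share
print(needs_scrutiny(zero_rated_bytes=12_000, total_bytes=100_000))  # 12% share
```

The point of the sketch is that the rule is cheap to administer: it requires only aggregate volume reporting, not per-application inspection.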

Where Schewick is right is about the Trojan Horse of allowing management by “class of service”. Deliberate throttling of encrypted communications by networks is a particularly pertinent risk.

The Myth of Platform Neutrality

One other area is worth noting, in case amendments are tabled about it this week.

MEPs should also be very wary of any amendments tabled about “platform neutrality”. There has been a recent upsurge in rhetoric from telcos trying to suggest that applications like Whatsapp and Facebook Messenger should be forced to interoperate with SMS. Not only is this ridiculous – the best communications apps are too uniquely designed and specifically featured for “interoperability” to have any meaning – but it is clearly an anti-consumer and protectionist move.

There are also many side-effects to so-called platform neutrality, which would backfire spectacularly for the telecoms industry. See this post.

It should also be considered that if Skype or Viber are deemed sufficiently similar to “primary” telephony that they should be forced to interconnect, then there is no obvious argument why they should not also benefit from number portability. Furthermore, there is then no reason to force consumers to own phone numbers at all – it would be anti-competitive for businesses or public bodies to insist on having users’ phone details, rather than giving them a free choice of communications method.


Conclusion

The current proposals are generally good, but have some possible pitfalls and need clarification. They align with the most important policy objective – protecting the “generative” nature of Internet innovation, by ensuring that strategic pinch-points in the access network are not abused. The proposals mandate no blocking or deliberate degradation of Internet services – the most important aspect, in my opinion. However, the distinction of Internet and non-Internet uses of networks needs to be made clearer, as the latter may be able to benefit from more flexibility.

The laws should ideally protect innovation and status-quo business models in the Internet domain, while simultaneously encouraging innovation in non-Internet “specialised” applications and networks. From an economic and societal point of view, that would give “the best of both worlds”.

At a later date, if both domains evolve well, we can consider hybrids or more relaxed rules. But any such decision needs to be evidence-based – let’s see proof that both requirements can be satisfied first.

Martin’s post asks “Regulators face a simple choice: either there is a rational market pricing for quality (that developers must participate in), or there is rationing of quality. Which one do you want?”. That is a false dichotomy. Law-makers and regulators can define two distinct worlds, with two different answers to that question. For Internet access it is the latter option, of “rationed quality”. Occasional failures and glitches are a minor inconvenience and annoyance, compared to the risks of killing the Golden Goose of Internet Innovation. Fully-predictable network quality is vastly exaggerated in importance for the Internet – although it may well have its uses for the emergent non-Internet world. Such a marketplace in “quality pricing” may prove itself over time, but it needs to do so without riding on the web’s coat-tails.

The interesting question is how we can create “both worlds” – non-Internet and Internet – without losing some of the benefits of converged networks. One option may be disaggregation – actually creating separate physical/logical networks for Internet and non-Internet use. This may imply extra costs (and some loss of multiplexing benefits) but it may be worth it to resolve this dilemma. While in the past such costs may have been prohibitive, new technologies may help.

(I am wondering if using spatial or wavelength multiplexing to create separate connections, rather than statistical multiplexing on single connections, could be the solution. A discussion for another post.)

In conclusion, my opinion is that the Net Neutrality part of the EU telecoms package is mostly reasonable, but needs to be clarified in some areas, and have amendments that ensure the optimal social and economic outcomes arise.

  • Internet access is treated differently to non-Internet access. Both are important. The status-quo model of frictionless Internet app development needs to continue, but we should also encourage non-Internet innovation where there is clear separation. Hybrids can come later if everything works out.
  • It should be explicitly made illegal to impede or slow encrypted traffic, relative to unencrypted traffic. The proposal that networks can differentially treat “classes” of traffic, but not individual applications, is a Trojan Horse – unless we have much more clarity about what types/styles of “class” might be permissible or not.
  • Specialised services must actually be special, not replicas of Internet services.
  • The debate must not be hijacked by an over-focus on “content” rather than applications, communications & things.
  • Predictable developments like 5G and SDN/NFV should be considered, so they don’t make a mockery of the wording used in any new laws and subsequent regulations.
  • Differential charging (especially zero-rating) can be done for limited purposes and in limited volumes, but needs careful scrutiny to avoid being a competitive risk.
  • Transparency (describing what is being done to data on the network, and current network status) is often more important than the actual management itself.

MEPs should remember they are voting for “Internet Equality”, not “Broadband Network Neutrality”. For the Internet, Equality is more important than Quality.