It’s Net
Neutrality time again….
On
Tuesday 27th October, the European Parliament is once again looking at proposed
telecoms regulation for the EU – specifically around roaming and Net
Neutrality, the so-called “telecoms package”.
This is a
fairly long post, but covers a set of important issues for policymakers to
consider, and for MEPs and their advisers to use as a basis for deciding how to
treat the EU legislative package on the table. It also applies to politicians
and regulators considering Net Neutrality in other geographies.
The post
gives some background to the current legal/regulatory situation in Europe,
before critiquing some recent commentary from both pro- and anti-neutrality
advocates, notably Barbara van Schewick and Martin Geddes respectively.
Background
In recent
years there has been a regulatory ping-pong match played between the European
Parliament and European Council. The former voted for “harder” forms of
Neutrality in early 2014; the latter has been more accepting of various
exceptions more favourable to the traditional telecom providers. The third body
involved, the European Commission (which advises the other two), has also
become more permissive over the last 18 months, especially with its new Digital
Economy commissioners appearing to backtrack on their predecessor’s promises of
strictly open-Internet rules. The proposals on mobile roaming have also
changed, but that is not a particular focus here.
A rough
compromise between Parliament and Council came out of discussions in mid-2015.
It is this new set of proposals which is being voted on, with the hope of
gaining a final consensus and law. However, as always, there is the possibility
of amendments being suggested – and there is a fair amount of lobbying going on
to persuade MEPs to do just that. (It is worth noting that amendments in 2014
brought in some surprising additional “hard NN” wording in the previous
ruling).
The
“pro-neutrality” proposed amendments are being loudly discussed, with bodies like
Access Now and the EFF, and legal experts like Barbara van Schewick, encouraging the public to demand
“full neutrality” from their MEPs. Opposing views – and perhaps suggested
amendments – are being pushed by the telcos and bodies such as ETNO, but with
less emphasis on public engagement. This differs from the US process, which
became a major public talking-point and political football. My colleague Martin
Geddes has also given some strong views, which I critique below.
I think
that both polarised sides of the debate have it wrong. The
proposed rules are actually very good for the most part - but more clarity
& tweaking is needed on a few issues, and one particular exemption, about
classes of service, needs to go. (As a general rule, if a policy annoys both
sides equally, it's probably got it about right).
While
most of this concerns network neutrality (how the network treats packets,
essentially), other aspects are also appearing on the table. There's a lot of debate
about whether pricing is part of "neutrality", especially zero-rating. Some argue it is,
others not.
There is
also a not-very-subtle attempt by the telcos to conflate network neutrality with
“neutrality” for applications and OS platforms. This argument has some good
points about Apple and Google’s autocratic control over iOS and Android (eg
vague criteria for accepting/rejecting apps), but mostly the telcos are being
duplicitous: it is a cloaked attempt to reclaim power and relevance for
telephony and SMS, by suggesting that proprietary Internet messaging and
communications tools should be forced to interoperate. I address that briefly
at the end – it’s a specious and dangerous argument that’s utterly without
merit. (Read this post about so-called platform
neutrality).
The
proposals also allow for “specialised services” with preferential network QoS,
although not where they are direct substitutes for Internet services. This
aligns with what I’ve written in the past: specialised services must be
genuinely “special” to avoid competition risks. For example, I have no problem
with remote monitoring of heart pacemakers having a “fast lane” – there are no
likely competitive issues disenfranchising “pacemaker.com” as nobody is likely
to run that service over the public Internet anyway.
My main personal
concern is that the regulations protect open Internet access and foster innovation – but also don’t hamstring the technology industry and developers when it
comes to non-Internet applications
and services, especially in the timeframe of 5G. Forget the “I” in IoT – it
actually just means Network, or more probably Networks, plural. Only a portion of IoT data will transit the public
Internet – there will be many use-cases that need very different networking and
regulatory controls, and it is important that legislation is framed in a way
that recognises that. Not all of these networks and applications will be
“broadband” either – a huge number of devices will continue to be narrowband,
using lightweight and battery-friendly connections.
The core
of the debate can be summarised like this:
“How can governments best protect
and stimulate the value of the Internet for citizens, businesses and society –
whilst also encouraging development
of novel non-Internet services?”
Policy
vs. Regulation, and Neutrality vs. Equality
My
collaborator Martin Geddes has weighed in with an articulate post on the
topic here – which
makes some good points about the technology, but in my view misses badly on its
conclusions and consideration of politics and economics.
He
asserts that lawyers don’t understand engineering, but then neatly illustrates
that engineers don’t understand politics, as he refers to “regulators”, when
they’re not directly involved here – it’s a legal and policy debate at the
European Parliament. Regulators are the ones who then have to work out how to
implement the laws, within the frame of both policy AND technology – and hopefully
with awareness of the limits of the maths of network performance in mind. But
that comes later.
First,
the law (and democracy) sets up what we want the rules to achieve. Economics,
business and social issues are the top-level concerns, not the performance of
individual applications, or cost-structures of networks. The “outcomes”
Governments are interested in are metrics like GDP, employment, digital
inclusion, innovation and so forth. Those are the macro-level reasons why
broadband investment is encouraged.
So laws
on neutrality and networks should optimise for these lofty goals. If
compromises need to be made later, to fit with the awkwardness of networking
maths, then so be it. But they should be “de
minimis”, with an over-riding objective of maintaining the status quo as
far as possible.
Martin
argues that because there’s no real, 100% objective “neutrality” when you
consider the way IP networks actually work, the whole concept is without
merit. But it’s not so much “neutrality” in a mathematically-objective sense
regarding packet transport that politicians are considering, but “equality” for
Internet application and content providers, and especially the minimisation of
“friction” for innovation.
Friction
arises from the potentially cumbersome processes and payments involved in a
so-called “quality marketplace”, where applications need to signal (and maybe
pay for) desired levels of network performance. Not only is this extremely hard
or impossible to do at scale (especially as apps evolve in shape and function,
with many components and variations), but it would bring in huge obstacles to
the software and content development process.
For
example, how would an app-developer know the network quality/performance
requirements for every module of open-source code, or 3rd-party API
embedded in their software? How would they vary by device, network type, OS and
location? What are the inter-dependencies? How would P2P or self-optimising
software interact with a “quality marketplace”? How would people be able to
know they got the “quality” they paid for, and be refunded if they didn’t get
it? I also suspect it brings a new attack-surface for privacy invasion and
hacking. This level of friction is utterly unworkable for a general case – it
can only apply to complex, expensive, time-consuming individual deals. I
believe it cannot be automated, and thus could not scale.
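To make the scaling problem concrete, here is a minimal back-of-envelope sketch in Python. Every figure in it (modules per app, device classes and so on) is invented purely for illustration – the point is only how quickly the combinations multiply if each component of an app had to declare its own quality requirements in a “quality marketplace”:

```python
# Back-of-envelope illustration of the combinatorial friction in a
# hypothetical "quality marketplace". All figures are invented for
# illustration only - they are not measurements or forecasts.

modules_per_app = 50      # open-source libraries, 3rd-party APIs, etc.
device_classes = 20       # phones, tablets, PCs, TVs, wearables...
network_types = 6         # 3G, 4G, WiFi, fixed, satellite...
operating_systems = 4
countries = 30            # performance and pricing may vary by market

# Distinct quality "contracts" a single app might need to specify
combinations = (modules_per_app * device_classes * network_types
                * operating_systems * countries)

print(f"Quality profiles for ONE app: {combinations:,}")              # 720,000
print(f"...and across 2 million apps: {combinations * 2_000_000:,}")  # 1.44 trillion
```

Even with these deliberately modest assumptions, the number of profiles runs into the hundreds of thousands per app before anything changes shape – which is exactly why such deals only happen today as complex, hand-crafted one-offs.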
The key
point of evidence is that a (roughly) open and best-efforts Internet has been a
huge boon for the economy and society over the last 20 years. Contrary to Martin’s
assertions, the right answer to his question “how does unpredictable and arbitrary performance help the development
of the market?” is not “it doesn’t”.
The correct response is “by providing 3
billion people with choice and opportunity to communicate, prosper and interact
in ways that generate trillions of dollars, and huge social benefit”. He is
right that some applications don’t work perfectly under this regime – but there
is no evidence to suggest that these are remotely comparable in scope or scale
to the ones that do. The overall value to humanity and nations of the status
quo Internet is indisputable. This has arisen largely as a result of the
historic friction-free, network-decoupled approach to Internet innovation.
It should
also be noted that it has been the desire for accessing these open “equal”
applications, over best-efforts connections, that has driven much of the
investment for broadband in the first place.
For sure,
other non-Internet communications have also been important (eg corporate
networks and emergency infrastructure) and may be more so in future with a
shift to IoT. But for now, the Internet model of “permissionless innovation”
needs protection where there are strategic pinch-points, such as broadband access
infrastructure. Any laws or technical policies which seek to change the current
(non-)relationship between Internet application/content providers and networks,
or diminish the viability of “best-efforts” access must be rejected.
By all
means frame laws to encourage operators to develop additional platforms which are “non-neutral” or based on “quality
contracts”, and allow them to try to disrupt the Internet model from
adjacency, if they prove effective. There are already various private networks
for IoT and other uses that are non-Internet connected. But unless and until there
is clear evidence that such approaches don’t “break” what’s currently working,
they are unacceptable from a policymaking standpoint, and governments should
instruct their regulators accordingly.
The main complexity
here is that one of the possible "loser" groups is the same as the
one owning the pinch-points - telcos selling phone calls and other
"old" products, as well as Internet access. While they benefit from
additional demand for connectivity, they are also seeing greater substitution
of historic revenue streams. This gives a large incentive to misbehave – and
there are enough proof points of VoIP-blocking and other egregious behaviour to
demonstrate that the risk is real. The industry regularly tries to protect itself
with spurious “level playing field” arguments, but that is mostly an attempt to
excuse its lack of innovation in services over the last 20 years.
The
telecoms industry should have focused
on differentiating quality/capabilities between
Internet and non-Internet services. Instead, it has tried to differentiate among or against Internet services, often with highly dubious motives.
Behind the seemingly benign talk of “traffic management” or maybe “QoS
monetisation” has often been a threat to extort money from application-layer
competitors, or deliberately degrade the performance that could be reasonably
expected.
The
comparison of neutrality and equality is important here. Like every other form
of equality in law (age, race, gender, sexuality etc), it’s often hard to come
up with unambiguous definitions, and even harder to measure in practice. We
typically know discrimination when we see it – and also know that sometimes
it’s based on unconscious, even accidental, biases rather than external malice.
It’s still important to have equality
laws, even if we can’t measure or assure perfect equality. There is a close
analogy with neutrality – in fact, if we’d called it “Network Equality” we’d
probably have had less of the legal hoop-la in recent years.
It is right
that the proposed EU rules allow non-Internet specialised services – and that
they are kept distinct from Internet offers. This will make it difficult to
create hybrid or mash-up services blending both worlds – but that seems a worthwhile
price to pay. If we see genuine innovation around non-Internet prioritised
services – and a maintained acceptable level of performance from Internet
access at the same time – then perhaps the rules can be adapted later.
We need
to be more careful with the term “broadband”. Plenty of studies document the
role that broadband plays in economic growth, and this often drives government
policy. But it’s much less common to estimate what proportion of “broadband” benefits
come from Internet vs. non-Internet use-cases. Without that understanding, we
risk mis-framing regulation as being about the enabling networks, versus the
most important service delivered over them.
The
“Absolute Neutrality” argument is flawed too
While I
am strongly sympathetic to the general concepts of Net Neutrality, I think that
some of the more strident calls from the “fundamentalists” are naïve and
harmful as well.
Most
proponents seem to forget that we only have open Internet access because other
integrated “walled garden” services failed to gain traction. The 3G and some 4G
spectrum auctions were not conducted with an expectation that “plain vanilla”
Internet access was going to be the predominant use-case and source of revenue.
The original vision was for a world made up entirely
of specialised services and managed connections – so it is hardly surprising or
unreasonable that the telecom industry is going to keep trying to make them
work.
There is
also a view that “broadband = Internet” without an awareness of other
non-Internet uses that already exist, and how they may expand in the future.
Indeed,
it is worth noting that the nature of telecoms and the Internet itself has
changed over time too – we are now actively talking about use-cases for 5G
which extend well beyond traditional Internet-based services, while
virtualisation of networks is also starting to raise the prospect of network
“slices” which behave in different ways, with different traffic. (I wrote about
the possible regulatory impacts of SDN and NFV recently, here). It is important
that laws are not framed in a way that will make them conceptually obsolete in
coming years.
Many of
the neutrality “absolutists” also tend to go over-the-top (*cough*, sorry!) on
fears that innovation might be stifled by large companies paying for
prioritisation, putting start-ups at a disadvantage.
This is a
straw-man. Everything I’ve seen or heard suggests that there is almost zero
willingness-to-pay for priority from content/apps providers anyway – their
business models can’t accommodate the fees, they doubt the technology would
work well enough, and they don’t want clunky commercial relationships with
hundreds of network operators. They also know that most mobile users are
connected via 3rd-party WiFi a lot of the time anyway, and that, coupled
with other variables like radio coverage, means the net benefits would be slim. In
other words, the idea of paid prioritisation (in mobile) is a total dud anyway.
Things might be different in the fixed-broadband world, but even there most
developers would prefer to pay for better adaptivity in their software, rather
than notional “quality contracts”.
Van Schewick
and many lobby groups are also implacably opposed to the use of
“zero-rating” of certain data against users’ quotas. This is something I’ve written on before as well. Again, the “absolutist” lobby engages in strawman
arguments about telcos and ISPs “picking winners” via zero-rating of data
traffic. I’d say that operators are much more influential in picking winners when it
comes to bundling of content or apps – the real competition occurs when Spotify
gets the bundling deal rather than another service, not when its usage is zero-rated.
It’s also
worth noting that zero-rating (where nobody pays for data) is very different to
sponsored data, where the app provider picks up the bill. The latter exists
almost nowhere, and will not gain any major traction in future either. Like
paid-quality, there is almost zero willingness to pay, and almost no technical
way to get it to work properly, except on painstakingly-crafted individual
deals.
My view
is that operators should be made to report zero-rated traffic volumes, and
that as long as these remain below a certain threshold (1%, 5% or 10% of total data, say)
zero-rating could just be considered a promotional tool, unlikely to affect user
behaviour and competition. This would reduce risks of harm around bulky uses
like streaming video and cloud storage.
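For illustration, here is a minimal sketch of how such a reporting check might work. The 5% threshold and the traffic figures below are entirely hypothetical – nothing like this mechanism exists in the current proposals:

```python
# Hypothetical zero-rating "de minimis" check. The 5% threshold and the
# traffic figures are invented for illustration; the EU proposals specify
# no such mechanism.

THRESHOLD = 0.05  # e.g. 5% of total data volume


def zero_rating_within_limit(zero_rated_gb: float, total_gb: float,
                             threshold: float = THRESHOLD) -> bool:
    """Return True if zero-rated traffic stays below the reporting threshold."""
    if total_gb <= 0:
        raise ValueError("total traffic must be positive")
    return (zero_rated_gb / total_gb) < threshold


# Example: an operator reports 30 PB of zero-rated traffic out of 1,200 PB total
print(zero_rating_within_limit(30_000_000, 1_200_000_000))  # True (2.5% of total)
```

The attraction of a simple volume-share test is that it is cheap to audit from figures operators already collect for billing, while still flagging the bulky use-cases (video, cloud storage) where zero-rating could genuinely distort competition.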
Where
van Schewick is right is about the Trojan Horse of allowing management by “class of
service”. The possibility that networks deliberately throttle encrypted communications
is a particularly pertinent risk.
The Myth
of Platform Neutrality
One other
area is worth noting, in case amendments are tabled about it this week.
MEPs
should also be very wary of any amendments tabled about “platform neutrality”.
There has been a recent upsurge in rhetoric from telcos trying to suggest that
applications like Whatsapp and Facebook Messenger should be forced to
interoperate with SMS. Not only is this ridiculous – the best communications
apps are too uniquely-designed and specifically-featured for
“interoperability” to have any meaning – but it is clearly an anti-consumer and
protectionist move.
There are
also many side-effects to so-called platform neutrality, which would backfire
spectacularly for the telecoms industry. See this post.
It should
also be considered that if Skype or Viber are deemed sufficiently similar to
“primary” telephony that they should be forced to interconnect, then there is
no obvious argument why they should not also benefit from number portability.
Furthermore, there would then be no reason to force consumers to own phone
numbers at all – it would be anti-competitive for businesses or public bodies
to insist on having users’ phone details, rather than giving them a free choice
of communications method.
Conclusion
The
current proposals are generally good, but have some possible pitfalls and need
clarification. They align with the most important policy objective – protecting
the “generative” nature of Internet innovation, by ensuring that strategic
pinch-points in the access network are not abused. The proposals mandate no
blocking or deliberate degradation of Internet services – the most important
aspect, in my opinion. However, the distinction of Internet and non-Internet
uses of networks needs to be made clearer, as the latter may be able to benefit
from more flexibility.
The laws should ideally protect
innovation and status-quo business models in the Internet domain, while
simultaneously encouraging innovation in non-Internet “specialised”
applications and networks. From an economic and societal point of view, that
would give “the best of both worlds”.
At a later
date, if both domains evolve well, we can consider hybrids or more relaxed
rules. But any such decision needs to be evidence-based – let’s see proof that
both requirements can be satisfied first.
Martin’s
post asks “Regulators face a simple
choice: either there is a rational market pricing for quality (that developers
must participate in), or there is rationing of quality. Which one do you want?”.
That is a false dichotomy. Law-makers and regulators can define two distinct
worlds, with two different answers to that question. For Internet access it is
the latter option, of “rationed quality”. Occasional failures and glitches are
a minor inconvenience and annoyance, compared to the risks of killing the
Golden Goose of Internet Innovation. Fully-predictable network quality is
vastly exaggerated in importance for the Internet – although it may well have
its uses for the emergent non-Internet world. Such a marketplace in “quality
pricing” may prove itself over time, but it needs to do so without riding on the
web’s coat-tails.
The
interesting question is how we can create “both worlds” – non-Internet and
Internet – without losing some of the benefits of converged networks. One option may
be disaggregation – actually creating separate physical/logical networks for
Internet and non-Internet use. This may imply extra costs (and some loss of
multiplexing benefits) but it may be worth it to resolve this dilemma. While in
the past such costs may have been prohibitive, new technologies may help.
(I am
wondering if using spatial or wavelength multiplexing to create separate
connections, rather than statistical multiplexing on single connections, could
be the solution. A discussion for another post.)
In
conclusion, my opinion is that the Net Neutrality part of the EU telecoms
package is mostly reasonable, but needs to be clarified in some areas, and have
amendments that ensure the optimal social and economic outcomes arise.
- Internet access is treated differently to non-Internet access. Both are important. The status-quo model of frictionless Internet app development needs to continue, but we should also encourage non-Internet innovation where there is clear separation. Hybrids can come later if everything works out.
- It should be explicitly made illegal to impede or slow encrypted traffic, relative to unencrypted traffic. The proposal that networks can differentially treat “classes” of traffic, but not individual applications, is a Trojan Horse – unless we have much more clarity about what types/styles of “class” might be permissible or not.
- Specialised services must actually be special, not replicas of Internet services
- The debate must not be hijacked by an over-focus on “content” rather than applications, communications & things
- Predictable developments like 5G and SDN/NFV should be considered, so they don’t make a mockery of the wording used in any new laws and subsequent regulations
- Differential charging (especially zero-rating) can be done for limited purposes and in limited volumes, but needs careful scrutiny to avoid being a competitive risk
- Transparency (describing what is being done to data on the network, and current network status) is often more important than the actual management itself
MEPs should remember they are voting for “Internet Equality”, not
“Broadband Network Neutrality”. For the
Internet, Equality is more important than Quality.
1 comment:
"Again, the “absolutist” lobby engages in strawman arguments about telcos and ISPs “picking winners” via zero-rating of data traffic."
Why do you consider "picking winners" to be a straw man argument? The telco/ISP does pick the services it includes in the zero rating package. Even if the zero rating itself does not pick the market winner, zero rating does influence which service the consumer prefers. Clearly there will be a cost differential between services, as the zero rated service will cost only the monthly subscription fee while the non-zero rated service will cost the monthly fee plus any data charges. This cost differential can become quite large with streaming services.
It is also quite clear that the telco/ISP is "picking winners" when choosing which service providers to include. Non-entities, non-contenders and non-participants are clearly at a disadvantage compared to the service providers who manage to get on the short list.
Obviously zero rated does not equal market winner, but the "picking winners" term is a convenient shorthand to illustrate the concept and consequences even if it is not, strictly speaking, 100% accurate.
Are you opposing "picking winners" due to its inaccuracy or some other reason? And if "picking winners" is a straw man, what is the correct argument to be rebuffed?