Speaking Engagements & Private Workshops - Get Dean Bubley to present or chair your event

Need an experienced, provocative & influential telecoms keynote speaker, moderator/chair or workshop facilitator?
To see recent presentations, and discuss Dean Bubley's appearance at a specific event, click here

Showing posts with label AT&T. Show all posts

Friday, October 28, 2016

A realistic 5G view: Timelines, Standards & Politics

Things are moving incredibly fast for 5G!

...or are they? A couple of recent headlines make it a little hard to tell:

Verizon Eyes "Wireless Fibre" Launch in 2017 

Verizon Rejects AT&T-led Effort to Speed Up Release of 5G Standard

So, does Verizon want early 5G, or not? Are we looking at a 2017 launch, or still 2019-20? Why the apparent contradiction? And what about other operators in Asia and Europe?

I've been to recent 5G events including NGMN's conference (link), and a smaller one this week organised by Cambridge Wireless and the UK's National Infrastructure Commission (link). I've also been debating with assorted fellow-travellers online and at this week's WiFi Now event (link).

In my view, Verizon (and SKT in South Korea) are gunning hard for early "pre-5G" well in advance of the full standards, but are also subtly trying to push back the development of "proper" 5G so that they're able to influence it to their advantage. That's especially true for Verizon, which seems to be trying to out-game AT&T with its 5G strategy.

It's helpful to note a few things going on in the background:
  • 28GHz is definitely "a thing". The FCC released huge chunks of spectrum for 5G this summer (link). Even though 28GHz wasn't even identified as a candidate 5G band by ITU originally, and mmWave wasn't expected to be standardised until 2020, it is starting to look like an early "done deal", as it's also available for use in S Korea and Japan.
  • The 2018 Winter Olympics in Korea have prompted local operators KT and SKT, as well as Samsung, to look for pre-5G solutions. They've already put considerable effort into 28GHz trials (as has DoCoMo in Japan, ahead of the 2020 Summer Olympics), and those trials have gone well. They have been mostly interested in mobile broadband.
  • Verizon (and to an extent AT&T) have a different driver - gigabit-speed fixed broadband. They have been stung by the rapid growth of cable, which has far outpaced DSL in speed and market share. They also want to shut down the old PSTN and go to all-IP architectures. The problem is that much of the US is too sparsely-populated to run FTTH everywhere - putting new fibre in a trench down rural roads and driveways in Idaho, to serve a handful of homes, is not appealing. But running fibre to a pole or cabinet distribution point & then using 5G as a "drop" to, say, 10-100 homes nearby is much cheaper. T-Mobile US and USCellular have also been trialling fixed-wireless 5G, although any deployment would be harder without their own fibre backhaul and transport infrastructure. Ericsson and Nokia are also involved in the trials.
  • Fixed-access 5G won't need complex network-slicing & NFV cores to be useful, as it can be functionally similar to other forms of broadband access. It also won't need mobility, or fallback to 4G, and will be able to run in big wall-mounted terminals connected to a power supply - and sold/branded by the carrier rather than Apple et al. In other words, it's a lot simpler, and a lot faster-to-market.
  • Meanwhile, the other "headline" use-case groups for 5G have some issues. "Massive IoT" is probably going to have to wait until after the 4G variant NB-IoT has been deployed and matured. A 5G version of low-power IoT networking seems unlikely before 2020-22. And the ultra-low latency IoT use-cases (drones and self-driving cars et al) introduce some unpleasant compromises in radio frame structure, and given probable low volumes are something of a "tail wagging the 5G dog". In other words, the IoT business models for 5G don't really exist yet.
  • Linked to the IoT argument, it seems that the much-vaunted NFV "network slicing" approach to combine all these myriad use-cases is going to be late, expensive, complex and in need of better integration with BSS/OSS and legacy domains. I wrote about my doubts over slicing last month - link
So in other words, the original 3-Bubble Venn diagram for 5G use-cases (Enhanced Mobile Broadband, Massive IoT & Low-Latency IoT) was wrong. There's a 4th bubble - fixed wireless, which is going to come first.



And this is massively important in the new technology reality. Increasingly often these days, fast-to-market beats perfect and then often defines future direction as well. We have seen various disruptions from adjacency, where expedient "narrow" solutions beat theoretical elegant-but-grandiose architectures to the punch. SD-WAN's rapid rise is disrupting the original NFV/NaaS plan for enterprise services, for example (link). Similarly, the rise of Internet VoIP and chat apps signalled the death-knell for IMS as a platform for anything except IP-PSTN.

In this case, I believe that fixed-wireless 5G - even if "pre-standard" and relatively small in volume - is going to set the agenda for later mobile broadband 5G, and then even-later IoT 5G. If it gets traction, there's a good chance the inertia will create de-facto standards and also skew official standards to ensure interoperability. This is already evident in steam-rollering 28GHz into the picture. (It's also worth remembering that Apple's surprise early decision to support 1.8GHz for LTE shifted the market a few years ago - while that had been an "official" band, it hadn't been expected to be popular so soon).

The critical element here is that AT&T is much more bullish and focused on mobile broadband (especially in urban hotspots) as a lead use-case for 5G, plus backhaul-type point-to-point connections. It expects that "the coverage layer will be 4G for many years to come". At the NGMN conference, AT&T's speaker noted that fixed uses were also of interest, but was wary of the business case - for instance, whether it was possible to reach 10 homes or 30 from a single radio head. AT&T also seems more interested in 70-80GHz links to apartment blocks, using existing in-building wiring, rather than Verizon's 28GHz rural-area drops. Coupled with its CEO's rather implausible assertion that mobile 5G will compete with cable broadband (link), this suggests it is somewhat distant from the Verizon/SKT/DoCoMo group.

The kicker for me is the delay to the 3GPP standardisation of what is called the "non-standalone" (NSA) version of the 5G radio, which uses a 4G control plane and is suitable for mobile devices (link). Despite its bullishness on fixed 5G, Verizon has pushed the timeline for this more mobile-friendly version back 6 months, against AT&T's wishes. The NSA and SA (standalone) versions will now both be targeted for the June 2018 meeting of the standards body, rather than December 2017.

The official reason given is fairly turgid: "in order to effectively define a non-standalone option which can then migrate to a standalone, a complete study standalone would be required to derisk the migration". But I suspect the truth is rather more political: it gives Verizon and its partners (notably Samsung) another 6 months to get their 28GHz fixed-access solution into the market. Qualcomm has just announced a pre-5G chip that can accommodate just that, too. This means that standardised eMBB devices probably won't arrive until mid-2019, although there may be a few pre-standard ones for the 2018 Winter Olympics and elsewhere.

This will cement not just the 28GHz band in place, but also the fixed-5G uses and the idea that 5G doesn't need the full, fancy network-slicing NFV back-end. Given AT&T's huge emphasis on its ECOMP virtualisation project, that reduces the possible future advantage that might accrue if 5G was "always virtualised". It may also mean that lessons from real-world deployment get fed into the 2018 standards in some fashion, further advantaging the early movers. This is especially the case if it turns out that 28GHz can support some form of mobility - and early comments from Samsung suggest they've already experimented with beam-steering successfully.

Meanwhile.... what about Europe? Well to be honest, I'm a bit despondent. The European operators seem to be using 5G as a political football, playing with the European Commission and aiming at the goal marked "less net-neutrality and more consolidation". In July, a ridiculously-political "manifesto" was announced by a group of major telcos (link), trying to promise some not-very-demanding 5G rollouts if the EU agrees to a massive watering-down of regulation. The European 5G community also seems to be seduced by academia and the promise of lots of complex network-slicery and equally-dubious edge-computing visions. It's much more interested in the (late, uncertain-revenue) IoT use-cases rather than fixed-access and mobile broadband. And it has earmarked 26GHz (not 28) as a possible band for the ITU's 2019 World Radiocommunication Conference (WRC-19) to consider.

In other words, it's missing the boat. By the time the EU, the European operators and European research institutions get their 5G act together, we'll have had a repeat of 4G, with the US, Korea and Japan leading the way. 

So overall, I see Verizon outmanoeuvring AT&T, once again. The Koreans and Japanese will benefit from VZ's extra scale and heft in moving vendors faster (notably Samsung, it seems, as Nokia and Ericsson seem more equivocal). The Europeans will be late to the party, once again. And the "boring" use-cases for 5G (fixed access and mobile broadband) will come out first, while the various IoT categories are still scratching their heads and waiting for the promised NFV slice-utopia to catch up.

Thursday, December 17, 2015

Communications apps, APIs & integrations: Import vs. Export models

There is a huge and growing interest in blending communications apps/services with other software capabilities. We are moving from a world of standalone voice, video and messaging to a range of contextualised, workstream-based and embedded alternatives.

But there are two very distinct philosophies emerging for app/comms integration:


  • Export: this involves extending communications capabilities out from a central system (phone system, UC, messaging app, videoconferencing etc) into other applications or websites via APIs, or by offering granular service-components (eg WebRTC gateway, transcoding, recording etc) via a PaaS approach. Numerous examples exist, from:
    • Vendors (eg Unify's Circuit APIs, Genband Kandy, Xura Forge, Cisco Tropo, ALU's Rapport APIs, BroadSoft, Vidyo etc)
    • Dedicated PaaS providers (eg Twilio, SightCall, Temasys) or niche specialists such as Voxbone (which does numbering for example)  
    • Telcos' API platforms, which may be network-integrated like AT&T's Developer Platform, standalone PaaS like Telefonica's TokBox, or even just web-embeddable objects like Telenor's appear.in
  • Import: this involves treating the communications application or service as the user's primary experience, and bringing in other applications as "integrations" or mini-apps. These can be other communications tools (eg WebRTC video windows in a messaging app) or other functions (eg social or process-based integrations). This particularly fits with the "timeline" or "workstream" model, or perhaps a "dashboard". Examples exist in a number of areas:
    • Enterprise is moving towards "workstream collaboration and communications" (WCC) apps, such as Slack, Cisco Spark, Unify Circuit and various others which can embed external services into a timeline. BroadSoft's Tempo concept looks more like a dashboard model than a timeline, but also brings in sources like Dropbox.
    • Consumers are moving towards "Messaging as a Platform" apps, notably in Asia with WeChat, which embeds mini-apps such as taxi-ordering into the message stream. Facebook is taking Messenger in the same direction, and even telcos want to replicate this - Deutsche Telekom is trying to reinvent RCS to take it in that direction, for example.
The API-led "export" model has been the primary trend in WebRTC, SMS and telcos' network/IMS strategies in recent years. We hear a lot about the "consumption" of APIs, "embedding" of communications or the "exposure" of a core system. It is definitely growing rapidly, in numerous guises. Click-to-call buttons embedded in websites or apps are a typical manifestation. (The video below shows AT&T capability being embedded into Plantronics' website)
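To make the export model concrete, here is a minimal, purely illustrative sketch of generating a click-to-call button for a web page. The endpoint `example-paas.invalid`, the parameter names and the markup are all invented - each real platform (Twilio, Tropo, Kandy etc) has its own API, so treat this as a shape, not an implementation:

```javascript
// Hypothetical sketch: build an embeddable click-to-call button that hands
// the call off to a (fictional) communications PaaS endpoint.
function buildClickToCallSnippet({ apiKey, destination, label = "Call us" }) {
  // Crude E.164-style sanity check on the destination number
  if (!/^\+?[0-9]{6,15}$/.test(destination)) {
    throw new Error("destination must be an E.164-style number");
  }
  // URLSearchParams handles the percent-encoding (eg "+" becomes "%2B")
  const params = new URLSearchParams({ key: apiKey, to: destination });
  return `<button onclick="window.open('https://example-paas.invalid/call?${params}')">${label}</button>`;
}
```

The point of the sketch is how little the host website needs to know: the communications capability stays in the provider's cloud, and the site only embeds a thin trigger.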




But the success of apps such as Slack and WeChat have led to a resurgence of the idea of "unifying" communications, or using a "hub" approach, where a messaging/voice/video app becomes the central anchor of a user's "online life", either as a dedicated application or browser home-page.



Some vendors are trying both approaches - Unify and Cisco seem to be looking at both import and export models. It may also be where Google intends to take Jibe, along with telcos and Android. Some UCaaS players seem to be taking a similar path (eg with ThinkingPhones' acquisition of Fuze), as are WCC specialists like Atlassian's HipChat.

Others are taking different angles - Microsoft seems to be using Office 365 as the anchor, importing its own Skype4Business UC application as well as maybe others in future, probably via ORTC. I suspect it will "export" more communications as well, in future. Apple (as usual) is different, still using iOS as its main platform for very selective import of a few comms/social tools such as Facebook and Twitter, and largely avoiding any export models at all. (There is no way to embed FaceTime or iMessage in a website, for example). Apple also tends to dislike apps acting as subsidiary platforms on mobile, especially if there are payments involved.

It is too early - and too polarised - to determine whether import or export will be most significant, and for which use-cases and customer segments. We may see different "balances of payments" for different vendors and service providers. However, there are a number of early conclusions to draw:

  • Import models need a good and usable / well-liked core product, before they can become a platform
  • Export models need the right "raw ingredients", eg simple video or SMS APIs, with the right (typically freemium) commercial model to attract developers
  • Import models tend to work best with a core that is text/timeline-based, ie non-realtime
  • There is a risk that some import models appear as "arrogant": I can imagine some users thinking "What, you expect me to spend the core of my day in your app?! You must be joking"
  • Export models face a lot of competition - external developers have many APIs to choose from, or can implement their own capabilities from scratch.
  • Import models involve competition between comms tools and other apps as the "anchor" - eg a UC tool, vs. social networks, or an Office/Google Apps suite as hub, or major enterprise products like SAP/Salesforce, or a vertical-specific platform like a medical practice-management app.
  • Import and export approaches often vary in implementation between Android, iOS, Windows and native chipset-level
  • Telcos have been trying export models for a long time, with limited success. Often they rely on 3rd-party platforms that act as aggregators or "export agents". Cable / IPTV companies are closer to the import model, as they own the set-top box interface to "on-board" other solutions
  • We might see NFV / VNF architectures helping with telco-grade import & export in future, but for communications services it's still a long way off
  • Mobile app usage tends to be fragmented. With the notable exception of WeChat, it's not clear that a full import model works well with the app paradigm on smartphones. That said, we may see greater cross-linkage between apps in future.
  • Certain groups of knowledge-workers may be better-suited to "import" comms apps, especially if they are either communications-primary users (eg call centre agents) or heavily-collaborating teams.
  • Design skills are paramount throughout, for integrations to be usable. 
  • We will see some "importers" acquiring companies to extend the core app functions. Slack/Screenhero is a good example. This may compete with some 3rd-parties' integrations, but may also make life easier for iOS appstore approvals.
  • Both import and export models make life much harder for network policy-management (or industry regulation) as mashups are by their nature hard to pigeon-hole. 
  • Every export implicitly also means an import from the other side - sometimes into "product", but in many ways horizontal apps such as SAP and Salesforce are turning into full import platforms in their own right, especially where they support multiple communications integrations.
I think that 2016 is going to see some epic battles between import and export philosophies for communications in general, and WebRTC in particular. The shift of communications to the cloud facilitates both directions. Worth watching very closely indeed.

Stop Press: just as I was about to publish, I read that Facebook is trialling Uber-in-Messenger, as part of its "Transportation on Messenger" initiative. This is a great example of an import model, and "messaging as a platform". Details are here.

Wednesday, January 07, 2015

WebRTC, telcos, phone numbers and identity - there won't be one ID to rule them all

Earlier this week, I spoke at AT&T's Developer Summit event in Las Vegas, presenting an overview of WebRTC market trends (which I'll upload to my Slideshare in the next few days), and also appearing on a panel with longstanding WebRTC luminaries like Cullen Jennings (Cisco), Dan Druta (AT&T), Eric Rescorla (Mozilla) & Daniel Enstrom (Ericsson).

This tied in with AT&T's launch of its new WebRTC API and platform offer, which is now in public beta and which was one of a variety of API areas it covered yesterday - other areas interesting from a service-creation standpoint included M2M/IoT, connected home and connected car.

One aspect of the AT&T WebRTC offer to developers that is different to that seen from other platforms is a choice of identity enabled via its gateway - either using:
  • The user's AT&T mobile phone number, if they have one
  • A "guest" virtual number, which can also support SMS and other functions
  • A web domain-linked identity for the WebRTC service, user@domain.com or similar
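The fallback between those three identity types can be sketched as follows - note this is purely illustrative, with field and type names I have invented; it is not AT&T's actual gateway API:

```javascript
// Hypothetical sketch of choosing among the three identity options above.
// All field names (attMobileNumber etc) are made up for illustration.
function chooseWebrtcIdentity(user) {
  if (user.attMobileNumber) {
    // Option 1: the user's real AT&T mobile phone number
    return { type: "e164", id: user.attMobileNumber };
  }
  if (user.wantsTelephonyFeatures) {
    // Option 2: a "guest" virtual number assigned by the platform,
    // which can also support SMS and other functions
    return { type: "guest-number", id: null };
  }
  // Option 3: a web domain-linked identity for the WebRTC service
  return { type: "domain", id: `${user.name}@${user.domain}` };
}
```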
This prompted a round of discussions about how far E.164 phone numbers are likely to go as Web/WebRTC identities. It is not a new concept for operators - especially in mobile - to look to reuse phone numbers and their subscriber databases as a platform for identity management. It's also worth recognising that a number of apps like WhatsApp already do use phone numbers as a way to plot social graphs and as (essentially) persistent virtual identities even when decoupled from a SIM card. (I wrote about this here).

As always, there is a broad spectrum of opinion, ranging from some within operators who assert that phone numbers could be a "universal identifier" spanning payments, communications and personal data, and even linked to citizen databases and government ID schemes. At the other end, there are people who regularly pronounce "the death of the phone number" and think we'll all just have a personal WebRTC URL or SIP URI or Google ID or similar.

The reality is likely to be more nuanced - as well as probably varying by demographic and geographic groups. I suspect that most people will end up with 3-6 "primary" online identities, and then a bunch of others linked to those or standalone for niche purposes.

I think phone numbers intersect with WebRTC for some use-cases, especially for voice rather than video, and in instances where:

a) Primary use of a given application is through a phone, with secondary access via WebRTC on another device
b) Where there are regulatory stipulations involved, eg mandatory records of identity, need for 911-type functions etc
c) National rather than international usage predominates
d) The application provider has an existing relationship with the user based on phone numbers

So for example, many people have their caller-IDs registered in their favourite food takeaway's CRM system, so that staff recognise the number and ask if you want your usual pizza when you call. There is some sense in having a WebRTC front-end emulate your number and ID, so they can link your new form of ordering with their existing database profile. Other B2C instances - banks, airlines, tax offices etc - may want the same.
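A minimal sketch of that pizza-shop matching logic, assuming the WebRTC gateway asserts the caller's phone number. The normalisation here is deliberately naive (strip punctuation, assume a UK default country code); real code would use a proper library such as Google's libphonenumber:

```javascript
// Illustrative only: match a WebRTC caller's asserted number against an
// existing CRM record keyed by E.164 number.
function normalizeNumber(raw, defaultCountry = "+44") {
  // Strip spaces, parentheses, dots and dashes
  const digits = raw.replace(/[\s().-]/g, "");
  if (digits.startsWith("+")) return digits;              // already E.164
  if (digits.startsWith("00")) return "+" + digits.slice(2); // international prefix
  if (digits.startsWith("0")) return defaultCountry + digits.slice(1); // national format
  return defaultCountry + digits;
}

function findCrmProfile(crm, assertedNumber) {
  // crm is a Map of E.164 number -> customer profile
  return crm.get(normalizeNumber(assertedNumber)) || null;
}
```

The design point is that the new WebRTC channel reuses the shop's existing number-keyed database, rather than forcing customers onto a new identity scheme.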

On the other hand, if you're a developer creating a global karaoke app with WebRTC, it's probably not really useful to drive it from a phone-number ID. Your users might prefer to login with Facebook, so their friends can laugh at their awful drunken choice of music displayed on their timeline the next morning.

A WebRTC video job-interview on LinkedIn would probably use its own identity space. A realtime voice debate around a contentious blog post might be best-suited to your Twitter handle. Web advertising-triggered customer service interactions might use a Google ID, and so on. An enterprise internal extension number - or email address - might be appropriate for UC or a comms-embedded vertical app. We will also likely see completely separate personally-administered identities, for those who are wary of relying on 3rd-party owned and controlled ID.

I think it's good that AT&T (and also 3GPP in some of its WebRTC standards work) seems to recognise that non-E.164 numbers have relevant roles to play. Also within the phone number space, there is potential for both your "real" phone number, and a secondary/temporary one. It's quite possible that some telcos will be able to monetise their number ranges here, as well as in other online areas such as commerce and privacy. I've seen a couple of presentations by Ericsson about GSMA's Mobile Connect approach recently, and I can see some interesting uses - although there are also some pitfalls such as full support of number portability with E.164-triggered ID, or how you deal with people with multiple (or shared) phone numbers.

We need to be realistic - there is not going to be "one ID to rule them all", at least for most of us. But having the flexibility to pick-and-choose on a use-case basis is beneficial to all. For me, a combination of phone, Facebook and Twitter handle probably cover 60-70% of my needs, but I still also want LinkedIn, Skype, Yahoo, email and various others as well. Everyone is going to be different here.

That said, there are also questions about whether it is right for companies and especially government bodies to insist on phone numbers as ID for communications, as they are not free for the user. I think there needs to be concerted action to make businesses give users a choice of online and "phone" identity options. I already make a point of entering my Skype ID or a WebRTC URL in web-forms if they don't force a numerical response, and I'm tempted to get an obscure international or premium-rate number to use if I'm forced to provide E.164 when I don't want to. But at the same time, there are instances where I'm happy to provide a +44 mobile or fixed number, especially if I trust a company not to send spam SMS. Offering that capability in WebRTC platforms is a positive option.

Tuesday, March 25, 2014

AT&T's shrill anti-neutrality stance is dangerous

AT&T is rapidly becoming the Internet's Public Enemy #1.

Its sponsored data API programme is sufficiently misguided that it is fairly harmless. It has virtually no chance of achieving what it sets out to do, as I explained in this blog post last month. It also sets itself on the "right side" of Net Neutrality quite carefully, by avoiding any reference to possible differential treatment of traffic. It is just aimed at differential pricing - more specifically, zero-rating certain websites and maybe apps, by having the "upstream" provider pick up the tab.



What I've found unclear is whether this is the top of a slippery slope, or more of a sacrificial lamb to be offered up & killed in exchange for other regulatory favours.

The last few days, however, have suggested that the slope is indeed slippery, the wedge thickening, and the iceberg's tip being exposed beneath the surface.

In response to a blog post about Net Neutrality by Netflix's CEO (which is also rather bombastic, to be fair), AT&T's public policy team have decided to come back with guns blazing. Having had a bit of Twitter banter with their team, I've gone through the details in more depth below. But in a nutshell, AT&T has responded with a disproportionate and largely illogical diatribe that doesn't bear scrutiny even from the perspective of "rational anti-neutrality". It has then compounded this with a frankly unbelievable filing suggesting that allowing paid discrimination/prioritisation is a way to further Internet competition, not restrict it - and lower subscribers' costs at the same time.

(I also had another round of Twitter banter with the head of ETNO, who seemed confused that I didn't have a conflict of interests he could use as the basis for ad-hominem attacks. He swiftly bowed out of discussion when it transpired he might actually have to argue properly and play the ball, not the man. Apparently I'm more influential than I thought, and I've been warned that "with your strange advices you will destroy the whole sector")

But back to AT&T. The main thrust of its argument is that non-Netflix users are effectively subsidising the Netflix users, by bearing the extra cost of peering and/or other elements of infrastructure. At least that's what I infer, after working through this bizarre tautology: "faster broadband networks like our Gigapower service... are requiring all service providers to drive more fiber into their networks". I read that as "our fast network means we have to deploy fiber", which is rather self-evident. Personally, I'd say that if you deploy fast networks, it shouldn't come as a surprise that bandwidth consumption rises as a result. And also, the effects have positive feedback - faster networks drive more/richer video use, which drives deployment of faster & higher-capacity networks. 

It was ever thus. Hence we've moved on from dial-up modems to fibre, while at the same time more people buy broadband, and keep paying for it. In common with many industries (computers, cars, travel) we are conditioned to pay the same or lower prices for continually-better products.

AT&T then goes on to say "we should accept that companies must build additional capacity to handle this traffic. If Netflix was delivering, for example, 10 Terabytes of data in 2012 and increased demand causes them to deliver 20 Terabytes of data in 2013, they will have to build, or hire someone to build, the capacity necessary". My initial reaction was "...and your point is?". Netflix does build extra capacity - more servers, more data centres, bigger connections at its end of the Internet, more CDN capacity, more transit if needed. Same with all Internet companies. At the same time, end-users on AT&T's network are buying faster connections, and are subject to usage caps.

It should not really matter to AT&T whether a user's paid-for usage, below its cap of 250GB (or whatever), comes mostly from one set of servers, or a hundred different ones. If my tax contains an element to deal with road maintenance, the government doesn't complain that I always drive to the same place, rather than a random bunch of irregular destinations.

AT&T should know that if it offers and sells more broadband capacity to its customers, then it's going to have to buy some more peering capacity and ports to support it. Surely, it's been selling broadband Internet access long enough now, to realise that dimensioning applies to both ends of its network.

Now to be fair, I think Netflix oversteps the mark as well. It basically describes paid-peering as a necessary evil (I paraphrase) which it would like to outlaw. Well, yes, I expect it would - but that's also part of the nature of the Internet, as described by the redoubtable @internetthought in this document by the OECD. What I think Netflix should have aimed for is not the elimination of paid peering, but something closer to regulatory or competitive requirements for it to be priced in a way that is fair, reasonable, transparent and non-discriminatory. What would be unreasonable would be for Netflix's paid peering to be significantly higher than anyone else's for the same capacity (Dropbox, Google, or other telcos like Verizon or Telefonica). 

Hastings makes some good points about asymmetry and free peering between telcos, as well as the risk of termination monopolies, especially in markets with limited retail broadband competition. (Yes, the US has ridiculously little competition, because its equivalent of local-loop unbundling & CLECs proved dysfunctional, and there is no obligation on cable companies to offer wholesale propositions).

AT&T's most egregious argument is that non-Netflix users end up paying to subsidise Netflix users. It talks about the postage Netflix paid to send movies in the past, when it shipped DVDs - the same sort of 19th-century "delivery" metaphor AT&T tries to apply with sponsored data. But the metaphor is flawed. The postal service doesn't have an "access" model where every household subscribes. It doesn't have the same structure as the web, with data being requested, adaptive applications, mashups, bi-directional interactive flows and so on. That's the way the Internet works - there are millions of sites, and we all pay to be able to access all of them. Inevitably, there's a lot of stuff that any one person doesn't access, but others do. Drawing an analogy with the mail is ridiculous. It's a logical fallacy, a strawman.


In fact, AT&T commits a good proportion of the logical fallacies outlined on this great website in its pronouncements. (And yeah, I know I used the "slippery slope" myself).

Let's scale this down a bit. My customers and I both bear costs for people accessing other analysts' websites and buying their research (boo, hiss). And much as I'd like Gartner or Forrester or Informa to stump up some extra cash to save my clients some money, I accept that's not a realistic - or fair - suggestion on my part. I benefit hugely from the open Internet - this blog, Twitter, Paypal, LinkedIn, Google and so on help me run my business - and it's in my interest to ensure that innovation continues.

By the same token, I'm sure AT&T would be unhappy if Verizon and T-Mobile started charging it a premium fee to "deliver" its own website content to their subscribers. Which, to be honest, is a much more likely outcome than them trying to get money from Internet companies with no cash.
(Hey, John Legere, why not try it for a laugh?)

The bottom line is that the position is irrational. "If there's a cost of delivering Mr. Hastings's movies at the quality level he desires – and there is – then it should be borne by Netflix and recovered in the price of its service". AT&T: get this through your collective heads - the Internet does not "deliver" stuff. Data traffic is not physical, so stop using physical terminology. Your electricity connection doesn't "deliver" electrons. There is a cost to Netflix of connecting to the Internet. There is a cost to your subscribers of connecting to the Internet, which includes both last-mile access and your implicit commitment to effectively connect to all the other bits of the Internet. Even bits you don't like. That's what you're being paid for. Now yes, there may be specific instances where it's in two Internet peers' mutual interest to pay reasonable fees to expedite something. But framing that discussion in terms of "free lunches" harks back to the hyperbolic SBC-era tripe of "you can't use our pipes for free".

And sure, Netflix is overstepping the mark too with its wishful thinking that all peering, everywhere, is going to be done on a handshake. But instead of simply arguing that Netflix should look at the structure of the Internet and accept that sometimes (small & reasonable) payments will occur, you've tried to expand the argument onto spurious grounds of fairness to non-Netflix subscribers.

And as for your risible argument about "allowing individualized dealings between ISPs and edge providers", the idea that it will "empower startups" is so patently flawed I'm worried that you might actually believe it yourselves, rather than just holding it as a lobbying position. Think about this for a moment. Are you expecting wholesale prices for such content providers to be higher than end-users' retail prices for capacity? Have you spoken to any startups willing to pay? Under what conditions? It seems unlikely to me. If I'm buying a few petabytes, I want them at much lower prices than end-customers buying gigabytes. Which suggests a less-than-zero-sum game, if it does indeed "reduce the cost of broadband service for consumers" as you claim. Unless you don't actually intend to pass on those infrastructure cost-savings, I can't see how you avoid losing money here. Plus, nobody will pay you for priority when the network is uncongested - unless you threaten to downgrade them, or engineer the network so it's always congested.
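To make that pricing arithmetic concrete, here's a quick sketch. All the prices and volumes are invented purely for illustration, but the shape of the result holds for any case where bulk wholesale capacity is priced below marginal retail capacity:

```python
# Hypothetical illustration: what happens to ISP revenue if traffic that
# would otherwise be billed to consumers at retail rates shifts to a
# "sponsored" wholesale deal with a content provider.
# All figures are made up for the sake of argument.

retail_price_per_gb = 10.0     # marginal price a consumer pays, $/GB
wholesale_price_per_gb = 1.0   # bulk price a petabyte-scale buyer would demand, $/GB

sponsored_gb = 1_000_000       # 1 PB of traffic moved from retail to sponsored

retail_revenue_lost = sponsored_gb * retail_price_per_gb
wholesale_revenue_gained = sponsored_gb * wholesale_price_per_gb
net_change = wholesale_revenue_gained - retail_revenue_lost

print(f"Net revenue change: ${net_change:,.0f}")
# prints: Net revenue change: $-9,000,000
```

As long as the wholesale rate is below the retail rate the buyer displaces, the ISP comes out behind - which is exactly the less-than-zero-sum problem.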

The bottom line of all of this is probably unpalatable. Neutrality is almost certainly the least-worst option for AT&T and other ISPs, unless you are allowed by regulators to charge unreasonable peering fees for monopolistic access to your customer base. Your attempts to undermine the existing competitive structures of the Internet with differential access-network performance are actively dangerous and insidious. By all means set up a parallel ecosystem and disrupt from adjacency, but experimenting with unproven business models on a "live" and critical platform for global innovation and productivity is unacceptable.

Not only that, but you are also ignoring risks to your own business that will result from playing "silly games". Your own website & OTT-style services will be first in line for mistreatment by your rivals. You are likely to provoke a mass switch to encryption, proxying and numerous new and exciting forms of arbitrage. You incentivise Netflix to offer a "free TB of backup" or other apps, to create symmetrical or opposite traffic flows, with you "delivering" data to it and creating further congestion/costs. You appear to be promoting a model that will replace (profitable) retail revenue with bulk wholesale deals. And above all, you are making your company appear as a threat to the Internet, to consumers' and businesses' interests, and possibly to society as a whole. As a major US & global telecoms firm, you're too important to be allowed to commit euthanasia through non-neutrality. If you're being serious here, the FCC needs to treat you as if you're a danger to both yourself and others, and regulate accordingly.

Or alternatively, just tone down the rhetoric and start making constructive comments rather than issuing irrational polemics. (And sure, accuse me of hypocrisy if you can find anything particularly irrational here. I like to think I specialise in rational vitriol).


Edit: Oh, and Netflix / Mr Hastings - I think you need to retune this "strong neutrality" message. Paid peering has been around for longer than you have, and if handled appropriately it is equitable and doesn't require the extra bureaucracy of oversight. Argue for "FRAND" peering, rather than wishfully thinking that it should all be free.



Tuesday, February 04, 2014

"Sender-pays" is a ridiculous 19th-Century idea misapplied to the Internet

I'm currently doing a fair amount of work looking at next-generation fixed and mobile data tariffs and business models.

One of the concepts I still see expounded is the idea of "sender-pays" models, where a third party (typically a content or application company) pays for a consumer's data usage for a given website or app.

Usually, this is given equivalence to either 1-800 phone numbers, or maybe the original model for the postal service, where the sender (generally) pays for the postage for a product, rather than the recipient. Occasionally, people talk about the Amazon Kindle or some other dedicated hardware such as smart-meters or in-vehicle telematics, where the network provision is vertically-integrated into the offer.

This has recently come to the fore with AT&T's suggested "sponsored data" model, which I believe will fail. It is also a recurrent theme among certain lobbying and industry groups, which keep trying to come up with ways for Internet access to be paid for "twice", usually with spurious references to "sustainability".

(I wrote a full report on the near-certain failure of prospective 1-800 models for mobile data some time ago - http://disruptivewireless.blogspot.co.uk/p/1-800-dataplans-report.html )

I thought it was worth reiterating a few of the reasons why "sender-pays" is a pointless, misleading or maybe even dangerous metaphor to use:
  • With very few exceptions, there are multiple "senders" in any given Internet interaction. Most obviously, there is upstream traffic as well as downstream. Who is responsible for the upstream?
  • The nature of the web is very bi-directional. Your web-browser sends (and receives) various data in the background, using technologies such as JavaScript.
  • Unlike 1-800 and postal models, users frequently use alternative connection mechanisms, especially 3rd-party WiFi. Few content companies will be happy to have different business models depending on how a user's phone/tablet connects to their servers, especially as they have no control over it. It will also be in their interest to push users to free WiFi rather than sponsor cellular data.
  • No content/app company will want to have 800 different "sender-pays" arrangements with every network operator - and possibly multiples of that, given that each operator has many different plans.
  • No content/app company is going to want to pay to sponsor data where users already have more than enough in their monthly/PAYG quota.
  • On a typical web-page, many elements are served from CDNs, advertisers and other sources. Except for standalone downloads, it will be very difficult to ringfence which bit is sponsored - and even harder to demonstrate this to the user in-app or in-webpage.
  • There will need to be very strong controls for the sponsor - eg managing fraud & abuse (say, a competitor racking up bills by downloading excess volumes), or dealing with network faults that drive users to hit refresh (or re-download), which are the service provider's responsibility.
  • It is entirely unclear how "sender-pays" models (eg notifications of sponsored use) map onto app-based models, especially in iOS. There are many, many "gotchas", eg push notifications.
  • They are a poor fit with adaptive-rate codecs & other solutions which vary the amount of data depending on network conditions, and therefore make predictions of volume/cost impossible.
  • The content/app company will often be at the mercy of the user's choice of device/OS, and how it requests / caches / transmits / compresses data. This is utterly different to a 1-800 number, where a phone is a phone is a phone, and a "minute" doesn't differ based on what type of device the user owns.
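The ringfencing problem in particular is easy to illustrate. Here's a toy sketch - every hostname and byte-count below is invented - showing how small the "sponsorable" slice of a single page view can be once CDNs, ads and trackers are counted:

```python
# Toy illustration of the ringfencing problem: one page view pulls
# resources from many origins, only one of which matches a naive
# "sponsored domain" billing rule. All names and sizes are invented.

page_resources = [
    ("www.sponsor-site.example",  120_000),  # the HTML the sponsor serves itself
    ("cdn.sponsor-site.example",  850_000),  # the sponsor's images/JS, via a CDN hostname
    ("fonts.thirdparty.example",   90_000),  # web fonts
    ("ads.adnetwork.example",     400_000),  # advertising
    ("analytics.tracker.example",  30_000),  # analytics beacons
]

sponsor_domains = {"www.sponsor-site.example"}  # what the operator's rule matches

total_bytes = sum(size for _, size in page_resources)
sponsored_bytes = sum(size for host, size in page_resources
                      if host in sponsor_domains)

print(f"Sponsored share: {sponsored_bytes / total_bytes:.0%}")
# prints: Sponsored share: 8%
```

Note that even the sponsor's own CDN traffic misses the billing rule because it arrives from a different hostname - the user still gets charged for most of "the sponsored page", and neither party can easily demonstrate otherwise.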
There are many other pitfalls, erroneous assumptions and outright fallacies here. There is also a huge moral hazard around transparency, neutrality and potential damage to the wider Internet model which is a multi-trillion dollar benefit to the global economy and democracy. While AT&T has been very careful to stay on the right side of Net Neutrality for now, it is entirely possible that other operators will be consumer-unfriendly or simply fail to think through the ramifications.

The bottom line is that the sender/recipient model is not the way the Internet works. It is fine as a piece of rhetoric to talk about sponsored / 1-800 / third-party pays models for data services (especially outside Public Internet Access), but it must be remembered that a lot of the metaphors employed are archaic and do not translate to the reality of HTML or application design.

In particular, the use of 3rd-party WiFi drives a coach & horses (deliberate 19th-Century reference!) through most models of this type, as it gives both users and content/app companies a strong incentive to arbitrage away even today's quota-driven broadband. I've also seen sponsored WiFi from app companies ("Free WiFi if you use Office 365"), which is likely to be better value - and better publicity - for Internet firms.