
Monday, October 26, 2015

EU Telecoms: Why the European Parliament needs to enforce clear Net Neutrality laws


It’s Net Neutrality time again…

On Tuesday 27th October, the European Parliament is once again looking at proposed telecoms regulation for the EU – specifically around roaming and Net Neutrality, the so-called “telecoms package”.

This is a fairly long post, but covers a set of important issues for policymakers to consider, and for MEPs and their advisers to use as a basis for deciding how to treat the EU legislative package on the table. It also applies to politicians and regulators considering Net Neutrality in other geographies.

The post gives some background to the current legal/regulatory situation in Europe, before critiquing some recent commentary from both pro- and anti-neutrality advocates, notably Barbara van Schewick & Martin Geddes respectively.

Background

In recent years there has been a regulatory ping-pong match played between the European Parliament and European Council. The former voted for “harder” forms of Neutrality in early 2014; the latter has been more accepting of various exceptions more favourable to the traditional telecom providers. The third body involved, the European Commission (which advises the other two), has also become more permissive over the last 18 months, especially with its new Digital Economy commissioners appearing to backtrack on their predecessor’s promises of strictly open-Internet rules. The proposals on mobile roaming have also changed, but that is not a particular focus here.

A rough compromise between Parliament and Council came out of discussions in mid-2015. It is this new set of proposals which is being voted on, with the hope of gaining a final consensus and law. However, as always, there is the possibility of amendments being suggested – and there is a fair amount of lobbying going on to persuade MEPs to do just that. (It is worth noting that amendments in 2014 brought in some surprising additional “hard NN” wording in the previous ruling).

The “pro-neutrality” amendments are being loudly promoted by bodies like Access Now and the EFF, and by legal experts like Barbara van Schewick, who are encouraging the public to demand “full neutrality” from their MEPs. Opposing views – and perhaps suggested amendments – are being pushed by the telcos and bodies such as ETNO, but with less emphasis on public engagement. This differs from the US process, which became a major public talking-point and political football. My colleague Martin Geddes has also given some strong views, which I critique below.

I think that both polarised sides of the debate have it wrong. The proposed rules are actually very good for the most part - but more clarity & tweaking is needed on a few issues, and one particular exemption, about classes of service, needs to go. (As a general rule, if a policy annoys both sides equally, it's probably got it about right).

While most of this concerns network neutrality (how the network treats packets, essentially), other aspects are also appearing on the table. There's a lot of debate about whether pricing is part of "neutrality", especially zero-rating. Some argue it is, others not.

There is also a not-very-subtle attempt by the telcos to conflate the network with "neutrality" for applications and OS platforms. This has some good points about Apple and Google’s autocratic control over iOS and Android (eg vague criteria for accepting/rejecting apps), but mostly the telcos are being duplicitous: it is a cloaked attempt to reclaim power and relevance for telephony and SMS, by suggesting that proprietary Internet messaging and communications tools should be forced to interoperate. I address that briefly at the end – it’s a specious and dangerous argument that is utterly without merit. (Read this post about so-called platform neutrality).

The proposals also allow for “specialised services” with preferential network QoS, although not where they are direct substitutes for Internet services. This aligns with what I’ve written in the past: specialised services must be genuinely “special” to avoid competition risks. For example, I have no problem with remote monitoring of heart pacemakers having a “fast lane” – there are no likely competitive issues disenfranchising “pacemaker.com” as nobody is likely to run that service over the public Internet anyway.

My main personal concern is to ensure that the regulations protect open Internet access and foster innovation – but also don’t hamstring the technology industry and developers when it comes to non-Internet applications and services, especially in the timeframe of 5G. Forget the “I” in IoT – in practice it just means Network, or more probably Networks, plural. Only a portion of IoT data will transit the public Internet – there will be many use-cases that need very different networking and regulatory controls, and it is important that legislation is framed in a way that recognises that. Not all of these networks and applications will be “broadband” either – a huge number of devices will continue to be narrowband, using lightweight and battery-friendly connections.

The core of the debate can be summarised like this:

“How can governments best protect and stimulate the value of the Internet for citizens, businesses and society - whilst also encouraging development of novel non-Internet services?”


Policy vs. Regulation and Neutrality vs. Equality

My collaborator Martin Geddes has weighed in with an articulate post on the topic here – which makes some good points about the technology, but in my view misses badly on its conclusions and its consideration of politics and economics.

He asserts that lawyers don’t understand engineering, but then neatly illustrates that engineers don’t understand politics, as he refers to “regulators” when they’re not directly involved here – it’s a legal and policy debate at the European Parliament. Regulators are the ones who then have to work out how to implement the laws, within the frame of both policy AND technology – and hopefully with the limits of the maths of network performance in mind. But that comes later.

First, the law (and democracy) sets up what we want the rules to achieve. Economics, business and social issues are the top-level concerns, not the performance of individual applications, or cost-structures of networks. The “outcomes” Governments are interested in are metrics like GDP, employment, digital inclusion, innovation and so forth. Those are the macro-level reasons why broadband investment is encouraged.

So laws on neutrality and networks should optimise for these lofty goals. If compromises need to be made later, to fit with the awkwardness of networking maths, then so be it. But they should be “de minimis”, with an over-riding objective of maintaining the status quo as far as possible.

Martin argues that because there’s no real, 100% objective “neutrality” when you consider the way IP networks actually work, the whole concept is without merit. But it’s not so much “neutrality” in a mathematically-objective sense regarding packet transport that politicians are considering, but “equality” for Internet application and content providers, and especially the minimisation of “friction” for innovation.

Friction arises from the potentially cumbersome processes and payments involved in a so-called “quality marketplace”, where applications need to signal (and maybe pay for) desired levels of network performance. Not only is this extremely hard or impossible to do at scale (especially as apps evolve in shape and function, with many components and variations), but it would bring in huge obstacles to the software and content development process.

For example, how would an app-developer know the network quality/performance requirements for every module of open-source code, or 3rd-party API embedded in their software? How would they vary by device, network type, OS and location? What are the inter-dependencies? How would P2P or self-optimising software interact with a “quality marketplace”? How would people be able to know they got the “quality” they paid for, and be refunded if they didn’t get it? I also suspect it brings a new attack-surface for privacy invasion and hacking. This level of friction is utterly unworkable for a general case – it can only apply to complex, expensive, time-consuming individual deals. I believe it cannot be automated, and thus could not scale.
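To make the scaling problem concrete, here is a deliberately caricatured sketch (in TypeScript, and purely hypothetical – no such marketplace API exists) of the kind of manifest a developer might be forced to maintain:

```typescript
// Hypothetical "quality marketplace" declaration - invented for illustration,
// not a real API. The point is the combinatorial burden, not the syntax.

type NetworkType = "fixed" | "wifi" | "lte" | "3g";

interface QoSRequirement {
  minBandwidthKbps: number;
  maxLatencyMs: number;
  maxJitterMs: number;
  maxPricePerMb: number; // what the app could afford to pay, if anything
}

// Every module, open-source library and 3rd-party API in the app would need
// its own declaration, for every device class and network type...
interface QualityManifest {
  [moduleName: string]: {
    [deviceClass: string]: Partial<Record<NetworkType, QoSRequirement>>;
  };
}

const manifest: QualityManifest = {
  "video-chat-sdk": {
    smartphone: {
      lte: { minBandwidthKbps: 1500, maxLatencyMs: 150, maxJitterMs: 30, maxPricePerMb: 0 },
    },
  },
  // ...times ~20 modules, ~5 device classes, 4+ network types and N regions,
  // all going stale with every library upgrade or app update.
};
```

Even this toy version hints at why such declarations could not realistically be automated or kept current at Internet scale.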

The key point of evidence is that a (roughly) open and best-efforts Internet has been a huge boon for economy and society over the last 20 years. Contrary to Martin’s assertions, the right answer to his question “how does unpredictable and arbitrary performance help the development of the market?” is not “it doesn’t”. The correct response is “by providing 3 billion people with choice and opportunity to communicate, prosper and interact in ways that generate trillions of dollars, and huge social benefit”. He is right that some applications don’t work perfectly under this regime – but there is no evidence to suggest that these are remotely comparable in scope or scale to the ones that do. The overall value to humanity and nations of the status quo Internet is indisputable. This has arisen largely as a result of the historic friction-free, network-decoupled approach to Internet innovation.

It should also be noted that it has been the desire for accessing these open “equal” applications, over best-efforts connections, that has driven much of the investment for broadband in the first place.

For sure, other non-Internet communications have also been important (eg corporate networks and emergency infrastructure) and may be more so in future with a shift to IoT. But for now, the Internet model of “permissionless innovation” needs protection where there are strategic pinch-points, such as broadband access infrastructure. Any laws or technical policies which seek to change the current (non-)relationship between Internet application/content providers and networks, or diminish the viability of “best-efforts” access, must be rejected.

By all means frame laws to encourage operators to develop additional platforms which are “non-neutral” or have "quality contracts", and allow them to try to disrupt the Internet model from adjacency, if those prove effective. There are already various private networks for IoT and other uses that are not Internet-connected. But unless and until there is clear evidence that such approaches don’t “break” what’s currently working, they are unacceptable from a policymaking standpoint, and governments should instruct their regulators accordingly.

The main complexity here is that one of the possible "loser" groups is the same as the one owning the pinch-points - telcos selling phone calls and other "old" products, as well as Internet access. While they benefit from additional demand for connectivity, they are also seeing greater substitution of historic revenue streams. This gives a large incentive to misbehave - and there are enough proof points of VoIP-blocking and other egregious behaviour to demonstrate that the risk is real. The industry regularly tries to protect itself with spurious “level playing field” arguments, but that is mostly an attempt to excuse its lack of innovation in services over the last 20 years.

The telecoms industry should have focused on differentiating quality/capabilities between Internet and non-Internet services. Instead, it has tried to differentiate among or against Internet services, often with highly dubious motives. Behind the seemingly benign talk of “traffic management” or maybe “QoS monetisation” has often been a threat to extort money from application-layer competitors, or deliberately degrade the performance that could be reasonably expected.
 
The comparison of neutrality and equality is important here. Like every other form of equality in law (age, race, gender, sexuality etc), it’s often hard to come up with unambiguous definitions, and even harder to measure in practice. We typically know discrimination when we see it – and also know that sometimes it’s based on unconscious, even accidental, biases rather than external malice. It’s still important to have equality laws, even if we can’t measure or assure perfect equality. There is a close analogy with neutrality – in fact, if we’d called it “Network Equality” we’d probably have had less of the legal hoop-la in recent years.

It is right that the proposed EU rules allow non-Internet specialised services – and that they are kept distinct from Internet offers. This will make it difficult to create hybrid or mash-up services blending both worlds – but that seems a worthwhile price to pay. If we see genuine innovation around non-Internet prioritised services – and a maintained acceptable level of performance from Internet access at the same time – then perhaps the rules can be adapted later.

We need to be more careful with the term “broadband”. Plenty of studies document the role that broadband plays in economic growth, and this often drives government policy. But it’s much less common to estimate what share of “broadband” benefits come from Internet vs. non-Internet use-cases. Without that understanding, we risk mis-framing regulation as being about the enabling networks, rather than the most important service delivered over them.

The “Absolute Neutrality” argument is flawed too

While I am strongly sympathetic to the general concepts of Net Neutrality, I think that some of the more strident calls from the “fundamentalists” are naïve and harmful as well.

Most proponents seem to forget that we only have open Internet access because other integrated “walled garden” services failed to gain traction. The 3G and some 4G spectrum auctions were not conducted with an expectation that “plain vanilla” Internet access was going to be the predominant use-case and source of revenue. The original vision was for a world made up entirely of specialised services and managed connections – so it is hardly surprising or unreasonable that the telecom industry is going to keep trying to make them work.

There is also a view that “broadband = Internet” without an awareness of other non-Internet uses that already exist, and how they may expand in the future.

Indeed, it is worth noting that the nature of telecoms and the Internet itself has changed over time too – we are now actively talking about use-cases for 5G which extend well beyond traditional Internet-based services, while virtualisation of networks is also starting to raise the prospect of network “slices” which behave in different ways, with different traffic. (I wrote about the possible regulatory impacts of SDN and NFV recently, here). It is important that laws are not framed in a way that will make them conceptually obsolete in coming years.

Many of the neutrality “absolutists” also tend to go over-the-top (*cough*, sorry!) on fears that innovation might be stifled by large companies paying for prioritisation, putting start-ups at a disadvantage.

This is a straw-man. Everything I’ve seen or heard suggests that there is almost zero willingness-to-pay for priority from content/apps providers anyway – their business models can’t accommodate the fees, they doubt the technology would work well enough, and they don’t want clunky commercial relationships with hundreds of network operators. They also know that most mobile users are connected via 3rd-party WiFi much of the time and, coupled with other variables like radio coverage, the net benefits would be slim. In other words, the idea of paid prioritisation (in mobile) is a total dud. Things might be different in the fixed-broadband world, but even there most developers would prefer to pay for better adaptivity in their software, rather than notional “quality contracts”.

Van Schewick and many lobby groups are also implacably opposed to the use of “zero-rating” of certain data against users’ quotas. This is something I’ve written on before as well. Again, the “absolutist” lobby engages in strawman arguments about telcos and ISPs “picking winners” via zero-rating of data traffic. I’d say that they are much more influential in picking winners when it comes to bundling of content or apps – the real competition occurs when Spotify, rather than another service, gets the deal, not in having its use zero-rated afterwards.

It’s also worth noting that zero-rating (where nobody pays for data) is very different to sponsored data, where the app provider picks up the bill. The latter exists almost nowhere, and will not gain any major traction in future either. Like paid-quality, there is almost zero willingness to pay, and almost no technical way to get it to work properly, except on painstakingly-crafted individual deals.

My view is that operators should be made to report zero-rated traffic volumes, and that as long as these stay below a certain threshold (1%, 5% or 10% of total data, say) zero-rating could just be considered a promotional tool, unlikely to affect user behaviour and competition. This would reduce risks of harm around bulky uses like streaming video and cloud storage.
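As a minimal sketch of that reporting rule (the threshold and traffic figures below are illustrative assumptions, not proposals from any regulator):

```typescript
// Illustrative check of a zero-rating reporting rule. All numbers are
// assumptions for the sake of the example.

function zeroRatedShare(zeroRatedGb: number, totalGb: number): number {
  return totalGb > 0 ? zeroRatedGb / totalGb : 0;
}

const THRESHOLD = 0.05; // e.g. 5% of total traffic, one of the ranges above

const share = zeroRatedShare(120, 3400); // hypothetical monthly volumes in GB
const pct = (share * 100).toFixed(1);

console.log(
  share <= THRESHOLD
    ? `${pct}% zero-rated: promotional-scale, no further scrutiny`
    : `${pct}% zero-rated: above threshold, competitive scrutiny needed`
);
```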

Where van Schewick is right is about the Trojan Horse of allowing management by “class of service”. The possibility that networks deliberately throttle encrypted communications is a particular and pertinent risk.

The Myth of Platform Neutrality

One other area is worth noting, in case amendments are tabled about it this week.

MEPs should also be very wary of any amendments tabled about “platform neutrality”. There has been a recent upsurge in rhetoric from telcos trying to suggest that applications like WhatsApp and Facebook Messenger should be forced to interoperate with SMS. Not only is this ridiculous – the best communications apps are too uniquely-designed and specifically-featured for “interoperability” to have any meaning – but it is clearly an anti-consumer and protectionist move.

There are also many side-effects to so-called platform neutrality, which would backfire spectacularly for the telecoms industry. See this post.

It should also be considered that if Skype or Viber are deemed sufficiently similar to “primary” telephony that they should be forced to interconnect, then there is no obvious argument why they should not also benefit from number portability. Furthermore, there would then be no reason to force consumers to own phone numbers at all – it would be anti-competitive for businesses or public bodies to insist on having users’ phone details, rather than giving them a free choice of communications method.


Conclusion

The current proposals are generally good, but have some possible pitfalls and need clarification. They align with the most important policy objective – protecting the “generative” nature of Internet innovation, by ensuring that strategic pinch-points in the access network are not abused. The proposals mandate no blocking or deliberate degradation of Internet services – the most important aspect, in my opinion. However, the distinction between Internet and non-Internet uses of networks needs to be made clearer, as the latter may be able to benefit from more flexibility.

The laws should ideally protect innovation and status-quo business models in the Internet domain, while simultaneously encouraging innovation in non-Internet “specialised” applications and networks. From an economic and societal point of view, that would give “the best of both worlds”.

At a later date, if both domains evolve well, we can consider hybrids or more relaxed rules. But any such decision needs to be evidence-based – let’s see proof that both requirements can be satisfied first.

Martin’s post asks: “Regulators face a simple choice: either there is a rational market pricing for quality (that developers must participate in), or there is rationing of quality. Which one do you want?”. That is a false dichotomy. Law-makers and regulators can define two distinct worlds, with two different answers to that question. For Internet access it is the latter option, of “rationed quality”. Occasional failures and glitches are a minor inconvenience and annoyance, compared to the risks of killing the Golden Goose of Internet innovation. The importance of fully-predictable network quality is vastly exaggerated for the Internet – although it may well have its uses for the emergent non-Internet world. Such a marketplace in “quality pricing” may prove itself over time, but it needs to do so without riding on the web’s coat-tails.

The interesting question is how we can create “both worlds” – non-Internet and Internet – without losing some of the benefits of converged networks. One option may be disaggregation – actually creating separate physical/logical networks for Internet and non-Internet use. This may imply extra costs (and some loss of multiplexing benefits) but it may be worth it to resolve this dilemma. While in the past such costs may have been prohibitive, new technologies may help.

(I am wondering if using spatial or wavelength multiplexing to create separate connections, rather than statistical multiplexing on single connections, could be the solution. A discussion for another post.)

In conclusion, my opinion is that the Net Neutrality part of the EU telecoms package is mostly reasonable, but needs to be clarified in some areas, and have amendments that ensure the optimal social and economic outcomes arise.

  • Internet access is treated differently to non-Internet access. Both are important. The status-quo model of frictionless Internet app development needs to continue, but we should also encourage non-Internet innovation where there is clear separation. Hybrids can come later if everything works out.
  • It should be explicitly made illegal to impede or slow encrypted traffic, relative to unencrypted traffic. The proposal that networks can differentially treat “classes” of traffic, but not individual applications, is a Trojan Horse – unless we have much more clarity about what types/styles of “class” might be permissible or not.
  • Specialised services must actually be special, not replicas of Internet services.
  • The debate must not be hijacked by an over-focus on “content” rather than applications, communications & things.
  • Predictable developments like 5G and SDN/NFV should be considered, so they don’t make a mockery of the wording used in any new laws and subsequent regulations.
  • Differential charging (especially zero-rating) can be done for limited purposes and in limited volumes, but needs careful scrutiny to avoid being a competitive risk.
  • Transparency (describing what is being done to data on the network, and current network status) is often more important than the actual management itself.

MEPs should remember they are voting for “Internet Equality”, not “Broadband Network Neutrality”. For the Internet, Equality is more important than Quality.

Friday, October 16, 2015

Is there a regulatory elephant lurking in the SDN / NFV room?

I've just spent three days at the Layer123 SDN Congress, a well-attended and well-organised event in Dusseldorf. I'm increasingly spending time looking at the technology and business implications of virtualisation - it fits tightly with my Future of the Network* work-stream (see end for details). It also plays into my ongoing analysis of 5G, enterprise-focused services and WebRTC/voice and video communications.

There's a huge number of interesting angles to comment on about SDN & NFV - timelines, business models, practicalities of implementation, costs, vendors and ecosystems, and organisational dynamics, to name a few.

But one area leapt out at me during a session on "network slicing": what are the possible regulatory implications of all this? It's an area that hardly gets considered - the assumption appears to be that "it's just a technology evolution", so it's very much business-as-usual when it comes to the rules.

But I don't think it's as clear-cut as that. Various angles seem to me to intersect with legal and regulatory considerations, because a lot of telecom-related rules never anticipated networks becoming software-based. In some cases it may just be that the exact wording needs to be clarified or amended, but in others there might need to be some major changes to how the law is framed.

(This is similar to what we already see in the voice/messaging arena - telecoms laws pre-suppose telephony and SMS services, as they were framed long before apps, "embedded communications" and social networks became a reality. Regulators aren't quite sure how to deal with "non-telephony voice", if they even understand it.)

Some issues that I've considered might arise:
  • A lot of proposed SDN models have Net Neutrality implications. For example, I heard many discussions about "app-aware service chains" and ways to "slice" the network so it behaves differently for particular services or DPI-detected flows. This might not be a problem for clearly non-Internet services (eg IPTV and various M2M concepts) but would certainly pose issues elsewhere.
  • Linked to this is virtual CPE with customer controls. It seems reasonable to allow end-users to manage their own broadband ("Give me priority to do my home-working applications, over my kids' gaming traffic") - but will that depend on where that functionality is instantiated? (eg in a home gateway vs. a virtual gateway in the operator's network - see the sketch after this list)
  • Will virtualisation in the wireless network ever affect coverage? Are spectrum licences granted on the basis that all users/apps get access to the whole network?
  • Are there performance regulations in force, either based on speed or some other method of defining QoS? Do they apply to all users or just some of them? If the regulator tries to do testing of networks, does it get "administrator rights", or is it constrained to whatever commercially-available "slice" it obtains for measurement?
  • Will the overall measures of telco "investment" (ie CapEx) be changed by a move to NFV and SDN? This is an important metric used to determine competitive intensity, for example by regulators looking at the effects of consolidation. If a shift to software has some unintended impact on headline network capex, will this be interpreted by regulators or governments as a sign of failing competition instead?
  • How are regulators going to deal with truly "virtual" operators, including ones which are foreign-owned? If (say) AT&T installs a virtual IMS and vCPE in a French data-centre and delivers services to a client in Belgium, which country's rules apply? What licences and reporting are needed? What about lawful interception and emergency services?
  • How does network-slicing fit with existing rules on wholesale networks, network-sharing and MVNOs? What about universal service?
  • If there are any problems with security and data-integrity, can the network (and customer) backtrack to find which VNF was running in which data-centre at the time, and who was responsible for it?
  • Will various rules on interconnect, roaming or other services still be strictly relevant where it's a "software-software interface" via API, rather than a "network-network interface"?
  • Can external APIs (eg to content or apps providers) act as a source of unfair competitive leverage?
  • Various other issues relating to privacy, data-export / safe-harbour and so on
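On the vCPE point above, here is a hypothetical sketch of the kind of customer-set policy in question. No real vCPE product exposes exactly this interface; the names and structure are invented for illustration:

```typescript
// Hypothetical household traffic policy - invented for illustration. The
// regulatory question is whether applying it counts as neutral "user
// control" in a home gateway, but operator "traffic management" when the
// same function is instantiated as a VNF in the operator's network.

type TrafficClass = "home-working" | "gaming" | "streaming" | "default";

interface HouseholdPolicy {
  subscriberId: string;
  rules: { match: TrafficClass; priority: number }[]; // lower = higher priority
}

const policy: HouseholdPolicy = {
  subscriberId: "subscriber-42",
  rules: [
    { match: "home-working", priority: 1 }, // VPN, video-conferencing etc
    { match: "default", priority: 2 },
    { match: "gaming", priority: 3 }, // the kids' traffic waits
  ],
};

// The policy object is identical wherever it runs; only the location of
// enforcement (home gateway vs. operator data-centre) changes.
console.log(policy.rules.map((r) => `${r.priority}: ${r.match}`).join("\n"));
```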
I'm sure that some of these can be discounted easily, and plenty of other issues raised instead. Some will only apply in some countries, and some will just be related to the semantics of old regulation - although out-of-date wording has never stopped the legal or lobbying professions from acting in the past.

There are some signs of awareness - the FCC's Tom Wheeler referenced virtualisation earlier this year (here). Ofcom commissioned a study by Fujitsu on it last year (here), and the EU has recently launched a tender for a study (here). This is a positive sign and should add clarity - but ideally might have been launched a couple of years ago. There may also be bigger issues elsewhere in the world, especially around international ownership of virtual operators or VNFs, or circumvention of rules on interconnection, data localisation or content-control.


The risk for vendors and telcos is that this work may unearth areas of enough uncertainty to delay deployments until laws are clarified. There is an argument to carry on "as is", pretend the regulatory issues are trivial, and see if there is any push-back later, with "facts on the ground" and momentum pushing virtualisation ahead. But to me, that seems perhaps unwise.

I asked one speaker at the conference about this, and the suggestion was that it didn't pose any issues. However, looking around the room it appeared that it was the first time that attendees had ever heard the term "regulation" in the same sentence as SDN or NFV. To me, that suggests too few questions have been asked to be sure that we already have all the answers.

*I am collaborating with STL Partners on its Future of the Network research work-stream, for which I am acting as lead analyst. It covers SDN/NFV, 4G, 5G, FTTx, IoT networking, WiFi, operator benchmarking & business models, network investment case & ROI, regulatory issues & spectrum management, and the link with digital services. Please contact information AT disruptive-analysis DOT com for more details.

Wednesday, October 14, 2015

A few thoughts on the IIT RTC conference



Last week I attended the Illinois Institute of Technology conference on Realtime Communications. I enjoyed it immensely – not a huge event (maybe 150-200 or so people, spread across various tracks), but a really good mix of attendees and topics. As well as WebRTC and cloud communications, it covered more general aspects of IP voice, IoT networks and applications, a touch of 5G, and quite a lot on public safety / NG911. Quite a bit of technology, but also a decent focus on use-cases and business. And well-curated to avoid obvious corporate pitches, even by sponsors.

There were many of the “usual suspects” for WebRTC, VoIP & APIs there – among them Chad Hart, Tim Panton, Emil Ivov, Alan Quayle, Dan Burnett, Ivelin Ivanov, Andy Abramson, Vladimir Beloborodov, Robin Raymond and James Body. But there was also a good representation of service providers (eg Comcast), and major IT/enterprise comms vendors with WebRTC leanings, including IBM, Microsoft, Oracle, Intel, Avaya & Unify, plus assorted smaller developers (for a mini TadHack hackathon), academics and students from IIT, and a few industry veterans like Henning Schulzrinne (former FCC CTO) and Richard Shockey (SIP Forum). Google participated remotely (especially on the topic of ORTC), and GenBand was also there in force, with its Kandy bus outside for a day.

I did a general presentation on the WebRTC market status (there’s now more than 100m active users, various new use-cases, and 20+ service provider deployments – more details soon). I also moderated a panel on contextual communications (with Tim, Ivelin & Santhana from GenBand), and participated in one with Henning about the evolution of “identifiers” and especially the future role of the phone number. I picked up a lot of new insights into areas such as WebRTC+IoT (more soon), the state-of-the-art for mobile WebRTC implementation, and the technical/regulatory/financial challenges of future forms of public safety networks and emergency communications.

A couple of quick thoughts I’ll expand on in other posts or reports in coming weeks:
  • IBM made a great point about using IoT events to trigger a separate WebRTC communications session, eg a temperature sensor in a machine kick-starting a video/audio session when it hits a threshold
  • The fragmented nature of US emergency communications tells me that while 911/NG911 will remain as “lowest common denominator”, we will see various other higher-level emergency apps on smartphones for particular uses. Ideally, there would be an emergency API that developers could use inside any mobile app, as well as having support for the native dialler
  • Interesting presentation & ideas by Thomas Howe about the use of automated interaction with businesses via SMS (see justkisst.me)
  • Great presentation from Chris Rezendes about IoT - using the fascinating example of water metering/monitoring, and also asserting the resurgence of SMEs
  • As we go towards 5G (& also SDN), we’re seeing another attempt at defining “quality classes” in networks – but what is the right “level of abstraction” to encourage app-developers to be interested in network performance? And do developers actually want to ask for certain QoS levels (or even pay for them)? There seems to be a good argument that developers want something like a “network status API” that allows them to manage variable network conditions, rather than necessarily asking for specific quality or assurances – see the sketch below.
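As a thought-experiment, here is what such a “network status API” might look like. The interface is invented for illustration – it loosely echoes the spirit of the W3C Network Information API, but is not any shipping API:

```typescript
// Hypothetical "network status API" - the app observes conditions and
// adapts, rather than requesting (or paying for) specific QoS.

interface NetworkStatus {
  downlinkKbps: number;
  rttMs: number;
  metered: boolean;
}

type StatusListener = (status: NetworkStatus) => void;

// Stub event plumbing for the example; a real implementation would hook
// OS/browser/modem measurements.
const listeners: StatusListener[] = [];
function onNetworkStatusChange(listener: StatusListener): void {
  listeners.push(listener);
}
function emitStatus(status: NetworkStatus): void {
  listeners.forEach((l) => l(status));
}

// An app adapting to conditions, instead of demanding quality from the network:
onNetworkStatusChange((s) => {
  if (s.downlinkKbps < 500 || s.rttMs > 300) {
    console.log("Degraded link: dropping video, keeping audio");
  } else {
    console.log("Good link: enabling HD video");
  }
});

// Simulate a coverage dip:
emitStatus({ downlinkKbps: 300, rttMs: 450, metered: true });
```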
I also did an interview with NoJitter during the event, about contextual comms - there's a great write-up by Beth Schultz here.

I go to a lot of good events, but I have to say that overall this was one of my favourites when it comes to understanding “what’s next?” in communications, without an overdose of corporate spin. In some ways it reminded me of the old and much-missed eComm conferences. 

Many thanks to the organisers - I’ll definitely be back next year, especially as Chicago is such a great city to hang out in for an extra couple of days. (The picture below is one of mine, taken from the lake with #nofilter, as Instagram would say.)