
Thursday, July 20, 2017

Mobile Multi-Connection & SD-WAN is coming

I’ve written before (link) about the impact of SD-WAN on fixed (enterprise) operators, where it is having significant effects on the market for MPLS VPNs, allowing businesses to bond together / arbitrage between normal Internet connection(s), small-capacity MPLS links and perhaps an LTE modem in the same box. Now, similar things are being seen in the mobile world. This is the "multi-network" threat I've discussed before (link).

Sometimes provided through a normal CSP, and sometimes managed independently, SD-WAN has had a profound impact on MPLS pricing in some corporate sectors. It has partly been driven by an increasing % of branch-site data traffic going into the HQ network and straight out again to the web or a cloud service. That “tromboning” is expensive, especially if it is using premium MPLS capacity.

The key enabler has been the software used to combine multiple connections – either to bond them together, send traffic over different connections based on type or speed, add security and cloud-management functions, or offer arbitrage capabilities of varying sorts. It has also disrupted network operators hoping to offer NFV and SDN services alongside access: if only a fraction of the traffic goes through that operator’s core, while the rest breaks out straight to the Internet, or via a different carrier, it’s difficult to add valuable functionality with network software.
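The steering logic at the heart of such software can be sketched in a few lines. This is purely my illustration, not any vendor's actual implementation – hypothetical link names, costs and rules – but it shows the shape of per-flow link selection: latency-sensitive traffic takes the premium link, bulk traffic takes the cheap one, and everything fails over to LTE if both go down.

```python
# Hypothetical SD-WAN steering policy - an illustration only, not any
# vendor's real logic. Per-flow link selection across three uplinks.

LINKS = {
    "mpls":      {"up": True, "cost_per_gb": 40.0, "latency_ms": 20},
    "broadband": {"up": True, "cost_per_gb": 1.0,  "latency_ms": 35},
    "lte":       {"up": True, "cost_per_gb": 8.0,  "latency_ms": 50},
}

def pick_link(flow_class, links=LINKS):
    """Choose an uplink for a flow: voice/video get the lowest-latency
    live link, everything else the cheapest live link."""
    live = {name: l for name, l in links.items() if l["up"]}
    if not live:
        raise RuntimeError("no uplinks available")
    if flow_class in ("voice", "video"):
        return min(live, key=lambda n: live[n]["latency_ms"])
    return min(live, key=lambda n: live[n]["cost_per_gb"])
```

With these toy numbers, voice goes over MPLS, web browsing over broadband – and if both wired links fail, everything shifts to LTE. Real products layer security, cloud management and per-application policy on top, but the arbitrage is this simple at its core.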

But thus far, the main impact has been on business fixed-data connections, especially MPLS which can be 30-40x the cost of a “vanilla” ISP broadband line, for comparable throughput speeds. Many network providers have now grudgingly launched SD-WAN services of their own – the “if you can’t beat them, then join them” strategy aiming to keep customer relevance, and push their own cloud-connect products. Typically they’ve partnered with SD-WAN providers like VeloCloud, while vendors such as Cisco have made acquisitions.

I’ve been wondering for a while if we’d see the principle extended to mobile devices or users – whether devices are likely to get multiple mobile connections, or a mix of mobile and fixed, creating similar problems for either business or consumer services. It fits well with my broader belief of “arbitrage everywhere” (link).

Up to a point, WiFi on smartphones and other devices already delivers part of this multi-connection vision, but most implementations have been either/or – cellular or WiFi, not both together. Either the user, the OS, or one of the various cellular hand-off standards has done the switching.

This is now starting to change. We are seeing early examples of mobile / WiFi / fixed combinations, where connections from multiple SPs and MNOs are being bonded, or where traffic is intelligently switched between multiple “live” connections. (This is separate from things like eSIM- or multi-IMSI enabled mobile devices or services like Google Fi, which can connect to different networks, but only one at a time).

The early stages of mobile bonding / SD-WAN are mostly appearing in enterprise or IoT scenarios. The onboard WiFi in a growing number of passenger trains is often based on units combining multiple LTE radios, and perhaps satellite as well. These can use multiple operators’ SIMs in order to maximise both coverage and throughput along the track. I’ve seen similar devices used for in-vehicle connections for law enforcement, and for some fixed-IoT implementations such as road-tolling or traffic-flow monitors.

At a trade show recently I saw the suitcase-sized unit below. It has 12 LTE radios and SIMs, plus a switch, so it can potentially combine 3 or 4 connections to each network operator. It’s used in locations like construction sites, to create a “virtual fibre” connection for the project office and workers, where normal fixed infrastructure is not available. Usually, the output is via WiFi or fixed Ethernet, but it can also potentially support site-wide LPWAN (or conceivably even a local private unlicensed/shared-band LTE network).

It apparently costs about $6,000, although the vendor prefers to offer it as a service, with the various backhaul SIMs / data plans, rather than on a BYO basis. Similar systems are apparently made by other firms – and I can certainly imagine less-rugged or fewer-radio versions having a much lower price point.

But what really caught my eye recently is a little-discussed announcement from Apple about the new iOS11: it supports Multipath TCP. (This link is a good description, & the full Apple slide-deck from WWDC is here). This should enable it to use multiple simultaneous connections – notably cellular and WiFi, although I guess that conceivably a future device could even support two cellular radios (perhaps in an iPad with enough space and battery capacity).

That on its own could yield some interesting results, especially as iOS already allows applications to distinguish between network connections (“only download video in high quality over WiFi”, etc.). It also turns out that Apple has been privately using Multipath TCP for 4 years, for Siri – with, it claims, a 5x drop in network connection failure rates.
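Apple exposes this through its own iOS APIs, but Linux has had an analogous `IPPROTO_MPTCP` socket option since kernel 5.6, which gives a feel for how an application opts in. The sketch below is my illustration (not Apple's code, and nothing iOS-specific): it tries to open a multipath socket and falls back to plain TCP where the kernel doesn't support it – exactly the hedged behaviour a real app would want.

```python
import socket

# IPPROTO_MPTCP was added to Python's socket module in 3.10;
# fall back to its numeric value on Linux (262) for older versions.
IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

def open_stream_socket():
    """Try Multipath TCP first, and fall back to ordinary TCP if the
    kernel doesn't support it. Returns (socket, "mptcp" or "tcp")."""
    try:
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM,
                             IPPROTO_MPTCP), "mptcp"
    except OSError:
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM), "tcp"
```

The point of the fallback is that multipath support stays invisible to the rest of the application: the socket behaves like any TCP socket, whether or not the second path ever gets used.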

The iOS11 APIs offer various options for developers to combine WiFi and cellular (see slide 37 onward here). But I’m also wondering what future generations of developer controls over such multipath connectivity might enable. It could allow novel approaches to security, performance optimisation on a per-application or per-flow basis, offload and on-load, and perhaps integration with other similar devices, or home WiFi multi-AP solutions that are becoming popular. Where multiple devices cooperate, many other possibilities start to emerge.

What we may well see in future is multi-device, multi-access, P2P meshes. Imagine a family at home, with each member having a subscription and data-plan with a different mobile network. Either via some sort of gateway, or perhaps using WiFi or Bluetooth directly between devices, they can effectively share each other's connections (and the fixed broadband), while simultaneously using their own “native” cellular data. Potentially, they can share phone numbers / identities this way as well. An advanced connection-management tool can optimise for throughput, latency or just simply coverage anywhere in the house or garden.
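A connection manager of that kind is essentially scoring candidate paths – a device's own radio, the home WiFi, or a neighbouring device acting as relay – against whichever objective matters at that moment. A toy sketch, with entirely hypothetical path names and numbers:

```python
# Toy path scorer for a household multi-device mesh. The fields and
# figures are hypothetical - this just shows the shape of the choice.

def best_path(paths, objective="throughput"):
    """Pick the best candidate path for one objective.
    paths: list of dicts with 'name', 'mbps', 'latency_ms', 'signal_db'."""
    keys = {
        "throughput": lambda p: p["mbps"],         # higher is better
        "latency":    lambda p: -p["latency_ms"],  # lower is better
        "coverage":   lambda p: p["signal_db"],    # stronger is better
    }
    return max(paths, key=keys[objective])["name"]

PATHS = [
    {"name": "own-lte",       "mbps": 25, "latency_ms": 45, "signal_db": -95},
    {"name": "home-wifi",     "mbps": 80, "latency_ms": 12, "signal_db": -60},
    {"name": "sibling-relay", "mbps": 15, "latency_ms": 70, "signal_db": -50},
]
```

With these numbers, throughput-hungry apps would pick the home WiFi, while someone at the end of the garden with weak signal would route via a sibling's phone instead. Real systems would also weigh battery drain, metering and plan allowances – which is exactly where the arbitrage (and the regulatory headaches) come in.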

This could have a number of profound implications. It would lead to much greater substitution between different networks and plans. It would indirectly improve network coverage, especially indoors. It could either increase or decrease demand for small cells (are they still needed, if phones can act as multi-network relays? Or perhaps operators try to keep people “on net” and give them away for free?). From a regulatory or law-enforcement standpoint it means serious challenges around identifying individual users. It could mean that non-neutral network policies could be “gamed”, as could pricing plans.

Now I’ll fully admit that I’m extrapolating quite a bit from a seemingly simple enhancement of iOS. (I’m also not sure how this would work with Android devices). But to me, this looks analogous to another Apple move last year – adding CallKit to iOS, which allowed other voice applications to become “first-class citizens” on iPhones, with multiple diallers and telephony experiences sharing call-logs and home-screen answerability.

Potentially, multipath in iOS allows other networks to become (effectively) first-class citizens alongside the “native” MNO connection controlled from the SIM.

I’m expecting other examples of mobile connection-bonding and arbitrage to emerge in the coming months and years. The lessons from SD-WAN in the fixed domain should be re-examined by carriers through a wireless lens: expect more arbitrage in future.

Thursday, July 13, 2017

Both sides are wrong in the Net Neutrality debate

I've been watching the ongoing shouting-match about Net Neutrality (in the US & Europe) with increasing exasperation. Recently there was a "day of action" by pro-neutrality activists, which raised the temperature yet further.

The problem? Pretty much everyone, on both sides (and on both sides of the Atlantic), is dead wrong a good % of the time. They're not necessarily wrong on the same things, but overall the signal-to-noise ratio on NN is very poor.

There are countless logical fallacies perpetrated by lobbyists and commentators of all stripes: strawman arguments, false dichotomies, tu-quoque, appeals to authority and all the rest. (This is a great list of fallacies, by the way. Everyone should read it). 

Everyone's analogies are useless too - networks aren't pipes, or dumb. Packets don't behave like fluids. Or cars on a road. There are no "senders". It's not like physical distribution or logistics. Even the word "neutrality" is dubious as a metaphor. The worst of all is "level playing field". Anyone using it is being duplicitous, ignorant, or probably both. (See this link).

I receive lots of exhortations from both sides - I get well-considered, but too-narrow network-science commentary & Twitter debates from friend & colleague Martin Geddes. I read detailed and learned regulatory snark and insider stories from John Strand. I see telco/vendor CEOs speaking (OK, grandstanding) at policy conferences. I get reports of egregious telco- and state-based blocking of certain Internet services from Access Now, EFF and elsewhere. I see VCs and investors lining up on both sides, depending on whether they have web interests, or network vendor/processing positions. I watch comments from the FCC, Ofcom, EU Commission, BEREC, TRAI and others - as well as politicians. And I read an absolute ton of skewed & partial soundbites from lobbyists on Twitter or assorted articles/papers.

And I see the same, tired - often fallacious or irrelevant - arguments trotted out again and again. Let me go through some of the common ones:
  • Some network purists insist routers & IP itself are (at core) non-neutral, because there are always vagaries & choices in how the internals, such as buffers, are configured. They try to use this to invalidate the whole NN concept, or claim that the Internet is broken/obsolete and needs to be replaced. Other Internet purists insist that the original "end-to-end" principle was to get as close as possible to "equal treatment" for packets, and either don't recognise the maths - or suggest that the qualitative description should be treated as a goal, even if the precise mechanisms involve some fudges. Everyone is wrong.
  • In the US, the current mechanism for NN was to incorporate it under the FCC's Title II rules. That was a clunky workaround, after an earlier NN ruling was challenged by Verizon in 2011. In many ways, the original version was a much cleaner way to do it, as it risked less regulatory creep. Everyone is wrong.
  • Many people talk about prioritisation of certain traffic (eg movies) and how that could either (a) allow innovative business models, or (b) disenfranchise startups unable to match web giants' payments. Yet the technology doesn't work properly (and won't), it's almost impossible to price/market/sell/manage in practice, and there is no demand. Conspicuously, there have been no lobbyists demanding the right to pay for priority. There is no market for it, and it won't work. It's irrelevant. Everyone is wrong.
  • Some people assert that NN will reduce "investment" in networks, as it will preclude innovation. Others assert that NN increases overall investment (on networks plus servers/apps/devices). When I tried to quantify the possible revenues from 25 suggested non-neutral business models (link), I concluded the incremental revenue would barely cover the extra costs of implementation, if that. There are many reasons for investments in networks (eg 4G then 5G deployment cycles), while we also see CapEx being replaced by OpEx or software licences for managed or virtual networks. Drawing meaningful correlations is hard enough, let alone causation from an individual issue out of dozens. Everyone is wrong.
  • Most of the debate seems to centre on content - notably video streaming. This ties in with operators wanting to bundle TV and related programming, or Netflix and YouTube being seen as dominating Internet traffic and therefore being pivot-points for neutrality. Yet in most markets, IPTV is not delivered via the public Internet anyway, and is considered OK to prioritise as it's a basic service. On the opposite side, upgrades to high-speed consumer broadband are partly driven by the desire for streaming video - revenues would fall if it were blocked, while efforts to charge extra fees to Netflix and co would likely backfire - they'd demand carriage fees in the opposite direction, as TV channels do. Meanwhile, most of the value in the Internet doesn't come from content, but from applications, communications, cloud services and data transmission. However, these are all much techier, so get mostly overlooked by lobbyists and politicians entranced by Hollywood, Netflix or the TV channels. Everyone is wrong.
  • Lots of irrelevant comments on all sides about CDNs or paid-peering being examples of prioritisation (or of craven content companies paying for special favours). Fascinating area, but irrelevant to discussion about access-network ISPs. Everyone is wrong.
  • Lots of discussion about zero-rating or "sponsored data" paid for by 3rd-parties and whether they are right/wrong/distortions. Lots of debate whether they have to be offered to all music / video streaming services, whether they should just be promotional or can be permanent. And so on. Neither relates to treatment of data transmission by the network - and differential treatment of pricing is, like CDNs, interesting but irrelevant to NN. And sponsored data models don't work technically or commercially, with a handful of minor exceptions. Ignore silly analogies to 1-800 phone numbers - they are totally flawed comparisons (see my 2014 rant here). Upshot: zero-rating isn't an NN issue, and sponsored data (with prioritisation or not) doesn't work (for at least 10 reasons). Everyone is wrong.
  • Almost everyone in the US and Europe regulatory scene now agrees that outright blocking of certain services (eg VoIP) or trying to force specific application/web providers to pay an "access" toll fee is both undesirable and unworkable. It would just drive use of VPNs (which ISPs would block at their peril), or amusingly could mean that Telco1.com could legally block the website of Telco2.com, which would make future marketing campaigns a lot of fun. In other words, it's not going to happen, except maybe for special cases such as children's use, or on planes. It's undesirable, regulatorily unacceptable, easy to spot and impossible anyway. Forget about it. Everyone is wrong.
  • Lots of discussion about paid-for premium QoS on broadband, and whether or not it should apply to IoT, 5G, NFV/SDN, network-slicing, general developer-facing APIs and therefore allow different classes of service to be created, and winners/losers to be based on economic firepower. Leaving aside enterprise-grade MPLS and VPN services (where this is both permissible and possible), there's a lot of nonsense talked here. For consumer fixed broadband, many of the quality issues relate to in-home wiring and WiFi interference, for which ISP-provided QoS is irrelevant. For mobile, the radio environment is inherently unpredictable (concrete walls, sudden crowds of people, interference etc). Better packet scheduling can tilt the odds a bit, but forget about hard SLAs or even predictability. Coverage is far more a limiting factor. Dealing with 800 ISPs around the world with different systems/pricing is impossible. The whole area is a non-starter: bigger web companies know how much of a minefield this is, and smaller ones don't care. Everyone is wrong.
In summary - nearly anyone weighing in on Net Neutrality, on either side, is talking nonsense a good % of the time. (And yes, probably me too - I'm sure people will pick holes in a couple of things here).

So what's the answer?
  • First, tone down the rhetoric on both sides. The whole thing is a cacophony of nonsense, mostly from lobbyists representing two opposing cheeks of the same arse. Acknowledge the hyperbole. Get some reputable fact-checkers involved, perhaps sponsored by government and/or crowdsourcing.
  • Second, recognise that many of the threatened non-neutral models are either impossible or obviously unprofitable. Arguing about them is sophistry and a waste of everyone's time. There are more important things at stake.
  • Third, design and create proper field-trials to try to prove/disprove assertions about innovation, cost structures etc. Select a state, a city or a class of users, or specially-licensed ISPs to run prototypes and actually get some proper data. Don't try to change anything on a national or international basis overnight, no matter how many theoretical "studies" have been done. Create a space for operators and developers to try out creating "specialised services", see if they work, and see what happens to everything else. Then develop policy based on evidence - and yes, you'll have to wait a few years. You should have done it sooner instead of arguing. I suspect it'll prove my point 2 above, anyway.
  • Fourth, consider "inevitabilities" (see this link for discussion). VPNs will get more common. NFV and edge-computing will get more common. Multiple connections will get more common. New networks (eg private cellular, LPWAN) will get more common. Multi-hop connections with WiFi and ZigBee & meshes will get more common. Devices & applications will fragment, cloudify, become "serverless", being componentised with micro-services, and be harder to decode and classify in the network. AI will get more common, to "game" the network policies, as well as help manage the infrastructure. All this changes the landscape for NN over the next couple of years, so we'll end up debating it all again. Think about these things (and others) now.
  • Fifth, try some rules on branding Internet / other access. Maybe allow specialised services, but force them to be sold separately from Internet access, and called something else (Ain'ternet? I Can't Believe it's Not Internet?)
  • Sixth, get ISP executives (and maybe web/content companies' execs too) to make a public promise about acting in consumers' interests on Internet matters, as I suggested a few years ago - an IPocratic Oath. (link)
  • Seventh, train and empower the judiciary to be able to understand, collect data and adjudicate quickly on Internet-related issues. It may be that competition law could be applied, or injunctions granted, even in the absence of hard NN laws. Let's get 24x7 overnight Internet courts able to take an initial view on permissibility of traffic management - rather than waiting two years plus appeals, during which time an app developer slowly dies.
  • Eighth, let's get more accountability on traffic-management and network configurations, so that neutrality/competition law can be applied at a later date anyway. We already have rules on data-retention for customer calls & access to networks. Let's have all internal network configuration & operational data in ISPs' networks securely captured, encrypted, held in escrow and available to prosecutors if needed, under warrant. A blockchain use-case, perhaps? We're going to need that data anyway, to guarantee that customer data hasn't been tampered with by the network.
  • Ninth, ask software (and content and IoT device and cloud) developers what they actually want from the networks. Most seem to be absent from the debate - the forgotten stakeholders. Understand how important "permissionless innovation" actually is. Query whether they care about network QoS, or understand how it links to overall QoS which covers everything from servers to displays to device chipsets to user-interfaces. Find out how they deal with network glitches, dodgy coverage - and whether "fallback" strategies mean that the primary network is getting more or less important. Do they want better networks, are they prepared to pay for them - or would they just rather have better visibility and predictability of when problems are likely to occur?
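The tamper-evident capture of network configurations suggested above doesn't actually need a full blockchain: a plain hash chain already makes retrospective edits detectable, since altering any record invalidates every later digest. A minimal sketch, assuming records are just strings:

```python
import hashlib

# Minimal tamper-evident log: each entry's digest covers the previous
# digest, so editing any record breaks verification of all that follow.

def append(chain, record):
    """Append (record, digest) to the chain, linking to the prior digest."""
    prev = chain[-1][1] if chain else "0" * 64
    digest = hashlib.sha256((prev + record).encode()).hexdigest()
    chain.append((record, digest))

def verify(chain):
    """Recompute every digest; return False if anything was altered."""
    prev = "0" * 64
    for record, digest in chain:
        if hashlib.sha256((prev + record).encode()).hexdigest() != digest:
            return False
        prev = digest
    return True
```

Escrow and encryption would sit on top of this, but the chain alone is enough for a prosecutor (or regulator) to prove the configuration history wasn't quietly rewritten after the fact.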
Apologies for the length of this piece. I'll happily pay someone 0.0000001c for it to load faster, as long as the transaction cost is less than 5% of that.

Get in touch with me at information AT disruptive-analysis dot com if you'd like to discuss it more, or have a sane discussion about Neutrality and what it really means for broadband, policy, 5G, network slicing, IoT and all the rest.

Tuesday, July 11, 2017

Sensors: implications for wireless connectivity & video communications

Quick summary
  • Sensor technology is complex, diverse, fascinating & fast-evolving.
  • There are dozens of sensor types & technologies.
  • Nobody believes the 20-50bn device forecasts, especially if they are based on assumptions that 1 sensor = 1 device.
  • Some sensors improve the capabilities of already-connected devices, like phones or (increasingly) cars.
  • Some sensors enable creation of new forms of connected device & application.
  • Most sensors connect first via one or two tiers of local gateways, sub-systems or controllers, rather than connecting directly to the Internet / cloud individually.
  • While the amount of sensor-generated data is growing hugely, not all of it needs real-time collection and analysis, so network needs are less extreme.
  • Many industrial sensors use niche or unfamiliar forms of connectivity.
  • Genuine real-time control often needs sensors linked to "closed-loop" systems, rather than Internet connections / cloud.
  • WiFi & short-range wireless technologies like Bluetooth & ZigBee are growing in importance. There is limited concern about using unlicensed spectrum.
  • LoRa radios (sometimes but not always with LoRaWAN protocols) are rapidly growing in importance.
  • Cellular connectivity is important for certain (especially standalone, remote/mobile & costly) sensor types, or sensor-rich complex objects like vehicles.
  • The US seems more keen on LTE Cat-1 / Cat-M than NB-IoT for sensor-based standalone devices; Europe and Asia seem more oriented towards NB-IoT.
  • There are no obvious & practical sensor use-cases that need 5G, but it will likely improve the performance / economics / reach of some 4G applications.
  • Camera / image sensors are becoming hugely important and diverse. They are increasingly linked to either AI systems (machine vision) or new forms of IoT-linked communication applications.
  • "Ordinary" video sensors/modules are being supplemented by 3D, depth-sensing, emotion-sensing, 360-degree, infra-red, microscopy and other next-gen capabilities.
  • AI and analytics will sometimes be performed on the sensor or controller/gateway itself, and sometimes in the cloud. This may reduce the need for realtime data transmission, but increase the need for batch transfer of larger files.
  • Conclusion: sensors are central to IoT and evolving fast, but the impact on network connectivity - especially new cellular 4G and 5G variants - is diffuse and non-linear.

A couple of weeks ago I went to Sensors Expo 2017 in San Jose. This topic is slightly outside my normal beat, but fits with my ongoing interest in "telcofuturism", especially around the intersection of IoT, networks and AI. It also dovetails well with recent writing I've done on edge computing (link & link), a webinar [this week] and paper on IoT+video for client Dialogic (link), and an upcoming report I'll be writing on LPWAN for my Future of the Network research stream at STL Partners (link).

First things first: listening to some of the conference speeches, and then walking around the show floor, made me realise just how little I actually knew about sensors, and how they fit into the rest of the IoT industry. I suspect a lot of people in telecoms - or more broadly in wireless networking and equipment - don't really understand the space that well either.

For a start, there's a bewildering array of sensor types and technologies - from tiny silicon accelerometers that can be built into a chip (based on MEMS - micro-electromechanical systems), right up to sensors woven into large-scale fabrics, that can be used to make tarpaulins or tents which know when someone tries to cut them. There's all manner of detectors for gases, proximity, light, pressure, force, airflow, air quality, humidity, torque, electrical current, vibration, magnetic fields, temperature, distance, and so forth.

Secondly, a lot of sensors have historically been part of "closed-loop" systems, without much in the way of "fully-connected" computing, permanent data collection, networking, cloud platforms or analysis. 

An easy example to think about is an old-fashioned thermostat for a heating system. It senses temperature - and switches a boiler or radiator on or off accordingly - without "compute" or networking resource. This has been reinvented by Nest and others. Plenty of other sensors just interact with "real-time" systems - for example older cars' airbags, or motion-detection alarms which switch on lights.
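That closed loop can be captured in a few lines, which is really the point: there is no network, cloud or general-purpose compute anywhere in it. A toy hysteresis thermostat, purely for illustration:

```python
# Toy closed-loop thermostat with hysteresis: a sensor reading comes in,
# an actuator state goes out - no networking or cloud platform involved.

def thermostat_step(temp_c, heating_on, setpoint=20.0, band=0.5):
    """Return the new boiler state for one control-loop iteration."""
    if temp_c < setpoint - band:
        return True           # too cold: switch heating on
    if temp_c > setpoint + band:
        return False          # too warm: switch heating off
    return heating_on         # inside the deadband: leave state alone
```

What Nest and others added was not the control loop itself but the connectivity and learning layered around it – which is exactly the shift from closed-loop sensing to IoT.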

In industry, a lot of sensors hook into the "real-time control" systems, whether that's for industrial production machinery, quality control, aircraft avionics or whatever. These often use fixed connectivity, with a bewildering array of network and interface types. It's not just TCP/IP or familiar wireless technologies. If you haven't come across things like Modbus or Profibus, or terms like RS485 physical connections, you perhaps don't realise the huge complexity and unfamiliarity of some of these systems. This is not telco territory.

This is important, as it brings in an entire new realm to think about. From a telco perspective, we're comfortable talking about the touch-points of networks and IT. We don't often talk about OT, or "operational technology". A lot of people seem to naively believe that we can hook up a sensor or a robot or a piece of industrial machinery straight to a 4G/5G/WiFi connection, then via Internet or VPN to a cloud application to control it, and that's all there is to it.

In fact, there may well be one, two or three layers of other technology involved first, notably PLC units (programmable logic controllers) as well as local gateways. A lot of this is the intranet-of-things, not the Internet-of-things - and may well not even be using IP as most people in networking and telecoms normally think about it.

In other words, there's a lot more optionality around the ISO layers - a broad range of sector-specific or proprietary protocols control sensors or IoT devices over a particular "physical layer". That contrasts with most users' (and telco-world observers') day-to-day expectations of "IP everywhere", using HTTP and TCP/IP and similar protocols over ethernet, WiFi, 4G or whatever. The sensor world is much more fragmented than that.
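To make "unfamiliar protocols" concrete: a Modbus RTU request to read a couple of sensor registers is a raw 8-byte frame with its own CRC - nothing like HTTP over TCP/IP. A sketch of building one, following the standard Modbus framing (shown for illustration; real deployments would use a proper Modbus library over an RS485 serial line):

```python
import struct

def crc16_modbus(data: bytes) -> int:
    """CRC-16/Modbus: reflected polynomial 0xA001, initial value 0xFFFF."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc

def read_holding_registers(unit: int, start: int, count: int) -> bytes:
    """Build a Modbus RTU 'read holding registers' (function 0x03) frame:
    unit address, function code, start register, register count, CRC."""
    body = struct.pack(">BBHH", unit, 0x03, start, count)
    return body + struct.pack("<H", crc16_modbus(body))  # CRC sent low byte first
```

A handy property of this CRC is that recomputing it over the whole frame, appended CRC included, yields zero - which is how a receiving device validates the request before touching its registers.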

These are some of the specific themes I noted at the event:
  • Despite the protocol zoo I've discussed, WiFi is everywhere nonetheless. Pretty much all the sensor types have WiFi connectivity options somewhere, unless they're ultra-low power. There's quite a bit of Bluetooth and ZigBee / other varieties of IEEE 802.15.4 for short-range access too.
  • Almost nobody seems bothered about the vagaries of unlicensed spectrum, apart from a few seriously mission-critical, time-critical applications, in which case they'll probably use fixed connections if they can. Bear in mind that a lot of sensors are actually fairly time-insensitive, so temporary interference or congestion doesn't matter much. Temperatures usually only change over seconds / minutes, not milliseconds, for example. Note, though, that this is for sensing (ie gathering data) not actuating (doing stuff, eg controlling machines or robots).
  • Most sensors send small bursts of data - either at set intervals, or when something changes. There are exceptions (notably camera / image sensors)
  • I saw a fair amount of talk about 5G (and also 4G and NB-IoT) but comparatively little action. Unlike Europe, the US seems more interested in LTE Cat-1 and Cat-M rather than NB-IoT. Cat-M can support VoLTE, which makes it interesting for applications like elder/child-trackers, wearable and building security. NB-IoT seems fairly well-suited to things like parking meters, environmental sensors, energy metering etc. where each unit is comparatively standalone, and needs to link to cloud/external resources like payments.
  • There's also lot of interest in LoRa, both as a public network service (Senet was prominently involved), and also as privately-owned infrastructure. I think we're going to see a lot of private LoRa embedded into medium-area sensor networks. Imagine 100 moisture sensors for a farm, connected back to a central gateway on top of the barn, and then on to a wide-area connection (fixed or mobile) and a cloud-based application. The 100 sensors don't need a wireless "service" - they'll be owned by the farmer, or else perhaps the connectivity will be offered as a part of a broader "managed irrigation service" by the software company.
  • There's an interest in wireless connectivity to reduce regulatory burdens for some sensors. For example, to connect a temperature sensor in an area of an oil refinery with explosion risks, to a server in another building, requires all manner of paperwork and certification. The trenching, ducting and physical wire between them needs approval, inspection and so on. It's much simpler to do it with wireless transmitters and receivers.
  • A lot of the extra sensors getting connected are going to be bundled with existing sensors. Rather than just a vibration sensor, the unit might also include temperature and pressure sensors in integrated form. That probably adds quite a lot to the IoT billions number-count, without needing separate network links.
  • A lot of sensors will get built into already-connected objects. Cars and aircraft will continue to add cameras, material stress sensors, chemical analysis probes for exhaust gases, air/fluid flow sensors, battery sensors of numerous types, more accelerometers and so on. This means more data being collected, and perhaps more ways to justify always-on connections because of new use-cases - but it also means a greater need for onboard processing and "bulk" transfers of data in batches.
  • Safety considerations often come ahead of security, and a long way ahead of performance. A factory robot needs sensors to avoid killing humans first. Production quality, data for machine learning and efficiency come further down the list. That means that connecting devices and sensors via wider-range networks might make theoretical or economic sense - but it'll need to be seen through a safety lens (and often sector-specific regulation) first. Taking things away from realtime connections and control systems, into a non-deterministic IP or wireless domain, will need careful review.
  • Discussion of sensor security issues is multi-layer, and encouragingly pervasive. Plenty of discussions around data integrity, network protection, even device authenticity and counterfeiting.
  • Imaging sensors (cameras and variants of them) are rapidly proliferating in terms of both capabilities and reach into new device categories. 3D depth-sensing cameras are expected on phones soon, for example for facial recognition. 360-degree video is rapidly growing, for example with drones. Vehicles will use cameras not just for awareness of their surroundings, but also to identify drivers or check for attentiveness and concentration. Rooms or public spaces will use cameras to count occupancy numbers or footfall data. New video endpoints will link into UC and collaboration systems. "Sensed video" will need greater network capacity in many instances. [I am doing a webinar with Dialogic about IoT+video on July 13th - sign up here: link]
  • Microphones are sensors too, and are also getting smarter and more capable. Expect future audio devices to be aware of directionality, correct for environmental issues such as wind noise, recognise audio events as triggers - and even do their own voice recognition in the sensor itself.
  • Textile and fabric sensors are really cool - anything from smart tarpaulins for trucks to stop theft, through to bandages which can measure moisture and temperature changes, to signal a need for medical attention. 
  • There's a lot of modularity being built into sensors - they can work with multiple different network types depending on the use-case, and evolve over time. A vibration sensor module might be configurable to ship with WiFi, BLE, LoRa, NB-IoT, ZigBee and various combinations. I spoke to Advantech and Murata and TE Connectivity, among others, who talked about this.
  • Not many people seemed to have thought about SIMs/eSIMs much, at a sensor level. The expectation is that they will be added by solution integrators, eg vehicle manufacturers or energy-meter suppliers, as needed.
  • AI will have a range of impacts both positive and negative from a connectivity standpoint. The need for collecting and pooling large volumes of data from sensors will increase the need for network transport... but conversely, smarter endpoints might process the data locally more effectively, with just occasional bulk uploads to help train a central system.
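Several of these points - small bursts, time-insensitivity, smarter endpoints - come together in the standard "report by exception" pattern: the sensor (or its gateway) only transmits when a reading moves by more than a deadband, or when a heartbeat timer expires. A minimal sketch with hypothetical numbers, showing why raw sample rates wildly overstate network needs:

```python
# Report-by-exception filtering: transmit a reading only if it moved by
# more than the deadband since the last transmission, or if the heartbeat
# interval has elapsed. Deadband and heartbeat values are illustrative.

def filter_readings(samples, deadband=0.5, heartbeat=10):
    """samples: list of (t_seconds, value) pairs, in time order.
    Returns the subset that would actually be transmitted."""
    sent = []
    last_value, last_time = None, None
    for t, v in samples:
        if (last_value is None
                or abs(v - last_value) > deadband
                or t - last_time >= heartbeat):
            sent.append((t, v))
            last_value, last_time = v, t
    return sent
```

Thirty seconds of once-a-second temperature samples drifting by hundredths of a degree collapse to just three transmissions - the initial reading plus two heartbeats - which is exactly why a slowly-changing sensor can live happily on an LPWAN link.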
Overall - this has really helped to solidify some of my thinking about IoT, connectivity, the implications for LPWAN and also future 4G/5G coverage and spectrum requirements. I'd recommend readers in the mainstream telecom sector to drop in to any similar events for a day or two - it's a good way to frame your understanding of the broader IoT space and recognise that "sensors" are diverse and have varying impacts on network needs.