
Sunday, March 01, 2015

The roadmap to contextual communications: sensors, apps & analytics

One of my major research themes for 2015 is "Contextual Communications". I believe that this will be a critical trend in telecoms, web and mobile applications, devices, IoT and enterprise productivity over the next 5 years and beyond.

While this very closely ties in with previous work on Future of Voice and WebRTC, it goes considerably beyond those domains, and also embraces sensors and aspects of Big Data. On a long-term view, its trajectory intersects with hypervoice/hypersense.

I'll be holding my first public Contextual Comms workshop on June 15th in London, along with Martin Geddes. Details here.


Contextual communications involves both placing voice/video in context (eg embedded into an app, website or device) and applications which use contextual information to help the user achieve a particular objective or purpose. 

Here, "contextual information" can be of three types:
  • Virtual context: What you or your device are doing electronically, eg which website, app or content you’re using. It could relate to which web-page you're on, the fields of a form you're filling in, the music you're listening to, or the point you're at in an enterprise workflow or a game. In essence, this is software-originated context.
  • Physical context: This is information from sensors - most notably the device microphone(s) and camera(s), but also location, movement, temperature, power/battery, heart-rate, biometric sensors and so on. With processing, this can yield information such as local acoustics (and hence whether you're in a street, room etc), the position of other people around you, your identity via fingerprint or voiceprint, or whether you're walking, driving or showing signs of stress.
  • Analytic & Big Data context: When linked to cloud platforms (or perhaps a local database), additional insight can be factored into the application: perhaps past behaviours and preferences, web cookies, records from a CRM system, or stored data from your past virtual and physical contexts. Inferred context is also important here - for example your mood or happiness. (See also this post on sentiment analysis). There may also be 3rd-party context provided via mashups and APIs.
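To make the three-way split concrete, here is a minimal sketch of the kind of record a contextual application might assemble at any given moment. All class and field names are illustrative assumptions, not any real platform's API:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class VirtualContext:
    """Software-originated context: what the user is doing electronically."""
    app: str                              # e.g. "crm", "browser", "game"
    page_or_screen: str                   # current web-page or app screen
    workflow_step: Optional[str] = None   # point reached in a workflow

@dataclass
class PhysicalContext:
    """Sensor-derived context: microphones, cameras, motion, location etc."""
    location: Optional[str] = None        # e.g. "airport", "office"
    moving: bool = False                  # from accelerometer / GPS speed
    ambient_noise_db: float = 0.0         # rough level from the microphone

@dataclass
class AnalyticContext:
    """Cloud- or database-derived context: history, preferences, inference."""
    past_call_times: List[str] = field(default_factory=list)
    inferred_mood: Optional[str] = None   # e.g. from sentiment analysis

@dataclass
class ContextSnapshot:
    """The three-way blend of context sources, captured at one moment."""
    virtual: VirtualContext
    physical: PhysicalContext
    analytic: AnalyticContext
```

An application would populate such a snapshot from whatever OS, sensor and cloud APIs it has access to, then feed it into its communications logic.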



It is this three-way blend of context sources - and the history/predicted future from analytics - that presages a new era for communications. 

We already talk about adding richer application information into communications services. The page you’re on, or the function in the app you’re using, could help inform a customer-service agent or a friend or colleague why you’re calling, and maybe let them guess what you hope to achieve, and how they could assist. 

But the broader 3-way meaning of “context” offers much greater possibilities. Exploiting sensors to blend in “real world” data, as well as analytics, extends the use-cases hugely. Modern handsets (and other devices such as tablets and wearables) tend to have multiple sensors - perhaps two microphones, two cameras, orientation sensors, location-awareness and more. Future device chipsets will incorporate even more "cognitive" smarts.

So for example, an application that knows you're in an airport - and running - might decide to send an incoming call to voicemail. Coordinating a device's speaker and microphone might allow it to guess it's inside a pocket or bag, and perhaps adjust the ringtone level. A phone might recognise it's lying flat on a table and switch to "speakerphone" mode, detecting multiple talkers around the room and adjusting volume levels if one is further away. 
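Decision rules like these reduce to simple mappings from physical context to call-handling behaviour. The sketch below mirrors the examples in the text; the function and its inputs are hypothetical, and a real implementation would draw them from device sensor APIs:

```python
def route_incoming_call(location, is_moving, lying_flat_on_table):
    """Illustrative rules only: pick call handling from physical context."""
    if location == "airport" and is_moving:
        # User is probably rushing for a gate: don't interrupt
        return "voicemail"
    if lying_flat_on_table:
        # Likely a meeting-room setting: answer hands-free
        return "speakerphone"
    return "ring_normally"
```

The interesting engineering is not in the rules themselves but in deriving reliable inputs ("is the user moving?", "is the phone flat on a table?") from raw sensor streams.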

Perhaps a "friends and family" communications app might dial down the noise suppression, to allow the sounds of waves crashing on a beach to give a genuine sense of "wish you were here". Whereas a smart, contextual business communications app might want to block out the background hubbub, for that panicked "where is your booth?!" call from the show floor at MWC.

Going a step further, a contact-centre's software might be able to detect customers' rising stress levels and combat them with special offers, or escalation to a supervisor. (Clearly, the dividing line between context and privacy-invasive creepiness will need to be carefully monitored). 

How does this relate to WebRTC? Most obviously, it is the technology that allows communications to be moved away from standalone functions (eg phone calls, or dedicated VoIP/video calling apps) and contextually integrated into websites and apps. At that point, it becomes much easier to blend communications events with the outputs of other OS or device APIs, whether relating to sensors, or just to the application "state" at that time. 

One long-term vision is what colleague Martin Geddes describes as “hypersense”, an extension of “hypervoice”. It’s well worth downloading the Communications 2025 white paper (here) and watching the video. It posits a future where the “cloud” and a personal “avatar” know what we want to do, blend a whole range of contextual drivers (apps, online activity, sensors, analytics, personal knowledge of your behaviour and preferences etc.) and help you live a more productive, healthier life, with communications blended in at its core. Think of it as Siri crossed with any number of Sci-Fi artificial intelligences, helping you both proactively and reactively. 

But that is a long way off. Contextual communications applications which blend physical, virtual and analytic contexts with machine-learning will take some time to come to full fruition. Developers and device OEMs will have to gain experience in multiple new areas, with diverse APIs and styles of interaction. There are huge leaps to make first - in technology, design, psychology and probably law.

So the question is – what are the steps along the way? How does context go from where we are today (eg really poor and limited “presence” indicators, or in-app messaging) towards some combination of physical and virtual context being used meaningfully by developers, in the short-to-medium term?


It is important to recognise that within each of those domains, there are separate sub-categories of context that will get integrated first. For example, we will see coordination of multiple microphones, or speaker and microphone, or motion-sensing. Developers will likely be offered "sensing" APIs that span a number of inputs (although this will depend on how OS and device creators integrate and expose the capabilities).

The same is true of combining virtual context data-sources: we will find WebRTC contact centres combining which page a user is on with the device it is being viewed from, to determine the best way for the agent to interact. The examples from the apps I mentioned the other day - such as language-exchange blended with online status and preferences - are further good examples.
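Combining two virtual context sources can be as simple as a lookup that maps (page, device) pairs to an engagement channel. This is a hypothetical sketch, not any vendor's routing logic:

```python
def choose_agent_channel(page, device):
    """Illustrative: pick how a contact-centre agent should engage,
    given which page the visitor is on and the device they're using."""
    if page.startswith("/checkout"):
        # High-intent page: offer the richest channel the device supports
        return "video" if device == "desktop" else "voice"
    if device == "wearable":
        # Tiny screen, no keyboard: keep the interaction short
        return "text"
    return "chat"
```

In practice the page would come from the embedding website itself (one advantage of WebRTC living inside the app), and the device from user-agent or capability detection.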

Certain sorts of analytics context will be combined early on too – but mostly “small data” (eg cookies, customer records) rather than true “big data” such as realtime analysis of past behavioural patterns, or combining multiple cloud-data sources. Predictive context - where the software guesses what's going to happen in the future (eg where you'll be, when it's going to be a better time for a call) may be a while in arriving, and will likely need persistent network connections to cloud services, rather than purely local on-device analysis.



Overall, Disruptive Analysis thinks that the bigger picture of Contextual Communications is one of the key trends for vendors, developers and telecom operators over the next decade. WebRTC is a critical component and enabler, but it is also important to keep an eye on its convergence with the physical world of sensors, wearables/IoT and the cloud-analytics domain. 

Ultimately, the winners will be those applications - and device-based enablers - which help communications adapt to the users' real context and purpose, helping them achieve whatever it is they're doing more effectively - whether it's closing a sale, winning a game, or simply connecting with a distant loved-one.

The theme of Contextual Communications will be re-visited regularly. Please sign up to get this blog by email, consider buying the WebRTC research report, and get in touch if you're interested in custom internal workshops and projects. information AT disruptive-analysis DOT com.
