Speaking Engagements & Private Workshops - Get Dean Bubley to present or chair your event

Need an experienced, provocative & influential telecoms keynote speaker, moderator/chair or workshop facilitator?
To discuss Dean Bubley's appearance at a specific event, contact information AT disruptive-analysis DOT com

Wednesday, December 09, 2015

Contextual communications - using device sensors as context

There are two main definitions of contextual communications that I encounter:

  • Communications in a context (ie in-app or in-website)
  • Communications using contextual data (ie what you're doing, or inferred/analysed information about the communications session)
A lot of the discussion in telecoms and WebRTC circles is about the first one - extending voice/video/realtime data uses, by embedding them into web pages or native mobile apps. These could be anything from video-chat in a banking app, to realtime voice chat in a game, or extending a corporate videoconference system to guests. This has many clear and obvious vectors for innovation and value.

But the other dimension is perhaps slightly harder to grasp - using contextual data to improve a communications session in some way. Some examples are fairly straightforward - for example letting a contact centre agent know which page you were on when you hit "click to call", or doing a database lookup: ("Ah, Mr Bubley, I see you were looking at flights to Singapore. Thank you for being a frequent flyer - would you like to redeem your miles, as the system shows that you have enough?").

That type of use is still in its infancy, and has a huge potential not just for customer service / call-centres, but also social network/comms, and enterprise UC/WCC implementations - as well as countless other niche software and web applications in consumer, telecom and business realms. Using contextual data during a call or conference or other session can meaningfully improve the "outcome" - whether that's making a sale, making a complaint, or working out how to meet up with friends.

But another area is less obvious - using contextual data from sensors as part of a voice/video session, to improve the interaction in some way. I've come across two examples in the past week:

  • Talklessnow.com using the mic & your speaking patterns to tell you if you're dominating the conversation too much, or gabbling.
  • Talko.com using motion-sensors in a phone, to allow callers to make better decisions about how/when to interact with you
The first one is a demo, not a full product, based on the idea that the microphone in a PC or smartphone is actually a general-purpose audio sensor. It's a project that's been driven by Chris Koehncke, and developed by &yet under the capable guidance of Philipp “fippo” Hancke, although the original idea was mine, first discussed over a beer in SF with Chris in October. My original vision was for this to be implemented in a wearable - so, for example, a conference presenter could get a vibrating alert that he or she is talking too fast, with a reminder to slow down and add pauses. As Chris discusses on his post (here), when added to a browser it's also suitable for salespeople and others who really ought to be listening more and talking less.


There are many other use-cases for what's basically a simple idea - using the microphone as a sensor, answering questions like "how much of the time is sound coming in?" or "how fast is the person speaking compared to the pauses between words & sentences?". This is obviously different to the normal use of a mic, which is to actually capture and encode the content. This is more like audio metadata, which can then be applied to the logic of the communications application itself - whether that's a sales tool, or perhaps a conference-speech coaching app.

I'm really excited by this - as it illustrates perfectly how a voice-app idea can go from a casual discussion in a bar to reality (or at least, proof of concept), with very little pain. I'm not sure exactly how much time Fippo and his team spent on this - but it wasn't a huge project. It's also only using one of the WebRTC APIs (to access the mic) so it's not hugely sophisticated, but that doesn't matter. It's the results and opportunities and services that are the point here.

The other example of sensor use + context + WebRTC is from a company I've talked about before - Ray Ozzie's Talko. This is a mobile collaboration app for teams, that blends one-way voice messages, text messages, two-way calls etc. into recorded, conversation-specific timelines.

I just reinstalled it on a new phone, and was interested in the permission it requested to get access to my iPhone's motion API, "so that, for example, others may choose not to disturb you while you're driving". Firstly, it's great UI/UX design for an app to illustrate why it wants access to an API - it allows the user to make a more informed decision about privacy and security. But more importantly, it's a great way to improve the communication experience.

Basically, the current concept of "presence" in IM is broken. "Offline" is usually a lie meaning "Don't talk to me". "Online" is usually a lie meaning "I forgot to reset my status" and so on. But by using the phone's sensors and APIs it's possible to get more useful presence indicators: "Dean is driving"; "Dean is at the airport and running"; "Dean's phone is on charge and in a timezone where it's 3am".
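Sensor-derived presence of that sort boils down to mapping raw device signals onto human-readable statuses. The sketch below is a guess at the shape of such a mapping - the input fields (motion state, charging flag, local hour) and the status labels are invented for illustration, not Talko's actual implementation:

```javascript
// Hypothetical presence derivation: turn device signals into a status
// other users can act on. Input shape and labels are assumptions.
function derivePresence(signals) {
  if (signals.motion === "automotive") return "driving - don't disturb";
  if (signals.motion === "running") return "on the move";
  if (signals.charging && (signals.localHour >= 23 || signals.localHour < 6))
    return "probably asleep";           // phone on charge in the small hours
  return "available";
}

console.log(derivePresence({ motion: "automotive" }));
// "driving - don't disturb"
console.log(derivePresence({ motion: "stationary", charging: true, localHour: 3 }));
// "probably asleep"
```

The point is that none of these statuses requires the user to remember to set anything - which is exactly what fixes the "Online is a lie" problem.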

All of these are important contextual inputs for either the application, or the other people using it - to decide whether to interrupt you, initiate a "call" or send a message, and so forth. If the desired outcome is a successful collaboration, without unnecessarily disturbing people at the wrong times, this is a huge positive.

These are both comparatively simple examples of using available contextual data in new ways, to enhance a given instance of communications. This is where the future value lies for the telecom industry: not minutes, not filling in coverage gaps with WiFi, but actually helping voice/video become more useful and fulfil the actual user needs and purposes involved. But to do that, you need to have insight into specific problems that need to be solved for the user - whether that's dramatic pauses.......... in a conference speech, or avoiding interrupting someone while they're navigating Hyde Park Corner.

(There are a lot of other potential uses for this general idea, and various possible extensions, enhancements and integrations for Talklessnow and similar concepts. Get in touch with me if you'd like to discuss them, or if you want to arrange a speech or workshop about contextual communications more generally - information AT disruptive-analysis DOT com)

1 comment:

Neal McQuaid said...

Great information, both of those sites look excellent.

Fully agree with your thoughts on presence - and I'm still amazed that the old long-dead Jaiku (Twitter competitor since purchased by Google) supported something very similar (also taking into account your calendar) on Symbian devices some time around 2006. Will look forward to more of these genuine improvements other than the invention of just another messaging copycat.