Friday, July 29, 2011

What changes when "opened" vendor-specific technologies are better than "official" standards?

I've just been reading up on the history of PDF (Portable Document Format) on Wikipedia. A couple of lines to consider:


"PDF was originally a proprietary format controlled by Adobe, and was officially released as an open standard on July 1, 2008, and published by the International Organization for Standardization as ISO 32000-1:2008.......  granting a royalty-free rights for all patents owned by Adobe that are necessary to make, use, sell and distribute PDF compliant implementations"

"PDF's adoption in the early days of the format's history was slow. Adobe Acrobat, Adobe's suite for reading and creating PDF files, was not freely available..[....].... required longer download times over the slower modems common at the time; and rendering PDF files was slow on the less powerful machines of the day. Additionally, there were competing formats such as  [.....] Adobe soon started distributing its Acrobat Reader program at no cost, and continued supporting the original PDF, which eventually became the de facto standard for printable documents on the web"

Imagine, back in 1999, that you were a service provider, or the standardisation group for a number of SPs. And you'd just invented the concept of a "document conversion and viewing service". You'd created the .xyz document format, worked out the billing system and knew how much you wanted to charge to interconnect with the leading word processors and other applications of the day. You were going to sell monthly subscriptions to end users, allowing them to read web documents.

Sounds silly now, doesn't it? PDF instead took document viewing/creation down the route of being an application (free reader and paid authoring tool), then a feature of some web browsers, and finally to today's situation where PDF-ing something is a mere function on a menu, or a right-click-save-as. Early attempts to do PDF-creation-as-a-service disappeared.

I often use PDF as an example of the difference between delivering value as a service or as merely a feature/function of something else. This is hugely relevant in voice, and features in the Future of Voice Masterclass discussions around voice-enabled applications.

But this has also got me thinking about the general case of large technology companies releasing an existing successful or de facto standard technology as a fully open one, especially where it is better than an "official" standard developed through the usual committee-and-politics process.

What is the impact of this? Why would that company open up that standard in the first place - how do they monetise it? What's the other strategic value? My thoughts are that it:
  • Needs to be based on something so widespread already (eg PDF), or something so superior, that it can gain firm and enduring traction, even though it has a proprietary heritage.
  • Weakens any related technology that is rigidly dependent on the official standard, and which can't flex to accommodate the superior now-open one. This might be deliberate or an accidental side-effect.
  • Allows the original company to retain a strong share of the necessary software, even though it's free, and to add extra features or capabilities that help it monetise via different products. For example, you don't need Adobe Reader to view PDFs, but most people have it anyway - and it also allows various still-proprietary technologies to be displayed.
  • Gets more developers involved in using that standard
  • Helps to commoditise part of the value chain, shifting value (implicitly) elsewhere
There's probably some more, but I've only just started thinking about this.

Now, why does this matter in mobile?

Three things come to mind:

  • Skype's release of the SILK codec for VoIP
  • Google's release of WebRTC for browser-based communications, which also includes the iSAC codec it obtained with its GIPS acquisition
  • Apple's release of HLS (HTTP Live Streaming)
There's also Google's release of the WebM video format, and Real's Helix technology a few years ago, plus others from Microsoft and probably a variety of others. Others, such as Jabber/XMPP [for IM interoperability], have started life as open source and then been adopted by large companies like Google and Cisco. Many of these are around audio and video, where it's necessary to have a good population of viewers/clients in the field to avoid chicken-and-egg problems with content developers.

What I've been trying to work out is the impact of all these new standards (or drafts) on "official" alternatives that are baked-in to some wireless network infrastructure offerings and standards.

So for example, quite a number of people seem to believe that SILK is better than the AMR-WB codec, which forms a core part of VoLTE for delivering telephony on LTE. Given that VoLTE is less flexible than various other OTT-style voice platforms, in terms of creating "non-telephony" voice applications, this might have a serious long-term strategic impact on the overall voice marketplace. Coupled with smart use of the ex-GIPS Google acoustic toolkit, this could mean that OTT-style VoIP on LTE might actually have better performance than the official, QoS-integrated, IMS-enabled version, at least in certain circumstances.

Apple HLS is another teaser. Along with a couple of other web-based streaming protocols, this is an "adaptive rate" video format that can vary the quality/bandwidth used based on realtime prevailing network throughput rates. In other words, it watches network congestion and cleverly self-adjusts the bitrate to minimise delays and stalls from buffering. As a result, it kills quite a lot of the touted benefits of so-called "transparent video optimisation" in the operator's network, not least because HLS is (indirectly) under the control and visibility of the video publisher.
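
To make that mechanism concrete, here's a minimal sketch of an HLS "master playlist". The publisher encodes the same content at several bitrates and lists them with a BANDWIDTH attribute; the player measures its own download throughput and switches between the variant playlists accordingly. The file names and bitrates below are purely illustrative, not taken from any real deployment:

    #EXTM3U
    #EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=200000
    low/index.m3u8
    #EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=800000
    mid/index.m3u8
    #EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1500000
    high/index.m3u8

The point is that the adaptation logic lives in the client and the encoding ladder lives with the publisher - the network in between doesn't need to do anything clever.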

WebRTC and in-browser communications are probably the most direct analogy to PDF. Potentially, it turns voice (and that's voice generally, not just "telephony" as an application) into a function, rather than a service. Now clearly there may need to be other services at the back end for certain use cases (eg interconnect with the PSTN), but it has the potential to completely disrupt parts of the communications infrastructure and operator business model - because it doesn't need dedicated infrastructure. It does the whole thing "in the cloud" - not as a dedicated technology like Skype, but simply as an integral part of the web.
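
As a rough illustration of "voice as a function", here's a minimal browser-side sketch, assuming the getUserMedia / RTCPeerConnection APIs from the WebRTC drafts (shown here in their promise-based form). The signalling step is deliberately not standardised, so sendToPeerSomehow below is a hypothetical helper standing in for whatever channel the site chooses:

    // A minimal sketch, inside an async function on an ordinary web page.
    async function startCall() {
      // Grab the microphone - no plug-in, no installed softphone client.
      const stream = await navigator.mediaDevices.getUserMedia({ audio: true });

      // Create a peer connection and hand it the audio track.
      const pc = new RTCPeerConnection();
      stream.getTracks().forEach(track => pc.addTrack(track, stream));

      // Generate an SDP offer describing the codecs this browser supports.
      const offer = await pc.createOffer();
      await pc.setLocalDescription(offer);

      // Get offer.sdp to the other party over any channel the site likes
      // (WebSocket, XHR, etc.) - signalling is left to the web developer.
      sendToPeerSomehow(offer.sdp); // hypothetical helper, not a real API
    }

What's interesting is what's missing: there is no separately-provisioned voice service here at all, just a web page asking the browser for a microphone and a peer connection - exactly the PDF-style shift from service to function.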

The open question is why Apple, Google and Skype are doing this. Apple is probably the easiest - HLS seems to be part of its anti-Adobe crusade, plus it helps perpetuate iTunes and could potentially be used to sell to non-Apple devices. Google and Skype might be trying to run a "codec war" with each other with iSAC vs. SILK (why? I'm not sure yet), and might just take out AMR-WB (and by extension, VoLTE) as collateral damage.

This is an area I want to dig into more deeply - and please paste comments and theories here to support / attack / extend this argument, as it's still only part-formed in my mind.

5 comments:

  1. It's partly going to depend on people's threshold for deeming 'better'. For CS voice at least, AMR-WB was specifically designed to be well beyond wireline voice quality. I can't comment on its VoIP performance, but if it is that good at VoIP as well, how easily will Joe Public be able to decide if SILK is actually any better?

  2. In normal "steady state" calls they probably won't be able to tell the difference.

    It's how well it copes with challenging conditions and how it fails (glitch, drop, fade etc) that will probably determine QoE.

    Also how they both work in tandem with associated acoustic improvements. I'm not sure if the codec choice is independent of echo cancellation, noise suppression etc.

    I'm also uncertain about the relative impact on battery life to be fair.

    One other thing is around time-to-market. If a Skype-powered VoIP call is "good enough" but is available before VoLTE, and/or is *functionally better*, then that's enough.

    In my mind, the voice market is about to undergo a transition "beyond telephony" and it's not obvious that the traditional vendors/operators have quite woken up to that.

    Dean

  3. Why would that company open up that standard in the first place - how do they monetise it?

    Is this not just the standard way in the 2.0 world?

    Gain eyeballs and then seek to monetise later - and it may be indirect monetisation, e.g. open up Google Maps APIs, everyone uses it, so then businesses start to use it, then there starts to be opportunity for flow of money (advertising etc.)

    So by integrating voice deeply into a social network, e.g. into Google+, if lots of third parties start to build "interoperability" by using WebRTC, then you grow the ecosystem, more people come etc. etc.

    Is it not just an extension of the open APIs approach that all 2.0 companies have these days?

    Key thing is that interoperability in an IP world is just so easy in comparison to the old world. Publish the specs, have enough scale ... and then wait for others to build the apps that interoperate.

  4. NJC

    Yes, up to a point.

    But there's a difference between releasing APIs to a platform you own, which acts as a sort of "marketplace" for users and developers - and releasing a specification or standard that can be used on a standalone basis.

    I'm not sure the same mechanics of leverage apply.

    Dean

  5. Opening up a standard may be necessary to gain (business) acceptance by assuring security and a range of suppliers. Xerox had to spin Adobe out to ensure that Xerox's competitors and businesses would use it and not be held to ransom by a closed vertical owner (cf. Apple).
