

Friday, September 16, 2016

TelcoFuturism - the impact of Quantum Technology

The other day, I was invited to the Cambridge Wireless conference on quantum computing and communications (link). It's a fascinating, brain-melting domain with profound implications for many other areas of technology (and telecoms). Even though I have a physics degree, I can't claim to keep up with all the maths and concepts being discussed - but I took away a few real-world implications of what seems to be occurring.

Quantum technology is a pretty broad area that relates to the weird properties exhibited by individual atoms or photons (light). If you've heard of Schrodinger's Cat, then you'll know how strange some of the concepts can be - especially a "qubit" (quantum bit) that can be 1 and 0 simultaneously, or "entanglement", where pairs of particles remain spookily connected at a distance.

These properties can be used to create computers, communications systems, sensors, clocks and various other applications. In a way, quantum tech is a "foundational" idea similar to semiconductors (which are themselves based on quantum mechanical principles): there will be many, many applications. 

Terminology alert: people in this sector often compare quantum computers with "classical" alternatives - i.e. all of today's conventional digital computing.


Some quick highlights and comments:
  • It's early days. Although there are some existing quantum solutions, they are not "universal" computers, but tailored for particular use-cases. Cooler stuff is 5-10 years away depending on your level of optimism (and stealth)


  • There were a lot of telecom people in the room - although that's partly a function of Cambridge Wireless's community (link). 
  • Many of the opportunities (& threats) from quantum are "several layers up". For example, we should be able to make more accurate clocks, which means better timestamping, which means more accurate transactions or positioning, which means better ways to create networks... It's pretty hard to extrapolate through all the layers to work out what the "real world" impacts might be, as there are variables & uncertainties & practicalities at each stage. Same thing for quantum improving AI systems.
  • There will be a lot of hybrid quantum/classical systems - including being integrated on the same chip.
  • Some crypto & PKI systems are going to be compromised by quantum-enabled decryption. It makes mincemeat of some algorithms, but others are much more "quantum-proof". There might be a "Y2Q" problem digging out where the old and vulnerable ones might be, buried inside other systems and software. This might be a "big deal", but there was also debate among experts about whether some of the risks claimed might actually be scaremongering or limited in scope. I think there will be a big ramp-up in "quantum compliance consulting" though - if enough people can understand it.
  • Quantum tech also enables totally-secure* networks to be built, using quantum key distribution (QKD). There's a bunch of tests and prototypes working around the world. At the moment these are mostly fibre-based, although some are using free-space optics. (*I'm not a cryptanalyst. Or a quantum wizard. My understanding is that secure here means non-interceptible or perfect interception-detection, but as always with security there are other weak links in the chain when humans are involved).
  • We're not getting some sort of magical mass-market "quantum broadband" any time soon, fibre or (definitely) mobile. There might be quantum-related components in networks for timing or security, but the actual physics of shipping bits around through air and fibre isn't likely to change.
  • One caveat - if I understand correctly (and it's possible I don't) some quantum applications might make it more appropriate either to use dedicated individual fibres, or to use frequency multiplexing (separate colours essentially) rather than networks with other forms of multiplexing. One of my "to do's" is to get my head around what quantum-level transport really means for the way we build IP networks - and whether it's only ultra-secure point-to-point connections that are impacted, rather than general "routed" ones. At the moment it seems the main use is parallel QKD streams to secure the main "media" stream. I've found some stuff on early concepts of quantum routing (link) and quantum-aware SDN (link) but if anyone has a view on the commercial impact of this, I'm all ears. 

  • A lot of the current work on quantum computing seems oriented towards creating better ways to do machine learning - essentially the ability to absorb many, many different things "in parallel" rather than sequentially. Beyond AI/ML, many important tasks involve optimisation or pattern-recognition - quantum solutions should help. This has applications across the board, from finance to healthcare to telecoms, although there weren't many suggested use-cases in BSS/OSS or network design at the event. I suspect there could be a variety of interesting options & will think more about this over coming months. (Let me know if you'd like to discuss it)


  • There's lots of complexity in getting quantum engineering to work for computing - components often need to be cryogenically cooled, there's all manner of software design and error-correction and control issues, maybe some engineering of microwave systems to link bits together and so on. This is Big Science. It's not going to be in the iPhone 9. (Although some of the sensing and clock stuff seems to be "smaller")

  • There's some cool stuff being done around quantum-based accelerometers, gravity sensors etc. One of the biggest drivers is the desire to create a GPS-type positioning system that doesn't rely on signals from satellites - which can be jammed, blocked or even destroyed. Currently GPS is turning into a bit of a "single point of failure" for the entire planet - especially for cellular networks, devices and financial transactions which need timestamps.


  • Someone else has beaten me to the term QCaaS (link) so I'll have to settle for QDN "Quantum Defined Networking". You heard it here first....
  • There are various implied links with IoT (sensors) and blockchain (crypto). I'll keep an eye on those for future work.
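To see why QKD's interception-detection works in principle, here's a toy simulation of a BB84-style exchange with an intercept-and-resend eavesdropper. This is a deliberately simplified sketch for illustration - real systems involve photon sources, error-correction and privacy amplification, and the function name is my own invention:

```python
import random

def bb84_error_rate(n_photons, eavesdrop, seed=42):
    """Toy BB84 sketch: returns the error rate Alice and Bob see
    when they compare their sifted keys."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_photons)]
    alice_bases = [rng.randint(0, 1) for _ in range(n_photons)]  # 0=rectilinear, 1=diagonal
    bob_bases   = [rng.randint(0, 1) for _ in range(n_photons)]

    sifted_alice, sifted_bob = [], []
    for i in range(n_photons):
        bit, basis = alice_bits[i], alice_bases[i]
        if eavesdrop:
            # Eve measures in a random basis; a wrong guess randomises the bit,
            # and she re-sends the photon in *her* basis
            eve_basis = rng.randint(0, 1)
            if eve_basis != basis:
                bit = rng.randint(0, 1)
            basis = eve_basis
        # Bob's measurement: wrong basis relative to the photon => random result
        measured = bit if bob_bases[i] == basis else rng.randint(0, 1)
        # Sifting: keep only positions where Alice's and Bob's bases agreed
        if bob_bases[i] == alice_bases[i]:
            sifted_alice.append(alice_bits[i])
            sifted_bob.append(measured)

    errors = sum(a != b for a, b in zip(sifted_alice, sifted_bob))
    return errors / len(sifted_alice)
```

Without an eavesdropper the sifted keys match perfectly; with intercept-and-resend, roughly a quarter of the compared bits disagree - and that anomalous error rate is exactly how Alice and Bob detect that someone was listening.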
Overall, a fascinating topic - and one into which the UK government, academia and industry are pumping a ton of cash. It's perhaps not as sexy as some other futurist obsessions like AI, genetic engineering or blockchain - but it's potentially just as transformative, not least by helping accelerate the progress of all of the others.

For the telecoms industry, there's relatively little to be worried about yet - although getting older network and IT systems' crypto checked over seems important given the timelines to replace legacy equipment. Given the rising desire to exploit PKI and identity in telecoms and IoT as a long-term business, the 10-year horizon for "sci-fi" possibilities is a bit uncomfortable, especially if new breakthroughs are made. And that's before second-guessing how much extra progress has been made by intelligence communities, and how fast Messrs Snowden and Assange get to hear about it. 
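That "checking over" could begin as a simple crypto inventory. Here's a minimal sketch - the inventory format, verdicts and system names are invented for illustration, though the risk classes reflect the standard view that Shor's algorithm breaks today's public-key schemes outright, while Grover's only halves effective symmetric key lengths:

```python
# Hypothetical risk classes for a "Y2Q" audit (illustrative, not exhaustive)
QUANTUM_RISK = {
    # Public-key schemes broken outright by Shor's algorithm on a large machine
    "RSA-2048": "replace", "ECDSA-P256": "replace", "DH-2048": "replace",
    # Symmetric/hash primitives only weakened by Grover: larger sizes suffice
    "AES-128": "upgrade", "AES-256": "acceptable", "SHA-256": "acceptable",
}

def y2q_audit(inventory):
    """Map each system's algorithm to a remediation verdict.
    Anything unrecognised gets flagged for manual investigation."""
    return {system: QUANTUM_RISK.get(algo, "unknown - investigate")
            for system, algo in inventory.items()}
```

The hard part, of course, isn't the lookup - it's building the inventory in the first place, given how deeply old algorithms are buried inside embedded systems and third-party software. Hence my expectation of a ramp-up in "quantum compliance consulting".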

We might see quantum tech appearing first in clocks used in networks, or specific optimisation problems solved with early computers from the likes of D-Wave. In my mind there's a few options around NFV/SDN and network-planning that might be a fit, for instance. There's also some cool possible opportunity around super-secure communications and non-GPS navigation. But if you're a serious telco quantum doom-monger, the good news is that you don't need to worry about Netflix quantum-entangling videos direct to people's TVs and smartphones just yet.

If you're interested in learning more about Disruptive Analysis' work on "TelcoFuturism" please get in touch at information AT disruptive-analysis dot com. My introduction to the concept is here (link) and I've also written about AI/machine learning (link) and Blockchain (link). I gave my first keynote presentation on TelcoFuturism a few months ago (link) and will be progressively ramping this up - get in touch if you need a speaker.

Wednesday, June 29, 2016

A challenge for new software-based telecom services, NFV and eSIM: demarcation points

A couple of months ago, I had a problem with my home broadband connection, which intermittently cut out, or needed the router to be switched on/off again.

When I arranged a call-out from a BT engineer, I was told in no uncertain terms that if the problem was with my house-wiring, I'd be charged a fee. If it was a fault with the termination box or router, or external wiring, then BT would fix or replace it.

In other words, the little white BT-branded box and socket is the "demarcation point". It's where BT Openreach's responsibility stops, and the customer's starts. (BT Retail also provides me with an Infinity router, for Internet access, but that's technically a different service component, rather than network access).

Something similar is true with my mobile phone - the SIM card represents the demarcation-point for the connectivity part of Vodafone's service offering. It is essentially part of the network, rather than part of my phone. While I need operator settings and configurations to be pushed down to my device (eg APNs for Internet), that again is something separate.

Now consider what happens in a more software-driven, virtualised world.

For fixed connections, we may find some sort of white-box termination unit (residential or business), to which virtual functions like a vRouter, vFirewall or vSmartHomeServer might be pushed down. Or alternatively, those virtualised functions might be located in a data-centre or local exchange server - there's a lot of talk about cloud-based vSetTopBoxes, for example.

And in mobile communications, we are starting to see eSIMs emerge for some use-cases, in which the manufacturer embeds a physical SIM chip in devices, and operator profiles get pushed down to it, or potentially switched between.

In both those cases, and various other forms of virtual/physical scenarios, we are moving from a world of clear and unambiguous demarcation points, to a world in which they are much less well-defined.

Today, if people switch SIM cards over to use a different network, or if a SIM stops working and needs replacing, or if consumers churn from one home broadband provider to another, then the lines of responsibility are pretty clear and obvious. There's very little finger-pointing that can occur.

But what happens with an eSIM? Is the manufacturer responsible if it stops working? Or is that the fault of the operator whose profile is working on it, the service provider which packaged and downloaded that profile, the software vendor(s) involved, the retailer where the device was bought - or worse, a second operator if the failure occurred during a switch-over to a new profile? Who pays for the diagnosis, or for replacing the whole device if the eSIM can't be separated out? What happens if data is lost? Who is liable?

A similar set of questions apply with 3rd-party VNFs, or other software functions which drive underlying connectivity, or sit in the data path. We may be entering a world in which there is a "VNF AppStore" model - where the customer chooses between different software routers, or WiFi controllers, or firewalls. For businesses it may be possible to sell ethernet-style connectivity, and let the CIO take responsibility for those other connectivity-oriented functions, but that's clearly not an option for the consumer market.

This is different to higher-level virtual applications - if a game or a VoIP service stops working, but the network connection is still live, it's reasonable to assume that the connectivity function isn't to blame. 

Overall, there's a lot of opacity here - nobody really knows how to deal with network responsibilities, in a world where there are no clear demarcation points. It's a set of lessons that will need to be learned very soon - and which will probably involve regulators or other authorities in disputes.

Friday, May 13, 2016

Telecoms is too important to leave to the telcos

We are going to see rising presence of non-traditional providers, for both access networks and communications / applications services. Telecoms is far too important to confine to a mono-culture of just traditional "operators", fixed or mobile.

This week I've been in Nice for the TMForum conference & exhibition. As well as the classic OSS/BSS discussions, and more-modern focus on NFV, there was also a huge emphasis on other non-traditional areas for connectivity and potential services. In particular, there was a large presence for smart-city concepts and presentations, as well as health and advanced manufacturing. TMF also has numerous prototype projects called "catalysts" spanning everything from IoT to consumer virtual-CPE, typically headed by a telco and supported by vendors.

But there's a big problem here. Many of the new and most-promising areas for communications and networking don't really need - or often want - the involvement of classical telcos. While telco-steered prototypes are good, that doesn't necessarily translate to real-world deployment and monetisation. For example, telcos tend to focus on nation-wide deployments, scale and service initiatives, and so often aren't geared up to operate at (or customise for) a city-specific level.

In particular, the types of capability delivered by core networks and future NFV/SDN aren't really essential for most use-cases, while non-3GPP IoT-oriented LPWAN and WiFi networks sit alongside cellular and fibre for connectivity. There is a huge desire to use either generic Internet access for many new vertical applications, or perhaps private standalone connectivity from telcos (4G, 2G, ethernet, MPLS etc) but without additional "value-added" services on top.

It also seems increasingly likely that the move to NFV and SDN will allow new classes of virtual operators to emerge. And while there may be revenue from customised "slices" of 4G/5G for specific industries, these will essentially be next-gen wholesale rather than retail propositions, with implied lower margins.

In addition, a growing number of industries are looking at deploying their own physical access networks too. In the past, this has mostly just meant that railways used GSM-R, while government and public-safety agencies implemented TETRA or various niche technologies. But increasingly, non-telco actors are becoming more aware of, and more capable of developing, advanced infrastructures of their own: private fibre deployments, enterprise LTE (perhaps in unlicensed bands), SigFox and LoRa networks, drones and balloons, and so on.

(There is also a slowly-increasing discussion of decentralised mesh networks, perhaps using blockchain technology for authentication and security. That's a proper "telcofuturism" intersection between two otherwise orthogonal trends - to be considered in another post)

Some non-telco groups are even asking for dedicated spectrum bands, claiming that operators don't understand their needs well enough. I recently attended a European regulatory workshop on the impact of IoT, and representatives of the manufacturing, automotive, electricity and other sectors all made a case for running their own infrastructure.

A power company, for instance, pointed out that "Five 9s" isn't good enough - they need higher availability of communications to their transmission and transformer infrastructure. They cannot rely on cellular networks powered by (you guessed it) grid electricity for their own control systems. They also pointed out that unlike telcos, they maintain a fleet of helicopters to rush engineers out to fix problems. That's a very different approach to managing QoS from the one familiar to most in the telecoms industry.

One of the side-effects of the growing importance of wireless technology and M2M/IoT is that major companies in other industries have hired their own wireless experts. They have also realised that they have very little representation or influence in telco standards bodies like 3GPP. And at the same time, the barriers to "rolling your own" networks have been falling, with open-source components, myriad new radio technologies, virtualised software elements and so on. When it's possible to run a cellular base-station on a $30 Raspberry Pi computer, or deploy a country-wide IoT network for single-digit $millions, the telcos' hegemony over network ownership starts to crumble. (Obviously, many have run their own voice and PBX/UC infrastructures for decades, so they don't really need telcos for most communications applications either).

Add in various city/metropolitan initiatives, or community collective approaches in rural areas, and the picture deepens. Then layer on the Google and Facebook drone/balloon approaches, plus satellite vendors, and the ability to create parallel infrastructures multiplies further. This doesn't mean that these networks will replace telecom operators' infrastructures, but they will act as partial competitors and substitutes, cherry-picking specific use-cases, and pressuring margins.

I see quite a lot of arrogance and complacency in the telecom industry about this trend, especially in the mobile community. I hear lots of sneering about "proprietary" solutions, or the assumed inevitability of 5G as the "one network to rule them all". I've heard lots of comparisons to the ill-fated WiMAX. While this might have been mostly true for 4G (conveniently ignoring WiFi), that doesn't necessarily mean the future will avoid disruption. I see many factors pointing to heterogeneity in network ownership/operation:

  • Rise of IoT meaning that conventional financial & business models for cellular (eg subscriptions) are inappropriate, while use-cases are fragmented
  • Rising number of skilled wireless/network people being employed by non-telecom companies
  • Experience of WiFi prompting greater use of private connectivity
  • Growing pressure on regulators to release dedicated spectrum slices for specific new non-telco purposes (eg electricity grid control, or drone communications)
  • Long run-up for 5G standardisation and spectrum releases, meaning that new stakeholders have time to understand and prepare their positions
  • Cheaper infrastructure and technology components, for reasons discussed above
  • Willingness of device and silicon providers to consider integrating alternative connection modes (look at Qualcomm's MuLTEfire for example)
  • Increasing numbers of big, well-funded companies that may be looking at this area - it's easy to imagine that as well as Google, others such as GE, Philips, Boeing, Ford, Exxon could all decide to dip their toes into connectivity in future.
  • The inability of telcos to cross-subsidise data connectivity with voice/video/messaging/content services, especially in enterprise
  • Growing pressure on regulators to release either more licence-free spectrum, or methods of dynamic or shared access, that would open resources to new players
  • The ability of technologies such as SD-WAN to bridge/load-balance/arbitrage between multiple access technologies. This makes it much easier for new networks to disrupt from adjacency. We can expect similar moves to allow "multi-access" for IoT and consumer devices.
The other angle here comes from suppliers. Some historically telco-focused network vendors are also recognising the inevitable, albeit quietly:
  • GenBand's recent customer event spent as much time on enterprise opportunities and partnerships as on telcos. It highlighted its work with IBM and SAP - and while IBM referenced telcos as possible channels/partners, it was clear that the majority of focus was on CRM or other embedded-communications use-cases, sold directly. While this is mostly at the application layer rather than connectivity, it was notable as a proposed source of growth.
  • Ericsson is increasingly focusing on direct opportunities with banks, smart-cities, automotive providers and other sectors. While its core technology base remains 3GPP-centric, its increasing focus on cloud and IT domains tends to be less telecoms-specific. Its partnership with Cisco also extends its implied direct-channel link to enterprise opportunities. It is a major believer in the "slice" concept for 5G - although it hasn't articulated the shifting wholesale/retail picture yet.
  • Huawei is pitching "enterprise LTE" for various sectors such as smart-cities, oil industry, rail, power utilities and more (link)
  • The MuLTEfire Alliance is pitching itself at various categories of network operator beyond conventional cellular providers: venue-owners, neutral hosts, enterprise campus owners and so forth. Ericsson, Intel and Nokia are all members.
  • The growing profile of IT players in the network industry (aided by NFV/SDN) brings in a group of companies far less wedded to "operators" and with large industrial / government customers used to buying direct. IBM, HPE, Oracle, Intel, Cisco are all obvious candidates here.
  • BSS/OSS vendors are also looking beyond the traditional SP space. Redknee acquired Orga Systems, for example - which specialises in sectors like utility billing. 
I suspect we'll see an increase in emphasis by network-infrastructure vendors on non-telco customers. Some will do so quietly to avoid alienating their existing mainstream clients, but overall I see a desire to tap into new pools of revenue and innovation. Where possible, I'd expect vendors like Ericsson to try to keep telcos having some "skin in the game", but a fallback position will likely be to at least repurpose 3GPP technologies where feasible.

Another strategy which may emerge is for telcos to start acting as "spectrum managers" or "super-MVNE providers", both at an access and core/NFV level. An early sign of this is the AT&T/Nokia announcement of a dedicated slice of spectrum targeted at utilities and IoT in the US (link) which will allow the creation of "private cellular" networks, but still keep AT&T in the loop at one level. A similar model could work for smart cities and other use-cases.

Overall, a picture is starting to coalesce: telecoms is far too important just to leave to the telcos. Although they obviously have incumbency, inertia and assets like spectrum and cell-towers, the proliferation of IoT is likely to reduce their leverage from things like numbering/voice. They will also face increasingly-capable, large and well-funded stakeholders, which will exploit technology enhancements to build more-customised networks. The growing virtualisation of technology will mean the number of "layers" at which 3rd-parties can enter the market will grow.

This has important implications for existing operators, as well as regulators/governments and the broader vendor community. At the moment most seem to be treating the trend in a piecemeal fashion - but I think it needs to be considered more holistically, as it has a big implication for regulation, investment and innovation.

Tuesday, March 22, 2016

Is SD-WAN a Quasi-QoS overlay for enterprise, independent of telcos & NFV?


In the last two weeks I’ve been at two events: EnterpriseConnect in Orlando (EC16), and NetEvents in Rome. The former is a midsize trade-show, mostly UC/cloud-comms providers and vendors pitching to business users. The latter is smaller: vendors and a few SPs briefing and debating in front of technology journalists and analysts, mostly about enterprise networks, or the carrier networks needed to support them.

An interesting divide is emerging. Both events involved a huge focus on cloud – especially for communications apps and security functions. But it is mostly only the “traditional” carriers and their major vendors which are really discussing “proper” NFV and SDN as a platform for delivering new customer-facing services to businesses. For other enterprise vendors and service providers, NFV is not even on the radar screen as an acronym.

EC16 was dominated by major players in telephony and collaboration – vendors like Cisco, Avaya and Microsoft talking about cloud-based evolutions of their UC and conferencing tools; UCaaS providers like 8x8 and RingCentral with their own hosted platforms, or others based on BroadSoft. WebRTC, contact centres and cPaaS made a good showing as well. A few traditional telcos were there too, such as Verizon which has an LTE-based UC solution, and Sprint talking about its partnership with DialPad (formerly Switch.co). Slack wasn’t there, but other workstream-style messaging and collaboration tools were pretty ubiquitous, usually with a heavy mobile bias.

There was also a decent turnout of comm-centric vendors that make SBCs, UC/telephony servers and related infrastructure elements – Oracle, Dialogic, Metaswitch, Sonus, BroadSoft and peers. But while these were definitely talking about virtualisation, it was mostly not in the guise of NFV as perceived by the telecoms industry. There wasn’t much discussion of MANO and service-chaining, unless I specifically asked about them in meetings. Their use-cases for virtualisation were all much more pragmatic, aimed at non-telco UCaaS providers, or in-house deployments by enterprises in private-cloud or hybrid cloud/on-premise configurations.

The general assumption was that enterprises will continue to buy their collaboration apps/services separately to buying their network connectivity. Even where a UCaaS provider also sells access or SIP trunking, they’re not likely to be tightly coupled. There might be some "dimensioning" to ensure sufficiently-reliable performance, a separate MPLS connection entirely, or some tweaking of prioritisation to the UC provider's cloud. But there was no sense that a customer-facing UC server would be “just another VNF” hosted in the telco’s infrastructure alongside its vIMS and vEPC.

In a nutshell – corporate telephony and collaboration and contact centres are not really seen as “network functions”, any more than SAP or Office or other line-of-business apps are. (It’s worth noting that security functions like VPNs and firewalls are more aligned with NFV, as they are often integral with access connectivity). There's no real "telco cloud" either, except as an equivalent of an ISP-cloud or SaaS-cloud.

Maybe this will change in future as we see more telco "distributed cloud" and "fog computing" architectures emerge - for example, the Mobile Edge Computing initiative. But to be honest, I've got my doubts about that as well - a topic for another post, though.


However another form of software-based infrastructure for enterprise got airtime at both events: SD-WAN (software-defined WAN). I am starting to think that SD-WAN may actually reduce the potential for some proposed NFV business models, because it could put a new layer of abstraction between telco networks and corporate applications and communications.

In essence, SD-WAN allows the creation of "Quasi-QoS" by various methods. Perhaps the most important is blending together multiple access connections to a company's sites - MPLS, vanilla Internet (perhaps x2 or x3), LTE and so on - and then load-balancing, bonding or using them for backup or differential routing of traffic. There are also various approaches involving hacking TCP in some way, or proprietary approaches to packet classification and scheduling. Typically, SD-WAN will involve some sort of server or dedicated box at each customer site.
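As a sketch of what that blending logic amounts to - the policy, link metrics and costs below are invented for illustration, not any vendor's actual algorithm:

```python
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    latency_ms: float
    loss_pct: float
    cost_per_gb: float
    up: bool = True

def pick_link(links, traffic_class):
    """Steer a flow onto one of several access links, per traffic class."""
    candidates = [l for l in links if l.up]
    if not candidates:
        raise RuntimeError("all access links are down")
    if traffic_class == "realtime":
        # Voice/video: lowest effective delay, penalising packet loss heavily
        return min(candidates, key=lambda l: l.latency_ms + 100 * l.loss_pct)
    # Bulk/best-effort: cheapest link that is merely usable
    usable = [l for l in candidates if l.loss_pct < 5.0] or candidates
    return min(usable, key=lambda l: l.cost_per_gb)

# Illustrative figures only
links = [
    Link("MPLS",      latency_ms=20, loss_pct=0.0, cost_per_gb=5.00),
    Link("Broadband", latency_ms=35, loss_pct=0.5, cost_per_gb=0.10),
    Link("LTE",       latency_ms=60, loss_pct=1.0, cost_per_gb=2.00),
]
```

Here realtime traffic lands on the MPLS link and bulk traffic on cheap broadband - and if the broadband fails, bulk flows spill over to LTE. The "Quasi-QoS" point is that all of this sits above the access networks, without any cooperation from the operators providing them.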

The following illustration is from SD-WAN vendor VeloCloud, and is given as an example:



This could put mission-critical applications onto managed connections, with less-demanding traffic onto Internet access. Or it could be “Internet-primary” and use more expensive connections only when congestion seems to be causing problems. It can also link into major IaaS and cloud platforms (Amazon, Google, Microsoft etc) in different locations and with large-scale connections. Many other use-cases and permutations are feasible as well, especially when linked with UCaaS or other SaaS offers.
 
In other words, SD-WAN could be described as “Arbitrage-as-a-Service” or “Managed Least-Cost-Routing”. Where the SD-WAN is offered by a company which isn’t one of the access providers, it is essentially “OTT-QoS” – although I think that “Quasi-QoS” sounds better.

I see this as a conspicuous threat to various forms of NFV-based enterprise service, especially what gets called NFVaaS or NaaS. By putting an overlay around access connections, it reduces telcos' ability to offer extra capabilities from within their own infrastructure.

My colleague Martin Geddes has been scathing about this type of “QoS” (link) – noting that the underlying “network science” doesn’t allow for performance to be accurately predicted or guaranteed, without some very clever maths in the access network boxes. The “failure modes” can be ugly, as sudden sub-second spikes and buffering issues can occur and disappear randomly, trashing sensitive applications before the SD-WAN can respond.

My sense is that while that might be technically true, the real-world problems are more prosaic. Either a fully-dimensioned MPLS connection is too costly, or something fails completely because a fibre is cut, or a network node crashes and reboots. Or, as is the case here, the economics are so compelling that it's cheaper just to buy two redundant connections rather than optimise one.

The bottom line is that SD-WAN is potentially a game-changer - and it potentially undermines the NFV argument, not just for UC services, but perhaps other functions too. While some vendors are working with telcos to offer hybrid solutions, that's because of customer pull. This isn't to say that it invalidates everything proposed by NFV believers - far from it, in fact - but it does act as a counterbalance to the view that virtualisation is all telcos need to dominate enterprise connectivity and communications. 


SD-WAN entrenches the idea that enterprise communications and apps are decoupled from access. It also empowers Internet-based UCaaS providers to offer SLAs and QoS guarantees without owning access connectivity themselves - for example, Vonage works with VeloCloud, and Star2Star has its own connectivity boxes that optimise "vanilla Internet" access.