There has already been quite a lot of online discussion of this week's ruling by the European Court of Justice (ECJ) on whether a Belgian ISP can be forced to filter out P2P traffic on its network. (A quick news article is here, and more detailed analysis here or here.)
From my point of view, the most interesting thing about this judgement is that the court has taken a very dim view of the possibility of "false positives" from the DPI or whatever other system might be used to monitor the traffic. (It also has implications for the privacy aspects of data monitoring by telcos - it was privacy, rather than more general "neutrality" concerns, that led to the Dutch Net Neutrality law.)
The term "false positive" comes (I think) from the healthcare and pharmaceutical industry, where a "false positive" is a wrong diagnosis, such as telling someone that tests show they've got a disease, when actually they don't. The opposite (false negative) is arguably even worse - that means telling them that the tests show they're clear of the disease, when actually they DO have it all along. False positives/negatives also crop up regularly in discussions of security technology (eg fingerprint recognition or lie-detectors).
Applied to networks, DPI works by blindly testing traffic flows against "signatures", to assign each flow to a specific "class" or other grouping. This works OK up to a point, and we see various ISPs blocking or throttling P2P traffic on that basis. It can also distinguish between particular web or IP addresses and various other parameters. But the issue at the heart of the ECJ judgement is that this process can't distinguish between "bad" P2P (illegal content piracy) and "good" P2P (legitimate use for distributing free content or other purposes).
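To make that concrete, here's a toy illustration of signature matching. The BitTorrent handshake prefix and TLS record header are real byte patterns, but everything else is drastically simplified - real DPI engines also use heuristics, flow statistics and stateful tracking:

```python
# Toy signature table: first-bytes patterns mapped to a guessed application.
SIGNATURES = {
    b"\x13BitTorrent protocol": "bittorrent",  # BitTorrent peer handshake
    b"GET ": "http",                           # plain-text HTTP request
    b"\x16\x03": "tls",                        # TLS record header
}

def classify(payload: bytes) -> str:
    """Return a guessed application class for a flow's first payload bytes."""
    for pattern, app in SIGNATURES.items():
        if payload.startswith(pattern):
            return app
    return "unknown"

# The match tells you the *protocol*, not whether the content inside is
# legal: "bittorrent" covers both pirated albums and public-domain karaoke.
print(classify(b"\x13BitTorrent protocol" + b"\x00" * 8))  # -> bittorrent
```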
A DPI system should be able to spot BitTorrent traffic, but likely wouldn't know if the content being transported was an illicit copy of New Order's True Faith, or a recording of a really terrible karaoke version with a plinky-plonky backing track you'd done yourself and released into the public domain. (To be fair, if your singing is as bad as mine, blocking its transmission is probably a service to humanity, but that's not the point here).
If the network is actually congested, the operator can probably claim it is reasonable to block or slow down all P2P traffic. There is also possibly a legal argument for the ISP working to limit piracy, if implementation costs are low enough (again, a separate discussion). But if the network is not congested, it's definitely not reasonable to slow down the "false-positive" legal P2P data.
The interesting thing for me is how this could apply to other use cases beyond P2P - for example application-based prioritisation. What is the legal (and/or consumer-protection) stance when the DPI or PCRF makes a mistake? Let's say that, for some reason, I use a video codec, player and streaming server to transmit non-video material - animation perhaps, or some form of machine-readable codes. If the DPI leaps to the conclusion that it's "video" and prioritises / degrades / "optimises" / charges extra for it, is that a false positive and therefore illegal?
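A hypothetical sketch of why this matters, reusing the toy classify() above: the enforcement action is keyed off the *guessed* class, so any classification error flows straight into the traffic treatment.

```python
# Invented policy table - purely illustrative, not any vendor's product.
POLICY = {
    "video": "optimise",       # transcode / compress / charge extra
    "bittorrent": "throttle",
    "unknown": "default",
}

def enforce(first_payload_bytes: bytes) -> str:
    guessed = classify(first_payload_bytes)  # may be a false positive
    return POLICY.get(guessed, "default")

# A richer classifier with a "video" signature would guess "video" for my
# machine-readable codes travelling in an H.264 wrapper, and they'd get
# "optimised" - i.e. degraded - by mistake.
```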
According to Wikipedia, "Video is the technology of electronically capturing, recording, processing, storing, transmitting, and reconstructing a sequence of still images representing scenes in motion." There are definitely applications of video-type technology for non-video content.
There are plenty of other false-positive and false-negative risks stemming from mashups, encryption, application obfuscation, or simply poor definition (or poor consumer understanding) of what constitutes "Facebook". (This is especially true where a network DPI / optimisation box / TDF works *without* the direct collaboration of the third party.)
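Encryption alone illustrates the problem. Once the payload is wrapped in TLS, byte-pattern signatures like the toy ones above match only the wrapper itself, and the classifier has to fall back on weaker heuristics (ports, SNI, packet-size patterns) - raising both error rates. Again reusing the hypothetical classify():

```python
# From outside a TLS session, "Facebook", video and P2P all look alike.
tls_record = b"\x16\x03\x03\x00\x40" + b"\x00" * 64  # handshake record + opaque bytes
print(classify(tls_record))  # -> "tls" at best; the application inside is invisible
```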
Now, in my experience, DPI vendors have been very cagey about disclosing their rate of false positives, simply saying that, overall, the situation is improved for the operator. I've heard from some sources that an accuracy of 85-90% is considered typical, but I imagine it varies quite a bit depending on the definition (eg # of bits vs # of flows). It's also unclear exactly how you'd measure it from a legal point of view.
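To show why the definition matters, here's a deliberately extreme example with invented numbers: one large misclassified flow barely dents per-flow accuracy but wrecks the per-byte figure.

```python
flows = [
    # (bytes_transferred, classified_correctly) - illustrative numbers only
    (10_000_000, False),  # one large P2P flow, mislabelled
    (50_000, True),
    (40_000, True),
    (30_000, True),
    (20_000, True),
]

per_flow = sum(ok for _, ok in flows) / len(flows)
per_byte = sum(b for b, ok in flows if ok) / sum(b for b, _ in flows)

print(f"Per-flow accuracy: {per_flow:.0%}")   # 80%
print(f"Per-byte accuracy: {per_byte:.1%}")   # ~1.4%
```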
But the ECJ judgement would seem to suggest that's not good enough.
I'm quite glad that most of the vendors' product and marketing people don't work in the pharmaceuticals industry - wrongly diagnosing 15% of patients would probably not be acceptable. Even in other technology areas (eg anti-spam software) that error rate would be near-useless. As I've said before about video, I think there are some very interesting use cases for consent-based policy, with involvement of the user and the upstream provider. But it seems that the network *acting on its own* cannot be trusted to detect applications with sufficient accuracy, at least under stricter legal supervision from bodies like the ECJ.