J. Scott Marcus talks to us about his very timely study entitled “Network Neutrality Revisited: Challenges and Responses in the EU and US”.
The study was prepared for the European Parliament’s Internal Market and Consumer Protection committee (IMCO) and was intended as a preparatory exercise for what the EP rightly sees as a very important issue. What I would like to emphasise is that it was done by the EP’s policy department, which prepares some of the best and most objective studies around. The department works very hard to insulate its experts from influence, including from the MEPs, because it recognises that MEPs need high-quality, highly objective analyses.
The key finding is that Net Neutrality is a very complex and multi-faceted issue. This starts with the definition itself: there are several definitions with different degrees of public acceptance, and what people often do not realise is that each carries different implications for what should be done at the policy level.
The average consumer probably has very little understanding of the details of the Net Neutrality issue, but people care passionately about freedom of expression and the freedom to access content of their choice. Because Net Neutrality is linked to so many issues that people care about, it has huge resonance with the public, and appropriately so.
Net Neutrality is a big, complex issue, and it is not easy to get one’s arms around it.
Indeed, clearly the decision process involves the Council, the European Parliament and the European Commission. It is a complicated process.
The main recommendation is that there is a huge need for balance, for proportionality, and for common sense.
What the study identified is that there are areas that could benefit from more attention. It is clear from the analysis, and also from the results of the European Commission’s 2012 public consultation, that almost everyone agrees it is a bad thing for a network operator to favour its own or affiliated content over that of competitors. A specific example is the blocking or limiting of VoIP on handsets.
Another concern that both the study and the public consultation flagged is that divergent solutions across the different Member States could lead to problems and run counter to a single market. Even though there seems to be nothing specifically problematic with the laws that came out in The Netherlands and Slovenia, there is a serious risk that we end up with 28 different laws, not quite compatible with one another.
There are not necessarily simple answers and for this reason, the study doesn’t produce concrete recommendations on how to draft law, but rather provides a series of questions and considerations that the decision-makers need to take into account going forward.
I think the risk is that it would actually impede innovation by blocking practices that in fact benefit not only the network operators or content providers, but also consumers.
What often gets lost in the debate is that differentiated treatment of traffic has many positive uses. It was not invented by rapacious monopolists; it was invented by computer scientists who understood perfectly well that some applications need quality differentiation. VoIP, or any two-directional voice or video application, is a very good example, as are any number of services that are publicly important and that we would like to run over public networks. This will not be possible if public commercial networks are unable to differentiate traffic. Examples include Public Protection and Disaster Relief (PPDR) applications, or some transport applications. I am currently carrying out some work for the European Railway Agency: if train operations cannot be prioritised ahead of normal consumer use, then in periods of stress the trains will stop. That cannot be allowed.
We should therefore be careful not to forget the positive uses of traffic discrimination. An overly prescriptive rule, particularly one that doesn’t have the right carve-outs, could indeed prevent uses that benefit the public.
There are two main reasons for traffic management: One of them is dealing with situations of overload, the other is favouring network applications that need to be favoured, like two-way voice.
Favouring applications that require it was actually part of the original Transmission Control Protocol/Internet Protocol (TCP/IP) specifications of 1981. I am always amazed when people speak of this as being a violation of IP principles; it was part of the specifications from the start. Some of the same computer scientists who were advocating the end-to-end principle were also advocating prioritisation. Here I point particularly to David Clark, who was both an author of the end-to-end principle paper and the main driving force behind the Integrated Services Architecture and the Resource Reservation Protocol (RSVP). It was always understood that there were applications that would benefit from prioritisation. It is important, I would argue, to be able to do that when it is legitimately needed.
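The kind of prioritisation described here can be illustrated with a minimal sketch of strict-priority scheduling, the simplest form of the idea: latency-sensitive packets (such as two-way voice) are always dequeued ahead of bulk traffic. This is an illustrative toy in Python, not the mechanism of any particular router; the class and names are hypothetical.

```python
from collections import deque

class PriorityScheduler:
    """Toy strict-priority scheduler: latency-sensitive traffic
    (e.g. two-way voice) is always served before bulk traffic."""

    def __init__(self):
        self.high = deque()  # latency-sensitive packets
        self.low = deque()   # best-effort / bulk packets

    def enqueue(self, packet, latency_sensitive):
        (self.high if latency_sensitive else self.low).append(packet)

    def dequeue(self):
        # A voice packet never waits behind bulk traffic.
        if self.high:
            return self.high.popleft()
        if self.low:
            return self.low.popleft()
        return None

sched = PriorityScheduler()
sched.enqueue("bulk-1", latency_sensitive=False)
sched.enqueue("voice-1", latency_sensitive=True)
print(sched.dequeue())  # → voice-1, even though it arrived second
```

Real schedulers typically use weighted variants rather than strict priority, precisely so that prioritised traffic cannot starve everything else, which is one reason the policy question is more subtle than "prioritisation: yes or no".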
Traffic management was also intended from the start: the ICMP Source Quench message, for example, was present even in the 1981 specifications. Initially, as with traffic management generally, we had the concepts but not the details of implementation; once again, however, these are things that have positive uses, particularly bearing in mind that overloads happen not just because of poor design but also because of instantaneous spikes in usage, or because of occasional failures of a network component here or there. There are many different reasons why overload can occur, and it makes sense to have tools to manage it.
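One common overload-management tool is a token bucket, which absorbs short bursts but throttles sustained excess traffic. The sketch below is a hypothetical, simplified Python illustration of the idea (the parameter values are arbitrary), not a description of any specific operator's practice.

```python
class TokenBucket:
    """Toy token-bucket policer: tokens refill at a steady rate up to a
    burst limit; each forwarded packet spends one token."""

    def __init__(self, rate, burst):
        self.rate = rate        # tokens added per unit of time
        self.burst = burst      # maximum bucket depth (burst tolerance)
        self.tokens = burst     # start with a full bucket
        self.last = 0.0

    def allow(self, now):
        # Refill in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True     # within the allowed rate: forward
        return False        # sustained overload: drop or delay

bucket = TokenBucket(rate=1, burst=2)
print(bucket.allow(0.0))  # → True  (burst absorbed)
print(bucket.allow(0.0))  # → True  (burst absorbed)
print(bucket.allow(0.0))  # → False (bucket empty: throttled)
print(bucket.allow(1.0))  # → True  (tokens have refilled)
```

Mechanisms of this kind show why "overload" and "discrimination" are not the same thing: the policer above treats all packets identically and simply enforces a rate, which is one of the legitimate uses the study asks legislators to preserve.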
By Joanne Mazoyer - Brussels, 29 January 2015
J. Scott Marcus' professional biography can be found here.