The Internet was designed as a content access system with a predominantly client/server model, asymmetrically biased towards downstream traffic (downloads, etc.). With P2P exchange of data, the creation of decentralized groups allows information to flow over the public Internet in an anonymous, logical fashion, and the individual users of these applications are shielded by that anonymity. There are obvious issues with IPR here, but also more subtle issues regarding the categories and topology of P2P traffic (I'll provide a more rigorous mathematical look at this soon). With this form of information exchange, service providers no longer have the ability to forecast network capacity based on historical subscriber usage patterns. There are four key areas where service providers are feeling the pinch:
- Upstream/downstream traffic is flipped: the upstream traffic is much larger than the downstream traffic. This results in network congestion on the upstream link that was never planned for in initial broadband deployments.
- Time-of-day usage statistics no longer apply. Previously, service providers could assume peak usage at certain times of the day and lower usage at others. With P2P applications, computers are often left to transfer data around the clock in an unattended fashion.
- Previously, peering traffic always traversed the Internet to another location. In today's world, two home users can form a direct connection.
- Over-subscription assumptions no longer apply. A handful of power users can “hog” all of the bandwidth deployed for a much larger usage base.
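The over-subscription point can be sketched with some back-of-the-envelope arithmetic. The subscriber counts and link rates below are made-up illustrative numbers, not figures from any real deployment:

```python
# Illustrative over-subscription arithmetic (all numbers are hypothetical).
# An aggregation device (DSLAM, CMTS, etc.) multiplexes many subscribers
# onto one shared uplink, sized on the assumption that only a fraction of
# users are active at any moment.

subscribers = 500        # homes sharing one aggregation link
access_rate_mbps = 8     # per-subscriber access rate
uplink_mbps = 1000       # shared uplink capacity

oversub_ratio = subscribers * access_rate_mbps / uplink_mbps
print(f"over-subscription ratio: {oversub_ratio:.0f}:1")

# With bursty web/email traffic this ratio holds up, because subscribers
# rarely transmit at full rate simultaneously. With unattended P2P, a
# handful of power users push their full access rate around the clock:
p2p_users = 130
p2p_load_mbps = p2p_users * access_rate_mbps
print(f"{p2p_users} P2P users generate {p2p_load_mbps} Mbps "
      f"on a {uplink_mbps} Mbps uplink")  # the uplink is saturated
```

Here 500 subscribers at 8 Mbps each represent 4,000 Mbps of potential demand on a 1,000 Mbps uplink, a 4:1 over-subscription that is perfectly sane for bursty traffic. But just 130 users running P2P at full rate (26% of the base) exceed the entire uplink, starving the other 370 subscribers.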
Thanks to Network World for some pointers for this post.