Steve Vinoski’s comments on the WS* standardisation track

Following on from my earlier post about WS standardisation, Steve Vinoski points out that traditional standardisation efforts are often too slow and overly political. In this month’s IEEE Distributed Systems Online (DSO) he discusses WS-NonexistentStandards: there’s plenty of standardisation work going on, but where are the accepted standards, and how does the process facilitate the creation and adoption of practical ones?
To get around these problems, WS-* authors appear to be taking a different approach toward standardization:

  1. Write a specification and make it publicly available.
  2. Invite interested parties to one or more private workshops where they can learn more details about the specification and provide feedback.
  3. Iterate steps 1 and 2 until chosen feedback from the workshop participants has been incorporated, and the specification is considered finished.
  4. Submit the specification to an official standards body with the hope of fast tracking it to actual standardization with minimal changes.

Overall, this approach reduces the number of participants involved, which can be a good thing because it reduces the overall volume of communication required to create the specification and resulting standard. However, it can also reduce the resulting standard’s effectiveness, even rendering it useless, because it circumvents at least some of the process of building consensus by not being a truly open process. A standard that is not generally agreed on is a standard on paper only.

This definitely seems to be part of the problem. It’s in marked contrast to the IETF standardisation process, which often appears much more open and perhaps more democratic. However, it’s a fine line to walk. I can’t help but feel that two modifications to the process would significantly improve matters.

  1. The creation of WS-arch so we can categorically say what piece of the WS-jigsaw goes WS-where? 😉
  2. Incentivised involvement of independent software developers in the standardisation process: spec consumers, rather than the spec producers/pushers who can’t provide neutral guidance. Maybe some decisions could even be put to the general developer community using a web-based voting system.

Probably/definitely need to think about this more…

Browser Identities

Browser incompatibilities are definitely the bane of a web developer’s life. Having spent much of my development life messing around with command lines, I’m now spending a lot of time looking at the CSS section of w3schools, grappling with CSS positioning & layout issues.

I decided that I’d solve some of these browser incompatibilities on the server side rather than with client-side JavaScript. MT’s natty Perl-plugin interface looked the best bet, so I whipped up a few quick lines of Perl to pull HTTP_USER_AGENT from the environment and parse it. Easy-peasy, I thought, having read all about browser identities here (skipped the RFC)… This turned out to be no fun. I learned a lot about writing plugins, which are a really great feature, but when I output the browser ID for both IE and Opera, guess what I got?
Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1) Opera 7.54 [en]
Not exactly what I was expecting. A diff of the two confirmed that I wasn’t going nuts: they’re the same, so my plugin is effectively useless for sorting out CSS layout issues between IExploder, Opera and Nutscrape… So HTTP_USER_AGENT is apparently not the thing to use. navigator.appName in JavaScript would apparently be more reliable. So much for sorting out the problem on the server side. Ah well… de nouveau au conseil de dessin (“back to the drawing board”), as they say in pidgin French 😛
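
For what it’s worth, the parse looks salvageable: Opera’s default user-agent string appends its own token after the MSIE one, so testing for “Opera” before “MSIE” tells the two apart. Here’s a minimal sketch of that check in plain Perl (the function name is hypothetical and the MT tag-hook wiring is omitted):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical sketch, not the actual plugin: classify a browser by
    # its user-agent string. Order matters: Opera's default UA embeds
    # "MSIE 6.0", so it has to be tested for *before* MSIE.
    sub browser_id {
        my $ua = shift || $ENV{HTTP_USER_AGENT} || '';
        return "Opera $1" if $ua =~ /Opera[ \/](\d+\.\d+)/;
        return "MSIE $1"  if $ua =~ /MSIE (\d+\.\d+)/;
        return 'Gecko'    if $ua =~ m{Gecko/};
        return 'unknown';
    }

    # The string that fooled the naive check:
    print browser_id('Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1) Opera 7.54 [en]'), "\n";
    # prints "Opera 7.54"

Of course that only works while Opera leaves its own token in the string; anything that spoofs its UA completely stays invisible server-side, which is presumably why the client-side checks are considered more reliable.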

IDC information society index

This index was established in the mid-90s and provides a statistical analysis of the degree of IT access and absorption within 53 countries worldwide. Ireland can only manage 23rd spot, which is less than impressive considering we’re a small nation with a disproportionate amount of our Gross Domestic Product (GDP) coming from IT. (For a cold, hard look at our GDP/GNP comparison, read this.) Our neighbours in the UK fare better in 10th, while the tech-savvy Danes and Swedes claim 1st and 2nd place respectively.

P2P traffic’s effect on ISPs

The Internet was designed as a content-access system with a predominantly client/server model, asymmetrically biased towards downstream traffic (downloads etc.). With P2P exchange of data, the creation of decentralized groups allows information to flow over the public Internet in an anonymous, logical fashion, and the individual users of these applications are shielded by that anonymity. There are obvious IPR issues here, but also more subtle issues regarding the categories and topology of P2P traffic. (I’ll provide a more rigorous mathematical look at this soon.) With this form of information exchange, service providers no longer have the ability to forecast network capacity based on historical subscriber usage patterns. There are four key areas where service providers are feeling the pinch:

  1. Upstream/downstream traffic is flipped: the upstream traffic is much larger than the downstream traffic. This results in network congestion on the upstream link that was never planned for in initial broadband deployments.
  2. Time-of-day usage statistics no longer apply. Previously, service providers could assume peak usage at certain times of the day and lower usage at other times. With P2P applications, computers are often left to transfer data throughout the day in an unattended fashion.
  3. Previously, peering traffic always traversed the Internet to another location. In today’s world, two home users can form a direct connection.
  4. Over-subscription assumptions no longer apply. A handful of power users can “hog” all of the bandwidth deployed for a much larger usage base (see the rough sketch below).
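
To make point 4 concrete, here’s a back-of-the-envelope sketch in Perl. All the figures are hypothetical, chosen only to show how quickly a minority of always-on P2P uploaders can saturate an over-subscribed uplink:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # All figures hypothetical: a small aggregation link shared by
    # subscribers whose upstream was sized on over-subscription.
    my $uplink_mbps   = 155;     # e.g. an STM-1 aggregation link
    my $subscribers   = 2000;
    my $upstream_mbps = 0.256;   # 256 kbps ADSL upstream per subscriber

    # Classic planning assumption: only a fraction transmit at once,
    # so selling more capacity than the uplink carries is "safe".
    my $sold = $subscribers * $upstream_mbps;
    printf "Over-subscription ratio: %.1f:1\n", $sold / $uplink_mbps;

    # With unattended P2P, a minority upload flat-out around the clock.
    my $p2p_users = 600;         # 30% of the base seeding continuously
    my $offered   = $p2p_users * $upstream_mbps;
    printf "Offered upstream load: %.0f Mbps on a %d Mbps link (%.0f%% full)\n",
        $offered, $uplink_mbps, 100 * $offered / $uplink_mbps;

Under the old assumptions a 3.3:1 ratio is comfortable; with a few hundred unattended uploaders the same link runs at capacity all day.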

Thanks to Network World for some pointers in this post.