The need for speed.

The web has evolved into a paradigm that touches almost every aspect of our lives; it has reached a point where we take it for granted. At its core is the TCP/IP protocol suite, which governs the communication mechanisms required to maintain connectivity and exchange digital information. Today's web is defined by two crucial terms: access and performance. With internet use diversifying from basic tasks such as checking e-mail and downloading music to more complex ones such as running a scientific simulation online, users are demanding more and more speed, and they also want access on any device, anywhere. Although recent computing and engineering progress has improved internet bandwidth and enabled faster links and connections, users and applications remain bandwidth hungry, and the number of active users online keeps growing.

Over the last decade we have witnessed a significant increase in the capabilities of our computing and communication systems. On the one hand, processor speeds have been increasing exponentially, doubling every 18 months or so, while network bandwidth has followed a similar (if not higher) rate of improvement, doubling roughly every 9-12 months. Unfortunately, applications that communicate frequently using standard protocols like TCP/IP do not seem to improve at similar rates. The Internet is founded on a very simple premise: shared communication links are more efficient than dedicated channels that lie idle much of the time. And so we share. We share local area networks at work and neighborhood links from home. And then we share again: at any given time, a terabit backbone cable is shared among thousands of people surfing the web, downloading videos, and talking on internet phones. The designers of the Internet intended that your share of Internet capacity would be determined by what your own software considered fair. They gave network operators no mediating role between the conflicting demands of the Internet's hosts, now over a billion personal computers, mobile devices and servers. One of the foundational problems is that core Internet protocols such as TCP/IP, DNS and SSL/TLS haven't been updated to reduce the overhead that slows the loading of complex web pages.

The Internet's primary sharing algorithm is built into the Transmission Control Protocol, a routine on your own computer that most programs run. TCP is one of the twin pillars of the Internet, the other being the Internet Protocol (IP), which delivers packets of data to particular addresses. Your TCP routine constantly increases your transmission rate until packets fail to get through some pipe up ahead - a tell-tale sign of congestion. Then TCP very politely halves your bit rate. The billions of other TCP routines around the Internet behave in just the same way, in a cycle of taking, then giving, that fills the pipes while sharing them equally.
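The increase-then-halve cycle described above is TCP's additive-increase/multiplicative-decrease (AIMD) behaviour, and its characteristic sawtooth can be sketched in a few lines. The following toy simulation is purely illustrative: the round count and fixed loss schedule are made-up assumptions, and real TCP tracks a window in bytes with slow start and other refinements.

```python
def aimd(rounds, loss_every):
    """Toy AIMD simulation: grow the window by one segment per round,
    halve it whenever a congestion signal (lost packet) occurs.
    loss_every is an artificial, fixed loss schedule for illustration."""
    window = 1.0
    history = []
    for r in range(1, rounds + 1):
        history.append(window)
        if r % loss_every == 0:
            window = max(window / 2, 1.0)  # multiplicative decrease
        else:
            window += 1.0                  # additive increase
    return history

trace = aimd(rounds=12, loss_every=4)
print(trace)
```

Plotting `trace` gives the familiar sawtooth: the window climbs linearly, drops by half at each loss, then climbs again, which is how thousands of independent TCP routines end up sharing a link roughly equally.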
So what are the possible solutions, or at least the different approaches, to such a technical challenge? There is a multitude of efforts aimed at making this work, so let's discuss some of the most popular and effective ones.

TCP is a well-designed protocol and provides near-optimal performance over a wide range of conditions. However, obtaining the best possible performance requires the application to cooperate with TCP: setting the correct options when the defaults are not optimal, making the most efficient use of the socket API functions, and providing appropriate memory and CPU resources. Though the end objective remains making web pages load faster, we have to consider these essential lower-level options to make that work. The main performance-related options are:
* The Nagle algorithm
* Time-out values
* The pending connection queue (listen backlog)
* The IP Type of Service field
* Packet, buffer and MTU sizes
* ARP cache size (for Ethernet)
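Most of the options above map directly onto the standard socket API. The sketch below shows them in Python (the same `setsockopt` calls exist in C); the specific values are illustrative assumptions, not tuned recommendations, and the ARP cache and MTU are configured at the OS level rather than per socket.

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Nagle algorithm: disable it (TCP_NODELAY) for latency-sensitive
# small writes; leave it enabled for bulk transfers.
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Time-out value for blocking socket operations, in seconds.
s.settimeout(5.0)

# IP Type of Service field, where the platform exposes it.
if hasattr(socket, "IP_TOS"):
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0x10)  # minimize delay

# Send/receive buffer sizes in bytes; the kernel may round or cap these.
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 256 * 1024)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 256 * 1024)

timeout_value = s.gettimeout()
s.close()

# Pending connection queue: on a listening socket, the backlog argument
# bounds how many not-yet-accepted connections may queue up.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))  # ephemeral port, illustration only
srv.listen(128)
srv.close()
```

Whether any given option helps depends on the workload: disabling Nagle, for example, trades fewer small-packet delays for more packets on the wire.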

Within this critical context, major internet-oriented organizations and companies are thinking through the issue and striving to provide effective solutions. Among them is Google, which recently declared that it was setting out to make the web faster by sharing perspectives, findings and best practices with the developer community. In a post titled "Let's make the web faster" on their official blog last year, Google wrote: "From building data centers in different parts of the world to designing highly efficient user interfaces, we at Google always strive to make our services faster. We focus on speed as a key requirement in product and infrastructure development, because our research indicates that people prefer faster, more responsive apps. Over the years, through continuous experimentation, we've identified some performance best practices that we'd like to share with the web community on a new site for web developers, with tutorials, tips and performance tools."

Microsoft also moved to tackle this performance challenge by implementing an improved TCP/IP protocol stack in its latest releases of server and web server products. They call this new implementation the Next Generation TCP/IP stack.
The Next Generation TCP/IP Stack
Windows Server 2008 and Windows Vista include a new implementation of the TCP/IP protocol stack known as the Next Generation TCP/IP stack. The Next Generation TCP/IP stack is a complete redesign of TCP/IP functionality for both Internet Protocol version 4 (IPv4) and Internet Protocol version 6 (IPv6) that meets the connectivity and performance needs of today's varied networking environments and technologies.
The following features are new or enhanced:
* Receive Window Auto-Tuning
* Compound TCP
* Enhancements for high-loss environments
* Neighbor Unreachability Detection for IPv4
* Changes in dead gateway detection
* Changes to PMTU black hole router detection
* Network Diagnostics Framework support
* Windows Filtering Platform
* Explicit Congestion Notification
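Compound TCP, one of the features listed above, augments the standard loss-based congestion window (cwnd) with a delay-based component (dwnd) that grows aggressively while queuing delay is low and backs off as queues build. The sketch below is a heavily simplified illustration of that idea under assumed, made-up thresholds; it is not Microsoft's actual algorithm, whose window update rules are more involved.

```python
def compound_tcp_step(cwnd, dwnd, rtt, base_rtt, loss):
    """One illustrative step of a Compound-TCP-like sender.
    The send window is cwnd + dwnd: a loss-based part plus a
    delay-based part. The 1.1 threshold and step sizes are
    arbitrary assumptions for illustration."""
    if loss:
        cwnd = max(cwnd / 2, 1.0)       # standard multiplicative decrease
        dwnd = 0.0                      # delay component resets on loss
    else:
        cwnd += 1.0                     # standard additive increase
        if rtt <= base_rtt * 1.1:       # little queuing delay: grow fast
            dwnd += 2.0
        else:                           # queues building: shrink dwnd
            dwnd = max(dwnd - 1.0, 0.0)
    return cwnd, dwnd, cwnd + dwnd

# Low measured delay: both components grow, filling a fast link quickly.
cwnd, dwnd, win = compound_tcp_step(10.0, 4.0, rtt=102.0,
                                    base_rtt=100.0, loss=False)
```

The point of the design is that on high-bandwidth, high-delay paths the delay-based component ramps up far faster than plain AIMD, while still collapsing back to standard TCP behaviour whenever loss or queuing signals congestion.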

Overall, tackling the issue of internet performance to make it faster is not an easy task. The Internet is getting bigger and much more complex to deal with. However, progress and improvement are what we've always believed in, and they have definitely helped. We believe new efforts and initiatives such as Next Generation Networks and broadband connections have the performance problem in mind and will deliver accordingly. It's definitely not just a performance tale anymore. We hope for a faster Internet!
